Capability Engineers

Ship a full-stack app. End to end. In a day.

This is the starter program for capability engineers. One stack, one workflow, ten assignments. You are the architect and product owner. The agent writes the code. You learn by deciding, questioning, and verifying — not by reading source.

The program

Six phases, one artifact per phase.

You direct the agent

Claude Code or Codex writes every line. You never edit source. Learning happens in the prompts you write and the decisions you make.

Artifacts, not code

Each phase produces one human-readable artifact: a brief, a data model, a schema, an OpenAPI doc, a running app. That artifact is the contract between phases.

Verify without reading code

Every phase ends with a check you run yourself: a SQL query, an OpenAPI "Try it out", a user story walked in the browser.
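A phase-end SQL check can be as small as one query run outside the app. The sketch below uses an in-memory database seeded with sample rows so it runs standalone; the `orders` table and `status` column are hypothetical stand-ins for whatever your assignment's schema actually contains.

```python
import sqlite3

# In practice you would open the app's database file:
#   conn = sqlite3.connect("app.db")
# An in-memory copy with seed rows keeps this sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT NOT NULL);
    INSERT INTO orders (status) VALUES ('open'), ('open'), ('closed');
""")

# The check: count rows yourself and compare against what the UI reports.
open_count = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'open'"
).fetchone()[0]
print(open_count)  # prints 2 for the seed data above
```

The point is not the query itself but that you computed the number independently of the code the agent wrote.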

The stack

SQLite. Python. React. Running on your laptop.

No Docker. No cloud. Two terminals, two commands. The same stack across all ten assignments, so the only variable is the thing you are building.

Database

SQLite

File-based, zero install, ships with Python. Schema in a single schema.sql. You write real SQL — no ORM.
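A minimal sketch of the pattern: in the program the DDL lives in `schema.sql` and is applied with `executescript`. The table here is a hypothetical example, and the DDL is inlined as a string so the sketch runs standalone.

```python
import sqlite3

# In the program this DDL lives in schema.sql; inlined here so the
# sketch is self-contained. The visits table is a hypothetical example.
SCHEMA = """
CREATE TABLE IF NOT EXISTS visits (
    id         INTEGER PRIMARY KEY,
    patient    TEXT NOT NULL,
    visit_date TEXT NOT NULL  -- SQLite stores dates as ISO-8601 text
);
"""

conn = sqlite3.connect(":memory:")  # in practice: sqlite3.connect("app.db")
conn.executescript(SCHEMA)          # executescript runs multi-statement SQL

# Verify the schema landed by querying SQLite's own catalogue table.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['visits']
```

`CREATE TABLE IF NOT EXISTS` makes re-applying the file safe, which matters when a later phase sends you back to regenerate the schema.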

Backend

Python + FastAPI

Python 3.12+, managed with uv. Pydantic for validation. Automatic OpenAPI at /docs — your API contract, visualised.

Frontend

React + Vite

TypeScript, plain fetch, types generated from the OpenAPI spec. Minimal styling, no component library by default.

Glue

Local-only

Vite dev server proxies /api/* to :8000. No CORS. No auth. One local user. One file on disk.

Prerequisites: Python 3.12+, uv, Node.js 20+ LTS. SQLite ships with Python.

The workflow

From rough idea to running app in six phases.

Each phase descends from the artifact of the last. If a phase reveals a gap, go back and fix the artifact before regenerating the code.

Prompt library

The prompts that do the work.

Each phase has one or two prompts you can paste into your agent session. Adapt them to your assignment, but keep the structure — each prompt names the artifact it produces and the artifact it descends from.

The ten

Pick one you'd want to use.

All ten are calibrated to the same envelope: four entities (give or take one), three user stories, one aggregation report, and one state transition. Five come from the clinical-operations domain. Five are general-purpose. Each teaches one identifiable concept beyond CRUD.
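Two of those envelope items can be sketched concretely. Below is a hypothetical issue-tracker slice showing one guarded state transition and one GROUP BY aggregation report, with all names invented for illustration:

```python
import sqlite3

# Hypothetical state machine: only these transitions are legal.
ALLOWED = {("open", "in_review"), ("in_review", "closed")}

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (id INTEGER PRIMARY KEY, status TEXT NOT NULL);
    INSERT INTO issues (status) VALUES ('open'), ('open'), ('in_review');
""")

def transition(issue_id: int, new_status: str) -> None:
    # Read the current state, reject anything outside the allowed set.
    (current,) = conn.execute(
        "SELECT status FROM issues WHERE id = ?", (issue_id,)).fetchone()
    if (current, new_status) not in ALLOWED:
        raise ValueError(f"illegal transition {current} -> {new_status}")
    conn.execute("UPDATE issues SET status = ? WHERE id = ?",
                 (new_status, issue_id))

transition(1, "in_review")

# The aggregation report: one GROUP BY over the status column.
report = dict(conn.execute(
    "SELECT status, COUNT(*) FROM issues GROUP BY status"))
print(report)  # counts: in_review=2, open=1
```

The concept beyond CRUD is the guard clause: the database stores whatever string you write, so legality of a transition is a rule the backend must enforce.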

Why these ten, and not others

Ideas deliberately not on this list: external data reconciliation, clinical data quality checks, raw data to SDTM mapping, central monitoring with outlier detection, TLF generation, live clinicaltrials.gov feasibility, cross-registry benchmarking. Each is a good idea and worth building later. Each fails at least one of the starter criteria — typically the ones about specialist domain knowledge, external APIs, or statistics beyond aggregation. A future intermediary catalogue will revisit them.

When you're ready

Start here.

1. Install the prerequisites: Python 3.12+, uv, Node 20+ LTS.
2. Pick one assignment above. Choose one you would actually want to use, not one that sounds impressive. Read the brief (the "Read the brief" link on each card).
3. Tell your reviewer which one. Before you do anything else. Their job is to push back if the scope is wrong — better now than after the brainstorm.
4. Click "Use this template" on your chosen assignment's card. GitHub creates a fresh repo under your account. Clone it locally, open it in your agent, and run both servers to confirm the skeleton boots.
5. Open the repo's index.html. Paste the Phase-1 prompt into a fresh agent session. Your brainstorm becomes docs/01-brief.md.
6. Keep a log of the prompts you actually used. That log is the primary artifact of your learning. You will reference it in the retrospective.