I’ve been building for the web for 15 years. HTML, CSS, a bit of JavaScript. This is my attempt to find out how far AI can take me into territory I’ve never been able to reach on my own.
A bit of context
I should be upfront about where I’m coming from, because it’s relevant to everything that follows.
I’ve spent 15 years working on the web — mostly as a designer who codes. I’m comfortable and confident with HTML and CSS. I know my way around vanilla JavaScript. But frameworks like React, server-side logic, databases, authentication flows — that world has always been just out of reach. Not completely foreign, but the kind of thing where I’d get going, hit a wall I didn’t understand, and end up putting the project down.
AI-assisted coding has changed that equation for me. Not by making me a better engineer overnight, but by giving me a collaborator who can hold the complexity I can’t. I can describe what I want to happen. I understand the output well enough to reason about it, catch mistakes, and ask good questions. That turns out to be enough to build real things.
Appply is my experiment in testing that theory.
What Is Appply?
I’ve been job hunting and, like most people, I started with a spreadsheet. A column for company, a column for role, a dropdown for status. It works until it doesn’t — until you have 40 applications in flight and no idea when you last heard from anyone, until you realise you’ve sent the same generic cover letter to every role, until the whole thing becomes a source of anxiety rather than a tool for managing it.
Appply is a personal job application tracker that replaces the spreadsheet with something purpose-built.
Two things make it different from the trackers that already exist:
1. Status as a timeline. Every status change is timestamped and kept. The full history of every application is always visible. The status model reflects how job searches actually work — sub-statuses like “Ghosted”, “Job Rec Removed/Deactivated”, “Rescinded Application (Self)”, and “Sent Follow Up Email” sit within proper parent stages (Applied → Screening → Interview → Offer). A simple applied/rejected dropdown loses all of that nuance.
2. Cover letters that actually earn their output. The AI doesn’t just fill in a template. It reads the job description, spots where your saved CV doesn’t directly evidence what the role asks for, and asks you targeted questions before it starts writing — “This role asks for X; can you give a specific example from your experience?” The result is a structured, editable draft built from your real experience, not generic AI prose. A 30-minute manual process becomes a guided 5-minute flow.
The design principle behind both features: calm over busy. Fewer steps. Immediately obvious UI. No learning curve.
The tech (and why I’m not going to over-explain it)
Here’s what Appply is built on:
| Layer | Technology |
|---|---|
| Framework | Next.js 15 |
| Deployment | Cloudflare Workers |
| Database | Cloudflare D1 (SQLite) |
| AI | Cloudflare Workers AI |
| Auth | Clerk |
| Payments | Stripe |
| UI components | Shadcn UI + Tailwind CSS |
I’m not going to pretend I chose every one of these from a position of deep expertise. Some I picked because they came recommended. Some came bundled in a starter kit I found. The AI helped me understand the tradeoffs. What I can say is that the whole thing runs on Cloudflare’s infrastructure — the database, the AI, the server — which keeps it simple and cheap to run.
The BMad Framework
Before writing any code, I used a planning framework called BMad to think through what I was building and how.
BMad is a set of structured AI-assisted workflows for product planning. The idea is that instead of jumping straight into building and figuring it out as you go, you invest a few hours upfront producing documents — a product brief, a requirements document, an architecture plan — that give the AI (and you) a clear, consistent picture of what you’re making.
For someone like me, this was genuinely valuable. I’m good at thinking about products. I know what good UX looks like. But the moment I try to turn that into technical decisions — which database, how data flows, what happens when a user does X — I get lost. Having a structured process to walk through those questions, with AI helping me understand the implications, made me feel like I actually knew what I was building before I started building it.
The workflow I ran:
- Product Brief → What is this, who is it for, why is it different?
- PRD (Product Requirements Document) → What exactly does it need to do?
- Project Context → Rules and conventions every AI agent working on the code must follow
- Architecture → How it’s all wired together under the hood
Let me walk through each.
Step 1: Product Brief
The brief is the starting point — a short document that captures the product concept, the problem it solves, what makes it different, and who it’s for.
For Appply this was the easy part. I’d been living the problem. The brief crystallised the two things that mattered most (the status timeline and the evidence-driven cover letters), who the product is for (people actively job hunting), and where I’d find them first (my own network, then LinkedIn, then job seeker communities).
BMad also generates a compressed version of the brief for feeding to AI agents later, so they don’t lose context every time you start a new conversation.
Step 2: PRD
The PRD is where you get specific about what the product actually does. BMad walks you through it step by step: who uses it, what journeys they go on, what the app needs to do, and what it needs to not do badly (performance, security, privacy).
The output for Appply was 41 specific things the product must do, and 18 constraints around how it must do them.
Some of the “must do” things are obvious: add an application, update its status, view a history. Others came out of the process that I might not have written down on my own: that every status change needs a timestamp, that the AI can never mix up one user’s data with another’s, that deleting your account must delete everything.
The constraints are the ones that end up mattering most to the technical decisions later. Things like: status changes should feel instant even if the server hasn’t responded yet (which turns out to require a specific pattern in the code, called optimistic UI: the interface updates first and reconciles with the server afterwards). Or: the AI cover letter must finish generating within 30 seconds or the infrastructure will time out.
These details are exactly the kind of thing I would have discovered halfway through building and had to awkwardly retrofit.
Step 3: Picking a Starting Point
Rather than building from zero, I found a solid starter kit: ixartz/Next-js-Boilerplate.
For a solo build, a starter kit like this is a huge deal. This one comes pre-wired with login, payments, internationalisation, UI components, and testing — all the infrastructure that would otherwise take weeks to set up. I could start with something that already works and build on top of it.
I did have to swap out the database layer. The starter came with a different database setup, and I wanted to use Cloudflare’s own database (D1) to keep everything on one platform. The AI handled the migration — I described what I wanted and it made the changes, explaining what it was doing along the way.
I made one mistake early on: I started with an older Cloudflare integration method that turned out to be the wrong one. The correct approach is the newer @opennextjs/cloudflare adapter. This cost me some time until the AI pointed me to the right documentation.
Step 4: Project Context — the rules document
This is one of the most useful things in the BMad process and also the least glamorous.
The project context is a document that records everything an AI agent needs to know before touching any code: the file structure, the naming conventions, which patterns are used for what, and — critically — a list of “never do this” rules.
Some examples from Appply’s rules:
- Never fetch data from the database inside a component that runs in the browser — only do it server-side
- Every database query must be tied to the current user’s ID — no exceptions (a concrete sketch of what that looks like follows this list)
- Never write text directly in the code — it must go through the translations file first so internationalisation works
- The project uses a specific utility helper that’s different from the default — always use that one
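To make that second rule concrete, here is roughly what a user-scoped query looks like against Cloudflare D1. This is a minimal sketch, not Appply’s actual data layer: it assumes the raw D1 bindings API via @opennextjs/cloudflare, a binding named `DB`, and an invented `applications` table.

```ts
// A user-scoped D1 query. "DB" and the table/column names are assumptions
// made for this sketch, not Appply's real schema.
import { getCloudflareContext } from "@opennextjs/cloudflare";

export async function listApplications(userId: string) {
  const { env } = getCloudflareContext();
  // The user ID is part of the query itself, not an optional filter:
  // there is no code path that can return another user's rows.
  return env.DB.prepare(
    "SELECT id, company, role, status FROM applications WHERE user_id = ?1"
  )
    .bind(userId)
    .all();
}
```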
Without this document, AI agents working on different features in different sessions start making slightly different assumptions. The code still works, but it quietly diverges in ways that become messy. The rules document prevents that.
Step 5: Architecture
The architecture step is where the AI helped me most, because it’s the part I understand least.
The BMad architecture workflow is a collaborative eight-step process. At each step, it asks questions, surfaces tradeoffs, and explains why certain decisions matter. By the end I had a proper document covering:
- How data moves between the browser and the database
- How user authentication works and where it’s enforced
- How state is managed when two different views (table and Kanban board) show the same data
- How the AI cover letter generation works without timing out
- Where the freemium limits get enforced
- What happens to all the data when someone deletes their account
The key insight I took from this step: the AI can write correct code, but it needs to know the rules of this specific project to write correct code for this project. The architecture document is where those rules live. Every agent that works on the codebase from this point on reads it first.
What the planning phase produced
By the end of the planning session, Appply had:
- A product brief
- A full requirements document (41 features, 18 constraints)
- A project context and rules file
- A complete architecture document
No code yet. But a very clear picture of what the code needs to do, how it needs to be structured, and what every AI agent working on it must follow.
The next phase is breaking the requirements into epics and stories, then implementing them one by one — each story executed by an AI agent working from these documents.
5 May 2026 — The First Real Build Day
Today I ran two full epics from backlog to done. Seven stories. The whole core tracker — working, tested, and reviewed.
I want to be honest about what that felt like, because it’s the part this experiment is really about.
What I actually did vs what the AI did
I didn’t write most of the code. The AI did. What I did was:
- Describe what each feature needed to do in plain language
- Review the output and check it made sense
- Catch things that looked wrong or felt off
- Make decisions when the AI presented options
- Run the tests and read the results
- Push back when the review found bugs and make sure they got fixed
That’s not nothing. But it’s also not traditional coding. It’s closer to directing than writing — which, coming from a design background, is a frame that actually makes sense to me.
What shipped
The database and wiring (Story 1-1)
The invisible foundation — setting up the database tables, the shared data types, and the plumbing that connects the browser to the database. Nothing to look at, but everything else depends on it.
The app shell (Story 1-2)
The signed-in layout: the sidebar navigation, the page structure, the responsive behaviour. This is the kind of work I could have done myself in HTML and CSS, but here it’s tangled up with authentication logic and routing rules that I’d have struggled with alone. The AI handled it cleanly.
Adding and viewing applications (Story 2-1)
The first thing that actually felt like a product. A form to add an application (company, role, date). A table that lists them. A small “add” button. Clicking a row opens a detail panel. Simple — but getting it right required the data fetching, the form handling, and the table rendering to all work together, which is exactly where my React knowledge starts to thin out.
Status history (Story 2-2)
This is the feature that makes Appply different from a spreadsheet. Every status change — “Applied”, “Screening”, “Interview”, “Offer”, “Closed” — is timestamped and added to a permanent history log. The detail panel shows the full timeline. When you update a status, the change appears instantly in the UI before the server has even responded, then quietly confirms (or rolls back if something went wrong). That instant response — the UI not waiting for the server — was something I’d seen described but never understood how to build. I understand it a bit better now.
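That pattern has a name, optimistic UI, and React 19 (which Next.js 15 supports) ships a hook for exactly this shape. A minimal sketch of the idea, assuming a hypothetical server action called `updateStatusAction` rather than Appply’s actual code:

```tsx
"use client";
// Optimistic status update: the UI reflects the new status immediately;
// if the server action throws, React reverts to the last confirmed state.
// All names here are illustrative.
import { useOptimistic, useTransition } from "react";

type Application = { id: string; status: string };

export function useStatusUpdate(
  app: Application,
  updateStatusAction: (id: string, status: string) => Promise<void>
) {
  const [, startTransition] = useTransition();
  const [optimisticApp, setOptimisticStatus] = useOptimistic(
    app,
    (current, status: string) => ({ ...current, status })
  );

  function updateStatus(status: string) {
    startTransition(async () => {
      setOptimisticStatus(status); // appears instantly, before the server responds
      await updateStatusAction(app.id, status); // then quietly confirms (or rolls back)
    });
  }

  return { optimisticApp, updateStatus };
}
```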
Notes and delete (Story 2-3)
Notes live in the detail panel. You type, click away, and they save automatically. A small indicator tells you when it’s saving and when it’s saved. Delete lives in a right-click menu with a confirmation step to prevent accidents. Nothing technically dramatic, but the details matter — the auto-save timing, the save indicator states, the confirmation copy.
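The auto-save shape is simple enough to sketch: save on blur, and drive the indicator from a tiny state machine. `saveNote` here is a stand-in for whatever actually persists the text; none of this is Appply’s literal code.

```tsx
"use client";
// Save-on-blur with a status indicator. "saveNote" is hypothetical.
import { useState } from "react";

type SaveState = "idle" | "saving" | "saved" | "error";

export function NotesField(props: {
  initial: string;
  saveNote: (text: string) => Promise<void>;
}) {
  const [text, setText] = useState(props.initial);
  const [state, setState] = useState<SaveState>("idle");

  async function handleBlur() {
    setState("saving");
    try {
      await props.saveNote(text);
      setState("saved");
    } catch {
      setState("error"); // surface the failure instead of losing the note silently
    }
  }

  return (
    <div>
      <textarea
        value={text}
        onChange={(e) => setText(e.target.value)}
        onBlur={handleBlur}
      />
      <span aria-live="polite">
        {state === "saving" ? "Saving…" : state === "saved" ? "Saved" : ""}
      </span>
    </div>
  );
}
```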
Kanban board (Story 2-4)
Drag and drop is one of those interactions that looks simple and is genuinely complicated under the hood. Five columns, one per pipeline stage. Cards you can drag between columns. Dropping a card fires the same status update as clicking in the table — so both views always show the same data. The ghost card that appears under your cursor while dragging. None of that would have been achievable for me without the AI.
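The detail that keeps the two views honest is less the drag-and-drop itself than the plumbing behind it: both views funnel through a single mutation. Roughly, with invented names:

```ts
// One mutation, two views. The table's dropdown and the board's drop
// handler call the same function, so the views can never disagree.
// setApplicationStatus stands in for the real server action.
type Stage = "Applied" | "Screening" | "Interview" | "Offer" | "Closed";

declare function setApplicationStatus(id: string, stage: Stage): Promise<void>;

// Table view: a new status picked from the dropdown
export function onStatusSelect(id: string, stage: Stage) {
  return setApplicationStatus(id, stage);
}

// Board view: a card dropped into a column
export function onCardDrop(id: string, column: Stage) {
  return setApplicationStatus(id, column);
}
```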
View toggle and filter (Story 2-5)
The story that connected everything. A toggle between table view and board view, with your preference remembered between sessions. A search bar that filters by company, role, or status as you type — no waiting, no server call, just instant filtering of what’s already loaded. Both views handle the “nothing matches” state with a helpful message and a clear button.
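Instant filtering like this is just a computation over data that is already in memory. A sketch of the idea, with illustrative field names:

```ts
// Client-side filtering: no server call, just a filter over loaded rows.
import { useMemo, useState } from "react";

type Application = { company: string; role: string; status: string };

export function useApplicationFilter(applications: Application[]) {
  const [query, setQuery] = useState("");

  const filtered = useMemo(() => {
    const q = query.trim().toLowerCase();
    if (!q) return applications;
    return applications.filter((a) =>
      [a.company, a.role, a.status].some((field) =>
        field.toLowerCase().includes(q)
      )
    );
  }, [applications, query]);

  // filtered.length === 0 drives the "nothing matches" state in both views
  return { query, setQuery, filtered };
}
```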
The review process — where things got interesting
After each story, I ran an adversarial code review — a BMad workflow that reads the changes and looks for problems. It sends the code through three different “reviewers” simultaneously, each looking for different things: bugs, edge cases, and whether the feature actually does what it was supposed to do.
On Story 2-5 alone, the review found six real problems:
- If the data failed to load while in board view, the page showed nothing. No error, no message. Just blank columns. The fix was a single line — but without the review, I’d never have caught it because the happy path worked fine.
- If you typed a filter and got no results in board view, you’d see five empty Kanban columns with no explanation. The table view handled this correctly; the board view didn’t.
- The “no results” message in the table could appear before the data had even finished loading — so you’d type something, see “no applications match”, and then a moment later your applications would appear.
- The preference for table vs board view was being saved in a way that could crash in certain browsers’ private mode (the safe pattern is sketched just after this list).
- The toggle buttons were announcing themselves twice to screen readers — once from a tooltip and once from a hidden label. Minor, but wrong.
- A missing `type="button"` attribute meant some buttons defaulted to `submit` and would accidentally submit any form they were placed inside. Not a problem now, but the kind of thing that causes a confusing bug later.
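The private-mode crash is worth a concrete illustration, because it is easy to miss: in some browsers, touching `localStorage` can throw instead of failing silently. The safe pattern wraps every access. A sketch, not Appply’s exact code:

```ts
// Safe localStorage access. Safari's private mode (and some locked-down
// contexts) can throw on read or write, so both are guarded.
function readPreference(key: string): string | null {
  try {
    return window.localStorage.getItem(key);
  } catch {
    return null; // storage unavailable: fall back to the default view
  }
}

function writePreference(key: string, value: string): void {
  try {
    window.localStorage.setItem(key, value);
  } catch {
    // Storage disabled or quota exceeded: skip persisting, don't crash.
  }
}
```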
All six were fixed. None of them would have made the product unusable — but two of them were real UX failures that real users would have hit. The review caught them before anyone else did.
This is the part of the experiment I find most interesting. The AI writes code faster than I ever could. But it also misses things — not through carelessness, just because it’s generating based on the brief it was given, not living in the product as a user. The review pass is the layer that catches the gap between “works in the happy path” and “works for real people”.
6 May 2026 — The Afternoon I Lost to a Markdown Editor
Extended application fields (Story 2-6)
The last story of the sprint added the fields that make the tracker actually useful for research: job URL, source (LinkedIn, referral, careers site…), salary, and contract type. These live in a new “About this role” section in the application detail panel. The modal also became two-pane at this point — the main detail on the left, a persistent activity feed on the right. It now looks like something you’d actually want to open.
I had modest ambitions for today. One small quality-of-life improvement: make the Notes and Job Description fields in the application detail panel properly support rich text, rather than just storing plain text.
Four hours later I was still at it.
What I wanted
A text editor with a formatting toolbar. Bold, italic, headings, lists, links. Nothing exotic — the kind of thing you’d find in any notes app. The fields already saved and loaded text correctly. I just wanted to add formatting on top.
Version 1: react-markdown
My first approach was simple: keep the textarea as-is, add a toolbar that inserts markdown syntax around selected text (wrapping `**` around a selection to make it bold, for example), and add a preview toggle to show the rendered result.
This worked. It even worked nicely. The toolbar icons, the preview pane, the way the cursor stayed in the right place after you applied formatting — it all came together cleanly.
Then I spent a while polishing it. The preview toggle became a single eye icon that highlights when active. The toolbar buttons became proper icon buttons using lucide-react. A genuine glitch in the scroll position after formatting got fixed. Each fix was straightforward once you knew what you were doing — I didn’t.
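The core trick of that first version fits in a few lines. A minimal sketch that operates on the DOM directly (a React version would route the new value through state; names are illustrative):

```ts
// Wrap the current textarea selection in a markdown marker and keep the
// cursor where the user expects it.
function wrapSelection(textarea: HTMLTextAreaElement, marker: string) {
  const { selectionStart: start, selectionEnd: end, value } = textarea;
  const selected = value.slice(start, end);

  textarea.value =
    value.slice(0, start) + marker + selected + marker + value.slice(end);

  // Keep the selection on the original text, now inside the markers.
  textarea.setSelectionRange(start + marker.length, end + marker.length);
  textarea.focus();
}

// Usage: the Bold button calls wrapSelection(textareaEl, "**")
```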
Version 2: Lexical
By mid-afternoon, I decided the textarea-plus-preview approach felt clunky. You shouldn’t have to toggle between an edit view full of symbols and a preview pane to read what you wrote. A proper WYSIWYG editor — where you see formatted text as you type, no preview needed — would be significantly better to use.
So I pulled in Lexical, Meta’s open-source rich text editor framework. The concept is good: define the nodes and transformers you need, plug in the plugins for the behaviours you want (history, lists, links, markdown shortcuts), and you get a fully functional rich-text editing surface with relatively little custom code.
What made it take all afternoon was the gap between “relatively little” and “a few subtle things that need to be exactly right”:
- Keeping the external state in sync with the editor (the parent component passes a markdown string down; Lexical has its own internal state) required a custom plugin to handle the bridge without causing infinite update loops
- The toolbar buttons had to use `onMouseDown` rather than `onClick` to avoid stealing focus from the editor — but that breaks keyboard navigation, so `onClick` still needs to be there for keyboard users (a sketch of this focus dance follows the list)
- The heading toolbar button would silently erase all the text in the current line unless you explicitly passed `true` to transfer children when replacing the node
- URL input via `window.prompt` is simple but leaves a security gap (users could technically enter `javascript:` URLs), so that needed a filter
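Here is the shape of that focus dance. `FORMAT_TEXT_COMMAND` and `useLexicalComposerContext` are Lexical’s real APIs; the rest of the component is a sketch rather than Appply’s actual toolbar:

```tsx
// A toolbar button that formats without stealing focus. preventDefault on
// mousedown keeps the caret in the editor; the command still runs in
// onClick so keyboard activation (Enter/Space) works too.
import { useLexicalComposerContext } from "@lexical/react/LexicalComposerContext";
import { FORMAT_TEXT_COMMAND } from "lexical";

export function BoldButton() {
  const [editor] = useLexicalComposerContext();
  return (
    <button
      type="button" // never submit a surrounding form (see the review findings above)
      onMouseDown={(e) => e.preventDefault()} // don't take focus from the editor
      onClick={() => editor.dispatchCommand(FORMAT_TEXT_COMMAND, "bold")}
    >
      B
    </button>
  );
}
```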
Each of these was a small thing. None of them was obvious from the documentation. All of them showed up in the adversarial code review that runs after every change — which meant fixing them, re-running the review, and checking that the fixes hadn’t introduced something else.
The final result works well. It’s a genuinely pleasant editor: you type markdown shortcuts (`**bold**`, `# Heading`, `- list item`) and they render live as formatted text; the toolbar gives you the mouse-friendly version of the same; undo/redo works; the existing markdown stored in the database loads and saves correctly without a schema change.
But if you’re reading this wondering why experienced developers laugh when someone says “it’ll only take an afternoon” — this is why. Every problem is small. There are just more of them than you expect.
Where things stand
The core tracker works. Add applications, update their status with a full history, add notes, view as a table or a Kanban board, filter by anything, drag cards between columns. Two epics done.
Next: saving CV versions, and the AI cover letter generator — the features that make this more than just a nice-looking spreadsheet.
Why I think this is worth sharing
I’m not writing this as a tutorial. I don’t have the expertise to teach React or database design.
I’m writing it because I think there’s a version of this story that a lot of people in design and front-end can relate to: knowing exactly what you want to build, being able to articulate it clearly, having good taste and real product instincts — and being blocked by a layer of technical complexity that’s just out of reach.
AI-assisted coding is changing that. Not by making everyone a senior engineer, but by making it possible to move further with the skills you already have.
This project is my test of how far that goes.
Appply is being built in public. The stack is Next.js 15, Cloudflare Workers, D1, Workers AI. The planning methodology is BMad.