In this issue:
Why your app's success is the worst-case scenario for a vibe-coded build
The 3 sentences that prevent the "full rewrite from scratch" ultimatum
The 30-second audit that catches a lying AI
The drop-in file that gives your AI permanent project memory
Why dev shops are quietly becoming AI code review shops
Section 1 — The Agentic Engineering Lesson
The Spec is the Magic Words
Most vibe-coding disaster stories sound the same: app gets stuck, user gets frustrated, project dies on the laptop. Tragic, but quiet.
Aron's customer had a different problem. His app worked. People were using it. He wanted to bring in more traffic, make it a real product.
Aron had to give him the hard answer:
"I cannot help you scale this. Not as it is. It needs a full rewrite from scratch."
The customer didn't have the time. He didn't have the budget. End of conversation.
Somewhere out there, someone is going to clone that idea screen by screen — and their version will work. They'll get the users. They'll get the revenue. They'll get the credit.
That's the worst-case scenario. It's the success scenario. And it's preventable with three sentences you tell your AI before it writes a single line of code.
This issue is the first proper installment of a framework we'll keep coming back to: Spec → Team → Loop. It's the three-phase mental model that turns vibe coding into agentic engineering.
Phase A: The Spec. The constraints you set before the AI starts.
Phase B: The Team. The cast of AI roles you assign — Doer, Checker, Manager.
Phase C: The Loop. The feedback cycle that tightens every output against the Spec.
Today is all about Phase A. Aron's latest video (6 min, no padding) is built around three Spec lines. Memorize them. Paste them into every project.
1. "Create a separate folder for the project and initialize an empty git repository in it."
This is your safety net. A git repository stores every previous version of your code, so when your AI assistant breaks something — and it will — you can rewind to the last working version in seconds.
The mental model is video-game saves. Without checkpoints, one bad change wipes out your progress and you start over. With them, you get checkpoint-and-respawn instead of permadeath.
After every change that actually works, follow up with: "Add all necessary files to the repository and commit changes." That's your save button.
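For the curious, here's roughly what that checkpoint workflow looks like under the hood. You never have to type these yourself (the AI runs them for you), and the folder name my-app and the file are just examples:

```shell
# One-time setup: create the project folder and the save system.
mkdir -p my-app && cd my-app
git init

# First time only: tell git who you are (any name/email works).
git config user.name "Your Name"
git config user.email "you@example.com"

# ...the AI makes a change that works...
echo "console.log('hello')" > app.js

# Your save button: record a checkpoint.
git add -A && git commit -m "working version"

# Later, list your checkpoints (newest first):
git log --oneline
# And if a change breaks things, rewind files to a checkpoint:
# git checkout <commit-id> -- .
```

That last commented-out line is the "respawn" move: it restores your files to the state they were in at that save point.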
2. "Check if Docker Compose is installed, and if not, install it. Make sure every component is containerized."
This is portability and scale. Containerization means each part of your app lives in a sealed box instead of spilling config and dependencies across your whole computer.
Why it matters the moment you grow: containers are clonable. Ten users? One container. Ten thousand? Ten thousand containers. No rewrite required.
Docker Compose is the industry-standard tool. Use those exact words — "Docker Compose," not "containers" — and your AI will reach for the right thing.
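For reference, "containerized with Docker Compose" just means your project gains a docker-compose.yml file describing each sealed box. A minimal sketch of what such a file might look like (the service names, folders, and ports here are made up; yours will differ):

```yaml
services:
  frontend:           # the React app, in its own box
    build: ./frontend
    ports:
      - "3000:3000"
  backend:            # the Node.js API, in a separate box
    build: ./backend
    ports:
      - "8000:8000"
```

One command, docker compose up, starts every box at once on any machine with Docker installed.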
3. "For front-end tasks, use React. For back-end tasks, use Node.js."
This is the handover clause. The day will come when you need a real developer: investor pressure, a feature you can't vibe-code, or simply wanting your weekends back.
If your AI built the app on something obscure (a framework it picked at random because it sounded clever in the moment), no professional will recognize the codebase. That's how you end up with the rewrite-or-die ultimatum Aron's customer got.
React is the most common front-end framework in use today, and Node.js is the most common back-end runtime. Any decent developer can read them. Aron's caveat is real: these aren't eternal truths, and if you have a specific reason to choose differently, fine. Just don't let the AI roll the dice for you.
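Concretely, the "recognizable stack" shows up in a file called package.json at the root of a Node project. A sketch of roughly what a React + Node setup declares (the name and version numbers here are illustrative):

```json
{
  "name": "my-app",
  "dependencies": {
    "react": "^18.0.0",
    "react-dom": "^18.0.0",
    "express": "^4.18.0"
  }
}
```

In practice the front-end and back-end usually each get their own package.json. The point is that "react" and a Node server library like "express" appear by name, which is exactly what the check in Section 2 looks for.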
A note before we move on: this Spec works whether you're building a SaaS for paying users or a personal CRM for your own contacts. Anything you want to use for more than three months needs to survive someone — eventually — looking at the code.
Section 2 — The Verify Check
OK, you fed the AI the Spec. How do you know it actually did the thing?
Three 10-second checks. Total time: about 30 seconds.
Git is real: Ask the AI to run git status and paste the output. You should see a project folder and a list of files. If it errors with "not a git repository," step 1 was skipped. Push back.
Containers exist: Ask, "Show me the docker-compose.yml file." If there isn't one, containerization didn't actually happen — the AI may have nodded politely and moved on.
Stack is sane: Ask, "What's in package.json?" You should see "react" listed for the front-end. Your back-end should look like a Node project (mention of "express," "fastify," or just "node"). If the main framework is something else — Vue, Svelte, Django, Rails — ask the AI why before you let it continue.
This is the whole VERIFY pillar in miniature. You don't need to read the code. You need to read the evidence the code is doing what you asked.
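If you'd rather not copy outputs back and forth, the three checks can be run as one small script from your project folder. This is our sketch, not something Aron ships; the labels are made up:

```shell
# Print OK or MISSING for each of the three Spec checks.
check() {
  if eval "$2" >/dev/null 2>&1; then
    echo "$1: OK"
  else
    echo "$1: MISSING"
  fi
}

check "git repository"        "git rev-parse --is-inside-work-tree"
check "docker-compose.yml"    "test -f docker-compose.yml"
check "react in package.json" "grep -q '\"react\"' package.json"
```

Any MISSING line is your cue to push back on the AI before letting it continue.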
Section 3 — Tip of the Week
The CLAUDE.md drop-in.
Here's a thing almost no vibe-coding tutorial mentions: agentic coding tools that run on your computer automatically read an instructions file from your project folder before your first message. For Claude Code that file is CLAUDE.md; Codex CLI and Cursor have their own equivalents (AGENTS.md and Cursor rules files). Whatever's in that file becomes binding instructions for every session in that project.
Translation: anything you'd otherwise have to repeat in every new chat ("use git," "use Docker," "stop committing my .env file") you write down once. The AI shows up pre-briefed forever.
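To make that concrete, here's the kind of thing such a file contains. This is a stripped-down sketch of our own, not the Bitwise version:

```markdown
# Project rules (read before doing anything)

- This folder is a git repository. Commit after every
  change that works.
- Every component runs in a container via Docker Compose.
  Never install services directly on my computer.
- Front-end: React. Back-end: Node.js. Do not introduce
  other frameworks without asking first.
- Never commit .env or other secrets.
```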
We made you the Bitwise version. It contains:
Aron's three magic words rewritten as binding rules — not one-time prompts the AI forgets two messages later
A .gitignore so the AI doesn't commit your secrets to git on day one
A Verify command under each rule — one line you can paste back to confirm the AI actually did what it claimed
Communication rules that force the AI to flag mocks, fake APIs, and unfinished work in the same response. That's the missing piece that ends "fix loop" hell
A project description block you fill in once, so the AI stops guessing what you're building
→ Download the Scale-Ready CLAUDE.md (free with the newsletter). Already subscribed? Hit reply and we'll send it.
How to use it (60 seconds):
Save the file as CLAUDE.md (capitalized exactly) in your project's root folder.
Fill in the project description block at the bottom. Don't skip it — that's the part that makes the rest work.
Open Claude Code, Codex, or Cursor in that folder.
First prompt: "Read CLAUDE.md and tell me the rules in your own words." If the AI's summary is wrong, the file didn't load. Fix that before doing anything else.
On Lovable, Bolt, or v0? No filesystem to drop into. Paste everything below the --- FOR THE AI --- line into your first chat message. Same effect.
Section 4 — From the Trenches
A post making the rounds on r/SaaS this month:
"Dev shops will become AI code review shops. Non-coders building with Claude need the rubber stamp of production-ready."
Translation: there's a whole emerging industry of professionals who exist to clean up vibe-coded apps after the fact. You don't want to be their customer. You want to skip that bill entirely.
That's exactly what the Spec gives you.
Sign-off
That's it for this week.
Got an app you're already scared to scale? Hit reply and tell us what's in the codebase. We're collecting "rescue cases" for a future Bitwise teardown — anonymized, of course.
Build real software with AI. Know enough to trust it.
— Aron
Know a non-coder who builds with AI? Forward this to them — they'll thank you.
Next week: I'm breaking down what actually happens when your AI tool says "deployment successful" — and why you shouldn't always believe it.
See you Tuesday,
Aron
