
Vibe Coding Is Breaking Production (Here's How to Do It Right)
April 10, 2026
Vibe coding is shipping broken, insecure software to production at a pace we have never seen before. Not because AI-generated code is inherently bad, but because the people using it are skipping every review step that keeps software alive past launch day. The fix is not to stop using AI. It is to treat AI output the way you would treat a pull request from a junior developer you have never worked with.
What Vibe Coding Actually Is (And Why It Took Off)
The term comes from Andrej Karpathy, who described it as "fully giving in to the vibes" and letting AI write your code while you just steer with natural language prompts. You describe what you want. The AI generates it. You accept, run, and ship.
92% of US developers now use AI coding tools daily. GitHub reports that 46% of all new code is AI-generated. The appeal is obvious: what used to take a week takes an afternoon. Prototypes that needed a team of three now need one person and a prompt.
The speed is real. So are the consequences of skipping the boring parts.
The Incidents That Changed the Conversation
Three events in early 2026 forced the industry to pay attention.
The Moltbook breach. A social network for AI agents launched in January 2026. The founder publicly said he "didn't write a single line of code." Within three days, security researchers at Wiz found the entire production database exposed. 1.5 million API keys. 35,000 email addresses. Private messages. The root cause was a misconfigured Supabase deployment with no Row Level Security, a mistake the AI generated and the founder never reviewed.
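For reference, the missing control is about two statements of SQL. A hedged sketch (table and column names here are hypothetical, not Moltbook's actual schema): enabling Row Level Security denies all access by default, and a policy then grants back only what each user should see.

```sql
-- Hypothetical table and columns; this is the shape of the fix,
-- not the breached schema.
alter table private_messages enable row level security;

-- With RLS on and no policy, nothing is readable. This policy lets a
-- logged-in user read only messages they sent or received.
create policy "read own messages" on private_messages
  for select
  using (auth.uid() = sender_id or auth.uid() = recipient_id);
```

auth.uid() is Supabase's helper for the authenticated user's ID. The point is not that this is hard; it is that the AI generated a working app without the guard, and nobody looked.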
Apple's App Store crackdown. In March 2026, Apple quietly blocked updates for apps built with vibe coding platforms like Replit and Vibecode. The company later pulled the app "Anything" entirely. Apple cited rules about apps executing code that alters their own functionality, but the message was clear: generated apps that skip quality checks are not welcome.
The $150K API key leak. A developer shipped a vibe-coded project that exposed their cloud provider credentials in client-side JavaScript. The bill hit $150,000 before anyone noticed. GitGuardian tracked a 34% year-over-year increase in hardcoded secrets on GitHub in 2025, with AI-assisted commits showing a 3.2% secret-leak rate compared to the 1.5% baseline.
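The leak pattern is mechanically detectable before anything ships. Here is a minimal sketch of the kind of check a secret scanner runs; the regexes are illustrative (real tools like GitGuardian ship hundreds of rules), and the key in the example is AWS's published dummy value, not a live credential.

```typescript
// Minimal secret-scanner sketch: flag strings that look like hardcoded
// credentials before they reach a client-side bundle. Two illustrative
// rules only; real scanners cover far more key shapes.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["aws-access-key-id", /\bAKIA[0-9A-Z]{16}\b/],
  ["generic-api-key-assignment", /(api[_-]?key|secret)\s*[:=]\s*["'][A-Za-z0-9_\-]{16,}["']/i],
];

function findSecrets(source: string): string[] {
  // Return the names of every pattern that matches the source text.
  return SECRET_PATTERNS
    .filter(([, pattern]) => pattern.test(source))
    .map(([name]) => name);
}

// Example: a vibe-coded frontend file with a hardcoded cloud key
// (AWS's documented example key, not a real secret).
const bundle = 'const client = init({ apiKey: "AKIAIOSFODNN7EXAMPLE" });';
console.log(findSecrets(bundle)); // both patterns fire on this line
```

Wiring a check like this into a pre-commit hook costs minutes; the alternative, per the paragraph above, can cost $150,000.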
AI-Generated Code: The Numbers (2025-2026)
These are not edge cases. They are the predictable result of treating "it works on my machine" as a shipping standard.
When AI-Generated Code Is Fine vs. When It Needs Review
Not all vibe coding is dangerous. The risk scales with what you are building and who will use it.
Low risk (ship it):
- Internal scripts and one-off automations
- Prototypes that will never touch real user data
- Learning projects and personal tools
- Boilerplate generation (test scaffolding, config files, CRUD routes)
High risk (review everything):
- Anything that handles authentication or user data
- Payment processing or financial logic
- Infrastructure configuration (database access, cloud permissions)
- Code that will run in production for more than a week
- Public-facing APIs
The pattern is simple. If a bug means embarrassment, vibe code freely. If a bug means a breach, a bill, or a ban, read every line.

The Productivity Paradox
95% of developers report feeling more productive with AI coding tools. That feeling does not match reality. Code churn is up 41%. Refactoring dropped from 25% of changed lines in 2021 to under 10% by 2024. A METR study of experienced open-source developers found they were actually 19% slower when using AI tools across 246 real issues.
The problem is not speed. It is the illusion of speed. Developers generate more code faster, then spend more time debugging it. 63% of developers say they have spent more time fixing AI-generated code than it would have taken to write the original code themselves.
This does not mean AI tools are useless. It means they change where the work happens. The work shifts from writing to reviewing. If you skip the review, you are not saving time. You are borrowing it.
A Practical Review Checklist for AI-Generated Code
I use this checklist every time I accept a block of AI-generated code that will touch production. It takes about five minutes per feature and has caught real bugs every single week.
AI Code Review Checklist
Warning: AI models do not understand your deployment environment. They generate code that works in isolation but often ignores your actual database permissions, CORS policies, and rate limits. Always test in a staging environment that mirrors production.
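Two of those gaps can be unit-tested before anything deploys. A sketch under assumptions (the origin and limits are made up): an explicit CORS allowlist instead of the wildcard that generated servers tend to default to, plus a token-bucket rate limiter as a plain function you can test without a framework.

```typescript
// Hypothetical origin; replace with your real frontend URL.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

function corsHeaderFor(origin: string | undefined): string | null {
  // Echo back the exact allowed origin, never "*"; null means reject.
  return origin !== undefined && ALLOWED_ORIGINS.has(origin) ? origin : null;
}

// Token bucket: each client gets `capacity` requests, refilled at
// `ratePerSec` tokens per second.
class TokenBucket {
  private tokens: number;
  private lastMs: number;
  private capacity: number;
  private ratePerSec: number;

  constructor(capacity: number, ratePerSec: number) {
    this.capacity = capacity;
    this.ratePerSec = ratePerSec;
    this.tokens = capacity;
    this.lastMs = Date.now();
  }

  allow(nowMs: number = Date.now()): boolean {
    // Refill based on elapsed time, clamped so clock skew can't go negative.
    const elapsed = Math.max(0, (nowMs - this.lastMs) / 1000);
    this.lastMs = nowMs;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.ratePerSec);
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

console.log(corsHeaderFor("https://evil.example")); // null: unknown origins get nothing
```

Neither of these is exotic. They are exactly the pieces AI-generated servers most often omit because nothing in the prompt asked for them.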
If you want to go deeper on reducing waste in your AI workflow, I wrote about cutting token usage by 50% with targeted prompting. And if you are a senior dev already using AI heavily, the data on why experienced developers get more out of prompt-style coding explains the review habit that makes the difference.
The Real Problem Is Not AI
Vibe coding is a tool, not a methodology. The developers who treat it as a methodology are the ones showing up in breach reports and App Store rejection emails.
The fix is boring. Review the code. Run the scans. Test with bad input. Deploy to staging first. These are the same practices that existed before AI, and they matter more now because the volume of unreviewed code has exploded.
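"Test with bad input" can be a ten-line loop. A sketch, assuming a hypothetical parseAge validator standing in for whatever your AI generated: feed it the inputs generated code most often waves through.

```typescript
// Hypothetical validator standing in for AI-generated parsing code.
// Accepts only plain non-negative integers in a plausible range.
function parseAge(raw: unknown): number {
  if (typeof raw === "string") {
    if (!/^\d+$/.test(raw.trim())) throw new Error("not a plain integer");
    raw = Number(raw.trim());
  }
  if (typeof raw !== "number" || !Number.isInteger(raw)) throw new Error("wrong type");
  if (raw < 0 || raw > 150) throw new Error("out of range");
  return raw;
}

// The classics: empty string, negatives, scientific notation, injection
// strings, wrong types, NaN. Every one must be rejected.
const badInputs: unknown[] = [null, "", "-1", "1e99", "12; DROP TABLE users", {}, Number.NaN];
for (const input of badInputs) {
  let rejected = false;
  try { parseAge(input); } catch { rejected = true; }
  if (!rejected) throw new Error(`accepted bad input: ${JSON.stringify(input)}`);
}
console.log("all bad inputs rejected");
```

Note that `Number("")` is `0` in JavaScript, which is exactly the kind of quiet coercion bug a loop like this catches and a quick manual click-through does not.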
46% of new code is AI-generated. That is not going to slow down. The question is whether you review it like a professional or ship it like a demo. The Claude Code tips I rely on all share one thing in common: they treat AI output as a starting point, not a finished product.
The developers who thrive with vibe coding will be the ones who keep the boring habits.
Sources: Wiz (Moltbook breach), 9to5Mac (Apple crackdown), The New Stack, Towards Data Science, RedHunt Labs, Second Talent (statistics)