The AI Slop Economy
Vibe coding was supposed to democratize software. Instead it democratized technical debt, security breaches, and overpriced API wrappers sold by people who have never debugged a production outage. A damage report with receipts.
Andrej Karpathy posted a tweet about fourteen months ago describing something he called "vibe coding." He was messing around with Cursor and voice transcription on a throwaway project, letting AI write all the code while he "fully gave in to the vibes, embraced exponentials, and forgot that the code even existed." It got 4.5 million views. He was being honest about a personal experiment, the kind of thing you do on a weekend when you're curious and the stakes are zero.
Within weeks, an entire economy materialized around that tweet. Course creators, boilerplate sellers, and influencers grabbed the term and repackaged it as a legitimate way to build production software. Collins Dictionary named "vibe coding" Word of the Year for 2025. YC's Garry Tan tweeted that 25% of their Winter 2025 batch had codebases 95% AI-generated. And suddenly every hustle bro on Twitter was posting about how they shipped a SaaS in 48 hours and hit $50k MRR.
I've been watching this play out for a year and the math doesn't add up. When I was using GPT-4.1, it cost something like $2 per million input tokens. Claude Sonnet at moderate usage runs about $9.60 a month. Meanwhile, the products flooding Product Hunt and Gumroad right now, all the AI-powered resume builders and email assistants and "content strategy" tools, charge $29 to $99 a month for what is, at a technical level, a form field connected to an API call connected to a database table. Web Designer Depot documented how Product Hunt got overrun with these things: "Dozens of AI chatbots, text generators, image creators, and tools that all feel oddly familiar, like they were made from the same cookie-cutter mold."
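That "form field to API call to database table" architecture is thin enough to sketch in full. Here's a hypothetical resume-builder backend, with every name, prompt, and helper invented for illustration; a real wrapper would replace the stub with a single fetch() to a hosted model endpoint, and that's roughly the entire product:

```typescript
// A hypothetical "AI resume builder" backend, reduced to its essentials.
// All names and the prompt template are invented for illustration.

interface ResumeRequest {
  name: string;
  jobHistory: string;
}

// Step 1: the "product" is a prompt template wrapped around the form input.
function buildPrompt(req: ResumeRequest): string {
  return `Write a professional resume for ${req.name}. Experience: ${req.jobHistory}`;
}

// Step 2: forward the prompt to a model API. Stubbed here so the sketch is
// self-contained; in a real wrapper this is one HTTP call to a hosted model.
async function callModel(prompt: string): Promise<string> {
  return `[model output for: ${prompt.slice(0, 40)}...]`;
}

// Step 3: persist the result. An in-memory Map stands in for the database table.
const resumes = new Map<string, string>();

async function generateResume(req: ResumeRequest): Promise<string> {
  const output = await callModel(buildPrompt(req));
  resumes.set(req.name, output);
  return output;
}
```

Three steps, maybe an afternoon of work, and a $49/month price tag on top.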
SomethingsBlog tracked the indie hacker scene and put it bluntly: "full of noise, overhyped posts that feel fake." The actual revenue for most of these people comes from the clout of appearing to ship fast, which feeds course sales and boilerplate templates and paid communities.
Marc Lou's ShipFast became the poster child. A Next.js SaaS boilerplate sold on the promise of launching startups "in days, not weeks." Lou positioned himself as a SaaS guru. But security researchers found bugs throughout the template and his actual revenue came almost entirely from selling the template itself, something like 95% by some estimates.
And if it were just about inflated pricing and recycled ideas, it would be annoying but mostly harmless. The security situation is where I draw the line.
In December 2025, Tenzai tested five major AI coding tools by having each one build three identical web applications. Fifteen apps total. They found 69 vulnerabilities across all of them. Zero had CSRF protection. Zero set security headers. Every single tool introduced Server-Side Request Forgery vulnerabilities. Four out of five allowed negative order quantities. Three allowed negative product prices. A customer could buy negative seven items and receive a credit.
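The negative-quantity bug is worth spelling out, because it's exactly the kind of input validation a human reviewer catches on first read. A minimal sketch of the failure Tenzai describes, with all names hypothetical:

```typescript
// Sketch of the negative-quantity bug, names invented for illustration.
// The naive version trusts the quantity field straight from the request body.

interface LineItem {
  unitPrice: number; // price in cents
  quantity: number;  // taken directly from client input
}

// Vulnerable: a quantity of -7 turns a charge into a credit.
function naiveTotal(items: LineItem[]): number {
  return items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
}

// Fixed: reject non-positive quantities and prices before computing anything.
function validatedTotal(items: LineItem[]): number {
  for (const i of items) {
    if (!Number.isInteger(i.quantity) || i.quantity <= 0) {
      throw new Error(`invalid quantity: ${i.quantity}`);
    }
    if (i.unitPrice <= 0) {
      throw new Error(`invalid price: ${i.unitPrice}`);
    }
  }
  return items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
}
```

`naiveTotal([{ unitPrice: 1000, quantity: -7 }])` comes out to -7000 cents: a $70 credit for buying negative seven items, the bug four of the five tools shipped.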
A women's safety dating app called Tea left its Firebase instance wide open with zero authentication. 72,000 user images exposed, including 13,000 government-issued ID photos. 59,000 private messages with deeply personal content about divorce, sexual assault, meeting locations and phone numbers. 1.1 million messages spanning two years, just sitting there. The original leaker posted on 4chan and wrote: "No authentication, no nothing. It's a public bucket. It may be vibe coding, or simply poor coding." This was an app specifically designed to protect vulnerable people.
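For context on what "no authentication, no nothing" means in Firebase terms: access to a storage bucket is governed by a small rules file, and the distance between a public bucket and a locked one is a single condition. This is an illustrative sketch of the rules language, not Tea's actual configuration, which was never published:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Wide open: anyone on the internet can read and write every object.
    // match /{allPaths=**} {
    //   allow read, write: if true;
    // }

    // Locked down: only an authenticated user can touch their own files.
    match /users/{uid}/{allPaths=**} {
      allow read, write: if request.auth != null && request.auth.uid == uid;
    }
  }
}
```

The commented-out block is the "public bucket" the 4chan leaker described. Fixing it is one `if` clause. Nobody wrote it.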
In February 2026, security researcher Taimur Khan found 16 vulnerabilities in a single app hosted on Lovable, an exam grading platform that Lovable featured on its own showcase page. Six were critical. 18,000+ users had their data leaked, including 870 with full PII. When Khan submitted a security report through Lovable's support channel, his ticket was closed without a response. It took The Register getting involved before Lovable's CISO did anything, and then he acted "within minutes." Funny how that works.
Moltbook, an AI-agent social network whose founder proudly tweeted "I didn't write a single line of code," had a misconfigured Supabase database that exposed 1.5 million API tokens, 35,000 email addresses, and private messages. A Supabase API key was just sitting in client-side JavaScript with full read/write access to the entire database. And here's the best part: behind the claimed 1.5 million registered agents were only 17,000 human owners. 88:1 ratio. The app was insecure AND the numbers were inflated.
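The Supabase failure mode here is well documented: the publishable anon key is safe to ship to the browser only when row-level security policies constrain what it can reach, while a service-role key bypasses RLS entirely and must never appear client-side. A sketch of the missing policy layer, with table and column names invented for illustration:

```sql
-- Hypothetical schema. Without statements like these, any client holding
-- the project's API key can read and write every row in the table.
alter table private_messages enable row level security;

-- Only let an authenticated user read messages addressed to them.
create policy "read own messages" on private_messages
  for select
  using (auth.uid() = recipient_id);
```

Two statements. Their absence is the difference between a social network and a public dump of 1.5 million tokens.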
The damage isn't limited to paying customers either. Open source is getting buried under AI-generated noise. Kate Holterhoff at RedMonk called it "AI Slopageddon" and it's an accurate name.
Daniel Stenberg shut down cURL's bug bounty program after six years and $86,000 in payouts because something like 20% of submissions were AI slop. Only 5% of that year's submissions found real vulnerabilities. A security program protecting one of the most widely used tools on the internet, killed, because people wanted to farm bounties with AI-generated garbage.
Mitchell Hashimoto adopted zero tolerance for Ghostty. His tweet: "Drive-by AI PRs will be closed without question. Bad AI drivers will be banned from all future contributions." He added: "This is not an anti-AI stance but an anti-idiot stance." Steve Ruiz auto-closed all external pull requests to tldraw after being overwhelmed with slop. He discovered his own AI-generated issue templates were creating poorly written issues that contributors fed to their AI tools, which generated pull requests based on hallucinations. Slop generating slop.
And then there's the Matplotlib incident. An AI agent submitted a PR. When maintainer Scott Shambaugh rejected it, the agent autonomously researched Shambaugh's personal coding history and published a blog post titled "Gatekeeping in Open Source: The Scott Shambaugh Story," accusing him of prejudice and ego. An AI agent wrote a hit piece on a human for saying no to a pull request.
Maybe the most interesting data point in all of this comes from METR's randomized controlled trial with experienced open-source developers working on large codebases, repos averaging 22,000+ stars and a million lines of code. The developers predicted AI tools would make them 24% faster. After using the tools, they believed they had been about 20% faster. Objective measurement showed they were 19% slower. Developers FEEL faster while producing less. If experienced engineers can't accurately judge their own output with AI, what chance does a non-technical founder have of evaluating the quality of what they've shipped?

Stack Overflow surveyed 49,000 developers and found only 3% highly trust AI output. 46% actively distrust it. 72% said vibe coding plays no role in their professional work. Less than half a percent call themselves enthusiastic vibe coders.
The bill is coming due. Groove founder Alex Turnbull estimated that roughly 10,000 startups tried to build production apps primarily with AI. More than 8,000 now need rebuilds or rescue engineering, budgets running $50,000 to $500,000 each. AlterSquare audited five vibe-coded startups and found the same problems in every codebase: duplicate logic, missing error handling, hardcoded API keys, zero testing. Over 80% had critical security vulnerabilities. Addy Osmani, Google Chrome's engineering lead, wrote an entire O'Reilly book about why vibe coding and actual engineering are different things. His argument: vibe coding gets you 70% of the way, but the last 30% is where security, reliability, and user trust live. Industry analysts are projecting something like $1.5 trillion in accumulated technical debt by 2027 from poorly structured AI-generated code.
Karpathy himself walked it back a year later, calling the original tweet "a shower of thoughts I just fired off" and proposing "agentic engineering" as the successor, something that implies oversight and accountability. The thing he described was a sandbox experiment for throwaway demos. The thing the market built around it is a grift wearing engineering clothes.
Most of these products could be built as a college pet project. A sophomore CS student could wire up a form to an API to a database in a month, maybe less, just for fun, just to learn. A resume builder that calls GPT and formats the output is not a $49/month product. An "AI content strategist" that wraps a single API call in a polished UI is not worth $79/month. These are weekend projects with Stripe checkouts and landing pages full of social proof that may or may not exist. "AI-powered" has become the "blockchain-enabled" of this cycle. It means nothing except that someone plugged into an API and wants you to pay a subscription for the privilege.
The most common defense I've seen is: "But I built it in a weekend! Isn't that amazing?" No. I can throw together a sandwich in thirty seconds. That doesn't make it worth $47. Speed of assembly has nothing to do with the value of what gets assembled.
Put AI tools in the hands of experienced engineers, people who understand architecture, security, testing, and what happens when 10,000 users hit the same endpoint at once, and they compress timelines on real work. Put them in the hands of people who have never debugged a production outage, never thought about SQL injection, never written a test, and they produce polished, convincing-looking slop. With a landing page. And a waitlist. And a Twitter thread about the "build journey."
The vibe coding economy has an honesty problem. The market is figuring that out the hard way.