AI didn’t replace developers.
It replaced the part where you were forced to understand what you just shipped.
Now you can prompt your way to a feature, skim the diff, and merge something that “seems reasonable.” And then production does what production always does: finds the one weird browser + one slow network + one user flow that turns your “reasonable” code into a bonfire.
So who watches the vibe coder?
The Modern Threat Model: “It Looked Fine In The Chat Window”
Classic JavaScript failures still exist. Undefined references. Timing bugs. Bad data. Third-party scripts lighting themselves on fire.
The vibe-coded era adds a new class of risk: code that’s plausible, confident, and wrong in subtle ways.
Here’s what I see over and over:
- Happy-path code with missing guards (because the AI assumed `user` is always defined, like a sweet summer child)
- Copy-pasted patterns without the “why” (React hydration, async race conditions, event handler leaks… pick your poison)
- Framework cargo culting (it used a hook! it must be correct!)
- Invisible coupling (two “independent” changes that only break when they meet in production traffic)
- Quiet failures (caught exceptions, swallowed promises, or “works on my machine” logic that never hits your local test data)
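Two of those failure modes fit in a few lines. A minimal sketch, assuming a hypothetical `user` object shaped like `{ profile: { name } }` — `renderName` and `renderNameSafely` are illustrative names, not anyone's real API:

```javascript
// The happy-path version the AI writes: assumes `user` always exists.
function renderName(user) {
  return user.profile.name.toUpperCase(); // TypeError when user is null
}

// The guard it skipped: degrade to a fallback instead of crashing the render.
function renderNameSafely(user) {
  const name = user?.profile?.name;
  return name ? name.toUpperCase() : "Anonymous";
}
```

Both “look fine in the chat window.” Only one of them survives the anonymous visitor whose session expired mid-click.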
Code review helps, but it has a blind spot: reviewers also skim. Everyone skims. Especially when the diff is 600 lines. “Looks good to me.”
Why Code Review Doesn’t Save You
If you’re reviewing understanding, you’re good.
If you’re reviewing vibes, you’re gambling.
The hard bugs aren’t “syntax wrong.” They’re “timing wrong,” “data wrong,” “browser weird,” “user did something you didn’t anticipate,” or “third-party broke and took you with it.”
And those don’t show up until real users show up.
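“Timing wrong” deserves a sketch, because it’s the one that never reproduces locally. The classic case: two overlapping requests where the slow, stale response lands after the fast one and clobbers it. A hedged fix using a request ticket — `fetcher` and `render` are hypothetical stand-ins for your fetch call and UI update:

```javascript
// Tag each request; only the latest ticket is allowed to render.
let latestTicket = 0;

async function search(query, fetcher, render) {
  const ticket = ++latestTicket;        // this request's tag
  const results = await fetcher(query); // network latency varies per request
  if (ticket !== latestTicket) return;  // a newer search superseded this one
  render(results);                      // only the freshest response renders
}
```

Without the ticket check, the bug only appears when real network jitter reorders the responses — exactly the condition your dev machine never has.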
The Only Real Fix: Production Visibility
You don’t need more opinions about what might break.
You need a system that tells you what broke, who it hit, and what happened right before it broke.
In practice, that means:
- Errors captured from real browsers (not just your dev machine)
- Context that explains the story of the error
- A way to separate “random internet noise” from “this is our problem”
That’s the job.
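The raw browser hooks behind that kind of capture fit in a short sketch. This is a bare-bones illustration, not a monitoring product — real services add deduplication, sourcemap resolution, and event timelines on top. The `report` callback is a hypothetical stand-in for whatever ships the payload to your backend:

```javascript
// Wire up the two browser hooks that catch what your try/catch blocks miss.
function installErrorCapture(report) {
  // Uncaught synchronous errors surface here.
  window.addEventListener("error", (event) => {
    report({
      message: event.message,
      source: event.filename,
      line: event.lineno,
      stack: event.error && event.error.stack,
    });
  });

  // Swallowed promises surface here.
  window.addEventListener("unhandledrejection", (event) => {
    report({
      message: String(event.reason),
      stack: event.reason && event.reason.stack,
    });
  });
}
```

That gets you the “what broke.” The “who it hit” and “what happened right before” is the part you don’t want to build yourself.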
Monitor Your Production Errors
This is exactly what TrackJS is for.
TrackJS watches your production JavaScript and gives you:
- Actionable error reports (stack traces + the timeline of events that led to the error)
- Filtering + ignore rules so you can ditch extension noise and third-party garbage
- Impact visibility so you can see which errors are actually hurting users (and which ones are just embarrassing)
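Setup is small. A sketch of the npm route below — treat the exact options as an assumption and confirm the current snippet and your real token against the TrackJS docs:

```javascript
// Install the TrackJS browser agent early, before your app code runs.
import { TrackJS } from "trackjs";

TrackJS.install({
  token: "YOUR-TOKEN-HERE", // from your TrackJS account
  application: "my-app",    // optional label to separate projects
});
```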
You can’t code review vibes. But you can watch what the vibes ship.
The Punchline
AI is going to keep writing code.
Non-engineers are going to keep shipping things they don’t fully understand.
And your site is going to keep being the place where all of that meets reality.
So: who watches the vibe coder?
You do. In production. With tooling that’s built for the messy, chaotic, real web.
If you want the easy button: start a TrackJS trial and let production tell you the truth before your users do.