Agentic Engineering vs. Vibe Coding
Nick Thompson
Summary
Vibe coding lets anyone generate code. Agentic engineering means understanding what you're deploying and why it matters. The gap between them is where enterprise risk accumulates — and most organizations can't tell which one their people are doing.
AI made the coding easy. The engineering was never about the coding.
AI didn't make everyone a developer. It made everyone think they're a developer. The tooling collapsed the cost of producing code to near zero, but it didn't touch the cost of producing code that's secure, maintainable, governable, and fit for production. That gap — between what gets generated and what actually survives contact with enterprise reality — is where the conversation about vibe coding vs. agentic engineering starts to matter.
The Conventional Wisdom Is Half Right
The prevailing narrative is simple: AI coding tools are a productivity multiplier. Point them at a problem, describe what you want, and ship faster than ever. And for a certain class of work — hackathons, quick POCs, weekend experiments, internal demos — that narrative is accurate. Vibe coding, where you describe intent loosely and accept or reject AI output based on whether it looks right, is genuinely effective when the stakes are low and the goal is exploration.
Credit where it's due: vibe coding democratized the ability to build functional software. That's not nothing. Non-technical domain experts are producing working prototypes that would have required a dev team and a sprint cycle two years ago. The barrier to entry for software creation has never been lower.
Here's where reality diverges from the narrative.
What You Don't Know Gets Compiled Too
AI is a multiplier — but it multiplies your engineering judgment, including the gaps in it. A software engineer who vibe codes isn't doing the same thing as a non-SWE who vibe codes, even if they're using identical tools and identical prompts. The engineer's instincts leak in. They catch anti-patterns in the output. They reject things a non-engineer wouldn't recognize as problems. They know what questions to ask before the AI starts generating.
It comes down to this: the differentiator isn't the AI. It's what you bring to the conversation with the AI.
Vibe coding means you describe intent and evaluate output on whether it appears to work. Agentic engineering means you architect the system, define constraints, ground the agent with context and standards, specify inputs and outputs, and review diffs — not demos. You treat the AI as a capable but junior engineer who needs direction, not a magic box that needs a wish.
The distinction isn't tooling. It's mindset, process, and — critically — vocabulary.
The Vocabulary Gap Is the Whole Ballgame
One of the most underestimated advantages a SWE brings to AI-assisted development is precision of language. Not programming language. Engineering vocabulary — the specific terms that collapse ambiguity and keep the AI from burning tokens guessing what you meant.
Consider two people building the same thing: a user authentication system.
A non-SWE prompts: "Build me a login page where users can sign in and see their dashboard."
A SWE prompts: "Implement an OAuth 2.0 PKCE auth flow. Store access tokens in httpOnly secure cookies with SameSite=Strict. Handle token refresh rotation server-side. On successful auth, redirect to /dashboard. Return 401 with a structured error body on failure. Rate-limit the token endpoint to 10 requests per minute per IP."
Same goal. One sentence of engineering knowledge eliminated four rounds of iteration, avoided three security vulnerabilities, and produced output that could survive real users. The SWE's prompt didn't just save time — it controlled the blast radius of what the AI was allowed to generate.
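The payoff of that precision is mechanical. Each term in the SWE's prompt maps to a concrete, verifiable setting in the output. A minimal sketch (function and cookie names here are illustrative, not from any framework) of what "httpOnly secure cookies with SameSite=Strict" becomes in generated code:

```python
# Hypothetical sketch: the prompt's cookie constraints rendered as an
# actual Set-Cookie header value. The name "session_cookie" and the
# cookie name "access_token" are illustrative choices, not a real API.

def session_cookie(token: str, max_age: int = 900) -> str:
    """Build a Set-Cookie value per the prompt's constraints:
    HttpOnly (no JS access), Secure (HTTPS only), SameSite=Strict
    (not sent on cross-site requests), short Max-Age."""
    return (
        f"access_token={token}; "
        f"Max-Age={max_age}; "
        "Path=/; HttpOnly; Secure; SameSite=Strict"
    )

header = session_cookie("abc123")
```

A reviewer can check each attribute against the spec in the prompt. The vague prompt produces a cookie too, just one with none of these flags, and nothing in the demo reveals the difference.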
This is what "what you don't know can haunt you" actually looks like in practice. You can't ask for what you don't know exists. The AI won't volunteer error handling, input validation, auth hardening, or state management concerns unless you specify them. It builds exactly what you asked for — and if you didn't know to ask, you get a prototype wearing production clothing.
It goes deeper than individual prompts. A SWE knows to think about concurrency, about idempotency in API design, about what happens when a database connection drops mid-transaction. They know that "it works on my machine" and "it works in production under load" are two entirely different statements. They know to ask about logging, observability, graceful degradation. None of this shows up in the output unless someone puts it in the input. The AI doesn't have opinions about your operational requirements — it has token predictions. The engineering judgment that shapes those predictions is yours or it's nobody's.
The Framework: Intent and Background Both Matter
I think about this as a 2×2. The approach matters, but the person wielding it matters more.
A SWE with casual intent — weekend project, proof of concept — is doing informed vibe coding. Fast, but with built-in guardrails. They know where the landmines are and they're consciously choosing to step over them.
A non-SWE with casual intent is doing classic vibe coding. Creative, useful for exploration, genuinely valuable for learning. But the landmines are invisible.
A SWE with production intent is doing agentic engineering. Architecture before generation. Constraints explicit. Quality gates in place. Output reviewable, maintainable, defensible. And critically — the agent is equipped. Grounding documents, coding standards, interface definitions, architectural decision records, project context. The agent doesn't just receive a prompt; it receives an operating environment. With proper plugins, resource access, and context, the agent handles not just code generation but testing, documentation, refactoring, and CI/CD tasks within boundaries you defined. That's the "agentic" part — it's not just a smarter autocomplete, it's a constrained collaborator operating within an engineered system.
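"The agent receives an operating environment" can be made concrete. One minimal way to sketch it, assuming a project keeps its grounding documents as files (all file paths and the function name here are hypothetical):

```python
# Hedged sketch: assembling grounding context before the agent sees the
# task. The paths below are invented examples of the kinds of documents
# the text describes (standards, ADRs, interface definitions).

from pathlib import Path

GROUNDING_FILES = [
    "docs/coding-standards.md",
    "docs/adr/0007-auth-flow.md",
    "docs/interfaces/payments-api.md",
]

def build_agent_context(task: str) -> str:
    """Prepend project grounding to the task so the agent generates
    within established constraints rather than from a bare prompt."""
    sections = [
        Path(p).read_text() for p in GROUNDING_FILES if Path(p).exists()
    ]
    return "\n\n".join(sections + [f"## Task\n{task}"])
```

The specific mechanism varies by tool; the point is that the prompt is the last thing assembled, not the only thing.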
A non-SWE with production intent is in dangerous territory. This is where "it works in the demo" becomes "it's in production" — with no architecture, no security posture, no understanding of what was generated or why. Just confidence and a deployed URL.
Because the AI doesn't distinguish between these quadrants on its own, the responsibility sits entirely with the person driving it.
Why Enterprise Won't Have a Choice
This isn't just a personal best-practice argument. In enterprise environments, agentic engineering is going to be a requirement — not a preference. Three reasons.
You can't govern what you can't explain. Enterprise software needs to be auditable and defensible. "The AI generated it and I clicked accept" doesn't survive a compliance review, a security audit, or a post-incident analysis. Regulatory requirements around data privacy, financial controls, and healthcare compliance demand that someone with engineering judgment made intentional design decisions — not that someone accepted AI output that happened to run. Agentic engineering creates the paper trail: the spec, the constraints, the review rationale. Vibe coding creates a black box. And since IP provenance and licensing questions aren't going away, the governance surface only gets wider.
Someone else has to support this at 2 AM. Vibe-coded output with no discernible architecture, inconsistent patterns, and zero documentation is a support nightmare. It runs today. In six months, when another engineer needs to debug it, extend it, or explain it during an incident — that's when the cost shows up. I've seen this pattern play out across enterprise environments for years, long before AI entered the picture: the speed of initial delivery gets celebrated, and the ongoing cost of support gets absorbed silently until it doesn't. AI-generated code accelerates this cycle dramatically. Agentic engineering produces code that follows established team patterns and exists within a coherent architecture, because those constraints were provided to the agent before generation. The supportability is by design, not by accident.
Unscoped AI output is a security and data integrity risk. Agentic engineering means defining boundaries before the AI generates: what data goes in, what comes out, what the agent is and isn't allowed to touch. Without explicit I/O controls and scoping, you get hallucinated integrations, scope creep in the codebase, and outputs that interact with systems they were never intended to reach. In an enterprise context, that's not messy — it's a breach waiting to happen.
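What "explicit I/O controls and scoping" look like varies by platform, but the core mechanism is an allowlist checked before the agent acts. A minimal sketch, with invented path prefixes standing in for a real project layout:

```python
# Illustrative sketch of scoping an agent's file access before
# generation. The prefixes and function name are hypothetical; a real
# setup would also need path normalization to block ".." traversal.

ALLOWED_READ = {"src/", "docs/adr/"}   # what the agent may consult
ALLOWED_WRITE = {"src/"}               # what the agent may touch

def check_access(path: str, mode: str) -> bool:
    """Return True only if the path falls inside the declared scope."""
    allowed = ALLOWED_WRITE if mode == "write" else ALLOWED_READ
    return any(path.startswith(prefix) for prefix in allowed)

# In scope: the agent edits application code.
# Out of scope: infrastructure it was never meant to reach.
```

Everything outside the allowlist is rejected by construction — the boundary is defined before the first token is generated, not discovered in review afterward.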
Both Have a Place — But Only One Scales
I'm not anti-vibe coding. I use it regularly — for the right things. Weekend project where the stakes are "does this look cool"? Vibe code it. Exploring a new API to see if it's worth investing real engineering time? Absolutely. The problem isn't the approach. The problem is vibe coding without knowing you're vibe coding, shipping prototypes with production confidence, and not having the engineering vocabulary to recognize what's missing from the output.
I'm not trying to sound the alarm here. If you're a software engineer, AI tools are the most significant force multiplier you've encountered in your career. Your skills aren't less relevant — they're amplified. The things you know how to ask for, the patterns you recognize, the failure modes you've internalized from years of production incidents — all of that becomes higher-leverage when an AI is executing at the speed it executes.
But that amplification cuts both ways.
Look at the last three things you built with AI. For each one: could you explain every architectural decision to a senior engineer? Could someone else maintain it without a rewrite? Did you define the constraints, or did you let the AI decide? Do you know what's missing — or are you just confident that it runs?
Audit the quadrant you're actually operating in. Everything else follows from that.