Everyone calls themselves an "AI-assisted developer" now. But that label covers so much ground it's basically meaningless. Someone prompting Claude Code to build a full-stack app they can't debug? AI-assisted. A staff engineer orchestrating multiple AI agents from a CLI or a custom agentic harness? Also AI-assisted. These are not the same thing. We need better vocabulary.
Vibe coder
Andrej Karpathy coined the term in February 2025: "you fully give in to the vibes, embrace exponentials, and forget that the code even exists."
Prompt and pray. Accept whatever comes out. Ship it. If it breaks, prompt again. The code is a byproduct, not an artifact you own.
For prototyping and personal tools this is great, honestly. Non-engineers can build working software in an afternoon. The feedback loop is minutes, not days.
It falls apart when things need to survive contact with reality. You can't debug code you don't understand. And good luck reviewing a PR when you don't know what the code is supposed to do.
A lot of the "AI developer" accounts on Twitter are vibe coders who haven't realized it yet. They post demos that look impressive, ship apps that work for the happy path, but when something breaks they just prompt their way around it instead of understanding why.
Agentic engineer
Karpathy came back a year later and coined this one too: "you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight." He added "engineering" to emphasize there's an art and science to it.
This is where I live.
An agentic engineer works with AI agents from a CLI: Claude Code, Codex, OpenCode. Almost no IDE, almost no text editor. Sorry, AI-enabled Cursor and VS Code users, you're probably not ready to trust agentic AI yet.
The agentic engineer cares more about the product than the code. You want to ship. The code is a means to that end, and you're happy to let an agent write it as long as the result is correct.
You're way faster because you skipped the typing. But you didn't skip the thinking.
People assume agentic engineering means reviewing every line of code. I don't think so. What you need is to have verified the output adequately, which is a different skill. You build guardrails that verify for you: tests, types, linting, CI. When those pass, you review at the architecture level. Does this fit my mental model of how the system works? Are the responsibilities in the right place?
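To make the guardrail layer concrete, here's a minimal sketch of what it might look like as a pre-merge CI workflow. This is illustrative only: it assumes a Python project using pytest, mypy, and ruff on GitHub Actions, and every tool choice, version, and threshold here is an assumption you'd swap for your own stack.

```yaml
# Illustrative pre-merge guardrails for agent-written code (assumptions:
# Python project, GitHub Actions, pytest/mypy/ruff — swap in your own
# test runner, type checker, and linter). The point is that agent output
# only merges when the machine-checkable gates pass.
name: guardrails
on: [pull_request]

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt pytest pytest-cov mypy ruff
      - run: ruff check .   # lint catches mechanical slip-ups
      - run: mypy .         # types catch interface drift
      - run: pytest --cov --cov-fail-under=80   # behavior, plus a coverage floor
```

A coverage floor like `--cov-fail-under` matters here: it turns "the test coverage quietly dropped" from something you notice three features too late into a red build.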
I spend most of my day in Claude Code. I describe what I want, check that the tests pass, look at the shape of the solution, and iterate. Sometimes I read every line because the change is tricky. Sometimes I skim and trust the test suite.
I'll be honest, there's a bit of an identity crisis here. How much code review do you actually need? How much does code quality matter when the agent can rewrite the whole thing in minutes? I don't think anyone has figured this out. It came up multiple times at The Pragmatic Summit this month: code review is becoming the bottleneck. AI agents produce more PRs, more diffs, more code, but reviewing is still human-speed. If you're generating code faster than your team can review it, you haven't sped up delivery. You've moved the queue.
When your guardrails aren't there, though, that's when you're in trouble. With thin test coverage and a rubber-stamp CI pipeline, skipping the diff review is just vibe coding with extra steps.
Organic engineer
Writing code by hand, line by line, with minimal or no AI assistance. The artisanal approach to software.
The organic engineer's identity is tied to the craft. The pride and joy of writing code yourself, understanding every line, hitting flow states that prompting can't touch. Writing code is satisfying in a way that reviewing someone else's output (especially an AI's) just isn't.
AI tooling isn't there yet for some codebases and domains, either, and the cost of a wrong suggestion in the wrong place is too high to gamble on. Some of the best engineers in the world work this way. They've tried AI tools and decided it's not worth it for what they do. Depending on the domain, they might be right.
For most web and app development in 2026, though, going full organic is leaving speed on the table. I think about AI as another layer of abstraction you're building upon, the same way we moved from assembly to C to higher-level languages. Each layer trades some control for leverage.
The hard part
The lines between these three are blurry and they shift. I catch myself sliding from agentic into vibe territory sometimes. The test coverage drops, I stop paying attention to what the agent is doing, and I don't notice until something breaks and I realize I've been a passenger for the last three features.
How to stay fast without going sloppy? That's what I keep coming back to.