I've been on both sides of live coding interviews. As a candidate, I've bombed problems I could solve in my sleep because someone was watching me type. As a hiring manager, I've watched great engineers freeze up and produce code that looked nothing like their actual work. After years of this, I'm convinced: live coding interviews are broken for everyone involved.
The anxiety tax
Here's what actually happens in a live coding interview: you're solving a problem you'd normally work through calmly at your desk, but now there's a stranger watching every keystroke, judging every pause, every backspace, every moment you stare at the screen thinking. The cognitive overhead of being observed while coding is enormous.
This isn't a soft complaint. Performance anxiety under evaluation is well-documented. Your working memory shrinks. Your ability to think abstractly degrades. The candidate who would methodically decompose a problem at work is now blanking on whether Map.get/2 or Map.fetch/2 returns nil while someone silently takes notes.
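(For the record, since that's exactly the kind of trivia that evaporates under observation: Map.get/2 is the one that returns nil. A quick Elixir refresher, with a throwaway map for illustration:)

```elixir
# The interview trivia, settled at leisure rather than under observation:
map = %{name: "Ada"}

Map.get(map, :age)      # => nil            missing key -> nil
Map.get(map, :age, 0)   # => 0              Map.get/3 takes an explicit default
Map.fetch(map, :age)    # => :error         missing key -> :error, never nil
Map.fetch(map, :name)   # => {:ok, "Ada"}   present key -> {:ok, value}
Map.fetch!(map, :age)   # raises KeyError   the bang variant raises instead
```

At your desk, this is a two-second docs lookup or an autocomplete hint. In the interview, it's a memory test with an audience.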
What you're measuring isn't engineering ability. It's tolerance for being watched. Some people are fine with it. Many excellent engineers are not. You're filtering for a personality trait that has little bearing on job performance.
The environment problem
Now let's talk about the setup. Most live coding interviews happen in a browser-based editor or over a shared screen, on some platform the candidate has never used before. No autocomplete, no LSP, no snippets, no keybindings, no muscle memory. You've taken a professional who has spent years customizing their development environment and asked them to perform in a blank notepad.
This is especially brutal for anyone who uses a non-standard editor. If you're a Neovim user (hi!), being dropped into a CoderPad or HackerRank editor is like asking a pianist to perform on a keyboard with the keys rearranged. Your fingers try to do things that don't work. You're fighting the tool instead of solving the problem. Half your brain is occupied with "how do I select this line" instead of "how do I solve this problem."
Some companies let candidates share their screen and use their own editor. That helps with the tooling problem, but it doesn't fix the real issue: someone is still watching you code in real time. You still feel every second of silence when you're thinking. You still feel the pressure to type something, anything, rather than sit there and reason about the problem. The environment is familiar now, but the performance anxiety is exactly the same.
Testing the wrong skills
But here's what bothers me most: even if we fixed the anxiety and the tooling, live coding interviews are testing the wrong level of skill entirely.
Live coding zooms in on the micro. Do you remember this API by heart? Can you write the correct syntax without looking it up? Do you know which standard library function handles this edge case? It's trivia dressed up as engineering evaluation.
Meanwhile, the work that actually matters is macro. What should we build? How should this system be architected? Are we solving the right problem for the customer? Is this work even valuable, or are we building something nobody asked for? The best engineers I've worked with spend most of their energy on these questions. The syntax and API calls are the easy part: you look them up, you autocomplete them, you move on.
When you watch someone live code a linked list, you learn nothing about whether they can design a system that handles 10x growth, or whether they'll push back when a feature request doesn't make sense, or whether they'll choose the boring reliable solution over the clever fragile one. You're evaluating a carpenter by watching them hammer a single nail instead of looking at the houses they've built.
The AI elephant in the room
And then there's the question nobody wants to ask: in 2026, what exactly are you evaluating?
Nearly every working engineer uses AI assistance daily, and it's gone way beyond Copilot autocomplete. The highest-performing engineers I know spend most of their day in a CLI tool like Claude Code or Codex, barely opening an IDE at all. They describe what they want, review the output, iterate. The actual keystroke-by-keystroke coding that live interviews test is increasingly not how the best work gets done.
When you ask someone to live code without AI tools, you're testing a skill that's becoming irrelevant to their daily work. And if you let them use AI, then what are you testing? How well they prompt? How fast they review diffs? The signal-to-noise ratio of a live coding interview was already questionable. AI hasn't just made it worse; it's made the entire premise obsolete.
Some companies have responded by making problems harder or more obscure. That just amplifies the anxiety problem while selecting for people who grind LeetCode, not people who build great software.
A confession
In the interest of honesty: one of my best hires ever came from a 10-minute in-person live coding session. She's still crushing it. Ten minutes was enough because it was immediately obvious she could do the job. The live coding was almost incidental to what I actually picked up on: how she thought, how she communicated, how she approached the problem.
Another great hire was a remote session where I could hear chickens clucking in the background. We talked about the chickens. We talked about testing philosophies. We talked about the code. But we forgot to actually do the live coding part. It was one of the best interview sessions I've ever had.
Both of these worked despite the format, not because of it. The signal came from the conversation, the thinking, the person. The live coding was either over in minutes or never happened at all. That tells you something about where the real value lies.
What works better
There are alternatives that actually tell you something about how a person works.
Have them walk you through their own code. Ask the candidate to bring a project they're proud of, open it up, and explain it. You'll learn more about their engineering judgment in 15 minutes of "why did you choose this approach?" than in an hour of watching them implement a linked list. You see their real coding style, their real decision-making process, their ability to communicate technical ideas. And they're comfortable because it's their code, their editor, their context.
Give them a take-home, then discuss it. A small, well-scoped problem they solve on their own time, in their own environment, with their own tools (including AI, because that's how they'll work). Then have a conversation about their solution. Why these trade-offs? What would you change with more time? How would this scale? The take-home produces the artifact. The discussion reveals the thinking.
Work trials. This is the gold standard when it's feasible. Have the candidate work with your team on real problems for a few days or a week. Paid, of course. You see how they collaborate, how they handle ambiguity, how they ask questions, how they review code, how they debug. They see what your team is actually like. Both sides get real signal instead of theater.
Not every company can do work trials. It's logistically hard, especially for candidates with full-time jobs. But even a single paid day of pairing on real work tells you more than any whiteboard session ever could.
The real question
Every interview process is optimizing for something. Live coding optimizes for "can this person perform under artificial pressure in an unfamiliar environment without their normal tools?" That's a weird thing to optimize for.
What you actually want to know is: can this person solve real problems, make good technical decisions, communicate clearly, and work well with others? None of those require a live coding test. All of them are better evaluated through conversation, code review, and actual collaboration.
Stop making people do tricks. Start working with them.