Most interview questions I inherited when I started hiring were trivia wearing a trench coat: clever enough to feel rigorous, specific enough to feel fair, and almost entirely uncorrelated with anything I later cared about. This post is the short list of what I actually learned when I replaced them with work samples, plus the questions I keep because they still earn their slot.
The trench coat tells
Trivia-in-a-trench-coat interview questions share a family resemblance. I now spot them within about ten seconds:
- They have one right answer. Good candidates don’t arrive at “roughly correct” — they either know the gotcha or they don’t.
- They reward surface familiarity with a specific codebase or language feature, not judgment.
- Their rubric quietly measures “has this candidate memorised this specific trick?” rather than “can this candidate do the job?”
- The interviewer who loves them was asked the same question three jobs ago and is still a little proud of solving it.
When you catch yourself writing that kind of question, stop. The pride is the tell. Good questions don’t survive because they’re clever; they survive because they’re boring and they work.
Four questions I’ve retired
I won’t name the companies, but these are all questions I inherited and deleted once I started actually measuring what predicted good hires.
“Invert a binary tree on the whiteboard”
I’ve written exactly zero binary-tree inversions in a decade of production code. I have read maybe two. The question is entirely a filter for “did you recently interview-prep with this question.” It selects hard for people who are actively job-hunting and against people who are currently employed and happy, which is the wrong direction.
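For scale, the entire answer to the retired question fits in a handful of lines. A sketch in Python, with a hypothetical `Node` class standing in for whatever tree the interviewer draws, to show how little the question actually covers:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None


def invert(root: Optional[Node]) -> Optional[Node]:
    """Mirror the tree: swap left and right children at every node."""
    if root is None:
        return None
    # The whole trick is one tuple swap plus recursion.
    root.left, root.right = invert(root.right), invert(root.left)
    return root
```

Once you've seen the tuple-swap trick, there is nothing left to reason about, which is precisely why the question measures recall rather than judgment.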
“How would you design Twitter?”
System-design questions are not the problem; “Design Twitter” is. It rewards candidates who’ve memorised the specific rollup of Twitter’s architecture — users, tweets, fanout-on-write, cache layer — over candidates who can actually reason about a system they haven’t seen before.
I now ask system-design questions about an obscure system the candidate has never heard of. “Design a system that matches real-time bids for ad inventory across six regions.” Novel enough that nobody’s memorised it; concrete enough that the tradeoffs are real. The good candidates ask clarifying questions. The memorisers freeze.
“What’s the time complexity of sort in [language X]?”
This is a pure trivia question. The correct answer is “look it up.” A senior engineer who memorises standard-library complexity bounds has spent shelf space on trivia at the expense of the thing that actually matters: judgment about when to worry about complexity, and when not to. I don’t want the memoriser.
“Walk me through [specific obscure CS concept]”
“Explain the CAP theorem.” “What’s the difference between BASE and ACID?” “What’s a Merkle tree?” These are fine as conversation starters. They are catastrophic as filters, because they select for people who learned the vocabulary recently rather than people who understand the underlying constraints. I’ve met engineers who can recite the CAP theorem in their sleep and can’t reason about eventual consistency in a production setting. I’ve met engineers who’ve never said the word “CAP” and have built correct distributed systems for a decade. Filter on the behaviour, not the vocabulary.
Three questions I kept
The ones that survived my rewrite all share a structure: they give the candidate a piece of real work, or something very close to it, and watch how they think about it.
“Here’s a real bug report from our system. Debug it.”
The gold-standard technical interview, in my experience. We give the candidate a real (anonymised) bug report: symptoms, logs, a partial stack trace, a commit that might be related. We watch them reason through it for 45 minutes. They can ask us anything.
This question is high-signal because it exercises:
- Curiosity — do they ask about the system before jumping to a fix?
- Pattern recognition — have they seen this kind of bug before, roughly?
- Judgment — do they know when to stop debugging and start hypothesising?
- Communication — can they explain their reasoning as they go?
It’s also fair, in a way memorisation questions aren’t: every candidate has the same bug, and the rubric is “did they show us how they think,” not “did they arrive at the one right answer.”
“Tell me about a time you made a technical decision you now regret.”
Every senior engineer has one. The candidates who don’t are either very junior or very dishonest about their own work. The good answers are specific (the project, the year, the consequence), reflective (what they’d do differently), and fair to their past self (the decision was reasonable given the information they had).
The best answers include “and here’s what I changed about how I make decisions as a result.” That’s the signal. Not the regret itself — the ability to close the loop on it.
“What would you ask your future manager that you wouldn’t ask the recruiter?”
This is the single question I’ve asked where the answer has shifted my hiring decision the most.
Good candidates ask about concrete things: how does the team handle on-call, what does the code review culture look like, what happens when a feature ships late, how many engineers are currently on the team.
Red-flag candidates ask about signalling things: the promotion ladder, the impact track, the “strategic direction.” Not because those are bad questions — they’re legitimate — but because they’re the questions a candidate asks when they’ve read more career blogs than they’ve shipped code. The first bucket is engineering judgment; the second is career optics.
The work-sample trick
For senior candidates, I now lean hard on work samples: the candidate does a small piece of real work, in their own time, in their own editor, on a problem close to what they’d face on day one of the job. Not “build a compiler in 90 minutes.” More like “here’s a 400-line file that does X poorly; please refactor it however you normally would, take as long as you need, and explain your choices in a follow-up.”
The objections I’ve heard:
- “It’s not fair to people with less free time.” This is a real concern, but the fix is paying for the work sample (which I’ve done) or keeping it small (which I’ve done) — not going back to whiteboard trivia.
- “Candidates will use AI.” I hope they do! The job will involve AI. I want to see how they use it. A candidate who submits a clean AI-assisted refactor with a thoughtful explanation of what they kept and what they changed is demonstrating the actual 2026 skill.
- “It takes more of the candidate’s time than a 1-hour technical round.” Yes. It also gives me vastly better signal. The right adjustment is to have fewer interview rounds, not shorter ones.
The part where I admit I’m still wrong sometimes
I don’t want to claim I’ve cracked interviewing. I haven’t. I’ve rejected candidates who were probably fine, and I’ve hired candidates who turned out to be the wrong fit. Every senior engineer who’s been hiring for a decade has the same list.
The thing I’m most confident about is that the trivia-in-a-trench-coat interview question is always worse than the alternative. Every time I retired one, I got better signal. Every time I retained one “for tradition,” I later wished I hadn’t.
If you’re hiring right now: audit your loop. For each question, ask “what does a great answer to this look like, and would that answer actually predict success on the job?” If you can’t write the predictive version down, retire the question. Replace it with a work sample, a real bug, or a clarifying conversation about the candidate’s past.
Your loop will get shorter. Your hires will get better. And you’ll stop asking people to invert binary trees on whiteboards, which, in 2026, is a small civilizational win.