The Most Dangerous Engineer in the Room
Nick Thompson
Summary
The industry rewards speed. It promotes the engineer who ships fast and celebrates the one who closes tickets in volume. But the engineer who slows down to ask what they don't know — the one running considerations before touching anything — that's the one you actually want near production.
Speed gets promoted. Preparation keeps the lights on.
An engineer picks up a ticket. Server's running hot — CPU pinned at 98%. They pull up Task Manager, spot the top process chewing resources, kill it, and close the ticket. Twelve minutes, start to finish. Efficient.
Twenty minutes later the monitoring lights up. Same server, now completely down. The process they killed was a backup agent running hot because the underlying storage array was throwing I/O errors. The backup service was the symptom. The storage array was the problem. And now there's no backup and a failing array — because someone treated what they saw without stopping to ask what they were looking at.
I've watched this pattern play out for 20 years across 100+ enterprise tenants. The details change — sometimes it's a killed process, sometimes it's a firewall rule, sometimes it's a "quick fix" to a DNS record — but the shape is always the same. An engineer with enough knowledge to act but not enough understanding to know what their action would actually do.
That's the most dangerous engineer in the room. Not the junior who asks too many questions. Not the one who takes an extra 15 minutes to read the documentation. The dangerous one is the engineer who's confident, competent enough to execute, and doesn't know what they don't know.
The 6 Ps Aren't a Bumper Sticker
Prior Planning Prevents Piss-Poor Performance. I've said it enough times that my team can probably recite it back to me unprompted. But the 6 Ps aren't just a catchy phrase you put on a whiteboard and forget about. They're an operating methodology — the difference between reacting to what's in front of you and understanding the system you're about to touch.
Planning in this context doesn't mean a two-week project plan. It means the 10 minutes before you execute anything where you ask yourself a short list of questions that most engineers skip: What is this system actually doing right now? What depends on it? What happens if my change doesn't work? What happens if it does work but I was wrong about the problem?
I've seen senior engineers with 15 years of experience skip this step because they were sure. And I've seen engineers two years into their career avoid catastrophic mistakes because they weren't sure and had the discipline to say so. Certainty without verification is just confidence wearing a hard hat.
Considerations: The Discipline Nobody Teaches
There's a concept I come back to constantly when mentoring engineers, and it's one I've never seen in a certification curriculum or a vendor training course. I call it "running considerations."
Before you touch anything — before you execute the command, push the change, modify the policy — you run through the considerations. What do I know for certain? What am I assuming? What don't I know? Who or what else does this affect? What's the rollback plan if I'm wrong?
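The considerations above can be sketched as a simple pre-flight structure. This is a hypothetical illustration, not a tool I'm prescribing — the class and field names are mine, and the point is the questions, not the code:

```python
# A hypothetical "considerations" record run before any production change.
# Field names are illustrative; the discipline is mental, not tooling.
from dataclasses import dataclass


@dataclass
class Considerations:
    known: list[str]       # What do I know for certain?
    assumed: list[str]     # What am I assuming but haven't verified?
    unknown: list[str]     # What don't I know?
    affected: list[str]    # Who or what else does this touch?
    rollback: str = ""     # What's the plan if I'm wrong?

    def ready_to_execute(self) -> bool:
        # Unverified assumptions, open unknowns, or no rollback plan
        # means you are not ready to touch the system yet.
        return not self.assumed and not self.unknown and bool(self.rollback)


# The CPU-pinned server from earlier, run through the checklist:
check = Considerations(
    known=["CPU pinned at 98% by the backup agent"],
    assumed=["the process itself is the problem"],
    unknown=["why the agent is working that hard"],
    affected=["nightly backups", "the storage array"],
    rollback="",
)
print(check.ready_to_execute())  # False: assumptions and unknowns remain
```

Each field that isn't empty in `assumed` or `unknown` is a prompt to go read, verify, or ask — not a blocker forever, just a blocker right now.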
This isn't a checklist you laminate and tape to your monitor. It's a mental discipline. It's the habit of treating every action as consequential until you've proven otherwise. And it's the single most reliable indicator I've found of whether an engineer is going to be safe in a production environment.
The engineers who run considerations aren't slower. They're faster — because they don't spend the afternoon recovering from the thing they broke in the first five minutes. They don't generate the emergency tickets, the war rooms, the post-incident reviews. The work that prevents incidents is invisible, which is exactly why it never gets celebrated. Nobody writes up a postmortem for the outage that didn't happen because someone spent 10 minutes reading before typing.
"You Don't Know What You Don't Know" Isn't a Cliché — It's the Whole Problem
The phrase gets thrown around so often it's lost its teeth. But it describes the most dangerous state an engineer can operate in: confident action within an incomplete mental model.
Here's what it actually looks like in practice. An engineer migrates a mailbox to Exchange Online. Straightforward, done it before. But this particular mailbox has a transport rule routing specific messages to a compliance journal. The engineer didn't know the rule existed because they scoped the migration as a mailbox move, not a mail flow change. The migration succeeds. The journaling breaks silently. Three weeks later, compliance notices a gap in their records and now it's an audit finding.
The engineer wasn't careless. They were thorough — within the boundaries of what they knew. The problem was that the boundaries were wrong, and nothing in their process forced them to discover that.
This is where "knowing what you don't know" becomes operational, not philosophical. It means building a habit of asking: what else touches this? What systems, what policies, what workflows am I not seeing because I scoped this too narrowly? You will never identify every dependency every time. But the engineer who habitually asks the question catches things that the engineer who doesn't will miss every single time.
The Preparation Paradox
The industry has a structural problem with preparation. We reward speed of execution — ticket closure rates, sprint velocity, deployment frequency. We measure output. We don't measure the quality of the thinking that preceded the output.
An engineer who resolves 40 tickets a week looks more productive than one who resolves 30. But if the first engineer generates 8 follow-up tickets from incomplete fixes and the second generates zero, who's actually more efficient? I've watched this math play out for two decades, and the answer is never the one the dashboard suggests.
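Here's that math worked through. The ticket counts come from the paragraph above; the cost of a follow-up incident (three team-hours versus one for a clean ticket) is an assumption for illustration, and in my experience a generous one:

```python
# Effective throughput when incomplete fixes generate rework.
# Assumption: a clean ticket costs 1 team-hour; a follow-up incident
# costs 3 (recovery, context-switching, the occasional war room).
def problems_per_hour(closed: int, follow_ups: int,
                      hours_per_ticket: float = 1.0,
                      hours_per_follow_up: float = 3.0) -> float:
    net_solved = closed - follow_ups  # incomplete fixes don't count as solved
    total_hours = closed * hours_per_ticket + follow_ups * hours_per_follow_up
    return net_solved / total_hours


fast = problems_per_hour(closed=40, follow_ups=8)      # 32 / 64 = 0.5
careful = problems_per_hour(closed=30, follow_ups=0)   # 30 / 30 = 1.0
print(fast, careful)
```

The dashboard sees 40 versus 30. The team feels 0.5 versus 1.0.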
Preparation looks like doing nothing. An engineer staring at documentation for 15 minutes before executing a change looks idle. An engineer running considerations — mentally mapping dependencies, checking their assumptions, identifying what they don't know — produces no visible output until the moment they execute correctly on the first attempt. The paradox is that the most prepared engineer is the least visible one, right up until everyone else is in a war room and they're not.
The 6 Ps exist specifically to counter this. Prior planning isn't about being slow. It's about compressing the total time — including the recovery time from mistakes you didn't need to make.
This Isn't Just a Technical Problem Anymore
I'll say this briefly because it deserves mention, not because it's the point of the post: everything I've described is louder right now because of AI.
AI tooling makes it easier than ever to act confidently without understanding. You can generate infrastructure-as-code without understanding the infrastructure. You can deploy configurations without knowing what they configure. You can produce technically correct output that's operationally wrong because you didn't know enough to evaluate what the tool gave you. The gap between "I can do this" and "I understand what I'm doing" has never been wider.
But this isn't an AI problem. It's the same pattern I saw with cloud migrations in 2015, with automation rollouts in 2018, with zero-trust implementations in 2022. Every new capability wave produces engineers who know the tool but not the terrain. AI just compresses the timeline from months to minutes.
What to Actually Do About It
If you're an engineer reading this, the discipline is straightforward even if the habit takes time to build.
Before you execute, pause. Ask yourself: what am I assuming? What would I need to verify to be sure? What depends on the thing I'm about to change? If I'm wrong, what breaks — and can I reverse it?
If you can't answer those questions, you're not ready to execute. That's not a weakness. That's the most valuable engineering instinct you can develop — the willingness to say "I don't know enough yet" and mean it.
If you're leading a team, look at what you're actually rewarding. If your metrics celebrate speed and your culture punishes the engineer who asks clarifying questions or takes an extra 15 minutes to research before acting, you've built an environment that produces dangerous engineers. The fix isn't a process document. It's making preparation visible and valued — asking your engineers what they considered before they acted, not just whether the ticket is closed.
The most dangerous engineer in the room isn't the one who lacks skill. It's the one who has just enough skill to act and not enough understanding to know when they shouldn't. The 6 Ps, considerations, knowing the boundaries of your own knowledge — none of this is glamorous. None of it shows up on a dashboard. But it's the difference between an engineer who resolves incidents and one who creates them.
The engineers I trust most aren't the fastest ones. They're the ones who know exactly where their knowledge ends — and plan accordingly.