Writing

Latest

[Illustration: Black-and-white drawing of a bearded man in a baseball cap, hand on the brim, looking down, with stacked weight plates behind his head like a halo, suggesting the weight of judgment.]

You Can't Align AI Judgment If You've Lost Your Own

The governance gap nobody talks about when your team ships AI-generated code

April 5, 2026 · 7 min read

Technical leaders are being asked to set AI standards, define acceptable output quality, and govern what gets into production. You cannot do any of that if you've lost your own connection to how your team builds. Here's what the drift looks like, and what it costs you when AI has already doubled your team's output.

Previous

Adoption Metrics Are Not Skill Metrics

AI usage numbers can look healthy while your engineers quietly stop being able to explain their own systems

April 4, 2026 · 6 min read

When I reviewed my team's AI adoption dashboard last quarter, every number was moving in the right direction. What the dashboard couldn't show me was who had stopped understanding the systems they were shipping. Adoption metrics and skill development metrics are not the same thing. I had been treating them like they were.

When You Push for 3x

You get 3x of whatever the path of least resistance produces.

April 2, 2026 · 6 min read

Jono Herrington pushed for higher velocity. The team delivered. Then he sat at a lunch table bragging about doubled output while his engineers knew exactly what they had cut to produce that number. The same pattern is playing out in every AI productivity mandate right now.

AI Is Shipping Your Blind Spots

Every prompt is one perspective.

March 28, 2026 · 7 min read

AI doesn't have blind spots. You do. When you prompt from one angle, AI ships those blind spots at scale. The feedback loop is the problem, and better prompts won't close it.