The Last Line of Defense for Inquiry: Independent Confirmation and Protocol Reflexivity

TL;DR: The inquiry protocol’s last line of defense is independent confirmation — a perspective free of confirmation bias that runs falsifiability testing to hunt for counterexamples. This post also covers how the protocol came to be (from the practice of 18 bug diagnoses to a gap found while writing these articles) and plans for future reflexivity. In the previous post, I laid out the inquiry protocol’s seven conditions: three floor conditions (T1–T3) that force the AI to go deep enough, and four guardrails (HC1–HC4) that keep the inquiry process from spiraling out of control. ...

2026-05-06 · 7 min · Alex Wang
Seven Conditions to Keep AI's 5-Why from Going Off the Rails

TL;DR: The inquiry protocol sets seven conditions to keep AI’s 5-Why on track: T1–T3 are floor conditions (the inquiry can’t stop until all three are met), HC1–HC4 are guardrails (they prevent the process from spiraling). T2’s preventive counterfactual check is the most important design — preventive framing forces the inquiry to go deep, while counterfactual questions deliberately construct negation scenarios to counter confirmation bias. The last post diagnosed three problems when AI runs 5-Why: stopping too early (insufficient depth), single-path tracking (insufficient breadth), and confirmation bias (reasoning bias). These three are independent but tend to show up together — a shallow conclusion becomes an anchor, which simultaneously compresses the exploration space and biases evidence selection. This post designs the inquiry protocol: encoding the tacit judgment of “when to stop, when to keep going” that human experts use into explicit rules that bring AI’s reasoning quality up to the standard 5-Why actually requires. ...

2026-05-05 · 7 min · Alex Wang
Why AI Can't Do 5-Why Right: Stopping Too Early, Single-Path Tracking, and Confirmation Bias

TL;DR: AI fails at 5-Why in three ways: stopping too early (insufficient depth), single-path tracking (insufficient breadth), and confirmation bias (reasoning distortion). The three are independent but tend to show up together — a shallow conclusion becomes an anchor that compresses the exploration space and biases evidence selection. This post uses a real case where all four rounds of attribution went wrong to dissect each failure mode. This post sits at the intersection of two series: “Taming AI Coding Agents with TDD” and “AI Root Cause Diagnosis.” ...

2026-05-05 · 7 min · Alex Wang