Looking Back: Seven Human-AI Collaboration Patterns in the Aristotle Project

Five articles in. Time to step back and look at the path itself. Aristotle: Teaching AI to Reflect on Its Mistakes covered the design philosophy and initial implementation. claude-code-reflect: Same Metacognition, Different Soil told the story of porting across platforms. Trust Boundaries: One Idea, Two Systems proposed a trust tiering model. From Scars to Armor: Harness Engineering in Practice validated the theory through refactoring. A Markdown's Three Lives: From Static Rules to Git-Backed MCP Server evolved the rule storage from append-only to the GEAR protocol. ...

2026-04-16 · 11 min

A Markdown's Three Lives: From Static Rules to Git-Backed MCP Server

The previous article, From Scars to Armor: Harness Engineering in Practice, ended with Aristotle having a streamlined router (SKILL.md compressed from 371 lines to 84), an on-demand progressive disclosure architecture, and a working reflect→review→confirm workflow. But one thread never got pulled: where do confirmed rules actually live? This article follows that thread. None of it was planned from the start; three concrete problems in real use forced the design out, step by step. ...

2026-04-16 · 21 min

From Scars to Armor: Harness Engineering in Practice

Three articles in. Back to code — and a hard look in the mirror. The first post, Aristotle: Teaching AI to Reflect on Its Mistakes, covered the design philosophy and a smooth implementation — three commits in one go. The second, claude-code-reflect: Same Metacognition, Different Soil, described the adaptation cost of moving the same philosophy to Claude Code — continuous iteration from V1 to V3. The third, Trust Boundaries: The Same Idea on Open and Closed Platforms, proposed a tiered trust model and a harness engineering framework. ...

2026-04-11 · 14 min

claude-code-reflect: Same Metacognition, Different Soil

Same metacognitive ability, different soil. The growing patterns look nothing alike. My previous post, Aristotle: Teaching AI to Reflect on Its Mistakes, laid out three core principles: immediate trigger, session isolation, human in the loop. These sound platform-agnostic. But when I moved the same philosophy to Claude Code, I discovered something: platform differences are much larger than expected. First Hurdle: Plugin System Differences Claude Code's plugin and OpenCode's skill are completely different systems. Just getting the plugin installed and recognized took several rounds of struggle. ...

2026-04-06 · 9 min

Aristotle: Teaching AI to Reflect on Its Mistakes

“Knowing yourself is the beginning of all wisdom.” — Aristotle Every time I work with an AI coding assistant, I run into the same problem. Mistakes that were corrected get repeated in the next session. The model isn’t stupid; there’s a structural gap in its memory. For example: last week I corrected a mistake the model made. It apologized, I accepted, we kept working. Today I started a new session, and the same mistake appeared again. ...

2026-04-06 · 6 min