<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>OpenCode on Chuanxilu for Skilled Homo sapiens</title><link>https://blog.chuanxilu.net/en/tags/opencode/</link><description>Recent content in OpenCode on Chuanxilu for Skilled Homo sapiens</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Sat, 18 Apr 2026 10:00:00 +0800</lastBuildDate><atom:link href="https://blog.chuanxilu.net/en/tags/opencode/index.xml" rel="self" type="application/rss+xml"/><item><title>Context Rot: An Easily Overlooked Problem in AI Coding</title><link>https://blog.chuanxilu.net/en/posts/2026/04/managing-context-length-in-ai-coding-sessions/</link><pubDate>Sat, 18 Apr 2026 10:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/managing-context-length-in-ai-coding-sessions/</guid><description>Someone in a group chat complained that GPT-5.4 performed worse than Doubao, ByteDance&amp;#39;s chatbot—the model would give irrelevant answers without even reading the question. After asking some follow-up questions, I learned they had fed it many documents and the conversation had gone on for a long time. This probably wasn&amp;#39;t the model&amp;#39;s problem—it was context rot. The conversation had gotten so long that the model could no longer &amp;#39;see&amp;#39; the current task clearly. 
This raises an overlooked problem: in the process of vibe coding or writing, how do you manage context effectively to avoid wasting tokens and time on model performance degradation?</description></item><item><title>A Markdown's Three Lives: From Static Rules to Git-Backed MCP Server</title><link>https://blog.chuanxilu.net/en/posts/2026/04/from-markdown-to-mcp-server-gear-protocol/</link><pubDate>Thu, 16 Apr 2026 19:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/from-markdown-to-mcp-server-gear-protocol/</guid><description>Aristotle&amp;#39;s reflection rules started as a flat Markdown file — append-only, forgotten, no rollback. When dozens of rules accumulated, I realized the file wasn&amp;#39;t enough. That realization kicked off a design iteration from append-only file to Git-backed MCP Server. The path led to something called GEAR.</description></item><item><title>From Scars to Armor: Harness Engineering in Practice</title><link>https://blog.chuanxilu.net/en/posts/2026/04/from-scars-to-armor-harness-engineering-practice/</link><pubDate>Sat, 11 Apr 2026 01:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/from-scars-to-armor-harness-engineering-practice/</guid><description>The first version of Aristotle looked smooth. In practice, it exposed four architectural problems.
Fixing them validated the trust model and harness engineering framework from Part 3 — every constraint encodes a trust judgment.</description></item><item><title>Trust Boundaries: The Same Idea on Open and Closed Platforms</title><link>https://blog.chuanxilu.net/en/posts/2026/04/a-trust-boundary-design-experiment/</link><pubDate>Mon, 06 Apr 2026 18:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/a-trust-boundary-design-experiment/</guid><description>The same reflection mechanism, built on different platforms, differs in complexity by an order of magnitude — but the complexity itself reveals a deeper question: when should we trust AI&amp;#39;s judgment, and when should we step in?</description></item><item><title>Aristotle: Teaching AI to Reflect on Its Mistakes</title><link>https://blog.chuanxilu.net/en/posts/2026/04/aristotle-ai-reflection/</link><pubDate>Mon, 06 Apr 2026 10:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/aristotle-ai-reflection/</guid><description>Installing reflection capability into AI coding assistants — when the model makes a mistake, immediately trigger root cause analysis and transform the correction into persistent rules.</description></item></channel></rss>