<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Claude Code on Chuanxilu for Skilled Homo sapiens</title><link>https://blog.chuanxilu.net/en/tags/claude-code/</link><description>Recent content in Claude Code on Chuanxilu for Skilled Homo sapiens</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Sat, 18 Apr 2026 10:00:00 +0800</lastBuildDate><atom:link href="https://blog.chuanxilu.net/en/tags/claude-code/index.xml" rel="self" type="application/rss+xml"/><item><title>Context Rot: An Easily Overlooked Problem in AI Coding</title><link>https://blog.chuanxilu.net/en/posts/2026/04/managing-context-length-in-ai-coding-sessions/</link><pubDate>Sat, 18 Apr 2026 10:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/managing-context-length-in-ai-coding-sessions/</guid><description>Someone in a group chat complained that GPT-5.4 performed worse than Doubao, ByteDance&amp;#39;s chatbot—the model would give irrelevant answers without even reading the question. After asking some follow-up questions, I learned they had fed it many documents and the conversation had gone on for a long time. This probably wasn&amp;#39;t the model&amp;#39;s problem—it was context rot. The conversation had gotten so long that the model could no longer &amp;#39;see&amp;#39; the current task clearly. 
This raises an often-overlooked problem: when vibe coding or writing, how do you manage context effectively to avoid wasting tokens and time on model performance degradation?</description></item><item><title>Trust Boundaries: The Same Idea on Open and Closed Platforms</title><link>https://blog.chuanxilu.net/en/posts/2026/04/a-trust-boundary-design-experiment/</link><pubDate>Mon, 06 Apr 2026 18:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/a-trust-boundary-design-experiment/</guid><description>The same reflection mechanism, implemented on different platforms, differs in complexity by an order of magnitude — but the complexity itself reveals a deeper question: when should we trust AI&amp;#39;s judgment, and when should we step in?</description></item><item><title>claude-code-reflect: Same Metacognition, Different Soil</title><link>https://blog.chuanxilu.net/en/posts/2026/04/claude-code-reflect-different-soil/</link><pubDate>Mon, 06 Apr 2026 14:56:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/claude-code-reflect-different-soil/</guid><description>The same reflection mechanism lands very differently on different platform foundations—from plugin installation to permission pitfalls to API concurrency, documenting the actual development process on Claude Code.</description></item></channel></rss>