<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Context Management on Chuanxilu for Skilled Homo sapiens</title><link>https://blog.chuanxilu.net/en/tags/context-management/</link><description>Recent content in Context Management on Chuanxilu for Skilled Homo sapiens</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Sat, 18 Apr 2026 10:00:00 +0800</lastBuildDate><atom:link href="https://blog.chuanxilu.net/en/tags/context-management/index.xml" rel="self" type="application/rss+xml"/><item><title>Context Rot: An Easily Overlooked Problem in AI Coding</title><link>https://blog.chuanxilu.net/en/posts/2026/04/managing-context-length-in-ai-coding-sessions/</link><pubDate>Sat, 18 Apr 2026 10:00:00 +0800</pubDate><guid>https://blog.chuanxilu.net/en/posts/2026/04/managing-context-length-in-ai-coding-sessions/</guid><description>Someone in a group chat complained that GPT-5.4 performed worse than Doubao, ByteDance&amp;#39;s chatbot—the model would give irrelevant answers without even reading the question. After a few follow-up questions, I learned they had fed it many documents and the conversation had run for a long time. This probably wasn&amp;#39;t the model&amp;#39;s fault—it was context rot: the conversation had grown so long that the model could no longer &amp;#39;see&amp;#39; the current task clearly. This points to an easily overlooked problem: when vibe coding or writing with AI, how do you manage context effectively to avoid wasting tokens and time on degraded model performance?</description></item></channel></rss>