Feature Request: Improve context management #544
Replies: 4 comments
-
I found https://github.com/GreatScottyMac/roo-code-memory-bank and it seems to be helpful to me.
-
I'm wondering if anyone knows whether the Cline Memory Bank (remembering that Roo Code is a fork of Cline) is compatible with Roo Code. The Cline Memory Bank looks simpler, or at least more straightforward, compared to the memory bank available at https://github.com/GreatScottyMac/roo-code-memory-bank . This one might work better, but I haven't had the time to read through the prompts to fully understand them.
-
Hi everyone! Looking at this discussion, I see we're all facing similar challenges with context management. The current memory bank solutions seem focused on storing predefined context, but I think we need something more dynamic. What if, instead of just managing what we already have in context, the AI could proactively discover what context it needs? My proposal builds on these ideas but takes a different approach:
I've been experimenting with LangChain for implementing this kind of system, and it provides good tools for memory management that could address the exact issues @nissa-seru mentioned about garbage collection priorities and duplicates. This would go beyond what the current memory banks provide by being adaptive rather than static. Would this direction be interesting to explore? I'd be happy to elaborate more on the technical approach or even work on a proof of concept if there's interest!
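To make the "proactively discover context" idea concrete, here is a minimal, hypothetical sketch: rank candidate files by keyword overlap with the current task and load only the most relevant ones. The names (`score_file`, `discover_context`) and the keyword-overlap scoring are illustrative assumptions, not part of any real Roo Code or LangChain API.

```python
# Hypothetical sketch of proactive context discovery: instead of a static
# memory bank, score candidate files against the current task and pick the
# top-k most relevant ones to load into context.

def score_file(task_words: set[str], contents: str) -> int:
    """Count how many task keywords appear in the file's contents."""
    return sum(1 for w in task_words if w in contents.lower())

def discover_context(task: str, files: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k files most relevant to the task."""
    # Ignore very short words ("fix", "the", ...) as a crude stopword filter.
    task_words = {w for w in task.lower().split() if len(w) > 3}
    ranked = sorted(files, key=lambda name: score_file(task_words, files[name]), reverse=True)
    return ranked[:top_k]

files = {
    "auth.py": "login and password validation logic",
    "db.py": "database connection pooling",
    "ui.py": "button rendering and layout",
}
print(discover_context("fix the login password bug", files))  # ['auth.py', 'db.py']
```

A real implementation would likely use embeddings rather than keyword overlap, but the shape of the loop (score, rank, load top-k) would be the same.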
-
📂 Request: Improvements in Duplicate File Management and Context Handling in RooCode
Description:
Benefits:
-
IIUC, the current logic garbage-collects the first half of the conversation (except for the original task content) upon hitting ~80% of the model's context length. A few optimizations to consider:
I would recommend doing one or the other as a "free" optimization, especially since the original task + context can be quite bulky in terms of tokens.
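The garbage-collection heuristic described above can be sketched as follows. This is a minimal illustration, assuming token usage is approximated by word count and that the first message holds the original task; the actual Roo Code implementation may differ.

```python
# Sketch of the described GC heuristic: once estimated tokens exceed ~80%
# of the context window, drop the first half of the conversation but always
# keep the original task message (messages[0]).

def estimate_tokens(messages: list[str]) -> int:
    # Crude approximation: one token per whitespace-separated word.
    return sum(len(m.split()) for m in messages)

def truncate(messages: list[str], context_window: int, threshold: float = 0.8) -> list[str]:
    if estimate_tokens(messages) <= context_window * threshold:
        return messages  # still under budget, nothing to collect
    # Keep messages[0] (the original task) plus the second half of the rest.
    rest = messages[1:]
    return [messages[0]] + rest[len(rest) // 2:]

history = ["original task: refactor auth"] + [f"turn {i} " * 10 for i in range(10)]
print(len(truncate(history, context_window=100)))  # 6
```

Note that with a bulky original task, the post-truncation size can still sit near the threshold, which is why trimming or summarizing the task context itself is the "free" optimization mentioned above.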
Having multiple versions of the same file in context:
When context is being garbage-collected, I would recommend:
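The specific recommendations are missing from this capture, but one dedup idea implied by the point above can be sketched: when the same file has been read multiple times, keep only the most recent version in context before collection runs. The `(path, contents)` message shape is an assumption for illustration only.

```python
# Illustrative sketch: drop stale versions of a file from context, keeping
# only the last occurrence of each path while preserving message order.

def dedupe_file_reads(messages: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Keep only the last occurrence of each file path, preserving order."""
    last_index = {path: i for i, (path, _) in enumerate(messages)}
    return [m for i, m in enumerate(messages) if last_index[m[0]] == i]

history = [
    ("src/app.py", "v1 contents"),
    ("src/util.py", "helpers"),
    ("src/app.py", "v2 contents"),  # newer version supersedes v1
]
print(dedupe_file_reads(history))
# [('src/util.py', 'helpers'), ('src/app.py', 'v2 contents')]
```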