Samantha Maria Dawson-Banks

Persistent AI: Solving the LLM Context Problem

I'm in the middle of creating a Docker image for self-deployed persistent agentic workspaces. If you've ever felt the "do I need to tell you this again" pain of working with LLMs, you know exactly why I'm building this.

The Problem #

Standard AI interfaces often treat each session as a clean slate, losing the deep context and nuanced history of a project. I'm using persistent storage as a shared context layer so that my AI agents (like Gemini-CLI or Copilot-CLI) retain the information they need to be truly effective partners.
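One way to get that persistence with Docker is a named volume mounted into the container's workspace, so agent notes and history outlive any single session. This is a minimal sketch, not my final configuration; the image name `agent-workspace:latest` and the `/workspace/context` mount path are placeholders.

```shell
# Create a named volume once; Docker keeps it across container restarts.
docker volume create agent-context

# Start the agent container with the volume mounted where the agent
# reads and writes its context files. Everything written under
# /workspace/context persists after the container exits.
docker run --rm -it \
  -v agent-context:/workspace/context \
  agent-workspace:latest  # hypothetical image name
```

The same volume can be mounted into any future container, which is what turns "clean slate every session" into a continuous workspace.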

The System #

I've already integrated this into a nightly, automated CI/CD pipeline that commits to a local repo. It's working to the point where:
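The core of a nightly job like this can be sketched as a small snapshot step: stage everything in the workspace and commit only when something actually changed. This is an assumed implementation, not my exact pipeline; the function name and the idea of timestamped commit messages are illustrative.

```shell
#!/usr/bin/env sh
# Hypothetical nightly snapshot step: commit the agent workspace
# to its local git repo so each night's context is versioned.
snapshot_workspace() {
  workspace="$1"
  cd "$workspace" || return 1
  # Stage all changes, including new and deleted files.
  git add -A
  # Commit only if something is staged; timestamp the message
  # so nightly snapshots are easy to scan in the log.
  git diff --cached --quiet || \
    git commit -q -m "nightly snapshot $(date -u +%Y-%m-%dT%H:%MZ)"
}
```

Scheduling it is then a one-line cron entry, e.g. `0 2 * * * /usr/local/bin/snapshot.sh` to run at 2 a.m.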

This project leans into AI not just as a toy, but as a practical, enabling tool for developers and knowledge workers.

Stay tuned for the architecture breakdown and Docker configurations.