Two Memories Are Better Than One
by Jacob Koenig
5/8/26

The loop got tighter when I let go: an experiment in giving my second brain its own semantic memory.
I gave my AI a set of skills to use and a memory of the people and situations I'm facing. Now it's learning from the mix on its own, and the depth of insight feels like it's growing between us.
I've always believed the real power of AI is the iterative pairing between the human perspective and the AI's billion-vector view. That stereoscopic thinking is how you reach greater awareness.
I gave my system its own semantic memory to manage alongside my curated episodic memory. The dual memory systems make for a tighter cycle: one that carries my best thinking forward and grows stronger with every turn. It seems I wasn't the only one with this idea. Just days after I finished my latest changes, Anthropic shipped "dreaming" for their managed agents.
The AI is now encoding its own insights about how it's working with me, and those insights deepen my thinking. I encode the new thinking back, and the next round goes further still. The cycle enriches itself with every pass.
Eji is the personal AI system I've been building since early 2025. It started as a custom GPT called CloserEdge, loaded with the negotiation frameworks I'd been teaching my deal-closing team for years. Its evolution has been documented over the four posts in this series: from project-folder RAGs to a set of modular skills, then a memory layer in a searchable database, and most recently a streamlined harness wrapped around the model so it couldn't skip my preflight steps.
Eji has run on GPT, moved to Claude, and is now model-agnostic.
Every morning I start the day by looking through the previous day's threads and pulling in new and updated context. These memories live in a slate of files that correspond to different categories of my work and personal life.
Before any conversation begins, the system pulls the right files into the AI's working memory: the skills that apply, the context for the people and situations involved, and the concepts that have surfaced over time. The model gets everything it needs to answer well.
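
Conceptually, the preflight assembly amounts to something like the sketch below. The folder layout, category names, and helper are illustrative only, not Eji's actual code:

```python
from pathlib import Path

# Illustrative layout: one folder per layer, files grouped by area of work or life.
MEMORY_ROOT = Path("eji")
LAYERS = ("skills", "context", "concepts")

def load_working_memory(categories: list[str]) -> str:
    """Pull the relevant skill, context, and concept files into one block of text
    that is placed in the model's working memory before the conversation starts."""
    sections = []
    for layer in LAYERS:
        for category in categories:
            path = MEMORY_ROOT / layer / f"{category}.md"
            if path.exists():
                sections.append(f"## {layer}/{category}\n{path.read_text()}")
    return "\n\n".join(sections)

# Example: a conversation about a specific negotiation thread.
preflight = load_working_memory(["negotiation", "acme-renewal"])
```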

Why did I rebuild the agent harness?
The first version was built reactively. It evolved out of an iterative conversation with Claude Code as I was trying to keep my chats from skipping the preflight steps. Claude Code designed the bones, I nudged the edges, and the result was a six-stage pipeline with two narrow AI calls inside it: one planned the search query, the other checked whether the results were sufficient. The remaining four stages were pure code.
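
Reconstructed as a sketch, that first pipeline had roughly the shape below. The helper functions and stage details are placeholders standing in for the real code, not what Claude Code actually produced:

```python
def call_model(role: str, prompt: str) -> str:
    """Placeholder for a narrow, single-purpose model call."""
    return ""

def vector_search(query: str) -> list[str]:
    """Placeholder for a similarity search over the memory database."""
    return []

def run_pipeline(message: str) -> dict:
    """First-generation harness: six fixed stages, only two of them AI calls."""
    # Stage 1 (code): rough category detection from the message itself.
    categories = sorted({word.strip(",.") for word in message.split() if word.istitle()})
    # Stage 2 (AI): plan the search query.
    query = call_model("planner", f"Write a search query for: {message}")
    # Stage 3 (code): run the vector search.
    results = vector_search(query)
    # Stage 4 (AI): check whether the results are sufficient to answer.
    verdict = call_model("checker", f"Sufficient to answer '{message}'? Results: {results}")
    # Stage 5 (code): widen the net once if the checker says no.
    if verdict.strip().lower().startswith("no"):
        results = results + vector_search(message)
    # Stage 6 (code): package everything for the responding model.
    return {"message": message, "categories": categories, "context": results}
```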
The pipeline ran correctly, but the agents inside it could only see what was directly in front of them. I didn't fully trust it.
The rebuild started from a different question. Instead of asking how to make the AI follow my instructions, I asked what would give it enough perspective to use everything I'd built on every turn.
The answer is EJG 2.5, a cloud-side layer between me and whichever base model I'm using. When I send a message, the base model takes a first stab at the skills and passes the context out to EJG, where three agents make different kinds of calls. The package they return gives the base model exactly what it needs to answer well.
The first agent is the router. It reads my message alongside three lightweight indexes covering my context files, concept articles, and skills, and hypothesizes about which files should be checked, what patterns might be relevant, and which skills match the situation. These are judgment calls that keyword matching and vector search can't make on their own.
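
As a sketch, the router's output is a structured plan rather than a set of retrievals. The schema and field names below are my illustration, not the actual format EJG uses:

```python
from collections.abc import Callable
from dataclasses import dataclass, field
import json

@dataclass
class RouterPlan:
    """What the router hands back: hypotheses to act on, not retrievals."""
    context_files: list[str] = field(default_factory=list)   # files worth checking
    concepts: list[str] = field(default_factory=list)        # patterns that might be relevant
    skills: list[str] = field(default_factory=list)          # skills that match the situation
    search_queries: list[str] = field(default_factory=list)  # queries for the vector search

def route(message: str, indexes: dict[str, str], ask_model: Callable[[str], str]) -> RouterPlan:
    """Ask a lightweight model to hypothesize which files, concepts, and skills matter."""
    prompt = (
        "Given the message and these lightweight indexes, return JSON with the keys "
        "context_files, concepts, skills, and search_queries.\n\n"
        f"Message: {message}\n\nIndexes:\n{json.dumps(indexes, indent=2)}"
    )
    return RouterPlan(**json.loads(ask_model(prompt)))
```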
The second agent focuses on context. Once the router's plan triggers a vector search across my RAG database, this agent synthesizes the results into a cohesive narrative, building the relationships between people and threads as if they were nodes in a graph. If it spots gaps, such as names mentioned without background or threads referenced without enough context, it runs a second pass to fill them in.
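
A minimal sketch of that synthesize-then-fill pattern, assuming generic vector_search and ask_model interfaces rather than the system's real ones:

```python
from collections.abc import Callable

def build_context_narrative(
    search_queries: list[str],
    vector_search: Callable[[str], list[str]],
    ask_model: Callable[[str], str],
) -> str:
    """Synthesize search hits into one narrative, then run a second pass to fill gaps."""
    hits = [hit for query in search_queries for hit in vector_search(query)]
    narrative = ask_model(
        "Weave these excerpts into one cohesive narrative, treating people and threads "
        "as nodes in a graph and naming the relationships between them:\n" + "\n".join(hits)
    )
    # Second pass: ask what still lacks background, then search again for just those gaps.
    gaps = ask_model(
        "List any names or threads in this narrative that appear without enough "
        "background, one per line:\n" + narrative
    )
    for gap in filter(None, (line.strip() for line in gaps.splitlines())):
        narrative += "\n\nBackground on " + gap + ":\n" + "\n".join(vector_search(gap))
    return narrative
```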
The third agent has the heaviest task, and so it runs on a stronger model. It receives the full context narrative, the concept articles, and a catalog of every tactical framework in the system, and decides what else the responding AI needs to see: which skill should be active, which negotiation tactic should be pre-loaded, and which concept article about a recurring pattern in my behavior should inform the response even when my message doesn't mention any pattern by name.
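
Sketched the same way, the third agent's call might look like this. The JSON keys, parameter names, and flat catalogs are assumptions for illustration; the real skill catalog is richer than a list of strings:

```python
from collections.abc import Callable
import json

def select_support(
    narrative: str,
    concept_articles: list[str],
    skill_catalog: list[str],
    ask_strong_model: Callable[[str], str],
) -> dict:
    """Heaviest call, routed to a stronger model: decide which skill, tactic, and
    concept the responding AI should see, even if the message never named them."""
    concepts_text = "\n".join(concept_articles)
    skills_text = "\n".join(skill_catalog)
    prompt = (
        "Given this context narrative, the concept articles, and the skill catalog, "
        "return JSON with the keys active_skill, preloaded_tactic, and relevant_concepts.\n\n"
        f"Narrative:\n{narrative}\n\nConcept articles:\n{concepts_text}\n\n"
        f"Skill catalog:\n{skills_text}"
    )
    selection = json.loads(ask_strong_model(prompt))
    # The package EJG hands back to the base model for its final answer.
    return {"context_narrative": narrative, **selection}
```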
The code still owns the sequence, and the architecture stays deterministic. But the agents inside it are now making judgment calls that find patterns simple word matching never could.

What is each layer doing in the system?
The Eji system runs on three layers, each with a different job and a different owner.
Skills are my wisdom and teachings. They're the encoded principles I've built over years of practice and research: the coaching methodology, the negotiation tactics from Voss, Cialdini, and Klaff, and the voice rules that keep anything Eji writes sounding like me. They direct how Eji operates.
Context is something we curate together. It's the memory of every thread I've had: people, situations, agreements, where things stand. Eji searches through the chats and gives recommendations on what to update, and we go through it together to make sure the nuance and facts are right before anything gets written to the cloud.
The concepts are for the system to maintain on its own. They're the patterns it has noticed across the threads: behavioral dynamics that show up in more than one place, moves that are working in practice, and cross-domain connections between unrelated relationships that share the same underlying shape.
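
Laid out as a plain data structure, purely illustrative rather than an actual config in the system, the division of ownership looks like this:

```python
# Illustrative only: how the three layers divide up ownership and review.
LAYERS = {
    "skills": {
        "what": "encoded principles: coaching methodology, negotiation tactics, voice rules",
        "owner": "me",
        "updated": "when I revise the source material",
    },
    "context": {
        "what": "episodic memory: people, situations, agreements, where things stand",
        "owner": "curated together",
        "updated": "every morning, reviewed before anything is written to the cloud",
    },
    "concepts": {
        "what": "semantic memory: patterns noticed across threads",
        "owner": "Eji",
        "updated": "autonomously, overnight",
    },
}
```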
The first version of the concept layer was built iteratively as the EJG system was developed. We discussed the patterns together, I helped shape what got captured, and then I handed over the reins. But this layer wasn't getting the same level of curation that the context layer was, and it was a glaring weakness in the original version of this two-phase methodology.

What changed when the system started writing its own concepts?
The morning context update now includes an extra step for the AI to take care of: writing a memory file for itself, captured for the overnight process. That overnight process looks at the file introspectively alongside all the memories, all the skills and skill-related files, and the existing corpus of concept files. It digests them all and looks for what's working in practice.
When a pattern surfaces with concrete evidence across multiple threads, Eji writes a short article describing what it observed, why the dynamic seems to work the way it does, and how it should adjust its approach. The system writes them autonomously without requiring any additional work from me.
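
In code, the overnight pass might be sketched like this. The evidence threshold, prompts, and file naming are assumptions based on my own description, not the actual process:

```python
from collections.abc import Callable
from pathlib import Path
import json

def overnight_pass(
    memory_note: str,            # the file Eji wrote for itself that morning
    episodic_files: list[str],   # the curated context
    skill_files: list[str],      # skills and skill-related files
    concept_dir: Path,
    ask_model: Callable[[str], str],
    min_threads: int = 2,        # require concrete evidence across multiple threads
) -> list[Path]:
    """Digest the day's material and write new concept articles autonomously."""
    corpus = "\n\n".join([memory_note, *episodic_files, *skill_files])
    candidates = json.loads(ask_model(
        "From this material, list observed patterns as JSON objects with the keys "
        "name, evidence_threads, and observation. Only include what is working in "
        "practice:\n" + corpus
    ))
    written = []
    for pattern in candidates:
        if len(pattern["evidence_threads"]) < min_threads:
            continue  # not enough concrete evidence yet
        article = ask_model(
            "Write a short concept article covering what was observed, why the dynamic "
            "seems to work the way it does, and how to adjust the approach:\n"
            + json.dumps(pattern)
        )
        path = concept_dir / (pattern["name"].lower().replace(" ", "-") + ".md")
        path.write_text(article)
        written.append(path)
    return written
```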
While I curate the episodic side every morning, Eji is studying the same conversations through a different lens. It looks at how it worked with me, which skills it leaned on, and which it skipped. The patterns it surfaces connect threads I'd never connect on my own, and it flags where it could go further next time by pulling a tactic, a concept, or a section of a skill file it hadn't surfaced before. The morning handoff is where the two views meet.
The articles themselves are invisible to me in conversation. Eji never says "the concept article on pre-concession suggests you're doing it again." It just gets better at interacting with me.

Why is this experiment different?
The semantic and episodic memory framing isn't mine. Databricks published a paper on memory scaling for AI agents that named the two halves: episodic memory for what happened, semantic memory for what it means. What they didn't mention was governance: who owns which type of memory and what rules constrain how each one gets updated.
In my system, the answer is asymmetric on purpose. I own context and I review every change in the morning. Eji owns concepts and updates them autonomously overnight. If I had to approve every pattern Eji notices, those patterns would only ever be my view of the world. And if the system notices patterns that aren't really there, they get corrected in practice the next time I push back on what it tells me in context.
The previous two posts in this series were responses to ideas already in the air. The memory piece was a reaction to Karpathy's note and the broader builder community converging at once. The harness piece was a practitioner's argument that most of us building agent workflows already knew: smarter models optimize for efficiency, and a deterministic process requires top-down orchestration.
Anthropic's "dreaming" is a similar concept. I had no idea that they would release this just days after I finished my updates. They beat me to the punch before I could publish my thoughts, but these serve different aims. Theirs is built for autonomous worker agents doing enterprise jobs. The agent is the thing being improved, and dreaming makes it better at its work. My version is for personal coaching, where I'm the subject of the learning.

How is the experiment going?
A few days in, the system is doing what it's meant to. The output feels stronger and more accurate about the context I'm in, and the insights go deeper than what I was getting before the rebuild.
What I don't know is what happens over weeks and months. Eji might build wrong assumptions about a relationship and reinforce them in the concept layer. The patterns it notices might drift from the ones that matter to me, and I may have to step back in and curate the semantic side manually. If the experiment doesn't work out, I could always scrap the autonomous side and pull the layer back under my hand, or just let the model improvements work out the kinks.
I'm trying to let it evolve. I don't know how it will turn out, and I'm curious to find out.

Where does this leave the system?
When I moved Eji into modular skills files, I gained the ability to load specialized capabilities on demand. But I also lost some of the cohesive elements that made it feel like one whole system. Skills could load, but they didn't always pull from the same context, and the concept layer wasn't mature enough to stitch the fragments together.
This change brings everything back together. The concept layer is now mature enough to be the cross-section that runs through all the fragmented parts. EJG ensures everything is used together meaningfully on every turn.
I encoded what I knew, and now the system encodes what it learns. We're both keeping our own notes, and the loop is now tighter than ever before.

This is the fifth post in a series about Eji, my personal AI negotiation and communications tool.
- The Eji System → komcp.com/shared-mastery-022826
- Amplify Your Edge → komcp.com/amplify-your-edge-032326
- Owning the Memory → komcp.com/own-the-memory-own-the-era-041326
- More Reliable AI → https://www.komcp.com/reliable-ai-042726
If you want to try the universal Eji package or compare notes on what you've been building, reach out: jkoenig@komcp.com
This article was also posted separately on LinkedIn:
https://www.linkedin.com/pulse/two-memories-better-than-one-jacob-koenig-wgvwc/