8 min read

Your Agent Shouldn't Have to Ask What You Were Working On

Build-context delivers a structured briefing — active work, key facts, gotchas, recent activity — before your agent starts. One command, under 60 seconds. No more cold starts.

By Claude Opus 4.6 (AI), edited by Harry Mower

Tags: feature, build-context, ai-agents, developer-experience

Every new agent session starts the same way. You open a conversation, say "let's keep working on the extraction pipeline," and the agent asks what that is. You explain it. Again. You paste in the relevant files, re-describe the architecture decision you made last Tuesday, remind it about the gotcha with pnpm lockfiles, and point it at the memory where you documented the configuration values. Five minutes later, you're finally doing actual work.

Your agent has a resume command — Claude Code's --resume, Copilot's session restore, Cursor's conversation history. Resume helps. It replays your last session's transcript so you can pick up where you left off. But it only knows about that one session, in that one tool. It doesn't know about the debugging session from two weeks ago, the architecture decision from last month, or the gotcha three separate sessions have all run into. Resume gives you a transcript. It doesn't give you understanding.

mx memories build-context is designed to work alongside resume. Resume restores where you were. Build-context delivers what you need to know — pulled from every session, every agent, every decision you've saved. One command gives your agent a structured briefing: what you were working on, what facts matter, what recurring issues to watch out for, and what's happened recently — before it writes a single line of code.

The Cold Start Problem

Here's what happens without it.

You've been working on a service for three weeks. You've saved 40 memories: debugging sessions, architecture decisions, configuration values, gotchas you've hit and documented. That knowledge is there. But when your agent starts a new conversation, it can't see any of it until you explicitly go get it.

So you search. You find three relevant memories and paste them in. The agent starts working and hits a lockfile issue — one you documented two weeks ago, in a memory you didn't think to fetch. You fix it, explain it to the agent, and move on. Twenty minutes in, the agent makes an assumption about the database connection pool size. Wrong assumption. You have a memory about that too. Another five minutes lost.

This is death by a thousand context gaps. The knowledge exists. Getting it in front of the agent at the right moment is the problem.

The cold start happens in four specific situations:

  • New conversation — every session begins with zero context loaded
  • Context compaction — the agent's context window fills up; earlier messages are compressed and lost
  • Task switching — you move to a different codebase or problem and come back hours later
  • Handoff — a different agent (or a fresh session of the same one) picks up where you left off

build-context is designed for all four.

Resume vs. Build-Context: Use Both

| | Resume | Build-Context |
|---|---|---|
| Scope | One session's transcript | Your entire memory store |
| Cross-agent | No — locked to one tool | Yes — works across Claude, Copilot, Cursor |
| Format | Raw conversation replay (50K+ tokens) | Structured briefing (~3K tokens) |
| Currency | Replays everything, including outdated decisions | Only surfaces what's current |
| Pattern detection | Can't see across sessions | Detects gotchas from your full history |

Resume your session. Then run build-context. You get the raw transcript and the synthesized knowledge that spans beyond it.

What You Get Back

You run one command before starting work:

mx memories build-context --context "mcp-server rebuild"

You get back a structured briefing in under a minute:

## Active Work
You were working on: MCP server rebuild (conversation conv_xyz, open)
Last activity: 3 days ago

## Key Facts
- mcp-server uses pnpm (not npm)
- MCP transport is stdio-to-HTTP bridge via StdioServerTransport
- Config files differ per agent: .claude.json, .vscode/mcp.json, .cursor/mcp.json

## Gotchas (appeared in 3+ memories)
- Always use pnpm, never npm — causes lockfile contamination
- mx mcp serve requires auth token from ~/.memnexus/config.json

## Recent Activity (last 24h)
- MCP agent comparison testing completed
- Steering file template updated with new tools

## Related Patterns
- Test across all 5 supported agents after MCP changes

Five sections. Everything the agent needs to start informed rather than start cold.

Active Work shows the most recent open conversation on this topic — so the agent knows exactly where the last session ended.

Key Facts surfaces extracted knowledge relevant to what you're working on: configuration values, thresholds, architecture decisions. These come from your memory store, not from the agent's training data.

Gotchas is where it gets interesting. These are facts that have appeared across multiple separate memories — which means they've come up enough times that they're worth flagging proactively. You didn't tag them as gotchas. The system detected them from the pattern of your saved knowledge.

Recent Activity shows what happened in the last 24 hours (configurable) related to this area. Useful for context compaction scenarios where earlier messages have been lost.

Related Patterns surfaces behavioral conventions to follow — the kind of institutional knowledge that prevents the agent from making well-intentioned but wrong choices.
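Because the briefing always uses the same five `## ` headers, it is easy to slice programmatically if you want to feed individual sections into different places. A minimal Python sketch, assuming only the header format shown in the example output above (the parsing helper is ours, not part of the CLI):

```python
def split_briefing(markdown: str) -> dict[str, str]:
    """Split a build-context briefing into its '## '-headed sections."""
    sections: dict[str, str] = {}
    current = None
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return {name: body.strip() for name, body in sections.items()}

briefing = """\
## Active Work
You were working on: MCP server rebuild

## Gotchas (appeared in 3+ memories)
- Always use pnpm, never npm
"""
parts = split_briefing(briefing)
```

From here, `parts["Active Work"]` could go into a status line while the gotchas feed a pre-flight checklist.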

Paste the Briefing In, or Let the Agent Call It

You can use build-context two ways.

The manual approach: run it yourself before starting a session, copy the output, paste it into your first message. Your agent is immediately oriented. No searching, no explaining, no re-pasting documentation.

The automatic approach: the build_context MCP tool is available alongside MemNexus's other memory tools. Agents that know to use it can call it at the start of any task. Instead of asking "what were we working on?" the agent already knows. It picks up where the last session ended, avoids the documented gotchas, and follows your established patterns — without you doing anything.
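For agents consuming the MCP tool, the JSON result can be folded straight into the opening prompt. A hypothetical harness-side sketch in Python; the field names (`active_work`, `key_facts`, `gotchas`) are our assumptions about the response shape, not a documented schema:

```python
def briefing_to_prompt(ctx: dict) -> str:
    """Render an assumed build_context JSON payload as a prompt preamble."""
    lines = ["Project briefing (from memory store):"]
    if ctx.get("active_work"):
        lines.append(f"Active work: {ctx['active_work']}")
    for fact in ctx.get("key_facts", []):
        lines.append(f"Fact: {fact}")
    for gotcha in ctx.get("gotchas", []):
        lines.append(f"Gotcha: {gotcha}")
    return "\n".join(lines)

# Example payload mirroring the briefing shown earlier
example = {
    "active_work": "MCP server rebuild (conv_xyz, open)",
    "key_facts": ["mcp-server uses pnpm (not npm)"],
    "gotchas": ["Always use pnpm, never npm"],
}
preamble = briefing_to_prompt(example)
```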

# Start work on a system you haven't touched in a week
mx memories build-context --context "mcp-server rebuild"

# Pass file paths for additional signal — memories connected to these files surface alongside topic results
mx memories build-context --context "fixing the extraction pipeline" --files "core-api/src/services/extraction.service.ts"

# Extend the recent activity window
mx memories build-context --context "billing integration" --recent-hours 48

Gotcha Detection Is Emergent, Not Manual

The gotchas section deserves a closer look because it works differently from the rest.

When you save memories, the system extracts facts from each one. Over time, similar facts appear in multiple memories — different sessions, different conversations, all noting the same thing. "Use pnpm, not npm." Three separate debugging sessions, three separate memories, all mentioning lockfile issues with npm. That pattern is a signal.

Build-context identifies facts that appear across two or more distinct source memories and surfaces them as gotchas. You never tagged those memories with "gotcha." You just saved what happened. The system noticed that something kept coming up.
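The rule described above reduces to a small aggregation: normalize each memory's extracted facts, count how many distinct memories each fact appears in, and flag anything at the threshold. A simplified illustration, not the actual implementation (real fact matching is presumably fuzzier than exact string comparison):

```python
from collections import defaultdict

def detect_gotchas(memories: list[dict], threshold: int = 2) -> list[str]:
    """Flag facts that appear in `threshold`+ distinct source memories."""
    sources: dict[str, set[str]] = defaultdict(set)
    for memory in memories:
        for fact in memory["facts"]:
            # Normalize so trivially different phrasings collapse together
            sources[fact.strip().lower()].add(memory["id"])
    return [fact for fact, ids in sources.items() if len(ids) >= threshold]

# Three debugging sessions, each independently noting the same thing
memories = [
    {"id": "m1", "facts": ["Use pnpm, not npm"]},
    {"id": "m2", "facts": ["Use pnpm, not npm", "P95 target is 200ms"]},
    {"id": "m3", "facts": ["Use pnpm, not npm"]},
]
gotchas = detect_gotchas(memories)
```

The fact with one source stays an ordinary fact; the one with three sources crosses the threshold and gets promoted to a gotcha.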

Before build-context, a new agent session had roughly a 40% chance of rediscovering a documented gotcha the hard way. Gotchas are now surfaced proactively in every briefing, dropping that to under 10%.

How It Works

Build-context runs five queries in parallel: active conversation detection, entity-matched fact retrieval, gotcha identification (facts appearing in 2+ source memories), time-filtered recent activity, and pattern matching. All five resolve simultaneously, so total latency is the slowest single query — not the sum of all five. Server-side P95 is under 200ms.
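The fan-out pattern itself is standard concurrency: launch all five lookups at once and await them together, so wall time tracks the slowest query rather than the sum. A sketch with `asyncio` and stand-in queries (the real queries hit the memory store; these just sleep):

```python
import asyncio
import time

async def query(name: str, latency: float) -> str:
    await asyncio.sleep(latency)  # stand-in for a memory-store lookup
    return name

async def build_context() -> list[str]:
    # Five lookups fan out in parallel; total latency ~ the slowest one
    return await asyncio.gather(
        query("active_conversations", 0.05),
        query("entity_facts", 0.04),
        query("gotchas", 0.03),
        query("recent_activity", 0.02),
        query("patterns", 0.01),
    )

start = time.perf_counter()
results = asyncio.run(build_context())
elapsed = time.perf_counter() - start
```

Run sequentially these would take 0.15s; gathered, the wall time is roughly the 0.05s of the slowest query.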

It's available three ways:

  • CLI: mx memories build-context --context "..." [--files "..."] [--recent-hours 24]
  • MCP: build_context tool — returns structured JSON that agents can parse directly
  • API: REST endpoint — see the API docs for details

Nothing Else Delivers Context This Way

Mem0, Zep, and LangMem all let agents search for relevant memories. That's the agent pulling context — running multiple queries, assembling results, deciding what's relevant. It works. It also costs 4-6 tool calls per workflow start, burns context tokens on retrieval overhead, and still misses documented gotchas unless the agent happens to search for the right thing.

Build-context inverts this. One call. Structured briefing. The relevant context is assembled and delivered — not discovered.

What's Next

We're integrating build-context into agent startup hooks so the briefing is delivered automatically when a new session begins, without the agent needing to call the tool explicitly. The memory is already there. The next step is making sure it arrives the moment you start working, not after you ask for it.

More on that soon.

Try It Now

# Update to get build-context
mx update

# Get a briefing before starting any piece of work
mx memories build-context --context "what you're about to work on"

# With file context for more targeted results
mx memories build-context --context "your task" --files "path/to/relevant/file.ts"

# Extend the recent activity window for longer breaks
mx memories build-context --context "your task" --recent-hours 48

Run it once before your next session. Hand the output to your agent as the first message. See how much faster it gets oriented.

The context was always there. Now it shows up before you need to ask.


mx memories build-context is available now in CLI v1.7.51+. Update with mx update or npm install -g @memnexus-ai/cli@latest.
