
Managing Project Context Across AI Sessions

How to structure your memories so your AI coding assistant walks into every session knowing your architecture, conventions, and where you left off — without re-explaining anything.

MemNexus Team

Engineering

Tutorial · Developer Productivity · AI Coding

There are two failure modes with AI memory. The first: you save nothing, and spend the first five minutes of every session re-explaining your stack, your conventions, and what you were doing yesterday. The second: you save everything — every decision, every dead end, every passing thought — and end up with a memory store too noisy to navigate. Neither approach is actually useful.

The right approach is deliberate. Treat your memory store not as a log of what happened, but as a living project briefing — the document your AI needs to walk in and immediately do good work. That framing changes what you save, how you save it, and when.

Three layers of context

Project context naturally falls into three layers, each with different lifespans and different update rhythms. Matching your storage approach to the layer makes retrieval fast and context loading automatic.

Layer 1 — Project DNA. Your tech stack, conventions, architecture decisions, non-negotiable patterns. This information is stable. It changes when you make a deliberate architectural shift, not day to day. Store it as a named memory so you can retrieve it by a predictable key.

mx memories create \
  --name "payments-service-context" \
  --conversation-id "NEW" \
  --content "Payments service architecture. Stack: TypeScript, Express, PostgreSQL (via Prisma), Stripe. Conventions: Result type pattern for error handling (no thrown exceptions). All Stripe calls use idempotency keys keyed on (userId, action, timestamp). Auth: JWT with 15-min expiry + refresh tokens in Redis (24hr TTL). Tests: Jest, all payment flows have integration tests. Never call Stripe in test environment — use mock in tests/mocks/stripe.ts."

Named memories give you deterministic retrieval. mx memories get --name "payments-service-context" always returns exactly this, no search required.

Layer 2 — Feature context. The thing you're actively building. Decisions you've made, tradeoffs you've accepted, the current state of the work. This layer lives for days or weeks, then gets superseded when the feature ships. Start a new conversation when you pick up a feature, and add to it as the work evolves.

mx memories create \
  --conversation-id "NEW" \
  --content "Building the subscription upgrade flow (issue #234). Decided: upgrade is immediate (not end-of-period) because users expect instant access. Stripe handles proration automatically. Current state: upgrade endpoint done, webhook handler done, need to add the email notification. Gotcha: Stripe sends customer.subscription.updated for both upgrades AND cancellations — check the previous_attributes.plan field to distinguish." \
  --topics "in-progress"

Note the conversation ID returned here — something like conv_abc123. Use that ID for all subsequent memories on this feature.
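If you want to avoid copying the ID by hand, you can capture it in a small helper. This is a sketch, not part of MemNexus itself: the `start_feature` function and the `MEMNEXUS_CONV_ID` variable are our own conventions, and it assumes `mx memories create` prints the new conversation ID (e.g. conv_abc123) on stdout — check how your version of the CLI reports it.

```shell
# Hypothetical helper: starts a new feature conversation and captures its ID.
# Assumes `mx memories create` prints the conversation ID on stdout.
start_feature() {
  local content="$1"
  if ! command -v mx >/dev/null 2>&1; then
    echo "mx CLI not found; install MemNexus first" >&2
    return 1
  fi
  mx memories create \
    --conversation-id "NEW" \
    --content "$content" \
    --topics "in-progress"
}
```

Then `MEMNEXUS_CONV_ID=$(start_feature "Building the subscription upgrade flow...")` keeps the ID in your shell for the rest of the work session.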

Layer 3 — Session state. What you did today and what comes next. This is the smallest, most ephemeral layer. A quick note at the end of each work session is enough.

mx memories create \
  --conversation-id "conv_abc123" \
  --content "Pausing. Finished the email notification for subscription upgrades. Next: write the downgrade flow. Note: Stripe does not prorate downgrades by default — need to decide whether to prorate or wait until period end. Leaning toward period-end to avoid partial refunds." \
  --topics "in-progress"

This takes thirty seconds. The next morning, you — and your AI — know exactly where things stand.
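To make that end-of-session note even cheaper, you can wrap it in a function. Again a sketch under our own conventions — `save_session` and `MEMNEXUS_CONV_ID` are not MemNexus features, just a thin wrapper over the `mx memories create` call shown above.

```shell
# Hypothetical end-of-session helper. Expects MEMNEXUS_CONV_ID to hold the
# feature conversation ID (e.g. conv_abc123) noted earlier.
save_session() {
  local conv_id="${MEMNEXUS_CONV_ID:-}" note="$1"
  if [ -z "$conv_id" ]; then
    echo "set MEMNEXUS_CONV_ID to your feature conversation ID first" >&2
    return 1
  fi
  mx memories create \
    --conversation-id "$conv_id" \
    --content "Pausing. $note" \
    --topics "in-progress"
}
```

Usage: `save_session "Finished the email notification. Next: downgrade flow."` at the end of the day.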

Loading context at the start of a session

With three layers in place, session start becomes a two-minute ritual. Run these three commands and paste the combined output into your AI tool's first message:

# Get the stable project context
mx memories get --name "payments-service-context"

# Get where you are on the current feature
mx memories digest --query "subscription upgrade" --digest-format structured

# Get yesterday's state
mx memories recap --recent 24h

Your AI now knows the architecture, the active feature work, and where you left off. No re-explaining. No "just to make sure I understand your setup" preamble. You can open with a question or a task and get a useful answer immediately.
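The three-command ritual is also easy to script. A minimal sketch, assuming the mx CLI and using only the flags shown in this post — the function name and argument defaults are our own:

```shell
# Hypothetical wrapper: prints the full context bundle for pasting into
# your AI tool's first message. Args: memory name, feature query.
load_context() {
  if ! command -v mx >/dev/null 2>&1; then
    echo "mx CLI not found; install MemNexus first" >&2
    return 1
  fi
  mx memories get --name "${1:-payments-service-context}"
  mx memories digest --query "${2:-subscription upgrade}" --digest-format structured
  mx memories recap --recent 24h
}
```

Pipe the output straight to your clipboard (`load_context | pbcopy` on macOS, `load_context | xclip -selection clipboard` on Linux) and the whole ritual shrinks to one command.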

What makes a memory useful

Not all memories are equally valuable. The pattern that separates useful ones from noise: good memories capture the why, not just the what. Future you — and your AI — needs the reasoning to make consistent decisions.

| Useful | Not useful |
|--------|------------|
| "Decided against database polling for webhooks — too much lag at scale. Using Stripe webhooks with event deduplication via idempotency keys." | "Working on webhooks today." |
| "Redis TTL for refresh tokens is 24 hours. Deliberate tradeoff — shorter TTL was causing too many forced re-logins for mobile users." | "Redis TTL set." |
| "Issue #234: upgrade is immediate rather than end-of-period. Users complained that waiting until period end felt broken — Stripe proration handles the billing math correctly either way." | "Subscription upgrades done." |

The test: if a new developer read your memory six weeks from now, would they understand what was decided and why? If the answer is no, the memory isn't pulling its weight.

The compounding effect

After a month of this practice, something shifts. Your memory store reflects the actual trajectory of your project: the non-obvious conventions, the decisions made under constraint, the bugs traced to their root cause. Your AI walks into every session carrying that history.

You stop re-explaining. You stop re-discovering. The things you've already worked out stay worked out.

That's worth the thirty seconds at the end of each session.


MemNexus is in invite-only preview. Join the waitlist to get access.
