Settings Masterclass

Memory & Context Compaction

Managing the AI's short-term memory.

Agentic coding consumes tokens quickly. The Memory settings control how Blaze summarizes and prunes history to keep your context window efficient.

Compaction Algorithms

As an agent approaches its token limit, Blaze uses "Compaction" to shrink the history without losing meaning (see the sketch after this list):

  • Auto-Compaction: Triggers when the window is 80% full.
  • Smart Summary: Summarizes long conversation turns into "Core Facts."
  • Pruning: Removes redundant file reads and large bash outputs from the context.
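
Below is a minimal Python sketch of how such a threshold check could work. The class names, the summarize_to_core_facts helper, the 2,000-token prune cutoff, and the token estimates are illustrative assumptions rather than Blaze's actual implementation; only the 80% trigger, the "Core Facts" summary, and the pruning of large tool output come from the settings above.

  # Illustrative auto-compaction check; not Blaze's actual code.
  from dataclasses import dataclass, field

  AUTO_COMPACT_THRESHOLD = 0.80      # Auto-Compaction triggers at 80% usage

  @dataclass
  class Turn:
      role: str                      # "user", "assistant", "tool", "system"
      content: str
      tokens: int

  @dataclass
  class Context:
      window_size: int               # maximum tokens the model accepts
      turns: list[Turn] = field(default_factory=list)

      def used_tokens(self) -> int:
          return sum(t.tokens for t in self.turns)

      def maybe_compact(self) -> None:
          """Shrink the history once usage crosses the threshold."""
          if self.used_tokens() < AUTO_COMPACT_THRESHOLD * self.window_size:
              return
          # Pruning: drop large tool outputs (redundant file reads, bash dumps).
          self.turns = [t for t in self.turns
                        if not (t.role == "tool" and t.tokens > 2000)]
          # Smart Summary: collapse older turns into a single "Core Facts" entry.
          old, recent = self.turns[:-10], self.turns[-10:]
          if old:
              summary = summarize_to_core_facts(old)    # hypothetical helper
              self.turns = [Turn("system", summary, len(summary) // 4)] + recent

  def summarize_to_core_facts(turns: list[Turn]) -> str:
      # Placeholder: a real implementation would ask the model to summarize.
      return "Core Facts: " + "; ".join(t.content[:40] for t in turns)

In this sketch, pruning runs before summarization because dropping redundant tool output loses less information than summarizing conversation turns.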

Continuity Ledgers

  • Session Continuity: Save the "state of work" into a local ledger so you can resume a session tomorrow with full context (see the ledger sketch after this list).
  • Cross-Session Memory: Share core project knowledge between different agents.
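
As an illustration, the sketch below persists a session's state of work to a local JSON ledger and restores it later. The file path, schema, and field names are assumptions for the example, not Blaze's documented ledger format.

  # Illustrative session ledger; file path and schema are assumptions.
  import json
  from datetime import datetime, timezone
  from pathlib import Path

  LEDGER_PATH = Path(".blaze/session-ledger.json")    # hypothetical location

  def save_ledger(core_facts: list[str], open_tasks: list[str]) -> None:
      """Write the current state of work to a local ledger."""
      LEDGER_PATH.parent.mkdir(parents=True, exist_ok=True)
      LEDGER_PATH.write_text(json.dumps({
          "saved_at": datetime.now(timezone.utc).isoformat(),
          "core_facts": core_facts,    # distilled knowledge to carry forward
          "open_tasks": open_tasks,    # work to pick up next session
      }, indent=2))

  def load_ledger() -> dict:
      """Restore the state of work at the start of a new session."""
      if LEDGER_PATH.exists():
          return json.loads(LEDGER_PATH.read_text())
      return {"core_facts": [], "open_tasks": []}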

Token Monitoring

Track your context usage in real time. See exactly how many tokens are being used by each of the following (a breakdown sketch follows the list):

  • System Prompts
  • File Context
  • Message History
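
To make the breakdown concrete, here is a rough sketch that totals tokens per category. It assumes a simple characters-per-token heuristic in place of the model's real tokenizer, which Blaze would use internally.

  # Rough token breakdown by category; a real monitor uses the model's tokenizer.
  def estimate_tokens(text: str) -> int:
      return max(1, len(text) // 4)        # ~4 characters per token heuristic

  def usage_breakdown(system_prompts: list[str],
                      file_context: list[str],
                      message_history: list[str]) -> dict[str, int]:
      return {
          "System Prompts": sum(estimate_tokens(t) for t in system_prompts),
          "File Context": sum(estimate_tokens(t) for t in file_context),
          "Message History": sum(estimate_tokens(t) for t in message_history),
      }

  if __name__ == "__main__":
      breakdown = usage_breakdown(
          system_prompts=["You are a coding agent."],
          file_context=["def main(): ..."],
          message_history=["Fix the failing test in utils.py"],
      )
      total = sum(breakdown.values())
      for category, tokens in breakdown.items():
          print(f"{category:>16}: {tokens:4d} tokens ({tokens / total:.0%})")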

Next: Git & Worktree Orchestration.