
# AI Memory (lexigram-ai-memory)

AI memory system for the Lexigram Framework — episodic, semantic, and working memory


Three-tier AI memory system for the Lexigram Framework. Provides working, episodic, and semantic memory with pluggable backends, automatic consolidation scheduling, token-aware context assembly, and multi-source retrieval — all wired through the DI container via MemoryModule. Zero-config usage starts with sensible defaults.
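Before diving in, it helps to have a mental model of the three tiers. The sketch below is plain illustrative Python — the class names and fields are assumptions for exposition, not Lexigram's actual types:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Episodic memory: a recorded conversation episode."""
    conversation_id: str
    summary: str

@dataclass
class Fact:
    """Semantic memory: a long-lived fact about an entity."""
    entity: str
    statement: str
    confidence: float  # facts below a minimum confidence are not kept

@dataclass
class WorkingContext:
    """Working memory: the assembled, token-limited prompt context."""
    system_prompt: str
    recent_turns: list[str] = field(default_factory=list)
    retrieved: list[str] = field(default_factory=list)  # episodic + semantic hits
```

Consolidation moves information downward through these tiers: episodes accumulate, and durable facts distilled from them land in semantic storage.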

```sh
uv add lexigram-ai-memory
# Optional extras
uv add "lexigram-ai-memory[redis]"
```
```python
from lexigram import Application
from lexigram.di.module import Module, module
from lexigram.ai.memory import MemoryModule
from lexigram.ai.memory.config import MemoryConfig


@module(imports=[
    MemoryModule.configure(
        MemoryConfig(default_backend="in_memory")
    )
])
class AppModule(Module):
    pass


app = Application(modules=[AppModule])

if __name__ == "__main__":
    app.run()
```

Zero-config usage: call `MemoryModule.configure()` with no arguments to use the defaults.

application.yaml

```yaml
ai_memory:
  default_backend: "vector"
  ttl_seconds: 2592000
  consolidation:
    enabled: true
    interval_seconds: 3600.0
```
## Option 2 — Profiles + Environment Variables (recommended)
```sh
# Environment variables for each field
export LEX_AI_MEMORY__DEFAULT_BACKEND=vector
```

```python
from lexigram.ai.memory.config import MemoryConfig
from lexigram.ai.memory import MemoryModule

config = MemoryConfig(
    default_backend="vector",
)
MemoryModule.configure(config)
```
| Field | Default | Env var | Description |
| --- | --- | --- | --- |
| `enabled` | `True` | `LEX_AI_MEMORY__ENABLED` | Enable the AI memory subsystem |
| `default_backend` | `"in_memory"` | `LEX_AI_MEMORY__DEFAULT_BACKEND` | Backend type: `in_memory`, `cache`, `database`, `vector` |
| `ttl_seconds` | `0` | `LEX_AI_MEMORY__TTL_SECONDS` | Default entry TTL in seconds (0 = never expire) |
| `working.system_prompt_tokens` | `500` | `LEX_AI_MEMORY__WORKING__SYSTEM_PROMPT_TOKENS` | Fixed token allocation for the system prompt |
| `working.recent_turns_fraction` | `0.4` | `LEX_AI_MEMORY__WORKING__RECENT_TURNS_FRACTION` | Fraction of the remaining budget for recent turns |
| `episodic.default_top_k` | `5` | `LEX_AI_MEMORY__EPISODIC__DEFAULT_TOP_K` | Default number of episodes to retrieve |
| `semantic.min_confidence` | `0.6` | `LEX_AI_MEMORY__SEMANTIC__MIN_CONFIDENCE` | Minimum confidence score for stored facts |
| `consolidation.enabled` | `True` | `LEX_AI_MEMORY__CONSOLIDATION__ENABLED` | Whether automatic background consolidation is active |
| `consolidation.interval_seconds` | `3600.0` | `LEX_AI_MEMORY__CONSOLIDATION__INTERVAL_SECONDS` | How often to run a consolidation pass |
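The double underscore in the env var names maps onto nested config fields (`LEX_AI_MEMORY__WORKING__SYSTEM_PROMPT_TOKENS` → `working.system_prompt_tokens`). A rough sketch of that mapping — the parsing logic here is an assumption for illustration, not Lexigram's actual loader:

```python
PREFIX = "LEX_AI_MEMORY__"

def collect_memory_env(environ: dict[str, str]) -> dict:
    """Fold LEX_AI_MEMORY__A__B=x style variables into nested dicts."""
    config: dict = {}
    for key, value in environ.items():
        if not key.startswith(PREFIX):
            continue
        # "WORKING__SYSTEM_PROMPT_TOKENS" -> ["working", "system_prompt_tokens"]
        path = key[len(PREFIX):].lower().split("__")
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config

env = {
    "LEX_AI_MEMORY__DEFAULT_BACKEND": "vector",
    "LEX_AI_MEMORY__WORKING__SYSTEM_PROMPT_TOKENS": "500",
}
print(collect_memory_env(env))
# {'default_backend': 'vector', 'working': {'system_prompt_tokens': '500'}}
```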
| Method | Description |
| --- | --- |
| `MemoryModule.configure(config, enable_consolidation)` | Production-ready module with the full consolidation pipeline |
| `MemoryModule.stub(config)` | Test-friendly module with in-memory backends |
- Three-tier memory: Working (context assembly), Episodic (conversation episodes), Semantic (entity facts)
- Pluggable backends: in-memory, Redis, SQLAlchemy, Qdrant/Chroma/PGVector
- Token budget allocation: distributes available tokens across memory sources
- Automatic consolidation: a background scheduler promotes episodic memories to semantic storage
- Multi-source retrieval: a unified `MemoryRetriever` queries all tiers with relevance ranking
- Dynamic pruning: `DynamicContextPruner` fits context into hard token limits
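Token budget allocation can be sketched roughly as follows, using the working-memory defaults from the configuration table (`system_prompt_tokens=500`, `recent_turns_fraction=0.4`). The even episodic/semantic split of the remainder is an assumption for illustration, not Lexigram's actual policy:

```python
def allocate_budget(total_tokens: int,
                    system_prompt_tokens: int = 500,
                    recent_turns_fraction: float = 0.4) -> dict[str, int]:
    """Split a total token budget across context sources.

    The system prompt gets a fixed allocation; recent turns get a fraction
    of what remains; the rest is left for retrieved memories. The even
    episodic/semantic split below is illustrative only.
    """
    remaining = max(total_tokens - system_prompt_tokens, 0)
    recent_turns = int(remaining * recent_turns_fraction)
    retrieved = remaining - recent_turns
    return {
        "system_prompt": system_prompt_tokens,
        "recent_turns": recent_turns,
        "episodic": retrieved // 2,
        "semantic": retrieved - retrieved // 2,
    }

print(allocate_budget(8_000))
# {'system_prompt': 500, 'recent_turns': 3000, 'episodic': 2250, 'semantic': 2250}
```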
```python
async with Application.boot(modules=[MemoryModule.stub()]) as app:
    # your test code
    ...
```
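The merge-and-rank step behind multi-source retrieval can be sketched like this (illustrative — `rank_results` and the tuple shape are assumptions for exposition, not the `MemoryRetriever` API; the `top_k` default mirrors `episodic.default_top_k`):

```python
def rank_results(results: list[tuple[str, float, str]], top_k: int = 5) -> list[str]:
    """Merge (text, score, source) hits from all tiers, keep the top_k by score."""
    ordered = sorted(results, key=lambda r: r[1], reverse=True)
    return [text for text, _, _ in ordered[:top_k]]

# Hits gathered from the episodic and semantic tiers for one query:
hits = [
    ("user prefers dark mode", 0.92, "semantic"),
    ("discussed billing last week", 0.85, "episodic"),
    ("greeted the assistant", 0.40, "episodic"),
]
print(rank_results(hits, top_k=2))
# ['user prefers dark mode', 'discussed billing last week']
```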
| File | What it contains |
| --- | --- |
| `src/lexigram/ai/memory/module.py` | `MemoryModule` — DI module factory methods |
| `src/lexigram/ai/memory/config.py` | `MemoryConfig` and tier-specific config classes |
| `src/lexigram/ai/memory/di/provider.py` | `MemoryProvider` — wires all protocols and services |
| `src/lexigram/ai/memory/working/manager.py` | `WorkingMemoryManager` — context assembly |
| `src/lexigram/ai/memory/episodic/store.py` | `EpisodicMemoryStore` — episode storage |
| `src/lexigram/ai/memory/semantic/store.py` | `SemanticMemoryStore` — fact storage |
| `src/lexigram/ai/memory/consolidation/consolidator.py` | `MemoryConsolidator` — consolidation pipeline |
| `src/lexigram/ai/memory/retrieval/retriever.py` | `MemoryRetriever` — multi-source retrieval |