# LangChain + Awareness Memory
Add persistent, cross-session memory to your LangChain agents and RAG pipelines. Every decision, code change, and conversation is stored and recalled automatically across sessions.
## Installation

```bash
pip install "awareness-memory-cloud[langchain]"
```

(The quotes around the package spec keep shells like zsh from interpreting the square brackets.)
This installs the core SDK plus the LangChain adapter.
## Quick Start

```python
import openai

from memory_cloud import MemoryCloudClient
from memory_cloud.integrations.langchain import MemoryCloudLangChain

# Local mode (no API key needed)
client = MemoryCloudClient(mode="local")
mc = MemoryCloudLangChain(client=client)  # memory_id auto-managed

# Cloud mode (team collaboration, semantic search, multi-device sync)
client = MemoryCloudClient(
    base_url="https://awareness.market/api/v1",
    api_key="YOUR_API_KEY",
)
mc = MemoryCloudLangChain(client=client, memory_id="memory_123")

# Option 1: Wrap the LLM client for auto-recall + auto-capture
mc.wrap_llm(openai.OpenAI())

# Option 2: Use as a LangChain retriever
retriever = mc.as_retriever()
docs = retriever.invoke("What did we decide yesterday?")
```

Note that `invoke` is the public LangChain retriever entry point; `_get_relevant_documents` is an internal method and should not be called directly.
## Integration Patterns

### Pattern 1: Interceptor (Zero-Code Memory)

Wrap your LLM client, and every call automatically gets memory context injected before the request and captured after it:

```python
mc.wrap_llm(openai.OpenAI())

# Every LLM call now carries cross-session memory context.
# No changes to your existing LangChain chains are needed.
```
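Conceptually, an interceptor like `wrap_llm` recalls relevant memory before each call and captures the exchange afterward. The sketch below illustrates that flow with a toy in-memory store and a fake LLM; the class and function names are illustrative, not the SDK's actual internals, and the real service performs semantic rather than keyword recall.

```python
class MemoryStore:
    """Toy in-memory store standing in for the real Awareness backend."""
    def __init__(self):
        self.entries = []

    def recall(self, query):
        # Naive keyword match; the real service does semantic search.
        return [e for e in self.entries if any(w in e for w in query.split())]

    def capture(self, text):
        self.entries.append(text)

def wrap_llm(llm_call, store):
    """Return a wrapped callable that injects memory and records results."""
    def wrapped(prompt):
        context = store.recall(prompt)
        enriched = "\n".join(context + [prompt]) if context else prompt
        response = llm_call(enriched)
        store.capture(f"{prompt} -> {response}")
        return response
    return wrapped

# Usage with a fake LLM: the second call sees the first exchange as context.
store = MemoryStore()
chat = wrap_llm(lambda p: f"echo: {p}", store)
chat("use JWT for auth")
print(chat("auth decision?"))
```

Because the wrapper owns both the pre-call and post-call hooks, the calling code never touches memory explicitly, which is what makes the pattern "zero-code".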
### Pattern 2: LangChain Retriever

Use Awareness as a retriever in your RAG chains:

```python
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

retriever = mc.as_retriever()
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=retriever,
)
result = qa_chain.invoke("What's the current auth implementation?")
```
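Under the hood, a memory-backed retriever scores stored snippets against the query and returns the best matches. The toy version below uses word overlap so it runs standalone; the class is hypothetical and the real adapter uses semantic search over the memory store.

```python
class ToyMemoryRetriever:
    """Illustrative stand-in for a memory-backed retriever."""
    def __init__(self, snippets):
        self.snippets = snippets

    def invoke(self, query, k=2):
        # Rank snippets by word overlap with the query; return the top k.
        q = set(query.lower().split())
        scored = sorted(
            self.snippets,
            key=lambda s: len(q & set(s.lower().split())),
            reverse=True,
        )
        return scored[:k]

r = ToyMemoryRetriever([
    "auth uses JWT with 15 minute expiry",
    "frontend migrated to React 18",
    "decided to rotate JWT signing keys weekly",
])
print(r.invoke("jwt auth decisions"))
```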
### Pattern 3: Tool Registration

Register memory tools for your LangChain agents:

```python
tools = mc.build_tools()

# Or get raw tool functions for manual registration
tool_functions = mc.get_tool_functions()
```
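The rough shape of what such tools expose to an agent can be sketched as plain callables plus metadata. The factory and tool names below are hypothetical, not the SDK's actual tool set:

```python
def make_memory_tools(store):
    """Package read/write memory operations as agent-callable tools."""
    def save_memory(text):
        store.append(text)
        return "saved"

    def search_memory(query):
        return [s for s in store if query.lower() in s.lower()]

    return [
        {"name": "save_memory", "description": "Persist a note", "fn": save_memory},
        {"name": "search_memory", "description": "Search past notes", "fn": search_memory},
    ]

# An agent framework would register these by name; here we call them directly.
store = []
tools = {t["name"]: t["fn"] for t in make_memory_tools(store)}
tools["save_memory"]("API rate limit is 100 req/min")
print(tools["search_memory"]("rate limit"))
```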
## Use Cases

- RAG chains with persistent memory — retriever pulls relevant past decisions and context into your chain
- Agent tool registration — give LangChain agents the ability to read and write memory
- Cross-session knowledge — your agent remembers what happened in previous sessions
- Team memory — multiple agents share context via `user_id` and `agent_role`
## Multi-User / Multi-Role

```python
# Developer agent
mc_dev = MemoryCloudLangChain(
    client=client, memory_id="memory_123",
    default_metadata={"agent_role": "developer", "user_id": "alice"},
)

# Reviewer agent
mc_reviewer = MemoryCloudLangChain(
    client=client, memory_id="memory_123",
    default_metadata={"agent_role": "reviewer", "user_id": "bob"},
)
When querying with a `user_id`, results include both that user's data AND public (`user_id=NULL`) data.
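That visibility rule can be expressed as a one-line filter. This is a sketch of the described semantics, not the SDK's implementation:

```python
def visible_entries(entries, user_id):
    """Return the given user's entries plus public entries (user_id is None)."""
    return [e for e in entries if e["user_id"] in (user_id, None)]

entries = [
    {"text": "alice: prefers pytest", "user_id": "alice"},
    {"text": "bob: reviewing PR 42", "user_id": "bob"},
    {"text": "team: release every Friday", "user_id": None},
]

# Alice sees her own data and the public entry, but not Bob's.
print([e["text"] for e in visible_entries(entries, "alice")])
# → ['alice: prefers pytest', 'team: release every Friday']
```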
## Query Rewrite

All framework adapters support query rewrite via `MemoryCloudBaseAdapter`:

- `"rule"` (default): context-aware query plus keyword extraction. Zero extra latency.
- `"llm"`: LLM-generated optimal queries. Best for ambiguous inputs.
- `"none"`: raw user message (legacy).
## Example

See `examples/e2e_langchain_cloud.py` in the SDK repository for a complete end-to-end example.
## Next Steps

- SDK Usage Guide — interceptor pattern, auto-extraction, and full API reference
- MCP Tools Reference — complete MCP tool documentation
- CrewAI Integration — use with the CrewAI multi-agent framework