Persistent memory for AI agents that improves over time. No fine-tuning. No model changes. Just smarter prompts built from experience.
```python
# Before: the AI makes the same mistakes every session
agent.run("Test login form")  # Uses sleep(), fails
agent.run("Test login form")  # Uses sleep() AGAIN
```

```python
# After: ALMA remembers what works
from alma import ALMA

alma = ALMA.from_config(".alma/config.yaml")

# Get memories before the task
memories = alma.retrieve(
    task="Test login form",
    agent="helena"
)

# The agent now knows: "Don't use sleep()"
# The agent now knows: "Incremental validation works"
result = agent.run_with_context(memories)
```
## Why I built ALMA
I was building AI agents for automated testing. Helena for frontend QA, Victor for backend verification. They worked great... until they didn't.
The same mistakes kept happening: `sleep(5000)` for waits, causing flaky tests.
Every conversation started fresh. No memory. No learning. Just an expensive LLM making the same mistakes I'd already corrected.
I tried Mem0. It stores memories, but offers no way to scope what an agent can learn: Helena could "learn" database queries she'd never use. I looked at LangChain Memory; it's built for conversation context, not long-term learning. Different problem.
Nothing fit. So I built ALMA - Agent Learning Memory Architecture. The core insight: AI agents don't need to modify their weights to "learn." They need smart prompts built from relevant past experiences.
Three simple phases. No model modifications:

1. **Retrieve**: ALMA finds relevant past experiences, strategies that worked, and anti-patterns to avoid.
2. **Execute**: Memories are injected into the prompt, and the agent executes with accumulated knowledge.
3. **Learn**: Success becomes a new heuristic. Failure becomes an anti-pattern with what to do instead.
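Stitched together, the whole loop fits in one helper. This is a minimal sketch using only the calls shown elsewhere in this README (`retrieve`, `to_prompt`, `learn`, `add_anti_pattern`); `run_agent` and the keys on its `result` dict are placeholders for your own agent runner:

```python
from alma import ALMA

alma = ALMA.from_config(".alma/config.yaml")

def run_with_memory(task: str, agent: str):
    # Phase 1 (Retrieve): pull relevant experiences for this task
    memories = alma.retrieve(task=task, agent=agent, top_k=5)

    # Phase 2 (Execute): inject memories into the prompt and run the agent
    prompt = f"## Your Task\n{task}\n\n## Knowledge from Past Runs\n{memories.to_prompt()}"
    result = run_agent(prompt)  # placeholder: your own agent runner

    # Phase 3 (Learn): record the outcome either way
    if result["success"]:  # the shape of `result` is up to you
        alma.learn(agent=agent, task=task, outcome="success",
                   strategy_used=result["strategy"])
    else:
        alma.add_anti_pattern(agent=agent,
                              pattern=result["failed_approach"],
                              why_bad=result["failure_reason"],
                              better_alternative=result["suggested_fix"])
    return result
```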
Purpose-built for AI agents that need to learn, remember, and improve:

- **Memory Scoping**: Define what each agent can and cannot learn. Helena learns testing, not database queries (see the config sketch after this list).
- **Anti-Pattern Learning**: Explicitly record what NOT to do, why it's bad, and what to do instead.
- **Multi-Agent Sharing**: Senior agents share knowledge with juniors via hierarchical memory inheritance.
- **Workflow Checkpoints**: Save state mid-workflow, resume after failures. Perfect for complex multi-step tasks.
- **MCP Integration**: Native MCP server with 16 tools. Works directly with Claude Code.
- **Vector Backends**: PostgreSQL, Qdrant, Pinecone, Chroma, SQLite, Azure Cosmos. Deploy anywhere.
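Scoping and sharing rules live in the agent config. The schema isn't shown on this page, so treat this `.alma/config.yaml` sketch as a hypothetical illustration assembled from the field names in the comparison table below (`can_learn`, `cannot_learn`, `inherit_from`, `share_with`):

```yaml
# Hypothetical sketch: field names come from the feature table,
# but the structure is assumed -- consult the ALMA docs for the real schema.
agents:
  helena:
    can_learn: ["testing", "frontend-qa"]   # topics Helena may store
    cannot_learn: ["database-queries"]      # topics she must ignore
    inherit_from: ["senior-qa"]             # read memories from senior agents
    share_with: ["victor"]                  # expose her memories to peers
storage:
  backend: qdrant   # one of the six supported vector backends
```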
See how ALMA compares to Mem0 and LangChain Memory.
| Feature | ALMA v0.6.0 | Mem0 | LangChain Memory |
|---|---|---|---|
| Memory Scoping | can_learn / cannot_learn | Basic isolation | None |
| Anti-Pattern Learning | Yes | No | No |
| Multi-Agent Sharing | inherit_from + share_with | No | No |
| Workflow Checkpoints | Yes | No | No |
| MCP Integration | 16 tools | No | No |
| TypeScript SDK | Full-featured | No | Limited |
| Vector Backends | 6 options | Limited | Via integrations |
| Graph Memory | Neo4j, Memgraph, Kuzu | Limited | Entity memory |
| Event System | Webhooks + callbacks | No | No |
| License | MIT (fully open) | Partially open | MIT |
Ready to give your agents real memory?
## Get Started

Install ALMA and start giving your agents memory.
```bash
# Python SDK
pip install alma-memory

# TypeScript SDK
npm install @rbkunnela/alma-memory
```
```python
from alma import ALMA

# Initialize ALMA
alma = ALMA.from_config(".alma/config.yaml")

# Before the task: get relevant memories
memories = alma.retrieve(
    task="Test the login form validation",
    agent="helena",
    top_k=5
)

# Inject memories into your prompt
prompt = f"""
## Your Task
Test the login form validation

## Knowledge from Past Runs
{memories.to_prompt()}
"""

# Run your agent with the enriched prompt
result = run_agent(prompt)

# After the task: learn from the outcome
alma.learn(
    agent="helena",
    task="Test login form",
    outcome="success",
    strategy_used="Tested empty fields, invalid email, valid submission"
)

# Track anti-patterns (what NOT to do)
alma.add_anti_pattern(
    agent="helena",
    pattern="Using sleep() for async waits",
    why_bad="Causes flaky tests, wastes time",
    better_alternative="Use explicit waits with conditions"
)
```
Six vector database backends, from local development to enterprise scale.
Stop watching your AI make the same mistakes. Give your agents memory that persists, learns, and improves.