Memory OS

Memory OS is a persistent semantic memory layer for AI applications. It enables your AI agents, chatbots, and applications to remember context across sessions, retrieve relevant memories intelligently, and build deeper user relationships.

What is Memory OS?

Memory OS provides infrastructure for storing and retrieving memories with semantic understanding. Unlike simple key-value stores or vector databases, Memory OS implements a complete memory system inspired by human cognition:

  • 3-Tier Memory Architecture: Short-term, medium-term, and long-term memories with automatic decay
  • Semantic Vector Search: Find relevant memories based on meaning, not just keywords
  • Intelligent Relevance Scoring: 6-factor algorithm combining similarity, recency, importance, and more
  • LLM-Ready Context: Retrieve pre-formatted context that fits your token budget
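The token-budget behavior in the last point can be sketched as a greedy packer. This is a minimal illustration, not the actual Memory OS implementation: memories are assumed to arrive pre-ranked by relevance, and token counts are approximated by whitespace-split word counts.

```python
def build_context(memories, max_tokens):
    """Greedily pack the highest-ranked memories into a token budget.

    `memories` is a list of strings, already sorted by relevance.
    Token counts are approximated by word counts for illustration.
    """
    picked, used = [], 0
    for text in memories:
        cost = len(text.split())
        if used + cost > max_tokens:
            continue  # skip any memory that would overflow the budget
        picked.append(text)
        used += cost
    return "\n".join(picked)

ranked = [
    "User prefers dark mode and works in finance",
    "User asked about pricing on January 5th",
    "User mentioned a migration from spreadsheets",
]
# With a 12-word budget, only the first (8-word) memory fits.
print(build_context(ranked, max_tokens=12))
```

A real implementation would use the model's tokenizer rather than word counts, but the budget logic is the same.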

Key Features

Three-Tier Memory System

Memories are classified into tiers that mirror human memory:

Tier         Duration         Use Case
Short-term   Session/hours    Current conversation context
Medium-term  Days to weeks    Recent interactions and preferences
Long-term    Persistent       Core user knowledge, facts, relationships
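The automatic decay behind these tiers can be sketched as tier-specific time-to-live values. The TTLs below are illustrative assumptions; the actual durations and decay mechanics in Memory OS may differ.

```python
from datetime import datetime, timedelta

# Illustrative TTLs per tier; the service's real durations may differ.
TIER_TTL = {
    "short": timedelta(hours=6),
    "medium": timedelta(weeks=2),
    "long": None,  # persistent: never expires
}

def is_expired(tier, created_at, now=None):
    """Return True if a memory in `tier` has decayed past its TTL."""
    now = now or datetime.utcnow()
    ttl = TIER_TTL[tier]
    return ttl is not None and now - created_at > ttl

created = datetime(2024, 1, 1)
print(is_expired("short", created, now=datetime(2024, 1, 2)))  # True
print(is_expired("long", created, now=datetime(2030, 1, 1)))   # False
```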

Memory Types

Memory OS distinguishes between two fundamental types of memory:

  • Episodic: Events and interactions ("User asked about pricing on January 5th")
  • Semantic: Facts and knowledge ("User prefers dark mode, works in finance")
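This distinction can be modeled as a field on the memory record. A minimal sketch: the field name `memory_nature` matches the API examples later on this page, but the class itself is illustrative, not the SDK's.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Literal

@dataclass
class Memory:
    content: str
    memory_nature: Literal["episodic", "semantic"]
    tier: Literal["short", "medium", "long"] = "medium"
    created_at: datetime = field(default_factory=datetime.utcnow)

# An event tied to a point in time -> episodic
event = Memory("User asked about pricing on January 5th", "episodic")

# A durable fact about the user -> semantic, stored long-term
fact = Memory("User prefers dark mode, works in finance", "semantic", tier="long")

print(event.memory_nature, fact.tier)  # episodic long
```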

Intelligent Retrieval

The relevance scoring algorithm considers:

  1. Semantic similarity to query (40%)
  2. Recency of memory (20%)
  3. Importance score (15%)
  4. Access frequency (10%)
  5. User feedback (10%)
  6. Entity co-occurrence (5%)
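The weighted combination follows directly from the percentages above. In this sketch each factor input is assumed to be pre-normalized to [0, 1], which is an illustrative assumption rather than the documented internals.

```python
# Weights taken from the six factors listed above.
WEIGHTS = {
    "similarity": 0.40,
    "recency": 0.20,
    "importance": 0.15,
    "frequency": 0.10,
    "feedback": 0.10,
    "cooccurrence": 0.05,
}

def relevance(factors):
    """Combine per-factor scores (each assumed in [0, 1]) into one score."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

score = relevance({
    "similarity": 0.9,
    "recency": 0.5,
    "importance": 1.0,
    "frequency": 0.2,
    "feedback": 0.0,
    "cooccurrence": 1.0,
})
print(round(score, 3))  # 0.68
```

Because the weights sum to 1.0, a memory that scores perfectly on every factor gets a relevance of exactly 1.0.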

Use Cases

AI Agents

Give your agents persistent memory across sessions. Remember user preferences, past decisions, and learned context.

Chatbots

Build chatbots that remember conversation history and user preferences. Create personalized experiences that improve over time.

Personalization

Store user preferences, behaviors, and patterns. Retrieve relevant context to personalize responses and recommendations.

RAG Enhancement

Augment your retrieval-augmented generation with semantic memory. Combine document retrieval with conversational context.
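One way to combine the two sources, sketched under the assumption that document chunks and memory context each arrive as pre-ranked lists of strings (this prompt layout is illustrative, not the SDK's actual interface):

```python
def build_rag_prompt(question, doc_chunks, memory_context):
    """Interleave conversational memory with retrieved documents in one prompt."""
    parts = ["Known about this user:"]
    parts += [f"- {m}" for m in memory_context]
    parts.append("\nRelevant documents:")
    parts += [f"- {d}" for d in doc_chunks]
    parts.append(f"\nQuestion: {question}")
    return "\n".join(parts)

prompt = build_rag_prompt(
    "Which plan fits me?",
    doc_chunks=["Pro plan includes SSO and audit logs"],
    memory_context=["User works in finance", "User asked about pricing"],
)
print(prompt)
```

Putting user memory ahead of the documents lets the model ground its answer in who it is talking to before it weighs the retrieved material.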

Quick Example

JavaScript
import { MemoryOS } from '@memory-os/sdk';

const memory = new MemoryOS({ apiKey: 'your-api-key' });

// Store a memory
await memory.create({
  content: "User prefers dark mode and works in finance",
  tier: "long",
  memory_nature: "semantic"
});

// Search memories
const results = await memory.search({
  query: "What are the user's preferences?",
  limit: 5
});

// Get context for LLM
const context = await memory.getContext({
  query: "Help the user with their request",
  max_tokens: 2000
});

console.log(context.context);
// "User prefers dark mode and works in finance..."

Python
from memoryos import MemoryOS

memory = MemoryOS(api_key="your-api-key")

# Store a memory
memory.create(
    content="User prefers dark mode and works in finance",
    tier="long",
    memory_nature="semantic"
)

# Search memories
results = memory.search(
    query="What are the user's preferences?",
    limit=5
)

# Get context for LLM
context = memory.get_context(
    query="Help the user with their request",
    max_tokens=2000
)

print(context["context"])
# "User prefers dark mode and works in finance..."

Bash
# Store a memory
curl -X POST https://api.mymemoryos.com/v1/memories \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "User prefers dark mode and works in finance",
    "tier": "long",
    "memory_nature": "semantic"
  }'

# Search memories
curl -X POST https://api.mymemoryos.com/v1/search \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What are the user'\''s preferences?",
    "limit": 5
  }'

# Get context for LLM
curl -X POST https://api.mymemoryos.com/v1/context \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Help the user with their request",
    "max_tokens": 2000
  }'
