Memory OS

Building AI Agents with Memory

AI agents become significantly more capable when they can remember past interactions, learn from feedback, and maintain context across sessions. This guide covers memory patterns for building intelligent, stateful AI agents with Memory OS.

Why AI Agents Need Persistent Memory

Traditional AI agents operate in a stateless manner: each conversation starts fresh with no knowledge of previous interactions. This creates several limitations:

| Problem | Without Memory | With Memory OS |
|---|---|---|
| User preferences | User repeats preferences every session | Agent remembers and applies preferences automatically |
| Context continuity | "What were we discussing yesterday?" fails | Agent recalls previous conversations seamlessly |
| Learning | Same mistakes repeated | Agent learns from corrections and feedback |
| Personalization | Generic responses | Tailored responses based on history |
| Task continuity | Multi-session tasks lose progress | Agent picks up where it left off |

Memory Patterns for Agents

Pattern 1: Conversation Memory

Store and retrieve conversation context to maintain coherent multi-turn dialogues.

JavaScript
import { MemoryOS } from '@memory-os/sdk';
import OpenAI from 'openai';

const memory = new MemoryOS({ apiKey: process.env.MEMORY_OS_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

class ConversationalAgent {
  constructor(userId, sessionId) {
    this.userId = userId;
    this.sessionId = sessionId;
    this.conversationHistory = [];
  }

  async chat(userMessage) {
    // 1. Retrieve relevant context from memory
    const context = await memory.getContext({
      query: userMessage,
      max_tokens: 2000
    });

    // 2. Get recent conversation history from this session
    const recentHistory = await memory.search({
      query: `conversation with user ${this.userId}`,
      tier: 'short',
      limit: 10,
      threshold: 0.5
    });

    // 3. Build the prompt with memory context
    const systemPrompt = this.buildSystemPrompt(context, recentHistory);

    // 4. Generate response
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: systemPrompt },
        ...this.conversationHistory,
        { role: 'user', content: userMessage }
      ]
    });

    const assistantMessage = completion.choices[0].message.content;

    // 5. Store this turn in memory
    await this.storeConversationTurn(userMessage, assistantMessage);

    // 6. Update local history
    this.conversationHistory.push(
      { role: 'user', content: userMessage },
      { role: 'assistant', content: assistantMessage }
    );

    return assistantMessage;
  }

  buildSystemPrompt(context, recentHistory) {
    const historyText = recentHistory.results
      .map(r => r.content)
      .join('\n');

    return `You are a helpful AI assistant with memory capabilities.

## User Context
${context.context || 'No prior context available.'}

## Recent Conversation
${historyText || 'This is the start of a new conversation.'}

## Instructions
- Use the context above to provide personalized responses
- Reference previous conversations when relevant
- If you learn something new about the user, acknowledge it
- Ask clarifying questions if context is unclear`;
  }

  async storeConversationTurn(userMessage, assistantMessage) {
    // Store as short-term memory
    await memory.memories.create({
      content: `User: ${userMessage}\nAssistant: ${assistantMessage}`,
      tier: 'short',
      content_type: 'conversation',
      memory_nature: 'episodic',
      metadata: {
        user_id: this.userId,
        session_id: this.sessionId,
        timestamp: new Date().toISOString()
      }
    });
  }

  async extractAndStoreInsights(conversation) {
    // Use LLM to extract key facts from conversation
    const extraction = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: `Extract key facts about the user from this conversation.
          Return a JSON array of facts. Each fact should be a single statement.
          Only include information explicitly stated or strongly implied.
          Example: ["User prefers Python over JavaScript", "User works in healthcare"]`
        },
        { role: 'user', content: conversation }
      ]
    });

    try {
      const facts = JSON.parse(extraction.choices[0].message.content);

      // Store each fact as a long-term memory
      for (const fact of facts) {
        await memory.memories.create({
          content: fact,
          tier: 'long',
          content_type: 'fact',
          memory_nature: 'semantic',
          importance_score: 0.7,
          metadata: {
            user_id: this.userId,
            extracted_from: 'conversation',
            extraction_date: new Date().toISOString()
          }
        });
      }
    } catch (e) {
      console.error('Failed to extract insights:', e);
    }
  }
}

// Usage
const agent = new ConversationalAgent('user_123', 'session_abc');
const response = await agent.chat("Can you help me with my Python project?");
console.log(response);
Python
import os
import json
from datetime import datetime, timezone
from typing import List, Dict, Optional
from memoryos import MemoryOS
from openai import OpenAI

memory = MemoryOS(api_key=os.environ["MEMORY_OS_API_KEY"])
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


class ConversationalAgent:
    def __init__(self, user_id: str, session_id: str):
        self.user_id = user_id
        self.session_id = session_id
        self.conversation_history: List[Dict[str, str]] = []

    def chat(self, user_message: str) -> str:
        # 1. Retrieve relevant context from memory
        context = memory.get_context(
            query=user_message,
            max_tokens=2000
        )

        # 2. Get recent conversation history from this session
        recent_history = memory.search(
            query=f"conversation with user {self.user_id}",
            tier="short",
            limit=10,
            threshold=0.5
        )

        # 3. Build the prompt with memory context
        system_prompt = self._build_system_prompt(context, recent_history)

        # 4. Generate response
        messages = [
            {"role": "system", "content": system_prompt},
            *self.conversation_history,
            {"role": "user", "content": user_message}
        ]

        completion = openai_client.chat.completions.create(
            model="gpt-4",
            messages=messages
        )

        assistant_message = completion.choices[0].message.content

        # 5. Store this turn in memory
        self._store_conversation_turn(user_message, assistant_message)

        # 6. Update local history
        self.conversation_history.extend([
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": assistant_message}
        ])

        return assistant_message

    def _build_system_prompt(self, context: Dict, recent_history: Dict) -> str:
        history_text = "\n".join(
            r["content"] for r in recent_history.get("results", [])
        )

        return f"""You are a helpful AI assistant with memory capabilities.

## User Context
{context.get('context', 'No prior context available.')}

## Recent Conversation
{history_text or 'This is the start of a new conversation.'}

## Instructions
- Use the context above to provide personalized responses
- Reference previous conversations when relevant
- If you learn something new about the user, acknowledge it
- Ask clarifying questions if context is unclear"""

    def _store_conversation_turn(self, user_message: str, assistant_message: str):
        memory.memories.create(
            content=f"User: {user_message}\nAssistant: {assistant_message}",
            tier="short",
            content_type="conversation",
            memory_nature="episodic",
            metadata={
                "user_id": self.user_id,
                "session_id": self.session_id,
                "timestamp": datetime.utcnow().isoformat()
            }
        )

    def extract_and_store_insights(self, conversation: str):
        """Use LLM to extract key facts from conversation."""
        extraction = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[
                {
                    "role": "system",
                    "content": """Extract key facts about the user from this conversation.
                    Return a JSON array of facts. Each fact should be a single statement.
                    Only include information explicitly stated or strongly implied.
                    Example: ["User prefers Python over JavaScript", "User works in healthcare"]"""
                },
                {"role": "user", "content": conversation}
            ]
        )

        try:
            facts = json.loads(extraction.choices[0].message.content)

            for fact in facts:
                memory.memories.create(
                    content=fact,
                    tier="long",
                    content_type="fact",
                    memory_nature="semantic",
                    importance_score=0.7,
                    metadata={
                        "user_id": self.user_id,
                        "extracted_from": "conversation",
                        "extraction_date": datetime.utcnow().isoformat()
                    }
                )
        except json.JSONDecodeError as e:
            print(f"Failed to extract insights: {e}")


# Usage
agent = ConversationalAgent("user_123", "session_abc")
response = agent.chat("Can you help me with my Python project?")
print(response)
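
The insight extraction step is easiest to run when a session ends, so durable facts survive after short-term conversation memories age out. A minimal sketch that continues the usage above (the transcript format is an assumption; any readable rendering of the turns works):

# At the end of a session, flatten the transcript and extract durable facts
transcript = "\n".join(
    f"{turn['role'].capitalize()}: {turn['content']}"
    for turn in agent.conversation_history
)
agent.extract_and_store_insights(transcript)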

Pattern 2: Task Memory

For agents that handle multi-step tasks, track progress and state across interactions.

JavaScript
import { MemoryOS } from '@memory-os/sdk';
import OpenAI from 'openai';

const memory = new MemoryOS({ apiKey: process.env.MEMORY_OS_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

class TaskAgent {
  constructor(userId) {
    this.userId = userId;
  }

  async startTask(taskDescription) {
    // Create task memory
    const taskMemory = await memory.memories.create({
      content: `Task: ${taskDescription}`,
      tier: 'medium',
      content_type: 'document',
      memory_nature: 'episodic',
      metadata: {
        user_id: this.userId,
        task_status: 'in_progress',
        task_type: 'user_task',
        steps_completed: [],
        created_at: new Date().toISOString()
      }
    });

    // Plan the task
    const plan = await this.planTask(taskDescription);

    // Store the plan
    await memory.memories.create({
      content: `Task plan for "${taskDescription}":\n${plan.steps.map((s, i) => `${i + 1}. ${s}`).join('\n')}`,
      tier: 'medium',
      content_type: 'document',
      memory_nature: 'semantic',
      parent_memory_id: taskMemory.id,
      metadata: {
        user_id: this.userId,
        task_id: taskMemory.id,
        total_steps: plan.steps.length
      }
    });

    return {
      taskId: taskMemory.id,
      plan: plan.steps
    };
  }

  async planTask(taskDescription) {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: `You are a task planning assistant. Break down tasks into clear, actionable steps.
          Return a JSON object with a "steps" array containing the ordered steps.`
        },
        { role: 'user', content: `Plan this task: ${taskDescription}` }
      ],
      response_format: { type: 'json_object' }
    });

    return JSON.parse(completion.choices[0].message.content);
  }

  async executeStep(taskId, stepNumber, userInput = null) {
    // Retrieve task context
    const taskContext = await memory.search({
      query: `task ${taskId}`,
      limit: 10,
      threshold: 0.5
    });

    // Get the task plan
    const planMemory = taskContext.results.find(r =>
      r.content.includes('Task plan for')
    );

    if (!planMemory) {
      throw new Error('Task plan not found');
    }

    // Execute the step
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: `You are executing step ${stepNumber} of a task.

Task context:
${taskContext.results.map(r => r.content).join('\n---\n')}

${userInput ? `User provided: ${userInput}` : ''}

Complete this step and return a JSON object with:
- "result": description of what was done
- "next_action": what the user should do next (if anything)
- "completed": boolean indicating if step is done`
        }
      ],
      response_format: { type: 'json_object' }
    });

    const stepResult = JSON.parse(completion.choices[0].message.content);

    // Store step completion
    await memory.memories.create({
      content: `Step ${stepNumber} completed: ${stepResult.result}`,
      tier: 'medium',
      content_type: 'event',
      memory_nature: 'episodic',
      parent_memory_id: taskId,
      metadata: {
        user_id: this.userId,
        task_id: taskId,
        step_number: stepNumber,
        completed_at: new Date().toISOString()
      }
    });

    return stepResult;
  }

  async resumeTask(userId) {
    // Find incomplete tasks for this user
    const incompleteTasks = await memory.search({
      query: `task in_progress user ${userId}`,
      tier: 'medium',
      limit: 5,
      threshold: 0.5
    });

    const tasks = incompleteTasks.results.filter(r =>
      r.metadata?.task_status === 'in_progress'
    );

    if (tasks.length === 0) {
      return { message: 'No incomplete tasks found' };
    }

    // Get the most recent incomplete task
    const task = tasks[0];

    // Find completed steps
    const taskHistory = await memory.search({
      query: `step completed task ${task.id}`,
      limit: 20,
      threshold: 0.5
    });

    const completedSteps = taskHistory.results
      .filter(r => r.metadata?.task_id === task.id)
      .map(r => r.metadata?.step_number)
      .filter(Boolean);

    return {
      taskId: task.id,
      taskDescription: task.content,
      completedSteps,
      nextStep: Math.max(...completedSteps, 0) + 1
    };
  }

  async completeTask(taskId) {
    // Update task status
    await memory.memories.update(taskId, {
      metadata: {
        task_status: 'completed',
        completed_at: new Date().toISOString()
      }
    });

    // Store completion event
    await memory.memories.create({
      content: `Task ${taskId} completed successfully`,
      tier: 'long',
      content_type: 'event',
      memory_nature: 'episodic',
      importance_score: 0.8,
      metadata: {
        user_id: this.userId,
        task_id: taskId,
        event_type: 'task_completion'
      }
    });
  }
}

// Usage
const agent = new TaskAgent('user_123');

// Start a new task
const task = await agent.startTask('Set up a new React project with TypeScript');
console.log('Task created:', task.taskId);
console.log('Steps:', task.plan);

// Execute steps
const step1Result = await agent.executeStep(task.taskId, 1);
console.log('Step 1:', step1Result);

// Resume a task later
const resumedTask = await agent.resumeTask('user_123');
console.log('Resumed task:', resumedTask);
Python
import os
import json
from datetime import datetime, timezone
from typing import Dict, List, Optional, Any
from memoryos import MemoryOS
from openai import OpenAI

memory = MemoryOS(api_key=os.environ["MEMORY_OS_API_KEY"])
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


class TaskAgent:
    def __init__(self, user_id: str):
        self.user_id = user_id

    def start_task(self, task_description: str) -> Dict[str, Any]:
        # Create task memory
        task_memory = memory.memories.create(
            content=f"Task: {task_description}",
            tier="medium",
            content_type="document",
            memory_nature="episodic",
            metadata={
                "user_id": self.user_id,
                "task_status": "in_progress",
                "task_type": "user_task",
                "steps_completed": [],
                "created_at": datetime.utcnow().isoformat()
            }
        )

        # Plan the task
        plan = self._plan_task(task_description)

        # Store the plan
        steps_text = "\n".join(f"{i+1}. {s}" for i, s in enumerate(plan["steps"]))
        memory.memories.create(
            content=f'Task plan for "{task_description}":\n{steps_text}',
            tier="medium",
            content_type="document",
            memory_nature="semantic",
            parent_memory_id=task_memory["id"],
            metadata={
                "user_id": self.user_id,
                "task_id": task_memory["id"],
                "total_steps": len(plan["steps"])
            }
        )

        return {
            "task_id": task_memory["id"],
            "plan": plan["steps"]
        }

    def _plan_task(self, task_description: str) -> Dict:
        completion = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[
                {
                    "role": "system",
                    "content": """You are a task planning assistant. Break down tasks into clear, actionable steps.
                    Return a JSON object with a "steps" array containing the ordered steps."""
                },
                {"role": "user", "content": f"Plan this task: {task_description}"}
            ],
            response_format={"type": "json_object"}
        )

        return json.loads(completion.choices[0].message.content)

    def execute_step(
        self,
        task_id: str,
        step_number: int,
        user_input: Optional[str] = None
    ) -> Dict:
        # Retrieve task context
        task_context = memory.search(
            query=f"task {task_id}",
            limit=10,
            threshold=0.5
        )

        # Get the task plan
        plan_memory = next(
            (r for r in task_context["results"] if "Task plan for" in r["content"]),
            None
        )

        if not plan_memory:
            raise ValueError("Task plan not found")

        # Execute the step
        context_text = "\n---\n".join(r["content"] for r in task_context["results"])
        user_info = f"User provided: {user_input}" if user_input else ""

        completion = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[
                {
                    "role": "system",
                    "content": f"""You are executing step {step_number} of a task.

Task context:
{context_text}

{user_info}

Complete this step and return a JSON object with:
- "result": description of what was done
- "next_action": what the user should do next (if anything)
- "completed": boolean indicating if step is done"""
                }
            ],
            response_format={"type": "json_object"}
        )

        step_result = json.loads(completion.choices[0].message.content)

        # Store step completion
        memory.memories.create(
            content=f"Step {step_number} completed: {step_result['result']}",
            tier="medium",
            content_type="event",
            memory_nature="episodic",
            parent_memory_id=task_id,
            metadata={
                "user_id": self.user_id,
                "task_id": task_id,
                "step_number": step_number,
                "completed_at": datetime.utcnow().isoformat()
            }
        )

        return step_result

    def resume_task(self, user_id: str) -> Dict:
        # Find incomplete tasks
        incomplete_tasks = memory.search(
            query=f"task in_progress user {user_id}",
            tier="medium",
            limit=5,
            threshold=0.5
        )

        tasks = [
            r for r in incomplete_tasks["results"]
            if r.get("metadata", {}).get("task_status") == "in_progress"
        ]

        if not tasks:
            return {"message": "No incomplete tasks found"}

        task = tasks[0]

        # Find completed steps
        task_history = memory.search(
            query=f"step completed task {task['id']}",
            limit=20,
            threshold=0.5
        )

        completed_steps = [
            r["metadata"]["step_number"]
            for r in task_history["results"]
            if r.get("metadata", {}).get("task_id") == task["id"]
            and r.get("metadata", {}).get("step_number")
        ]

        next_step = max(completed_steps, default=0) + 1

        return {
            "task_id": task["id"],
            "task_description": task["content"],
            "completed_steps": completed_steps,
            "next_step": next_step
        }

    def complete_task(self, task_id: str):
        # Update task status
        memory.memories.update(
            task_id,
            metadata={
                "task_status": "completed",
                "completed_at": datetime.utcnow().isoformat()
            }
        )

        # Store completion event
        memory.memories.create(
            content=f"Task {task_id} completed successfully",
            tier="long",
            content_type="event",
            memory_nature="episodic",
            importance_score=0.8,
            metadata={
                "user_id": self.user_id,
                "task_id": task_id,
                "event_type": "task_completion"
            }
        )


# Usage
agent = TaskAgent("user_123")

# Start a new task
task = agent.start_task("Set up a new React project with TypeScript")
print(f"Task created: {task['task_id']}")
print(f"Steps: {task['plan']}")

# Execute steps
step1_result = agent.execute_step(task["task_id"], 1)
print(f"Step 1: {step1_result}")

# Resume a task later
resumed_task = agent.resume_task("user_123")
print(f"Resumed task: {resumed_task}")

Pattern 3: Learning from Feedback

Build agents that improve over time by learning from user corrections and feedback.

JavaScript
import { MemoryOS } from '@memory-os/sdk';
import OpenAI from 'openai';

const memory = new MemoryOS({ apiKey: process.env.MEMORY_OS_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

class LearningAgent {
  constructor(userId) {
    this.userId = userId;
  }

  async processCorrection(originalResponse, correction, context) {
    // Analyze what was wrong
    const analysis = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: `Analyze this correction and extract a learning principle.

Original response: ${originalResponse}
User correction: ${correction}
Context: ${context}

Return a JSON object with:
- "learning": A general principle to remember (e.g., "User prefers formal language")
- "category": Category of learning (preference, fact, style, behavior)
- "confidence": How confident we should be (0-1)
- "apply_always": Whether to apply this always or situationally`
        }
      ],
      response_format: { type: 'json_object' }
    });

    const learning = JSON.parse(analysis.choices[0].message.content);

    // Store the learning as a long-term memory
    await memory.memories.create({
      content: learning.learning,
      tier: 'long',
      content_type: 'fact',
      memory_nature: 'semantic',
      importance_score: learning.confidence,
      metadata: {
        user_id: this.userId,
        category: learning.category,
        source: 'user_correction',
        apply_always: learning.apply_always,
        original_context: context,
        learned_at: new Date().toISOString()
      }
    });

    // Store the correction event for future reference
    await memory.memories.create({
      content: `User corrected: "${originalResponse}" to prefer: "${correction}"`,
      tier: 'medium',
      content_type: 'event',
      memory_nature: 'episodic',
      metadata: {
        user_id: this.userId,
        event_type: 'correction',
        timestamp: new Date().toISOString()
      }
    });

    return learning;
  }

  async recordFeedback(memoryId, feedbackType, details = null) {
    // Record feedback on a specific interaction
    const feedbackTypes = {
      'thumbs_up': { score_adjustment: 0.1, type: 'positive' },
      'thumbs_down': { score_adjustment: -0.1, type: 'negative' },
      'helpful': { score_adjustment: 0.15, type: 'positive' },
      'not_helpful': { score_adjustment: -0.15, type: 'negative' },
      'incorrect': { score_adjustment: -0.2, type: 'correction' }
    };

    const feedback = feedbackTypes[feedbackType];
    if (!feedback) {
      throw new Error(`Unknown feedback type: ${feedbackType}`);
    }

    // Get the memory being rated
    const ratedMemory = await memory.memories.get(memoryId);

    // Update importance score based on feedback
    const newImportance = Math.max(0, Math.min(1,
      (ratedMemory.importance_score || 0.5) + feedback.score_adjustment
    ));

    await memory.memories.update(memoryId, {
      importance_score: newImportance
    });

    // Store the feedback event
    await memory.memories.create({
      content: `Feedback: ${feedbackType} on memory about "${ratedMemory.content.substring(0, 100)}..."`,
      tier: 'short',
      content_type: 'event',
      memory_nature: 'episodic',
      metadata: {
        user_id: this.userId,
        rated_memory_id: memoryId,
        feedback_type: feedbackType,
        feedback_details: details,
        timestamp: new Date().toISOString()
      }
    });

    return { memoryId, newImportance, feedbackType };
  }

  async applyLearnings(query) {
    // Retrieve relevant learnings
    const learnings = await memory.search({
      query: `learning preference style ${query}`,
      tier: 'long',
      limit: 10,
      threshold: 0.5
    });

    // Filter to only include actual learnings
    const relevantLearnings = learnings.results.filter(r =>
      r.metadata?.source === 'user_correction' ||
      r.metadata?.category
    );

    // Build a learnings context
    const learningsContext = relevantLearnings
      .map(l => `- ${l.content}`)
      .join('\n');

    return learningsContext;
  }

  async generateWithLearnings(query) {
    // Get context and learnings
    const [context, learnings] = await Promise.all([
      memory.getContext({ query, max_tokens: 1500 }),
      this.applyLearnings(query)
    ]);

    const systemPrompt = `You are a helpful assistant that has learned the following about this user:

## User-Specific Learnings
${learnings || 'No specific learnings yet.'}

## General Context
${context.context || 'No additional context.'}

Apply these learnings to provide a personalized response.`;

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: query }
      ]
    });

    return completion.choices[0].message.content;
  }
}

// Usage
const agent = new LearningAgent('user_123');

// Record a correction
await agent.processCorrection(
  "Here's the code in JavaScript...",
  "I prefer Python examples",
  "User asked for a code example"
);

// Record feedback
await agent.recordFeedback('memory_456', 'thumbs_up');

// Generate response using learnings
const response = await agent.generateWithLearnings('Show me how to sort a list');
console.log(response); // Will use Python based on learning
Python
import os
import json
from datetime import datetime, timezone
from typing import Dict, Optional
from memoryos import MemoryOS
from openai import OpenAI

memory = MemoryOS(api_key=os.environ["MEMORY_OS_API_KEY"])
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


class LearningAgent:
    def __init__(self, user_id: str):
        self.user_id = user_id

    def process_correction(
        self,
        original_response: str,
        correction: str,
        context: str
    ) -> Dict:
        # Analyze what was wrong
        analysis = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[
                {
                    "role": "system",
                    "content": f"""Analyze this correction and extract a learning principle.

Original response: {original_response}
User correction: {correction}
Context: {context}

Return a JSON object with:
- "learning": A general principle to remember (e.g., "User prefers formal language")
- "category": Category of learning (preference, fact, style, behavior)
- "confidence": How confident we should be (0-1)
- "apply_always": Whether to apply this always or situationally"""
                }
            ],
            response_format={"type": "json_object"}
        )

        learning = json.loads(analysis.choices[0].message.content)

        # Store the learning as a long-term memory
        memory.memories.create(
            content=learning["learning"],
            tier="long",
            content_type="fact",
            memory_nature="semantic",
            importance_score=learning["confidence"],
            metadata={
                "user_id": self.user_id,
                "category": learning["category"],
                "source": "user_correction",
                "apply_always": learning["apply_always"],
                "original_context": context,
                "learned_at": datetime.utcnow().isoformat()
            }
        )

        # Store the correction event
        memory.memories.create(
            content=f'User corrected: "{original_response}" to prefer: "{correction}"',
            tier="medium",
            content_type="event",
            memory_nature="episodic",
            metadata={
                "user_id": self.user_id,
                "event_type": "correction",
                "timestamp": datetime.utcnow().isoformat()
            }
        )

        return learning

    def record_feedback(
        self,
        memory_id: str,
        feedback_type: str,
        details: Optional[str] = None
    ) -> Dict:
        feedback_types = {
            "thumbs_up": {"score_adjustment": 0.1, "type": "positive"},
            "thumbs_down": {"score_adjustment": -0.1, "type": "negative"},
            "helpful": {"score_adjustment": 0.15, "type": "positive"},
            "not_helpful": {"score_adjustment": -0.15, "type": "negative"},
            "incorrect": {"score_adjustment": -0.2, "type": "correction"}
        }

        feedback = feedback_types.get(feedback_type)
        if not feedback:
            raise ValueError(f"Unknown feedback type: {feedback_type}")

        # Get the memory being rated
        rated_memory = memory.memories.get(memory_id)

        # Update importance score
        current_importance = rated_memory.get("importance_score", 0.5)
        new_importance = max(0, min(1,
            current_importance + feedback["score_adjustment"]
        ))

        memory.memories.update(memory_id, importance_score=new_importance)

        # Store the feedback event
        memory.memories.create(
            content=f'Feedback: {feedback_type} on memory about "{rated_memory["content"][:100]}..."',
            tier="short",
            content_type="event",
            memory_nature="episodic",
            metadata={
                "user_id": self.user_id,
                "rated_memory_id": memory_id,
                "feedback_type": feedback_type,
                "feedback_details": details,
                "timestamp": datetime.utcnow().isoformat()
            }
        )

        return {
            "memory_id": memory_id,
            "new_importance": new_importance,
            "feedback_type": feedback_type
        }

    def apply_learnings(self, query: str) -> str:
        learnings = memory.search(
            query=f"learning preference style {query}",
            tier="long",
            limit=10,
            threshold=0.5
        )

        relevant_learnings = [
            r for r in learnings["results"]
            if r.get("metadata", {}).get("source") == "user_correction"
            or r.get("metadata", {}).get("category")
        ]

        return "\n".join(f"- {l['content']}" for l in relevant_learnings)

    def generate_with_learnings(self, query: str) -> str:
        context = memory.get_context(query=query, max_tokens=1500)
        learnings = self.apply_learnings(query)

        system_prompt = f"""You are a helpful assistant that has learned the following about this user:

## User-Specific Learnings
{learnings or 'No specific learnings yet.'}

## General Context
{context.get('context', 'No additional context.')}

Apply these learnings to provide a personalized response."""

        completion = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": query}
            ]
        )

        return completion.choices[0].message.content


# Usage
agent = LearningAgent("user_123")

# Record a correction
agent.process_correction(
    "Here's the code in JavaScript...",
    "I prefer Python examples",
    "User asked for a code example"
)

# Record feedback
agent.record_feedback("memory_456", "thumbs_up")

# Generate response using learnings
response = agent.generate_with_learnings("Show me how to sort a list")
print(response)  # Will use Python based on learning

Example: Personal Assistant with Memory

A complete example of a personal assistant that remembers user preferences, schedules, and ongoing tasks.

JavaScript
import { MemoryOS } from '@memory-os/sdk';
import OpenAI from 'openai';

const memory = new MemoryOS({ apiKey: process.env.MEMORY_OS_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

class PersonalAssistant {
  constructor(userId) {
    this.userId = userId;
  }

  async initialize() {
    // Load user profile
    this.profile = await this.getOrCreateProfile();
    return this;
  }

  async getOrCreateProfile() {
    const profileSearch = await memory.search({
      query: `user profile ${this.userId}`,
      tier: 'long',
      limit: 1,
      threshold: 0.8
    });

    if (profileSearch.results.length > 0) {
      return profileSearch.results[0];
    }

    // Create default profile
    const profile = await memory.memories.create({
      content: `User profile for ${this.userId}. New user, preferences not yet established.`,
      tier: 'long',
      content_type: 'document',
      memory_nature: 'semantic',
      importance_score: 0.9,
      metadata: {
        user_id: this.userId,
        type: 'profile',
        created_at: new Date().toISOString()
      }
    });

    return profile;
  }

  async chat(message) {
    // Gather all relevant context
    const [
      generalContext,
      preferences,
      recentTasks,
      conversationHistory
    ] = await Promise.all([
      memory.getContext({ query: message, max_tokens: 1000 }),
      this.getPreferences(),
      this.getRecentTasks(),
      this.getConversationHistory()
    ]);

    const systemPrompt = this.buildSystemPrompt({
      generalContext: generalContext.context,
      preferences,
      recentTasks,
      conversationHistory
    });

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: message }
      ],
      functions: this.getAvailableFunctions(),
      function_call: 'auto'
    });

    const response = completion.choices[0];

    // Handle function calls
    if (response.message.function_call) {
      const result = await this.executeFunctionCall(response.message.function_call);
      return result;
    }

    // Store conversation
    await this.storeInteraction(message, response.message.content);

    // Extract and store any new information
    await this.extractNewInformation(message, response.message.content);

    return response.message.content;
  }

  async getPreferences() {
    const prefs = await memory.search({
      query: `user preferences ${this.userId}`,
      tier: 'long',
      limit: 10,
      threshold: 0.6
    });

    return prefs.results
      .filter(r => r.metadata?.category === 'preference')
      .map(r => r.content)
      .join('\n');
  }

  async getRecentTasks() {
    const tasks = await memory.search({
      query: `task todo ${this.userId}`,
      tier: 'medium',
      limit: 5,
      threshold: 0.5
    });

    return tasks.results
      .filter(r => r.metadata?.task_status === 'in_progress')
      .map(r => `- ${r.content}`)
      .join('\n');
  }

  async getConversationHistory() {
    const history = await memory.search({
      query: `conversation ${this.userId}`,
      tier: 'short',
      limit: 5,
      threshold: 0.5
    });

    return history.results
      .map(r => r.content)
      .join('\n---\n');
  }

  buildSystemPrompt({ generalContext, preferences, recentTasks, conversationHistory }) {
    return `You are a personal AI assistant with persistent memory.

## User Preferences
${preferences || 'No preferences recorded yet.'}

## Active Tasks
${recentTasks || 'No active tasks.'}

## Recent Conversation
${conversationHistory || 'This is the start of a new conversation.'}

## General Context
${generalContext || 'No additional context.'}

## Instructions
- Remember and apply user preferences
- Track ongoing tasks and remind when appropriate
- Be proactive about relevant information
- Learn from corrections and feedback
- Keep responses personalized and context-aware`;
  }

  getAvailableFunctions() {
    return [
      {
        name: 'remember_preference',
        description: 'Store a user preference for future reference',
        parameters: {
          type: 'object',
          properties: {
            preference: { type: 'string', description: 'The preference to remember' },
            category: { type: 'string', description: 'Category: communication, schedule, tools, etc.' }
          },
          required: ['preference']
        }
      },
      {
        name: 'create_task',
        description: 'Create a new task or reminder',
        parameters: {
          type: 'object',
          properties: {
            task: { type: 'string', description: 'The task description' },
            due_date: { type: 'string', description: 'Optional due date' },
            priority: { type: 'string', enum: ['low', 'medium', 'high'] }
          },
          required: ['task']
        }
      },
      {
        name: 'search_memory',
        description: 'Search through stored memories and information',
        parameters: {
          type: 'object',
          properties: {
            query: { type: 'string', description: 'What to search for' }
          },
          required: ['query']
        }
      }
    ];
  }

  async executeFunctionCall(functionCall) {
    const { name, arguments: args } = functionCall;
    const parsedArgs = JSON.parse(args);

    switch (name) {
      case 'remember_preference':
        await this.rememberPreference(parsedArgs.preference, parsedArgs.category);
        return `I'll remember that: "${parsedArgs.preference}"`;

      case 'create_task':
        await this.createTask(parsedArgs.task, parsedArgs.due_date, parsedArgs.priority);
        return `Task created: "${parsedArgs.task}"`;

      case 'search_memory':
        const results = await memory.search({
          query: parsedArgs.query,
          limit: 5,
          threshold: 0.6
        });
        return results.results.map(r => r.content).join('\n\n');

      default:
        return `Unknown function: ${name}`;
    }
  }

  async rememberPreference(preference, category = 'general') {
    await memory.memories.create({
      content: preference,
      tier: 'long',
      content_type: 'fact',
      memory_nature: 'semantic',
      importance_score: 0.8,
      metadata: {
        user_id: this.userId,
        category: 'preference',
        preference_type: category,
        recorded_at: new Date().toISOString()
      }
    });
  }

  async createTask(task, dueDate = null, priority = 'medium') {
    await memory.memories.create({
      content: `Task: ${task}`,
      tier: 'medium',
      content_type: 'document',
      memory_nature: 'episodic',
      importance_score: priority === 'high' ? 0.9 : priority === 'medium' ? 0.6 : 0.4,
      metadata: {
        user_id: this.userId,
        task_status: 'in_progress',
        due_date: dueDate,
        priority,
        created_at: new Date().toISOString()
      }
    });
  }

  async storeInteraction(userMessage, assistantResponse) {
    await memory.memories.create({
      content: `User: ${userMessage}\nAssistant: ${assistantResponse}`,
      tier: 'short',
      content_type: 'conversation',
      memory_nature: 'episodic',
      metadata: {
        user_id: this.userId,
        timestamp: new Date().toISOString()
      }
    });
  }

  async extractNewInformation(userMessage, assistantResponse) {
    // Use LLM to extract any factual information worth remembering
    const extraction = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: `Extract any new factual information about the user from this exchange that would be worth remembering long-term.

Return a JSON object with:
- "facts": Array of facts to remember (empty if none)
- "preferences": Array of preferences mentioned (empty if none)

Only include information explicitly stated or strongly implied.`
        },
        {
          role: 'user',
          content: `User: ${userMessage}\nAssistant: ${assistantResponse}`
        }
      ],
      response_format: { type: 'json_object' }
    });

    const extracted = JSON.parse(extraction.choices[0].message.content);

    // Store extracted facts
    for (const fact of extracted.facts || []) {
      await memory.memories.create({
        content: fact,
        tier: 'long',
        content_type: 'fact',
        memory_nature: 'semantic',
        importance_score: 0.6,
        metadata: {
          user_id: this.userId,
          source: 'conversation_extraction',
          extracted_at: new Date().toISOString()
        }
      });
    }

    // Store extracted preferences
    for (const pref of extracted.preferences || []) {
      await this.rememberPreference(pref, 'auto_extracted');
    }
  }
}

// Usage
async function main() {
  const assistant = await new PersonalAssistant('user_123').initialize();

  // Conversation with memory
  console.log(await assistant.chat("Hi! I prefer working in the mornings."));
  console.log(await assistant.chat("Can you remind me to review the quarterly report?"));
  console.log(await assistant.chat("What do you know about my preferences?"));
}

main().catch(console.error);

Example: Customer Support Agent

A support agent that remembers customer history, preferences, and past issues.

JavaScript
import { MemoryOS } from '@memory-os/sdk';
import OpenAI from 'openai';

const memory = new MemoryOS({ apiKey: process.env.MEMORY_OS_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

class CustomerSupportAgent {
  constructor(customerId) {
    this.customerId = customerId;
  }

  async handleInquiry(inquiry) {
    // Load customer context
    const customerContext = await this.getCustomerContext();

    // Check for similar past issues
    const pastIssues = await this.findSimilarIssues(inquiry);

    // Get product knowledge
    const productContext = await this.getProductContext(inquiry);

    const systemPrompt = `You are a helpful customer support agent.

## Customer Information
${customerContext.summary}

## Account Status
- Customer since: ${customerContext.customerSince}
- Plan: ${customerContext.plan}
- Previous interactions: ${customerContext.interactionCount}

## Past Similar Issues
${pastIssues.length > 0 ? pastIssues.map(i => `- ${i}`).join('\n') : 'No similar past issues found.'}

## Relevant Product Information
${productContext}

## Guidelines
- Be empathetic and helpful
- Reference past interactions when relevant
- Personalize based on customer history
- Escalate complex issues appropriately
- Update customer records after resolution`;

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: inquiry }
      ]
    });

    const response = completion.choices[0].message.content;

    // Log this interaction
    await this.logInteraction(inquiry, response);

    return response;
  }

  async getCustomerContext() {
    const context = await memory.getContext({
      query: `customer ${this.customerId} profile history preferences`,
      max_tokens: 1000
    });

    // Search for customer profile
    const profile = await memory.search({
      query: `customer profile ${this.customerId}`,
      tier: 'long',
      limit: 1,
      threshold: 0.7
    });

    // Get recent interactions
    const interactions = await memory.search({
      query: `support interaction ${this.customerId}`,
      tier: 'medium',
      limit: 5,
      threshold: 0.5
    });

    return {
      summary: context.context || 'New customer, no prior history.',
      customerSince: profile.results[0]?.metadata?.customer_since || 'Unknown',
      plan: profile.results[0]?.metadata?.plan || 'Unknown',
      interactionCount: interactions.results.length
    };
  }

  async findSimilarIssues(inquiry) {
    // Search for similar past issues
    const similar = await memory.search({
      query: `issue problem ${inquiry}`,
      tier: 'long',
      limit: 5,
      threshold: 0.7
    });

    // Filter to only resolved issues
    return similar.results
      .filter(r => r.metadata?.resolution)
      .map(r => `${r.content} - Resolution: ${r.metadata.resolution}`);
  }

  async getProductContext(inquiry) {
    // Search product documentation/knowledge base
    const productInfo = await memory.search({
      query: inquiry,
      tier: 'long',
      limit: 3,
      threshold: 0.6
    });

    return productInfo.results
      .filter(r => r.metadata?.type === 'product_info')
      .map(r => r.content)
      .join('\n\n');
  }

  async logInteraction(inquiry, response) {
    // Store as medium-term memory
    await memory.memories.create({
      content: `Support inquiry: ${inquiry}\n\nResponse: ${response}`,
      tier: 'medium',
      content_type: 'conversation',
      memory_nature: 'episodic',
      metadata: {
        customer_id: this.customerId,
        type: 'support_interaction',
        timestamp: new Date().toISOString()
      }
    });
  }

  async resolveIssue(issueDescription, resolution) {
    // Store the resolved issue for future reference
    await memory.memories.create({
      content: issueDescription,
      tier: 'long',
      content_type: 'document',
      memory_nature: 'semantic',
      importance_score: 0.8,
      metadata: {
        customer_id: this.customerId,
        type: 'resolved_issue',
        resolution,
        resolved_at: new Date().toISOString()
      }
    });
  }

  async updateCustomerPreference(preference, category) {
    await memory.memories.create({
      content: preference,
      tier: 'long',
      content_type: 'fact',
      memory_nature: 'semantic',
      importance_score: 0.7,
      metadata: {
        customer_id: this.customerId,
        type: 'customer_preference',
        category,
        updated_at: new Date().toISOString()
      }
    });
  }
}

// Usage
const agent = new CustomerSupportAgent('customer_789');

const response = await agent.handleInquiry(
  "I'm having trouble with my subscription. It says expired but I renewed last week."
);
console.log(response);

// After resolving
await agent.resolveIssue(
  "Subscription showing as expired despite renewal",
  "Payment was pending. Manually confirmed and activated subscription."
);

Best Practices for Agent Memory Design

1. Memory Tier Selection

| Information Type | Tier | Example |
|---|---|---|
| Current conversation | Short | "User is asking about Python" |
| Active tasks | Medium | "Working on quarterly report" |
| User preferences | Long | "Prefers concise responses" |
| Corrections/learnings | Long | "User prefers TypeScript over JavaScript" |
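
One way to apply this mapping is to choose the tier at write time based on the kind of information being stored. A minimal sketch (the routing table and store helper are hypothetical; the create fields mirror the patterns above):

import os
from memoryos import MemoryOS

memory = MemoryOS(api_key=os.environ["MEMORY_OS_API_KEY"])

# Hypothetical routing table: tier, content_type, and memory_nature per kind
TIER_BY_KIND = {
    "conversation_turn": ("short", "conversation", "episodic"),
    "active_task": ("medium", "document", "episodic"),
    "preference": ("long", "fact", "semantic"),
    "learning": ("long", "fact", "semantic"),
}

def store(kind: str, content: str, **metadata):
    tier, content_type, memory_nature = TIER_BY_KIND[kind]
    memory.memories.create(
        content=content,
        tier=tier,
        content_type=content_type,
        memory_nature=memory_nature,
        metadata=metadata,
    )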

2. Memory Hygiene

  • Regularly promote important short-term memories to long-term (see the sketch after this list)
  • Clean up completed tasks
  • Merge duplicate or conflicting memories
  • Update stale information
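
Promotion can be as simple as re-tiering a memory that keeps proving useful. A minimal sketch using the memory client from the examples above, assuming memory.memories.update accepts a tier change the way it accepts importance_score and metadata updates elsewhere in this guide:

def promote_to_long_term(memory_id: str):
    # Assumption: update() accepts a tier change; this guide only shows
    # importance_score and metadata updates
    memory.memories.update(memory_id, tier="long", importance_score=0.8)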

3. Context Window Management

  • Prioritize recent and relevant memories
  • Use token budgets effectively (see the sketch after this list)
  • Balance breadth versus depth of context
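
One way to budget tokens is to split the budget across context sources, spending most of it on directly relevant memories and reserving the rest for recent conversation. A sketch using the memory client from the examples above (the 70/30 split is an arbitrary choice, and the search limit is a rough proxy for the remaining budget):

def build_budgeted_context(query: str, budget: int = 2000) -> str:
    # Spend most of the budget on relevant long-lived context...
    relevant = memory.get_context(query=query, max_tokens=int(budget * 0.7))
    # ...and keep a small share for recent conversation turns
    recent = memory.search(query=query, tier="short", limit=5, threshold=0.5)
    recent_text = "\n".join(r["content"] for r in recent["results"])
    return f"{relevant.get('context', '')}\n\n{recent_text}"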

4. Privacy Considerations

  • Be transparent about what's remembered
  • Provide memory viewing and deletion controls (sketched below)
  • Don't store sensitive information unnecessarily
  • Respect user preferences about memory retention
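
These controls can be exposed as ordinary agent commands. A sketch of a "what do you remember about me?" view built on search, using the memory client from the examples above; the delete call is an assumption, since this guide only demonstrates create, get, update, and search:

def list_user_memories(user_id: str) -> list:
    # Surface stored memories so the user can review what the agent knows
    results = memory.search(query=f"user {user_id}", limit=50, threshold=0.3)
    return [
        r for r in results["results"]
        if r.get("metadata", {}).get("user_id") == user_id
    ]

def forget(memory_id: str):
    # Assumption: the SDK exposes a delete method; it is not shown in this guide
    memory.memories.delete(memory_id)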
