Why does my AI Employee "forget" information between turns? This guide explains the stateless nature of Ema workflows and shows you how to design conversations that maintain context across multiple turns—without any actual memory.

The Problem You'll Encounter

You're building a conversational AI and run into this:

Turn 1: Bot asks "Who am I speaking with?"
Turn 2: User says "I'm Michael Thompson"
Turn 3: User says "What's my portfolio looking like?"

Question: How does the AI "remember" on Turn 3 that it's talking to Michael?

Intuition says: Store the name somewhere, retrieve it later.

Reality: That's not how Ema workflows work.


The Surprising Truth: There Is No Memory

Each workflow execution is completely stateless. When a user sends a message:

  1. A fresh workflow execution starts
  2. Nodes run, produce outputs
  3. The response is sent to the user
  4. All intermediate data disappears

No variables persist. No entity extraction results carry over. Nothing.

flowchart TD
    A["User message arrives"] --> B["Fresh workflow starts"]
    B --> C["Nodes execute, produce outputs"]
    C --> D["Response sent to user"]
    D --> E["ALL DATA GONE<br/>(except conversation)"]

    style E fill:#ffcdd2,stroke:#c62828,stroke-width:2px

So How Do Conversations Work?

The conversation history itself is your memory.

Turn 3's trigger.chat_conversation contains:

Bot: "Who am I speaking with?"
User: "I'm Michael Thompson" ← The name is HERE
Bot: "Hi Michael! How can I help?"
User: "What's my portfolio looking like?"

The name isn't "stored" anywhere—it's re-extracted from the conversation history every single turn.


The Pattern: Re-Derive State From History

flowchart TD
    A["trigger.chat_conversation<br/>(full history)"] --> B["Entity Extraction"]
    B --> C["{client_name: 'Michael Thompson', ...}"]
    C --> D["Rest of workflow uses this data"]

    subgraph extraction["Entity Extraction"]
        B
        note["Scan ENTIRE conversation<br/>for client name, email,<br/>request type...<br/><br/>Extracts from ALL messages<br/>including previous turns"]
    end
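
The pattern above can be sketched in a few lines of JavaScript. This is a minimal simulation, not Ema's actual extraction node: the regex stands in for the LLM-based entity extraction, and the message shape is illustrative.

```javascript
// Sketch: re-derive state from the full conversation on every turn.
// extractClientName is a stand-in for the workflow's entity-extraction
// node (which uses an LLM, not a regex).
function extractClientName(conversation) {
  let name = null;
  for (const msg of conversation) {
    // Scan EVERY message, not just the latest one.
    const match = msg.text.match(/I'm ([A-Z][a-z]+(?: [A-Z][a-z]+)*)/);
    if (match) name = match[1]; // later mentions override earlier ones
  }
  return name;
}

// Turn 3: the name only appears in an earlier message.
const conversation = [
  { role: "bot",  text: "Who am I speaking with?" },
  { role: "user", text: "I'm Michael Thompson" },
  { role: "bot",  text: "Hi Michael! How can I help?" },
  { role: "user", text: "What's my portfolio looking like?" },
];

extractClientName(conversation); // finds "Michael Thompson"
```

Nothing is looked up from storage: the function receives the whole transcript and rebuilds the answer from scratch, which is exactly what the extraction node does each turn.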

Key Insight: Design for Stateless Re-Extraction

Your extraction prompts must scan the entire history:

# ❌ BAD: Only looks at current message
entity_extraction:
  instructions: "Extract the client name from the message"
  # Problem: Misses Turn 2's answer when processing Turn 3

# ✅ GOOD: Scans entire conversation
entity_extraction:
  instructions: |
    Scan the entire conversation for client name.
    The name may have been provided in an earlier turn.
  # Finds the name wherever it appeared

Visual: Multi-Turn State Flow

Here's what happens across turns:

TURN 1              TURN 2              TURN 3
──────────────────────────────────────────────────────
User: "Hello"       User: "I'm Michael"  User: "My portfolio?"

Extract:            Extract:             Extract:
  name: null          name: "Michael" ✓    name: "Michael" ✓

Bot: "Who are you?" Bot: "Hi Michael!"   Bot: "Your portfolio..."

AFTER TURN:         AFTER TURN:          AFTER TURN:
  Data: GONE          Data: GONE           Data: GONE
  Conv: SAVED ✓       Conv: SAVED ✓        Conv: SAVED ✓

What gets saved: Only the conversation transcript.

What disappears: All extracted values, intermediate results, node outputs.
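
The three turns above can be replayed as a sketch, with a regex standing in for the extraction node. Note that each call starts from nothing; only the growing transcript connects the turns.

```javascript
// Sketch: each turn runs extraction from scratch over the full
// transcript. No variable survives between calls; only the transcript
// (the function's argument) carries information forward.
function runTurn(history) {
  let name = null;
  for (const msg of history) {
    const m = msg.match(/I'm (\w+)/);
    if (m) name = m[1];
  }
  return name; // all other intermediate data is discarded after the turn
}

runTurn(["User: Hello"]);                            // null
runTurn(["User: Hello", "User: I'm Michael"]);       // "Michael"
runTurn(["User: Hello", "User: I'm Michael",
         "User: My portfolio?"]);                    // "Michael"
```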


Advanced: App-Side State via additional_context

If you control the calling application, you can maintain state externally:

// Your app maintains session state
let session = { caller: null };

// When bot confirms name extraction
if (response.confirmedCaller) {
  session.caller = { id: "c_102", name: "Michael Thompson" };
}

// Pass to next turn
await ema.chat({
  message: "What's my portfolio?",
  additional_context: JSON.stringify(session), // Pre-populated!
});

The workflow can then check additional_context first before extracting:

query_builder:
  instructions: |
    Check additional_context for pre-confirmed caller info.
    If present, use it directly.
    If not, extract from conversation history.

Why this helps:

  • Faster (skip re-extraction)
  • More reliable (app-confirmed values)
  • Enables features like "remember me" across sessions
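
The fallback order the query_builder instructions describe can be sketched as follows. The field names (`caller`, `name`) mirror the earlier app example but are illustrative, not a fixed Ema schema, and the regex again stands in for LLM extraction.

```javascript
// Sketch: trust app-confirmed state in additional_context when present,
// otherwise fall back to re-extracting from the conversation history.
function resolveCaller(additionalContext, conversation) {
  if (additionalContext) {
    const session = JSON.parse(additionalContext);
    if (session.caller) return session.caller.name; // fast path
  }
  // Slow path: scan the whole transcript again.
  let name = null;
  for (const msg of conversation) {
    const m = msg.match(/I'm ([A-Z][a-z]+(?: [A-Z][a-z]+)*)/);
    if (m) name = m[1];
  }
  return name;
}

resolveCaller('{"caller":{"id":"c_102","name":"Michael Thompson"}}', []);
// uses the pre-confirmed value, skipping re-extraction entirely
```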

Common Mistakes

Mistake                                      Why It Fails                      Fix
──────────────────────────────────────────────────────────────────────────────────────────────────────────
Expecting variables to persist               Workflow is stateless             Re-extract from chat_conversation
Extraction prompt says "from this message"   Only sees current turn            Say "from entire conversation"
Not including bot responses in scan          Bot may have confirmed info       Include all messages in extraction
Ignoring corrections                         User says "Actually, it's Mike"   Instruct: "If corrected, use new value"

Troubleshooting Guide

Problem: AI "Forgets" Previously Provided Information

Symptoms: User provides name in Turn 2, but Turn 3 asks again.

Diagnosis: Check your entity extraction instructions.

Solution:

# Before
instructions: "Extract client name from the message"

# After
instructions: |
  Scan the ENTIRE conversation history for client name.
  - Check all user messages
  - Check bot confirmations
  - Use the most recent value if corrected

Problem: Extraction Returns Null on Follow-Up Turns

Symptoms: Extraction works on Turn 2, returns null on Turn 3.

Diagnosis: Current message doesn't contain the info, and extraction isn't scanning history.

Solution: Ensure your extraction input is trigger.chat_conversation, not trigger.user_query.

Problem: Corrections Not Being Applied

Symptoms: User corrects info ("Actually, use mike@corp.com"), but AI uses old value.

Diagnosis: Extraction doesn't have correction handling.

Solution:

instructions: |
  Extract email from conversation.
  IMPORTANT: If user corrects a value ("Actually...", "Use this instead..."),
  the corrected value takes precedence.
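
"Most recent value wins" falls out naturally if extraction scans the transcript in order and overwrites on every match. A minimal sketch (a regex standing in for the LLM, message strings illustrative):

```javascript
// Sketch: correction handling via "last match wins". Scanning in order
// means a later "Actually, use mike@corp.com" supersedes the earlier
// email without any special-case logic.
function extractEmail(conversation) {
  let email = null;
  for (const msg of conversation) {
    const m = msg.match(/[\w.+-]+@[\w-]+\.[\w.]+/);
    if (m) email = m[0]; // later matches overwrite earlier ones
  }
  return email;
}

extractEmail([
  "User: my email is michael@example.com",
  "User: Actually, use mike@corp.com",
]); // "mike@corp.com"
```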

Best Practices Summary

Practice                                Why
─────────────────────────────────────────────────────────────────────────
Extraction prompts scan full history    Data could appear anywhere
Include bot messages in scan            Bot confirmations matter
Handle corrections explicitly           Users change their minds
Use additional_context for app state    Faster, more reliable
Test multi-turn scenarios               Single-turn tests miss state bugs

TL;DR

Question                         Answer
──────────────────────────────────────────────────────────────────────────────
Does extracted data persist?     No—workflow is stateless
How to remember across turns?    Re-extract from chat_conversation
Is this expensive?               Modest—one extraction pass over the transcript per turn, growing with history length
Can I optimize?                  Yes—use additional_context for app-side state

The conversation IS your state store. Design accordingly.

Every turn, the AI rebuilds its understanding from the transcript. There is no hidden memory—just the conversation history and your extraction logic.


For more on Ema workflows and trigger outputs, see Ema Workflows: Mastering RAG Flow Control.