The main AI assistant has been helping you for an hour. It's seen every file you've opened, every failed attempt, every tangent. Now you need a fresh perspective—someone who hasn't been marinating in your assumptions. That's what subagents provide: clean context on demand.

The Cursor System series

  • Beyond Rules — The four artifact types
  • Agent Personas — Personas that stay in character
  • Smart Routing — Match tasks to specialists
  • Subagents: Fresh Eyes (this post) — Context isolation and parallel work

What Subagents Actually Are

Subagents are separate AI instances that Cursor's main agent can delegate to. Each subagent operates in its own context window, handles specific work, and returns results to the parent.

From the official documentation:

| Benefit | What It Means |
| --- | --- |
| Context isolation | Each subagent has its own context window. Long research doesn't consume your main conversation space. |
| Parallel execution | Launch multiple subagents simultaneously. Work on different parts without waiting. |
| Specialized expertise | Configure with custom prompts, tool access, and models for domain-specific tasks. |
| Reusability | Define once, use across projects. |

The key insight: subagents start fresh. They don't inherit the main conversation's assumptions, biases, or context bloat.


Why Fresh Context Matters

The Conversation Pollution Problem

After extended debugging:

  • The main agent has seen 50 failed approaches
  • It's anchored on your initial hypothesis
  • Its context is cluttered with error messages and dead ends
  • It might keep recommending variations of things that already failed

Fresh Eyes See Differently

A subagent starts clean:

  • No knowledge of your failed attempts
  • No anchoring on initial assumptions
  • Approaches the problem from first principles
  • Might spot what you've been staring past

This is why verification subagents are so powerful. They're skeptical because they haven't been part of the journey.


Built-in Subagents

Cursor includes three built-in subagents for context-heavy operations:

| Subagent | Purpose | Why It's a Subagent |
| --- | --- | --- |
| Explore | Searches and analyzes codebases | Codebase exploration generates large intermediate output. Uses a faster model to run many parallel searches. |
| Bash | Runs a series of shell commands | Command output is verbose. Isolating it keeps the parent focused on decisions, not logs. |
| Browser | Controls a browser via MCP | Browser interactions produce noisy DOM snapshots. The subagent filters to relevant results. |

These share common traits:

  • Generate noisy intermediate output
  • Benefit from specialized prompts
  • Can consume significant context

You don't configure these—Agent uses them automatically.

Real Example: Parallel Exploration

From a recent session building multiple Ema AI Employees:

Now I need to design and deploy workflows for each. Let me use sub-agents
to work on these in parallel.

All 4 personas created successfully. Now I'll spawn sub-agents to build
workflows for each in parallel.

The agent spawned four generalPurpose subagents simultaneously, each building a complete workflow independently. What would have taken 20+ minutes sequentially finished in about 5 minutes.


Custom Subagents

Define custom subagents to encode specialized knowledge and workflows.

File Locations

| Type | Location | Scope |
| --- | --- | --- |
| Project | .cursor/agents/ | Current project only |
| Project | .claude/agents/ | Claude compatibility |
| Project | .codex/agents/ | Codex compatibility |
| User | ~/.cursor/agents/ | All your projects |
| User | ~/.claude/agents/ | Claude compatibility |
| User | ~/.codex/agents/ | Codex compatibility |

Configuration Fields

| Field | Required | Description |
| --- | --- | --- |
| name | No | Unique identifier. Defaults to the filename. |
| description | No | When to use this subagent. Agent reads this to decide when to delegate. |
| model | No | Model to use: fast, inherit, or a specific model ID. Defaults to inherit. |
| readonly | No | If true, the subagent has restricted write permissions. |
| is_background | No | If true, runs in the background without blocking. |
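Putting these fields together, a minimal custom subagent file might look like the following sketch (the name, description, and prompt body here are hypothetical, not from the official docs):

```markdown
---
name: doc-reviewer
description: Reviews documentation changes for accuracy and broken links.
  Use after editing files in docs/.
model: fast
readonly: true
---

You are a documentation reviewer. Check changed docs for broken links,
stale examples, and inconsistent terminology, then report findings
without modifying any files.
```

Saved as .cursor/agents/doc-reviewer.md, this would be scoped to the current project; the same file under ~/.cursor/agents/ would apply to all your projects.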

The Verification Pattern

A verification subagent independently validates whether claimed work was actually completed. This addresses a common issue: AI marks tasks as done but implementations are incomplete.

Why It Works

The verifier has:

  • No knowledge of what was promised—only what exists
  • No sunk cost fallacy—doesn't care about time invested
  • Fresh eyes—might catch what you've been staring past
  • Skeptical stance—assumes nothing

Real Example: Critic Agent

From my ~/.cursor/agents/critic.md:

---
name: Critic
model: claude-4-sonnet
description: |
  # Devil's Advocate / Critic Agent

  You challenge proposals, find flaws, and strengthen ideas through 
  constructive criticism.

  ## Role
  Find problems before they become expensive. Make good ideas better 
  through rigorous questioning.

  ## Stance
  Constructively adversarial: challenge everything, but offer improvements.
---

The critic's process:

  1. Understand first — What is being proposed?
  2. Steelman — Articulate the strongest version
  3. Challenge — What assumptions? What could go wrong?
  4. Prioritize — Which concerns are critical?
  5. Improve — How could it be strengthened?

Because it starts fresh, the critic isn't invested in defending past decisions.

Verifier Template

From the official docs:

---
name: verifier
description: Validates completed work. Use after tasks are marked done
  to confirm implementations are functional.
model: fast
---

You are a skeptical validator. Your job is to verify that work claimed
as complete actually works.

When invoked:

1. Identify what was claimed to be completed
2. Check that the implementation exists and is functional
3. Run relevant tests or verification steps
4. Look for edge cases that may have been missed

Be thorough and skeptical. Report:

- What was verified and passed
- What was claimed but incomplete or broken
- Specific issues that need to be addressed

Do not accept claims at face value. Test everything.

Use this pattern for:

  • Validating features work end-to-end before marking tickets complete
  • Catching partially implemented functionality
  • Ensuring tests actually pass (not just that test files exist)

The Security Audit Pattern

Security reviews benefit enormously from fresh context. A subagent that hasn't seen the implementation's evolution can spot issues that familiarity obscures.

Real Example: Security Auditor

From my ~/.cursor/agents/security-auditor.md:

---
name: security-auditor
description: |
  Security specialist. Use when implementing auth, payments, handling 
  sensitive data, or reviewing code for vulnerabilities. Use proactively 
  for files in auth/, security/, or containing password/secret/token patterns.
model: inherit
readonly: true
---

Key configuration:

  • readonly: true — Can analyze but not modify (principle of least privilege)
  • Proactive triggers — Auto-suggests for security-sensitive files
  • STRIDE methodology — Structured threat modeling approach

The security auditor thinks like an attacker:

"Security is not a feature - it's a property of the entire system."

It checks:

  • Authentication bypass vectors
  • Authorization gaps (IDOR)
  • Input validation failures
  • Secrets exposure
  • Cryptography weaknesses

Because it starts without knowledge of why you made certain tradeoffs, it questions everything.


The Meta-Analysis Pattern

A subagent that analyzes your conversations to improve your rules, skills, and agents themselves.

Real Example: Meta Analyzer

From my ~/.cursor/agents/meta-analyzer.md:

---
name: MetaAnalyzer
model: claude-4.5-opus-high-thinking
description: |
  # Meta Analyzer - System Improvement Agent

  You analyze Cursor conversations to identify patterns, gaps, and 
  improvement opportunities for rules, commands, agents, and skills.

  ## Role
  Observe how the system is used. Find friction. Propose automation. 
  Make the system learn from itself.
---

The meta-analyzer examines:

  • Rule effectiveness — Which rules triggered? Which never did?
  • Interaction patterns — Repeated sequences that should be commands
  • Missing automation — Manual work that appears frequently
  • Conflicts — Rules or commands that contradict each other

This works because the analyzer looks at transcripts with fresh eyes—it's not caught up in the original task's urgency.


Parallel Execution Patterns

Spawning Multiple Subagents

When tasks are independent, spawn them simultaneously:

> Review the API changes and update the documentation in parallel

Agent sends multiple Task tool calls in a single message, so subagents run concurrently.
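Conceptually, the fan-out behaves like concurrent async calls: total wall-clock time is the slowest task, not the sum of all tasks. A minimal sketch, where run_subagent is a hypothetical stand-in for a delegated Task call:

```python
import asyncio
import time

async def run_subagent(name: str, seconds: float) -> str:
    # Stand-in for a delegated Task call; sleep simulates the subagent working.
    await asyncio.sleep(seconds)
    return f"{name}: done"

async def fan_out() -> list[str]:
    # Launch both subagents at once; gather waits for all of them,
    # so elapsed time tracks the slowest task, not the sum.
    jobs = [run_subagent(n, 0.2) for n in ("api-review", "docs-update")]
    return await asyncio.gather(*jobs)

start = time.monotonic()
results = asyncio.run(fan_out())
elapsed = time.monotonic() - start
print(results)  # ['api-review: done', 'docs-update: done']
```

Two 0.2-second tasks finish in roughly 0.2 seconds of wall-clock time instead of 0.4 — the same effect that turned 20+ sequential minutes into about 5.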

Real Example: Building Four Workflows

From a session building CFO Suite AI Employees:

[Tool call] Task
  description: Build Q2C Billing Dispute workflow
  prompt: Build a complete workflow for the Q2C Billing Dispute Manager...
  subagent_type: generalPurpose

[Tool call] Task
  description: Build S2P Invoice Dispute workflow
  prompt: Build a complete workflow for the S2P Invoice Dispute Manager...
  subagent_type: generalPurpose

[Tool call] Task
  description: Build Collections Assistant workflow
  prompt: Build a complete workflow for the Collections Assistant...
  subagent_type: generalPurpose

[Tool call] Task
  description: Build Vendor Help Desk workflow
  prompt: Build a complete workflow for the Vendor Help Desk...
  subagent_type: generalPurpose

Four complex workflows built simultaneously. Each subagent:

  • Started with fresh context
  • Had all necessary information in the prompt
  • Worked independently
  • Returned results to the parent

Orchestrator Pattern

For complex workflows, coordinate specialists in sequence:

  1. Planner — Analyzes requirements, creates technical plan
  2. Implementer — Builds the feature based on the plan
  3. Verifier — Confirms implementation matches requirements

Each handoff includes structured output so the next agent has clear context.
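In code terms, the orchestration above resembles a pipeline where each stage consumes the previous stage's structured output. This is a sketch with hypothetical stand-in functions, not Cursor's actual delegation API:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    requirement: str
    steps: list[str]

@dataclass
class Implementation:
    plan: Plan
    files_changed: list[str]

def planner(requirement: str) -> Plan:
    # Stand-in for the planning subagent: requirement in, technical plan out.
    return Plan(requirement, steps=["add model", "add endpoint", "add tests"])

def implementer(plan: Plan) -> Implementation:
    # Stand-in for the implementing subagent: one change per planned step.
    changed = [f"step_{i}.py" for i, _ in enumerate(plan.steps)]
    return Implementation(plan, files_changed=changed)

def verifier(impl: Implementation) -> bool:
    # Stand-in for the verification subagent: check claimed work against the plan.
    return len(impl.files_changed) == len(impl.plan.steps)

impl = implementer(planner("rate limiting"))
print(verifier(impl))  # True
```

The dataclasses play the role of the structured handoffs: each agent receives everything it needs explicitly, rather than inheriting the parent's conversation.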


Foreground vs Background

| Mode | Behavior | Use For |
| --- | --- | --- |
| Foreground | Blocks until complete. Returns result immediately. | Sequential tasks where you need the output. |
| Background | Returns immediately. Subagent works independently. | Long-running tasks or parallel workstreams. |

Background for Long Tasks

---
name: deep-researcher
is_background: true
description: Deep research that may take a while. Runs independently.
---

Background subagents write their state as they run. You can check progress or resume later.

Resuming Subagents

Each execution returns an agent ID. Resume with full context preserved:

> Resume agent abc123 and analyze the remaining test failures

When to Use Subagents vs Skills

| Use Subagents When... | Use Skills When... |
| --- | --- |
| You need context isolation | The task is single-purpose |
| Running multiple workstreams in parallel | You want a quick, repeatable action |
| The task requires specialized expertise across many steps | The task completes in one shot |
| You want independent verification of work | You don't need a separate context window |

Quick test: If you're creating something for a simple, single-purpose task like "generate a changelog" or "format imports," use a skill instead.


Best Practices

Do

  • Write focused subagents — Single, clear responsibility each
  • Invest in descriptions — This determines when Agent delegates
  • Keep prompts concise — Long, rambling prompts dilute focus
  • Add to version control — Team benefits from .cursor/agents/
  • Use readonly for auditors — Principle of least privilege

Don't

  • Don't create dozens of generic subagents — Agent won't know when to use them
  • Don't duplicate skills — If it's single-purpose, make it a skill
  • Don't use vague descriptions — "Use for general tasks" gives no signal
  • Don't write 2,000-word prompts — Doesn't make it smarter, just slower

Performance Considerations

| Benefit | Trade-off |
| --- | --- |
| Context isolation | Startup overhead (each gathers its own context) |
| Parallel execution | Higher token usage (multiple contexts) |
| Specialized focus | Latency (may be slower for simple tasks) |

Key insight: Subagents shine for complex, long-running, or parallel work. For quick tasks, the main agent is often faster.


Key Takeaways

  1. Fresh context is the superpower. Subagents start clean—no assumptions, no anchoring, no context pollution.

  2. Use verification subagents for skeptical review. They haven't been part of the journey, so they question everything.

  3. Security audits benefit from isolation. A fresh perspective catches what familiarity obscures.

  4. Parallel execution transforms throughput. Four independent workflows in 5 minutes instead of 20.

  5. Match the tool to the task. Subagents for complex/parallel work, skills for single-purpose actions.

  6. Background mode for long research. Don't block your main workflow.


Further Reading


Related: Beyond Rules — Commands, agents, and skills overview