Manually spawning agents doesn't scale. If you have 15 specialized agents but have to remember which one handles what, you'll just use the main assistant. This post shows how to route tasks to agents automatically based on patterns.

The Cursor System series


The Routing Problem

You've built a portfolio of specialized agents:

  • Debugger for bug investigation
  • SecurityAuditor for vulnerability analysis
  • TestEngineer for test generation
  • Critic for challenging assumptions
  • Documenter for documentation

But when you say "this endpoint keeps failing," you don't want to think about which agent to spawn. You want the system to figure it out.

Manual spawning:

User: /spawn debugger "endpoint keeps failing"

Smart routing:

User: "This endpoint keeps failing"
Agent: [detects debugging task, spawns Debugger automatically]

The Routing Rule

Smart routing uses a rule that's always applied, teaching the main agent when to spawn specialists.

# .cursor/rules/agent-routing/RULE.md
---
description: "Routes tasks to appropriate specialized agents based on task patterns"
alwaysApply: true
---

# Agent Routing

When a user request matches one of these patterns, spawn the appropriate agent.

## Agent Selection Guide

| Task Pattern | Agent | When to Spawn |
|--------------|-------|---------------|
| "Review this code/PR" | CodeReviewer | Code quality analysis |
| "Check for security issues" | SecurityAuditor | Security-sensitive changes |
| "Debug/fix this bug" | Debugger | Error investigation |
| "Why is this failing?" | Debugger | Test/runtime failures |
| "Document this" | Documenter | Documentation needs |
| "Plan how to..." | Planner | Complex task decomposition |
| "What does this do?" | Researcher | Code exploration |
| "Challenge this approach" | Critic | Decision validation |
| "Generate tests for" | TestEngineer | Test creation |
| "Summarize what changed" | Changelog | Change documentation |
| "Make this faster" | Optimizer | Performance work |
| "Refactor this" | Refactorer | Code restructuring |

## Multi-Pattern Detection

Some requests match multiple patterns. Handle these with parallel or sequential spawning:

### Security + Quality (Parallel)
Patterns: "review" + "security-sensitive code" (auth, payment, crypto)
Action: Spawn CodeReviewer AND SecurityAuditor in parallel

### Plan + Execute (Sequential)
Patterns: "plan and implement" / "design then build"
Action: Spawn Planner first, then main agent implements

## Spawn Behavior

- Spawn with relevant context (file, error, question)
- Let specialist complete their analysis
- Synthesize findings back to user
- Suggest follow-up actions based on findings

Pattern Matching in Practice

Here's how patterns map to agent selection:

User Input                              → Agent Selected
─────────────────────────────────────────────────────────
"review my authentication changes"      → CodeReviewer + SecurityAuditor
"is this code secure?"                  → SecurityAuditor
"why is this test failing?"             → Debugger
"how does the payment flow work?"       → Researcher
"I think we should rewrite this"        → Critic (challenge the proposal)
"create tests for UserService"          → TestEngineer
"what changed in this session?"         → Changelog
"break this into smaller tasks"         → Planner
"this endpoint returns 500 randomly"    → Debugger
"optimize the database queries"         → Optimizer

Compound Patterns

Some requests need multiple agents. The routing rule handles these:

Security-sensitive code review:

User: "Review my new payment processing code"

Detection:
- "Review" → CodeReviewer
- "payment" → Security-sensitive domain

Action: Parallel spawn of CodeReviewer + SecurityAuditor

Plan then implement:

User: "Plan and implement user authentication"

Detection:
- "plan" → Planner
- "implement" → Execution task

Action: Sequential — Planner first, then main agent executes
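Compound detection can be sketched as collecting every matching agent instead of stopping at the first. The `route_compound()` helper and the domain keyword set below are illustrative assumptions, not Cursor APIs.

```python
import re

# Security-sensitive domains that upgrade a plain review into a parallel
# CodeReviewer + SecurityAuditor spawn. Keyword sets are illustrative.
SECURITY_DOMAINS = {"auth", "authentication", "payment", "crypto", "password"}

def route_compound(request: str) -> list[str]:
    words = set(re.findall(r"[a-z0-9]+", request.lower()))
    agents = []
    if "review" in words:
        agents.append("CodeReviewer")
        if words & SECURITY_DOMAINS:
            agents.append("SecurityAuditor")  # spawned in parallel
    if "plan" in words and words & {"implement", "build"}:
        agents.append("Planner")  # sequential: runs before the main agent
    return agents
```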

Spawn Patterns

Sequential Handoff

Tasks that need one agent's output before another starts.

flowchart LR
  P["Planner<br/>design"]:::primary
  M["Main<br/>build"]:::primary
  C["Changelog<br/>summary"]:::primary

  P --> M --> C

The Planner produces a structured plan. Main agent executes it. Changelog summarizes what was done.
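The handoff shape is a plain pipeline: each stage consumes the previous stage's output. All three functions below are placeholder stand-ins for agent spawns, not real APIs.

```python
def planner(goal: str) -> list[str]:
    # Placeholder for the Planner agent producing a structured plan
    return [f"design schema for {goal}", f"implement {goal}"]

def main_agent(plan: list[str]) -> list[str]:
    # Placeholder for the main agent executing each plan step
    return [f"done: {step}" for step in plan]

def changelog(results: list[str]) -> str:
    # Placeholder for the Changelog agent summarizing what was done
    return "; ".join(results)

summary = changelog(main_agent(planner("user auth")))
```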

Parallel Review

Tasks that benefit from multiple perspectives simultaneously.

flowchart TB
  M1["Main Agent<br/>coordinates"]:::primary

  CR["CodeReviewer<br/>quality"]:::agent
  SA["SecurityAudit<br/>security"]:::agent
  CT["Critic<br/>challenge"]:::agent

  M2["Main Agent<br/>synthesize"]:::primary

  M1 --> CR & SA & CT
  CR & SA & CT --> M2

Each specialist provides their analysis. Main agent synthesizes into prioritized findings.
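The fan-out/fan-in shape can be sketched with threads standing in for agent spawns; `spawn()` is a placeholder here, not a real Cursor API.

```python
from concurrent.futures import ThreadPoolExecutor

def spawn(agent: str, task: str) -> str:
    # Placeholder: a real system would run the specialist agent here
    return f"{agent}: findings for {task!r}"

def parallel_review(task: str, agents: list[str]) -> list[str]:
    # Fan out to every specialist at once, then collect in a fixed order
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(spawn, agent, task) for agent in agents]
        return [f.result() for f in futures]  # main agent synthesizes these
```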

Background Investigation

Long-running tasks that shouldn't block the main conversation.

flowchart TB
  R["Researcher<br/>runs in background"]:::agent
  F["Findings ready<br/>notifies when done"]:::accent

  R --> F
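The non-blocking shape, sketched with a thread and a queue as stand-ins for a background agent and its notification channel:

```python
import threading
import queue

findings = queue.Queue()

def researcher(question: str) -> None:
    # Placeholder for a long-running background investigation
    findings.put(f"findings for {question!r}")

# Fire and forget: the main conversation continues while this runs
worker = threading.Thread(target=researcher, args=("payment flow",))
worker.start()
worker.join()  # in practice you'd be notified instead of blocking here
result = findings.get()
```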

Complete Interaction Flow

Here's a realistic example showing routing in action:

flowchart TB
  U["User: Review my payment code"]:::primary
  A["Main Agent<br/>Pattern: Review + payment"]:::primary
  CR["CodeReviewer"]:::agent
  SA["SecurityAuditor"]:::agent
  CT["Critic"]:::agent
  SYN["Synthesis"]:::accent

  U --> A
  A --> CR & SA & CT
  CR & SA & CT --> SYN

| Agent | Findings |
|-------|----------|
| CodeReviewer | Clean patterns, good naming, missing error handling |
| SecurityAuditor | SQL injection risk, card data in logs |
| Critic | Why not Stripe SDK? Idempotency concerns? |
| Synthesis | Critical: SQL injection. High: Card data, retry logic |

Natural Language Triggers

Users don't always use explicit keywords. The routing rule should also map natural phrasing to agents:

Phrase                              → Implied Agent
─────────────────────────────────────────────────────
"commit this"                       → (command: /checkpoint)
"this is acting weird"              → Debugger
"can you take a look?"              → CodeReviewer
"make sure it's safe"               → SecurityAuditor
"I'm not sure about this approach"  → Critic
"what would break if..."            → Critic
"walk me through this"              → Researcher
"get this ready for PR"             → CodeReviewer + /checkpoint

When NOT to Route

Not every request needs an agent. The routing rule should also define when not to spawn:

## Direct Handling (No Agent Spawn)

Handle directly without spawning agents when:
  - Simple code changes ("change X to Y")
  - Direct questions with obvious answers
  - File operations ("create", "move", "delete")
  - Running commands ("npm install", "git status")
  - Small, contained tasks (< 5 minutes)

Only spawn agents for:
  - Tasks requiring specialized expertise
  - Multi-faceted analysis
  - Deep investigation
  - Quality-critical work
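The spawn/no-spawn decision is a simple gate. The patterns and the five-minute threshold mirror the rule above; `needs_agent()` and the pattern set are hypothetical helpers for the sketch.

```python
# Requests that are plain file operations or commands stay with the
# main agent; everything else spawns an agent only if it is big enough.
DIRECT_PATTERNS = {"create", "move", "delete", "rename", "install", "run"}

def needs_agent(request: str, estimated_minutes: int) -> bool:
    words = set(request.lower().split())
    if words & DIRECT_PATTERNS:
        return False                   # handle directly, no agent spawn
    return estimated_minutes >= 5      # small contained tasks stay direct
```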

Debugging Routing

When routing doesn't work as expected:

Check Pattern Match

"What pattern did you detect in my request?"

Force Specific Agent

"Use the SecurityAuditor for this, regardless of patterns"

See Available Agents

"List all available specialized agents and their triggers"

Verify Agent Exists

"Does the Debugger agent exist? Show me its definition."

Key Takeaways

  1. Routing rules enable automatic agent selection. Users describe tasks naturally; the system picks the specialist.

  2. Patterns should be specific but not brittle. Include multiple triggers per agent.

  3. Compound patterns need explicit handling. Security + review = parallel spawn.

  4. Sequential vs parallel matters. Plan-then-execute is sequential; multi-perspective review is parallel.

  5. Synthesis is critical. Multiple agent outputs need to be unified, deduplicated, and prioritized.

  6. Not everything needs an agent. Simple tasks should be handled directly.


What's Next

Smart routing gets tasks to agents automatically. But what about commands? When you ask for multiple commands, how do you avoid duplicate work? And which actions should the AI take automatically vs. ask permission for?

The next post covers command coalescing and autonomous workflows—making AI do more while staying safe.


Next: Command Coalescing and Autonomous Workflows — Smart orchestration and safe automation.