Designing with the Persona Lens Model
Understanding that agents are persona lenses changes how you design routing, coordination, and expectations. This post covers practical patterns for building systems that work with this reality rather than against it.
The Persona Lens Model series
- The Multi-Agent Illusion — What really happens when you "spawn" an agent
- Anatomy of a Persona Lens — Inside an agent definition file
- Designing with the Persona Lens Model (this post) — Practical patterns for persona-based systems
Implication 1: Routing is Persona Selection
When you build a system that "routes tasks to appropriate agents," you're actually building a persona selector:
```mermaid
flowchart TD
    Task[Incoming Task] --> Detect[Context Detection]
    Detect --> |"security keywords"| Security[Security Persona]
    Detect --> |"documentation request"| Docs[Documentation Persona]
    Detect --> |"code review"| Reviewer[Reviewer Persona]
    Detect --> |"unclear"| Fallback[General Persona]
    Security --> Claude[Claude]
    Docs --> Claude
    Reviewer --> Claude
    Fallback --> Claude
    Claude --> Output[Shaped Output]
```
What this changes:
- "Routing logic" = persona matching rules
- "Agent capabilities" = persona metadata (what triggers each)
- "Agent registry" = collection of persona files with selection criteria
Design pattern: Add explicit metadata to persona files:
```markdown
# Security Auditor

triggers:
  - keywords: [security, vulnerability, CVE, OWASP]
  - file_patterns: [auth/*, crypto/*, */security.*]
  - explicit_request: true

capabilities:
  - security_review
  - vulnerability_assessment
  - compliance_check

## Role
...
```
This metadata enables systematic selection rather than ad-hoc routing.
Implication 2: Coordination is Persona Layering
Patterns like "have Security and Reviewer analyze this in parallel" actually mean:
```mermaid
flowchart LR
    Code[Code] --> S[Apply Security Persona]
    Code --> R[Apply Reviewer Persona]
    S --> Merge[Combine Outputs]
    R --> Merge
    Merge --> Final[Final Report]
```
Both passes go through the same Claude instance, just with different persona lenses. The "coordination" is in how you:
- Sequence the persona applications (parallel or serial)
- Scope what each persona sees (same input or filtered)
- Merge the outputs (concatenate, reconcile, or synthesize)
What this changes:
- No actual parallelism unless you make separate API calls
- "Agent communication" is really output-to-input chaining
- Conflicts between "agents" are conflicts in merged outputs
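The layering above can be sketched in a few lines. `call_model` here is a stand-in for a real API call, and the persona prompts are illustrative:

```python
# Persona layering: the same (stand-in) model call wrapped with different
# persona system prompts, then merged into one report.
def call_model(system_prompt: str, user_input: str) -> str:
    # Stand-in for a real API call; returns a tagged echo for illustration.
    return f"[{system_prompt}] analysis of: {user_input}"

def apply_personas(code: str, personas: dict[str, str]) -> dict[str, str]:
    # "Parallel" here is really sequential calls through one model.
    return {name: call_model(prompt, code) for name, prompt in personas.items()}

outputs = apply_personas("def login(): ...", {
    "security": "You are a security auditor.",
    "reviewer": "You are a code reviewer.",
})
# Simplest merge: concatenate with headers.
merged = "\n\n".join(f"## {name}\n{text}" for name, text in outputs.items())
```

Swapping the merge step is where the coordination strategies below come in.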
Merge Strategies
| Strategy | When to Use | Implementation |
|---|---|---|
| Concatenate | Independent analyses | Append outputs with headers |
| Reconcile | Potentially conflicting findings | Second pass with both outputs as context |
| Synthesize | Need unified recommendation | Synthesis persona that combines perspectives |
| Vote | Multiple opinions on same question | Count agreements, flag disagreements |
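Two of these strategies are mechanical enough to sketch directly; the function names here are illustrative, not a fixed API:

```python
# Concatenate and Vote merge strategies from the table above (sketch).
from collections import Counter

def merge_concatenate(outputs: dict[str, str]) -> str:
    """Independent analyses: append outputs under per-persona headers."""
    return "\n\n".join(f"### {name}\n{text}" for name, text in outputs.items())

def merge_vote(answers: dict[str, str]) -> tuple[str, bool]:
    """Multiple opinions on one question: majority answer + disagreement flag."""
    counts = Counter(answers.values())
    winner, _ = counts.most_common(1)[0]
    return winner, len(counts) > 1  # flag any disagreement for review
```

Reconcile and Synthesize, by contrast, need another model pass, as in the prompt example below.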
Example: Reconcile Strategy
```markdown
## Synthesis Prompt

You have two analyses of the same code:

### Security Analysis
[output from security persona]

### Code Quality Analysis
[output from reviewer persona]

Reconcile these into a single prioritized report:
- Where do they agree?
- Where do they conflict?
- What's the unified recommendation?
```
Implication 3: Personas are Pure Functions
The spawn/agent language helps humans conceptualize the system, but:
- No new compute resources are allocated per "agent"
- No separate memory space exists
- No true parallel execution occurs (unless orchestration layer manages it)
Design pattern: Treat personas as pure functions:
```
persona(context) → output
```
No side effects, no state, no memory. If you need those, build them externally and inject context.
```mermaid
flowchart LR
    Input[Context + Task] --> Persona[Persona Function]
    Persona --> Output[Shaped Output]
    State[(External State)] -.-> |"inject"| Input
    Output -.-> |"persist"| State
```
Implication 4: Authoring is Prompt Engineering
This reframes authoring priorities:
| Principle | Why It Matters |
|---|---|
| Clarity over cleverness | The model follows instructions literally |
| Structure over prose | Numbered steps > flowing paragraphs |
| Examples over descriptions | Show desired output, don't just describe it |
| Constraints over assumptions | Explicit "don't" lists prevent drift |
Decision Framework: When to Use Multiple Personas
| Scenario | Approach |
|---|---|
| Task needs one clear expertise | Single persona |
| Task needs multiple perspectives on same content | Multiple personas, merge outputs |
| Task has sequential phases with different needs | Chain personas, output → input |
| Task is ambiguous, could go multiple directions | Router → selected persona |
| Task needs persistent memory | External state + persona |
Single Persona
Best for focused tasks where one lens is enough.
```mermaid
flowchart LR
    Request[User Request] --> Detect[Detect Intent] --> Apply[Apply Persona] --> Output[Output]
```
Multiple Personas (Parallel)
Best when you need different perspectives on the same input.
```mermaid
flowchart LR
    Input[Input] --> Security[Security]
    Input --> Quality[Quality]
    Input --> Perf[Performance]
    Security --> Merge[Merge]
    Quality --> Merge
    Perf --> Merge
    Merge --> Output[Output]
```
Chained Personas (Sequential)
Best when output from one phase feeds the next.
```mermaid
flowchart LR
    Spec[Spec] --> Architect[Architect] --> Design[Design]
    Design --> Implementer[Implementer] --> Code[Code]
    Code --> Reviewer[Reviewer] --> Feedback[Feedback]
```
Router + Persona
Best when task type varies.
```mermaid
flowchart LR
    Input[Input] --> Router[Router] --> Persona[Appropriate Persona] --> Output[Output]
```
Common Mistakes
| Mistake | Problem | Fix |
|---|---|---|
| Over-engineering "agent communication" | Complexity with no benefit | Pass outputs directly |
| Expecting parallel execution | Only one model runs at a time | Design for sequential or batch API calls |
| Vague persona definitions | Inconsistent behavior | Use the anatomy template from Post 2 |
| Too many personas | Selection becomes error-prone | Consolidate overlapping capabilities |
| No fallback persona | Unmatched requests fail | Always include a general-purpose fallback |
| Assuming agents share context | They don't—each invocation is fresh | Explicitly pass needed context |
Persona Portfolio: Coverage Without Overlap
A well-designed persona collection covers your needs without redundancy:
| Category | Persona | Triggers |
|---|---|---|
| Analysis | SecurityAuditor | "security", "vulnerability", "CVE" |
| Analysis | CodeReviewer | "review", "quality", "feedback" |
| Analysis | Debugger | "bug", "error", "fix", "broken" |
| Creation | Documenter | "document", "readme", "explain" |
| Creation | TestEngineer | "test", "coverage", "spec" |
| Creation | Implementer | "implement", "build", "code" |
| Planning | Architect | "design", "architecture", "structure" |
| Planning | Planner | "plan", "break down", "steps" |
| Meta | Fallback | (unmatched requests) |
Key principle: Each persona owns a distinct concern. Overlap creates routing ambiguity.
Anti-Pattern: The God Persona
```markdown
# Bad: Does everything
You are an expert at security, testing, documentation,
architecture, performance, and code review.
```
This defeats the purpose. A persona that does everything does nothing distinctly. Split into focused specialists and route between them.
Pattern: Persona Composition
For complex tasks, compose focused personas:
```mermaid
flowchart TD
    subgraph Phase1["Phase 1: Analysis"]
        Code[Code] --> Security[SecurityAuditor]
        Code --> Reviewer[CodeReviewer]
        Security --> SF[security_findings]
        Reviewer --> QF[quality_findings]
    end
    subgraph Phase2["Phase 2: Synthesis"]
        SF --> Synth[Synthesizer]
        QF --> Synth
        Synth --> Report[combined_report]
    end
    subgraph Phase3["Phase 3: Action"]
        Report --> Planner[Planner]
        Planner --> Plan[remediation_plan]
    end
```
Each persona stays focused. Composition handles complexity.
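The three phases collapse into a short pipeline. `call_model` is again a stand-in, and the persona names mirror the hypothetical portfolio above:

```python
# Three-phase persona composition (sketch of the diagram above).
def call_model(persona: str, payload: str) -> str:
    return f"{persona}:{payload}"  # stand-in for a real model call

def compose(code: str) -> str:
    # Phase 1: analysis, two lenses over the same input
    security_findings = call_model("SecurityAuditor", code)
    quality_findings = call_model("CodeReviewer", code)
    # Phase 2: synthesis, one persona merges both findings
    report = call_model("Synthesizer", f"{security_findings} | {quality_findings}")
    # Phase 3: action, the report becomes a remediation plan
    return call_model("Planner", report)
```

The orchestration lives in plain code; each persona only ever sees its own focused input.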
Setting Realistic Expectations
| Expectation | Reality |
|---|---|
| Agents run in parallel | Sequential unless you use multiple API calls |
| Agents communicate directly | You merge their outputs |
| Agents remember previous work | Each invocation is stateless |
| More agents = better results | More personas = more routing complexity |
| Agents are autonomous | They follow instructions in their definitions |
Common Misconceptions
- "This model is limiting" — It's clarifying. The persona lens model describes what's already true about most AI systems. Understanding it helps you work with the grain rather than against it.
- "Real multi-agent systems are different" — Some are. But many "multi-agent" platforms are this architecture with better UX. Ask vendors: Is it one model or many? How do agents share state? What's the actual coordination mechanism?
- "I should simulate separate agents" — Only if there's benefit. Often a well-written single persona outperforms a complex multi-agent system. Start simple.
What to Do Next
- Audit existing "agents": Are they actually personas? Can they be simplified?
- Apply the patterns: Try persona selection, layering, and explicit merge strategies
- Build your portfolio: Map your needs to focused personas with clear triggers
- Set realistic expectations: Design for what the architecture actually provides
> Multi-agent coordination is really about sequencing or combining multiple persona lenses.
This completes the Persona Lens Model series. For related content, see Designing Agent Personas That Actually Work for detailed authoring guidance.