Designing Agent Personas That Actually Work
"You are a helpful AI assistant" is the persona equivalent of "write clean code." It says nothing actionable. This post shows how to design agent personas that produce consistent, high-quality output for specific tasks.
The Cursor System series
- Beyond Rules — The four artifact types
- Agent Personas (this post) — Personas that stay in character
- Smart Routing — Match tasks to specialists
- Autonomous Workflows — Let agents chain safely
- Testing Artifacts — Catch broken rules before they break
- Meta-Learning — Agents that learn from failures
The Generic Persona Problem
Here's what most agent definitions look like:
    ---
    name: Assistant
    description: |
      You are a helpful AI assistant that helps with coding tasks.
      Be thorough and helpful.
    ---
This fails because:
- No specific expertise — Could do anything (does nothing well)
- No defined process — Every task is approached differently
- No output format — Results are unpredictable
- No constraints — No guardrails on behavior
- No identity — Interchangeable with any other assistant
The result: inconsistent output that varies based on how you phrase the request.
The Five Elements of Effective Personas
Every effective agent definition includes five elements:
| Element | Question | Defines |
|---|---|---|
| 1. Role | Who are you? | Specific title, domain expertise, perspective/stance |
| 2. Expertise | What do you know? | Knowledge areas, tools mastered, boundaries |
| 3. Process | How do you work? | Step-by-step methodology, decision criteria, escalation |
| 4. Output | What do you produce? | Exact format, required sections, examples |
| 5. Constraints | What won't you do? | Explicit boundaries, anti-patterns, when to refuse |
Let's see how each element works.
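Before walking through each element, it can help to picture the five of them as a simple record. This sketch is purely illustrative (the field names and completeness check are mine, not part of any Cursor API):

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """The five elements of an effective agent persona."""
    role: str             # who the agent is, with a stance
    expertise: list[str]  # bounded knowledge areas
    process: list[str]    # ordered, numbered steps
    output_format: str    # exact markdown template
    constraints: list[str]  # explicit boundaries

    def is_complete(self) -> bool:
        # Every element must be filled in -- skip none.
        return all([self.role, self.expertise, self.process,
                    self.output_format, self.constraints])
```

A persona missing any one field fails `is_complete()`, which mirrors the rule of this post: all five elements, every time.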
Element 1: Role
The role establishes identity and perspective. It's not a job title—it's a stance.
Bad:

    You are a helpful assistant.

Good:

    You are a senior application security engineer specializing in code review.
    You think like an attacker to find vulnerabilities before they're exploited.
The good version establishes:
- Seniority — Not a beginner, has judgment
- Specialty — Security, not general coding
- Perspective — Attacker mindset, adversarial thinking
Role Patterns
| Pattern | Role Statement | Perspective |
|---|---|---|
| Expert | "Senior X engineer with 10+ years experience" | Authoritative |
| Critic | "Devil's advocate who challenges assumptions" | Contrarian |
| Teacher | "Patient instructor who explains concepts clearly" | Educational |
| Investigator | "Detective who gathers evidence before conclusions" | Methodical |
| Advocate | "Champion for clean code who won't accept shortcuts" | Principled |
Element 2: Expertise
Expertise defines what the agent knows deeply. This isn't a list of buzzwords—it's specific, bounded knowledge.
Bad:

    You know about security.

Good:

    ## Expertise
    - OWASP Top 10 vulnerabilities
    - Authentication and authorization flaws
    - Injection attacks (SQL, XSS, command)
    - Cryptographic weaknesses
    - Security misconfigurations
    - Secure coding patterns in JavaScript/TypeScript
Note what this doesn't include: network security, infrastructure hardening, compliance frameworks. The agent has boundaries.
Expertise Guidelines
- Be specific — "OWASP Top 10" not "web security"
- Set boundaries — What it knows, implicitly what it doesn't
- Include techniques — Not just topics, but methods
- Match the role — Expertise should align with identity
Element 3: Process
Process is the step-by-step methodology. This is what makes output consistent—every invocation follows the same steps.
Bad:

    Analyze the code carefully.

Good:

    ## Process

    ### 1. Threat Modeling
    - What are the assets being protected?
    - Who are the potential attackers?
    - What are the attack surfaces?

    ### 2. Code Analysis
    - Input validation and sanitization
    - Authentication mechanisms
    - Authorization checks
    - Data handling and storage
    - Error handling and logging

    ### 3. Risk Assessment
    - Severity (Critical/High/Medium/Low)
    - Exploitability (Easy/Moderate/Difficult)
    - Impact (Data breach/Service disruption/Reputation)

    ### 4. Recommendation Formation
    - Prioritize by risk
    - Provide specific fixes
    - Include code examples
Process Guidelines
- Number the steps — Creates checkpoints
- Make steps actionable — Verbs, not nouns
- Include decision points — When to go deeper, when to stop
- Define order — Sequential when order matters
Element 4: Output
Output defines the exact format of what the agent produces. This is critical for consistency.
Bad:

    Provide a report of your findings.

Good:

    ## Output Format

    ## Security Audit Report

    ### Summary
    [1-2 sentence overview: critical count, recommendation]

    ### Critical Issues
    1. **[Vulnerability Name]**
       - Location: file:line
       - Risk: [severity] - [impact description]
       - Exploit: [how it could be attacked]
       - Fix: [specific remediation with code]

    ### High Priority Issues
    [Same format as Critical]

    ### Recommendations
    1. [Prioritized action item]
    2. [Next action item]

    ### Notes
    [Context, limitations of analysis, areas not covered]
Output Guidelines
- Show the exact structure — Markdown template
- Label required sections — What must appear
- Provide field descriptions — What goes in each
- Include examples — Especially for complex fields
Element 5: Constraints
Constraints are explicit boundaries—what the agent won't do, patterns it avoids, when it escalates.
Bad:

    Be careful with security recommendations.

Good:

    ## Constraints
    - Never assume code is safe without evidence
    - Always provide proof-of-concept for vulnerabilities (but sanitized, not weaponized)
    - Don't recommend security theater (checkbox measures that don't add protection)
    - Prioritize by actual risk, not theoretical severity
    - If unsure about a finding, flag for human review rather than omitting
    - Don't analyze code outside the specified scope without asking
    - Never suggest "just disable security" as a fix
Constraint Categories
| Category | Examples |
|---|---|
| Evidence requirements | "Never guess—gather evidence first" |
| Scope limits | "Only analyze specified files" |
| Escalation triggers | "If unsure, flag for human review" |
| Anti-patterns | "Don't suggest disabling validation" |
| Output guards | "Never include actual secrets in reports" |
Complete Example: Security Auditor
Here's a full agent definition using all five elements:
    # .cursor/agents/security-auditor.md
    ---
    name: SecurityAuditor
    model: claude-sonnet-4-20250514
    description: |
      # Security Auditor

      You are a senior application security engineer specializing in
      code review for web applications. You think like an attacker
      to find vulnerabilities before they're exploited.

      ## Expertise
      - OWASP Top 10 vulnerabilities
      - Authentication and authorization flaws
      - Injection attacks (SQL, XSS, command)
      - Cryptographic weaknesses
      - Security misconfigurations
      - Secure coding patterns

      ## Process

      ### 1. Threat Modeling
      - What are the assets being protected?
      - Who are the potential attackers?
      - What are the attack surfaces?

      ### 2. Code Analysis
      - Input validation and sanitization
      - Authentication mechanisms
      - Authorization checks
      - Data handling and storage
      - Error handling and logging

      ### 3. Risk Assessment
      - Severity (Critical/High/Medium/Low)
      - Exploitability (Easy/Moderate/Difficult)
      - Impact (Data breach/Service disruption/etc.)

      ## Output Format

      ## Security Audit Report

      ### Summary
      [Overview with issue counts]

      ### Critical Issues
      1. **[Vulnerability]**
         - Location: file:line
         - Risk: [severity + impact]
         - Exploit: [how it could be attacked]
         - Fix: [remediation steps with code]

      ### Recommendations
      [Prioritized action items]

      ## Constraints
      - Never assume code is safe without evidence
      - Always provide proof-of-concept for vulnerabilities
      - Don't recommend security theater (useless measures)
      - Prioritize by actual risk, not theoretical
      - If unsure, flag for human review
    ---
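A definition like this is easy to sanity-check automatically. Below is a minimal lint sketch, assuming the conventions of the example above (a "You are a ..." role statement plus `##` headings for the other four elements); real definitions may name their sections differently:

```python
import re

# The four element headings expected in the definition body.
# Role is checked separately, since it is the opening prose.
REQUIRED_SECTIONS = ["Expertise", "Process", "Output Format", "Constraints"]

def lint_agent_definition(text: str) -> list[str]:
    """Return a list of problems found in an agent definition."""
    problems = []
    if "You are a" not in text:
        problems.append("missing role statement ('You are a ...')")
    for section in REQUIRED_SECTIONS:
        # Match the section as a markdown level-2 heading.
        if not re.search(rf"^##\s+{re.escape(section)}", text, re.MULTILINE):
            problems.append(f"missing '## {section}' section")
    return problems
```

Running something like this over your agent files before committing catches incomplete personas before they produce inconsistent output.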
Persona Patterns
Different tasks need different persona patterns. Here are the five main types:
The Specialist
Narrow expertise, deep knowledge. Best for focused analysis.
- Role: Senior security engineer / Performance optimization expert
- Expertise: Deep but narrow
- Process: Systematic, thorough
- Output: Detailed findings
- Constraints: Stays in lane
- Examples: SecurityAuditor, Optimizer, Accessibility expert
The Generalist
Broad knowledge, coordination role. Best for architecture and planning.
- Role: Principal engineer / Technical architect
- Expertise: Broad, cross-cutting
- Process: High-level, then delegates
- Output: Plans, diagrams, recommendations
- Constraints: Identifies what needs specialists
- Examples: Architect, Planner, TechLead
The Contrarian
Challenges assumptions, finds flaws. Best before major decisions.
- Role: Devil's advocate / Critical reviewer
- Expertise: Pattern recognition for failures
- Process: Question → Challenge → Stress-test
- Output: Concerns, edge cases, alternatives
- Constraints: Must provide constructive critique, not just criticism
- Examples: Critic, RiskAnalyzer
The Producer
Creates artifacts. Best for documentation, tests, content.
- Role: Technical writer / Test engineer
- Expertise: Output formats, quality standards
- Process: Gather requirements → Draft → Refine
- Output: Polished artifacts
- Constraints: Matches existing style, complete coverage
- Examples: TestEngineer, Documenter, BlogWriter
The Investigator
Gathers evidence, forms hypotheses. Best for debugging and research.
- Role: Detective / Debugger
- Expertise: Evidence gathering, hypothesis testing
- Process: Observe → Gather → Hypothesize → Test → Conclude
- Output: Findings with evidence
- Constraints: Never guess without evidence
- Examples: Debugger, Researcher, RootCauseAnalyzer
Agent Portfolio: 15 Personas
Here's a reference portfolio covering most development needs:
| Category | Agent | Pattern | Triggers On |
|---|---|---|---|
| Review | CodeReviewer | Specialist | "review", "check quality" |
| | SecurityAuditor | Specialist | "security", "vulnerabilities" |
| | Critic | Contrarian | "challenge", "critique" |
| Create | TestEngineer | Producer | "test", "coverage" |
| | Documenter | Producer | "document", "readme" |
| | BlogWriter | Producer | "blog", "article" |
| Analyze | Architect | Generalist | "design", "architecture" |
| | IntentArchitect | Generalist | Vague requirements |
| | Researcher | Investigator | "how does", "where is" |
| | Debugger | Investigator | "bug", "error", "fix" |
| | MetaAnalyzer | Investigator | Session analysis |
| Improve | Refactorer | Specialist | "refactor", "restructure" |
| | Optimizer | Specialist | "optimize", "performance" |
| | Changelog | Producer | "summarize", "changelog" |
| | Planner | Generalist | "plan", "break down" |
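The "Triggers On" column can be implemented as plain keyword matching. Here is a naive first-match sketch; the trigger phrases are abbreviated from the table, the fallback choice is my assumption, and real routing (covered in the next post) would be more careful:

```python
# Map trigger keywords to agent names, mirroring the portfolio table.
# First match wins; a fallback generalist handles everything else.
TRIGGERS = {
    "security": "SecurityAuditor",
    "vulnerab": "SecurityAuditor",
    "review": "CodeReviewer",
    "test": "TestEngineer",
    "document": "Documenter",
    "bug": "Debugger",
    "error": "Debugger",
    "refactor": "Refactorer",
    "optimize": "Optimizer",
    "plan": "Planner",
}

def route(task: str, fallback: str = "Architect") -> str:
    """Pick an agent for a task by scanning for trigger keywords."""
    lowered = task.lower()
    for keyword, agent in TRIGGERS.items():
        if keyword in lowered:
            return agent
    return fallback
```

Substring matching like this is deliberately crude ("vulnerab" catches both "vulnerability" and "vulnerabilities"), but it shows why explicit triggers per agent make automatic selection possible at all.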
Common Mistakes
1. Too Broad
    # Bad: Does everything, good at nothing
    You are an expert at coding, security, testing, documentation,
    architecture, and performance optimization.
Fix: Pick one specialty per agent. Spawn multiple agents for multi-faceted tasks.
2. No Process
    # Bad: How does it work?
    Analyze the code thoroughly and provide recommendations.
Fix: Define numbered steps with specific activities.
3. Vague Output
    # Bad: What does the output look like?
    Provide a detailed report.
Fix: Include exact markdown template with required sections.
4. Missing Constraints
    # Bad: No boundaries
    Review the code for issues.
Fix: Define what it won't do, when it escalates, anti-patterns to avoid.
5. Generic Role
    # Bad: No identity
    You are a helpful assistant for code review.
Fix: Give it a specific role with perspective and stance.
Testing Your Personas
Before deploying an agent, verify it works:
- Spawn with typical task — Does output match expected format?
- Spawn with edge case — Does it handle ambiguity per constraints?
- Check process adherence — Does it follow steps in order?
- Verify boundaries — Does it refuse out-of-scope requests?
- Compare invocations — Is output consistent across similar inputs?
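The first check, that output matches the expected format, is also the easiest to automate. A minimal sketch, assuming reports follow the Security Auditor template above (section names are taken from that example; other agents would supply their own):

```python
import re

# Required top-level sections of the Security Audit Report template.
EXPECTED_SECTIONS = ["Summary", "Critical Issues", "Recommendations"]

def check_report_format(report: str) -> list[str]:
    """Return the expected sections missing from an agent's report."""
    # Collect every markdown level-3 heading in the report.
    found = set(re.findall(r"^###\s+(.+)$", report, re.MULTILINE))
    return [s for s in EXPECTED_SECTIONS if s not in found]
```

Run the same agent twice on the same input and diff the missing-section lists: if they differ, the persona's output format is underspecified.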
More on this in Testing Artifacts.
Key Takeaways
- Five elements: Role, Expertise, Process, Output, Constraints. Skip none.
- Specific beats generic. "Senior security engineer who thinks like an attacker" beats "helpful assistant."
- Process creates consistency. Numbered steps mean predictable output.
- Output format is non-negotiable. Show the exact template.
- Constraints prevent failures. What it won't do matters as much as what it will.
- Match pattern to task. Specialists for deep work, Generalists for coordination, Contrarians for validation.
What's Next
You now know how to design effective agent personas. But how do tasks get to the right agent automatically? The next post covers smart routing—pattern matching that spawns the appropriate specialist without manual intervention.
Next: Smart Routing: Getting Tasks to the Right Agent — Automatic agent selection based on task patterns.