# Rules, Agents, Commands, MCP... WTF?

*A guide to what goes where and why — so you can stop wondering and start building.*
Rules. Commands. Agents. Skills. MCP. Prompts. If you've spent any time trying to customize your AI coding assistant, you've probably wondered: what's the difference? Where does this go? Why isn't this working?
You're not alone. The ecosystem has grown faster than the documentation. Everyone's got an opinion, half the features are in "Nightly," and the mental model isn't obvious.
This post fixes that. Here's how the pieces fit together — and more importantly, how to make them work together.
## The Mental Model (Finally)
Most devs start with rules. Some discover MCP. Few wire them together. Here's the picture:
```mermaid
flowchart TB
    A["AI ASSISTANT<br/>reasoning, synthesis"]:::primary
    R["RULES<br/>policy"]:::secondary
    C["COMMANDS<br/>workflows"]:::secondary
    G["AGENTS<br/>expertise"]:::secondary
    M["MCP<br/>data layer"]:::accent
    P["PLATFORM / API<br/>source of truth"]:::highlight
    A --> R & C & G
    R & C & G --> M
    M --> P
```
The principle: Each thing has a job. MCP is the shared data layer. Stop putting data in rules.
## "Where Does This Go?" (Decision Guide)
| You're encoding... | Where it belongs | Why |
|---|---|---|
| Naming conventions | Rule | Static, won't change |
| "Always do X before Y" | Rule | Workflow policy |
| Multi-step procedures | Command | Repeatable process |
| Deep security analysis | Agent | Needs isolated expertise |
| Available tools/agents | MCP | Changes with platform |
| Template structures | MCP | Evolves over time |
| Cross-project patterns | Skill | Portable knowledge |
| Live system state | MCP | Must stay current |
Rule of thumb: If it could become stale, it belongs in MCP. If it's a procedure, it's a command. If it needs deep focus, spawn an agent.
> "If it could become stale, it belongs in MCP."
## Rules + MCP: "What" vs "How"
Rules = how the AI should behave. MCP = what it's working with. Sounds simple. Here's where people mess it up:
### The Anti-Pattern: Data in Rules

```markdown
# Bad: Embedding data that will become stale
Available agents:
- chat_categorizer: Routes by intent
- search: Queries knowledge base
- respond: Generates responses
[... 37 more agents that will become stale ...]
```
### The Pattern: Rules Reference MCP

```markdown
# Good: Rule instructs how to get current data
When user asks about available agents:
1. Call mcp_list_agents()
2. Present results grouped by category
3. For details, call mcp_get_agent(id)

Never hardcode agent lists—they change as the platform evolves.
```
The rule defines the process. MCP provides the facts.
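The split is easy to see in code. Below is a minimal sketch in plain Python: a stand-in registry plays the platform's role behind MCP, the `mcp_*` names follow the rule above, and everything else is hypothetical.

```python
# Stand-in for the platform behind MCP -- in reality this data lives
# server-side and changes as agents are added or removed.
AGENT_REGISTRY = {
    "chat_categorizer": {"category": "routing", "description": "Routes by intent"},
    "search": {"category": "retrieval", "description": "Queries knowledge base"},
    "respond": {"category": "generation", "description": "Generates responses"},
}

def mcp_list_agents():
    """Return the *current* catalog, never a snapshot baked into a rule."""
    return [{"id": agent_id, **meta} for agent_id, meta in AGENT_REGISTRY.items()]

def mcp_get_agent(agent_id):
    """Detail lookup for one agent, as the rule's step 3 prescribes."""
    return AGENT_REGISTRY[agent_id]

def present_agents_grouped():
    """Step 2 of the rule: group live results by category."""
    grouped = {}
    for agent in mcp_list_agents():
        grouped.setdefault(agent["category"], []).append(agent["id"])
    return grouped
```

When a new agent registers, the rule text never changes; only the registry behind `mcp_list_agents()` does.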
### What Goes Where
| In Rules | In MCP |
|---|---|
| "Always validate before deploying" | Validation endpoints |
| "Use MCP-first generation flow" | Template structures |
| "Categorizers need Fallback" | Agent catalogs |
| Workflow orchestration patterns | Live system state |
| Error handling guidance | Type compatibility checks |
## Commands + MCP: Stop Hardcoding Steps
Commands are repeatable workflows. But static commands rot. Here's the difference:
### Static Command (Limited)

```markdown
# /deploy - Deploy to Environment

## Instructions
1. Run tests
2. Build
3. Deploy to staging
4. Verify health check
```
This works but doesn't adapt to context.
### Dynamic Command (MCP-Powered)

```markdown
# /deploy - Deploy to Environment

## Instructions
When the user invokes `/deploy`:

1. **Discover environment**
   - Call `mcp_env()` to get current environment
   - Call `mcp_persona(id)` to get deployment target
2. **Validate readiness**
   - Call `mcp_persona_analyze(id)` for pre-deploy checks
   - If issues found, report and stop
3. **Execute deployment**
   - Call `mcp_sync_run(id, target_env)`
   - Monitor with `mcp_sync_status(id)`
4. **Verify**
   - Call `mcp_persona(id)` in target env
   - Compare with source

## Output Format
Show sync status, any warnings, and verification results.
```
The command orchestrates MCP calls. Each run uses current data.
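The control flow of that command can be sketched in a few lines of Python. The `mcp_*` functions here are stubs standing in for real MCP tool calls; their return shapes are assumptions, and the monitoring step is elided for brevity.

```python
# Stubbed MCP tools -- a real command would invoke the actual MCP server.
def mcp_env():
    return {"current": "staging"}

def mcp_persona_analyze(persona_id):
    return []  # pre-deploy checks; an empty list means ready

def mcp_sync_run(persona_id, target_env):
    return {"status": "ok", "target": target_env}

def deploy(persona_id, target_env="production"):
    env = mcp_env()["current"]                        # 1. discover
    issues = mcp_persona_analyze(persona_id)          # 2. validate
    if issues:
        return {"deployed": False, "issues": issues}  # report and stop
    result = mcp_sync_run(persona_id, target_env)     # 3. execute
    return {"deployed": result["status"] == "ok",     # 4. verify
            "from": env, "to": result["target"]}
```

Note that the step order is fixed by the command, but every value flowing through it comes from MCP at run time.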
### Command + MCP Patterns

| Command Purpose | MCP Integration |
|---|---|
| `/checkpoint` | Check dirty state, validate, commit |
| `/review` | Fetch files, analyze, compare to standards |
| `/deploy` | Discover env, validate, sync, verify |
| `/analyze` | Fetch entity, run checks, report |
## Agents + MCP: Give Your Experts Context
Agents are specialists. But an expert without context is just guessing. MCP gives them current state without polluting their prompts.
Agent Without MCP (Limited Context)
name: SecurityAuditor
description: |
You audit code for security vulnerabilities.
Check for OWASP Top 10 issues.
# No knowledge of what's actually deployed
### Agent With MCP (Full Context)

```yaml
name: SecurityAuditor
description: |
  # Security Auditor
  You audit code and deployments for security vulnerabilities.

  ## Process
  1. Call `mcp_persona(id)` to understand what you're auditing
  2. Call `mcp_persona_workflow(id)` to see the data flow
  3. Call `mcp_persona_data(id)` to check data handling
  4. Analyze against OWASP Top 10
  5. Report findings with specific locations

  ## Available MCP Tools
  - `mcp_persona(id)` - Get persona details
  - `mcp_persona_analyze(id)` - Run platform checks
  - `mcp_action(id)` - Get action documentation

  ## Output Format
  [Security report structure...]
```
The agent's expertise is static. Its context is dynamic via MCP.
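That split (static expertise, dynamic context) can be illustrated with a small Python sketch. The `mcp_*` stubs stand in for real tool calls, and the persona data they return is hypothetical.

```python
# The expertise is written once and never goes stale.
STATIC_EXPERTISE = "You audit code and deployments for OWASP Top 10 issues."

# Stubbed MCP tools -- a real agent would call the MCP server here.
def mcp_persona(persona_id):
    return {"id": persona_id, "name": "Sales AI"}

def mcp_persona_workflow(persona_id):
    return ["ingest", "categorize", "respond"]

def build_audit_input(persona_id):
    """Assemble the dynamic half of the agent's input at runtime."""
    return {
        "expertise": STATIC_EXPERTISE,                 # static prompt
        "persona": mcp_persona(persona_id),            # current on every run
        "workflow": mcp_persona_workflow(persona_id),  # current on every run
    }
```

The agent file only ever ships the static half; the rest is fetched fresh each audit.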
### Agent + MCP Patterns
| Agent | MCP Integration |
|---|---|
| Debugger | Fetch logs, state, recent changes |
| Architect | Get current structure, constraints |
| Optimizer | Fetch metrics, current config |
| Critic | Get proposal details, compare to patterns |
## Skills + MCP: Take Your Patterns Everywhere

> **Note:** Skills are a Cursor Nightly feature. This is forward-looking.
Skills are portable knowledge modules. Same patterns, different projects. MCP adapts them to local data.
### Skill Definition

```markdown
# API Design Skill

## Capability
Analyze REST API designs against best practices.

## Process
1. Parse the API specification
2. Check naming conventions
3. Validate HTTP method usage
4. Review response structures

## Integration Points
This skill works with any MCP that provides:
- `list_endpoints()` - Available API endpoints
- `get_endpoint(path)` - Endpoint details
- `get_schema(name)` - Response schemas
```
The skill defines patterns. MCP adapts them to each platform.
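In code, portability means the skill is written against the integration points alone. A minimal Python sketch, where the provider class and its endpoint list are hypothetical:

```python
class DemoProvider:
    """Stand-in for one platform's MCP server exposing list_endpoints()."""
    def list_endpoints(self):
        # "/getUserData" bakes a verb into the path: a REST naming smell
        return ["/users", "/orders", "/getUserData"]

def check_naming(provider):
    """The skill's naming-convention pass, independent of which MCP backs it.

    Works with *any* object exposing list_endpoints(), per the skill's
    Integration Points section.
    """
    verbs = ("get", "create", "update", "delete")
    return [path for path in provider.list_endpoints()
            if any(verb in path.lower() for verb in verbs)]
```

Swap `DemoProvider` for another platform's MCP client and the check runs unchanged; only the data differs.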
### Skill + MCP Patterns
| Skill | MCP Provides |
|---|---|
| Kubernetes analysis | Cluster state, pod specs |
| API design review | Endpoint catalog, schemas |
| Security checklist | Current config, permissions |
| Testing patterns | Test coverage, failure history |
## When It All Clicks: A Real Workflow

> **Note:** MCP examples below use Ema platform tool names. Your server will differ—patterns are what matter.
Here's what it looks like when everything works together:
```mermaid
flowchart TB
    U["User: Review and deploy Sales AI to production"]:::primary
    RULE["ROUTING RULE<br/>Detects: review + deploy + production"]:::secondary
    CR["CodeReviewer<br/>mcp_persona, mcp_workflow"]:::agent
    SA["SecurityAuditor<br/>mcp_persona, mcp_analyze"]:::agent
    CT["Critic<br/>mcp_persona, mcp_action"]:::agent
    SYN["SYNTHESIS<br/>2 critical issues, 3 warnings"]:::primary
    CMD["/deploy COMMAND<br/>mcp_env → mcp_sync_run → verify"]:::accent
    RES["RESULT<br/>Sales AI deployed to production"]:::primary
    U --> RULE
    RULE --> CR & SA & CT
    CR & SA & CT --> SYN
    SYN --> CMD
    CMD --> RES
```
What each piece contributed:
- Routing Rule — Detected intent, coordinated agents
- Agents — Provided specialized analysis
- MCP — Supplied live data to every component
- Command — Executed the deployment workflow
- Main Agent — Synthesized and reported
## "Why Isn't This Working?" (Debugging)
When the AI ignores your carefully crafted setup:
### Check Rule Application

> "List the rule documents you received in the `<rules>` block this turn."

### Check MCP Availability

> "What MCP tools do you have access to? List them."

### Check Agent Selection

> "What pattern did you detect? Which agents did you consider spawning?"

### Check Command Execution

> "Walk me through the steps you took for /deploy."

### Force Loading

> "Read .cursor/rules/routing/RULE.md and follow its guidance for this task."
## The Rules (Ironically)
### 1. One Job Per Thing
| Artifact | Responsibility |
|---|---|
| Rules | Policy, constraints, routing |
| Commands | Procedures, workflows |
| Agents | Deep expertise, isolated analysis |
| Skills | Portable patterns |
| MCP | Data, state, actions |
### 2. MCP as Single Source of Truth
Never duplicate data in rules that exists in MCP. Rules reference MCP; they don't replicate it.
### 3. Composability
Artifacts should combine naturally:
- Rules route to agents
- Agents use MCP for context
- Commands orchestrate MCP calls
- Skills apply patterns to MCP data
### 4. Graceful Degradation
If MCP is unavailable:
- Rules still guide behavior
- Commands can prompt for manual input
- Agents work with provided context
- System remains useful, just less dynamic
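The fallback logic above can be sketched in a few lines. Everything here is hypothetical: the exception type, the stub that simulates an outage, and the return shapes.

```python
class MCPUnavailable(Exception):
    """Raised when the MCP server cannot be reached."""

def mcp_list_agents():
    raise MCPUnavailable("server not running")  # simulate an outage

def list_agents_or_fallback(manual_agents=None):
    """Prefer live MCP data; fall back to caller-provided context, never crash."""
    try:
        return {"source": "mcp", "agents": mcp_list_agents()}
    except MCPUnavailable:
        # Degraded but useful: the workflow continues with manual input.
        return {"source": "fallback", "agents": manual_agents or []}
```

Tagging the result with its `source` lets downstream steps warn the user that they are working from stale or manual data.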
## Sanity Checklist
### Rules: Are You Doing It Wrong?
- [ ] Contains policy, not data
- [ ] References MCP for dynamic information
- [ ] Defines routing patterns
- [ ] Specifies constraints and guardrails
### Commands: Are They Dynamic?
- [ ] Orchestrates MCP calls (not hardcoded steps)
- [ ] Handles errors gracefully
- [ ] Works with current state, not assumptions
- [ ] Clear output format
### Agents: Do They Have Context?
- [ ] Uses MCP to gather context at runtime
- [ ] Has defined expertise boundaries
- [ ] Produces structured output
- [ ] Knows when to escalate
### MCP: Is It Actually Useful?
- [ ] Provides current state (not stale snapshots)
- [ ] Supports validation endpoints
- [ ] Returns structured data
- [ ] Tool purposes are clear to the LLM
## Now What?
If you're still confused:

1. Beyond Rules — What Commands, Agents, and Skills actually do
2. The MCP Mental Model — Why MCP exists and how to think about it

If you're ready to build:

3. Agent Personas — How to make agents that aren't useless
4. Smart Routing — Auto-spawn the right agent for the task
5. MCP Tool Design — Design tools the LLM can actually use

Right now:

6. Audit your rules — Find hardcoded data that should come from MCP
7. Pick one command — Wire it to MCP instead of assumptions
8. Try the debugging prompts — See what's actually in context
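Step 6 can be partly automated. A rough heuristic sketch, assuming a typical `.cursor/rules` layout; the path and the pattern for "looks like a hardcoded catalog" are guesses you should tune:

```python
import re
from pathlib import Path

# Matches lines like "- chat_categorizer: Routes by intent" -- the shape of
# an agent catalog baked into a rule file. Heuristic only; expect noise.
CATALOG_HINT = re.compile(r"^\s*-\s+\w+:\s+\S")

def audit_rules(rules_dir=".cursor/rules"):
    """Flag rule lines that look like data MCP should be serving instead."""
    findings = []
    for path in Path(rules_dir).rglob("*.md"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if CATALOG_HINT.match(line):
                findings.append((str(path), lineno, line.strip()))
    return findings
```

Every hit is a candidate for deletion from the rule and replacement with an MCP call.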
You don't need all four artifact types on day one. Start with rules + MCP. Add commands when you have procedures. Spawn agents when you need depth. Build up as complexity demands.