# Beyond Rules: Commands, Agents, and Skills
Rules are just the beginning. Cursor provides four artifact types—and using the wrong one creates friction. This post covers what rules can't do and when to reach for Commands, Agents, or Skills instead.
## The Cursor System series
- Beyond Rules (this post) — The four artifact types
- Agent Personas — Personas that stay in character
- Smart Routing — Match tasks to specialists
- Subagents: Fresh Eyes — Context isolation and parallel work
- Autonomous Workflows — Let agents chain safely
- Testing Artifacts — Catch broken rules before they break
- Meta-Learning — Agents that learn from failures
## The Problem: Everything in Rules
Most Cursor users discover rules first. They work, so everything becomes a rule:
- Code conventions → Rule ✓
- Workflow for code review → Rule (awkward)
- Specialized debugging expertise → Rule (wrong tool)
- Reusable knowledge package → Rule (won't scale)
This creates bloated rules that mix guidance with procedures, context with expertise. The AI gets confused because you're overloading one artifact type.
The fix: Use the right artifact for the job.
## The Four Artifact Types

| Type | Purpose | Invocation | When to Use |
|---|---|---|---|
| Rules | Persistent context and guardrails | Automatic or `@mention` | Conventions, constraints, policies |
| Commands | User-triggered workflows | `/command` | Repeatable procedures, multi-step tasks |
| Agents | Specialized AI personas | Spawned by main agent | Deep expertise, isolated context |
| Skills | Portable knowledge modules | Agent decides | Cross-project knowledge, shared capabilities |
Let's break down the three you probably haven't used.
## Commands: Workflows, Not Context
Commands are the most misunderstood artifact. They look like rules but serve a completely different purpose.
### Rules vs Commands

| Aspect | Rules | Commands |
|---|---|---|
| Frontmatter | Required (YAML with description, globs) | None |
| Invocation | Automatic or `@mention` | User types `/command-name` |
| Purpose | Context injection | Action execution |
| Location | `.cursor/rules/` | `.cursor/commands/` |
**Critical mistake:** Don't put YAML frontmatter in commands. They're plain markdown with a specific structure.
### Command Structure

```markdown
# /command-name - Brief Description

One-line summary of what this command does.

## Instructions

When the user invokes `/command-name`, do the following:

1. First step
2. Second step
3. Third step

### Default Behavior

What happens with no arguments.

## Variants

### `/command-name --flag`

What this variant does differently.

### `/command-name <target>`

How arguments are handled.

## Output Format

Expected output structure.

## Examples

### Basic Usage

User: /command-name
Output: [what happens]
```
### When to Use Commands
Commands excel at repeatable procedures:
- Code review workflow: Analyze → identify issues → suggest fixes → format report
- Checkpoint process: Clean up → validate → commit with message
- Debug flow: Gather evidence → form hypotheses → test → fix
- Documentation generation: Scan code → extract patterns → generate docs
If you find yourself giving the same multi-step instructions repeatedly, that's a command.
### Example: The `/checkpoint` Command

```markdown
# /checkpoint - Commit Current Work

Clean up, validate, and commit changes with a descriptive message.

## Instructions

When the user invokes `/checkpoint`:

1. **Clean up**
   - Remove debug statements (console.log, print, debugger)
   - Fix obvious formatting issues
   - Ensure no commented-out code blocks
2. **Validate**
   - Run linter (report but don't block on warnings)
   - Check for obvious errors
   - Verify imports are used
3. **Commit**
   - Stage relevant changes
   - Generate commit message following conventional commits
   - Show message for approval before committing

### Default Behavior

Processes all modified files in the current working directory.

## Variants

### `/checkpoint --message "specific message"`

Use provided message instead of generating one.

### `/checkpoint --amend`

Amend the previous commit (only if not pushed).

## Output Format

## Checkpoint Summary

### Cleaned
- Removed 3 console.log statements
- Fixed 2 formatting issues

### Validated
- ✓ Linter passed
- ✓ No obvious errors

### Committed
Message: "feat(auth): add password validation"
Files: 3 changed (+45, -12)
```
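The "clean up" step above can be sketched in plain Python. This is purely illustrative — Cursor's agent performs the cleanup with its own editing tools, and the patterns here are a deliberately naive assumption (for example, they would strip every `print` call, not just debug ones):

```python
import re

# Illustrative sketch of the /checkpoint "clean up" step: drop common
# debug statements before committing. Naive by design: it matches any
# whole-line console.log/print/debugger statement.
DEBUG_PATTERNS = [
    re.compile(r"^\s*console\.log\(.*\);?\s*$"),  # JavaScript logging
    re.compile(r"^\s*print\(.*\)\s*$"),           # Python (removes ALL prints)
    re.compile(r"^\s*debugger;?\s*$"),            # JS breakpoint statement
]

def strip_debug_lines(source: str) -> tuple[str, int]:
    """Return the cleaned source and how many lines were removed."""
    kept, removed = [], 0
    for line in source.splitlines():
        if any(p.match(line) for p in DEBUG_PATTERNS):
            removed += 1
        else:
            kept.append(line)
    return "\n".join(kept), removed
```

The count of removed lines is what would feed the "Removed N console.log statements" line in the checkpoint summary.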
## Agents (Subagents): Specialized Expertise
Subagents are separate AI instances with isolated context. Think of them as specialist consultants you can spawn for specific tasks. The official Cursor documentation calls them "subagents" because they're delegated to by the main agent.
### Why Subagents Matter

The main AI assistant is a generalist. It tries to be good at everything, which means it's not great at anything specific. Subagents let you:

- **Start fresh**: Subagents begin with clean context—no anchoring on failed attempts or accumulated assumptions
- **Isolate expertise**: A security auditor thinks differently than a refactoring specialist
- **Preserve main context**: A subagent's work doesn't pollute your main conversation
- **Run parallel work**: Multiple subagents can investigate simultaneously
- **Verify skeptically**: A verifier subagent hasn't been part of the journey, so it questions everything
For deep coverage of context isolation, verification patterns, and parallel execution, see Subagents: Fresh Eyes on Demand.
### Agent Structure

```markdown
# .cursor/agents/debugger.md
---
name: Debugger
model: claude-sonnet-4-20250514
description: |
  # Debugger Agent

  You systematically diagnose bugs through hypothesis-driven investigation.

  ## Role

  Find root causes, not symptoms. Fix bugs permanently.

  ## Expertise

  - Error message interpretation
  - Stack trace analysis
  - Hypothesis generation
  - Root cause analysis

  ## Process

  1. Understand the bug (expected vs actual)
  2. Gather evidence (logs, stack traces, recent changes)
  3. Form hypotheses (ranked by likelihood)
  4. Test hypotheses (isolate, add logging)
  5. Fix and verify (test, add regression prevention)

  ## Output Format

  ## Bug Analysis

  ### Symptoms
  [What's happening]

  ### Root Cause
  [Why it's happening]

  ### Fix
  [Solution]

  ### Prevention
  [Regression test or guard]

  ## Constraints

  - Never guess—gather evidence first
  - Fix root cause, not symptoms
  - Always add regression test
---
```

**Key insight:** The entire agent prompt lives in the `description` field. When spawned, this becomes its system prompt.
### Spawning Agents

```
# Spawn for specific task
/spawn debugger "This endpoint returns 500 intermittently"

# Spawn in background (continues while you work)
/spawn --background test-engineer "Generate integration tests for UserService"

# Spawn multiple in parallel
/spawn --parallel \
  "security: check auth flow for vulnerabilities" \
  "quality: review code patterns in new module"
```
### When to Use Agents
| Situation | Why Agent |
|---|---|
| Security-sensitive review | Isolated context, stricter constraints |
| Deep debugging | Focused expertise, doesn't get distracted |
| Parallel investigation | Multiple angles simultaneously |
| Specialized writing | Different voice/style than main assistant |
| Challenge existing decisions | Critic agent with contrarian perspective |
## Skills: Portable Knowledge
Skills are reusable capability packages that follow the Agent Skills open standard. Unlike rules (repo-specific) or agents (specialized personas), skills are portable knowledge modules that work across any compatible tool—Cursor, Claude Code, VS Code, Gemini CLI, and many others.
### When Skills Beat Rules
| Rules | Skills |
|---|---|
| Repo-specific conventions | Cross-project patterns |
| Work in Cursor only | Work in any compatible tool |
| Static guidance | Versioned, updatable |
| Team-internal | Community shareable |
| Context injection | Can execute scripts |
### Skill Locations

Skills are discovered from multiple locations:

| Location | Scope |
|---|---|
| `.cursor/skills/` | Project-level |
| `~/.cursor/skills/` | User-level (global) |
| `.claude/skills/`, `.codex/skills/` | Cross-tool compatibility |
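To make the lookup concrete, here is a minimal sketch of that discovery order, assuming only the directory names listed above. The actual resolution logic is internal to Cursor; this just illustrates scanning each root for skill folders:

```python
from pathlib import Path

# Illustrative sketch of skill discovery: scan the known roots for
# folders that contain a SKILL.md. Earlier roots win on name conflicts.
SKILL_ROOTS = [
    Path(".cursor/skills"),           # project-level
    Path.home() / ".cursor/skills",   # user-level (global)
    Path(".claude/skills"),           # cross-tool compatibility
    Path(".codex/skills"),
]

def discover_skills() -> dict[str, Path]:
    """Map skill name -> path of its SKILL.md manifest."""
    found: dict[str, Path] = {}
    for root in SKILL_ROOTS:
        if not root.is_dir():
            continue
        for manifest in sorted(root.glob("*/SKILL.md")):
            # setdefault keeps the first (highest-priority) match
            found.setdefault(manifest.parent.name, manifest)
    return found
```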
### Skill Structure

Each skill is a folder with a `SKILL.md` file. Skills can also include scripts, references, and assets:

```
.cursor/skills/api-analysis/
├── SKILL.md          # Required
├── scripts/          # Optional: executable code
│   └── analyze.py
└── references/       # Optional: additional docs
    └── patterns.md
```
The `SKILL.md` frontmatter is simpler than you might expect:

```markdown
# .cursor/skills/api-analysis/SKILL.md
---
name: api-analysis
description: |
  Analyze REST API designs for consistency and best practices.
  Use when reviewing API endpoints, OpenAPI specs, or route definitions.
  Triggers: "API review", "check endpoints", "REST patterns"
---

# API Analysis Skill

## Capability

Analyze REST API designs against established best practices.

## Process

1. Parse the API specification
2. Check naming conventions (resources, actions)
3. Validate HTTP method usage
4. Review response structures
5. Identify missing patterns (pagination, errors, versioning)

## Quality Criteria

- Consistent naming (plural nouns for collections)
- Proper HTTP verbs (GET reads, POST creates, PUT replaces, PATCH updates)
- Envelope responses with `data` and `error` fields
- Pagination for list endpoints
- Meaningful error codes
```

**Key insight:** The `description` field is how agents decide relevance. Make it rich with trigger phrases and use cases.
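What goes inside the bundled `scripts/analyze.py` is entirely up to the skill author. As a sketch (the file name comes from the folder tree above; everything inside it is hypothetical), it might enforce one of the skill's quality criteria, such as plural nouns for collection resources:

```python
# Hypothetical sketch of scripts/analyze.py: check one quality
# criterion from the skill (plural nouns for collections). The
# "ends with s" heuristic is a simplification for illustration.
def check_collection_naming(paths: list[str]) -> list[str]:
    """Flag top-level collection segments that don't look plural."""
    issues = []
    for path in paths:
        # Ignore path parameters like {id}
        segments = [s for s in path.split("/") if s and not s.startswith("{")]
        if segments and not segments[0].endswith("s"):
            issues.append(f"{path}: collection '{segments[0]}' should be plural")
    return issues
```

Because skills can execute code, an agent could run a script like this over route definitions instead of reasoning about every endpoint in prose.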
### Frontmatter Fields

| Field | Required | Purpose |
|---|---|---|
| `name` | Yes | Identifier (must match folder name) |
| `description` | Yes | Discovery—what it does, when to use it |
| `disable-model-invocation` | No | When true, only invoked via `/skill-name` |
| `license` | No | License for shared skills |
### When to Use Skills
- Kubernetes knowledge that applies across all your projects
- API design standards shared with your team
- Testing patterns that work in any language
- Security checklists maintained by your security team
- Deployment scripts that agents can execute
For deep coverage of skill discovery mechanics, see How Cursor Finds Skills.
## The Decision Framework

```mermaid
flowchart TB
    Q{"What do you need?"}:::primary
    Q --> RULE["RULE<br/>guidance"]:::accent
    Q --> CMD["COMMAND<br/>workflow"]:::accent
    Q --> AGENT["AGENT<br/>expertise"]:::accent
    Q --> SKILL["SKILL<br/>portable"]:::accent
```
### Quick Reference
| You want... | Use |
|---|---|
| "Always use camelCase" | Rule |
| "Review code, then commit" | Command |
| "Deep security analysis" | Agent |
| "Kubernetes best practices everywhere" | Skill |
| "Don't commit .env files" | Rule |
| "Debug this systematically" | Agent |
| "Generate changelog from commits" | Command |
| "API design standards for all repos" | Skill |
### Directory Structure

```
.cursor/
├── rules/            # Context and guardrails
│   ├── naming/
│   │   └── RULE.md
│   ├── security/
│   │   └── RULE.md
│   └── autonomous-workflows/
│       └── RULE.md
├── commands/         # User-triggered workflows
│   ├── checkpoint.md
│   ├── review.md
│   ├── debug.md
│   └── analyze.md
├── agents/           # Specialized personas
│   ├── debugger.md
│   ├── security-auditor.md
│   ├── code-reviewer.md
│   └── architect.md
└── skills/           # Portable knowledge (Agent Skills standard)
    ├── api-analysis/
    │   ├── SKILL.md
    │   └── scripts/
    │       └── analyze.py
    └── kubernetes/
        └── SKILL.md

# Global skills (cross-project)
~/.cursor/skills/
├── my-patterns/
│   └── SKILL.md
└── team-standards/
    └── SKILL.md
```
Skills also work from `.claude/skills/` and `.codex/skills/` for cross-tool compatibility.
## Key Takeaways

- **Rules for context, commands for action.** Rules tell the AI how to behave. Commands tell it what to do.
- **Commands have no frontmatter.** This is the most common mistake. Commands are plain markdown with a specific structure.
- **Agents are isolated experts.** Spawn them for deep work that needs focus or different constraints.
- **Skills are an open standard.** Agent Skills work across Cursor, Claude Code, VS Code, and many other tools. Skills you create are portable.
- **Skills can execute code.** Unlike rules (guidance) or commands (procedures), skills can include scripts that agents run.
- **Match the artifact to the need.** Using rules for everything creates bloated, confusing guidance.
## What's Next
Now that you know when to use each artifact, the next post covers how to design effective agents—moving beyond "helpful assistant" to specialized personas that produce consistent, high-quality output.
Next: Designing Agent Personas That Actually Work — The five elements of effective agent design.