How Cursor Finds Skills
You created a custom skill. You tested it. It works when you invoke it directly. But when you describe what you want naturally, the AI ignores it. The skill exists—but it's invisible. Here's why.
Configuring Your AI Assistant series
- Rules vs Skills — When to use each
- The alwaysApply Tax — The hidden cost of always-on rules
- How Cursor Finds Skills (this post) — Discovery mechanics
Agent Skills: The Open Standard
Before diving into discovery mechanics, understand what skills are: Agent Skills is an open standard for extending AI agents with specialized capabilities. Skills work across any compatible tool—Cursor, Claude Code, VS Code, Gemini CLI, Goose, and many others.
This matters because skills you create for Cursor work in other tools, and skills from the community work in Cursor. It's not a proprietary format.
Key characteristics:
| Trait | What It Means |
|---|---|
| Portable | Work across any agent that supports the standard |
| Version-controlled | Stored as files, tracked in your repo or installed from GitHub |
| Executable | Can include scripts the agent runs |
| Progressive | Resources load on demand, keeping context efficient |
Where Skills Live
Cursor automatically discovers skills from multiple locations:
| Location | Scope |
|---|---|
| `.cursor/skills/` | Project-level |
| `.claude/skills/` | Project-level (Claude compatibility) |
| `.codex/skills/` | Project-level (Codex compatibility) |
| `~/.cursor/skills/` | User-level (global) |
| `~/.claude/skills/` | User-level (global, Claude compatibility) |
| `~/.codex/skills/` | User-level (global, Codex compatibility) |
Global skills (`~/.cursor/skills/`) apply across all your projects. Put cross-project knowledge there—Kubernetes patterns, API design standards, your personal workflows.

Project skills (`.cursor/skills/`) are repo-specific. Put team workflows there—your deployment process, code review standards.
Skill Directory Structure
Each skill is a folder containing a SKILL.md file:
```
.cursor/
└── skills/
    └── deploy-app/
        ├── SKILL.md              # Required: main instructions
        ├── scripts/              # Optional: executable code
        │   ├── deploy.sh
        │   └── validate.py
        ├── references/           # Optional: additional docs (loaded on demand)
        │   └── REFERENCE.md
        └── assets/               # Optional: templates, configs
            └── config-template.json
```
The optional directories matter:
| Directory | Purpose | When Loaded |
|---|---|---|
| `scripts/` | Executable code agents can run | When skill executes |
| `references/` | Detailed documentation | On demand (progressive) |
| `assets/` | Templates, configs, data files | When referenced |
Progressive loading is key: agents read SKILL.md first, then load references/ only when needed. This keeps context efficient.
How Discovery Actually Works
When Cursor starts, it discovers skills from skill directories and presents them to the agent. The agent then decides when skills are relevant based on context.
This is the critical insight: the agent decides. Your skill's description field is how the agent determines relevance.
The Discovery Flow
- Cursor scans skill directories at startup
- Skills appear in Settings → Rules → "Agent Decides" section
- When user sends a message, agent evaluates available skills
- Agent matches user intent to skill descriptions
- Relevant skills are loaded into context
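The scan in step 1 can be sketched in a few lines. This is an illustration of the directory walk, not Cursor's actual implementation, and the precedence rule when two skills share a name (first directory wins) is an assumption here:

```python
from pathlib import Path

# Directories Cursor scans, project-level first, then user-level
# (see the "Where Skills Live" table above).
SKILL_ROOTS = [
    Path(".cursor/skills"),
    Path(".claude/skills"),
    Path(".codex/skills"),
    Path.home() / ".cursor/skills",
    Path.home() / ".claude/skills",
    Path.home() / ".codex/skills",
]

def discover_skills(roots=SKILL_ROOTS):
    """Return {skill_name: path to SKILL.md} for every valid skill folder."""
    found = {}
    for root in roots:
        if not root.is_dir():
            continue
        for folder in sorted(root.iterdir()):
            manifest = folder / "SKILL.md"
            if folder.is_dir() and manifest.is_file():
                found.setdefault(folder.name, manifest)  # assumed: first root wins
    return found
```

A folder without a `SKILL.md` is simply not a skill, which is why an empty directory never shows up in Settings.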
Skills can also be manually invoked by typing `/skill-name` in chat—this bypasses discovery and always works.
The SKILL.md Frontmatter
Every skill needs a SKILL.md file with YAML frontmatter. Here's the complete spec from the official documentation:
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Skill identifier. Lowercase letters, numbers, hyphens only. Must match folder name. |
| `description` | Yes | What the skill does and when to use it. This is how the agent decides relevance. |
| `license` | No | License name or reference to bundled license file. |
| `compatibility` | No | Environment requirements (system packages, network access, etc.). |
| `metadata` | No | Arbitrary key-value mapping for categorization and additional data. |
| `disable-model-invocation` | No | When `true`, only invoked via `/skill-name`. Agent won't auto-apply. |
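A quick lint against the two required fields and the name rules catches most structural mistakes before you restart Cursor. A minimal sketch: regex extraction of the frontmatter rather than a full YAML parser, and `validate_skill` is a hypothetical helper, not part of any Cursor API:

```python
import re
from pathlib import Path

# Name rule from the spec table: lowercase letters, numbers, hyphens only.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_skill(skill_dir):
    """Return a list of problems with a skill folder (empty list = looks valid)."""
    skill_dir = Path(skill_dir)
    manifest = skill_dir / "SKILL.md"
    if not manifest.is_file():
        return ["missing SKILL.md"]
    text = manifest.read_text(encoding="utf-8")
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["no YAML frontmatter block"]
    front = match.group(1)
    problems = []
    name = re.search(r"^name:\s*(\S+)", front, re.MULTILINE)
    if not name:
        problems.append("missing required field: name")
    else:
        if not NAME_RE.match(name.group(1)):
            problems.append(f"invalid name {name.group(1)!r}: lowercase/digits/hyphens only")
        if name.group(1) != skill_dir.name:
            problems.append(f"name {name.group(1)!r} does not match folder {skill_dir.name!r}")
    if not re.search(r"^description:", front, re.MULTILINE):
        problems.append("missing required field: description")
    return problems
```

Run it over a skill folder before debugging discovery; an empty result means the structure at least matches the spec.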
Real Example: The ship Skill
From my global skills at ~/.cursor/skills/ship/SKILL.md:
```yaml
---
name: ship
description: |
  Prepare and ship code for review via pull request.
  Use when user asks to: ship code, create PR, prepare pull request,
  push and create PR, ready to merge, open PR.
  Proactively suggest when: feature is complete, all tests pass,
  code has been reviewed.
  Triggers: "ship it", "create PR", "prepare PR", "ready to merge",
  "open pull request", "push and PR", "let's ship", "send it"
compatibility: Requires git and gh (GitHub CLI)
metadata:
  category: workflow
---
```
Notice:
- `description` includes what, when, and trigger phrases
- `compatibility` tells the agent (and user) what tools are required
- `metadata` categorizes the skill for organization
Real Example: The threat-model Skill
```yaml
---
name: threat-model
description: |
  Perform threat modeling using STRIDE methodology.
  Use when user asks to: threat model, security analysis, what could go wrong,
  attack vectors, security risks, STRIDE analysis, trust boundaries.
  Proactively apply when: designing auth systems, handling sensitive data,
  new integrations, API design, data flow changes.
  Triggers: "threat model", "what could go wrong?", "attack vectors",
  "security risks", "STRIDE", "trust boundaries", "threat analysis",
  "security assessment", "what are the threats?"
metadata:
  category: planning
---
```
The `metadata.category` field lets me organize my 39 global skills by purpose: workflow, planning, quality, etc.
The Description Field: Make or Break
The description field is the single most important part of your skill. It's how the agent decides whether your skill is relevant to what the user is asking.
Why Most Skills Are Invisible
```yaml
# BAD: Invisible to discovery
description: "Helps with stuff"

# BAD: Too vague
description: "Project setup helper"

# BAD: Technical jargon only
description: "Executes CI/CD pipeline orchestration"
```
These descriptions don't match how users talk. When someone says "set up my project," the agent can't connect that to "CI/CD pipeline orchestration."
The Anatomy of a Good Description
A discoverable description has four parts:
| Part | Purpose | Example |
|---|---|---|
| What it does | Core capability (one sentence) | "Prepare and ship code for review via pull request." |
| When to use | User intent matching | "Use when user asks to: ship code, create PR, prepare pull request" |
| Proactive triggers | Auto-suggestion conditions | "Proactively suggest when: feature is complete, all tests pass" |
| Trigger phrases | Explicit keywords | "Triggers: 'ship it', 'create PR', 'ready to merge', 'send it'" |
Write in Third Person
The description is injected into the agent's context. Write it as a statement about the skill, not as "I" or "you":
```yaml
# ✅ Good: Third person
description: "Processes Excel files and generates reports"

# ❌ Bad: First person
description: "I can help you process Excel files"

# ❌ Bad: Second person
description: "You can use this to process Excel files"
```
Include Natural Language Triggers
Users don't say "invoke the threat modeling skill." They say "what could go wrong with this?" or "is this secure?"
```yaml
description: |
  Perform threat modeling using STRIDE methodology.
  ...
  Triggers: "threat model", "what could go wrong?", "attack vectors",
  "is this safe?", "security risks", "what are the threats?"
```
The phrase "what could go wrong?" is gold—it matches how people actually ask about security.
The compatibility Field
The compatibility field documents environment requirements. The agent sees this and can warn users or adjust behavior.
```yaml
# Network access required
compatibility: Requires network access to fetch dependencies

# Specific tools required
compatibility: Requires git and gh (GitHub CLI)

# System packages
compatibility: Requires Python 3.10+ and pdfplumber package

# Multiple requirements
compatibility: |
  Requires:
  - Node.js 18+
  - Docker
  - AWS CLI configured with credentials
```
This serves two purposes:
- Agent awareness: The agent knows the skill might fail if requirements aren't met
- User documentation: Users can see what's needed before invoking
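You can approximate the agent-awareness check locally with Python's standard library: look up each required tool on `PATH` before invoking the skill. The tool list here is read by a human from the `compatibility` string, not parsed out of it automatically:

```python
import shutil

def missing_tools(required):
    """Return the subset of required CLI tools not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# For the ship skill above: "Requires git and gh (GitHub CLI)".
missing = missing_tools(["git", "gh"])
if missing:
    print(f"skill may fail, missing tools: {', '.join(missing)}")
```

The same check is worth running in CI for team skills, so a missing `gh` surfaces before someone's deployment fails halfway through.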
The metadata Field
The metadata field is an arbitrary key-value mapping for additional information. The official spec doesn't prescribe what goes here—it's flexible.
Common Uses
```yaml
# Categorization
metadata:
  category: workflow

# Team ownership
metadata:
  owner: platform-team
  slack: "#platform-support"

# Version tracking
metadata:
  version: "2.1.0"
  lastUpdated: "2026-01-15"

# Multiple tags
metadata:
  category: security
  compliance: [SOC2, HIPAA]
  reviewRequired: true
```
I use category to organize my 39 global skills:
| Category | Skills |
|---|---|
| `workflow` | ship, checkpoint, review, preflight |
| `planning` | plan, threat-model, adr |
| `quality` | test, validate, hygiene |
| `context` | init-project, handoff, status |
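This kind of inventory is easy to automate. A sketch that buckets skill folders under a directory by the `category` value in their frontmatter; it uses naive regex extraction rather than a YAML parser, and `skills_by_category` is an illustrative helper, not a Cursor feature:

```python
import re
from collections import defaultdict
from pathlib import Path

def skills_by_category(root):
    """Map metadata.category -> [skill names] for skills under root."""
    groups = defaultdict(list)
    for manifest in sorted(Path(root).glob("*/SKILL.md")):
        text = manifest.read_text(encoding="utf-8")
        match = re.search(r"^\s*category:\s*(\S+)", text, re.MULTILINE)
        category = match.group(1) if match else "uncategorized"
        groups[category].append(manifest.parent.name)
    return dict(groups)
```

Point it at `~/.cursor/skills/` and the `uncategorized` bucket tells you which skills still need a `metadata.category`.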
The disable-model-invocation Option
By default, skills auto-apply when the agent determines they're relevant. Set disable-model-invocation: true to make a skill behave like a traditional slash command—only included when explicitly typed.
```yaml
---
name: dangerous-operation
description: "Performs destructive database operations"
disable-model-invocation: true  # Must be explicitly invoked
---
```
Use this for:
- Dangerous operations: Database drops, production deployments
- Expensive operations: API calls that cost money
- Explicit workflows: Tasks that should never auto-trigger
When migrating from slash commands, Cursor's `/migrate-to-skills` sets this automatically.
Progressive Disclosure: Keep SKILL.md Lean
The main SKILL.md should be under 500 lines. Every token competes for context space with conversation history, other skills, and user requests.
The Default Assumption
The agent is already very smart. Only add context it doesn't already have.
Challenge each paragraph:
- "Does the agent really need this explanation?"
- "Can I assume the agent knows this?"
- "Does this justify its token cost?"
Use References for Detail
Put essential information in SKILL.md; move detailed reference material to separate files:
```markdown
# Code Review

## Quick Start
[Essential instructions - 50 lines]

## Checklist
[Core checklist - 20 lines]

## Additional Resources
- For detailed coding standards, see [references/STANDARDS.md](references/STANDARDS.md)
- For example reviews, see [references/examples.md](references/examples.md)
```
The agent reads SKILL.md immediately but only loads references/ when needed. This is progressive disclosure—keep the main file focused, let detail load on demand.
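A small audit script can enforce both habits: flag a SKILL.md that exceeds the 500-line budget and list which `references/` files it links. A sketch assuming relative markdown links like the example above; `audit_skill_md` is a hypothetical helper:

```python
import re
from pathlib import Path

def audit_skill_md(path, line_budget=500):
    """Return (line_count, over_budget, referenced files) for a SKILL.md."""
    text = Path(path).read_text(encoding="utf-8")
    line_count = len(text.splitlines())
    # Relative markdown links into references/, e.g. [x](references/STANDARDS.md)
    refs = re.findall(r"\]\((references/[^)]+)\)", text)
    return line_count, line_count > line_budget, refs
```

A skill that is over budget and links no references is the usual suspect: detail that belongs in `references/` is sitting in the main file.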
Keep References One Level Deep
Link directly from SKILL.md to reference files. Deeply nested references (references linking to other references) may result in partial reads.
Testing Your Skills
Test 1: Direct Invocation
```
/my-skill
```
Does it work? If not, check:
- `SKILL.md` syntax (valid YAML frontmatter)
- `name` matches folder name
- Skill appears in Settings → Rules → "Agent Decides"
Test 2: Intent Matching
Without using the skill name, express intent that should trigger it:
"I need to [thing your skill does]"
Does the AI use your skill? If not, improve your description.
Test 3: Natural Language Triggers
Test with the actual phrases users would say:
"What could go wrong with this?" → Should trigger threat-model
"Let's ship it" → Should trigger ship
"Set up this project" → Should trigger init-project
Test 4: Check Discovery
Ask the AI directly:
"What skills do you have for security?"
"What skills can help me deploy?"
Is your skill listed? If not, your description doesn't match the topic.
Common Patterns
Pattern 1: The Workflow Skill
Skills that orchestrate multi-step processes:
```markdown
---
name: ship
description: |
  Prepare and ship code for review via pull request.
  Use when user asks to: ship code, create PR, prepare pull request.
  Triggers: "ship it", "create PR", "ready to merge", "send it"
compatibility: Requires git and gh (GitHub CLI)
metadata:
  category: workflow
---

# Ship

## Philosophy
- Thorough: Run all checks before shipping
- Documented: PRs tell the story
- Safe: No shipping broken code

## Default Behavior
1. Check uncommitted changes
2. Run preflight checks
3. Generate PR summary
4. Create PR
5. Report URL
```
Pattern 2: The Domain Expert Skill
Skills that bring specialized knowledge:
```yaml
---
name: threat-model
description: |
  Perform threat modeling using STRIDE methodology.
  Use when user asks to: threat model, security analysis, what could go wrong.
  Proactively apply when: designing auth systems, handling sensitive data.
  Triggers: "what could go wrong?", "attack vectors", "is this safe?"
metadata:
  category: planning
---
```
Include the domain framework (STRIDE) in the description—users might search by methodology.
Pattern 3: The Bootstrap Skill
Skills that create things that don't exist yet:
```yaml
---
name: init-project
description: |
  Initialize project directories (.cursor/ and .context/).
  Use when user asks to: set up project, initialize project, bootstrap project.
  Proactively suggest when: new project, no .cursor/ or .context/ exists.
  Triggers: "init project", "set up project", "bootstrap", "create context"
metadata:
  category: context
---
```
Key insight: These skills can't rely on glob patterns because the files don't exist yet. Description-based discovery is essential.
Viewing and Managing Skills
View Discovered Skills
- Open Cursor Settings (Cmd+Shift+J / Ctrl+Shift+J)
- Navigate to Rules
- Skills appear in the Agent Decides section
If your skill doesn't appear:
- Check folder structure (`skill-name/SKILL.md`)
- Verify `name` field matches folder name
- Restart Cursor (skill discovery happens at startup)
Install Skills from GitHub
Community skills can be imported:
- Open Cursor Settings → Rules
- In Project Rules, click Add Rule
- Select Remote Rule (Github)
- Enter the repository URL
Migrate Existing Rules to Skills
Cursor 2.4+ includes `/migrate-to-skills`:

```
/migrate-to-skills
```
It converts:
- Dynamic rules (rules with `alwaysApply: false` and no `globs`) → standard skills
- Slash commands → skills with `disable-model-invocation: true`
Rules with alwaysApply: true or specific glob patterns aren't migrated—they have explicit triggering conditions that differ from skill behavior.
Debugging Checklist
When your skill doesn't activate:
| Check | How |
|---|---|
| Syntax | Is YAML frontmatter valid? |
| Name match | Does name match folder name exactly? |
| Description | Is it specific with trigger phrases? |
| Visibility | Does it appear in Settings → Rules? |
| Direct invoke | Does /skill-name work? |
| Intent match | Does natural language trigger it? |
| Conflicts | Is another skill matching first? |
Common Failures
| Symptom | Likely Cause | Fix |
|---|---|---|
| Never activates | Description too vague | Add specific trigger phrases |
| Works direct, not natural | No intent matching | Improve description with user language |
| Doesn't appear in settings | Invalid structure | Check folder/name match |
| Wrong skill activates | Conflicting descriptions | Make descriptions more specific |
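One way to reproduce the "wrong skill activates" failure offline is a naive keyword overlap between the user message and each description. This is far cruder than the model's actual relevance judgment, but when two skills tie here, their real descriptions usually overlap too:

```python
import re

def match_scores(message, descriptions):
    """Rank skills by word overlap between the message and each description.

    A crude stand-in for the agent's decision: real agents use the LLM itself,
    not keyword overlap. Useful only for spotting conflicting trigger phrases.
    """
    words = set(re.findall(r"[a-z']+", message.lower()))
    scores = {}
    for name, desc in descriptions.items():
        desc_words = set(re.findall(r"[a-z']+", desc.lower()))
        scores[name] = len(words & desc_words)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

If two skills score identically for the phrases you care about, sharpen one description until the ranking separates.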
Key Takeaways
- Skills are an open standard. Agent Skills work across Cursor, Claude Code, VS Code, and many other tools.
- The agent decides relevance. Your `description` field is how it chooses—make it specific and natural.
- Use `compatibility` for requirements. Document what tools and environment the skill needs.
- Use `metadata` for organization. Categorize skills, track ownership, add custom data.
- Keep SKILL.md under 500 lines. Use `references/` for detailed documentation.
- Write descriptions in third person. Include what, when, and natural trigger phrases.
- Test with natural language. Direct invocation isn't the real test—users express intent naturally.
Further Reading
- Agent Skills Specification — The open standard
- Cursor Skills Documentation — Cursor-specific details
- Example Skills — Community examples on GitHub
Series Wrap-Up
This series covered how to configure your AI assistant effectively:
- Rules vs Skills — Rules for reference, skills for action
- The alwaysApply Tax — The cost of always-on context
- How Cursor Finds Skills (this post) — Discovery mechanics and the open standard
The principles:
- Context has cost. Be deliberate about what's always-on.
- Match artifacts to purpose. Rules for what/when, skills for how.
- Optimize for discovery. The AI can only use what it can find.
- Think portable. Skills you create work beyond Cursor.
For a practical guide to auditing and optimizing your existing configuration, see Audit Your AI Config.
Related: Audit Your AI Config — Step-by-step cleanup guide