Once you’ve installed a skill, the question stops being “does it work?” and starts being “does it ever fire?” That’s the routing decision — the moment the agent reads your message, scans the list of available skills, and picks zero or one of them to invoke. If you’ve shipped a skill that the agent never reaches for, the bug is almost always in routing, not in the body.
This guide opens up that decision: what the model is actually looking at, what it’s comparing against, and the patterns that consistently move a skill from “never fires” to “fires every time it should.”
What the model sees at routing time
For each installed skill, the agent has access to two strings:
- name — the slug, lowercase-hyphenated
- description — the field from the `SKILL.md` frontmatter
That’s it. The body of SKILL.md is not in front of the model at routing time. Neither are the scripts, templates, or reference docs. If the routing signal isn’t in the description, the skill won’t fire — no matter how good the implementation behind it is.
Mental model. Routing is a classification task over a list of {name, description} pairs. Treat the description as the only label the model gets to vote with.
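Concretely, the routing surface is just the frontmatter. A hypothetical example (the `pdf-forms` name and wording are invented for illustration, not taken from any real skill):

```yaml
---
name: pdf-forms
description: >
  Use this skill when the user wants to fill, flatten, or extract
  fields from PDF forms (.pdf). Do NOT use this for generating new
  PDFs from scratch.
---
```

Everything below that closing `---` — instructions, scripts, examples — is invisible until after the skill is picked.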
The decision the model is making
When you send a user message, the agent runs a roughly three-step routing loop:
- Survey. Read the user’s message + the current conversation context.
- Match. For each installed skill, score how well the description matches the user’s intent.
- Pick. If one score is decisively higher than the rest, invoke that skill. If two are close, prefer the more specific one. If none rise above noise, don’t invoke any.
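The loop above can be sketched in a few lines. The real model's intent-matching is opaque; the `keyword_score` stand-in below is a deliberately crude proxy, there only to make the survey → match → pick shape concrete:

```python
def keyword_score(message, description):
    """Crude stand-in for the model's intent matching: fraction of
    description words that literally appear in the message. The real
    scoring is opaque; this is illustration only."""
    msg = set(message.lower().split())
    desc = set(description.lower().split())
    return len(msg & desc) / max(len(desc), 1)

def route(message, skills, noise_floor=0.05):
    """Survey -> match -> pick. `skills` is a list of
    {"name": ..., "description": ...} dicts -- the only data the
    model has at routing time. Returns a skill name or None."""
    scored = sorted(
        ((keyword_score(message, s["description"]), s["name"]) for s in skills),
        reverse=True,
    )
    best_score, best_name = scored[0]
    if best_score <= noise_floor:
        return None  # nothing rises above noise: invoke no skill
    return best_name
```

Even this toy version exhibits the failure modes below: a description with no literal overlap with the user's phrasing scores zero, and two near-identical descriptions split the score between them.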
Three things follow from this:
- A skill with a vague description loses to a skill with a specific one — even if the vague one is technically the right choice
- Two skills with overlapping descriptions both lose, because neither wins decisively
- If the user’s phrasing doesn’t appear in any description, none fire
Triggers that move the needle
Literal phrases
If a skill should fire when the user says “the spreadsheet” or “.xlsx”, say those exact words in the description. Models match literal strings far better than they paraphrase. Don’t write “tabular data files” when “csv, xlsx, or tsv” is what users actually type.
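Side by side (both wordings invented for illustration):

```yaml
# Weak: paraphrases what users say
description: Works with tabular data files.

# Strong: matches the literal strings users type
description: >
  Use this skill when the user mentions "the spreadsheet" or a
  .csv, .xlsx, or .tsv file.
```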
Verb + object
A description that lists nouns (“PDF support, Word support”) gives the model nothing to act on. Pair every noun with the action: “extract text from PDFs”, “fill PDF forms”, “merge multiple PDFs.” Verbs are routing fuel.
“Use this skill when…”
Starting the description with this exact framing cues the model to treat the rest as routing rules rather than feature copy. It’s mechanical, but it works.
Negative examples
Where two skills could plausibly compete, name the boundary in both descriptions: Do NOT use this when X — use the X-skill instead. One sentence saves dozens of misfires.
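For instance, two neighboring PDF skills might fence each other off like this (skill names and wording are hypothetical):

```yaml
# In pdf-extract/SKILL.md:
description: >
  Use this skill to extract text or tables from existing PDFs.
  Do NOT use this to create new PDFs -- use pdf-generate instead.

# In pdf-generate/SKILL.md:
description: >
  Use this skill to generate new PDFs from markdown or HTML.
  Do NOT use this to read or extract from existing PDFs --
  use pdf-extract instead.
```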
How to debug a skill that never fires
The 6/6 test. Before declaring a description done, write three real user phrases that should trigger the skill and three that shouldn’t. Run all six against the agent. If you can’t get 6/6, the description needs another pass.
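The 6/6 test is easy to script. The harness below is a sketch: the six phrases are invented examples, and `route` stands in for however you observe which skill (if any) the agent picks on your platform — substitute the real mechanism:

```python
# Invented example phrases for a hypothetical PDF-forms skill.
SHOULD_FIRE = [
    "can you fill out this pdf form for me",
    "merge these two pdfs into one",
    "extract the text from invoice.pdf",
]
SHOULD_NOT_FIRE = [
    "summarize this email thread",
    "convert this markdown to html",
    "what's the weather like today",
]

def six_six(route, skill_name):
    """Return (passes, total). `route(phrase)` should return the name
    of the skill the agent invoked, or None if no skill fired."""
    passes = sum(route(p) == skill_name for p in SHOULD_FIRE)
    passes += sum(route(p) != skill_name for p in SHOULD_NOT_FIRE)
    return passes, len(SHOULD_FIRE) + len(SHOULD_NOT_FIRE)
```

Anything short of a perfect score means the description needs another pass — either it misses phrasing users actually type, or it overlaps a neighboring skill.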
Patterns to try when a skill is silently missing:
- Read the description out loud. If it sounds like the back of a software box, rewrite it. It should sound like a runbook.
- Add the user’s exact phrase. Look at the prompts users send. If the words “the spreadsheet” never appear in your description, add them.
- Cut adjectives. “Powerful,” “comprehensive,” “best-in-class” are noise — they neither narrow the match nor add specificity.
- Check for collision. Two skills with similar wording will both lose. Add a “Do NOT use when…” line to each.
- Shorten. Past ~120 words, descriptions stop being routing signal and start being body content. Cut it down.
How to debug a skill that fires too often
The opposite problem — a skill picked up for things it shouldn’t handle — is almost always one of two failures:
- Triggers too broad. “Use whenever the user wants to do anything with files” will fire on almost everything. Narrow it to the specific job.
- Missing negative examples. Add “Do NOT use this for {nearby job}” — ideally pointing at the skill that should handle that job.
What routing isn’t
A few things people expect routing to do that it doesn’t:
- It doesn’t read the body. Detail in `SKILL.md` below the frontmatter is invisible at routing time.
- It doesn’t read your README. Same as above.
- It doesn’t run heuristics on filenames. A skill folder named `pdf/` doesn’t auto-fire on PDF tasks unless the description says so.
- It doesn’t infer from past invocations. Each turn re-scores from scratch on the description alone.
Related
For the field-level mechanics of SKILL.md, see Anatomy of a Skill. For the sentence-level patterns that make descriptions reliable, see Writing Skill Descriptions That Actually Trigger.