Skills
Many agent workflows are not one-off prompts. They recur across tasks: reviewing a change, validating behavior against Linux, analyzing failures, drafting tests, or checking architectural boundaries. When a workflow becomes repeatable, it is often worth packaging it as a skill.
A skill is useful when the task has stable inputs, a recognizable decision pattern, and a reviewable output format. It is less useful when the task is still exploratory or depends mainly on human judgment.
Turn Repeated Workflows into Skills
The best candidates for skills are workflows that appear often and already follow a fixed structure. Examples include:
- change review;
- contract or semantic validation;
- log and failure analysis;
- test drafting and test-gap inspection;
- architecture audits;
- report generation from structured evidence.
These tasks benefit from skills because consistency matters more than improvisation.
Put the Fixed Workflow in Code, Not in the Prompt
One recurring lesson is that stable workflow steps should be scripted instead of being left to the model each time. Input normalization, task classification, report initialization, test selection, and result rendering should be automated whenever possible.
This reduces randomness, lowers context load, and makes the skill easier to reuse across sessions.
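As a minimal sketch of what "put the fixed workflow in code" can look like, the following scripts the deterministic steps of a hypothetical review skill: input normalization, cheap keyword-based task classification, and report initialization. All names here are illustrative assumptions, not part of any real skill framework.

```python
import json

# Hypothetical sketch: the fixed steps of a review skill, scripted so the
# model only has to fill in the judgment-heavy parts.

def classify_task(scope: str) -> str:
    """Cheap keyword-based classification; no model call needed."""
    scope = scope.lower()
    if "test" in scope:
        return "test-drafting"
    if "review" in scope:
        return "change-review"
    return "general"

def normalize_input(raw: dict) -> dict:
    """Coerce a loosely shaped request into the fields the skill expects."""
    return {
        "scope": raw.get("scope", "").strip(),
        "files": sorted(set(raw.get("files", []))),   # deduplicate, stable order
        "task_type": classify_task(raw.get("scope", "")),
    }

def init_report(task: dict) -> dict:
    """Pre-build the report skeleton so the output structure never drifts."""
    return {"task_type": task["task_type"], "files": task["files"], "findings": []}

task = normalize_input({"scope": "Review the retry logic", "files": ["net/retry.py"]})
print(json.dumps(init_report(task), indent=2))
```

Because these steps are code, they behave identically on every run; the model is invoked only where judgment is genuinely required.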
Keep Skill Scope Narrow
A skill should solve one kind of problem well. It should not try to plan, implement, verify, and approve a large task all at once.
Narrow skills are easier to trust, easier to compose, and easier to debug. They also make delegation cleaner: one skill can own one review unit, one contract class, or one kind of artifact.
Standardize Inputs and Outputs
Skills become much more reliable when they consume structured inputs and produce structured outputs. Useful inputs often include:
- explicit scope and non-goals;
- code anchors or changed files;
- a contract or specification;
- execution environment and command baseline;
- logs, traces, or reference behavior.
Useful outputs often include:
- a fixed report schema;
- findings with severity;
- evidence IDs and code anchors;
- confidence statements;
- recommended next actions.
This makes the result easier for both humans and later agents to review.
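A structured output contract can be as simple as a small set of dataclasses. The field names below are illustrative assumptions chosen to mirror the lists above (severity, evidence IDs, code anchors, confidence, next actions), not a standard schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical report schema; field names are illustrative, not a standard.

@dataclass
class Finding:
    severity: str      # e.g. "blocker", "major", "minor"
    summary: str
    evidence_id: str   # points at a log line, trace, or command output
    code_anchor: str   # file:line the finding refers to

@dataclass
class SkillReport:
    scope: str
    non_goals: List[str] = field(default_factory=list)
    findings: List[Finding] = field(default_factory=list)
    confidence: str = "medium"
    next_actions: List[str] = field(default_factory=list)

report = SkillReport(scope="validate retry contract")
report.findings.append(Finding("major", "retry loop ignores backoff cap",
                               "EV-012", "net/retry.py:48"))
print(asdict(report))   # serializable: easy for humans and later agents to consume
```

Fixing the schema in code means every run of the skill yields a report that downstream reviewers and agents can parse without guessing at its shape.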
Use Specialized Skills to Protect Architecture
Several lessons show the same failure mode: an agent can make a test pass while quietly damaging abstraction boundaries. This is why specialized review skills are valuable, especially for architecture-sensitive work.
A dedicated audit skill can enforce checks such as:
- generic layers must not depend on concrete implementations;
- patch-style special cases should be rejected;
- compatibility claims should be backed by evidence;
- structural complexity should not grow without justification.
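The first of those checks, that generic layers must not depend on concrete implementations, can often be mechanized. Below is a sketch using Python's `ast` module to flag imports from concrete packages; the package names `impl` and `drivers` are assumptions standing in for whatever a real codebase calls its implementation layer.

```python
import ast

# Illustrative audit check: a module in a "generic" layer must not import
# from a concrete implementation package. Package names are hypothetical.

FORBIDDEN = {"impl", "drivers"}  # concrete top-level packages (assumed names)

def violates(module: str) -> bool:
    """True if the import targets a forbidden top-level package."""
    return module.split(".")[0] in FORBIDDEN

def boundary_violations(source: str) -> list:
    """Return the forbidden modules imported by the given source text."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module and violates(node.module):
            hits.append(node.module)
        elif isinstance(node, ast.Import):
            hits.extend(a.name for a in node.names if violates(a.name))
    return hits
```

A check like this runs on every change, so boundary violations surface as audit findings instead of depending on a reviewer noticing a stray import.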
Design Skills for Multi-Agent Collaboration
Skills work best when responsibility boundaries are explicit. An orchestration skill, an implementation skill, and a review skill usually serve different purposes, and one should not take over another's responsibilities.

In parallel execution, skills should have non-overlapping write scopes or clearly separated output artifacts. Otherwise, parallelism creates conflicts instead of speed.
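Disjoint write scopes can also be checked mechanically before launching parallel skills. The sketch below assumes each skill declares the path prefixes it writes to; the skill names and paths are illustrative.

```python
# Hypothetical pre-flight check that parallel skills declare disjoint
# write scopes. Skill names and path prefixes are illustrative.

skills = {
    "implementer": {"writes": {"src/", "tests/"}},
    "reviewer": {"writes": {"reports/review/"}},
}

def overlapping(a, b) -> bool:
    """True if any prefix in one scope contains or equals a prefix in the other."""
    return any(x.startswith(y) or y.startswith(x) for x in a for y in b)

def check_disjoint(skills: dict) -> list:
    """Return every pair of skills whose write scopes collide."""
    names = sorted(skills)
    return [(m, n) for i, m in enumerate(names) for n in names[i + 1:]
            if overlapping(skills[m]["writes"], skills[n]["writes"])]

print(check_disjoint(skills))   # an empty list means safe to parallelize
```

Running this before dispatch turns a race condition into a configuration error caught up front.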
Make Skills Friendly to Context Recovery
Long tasks often span multiple sessions. A good skill should therefore leave behind artifacts that make recovery easy:
- summaries;
- structured findings;
- task trackers;
- reusable logs;
- references to commands and evidence.
The more a workflow depends on remembering a long conversation, the less suitable it is as a reusable skill.
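Leaving recovery artifacts behind can be as simple as a final step that persists the report and command baseline to disk. The file names and layout below are assumptions, not a convention any framework requires.

```python
import json
import pathlib
import tempfile

# Sketch: persist recovery artifacts at the end of a skill run so a later
# session can resume without the conversation history. Paths are illustrative.

def save_artifacts(outdir: pathlib.Path, report: dict, commands: list) -> None:
    """Write the structured report and the command baseline side by side."""
    outdir.mkdir(parents=True, exist_ok=True)
    (outdir / "summary.json").write_text(json.dumps(report, indent=2))
    (outdir / "commands.txt").write_text("\n".join(commands))

outdir = pathlib.Path(tempfile.mkdtemp()) / "skill-run"
save_artifacts(outdir, {"findings": []}, ["pytest -q", "ruff check ."])
```

A fresh session can then reload `summary.json` and replay `commands.txt` instead of reconstructing state from a long conversation.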