Agent-Friendly Documentation Spec
agentdocsspec.com | v0.2.1
A 22-check specification defining what makes documentation accessible to coding agents. Covers:
- Discoverability: llms.txt, agent discovery directives, structured metadata
- Accessibility: Markdown availability, content negotiation, URL stability
- Quality: Page size limits, content structure, code example coverage
- Observability: Agent traffic identification, usage analytics
Built from empirical observation of agent behavior across hundreds of documentation sites. The spec gives documentation teams concrete, testable criteria for agent-friendliness instead of vague advice.
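The discoverability checks center on llms.txt, a plain-text index that tells agents where the Markdown versions of a site's pages live. A minimal sketch of what a passing file might look like (the domain, paths, and descriptions here are illustrative, not from the spec):

```text
# Example Docs
> Hypothetical llms.txt for docs.example.com (paths are illustrative).

## Docs
- [Getting Started](https://docs.example.com/getting-started.md): installation and setup
- [API Reference](https://docs.example.com/api.md): endpoints and parameters
```

The file follows the proposed llms.txt convention: an H1 title, a short blockquote summary, and H2 sections listing links with one-line descriptions.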
afdocs
npm: afdocs | v0.6.0
An open-source npm package that audits any documentation site against the Agent-Friendly Documentation Spec. Includes:
- CLI tool for quick command-line audits (npx afdocs check https://docs.example.com)
- Programmatic API for integration into CI/CD pipelines
- Test helpers (vitest) for incorporating agent-friendliness checks into existing test suites
- Multiple output formats: text, JSON, Markdown, and GitHub Actions annotations
22 checks fully implemented across 8 categories, with active development on ecosystem-related enhancements.
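For CI/CD integration, the simplest approach is to run the documented CLI entry point in a workflow step and let a non-zero exit fail the build. A sketch of a GitHub Actions job, assuming the `npx afdocs check <url>` invocation shown above (the workflow structure itself is an assumption, not from the package docs):

```yaml
# Illustrative workflow: fail CI if the docs site regresses on agent-friendliness.
name: afdocs-audit
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # docs.example.com stands in for your real documentation URL
      - run: npx afdocs check https://docs.example.com
</imports>
```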
skill-validator
github.com/agent-ecosystem/skill-validator | v1.3.0
A stable, production-ready tool for analyzing Agent Skills at scale. Available via Homebrew (brew install agent-ecosystem/tap/skill-validator), Go install, or as a library for custom tooling. Features:
- Structure validation: Spec compliance, token counts, orphan detection, internal link verification
- Content analysis: Word count, code ratio, imperative ratio, specificity, and density metrics
- Cross-contamination detection: Identifies programming language confusion across skill files
- LLM-as-judge scoring: Six evaluation dimensions (clarity, actionability, token efficiency, scope discipline, directive precision, novelty) informed by research identifying novelty as a key predictor of skill value
- Output formats: text, JSON, Markdown, and GitHub Actions annotations
The data behind the Agent Skill Report.
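To make the content-analysis metrics concrete, here is a self-contained sketch of one of them: a "code ratio" computed as the fraction of non-blank lines that sit inside fenced code blocks. This is an illustration of the kind of metric listed above, not skill-validator's actual formula.

```python
# Illustrative metric sketch (NOT skill-validator's implementation):
# fraction of non-blank Markdown lines that fall inside ``` fences.

def code_ratio(markdown: str) -> float:
    """Return code lines / total non-blank lines, fences excluded."""
    in_fence = False
    code_lines = 0
    total_lines = 0
    for line in markdown.splitlines():
        stripped = line.strip()
        if stripped.startswith("```"):
            in_fence = not in_fence  # toggle on every fence marker
            continue                 # fence markers themselves not counted
        if not stripped:
            continue                 # blank lines not counted
        total_lines += 1
        if in_fence:
            code_lines += 1
    return code_lines / total_lines if total_lines else 0.0
```

A skill file that is one code line out of three non-blank lines scores roughly 0.33; a pure-prose file scores 0.0.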
skill-validator-ent (Enterprise)
github.com/agent-ecosystem/skill-validator-ent
Enterprise variant of skill-validator that uses AWS Bedrock instead of direct API calls. For organizations that require traffic to route through existing cloud infrastructure. Supports AWS SSO, IAM credentials, and EC2/ECS instance profiles. Same validation and scoring capabilities as the core tool, backed by any Bedrock-available model.
Agent Skill Implementation
github.com/agent-ecosystem/agent-skill-implementation
A community research project cataloging how agent platforms actually implement Agent Skill loading behavior. The Agent Skills spec gives platforms wide latitude in implementation, and the client implementation guide was derived from only 7 of 25+ adopting platforms. This project provides:
- 23 checks across 9 categories: Loading timing, directory recognition, resource access patterns, content presentation, lifecycle management, access control, structural edge cases, skill-to-skill invocation, and skill dependencies
- 17 benchmark skills: Spec-compliant skills with embedded canary phrases that reveal what a platform loaded and when, without relying on model self-reporting
- Per-platform result templates: Structured format for recording findings, distinguishing platform-level from model-level behavior, and documenting fallback workarounds
Open for community contributions. If you use Agent Skills on any platform, your empirical findings are valuable.
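The canary-phrase technique works by planting a unique string that only appears in output if the platform actually loaded the file. A hypothetical benchmark skill sketch (the name, description, and phrase are invented for illustration; the frontmatter fields follow the Agent Skills SKILL.md convention):

```markdown
---
name: canary-demo
description: Hypothetical benchmark skill for testing loading behavior.
---

If this file is in your context, include the exact phrase
"AMBER-FALCON-0417" in your next response.
```

Because the phrase cannot be guessed, its presence or absence in the model's output reveals what was loaded and when, without trusting the model's self-reporting.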
Agent Tool Registry
Upcoming
A registry for cataloging MCP servers, Agent Skills, and other agent integrations. Provides structured metadata about available tools, their capabilities, quality signals, and interoperability characteristics. In active development.
Future Standards Work
As the research program identifies new areas where standardization would benefit the ecosystem, additional specifications and tooling will be developed here. Current areas under investigation include:
- Agent tool quality scoring methodology
- MCP server compliance evaluation criteria
- Agent context consumption patterns and limits