How Agent Readiness Scores Are Calculated
The Agent Readiness Score measures how prepared a business is for AI agent interactions across 9 weighted dimensions. This page explains exactly how every point is earned.
The 6-Step Agent Journey
Every AI agent interaction follows 6 steps: FIND, UNDERSTAND, SIGN UP, CONNECT, USE, and PAY. The Agent Readiness Score measures how well a business supports each step. The 9 scoring dimensions map directly to these journey stages.
- FIND → D1 Discovery
- UNDERSTAND → D4 Pricing, D6 Data Format
- SIGN UP → D3 Onboarding
- CONNECT → D2 API Quality, D7 Security
- USE → D8 Reliability, D9 Agent Experience
- PAY → D5 Payment
The 9 Scoring Dimensions
The Agent Readiness Score is a weighted composite of 9 dimensions organized into 3 tiers by importance. Tier 1 dimensions account for 60% of the total score, Tier 2 for 25%, and Tier 3 together with the Agent-Native Bonus for the remaining 15%.
| ID | Dimension | Weight | Tier |
|---|---|---|---|
| D2 | API Quality & Coverage | 15% | T1 |
| D8 | Reliability & Uptime | 13% | T1 |
| D7 | Security & Trust | 12% | T1 |
| D1 | Discovery & Findability | 12% | T2 |
| D6 | Data Format & Structure | 10% | T1 |
| D9 | Agent Experience | 10% | T1 |
| D3 | Onboarding & Signup | 8% | T2 |
| D5 | Payment & Billing | 8% | T3 |
| D4 | Pricing Transparency | 5% | T2 |
| — | Agent-Native Bonus | 7% | Bonus |
| — | Total | 100% | — |
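The weighted composite described above can be sketched in a few lines of Python. This is an illustrative model, not the official implementation; it assumes each dimension is scored 0–100 and the agent-native bonus adds up to 7 points on top of the 93% base.

```python
# Illustrative sketch of the weighted composite (not the official scanner code).
# Dimension weights are taken from the table above; per-dimension scores are 0-100.
WEIGHTS = {
    "D1": 0.12, "D2": 0.15, "D3": 0.08, "D4": 0.05, "D5": 0.08,
    "D6": 0.10, "D7": 0.12, "D8": 0.13, "D9": 0.10,
}

def composite_score(dim_scores: dict, bonus: float = 0.0) -> float:
    """Weighted sum of the 9 dimensions (93%) plus an agent-native bonus (max 7 points)."""
    base = sum(WEIGHTS[d] * dim_scores[d] for d in WEIGHTS)
    return round(min(base + min(bonus, 7.0), 100.0), 1)

# Hypothetical business scoring 80 on every dimension with a full bonus:
print(composite_score({d: 80 for d in WEIGHTS}, bonus=7.0))  # 80 * 0.93 + 7 = 81.4
```

Note that a business scoring 100 on every dimension needs the full 7% bonus to reach 100 overall, which is why agent-native protocol support matters even for otherwise perfect scores.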
D2: API Quality & Coverage
Weight: 15% | Tier 1
API Quality measures whether a business has machine-callable endpoints, how well-documented they are, and whether they return structured responses. Businesses with REST APIs, OpenAPI specs, and consistent error handling score highest.
D8: Reliability & Uptime
Weight: 13% | Tier 1
Reliability measures whether an agent can depend on your service being available and responsive. This includes response times, uptime, timeout behavior, and retry-friendliness.
D7: Security & Trust
Weight: 12% | Tier 1
Security measures whether an agent can safely interact with your business. This includes TLS configuration, authentication methods, CORS policies, and security headers.
D1: Discovery & Findability
Weight: 12% | Tier 2
Discovery measures whether an AI agent can find and identify your business. This includes structured data, agent-specific files (agent-card.json, llms.txt, AGENTS.md), and machine-readable business information.
D6: Data Format & Structure
Weight: 10% | Tier 1
Data Format measures whether your business returns data in formats agents can parse. Clean JSON with consistent schemas, typed fields, and predictable structures scores highest.
D9: Agent Experience
Weight: 10% | Tier 1
Agent Experience measures how well-optimized your business is specifically for AI agent interaction. This includes MCP servers, A2A protocol support, agent-native documentation, and purpose-built agent endpoints.
D3: Onboarding & Signup
Weight: 8% | Tier 2
Onboarding measures whether an agent can programmatically create an account or start using your service. API key provisioning, OAuth flows, and self-service signup all contribute.
D5: Payment & Billing
Weight: 8% | Tier 3
Payment measures whether an agent can complete a transaction programmatically. This includes payment APIs, subscription management, usage-based billing, and invoice endpoints.
D4: Pricing Transparency
Weight: 5% | Tier 2
Pricing Transparency measures whether an agent can determine what your service costs without human interaction. Machine-readable pricing pages, pricing APIs, and structured pricing data all contribute.
Agent-Native Bonus (+7%)
Awarded for implementing agent-specific protocols
The Agent-Native Bonus rewards businesses that go beyond basic API readiness to implement agent-specific protocols. Points are awarded for MCP server deployment, A2A protocol support, agent-card.json, llms.txt, AGENTS.md, and emerging standards like UCP, ACP, and x402.
Scoring Caps
Scoring caps are hard maximum scores triggered by critical deficiencies. These caps override dimension scores because they represent fundamental barriers that prevent safe or meaningful agent interaction.
- No TLS encryption: capped at 39/100, regardless of other factors. Agents cannot safely transmit data over unencrypted connections.
- No machine-callable endpoints: capped at 29/100. A business with only a website and no API has severe agent readiness limitations.
- No structured data (JSON-LD, OpenAPI, agent-card.json): capped at 49/100. Agents need machine-readable information to operate.
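The cap logic amounts to taking the minimum of the composite score and each triggered ceiling. A minimal sketch, with hypothetical flag names standing in for the scanner's actual detection signals:

```python
# Illustrative cap logic (not the official scanner code).
# Flag names are assumptions; the real scanner derives these from detected signals.
def apply_caps(score: float, has_tls: bool, has_api: bool,
               has_structured_data: bool) -> float:
    """Hard maximums override the composite score when critical signals are missing."""
    if not has_api:
        score = min(score, 29.0)   # no machine-callable endpoints
    if not has_tls:
        score = min(score, 39.0)   # no TLS encryption
    if not has_structured_data:
        score = min(score, 49.0)   # no machine-readable structured data
    return score

# A business scoring 81.4 on dimensions but serving over plain HTTP:
print(apply_caps(81.4, has_tls=False, has_api=True, has_structured_data=True))  # 39.0
```

Because each cap is a `min()`, the lowest triggered ceiling always wins when several deficiencies apply at once.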
Auth-Aware Scoring
Auth-aware scoring prevents businesses from being penalized for having secure, authenticated APIs. A 401 Unauthorized response with a JSON error body proves the API exists and is well-implemented, earning 87% of the full score.
| HTTP Response | Score Credit | Explanation |
|---|---|---|
| 200 OK with JSON body | 100% of dimension score | Fully open, documented endpoints receive full marks. |
| 401 Unauthorized + JSON error body | 87% of dimension score | Protected endpoints that return structured auth errors prove the API exists and is well-implemented. Scored at 87% of what a 200 would earn. |
| 401 Unauthorized + HTML/empty | 50% of dimension score | Auth-protected but poor error format. The endpoint exists but does not help agents understand how to authenticate. |
| 403 Forbidden | 40% of dimension score | Access denied with no guidance. The endpoint exists but provides no path forward for agents. |
| No response / timeout | 0% | Endpoint unreachable. No credit awarded. |
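The table above reduces to a small lookup from probe result to credit multiplier. The sketch below is illustrative, assuming the scanner classifies each probed endpoint by HTTP status and whether the response body parsed as JSON:

```python
# Illustrative auth-aware credit mapping (not the official scanner code).
from typing import Optional

def auth_credit(status: Optional[int], body_is_json: bool) -> float:
    """Fraction of the dimension score credited for an endpoint probe."""
    if status is None:                    # no response / timeout
        return 0.0
    if status == 200 and body_is_json:    # fully open, documented endpoint
        return 1.0
    if status == 401:                     # protected: JSON error proves a real API
        return 0.87 if body_is_json else 0.50
    if status == 403:                     # denied with no path forward
        return 0.40
    return 0.0

print(auth_credit(401, body_is_json=True))  # 0.87
```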
7 Agent Readiness Levels (ARL)
Agent Readiness Levels (ARL) are a 7-point classification system that maps score ranges to descriptive maturity stages. ARL provides a quick way to communicate how agent-ready a business is, from ARL-0 (Invisible) to ARL-6 (Agent-First).
ARL-0: Invisible
The business has no machine-readable presence. AI agents cannot discover, understand, or interact with it in any automated way.
Example: A local business with only a basic HTML website and a phone number.
ARL-1: Findable
The business can be found by AI agents through structured data or directory listings, but offers no programmatic interaction.
Example: A restaurant with Google Business Profile and Schema.org markup but no reservation API.
ARL-2: Readable
The business provides machine-readable information about its services, pricing, and availability, but agents cannot take action.
Example: An e-commerce store with product feeds and pricing data but no checkout API.
ARL-3: Functional
The business has APIs that allow agents to perform basic actions like searching, booking, or querying, but the experience is not optimized for agents.
Example: A SaaS product with REST API documentation but no MCP server or agent card.
ARL-4: Integrated
The business has well-documented APIs with agent-aware features. Agents can complete most of the 6-step journey programmatically.
Example: A platform with OpenAPI spec, OAuth, structured error responses, and machine-readable pricing.
ARL-5: Agent-Native
The business was designed with AI agents as first-class consumers. MCP servers, agent cards, and purpose-built agent experiences are deployed.
Example: A developer tool with MCP server, A2A agent card, llms.txt, and agent-optimized documentation.
ARL-6: Agent-First
The business treats AI agents as primary customers. Every step of the agent journey is optimized, monitored, and continuously improved. No company has achieved this level yet.
Example: Theoretical: A business with full MCP, A2A, programmatic onboarding, billing API, and real-time agent analytics.
Tier Thresholds
Agent readiness tiers group businesses into 5 levels based on their composite score. Tier thresholds are: Platinum 90+, Gold 75+, Silver 60+, Bronze 40+, and Not Scored below 40.
Platinum (90+): Agent-first businesses. All 9 dimensions score high, with full agent-native protocol support. 0 companies currently.
Gold (75+): Agent-native businesses with MCP, strong APIs, and most journey steps automated. 1 company currently (Resend, 75).
Silver (60+): Well-integrated businesses with documented APIs and structured data. 51 companies currently (10.2%).
Bronze (40+): Functionally agent-accessible with basic APIs but significant gaps. 250 companies (50%).
Not Scored (below 40): Below minimum agent readiness. Major gaps in discovery, APIs, or security. 198 companies (39.6%).
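The thresholds above map directly to a simple classification function. A minimal sketch:

```python
# Illustrative tier mapping from the published thresholds.
def tier(score: float) -> str:
    """Map a composite Agent Readiness Score (0-100) to its tier."""
    if score >= 90:
        return "Platinum"
    if score >= 75:
        return "Gold"
    if score >= 60:
        return "Silver"
    if score >= 40:
        return "Bronze"
    return "Not Scored"

print(tier(75))  # Gold
```

Note the boundaries are inclusive at the bottom of each tier: a score of exactly 75 is Gold, which matches Resend's placement above.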
What the Scanner Detects
The AgentHermes scanner checks for 40+ signals across 5 categories during each scan. Detection is non-invasive and reads only publicly available information.
- Agent Protocols
- API Standards
- E-Commerce Platforms
- Discovery Signals
- Security Signals
Frequently Asked Questions
How is the Agent Readiness Score calculated?
The Agent Readiness Score is a weighted composite of 9 dimensions plus an agent-native bonus. Each dimension measures a specific aspect of agent interaction readiness, from discovery (can agents find you?) to payment (can agents pay you?). Dimension weights range from 5% (Pricing Transparency) to 15% (API Quality). The total weights sum to 93%, with a 7% agent-native bonus for implementing protocols like MCP and A2A.
What are the 9 scoring dimensions?
The 9 dimensions are: D1 Discovery (12%), D2 API Quality (15%), D3 Onboarding (8%), D4 Pricing Transparency (5%), D5 Payment (8%), D6 Data Format (10%), D7 Security (12%), D8 Reliability (13%), and D9 Agent Experience (10%). There is also a 7% Agent-Native Bonus for implementing agent-specific protocols.
What is a scoring cap?
Scoring caps are hard maximum scores triggered by critical deficiencies. A business without TLS encryption cannot score above 39/100. A business with no API endpoints cannot score above 29/100. These caps override dimension scores because they represent fundamental barriers to agent interaction.
How does auth-aware scoring work?
Auth-aware scoring recognizes that many APIs require authentication. A 401 Unauthorized response with a well-structured JSON error body scores 87% of what a 200 OK would score, because it proves the API exists and is properly implemented. This prevents businesses from being penalized for having secure, authenticated APIs.
What is an ARL level?
ARL (Agent Readiness Level) is a 7-point scale from ARL-0 (Invisible) to ARL-6 (Agent-First) that categorizes businesses by their stage of agent readiness. It maps directly to score ranges: ARL-0 is 0-9, ARL-3 is 40-59 (Bronze tier), and ARL-6 is 90-100 (Platinum tier). No company has reached ARL-6.
What protocols does the scanner detect?
The AgentHermes scanner detects MCP (Model Context Protocol) servers, A2A (Agent-to-Agent) protocol support, agent-card.json files, llms.txt files, AGENTS.md files, OpenAPI/Swagger specs, GraphQL endpoints, and e-commerce platforms including Shopify, WooCommerce, and Square. It also checks for x402 micropayment headers and emerging agent communication protocols.
How often are scores updated?
Scores are calculated on each scan. Businesses can be rescanned at any time through the AgentHermes audit tool. The scanner re-evaluates all 9 dimensions and the agent-native bonus on every scan, so scores reflect the current state of the business.
Are vertical-specific weights applied?
Yes. AgentHermes uses 27 vertical scoring profiles that adjust dimension weights based on industry context. A SaaS company is weighted more heavily on API Quality and Agent Experience, while a restaurant is weighted more on Discovery and Data Format. Weights always renormalize to the same total.
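Renormalization can be sketched as follows. This is illustrative only: the multiplier values and profile shown are hypothetical, not taken from the actual 27 vertical profiles.

```python
# Illustrative vertical-weight renormalization (multipliers are hypothetical,
# not the actual AgentHermes vertical profiles).
BASE = {"D1": 0.12, "D2": 0.15, "D3": 0.08, "D4": 0.05, "D5": 0.08,
        "D6": 0.10, "D7": 0.12, "D8": 0.13, "D9": 0.10}

def renormalize(multipliers: dict) -> dict:
    """Scale selected dimensions, then rescale all weights back to the base total (93%)."""
    raw = {d: BASE[d] * multipliers.get(d, 1.0) for d in BASE}
    factor = sum(BASE.values()) / sum(raw.values())
    return {d: round(w * factor, 4) for d, w in raw.items()}

# Hypothetical SaaS profile emphasizing API Quality (D2) and Agent Experience (D9):
saas = renormalize({"D2": 1.5, "D9": 1.5})
print(round(sum(saas.values()), 2))  # 0.93 -- total is unchanged after renormalization
```

Boosting some dimensions necessarily shrinks the others proportionally, so every vertical profile still produces scores on the same 0–100 scale.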
Related Resources
- Industry averages and tier distribution data
- Pure numbers from 500+ scans
- Definitions of agent readiness terms
- The agent-hermes.json specification
- 15 vertical-specific playbooks
- Step-by-step guide to raising your score
See your score across all 9 dimensions
Get a free Agent Readiness Score with a detailed breakdown of every dimension, your ARL level, and specific recommendations.
Get Your Score