Scoring methodology v4

How Agent Readiness Scores Are Calculated

The Agent Readiness Score measures how prepared a business is for AI agent interactions across 9 weighted dimensions. This page explains exactly how every point is earned.

The 6-Step Agent Journey

Every AI agent interaction follows 6 steps: FIND, UNDERSTAND, SIGN UP, CONNECT, USE, and PAY. The Agent Readiness Score measures how well a business supports each step. The 9 scoring dimensions map directly to these journey stages.

Step 1  FIND        →  D1 Discovery
Step 2  UNDERSTAND  →  D4 Pricing, D6 Data
Step 3  SIGN UP     →  D3 Onboarding
Step 4  CONNECT     →  D2 API, D7 Security
Step 5  USE         →  D8 Reliability, D9 Agent Experience
Step 6  PAY         →  D5 Payment

The 9 Scoring Dimensions

The Agent Readiness Score is a weighted composite of 9 dimensions organized into 3 tiers by importance. Tier 1 dimensions account for 60% of the total score, Tier 2 for 25%, and Tier 3 together with the 7% Agent-Native Bonus for the remaining 15%.

ID   Dimension                 Weight   Tier
D2   API Quality & Coverage    15%      T1
D8   Reliability & Uptime      13%      T1
D7   Security & Trust          12%      T1
D1   Discovery & Findability   12%      T2
D6   Data Format & Structure   10%      T1
D9   Agent Experience          10%      T1
D3   Onboarding & Signup       8%       T2
D5   Payment & Billing         8%       T3
D4   Pricing Transparency      5%       T2
+    Agent-Native Bonus        7%       Bonus
     Total                     100%
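As an illustrative sketch (not AgentHermes's actual implementation), the composite can be computed as a weighted sum of per-dimension scores plus the bonus. The weights come from the table above; the 80/100 inputs are hypothetical.

```python
# Dimension weights from the table above, as fractions of the total score.
WEIGHTS = {
    "D1": 0.12, "D2": 0.15, "D3": 0.08, "D4": 0.05, "D5": 0.08,
    "D6": 0.10, "D7": 0.12, "D8": 0.13, "D9": 0.10,
}  # sums to 0.93; the remaining 0.07 is the Agent-Native Bonus

def composite_score(dimension_scores: dict[str, float], bonus: float = 0.0) -> float:
    """dimension_scores maps "D1".."D9" to 0-100; bonus is 0-7 points."""
    weighted = sum(WEIGHTS[d] * dimension_scores.get(d, 0.0) for d in WEIGHTS)
    return min(weighted + bonus, 100.0)

# Hypothetical example: 80/100 on every dimension plus a 3-point MCP bonus.
scores = {d: 80.0 for d in WEIGHTS}
print(composite_score(scores, bonus=3.0))  # ≈ 77.4 (0.93 * 80 + 3)
```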
D2

API Quality & Coverage

Weight: 15% | Tier 1


API Quality measures whether a business has machine-callable endpoints, how well-documented they are, and whether they return structured responses. Businesses with REST APIs, OpenAPI specs, and consistent error handling score highest.

REST/GraphQL endpoints detected
OpenAPI/Swagger spec available
Structured JSON responses
Consistent error handling
Rate limiting headers present
D8

Reliability & Uptime

Weight: 13% | Tier 1


Reliability measures whether an agent can depend on your service being available and responsive. This includes response times, uptime, timeout behavior, and retry-friendliness.

Response time under 2 seconds
TLS certificate valid
No server errors on probe
Retry-after headers
Health check endpoint available
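The retry-friendliness this dimension rewards can be illustrated with a small backoff helper; a sketch, assuming a client that honors numeric Retry-After headers and falls back to exponential backoff.

```python
from typing import Optional

def retry_delay(attempt: int, retry_after: Optional[str]) -> float:
    """Seconds to wait before retry attempt `attempt` (0-based): honor a
    numeric Retry-After header if present, else back off exponentially."""
    if retry_after is not None:
        try:
            return float(retry_after)
        except ValueError:
            pass  # the HTTP-date form of Retry-After is not handled in this sketch
    return float(2 ** attempt)

print(retry_delay(0, None), retry_delay(1, "30"))  # 1.0 30.0
```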
D7

Security & Trust

Weight: 12% | Tier 1


Security measures whether an agent can safely interact with your business. This includes TLS configuration, authentication methods, CORS policies, and security headers.

TLS 1.2+ enforced
HSTS enabled
Authentication documented
CORS configured
Security headers present (CSP, X-Frame-Options)
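A quick way to see why these headers matter: a sketch of the header check an agent or scanner might run against a response-header mapping. The example response is hypothetical.

```python
# Headers from the checklist above that the sketch looks for.
SECURITY_HEADERS = [
    "Strict-Transport-Security",   # HSTS
    "Content-Security-Policy",     # CSP
    "X-Frame-Options",
]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the security headers absent from a response-header mapping
    (header names compared case-insensitively, per HTTP)."""
    present = {k.lower() for k in headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]

# Hypothetical response that only sets HSTS:
resp = {"Content-Type": "text/html", "Strict-Transport-Security": "max-age=63072000"}
print(missing_security_headers(resp))  # ['Content-Security-Policy', 'X-Frame-Options']
```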
D1

Discovery & Findability

Weight: 12% | Tier 2


Discovery measures whether an AI agent can find and identify your business. This includes structured data, agent-specific files (agent-card.json, llms.txt, AGENTS.md), and machine-readable business information.

agent-card.json present
llms.txt file available
AGENTS.md published
Schema.org structured data
MCP server advertised
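As a sketch of how a scanner might probe for these discovery artifacts: the `fetch_status` callback stands in for real HTTP requests and is an assumption of this example, not the scanner's API.

```python
# Conventional paths for the discovery artifacts listed above.
DISCOVERY_PATHS = [
    "/.well-known/agent-card.json",
    "/llms.txt",
    "/AGENTS.md",
]

def probe_discovery(fetch_status) -> dict[str, bool]:
    """fetch_status(path) -> HTTP status code; report which files exist.
    A real scanner would issue HTTP requests here."""
    return {path: fetch_status(path) == 200 for path in DISCOVERY_PATHS}

# Hypothetical site that publishes only llms.txt:
statuses = {"/llms.txt": 200}
print(probe_discovery(lambda path: statuses.get(path, 404)))
```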
D6

Data Format & Structure

Weight: 10% | Tier 1


Data Format measures whether your business returns data in formats agents can parse. Clean JSON with consistent schemas, typed fields, and predictable structures scores highest.

JSON response format
Consistent field naming
Typed fields (no ambiguous strings)
Pagination support
Filtering/sorting parameters
D9

Agent Experience

Weight: 10% | Tier 1


Agent Experience measures how well-optimized your business is specifically for AI agent interaction. This includes MCP servers, A2A protocol support, agent-native documentation, and purpose-built agent endpoints.

MCP server deployed
A2A agent card published
Agent-optimized documentation
Tool descriptions for LLMs
Agent-specific rate limits
D3

Onboarding & Signup

Weight: 8% | Tier 2


Onboarding measures whether an agent can programmatically create an account or start using your service. API key provisioning, OAuth flows, and self-service signup all contribute.

Programmatic account creation
OAuth2 flow available
API key self-service
No human verification required
Free tier or trial available
D5

Payment & Billing

Weight: 8% | Tier 3


Payment measures whether an agent can complete a transaction programmatically. This includes payment APIs, subscription management, usage-based billing, and invoice endpoints.

Payment API available
Subscription management endpoints
Usage-based billing support
Invoice API
Programmatic refunds and cancellations
D4

Pricing Transparency

Weight: 5% | Tier 2


Pricing Transparency measures whether an agent can determine what your service costs without human interaction. Machine-readable pricing pages, pricing APIs, and structured pricing data contribute.

Pricing page with structured data
Pricing API endpoint
Machine-readable plan comparison
No "contact us for pricing"
Currency and billing cycle specified
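For illustration, a minimal Schema.org Offer in JSON-LD can expose price, currency, and billing cycle in machine-readable form. The plan name and price are invented, and using UnitPriceSpecification with a UN/CEFACT unit code ("MON" for month) is one common convention, not a requirement of this methodology.

```json
{
  "@context": "https://schema.org",
  "@type": "Offer",
  "name": "Pro plan",
  "price": "29.00",
  "priceCurrency": "USD",
  "priceSpecification": {
    "@type": "UnitPriceSpecification",
    "price": "29.00",
    "priceCurrency": "USD",
    "unitCode": "MON"
  }
}
```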

Agent-Native Bonus (+7%)

Awarded for implementing agent-specific protocols

The Agent-Native Bonus rewards businesses that go beyond basic API readiness to implement agent-specific protocols. Points are awarded for MCP server deployment, A2A protocol support, agent-card.json, llms.txt, AGENTS.md, and emerging standards like UCP, ACP, and x402.

MCP server deployed: +3%
A2A agent card published: +1.5%
agent-card.json at /.well-known/: +1%
llms.txt file available: +0.5%
AGENTS.md published: +0.5%
x402 micropayment support: +0.5%
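The bonus arithmetic above can be sketched directly; the signal keys are invented identifiers for this example.

```python
# Bonus values from the list above, in percentage points.
BONUS_SIGNALS = {
    "mcp_server": 3.0,
    "a2a_agent_card": 1.5,
    "agent_card_json": 1.0,
    "llms_txt": 0.5,
    "agents_md": 0.5,
    "x402_support": 0.5,
}

def agent_native_bonus(detected: set[str]) -> float:
    """Sum bonus points for detected signals, capped at the 7% maximum.
    Unknown signal names are ignored."""
    return min(sum(BONUS_SIGNALS[s] for s in detected & BONUS_SIGNALS.keys()), 7.0)

print(agent_native_bonus({"mcp_server", "llms_txt"}))  # 3.5
```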

Scoring Caps

Scoring caps are hard maximum scores triggered by critical deficiencies. These caps override dimension scores because they represent fundamental barriers that prevent safe or meaningful agent interaction.

No TLS (HTTP only): max score 39/100

Without TLS encryption, a business cannot score above 39/100 regardless of other factors. Agents cannot safely transmit data over unencrypted connections.

No API endpoints detected: max score 29/100

Without any machine-callable endpoints, the maximum score is 29/100. A business with only a website and no API has severe agent readiness limitations.

No structured data: max score 49/100

Without any structured data (JSON-LD, OpenAPI, agent-card.json), scores are capped at 49/100. Agents need machine-readable information to operate.
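The cap logic amounts to a hard bound applied after the weighted composite is computed; a sketch, not the scanner's actual code.

```python
def apply_caps(score: float, has_tls: bool, has_api: bool,
               has_structured_data: bool) -> float:
    """Apply the hard caps described above: each deficiency bounds the
    final score regardless of how the dimensions scored."""
    cap = 100.0
    if not has_tls:
        cap = min(cap, 39.0)   # no TLS
    if not has_api:
        cap = min(cap, 29.0)   # no machine-callable endpoints
    if not has_structured_data:
        cap = min(cap, 49.0)   # no machine-readable data
    return min(score, cap)

# Hypothetical business scoring 82 on dimensions but lacking any API:
print(apply_caps(82.0, has_tls=True, has_api=False, has_structured_data=True))  # 29.0
```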

Auth-Aware Scoring

Auth-aware scoring prevents businesses from being penalized for having secure, authenticated APIs. A 401 Unauthorized response with a JSON error body proves the API exists and is well-implemented, earning 87% of the full score.

200 OK with JSON body → 100% of dimension score. Fully open, documented endpoints receive full marks.
401 Unauthorized + JSON error body → 87% of dimension score. Protected endpoints that return structured auth errors prove the API exists and is well-implemented.
401 Unauthorized + HTML/empty body → 50% of dimension score. Auth-protected but poor error format; the endpoint exists but does not help agents understand how to authenticate.
403 Forbidden → 40% of dimension score. Access denied with no guidance; the endpoint exists but provides no path forward for agents.
No response / timeout → 0%. Endpoint unreachable; no credit awarded.
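The credit table translates naturally into a lookup. In this sketch, responses the table does not cover (such as a 200 with an HTML body) earn no credit, which is an assumption rather than documented behavior.

```python
from typing import Optional

def auth_aware_credit(status: Optional[int], body_is_json: bool) -> float:
    """Fraction of the dimension score credited for a probed endpoint,
    following the credit table above."""
    if status is None:                  # no response / timeout
        return 0.0
    if status == 200 and body_is_json:  # fully open, structured endpoint
        return 1.0
    if status == 401:                   # protected: credit depends on error format
        return 0.87 if body_is_json else 0.50
    if status == 403:                   # denied with no guidance
        return 0.40
    return 0.0                          # uncovered cases: no credit (assumption)

print(auth_aware_credit(401, body_is_json=True))  # 0.87
```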

7 Agent Readiness Levels (ARL)

Agent Readiness Levels (ARL) are a 7-point classification system that maps score ranges to descriptive maturity stages. ARL provides a quick way to communicate how agent-ready a business is, from ARL-0 (Invisible) to ARL-6 (Agent-First).

ARL-0
0-9/100

Invisible

The business has no machine-readable presence. AI agents cannot discover, understand, or interact with it in any automated way.

Example: A local business with only a basic HTML website and a phone number.

ARL-1
10-24/100

Findable

The business can be found by AI agents through structured data or directory listings, but offers no programmatic interaction.

Example: A restaurant with Google Business Profile and Schema.org markup but no reservation API.

ARL-2
25-39/100

Readable

The business provides machine-readable information about its services, pricing, and availability, but agents cannot take action.

Example: An e-commerce store with product feeds and pricing data but no checkout API.

ARL-3
40-59/100

Functional

The business has APIs that allow agents to perform basic actions like searching, booking, or querying, but the experience is not optimized for agents.

Example: A SaaS product with REST API documentation but no MCP server or agent card.

ARL-4
60-74/100

Integrated

The business has well-documented APIs with agent-aware features. Agents can complete most of the 6-step journey programmatically.

Example: A platform with OpenAPI spec, OAuth, structured error responses, and machine-readable pricing.

ARL-5
75-89/100

Agent-Native

The business was designed with AI agents as first-class consumers. MCP servers, agent cards, and purpose-built agent experiences are deployed.

Example: A developer tool with MCP server, A2A agent card, llms.txt, and agent-optimized documentation.

ARL-6
90-100/100

Agent-First

The business treats AI agents as primary customers. Every step of the agent journey is optimized, monitored, and continuously improved. No company has achieved this level yet.

Example (theoretical): A business with full MCP, A2A, programmatic onboarding, a billing API, and real-time agent analytics.
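The score-to-ARL mapping above is a straightforward band lookup; a sketch:

```python
# (lower bound, level, name) taken from the ARL definitions above.
ARL_BANDS = [
    (90, 6, "Agent-First"), (75, 5, "Agent-Native"), (60, 4, "Integrated"),
    (40, 3, "Functional"), (25, 2, "Readable"), (10, 1, "Findable"),
    (0, 0, "Invisible"),
]

def arl_level(score: int) -> tuple[int, str]:
    """Map a 0-100 composite score to its Agent Readiness Level."""
    for lower, level, name in ARL_BANDS:
        if score >= lower:
            return level, name
    raise ValueError("score must be >= 0")

print(arl_level(75))  # (5, 'Agent-Native')
```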

Tier Thresholds

Agent readiness tiers group businesses into 5 levels based on their composite score. Tier thresholds are: Platinum 90+, Gold 75+, Silver 60+, Bronze 40+, and Not Scored below 40.

Platinum: 90-100/100

Agent-first businesses. All 9 dimensions score high. Full agent-native protocol support. 0 companies currently.

Gold: 75-89/100

Agent-native businesses with MCP, strong APIs, and most journey steps automated. 1 company currently (Resend, 75).

Silver: 60-74/100

Well-integrated businesses with documented APIs and structured data. 51 companies currently (10.2%).

Bronze: 40-59/100

Functionally agent-accessible with basic APIs but significant gaps. 250 companies (50%).

Not Scored: 0-39/100

Below minimum agent readiness. Major gaps in discovery, APIs, or security. 198 companies (39.6%).
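The tier thresholds reduce to the same kind of lookup; a sketch using the boundaries above.

```python
def tier(score: float) -> str:
    """Map a composite score to its tier using the thresholds above."""
    if score >= 90:
        return "Platinum"
    if score >= 75:
        return "Gold"
    if score >= 60:
        return "Silver"
    if score >= 40:
        return "Bronze"
    return "Not Scored"

print(tier(62))  # Silver
```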

What the Scanner Detects

The AgentHermes scanner checks for 40+ signals across 5 categories during each scan. Detection is non-invasive and reads only publicly available information.

Agent Protocols

MCP (Model Context Protocol) server
A2A (Agent-to-Agent) protocol
agent-card.json at /.well-known/
llms.txt file
AGENTS.md file
UCP (Universal Context Protocol)
ACP (Agent Communication Protocol)
x402 micropayment headers

API Standards

OpenAPI / Swagger specification
GraphQL introspection
REST endpoints with JSON responses
gRPC service definitions
WebSocket endpoints
Webhook support

E-Commerce Platforms

Shopify storefront API
WooCommerce REST API
Square API
Stripe integration
BigCommerce API

Discovery Signals

Schema.org structured data (JSON-LD)
Open Graph metadata
robots.txt with sitemap
RSS/Atom feeds
DNS TXT records for verification

Security Signals

TLS 1.2+ certificate
HSTS header
Content-Security-Policy
X-Frame-Options
CORS configuration
Authentication method (OAuth, API key, Bearer)

Frequently Asked Questions

How is the Agent Readiness Score calculated?


The Agent Readiness Score is a weighted composite of 9 dimensions plus an agent-native bonus. Each dimension measures a specific aspect of agent interaction readiness, from discovery (can agents find you?) to payment (can agents pay you?). Dimension weights range from 5% (Pricing Transparency) to 15% (API Quality). The total weights sum to 93%, with a 7% agent-native bonus for implementing protocols like MCP and A2A.

What are the 9 scoring dimensions?


The 9 dimensions are: D1 Discovery (12%), D2 API Quality (15%), D3 Onboarding (8%), D4 Pricing Transparency (5%), D5 Payment (8%), D6 Data Format (10%), D7 Security (12%), D8 Reliability (13%), and D9 Agent Experience (10%). There is also a 7% Agent-Native Bonus for implementing agent-specific protocols.

What is a scoring cap?


Scoring caps are hard maximum scores triggered by critical deficiencies. A business without TLS encryption cannot score above 39/100. A business with no API endpoints cannot score above 29/100. These caps override dimension scores because they represent fundamental barriers to agent interaction.

How does auth-aware scoring work?


Auth-aware scoring recognizes that many APIs require authentication. A 401 Unauthorized response with a well-structured JSON error body scores 87% of what a 200 OK would score, because it proves the API exists and is properly implemented. This prevents businesses from being penalized for having secure, authenticated APIs.

What is an ARL level?


ARL (Agent Readiness Level) is a 7-point scale from ARL-0 (Invisible) to ARL-6 (Agent-First) that categorizes businesses by their stage of agent readiness. It maps directly to score ranges: ARL-0 is 0-9, ARL-3 is 40-59 (Bronze tier), and ARL-6 is 90-100 (Platinum tier). No company has reached ARL-6.

What protocols does the scanner detect?


The AgentHermes scanner detects MCP (Model Context Protocol) servers, A2A (Agent-to-Agent) protocol support, agent-card.json files, llms.txt files, AGENTS.md files, OpenAPI/Swagger specs, GraphQL endpoints, and e-commerce platforms including Shopify, WooCommerce, and Square. It also checks for x402 micropayment headers and emerging agent communication protocols.

How often are scores updated?


Scores are calculated on each scan. Businesses can be rescanned at any time through the AgentHermes audit tool. The scanner re-evaluates all 9 dimensions and the agent-native bonus on every scan, so scores reflect the current state of the business.

Are vertical-specific weights applied?


Yes. AgentHermes uses 27 vertical scoring profiles that adjust dimension weights based on industry context. A SaaS company is weighted more heavily on API Quality and Agent Experience, while a restaurant is weighted more on Discovery and Data Format. Weights always renormalize to the same total.
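Renormalization can be sketched as scaling weights by per-vertical multipliers, then rescaling so the total is preserved. The multiplier value and the three-dimension subset here are illustrative, not AgentHermes's actual vertical profiles.

```python
def renormalize(weights: dict[str, float],
                multipliers: dict[str, float]) -> dict[str, float]:
    """Scale dimension weights by vertical-specific multipliers, then
    rescale so the weights keep their original total."""
    total = sum(weights.values())
    scaled = {d: w * multipliers.get(d, 1.0) for d, w in weights.items()}
    factor = total / sum(scaled.values())
    return {d: w * factor for d, w in scaled.items()}

base = {"D1": 0.12, "D2": 0.15, "D6": 0.10}   # subset of weights, for brevity
saas = renormalize(base, {"D2": 1.5})          # hypothetical SaaS boost to API Quality
print(round(sum(saas.values()), 2))            # 0.37 -- total unchanged
```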

See your score across all 9 dimensions

Get a free Agent Readiness Score with a detailed breakdown of every dimension, your ARL level, and specific recommendations.

Get Your Score