Case Study · Platform Comparison

Why Vercel and Supabase Both Score 69-70: The Infrastructure Platform Pattern

Vercel scores 70 Silver. Supabase scores 69 Silver. One point apart, nearly identical tier placement, but built on completely different architectures. Together they reveal the infrastructure platform pattern — the set of signals that reliably produces Silver-tier agent readiness without any agent-specific effort.

AgentHermes Research
April 16, 2026 · 13 min read

The Infrastructure Platform Pattern

After scanning 500 businesses, a clear pattern emerged. The companies that score highest on agent readiness are not the ones that intentionally built for AI agents. They are the ones that built for developers. Developer infrastructure platforms share a DNA that maps almost perfectly onto what AI agents need: comprehensive APIs, self-service access, structured data, transparent pricing, and reliable uptime.

Vercel and Supabase are the textbook case. Neither company has shipped agent-card.json. Neither publishes llms.txt. Neither offers an official MCP server from their primary domain. Yet both score in the top 5% of all businesses scanned — because building for developers accidentally builds for agents.

This is the infrastructure platform pattern: if your product is consumed via API, documented with structured specs, secured with token-based auth, and monitored with a public status page, you already have 70% of what agents need. The remaining 30% — the gap between Silver and Gold — is three files and 30 minutes of work.

- Vercel score: 70
- Supabase score: 69
- Points to Gold: 5
- Time to close the gap: ~30 minutes

Vercel at 70: What It Gets Right

Vercel earns the slight edge because of three areas where deployment infrastructure naturally excels at agent readiness.

Deployment API with structured lifecycle

Every deployment exposes a machine-readable lifecycle: created, building, ready, error. An agent can trigger a deploy, poll for status, and read structured build output — all via REST. No scraping needed.
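That poll-until-terminal loop can be sketched as follows. This is a hedged sketch, not Vercel's SDK: the lowercase state names mirror the lifecycle described above, and the HTTP call (in practice a GET against a deployments endpoint with a bearer token) is injected as a function so the loop itself stays testable without network access.

```python
import time

# Lifecycle states an agent treats as final (names are illustrative;
# check Vercel's API reference for the exact field and state values).
TERMINAL_STATES = {"ready", "error"}

def wait_for_deployment(fetch_state, deployment_id, interval=5, max_polls=60):
    """Poll a deployment until it reaches a terminal state.

    fetch_state is injected (e.g. a wrapper around the REST call that
    returns the deployment's current lifecycle state as a string).
    """
    for _ in range(max_polls):
        state = fetch_state(deployment_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
    raise TimeoutError(f"deployment {deployment_id} did not settle")
```

Injecting the fetcher also lets an agent framework swap in retry or rate-limit logic without touching the loop.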

CLI documentation as API documentation

Vercel's CLI is exhaustively documented, and every CLI command maps to an API endpoint. Agents can read the CLI docs and derive the API surface. This dual documentation pattern boosts the D2 API Quality score.

Status page with incident history

status.vercel.com provides structured uptime data, incident timelines, and component-level health. D8 Reliability carries 13% weight — the second-highest dimension. A well-maintained status page alone can be worth 2-3 points.
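An agent can consume that structured health data directly. The sketch below assumes the common hosted-status-page JSON shape (a `status` indicator plus a `components` array); verify the exact schema against the live feed before relying on it.

```python
import json

# Trimmed example of the summary JSON shape commonly served by hosted
# status pages (assumed shape; confirm against the real endpoint).
SUMMARY = json.loads("""
{
  "status": {"indicator": "none", "description": "All Systems Operational"},
  "components": [
    {"name": "Edge Network", "status": "operational"},
    {"name": "Builds", "status": "degraded_performance"}
  ]
}
""")

def unhealthy_components(summary):
    """Return names of components an agent should route around."""
    return [c["name"] for c in summary["components"]
            if c["status"] != "operational"]
```

An agent deciding whether to trigger a deploy could check this list first and defer work while build infrastructure is degraded.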

Structured error responses on every endpoint

Vercel returns JSON error objects with error codes, messages, and request IDs on every failed request. This lifts both D6 Data Quality (10%) and D9 Agent Experience (10%) — 20% of the total score from error handling alone.
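Structured errors matter because an agent can branch on them mechanically. The field names below are modeled on that code/message/request-ID pattern, not copied from Vercel's docs, so treat them as illustrative:

```python
import json

# Illustrative error body following the code/message/requestId pattern
# described above (field names are an assumption, not Vercel's exact schema).
body = json.loads("""
{"error": {"code": "forbidden",
           "message": "Token lacks deploy scope",
           "requestId": "req_abc123"}}
""")

def classify(error):
    """Map a structured error code to an agent action."""
    code = error["error"]["code"]
    if code in ("rate_limited", "internal_server_error"):
        return "retry"
    if code in ("forbidden", "unauthorized"):
        return "reauth"
    return "abort"
```

Without structured errors, the agent's only option is to parse free-text messages, which is exactly the scraping the pattern avoids.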

Supabase at 69: What It Gets Right

Supabase trails by a single point but wins on raw API depth. A database-as-a-service naturally exposes the most agent-friendly interface of any infrastructure type: structured queries that return structured data.

Three API transports: REST, GraphQL, Realtime

PostgREST auto-generates a REST API from your schema. GraphQL via pg_graphql adds query flexibility. Realtime subscriptions enable event-driven agent workflows. This triple surface area pushes D2 API Quality near the maximum.
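PostgREST's query language is itself highly mechanical, which is why agents handle it well: filters are `column=operator.value` pairs appended to the table URL. A minimal query builder, assuming Supabase's standard `/rest/v1/` prefix:

```python
def postgrest_query(table, select, **filters):
    """Build a PostgREST-style query string.

    Filters use the operator.value syntax PostgREST defines,
    e.g. age="gte.18". Values are not URL-escaped here; real
    code should escape them.
    """
    parts = [f"select={','.join(select)}"]
    parts += [f"{col}={expr}" for col, expr in filters.items()]
    return f"/rest/v1/{table}?" + "&".join(parts)
```

Because the grammar is schema-derived and uniform across every table, an agent that learns it once can query any Supabase project.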

Self-service API keys with instant provisioning

A new project gets an anon key and service role key immediately. No approval flow, no sales call, no waiting period. D3 Onboarding (8%) rewards this friction-free credential issuance directly.

OpenAPI specification published

The Management API ships with an OpenAPI spec. Agents can read the spec, understand every endpoint, and generate correct requests without documentation scraping. OpenAPI is the single biggest factor in D2 scores.
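The reason an OpenAPI spec helps so much is that the entire API surface can be enumerated programmatically. A sketch against a tiny inline spec (the paths shown are placeholders, not Supabase's real endpoints):

```python
# Minimal inline document standing in for a full OpenAPI spec.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/v1/projects": {"get": {"summary": "List projects"},
                         "post": {"summary": "Create a project"}},
        "/v1/projects/{ref}": {"delete": {"summary": "Delete a project"}},
    },
}

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def endpoints(spec):
    """Enumerate (METHOD, path) pairs an agent can call."""
    return sorted(
        (method.upper(), path)
        for path, ops in spec["paths"].items()
        for method in ops
        if method in HTTP_METHODS
    )
```

From the same spec an agent can also read parameter schemas and required auth, which is what makes "generate correct requests without documentation scraping" possible.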

Transparent per-project pricing

Every resource (database, storage, bandwidth, edge functions) has public per-unit pricing. A free tier exists with documented limits. D4 Pricing (5%) is the lowest-weighted dimension, but Supabase maximizes it by making every cost machine-readable.

Dimension-by-Dimension Comparison

Both platforms score within 1-2 points of each other on most dimensions. The divergence comes from their core architecture: Vercel excels at reliability and deployment lifecycle signals, Supabase excels at raw API depth and onboarding speed.

| Dimension | Vercel | Supabase | Edge |
| --- | --- | --- | --- |
| D1 Discoverability (12%) | 8.5 | 8.0 | Vercel |
| D2 API Quality (15%) | 11.0 | 12.5 | Supabase |
| D3 Onboarding (8%) | 5.5 | 6.0 | Supabase |
| D4 Pricing (5%) | 3.5 | 4.0 | Supabase |
| D5 Payment (8%) | 5.0 | 4.5 | Vercel |
| D6 Data Quality (10%) | 7.5 | 7.5 | Tied |
| D7 Security (12%) | 9.0 | 8.5 | Vercel |
| D8 Reliability (13%) | 10.5 | 9.5 | Vercel |
| D9 Agent Experience (10%) | 6.5 | 6.0 | Vercel |
| Agent-Native Bonus (7%) | 3.0 | 2.5 | Vercel |
| **Total** | **70.0** | **69.0** | **+1 Vercel** |

What Both Miss: The 3 Files to Gold

Both platforms fall short of Gold (75+) for the same reason: they are agent-usable but not agent-native. An AI agent can interact with Vercel and Supabase APIs effectively, but neither platform explicitly declares itself as agent-ready. Three files would change that.

agent-card.json

Effort: 10 minutes. Impact: +2-3 points on D1 and the Agent-Native Bonus.

A JSON file at /.well-known/agent-card.json that declares capabilities, tools, and authentication methods. Agents use this as their entry point for discovery.
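Since the agent-card convention is still settling, the fields below are illustrative rather than normative; a minimal card might look like this:

```json
{
  "name": "Vercel",
  "description": "Deploy and manage web applications via REST API",
  "url": "https://api.vercel.com",
  "authentication": { "type": "bearer" },
  "capabilities": ["deployments.create", "deployments.read", "projects.manage"]
}
```

The point is discoverability: an agent fetches one well-known URL and learns what the platform does and how to authenticate, before reading a single docs page.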

llms.txt

Effort: 10 minutes. Impact: +1-2 points on D1 and D9.

A markdown file at /llms.txt that gives LLMs a plain-language summary of what your platform does, its API surface, and how to get started. Faster than parsing docs.
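Following the llms.txt proposal's format (an H1 title, a blockquote summary, then sections of annotated links), a minimal file might look like this; the URLs are illustrative:

```markdown
# Vercel

> Vercel deploys and hosts web applications. All functionality is
> available via REST API; authenticate with a bearer token.

## Docs

- [API Reference](https://vercel.com/docs/rest-api): every endpoint, with auth and examples
- [CLI Reference](https://vercel.com/docs/cli): commands that map one-to-one to API calls
```

One short markdown file saves an LLM from crawling and summarizing an entire docs site on every session.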

MCP Server endpoint

Effort: 30-60 minutes. Impact: +3-4 points on the Agent-Native Bonus.

An MCP server that exposes platform tools — deploy, query, create-project — as callable functions. Both platforms already have the APIs. The MCP server is the agent-native wrapper.
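The core of such a server is a small dispatcher over the protocol's `tools/list` and `tools/call` messages. The sketch below is schematic only, with a stubbed deploy tool; a real implementation should use the official MCP SDK rather than hand-rolling the protocol:

```python
# Schematic MCP-style dispatcher: advertises one tool and routes calls to it.
# The deploy handler is a stub; in a real server it would call the
# platform's existing deployment API.
TOOLS = {
    "deploy": {
        "description": "Trigger a deployment for a project",
        "inputSchema": {"type": "object",
                        "properties": {"project": {"type": "string"}},
                        "required": ["project"]},
    },
}

def handle(request):
    """Dispatch MCP-shaped requests for tool discovery and invocation."""
    method = request["method"]
    if method == "tools/list":
        return {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    if method == "tools/call":
        name = request["params"]["name"]
        args = request["params"].get("arguments", {})
        if name == "deploy":
            return {"content": [{"type": "text",
                                 "text": f"deploy started for {args['project']}"}]}
        raise ValueError(f"unknown tool: {name}")
    raise ValueError(f"unsupported method: {method}")
```

This illustrates the article's point: the hard part (the deploy API) already exists, and the MCP layer is a thin declarative wrapper over it.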

The math is simple: Vercel at 70 + agent-card.json (+2) + llms.txt (+1) + MCP server (+3) = 76 Gold. Supabase at 69 with the same additions = 75 Gold. The only company that has done all three? Resend, the only Gold at 75.

Lessons for Every Infrastructure Platform

The Vercel-Supabase comparison reveals a repeatable formula. Any infrastructure platform — Cloudflare, Neon, PlanetScale, Render, Railway, Fly.io — can apply the same pattern.

API-first architecture is 60% of the score

D2 API Quality (15%), D6 Data Quality (10%), D7 Security (12%), and D9 Agent Experience (10%) collectively reward structured, authenticated, well-documented APIs. If you already have these, you are already Silver-adjacent.

Status pages are undervalued assets

D8 Reliability (13%) is the second-highest dimension. A public status page with historical uptime, component-level health, and incident timelines is worth more than most companies realize — not just for trust, but for agent decisioning.

Self-service onboarding is non-negotiable

If an agent cannot get API credentials without a human in the loop, D3 Onboarding scores zero. OAuth app creation, API key generation, and instant project provisioning are baseline requirements.

Agent-native signals are the Gold unlock

The gap between Silver and Gold is not about building new APIs. It is about declaring your existing APIs in agent-native formats: agent-card.json for discovery, llms.txt for comprehension, MCP for invocation.

The companies in our developer tools analysis overwhelmingly follow this pattern. 22 of the top 30 Silver-tier companies are developer infrastructure. The pattern is not accidental — it is architectural. And as shown in our GitHub breakdown, even the largest platforms in the world still miss the explicit agent-native signals that separate Silver from Gold.

Frequently Asked Questions

Why do Vercel and Supabase score almost identically?

Both are modern developer infrastructure platforms built API-first. They share the same architectural DNA: comprehensive REST APIs, self-service onboarding, transparent pricing, structured error responses, and public status pages. The Agent Readiness Score rewards these patterns heavily, which is why developer tools dominate the Silver tier.

What separates Silver from Gold?

Gold (75+) requires agent-native signals that Silver companies typically lack: an agent-card.json for A2A discovery, an llms.txt for LLM consumption, and ideally an MCP server for direct tool invocation. These are the explicit "I am ready for AI agents" declarations. Without them, platforms are agent-usable but not agent-native.

Does Supabase have an MCP server?

The Supabase community has built several unofficial MCP servers, and there are third-party integrations. However, Supabase does not ship an official MCP server endpoint from supabase.com itself. An official MCP server that wraps the Management API and client libraries would immediately push the score toward Gold.

How does Vercel compare to GitHub on agent readiness?

GitHub scores 67 Silver — slightly below Vercel at 70. GitHub has a deeper API surface (REST + GraphQL + an official MCP server via Copilot), but Vercel edges ahead on reliability signals (status page, structured deployment errors) and lower onboarding friction. Both miss the same Gold-tier files.


See how your platform compares

Get your free Agent Readiness Score across all 9 dimensions. See where you rank against Vercel, Supabase, and 500 other businesses.

