Scoring Methodology
How MCP Rated evaluates every MCP server through published criteria, evidence summaries, and coverage notes. Version 2.1 · Last updated March 2026.
Philosophy
- Independent by design. Scores are based on published criteria rather than vendor claims or paid placement.
- Evidence before opinion. Protocol behavior, repository signals, documentation quality, and server metadata shape the review.
- Coverage is never hidden. When an area cannot be assessed fairly, the limitation is disclosed alongside the score.
- Versioned methodology. Scoring changes are tracked so historical comparisons have context.
Score Taxonomy
Each server is reviewed across six dimensions. The overall score translates those dimensions into a 0–100 view for quick comparison.
Reliability
Protocol conformance, connection stability, schema validity, error handling
Security
Instruction-safety review, dependency audit, credential exposure checks, authentication
Setup
Ease of getting started: README, setup guides, transport developer experience
Documentation
Quality & completeness of descriptions, schemas, categories
Compatibility
Transport support, schema completeness, tool integration depth
Maintenance
GitHub health signals, adjusted for project scale
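As a minimal sketch of how six dimension scores could roll up into a single 0–100 view: the weights below are illustrative assumptions, since MCP Rated does not publish its exact weighting.

```python
# Illustrative weights only; MCP Rated's actual weighting is not published.
# Integer weights summing to 100 keep the arithmetic exact.
WEIGHTS = {
    "reliability": 25,
    "security": 25,
    "setup": 10,
    "documentation": 15,
    "compatibility": 10,
    "maintenance": 15,
}

def overall_score(dimensions: dict[str, float]) -> float:
    """Combine six 0-100 dimension scores into one 0-100 overall score."""
    return sum(WEIGHTS[d] * dimensions[d] for d in WEIGHTS) / 100
```

A server scoring 90/80/70/85/75/60 across the six dimensions would land in the high 70s overall under these assumed weights.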
Coverage-Aware Scoring
Not every server exposes the same surface area. If a dimension cannot be assessed fairly, MCP Rated explains the limitation and avoids presenting a coverage gap as a failure.
Reliability Status Classification
Some servers require OAuth, a vendor sandbox, or a specific transport environment. Instead of treating those constraints as failures, reliability is shown with one of three coverage statuses:
- Fully tested — full protocol test completed; the score reflects actual connection stability, schema validity, and error handling.
- Partially tested — connected to the server but could not test individual tools; the score is shown with a caveat.
- Not testable — could not complete enough testing for a reliability score due to access, transport, or sandbox limitations; shown as N/A, not a negative signal.
When reliability cannot be measured, the detail page presents that limitation directly rather than turning it into a negative reliability claim.
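The three coverage statuses above can be modeled as a small decision rule. The status names and the "all tools tested" threshold are illustrative assumptions, not MCP Rated's internal implementation:

```python
from dataclasses import dataclass

@dataclass
class ProtocolTestRun:
    connected: bool    # did the MCP handshake succeed?
    tools_tested: int  # individual tools actually exercised
    tools_total: int   # tools the server advertises

def reliability_status(run: ProtocolTestRun) -> str:
    """Map a protocol test run to one of three coverage statuses.

    Status names and thresholds are assumptions for illustration.
    """
    if run.connected and run.tools_total > 0 and run.tools_tested == run.tools_total:
        return "fully_tested"      # score reflects measured protocol behavior
    if run.connected:
        return "partially_tested"  # score shown with a caveat
    return "not_testable"          # shown as N/A, never as a failure
```

The key property is that a blocked handshake (for example, OAuth or sandbox requirements) falls through to N/A rather than to a low score.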
Evidence-Based Review
MCP Rated favors evidence that can be gathered without changing third-party systems. Destructive or state-changing behavior is considered only when a suitable sandbox or vendor-provided environment exists.
- Measured claims. Public conclusions are tied to evidence that can be reviewed and explained.
- Summarized evidence. Detail pages describe relevant outcomes and caveats without exposing sensitive internal material.
- Consistent interpretation. Similar findings are grouped into the same review categories across servers.
Security Scoring
Security review combines public vulnerability data, repository signals, credential exposure indicators, and unsafe-instruction review. Supplemental editorial triage may inform prioritization, but it is never the sole basis for a negative public score.
Instruction safety
Unsafe-instruction and prompt-injection review
Dependencies
Lockfile presence, known vulnerabilities via OSV and GitHub Advisory
Secrets
Credential exposure and secret hygiene signals
Auth
Authentication method appropriateness for the transport type
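The credential-exposure check above can be approximated with simple pattern matching. The patterns below are common public examples of secret formats, not MCP Rated's actual rule set, which would be far larger:

```python
import re

# Illustrative secret patterns; real scanners carry hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def find_secret_signals(text: str) -> list[str]:
    """Return the names of secret patterns that match the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

A hit is a signal for review, not an automatic score penalty; false positives (test fixtures, revoked keys) still need human triage.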
Evidence Sources
Vulnerability advisories
Known public CVE and GHSA records
Repository security signals
Dependency and maintenance indicators
Credential exposure indicators
Signals that suggest unsafe secret handling
Instruction-safety review
Signals that suggest unsafe tool guidance
Supplemental triage
Additional context considered during review
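Known-vulnerability lookups of the kind listed above can be made against the public OSV API (`https://api.osv.dev/v1/query`). A minimal sketch, with a placeholder package name:

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def build_osv_query(ecosystem: str, name: str, version: str) -> dict:
    """Build the request body for OSV's /v1/query endpoint."""
    return {"version": version, "package": {"ecosystem": ecosystem, "name": name}}

def fetch_advisories(ecosystem: str, name: str, version: str) -> list[dict]:
    """POST the query to OSV and return the list of known advisories."""
    body = json.dumps(build_osv_query(ecosystem, name, version)).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])
```

Each returned advisory carries its CVE/GHSA identifiers, which is what allows a review to cite only already-public records.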
Quality Gate
Server pages are indexed only after meeting quality checks for coverage, data completeness, and scoring consistency. Pages that don't meet the quality bar remain hidden from search engines until sufficient evidence is available.
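The gate described above reduces to a per-page predicate that drives the robots directive. The check names here are hypothetical stand-ins for the actual quality checks:

```python
def robots_directive(checks: dict[str, bool]) -> str:
    """Return the robots meta value for a server page.

    `checks` maps quality-check names (hypothetical here) to pass/fail.
    A page is indexable only when every check passes.
    """
    return "index,follow" if checks and all(checks.values()) else "noindex"
```

Because the default is `noindex`, a page with missing or failing checks stays out of search results until enough evidence accumulates.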
Badge Criteria
Badges are awarded based on score thresholds across reliability, security, and overall quality.
Lab Tested
Server has been tested with sufficient coverage and meets minimum quality standards.
Vendor Verified
Server demonstrates high reliability and security scores from structured testing.
Security Scanned
Server has passed security scanning with satisfactory results.
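Badge awards amount to threshold checks on the dimension scores and coverage. The numeric thresholds below are illustrative assumptions, not the published criteria:

```python
def award_badges(scores: dict[str, float], coverage: str) -> list[str]:
    """Return the badges a server earns (all thresholds are assumptions)."""
    badges = []
    if coverage == "fully_tested" and scores.get("overall", 0) >= 70:
        badges.append("Lab Tested")
    if scores.get("reliability", 0) >= 85 and scores.get("security", 0) >= 85:
        badges.append("Vendor Verified")
    if scores.get("security", 0) >= 75:
        badges.append("Security Scanned")
    return badges
```

Note that coverage participates in the rule: a high score from partial testing would not by itself earn a testing-based badge.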
Vulnerability Disclosure Policy
We publicly list only already-disclosed vulnerabilities from authoritative databases (OSV, GitHub Advisory). Newly discovered findings from internal scans are kept private until responsibly disclosed to the vendor.
- Known public CVE/GHSA/advisory — listed immediately
- New finding under investigation — not shown publicly
- Confirmed and reported to vendor, awaiting fix (not shown publicly)
- Disclosure process complete — listed publicly
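The stages above behave like a small state machine in which only certain states are ever shown publicly. The state names are paraphrases of the stages, not an internal schema:

```python
from enum import Enum

class DisclosureState(Enum):
    KNOWN_PUBLIC = "known_public"              # already in OSV / GitHub Advisory
    UNDER_INVESTIGATION = "under_investigation"
    REPORTED_TO_VENDOR = "reported_to_vendor"  # awaiting a fix
    DISCLOSED = "disclosed"                    # responsible disclosure complete

# Only findings that are already public knowledge get listed on the site.
PUBLICLY_LISTED = {DisclosureState.KNOWN_PUBLIC, DisclosureState.DISCLOSED}

def is_listed(state: DisclosureState) -> bool:
    """True when a finding in this state appears on a public server page."""
    return state in PUBLICLY_LISTED
```

The invariant this encodes: no transition ever publishes a finding before the vendor has had the chance to fix it.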
Transparency & Re-test Policy
- Summarized evidence and review caveats are visible on server detail pages.
- Maintainers can request a re-test by contacting us.
- Methodology changes are tracked and versioned.
- Paid features and coverage limitations are disclosed separately so scores do not overstate what was actually tested.