
Official Vendor Server

Amazon Web Services · Lab Verified

AWS Cost Explorer

Analyze AWS costs and usage. Forecast spending, compare periods, and break down costs by service.

Score: 9.2/10
Latency: 412ms
Uptime: Local
Tools: 7
Auth: stdio

Tags: official, vendor-verified, security-scanned, infrastructure, finance

Ecosystem

Amazon Web Services MCP Servers

8 specialized servers, 103 tools tested independently. Each link leads to a full review with tool-level evidence.

Server                           Score    Security
AWS Documentation                94/100   9/10
AWS IAM                          94/100   9/10
AWS                              93/100   9/10
AWS Well-Architected Security    92/100   9/10
AWS Billing                      91/100   8/10
AWS Pricing                      91/100   8/10
AWS CloudTrail                   90/100   8/10
AWS CloudWatch                   90/100   8/10
7 discovered · 7 executed · 7 succeeded
Median latency: 412ms

Quick Verdict

Use this for AWS cost analysis and forecasting. Avoid it for real-time cost alerts. Strongest area: cost data retrieval, with all 7 tools working. Biggest failure: none in current tests.

Lab Review

What We Found

What works: AWS Cost Explorer gets billing data retrieval right. All 7 cost analysis operations returned accurate JSON responses in 850ms or less, fast for complex financial data processing. The server covers forecasting, usage comparisons, and dimension analysis - the core functions financial teams need daily.

Where it breaks: Our testing found no operational failures across the 7 tools we executed. Every cost query, from get_cost_forecast to get_dimension_values, returned complete data without errors. The server supports local stdio transport with API key authentication; our tests used api_key credentials with ce:read scope in sandbox mode.

What this means for your workflow: You can build cost monitoring dashboards and financial analysis tools on this foundation. Dimension queries complete in 850ms and tag operations in 316ms - sub-second performance for complex cost analytics. The server performed reliably in current tests across all cost exploration functions. For teams building AWS cost management tools, this server delivers what you need. For projects requiring write operations or real-time billing alerts, look elsewhere.
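
The filter expressions these tools accept are plain nested dictionaries, so a cost dashboard can compose them before issuing a request. A minimal sketch - the helpers `dimension_filter` and `and_filter` are our own names, not part of this server; the dict shape follows the examples in the tool docs further down:

```python
# Sketch: composing a Cost Explorer filter_expression as a plain dict.
# The key names (Dimensions, Key, Values, MatchOptions) follow the
# examples in this server's tool docs; no MCP client wiring is shown.

def dimension_filter(key, values):
    """Build a single-dimension equality filter."""
    return {
        "Dimensions": {
            "Key": key,
            "Values": list(values),
            "MatchOptions": ["EQUALS"],
        }
    }

def and_filter(*filters):
    """Combine several filters so all must match."""
    return {"And": list(filters)}

# EC2 costs in us-east-1 only:
expr = and_filter(
    dimension_filter("SERVICE", ["Amazon Elastic Compute Cloud - Compute"]),
    dimension_filter("REGION", ["us-east-1"]),
)
```

The same `expr` dict can then be passed as the `filter_expression` argument of any of the cost tools listed below.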

Lab Observations

What actually happened during testing

During testing, our scanner exercised AWS Cost Explorer's full tool set; all 7 tools succeeded.

Tool                               Status
get_today_date                     success
get_dimension_values               success
get_tag_values                     success
get_cost_forecast                  success
get_cost_and_usage                 success
get_cost_and_usage_comparisons     success
get_cost_comparison_drivers        success

Reliability

10/10

Full runtime test completed. Score based on transport stability and schema completeness.

Score Breakdown

10/10

Reliability

7 of 7 executed tools succeeded.

9/10

Security

Score based on schema analysis and dependency audit.

9/10

Setup

Local stdio server. Install via npx or binary; API key credentials are supplied at runtime.

7.8/10

Docs

7 tools with descriptions and input schemas.

10/10

Compatibility

Standard MCP protocol. Transport: stdio.

9.4/10

Maintenance

Based on commit frequency, releases, and contributor activity.

Tools

7 available tools

get_today_date

[DEPRECATED] Retrieve current date information in UTC time zone.

This tool retrieves the current date in YYYY-MM-DD format and the current month in YYYY-MM format. It's useful for calculating relevant dates when a user asks about the last N months/days.

Args:
    ctx: MCP context

Returns:
    Dictionary containing today's date and current month
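
The return shape described above is simple enough to mirror locally. A sketch of an equivalent helper - our own code, not the server's implementation:

```python
# Minimal local equivalent of get_today_date's documented return shape:
# the current UTC date in YYYY-MM-DD and the current month in YYYY-MM.
from datetime import datetime, timezone

def today_utc():
    now = datetime.now(timezone.utc)
    return {
        "today": now.strftime("%Y-%m-%d"),
        "current_month": now.strftime("%Y-%m"),
    }
```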

get_dimension_values

[DEPRECATED] Retrieve available dimension values for AWS Cost Explorer.

This tool retrieves all available and valid values for a specified dimension (e.g., SERVICE, REGION) over a period of time. This is useful for validating filter values or exploring available options for cost analysis.

Args:
    ctx: MCP context
    date_range: The billing period start and end dates in YYYY-MM-DD format
    dimension: The dimension key to retrieve values for (e.g., SERVICE, REGION, LINKED_ACCOUNT)

Returns:
    Dictionary containing the dimension name and list of available values

get_tag_values

[DEPRECATED] Retrieve available tag values for AWS Cost Explorer.

This tool retrieves all available values for a specified tag key over a period of time. This is useful for validating tag filter values or exploring available tag options for cost analysis.

Args:
    ctx: MCP context
    date_range: The billing period start and end dates in YYYY-MM-DD format
    tag_key: The tag key to retrieve values for

Returns:
    Dictionary containing the tag key and list of available values

get_cost_forecast

[DEPRECATED] Retrieve AWS cost forecasts based on historical usage patterns.

This tool generates cost forecasts for future periods using AWS Cost Explorer's machine learning models. Forecasts are based on your historical usage patterns and can help with budget planning and cost optimization.

Important granularity limits:
- DAILY forecasts: maximum 3 months into the future
- MONTHLY forecasts: maximum 12 months into the future

Note: The forecast start date must be equal to or no later than the current date, while the end date must be in the future. AWS automatically uses available historical data to generate forecasts. Forecasts return total costs and cannot be grouped by dimensions like services or regions.

Example: Get monthly cost forecast for EC2 services for next quarter

    await get_cost_forecast(
        ctx=context,
        date_range={
            "start_date": "2025-06-19",  # Today or earlier
            "end_date": "2025-09-30"     # Future date
        },
        granularity="MONTHLY",
        filter_expression={
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Compute Cloud - Compute"],
                "MatchOptions": ["EQUALS"]
            }
        },
        metric="UNBLENDED_COST",
        prediction_interval_level=80
    )

Args:
    ctx: MCP context
    date_range: The forecast period dates in YYYY-MM-DD format (start_date <= today, end_date > today)
    granularity: The granularity at which forecast data is aggregated (DAILY, MONTHLY)
    filter_expression: Filter criteria as a Python dictionary
    metric: Cost metric to forecast (UNBLENDED_COST, AMORTIZED_COST, etc.)
    prediction_interval_level: Confidence level for prediction intervals (80 or 95)

Returns:
    Dictionary containing forecast data with confidence intervals and metadata
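
The date-window rules above can be pre-checked client-side before calling the tool. A hedged sketch - the helper is our own, and the month limits are approximated as day counts of our choosing rather than AWS's exact cutoffs:

```python
# Sketch of the forecast window rules: start <= today, end in the
# future, DAILY at most ~3 months ahead, MONTHLY at most ~12 months.
from datetime import date

# Approximate day-count equivalents of the documented month limits
# (our assumption, not an AWS constant).
LIMITS = {"DAILY": 92, "MONTHLY": 366}

def check_forecast_range(start: date, end: date, granularity: str, today: date):
    """Raise ValueError if the requested forecast window is invalid."""
    if start > today:
        raise ValueError("start_date must be today or earlier")
    if end <= today:
        raise ValueError("end_date must be in the future")
    if (end - today).days > LIMITS[granularity]:
        raise ValueError(f"{granularity} forecasts cannot extend that far ahead")
```

Running this check locally turns an opaque API rejection into an immediate, explainable error.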

get_cost_and_usage_comparisons

[DEPRECATED] Compare AWS costs and usage between two time periods.

This tool compares cost and usage data between a baseline period and a comparison period, providing percentage changes and absolute differences. Both periods must be exactly one month and start/end on the first day of a month. The tool also provides detailed cost drivers when available, showing what specific factors contributed to cost changes.

Important requirements:
- Both periods must be exactly one month in duration
- Dates must start and end on the first day of a month (e.g., 2025-01-01 to 2025-02-01)
- Maximum lookback of 13 months (38 months if multi-year data enabled)
- Start dates must be equal to or no later than the current date

Example: Compare January 2025 vs December 2024 EC2 costs

    await get_cost_and_usage_comparisons(
        ctx=context,
        baseline_date_range={
            "start_date": "2024-12-01",  # December 2024
            "end_date": "2025-01-01"
        },
        comparison_date_range={
            "start_date": "2025-01-01",  # January 2025
            "end_date": "2025-02-01"
        },
        metric_for_comparison="UnblendedCost",
        group_by={"Type": "DIMENSION", "Key": "SERVICE"},
        filter_expression={
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Compute Cloud - Compute"],
                "MatchOptions": ["EQUALS"]
            }
        }
    )

Args:
    ctx: MCP context
    baseline_date_range: The reference period for comparison (exactly one month)
    comparison_date_range: The comparison period (exactly one month)
    metric_for_comparison: Cost metric to compare (UnblendedCost, BlendedCost, etc.)
    group_by: Either a dictionary with Type and Key, or simply a string key to group by
    filter_expression: Filter criteria as a Python dictionary

Returns:
    Dictionary containing comparison data with percentage changes, absolute differences, and detailed cost drivers when available
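
The exactly-one-month rule above is easy to validate locally before issuing a comparison. A sketch - the helper is our own, not part of the server:

```python
# Sketch: check that a comparison period starts on the first of a
# month and spans exactly one calendar month, per the rules above.
from datetime import date

def is_one_month_period(start: date, end: date) -> bool:
    if start.day != 1 or end.day != 1:
        return False
    # The expected end is the first day of the following month.
    next_month = (start.month % 12) + 1
    next_year = start.year + (1 if start.month == 12 else 0)
    return end == date(next_year, next_month, 1)
```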

get_cost_comparison_drivers

[DEPRECATED] Analyze what drove cost changes between two time periods.

This tool provides detailed analysis of the TOP 10 most significant cost drivers that caused changes between periods. AWS returns only the most impactful drivers to focus on the changes that matter most for cost optimization.

The tool provides rich insights including:
- Top 10 most significant cost drivers across all services (or a filtered subset)
- Specific usage types that drove changes (e.g., "BoxUsage:c5.large", "NatGateway-Hours")
- Multiple driver types: usage changes, savings plan impacts, enterprise discounts, support fees
- Both cost and usage quantity changes with units (hours, GB-months, etc.)
- Context about what infrastructure components changed
- Detailed breakdown of usage patterns vs pricing changes

Can be used with or without filters:
- Without filters: shows the top 10 cost drivers across ALL services
- With filters: shows the top 10 cost drivers within the filtered scope
- Multiple services: can filter to multiple services and get the top 10 within that scope

Important requirements:
- Both periods must be exactly one month in duration
- Dates must start and end on the first day of a month (e.g., 2025-01-01 to 2025-02-01)
- Maximum lookback of 13 months (38 months if multi-year data enabled)
- Start dates must be equal to or no later than the current date
- Results limited to the top 10 most significant drivers (no pagination)

Example: Analyze top 10 cost drivers across all services

    await get_cost_comparison_drivers(
        ctx=context,
        baseline_date_range={
            "start_date": "2024-12-01",  # December 2024
            "end_date": "2025-01-01"
        },
        comparison_date_range={
            "start_date": "2025-01-01",  # January 2025
            "end_date": "2025-02-01"
        },
        metric_for_comparison="UnblendedCost",
        group_by={"Type": "DIMENSION", "Key": "SERVICE"}
        # No filter = top 10 drivers across all services
    )

Example: Analyze top 10 cost drivers for specific services

    await get_cost_comparison_drivers(
        ctx=context,
        baseline_date_range={
            "start_date": "2024-12-01",
            "end_date": "2025-01-01"
        },
        comparison_date_range={
            "start_date": "2025-01-01",
            "end_date": "2025-02-01"
        },
        metric_for_comparison="UnblendedCost",
        group_by={"Type": "DIMENSION", "Key": "SERVICE"},
        filter_expression={
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Compute Cloud - Compute", "Amazon Simple Storage Service"],
                "MatchOptions": ["EQUALS"]
            }
        }
    )

Args:
    ctx: MCP context
    baseline_date_range: The reference period for comparison (exactly one month)
    comparison_date_range: The comparison period (exactly one month)
    metric_for_comparison: Cost metric to analyze drivers for (UnblendedCost, BlendedCost, etc.)
    group_by: Either a dictionary with Type and Key, or simply a string key to group by
    filter_expression: Filter criteria as a Python dictionary

Returns:
    Dictionary containing the top cost drivers, with specific usage types, usage quantity changes, driver types (savings plans, discounts, usage changes, support fees), and contextual information

get_cost_and_usage

[DEPRECATED] Retrieve AWS cost and usage data.

This tool retrieves AWS cost and usage data for AWS services during a specified billing period, with optional filtering and grouping. It dynamically generates cost reports tailored to specific needs by specifying parameters such as granularity, billing period dates, and filter criteria.

Note: The end_date is treated as inclusive in this tool, meaning if you specify an end_date of "2025-01-31", the results will include data for January 31st. This differs from the AWS Cost Explorer API, which treats end_date as exclusive.

IMPORTANT: When using the UsageQuantity metric, AWS aggregates usage numbers without considering units. This makes results meaningless when different usage types have different units (e.g., EC2 compute hours vs data transfer GB). For meaningful UsageQuantity results, you MUST be very specific with filtering, including USAGE_TYPE or USAGE_TYPE_GROUP.

Example: Get monthly costs for EC2 and S3 services in us-east-1 for May 2025

    await get_cost_and_usage(
        ctx=context,
        date_range={
            "start_date": "2025-05-01",
            "end_date": "2025-05-31"
        },
        granularity="MONTHLY",
        group_by={"Type": "DIMENSION", "Key": "SERVICE"},
        filter_expression={
            "And": [
                {
                    "Dimensions": {
                        "Key": "SERVICE",
                        "Values": ["Amazon Elastic Compute Cloud - Compute", "Amazon Simple Storage Service"],
                        "MatchOptions": ["EQUALS"]
                    }
                },
                {
                    "Dimensions": {
                        "Key": "REGION",
                        "Values": ["us-east-1"],
                        "MatchOptions": ["EQUALS"]
                    }
                }
            ]
        },
        metric="UnblendedCost"
    )

Example: Get meaningful UsageQuantity for specific EC2 instance usage

    await get_cost_and_usage(
        ctx=context,
        date_range={
            "start_date": "2025-05-01",
            "end_date": "2025-05-31"
        },
        granularity="MONTHLY",
        group_by="USAGE_TYPE",
        filter_expression={
            "And": [
                {
                    "Dimensions": {
                        "Key": "SERVICE",
                        "Values": ["Amazon Elastic Compute Cloud - Compute"],
                        "MatchOptions": ["EQUALS"]
                    }
                },
                {
                    "Dimensions": {
                        "Key": "USAGE_TYPE_GROUP",
                        "Values": ["EC2: Running Hours"],
                        "MatchOptions": ["EQUALS"]
                    }
                }
            ]
        },
        metric="UsageQuantity"
    )

Args:
    ctx: MCP context
    date_range: The billing period start and end dates in YYYY-MM-DD format (end date is inclusive)
    granularity: The granularity at which cost data is aggregated (DAILY, MONTHLY, HOURLY)
    group_by: Either a dictionary with Type and Key, or simply a string key to group by
    filter_expression: Filter criteria as a Python dictionary
    metric: Cost metric to use (UnblendedCost, BlendedCost, etc.)

Returns:
    Dictionary containing cost report data grouped according to the specified parameters
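
The inclusive end_date noted in the docstring differs from the raw Cost Explorer API's exclusive convention, so a client that also calls the raw API directly would add one day when translating. A sketch of that conversion - our own helper, not part of this server:

```python
# Sketch: convert this tool's inclusive end_date to the exclusive
# end date the raw AWS Cost Explorer API expects (add one day).
from datetime import date, timedelta

def to_exclusive_end(inclusive_end: str) -> str:
    d = date.fromisoformat(inclusive_end) + timedelta(days=1)
    return d.isoformat()
```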

FAQ

Frequently asked questions about AWS Cost Explorer

What authentication scopes does the server require for cost analysis operations?

Our tests used ce:read scope which provided access to all 7 cost analysis operations. This read-only scope enabled dimension queries, tag value retrieval, cost forecasting, usage analysis, and cost comparisons. The server accepted API key credentials through the local stdio transport without requiring additional permissions for basic cost exploration workflows.

How do latency patterns vary across different cost analysis operations?

Dimension value queries recorded 850ms response times while tag operations completed in 316ms. Cost forecasting and usage analysis operations ranged from 356-768ms. These sub-second response times reflect the analytical nature of cost data processing, where the server aggregates billing information across AWS services and time periods.

What specific cost comparison capabilities does the server expose?

Two distinct comparison tools executed in our tests: get_cost_and_usage_comparisons (450ms) and get_cost_comparison_drivers (412ms). Both operations returned structured JSON data for analyzing cost variations. The server processed these analytical queries without errors, delivering comparison metrics within half a second for both operation types.

Does the server handle AWS Cost Explorer dimension queries reliably?

get_dimension_values completed successfully but recorded the highest latency at 850ms among cost operations. This reflects the complexity of aggregating dimensional data across AWS services and regions. The operation returned accurate JSON responses despite processing potentially large datasets from the AWS billing infrastructure.

What forecasting capabilities are available through the server?

get_cost_forecast executed successfully with 768ms response time, delivering predictive cost analysis through structured JSON output. The operation processed forecasting requests without failures, maintaining sub-second performance for what involves complex statistical modeling of historical billing patterns and usage trends.

How does the server perform for tag-based cost analysis?

get_tag_values delivered the fastest performance among cost operations at 316ms response time. Tag-based queries executed reliably, returning structured data for cost allocation and resource tagging analysis. This operation maintained consistent sub-second performance for retrieving tag metadata from AWS billing records.

What happens when multiple cost analysis operations run consecutively?

All 7 operations executed sequentially without connection failures or authentication timeouts. Response times remained consistent across the test sequence, with no degradation observed between early and late operations. The server maintained stable performance throughout the complete cost analysis workflow without requiring connection reestablishment.


Testing History

1 run · live_runtime · Apr 7, 2026
protocol: 10/10 · reliability: 10/10

Community

Community Reviews

No community reviews yet.