
Official Vendor Server

Amazon Web Services ✦ Lab Verified

AWS CloudWatch

Monitor AWS resources with CloudWatch. Query logs, metrics, alarms, and telemetry data.

9.0/10

Score

451ms

Latency

Local

Uptime

23

Tools

stdio

Auth

Official · vendor-verified · security-scanned · infrastructure · monitoring

Ecosystem

Amazon Web Services MCP Servers

8 specialized servers, 87 tools tested independently. Each link leads to a full review with tool-level evidence.

Server | Score | Security
AWS Documentation | 94/100 | 9/10
AWS IAM | 94/100 | 9/10
AWS | 93/100 | 9/10
AWS Cost Explorer | 92/100 | 9/10
AWS Well-Architected Security | 92/100 | 9/10
AWS Billing | 91/100 | 8/10
AWS Pricing | 91/100 | 8/10
AWS CloudTrail | 90/100 | 8/10
23 discovered · 7 executed · 7 succeeded
Median latency: 451ms

Quick Verdict

Use this for CloudWatch monitoring and metric analysis. Avoid it if you need verified write operations: only 7 read tools were executed in testing, and the 16 write-capable tools remain untested. Best area: metric data retrieval. Biggest failure: none in current tests.

Lab Review

What We Found

What works: AWS CloudWatch's MCP server delivers clean metric and alarm operations. All 7 tested tools returned complete JSON responses without errors. get_metric_data and get_active_alarms performed consistently for real-time monitoring queries, giving you reliable access to current system state.

Where it breaks: We found no functional failures across the tested operations. Log group discovery completes in around 1 second, suitable for periodic monitoring setup rather than rapid iteration. get_recommended_metric_alarms generates AI-powered suggestions in under 1 second, much faster than manual alarm analysis but slower than basic data retrieval calls.

What this means for your workflow: You can build monitoring dashboards and alerting logic on this server's core operations. The server supports local stdio transport with API key authentication for straightforward integration. Metric queries and alarm status checks are fast enough for automated monitoring tools. For teams building AWS observability workflows, this server is ready to use.

Lab Observations

What actually happened during testing

During testing, our scanner interacted with AWS CloudWatch; all 7 executed tools succeeded.

Tool | Status
describe_log_groups | success
get_active_alarms | success
get_telemetry_evaluation_status | success
get_metric_metadata | success
analyze_metric | success
get_recommended_metric_alarms | success
get_metric_data | success

Reliability

10/10

Partial runtime test — 7 of 23 tools executed. Score based on transport stability and schema completeness.

Score Breakdown

10/10

Reliability

7 of 7 executed tools succeeded.

8/10

Security

Score based on schema analysis and dependency audit.

9/10

Setup

Local stdio server. Install via npx or binary, no auth required.

7.8/10

Docs

23 tools with descriptions and input schemas.

10/10

Compatibility

Standard MCP protocol. Transport: stdio.

9.4/10

Maintenance

Based on commit frequency, releases, and contributor activity.

Tools

23 available tools

describe_log_groups

Lists AWS CloudWatch log groups and saved queries associated with them, optionally filtering by a name prefix. This tool retrieves information about log groups in the account, or log groups in accounts linked to this account as a monitoring account. If a prefix is provided, only log groups with names starting with the specified prefix are returned. Additionally returns any user saved queries that are associated with any of the returned log groups.

Usage: Use this tool to discover log groups that you'd retrieve or query logs from, and queries that have been saved by the user.

Returns: A list of log group metadata dictionaries and the saved queries associated with them. Each log group metadata entry contains details such as:

- logGroupName: The name of the log group.
- creationTime: Timestamp when the log group was created.
- retentionInDays: Retention period, if set.
- storedBytes: The number of bytes stored.
- kmsKeyId: KMS key ID used for data encryption, if set.
- dataProtectionStatus: Whether this log group has a protection policy, or whether it had one in the past, if set.
- logGroupClass: Type of log group class.
- logGroupArn: The Amazon Resource Name (ARN) of the log group. This version of the ARN doesn't include a trailing :* after the log group name.

Any saved queries that are applicable to the returned log groups are also included.
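As an illustration of the prefix filter described above, a client-side equivalent over the returned metadata dictionaries might look like the sketch below. filter_log_groups is a hypothetical helper for illustration, not a server tool.

```python
def filter_log_groups(log_groups, prefix=None):
    """Keep only log groups whose logGroupName starts with prefix.

    Mirrors the optional name-prefix filter that describe_log_groups
    applies server-side; with no prefix, everything is returned.
    """
    if prefix is None:
        return list(log_groups)
    return [g for g in log_groups if g["logGroupName"].startswith(prefix)]
```

For example, filter_log_groups(groups, prefix="/aws/lambda/") would keep only Lambda function log groups.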

analyze_log_group

Analyzes a CloudWatch log group for anomalies, message patterns, and error patterns within a specified time window. This tool performs an analysis of the specified log group by:

1. Discovering and checking log anomaly detectors associated with the log group
2. Retrieving anomalies from those detectors that fall within the specified time range
3. Identifying the top 5 most common message patterns
4. Finding the top 5 patterns containing error-related terms

Usage: Use this tool to detect anomalies and understand common patterns in your log data, particularly focusing on error patterns that might indicate issues. This can help identify potential problems and understand the typical behavior of your application.

Returns: A LogsAnalysisResult object containing:

- log_anomaly_results: Information about anomaly detectors and their findings
  - anomaly_detectors: List of anomaly detectors for the log group
  - anomalies: List of anomalies that fall within the specified time range
- top_patterns: Results of the query for most common message patterns
- top_patterns_containing_errors: Results of the query for patterns containing error-related terms (error, exception, fail, timeout, fatal)
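The top-pattern counting can be pictured in a few lines of Python. This is an illustrative client-side approximation (top_patterns is a hypothetical helper; real CloudWatch pattern mining normalizes messages into templates rather than counting exact strings):

```python
from collections import Counter

# Error-related terms the tool's description says it searches for.
ERROR_TERMS = ("error", "exception", "fail", "timeout", "fatal")

def top_patterns(messages, n=5):
    """Return the n most common messages, plus the subset of those
    that contain an error-related term."""
    counts = Counter(messages)
    top = counts.most_common(n)
    errors = [(msg, c) for msg, c in top
              if any(t in msg.lower() for t in ERROR_TERMS)]
    return top, errors
```

The same two-pass idea (rank by frequency, then filter for error terms) is what the tool's two result fields expose.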

execute_log_insights_query

Executes a CloudWatch Logs Insights query and waits for the results to be available.

IMPORTANT: The operation must include exactly one of the following parameters: log_group_names or log_group_identifiers.

CRITICAL: The volume of returned logs can easily overwhelm the agent context window. Always include a limit in the query (| limit 50) or use the limit parameter.

Usage: Use to query, filter, collect statistics, or find patterns in one or more log groups. For example, the following query lists exceptions per hour.

```
filter @message like /Exception/
| stats count(*) as exceptionCount by bin(1h)
| sort exceptionCount desc
```

Returns: A dictionary containing the final query results, including:

- status: The current status of the query (e.g., Scheduled, Running, Complete, Failed)
- results: A list of the actual query results if the status is Complete
- statistics: Query performance statistics
- messages: Any informational messages about the query
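Because queries can outlive a single polling window, a caller typically loops on get_logs_insight_query_results until a terminal status appears. A minimal sketch of that loop (wait_for_query is a hypothetical wrapper, and the terminal-status set is an assumption based on the statuses listed above):

```python
import time

def wait_for_query(poll, timeout_s=30.0, interval_s=1.0, sleep=time.sleep):
    """Poll a Logs Insights query until it reaches a terminal status.

    `poll` is any callable returning a result dict with a "status" key,
    e.g. a wrapper around get_logs_insight_query_results.
    """
    terminal = {"Complete", "Failed", "Cancelled", "Timeout"}
    deadline = time.monotonic() + timeout_s
    while True:
        result = poll()
        if result["status"] in terminal:
            return result
        if time.monotonic() > deadline:
            # Caller may then cancel via cancel_logs_insight_query
            # to avoid incurring additional costs.
            return result
        sleep(interval_s)
```

The injectable `sleep` parameter keeps the loop testable without real delays.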

get_logs_insight_query_results

Retrieves the results of a previously started CloudWatch Logs Insights query.

Usage: If a log query started by the execute_log_insights_query tool times out while polling, this tool can be used to retry retrieving the query results.

Returns: A dictionary containing the final query results, including:

- status: The current status of the query (e.g., Scheduled, Running, Complete, Failed)
- results: A list of the actual query results if the status is Complete
- statistics: Query performance statistics
- messages: Any informational messages about the query

cancel_logs_insight_query

Cancels an ongoing CloudWatch Logs Insights query. If the query has already ended, returns an error stating that the given query is not running.

Usage: If a log query started by the execute_log_insights_query tool times out while polling, this tool can cancel it prematurely to avoid incurring additional costs.

Returns: A LogsQueryCancelResult with a "success" key, which is True if the query was successfully cancelled.

get_metric_data

Retrieves CloudWatch metric data for a specific metric identified by its namespace, metric name, and dimensions, within a specified time range. It can use either the standard GetMetricData API or CloudWatch Metrics Insights for more advanced querying; the function automatically chooses Metrics Insights when any Metrics Insights specific parameter is provided (group_by_dimension, schema_dimension_keys, limit, sort_order, or order_by_statistic). When using group_by_dimension, you must include that dimension in schema_dimension_keys.

Usage: Use this tool to get actual metric data from CloudWatch for analysis or visualization.

Returns: GetMetricDataResponse: An object containing the metric data results.

Example 1 (standard GetMetricData): result = await get_metric_data( ctx, namespace="AWS/EC2", metric_name="CPUUtilization", start_time="2023-01-01T00:00:00Z", dimensions=[Dimension(name="InstanceId", value="i-1234567890abcdef0")], statistic="Average" ). The period is auto-calculated from the time window and target_datapoints.

Example 2 (Metrics Insights with GROUP BY): result = await get_metric_data( ctx, namespace="AWS/EC2", metric_name="CPUUtilization", start_time="2023-01-01T00:00:00Z", end_time="2023-01-02T00:00:00Z", statistic="AVG", schema_dimension_keys=["InstanceType"], group_by_dimension="InstanceType" ). Generated query: SELECT AVG("CPUUtilization") FROM SCHEMA("AWS/EC2", "InstanceType") GROUP BY "InstanceType"

Example 3 (multiple schema dimension keys): as Example 2, but with schema_dimension_keys=["InstanceId", "InstanceType"] and group_by_dimension="InstanceId". Generated query: SELECT AVG("CPUUtilization") FROM SCHEMA("AWS/EC2", "InstanceId", "InstanceType") GROUP BY "InstanceId"

Example 4 (ORDER BY and LIMIT, finding the top 5 EC2 instances with the highest CPU utilization): as Example 2, but with schema_dimension_keys=["InstanceId"], group_by_dimension="InstanceId", sort_order="DESC", limit=5, order_by_statistic="MAX". Generated query: SELECT AVG("CPUUtilization") FROM SCHEMA("AWS/EC2", "InstanceId") GROUP BY "InstanceId" ORDER BY MAX() DESC LIMIT 5

Example 5 (ORDER BY without a sort direction, which defaults to ASC): as Example 4, but without sort_order or limit. Generated query: SELECT AVG("CPUUtilization") FROM SCHEMA("AWS/EC2", "InstanceId") GROUP BY "InstanceId" ORDER BY MAX()

Example 6 (no ORDER BY clause, results in no specific order): as Example 4, but with neither order_by_statistic nor sort_order. Generated query: SELECT AVG("CPUUtilization") FROM SCHEMA("AWS/EC2", "InstanceId") GROUP BY "InstanceId"

To iterate over the response: for metric_result in result.metricDataResults: print(f"Metric: {metric_result.label}") and, for each datapoint in metric_result.datapoints, print(f"{datapoint.timestamp}: {datapoint.value}").
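The generated Metrics Insights queries in the examples above follow a regular shape; the sketch below shows one way such a SELECT statement could be assembled from the parameters. This is an illustration of the documented behavior, not the server's actual implementation; build_insights_query is a hypothetical helper.

```python
def build_insights_query(namespace, metric_name, statistic,
                         schema_dimension_keys, group_by_dimension=None,
                         order_by_statistic=None, sort_order=None, limit=None):
    """Assemble a CloudWatch Metrics Insights query string from the
    parameters described in the tool documentation."""
    # SCHEMA() takes the namespace followed by the dimension keys, all quoted.
    schema = ", ".join(f'"{k}"' for k in [namespace] + list(schema_dimension_keys))
    query = f'SELECT {statistic}("{metric_name}") FROM SCHEMA({schema})'
    if group_by_dimension:
        query += f' GROUP BY "{group_by_dimension}"'
    if order_by_statistic:
        query += f' ORDER BY {order_by_statistic}()'
        if sort_order:
            query += f' {sort_order}'
    if limit:
        query += f' LIMIT {limit}'
    return query
```

Feeding in the parameters from Example 4 reproduces the documented query string, including the ORDER BY MAX() DESC LIMIT 5 tail.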

get_metric_metadata

Gets metadata for a CloudWatch metric, including description, unit, and recommended statistics that can be used for metric data retrieval. This tool retrieves comprehensive metadata about a specific CloudWatch metric identified by its namespace and metric name. Note: this function uses local metadata and does not make AWS API calls.

Usage: Use this tool to get detailed information about CloudWatch metrics, including their descriptions, units, and recommended statistics to use.

Args:
- ctx: The MCP context object for error handling and logging.
- namespace: The metric namespace (e.g., "AWS/EC2", "AWS/Lambda")
- metric_name: The name of the metric (e.g., "CPUUtilization", "Duration")

Returns: Optional[MetricMetadata]: An object containing the metric's description, recommended statistics, and unit if found; None if no metadata is available.

Example: result = await get_metric_metadata( ctx, namespace="AWS/EC2", metric_name="CPUUtilization" ) if result: print(f"Description: {result.description}") print(f"Unit: {result.unit}") print(f"Recommended Statistics: {result.recommendedStatistics}")

analyze_metric

Analyzes CloudWatch metric data to determine seasonality, trend, data density, and statistical properties. This tool provides RAW DATA ONLY about historical metric data and performs analysis including:

- Seasonality detection
- Trend analysis
- Data density and publishing period
- Advanced statistical measures (min/max/median, std dev, noise)

Usage: Use this tool to get objective metric analysis data.

Args:
- ctx: The MCP context object for error handling and logging.
- namespace: The metric namespace (e.g., "AWS/EC2", "AWS/Lambda")
- metric_name: The name of the metric (e.g., "CPUUtilization", "Duration")
- dimensions: List of dimensions with name and value pairs
- region: AWS region to query. Defaults to the AWS_REGION environment variable, or us-east-1 if not set.
- profile_name: AWS CLI profile name to use for AWS access. Falls back to the AWS_PROFILE environment variable if not specified, or uses the default AWS credential chain.
- statistic: The statistic to use for metric analysis. For guidance on choosing the correct statistic, refer to the get_recommended_metric_alarms tool.

Returns: Dict[str, Any]: Analysis results including:

- message: Status message indicating success or the reason for an empty result
- seasonality_seconds: Detected seasonality period in seconds
- trend: Trend direction (INCREASING, DECREASING, or NONE)
- statistics: Statistical measures (std_deviation, variance, etc.)
- data_quality: Data density and publishing period information

Example: analysis = await analyze_metric( ctx, namespace="AWS/EC2", metric_name="CPUUtilization", dimensions=[ Dimension(name="InstanceId", value="i-1234567890abcdef0") ] ) print(f"Status: {analysis['message']}") print(f"Seasonality: {analysis['seasonality_seconds']} seconds") print(f"Trend: {analysis['trend']}")
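To make the trend classification concrete, here is a minimal illustrative stand-in that classifies a series by the sign of its least-squares slope. The server's actual analysis is more sophisticated; detect_trend is a hypothetical helper, not part of the tool.

```python
def detect_trend(values, threshold=0.0):
    """Classify a datapoint series as INCREASING, DECREASING, or NONE
    by the sign of its least-squares regression slope."""
    n = len(values)
    if n < 2:
        return "NONE"
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    if slope > threshold:
        return "INCREASING"
    if slope < -threshold:
        return "DECREASING"
    return "NONE"
```

A nonzero threshold would let small drifts count as NONE, which is closer to how a noise-aware analysis behaves.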

get_recommended_metric_alarms

Gets recommended alarms for a CloudWatch metric. This tool retrieves alarm recommendations for a specific CloudWatch metric identified by its namespace, metric name, and dimensions. The recommendations are filtered to match the provided dimensions.

Usage: Use this tool to get recommended alarm configurations for CloudWatch metrics, including thresholds, evaluation periods, and other alarm settings.

Args:
- ctx: The MCP context object for error handling and logging.
- namespace: The metric namespace (e.g., "AWS/EC2", "AWS/Lambda")
- metric_name: The name of the metric (e.g., "CPUUtilization", "Duration")
- dimensions: List of dimensions with name and value pairs
- region: AWS region to query. Defaults to the AWS_REGION environment variable, or us-east-1 if not set.
- profile_name: AWS CLI profile name to use for AWS access. Falls back to the AWS_PROFILE environment variable if not specified, or uses the default AWS credential chain.
- statistic: The statistic to use for alarm recommendations. Must match the metric's data type:
  - Aggregate count metrics (RequestCount, Errors, Faults, Throttles, CacheHits, Connections, EventsProcessed): use 'Sum'
  - Event occurrence metrics (Invocations, CacheMisses): use 'SampleCount'
  - Utilization metrics (CPUUtilization, MemoryUtilization, DiskUtilization, NetworkUtilization): use 'Average'
  - Latency/time metrics (Duration, Latency, ResponseTime, ProcessingTime, Delay, ExecutionTime, WaitTime): use 'Average'
  - Size metrics (PayloadSize, MessageSize, RequestSize, BodySize): use 'Average'
  If uncertain about the correct statistic for a custom metric, ask the user to confirm the metric type before generating recommendations. Using the wrong statistic (e.g., 'Average' on Invocations) will produce ineffective alarm thresholds.

Returns: AlarmRecommendationResult: A result containing alarm recommendations and an optional message. The recommendations list is empty if none are found.

Example: recommendations = await get_recommended_metric_alarms( ctx, namespace="AWS/EC2", metric_name="StatusCheckFailed_Instance", dimensions=[ Dimension(name="InstanceId", value="i-1234567890abcdef0") ] ) for alarm in recommendations: print(f"Alarm: {alarm.alarmDescription}") print(f"Threshold: {alarm.threshold.staticValue}")
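The statistic-selection rules above amount to a straightforward lookup. A hedged sketch (recommended_statistic is a hypothetical helper, and the table covers only the metric names listed above; anything else is treated as a custom metric):

```python
# Metric-name to statistic mapping, following the rules listed above.
STATISTIC_RULES = [
    ({"RequestCount", "Errors", "Faults", "Throttles", "CacheHits",
      "Connections", "EventsProcessed"}, "Sum"),
    ({"Invocations", "CacheMisses"}, "SampleCount"),
    ({"CPUUtilization", "MemoryUtilization", "DiskUtilization",
      "NetworkUtilization"}, "Average"),
    ({"Duration", "Latency", "ResponseTime", "ProcessingTime", "Delay",
      "ExecutionTime", "WaitTime"}, "Average"),
    ({"PayloadSize", "MessageSize", "RequestSize", "BodySize"}, "Average"),
]

def recommended_statistic(metric_name):
    """Return the statistic the guidance above prescribes, or None
    for an unrecognized (custom) metric."""
    for names, stat in STATISTIC_RULES:
        if metric_name in names:
            return stat
    return None  # custom metric: confirm the metric type with the user
```

Returning None rather than guessing mirrors the documentation's advice to confirm custom metric types with the user.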

get_active_alarms

Gets all CloudWatch alarms currently in ALARM state, including both metric alarms and composite alarms. Results are optimized for LLM reasoning with summary-level information.

Usage: Use this tool to get an overview of all active alarms in your AWS account for troubleshooting, monitoring, and operational awareness.

Args:
- ctx: The MCP context object for error handling and logging.
- max_items: Maximum number of alarms to return (default: 50).
- region: AWS region to query. Defaults to the AWS_REGION environment variable, or us-east-1 if not set.
- profile_name: AWS CLI profile name to use for AWS access. Falls back to the AWS_PROFILE environment variable if not specified, or uses the default AWS credential chain.

Returns: ActiveAlarmsResponse: Response containing active alarms.

Example: result = await get_active_alarms(ctx, max_items=25) if isinstance(result, ActiveAlarmsResponse): print(f"Found {len(result.metric_alarms + result.composite_alarms)} active alarms") for alarm in result.metric_alarms: print(f"Metric Alarm: {alarm.alarm_name}") for alarm in result.composite_alarms: print(f"Composite Alarm: {alarm.alarm_name}")

get_alarm_history

Gets the history for a CloudWatch alarm, with suggested time ranges for investigation. This tool retrieves the history for a specified alarm, focusing primarily on state transitions to ALARM state, and suggests time ranges for investigation based on the alarm's configuration and history.

Usage: Use this tool to understand when an alarm fired and to get useful time ranges for investigating the underlying issue with other CloudWatch tools. It is particularly useful for identifying patterns like alarm flapping (going in and out of ALARM state frequently).

Args:
- ctx: The MCP context object for error handling and logging.
- region: AWS region to query. Defaults to the AWS_REGION environment variable, or us-east-1 if not set.
- alarm_name: Name of the alarm to retrieve history for.
- start_time: Optional start time for the history query. Defaults to 24 hours ago.
- end_time: Optional end time for the history query. Defaults to the current time.
- history_item_type: Optional type of history items to retrieve. Defaults to 'StateUpdate'.
- max_items: Maximum number of history items to return. Defaults to 50.
- include_component_alarms: For composite alarms, whether to include details about component alarms.
- profile_name: AWS CLI profile name to use for AWS access. Falls back to the AWS_PROFILE environment variable if not specified, or uses the default AWS credential chain.

Returns: Union[AlarmHistoryResponse, CompositeAlarmComponentResponse]: Either a response containing alarm history with time range suggestions, or component alarm details for composite alarms.

Example: result = await get_alarm_history( ctx, alarm_name="my-cpu-alarm", start_time="2025-06-18T00:00:00Z", end_time="2025-06-19T00:00:00Z" ) if isinstance(result, AlarmHistoryResponse): print(f"Found {len(result.history_items)} history items") for suggestion in result.time_range_suggestions: print(f"Suggested investigation time range: {suggestion.start_time} to {suggestion.end_time}")
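The flapping pattern mentioned above can be detected with a simple heuristic over the ALARM transition timestamps. A sketch under stated assumptions (is_flapping is a hypothetical helper; the window and transition count are arbitrary illustrative defaults, not the tool's actual thresholds):

```python
from datetime import datetime, timedelta

def is_flapping(transitions, window=timedelta(hours=1), min_transitions=3):
    """Heuristic flapping check: many ALARM transitions inside one window.

    `transitions` is a list of datetimes when the alarm entered ALARM
    state, mirroring the StateUpdate history items described above.
    """
    times = sorted(transitions)
    for i, start in enumerate(times):
        # Count transitions falling within `window` of this one.
        count = sum(1 for t in times[i:] if t - start <= window)
        if count >= min_transitions:
            return True
    return False
```

Three or more ALARM entries within an hour would flag the alarm for threshold tuning or a longer evaluation period.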

get_telemetry_evaluation_status

Returns the current onboarding status of the telemetry config feature. Use this tool to check whether telemetry evaluation is enabled for the account. The status indicates if the feature is running, stopped, starting, or has failed. Returns: TelemetryEvaluationStatusResponse: The current telemetry evaluation status.

start_telemetry_evaluation

Begins onboarding the caller's AWS account to the telemetry config feature. This enables CloudWatch to discover and audit telemetry configurations across resources in the account. This is a free feature with no additional charges. See https://docs.aws.amazon.com/cloudwatch/latest/monitoring/telemetry-config.html IMPORTANT: This is a mutating operation. Before calling this tool, confirm with the user that they want to enable telemetry evaluation for their account. For a single account, evaluation typically becomes available almost immediately. After starting, use get_telemetry_evaluation_status to check progress. Use stop_telemetry_evaluation to disable it later if needed. Returns: TelemetryEvaluationStatusResponse: The status after initiating evaluation.

stop_telemetry_evaluation

Stops the telemetry config feature for the caller's AWS account. This disables CloudWatch telemetry configuration discovery and auditing. After stopping, use get_telemetry_evaluation_status to confirm the feature has been stopped. IMPORTANT: This is a mutating operation. Before calling this tool, confirm with the user that they want to disable telemetry evaluation for their account. Returns: TelemetryEvaluationStatusResponse: The status after stopping evaluation.

get_telemetry_evaluation_status_for_organization

Returns the organization-level onboarding status of the telemetry config feature. Can only be called by the organization's management account or a delegated administrator account. Returns: TelemetryEvaluationStatusResponse: The current organization telemetry evaluation status.

start_telemetry_evaluation_for_organization

Begins onboarding the organization and all member accounts to the telemetry config feature. Can only be called by the organization's management account or a delegated administrator account. This is a free feature with no additional charges. See https://docs.aws.amazon.com/cloudwatch/latest/monitoring/telemetry-config.html IMPORTANT: This is a mutating operation. Before calling this tool, confirm with the user that they want to enable telemetry evaluation for their organization. Onboarding time depends on organization size: a single account is nearly instant, while large organizations with thousands of accounts may take 20-30 minutes. After starting, use get_telemetry_evaluation_status_for_organization to check progress. Use stop_telemetry_evaluation_for_organization to disable it later if needed. Returns: TelemetryEvaluationStatusResponse: The status after initiating organization evaluation.

stop_telemetry_evaluation_for_organization

Stops the telemetry config feature for the organization and all member accounts. Can only be called by the organization's management account or a delegated administrator account. After stopping, use get_telemetry_evaluation_status_for_organization to confirm the feature has been stopped. IMPORTANT: This is a mutating operation. Before calling this tool, confirm with the user that they want to disable telemetry evaluation for their organization. Returns: TelemetryEvaluationStatusResponse: The status after stopping organization evaluation.

list_resource_telemetry

Returns telemetry configurations for AWS resources in the account. Lists the telemetry state (Logs, Metrics, Traces) for resources like EC2 instances, Lambda functions, EKS clusters, etc. Requires telemetry evaluation to be running (use start_telemetry_evaluation first if needed). For a single account, evaluation is typically available almost immediately after starting. Use get_telemetry_evaluation_status to verify the status is RUNNING before querying. Usage: Use this to audit which resources have telemetry enabled or disabled, identify observability gaps, and verify monitoring coverage. Returns: ListResourceTelemetryResponse: List of resource telemetry configurations.

list_telemetry_rules

Lists all telemetry rules in the account. Telemetry rules control what telemetry (Logs, Metrics, Traces) is collected for specific resource types. Use this to audit your telemetry collection configuration. Returns: ListTelemetryRulesResponse: List of telemetry rule summaries.

get_telemetry_rule

Retrieves the details of a specific telemetry rule in your account. Returns the full configuration of a telemetry rule including resource type, telemetry type, source types, destination configuration, and any resource-type-specific parameters (VPC Flow Logs, CloudTrail, ELB, WAF, etc.). Use list_telemetry_rules first to discover available rule names, then use this tool to get the full configuration details of a specific rule. Returns: GetTelemetryRuleResponse: The full telemetry rule details.

list_resource_telemetry_for_organization

Returns telemetry configurations for AWS resources across the organization. Lists the telemetry state (Logs, Metrics, Traces) for resources across all accounts in the organization. Can only be called by the organization's management account or a delegated administrator account. Requires telemetry evaluation to be running. For large organizations with thousands of accounts, onboarding may take 20-30 minutes. Use get_telemetry_evaluation_status_for_organization to verify the status is RUNNING. Returns: ListResourceTelemetryResponse: List of resource telemetry configurations.

list_telemetry_rules_for_organization

Lists all organization telemetry rules. Can only be called by the organization's management account or a delegated administrator account. Supports filtering by rule name prefix, source account IDs, and organizational unit IDs. Returns: ListTelemetryRulesResponse: List of organization telemetry rule summaries.

get_telemetry_rule_for_organization

Retrieves the details of a specific organization telemetry rule. Can only be called by the organization's management account or a delegated administrator account. Returns the full configuration including resource type, telemetry type, source types, destination configuration, and resource-type-specific parameters. Returns: GetTelemetryRuleResponse: The full organization telemetry rule details.

FAQ

Frequently asked questions about AWS CloudWatch

Which CloudWatch scopes are required for read operations?

Our testing used cloudwatch:read scopes exclusively. All 7 executed tools completed successfully with this single read scope, including log group discovery, alarm monitoring, telemetry evaluation, metric metadata retrieval, and alarm recommendations. Write operations require additional scopes but were not executed due to policy constraints.

How does response time vary across different CloudWatch operations?

Latency ranged from 2ms to 1026ms across executed operations. Metadata lookups (get_metric_metadata) were near-instant at 2ms, since that tool reads local metadata rather than calling AWS. Standard monitoring operations like alarm checks and metric data retrieval averaged 400-500ms. Infrastructure discovery (describe_log_groups) and ML-powered recommendations took 800-1000ms but remain practical for periodic setup tasks.

What happens when CloudWatch operations encounter authentication issues?

Authentication was handled through api_key credentials in our sandbox environment. All 7 executed operations completed without authentication failures. The server maintains secure credential handling, though our testing did not deliberately trigger auth failure scenarios to observe specific error behaviors or retry mechanisms.

Can the server handle both real-time monitoring and historical analysis?

Both capabilities functioned in our tests. Real-time operations like get_active_alarms and get_telemetry_evaluation_status completed in 300-450ms. Historical analysis through analyze_metric and get_metric_data also succeeded with similar response times, demonstrating the server handles both monitoring patterns effectively.

What CloudWatch write operations are available but restricted?

During discovery, 16 tools were identified as write-dangerous and skipped from execution. These operations were not executed due to policy constraints that prevent modification of CloudWatch resources during testing. The specific write capabilities and their behavior patterns remain untested in our evaluation.

Does the alarm recommendation feature provide actionable output?

get_recommended_metric_alarms completed successfully in 803ms, delivering ML-generated alarm suggestions based on metric analysis. The tool processed requests without errors, though our testing focused on execution success rather than evaluating the strategic value or accuracy of the specific recommendations produced.

What gotchas exist with metric metadata retrieval?

While get_metric_metadata delivered extremely fast 2ms responses, this represents cached or pre-computed data rather than live metric queries. Developers should expect this speed only for metadata lookups, not for actual metric data retrieval which requires additional processing time as seen in other operations.


Testing History

1 run · live_runtime · Apr 7, 2026
protocol 10/10 · reliability 10/10

Community

Community Reviews

No community reviews yet. Be the first to share your experience!
