Enable metrics
This page describes the metrics story for applications built on the Developer SDK preview clients. For the Java developer preview client (`com.aerospike:aerospike-client-sdk` in `aerospike-client-java-fluent`), there is no `MetricsConfig` or built-in Micrometer bridge on `ClusterDefinition`, as older drafts suggested. Instead, instrument your application with Micrometer or OpenTelemetry, and use the client logs under `com.aerospike.client.sdk` (see Configure logging).
Available metrics
Standard metrics
| Metric | Type | Description |
|---|---|---|
| `aerospike.operations.total` | Counter | Total operations executed |
| `aerospike.operations.errors` | Counter | Failed operations |
| `aerospike.latency.read` | Histogram | Read operation latency |
| `aerospike.latency.write` | Histogram | Write operation latency |
| `aerospike.connections.active` | Gauge | Current open connections |
| `aerospike.connections.pool` | Gauge | Connections in the pool |
Extended metrics
| Metric | Type | Description |
|---|---|---|
| `aerospike.batch.size` | Histogram | Batch operation sizes |
| `aerospike.retries` | Counter | Operation retry count |
| `aerospike.timeouts` | Counter | Timeout events |
| `aerospike.cluster.nodes` | Gauge | Nodes in the cluster |
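The preview clients do not publish these metrics themselves, so the application owns the registry. A minimal stdlib-only sketch of an application-side registry using the metric names from the tables above (the `AppMetrics` class and its methods are illustrative, not SDK API):

```python
from collections import defaultdict

class AppMetrics:
    """Minimal in-process registry mirroring the metric names above."""

    def __init__(self):
        self.counters = defaultdict(int)      # e.g. aerospike.operations.total
        self.gauges = {}                      # e.g. aerospike.connections.active
        self.histograms = defaultdict(list)   # e.g. aerospike.latency.read (raw samples)

    def incr(self, name, value=1):
        self.counters[name] += value

    def set_gauge(self, name, value):
        self.gauges[name] = value

    def observe(self, name, value):
        self.histograms[name].append(value)

metrics = AppMetrics()

# Record one successful read that took 1.8 ms
metrics.incr("aerospike.operations.total")
metrics.observe("aerospike.latency.read", 1.8)
metrics.set_gauge("aerospike.connections.active", 4)
```

In practice you would replace this with a Micrometer, Prometheus, or OpenTelemetry registry, as shown in the sections below; the point is that the increment/observe calls live in your code paths around SDK operations.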
Enable metrics programmatically
```java
import com.aerospike.client.sdk.Cluster;
import com.aerospike.client.sdk.ClusterDefinition;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

// Application-owned registry (not provided by the Aerospike JAR)
MeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

try (Cluster cluster = new ClusterDefinition("localhost", 3000).connect()) {
    Timer.Sample sample = Timer.start(registry);
    try {
        // session.query(...).execute(); etc.
    } finally {
        sample.stop(registry.timer("aerospike.sdk.requests"));
    }
}
```

```python
import time

from aerospike_sdk import Behavior, Client, DataSet

async def main():
    async with Client("localhost:3000") as client:
        session = client.create_session(Behavior.DEFAULT)
        users = DataSet.of("test", "users")

        start = time.perf_counter()
        stream = await session.query(users.id("user-1")).execute()
        await stream.first()
        stream.close()
        duration_ms = (time.perf_counter() - start) * 1000
        print(f"aerospike.read.latency_ms={duration_ms:.2f}")
```

Prometheus integration
```python
from prometheus_client import CollectorRegistry, generate_latest
from prometheus_client import Counter, Histogram

from aerospike_sdk import Behavior, Client, DataSet

registry = CollectorRegistry()
reads_total = Counter("aerospike_reads_total", "Read operations", registry=registry)
read_latency = Histogram("aerospike_read_latency_ms", "Read latency ms", registry=registry)

async def main():
    async with Client("localhost:3000") as client:
        session = client.create_session(Behavior.DEFAULT)
        users = DataSet.of("test", "users")
        with read_latency.time():
            stream = await session.query(users.id("user-1")).execute()
            await stream.first()
            stream.close()
        reads_total.inc()

# Expose a metrics endpoint (`app` is your web framework instance, e.g. Flask)
@app.route('/metrics')
def metrics():
    return generate_latest(registry)
```

Access metrics snapshots
Get current metric values programmatically:
```java
// The preview Java client does not expose cluster.getMetrics() / MetricsSnapshot.
// Export Micrometer/Prometheus from your app or scrape logs for connection events.
```

```python
# The Python SDK does not expose cluster.get_metrics().
# Read metrics from your app-level registry (Prometheus/OpenTelemetry/StatsD).
print(f"Total reads: {reads_total._value.get()}")
```

Metrics log files
Enable periodic metrics logging to files:
```java
// No built-in periodic metrics log in the preview client; use your logging framework
// and rotate files under /var/log/... as needed (see Configure logging).
```

```python
import logging

logger = logging.getLogger("aerospike_app_metrics")
logger.setLevel(logging.INFO)
logger.info("aerospike.read.latency_ms=12.7")
```

Client identification labels
Add labels to identify metrics by application or environment:
```java
// Apply labels in your metrics backend (Prometheus relabel, Datadog tags, etc.),
// not on ClusterDefinition.
```

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricTags:
    app: str = "my-service"
    env: str = "production"
    region: str = "us-west-2"

tags = MetricTags()
print(tags)
```

Integration with monitoring systems
Prometheus + Grafana
- Export metrics via HTTP endpoint
- Configure Prometheus to scrape your application
- Import the Aerospike client dashboard in Grafana
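A minimal scrape job for this setup might look like the following (the job name, host, and port are placeholders for wherever your application serves its `/metrics` endpoint):

```yaml
scrape_configs:
  - job_name: "my-service"          # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets: ["app-host:8000"]  # host:port where your app exposes /metrics
```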
Datadog
```java
import io.micrometer.core.instrument.Clock;
import io.micrometer.datadog.DatadogConfig;
import io.micrometer.datadog.DatadogMeterRegistry;

DatadogMeterRegistry registry = new DatadogMeterRegistry(
    DatadogConfig.DEFAULT, Clock.SYSTEM);
// Register timers/counters around your own Aerospike calls;
// the SDK does not wire this for you.
```

```python
from datadog import statsd

from aerospike_sdk import Behavior, Client, DataSet

async def main():
    async with Client("localhost:3000") as client:
        session = client.create_session(Behavior.DEFAULT)
        users = DataSet.of("test", "users")
        stream = await session.query(users.id("user-1")).execute()
        row = await stream.first()
        stream.close()
        statsd.increment("aerospike.read.total", tags=["app:my-service"])
        statsd.gauge("aerospike.read.found", 1 if row is not None else 0)
```

Performance impact
Metrics collection has minimal overhead:
- Counter/gauge operations: ~100ns
- Histogram recording: ~500ns
- Total impact: <1% CPU overhead
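These figures are typical for in-process registries rather than guarantees of this SDK; a quick stdlib sketch for checking the per-increment cost on your own hardware (the `bench` helper and counter name are illustrative, not SDK API):

```python
import time

def bench(fn, n=100_000):
    """Return the average cost of calling fn, in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - start) / n

counter = {"aerospike.operations.total": 0}

def incr():
    counter["aerospike.operations.total"] += 1

ns_per_incr = bench(incr)
print(f"~{ns_per_incr:.0f} ns per increment")
```

A plain dict increment is a loose lower bound; real registries (Micrometer, `prometheus_client`) add atomicity and label handling on top, which is where the numbers above come from.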
For latency-critical applications, you can disable histograms:
```java
// Configure Micrometer histogram buckets on your own meters;
// unrelated to the Aerospike JAR.
```

```python
# Configure histogram behavior in your metrics backend, not on ClusterDefinition.
# Example: use a Counter-only approach in your instrumentation path.
```

Next steps
Configure Logging
Set up client-side logging for debugging.
Tune Performance
Optimize client configuration with Behaviors.