Version: Operator 3.4.0

Monitoring Aerospike Clusters and AKO

Overview

You can use the Aerospike Monitoring Stack to monitor and set alerts for Aerospike clusters deployed by Aerospike Kubernetes Operator (AKO). This guide explains how to monitor both Aerospike clusters and AKO itself using Prometheus and Grafana.

You can set up monitoring with the Aerospike Prometheus Exporter sidecar and, optionally, with the Prometheus Operator.

Monitoring Aerospike clusters

Add Aerospike Prometheus Exporter sidecar

To monitor an Aerospike cluster, add the Aerospike Prometheus Exporter as a sidecar to your Aerospike cluster pods. Modify the podSpec section of your Aerospike cluster's Custom Resource (CR) file as follows:

spec:
  ...
  podSpec:
    multiPodPerHost: true
    sidecars:
      - name: aerospike-prometheus-exporter
        image: aerospike/aerospike-prometheus-exporter:latest
        env:
          # Replace with your credentials
          - name: "AS_AUTH_USER"
            value: "exporter"
          - name: "AS_AUTH_PASSWORD"
            value: "exporter123"
          - name: "AS_AUTH_MODE"
            value: "internal"
        ports:
          - containerPort: 9145
            name: exporter
note

Replace the AS_AUTH_USER and AS_AUTH_PASSWORD values with your actual credentials, and enable Aerospike security in production environments; a minimal sketch of enabling access control in the cluster CR follows.
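
For production clusters, one way to do this in the Aerospike cluster CR is to enable access control and define the monitoring user there. The following is a minimal sketch only: the Secret name and the read role assignment are assumptions to adapt to your environment, and the empty security stanza enables access control on Aerospike server 6.x and later.

    spec:
      aerospikeAccessControl:
        users:
          - name: exporter                       # monitoring user read by the exporter sidecar
            secretName: exporter-password-secret # hypothetical Secret holding this user's password
            roles:
              - read                             # assumed minimal role; adjust to your policy
      aerospikeConfig:
        security: {}                             # presence of this stanza turns on access control (server 6.x+)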

Configure Prometheus to scrape Aerospike cluster metrics

To collect metrics from the Aerospike Prometheus Exporter, configure Prometheus to scrape the /metrics endpoint exposed by the exporter.

Using Prometheus Operator

The Prometheus Operator uses a PodMonitor resource to scrape the exporter endpoints.

  1. Create a file named pod-monitor.yaml with the following content:

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: aerospike-cluster-pod-monitor
      namespace: aerospike
      labels:
        release: prometheus-operator
    spec:
      selector:
        matchLabels:
          app: aerospike-cluster
      namespaceSelector:
        matchNames:
          - default
          - aerospike
      podMetricsEndpoints:
        - port: exporter
          path: /metrics
          interval: 30s
  2. Apply the PodMonitor resource:

    kubectl apply -f pod-monitor.yaml
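
Optionally, confirm that the PodMonitor exists in the expected namespace (this assumes the Prometheus Operator CRDs are installed in the cluster):

    kubectl get podmonitor aerospike-cluster-pod-monitor -n aerospike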

Using AKO and the Prometheus Exporter sidecar

If you have cloned the AKO repository, you can apply the monitoring configurations with kubectl:

  1. Start your Aerospike database with the Prometheus exporter sidecar as described in Add Aerospike Prometheus Exporter Sidecar.

  2. Apply the monitoring configurations:

    kubectl apply -k config/monitoring
  3. Download Grafana dashboards from the list hosted at Grafana Labs or specify the dashboardID:revision_number in config/monitoring/grafana/config/download_files.sh and run the script to download the dashboards.

  4. To configure alerts, create Prometheus rule YAML files in the config/monitoring/prometheus/config/alert-rules directory. Aerospike provides predefined Prometheus alert rules in the Aerospike Monitoring GitHub repository.
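
For illustration, a minimal rule file for that directory might look like the sketch below. The group name, alert name, expression, and threshold are placeholders; prefer the predefined Aerospike rules where they exist.

    groups:
      - name: aerospike-node-alerts                  # hypothetical group name
        rules:
          - alert: AerospikeExporterTargetDown       # hypothetical alert
            expr: up{job="aerospike-cluster"} == 0   # assumes the scrape job is named "aerospike-cluster"
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Aerospike exporter target {{ $labels.instance }} is down"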

Grafana dashboards

To visualize the metrics, import pre-built Grafana dashboards from the Aerospike Monitoring GitHub repository or from Grafana Labs.

Monitoring AKO

AKO exposes metrics on the /metrics endpoint, protected by kube-rbac-proxy. Prometheus must have the required permissions to scrape these metrics.

Expose AKO metrics

Verify RBAC permissions

Verify that the Prometheus deployment has the required RBAC permissions to access the AKO /metrics endpoint. If it does not, create a ClusterRole and bind it to the Prometheus ServiceAccount with a ClusterRoleBinding, as in the following steps; a quick permission check is sketched after them.

  1. Create a file named metrics-reader.yaml:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: metrics-reader
    rules:
      - nonResourceURLs: ["/metrics"]
        verbs: ["get"]
  2. Apply the ClusterRole:

    kubectl apply -f metrics-reader.yaml
  3. Create a file named clusterrolebinding.yaml to bind this role to the Prometheus ServiceAccount:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: prometheus-metrics-reader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: metrics-reader
    subjects:
      - kind: ServiceAccount
        name: PROMETHEUS_SERVICEACCOUNT
        namespace: PROMETHEUS_NAMESPACE
  4. Apply the ClusterRoleBinding:

    kubectl apply -f clusterrolebinding.yaml
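
To check whether the Prometheus ServiceAccount can now read the endpoint, you can use kubectl's non-resource URL check (the ServiceAccount name and namespace below are placeholders):

    kubectl auth can-i get /metrics --as=system:serviceaccount:PROMETHEUS_NAMESPACE:PROMETHEUS_SERVICEACCOUNT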

Scrape AKO metrics

Create a ServiceMonitor resource to configure Prometheus to scrape the AKO metrics endpoint.

  1. Create a file named service-monitor.yaml:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        control-plane: controller-manager
      name: aerospike-operator-controller-manager-metrics-monitor
    spec:
      endpoints:
        - path: /metrics
          interval: 15s
          port: https
          scheme: https
          bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          tlsConfig:
            insecureSkipVerify: true
      selector:
        matchLabels:
          control-plane: controller-manager

  2. Verify that the Prometheus deployment is configured to select the AKO ServiceMonitor: the serviceMonitorSelector in the Prometheus custom resource must match the labels set on the ServiceMonitor (see the sketch after this list).

    kubectl -n PROMETHEUS_NAMESPACE get prometheus PROMETHEUS_CR_NAME -o yaml
  3. Apply the ServiceMonitor resource:

    kubectl apply -f service-monitor.yaml -n AKO_NAMESPACE
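
As referenced in step 2, the Prometheus custom resource selects ServiceMonitors by label. A minimal sketch of the relevant fields, assuming the selector is keyed on the control-plane label used by the ServiceMonitor above (adjust it to whatever labels your Prometheus installation actually selects on):

    apiVersion: monitoring.coreos.com/v1
    kind: Prometheus
    metadata:
      name: PROMETHEUS_CR_NAME
      namespace: PROMETHEUS_NAMESPACE
    spec:
      serviceMonitorSelector:
        matchLabels:
          control-plane: controller-manager   # must match a label on the AKO ServiceMonitor
      serviceMonitorNamespaceSelector: {}     # empty selector allows ServiceMonitors from any namespace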

Grafana dashboards

Import Grafana dashboards for the Aerospike Kubernetes Operator from the Aerospike Monitoring GitHub repository or from Grafana Labs.

Monitoring without the Prometheus Operator

For both Aerospike Database and AKO monitoring without using the Prometheus Operator:

  1. Ensure your Aerospike server is running with the Prometheus exporter sidecar as described in Add Aerospike Prometheus Exporter Sidecar.

  2. Apply the monitoring configurations available in the cloned AKO repository:

    kubectl apply -k config/monitoring
  3. For Grafana dashboards:

    • Download them in your Grafana UI from Aerospike's Grafana dashboards, or
    • Provide the dashboardID:revision_number in config/monitoring/grafana/config/download_files.sh and run the script to download the dashboards.
  4. For alerts, create Prometheus rule YAML files in the config/monitoring/prometheus/config/alert-rules directory.

Scraping AKO data

Update the scrape_configs section of your Prometheus configuration file (prometheus.yml) to include the AKO metrics endpoint.

Add the following scrape job definition:

scrape_configs:
  - job_name: "aerospike-kubernetes-operator"
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: https
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      insecure_skip_verify: true
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_label_control_plane, __meta_kubernetes_service_labelpresent_control_plane]
        separator: ;
        regex: (controller-manager);true
        replacement: $1
        action: keep
      - source_labels: [__meta_kubernetes_endpoint_port_name]
        separator: ;
        regex: https
        replacement: $1
        action: keep
    kubernetes_sd_configs:
      - role: endpoints

Scraping Aerospike data

If you are using Prometheus without the Prometheus Operator, update the scrape_configs section of your Prometheus configuration file at prometheus.yml to include the Aerospike Prometheus Exporter endpoints.

Add the following scrape job definition:

scrape_configs:
  - job_name: "aerospike-cluster"
    honor_timestamps: true
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          - PROMETHEUS_EXPORTER_ENDPOINT:9145
note

You can replace PROMETHEUS_EXPORTER_ENDPOINT with either the service name or the IP address of your Aerospike Prometheus Exporter endpoint.
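
For example, if the exporter sidecar is reachable through a Service named aerocluster in the aerospike namespace (both names are hypothetical), the target could be written as a cluster DNS name:

    static_configs:
      - targets:
          - aerocluster.aerospike.svc.cluster.local:9145   # hypothetical Service DNS name for the exporter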

Example

This example demonstrates how to monitor an Aerospike cluster and the Aerospike Kubernetes Operator using the Prometheus Operator.

  1. Deploy the Aerospike Kubernetes Operator (AKO). See Deploy for more details.

  2. Create a Kubernetes Secret for the Aerospike feature-key file.

    kubectl create secret generic aerospike-secret --from-file=PATH_TO_FEATURE_KEY_FILE
  3. Deploy an Aerospike cluster with the Prometheus Exporter sidecar.

    In your Aerospike cluster CR file, add the following to the podSpec section:

    podSpec:
      multiPodPerHost: true
      sidecars:
        - name: aerospike-prometheus-exporter
          image: aerospike/aerospike-prometheus-exporter:latest
          env:
            - name: "AS_AUTH_USER"
              value: "exporter"
            - name: "AS_AUTH_PASSWORD"
              value: "exporter123"
            - name: "AS_AUTH_MODE"
              value: "internal"
          ports:
            - containerPort: 9145
              name: exporter

    Apply the changes:

    kubectl apply -f aerospike-cluster.yaml
  4. Install the kube-prometheus-stack.

    Use Helm to install the kube-prometheus-stack, which includes the Prometheus Operator and Grafana.

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus-operator prometheus-community/kube-prometheus-stack
  5. Create the PodMonitor and ServiceMonitor resources.

    Apply the pod-monitor.yaml and service-monitor.yaml files created previously:

    kubectl apply -f pod-monitor.yaml
    kubectl apply -f service-monitor.yaml -n AKO_NAMESPACE
  6. Access the Grafana Dashboard.

    Forward the Grafana service port to your local machine:

    kubectl port-forward svc/prometheus-operator-grafana 3000:80

    Go to http://localhost:3000 in your browser. Log in with the default credentials (username: admin, password: prom-operator).

  7. Import Grafana Dashboards.

    In the Grafana UI, import dashboards from Aerospike's Grafana dashboards.

  8. Configure Alert Rules.

    Use predefined Prometheus alert rules from the Aerospike Monitoring GitHub repository or create your own.

note

Ensure that all namespaces, labels, and configurations match your deployment environment.