
Aerospike Backup Service quickstart

In this ten-minute quickstart, you will install Aerospike Backup Service (ABS), run backup and restore operations through the REST API, and monitor backup activity with Prometheus and Grafana. You will use a local Docker Compose environment so you can test ABS safely before moving to a larger deployment.

The quickstart starts these core services first:

  • aerospike-cluster: a single-node Aerospike Database test deployment
  • aerospike-backup-service: the ABS API service that receives backup and restore requests
  • minio: S3-compatible object storage where backup files are written
  • minio-client: a short-lived init container that creates the backup bucket in MinIO

Later in the quickstart, you will also add:

  • prometheus: metrics collector that scrapes ABS at /metrics
  • grafana: dashboard UI that visualizes Prometheus metrics

You will trigger a backup, run a restore request, and confirm monitoring signals.

Architecture

ABS runs as a separate service alongside Aerospike Database. Aerospike Database stores your records and serves application reads and writes.

ABS is the backup control plane. It exposes REST endpoints, schedules backup jobs, and writes backup files to a local or cloud storage target.

Prometheus scrapes ABS metrics for job status and timing, and Grafana turns those metrics into dashboards.

Prerequisites

  • Git installed to clone the repository.
  • Docker Desktop installed and running. This quickstart uses Docker Compose so you can run multiple containers on a single machine.
  • curl installed so you can send REST requests to ABS from a terminal without a separate client application.

Start a small test cluster

  1. Clone the Aerospike Backup Service repository and navigate to the Docker Compose directory.

    Terminal window
    git clone https://github.com/aerospike/aerospike-backup-service.git
    cd aerospike-backup-service/build/docker-compose
  2. (Optional) Review the Docker Compose file that starts Aerospike, ABS, and MinIO.

    Terminal window
    cat docker-compose.yaml

    The file defines four services:

    • minio and minio-client: MinIO object storage and an init container that creates the backup bucket.
    • aerospike-cluster: a single-node Aerospike database.
    • aerospike-backup-service: the ABS service, pulled from Docker Hub. It connects to the Aerospike cluster and writes backups to MinIO.
  3. (Optional) Review the ABS configuration files.

    Terminal window
    cat aerospike-backup-service.yml

    aerospike-backup-service.yml has four sections:

    • aerospike-clusters defines the Aerospike cluster that ABS connects to.
    • storage defines where ABS stores the backup files. This quickstart uses MinIO, a local stand-in for cloud object storage.
    • backup-policies defines how backup jobs run and are retained.
    • backup-routines is a list of backup routines for ABS to run.

    The credentials file provides AWS-style credentials that the ABS service uses to authenticate with MinIO S3 storage.

    See Configuration file examples for more details about the configuration file sections.

  4. Start the Docker Compose stack and wait for the services to start.

    The minio-client init container may show Exited (0) after it creates the backup bucket. This is expected.

    Terminal window
    docker compose up -d
    Example response
    ✔ Network docker-compose_default Created 0.0s
    ✔ Container aerospike-cluster Healthy 31.5s
    ✔ Container minio Healthy 30.8s
    ✔ Container minio-client Exited 31.4s
    ✔ Container aerospike-backup-service Started 34.5s
  5. Check that ABS is reachable by using curl to send GET requests to the /health and /version endpoints at port 8080.

    Verify that you get Ok from /health and a JSON response from /version. Run the commands one at a time to see the separate responses.

    Terminal window
    curl -s http://localhost:8080/health
    curl -s http://localhost:8080/version
    Example response
    Ok
    {
      "version": "v3.5.0",
      "commit": "COMMIT_SHA",
      "build-time": "BUILD_TIMESTAMP"
    }
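If you script the startup check instead of running curl by hand, a short polling loop avoids racing the containers. The following is a minimal Python sketch using only the standard library; the `wait_until_healthy` helper and its injected `get` parameter are illustrative names, not part of the ABS API:

```python
import json
import time
import urllib.request

ABS_URL = "http://localhost:8080"  # ABS port from the Docker Compose stack

def fetch(path):
    """GET a path from ABS and return the response body as text."""
    with urllib.request.urlopen(ABS_URL + path, timeout=5) as resp:
        return resp.read().decode()

def wait_until_healthy(get=fetch, attempts=30, delay=1.0):
    """Poll /health until it returns Ok, then return the parsed /version JSON."""
    for _ in range(attempts):
        try:
            if get("/health").strip() == "Ok":
                return json.loads(get("/version"))
        except OSError:
            pass  # service not accepting connections yet
        time.sleep(delay)
    raise TimeoutError("ABS did not become healthy")
```

Because the HTTP call is injected, the loop logic can be exercised without a running service, and `wait_until_healthy()` with the defaults performs the same checks as the two curl commands above.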

Send REST requests for backup and restore

  1. Load sample records into the Aerospike test.demo set.

    This quickstart uses local storage (MinIO on the same Docker host) and a fast single-node Aerospike deployment, so a small dataset often backs up too quickly to observe progress in Grafana. In this step, you use the Aerospike Benchmark tool (asbench) to create and load a dataset of 5 million records (about 300 MB in total).

    asbench prints live progress lines (write(tps=...)) as it runs.

    Terminal window
    docker run --rm --network docker-compose_default \
      aerospike/aerospike-tools \
      asbench -h aerospike-cluster -p 3000 \
      -n test \
      -s demo \
      -k 5000000 \
      -o I8 \
      -w I \
      -z 16 \
      -T 5000
    Example output
    2026-04-14 18:20:32 INFO write(tps=201704 ...) total(tps=201704 ...)
    2026-04-14 18:20:33 INFO write(tps=220972 ...) total(tps=220972 ...)
    ...

    This load typically completes in under 30 seconds on a laptop.

  2. Verify the sample records are present using asinfo to read data from the Aerospike cluster.

    The following command uses the Aerospike Info tool (asinfo) to read the number of records in the test.demo set.

    Terminal window
    docker run --rm --network docker-compose_default \
      aerospike/aerospike-tools \
      asinfo --no-config-file -h aerospike-cluster -v "sets/test"
    Example response
    ns=test:set=demo:objects=5000000:tombstones=0:data_used_bytes=...

    Look for the objects value for set=demo and confirm it matches the number of records you loaded in the previous step.
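The asinfo response is a list of semicolon-separated set entries, each a colon-delimited sequence of key=value fields. If you want to check the object count from a script, a small parser is enough; `parse_sets` is an illustrative helper name, not an Aerospike tool:

```python
def parse_sets(info: str) -> list[dict[str, str]]:
    """Parse an asinfo 'sets/<ns>' response into one dict per set."""
    sets = []
    for entry in info.strip().strip(";").split(";"):
        fields = dict(f.split("=", 1) for f in entry.split(":") if "=" in f)
        if fields:
            sets.append(fields)
    return sets

# Parse a response like the example above and check the record count.
response = "ns=test:set=demo:objects=5000000:tombstones=0"
demo = parse_sets(response)[0]
assert int(demo["objects"]) == 5_000_000
```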

  3. Read the configured cluster and routine names by sending two separate GET requests to the ABS API.

    The results confirm ABS loaded your config file and tell you which routine name to use in API calls.

    Terminal window
    curl -s http://localhost:8080/v1/config/clusters
    curl -s http://localhost:8080/v1/config/routines
    Example response
    /v1/config/clusters
    {
      "absCluster1": {
        "seed-nodes": [
          {
            "host-name": "aerospike-cluster",
            "port": 3000
          }
        ],
        "credentials": {
          "user": "admin",
          "password": "admin"
        }
      }
    }
    /v1/config/routines
    {
      "minioKeepFilesRoutine": {
        "backup-policy": "keepFilesPolicy",
        "source-cluster": "absCluster1",
        "storage": "minioStorage",
        "interval-cron": "@daily",
        "incr-interval-cron": "@hourly",
        "namespaces": ["test"]
      }
    }

    In this quickstart, the routine name is minioKeepFilesRoutine and the cluster name is absCluster1. The routine is configured to perform full backups daily and incremental backups hourly.

  4. Trigger an on-demand full backup with a POST request to the ABS API. Be ready to run the next command quickly, while the backup is still running.

    This bypasses the schedule defined in the policy and starts a backup immediately. A successful request returns 202 Accepted with an empty body (Content-Length: 0), which means ABS accepted and queued the job.

    Terminal window
    curl -i -X POST http://localhost:8080/v1/backups/full/minioKeepFilesRoutine
    Example response
    HTTP/1.1 202 Accepted
    Date: Tue, 14 Apr 2026 21:05:27 GMT
    Content-Length: 0
  5. Within the next few seconds, query currentBackup to get information about the backup while it is running.

    On a typical laptop, the backup runs for about 10-20 seconds, so run this request quickly after the previous command. While the job is active, the response includes a full object with progress fields.

    Terminal window
    curl -s http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine
    Example in-progress response
    /v1/backups/currentBackup/minioKeepFilesRoutine
    {
      "full": {
        "total-records": 5000000,
        "done-records": 1834210,
        "start-time": "2026-04-14T21:05:27.430Z",
        "percentage-done": 36,
        "duration": 4,
        "metrics": {
          "records-per-second": 509686,
          "kilobytes-per-second": 50432,
          "pipeline": 335
        }
      },
      "next-full": "2026-04-15T00:00:00Z",
      "next-incremental": "2026-04-14T22:00:00Z"
    }
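The in-progress fields are enough to estimate how long the job has left: remaining records divided by the reported write rate. A rough sketch, where `estimate_seconds_remaining` is an illustrative helper, not an ABS endpoint:

```python
def estimate_seconds_remaining(status: dict) -> float:
    """Estimate remaining backup time from a currentBackup response."""
    full = status["full"]
    remaining = full["total-records"] - full["done-records"]
    rps = full["metrics"]["records-per-second"]
    return remaining / rps if rps else float("inf")

# Using the numbers from the example in-progress response above:
status = {
    "full": {
        "total-records": 5_000_000,
        "done-records": 1_834_210,
        "metrics": {"records-per-second": 509_686},
    }
}
print(round(estimate_seconds_remaining(status), 1))  # 6.2
```

At the reported rate, roughly six seconds of the example backup remain, which matches the 10-second total duration reported after completion.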
  6. Wait for the backup to finish, then query currentBackup again.

    After completion, the full object is no longer present and last-full is updated.

    Terminal window
    curl -s http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine
    Example post-completion response
    /v1/backups/currentBackup/minioKeepFilesRoutine
    {
      "last-full": "2026-04-14T21:05:37.769Z",
      "next-full": "2026-04-15T00:00:00Z",
      "next-incremental": "2026-04-14T22:00:00Z"
    }
  7. List full backups and confirm the new backup entry.

    Look for a recent timestamp and verify that record-count is the same value that you loaded with asbench.

    Terminal window
    curl -s http://localhost:8080/v1/backups/full
    Example response
    /v1/backups/full
    {
      "minioKeepFilesRoutine": [
        {
          "created": "2026-04-14T21:05:27.732Z",
          "timestamp": 1776197127732,
          "finished": "2026-04-14T21:05:37.769Z",
          "duration": 10,
          "namespace": "test",
          "record-count": 5000000,
          "byte-count": 506875071,
          "file-count": 1,
          "secondary-index-count": 0,
          "udf-count": 0,
          "key": "minioKeepFilesRoutine/backup/1776197127732/data/test",
          "compression": "NONE",
          "encryption": "NONE"
        }
      ]
    }
  8. Trigger a restore job.

    This command does three things:

    • Sends a POST request to /v1/restore/timestamp to start a restore operation.
    • Passes JSON with the routine name and the current time in milliseconds.
    • Saves the response (a restore job ID) into a shell variable named RESTORE_JOB_ID.

    The steps are wrapped into one command to make the quickstart faster to run. The job ID will be different in your environment.

    ABS restores the most recent full backup taken before that timestamp, then applies incremental backups up to that timestamp.

    You can reuse the variable in the next request without manually copying and pasting the job ID. If you open another terminal session, set the variable again by rerunning the command.

    Terminal window
    RESTORE_JOB_ID=$(curl -s -X POST http://localhost:8080/v1/restore/timestamp \
      -H "Content-Type: application/json" \
      -d "{\"routine\":\"minioKeepFilesRoutine\",\"time\":$(($(date +%s)*1000))}")
    echo "$RESTORE_JOB_ID"
    Example response
    4697118440623777782
  9. Check restore status with the returned job ID.

    Wait until the response shows status as Done or Failed.

    Terminal window
    curl -s "http://localhost:8080/v1/restore/status/${RESTORE_JOB_ID}"
    Example response
    {
      "read-records": 5000000,
      "total-bytes": 505625000,
      "expired-records": 0,
      "skipped-records": 0,
      "ignored-records": 0,
      "inserted-records": 0,
      "existed-records": 0,
      "fresher-records": 5000000,
      "index-count": 0,
      "udf-count": 0,
      "errors-in-doubt": 0,
      "status": "Done"
    }

    In this quickstart, restore runs to the same namespace, so inserted-records can stay 0 while read-records and fresher-records increase.
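Rather than re-running the status request by hand, a script can poll until the job leaves the running state. A sketch with the HTTP call injected so the loop is easy to test offline; `wait_for_restore` and `get_status` are illustrative names:

```python
import time

def wait_for_restore(get_status, job_id: str,
                     timeout: float = 300.0, delay: float = 2.0) -> dict:
    """Poll GET /v1/restore/status/{job_id} until status is Done or Failed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)  # e.g. a function wrapping urllib or requests
        if status.get("status") in ("Done", "Failed"):
            return status
        time.sleep(delay)
    raise TimeoutError(f"restore {job_id} still running after {timeout}s")

# With a stub in place of the real HTTP call:
responses = iter([{"status": "Running"}, {"status": "Done"}])
final = wait_for_restore(lambda _: next(responses), "4697118440623777782", delay=0)
assert final["status"] == "Done"
```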

Monitor ABS with Prometheus and Grafana

Prometheus and Grafana provide a monitoring pipeline for ABS:

  • Prometheus scrapes raw time-series metrics from ABS.
  • Grafana queries those metrics and visualizes backup health, rates, and failures.
  1. Create a file named prometheus.yml in your current build/docker-compose working directory with a Prometheus scrape config for ABS.

    This tells Prometheus where ABS is running on the Docker network.

    prometheus.yml
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'aerospike-backup-service'
        static_configs:
          - targets: ['aerospike-backup-service:8080']
  2. Start Prometheus and Grafana on the same Docker network as the ABS stack.

    Terminal window
    docker run -d --name abs-prometheus \
      --network docker-compose_default \
      -p 9090:9090 \
      -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml" \
      prom/prometheus

    docker run -d --name abs-grafana \
      --network docker-compose_default \
      -p 3301:3000 \
      grafana/grafana-oss

    You should see two new container IDs printed to the console and two new containers running in your Docker Desktop dashboard.

  3. (Optional) Confirm Prometheus can scrape ABS metrics by sending GET requests to the /api/v1/query endpoint.

    You can also check http://localhost:9090/targets and verify the aerospike-backup-service target is UP.

    Terminal window
    curl -s "http://localhost:9090/api/v1/query?query=up%7Bjob%3D%22aerospike-backup-service%22%7D"
    curl -s "http://localhost:9090/api/v1/query?query=aerospike_backup_service_backup_events_total"
    Example response
    up{job="aerospike-backup-service"}
    {
      "status": "success",
      "data": {
        "resultType": "vector",
        "result": [
          {
            "metric": {
              "__name__": "up",
              "instance": "aerospike-backup-service:8080",
              "job": "aerospike-backup-service"
            },
            "value": [1774389204.122, "1"]
          }
        ]
      }
    }
    aerospike_backup_service_backup_events_total
    {
      "status": "success",
      "data": {
        "resultType": "vector",
        "result": [
          {
            "metric": {
              "__name__": "aerospike_backup_service_backup_events_total",
              "instance": "aerospike-backup-service:8080",
              "job": "aerospike-backup-service",
              "outcome": "success",
              "routine": "minioKeepFilesRoutine",
              "type": "full"
            },
            "value": [1774389204.138, "1"]
          }
        ]
      }
    }
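The %7B and %22 escapes in the curl commands are just URL encoding of the PromQL selector. If you build these queries in a script, let the standard library do the escaping and pull the sample value out of the response shape shown above; `prom_query_url` and `extract_value` are illustrative helpers:

```python
from urllib.parse import urlencode

def prom_query_url(base: str, query: str) -> str:
    """Build a Prometheus instant-query URL with proper escaping."""
    return f"{base}/api/v1/query?{urlencode({'query': query})}"

def extract_value(response: dict) -> float:
    """Read the first sample value from an instant-query response."""
    return float(response["data"]["result"][0]["value"][1])

url = prom_query_url("http://localhost:9090", 'up{job="aerospike-backup-service"}')
# url ends with query=up%7Bjob%3D%22aerospike-backup-service%22%7D, as in the curl command

sample = {"status": "success",
          "data": {"resultType": "vector",
                   "result": [{"metric": {"__name__": "up"},
                               "value": [1774389204.122, "1"]}]}}
assert extract_value(sample) == 1.0  # 1 means the scrape target is up
```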
  4. Open Grafana in a browser window at http://localhost:3301 and sign in with admin / admin. You can skip the prompt to change your password.

  5. Add a Prometheus data source:

    • Go to http://localhost:3301/connections/datasources/new and select Prometheus.
    • Set the URL to http://abs-prometheus:9090, then select Save & test.
  6. Import the premade ABS dashboard hosted by Grafana.

    • Go to http://localhost:3301/dashboard/import.
    • Enter 21375, click Load, then click Import.

    Dashboard reference: Aerospike Backup Service dashboard.

  7. Change the Grafana time range to Last 30 minutes.

    The default Last 30 days window can hide very recent data points in a fresh test environment because of its coarse sampling resolution. The dashboard is designed for longer-running trends rather than short example runs, so you may need to trigger a few backups and refresh the time range to see updated metrics right away.

  8. Trigger another backup and watch progress in Grafana.

    Terminal window
    curl -X POST http://localhost:8080/v1/backups/full/minioKeepFilesRoutine

    In Grafana, watch the ABS dashboard and confirm these changes:

    • Backup progress: The % Backup Progress panel, reflecting the metric aerospike_backup_service_backup_progress_pct, increases during a running backup and drops after the job finishes.
    • Backup events: The Success panel, reflecting aerospike_backup_service_backup_events_total{outcome="success"}, increases after a successful run.
    • Error-related events: The Failures, Skip, and Retry panels stay flat when backups complete successfully.
    • Backup duration: After the job finishes, the Full - Backup duration and Backup duration - full panels update with data from the completed backup.

You have completed a full setup and test of Aerospike and Aerospike Backup Service!

If all your steps completed successfully, you should have confirmed the following:

  • Your ABS instance can reach the Aerospike source cluster and the backup storage destination.
  • Your backup job requests are executing through the full API-to-storage path.
  • The monitoring pipeline from ABS to Prometheus to Grafana is complete and working.

You can now continue to send more backup and restore requests, monitor them with Prometheus and Grafana, and explore the ABS API further.

When you finish testing, stop and remove the test environment containers. You may have created several gigabytes of test backup files in MinIO depending on the record count you loaded. Run these commands from the build/docker-compose directory in the cloned repository.

Terminal window
docker rm -f abs-prometheus abs-grafana
docker compose down -v

