Aerospike Backup Service quickstart
In this ten-minute quickstart, you will install Aerospike Backup Service (ABS), run backup and restore operations through the REST API, and monitor backup activity with Prometheus and Grafana. You will use a local Docker Compose environment so you can test ABS safely before moving to a larger deployment.
The quickstart starts these core services first:
- `aerospike-cluster`: a single-node Aerospike Database test deployment
- `aerospike-backup-service`: the ABS API service that receives backup and restore requests
- `minio`: S3-compatible object storage where backup files are written
- `minio-client`: a short-lived init container that creates the backup bucket in MinIO
Later in the quickstart, you will also add:
- `prometheus`: a metrics collector that scrapes ABS at `/metrics`
- `grafana`: a dashboard UI that visualizes Prometheus metrics
You will trigger a backup, run a restore request, and confirm monitoring signals.
Architecture
ABS runs as a separate service alongside Aerospike Database. Aerospike Database stores your records and serves application reads and writes.
ABS is the backup control plane. It exposes REST endpoints, schedules backup jobs, and writes backups to an external storage target, either local or in the cloud.
Prometheus scrapes ABS metrics for job status and timing, and Grafana turns those metrics into dashboards.
Prerequisites
- Git installed to clone the repository.
- Docker Desktop installed and running. This quickstart uses Docker Compose so you can run multiple containers on a single machine.
- `curl` installed so you can send REST requests to ABS from a terminal without a separate client application.
Start a small test cluster
1. Clone the Aerospike Backup Service repository and navigate to the Docker Compose directory.

   ```shell
   git clone https://github.com/aerospike/aerospike-backup-service.git
   cd aerospike-backup-service/build/docker-compose
   ```
2. (Optional) Review the Docker Compose file that starts Aerospike, ABS, and MinIO.

   ```shell
   cat docker-compose.yaml
   ```

   The file defines four services:

   - `minio` and `minio-client`: MinIO object storage and an init container that creates the backup bucket.
   - `aerospike-cluster`: a single-node Aerospike database.
   - `aerospike-backup-service`: the ABS service, pulled from Docker Hub. It connects to the Aerospike cluster and writes backups to MinIO.
3. (Optional) Review the ABS configuration files.

   ```shell
   cat aerospike-backup-service.yml
   ```

   `aerospike-backup-service.yml` has four sections:

   - `aerospike-clusters` defines the Aerospike cluster that ABS connects to.
   - `storage` defines where ABS stores the backup files. This quickstart uses MinIO, a local stand-in for cloud object storage.
   - `backup-policies` defines how backup jobs run and how backups are retained.
   - `backup-routines` is a list of backup routines for ABS to run.

   The `credentials` file provides AWS-style credentials that the ABS service uses to authenticate with MinIO S3 storage. See Configuration file examples for more details about the configuration file sections.
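The cluster and routine sections can be sketched as follows, assembled from the names and values this quickstart's API responses return later. The bodies of `storage` and `backup-policies` are placeholders here, so check the actual file in the repository for the exact schema.

```yaml
# Illustrative sketch of aerospike-backup-service.yml (not the exact file).
aerospike-clusters:
  absCluster1:
    seed-nodes:
      - host-name: aerospike-cluster
        port: 3000
    credentials:
      user: admin
      password: admin
storage:
  minioStorage:
    # S3-compatible endpoint, bucket, and region settings go here (placeholder).
backup-policies:
  keepFilesPolicy:
    # Retention and parallelism settings go here (placeholder).
backup-routines:
  minioKeepFilesRoutine:
    backup-policy: keepFilesPolicy
    source-cluster: absCluster1
    storage: minioStorage
    interval-cron: "@daily"
    incr-interval-cron: "@hourly"
    namespaces: ["test"]
```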
4. Start the Docker Compose stack and wait for the services to start.

   The `minio-client` init container may show `Exited (0)` after it creates the backup bucket. This is expected.

   ```shell
   docker compose up -d
   ```

   Example response:

   ```
   ✔ Network docker-compose_default        Created   0.0s
   ✔ Container aerospike-cluster           Healthy  31.5s
   ✔ Container minio                       Healthy  30.8s
   ✔ Container minio-client                Exited   31.4s
   ✔ Container aerospike-backup-service    Started  34.5s
   ```
5. Check that ABS is reachable by using `curl` to send GET requests to the `/health` and `/version` endpoints on port `8080`.

   Verify that you get `Ok` from `/health` and a JSON response from `/version`. Run the commands one at a time to see the separate responses.

   ```shell
   curl -s http://localhost:8080/health
   curl -s http://localhost:8080/version
   ```

   Example response:

   ```
   Ok
   {"version": "v3.5.0", "commit": "COMMIT_SHA", "build-time": "BUILD_TIMESTAMP"}
   ```
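If you script this quickstart, you may want to block until ABS reports healthy before sending further requests. The helper below is a hypothetical sketch (`wait_healthy` is not part of ABS): it retries a supplied check command until the command prints `Ok`. In this environment you would pass `curl -s http://localhost:8080/health` as the check command.

```shell
# Hypothetical helper: retry a check command until it prints "Ok".
# Usage: wait_healthy <attempts> <command...>
wait_healthy() {
  attempts=$1
  shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # Run the supplied command and compare its output to "Ok".
    if [ "$("$@")" = "Ok" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# With the stack running, you would call:
#   wait_healthy 30 curl -s http://localhost:8080/health
```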
Send REST requests for backup and restore
1. Load sample records into the Aerospike `test.demo` set.

   This quickstart uses local storage (MinIO on the same Docker host) and a fast Aerospike deployment, so a small dataset often backs up too quickly to observe progress in Grafana. In this step, you use the Aerospike Benchmark tool (`asbench`) to create and load a dataset of 5 million records (about 300 MB in total). `asbench` prints live progress lines (`write(tps=...)`) as it runs.

   ```shell
   docker run --rm --network docker-compose_default \
     aerospike/aerospike-tools \
     asbench -h aerospike-cluster -p 3000 \
     -n test \
     -s demo \
     -k 5000000 \
     -o I8 \
     -w I \
     -z 16 \
     -T 5000
   ```

   Example response:

   ```
   2026-04-14 18:20:32 INFO write(tps=201704 ...) total(tps=201704 ...)
   2026-04-14 18:20:33 INFO write(tps=220972 ...) total(tps=220972 ...)
   ...
   ```

   The load typically takes under 30 seconds on a laptop.
2. Verify the sample records are present by using the Aerospike Info tool (`asinfo`) to read the number of records in the `test.demo` set.

   ```shell
   docker run --rm --network docker-compose_default \
     aerospike/aerospike-tools \
     asinfo --no-config-file -h aerospike-cluster -v "sets/test"
   ```

   Example response:

   ```
   ns=test:set=demo:objects=5000000:tombstones=0:data_used_bytes=...
   ```

   Look for the `objects` value for `set=demo` and confirm it matches the number of records you loaded in the previous step.
3. Read the configured cluster and routine names by sending two separate GET requests to the ABS API.

   The results confirm that ABS loaded your configuration file and tell you which routine name to use in API calls.

   ```shell
   curl -s http://localhost:8080/v1/config/clusters
   curl -s http://localhost:8080/v1/config/routines
   ```

   Example response from `/v1/config/clusters`:

   ```json
   {
     "absCluster1": {
       "seed-nodes": [{"host-name": "aerospike-cluster", "port": 3000}],
       "credentials": {"user": "admin", "password": "admin"}
     }
   }
   ```

   Example response from `/v1/config/routines`:

   ```json
   {
     "minioKeepFilesRoutine": {
       "backup-policy": "keepFilesPolicy",
       "source-cluster": "absCluster1",
       "storage": "minioStorage",
       "interval-cron": "@daily",
       "incr-interval-cron": "@hourly",
       "namespaces": ["test"]
     }
   }
   ```

   In this quickstart, the routine name is `minioKeepFilesRoutine` and the cluster name is `absCluster1`. The routine is configured to perform full backups daily and incremental backups hourly.
4. Trigger an on-demand full backup with a POST request to the ABS API, then get ready to run the next command so you can catch the backup during the short window while it is running.

   This bypasses the schedule defined in the policy and starts a backup immediately. A successful request returns `202 Accepted`, which means the job was accepted and queued; `Content-Length: 0` means ABS returned no response body.

   ```shell
   curl -i -X POST http://localhost:8080/v1/backups/full/minioKeepFilesRoutine
   ```

   Example response:

   ```
   HTTP/1.1 202 Accepted
   Date: Tue, 14 Apr 2026 21:05:27 GMT
   Content-Length: 0
   ```
5. Within the next few seconds, query `currentBackup` to get information about the backup while it is running.

   On a typical laptop, the backup runs for about 10-20 seconds, so send this request quickly after the previous command. While the job is active, the response includes a `full` object with progress fields.

   ```shell
   curl -s http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine
   ```

   Example in-progress response:

   ```json
   {
     "full": {
       "total-records": 5000000,
       "done-records": 1834210,
       "start-time": "2026-04-14T21:05:27.430Z",
       "percentage-done": 36,
       "duration": 4,
       "metrics": {
         "records-per-second": 509686,
         "kilobytes-per-second": 50432,
         "pipeline": 335
       }
     },
     "next-full": "2026-04-15T00:00:00Z",
     "next-incremental": "2026-04-14T22:00:00Z"
   }
   ```
6. Wait for the backup to finish, then query `currentBackup` again.

   After completion, the `full` object is no longer present and `last-full` is updated.

   ```shell
   curl -s http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine
   ```

   Example post-completion response:

   ```json
   {
     "last-full": "2026-04-14T21:05:37.769Z",
     "next-full": "2026-04-15T00:00:00Z",
     "next-incremental": "2026-04-14T22:00:00Z"
   }
   ```
7. List full backups and confirm the new backup entry.

   Look for a recent timestamp and verify that `record-count` matches the number of records you loaded with `asbench`.

   ```shell
   curl -s http://localhost:8080/v1/backups/full
   ```

   Example response:

   ```json
   {
     "minioKeepFilesRoutine": [
       {
         "created": "2026-04-14T21:05:27.732Z",
         "timestamp": 1776197127732,
         "finished": "2026-04-14T21:05:37.769Z",
         "duration": 10,
         "namespace": "test",
         "record-count": 5000000,
         "byte-count": 506875071,
         "file-count": 1,
         "secondary-index-count": 0,
         "udf-count": 0,
         "key": "minioKeepFilesRoutine/backup/1776197127732/data/test",
         "compression": "NONE",
         "encryption": "NONE"
       }
     ]
   }
   ```
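If you want to check `record-count` from a script rather than by eye, you can pull it out of the response with standard text tools (`jq` is cleaner if you have it installed). This is an illustrative sketch that parses a saved copy of the response; with the stack running you would populate `response` from the `curl` command instead.

```shell
# Illustrative: extract record-count from a saved /v1/backups/full response.
# With the stack running:
#   response=$(curl -s http://localhost:8080/v1/backups/full)
response='{"minioKeepFilesRoutine":[{"namespace":"test","record-count":5000000,"byte-count":506875071}]}'

# Grab the first "record-count" field, then keep only the trailing digits.
record_count=$(printf '%s' "$response" | grep -o '"record-count": *[0-9]*' | head -n 1 | grep -o '[0-9]*$')
echo "$record_count"   # prints 5000000
```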
8. Trigger a restore job.

   This command does three things:

   - Sends a `POST` request to `/v1/restore/timestamp` to start a restore operation.
   - Passes JSON with the routine name and the current time in milliseconds.
   - Saves the response (a restore job ID) into an environment variable named `RESTORE_JOB_ID`.

   Everything is wrapped into one command to make the quickstart faster to run. The job ID will be different in your environment.

   ABS restores the most recent full backup taken before the given timestamp and then applies incremental backups up to that timestamp.

   You can reuse the variable in the next request without manually copying and pasting the job ID. If you open another terminal session, set the variable again by rerunning the command.

   ```shell
   RESTORE_JOB_ID=$(curl -s -X POST http://localhost:8080/v1/restore/timestamp \
     -H "Content-Type: application/json" \
     -d "{\"routine\":\"minioKeepFilesRoutine\",\"time\":$(($(date +%s)*1000))}")
   echo "$RESTORE_JOB_ID"
   ```

   Example response:

   ```
   4697118440623777782
   ```
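The `time` field in the request body is built with shell arithmetic: `date +%s` prints seconds since the Unix epoch, and multiplying by 1000 converts that to the milliseconds ABS expects. A standalone sketch of just that piece:

```shell
# date +%s prints seconds since the Unix epoch; ABS expects milliseconds,
# so multiply by 1000 with POSIX shell arithmetic.
now_ms=$(($(date +%s) * 1000))
echo "$now_ms"

# For current dates this is a 13-digit number, e.g. 1776197127000.
```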
9. Check restore status with the returned job ID.

   Wait until the response shows `status` as `Done` or `Failed`.

   ```shell
   curl -s "http://localhost:8080/v1/restore/status/${RESTORE_JOB_ID}"
   ```

   Example response:

   ```json
   {
     "read-records": 5000000,
     "total-bytes": 505625000,
     "expired-records": 0,
     "skipped-records": 0,
     "ignored-records": 0,
     "inserted-records": 0,
     "existed-records": 0,
     "fresher-records": 5000000,
     "index-count": 0,
     "udf-count": 0,
     "errors-in-doubt": 0,
     "status": "Done"
   }
   ```

   In this quickstart, the restore runs against the same namespace, so `inserted-records` can stay `0` while `read-records` and `fresher-records` increase.
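If you script the status check, you can poll until `status` leaves the running state. The sketch below parses a saved response with portable text tools (again, `jq` is cleaner if installed); the JSON is an abbreviated copy of the example above, and with the stack running you would populate `response` from the `curl` command.

```shell
# Illustrative: pull the status field out of a saved restore-status response.
# With the stack running:
#   response=$(curl -s "http://localhost:8080/v1/restore/status/${RESTORE_JOB_ID}")
response='{"read-records":5000000,"fresher-records":5000000,"status":"Done"}'

# Match the "status" key-value pair and take the value between the quotes.
status=$(printf '%s' "$response" | grep -o '"status": *"[^"]*"' | cut -d'"' -f4)
echo "$status"   # prints Done

case "$status" in
  Done)   echo "restore finished" ;;
  Failed) echo "restore failed" ;;
  *)      echo "still running, poll again" ;;
esac
```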
Monitor ABS with Prometheus and Grafana
Prometheus and Grafana provide a monitoring pipeline for ABS:
- Prometheus scrapes raw time-series metrics from ABS.
- Grafana queries those metrics and visualizes backup health, rates, and failures.
1. Create a file named `prometheus.yml` in your current `build/docker-compose` working directory with a Prometheus scrape config for ABS.

   This tells Prometheus where ABS is running on the Docker network.

   ```yaml
   # prometheus.yml
   global:
     scrape_interval: 15s
   scrape_configs:
     - job_name: 'aerospike-backup-service'
       static_configs:
         - targets: ['aerospike-backup-service:8080']
   ```
2. Start Prometheus and Grafana on the same Docker network as the ABS stack.

   ```shell
   docker run -d --name abs-prometheus \
     --network docker-compose_default \
     -p 9090:9090 \
     -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml" \
     prom/prometheus

   docker run -d --name abs-grafana \
     --network docker-compose_default \
     -p 3301:3000 \
     grafana/grafana-oss
   ```

   You should see two new container IDs printed to the console and two new containers running in your Docker Desktop dashboard.
3. (Optional) Confirm Prometheus can scrape ABS metrics by sending GET requests to the `/api/v1/query` endpoint.

   You can also open `http://localhost:9090/targets` and verify that the `aerospike-backup-service` target is `UP`.

   ```shell
   curl -s "http://localhost:9090/api/v1/query?query=up%7Bjob%3D%22aerospike-backup-service%22%7D"
   curl -s "http://localhost:9090/api/v1/query?query=aerospike_backup_service_backup_events_total"
   ```

   Example response for `up{job="aerospike-backup-service"}`:

   ```json
   {
     "status": "success",
     "data": {
       "resultType": "vector",
       "result": [
         {
           "metric": {
             "__name__": "up",
             "instance": "aerospike-backup-service:8080",
             "job": "aerospike-backup-service"
           },
           "value": [1774389204.122, "1"]
         }
       ]
     }
   }
   ```

   Example response for `aerospike_backup_service_backup_events_total`:

   ```json
   {
     "status": "success",
     "data": {
       "resultType": "vector",
       "result": [
         {
           "metric": {
             "__name__": "aerospike_backup_service_backup_events_total",
             "instance": "aerospike-backup-service:8080",
             "job": "aerospike-backup-service",
             "outcome": "success",
             "routine": "minioKeepFilesRoutine",
             "type": "full"
           },
           "value": [1774389204.138, "1"]
         }
       ]
     }
   }
   ```
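The first query string above is the PromQL expression `up{job="aerospike-backup-service"}` with its braces, equals sign, and quotes percent-encoded for use in a URL. A sketch of that mapping, decoding the query string back to readable PromQL with `sed`:

```shell
# Decode the percent-encoded PromQL query back to its readable form.
# %7B -> {   %7D -> }   %3D -> =   %22 -> "
encoded='up%7Bjob%3D%22aerospike-backup-service%22%7D'
decoded=$(printf '%s' "$encoded" | sed 's/%7B/{/g; s/%7D/}/g; s/%3D/=/g; s/%22/"/g')
echo "$decoded"   # prints up{job="aerospike-backup-service"}
```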
4. Open Grafana in a browser window at `http://localhost:3301` and sign in with `admin`/`admin`. You can skip the prompt to change your password.
5. Add a Prometheus data source:

   - Go to `http://localhost:3301/connections/datasources/new` and select Prometheus.
   - Set the URL to `http://abs-prometheus:9090`, then select Save & test.
6. Import the premade ABS dashboard hosted by Grafana.

   - Go to `http://localhost:3301/dashboard/import`.
   - Enter `21375`, click Load, then click Import.

   Dashboard reference: Aerospike Backup Service dashboard.
7. Change the Grafana time range to Last 30 minutes.

   The default Last 30 days window can miss very recent metrics in a fresh test environment because of its coarser sampling resolution. Grafana is designed more for long-running trends than for short example backup runs, so you may need to trigger a few backups and refresh the time range to see updated metrics right away.
8. Trigger another backup and watch progress in Grafana.

   ```shell
   curl -X POST http://localhost:8080/v1/backups/full/minioKeepFilesRoutine
   ```

   In Grafana, watch the ABS dashboard and confirm these changes:

   - Backup progress: The % Backup Progress panel, reflecting the metric `aerospike_backup_service_backup_progress_pct`, increases during a running backup and drops after the job finishes.
   - Backup events: The Success panel, reflecting `aerospike_backup_service_backup_events_total{outcome="success"}`, increases after a successful run.
   - Error-related events: The Failures, Skip, and Retry panels stay flat when backups complete successfully.
   - Backup duration: After the job finishes, the Full - Backup duration and Backup duration - full panels update with data from the completed backup.
You have completed a full setup and test of Aerospike and Aerospike Backup Service!
If all steps completed successfully, you have confirmed the following:
- Your ABS instance can reach the Aerospike source cluster and the backup storage destination.
- Your backup job requests are executing through the full API-to-storage path.
- The monitoring pipeline from ABS to Prometheus to Grafana is complete and working.
You can now continue to send more backup and restore requests, monitor them with Prometheus and Grafana, and explore the ABS API further.
When you finish testing, stop and remove the test environment containers.
You may have created several gigabytes of test backup files in MinIO depending on the record count you loaded.
Run these commands from the `build/docker-compose` directory in the cloned repository.
```shell
docker rm -f abs-prometheus abs-grafana
docker compose down -v
```

Next steps
- For installation options outside Docker, see Install and test ABS
- For more API workflows, see API usage examples
- For additional metrics and alerts, see ABS Monitoring