# Aerospike Backup Service quickstart

In this ten-minute quickstart, you will install Aerospike Backup Service (ABS), run backup and restore operations through the REST API, and monitor backup activity with Prometheus and Grafana. You will use a local Docker Compose environment so you can test ABS safely before moving to a larger deployment.

The quickstart starts these core services first:

-   `aerospike-cluster`: a single-node Aerospike Database test deployment
-   `aerospike-backup-service`: the ABS API service that receives backup and restore requests
-   `minio`: S3-compatible object storage where backup files are written
-   `minio-client`: a short-lived init container that creates the backup bucket in MinIO

Later in the quickstart, you will also add:

-   `prometheus`: metrics collector that scrapes ABS at `/metrics`
-   `grafana`: dashboard UI that visualizes Prometheus metrics

You will trigger a backup, run a restore request, and confirm monitoring signals.

::: note
This quickstart runs all components on one local Docker host for speed, including a single-node Aerospike deployment. In production, ABS, Aerospike, object storage, and monitoring usually run as separate services with independent scaling, access controls, and persistence. This setup does not demonstrate multi-node cluster behavior, such as migrations, node failure handling, or network partition scenarios.
:::

## Architecture

ABS runs as a separate service alongside Aerospike Database. Aerospike Database stores your records and serves application reads and writes.

ABS is the backup control plane. It exposes REST endpoints, schedules backup jobs, and writes backups to an external local or cloud storage target.

Prometheus scrapes ABS metrics for job status and timing, and Grafana turns those metrics into dashboards.

## Prerequisites

-   [Git](https://git-scm.com/) installed to clone the repository.
-   [Docker Desktop](https://docs.docker.com/get-docker/) installed and running. This quickstart uses Docker Compose so you can run multiple containers on a single machine.
-   `curl` installed so you can send REST requests to ABS from a terminal without a separate client application.

## Start a small test cluster

1.  Clone the Aerospike Backup Service repository and navigate to the Docker Compose directory.
    
    ```shell
    git clone https://github.com/aerospike/aerospike-backup-service.git
    cd aerospike-backup-service/build/docker-compose
    ```
    
2.  (Optional) Review the Docker Compose file that starts Aerospike, ABS, and MinIO.
    
    ```shell
    cat docker-compose.yaml
    ```
    
    The file defines four services:
    
    -   `minio` and `minio-client`: MinIO object storage and an init container that creates the backup bucket.
    -   `aerospike-cluster`: a single-node Aerospike database.
    -   `aerospike-backup-service`: the ABS service, pulled from Docker Hub. It connects to the Aerospike cluster and writes backups to MinIO.
3.  (Optional) Review the ABS configuration files.
    
    ```shell
    cat aerospike-backup-service.yml
    ```
    
    `aerospike-backup-service.yml` has four sections:
    
    -   `aerospike-clusters` defines the Aerospike cluster that ABS connects to.
    -   `storage` defines where ABS stores the backup files. This quickstart uses MinIO, a local stand-in for cloud object storage.
    -   `backup-policies` defines how backup jobs run and are retained.
    -   `backup-routines` is a list of backup routines for ABS to run.
    
    The `credentials` file provides AWS-style credentials that the ABS service uses to authenticate with MinIO S3 storage.
    
    See [Configuration file examples](https://aerospike.com/docs/database/tools/backup-and-restore/backup-service/config-examples) for more details about the configuration file sections.
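    For illustration, the `credentials` file uses the AWS shared-credentials layout. The exact contents below are an assumption based on this quickstart's default MinIO credentials; check the `credentials` file in the cloned repository for the authoritative values:

    ```ini
    [default]
    aws_access_key_id = minioadmin
    aws_secret_access_key = minioadmin
    ```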
    
    ::: caution
    This quickstart intentionally uses low-security defaults (for example `admin` / `admin`, `minioadmin` credentials, and a public test bucket) to reduce setup time. Use this setup only in an isolated local test environment. For production, use strong credentials, private buckets, TLS, and least-privilege IAM policies.
    :::
    
4.  Start the Docker Compose stack and wait for the services to start.
    
    The `minio-client` init container may show `Exited (0)` after it creates the backup bucket. This is expected.
    
    ```shell
    docker compose up -d
    ```
    
    Example response
    
    ```text
    ✔ Network docker-compose_default      Created                            0.0s
    ✔ Container aerospike-cluster         Healthy                            31.5s
    ✔ Container minio                     Healthy                            30.8s
    ✔ Container minio-client              Exited                             31.4s
    ✔ Container aerospike-backup-service  Started                            34.5s
    ```
    
5.  Check that ABS is reachable by using `curl` to send GET requests to the `/health` and `/version` endpoints at port `8080`.
    
    Verify that you get `Ok` from `/health` and a JSON response from `/version`. Run the commands one at a time to see the separate responses.
    
    ```shell
    curl -s http://localhost:8080/health
    curl -s http://localhost:8080/version
    ```
    
    Example response
    
    ```text
    Ok
    ```
    
    ```json
    {
      "version": "v3.5.0",
      "commit": "COMMIT_SHA",
      "build-time": "BUILD_TIMESTAMP"
    }
    ```
    

## Send REST requests for backup and restore

1.  Load sample records into the Aerospike `test.demo` set.
    
    This quickstart uses local storage (MinIO on the same Docker host) and a fast single-node Aerospike deployment, so a small dataset often backs up too quickly to observe progress in Grafana. In this step, you use the Aerospike Benchmark tool (`asbench`) to create and load a dataset of 5 million records (about 300 MB in total).
    
    `asbench` prints live progress lines (`write(tps=...)`) as it runs.
    
    ```shell
    docker run --rm --network docker-compose_default \
      aerospike/aerospike-tools \
      asbench -h aerospike-cluster -p 3000 \
      -n test \
      -s demo \
      -k 5000000 \
      -o I8 \
      -w I \
      -z 16 \
      -T 5000
    ```
    
    Example response
    
    ```text
    2026-04-14 18:20:32 INFO write(tps=201704 ...) total(tps=201704 ...)
    2026-04-14 18:20:33 INFO write(tps=220972 ...) total(tps=220972 ...)
    ...
    ```
    
    This load typically completes in under 30 seconds on a laptop.
    
2.  Verify the sample records are present using `asinfo` to read data from the Aerospike cluster.
    
    The following command uses the Aerospike Info tool (`asinfo`) to read the number of records in the `test.demo` set.
    
    ```shell
    docker run --rm --network docker-compose_default \
      aerospike/aerospike-tools \
      asinfo --no-config-file -h aerospike-cluster -v "sets/test"
    ```
    
    Example response
    
    ```text
    ns=test:set=demo:objects=5000000:tombstones=0:data_used_bytes=...
    ```
    
    Look for the `objects` value for `set=demo` and confirm it matches the number of records you loaded in the previous step.
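    If you script this check, you can extract the count from the colon-separated `asinfo` output. A minimal sketch (the sample line below is made up to match the format above; the `data_used_bytes` value is arbitrary):

    ```shell
    # Parse the objects count out of an asinfo "sets/test" response line.
    # The line is a set of colon-separated key=value pairs.
    line="ns=test:set=demo:objects=5000000:tombstones=0:data_used_bytes=320000000"
    objects=$(echo "$line" | tr ':' '\n' | awk -F= '$1 == "objects" {print $2}')
    echo "$objects"   # prints 5000000
    ```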
    
3.  Read the configured cluster and routine names by sending two separate GET requests to the ABS API.
    
    The results confirm ABS loaded your config file and tell you which routine name to use in API calls.
    
    ```shell
    curl -s http://localhost:8080/v1/config/clusters
    curl -s http://localhost:8080/v1/config/routines
    ```
    
    Example response
    
    /v1/config/clusters
    
    ```json
    {
      "absCluster1": {
        "seed-nodes": [
          {
            "host-name": "aerospike-cluster",
            "port": 3000
          }
        ],
        "credentials": {
          "user": "admin",
          "password": "admin"
        }
      }
    }
    ```
    
    /v1/config/routines
    
    ```json
    {
      "minioKeepFilesRoutine": {
        "backup-policy": "keepFilesPolicy",
        "source-cluster": "absCluster1",
        "storage": "minioStorage",
        "interval-cron": "@daily",
        "incr-interval-cron": "@hourly",
        "namespaces": ["test"]
      }
    }
    ```
    
    In this quickstart, the routine name is `minioKeepFilesRoutine` and the cluster name is `absCluster1`. The routine is configured to perform full backups daily and incremental backups hourly.
    
4.  Trigger an on-demand full backup with a POST request to the ABS API, then get ready to run the next command to capture the backup during the short window while it is running.
    
    This bypasses the schedule defined in the policy and starts a backup immediately. A successful request returns `202 Accepted`, which means the job was accepted and queued. `Content-Length: 0` means ABS accepted the backup request and returned no response body.
    
    ```shell
    curl -i -X POST http://localhost:8080/v1/backups/full/minioKeepFilesRoutine
    ```
    
    Example response
    
    ```text
    HTTP/1.1 202 Accepted
    Date: Tue, 14 Apr 2026 21:05:27 GMT
    Content-Length: 0
    ```
    
5.  Within the next few seconds, query `currentBackup` to get information about the backup while it is running.
    
    On a typical laptop, the backup runs for about 10-20 seconds, so run this request quickly after the previous command. While the job is active, the response includes a `full` object with progress fields.
    
    ```shell
    curl -s http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine
    ```
    
    Example in-progress response
    
    /v1/backups/currentBackup/minioKeepFilesRoutine
    
    ```json
    {
      "full": {
        "total-records": 5000000,
        "done-records": 1834210,
        "start-time": "2026-04-14T21:05:27.430Z",
        "percentage-done": 36,
        "duration": 4,
        "metrics": {
          "records-per-second": 509686,
          "kilobytes-per-second": 50432,
          "pipeline": 335
        }
      },
      "next-full": "2026-04-15T00:00:00Z",
      "next-incremental": "2026-04-14T22:00:00Z"
    }
    ```
    
    ::: note
    If your hardware is powerful enough, the backup may complete before you can manually run the `currentBackup` command. In this case, you can try another backup or even go back to the start of this section and run the `asbench` command again with an increased record count. ABS backup speed is roughly linear, so adjusting the record count should adjust the backup duration at a predictable rate.
    :::
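    If you want to watch the job without retyping the request, a small polling loop works too. This helper is our own sketch, not part of ABS; it polls once per second until the in-progress `full` object disappears from the response:

    ```shell
    # Hypothetical helper: poll the currentBackup endpoint until the
    # in-progress "full" object is gone, then report completion.
    # Note: if ABS is unreachable, the loop also exits immediately.
    poll_backup() {
      url="http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine"
      while curl -s "$url" | grep -q '"full":'; do
        echo "backup still running..."
        sleep 1
      done
      echo "backup complete"
    }
    ```

    Run `poll_backup` right after triggering a backup to see it report progress until completion.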
    
6.  Wait for the backup to finish, then query `currentBackup` again.
    
    After completion, the `full` object is no longer present and `last-full` is updated.
    
    ```shell
    curl -s http://localhost:8080/v1/backups/currentBackup/minioKeepFilesRoutine
    ```
    
    Example post-completion response
    
    /v1/backups/currentBackup/minioKeepFilesRoutine
    
    ```json
    {
      "last-full": "2026-04-14T21:05:37.769Z",
      "next-full": "2026-04-15T00:00:00Z",
      "next-incremental": "2026-04-14T22:00:00Z"
    }
    ```
    
7.  List full backups and confirm the new backup entry.
    
    Look for a recent timestamp and verify that `record-count` is the same value that you loaded with `asbench`.
    
    ```shell
    curl -s http://localhost:8080/v1/backups/full
    ```
    
    Example response
    
    /v1/backups/full
    
    ```json
    {
      "minioKeepFilesRoutine": [
        {
          "created": "2026-04-14T21:05:27.732Z",
          "timestamp": 1776197127732,
          "finished": "2026-04-14T21:05:37.769Z",
          "duration": 10,
          "namespace": "test",
          "record-count": 5000000,
          "byte-count": 506875071,
          "file-count": 1,
          "secondary-index-count": 0,
          "udf-count": 0,
          "key": "minioKeepFilesRoutine/backup/1776197127732/data/test",
          "compression": "NONE",
          "encryption": "NONE"
        }
      ]
    }
    ```
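    You can also use the listing to sanity-check backup throughput. A quick calculation with the `byte-count` and `duration` values from the example response above (your numbers will differ):

    ```shell
    # Rough throughput: byte-count / duration, converted to MiB/s.
    # Integer shell arithmetic truncates, which is fine for a sanity check.
    bytes=506875071
    seconds=10
    echo "$((bytes / seconds / 1024 / 1024)) MiB/s"   # prints 48 MiB/s
    ```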
    
8.  Trigger a restore job.
    
    This command does three things:
    
    -   Sends a `POST` request to `/v1/restore/timestamp` to start a restore operation.
    -   Passes JSON with the routine name and the current time in milliseconds.
    -   Saves the response (a restore job ID) into an environment variable named `RESTORE_JOB_ID`.
    
    This is all wrapped into one command for the quickstart to make it faster to run. The job ID will be different in your environment.
    
    ABS restores the most recent full backup before that timestamp and then applies incremental backups up to that timestamp.
    
    You can reuse this variable in the next request without manually copying and pasting the job ID. If you open another terminal session, set the variable again by rerunning the command.
    
    ```shell
    RESTORE_JOB_ID=$(curl -s -X POST http://localhost:8080/v1/restore/timestamp \
      -H "Content-Type: application/json" \
      -d "{\"routine\":\"minioKeepFilesRoutine\",\"time\":$(($(date +%s)*1000))}")
    echo "$RESTORE_JOB_ID"
    ```
    
    Example response
    
    ```text
    4697118440623777782
    ```
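    The `time` field in the request above is computed with shell arithmetic: `date +%s` prints whole seconds since the Unix epoch, and the restore API expects milliseconds, so the command multiplies by 1000.

    ```shell
    # Current time as Unix epoch milliseconds (seconds * 1000).
    now_ms=$(($(date +%s)*1000))
    echo "$now_ms"
    ```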
    
9.  Check restore status with the returned job ID.
    
    Wait until the response shows `status` as `Done` or `Failed`.
    
    ```shell
    curl -s "http://localhost:8080/v1/restore/status/${RESTORE_JOB_ID}"
    ```
    
    Example response
    
    ```json
    {
      "read-records": 5000000,
      "total-bytes": 505625000,
      "expired-records": 0,
      "skipped-records": 0,
      "ignored-records": 0,
      "inserted-records": 0,
      "existed-records": 0,
      "fresher-records": 5000000,
      "index-count": 0,
      "udf-count": 0,
      "errors-in-doubt": 0,
      "status": "Done"
    }
    ```
    
    In this quickstart, restore runs to the same namespace, so `inserted-records` can stay `0` while `read-records` and `fresher-records` increase.
    

## Monitor ABS with Prometheus and Grafana

Prometheus and Grafana provide a monitoring pipeline for ABS:

-   Prometheus scrapes raw time-series metrics from ABS.
-   Grafana queries those metrics and visualizes backup health, rates, and failures.

::: note
In this quickstart, monitoring is a single Prometheus container and a single Grafana container. In production, monitoring is usually centralized and durable, with longer retention, alerting, authentication, and dashboard access controls.
:::

1.  Create a file named `prometheus.yml` in your current `build/docker-compose` working directory with a Prometheus scrape config for ABS.
    
    This tells Prometheus where ABS is running on the Docker network.
    
    prometheus.yml
    
    ```yaml
    global:
      scrape_interval: 15s

    scrape_configs:
      - job_name: 'aerospike-backup-service'
        static_configs:
          - targets: ['aerospike-backup-service:8080']
    ```
    
2.  Start Prometheus and Grafana on the same Docker network as the ABS stack.
    
    ```shell
    docker run -d --name abs-prometheus \
      --network docker-compose_default \
      -p 9090:9090 \
      -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml" \
      prom/prometheus

    docker run -d --name abs-grafana \
      --network docker-compose_default \
      -p 3301:3000 \
      grafana/grafana-oss
    ```
    
    You should see two new container IDs printed to the console and two new containers running in your Docker Desktop dashboard.
    
3.  (Optional) Confirm Prometheus can scrape ABS metrics by sending GET requests to the `/api/v1/query` endpoint.
    
    You can also check `http://localhost:9090/targets` and verify the `aerospike-backup-service` target is `UP`.
    
    ```shell
    curl -s "http://localhost:9090/api/v1/query?query=up%7Bjob%3D%22aerospike-backup-service%22%7D"
    
    curl -s "http://localhost:9090/api/v1/query?query=aerospike_backup_service_backup_events_total"
    ```
    
    Example response
    
    up{job="aerospike-backup-service"}
    
    ```json
    {
      "status": "success",
      "data": {
        "resultType": "vector",
        "result": [
          {
            "metric": {
              "__name__": "up",
              "instance": "aerospike-backup-service:8080",
              "job": "aerospike-backup-service"
            },
            "value": [1774389204.122, "1"]
          }
        ]
      }
    }
    ```
    
    `aerospike_backup_service_backup_events_total`
    
    ```json
    {
      "status": "success",
      "data": {
        "resultType": "vector",
        "result": [
          {
            "metric": {
              "__name__": "aerospike_backup_service_backup_events_total",
              "instance": "aerospike-backup-service:8080",
              "job": "aerospike-backup-service",
              "outcome": "success",
              "routine": "minioKeepFilesRoutine",
              "type": "full"
            },
            "value": [1774389204.138, "1"]
          }
        ]
      }
    }
    ```
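    Hand-encoding PromQL into a URL, as in the requests above, is error-prone. As an alternative sketch, `curl -G` with `--data-urlencode` does the encoding for you (the `promql` function name is our own shorthand, not part of ABS or Prometheus):

    ```shell
    # Helper: send a raw PromQL expression to the local Prometheus instance,
    # letting curl URL-encode the query parameter.
    promql() {
      curl -s -G "http://localhost:9090/api/v1/query" \
        --data-urlencode "query=$1"
    }
    ```

    For example, `promql 'up{job="aerospike-backup-service"}'` sends the same query as the first hand-encoded request above.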
    
4.  Open Grafana in a browser window at `http://localhost:3301` and sign in with `admin` / `admin`. You can skip the prompt to change your password.
    
5.  Add a Prometheus data source:
    
    -   Go to `http://localhost:3301/connections/datasources/new` and select **Prometheus**.
    -   Set the URL to `http://abs-prometheus:9090`, then select **Save & test**.
6.  Import the premade ABS dashboard hosted by Grafana.
    
    -   Go to `http://localhost:3301/dashboard/import`.
    -   Enter `21375`, select **Load**, then select **Import**.
    
    Dashboard reference: [Aerospike Backup Service dashboard](https://grafana.com/grafana/dashboards/21375-aerospike-backup-service/).
    
7.  Change the Grafana time range to `Last 30 minutes`.
    
    The default `Last 30 days` window can miss very recent metrics in a fresh test environment due to sampling resolution. Grafana is designed more for longer-running trends than short example backup runs, so you may have to trigger a few backups and manually refresh the time range to see updated metrics right away.
    
8.  Trigger another backup and watch progress in Grafana.
    
    ```shell
    curl -X POST http://localhost:8080/v1/backups/full/minioKeepFilesRoutine
    ```
    
    In Grafana, watch the ABS dashboard and confirm these changes:
    
    -   Backup progress: The **% Backup Progress** panel, reflecting the metric `aerospike_backup_service_backup_progress_pct`, increases during a running backup and drops after the job finishes.
    -   Backup events: The **Success** panel, reflecting `aerospike_backup_service_backup_events_total{outcome="success"}`, increases after a successful run.
    -   Error-related events: The **Failures**, **Skip**, and **Retry** panels stay flat when backups complete successfully.
    -   Backup duration: After the job finishes, the **Full - Backup duration** and **Backup duration - full** panels update with data from the completed backup.

You have completed a full setup and test of Aerospike and Aerospike Backup Service!

If all your steps completed successfully, you should have confirmed the following:

-   Your ABS instance can reach the Aerospike source cluster and the backup storage destination.
-   Your backup job requests are executing through the full API-to-storage path.
-   The monitoring pipeline from ABS to Prometheus to Grafana is complete and working.

You can now continue to send more backup and restore requests, monitor them with Prometheus and Grafana, and explore the ABS API further.

When you finish testing, stop and remove the test environment containers. You may have created several gigabytes of test backup files in MinIO depending on the record count you loaded. Run these commands from the `build/docker-compose` directory in the cloned repository.

```shell
docker rm -f abs-prometheus abs-grafana

docker compose down -v
```

## Next steps

-   For installation options outside Docker, see [Install and test ABS](https://aerospike.com/docs/database/tools/backup-and-restore/backup-service/install)
-   For more API workflows, see [API usage examples](https://aerospike.com/docs/database/tools/backup-and-restore/backup-service/api-examples)
-   For additional metrics and alerts, see [ABS Monitoring](https://aerospike.com/docs/database/tools/backup-and-restore/backup-service/monitoring)