
Quiesce a node

This page describes how to quiesce a node for planned maintenance, such as an Aerospike Database upgrade, OS upgrade, or hardware upgrade. It includes step-by-step maintenance procedures for AP namespaces.

When a node is quiesced, it is placed at the end of every partition’s succession list during the next cluster rebalance. For each partition where the quiesced node was the master, this causes it to hand off master ownership of that partition to a non-quiesced node, provided the two nodes have matching full partition versions.

  • In some ways, a quiesced node behaves like it is in the cluster. It accepts transactions, schedules migrations if appropriate, and counts towards determining that a partition is available for strong consistency (SC).

  • Unlike other nodes, a quiesced node never drops its data, even if all migrations complete and leave it with a superfluous partition version. The assumption is that the quiesced node will be taken down and then returned to its prior place as a replica. Keeping its data shortens re-sync: the node needs only a “delta” migration instead of a “fill” migration.
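The succession-list behavior described above can be sketched in a few lines of Python. This is a simplified model for illustration, not Aerospike internals:

```python
# Simplified model of a partition's succession list (not Aerospike internals).
# On rebalance, quiesced nodes move to the end of the list, preserving
# relative order; the head of the list is the partition's master.

def rebalance(succession, quiesced):
    """Return the succession list with quiesced nodes parked at the end."""
    active = [n for n in succession if n not in quiesced]
    parked = [n for n in succession if n in quiesced]
    return active + parked

# Node A is master; quiescing it hands the partition off to B.
print(rebalance(["A", "B", "C"], quiesced={"A"}))  # ['B', 'C', 'A']
```

Because the quiesced node stays in the list (at the end) rather than leaving it, it remains in the cluster and can proxy transactions while clients learn the new partition map.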

Using quiescence for a smooth master handoff

A common use for quiescence is to enable smooth master handoffs.

Normally, when a node is removed from a cluster, it takes a few seconds for the remaining nodes to re-cluster and determine a new master for the partitions whose master was the removed node. This period is the master gap. Some transactions, such as writes and SC reads, may have timeouts shorter than the master gap and so fail to find a master node before timing out. AP reads retry against a replica by default.

Quiescing the node before removing it bridges this gap. If the node is first quiesced and a rebalance is triggered, the master handoff occurs during the rebalance. The quiesced node continues to receive transactions and proxies them to the new master until all clients have discovered the new master and moved to it. After this has happened, the quiesced node can be taken down. The re-clustering will have no master gap, and the burst of timeouts should instead become a burst of proxies. For more information, see the proxy-related metrics such as client_proxy_complete.

Quiescing also improves durability. In an AP cluster with RF=2, if the node is quiesced, a rebalance triggered, and migrations complete before the node is removed, two full copies of every partition remain in the cluster. This protects against data loss if a second node fails before the first one returns. For namespaces with RF=1, waiting for migrations to complete after quiescing is essential, because the single copy of each partition must migrate to another node before shutdown.

Planned maintenance

This section provides procedures for rolling upgrades and planned maintenance on AP (non-strong-consistency) namespaces. In multi-AZ rack-aware deployments, you can alternatively process one rack at a time.

If a procedure is stuck on a verification step, the simplest recovery is to undo the pending operation: unquiesce a quiesced node, reverse a dynamic configuration change, or restart a node that is down. If that does not resolve the issue, see Troubleshooting, search the Support Knowledge Base, or open a support case.

Before you begin

Configure migrate-fill-delay on every node to a value that exceeds the expected time for a single node (or rack) to complete maintenance and rejoin. This prevents unnecessary data movement while a node is temporarily out of the cluster. See Delay migrations for details.

If you set migrate-fill-delay dynamically, the value reverts to the static configuration on node restart. Since planned maintenance involves restarting nodes, set this value in the configuration file (aerospike.conf) so it persists across restarts.
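For example, the static setting goes in the service context of aerospike.conf. The 600-second value shown here is illustrative only; size it to your own maintenance window:

```
service {
    migrate-fill-delay 600    # seconds; illustrative value, size to your window
}
```

The same parameter can also be set dynamically (reverting on restart) with asinfo, for example asinfo -v 'set-config:context=service;migrate-fill-delay=600'.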

Rolling software upgrade (asd restart, no host reboot)

When only the Aerospike daemon is restarted (for example, during a rolling software upgrade), the node can warm restart because the shared memory segments holding the primary and secondary indexes survive. Process one node at a time:

  1. Quiesce the node, then trigger a recluster.

    Terminal window
    Admin+> manage quiesce with <node-ip>
    Admin+> manage recluster

    Verify: show statistics like pending_quiesce shows true on the target:

    Terminal window
    Admin+> show statistics like pending_quiesce
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:10 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node |node1:3000|node2:3000|node3:3000|node4:3000
    pending_quiesce|false |true |false |false
    Number of rows: 2

    After recluster, show statistics like quiesce shows effective_is_quiesced: true on the target and nodes_quiesced: 1 on all nodes:

    Terminal window
    Admin+> show statistics like quiesce
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:20 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node |node1:3000|node2:3000|node3:3000|node4:3000
    effective_is_quiesced|false |true |false |false
    nodes_quiesced |1 |1 |1 |1
    pending_quiesce |false |true |false |false
    Number of rows: 4

    See full output examples for additional detail.

  2. Wait for the quiesce handoff to complete.

    Terminal window
    Admin+> show latencies

    Verify: ops/sec drops to zero on the quiesced node. Then confirm client_proxy_* and batch_sub_proxy_* counters stop incrementing on the quiesced node, and from_proxy_* counters stop on the remaining nodes. See full verification details.

  3. Shut down asd, perform the upgrade, and restart asd. The node warm restarts.

    Terminal window
    $ sudo systemctl stop aerospike
    # ... perform upgrade ...
    $ sudo systemctl start aerospike

    Verify: info network shows the node has rejoined at the expected cluster size.

    Terminal window
    Admin> info network
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:00:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
    | | | | |Size| Key|Integrity| Principal| Conns|
    node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
    node2:3000 | BB90B0CA8BD688A|10.0.3.2:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:00:14
    node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
    node4:3000 |*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
    Number of rows: 4
  4. Wait for migrations to complete before quiescing the next node, so that the next handoff is immediate.

    Terminal window
    Admin+> asinfo -v 'cluster-stable:size=N;ignore-migrations=false'

    Verify: all nodes return the same cluster key.

  5. Repeat from step 1 for the next node.

Planned maintenance with host reboot

When the host itself is rebooted, shared memory is wiped. This means the node cold restarts unless the primary index is persisted beforehand, and any in-memory namespace data without storage-backed persistence is lost. Use ASMT to preserve the primary index for a warm restart and to avoid cold restart side effects (slower startup, potential zombie records). For in-memory namespaces without persistence, you must wait for migrations to complete after the node rejoins before taking down the next node. Process one node at a time:

  1. Quiesce the node, then trigger a recluster.

    Terminal window
    Admin+> manage quiesce with <node-ip>
    Admin+> manage recluster

    Verify: same as rolling upgrade step 1.

  2. Wait for the quiesce handoff to complete.

    Terminal window
    Admin+> show latencies

    Verify: same as rolling upgrade step 2.

  3. Shut down asd.

    Terminal window
    $ sudo systemctl stop aerospike
  4. Back up the indexes of each namespace with asmt. The -z option enables compression, which is recommended for planned maintenance.

    Terminal window
    $ sudo asmt -b -v -z -p <path-to-backup-directory> -n <ns1, ns2, ...>

    See Backing up indexes with ASMT for full output details.

  5. Reboot the host and perform OS or hardware maintenance.

  6. After the host is back, restore the indexes of each namespace with asmt. The -z option is not needed; ASMT auto-detects compressed files.

    Terminal window
    $ sudo asmt -r -v -p <path-to-backup-directory> -n <ns1, ns2, ...>

    See Restoring indexes with ASMT for full output details.

  7. Start asd. The node warm restarts from the restored index instead of cold restarting. The Aerospike log confirms this with beginning warm restart for each namespace (instead of beginning cold start).

    Terminal window
    $ sudo systemctl start aerospike

    Verify: info network shows the node has rejoined at the expected cluster size.

    Terminal window
    Admin> info network
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:00:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
    | | | | |Size| Key|Integrity| Principal| Conns|
    node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
    node2:3000 | BB90B0CA8BD688A|10.0.3.2:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:00:14
    node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
    node4:3000 |*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
    Number of rows: 4
  8. Wait for migrations to complete.

    Terminal window
    Admin+> asinfo -v 'cluster-stable:size=N;ignore-migrations=false'

    Verify: all nodes return the same cluster key.

  9. Repeat from step 1 for the next node.

Rack at a time (multi-AZ rack-aware deployments)

In a rack-aware cluster deployed across multiple availability zones, you can take down a full rack at a time instead of one node at a time. Set migrate-fill-delay to cover the entire rack maintenance window. For two-rack deployments, the active-rack optimization below can skip the quiesce step for the passive rack.

  1. Quiesce all nodes in the target rack and trigger a recluster.

    Terminal window
    Admin+> manage quiesce with <node-ip-1>
    Admin+> manage quiesce with <node-ip-2>
    ...
    Admin+> manage recluster

    Verify: show statistics like quiesce shows effective_is_quiesced: true on the quiesced nodes and nodes_quiesced equals the number of quiesced nodes on all nodes.

    Terminal window
    Admin+> show statistics like quiesce
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-14 00:41:02 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node |node1:3000|node2:3000|node3:3000|node4:3000|node5:3000|node6:3000
    effective_is_quiesced|false |false |true |true |false |false
    nodes_quiesced |2 |2 |2 |2 |2 |2
    pending_quiesce |false |false |true |true |false |false
    Number of rows: 4
  2. Wait for the quiesce handoff. Verify no traffic or proxies reach the quiesced nodes.

    Terminal window
    Admin+> show latencies
  3. Shut down asd on each node in the rack. If hosts will be rebooted, use ASMT to back up the indexes of each namespace before rebooting and restore them afterward.

    Terminal window
    $ sudo systemctl stop aerospike
    # If rebooting:
    $ sudo asmt -b -v -z -p <path-to-backup-directory> -n <ns1, ns2, ...>
    # ... reboot and perform maintenance ...
    $ sudo asmt -r -v -p <path-to-backup-directory> -n <ns1, ns2, ...>
  4. Perform maintenance on the rack’s hosts.

  5. Start asd on each node. Verify the nodes rejoin the cluster.

    Terminal window
    $ sudo systemctl start aerospike

    Verify: info network shows all nodes have rejoined at the expected cluster size.

    Terminal window
    Admin> info network
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:48:17 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
    | | | | |Size| Key|Integrity| Principal| Conns|
    node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 6|76772BDF405A|True |BB9E312FA6C9B28| 5|01:11:25
    node2:3000 |*BB9D787F6BAF3D6|10.0.3.2:3000|E-8.1.1.2| 0.000 | 6|76772BDF405A|True |BB9E312FA6C9B28| 5|01:11:25
    node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 6|76772BDF405A|True |BB9E312FA6C9B28| 5|00:00:14
    node4:3000 | BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 6|76772BDF405A|True |BB9E312FA6C9B28| 5|00:00:14
    node5:3000 | BB9A63D2E17F490|10.0.3.5:3000|E-8.1.1.2| 0.000 | 6|76772BDF405A|True |BB9E312FA6C9B28| 5|01:11:25
    node6:3000 | BB9E312FA6C9B28|10.0.3.6:3000|E-8.1.1.2| 0.000 | 6|76772BDF405A|True |BB9E312FA6C9B28| 5|01:11:25
    Number of rows: 6
  6. Wait for migrations to complete. Use cluster-stable to check. It returns ERROR while migrations are in progress and the cluster key when they are done. Run it periodically until all nodes return the same key:

    Terminal window
    Admin+> asinfo -v 'cluster-stable:size=6;ignore-migrations=false'
    node1:3000 (10.0.3.1) returned:
    76772BDF405A
    node2:3000 (10.0.3.2) returned:
    76772BDF405A
    node3:3000 (10.0.3.3) returned:
    76772BDF405A
    node4:3000 (10.0.3.4) returned:
    76772BDF405A
    node5:3000 (10.0.3.5) returned:
    76772BDF405A
    node6:3000 (10.0.3.6) returned:
    76772BDF405A
  7. Repeat for the next rack.

Optimization with active-rack

Starting with Database 7.2.0, the active-rack feature can shorten the rack-at-a-time procedure for two equally sized racks. When active-rack is configured, the designated active rack holds all master partitions. The passive rack holds only replicas. This means the quiesce step can be skipped for the passive rack: since it has no masters, there is no master handoff needed, so taking it down does not cause a master gap.
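Why the passive rack needs no quiesce step can be seen in a small sketch. This is a simplified model for illustration, not Aerospike internals:

```python
# Simplified model (not Aerospike internals): with active-rack, the head of
# every partition's succession list is on the active rack, so removing the
# passive rack's nodes never changes any partition's master.

def masters_after_removal(succession_lists, removed):
    """Head of each succession list after the removed nodes drop out."""
    return [next(n for n in sl if n not in removed) for sl in succession_lists]

# Rack 1 = {A, B} is active: every list starts with a rack-1 node.
partitions = [["A", "C"], ["B", "D"], ["A", "D"], ["B", "C"]]
before = [sl[0] for sl in partitions]
after = masters_after_removal(partitions, removed={"C", "D"})  # rack 2 down
assert before == after  # no master handoff, hence no master gap
```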

Procedure with active-rack:

  1. Designate the rack that will stay up as active-rack. Wait for migrations to complete (masters move to the active rack). See Dynamically enable active rack using asadm.

    Terminal window
    Admin+> manage config namespace <ns> param active-rack to <rack-id>
    Admin+> manage recluster

    Verify: asinfo -v 'cluster-stable:size=N;ignore-migrations=false' returns the same cluster key on all nodes (migrations complete). Use show pmap to confirm all Primary partitions are on the active rack’s nodes.

  2. Shut down asd on the passive rack’s nodes. If hosts will be rebooted, use ASMT to back up and later restore the indexes of each namespace.

    Terminal window
    $ sudo systemctl stop aerospike
    # If rebooting:
    $ sudo asmt -b -v -z -p <path-to-backup-directory> -n <ns1, ns2, ...>
    # ... reboot and perform maintenance ...
    $ sudo asmt -r -v -p <path-to-backup-directory> -n <ns1, ns2, ...>
  3. Perform maintenance on the passive rack’s hosts.

  4. Start asd. The nodes rejoin automatically.

    Terminal window
    $ sudo systemctl start aerospike

    Verify: info network shows all nodes have rejoined at the expected cluster size.

    Terminal window
    Admin> info network
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 01:12:45 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
    | | | | |Size| Key|Integrity| Principal| Conns|
    node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 4|3A38381ADD12|True |BB9D48DF5A70CEE| 5|01:35:53
    node2:3000 |*BB9D787F6BAF3D6|10.0.3.2:3000|E-8.1.1.2| 0.000 | 4|3A38381ADD12|True |BB9D48DF5A70CEE| 5|01:35:53
    node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 4|3A38381ADD12|True |BB9D48DF5A70CEE| 5|00:00:14
    node4:3000 | BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 4|3A38381ADD12|True |BB9D48DF5A70CEE| 5|00:00:14
    Number of rows: 4
  5. Wait for migrations to complete before switching active-rack. Use cluster-stable to check. It returns ERROR while migrations are in progress and the cluster key when they are done. Run it periodically until all nodes return the same key:

    Terminal window
    Admin+> asinfo -v 'cluster-stable:size=4;ignore-migrations=false'
    node1:3000 (10.0.3.1) returned:
    3A38381ADD12
    node2:3000 (10.0.3.2) returned:
    3A38381ADD12
    node3:3000 (10.0.3.3) returned:
    3A38381ADD12
    node4:3000 (10.0.3.4) returned:
    3A38381ADD12
  6. Switch active-rack to point to the now-maintained rack. Wait for migrations to complete (masters move to the new active rack).

    Terminal window
    Admin+> manage config namespace <ns> param active-rack to <other-rack-id>
    Admin+> manage recluster

    Verify: same as step 1. asinfo -v 'cluster-stable:size=N;ignore-migrations=false' returns the same cluster key on all nodes. Use show pmap to confirm all Primary partitions have moved to the new active rack.

  7. Repeat steps 2-5 for the other rack.

Quiesce verification reference

This section provides detailed asadm output examples for each step of the quiesce-based maintenance workflow described in Planned maintenance above. Refer to these examples when you want to see full command output.

Verify the cluster is stable

Use the info network command to ensure there are no migrations, all nodes are in the cluster, and all nodes show the same cluster key.

Terminal window
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-13 23:57:26 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 4|AA5AF50552AF|True |BB9D787F6BAF3D6| 5|00:20:34
node2:3000 |*BB9D787F6BAF3D6|10.0.3.2:3000|E-8.1.1.2| 0.000 | 4|AA5AF50552AF|True |BB9D787F6BAF3D6| 5|00:20:34
node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 4|AA5AF50552AF|True |BB9D787F6BAF3D6| 5|00:20:34
node4:3000 | BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 4|AA5AF50552AF|True |BB9D787F6BAF3D6| 5|00:20:34
Number of rows: 4

Quiesce a node

The quiesce command can be issued from asadm or directly through asinfo. Direct it to the target node using the with modifier:

Terminal window
Admin+> manage quiesce with node2:3000
~~~~~~~~~~~Quiesce Nodes~~~~~~~~~~~~
Node|Response
node2:3000 |ok
Number of rows: 1
Run "manage recluster" for your changes to take effect.

Verify by checking pending_quiesce. It should be true on the target node:

Terminal window
Admin+> show statistics like pending_quiesce
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:10 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |node1:3000|node2:3000|node3:3000|node4:3000
pending_quiesce|false |true |false |false
Number of rows: 2

Recluster

This triggers the master handoff from quiesced nodes and starts migrations.

Terminal window
Admin+> manage recluster
Successfully started recluster

Verify by checking effective_is_quiesced and nodes_quiesced:

Terminal window
Admin+> show statistics like quiesce
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:20 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |node1:3000|node2:3000|node3:3000|node4:3000
effective_is_quiesced|false |true |false |false
nodes_quiesced |1 |1 |1 |1
pending_quiesce |false |true |false |false
Number of rows: 4

Verify quiesce handoff

After reclustering, verify that the quiesced node is no longer receiving traffic and that proxy counters have settled.

1) No active traffic on the quiesced node

Use show latencies to confirm read and write ops/sec are zero on the quiesced node. Prefix with watch to auto-repeat the command until you see the values settle:

Terminal window
Admin+> watch 5 10 show latencies
[ 'show latencies' sleep: 5.0s iteration: 1 ]
~~~~~~~~~~~~~~~~~~~Latency (2026-04-13 23:58:28 UTC)~~~~~~~~~~~~~~~~~~~
Namespace|Histogram| Node|ops/sec|>1ms |>8ms|>64ms
test |read |node1:3000 | 3580.0| 0.0| 0.0| 0.0
test |read |node2:3000 | 0.0| 0.0| 0.0| 0.0
test |read |node3:3000 | 3606.7| 0.0| 0.0| 0.0
test |read |node4:3000 | 3461.6| 0.0| 0.0| 0.0
| | |10648.3| 0.0| 0.0| 0.0
test |write |node1:3000 | 3616.0| 0.02| 0.0| 0.0
test |write |node2:3000 | 0.0| 0.0| 0.0| 0.0
test |write |node3:3000 | 3546.5| 0.03| 0.0| 0.0
test |write |node4:3000 | 3467.9| 0.02| 0.0| 0.0
| | |10630.4| 0.02| 0.0| 0.0
Number of rows: 8

2) Proxy counters stopped on the quiesced node

There is typically a second or two of proxy transactions as clients retrieve the updated partition map. Confirm the client_proxy_* and batch_sub_proxy_* counters are no longer incrementing on the quiesced node. Use watch to run the command repeatedly and verify the counters are stable across iterations:

Terminal window
Admin+> watch 5 5 show statistics like client_proxy
[ 'show statistics like client_proxy' sleep: 5.0s iteration: 1 ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:33 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |node1:3000|node2:3000|node3:3000|node4:3000
client_proxy_complete|34 |2044 |93 |86
client_proxy_error |0 |0 |0 |0
client_proxy_timeout |0 |8 |0 |0
Number of rows: 4
Terminal window
Admin+> show statistics like batch_sub_proxy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:33 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |node1:3000|node2:3000|node3:3000|node4:3000
batch_sub_proxy_complete|0 |0 |0 |0
batch_sub_proxy_error |0 |0 |0 |0
batch_sub_proxy_timeout |0 |0 |0 |0
Number of rows: 4

3) No proxy traffic arriving at non-quiesced nodes

On the remaining nodes, verify the from_proxy_* counters have stopped incrementing. Use watch to confirm the values are stable:

Terminal window
Admin+> watch 5 5 show statistics like from_proxy_read_success
[ 'show statistics like from_proxy_read_success' sleep: 5.0s iteration: 1 ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:38 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |node1:3000|node2:3000|node3:3000|node4:3000
from_proxy_read_success|383 |0 |393 |324
Number of rows: 2
Admin+> show statistics like from_proxy_write_success
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:38 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |node1:3000|node2:3000|node3:3000|node4:3000
from_proxy_write_success|393 |0 |371 |393
Number of rows: 2
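The stability check across watch iterations amounts to comparing two counter snapshots. A hedged sketch of that comparison (the snapshot dictionaries and helper name are illustrative, not an Aerospike API):

```python
# Illustrative helper: compare two snapshots of per-node statistics and
# report whether the proxy counters have stopped incrementing.

PROXY_PREFIXES = ("client_proxy_", "batch_sub_proxy_", "from_proxy_")

def proxies_settled(before, after, prefixes=PROXY_PREFIXES):
    """True if every proxy counter is unchanged between the two snapshots."""
    return all(
        after.get(name, 0) == value
        for name, value in before.items()
        if name.startswith(prefixes)
    )

snap1 = {"client_proxy_complete": 2044, "client_proxy_timeout": 8}
snap2 = {"client_proxy_complete": 2044, "client_proxy_timeout": 8}
print(proxies_settled(snap1, snap2))  # True: handoff is complete
```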

Stop and start asd

Terminal window
$ sudo systemctl stop aerospike

Verify with info network. The cluster should show one fewer node:

Terminal window
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-13 23:58:58 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 3|FC7125ECF349|True |BB9D48DF5A70CEE| 4|00:22:06
node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 3|FC7125ECF349|True |BB9D48DF5A70CEE| 4|00:22:06
node4:3000 |*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 3|FC7125ECF349|True |BB9D48DF5A70CEE| 5|00:22:06
Number of rows: 3

After maintenance, start asd and verify it rejoins:

Terminal window
$ sudo systemctl start aerospike
Terminal window
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:00:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
node1:3000 | BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
node2:3000 | BB90B0CA8BD688A|10.0.3.2:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:00:14
node3:3000 | BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
node4:3000 |*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2| 0.000 | 4|3866DA39491B|True |BB9D48DF5A70CEE| 5|00:23:37
Number of rows: 4

Wait for migrations

Use cluster-stable to confirm migrations are complete. All nodes should return the same cluster key. Use watch to poll periodically until they converge:

After a quiesce-based maintenance, migrations typically consist only of lead migrations and complete quickly. After a cold restart or when data must be repopulated (for example, in-memory namespaces without persistence after a reboot), migrations take longer.

Terminal window
Admin+> watch 10 30 asinfo -v 'cluster-stable:size=4;ignore-migrations=false'
[ 'asinfo -v cluster-stable:size=4;ignore-migrations=false' sleep: 10.0s iteration: 1 ]
node1:3000 (10.0.3.1) returned:
3866DA39491B
node3:3000 (10.0.3.3) returned:
3866DA39491B
node2:3000 (10.0.3.2) returned:
3866DA39491B
node4:3000 (10.0.3.4) returned:
3866DA39491B
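The convergence condition the watch loop is checking can be expressed compactly. A hedged sketch (the error-string format is illustrative):

```python
# Illustrative check: the cluster is stable once every node answers
# cluster-stable with the same cluster key and none returns an error.

def cluster_stable(responses):
    """responses: mapping of node -> string returned by cluster-stable."""
    keys = set(responses.values())
    return len(keys) == 1 and not any(k.startswith("ERROR") for k in keys)

print(cluster_stable({"node1": "3866DA39491B", "node2": "3866DA39491B"}))  # True
print(cluster_stable({"node1": "3866DA39491B", "node2": "ERROR::unstable"}))  # False
```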