Quiesce a node
This page describes how to quiesce a node for planned maintenance, such as an Aerospike Database upgrade, OS upgrade, or hardware upgrade. It includes step-by-step maintenance procedures for AP namespaces.
When a node is quiesced, it is placed at the end of every partition’s succession list during the next cluster rebalance. For each partition where the quiesced node was the master, this causes it to hand off master ownership of that partition to a non-quiesced node, provided the two nodes have matching full partition versions.
- In some ways, a quiesced node behaves as if it were still a normal cluster member. It accepts transactions, schedules migrations when appropriate, and counts toward determining that a partition is available for strong consistency (SC).
- One difference is that a quiesced node never drops its data, even if all migrations complete and leave it with a superfluous partition version. The assumption is that the quiesced node will be taken down and then returned to its prior place as a replica. Keeping the data shortens re-sync: the node needs only a "delta" migration instead of a "fill" migration.
Using quiescence for a smooth master handoff
A common use for quiescence is to enable smooth master handoffs.
Normally, when a node is removed from a cluster, the remaining nodes take a few seconds to re-cluster and determine a new master for the partitions that were mastered on the removed node. This period is the master gap. Transactions such as writes and SC reads may have timeouts shorter than the master gap, and will fail to find a master node within that timeout. AP reads retry against a replica by default, so they are less affected.
Quiescing the node to be removed can fill this gap. If it is first quiesced, and a rebalance is triggered, the master handoff will occur during the rebalance. The quiesced node will continue to receive transactions and proxy them to the new master until all the clients have discovered the new master and moved to it. After this has happened, the quiesced node can be taken down. The re-clustering will not have a master gap and the burst of timeouts should instead become a burst of proxies. For more information, see the proxy-related metrics such as client_proxy_complete.
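As a rough way to confirm the proxy burst has settled before shutting the node down, you can sample a proxy counter twice. This is an illustrative sketch, not part of the documented procedure: it assumes `asinfo` is installed on the quiesced node, a namespace named `test`, and that two samples five seconds apart are representative.

```shell
# Hedged sketch: decide whether a counter is still incrementing by
# comparing two samples taken a few seconds apart.
proxy_settled() {
    # $1 = earlier sample, $2 = later sample (integers)
    if [ "$1" -eq "$2" ]; then echo settled; else echo incrementing; fi
}

# Only attempt live sampling when asinfo is available (assumption:
# run on the quiesced node, namespace "test").
if command -v asinfo >/dev/null 2>&1; then
    a=$(asinfo -v 'namespace/test' -l | awk -F= '/^client_proxy_complete=/ {print $2}')
    sleep 5
    b=$(asinfo -v 'namespace/test' -l | awk -F= '/^client_proxy_complete=/ {print $2}')
    proxy_settled "$a" "$b"
fi
```

When the function reports `settled` across a few consecutive samples, clients have moved to the new master and the node can be taken down.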
Quiescing also improves durability. In an AP cluster with RF=2, if the node is quiesced, a rebalance triggered, and migrations complete before the node is removed, two full copies of every partition remain in the cluster. This protects against data loss if a second node fails before the first one returns. For namespaces with RF=1, waiting for migrations to complete after quiescing is essential, because the single copy of each partition must migrate to another node before shutdown.
Planned maintenance
This section provides procedures for rolling upgrades and planned maintenance on AP (non-strong-consistency) namespaces. In multi-AZ rack-aware deployments, you can alternatively process one rack at a time.
If a procedure is stuck on a verification step, the simplest recovery is to undo the pending operation: unquiesce a quiesced node, reverse a dynamic configuration change, or restart a node that is down. If that does not resolve the issue, see Troubleshooting, search the Support Knowledge Base, or open a support case.
Before you begin
Configure migrate-fill-delay on every node to a value that exceeds the expected time for a single node (or rack) to complete maintenance and rejoin. This prevents unnecessary data movement while a node is temporarily out of the cluster. See Delay migrations for details.
If you set migrate-fill-delay dynamically, the value reverts to the static configuration on node restart. Since planned maintenance involves restarting nodes, set this value in the configuration file (aerospike.conf) so it persists across restarts.
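For example, the static setting might look like the following fragment. The one-hour value is purely illustrative; size it to exceed your actual maintenance window.

```
# /etc/aerospike/aerospike.conf -- illustrative fragment
service {
    # Suppress "fill" migrations for 3600 seconds (example value) so a
    # node can leave and rejoin without triggering bulk data movement.
    migrate-fill-delay 3600
}
```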
Rolling software upgrade (asd restart, no host reboot)
When only the Aerospike daemon is restarted (for example, during a rolling software upgrade), the node can warm restart because the shared memory segments holding the primary and secondary indexes survive. Process one node at a time:
1. Quiesce the node, then trigger a recluster.

   ```
   Admin+> manage quiesce with <node-ip>
   Admin+> manage recluster
   ```

   Verify: `show statistics like pending_quiesce` shows `true` on the target:

   ```
   Admin+> show statistics like pending_quiesce
   ~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:10 UTC)~~~~~~~~~~~
   Node           |node1:3000|node2:3000|node3:3000|node4:3000
   pending_quiesce|false     |true      |false     |false
   Number of rows: 2
   ```

   After recluster, `show statistics like quiesce` shows `effective_is_quiesced: true` on the target and `nodes_quiesced: 1` on all nodes:

   ```
   Admin+> show statistics like quiesce
   ~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:20 UTC)~~~~~~~~~~~
   Node                 |node1:3000|node2:3000|node3:3000|node4:3000
   effective_is_quiesced|false     |true      |false     |false
   nodes_quiesced       |1         |1         |1         |1
   pending_quiesce      |false     |true      |false     |false
   Number of rows: 4
   ```

   See the full output examples for additional detail.
2. Wait for the quiesce handoff to complete.

   ```
   Admin+> show latencies
   ```

   Verify: ops/sec drops to zero on the quiesced node. Then confirm that the `client_proxy_*` and `batch_sub_proxy_*` counters stop incrementing on the quiesced node, and that the `from_proxy_*` counters stop incrementing on the remaining nodes. See the full verification details.
3. Shut down `asd`, perform the upgrade, and restart `asd`. The node warm restarts.

   ```
   $ sudo systemctl stop aerospike
   # ... perform upgrade ...
   $ sudo systemctl start aerospike
   ```

   Verify: `info network` shows the node has rejoined at the expected cluster size.

   ```
   Admin> info network
   ~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:00:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
         Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
   node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
   node2:3000| BB90B0CA8BD688A|10.0.3.2:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:00:14
   node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
   node4:3000|*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
   Number of rows: 4
   ```
4. Wait for migrations to complete before quiescing the next node, so that the next handoff is immediate.

   ```
   Admin+> asinfo -v 'cluster-stable:size=N;ignore-migrations=false'
   ```

   Verify: all nodes return the same cluster key.
5. Repeat from step 1 for the next node.
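The migration wait can be scripted rather than checked by hand. The following is a hedged sketch, not part of the documented procedure: it assumes `asadm` is on the PATH, a four-node cluster, and the `nodeX:3000 (...) returned:KEY` output format shown in the verification reference; adjust to your environment.

```shell
# Hedged sketch: poll cluster-stable until every node reports the same
# cluster key. ERROR responses are treated as "not stable yet".
same_cluster_key() {
    # Reads 'node... returned:KEY' lines on stdin; prints stable/unstable.
    awk -F'returned:' '
        $2 == ""       { next }
        $2 ~ /ERROR/   { err = 1; next }
        !($2 in seen)  { seen[$2]; n++ }
        END { if (n == 1 && !err) print "stable"; else print "unstable" }'
}

# Only poll when asadm is available (assumption: local admin host).
if command -v asadm >/dev/null 2>&1; then
    until [ "$(asadm -e "asinfo -v 'cluster-stable:size=4;ignore-migrations=false'" \
               | same_cluster_key)" = "stable" ]; do
        sleep 10
    done
    echo "migrations complete"
fi
```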
Planned maintenance with host reboot
When the host itself is rebooted, shared memory is wiped. This means the node cold restarts unless the primary index is persisted beforehand, and any in-memory namespace data without storage-backed persistence is lost. Use ASMT to preserve the primary index for a warm restart and to avoid cold restart side effects (slower startup, potential zombie records). For in-memory namespaces without persistence, you must wait for migrations to complete after the node rejoins before taking down the next node. Process one node at a time:
1. Quiesce the node, then trigger a recluster.

   ```
   Admin+> manage quiesce with <node-ip>
   Admin+> manage recluster
   ```

   Verify: same as rolling upgrade step 1.
2. Wait for the quiesce handoff to complete.

   ```
   Admin+> show latencies
   ```

   Verify: same as rolling upgrade step 2.
3. Shut down `asd`.

   ```
   $ sudo systemctl stop aerospike
   ```
4. Back up the indexes of each namespace with `asmt`. The `-z` option enables compression, which is recommended for planned maintenance.

   ```
   $ sudo asmt -b -v -z -p <path-to-backup-directory> -n <ns1, ns2, ...>
   ```

   See Backing up indexes with ASMT for full output details.
5. Reboot the host and perform OS or hardware maintenance.
6. After the host is back, restore the indexes of each namespace with `asmt`. The `-z` option is not needed; ASMT auto-detects compressed files.

   ```
   $ sudo asmt -r -v -p <path-to-backup-directory> -n <ns1, ns2, ...>
   ```

   See Restoring indexes with ASMT for full output details.
7. Start `asd`. The node warm restarts from the restored index instead of cold restarting. The Aerospike log confirms this with `beginning warm restart` for each namespace (instead of `beginning cold start`).

   ```
   $ sudo systemctl start aerospike
   ```

   Verify: `info network` shows the node has rejoined at the expected cluster size.

   ```
   Admin> info network
   ~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:00:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
         Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
   node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
   node2:3000| BB90B0CA8BD688A|10.0.3.2:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:00:14
   node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
   node4:3000|*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
   Number of rows: 4
   ```
8. Wait for migrations to complete.

   ```
   Admin+> asinfo -v 'cluster-stable:size=N;ignore-migrations=false'
   ```

   Verify: all nodes return the same cluster key.
9. Repeat from step 1 for the next node.
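Whether the node actually warm restarted can also be checked mechanically from the log. A hedged sketch, assuming the default log location `/var/log/aerospike/aerospike.log` (your deployment may log elsewhere, for example to journald):

```shell
# Hedged sketch: report the most recent restart type recorded in the log.
warm_or_cold() {
    # Reads log text on stdin; prints the last restart marker found.
    grep -Eo 'beginning (warm restart|cold start)' | tail -n 1
}

LOG=/var/log/aerospike/aerospike.log   # assumption: default log path
if [ -r "$LOG" ]; then
    warm_or_cold < "$LOG"
fi
```

Seeing `beginning cold start` after restoring the index with ASMT indicates the restore did not take effect and startup will be slower.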
Rack at a time (multi-AZ rack-aware deployments)
In a rack-aware cluster deployed across multiple availability zones, you can take down a full rack at a time instead of one node at a time. Set migrate-fill-delay to cover the entire rack maintenance window. For two-rack deployments, the active-rack optimization below can skip the quiesce step for the passive rack.
1. Quiesce all nodes in the target rack and trigger a recluster.

   ```
   Admin+> manage quiesce with <node-ip-1>
   Admin+> manage quiesce with <node-ip-2>
   ...
   Admin+> manage recluster
   ```

   Verify: `show statistics like quiesce` shows `effective_is_quiesced: true` on the quiesced nodes, and `nodes_quiesced` equals the number of quiesced nodes on all nodes.

   ```
   Admin+> show statistics like quiesce
   ~~~~~~~~~~~test Namespace Statistics (2026-04-14 00:41:02 UTC)~~~~~~~~~~~
   Node                 |node1:3000|node2:3000|node3:3000|node4:3000|node5:3000|node6:3000
   effective_is_quiesced|false     |false     |true      |true      |false     |false
   nodes_quiesced       |2         |2         |2         |2         |2         |2
   pending_quiesce      |false     |false     |true      |true      |false     |false
   Number of rows: 4
   ```
2. Wait for the quiesce handoff. Verify that no traffic or proxies reach the quiesced nodes.

   ```
   Admin+> show latencies
   ```
3. Shut down `asd` on each node in the rack. If hosts will be rebooted, use ASMT to back up the indexes of each namespace before rebooting and restore them afterward.

   ```
   $ sudo systemctl stop aerospike
   # If rebooting:
   $ sudo asmt -b -v -z -p <path-to-backup-directory> -n <ns1, ns2, ...>
   # ... reboot and perform maintenance ...
   $ sudo asmt -r -v -p <path-to-backup-directory> -n <ns1, ns2, ...>
   ```
4. Perform maintenance on the rack’s hosts.
5. Start `asd` on each node. Verify the nodes rejoin the cluster.

   ```
   $ sudo systemctl start aerospike
   ```

   Verify: `info network` shows all nodes have rejoined at the expected cluster size.

   ```
   Admin> info network
   ~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:48:17 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
         Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
   node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           6|76772BDF405A|     True|BB9E312FA6C9B28|           5|01:11:25
   node2:3000|*BB9D787F6BAF3D6|10.0.3.2:3000|E-8.1.1.2|     0.000|           6|76772BDF405A|     True|BB9E312FA6C9B28|           5|01:11:25
   node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           6|76772BDF405A|     True|BB9E312FA6C9B28|           5|00:00:14
   node4:3000| BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           6|76772BDF405A|     True|BB9E312FA6C9B28|           5|00:00:14
   node5:3000| BB9A63D2E17F490|10.0.3.5:3000|E-8.1.1.2|     0.000|           6|76772BDF405A|     True|BB9E312FA6C9B28|           5|01:11:25
   node6:3000| BB9E312FA6C9B28|10.0.3.6:3000|E-8.1.1.2|     0.000|           6|76772BDF405A|     True|BB9E312FA6C9B28|           5|01:11:25
   Number of rows: 6
   ```
6. Wait for migrations to complete. Use `cluster-stable` to check: it returns `ERROR` while migrations are in progress and the cluster key when they are done. Run it periodically until all nodes return the same key:

   ```
   Admin+> asinfo -v 'cluster-stable:size=6;ignore-migrations=false'
   node1:3000 (10.0.3.1) returned:
   76772BDF405A
   node2:3000 (10.0.3.2) returned:
   76772BDF405A
   node3:3000 (10.0.3.3) returned:
   76772BDF405A
   node4:3000 (10.0.3.4) returned:
   76772BDF405A
   node5:3000 (10.0.3.5) returned:
   76772BDF405A
   node6:3000 (10.0.3.6) returned:
   76772BDF405A
   ```
7. Repeat for the next rack.
Optimization with active-rack
Starting with Database 7.2.0, the active-rack feature can shorten the rack-at-a-time procedure for two equally sized racks. When active-rack is configured, the designated active rack holds all master partitions. The passive rack holds only replicas. This means the quiesce step can be skipped for the passive rack: since it has no masters, there is no master handoff needed, so taking it down does not cause a master gap.
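active-rack can be set dynamically (as in the procedure below) or statically in the configuration file. An illustrative fragment, assuming rack IDs 1 and 2 and a namespace named test (both assumptions):

```
# /etc/aerospike/aerospike.conf -- illustrative fragment
namespace test {
    replication-factor 2
    rack-id 1        # this node is in rack 1 (assumption)
    active-rack 1    # rack 1 holds all master partitions
    # ... remaining namespace configuration ...
}
```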
Procedure with active-rack:
1. Designate the rack that will stay up as `active-rack`. Wait for migrations to complete (masters move to the active rack). See Dynamically enable active rack using asadm.

   ```
   Admin+> manage config namespace <ns> param active-rack to <rack-id>
   Admin+> manage recluster
   ```

   Verify: `asinfo -v 'cluster-stable:size=N;ignore-migrations=false'` returns the same cluster key on all nodes (migrations complete). Use `show pmap` to confirm all primary partitions are on the active rack’s nodes.
2. Shut down `asd` on the passive rack’s nodes. If hosts will be rebooted, use ASMT to back up and later restore the indexes of each namespace.

   ```
   $ sudo systemctl stop aerospike
   # If rebooting:
   $ sudo asmt -b -v -z -p <path-to-backup-directory> -n <ns1, ns2, ...>
   # ... reboot and perform maintenance ...
   $ sudo asmt -r -v -p <path-to-backup-directory> -n <ns1, ns2, ...>
   ```
3. Perform maintenance on the passive rack’s hosts.
4. Start `asd`. The nodes rejoin automatically.

   ```
   $ sudo systemctl start aerospike
   ```

   Verify: `info network` shows all nodes have rejoined at the expected cluster size.

   ```
   Admin> info network
   ~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 01:12:45 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
         Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
   node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           4|3A38381ADD12|     True|BB9D48DF5A70CEE|           5|01:35:53
   node2:3000|*BB9D787F6BAF3D6|10.0.3.2:3000|E-8.1.1.2|     0.000|           4|3A38381ADD12|     True|BB9D48DF5A70CEE|           5|01:35:53
   node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           4|3A38381ADD12|     True|BB9D48DF5A70CEE|           5|00:00:14
   node4:3000| BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           4|3A38381ADD12|     True|BB9D48DF5A70CEE|           5|00:00:14
   Number of rows: 4
   ```
5. Wait for migrations to complete before switching `active-rack`. Use `cluster-stable` to check: it returns `ERROR` while migrations are in progress and the cluster key when they are done. Run it periodically until all nodes return the same key:

   ```
   Admin+> asinfo -v 'cluster-stable:size=4;ignore-migrations=false'
   node1:3000 (10.0.3.1) returned:
   3A38381ADD12
   node2:3000 (10.0.3.2) returned:
   3A38381ADD12
   node3:3000 (10.0.3.3) returned:
   3A38381ADD12
   node4:3000 (10.0.3.4) returned:
   3A38381ADD12
   ```
6. Switch `active-rack` to the now-maintained rack. Wait for migrations to complete (masters move to the new active rack).

   ```
   Admin+> manage config namespace <ns> param active-rack to <other-rack-id>
   Admin+> manage recluster
   ```

   Verify: same as step 1. `asinfo -v 'cluster-stable:size=N;ignore-migrations=false'` returns the same cluster key on all nodes. Use `show pmap` to confirm all primary partitions have moved to the new active rack.
7. Repeat steps 2-5 for the other rack.
Quiesce verification reference
This section provides detailed asadm output examples for each step of the quiesce-based maintenance workflow described in Planned maintenance above. Refer to these examples when you want to see full command output.
Verify the cluster is stable
Use the info network command to ensure there are no migrations, all nodes are in the cluster, and all nodes show the same cluster key.
```
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-13 23:57:26 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
      Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           4|AA5AF50552AF|     True|BB9D787F6BAF3D6|           5|00:20:34
node2:3000|*BB9D787F6BAF3D6|10.0.3.2:3000|E-8.1.1.2|     0.000|           4|AA5AF50552AF|     True|BB9D787F6BAF3D6|           5|00:20:34
node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           4|AA5AF50552AF|     True|BB9D787F6BAF3D6|           5|00:20:34
node4:3000| BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           4|AA5AF50552AF|     True|BB9D787F6BAF3D6|           5|00:20:34
Number of rows: 4
```

Quiesce a node
The quiesce command can be issued from asadm or directly through asinfo. Direct it to the target node using the with modifier:
```
Admin+> manage quiesce with node2:3000
~~~~~~~~~~~Quiesce Nodes~~~~~~~~~~~~
      Node|Response
node2:3000|ok
Number of rows: 1

Run "manage recluster" for your changes to take effect.
```

Verify by checking `pending_quiesce`. It should be `true` on the target node:

```
Admin+> show statistics like pending_quiesce
~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:10 UTC)~~~~~~~~~~~
Node           |node1:3000|node2:3000|node3:3000|node4:3000
pending_quiesce|false     |true      |false     |false
Number of rows: 2
```

Recluster
This triggers the master handoff from quiesced nodes and starts migrations.
```
Admin+> manage recluster
Successfully started recluster
```

Verify by checking `effective_is_quiesced` and `nodes_quiesced`:

```
Admin+> show statistics like quiesce
~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:20 UTC)~~~~~~~~~~~
Node                 |node1:3000|node2:3000|node3:3000|node4:3000
effective_is_quiesced|false     |true      |false     |false
nodes_quiesced       |1         |1         |1         |1
pending_quiesce      |false     |true      |false     |false
Number of rows: 4
```

Verify quiesce handoff
After reclustering, verify that the quiesced node is no longer receiving traffic and that proxy counters have settled.
1) No active traffic on the quiesced node
Use show latencies to confirm read and write ops/sec are zero on the quiesced node. Prefix with watch to auto-repeat the command until you see the values settle:
```
Admin+> watch 5 10 show latencies
[ 'show latencies' sleep: 5.0s iteration: 1 ]
~~~~~~~~~~~~~~~Latency (2026-04-13 23:58:28 UTC)~~~~~~~~~~~~~~~
Namespace|Histogram|      Node|ops/sec|>1ms|>8ms|>64ms
test     |read     |node1:3000| 3580.0| 0.0| 0.0|  0.0
test     |read     |node2:3000|    0.0| 0.0| 0.0|  0.0
test     |read     |node3:3000| 3606.7| 0.0| 0.0|  0.0
test     |read     |node4:3000| 3461.6| 0.0| 0.0|  0.0
         |         |          |10648.3| 0.0| 0.0|  0.0
test     |write    |node1:3000| 3616.0|0.02| 0.0|  0.0
test     |write    |node2:3000|    0.0| 0.0| 0.0|  0.0
test     |write    |node3:3000| 3546.5|0.03| 0.0|  0.0
test     |write    |node4:3000| 3467.9|0.02| 0.0|  0.0
         |         |          |10630.4|0.02| 0.0|  0.0
Number of rows: 8
```

2) Proxy counters stopped on the quiesced node
There is typically a second or two of proxy transactions as clients retrieve the updated partition map. Confirm the client_proxy_* and batch_sub_proxy_* counters are no longer incrementing on the quiesced node. Use watch to run the command repeatedly and verify the counters are stable across iterations:
```
Admin+> watch 5 5 show statistics like client_proxy
[ 'show statistics like client_proxy' sleep: 5.0s iteration: 1 ]
~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:33 UTC)~~~~~~~~~~~
Node                 |node1:3000|node2:3000|node3:3000|node4:3000
client_proxy_complete|34        |2044      |93        |86
client_proxy_error   |0         |0         |0         |0
client_proxy_timeout |0         |8         |0         |0
Number of rows: 4

Admin+> show statistics like batch_sub_proxy
~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:33 UTC)~~~~~~~~~~~
Node                    |node1:3000|node2:3000|node3:3000|node4:3000
batch_sub_proxy_complete|0         |0         |0         |0
batch_sub_proxy_error   |0         |0         |0         |0
batch_sub_proxy_timeout |0         |0         |0         |0
Number of rows: 4
```

3) No proxy traffic arriving at non-quiesced nodes
On the remaining nodes, verify the from_proxy_* counters have stopped incrementing. Use watch to confirm the values are stable:
```
Admin+> watch 5 5 show statistics like from_proxy_read_success
[ 'show statistics like from_proxy_read_success' sleep: 5.0s iteration: 1 ]
~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:38 UTC)~~~~~~~~~~~
Node                   |node1:3000|node2:3000|node3:3000|node4:3000
from_proxy_read_success|383       |0         |393       |324
Number of rows: 2

Admin+> show statistics like from_proxy_write_success
~~~~~~~~~~~test Namespace Statistics (2026-04-13 23:58:38 UTC)~~~~~~~~~~~
Node                    |node1:3000|node2:3000|node3:3000|node4:3000
from_proxy_write_success|393       |0         |371       |393
Number of rows: 2
```

Stop and start asd
```
$ sudo systemctl stop aerospike
```

Verify with `info network`. The cluster should show one fewer node:
```
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-13 23:58:58 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
      Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           3|FC7125ECF349|     True|BB9D48DF5A70CEE|           4|00:22:06
node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           3|FC7125ECF349|     True|BB9D48DF5A70CEE|           4|00:22:06
node4:3000|*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           3|FC7125ECF349|     True|BB9D48DF5A70CEE|           5|00:22:06
Number of rows: 3
```

After maintenance, start `asd` and verify it rejoins:
```
$ sudo systemctl start aerospike
```

```
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~Network Information (2026-04-14 00:00:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~
      Node|         Node ID|           IP|    Build|Migrations|Cluster Size| Cluster Key|Integrity|      Principal|Client Conns|  Uptime
node1:3000| BB94B7AEB45DB52|10.0.3.1:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
node2:3000| BB90B0CA8BD688A|10.0.3.2:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:00:14
node3:3000| BB989A1BF1D8116|10.0.3.3:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
node4:3000|*BB9D48DF5A70CEE|10.0.3.4:3000|E-8.1.1.2|     0.000|           4|3866DA39491B|     True|BB9D48DF5A70CEE|           5|00:23:37
Number of rows: 4
```

Wait for migrations
Use cluster-stable to confirm migrations are complete. All nodes should return the same cluster key. Use watch to poll periodically until they converge:
After a quiesce-based maintenance, migrations typically consist only of "delta" migrations and complete quickly. After a cold restart, or when data must be repopulated (for example, in-memory namespaces without persistence after a reboot), migrations take longer.
```
Admin+> watch 10 30 asinfo -v 'cluster-stable:size=4;ignore-migrations=false'
[ 'asinfo -v cluster-stable:size=4;ignore-migrations=false' sleep: 10.0s iteration: 1 ]
node1:3000 (10.0.3.1) returned:
3866DA39491B

node3:3000 (10.0.3.3) returned:
3866DA39491B

node2:3000 (10.0.3.2) returned:
3866DA39491B

node4:3000 (10.0.3.4) returned:
3866DA39491B
```