
Rack awareness configuration

Overview

This page describes how to implement rack awareness in namespaces configured for Available and Partition-tolerant (AP) mode.

note

See Configure rack awareness for detailed instructions on implementing rack awareness in a namespace configured for strong consistency.

The rack awareness feature stores replicas of records in separate hardware failure groups, each defined by its rack-id.

How it works

The following examples illustrate how rack awareness operates.

  • When configuring three racks with a replication factor of 3 (RF3), each rack receives one replica of a given partition (see the configuration sketch after this list).
    • Each of the three replicas is on its own rack. The specific nodes are chosen according to the partition's succession list order.
  • If you lose a rack, the number of replicas is eventually restored to match the value of your replication-factor. For instance:
    • your cluster is configured for RF3
    • the number of racks drops from 3 to 2
    • one rack hosts the master
    • the other rack hosts one replica
    • the third replica moves to one of the two remaining racks.
  • To avoid having data missing from the cluster, configure rack awareness to use multiple racks, each defined by its rack-id.
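
As a concrete illustration of the three-rack, RF3 scenario above, the namespace stanza on a node in each rack might look like the following sketch. The namespace name test, the replication factor, and the specific rack IDs are illustrative assumptions; only the rack-id value differs between racks.

```
# On a node in rack 1; nodes in racks 2 and 3 use rack-id 2 and rack-id 3
namespace test {
    replication-factor 3
    rack-id 1
    ...
}
```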

Partition distribution

Master partitions are always distributed evenly across the nodes in the cluster, regardless of the prefer-uniform-balance setting, even when the number of racks is greater than the replication-factor (RF). In that case, not every rack holds a copy of every partition. For example, with four racks and RF 2, each partition has copies on only two of the four racks.

Imbalanced racks

Imbalanced racks are racks with different numbers of nodes. The master partition and its replicas are distributed to specific racks; however, if the RF is configured higher than the current number of racks, the extra replicas are distributed randomly.

Imbalanced clusters

If a single node goes down, the cluster is temporarily imbalanced until the node is restored. This imbalance does not cause service interruptions; the cluster continues to operate. Once the node restarts, the cluster automatically rebalances. How much the imbalance affects the general load on the nodes across racks depends on the nature of the workload.

Implement rack awareness on an AP namespace

You can configure rack awareness at the namespace level. To assign nodes to the same rack, specify the same rack-id on each of those nodes.

namespace <namespaceName> {
    ...
    rack-id 1
    ...
}

Implement rack awareness dynamically on a cluster

You can implement rack awareness for an existing cluster dynamically.

note

Tools package 6.0.x or later is required to use asadm's manage config commands. manage config requires asadm to be in enable mode (type enable at the asadm prompt). Otherwise, use the equivalent asinfo set-config command.

  1. On each node, use asadm's manage config command to change the rack-id to the desired value:

     asadm -e "enable; manage config namespace <namespaceName> param rack-id to 1 with <host>"

     or use asinfo's set-config command:

     asinfo -v "set-config:context=namespace;id=<namespaceName>;rack-id=1"

  2. Add the rack-id to the namespace stanza in the configuration file to ensure that the configuration persists following any restarts.
  3. Trigger a rebalance of the cluster to engage migrations with the new rack-id configurations, as shown in the example below.
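
For step 3, the rebalance is typically triggered with a recluster command. A sketch, assuming Tools 6.0.x or later for the asadm form:

```
# Trigger a rebalance so the new rack-id values take effect
asadm -e "enable; manage recluster"

# Equivalent info command
asinfo -v "recluster:"
```
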
note

Make sure to persist your changes in the configuration file so that they are not rolled back by a restart, and verify that there are no typos in the configuration file. This is the best practice for any dynamic configuration change.

Display rack information

Use the following command to display the rack information.

note

Tools package 6.2.x or later is required to use asadm's show racks command. Otherwise, use the equivalent asinfo racks command.

asadm -e "show racks"
~~~~~~~~~~~~~~~~Racks (2021-10-21 20:33:28 UTC)~~~~~~~~~~~~~~~~~
Namespace|Rack|                                            Nodes
         |  ID|
bar      |   2|BB9040016AE4202, BB9020016AE4202, BB9010016AE4202
test     |   1|BB9040016AE4202, BB9010016AE4202
Number of rows: 2

For the bar namespace, rack-id 2 includes nodes BB9040016AE4202, BB9020016AE4202, and BB9010016AE4202. For the test namespace, rack-id 1 includes nodes BB9040016AE4202 and BB9010016AE4202.

Configure the node-id

Partition distribution is based on each node's node-id. Node IDs can be changed one node at a time, in a rolling fashion, across a cluster.

Specify the node ID inside the server stanza of the configuration file, as shown in the following example.

```
service {
    <...>
    node-id a1
    <...>
}
```
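
As a sketch of how to verify the change after restarting a node, the node info command returns the node ID currently in use, and asadm's info network command lists the node IDs of all cluster nodes (exact output varies by tools version):

```
# Node ID of the local node
asinfo -v "node"

# Node IDs of all nodes in the cluster
asadm -e "info network"
```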

Rack awareness reads

Rack awareness also provides a mechanism for database clients to read from the servers in the closest rack or zone on a preferential basis. This can result in lower latency, increased stability, and significantly reduced traffic charges by limiting cross-availability-zone traffic.

The feature is available in Java, C, C#, Go and Python.

Set up rack aware reads

  1. Set up clusters in logical racks. (See Implement rack awareness on an AP namespace.)

  2. Set the rackId and rackAware flags in the ClientPolicy object. Use the rack ID configured on the nodes in the availability zone (AZ) where the application is running. The following example uses Java to demonstrate how to enable rack awareness. Operations are similar in other clients.


    ClientPolicy clientPolicy = new ClientPolicy();
    clientPolicy.rackId = <<rack id>>;
    clientPolicy.rackAware = true;

note

To avoid hard-coding, the rack ID can be obtained dynamically via the cloud provider's API, or set in a local property file.

  3. Once the application has connected, set two additional parameters in the Policy associated with the reads that should be rack aware.

    AP Mode

    Policy policy = new Policy();
    policy.readModeAP = ReadModeAP.ALL;
    policy.replica = Replica.PREFER_RACK;

    ReadModeAP.ALL indicates that all replicas can be consulted. Replica.PREFER_RACK indicates that the copy of the record on the same rack is read if possible.
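
For reference, a minimal end-to-end sketch in Java that combines the client-level and per-read settings. The seed address 127.0.0.1:3000, the namespace test, the set demo, the key user1, and rack ID 1 are assumed example values; adapt them to your environment.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Host;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.ClientPolicy;
import com.aerospike.client.policy.Policy;
import com.aerospike.client.policy.ReadModeAP;
import com.aerospike.client.policy.Replica;

public class RackAwareReadExample {
    public static void main(String[] args) {
        // Client-level settings: enable rack awareness and declare which rack
        // this application runs in (rack ID 1 is an assumed example value).
        ClientPolicy clientPolicy = new ClientPolicy();
        clientPolicy.rackId = 1;
        clientPolicy.rackAware = true;

        // 127.0.0.1:3000 is an assumed seed node address.
        AerospikeClient client = new AerospikeClient(clientPolicy, new Host("127.0.0.1", 3000));

        try {
            // Per-read settings for AP mode: allow all replicas to be consulted
            // and prefer the replica on the client's own rack.
            Policy readPolicy = new Policy();
            readPolicy.readModeAP = ReadModeAP.ALL;
            readPolicy.replica = Replica.PREFER_RACK;

            // "test", "demo", and "user1" are assumed namespace/set/key names.
            Key key = new Key("test", "demo", "user1");
            Record record = client.get(readPolicy, key);
            System.out.println(record);
        } finally {
            client.close();
        }
    }
}
```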
