Batched commands
A batch is a series of commands sent together to the database server. A batch groups multiple commands into one unit and delivers them in a single network trip to each database node.
Batch advantages
- Batches combine multiple record commands, including updates, deletes, reads and UDFs.
- Batching allows any key-value operation, such as mixing record-level gets and deletes, as well as bin-level transaction commands including increment, prepend/append, Map and List operations, and bitwise operations.
- Each batch type applies the same command(s) to a list of record keys: batch write, batch exists, batch operate, batch read, batch delete, and batch UDF.
- When batching multiple updates, fewer connections are needed between the client and the server.
- Batches optimize the use of network resources, such as packets and network sockets; in some cases this more efficient use of the network increases throughput.
- Batched commands are used extensively in Aerospike data modeling patterns, where denormalized designs rely on batch reads and writes to access related records efficiently.
Batch workflow
The client groups the primary keys in the batch by cluster node and creates a sub-batch request to each node. If a sub-batch contains only a single key, the client optimizes by sending the equivalent single-record command instead of a batch protocol message — for example, get() for a batch read, operate() for a batch operate, or delete() for a batch delete. Batch requests occur in series or in parallel depending on the batch policy. Parallel requests in synchronous mode require extra threads, which are created or taken from a thread pool.
Batched commands use a single network socket to each cluster node, which helps with parallelizing requests. Multiple keys share one network request, which is beneficial for a large number of small records, but less so when the number of records per node is small or the data returned per record is very large. Batch requests can increase the latency of individual requests because clients normally wait until all the keys are retrieved from the cluster nodes before returning control to the caller.
Some clients, such as the C client, deliver each record as soon as it arrives, allowing client applications to process data from the fastest server first. In the Java client, sync batch and async batch with RecordArrayListener wait until all responses arrive from the cluster nodes. Async batch calls the RecordSequenceListener to send back records one at a time as they are received.
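The grouping step described above can be sketched as follows. This is an illustrative model only, not the client's actual implementation: node assignment here uses a simple hash in place of Aerospike's partition map, and the node names are made up.

```python
from collections import defaultdict

def group_into_sub_batches(keys, nodes):
    """Illustrative only: assign each key to a node and build one
    sub-batch request per node, as a batch-capable client would."""
    sub_batches = defaultdict(list)
    for key in keys:
        # Stand-in for looking up the owning node in the partition map.
        node = nodes[hash(key) % len(nodes)]
        sub_batches[node].append(key)
    return dict(sub_batches)

nodes = ["node-A", "node-B", "node-C"]
keys = [f"user{i}" for i in range(10)]
sub_batches = group_into_sub_batches(keys, nodes)

for node, node_keys in sub_batches.items():
    # A sub-batch holding one key could be sent as the equivalent
    # single-record command instead of a batch protocol message.
    kind = "single-record command" if len(node_keys) == 1 else "batch request"
    print(f"{node}: {len(node_keys)} key(s) -> {kind}")
```

Each sub-batch is then sent to its node in series or in parallel, depending on the batch policy.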
Batch considerations
- Batches are not transactions: they are neither atomic nor isolated. A batch is a group of commands that execute in parallel, saving round-trip time (RTT) and potentially making more efficient use of network resources, for example when batching many commands with small payloads. A batch can be part of a transaction by setting a transaction (Txn) in `BatchPolicy.txn`.
- There is no guarantee for the order of execution across nodes. The commands in a sub-batch can be processed in order, a behavior controlled by a batch policy flag. Otherwise, the commands in a sub-batch execute in parallel on the node.
- There is no rollback processing for failed commands in a batch.
- You can configure the batch policy to stop or continue batch processing if an operation fails.
- Use the `operate` command to combine multiple changes with the same key into a single batch entry.
- Starting with Database 6.0.0, you can combine any type of read and write commands against one or more keys.
- Batches use more resources end to end (client and server) than single-record read/write commands unless the batch is large. For small numbers of keys, we recommend single-record read/write commands instead of a batch.
Batch commands
Inlining batches
If inline is set to true through flags in the batch policy, the node executes every operation in its sub-batch, one after the other.
- Batch policy `allowInline` controls whether to inline sub-batch commands where the keys are in an in-memory namespace. The default value is `true`.
- Batch policy `allowInlineSSD` controls whether to inline sub-batch commands where the keys are in an SSD-based namespace. The default value is `false`.
- For batch commands with smaller records, for example, 1KiB per record, inline processing is faster for in-memory namespaces.
- If a sub-batch is not inlined, its commands are split up and executed by different threads.
- Inlining sub-batches does not tend to improve latency when the keys are in an SSD-based namespace. You should benchmark to compare performance.
- When a sub-batch is inlined, one thread executes the commands. The thread is not released until the sub-batch processing is complete. Large inlined batches may divert server resources toward batch commands over single-record commands.
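The difference between inline and non-inline execution can be sketched conceptually. This is an illustrative model, not server code: `apply_op`, `run_inline`, and `run_threaded` are made-up names, and a thread pool stands in for the server's worker threads.

```python
from concurrent.futures import ThreadPoolExecutor

def apply_op(key):
    # Stand-in for executing one record command.
    return f"{key}:done"

def run_inline(sub_batch):
    # Inline: the one service thread executes every command itself,
    # one after the other, and is not released until all are done.
    return [apply_op(k) for k in sub_batch]

def run_threaded(sub_batch, pool):
    # Not inline: commands are handed off to worker threads and
    # may execute concurrently.
    return list(pool.map(apply_op, sub_batch))

sub_batch = [f"user{i}" for i in range(5)]
with ThreadPoolExecutor(max_workers=4) as pool:
    inline_results = run_inline(sub_batch)
    threaded_results = run_threaded(sub_batch, pool)
print(inline_results)
```

Both paths produce the same results here; the trade-off is in resource usage and ordering, not correctness.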
Filtering batch commands
You can attach a record filter expression to any batched commands. The server applies the filter to each record in the batch to determine whether the operation should proceed.
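Conceptually, the filter behaves like a per-record predicate evaluated on the server. The following sketch is illustrative only, with an in-memory dict standing in for stored records and a plain function standing in for a compiled filter expression:

```python
# Stand-in for records stored on the server.
records = {
    "user1": {"name": "Alice", "visits": 5},
    "user2": {"name": "Bob", "visits": 1},
    "user3": {"name": "Carol", "visits": 9},
}

def filter_exp(rec):
    # Stand-in for a filter expression such as "visits >= 5".
    return rec["visits"] >= 5

results = {}
for key in ["user1", "user2", "user3"]:
    rec = records[key]
    if filter_exp(rec):
        results[key] = rec   # filter passes: the operation proceeds
    else:
        results[key] = None  # filter fails: no result for this record
print(results)
```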
Multiple batch commands to the same key in a batch
- Unless you inline the batch request so that it is serviced by a single service thread, multiple batch operations on the same key, such as [K1, K3…K1, K5…K1, K2], are distributed to different service threads.
- Neither the client library nor the server can infer that the commands on K1 belong together, so they are not consolidated into one batch command on K1.
- Operations on K1 can therefore execute out of order when distributed to different service threads.
If you don’t want this situation, either consolidate commands for the same key into one key entry in the batch, or inline the batch request.
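The consolidation approach can be sketched as follows: group the pending operations by key before building the batch, so each key appears in exactly one entry that carries all of its operations in their original order. The operation names here are illustrative placeholders.

```python
# Pending (key, operation) pairs, with K1 appearing three times.
batch = [
    ("K1", "incr visits"),
    ("K3", "read name"),
    ("K1", "append tag"),
    ("K5", "read name"),
    ("K1", "read visits"),
]

# One batch entry per key, carrying all of that key's operations
# in their original relative order.
consolidated = {}
for key, op_name in batch:
    consolidated.setdefault(key, []).append(op_name)

for key, ops in consolidated.items():
    print(key, ops)
```

Each consolidated entry can then be submitted as a single batch `operate` entry for that key.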
Code examples of batch commands
The following examples demonstrate batch write, exists, operate, read, and delete commands. They use a set of records with a string bin and an integer bin:
| Bin | Type | Value |
|---|---|---|
| name | string | "Alice", "Bob", "Carol", "Dave", "Erin" |
| visits | integer | 1 |
Each section below shows an excerpt. See the Code block at the bottom of the page for the full, continuous program in each language.
Batch write
Batch writes send multiple write commands in a single network round-trip. Requires Database 6.0+. Each entry in the batch is a per-key write record that carries its own operations.
Same write to multiple keys
When every key receives the same bins and values, build one write entry per key with identical operations. This is the common pattern for seeding or initializing uniform data across many keys.
```java
String[] names = {"Alice", "Bob", "Carol", "Dave", "Erin"};
Key[] keys = new Key[names.length];

List<BatchRecord> writeRecords = new ArrayList<>();
for (int i = 0; i < keys.length; i++) {
    keys[i] = new Key("test", "demo", "user" + (i + 1));
    writeRecords.add(new BatchWrite(keys[i], Operation.array(
        Operation.put(new Bin("name", names[i])),
        Operation.put(new Bin("visits", 1)))));
}
client.operate(null, writeRecords);
```

```python
keys = [("test", "demo", f"user{i+1}") for i in range(5)]
names = ["Alice", "Bob", "Carol", "Dave", "Erin"]

batch_recs = br.BatchRecords(
    [
        br.Write(
            key=keys[i],
            ops=[
                op.write("name", names[i]),
                op.write("visits", 1),
            ],
        )
        for i in range(len(keys))
    ]
)

client.batch_write(batch_recs)
```

```go
names := []string{"Alice", "Bob", "Carol", "Dave", "Erin"}
keys := make([]*as.Key, len(names))

writeRecords := make([]as.BatchRecordIfc, len(names))
for i, name := range names {
	keys[i], _ = as.NewKey("test", "demo", fmt.Sprintf("user%d", i+1))
	writeRecords[i] = as.NewBatchWrite(nil, keys[i],
		as.PutOp(as.NewBin("name", name)),
		as.PutOp(as.NewBin("visits", 1)))
}
client.BatchOperate(nil, writeRecords)
```

```c
const char* names[] = {"Alice", "Bob", "Carol", "Dave", "Erin"};

as_batch_records brecs;
as_batch_records_inita(&brecs, 5);
for (int i = 0; i < 5; i++) {
    as_operations* ops = as_operations_new(2);
    as_operations_add_write_str(ops, "name", names[i]);
    as_operations_add_write_int64(ops, "visits", 1);

    as_batch_write_record* bw = as_batch_write_reserve(&brecs);
    char user_key[10];
    snprintf(user_key, sizeof(user_key), "user%d", i + 1);
    as_key_init_str(&bw->key, "test", "demo", user_key);
    bw->ops = ops;
}
aerospike_batch_write(&as, &err, NULL, &brecs);
as_batch_records_destroy(&brecs);
```

```csharp
string[] names = {"Alice", "Bob", "Carol", "Dave", "Erin"};
Key[] keys = new Key[names.Length];

List<BatchRecord> writeRecords = new();
for (int i = 0; i < keys.Length; i++)
{
    keys[i] = new Key("test", "demo", $"user{i + 1}");
    writeRecords.Add(new BatchWrite(keys[i], Operation.Array(
        Operation.Put(new Bin("name", names[i])),
        Operation.Put(new Bin("visits", 1)))));
}
client.Operate(null, writeRecords);
```

```javascript
const names = ["Alice", "Bob", "Carol", "Dave", "Erin"];
const keys = names.map((_, i) =>
  new Aerospike.Key("test", "demo", `user${i + 1}`));

const writeRecords = keys.map((key, i) => ({
  type: Aerospike.batchType.BATCH_WRITE,
  key,
  ops: [op.write("name", names[i]), op.write("visits", 1)],
}));
await client.batchWrite(writeRecords);
```

Different writes per key
When each key needs different bins or values, each write entry carries its own operations. This example writes two completely different records in one batch call.
```java
Key sensorKey = new Key("test", "demo", "sensor1");
Key alertKey = new Key("test", "demo", "alert1");

List<BatchRecord> mixedWrites = new ArrayList<>();
mixedWrites.add(new BatchWrite(sensorKey, Operation.array(
    Operation.put(new Bin("type", "temperature")),
    Operation.put(new Bin("reading", 22.5)))));
mixedWrites.add(new BatchWrite(alertKey, Operation.array(
    Operation.put(new Bin("level", "warning")),
    Operation.put(new Bin("message", "High temp")))));
client.operate(null, mixedWrites);
```

```python
sensor_key = ("test", "demo", "sensor1")
alert_key = ("test", "demo", "alert1")

mixed_recs = br.BatchRecords([
    br.Write(
        key=sensor_key,
        ops=[
            op.write("type", "temperature"),
            op.write("reading", 22.5),
        ],
    ),
    br.Write(
        key=alert_key,
        ops=[
            op.write("level", "warning"),
            op.write("message", "High temp"),
        ],
    ),
])
client.batch_write(mixed_recs)
```

```go
sensorKey, _ := as.NewKey("test", "demo", "sensor1")
alertKey, _ := as.NewKey("test", "demo", "alert1")

mixedWrites := []as.BatchRecordIfc{
	as.NewBatchWrite(nil, sensorKey,
		as.PutOp(as.NewBin("type", "temperature")),
		as.PutOp(as.NewBin("reading", 22.5))),
	as.NewBatchWrite(nil, alertKey,
		as.PutOp(as.NewBin("level", "warning")),
		as.PutOp(as.NewBin("message", "High temp"))),
}
client.BatchOperate(nil, mixedWrites)
```

```c
as_batch_records mixed;
as_batch_records_inita(&mixed, 2);

as_operations* sensor_ops = as_operations_new(2);
as_operations_add_write_str(sensor_ops, "type", "temperature");
as_operations_add_write_double(sensor_ops, "reading", 22.5);
as_batch_write_record* bw1 = as_batch_write_reserve(&mixed);
as_key_init_str(&bw1->key, "test", "demo", "sensor1");
bw1->ops = sensor_ops;

as_operations* alert_ops = as_operations_new(2);
as_operations_add_write_str(alert_ops, "level", "warning");
as_operations_add_write_str(alert_ops, "message", "High temp");
as_batch_write_record* bw2 = as_batch_write_reserve(&mixed);
as_key_init_str(&bw2->key, "test", "demo", "alert1");
bw2->ops = alert_ops;

aerospike_batch_write(&as, &err, NULL, &mixed);
as_batch_records_destroy(&mixed);
```

```csharp
Key sensorKey = new Key("test", "demo", "sensor1");
Key alertKey = new Key("test", "demo", "alert1");

List<BatchRecord> mixedWrites = new()
{
    new BatchWrite(sensorKey, Operation.Array(
        Operation.Put(new Bin("type", "temperature")),
        Operation.Put(new Bin("reading", 22.5)))),
    new BatchWrite(alertKey, Operation.Array(
        Operation.Put(new Bin("level", "warning")),
        Operation.Put(new Bin("message", "High temp")))),
};
client.Operate(null, mixedWrites);
```

```javascript
const sensorKey = new Aerospike.Key("test", "demo", "sensor1");
const alertKey = new Aerospike.Key("test", "demo", "alert1");

const mixedWrites = [
  {
    type: Aerospike.batchType.BATCH_WRITE,
    key: sensorKey,
    ops: [op.write("type", "temperature"), op.write("reading", 22.5)],
  },
  {
    type: Aerospike.batchType.BATCH_WRITE,
    key: alertKey,
    ops: [op.write("level", "warning"), op.write("message", "High temp")],
  },
];
await client.batchWrite(mixedWrites);
```

Batch exists
Check whether keys exist. Returns metadata only, without bin data.
In Java, Go, and C#, getHeader is a variant that returns record metadata (generation and TTL) without reading bins.
```java
boolean[] exists = client.exists(null, keys);

for (int i = 0; i < exists.length; i++) {
    System.out.printf("Key %s: exists=%s%n", keys[i].userKey, exists[i]);
}
```

```python
brs = client.batch_read(keys, bins=[])

for rec in brs.batch_records:
    (key, meta) = rec.record
    exists = meta is not None
    print(f"Key {key[2]}: exists={exists}")
```

```go
exists, err := client.BatchExists(nil, keys)
if err != nil {
	log.Fatal(err)
}

for i, e := range exists {
	fmt.Printf("Key %v: exists=%v\n", keys[i].Value(), e)
}
```

```c
as_batch batch;
as_batch_inita(&batch, 5);
for (int i = 0; i < 5; i++) {
    char user_key[10];
    snprintf(user_key, sizeof(user_key), "user%d", i + 1);
    as_key_init_str(as_batch_keyat(&batch, i), "test", "demo", user_key);
}

aerospike_batch_exists(&as, &err, NULL, &batch, batch_exists_cb, NULL);
as_batch_destroy(&batch);
```

```csharp
bool[] exists = client.Exists(null, keys);

for (int i = 0; i < exists.Length; i++)
{
    Console.WriteLine($"Key {keys[i].userKey}: exists={exists[i]}");
}
```

```javascript
const existsResult = await client.batchExists(keys);

for (let i = 0; i < existsResult.length; i++) {
  const exists = existsResult[i].status === Aerospike.status.OK;
  console.log(`Key user${i + 1}: exists=${exists}`);
}
```

Batch operate
Use the `operate` command to execute read and write operations against multiple keys. The operations on each individual record are applied atomically, but the batch as a whole is not atomic.
Read results use the same projection model as single-record operate (bin projection and operation projection). Foreground queries support operation projection from Database 8.1.2 and later.
The following example increments `visits` and reads back the updated values.
```java
BatchResults opResults = client.operate(null, null, keys,
    Operation.add(new Bin("visits", 1)),
    Operation.get("name"),
    Operation.get("visits"));

for (BatchRecord br : opResults.records) {
    System.out.printf("%s: visits=%s%n",
        br.record.getString("name"), br.record.getValue("visits"));
}
```

```python
ops = [
    op.increment("visits", 1),
    op.read("name"),
    op.read("visits"),
]

brs = client.batch_operate(keys, ops)

for rec in brs.batch_records:
    (_, _, bins) = rec.record
    print(f"{bins['name']}: visits={bins['visits']}")
```

```go
addBin := as.NewBin("visits", 1)
batchRecords := make([]as.BatchRecordIfc, len(keys))
for i, k := range keys {
	batchRecords[i] = as.NewBatchWrite(nil, k,
		as.AddOp(addBin),
		as.GetBinOp("name"),
		as.GetBinOp("visits"))
}

err = client.BatchOperate(nil, batchRecords)
if err != nil {
	log.Fatal(err)
}

for _, br := range batchRecords {
	rec := br.BatchRec()
	fmt.Printf("%v: visits=%v\n",
		rec.Record.Bins["name"], rec.Record.Bins["visits"])
}
```

```c
as_batch_records brecs;
as_batch_records_inita(&brecs, 5);

for (int i = 0; i < 5; i++) {
    as_operations* ops = as_operations_new(3);
    as_operations_add_incr(ops, "visits", 1);
    as_operations_add_read(ops, "name");
    as_operations_add_read(ops, "visits");

    as_batch_write_record* bw = as_batch_write_reserve(&brecs);
    char user_key[10];
    snprintf(user_key, sizeof(user_key), "user%d", i + 1);
    as_key_init_str(&bw->key, "test", "demo", user_key);
    bw->ops = ops;
}

aerospike_batch_write(&as, &err, NULL, &brecs);
as_batch_records_destroy(&brecs);
```

```csharp
BatchResults opResults = client.Operate(null, null, keys,
    Operation.Add(new Bin("visits", 1)),
    Operation.Get("name"),
    Operation.Get("visits"));

foreach (BatchRecord br in opResults.records)
{
    Record record = br.record;
    Console.WriteLine($"{record.GetValue("name")}: " +
        $"visits={record.GetValue("visits")}");
}
```

```javascript
const batchOps = keys.map((key) => ({
  type: Aerospike.batchType.BATCH_WRITE,
  key,
  ops: [
    op.incr("visits", 1),
    op.read("name"),
    op.read("visits"),
  ],
}));

const opResults = await client.batchWrite(batchOps);

for (const { record } of opResults) {
  console.log(`${record.bins.name}: visits=${record.bins.visits}`);
}
```

Batch read
Read records, all bins or a projection, for a list of keys.
```java
Record[] records = client.get(null, keys, "name", "visits");

for (Record record : records) {
    System.out.printf("%s: visits=%d%n",
        record.getString("name"), record.getInt("visits"));
}
```

```python
brs = client.batch_read(keys, bins=["name", "visits"])

for rec in brs.batch_records:
    (_, _, bins) = rec.record
    print(f"{bins['name']}: visits={bins['visits']}")
```

```go
records, err := client.BatchGet(nil, keys, "name", "visits")
if err != nil {
	log.Fatal(err)
}

for _, r := range records {
	fmt.Printf("%v: visits=%v\n", r.Bins["name"], r.Bins["visits"])
}
```

```c
as_batch batch;
as_batch_inita(&batch, 5);
for (int i = 0; i < 5; i++) {
    char user_key[10];
    snprintf(user_key, sizeof(user_key), "user%d", i + 1);
    as_key_init_str(as_batch_keyat(&batch, i), "test", "demo", user_key);
}

aerospike_batch_get(&as, &err, NULL, &batch, batch_read_cb, NULL);
as_batch_destroy(&batch);
```

```csharp
Record[] records = client.Get(null, keys, "name", "visits");

foreach (Record record in records)
{
    Console.WriteLine($"{record.GetString("name")}: " +
        $"visits={record.GetInt("visits")}");
}
```

```javascript
const readRecords = await client.batchRead(
  keys.map((key) => ({
    type: Aerospike.batchType.BATCH_READ,
    key,
    bins: ["name", "visits"],
  })));

for (const { record } of readRecords) {
  console.log(`${record.bins.name}: visits=${record.bins.visits}`);
}
```

Batch delete
Delete multiple records by key.
```java
BatchResults deleteResults = client.delete(null, null, keys);

if (deleteResults.status) {
    System.out.println("All records deleted.");
}
```

```python
batch_recs = client.batch_remove(keys)

for rec in batch_recs.batch_records:
    print(f"Result: {rec.result}")
```

```go
_, err = client.BatchDelete(nil, nil, keys)
if err != nil {
	log.Fatal(err)
}
```

```c
as_batch_records del_brecs;
as_batch_records_inita(&del_brecs, 5);

for (int i = 0; i < 5; i++) {
    as_batch_remove_record* br = as_batch_remove_reserve(&del_brecs);
    char user_key[10];
    snprintf(user_key, sizeof(user_key), "user%d", i + 1);
    as_key_init_str(&br->key, "test", "demo", user_key);
}

aerospike_batch_write(&as, &err, NULL, &del_brecs);
as_batch_records_destroy(&del_brecs);
```

```csharp
BatchResults deleteResults = client.Delete(null, null, keys);

if (deleteResults.status)
{
    Console.WriteLine("All records deleted.");
}
```

```javascript
await client.batchRemove(keys);
```

Code block
Expand for the full batched commands example
```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.BatchRecord;
import com.aerospike.client.BatchResults;
import com.aerospike.client.BatchWrite;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Operation;
import com.aerospike.client.Record;
import java.util.ArrayList;
import java.util.List;

// Connect
AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

String[] names = {"Alice", "Bob", "Carol", "Dave", "Erin"};
Key[] keys = new Key[names.length];

// Batch write
List<BatchRecord> writeRecords = new ArrayList<>();
for (int i = 0; i < keys.length; i++) {
    keys[i] = new Key("test", "demo", "user" + (i + 1));
    writeRecords.add(new BatchWrite(keys[i], Operation.array(
        Operation.put(new Bin("name", names[i])),
        Operation.put(new Bin("visits", 1)))));
}
client.operate(null, writeRecords);

// Batch exists
boolean[] exists = client.exists(null, keys);
for (int i = 0; i < exists.length; i++) {
    System.out.printf("Key %s: exists=%s%n", keys[i].userKey, exists[i]);
}

// Batch operate
BatchResults opResults = client.operate(null, null, keys,
    Operation.add(new Bin("visits", 1)),
    Operation.get("name"),
    Operation.get("visits"));
for (BatchRecord br : opResults.records) {
    System.out.printf("%s: visits=%s%n",
        br.record.getString("name"), br.record.getValue("visits"));
}

// Batch read
Record[] records = client.get(null, keys, "name", "visits");
for (Record record : records) {
    System.out.printf("%s: visits=%d%n",
        record.getString("name"), record.getInt("visits"));
}

// Batch delete
BatchResults deleteResults = client.delete(null, null, keys);
System.out.printf("Batch delete status: %s%n", deleteResults.status);

client.close();
```

```python
import aerospike
from aerospike_helpers.batch import records as br
from aerospike_helpers.operations import operations as op

# Connect
config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

keys = [("test", "demo", f"user{i+1}") for i in range(5)]
names = ["Alice", "Bob", "Carol", "Dave", "Erin"]

# Batch write
batch_recs = br.BatchRecords(
    [
        br.Write(
            key=keys[i],
            ops=[
                op.write("name", names[i]),
                op.write("visits", 1),
            ],
        )
        for i in range(len(keys))
    ]
)
client.batch_write(batch_recs)

# Batch exists
brs = client.batch_read(keys, bins=[])
for rec in brs.batch_records:
    (key, meta) = rec.record
    print(f"Key {key[2]}: exists={meta is not None}")

# Batch operate
ops = [
    op.increment("visits", 1),
    op.read("name"),
    op.read("visits"),
]
brs = client.batch_operate(keys, ops)
for rec in brs.batch_records:
    (_, _, bins) = rec.record
    print(f"{bins['name']}: visits={bins['visits']}")

# Batch read
brs = client.batch_read(keys, bins=["name", "visits"])
for rec in brs.batch_records:
    (_, _, bins) = rec.record
    print(f"{bins['name']}: visits={bins['visits']}")

# Batch delete
client.batch_remove(keys)

client.close()
```

```go
package main

import (
	"fmt"
	"log"

	as "github.com/aerospike/aerospike-client-go/v6"
)

func main() {
	// Connect
	client, err := as.NewClient("127.0.0.1", 3000)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	names := []string{"Alice", "Bob", "Carol", "Dave", "Erin"}
	keys := make([]*as.Key, len(names))

	// Batch write
	writeRecords := make([]as.BatchRecordIfc, len(names))
	for i, name := range names {
		keys[i], _ = as.NewKey("test", "demo", fmt.Sprintf("user%d", i+1))
		writeRecords[i] = as.NewBatchWrite(nil, keys[i],
			as.PutOp(as.NewBin("name", name)),
			as.PutOp(as.NewBin("visits", 1)))
	}
	client.BatchOperate(nil, writeRecords)

	// Batch exists
	exists, _ := client.BatchExists(nil, keys)
	for i, e := range exists {
		fmt.Printf("Key %v: exists=%v\n", keys[i].Value(), e)
	}

	// Batch operate
	addBin := as.NewBin("visits", 1)
	batchRecords := make([]as.BatchRecordIfc, len(keys))
	for i, k := range keys {
		batchRecords[i] = as.NewBatchWrite(nil, k,
			as.AddOp(addBin),
			as.GetBinOp("name"),
			as.GetBinOp("visits"))
	}
	client.BatchOperate(nil, batchRecords)
	for _, br := range batchRecords {
		rec := br.BatchRec()
		fmt.Printf("%v: visits=%v\n",
			rec.Record.Bins["name"], rec.Record.Bins["visits"])
	}

	// Batch read
	records, _ := client.BatchGet(nil, keys, "name", "visits")
	for _, r := range records {
		fmt.Printf("%v: visits=%v\n", r.Bins["name"], r.Bins["visits"])
	}

	// Batch delete
	client.BatchDelete(nil, nil, keys)
}
```

```c
#include <aerospike/aerospike.h>
#include <aerospike/aerospike_batch.h>
#include <aerospike/aerospike_key.h>
#include <aerospike/as_batch.h>
#include <aerospike/as_operations.h>
#include <aerospike/as_record.h>
#include <stdio.h>

static bool batch_read_cb(const as_batch_result* results, uint32_t n, void* udata)
{
    for (uint32_t i = 0; i < n; i++) {
        if (results[i].result == AEROSPIKE_OK) {
            const char* name = as_record_get_str(&results[i].record, "name");
            int64_t visits = as_record_get_int64(&results[i].record, "visits", 0);
            printf("  %s: visits=%lld\n", name, (long long)visits);
        }
    }
    return true;
}

static bool batch_exists_cb(const as_batch_result* results, uint32_t n, void* udata)
{
    for (uint32_t i = 0; i < n; i++) {
        printf("  exists=%s\n",
            results[i].result == AEROSPIKE_OK ? "true" : "false");
    }
    return true;
}

int main()
{
    as_config config;
    as_config_init(&config);
    as_config_add_host(&config, "127.0.0.1", 3000);

    aerospike as;
    aerospike_init(&as, &config);
    as_error err;
    aerospike_connect(&as, &err);

    const char* names[] = {"Alice", "Bob", "Carol", "Dave", "Erin"};

    // Batch write
    as_batch_records brecs;
    as_batch_records_inita(&brecs, 5);
    for (int i = 0; i < 5; i++) {
        as_operations* ops = as_operations_new(2);
        as_operations_add_write_str(ops, "name", names[i]);
        as_operations_add_write_int64(ops, "visits", 1);

        as_batch_write_record* bw = as_batch_write_reserve(&brecs);
        char user_key[10];
        snprintf(user_key, sizeof(user_key), "user%d", i + 1);
        as_key_init_str(&bw->key, "test", "demo", user_key);
        bw->ops = ops;
    }
    aerospike_batch_write(&as, &err, NULL, &brecs);
    as_batch_records_destroy(&brecs);

    // Batch exists
    as_batch batch;
    as_batch_inita(&batch, 5);
    for (int i = 0; i < 5; i++) {
        char user_key[10];
        snprintf(user_key, sizeof(user_key), "user%d", i + 1);
        as_key_init_str(as_batch_keyat(&batch, i), "test", "demo", user_key);
    }
    aerospike_batch_exists(&as, &err, NULL, &batch, batch_exists_cb, NULL);
    as_batch_destroy(&batch);

    // Batch operate
    as_batch_records op_brecs;
    as_batch_records_inita(&op_brecs, 5);
    for (int i = 0; i < 5; i++) {
        as_operations* ops = as_operations_new(3);
        as_operations_add_incr(ops, "visits", 1);
        as_operations_add_read(ops, "name");
        as_operations_add_read(ops, "visits");

        as_batch_write_record* bw = as_batch_write_reserve(&op_brecs);
        char user_key[10];
        snprintf(user_key, sizeof(user_key), "user%d", i + 1);
        as_key_init_str(&bw->key, "test", "demo", user_key);
        bw->ops = ops;
    }
    aerospike_batch_write(&as, &err, NULL, &op_brecs);
    as_batch_records_destroy(&op_brecs);

    // Batch read
    as_batch batch2;
    as_batch_inita(&batch2, 5);
    for (int i = 0; i < 5; i++) {
        char user_key[10];
        snprintf(user_key, sizeof(user_key), "user%d", i + 1);
        as_key_init_str(as_batch_keyat(&batch2, i), "test", "demo", user_key);
    }
    aerospike_batch_get(&as, &err, NULL, &batch2, batch_read_cb, NULL);
    as_batch_destroy(&batch2);

    // Batch delete
    as_batch_records del_brecs;
    as_batch_records_inita(&del_brecs, 5);
    for (int i = 0; i < 5; i++) {
        as_batch_remove_record* br = as_batch_remove_reserve(&del_brecs);
        char user_key[10];
        snprintf(user_key, sizeof(user_key), "user%d", i + 1);
        as_key_init_str(&br->key, "test", "demo", user_key);
    }
    aerospike_batch_write(&as, &err, NULL, &del_brecs);
    as_batch_records_destroy(&del_brecs);

    aerospike_close(&as, &err);
    aerospike_destroy(&as);
    return 0;
}
```

```csharp
using Aerospike.Client;

// Connect
AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

string[] names = {"Alice", "Bob", "Carol", "Dave", "Erin"};
Key[] keys = new Key[names.Length];

// Batch write
List<BatchRecord> writeRecords = new();
for (int i = 0; i < keys.Length; i++)
{
    keys[i] = new Key("test", "demo", $"user{i + 1}");
    writeRecords.Add(new BatchWrite(keys[i], Operation.Array(
        Operation.Put(new Bin("name", names[i])),
        Operation.Put(new Bin("visits", 1)))));
}
client.Operate(null, writeRecords);

// Batch exists
bool[] exists = client.Exists(null, keys);
for (int i = 0; i < exists.Length; i++)
{
    Console.WriteLine($"Key {keys[i].userKey}: exists={exists[i]}");
}

// Batch operate
BatchResults opResults = client.Operate(null, null, keys,
    Operation.Add(new Bin("visits", 1)),
    Operation.Get("name"),
    Operation.Get("visits"));
foreach (BatchRecord br in opResults.records)
{
    Record record = br.record;
    Console.WriteLine($"{record.GetValue("name")}: " +
        $"visits={record.GetValue("visits")}");
}

// Batch read
Record[] records = client.Get(null, keys, "name", "visits");
foreach (Record record in records)
{
    Console.WriteLine($"{record.GetString("name")}: " +
        $"visits={record.GetInt("visits")}");
}

// Batch delete
client.Delete(null, null, keys);

client.Close();
```

```javascript
const Aerospike = await import("aerospike");
const op = Aerospike.operations;

// Connect
const client = await Aerospike.connect({ hosts: "127.0.0.1:3000" });

const names = ["Alice", "Bob", "Carol", "Dave", "Erin"];
const keys = names.map((_, i) =>
  new Aerospike.Key("test", "demo", `user${i + 1}`));

// Batch write
const writeRecords = keys.map((key, i) => ({
  type: Aerospike.batchType.BATCH_WRITE,
  key,
  ops: [op.write("name", names[i]), op.write("visits", 1)],
}));
await client.batchWrite(writeRecords);

// Batch exists
const existsResult = await client.batchExists(keys);
for (let i = 0; i < existsResult.length; i++) {
  const exists = existsResult[i].status === Aerospike.status.OK;
  console.log(`Key user${i + 1}: exists=${exists}`);
}

// Batch operate
const batchOps = keys.map((key) => ({
  type: Aerospike.batchType.BATCH_WRITE,
  key,
  ops: [
    op.incr("visits", 1),
    op.read("name"),
    op.read("visits"),
  ],
}));
const opResults = await client.batchWrite(batchOps);
for (const { record } of opResults) {
  console.log(`${record.bins.name}: visits=${record.bins.visits}`);
}

// Batch read
const readRecords = await client.batchRead(
  keys.map((key) => ({
    type: Aerospike.batchType.BATCH_READ,
    key,
    bins: ["name", "visits"],
  })));
for (const { record } of readRecords) {
  console.log(`${record.bins.name}: visits=${record.bins.visits}`);
}

// Batch delete
await client.batchRemove(keys);

await client.close();
```

Log examples
Batch sub-transaction statistics appear in the server log under `batch-sub`. The following ticker line shows command stats, including batch sub-transactions, for the namespace `test`:

```
{test} batch-sub: tsvc (0,0) proxy (0,0,0) read (959,0,0,51,1) write (0,0,0,0) delete (0,0,0,0,0) udf (0,0,0,0) lang (0,0,0,0)
```

When the cluster size changes, you might also see proxied batch sub-transactions:

```
{test} from-proxy-batch-sub: tsvc (0,0) read (959,0,0,51,1) write (0,0,0,0) delete (0,0,0,0,0) udf (0,0,0,0) lang (0,0,0,0)
```

Batch specific errors
| Value | Error | Description |
|---|---|---|
| 150 | AS_ERR_BATCH_DISABLED | Batch functionality has been disabled by configuring `batch-index-threads` to 0. |
| 152 | AS_ERR_BATCH_QUEUES_FULL | All batch queues are full. Controlled by the `batch-max-buffers-per-queue` configuration parameter. |
Refer to Error Codes.
Known limitations
Batch writes are not supported prior to Database 6.0.0. See the Client Matrix.
Client references
Refer to these topics for language-specific code examples: