

Optimizing performance

This document details available mechanisms to optimize client performance, and suggests best practices for using the client in a cluster of application servers.

Following these best practices helps you:

  • Improve performance through proper connection pooling, policy reuse, and efficient operations.
  • Increase reliability through proper error handling, and connection management.
  • Reduce resource usage through shared client instances and proper cleanup.
  • Maintain observability through proper logging configuration.

The library offers several options to optimize performance for different workloads. Always benchmark and profile your application under realistic workloads to determine the optimal settings for your specific use case.
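
For example, a simple Go benchmark can establish a baseline for single-record writes. The following is a minimal sketch, not part of the library documentation; it assumes a server reachable at localhost:3000 and imports the client under the alias as:

import (
    "strconv"
    "testing"

    as "github.com/aerospike/aerospike-client-go/v8"
)

// BenchmarkPut measures single-record write throughput against a local server.
func BenchmarkPut(b *testing.B) {
    client, err := as.NewClient("localhost", 3000)
    if err != nil {
        b.Fatal(err)
    }
    defer client.Close()

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        key, _ := as.NewKey("test", "bench", strconv.Itoa(i))
        if err := client.PutBins(nil, key, as.NewBin("bin1", i)); err != nil {
            b.Fatal(err)
        }
    }
}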

Library features

The Aerospike Go client library offers the following features:

  • Minimal memory allocation: The client keeps its memory footprint small by pooling buffers and hash objects.

  • Customization-friendly: It provides parameters that allow users to customize key variables, enabling performance tuning across different workloads.

  • Determinism: The client prioritizes deterministic inner workings, avoiding non-deterministic data structures and algorithms. All pool and queue implementations are strictly bound by maximum memory size and operate within a predetermined number of cycles, ensuring predictable performance without heuristic algorithms.

Best practices

  • Server connection limit: Each server node has a limited number of file descriptors available on the operating system for connections. However large, this resource is finite and can be exhausted by too many connections. Clients pool their connections to database nodes for optimal performance. If node connections are exhausted by existing clients, new clients cannot connect to the database (for example, when you start up a new application in the cluster).

    To guard against this, you should observe the following in your application design:

    • Use only one Client instance in your application: Client instances pool connections internally and synchronize their inner state, and they are safe for concurrent use by multiple goroutines. Create one Client instance and pass it around your application, as in the sketch below.
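
    The following minimal sketch, written in the abbreviated style of the other examples on this page, creates the client once at startup and shares it among concurrent goroutines:

    Example: sharing a single Client instance across goroutines

    // Create the client once; it is safe for concurrent use.
    client, err := NewClient("localhost", 3000)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            key, _ := NewKey("test", "demo", i)
            // All goroutines share the same connection pool.
            if err := client.PutBins(nil, key, NewBin("bin1", i)); err != nil {
                log.Println(err)
            }
        }(i)
    }
    wg.Wait()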

    • Limit the Client connection pool: The default maximum connection pool size is 100 connections per node. Even under extreme load on fast hardware, clients rarely use more than a quarter of that many connections. When no connections are available in the pool, the client must create new connections to the server; if the pool is already full, those connections are closed after use to conserve available connections.

    If this pool is too small, the client may waste time connecting to the server for each new request. If it is too big, it may waste server connections.

    With the maximum of 100 connections for each client and proto-fd-max set to 10000 in your server node configuration, you can safely run around 80-100 clients per server node; in practice, because clients rarely use their full pool, this can approach 200 or more high-performing clients. You can change the pool size in ClientPolicy and then initialize your Client instance using the NewClientWithPolicy(policy *ClientPolicy, hostname string, port int) initializer.

    You can also cap the number of connections to each node by setting ClientPolicy.LimitConnectionsToQueueSize = true, so that if a connection is not available in the pool, the client waits or times out instead of creating a new connection.

    Example: configuring connection pool settings

    clientPolicy := NewClientPolicy()
    // Set maximum connections per node (default: 100)
    clientPolicy.ConnectionQueueSize = 100
    // Enforce connection limits (default: true)
    // When true, client will wait or time out instead of creating new connections
    // when the pool is exhausted
    clientPolicy.LimitConnectionsToQueueSize = true
    // Maintain minimum connections (default: 0)
    // Only set if you can configure server proto-fd-idle-ms
    clientPolicy.MinConnectionsPerNode = 10
    // Set idle timeout to be less than server proto-fd-idle-ms
    // If server proto-fd-idle-ms = 60000ms, set this to ~55 seconds
    clientPolicy.IdleTimeout = 55 * time.Second
    client, err := NewClientWithPolicy(clientPolicy, "localhost", 3000)
  • Initial connection buffer size: The client retains its buffers to reduce memory allocation. The buffers grow automatically, but you can set the initial size to avoid reallocations when the default is consistently too small. If you determine that the initial buffer size is sub-optimal for your application, set it with DefaultBufferSize, as in the sketch below.
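
    The following minimal sketch assumes DefaultBufferSize is the package-level variable described above; adjust the value to match your typical record size:

    Example: setting the initial connection buffer size

    // Start connection buffers at 256 KiB instead of the default initial size,
    // avoiding repeated reallocations for consistently large records.
    DefaultBufferSize = 256 * 1024
    client, err := NewClient("localhost", 3000)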

  • Use Bin objects in Put operations instead of BinMaps: The Put method requires you to pass a map of bin values. While convenient, each call allocates an array of bins, iterates over the map, and creates Bin objects from its entries.

    For maximum performance, use the PutBins method and pass bins yourself.

    Example: less efficient (using BinMap)

    bins := BinMap{
        "bin1": 42,
        "bin2": "value",
    }
    client.Put(nil, key, bins)

    Example: more efficient (using PutBins)

    client.PutBins(nil, key,
        NewBin("bin1", 42),
        NewBin("bin2", "value"),
    )

    The PutBins method avoids the overhead of:

    • Allocating a map for bin values
    • Iterating over the map
    • Creating Bin objects from map entries
    • Additional memory allocations

    Use PutBins for better performance, especially in high-throughput scenarios.

Enable client logging

Each Client instance runs a background cluster tend goroutine that periodically polls all nodes for cluster status. This background goroutine generates log messages that reflect node additions and removals and any errors when retrieving node status, peers, partition maps, and racks. Applications must enable logging to receive these important messages.

The client uses the logger package for all logging. Configure logging appropriately for your environment:

import "github.com/aerospike/aerospike-client-go/v8/logger"
// Set log level
logger.Logger.SetLevel(logger.INFO) // Available levels are DEBUG, INFO, WARN, and ERROR

The following example uses a custom logger:

import "github.com/aerospike/aerospike-client-go/v8/logger"
// Use a custom logger
logger.Logger.SetLogger(customLogger)

See the Logging documentation for more details.

Warm up the connection pool

After connecting to the database, the connection pool is initially empty, and connections are established on demand, which can be slow and may cause some initial commands to time out. Call the client.WarmUp() method right after connecting to the database to fill the connection pool to the required service level.

The client provides three levels of WarmUp methods:

  • func (clnt *Client) WarmUp(count int) (int, Error) - Warms up connections for all nodes in the cluster.
  • func (clstr *Cluster) WarmUp(count int) (int, Error) - Warms up connections for all nodes in the cluster (same as Client.WarmUp).
  • func (nd *Node) WarmUp(count int) (int, Error) - Warms up connections for a specific node.

The following example warms up the connection pool using the client-level method:

client, err := NewClient("localhost", 3000)
if err != nil {
    log.Fatal(err)
}

// Warm up the connection pool with 10 connections per node
warmed, err := client.WarmUp(10)
if err != nil {
    log.Printf("Warning: Only %d connections warmed up: %v", warmed, err)
}

For more granular control, you can warm up specific nodes:

// Warm up a specific node
nodes := client.GetNodes()
if len(nodes) > 0 {
    warmed, err := nodes[0].WarmUp(10)
    if err != nil {
        log.Printf("Warning: Only %d connections warmed up for node: %v", warmed, err)
    }
}

The following example warms up all the nodes in a cluster:

// Warm up via cluster
cluster := client.Cluster() // Access cluster if needed
warmed, err := cluster.WarmUp(10)
if err != nil {
    log.Printf("Warning: Only %d connections warmed up: %v", warmed, err)
}

User-defined key

By default, the user-defined key is not stored on the server. It is converted to a hash digest, which is used to identify the record. If the user-defined key must persist on the server, use the following method:

  • Set BasePolicy.SendKey to true: The key is sent to the server for storage on writes, and retrieved on multi-record scans and queries.

    writePolicy := NewWritePolicy(0, 0)
    writePolicy.SendKey = true
    client.Put(writePolicy, key, bins)
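
    When the key is stored, multi-record scans and queries return it with each record. The following minimal sketch assumes the records were written with SendKey set to true and that the stored key is surfaced through Record.Key:

    recordset, err := client.ScanAll(nil, namespace, setName)
    if err != nil {
        return err
    }
    defer recordset.Close()

    for res := range recordset.Results() {
        if res.Err != nil {
            continue
        }
        // Record.Key.Value() returns the user-defined key when it was stored,
        // or nil when only the digest is available.
        fmt.Println(res.Record.Key.Value())
    }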

Replace mode

In cases where all record bins are created or updated by a command, enable Replace mode on the command to increase performance. The server then does not have to read the old record before updating. Do not use Replace mode when updating a subset of bins.

writePolicy := NewWritePolicy(0, 0)
writePolicy.RecordExistsAction = REPLACE
client.Put(writePolicy, key, bins)

Policy management

Each database command takes a policy as its first argument. If the policy is identical for a group of commands, reuse a single policy instance instead of instantiating a new policy for each command, as in the sketch below.
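
The following minimal sketch creates one WritePolicy and reuses it for a batch of writes; the namespace, set, and bin names are placeholders:

// Create the policy once and reuse it for every write in the batch.
writePolicy := NewWritePolicy(0, 0)
writePolicy.SendKey = true

for i := 0; i < 1000; i++ {
    key, _ := NewKey("test", "demo", i)
    if err := client.PutBins(writePolicy, key, NewBin("bin1", i)); err != nil {
        // handle error
    }
}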

Set policy defaults

The following example overrides the default policy for the client:

client, err := NewClient("localhost", 3000)
policy := client.GetDefaultPolicy()
policy.UseCompression = false
client.SetDefaultPolicy(policy)
// Use nil to use defaults
client.Put(nil, key, bins)

Close Recordset

Always close Recordset query iterators when they are no longer used. Failing to close the iterator when an error occurs while processing query results may cause the query buffer to fill up and prevent server nodes from completing the query.

recordset, err := client.ScanAll(nil, namespace, setName)
if err != nil {
    return err
}
defer recordset.Close()

for res := range recordset.Results() {
    if res.Err != nil {
        // handle error
        continue
    }
    // process record
    fmt.Println(res.Record)
}

Use Operate for multiple operations

Use Client.Operate() to batch multiple operations (add/get) on the same record in a single call. This reduces network round trips and improves performance.

// Instead of multiple calls:
client.Put(nil, key, BinMap{"counter": 1})
record, _ := client.Get(nil, key)
counter := record.Bins["counter"].(int)
client.Put(nil, key, BinMap{"counter": counter + 1})

// Use Operate for atomic operations:
record, err := client.Operate(nil, key,
    AddOp(NewBin("counter", 1)),
    GetOp(),
)

Error handling

Always check for errors returned by client operations. The client returns an Error type which provides detailed information about failures:

record, err := client.Get(nil, key)
if err != nil {
    // Check for specific error types
    if err.Matches(ErrKeyNotFound) {
        // Handle key not found
    } else if err.Matches(ErrTimeout) {
        // Handle timeout
    } else {
        // Handle other errors
        log.Printf("Error: %v", err)
    }
    return
}

Tend interval

The ClientPolicy.TendInterval controls how often the client checks cluster status. The default is 1 second. Adjust based on your cluster stability requirements:

policy := NewClientPolicy()
policy.TendInterval = 1 * time.Second // Default