Inherited Policy fields Txn and failOnFilteredOut are ignored.
public sealed class ScanPolicy : Policy
ScanPolicy() |
Default constructor. Disables totalTimeout (sets it to 0) and sets maxRetries to 5.
The latest servers support retries on individual data partitions. This feature is useful when a cluster is migrating and partition(s) are missed or incomplete on the first scan attempt. If the first scan attempt misses 2 of 4096 partitions, then only those 2 partitions are retried in the next scan attempt from the last key digest received for each respective partition. A higher default maxRetries is used because it's wasteful to invalidate all scan results because a single partition was missed. |
ScanPolicy(Policy) | Copy scan policy from another policy. |
ScanPolicy(ScanPolicy) | Copy scan policy from another scan policy. |
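For illustration, a minimal sketch of the constructors above; the recordsPerSecond adjustment is an arbitrary example value:

using Aerospike.Client;

// Default constructor: totalTimeout is disabled (0) and maxRetries is 5,
// so a single missed partition does not invalidate the whole scan.
ScanPolicy policy = new ScanPolicy();

// Copy constructor: start from an existing scan policy and adjust the copy.
ScanPolicy copy = new ScanPolicy(policy);
copy.recordsPerSecond = 1000;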
Txn |
Transaction identifier. If this field is populated, the corresponding
command will be included in the transaction. This field is ignored for scan/query.
Default: null (Inherited from Policy) |
Clone | Creates a deep copy of this scan policy. |
SetTimeout |
Create a single timeout by setting socketTimeout and totalTimeout
to the same value.
(Inherited from Policy) |
SetTimeouts |
Set socketTimeout and totalTimeout. If totalTimeout is non-zero and
socketTimeout is greater than totalTimeout, socketTimeout is set to
totalTimeout.
(Inherited from Policy) |
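A short sketch of both helpers; the millisecond values are arbitrary examples:

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Single timeout: socketTimeout and totalTimeout both become 10000ms.
policy.SetTimeout(10000);

// Separate timeouts: if socketTimeout (first argument) exceeded
// totalTimeout (second argument), it would be capped at totalTimeout.
policy.SetTimeouts(3000, 5000);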
compress |
Use zlib compression on command buffers sent to the server and responses received
from the server when the buffer size is greater than 128 bytes. This option will
increase CPU and memory usage (for the extra compressed buffers), but decrease the size
of data sent over the network.
This compression feature requires the Enterprise Edition Server. Default: false (Inherited from Policy) |
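Enabling compression is a one-line change (sketch; requires an Enterprise Edition server):

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Buffers larger than 128 bytes are zlib-compressed in both directions,
// trading extra CPU/memory for less data on the network.
policy.compress = true;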
concurrentNodes |
Should scan requests be issued in parallel.
Default: true |
failOnFilteredOut |
Throw exception if filterExp is defined and that filter evaluates
to false (command ignored). The AerospikeException
will contain result code FILTERED_OUT.
This field is not applicable to batch, scan or query commands. Default: false (Inherited from Policy) |
filterExp |
Optional expression filter. If filterExp exists and evaluates to false, the
command is ignored.
Default: null (Inherited from Policy) |
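As a sketch, a filter built with the client's expression API; the bin name "age" and the threshold are placeholder assumptions:

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Return only records where bin "age" >= 21; records failing the filter
// are skipped by the scan rather than raising an error.
policy.filterExp = Exp.Build(Exp.GE(Exp.IntBin("age"), Exp.Val(21)));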
includeBinData |
Should bin data be retrieved. If false, only record digests (and user keys
if stored on the server) are retrieved.
Default: true |
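For example, a digest-only scan can count records cheaply (sketch; the host, namespace, and set names are placeholders):

using System;
using System.Threading;
using Aerospike.Client;

AerospikeClient client = new AerospikeClient("localhost", 3000);

ScanPolicy policy = new ScanPolicy();
policy.includeBinData = false; // return digests (and stored user keys) only

long count = 0;

// The callback may run on multiple threads when nodes are scanned in
// parallel, so the counter is incremented atomically.
client.ScanAll(policy, "test", "demo", (key, record) =>
{
    Interlocked.Increment(ref count);
});
Console.WriteLine("Records: " + count);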
maxConcurrentNodes |
Maximum number of concurrent requests to server nodes at any point in time.
If there are 16 nodes in the cluster and maxConcurrentNodes is 8, then scan requests
will be made to 8 nodes in parallel. When a scan completes, a new scan request will
be issued until all 16 nodes have been scanned.
This field is only relevant when concurrentNodes is true. Default: 0 (issue requests to all server nodes in parallel) |
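A sketch limiting scan fan-out (the node count is an arbitrary example):

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Scan nodes in parallel, but no more than 4 at a time; as each node
// scan completes, a request is issued to the next unscanned node.
policy.concurrentNodes = true;
policy.maxConcurrentNodes = 4;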
maxRecords |
Approximate number of records to return to client. This number is divided by the
number of nodes involved in the scan. The actual number of records returned
may be less than maxRecords if node record counts are small and unbalanced across
nodes.
Default: 0 (do not limit record count) |
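A sampling sketch (the record budget is an arbitrary example):

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Ask for roughly 10000 records. The budget is divided across the nodes
// in the scan, so fewer may come back if node record counts are small
// or unbalanced.
policy.maxRecords = 10000;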
maxRetries |
Maximum number of retries before aborting the current command.
The initial attempt is not counted as a retry.
If maxRetries is exceeded, the command will abort with AerospikeException.Timeout.
WARNING: Database writes that are not idempotent (such as Add()) should not be retried because the write operation may be performed multiple times if the client timed out previous command attempts. It's important to use a distinct WritePolicy for non-idempotent writes which sets maxRetries = 0.
Default for write: 0 (no retries)
Default for read: 2 (initial attempt + 2 retries = 3 attempts)
Default for scan/query: 5 (6 attempts. See ScanPolicy() comments.)
(Inherited from Policy) |
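A sketch of the recommended pattern for non-idempotent writes; the host, key, and bin names are placeholders:

using Aerospike.Client;

AerospikeClient client = new AerospikeClient("localhost", 3000);

// Retrying a timed-out Add() could apply the increment more than once,
// so retries are disabled for this policy.
WritePolicy addPolicy = new WritePolicy();
addPolicy.maxRetries = 0;

Key key = new Key("test", "demo", "counter1");
client.Add(addPolicy, key, new Bin("count", 1));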
readModeAP |
Read policy for AP (availability) namespaces.
Default: ONE (Inherited from Policy) |
readModeSC |
Read policy for SC (strong consistency) namespaces.
Default: SESSION (Inherited from Policy) |
readTouchTtlPercent |
Determine how record TTL (time to live) is affected on reads. When enabled, the server can
efficiently operate as a read-based LRU cache where the least recently used records are expired.
The value is expressed as a percentage of the TTL sent on the most recent write such that a read
within this interval of the record's end of life will generate a touch.
For example, if the most recent write had a TTL of 10 hours and readTouchTtlPercent is set to 80, the next read within 8 hours of the record's end of life (equivalent to 2 hours after the most recent write) will result in a touch, resetting the TTL to another 10 hours. Values:
0 : Use the server config default-read-touch-ttl-pct for the record's namespace/set.
-1 : Do not reset the record TTL on reads.
1 - 100 : Reset the record TTL on reads when within this percentage of the most recent write TTL.
Default: 0 (Inherited from Policy) |
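A sketch matching the example above (the host and key names are placeholders):

using Aerospike.Client;

AerospikeClient client = new AerospikeClient("localhost", 3000);

// If the last write used a 10 hour TTL, any read within the final 80%
// of that TTL (the last 8 hours before expiration) touches the record,
// resetting its TTL to 10 hours again.
Policy readPolicy = new Policy();
readPolicy.readTouchTtlPercent = 80;

Key key = new Key("test", "cache", "entry1");
Record record = client.Get(readPolicy, key);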
recordParser |
Alternate record parser.
Default: Use standard record parser. (Inherited from Policy) |
recordQueueSize |
Number of records to place in queue before blocking.
Records received from multiple server nodes will be placed in a queue.
A separate thread consumes these records in parallel.
If the queue is full, the producer threads will block until records are consumed.
Default: 5000 |
recordsPerSecond |
Limit returned records per second (rps) rate for each server.
Do not apply rps limit if recordsPerSecond is zero.
Default: 0 |
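A sketch combining this rate limit with the queue size above (values are arbitrary examples):

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Cap each server node at 5000 records per second so a background scan
// does not starve foreground traffic.
policy.recordsPerSecond = 5000;

// Let up to 10000 records buffer between the receiving threads and the
// consuming thread before the producers block.
policy.recordQueueSize = 10000;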
replica |
Replica algorithm used to determine the target node for a partition derived from a key
or requested in a scan/query.
Default: SEQUENCE (Inherited from Policy) |
sendKey |
Send user defined key in addition to hash digest on both reads and writes.
If the key is sent on a write, the key will be stored with the record on
the server.
If the key is sent on a read, the server will generate the hash digest from the key and validate that digest against the digest sent by the client. Unless this is the explicit intent of the developer, avoid sending the key on reads.
Default: false (do not send the user defined key)
(Inherited from Policy) |
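A sketch of storing the user defined key on write (the host and names are placeholders):

using Aerospike.Client;

AerospikeClient client = new AerospikeClient("localhost", 3000);

// Send the user defined key with the write so it is stored with the
// record and can be returned by scans and queries.
WritePolicy writePolicy = new WritePolicy();
writePolicy.sendKey = true;

Key key = new Key("test", "demo", "user1");
client.Put(writePolicy, key, new Bin("name", "Ann"));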
sleepBetweenRetries |
Milliseconds to sleep between retries. Enter zero to skip sleep.
This field is ignored when maxRetries is zero.
This field is also ignored in async mode.
The sleep only occurs on connection errors and server timeouts which suggest a node is down and the cluster is reforming. The sleep does not occur when the client's socketTimeout expires.
Reads do not have to sleep when a node goes down because the cluster does not shut out reads during cluster reformation. The default for reads is zero.
The default for writes is also zero because writes are not retried by default. Writes need to wait for the cluster to reform when a node goes down. Immediate write retries on node failure have been shown to consistently result in errors. If maxRetries is greater than zero on a write, then sleepBetweenRetries should be set high enough to allow the cluster to reform (>= 3000ms).
Default: 0 (do not sleep between retries) (Inherited from Policy) |
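A sketch for an idempotent write that is allowed to retry (values follow the guidance above):

using Aerospike.Client;

// Give the cluster time to reform after a node failure before retrying.
WritePolicy writePolicy = new WritePolicy();
writePolicy.maxRetries = 1;
writePolicy.sleepBetweenRetries = 3000; // >= 3000ms as recommended above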
socketTimeout |
Socket idle timeout in milliseconds when processing a database command.
If socketTimeout is zero and totalTimeout is non-zero, then socketTimeout will be set to totalTimeout. If both socketTimeout and totalTimeout are non-zero and socketTimeout > totalTimeout, then socketTimeout will be set to totalTimeout. If both socketTimeout and totalTimeout are zero, then there will be no socket idle limit.
If socketTimeout is not zero and the socket has been idle for at least socketTimeout, both maxRetries and totalTimeout are checked. If maxRetries and totalTimeout are not exceeded, the command is retried.
For synchronous methods, socketTimeout is the socket SendTimeout and ReceiveTimeout. For asynchronous methods, socketTimeout is implemented using the AsyncTimeoutQueue and is only used if totalTimeout is not defined.
Default: 30000ms (Inherited from Policy) |
TimeoutDelay |
Delay milliseconds after socket read timeout in an attempt to recover the socket
in the background. Processing continues on the original command and the user
is still notified at the original command timeout.
When a command is stopped prematurely, the socket must be drained of all incoming data or closed to prevent unread socket data from corrupting the next command that would use that socket.
If a socket read timeout occurs and timeoutDelay is greater than zero, the socket will be drained until all data has been read or timeoutDelay is reached. If all data has been read, the socket will be placed back into the connection pool. If timeoutDelay is reached before draining is complete, the socket will be closed.
Sync sockets are drained in the cluster tend thread at periodic intervals. timeoutDelay is not supported for async sockets.
Many cloud providers encounter performance problems when sockets are closed by the client while the server still has data left to write (resulting in a socket RST packet). If the socket is fully drained before closing, the socket RST performance penalty can be avoided on these cloud providers.
The disadvantage of enabling timeoutDelay is that extra memory/processing is required to drain sockets, and additional connections may still be needed for command retries.
If timeoutDelay is enabled, 3000ms is a reasonable value.
Default: 0 (no delay, connection closed on timeout) (Inherited from Policy) |
totalTimeout |
Total command timeout in milliseconds.
The totalTimeout is tracked on the client and sent to the server along with the command in the wire protocol. The client will most likely timeout first, but the server also has the capability to timeout the command.
If totalTimeout is not zero and totalTimeout is reached before the command completes, the command will abort with AerospikeException.Timeout. If totalTimeout is zero, there will be no total time limit.
Default for scan/query: 0 (no time limit)
Default for all other commands: 1000ms
(Inherited from Policy) |
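A sketch tying the timeout fields together for a long-running scan (values are examples; the TimeoutDelay casing follows this reference):

using Aerospike.Client;

ScanPolicy policy = new ScanPolicy();

// Abort if any socket sits idle for 30s, but put no limit on the scan's
// total run time (the scan/query default).
policy.socketTimeout = 30000;
policy.totalTimeout = 0;

// On a sync read timeout, spend up to 3s draining the socket in the
// background so it can be pooled instead of closed.
policy.TimeoutDelay = 3000;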