
Aerospike Vector Search enhances API framework and streaming ingest

Aerospike Vector Search introduces a fully mature API framework and streaming ingest with advanced features like real-time vector search and unique HNSW index healing. Join the product preview program now!

Adam Hevenor
Director of Product Management, AI
May 29, 2024 | 3 min read

Today, we're announcing that Aerospike Vector Search has completed key functionality, providing the features necessary to develop vector search applications. With a mature API, developers can now add, delete, and update vectors and metadata using the latest Aerospike Vector Search Python client. They can also create indexes that scale without limit, regardless of the number of vectors or their dimensionality.
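For illustration, here is a minimal sketch of that workflow in Python. The method and parameter names are assumptions modeled on the capabilities described above; consult the Python client documentation for the exact API.

```python
# Illustrative only: method and parameter names below are assumptions
# based on the capabilities described in this post, not a verbatim
# copy of the client's API.
from aerospike_vector_search import Client, types

client = Client(seeds=types.HostPort(host="localhost", port=5000))

# Create an HNSW index over a vector field.
client.index_create(
    namespace="test",
    name="product_embeddings",
    vector_field="embedding",
    dimensions=768,
)

# Add or update a vector along with its metadata.
client.upsert(
    namespace="test",
    key="product-42",
    record_data={
        "embedding": [0.0] * 768,  # replace with a real 768-dim embedding
        "title": "Trail running shoe",
    },
)

# Delete the record when it is no longer needed.
client.delete(namespace="test", key="product-42")

client.close()
```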

Enabling real-time vector search with unique HNSW index healing

Unlike other vector solutions, Aerospike Vector Search builds the Hierarchical Navigable Small World (HNSW) index in parallel, enabling horizontal, scale-out ingestion. To match the real-time nature of this streaming data ingestion, the index also requires continuous reconstruction, which Aerospike handles through a background healing process.
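From the client side, scale-out ingestion can be as simple as running multiple concurrent writers, since the server assembles the index in parallel rather than serializing writes behind a single index builder. A rough sketch, reusing the illustrative client from the earlier example:

```python
# Sketch: streaming vectors in from several concurrent producers.
# Server-side parallel index construction means ingest throughput
# scales with the number of writers and nodes.
from concurrent.futures import ThreadPoolExecutor

def ingest(batch):
    # 'client' is the illustrative client from the example above.
    for key, vector, metadata in batch:
        client.upsert(
            namespace="test",
            key=key,
            record_data={"embedding": vector, **metadata},
        )

# 'batches' is an iterable of (key, vector, metadata) batches from
# your streaming source (e.g., a Kafka consumer).
with ThreadPoolExecutor(max_workers=8) as pool:
    pool.map(ingest, batches)
```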

This release introduces a new process for healing the HNSW index. It ensures uninterrupted performance by preventing indexes from going stale due to the delays associated with batch ingest. The updates include improved cache invalidation: updates and deletes are marked so the cache stays current, which is essential for accurate, real-time search across large data feeds. Customers can allocate hardware resources as needed to meet their performance requirements.
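Aerospike hasn't published the healer's internals in this post, so the following is purely a conceptual sketch of the mechanism the paragraph describes: a background task drains a queue of records marked by updates and deletes and re-links them into the HNSW graph, keeping the index fresh without blocking queries. All names are hypothetical.

```python
# Conceptual sketch only; not Aerospike's actual implementation.
# A background healer repairs index entries invalidated by updates
# and deletes so searches never traverse a stale graph.
import queue
import time

def heal_loop(index, dirty: queue.Queue, interval: float = 0.1):
    while True:
        try:
            key, op = dirty.get_nowait()
        except queue.Empty:
            time.sleep(interval)  # idle until more invalidations arrive
            continue
        if op == "delete":
            index.unlink(key)     # hypothetical: drop graph edges to the record
        else:
            index.reinsert(key)   # hypothetical: re-run HNSW insertion
```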

Along with index healing and parallel ingestion, Aerospike Vector Search employs two other key technical innovations that differentiate it from other vector solutions. 

Geometrically distributed cache with query steering

Aerospike's geometrically distributed cache, combined with query steering, optimizes search efficiency by guiding each query to the node containing the most relevant index data in memory. This process speeds up response times by clustering cached data into related neighborhoods, ensuring queries are routed to the nodes most likely to contain the necessary data. This approach minimizes latency, even for complex search queries across extensive datasets.
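This post doesn't detail the steering algorithm itself, so the following is only a conceptual sketch with hypothetical names: route each query to the node whose cache-resident index neighborhood, summarized here as a centroid, is most similar to the query vector.

```python
# Conceptual sketch of query steering. Each node's cache-resident
# index neighborhood is summarized by a centroid; the query is sent
# to the node whose centroid is most similar to the query vector.
import numpy as np

def steer(query: np.ndarray, node_centroids: dict[str, np.ndarray]) -> str:
    q = query / np.linalg.norm(query)
    best_node, best_score = None, float("-inf")
    for node, centroid in node_centroids.items():
        score = float(q @ (centroid / np.linalg.norm(centroid)))  # cosine similarity
        if score > best_score:
            best_node, best_score = node, score
    return best_node
```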

Partitioned index

The partitioned index is another Aerospike feature that stands out, leveraging our overall storage performance and flexibility. Other products use a single index shared across all nodes, forcing users to add memory as the index grows and limiting scaling to the vertical capacity of a single node.

Aerospike breaks up the index into partitions, each aligning with a data partition. This allows the system to scale horizontally and handle a much larger index across multiple nodes. Developers can build extremely large indexes and store them using Aerospike's range of high-performance storage options, like RAM, SSDs, or hybrid memory. The result is exceptional scalability, low latency, and optimal storage performance.
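Aerospike hashes every record into a fixed set of 4,096 partitions spread across the cluster's nodes. As a conceptual sketch (the hash function and partition map below are stand-ins, not the actual implementation), aligning index partitions with data partitions means that resolving a record's node also resolves its index shard:

```python
# Illustrative sketch: resolve which node owns the index shard for a
# record. Aerospike hashes keys into 4,096 partitions; the hash and
# partition map here are stand-ins, not the actual implementation.
import hashlib

NUM_PARTITIONS = 4096

def partition_for(key: bytes) -> int:
    digest = hashlib.sha256(key).digest()  # stand-in; Aerospike uses RIPEMD-160
    return int.from_bytes(digest[:2], "little") % NUM_PARTITIONS

def node_for(key: bytes, partition_map: dict[int, str]) -> str:
    """Look up the node that owns this record's data and index partition."""
    return partition_map[partition_for(key)]
```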

Try Aerospike Vector Search now

As Aerospike Vector Search nears general availability, we invite you to join our product preview program. Be among the first to explore the capabilities of our vector search solution and discover new opportunities in data management and AI.

Webinar: How real-time data can unlock AI/ML apps

With large language models (LLMs) powering generative AI (GenAI) applications, the world is left to ponder how best to employ this generational technology. See how real-time application designs are helping customers navigate today’s volatile market pressures and more.