Four data and AI nuggets to mine at Google Cloud Next

Aerospike’s senior director of product marketing offers his POV on the biggest considerations to keep in mind while at this year’s GCN event.

Matt Bushell
Sr. Director, Product Marketing
March 27, 2024 | 7 min read

Perhaps it’s fortuitous that barely seven months separate this year’s Google Cloud Next (GCN) from last year’s. This compressed time frame allows attendees to better capture the rapid changes we’ve all experienced with generative AI (GenAI).

“Way back” in late November, I recall having conversations with technologists who had numerous questions about even the basics of vectors and had yet to grasp how GenAI might apply to their industry (in one case, pharmaceuticals and drug discovery). Fast-forward to today: people are far more familiar with vector similarity search, the utility of retrieval-augmented generation (RAG), and vector-based semantic search.

But these use cases are high-level and pervasive. Of course, you’ll want to hear how people employ GenAI, what benefits they see, and how their solutions are architected. Beyond that, there are a few less obvious things worth considering, or, as the title suggests, “nuggets to mine,” so I’ve put together this short list for you.

1. Look for accuracy/relevance at scale from your vector search technologies

A top consideration people face with vector databases is whether to pursue a purpose-built vector database or another database (e.g., a NoSQL database) with vector capabilities. Additional considerations involve performance: throughput when ingesting and indexing data, and response time when retrieving query results.

Yet none of this matters if the results are not accurate enough. Obvious, right? The worst case is the least accurate: hallucinations. We all know RAG can be introduced to tamp that tendency down, and it helps keep a model from going off the rails, but there’s more to accuracy than RAG. Given that many vector searches use approximate nearest neighbor (ANN) rather than exact k-nearest neighbor (KNN) search, there is, by definition, some level of imprecision. In short, accuracy will drop if the embedding dimensionality has to be held too low to maintain high performance. Keep in mind that vector databases implement advanced indexing strategies that organize the vector space to make retrieval both faster and more precise.

In addition, the more dimensionality you have in your vector embeddings, the more accurate your responses can be. That requires more data, plus the ability to handle and evaluate it. Your system must also be performant enough to create these embeddings and to retrieve highly dimensional ones while repeatedly searching a hierarchical navigable small world (HNSW) index until an accuracy criterion is met.
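To make the ANN-versus-KNN precision trade-off concrete, here is a minimal, library-free Python sketch (all data sizes and dimensions are invented for illustration, and the “approximate” search is mimicked by a crude random projection rather than a real index like HNSW). It computes exact KNN by brute force, repeats the search in a much lower dimension, and measures recall: the fraction of true nearest neighbors the approximate search recovers.

```python
import numpy as np

rng = np.random.default_rng(42)

def top_k(query, vectors, k):
    # Cosine similarity via normalized dot products, exhaustively scored
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q
    return set(np.argsort(-scores)[:k])

# Toy corpus: 10,000 embeddings of dimension 256 (hypothetical values)
dim, n, k = 256, 10_000, 10
corpus = rng.standard_normal((n, dim))
query = rng.standard_normal(dim)

exact = top_k(query, corpus, k)  # exhaustive KNN: the ground truth

# Crude stand-in for approximate search: random projection to 32 dimensions,
# mimicking the precision lost when dimensionality is held too low for speed
proj = rng.standard_normal((dim, 32))
approx = top_k(query @ proj, corpus @ proj, k)

recall = len(exact & approx) / k  # fraction of true neighbors recovered
print(f"recall@{k}: {recall:.2f}")
```

Real vector databases report this same recall@k metric; the point of the sketch is simply that shrinking the representation buys speed at a measurable cost in accuracy.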

2. Graph is a more viable option than you may have previously thought

Despite the industry’s emphasis on GenAI over the past 15 months, it’s crucial to understand that vectors and graphs are, at their core, data models, and graph solutions are very good at establishing connections between data points. This is inherently valuable in at least two notable real-time use cases: personalization and fraud prevention.

For personalization, knowing a prospective customer’s likes and interests can lead to more customized offers and better uptake. Ideally, you would know the user’s behavior directly (via online clicks or cookies, which are notably going away); that knowledge is deterministic. If you instead want probabilistic behavior anticipation, you need to consolidate customer data from various touchpoints, devices, and channels into a single, unified profile. This 360-degree view lets marketers understand customer behavior comprehensively and tailor more personalized, effective offers and campaigns. Note that this is somewhat analogous to the difference between a direct search and a semantic search, which is an approximation requiring more data.

For fraud prevention, it’s imperative to discover links between customers and entities (IP address, device ID, phone, email, address, merchant, beneficiary, etc.). You’ll want to know whether a relationship already exists with the customer, or even with other linked customers, and to ascertain whether all the transactions in the network are genuine. With all this considered, an identity graph can detect fraud rings, using community detection algorithms and visualization to proactively blacklist new entities.
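The identity-graph idea can be sketched in a few lines of plain Python (the customers and shared entities below are invented for illustration; a production system would use a graph database and richer community detection than connected components). Customers become linked whenever they share an entity, and groups of linked customers surface as candidate fraud rings.

```python
from collections import defaultdict

# Hypothetical customer records: customer -> entities seen in transactions
links = {
    "alice": {"device:abc", "ip:1.2.3.4"},
    "bob":   {"device:abc", "email:x@y.z"},  # shares a device with alice
    "carol": {"email:x@y.z"},                # shares an email with bob
    "dave":  {"ip:9.9.9.9"},                 # no shared entities
}

# Build the identity graph: customers are adjacent when they share an entity
entity_to_customers = defaultdict(set)
for customer, entities in links.items():
    for entity in entities:
        entity_to_customers[entity].add(customer)

adjacency = defaultdict(set)
for customers in entity_to_customers.values():
    for c in customers:
        adjacency[c] |= customers - {c}

def components(nodes, adjacency):
    # Connected components: the simplest form of community detection.
    # A large component built from a few shared entities is a ring signal.
    seen, rings = set(), []
    for node in nodes:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            stack.extend(adjacency[cur])
        rings.append(group)
    return rings

print(components(links, adjacency))  # → [{'alice', 'bob', 'carol'}, {'dave'}]
```

Here alice, bob, and carol collapse into one community through a shared device and email, while dave stands alone, which is exactly the linkage a fraud analyst wants surfaced.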

To date, the challenge with graph solutions has been scale and efficiency. You need a lot of data to make them work well, and a system has to use its infrastructure efficiently so as not to stress budgets. The key, then, is to find solutions that can manage graph data at scale, cost-effectively.

3. There are sustainable ways to handle expanding real-time data volumes with the right database

Streaming data and real-time data sets are obviously growing, so you’ll want more than stopgap techniques to manage them (evaluating streaming data and then discarding it, or dumping it into data lakes for later evaluation). Rather, there are technological advances that make dealing with real-time data more efficient than you might think.

Historically, NoSQL systems have been largely in-memory based for high performance, to the detriment of cost: RAM is expensive; you need a lot of servers or instances to manage meaningful data sizes; and you need a lot of personnel to manage the resulting infrastructure.

However, NoSQL systems have other means to persist and access data, not just in memory. The key, or trick, if you will, is doing so at speed. “Real-time” is relative to an application’s needs, but nowadays it often means single-digit-millisecond latencies, or sub-millisecond in the best scenarios.

Seek out database providers pushing the boundaries of technology to access different storage tiers (not just RAM) at speeds you know will be useful to your applications. Accessing solid-state drives (SSDs) at millions of transactions per second (TPS), or even elastic block storage (EBS) at, say, 100,000 TPS, makes growing real-time data volumes sustainable to serve.

4. Multi-model database capabilities will go a long way

As mentioned above, there are purpose-built vector databases and multi-model databases (databases that handle multiple data structures, such as key-value, document/JSON, and graph). A multi-model approach is not to be confused with a multi-modal approach, which searches across text, video, and images. There are two predominant reasons to go with a multi-model approach for your vector solution:

  1. The ability to integrate vector similarity search capabilities with the capabilities of other data models

  2. The simplicity of a single database to manage versus yet another specialty database

For the first scenario, imagine you operate a home realty website, and a prospective customer uploads a photo of a house style they like, adds other descriptors like “good schools,” then inputs the towns they’d like to live in and their price range. Right away, you can see that a vector database would work very well for finding a similar-looking home. But you’d also want SQL to filter the housing database by zip code and price, as well as a graph database for the characteristics of schools.
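The realty example above can be sketched as a toy Python query (the listings, towns, and prices are hypothetical, and a real multi-model database would push the filter and the similarity search into a single request rather than post-filtering in application code): structured attributes narrow the candidates, and vector similarity ranks what remains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical listings: a style embedding plus structured attributes
listings = [
    {"id": 1, "town": "Maplewood", "price": 650_000, "embedding": rng.standard_normal(8)},
    {"id": 2, "town": "Maplewood", "price": 900_000, "embedding": rng.standard_normal(8)},
    {"id": 3, "town": "Summit",    "price": 700_000, "embedding": rng.standard_normal(8)},
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(photo_embedding, towns, max_price, k=2):
    # SQL-style structured filter first, then vector similarity ranking
    candidates = [l for l in listings
                  if l["town"] in towns and l["price"] <= max_price]
    candidates.sort(key=lambda l: cosine(photo_embedding, l["embedding"]),
                    reverse=True)
    return [l["id"] for l in candidates[:k]]

# Only listing 1 is in Maplewood and under budget, so it is the sole match
print(search(rng.standard_normal(8), towns={"Maplewood"}, max_price=700_000))  # → [1]
```

The graph dimension (school characteristics) would add a third filter over linked entities, which is precisely why having all three models behind one query interface is attractive.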

It’s certainly easier for database operators to learn one database than three: SQL-compatible, vector, and graph. Could you model your data around needing all three? Of course. But ideally, you’d have a single database to juggle rather than three. So long as it is extremely performant and efficient, you will reap the operational benefits of this sort of database consolidation.

We hope to see you at Google Cloud Next 2024

Undoubtedly, when you go to GCN, you’re going to want to schedule sessions and walk the trade show floor to learn. I often ask people flatly, “What are you looking to learn?” or “What brought you to the show?” These are the nuggets I hope you consider unearthing while at GCN.

Visit Aerospike in Booth #1261, where we’ll have our own graph and vector database demonstrations. We’d love to see you; feel free to book a meeting or join our daily social contest running April 9-11, where you could win a $50 gift card. To enter:

  1. Go to the Aerospike LinkedIn page and find the Google Cloud Next “post of the day”

  2. Share it on your LinkedIn channel and tag the official Aerospike handle

  3. Show us your post at Booth #1261 to be entered for a chance to win a $50 gift card