
Unveiling the future of AI/ML: From generative models to vector databases

Discover the latest insights in AI/ML, from GenAI to vector databases, including key concepts, use cases, and practical guidance for successful implementation.

Steve Tuohy
Director of Product Marketing
February 7, 2024 | 5 min read

Artificial intelligence (AI) has soared over the past year, transitioning for many from future promise to present reality. Tools like ChatGPT have catalyzed this shift, marking a significant moment for both consumers and enterprises. The role of databases in this AI revolution is paramount. Recently, I had the opportunity to moderate a dynamic discussion on AI market trends with Forrester’s VP analyst, Mike Gualtieri, and Aerospike’s Head of Product, Lenley Hensarling. Their collective insights, drawn from conversations with numerous enterprise leaders, were eye-opening. The discussion spanned three main areas:

  • An overview of the market and essential technical definitions

  • Successful customer use cases in AI

  • Practical tips for embarking on AI initiatives, emphasizing the necessary tools and infrastructure

GenAI and vector databases 101: Market and definitions

Our discussion kicked off with Mike and Lenley reflecting on the AI buzz since the advent of large language models (LLMs) like ChatGPT, but also how much progress pre-dates this wave. Mike joked, “It’s hotter than it was hot!” Meanwhile, Lenley emphasized that while generative AI (GenAI) has indeed captured global attention, AI and ML (machine learning) have been evolving rapidly in the enterprise sector for quite some time.

This set the stage for delving into key concepts and innovations, ranging from LLMs and GenAI to vector databases, complete with their vector embeddings and similarity search algorithms. Mike put it in terms of Def Leppard, AC/DC, and Charles Dickens – turning music and literature into vectors and calculating similarity scores.
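To make that idea concrete, here is a minimal sketch of a similarity score. The vectors below are made-up, low-dimensional stand-ins for real embeddings, which typically have hundreds or thousands of dimensions, but the math is the same.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; real embedding models produce far
# larger vectors, but the similarity calculation is identical.
embeddings = {
    "Def Leppard":     np.array([0.9, 0.8, 0.1, 0.0]),
    "AC/DC":           np.array([0.8, 0.9, 0.2, 0.1]),
    "Charles Dickens": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means 'closer' in embedding space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["Def Leppard"]
for name, vec in embeddings.items():
    print(f"{name}: {cosine_similarity(query, vec):.3f}")
# The two rock bands score near 1.0 with each other; Dickens scores much lower.
```

The two bands land close together in vector space while Dickens sits far away; that comparison, run across millions of items, is what a vector database automates.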

The conversation then pivoted to whether enterprises need a specialized vector database. Mike and Lenley engaged in a rich discussion on the topic. Mike presented Forrester’s view on multi-model databases, highlighting the challenges of managing multiple data stores as capabilities expand. Lenley emphasized the importance of not just storing but also indexing vectors and conducting similarity searches efficiently, especially for handling millions of searches per second, which truly tests a vector database’s mettle.
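For a sense of what "indexing vectors" involves, here is a rough sketch using the open-source hnswlib library as a stand-in. It builds an HNSW approximate nearest-neighbor index, one common indexing technique, and is not meant to reflect any particular vector database's API.

```python
import numpy as np
import hnswlib  # pip install hnswlib

dim, num_items = 128, 100_000
vectors = np.random.rand(num_items, dim).astype(np.float32)  # stand-in embeddings

# Build an HNSW index: an approximate structure that trades a little recall
# for dramatically faster queries than brute-force comparison.
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=num_items, ef_construction=200, M=16)
index.add_items(vectors, np.arange(num_items))
index.set_ef(64)  # query-time speed/accuracy knob

query = np.random.rand(1, dim).astype(np.float32)
ids, distances = index.knn_query(query, k=5)   # top-5 most similar items
print(ids, 1 - distances)  # hnswlib returns cosine distance; 1 - d is similarity
```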

Expanding use cases – RAG, personalization, and fraud detection

Both Mike and Lenley have broad and deep visibility into customer successes in AI over the past year and beyond. One architecture supporting enterprise GenAI adoption in the past year is retrieval-augmented generation (RAG). Lenley described RAG as a means of customizing LLMs for specific organizational needs, akin to how management consultants like Deloitte or McKinsey tailor their advice to a company’s unique context. RAG employs vector embeddings derived from an organization’s proprietary data, enhancing the responses of pre-trained models with relevant, customized information.
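A bare-bones sketch of the RAG pattern might look like the following. The `embed` and `call_llm` helpers here are placeholders standing in for whatever embedding model and LLM an organization actually uses.

```python
import numpy as np

# Placeholder stand-ins: in practice these would call a real embedding model
# and a real LLM; here they just keep the sketch self-contained and runnable.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

def call_llm(prompt: str) -> str:
    return f"[model response to a {len(prompt)}-character prompt]"

def answer_with_rag(question: str, docs: list[str], k: int = 2) -> str:
    """Embed the question, retrieve the k most similar internal documents,
    and fold them into the prompt sent to the pre-trained model."""
    doc_vecs = np.stack([embed(d) for d in docs])
    q = embed(question)
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(docs[i] for i in np.argsort(scores)[::-1][:k])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_with_rag("What is our refund policy?",
                      ["Refunds are issued within 30 days.",
                       "Shipping takes 5-7 business days.",
                       "Support is available 24/7."]))
```

In production, the document embeddings would live in a vector database rather than being recomputed per question, and retrieval would run as a similarity search against that index.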

Mike pointed out that RAG particularly excels in personal productivity applications. He also underscored the importance of real-time capabilities in driving enterprise projects and innovation. We delved into personalization and fraud detection, AI/ML-driven areas that depended on high-performance databases even before the rise of generative AI. These applications, often relying on real-time databases as feature stores for both training and inference phases – and now potentially incorporating vector data – showcase the evolving landscape of AI in business.

By leveraging AI, fraud detection systems can analyze patterns and anomalies in transactional data, enhancing the accuracy and speed of results in dynamic environments. While customers have applied real-time databases to fraud detection for years, vector databases can efficiently feed these models by combining many features into a single high-dimensional vector (a simple sketch of this idea appears below).

Further, Mike and Lenley discussed the emergence of small language models and generative engineering, a concept that recognizes the unique ‘language’ inherent in various engineering disciplines. They emphasized that the innovations brought about by LLMs are set to influence and transform these fields significantly.
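Here is the feature-vector sketch referenced above: a few made-up transaction attributes are flattened into one vector and scored against a customer's recent behavior. Real fraud systems use far richer features and trained models rather than a simple distance, so treat this purely as an illustration.

```python
import numpy as np

def transaction_to_vector(txn: dict) -> np.ndarray:
    """Flatten a handful of made-up transaction features into one vector.
    Real fraud models use hundreds of engineered features."""
    return np.array([
        txn["amount_usd"] / 1_000.0,             # scaled spend
        txn["merchant_risk_score"],              # 0..1 risk from a lookup
        float(txn["is_foreign"]),                # binary flags become floats
        txn["seconds_since_last_txn"] / 3600.0,  # recency in hours
    ])

def anomaly_score(txn_vec: np.ndarray, history: np.ndarray) -> float:
    """Distance from the customer's average recent behavior; higher = more unusual."""
    return float(np.linalg.norm(txn_vec - history.mean(axis=0)))

history = np.array([transaction_to_vector(t) for t in [
    {"amount_usd": 42, "merchant_risk_score": 0.1, "is_foreign": False, "seconds_since_last_txn": 86400},
    {"amount_usd": 15, "merchant_risk_score": 0.2, "is_foreign": False, "seconds_since_last_txn": 43200},
]])
suspect = transaction_to_vector(
    {"amount_usd": 2500, "merchant_risk_score": 0.9, "is_foreign": True, "seconds_since_last_txn": 30})
print(f"anomaly score: {anomaly_score(suspect, history):.2f}")
```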

Practical guidance for implementing and scaling AI projects

The final segment of our webinar offered practical guidance on the infrastructure needed for successful AI projects. Mike divided the process into three workloads: data preparation, training, and inference. He noted a shift in focus, explaining that while GenAI and other models are readily accessible on platforms like Hugging Face, with much of the training complexity abstracted away, the real challenge lies in scaling these solutions. Data preparation often involves converting diverse data into thousands of ML features, and inference demands real-time retrieval of multiple variables from different locations at scale.
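As an illustration of that inference-time retrieval, the sketch below uses a hypothetical in-memory `feature_store` dictionary in place of a real low-latency database. The point is assembling a single model input from features stored under several entities in one fast pass.

```python
import numpy as np

# Hypothetical in-memory feature store keyed by entity ID. In production this
# would be a low-latency database serving the same features used in training.
feature_store = {
    "customer:1001": {"avg_order_value": 37.5, "orders_last_30d": 4, "account_age_days": 912},
    "merchant:77":   {"chargeback_rate": 0.004, "category_risk": 0.2},
}

def fetch_features(entity_ids: list[str], names: list[str]) -> np.ndarray:
    """Assemble one model input vector from features stored under several entities."""
    merged = {}
    for entity in entity_ids:
        merged.update(feature_store.get(entity, {}))
    return np.array([merged[name] for name in names])

# At inference time, a single scoring request pulls features for every entity
# it touches (customer, merchant, device, and so on) before calling the model.
x = fetch_features(
    ["customer:1001", "merchant:77"],
    ["avg_order_value", "orders_last_30d", "chargeback_rate", "category_risk"],
)
print(x)  # ready to hand to the trained model
```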

Lenley highlighted LexisNexis’ ThreatMetrix product as an excellent example. It processes vast data volumes effectively and economically, extending its analysis of behavior from hours to months for threat detection.

In generative AI, Lenley touched on the concept of semantic caching as a way to conserve resources: checking vector data in a cache to reduce calls to foundational models (sketched below).

Concluding the discussion, Mike and Lenley touched on the practicalities of bringing AI applications to production. Mike shared insights from financial services clients using A/B testing strategies, acknowledging the probabilistic nature of models and the challenges in achieving absolute certainty in testing.
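Here is the semantic caching sketch mentioned above: before calling the foundation model, compare the prompt's embedding against previously answered prompts and reuse the stored response when it is similar enough. The 0.95 threshold and the helper signatures are assumptions for illustration, not a prescription.

```python
import numpy as np

cache: list[tuple[np.ndarray, str]] = []   # (prompt embedding, cached response)
SIMILARITY_THRESHOLD = 0.95                # assumption: tune per workload

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(prompt: str, embed, call_model) -> str:
    """Return a cached response for semantically similar prompts; otherwise
    call the foundation model and cache the new result."""
    q = embed(prompt)
    for vec, response in cache:
        if cosine(q, vec) >= SIMILARITY_THRESHOLD:
            return response                # cache hit: no model call needed
    response = call_model(prompt)          # cache miss: pay for one model call
    cache.append((q, response))
    return response
```

Because near-duplicate questions are common in many workloads, even a simple cache like this can noticeably reduce foundation-model spend.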

More to learn

I encourage you to watch the full webinar, How real-time data and vector can unlock AI/ML apps. It sheds light on the complex world of AI/ML and introduces future considerations like explainability, which is crucial for understanding AI decision-making. As businesses navigate this landscape, understanding vector databases, learning from customer experiences, and deploying strategically are key steps that can help organizations harness the power of AI/ML. The vast potential for innovation and transformation across industries signals an exciting phase in AI and ML’s evolution.