5 questions on real-time AI with Chief Scientist Naren Narendran
Naren Narendran, Aerospike’s Chief Scientist, answers five questions on how artificial intelligence and machine learning are shaping the modern tech landscape and how vector databases could help solve many of the issues miring today’s AI/ML applications.
Naren Narendran is Aerospike’s Chief Scientist. He has spent three decades in the science and technology space, from fundamental research at Bell Labs to leading teams and launching new products and infrastructure at Google and Amazon.
Throughout his career, Naren has had a strong interest in adapting novel technological and scientific concepts to pragmatic applications in scalable and performant hardware and software. Get to know Naren below, and learn about his view on the current state of AI, the importance of vector databases for improving large language models (LLMs) like ChatGPT, and what the future holds for AI.
Q: What brought you to Aerospike?
Naren Narendran: I have always been interested in math and theoretical computer science. In particular, dating back to my Ph.D. days, I was fascinated by algorithms and how we can use formal math to define what’s efficiently computable. And that includes taking algorithmic ideas and applying them at scale in high-performance systems. During my years at Bell Labs, I was part of an interesting group that included theoreticians and system builders for software and hardware. It was an eclectic bunch, to say the least. But the collaboration led to great results. It was a wonderful experience, and I have been applying my knowledge of algorithms and theoretical computer science to practical situations ever since.
I was fortunate to be in Google’s New York office when there were only a few dozen employees. By the time I left, there were about 5,000 employees. But those early days were incredible. We had the freedom to drive large, ambitious projects, and I was grateful to be part of the group that built some of the bigger systems that have come out of Google infrastructure and are now parts of Google Cloud.
When I moved to Amazon, I had my first deep experience getting my hands dirty with artificial intelligence (AI) and machine learning (ML). We used AI and ML to do things like organize Amazon’s e-commerce catalog in a way that made it much easier for consumers to browse and find things that they needed. So, all the things I’ve worked on through the years have involved taking algorithmic ideas and applying them to large-scale systems. And that’s what also brought me to Aerospike, which is a unique and powerful large-scale, high-performance system.
Q: What do you make of all the attention given to AI in 2023?
Narendran: 2023 has been an amazing year for AI in that we have seen some great advances and an enormous amount of public attention. But much of the fundamental work behind AI has been in the making for the last decade or so. AI, in the form of neural networks and related techniques, has been working in the background, bringing about things like image recognition and voice and natural language processing. Most people have encountered AI through smart assistants, like Alexa, and through self-driving features in their vehicles.
I think two things happened in 2023 that are significant. First, AI rose to the next level of scale. We went from being intelligent about narrow domains to being intelligent about the world. LLMs like ChatGPT now deal with expertise about the entire world as learned from the web, and they can actually talk intelligently to some extent. The neural networks that were once working in limited scenarios are now working in global scenarios. Second, programs like ChatGPT brought AI to the public eye quite dramatically, where previously it was relegated to niche technology companies and behind-the-scenes use cases. ChatGPT caught the public’s imagination and spurred quite a bit of new work and interest in the space. It remains to be seen how this will play out over the next few years as the hype fades and people get over their initial curiosity.
Frankly, ChatGPT and LLMs are not some sort of magic bullet. Some deficiencies need to be resolved. For example, ChatGPT excels at synthesizing language, taking content accumulated from a variety of sources and turning it into something holistic and presentable to an external user. But it doesn’t do a great job of understanding the context behind what it’s generating. In the next few years, we’ll hear more about complementary technologies, like vectors, that will help bring clarity and context to the results we see in AI and ML. That’s what makes this an exciting time. There will be much more to come, and far more astonishing advances in the years ahead for AI-related applications.
Q: What is the role of real-time data in AI?
Narendran: As with all human endeavors, we strive to do things as quickly and effectively as possible. For example, the invention of the printing press enabled publishers to get newspapers out to the public. It was a marvel at first. But the next step was to constantly improve the technology and processes to get the news out to readers as quickly as possible.
The same thing applies to AI. Companies are currently asking, “How can we deploy this AI tool to respond to real, instantaneous events as they happen? And how can we do it with all relevant information and in the proper context?”
Classical AI uses offline and batch processes for analyzing data. That means data is handled on a daily or weekly basis, and predictions and analyses are made from there. A variety of technologies have sought to make instant analysis available to users, but there are always trade-offs involved. That’s where Aerospike has a vital role to play.
Aerospike’s low latency and high-scale capabilities are critical aspects of this evolution of AI. They give users the confidence to know that the results they’re getting include all relevant, real-time data, delivered in the proper context. Aerospike’s planned vector search offering is an example of this, one where we can particularly leverage our latency and performance strengths.
Q: Can you describe why vectors have so much potential?
Narendran: Vector search is a new form of search that uses AI to search semantically. Instead of the traditional means of searching by keywords or pixels or parts of images, a vector is a general-purpose representation that helps us understand the semantics of the various types of objects we deal with in real life – text, images, music, video, and so on. A vector is a mathematical entity that can capture all of that, converting these diverse objects into a homogeneous way of thinking about them. Vectors capture the deeper meaning of these things, beyond the words in the text or the pixels in the image: what the text is talking about, or what the image represents. So, rather than capturing the nitty-gritty of the data, a vector captures the semantics.
The magic of vectors is their ability to capture semantics in a uniform way across all media types. That’s why they’re such a universal tool. Once you convert all of these heterogeneous media types into vectors, you can do a mathematical analysis to say which two are close to each other. In other words, the fact that two images are related is captured by the two corresponding vectors being mathematically close to each other.
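To make “mathematically close” concrete, here is a minimal Python sketch of the kind of similarity computation vector search rests on. It assumes you already have embedding vectors from some text or image embedding model; the three-dimensional toy vectors and the cosine-similarity metric are illustrative choices, not a specific Aerospike API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: near 1.0 means semantically related, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: real embedding models produce vectors with
# hundreds or thousands of dimensions, but the math is the same.
cat_photo   = np.array([0.9, 0.1, 0.2])   # an image of a cat
kitten_text = np.array([0.8, 0.2, 0.3])   # a sentence about kittens
invoice_pdf = np.array([0.1, 0.9, 0.7])   # an unrelated document

print(cosine_similarity(cat_photo, kitten_text))  # high: related despite different media
print(cosine_similarity(cat_photo, invoice_pdf))  # low: unrelated content
```

Notice that the image and the sentence live in the same vector space; that is exactly the uniformity across media types described above.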
Vector search can play a critical role in today’s world of LLMs and ChatGPT. While ChatGPT and LLMs are excellent at synthesizing text and language, they don’t understand what’s behind it. They’re trained to generate language that looks plausible. As a result, ChatGPT and LLMs can generate text that seems plausible from a linguistic perspective but is factually incorrect at times. Moreover, LLMs trained on data from the internet at large may not have access to data specific to your enterprise, which needs to be provided to the LLM as a “prompt.” Vectors are one way to provide this missing link and attain accuracy and value. By feeding LLMs the appropriate data from a vector search on your enterprise data corpus, we can generate more valid and trustworthy outputs.
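As a sketch of how that missing link works in practice (a pattern often called retrieval-augmented generation, or RAG), the toy example below embeds a question, retrieves the closest documents from a small in-memory corpus, and folds them into the LLM prompt as context. The corpus, the embed() stand-in, and the mocked LLM call are all illustrative assumptions; a production system would use a real embedding model, a vector database, and an actual LLM API.

```python
import numpy as np

# Toy in-memory "enterprise corpus": in a real system, documents and their
# embeddings would live in a vector database rather than a Python list.
corpus = [
    ("Q3 revenue grew 12% year over year.",      np.array([0.9, 0.1])),
    ("The refund policy allows 30-day returns.", np.array([0.1, 0.9])),
]

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model call (hypothetical).
    return np.array([0.8, 0.2]) if "revenue" in text else np.array([0.2, 0.8])

def vector_search(query_vec: np.ndarray, top_k: int = 1) -> list[str]:
    # Brute-force nearest neighbors by cosine similarity; a vector database
    # replaces this loop with a fast, approximate index at scale.
    def sim(v: np.ndarray) -> float:
        return float(np.dot(query_vec, v) / (np.linalg.norm(query_vec) * np.linalg.norm(v)))
    ranked = sorted(corpus, key=lambda item: sim(item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

def answer_with_context(question: str) -> str:
    # Retrieved documents become grounding context in the LLM prompt, so the
    # answer reflects enterprise data (the LLM call itself is mocked here).
    context = "\n".join(vector_search(embed(question)))
    return f"Prompt sent to LLM:\nContext: {context}\nQuestion: {question}"

print(answer_with_context("What was our revenue growth?"))
```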
But vectors have much wider applications than the current hype around LLMs. They can act as a fundamental search building block for recommendation engines, anomaly detection, and even other domains like bioinformatics.
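As one small illustration of vectors as a building block beyond LLMs, the sketch below flags an anomaly when a new vector is far from every known “normal” vector. The distance threshold and toy two-dimensional vectors are assumptions for illustration, not a prescribed method.

```python
import numpy as np

def is_anomaly(candidate: np.ndarray, known: list[np.ndarray], threshold: float = 0.5) -> bool:
    """Flag the candidate if even its closest known vector is farther than the threshold."""
    nearest = min(np.linalg.norm(candidate - v) for v in known)
    return nearest > threshold

# Toy example: normal behavior clusters together; the outlier does not.
normal = [np.array([0.1, 0.1]), np.array([0.15, 0.05]), np.array([0.12, 0.09])]
print(is_anomaly(np.array([0.11, 0.08]), normal))  # False: close to the cluster
print(is_anomaly(np.array([0.9, 0.95]), normal))   # True: far from everything known
```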
Q: Looking ahead, what changes do you expect to see?
Narendran: The neural network models that we’ve been using for the last 10 years for things like LLMs, image recognition, and natural language processing (NLP) have proven to be very useful in getting us to where we are today with the “AI revolution.” But we still haven’t found the holy grail of generalized intelligence.
It’s still not fully clear how human intelligence works. Is it a neural network underlying everything, or is it something completely different that’s yet to be discovered? I think that’s still an open question. In another decade or two, we’ll have more insights, and I believe that the current notion of neural networks will be replaced, or at least boosted, by a technology that takes us to the next level of generalized intelligence. If history teaches us anything, it’s that we’re always learning, discovering, and growing. I look forward to the discoveries and advances in the years ahead.
Explore AI/ML with Aerospike
Discover the power of Aerospike’s low latency and high-scale capabilities, essential elements for any truly valuable AI/ML application. Gain confidence in your results, knowing they incorporate all pertinent real-time data in the right context. Dive deeper into this transformative solution by exploring our Solution Brief.