
3 Critical lessons from AWS, Aerospike and Zeta Global for omnichannel real-time decisioning at scale


Stephen Faig: Welcome to today's webinar, brought to you by Aerospike, AWS, and Zeta Global. I'm Stephen Faig, director of Database Trends and Applications and Unisphere Research. I will be your host for today's broadcast. Our presentation today is titled 3 Critical Lessons from AWS, Aerospike, and Zeta Global for Omnichannel Real-Time Decisioning at Scale: Driving Human-like Personalization. Before we begin, I want to explain how you can be a part of this broadcast. There will be a question and answer session. If you have a question during the presentation, just type it into the question box provided and click on the submit button. We'll try to get to as many questions as possible, but if your question has not been answered during the show, you will receive an email response. Now to introduce our speakers for today: we have Gerry Louw, global head of Technology, Advertising and Marketing at AWS; Wilson Mai, principal software engineer at Zeta Global; and Daniel Landsman, global director of Ad Tech Solutions at Aerospike. To start things off, I'd like our audience today to get better acquainted with the companies on board and their relation to ad tech. Daniel, there are likely many folks on here that have heard of Aerospike, but can you tell us a little bit about what Aerospike does and how Aerospike became involved in ad tech?

Daniel Landsman: Yes, Stephen. More than happy to. Aerospike was founded in 2009 by former Yahoo! engineers who wanted a better way to move data at scale. Their vision was to create a platform that was high throughput, low latency, with a better TCO, and our founders reworked our data architecture from the ground up by taking advantage of new techniques to drive better efficiency at scale. We'll get into this a little more later, but Aerospike grew up in the ad tech ecosystem, since moving data at web scale is a fundamental need for ad tech companies. Some of our earliest clients were The Trade Desk and Xandr.

Stephen Faig: Understood.
Gerry, I'd be surprised if anyone on here wasn't familiar with AWS, but can you tell us about some of the exciting things you're doing in the ad tech industry today?

Gerry Louw: Hi, Stephen. I would love to. So in 2021, we delivered 3,084 new and exciting things. And two of those are of special interest for today, and we'll talk a little bit about them later. Now, 90% of those exciting things, of that roadmap, is driven by you, the customer. And so it's our customers that are helping us to actually push the boundaries and set this exciting roadmap. So what customers are telling us is that they want solutions, right? And based on that feedback, we are focused on these six solution areas that you see on the slide here today. And this is really driven by the fact that the loss of identifiers, third-party cookies and IDFA, is really forcing the advertising industry to reinvent itself. And these six solutions are focused on helping to accelerate that innovation.

Stephen Faig: Wilson, you work for a company that is purely focused on data-driven marketing technology. Can you give us a quick overview about Zeta Global and what you do?

Wilson Mai: Oh, yeah, sure. You might not have heard of Zeta, but we've been around for over a decade. Our platform contains a CDP, an ESP, and a DSP, and this gives us the ability to supply 360-degree customer file management and audience activation across all digital channels. We're an omnichannel marketing platform, which competes with other major marketing cloud providers. So we serve customers in many verticals, including eight of the 10 largest auto manufacturers, four of the five top banks, and seven of the top 10 credit card issuers, just to name a few examples. So what makes Zeta different is our proprietary data and scale. Zeta owns and operates the largest commenting platform on the internet. The platform encompasses more than 6 million sites and over 15 billion web pages in aggregate.
These sites generate over 50 million comments, 17 billion page views, and 2 billion uniques on a monthly basis. We use NLP to derive highly specific intender audiences. Out of those 17 billion page views, roughly 1.2 billion are appended back to a persistent Zeta ID, linking to over 350 million email addresses in the Zeta data cloud. And this leads me to Zeta's scale. Our platform has identity at its core. Our identity graph contains profiles on over 225 million people in the US, giving us the scale of the walled gardens. And with our unique data set, we are able to derive both identity and intent signals. And I'm one of the engineers that works on the marketing platform side.

Stephen Faig: Thank you very much, Wilson. Okay, so let's dive into our topic. What is the state of advertising and marketing technology today?

Daniel Landsman: More than happy to talk to that a little bit, Stephen. So the whole ecosystem is in flux, as Gerry alluded to. Take a look at this LUMAscape slide. This is just the display side of the ad tech ecosystem. You can see how messy and complex it is. And on top of that, there have been a lot of recent changes on the regulatory side with legislation like GDPR and CCPA, which was followed by announcements from Google to deprecate cookies at the end of '23, and Apple device IDs too. And I believe this has started a new innovation cycle in ad tech. So essentially, companies are being forced to rethink how they approach audience targeting at scale in order to remain competitive.

Wilson Mai: Yeah. I mean, just to jump in, I completely agree with Daniel. The only thing that you can count on with certainty in the digital advertising world is change. In general, we find that people not only consume more data, but they also create more data as a result. So according to Forbes, 90% of the world's data was generated in the last two years, with 2.5 quintillion bytes of data being created each and every day.
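Wilson's 2.5 quintillion bytes a day is easier to picture after a quick unit conversion (a back-of-envelope calculation using the quoted estimate, not a measurement):

```python
# Putting "2.5 quintillion bytes per day" into more familiar units.
bytes_per_day = 2.5e18                          # 2.5 quintillion bytes
exabytes_per_day = bytes_per_day / 1e18         # = 2.5 exabytes per day
terabytes_per_second = bytes_per_day / 1e12 / 86_400

print(exabytes_per_day)                 # 2.5
print(round(terabytes_per_second, 1))   # roughly 28.9 TB created every second
```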
So even handling a fraction of that data would be a challenge, and that's why technologies like Aerospike and AWS are so critical to help companies host and scale their architecture in order to keep up with today's consumers.

Stephen Faig: Understood. Gerry, your thoughts?

Gerry Louw: Yes. I think I'll just reiterate what everybody stated here. Change is the only constant, right? So the loss of third-party identifiers, cookies, and IDFA. What that means is there's a massive forcing function in the industry. Every layer of every provider in this space is changing. And so there are two factors that come out of that. One is the need for speed to market and the other is agility. And those customers that actually embrace that and lean forward on that will actually win. So from this change comes great opportunity. That opportunity requires agility and speed.

Stephen Faig: So what are the biggest challenges ad tech and martech companies are facing today?

Daniel Landsman: Well, the status quo is changing. So companies need to evolve to meet new market challenges, as Gerry mentioned, while continuing to meet their SLAs. This means processing more data but still being able to respond in short order. And Aerospike plays really well here. Wilson can speak to this a little bit more, but if you have interest here, I recently wrote a blog, which you can find on our website, that talks about this in a little more depth.

Wilson Mai: Yeah, I mean, in our use case it's more about finding innovative technologies to just keep up with the expansion of data coming in from all these digital devices, like IoT and telematics. I mean, it's a challenge on its own, but on top of that, we need to handle it in near real time and with low latency.

Stephen Faig: Wilson, how do you build an infrastructure when there are so many unknowns in the industry and the requirements keep changing?

Wilson Mai: Yeah, I mean, it is a tough challenge trying to just build for the unknown.
So I think the first step is just to break it down and build an MVP product, right? And then iterate as you learn more about the business needs and the requirements. I mean, there are key things to take into consideration, though. For example, in our use case, when we were building the Recommendations engine, we knew that we wanted it to be real time, so we had to take that into account. We had to have access to real-time identity data, to user history, and to the products for the recommendations, and we had to build around that. So we created services around each of these components, since they map nicely to the logical parts. And having these specific services basically gave us the flexibility to change the behavior without having to change the entire architecture. On the data store side, we use a schema-less design to give us the flexibility of accepting any user data without having to constantly alter the DB. And we use Aerospike for that purpose and AWS to build the POC and handle the scale as we grow.

Stephen Faig: Gerry, would you like to weigh in on this question as well?

Gerry Louw: Yes. Just expanding on that, the only constant is change, so I would say there are three things that you should keep in mind. When you plan for change, you really are planning for agility. So I would simply say, if you're not in the cloud today, get into the cloud, because it's really the only place where you can actually scale up, scale down, delete, stop, and not actually risk your CapEx in a fast-changing environment. So that's number one. Number two is you have to structure yourself for speed to actually use these opportunities, so that you can be first to market or can adapt to the changes very quickly. Now, a runner does not get fast by accident. It's the result of intent and training to actually be faster. And in the same way, you have to set your goals for speed.
And so the things that I would just call out there are managed services and serverless, because each of these can release your critical development resources to actually focus on implementing those changes. And then if I look at the last one, an MVP: again, in a fast-changing environment, spend as little time as possible to get your product, your minimum viable product, into market, because simply spending time on an edge case that may actually disappear a week or two later because your requirements have changed is not a good investment. So those three things: plan for change, structure for speed, and deliver your products in as rapid a release cycle as possible.

Stephen Faig: Thanks, Gerry. What are the critical engineering requirements, Wilson?

Wilson Mai: For Zeta, I mean, the critical requirement revolves around uptime and scale. We need to build services and products that are highly scalable and highly available. If our services are down, our clients would be impacted, and that's just not acceptable. So in order to achieve this, we needed redundancy across all layers. We have to have machines hosted in multiple AZs and databases replicated across all those AZs as well. For example, we've encountered issues before where a machine just goes bad on the Aerospike cluster. So we have AWS instance alarms set up to proactively notify the on-call engineers, and the machine would just be replaced without any interruption to the cluster nor to the service itself. Another critical element is scaling. Our traffic pattern is very spiky, which makes scaling very challenging. We need to scale up constantly to handle 10 to 100 times the traffic within a day. So we need not only to scale up, but we also need to scale really fast. Aerospike did a great job of handling that traffic pattern with low latency even at that scale. But we ran into challenges scaling our application fast enough during those surges.
So there were a couple of things that we did to solve this issue. One thing was just to turn on detailed monitoring, so that the AWS machines report metrics on a minutely basis rather than the standard five-minute interval. And that allowed us to react faster to the spikes. We also leveraged the ASG warm pools, which pre-initialize the instances to drastically reduce the boot-up time of our services. And with those changes, we were able to scale up within minutes, whereas before it took us like 10 minutes. The fast scale-up time, along with the uptime, is critical to our business.

Stephen Faig: Understood. Gerry, can you talk about architecting for change?

Gerry Louw: Yes, absolutely. So first, architect your infrastructure and your applications to be API-driven and to consist of autonomous services. And the specific reason for that is you can then change and scale these units independently, right? I also was talking about rapid scale, so then you can actually scale each of those independently. The second thing is make sure that your applications are stateless, and we'll get to why in a second. But when you're stateless, you have to persist that state somewhere. And this is really where Aerospike actually shines if you need that really, really fast sub-millisecond access to state. So when you're stateless, you can scale faster, but you can then also use more cost-effective units of compute, like spot instances. And then when you actually look at scaling, ensure that your application can scale horizontally, because rapid growth, both up and down, is really what causes delays. And then finally, keep your data as regional as possible, right? You want to avoid replicating massive centralized databases across transatlantic regions if you don't have to. So keep your data as regional as possible.

Stephen Faig: Wilson, what do you look for in technology when building out a platform?

Wilson Mai: Yeah.
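The stateless pattern Gerry describes, with session state persisted in a fast external key-value store so any instance can serve any request, can be sketched roughly like this (illustrative Python only; the dict-backed class stands in for an Aerospike namespace, and all names are invented for the example):

```python
# Sketch: a stateless request handler keeps no session data in process
# memory. Any instance, including a freshly launched spot instance, can
# serve any request, because state lives in an external store.

class KeyValueStore:
    """Stand-in for an external state store (e.g. an Aerospike namespace)."""
    def __init__(self):
        self._records = {}

    def get(self, key):
        return self._records.get(key, {})

    def put(self, key, record):
        self._records[key] = record

def handle_request(store, user_id, event):
    """Stateless: read state, update it, write it back. No instance affinity."""
    state = store.get(user_id)
    state["events"] = state.get("events", []) + [event]
    store.put(user_id, state)
    return len(state["events"])

store = KeyValueStore()
handle_request(store, "u1", "click")
count = handle_request(store, "u1", "view")  # a different instance could run this
```

Because the handler itself holds nothing between calls, instances can be added, removed, or replaced by cheaper spot capacity without losing user state.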
I mean, trying to find the right technology nowadays is hard just because there are so many choices out there. So the most important thing is that the technology has to meet your requirements. For example, if you're looking for a database where you want to do aggregations across columns, you'll want to look for a columnar DB that's tuned for that purpose. But the other aspect is to make sure that the technology handles failure scenarios well. Things will go bad, machines will go down. And in those situations, a distributed database needs to handle it well. There shouldn't be any downtime and definitely no data loss. Scaling is another aspect that we look for. It needs to be able to handle the high load and it needs to scale up easily. So when we started, our database was storing tens of millions of records, and it grew to a hundred million. And we're now in the billions. We've used Aerospike throughout the entire growth stage by just upgrading to larger boxes and increasing the cluster size. But we've kept the same infrastructure throughout those phases, and it will likely stay the same when we hit tens of billions of records. And having said that, definitely test out all your use cases just to make sure that all the criteria are met.

Stephen Faig: How did you get started on Aerospike and why did you decide on it for this piece of your platform?

Wilson Mai: Yeah, I mean, it was a while back, probably five or six years ago, when we moved over to using Aerospike. At that time, we ran into scaling challenges under high load, causing latency spikes here and there and then request timeouts. It took a lot of effort trying to scale up the database, and the fire drills were never fun. So we knew that we needed something that was more performant under load with low-latency access. We compared quite a few databases and were surprised by the latency and performance of Aerospike. And scaling up the machines was very simple.
We would basically just add in a new node and it automatically rebalances itself. We did have to tune some migration settings just to keep the latency low during that process. Other than that, there wasn't really much tuning or maintenance that we had to do on the cluster itself. So we ended up just choosing the Community Edition of Aerospike for starters. We were quite happy with it and eventually replaced most of our data stores with it. And replacing the data store actually provided some nice benefits as well. The fast look-up time basically allowed us to shift focus to other areas, like making the scoring portion of the Recommendations engine more relevant. And also, because Aerospike was just as fast as the distributed cache lookups, there was no need to add that caching layer between our APIs and the database, which drastically simplified our stack.

Stephen Faig: Daniel, what makes Aerospike's real-time data platform different?

Daniel Landsman: The Aerospike real-time data platform is uniquely architected to handle large-scale, ultra-low-latency workloads, such as real-time bidding for ad tech. The architecture was developed from the hardware layer up to have optimizations that take advantage of the latest advancements in memory, networking, and processors. So one of the aspects of the architecture is how it uses memory and flash storage. Unlike traditional in-memory architectures, the Aerospike platform optimizes for both memory and flash storage together. Essentially, the platform considers memory and flash as a single storage layer, and this allows for the speed of an in-memory architecture and the persistence of a non-volatile architecture. The speed and persistence we offer are critical for large-scale, low-latency workloads that have to be operational 24/7 globally. The Aerospike architecture enables a much higher density per node as well, which significantly reduces the cost and complexity of the infrastructure required.
We have customers who've saved millions of dollars a year switching to us. And after they have their applications in production, they tell us that it simply works, even as they put more and more data onto it. We've even had people tell us that they try to break it and it just doesn't go down. So this essentially future-proofs their infrastructure. On top of that, it's massively scalable without a massive footprint. We recently did a benchmark with AWS that was designed around two specific ad tech workloads. The beauty of this benchmark is that we used standard i3en.24xlarge instances with NVMe drives to demonstrate that Aerospike could save customers over $10 million in infrastructure costs on AWS versus a competitor's solution on AWS. In one of the workloads we tested, which was campaign data involving financial transactions, we were doing this test across 20 nodes and processed 6 billion unique records, with strong consistency given that it was financial data, at a rate of 95,000 TPS, with 100% of the reads and 99% of the writes under that millisecond threshold. So this benchmark with AWS underscores what Aerospike means by predictable high performance at petabyte scale while reducing footprint. Another thing that makes us different is you only pay for unique data on the platform. No charging for server instances and no charges for IOPS.

Stephen Faig: Understood. Wilson, as a customer of Aerospike, I'd love your take on this question as well.

Wilson Mai: Yeah, I mean, as I mentioned earlier on our traffic growth, we had some learnings during that period on our Aerospike cluster. Every so often, we would see a trend of high latencies and we weren't sure what was going on. What we discovered was that the EBS volumes that we had were running into IOPS caps. We basically just increased the size to raise the cap at the beginning, but it became quite wasteful since the disk usage was so low. So we finally moved over to the i3 instances with the ephemeral NVMe disks.
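The workaround Wilson mentions, raising the IOPS cap by over-provisioning volume size, follows from how gp2 EBS volumes allocate baseline IOPS in proportion to capacity: roughly 3 IOPS per provisioned GiB, floored at 100 and capped at 16,000 per current AWS documentation. A quick sketch of why "just make the disk bigger" works but gets wasteful:

```python
# Why growing a gp2 EBS volume raises its IOPS cap: baseline IOPS scale
# at ~3 IOPS per provisioned GiB, floored at 100 and capped at 16,000
# (check the AWS EBS documentation for your volume type).

def gp2_baseline_iops(size_gib):
    return min(16_000, max(100, 3 * size_gib))

small = gp2_baseline_iops(500)    # 1,500 IOPS
large = gp2_baseline_iops(2_000)  # 6,000 IOPS, but you pay for 2 TiB
                                  # even if disk usage stays low
```

This is exactly the trade-off that makes instance-local NVMe attractive for IOPS-bound workloads: the IOPS come with the instance, not with paid-for but unused capacity.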
Initially, we were a bit hesitant because of the potential data loss with the ephemeral disks, even though there were multiple replicas set up. What we ended up doing was basically just increasing the redundancy by creating a backup cluster via XDR. So we basically have six copies of the same data across two different clusters.

Stephen Faig: Gerry, as a partner of Aerospike, I'd love your take on this question.

Gerry Louw: So when I look at a solution and how I select it, I'll probably start off with what I'm not looking for, right? I refuse to just pick the new shiny thing. What I do look for is: what is the value proposition? Can I buy or lease this as a SaaS platform? Is it cloud-native? Is it viable from a cost perspective? Can it scale almost limitlessly? And when I look at each and every one of these categories, Aerospike is a resounding yes for me. Most importantly, when I look across the landscape of customers that actually use this on a daily basis to run their production workloads at very high volume, significantly high volume, it is just rock solid. And that for me is the key reason. That, and the performance, the low-latency performance, is the reason why I would actually select Aerospike.

Stephen Faig: Daniel, are there other areas of the ad tech and martech world that Aerospike plays a part in?

Daniel Landsman: Absolutely. But it might be worthwhile to chat a little bit more about the platform before we get into the client types and sub-use cases. So let's start a little more high level: it's really about driving business value overall and figuring out how to best deploy the Aerospike platform. It's not just a database. Aerospike is a real-time data platform that was designed from the ground up. So no matter what kind of data sources you want to capture, we have the connectors, and we can process with multiple data models from the edge to the core to the system of record.
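Wilson's six copies appear to come from simple multiplication: in-cluster replication times the number of XDR-linked clusters. The replication factor below is an assumption, since the transcript only says "multiple replicas":

```python
# How six copies of the same data can arise: copies kept within one
# cluster, multiplied across the XDR-linked clusters. The replication
# factor of 3 is assumed for illustration.
replication_factor = 3   # copies within one Aerospike cluster (assumed)
clusters = 2             # primary cluster plus the XDR backup cluster
total_copies = replication_factor * clusters

print(total_copies)  # 6
```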
And you can deploy it in the cloud or in a hybrid-type model, and you get predictable performance at any scale with five-nines uptime. And this is why we succeed. Talking a little more about the use cases: it doesn't really matter if you're an SSP, DSP, DMP, or CDP (a lot of acronyms, I know; ad tech is filled with those), Aerospike can help. So we see a lot of use cases specifically designed for profile stores, dynamic creative optimization, recommendation engines, business logic, fraud, attribution, analytics, and more.

Stephen Faig: Understood. Wilson, is there anything you'd like to add on that?

Wilson Mai: Yeah, I think one Aerospike feature that I wanted to highlight, which Zeta recently switched to, is the Kafka Connect piece. So our online data lives in Aerospike, but we have to sync the data over to our data lakes for analytical purposes and offline processing. And although it is a data lake, we do have SLAs on how quickly we want the data to be synced over. So initially, we built a process that queries Aerospike based on the last-updated timestamp to get the deltas and sync them over. And that process worked really well. It was very fast, even with millions of records being updated. But as we scaled, it was running slower and slower due to just the number of records being modified, and we had to find another way of speeding it up. Luckily, Aerospike introduced the Kafka Connect support recently and we made use of it. So after the migration, the synchronization time went down from peaks of hours to tens of minutes. And as an added benefit, since we removed those queries, the load on the cluster got a lot better as well.

Stephen Faig: Let's talk about what AWS and Aerospike offer together. Danny, you want to take this?

Daniel Landsman: Sure. So Aerospike is optimized for AWS. We have a strategic partnership and launched Aerospike on AWS for ad tech in November. And ACMS, our Aerospike Cloud Managed Service, supports AWS.
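The original delta-sync process Wilson describes, querying by last-updated timestamp before the move to Kafka Connect, might have looked roughly like this (a minimal sketch; the field names and watermark handling are assumptions, not Zeta's actual pipeline):

```python
# Sketch of a delta-sync pass: find every record modified since the last
# successful sync (the "watermark") and ship only those to the data lake.
# This approach slows down as the volume of modified records grows, which
# is what motivates moving to a change-streaming connector instead.

def sync_deltas(records, watermark):
    """Return records whose last_updated is newer than the watermark,
    plus the new watermark to persist for the next pass."""
    deltas = [r for r in records if r["last_updated"] > watermark]
    new_watermark = max((r["last_updated"] for r in deltas), default=watermark)
    return deltas, new_watermark

records = [
    {"id": "a", "last_updated": 100},
    {"id": "b", "last_updated": 205},
    {"id": "c", "last_updated": 310},
]
deltas, watermark = sync_deltas(records, watermark=200)
# deltas now holds records "b" and "c"; the watermark advances to 310
```

A streaming connector removes both costs at once: no periodic scan load on the cluster, and changes flow out as they happen instead of waiting for the next pass.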
We also have what's called a Quick Start, which I'll touch on a little more in a minute. On top of that, we have connectors for Kubernetes and more. I'll preview this and Gerry can speak to it a little bit more, but we're also working on the new Graviton instances with them, coming sometime in '22.

Gerry Louw: Okay, excellent. So Daniel, let me actually jump in, and if we can just go to the next slide there. So earlier on, I mentioned the story of two announcements in 2021 [inaudible 00:28:07]. And the first one really was the announcement of the i4 instance. And as you can see there, you can see the outcome, what it actually means for you. So Wilson, this is something that I would highly recommend for you, for instance, to use instead of the i3en. But how did this come about? We had a customer with a very significant workload, millions of queries per second. That workload actually caused some of the instances to literally melt. And so in conjunction with Aerospike, AWS defined the specific requirements that were needed to actually sustain that workload. And the outcome of that is the i4 instance that you see here. So you can think of this as an instance that is really designed for very high-performance database workloads, and it's really optimized very closely for Aerospike. So that's one thing that we did together. Daniel mentioned Graviton. So when we looked at this, the i4 instance, we also looked at what else we can bring to the table that will reduce the cost and improve the performance of other Aerospike workloads. And so we released the Graviton2 instances, which provide a 40% price-performance benefit for customers. And so then, working with Aerospike, we embarked on a journey where Aerospike will be porting their database onto Graviton2, an ARM-based instance. And we are really, really excited about this because we believe that it will simply deliver the most cost-performant database solution in any cloud.
So we are very excited about this journey with Aerospike. These are real examples of building together, being better together, and we are looking forward to keep on doing this in 2022 with Aerospike and with our customers.

Stephen Faig: Gerry, why is scale so essential for the martech and ad tech industry?

Gerry Louw: Wow, what a great question. I like to say the thing that makes ad tech fun is you have massive volumes. So anywhere from a million to 20, 30 million queries per second. Just think about how many billions that is per day. The second criterion is really low latency, sub-20 milliseconds, even sub-two milliseconds. The third one is you have to deliver these workloads at very, very low cost. And then the fourth one is this volume changes very rapidly, on a dime. In minutes, it can increase 20, 30, 40%. So all of these things combine to say you have to be able to handle this in conjunction. Now, when we look at customers, it is not unusual to see a year-over-year growth of 20 to 40%, and as much as 70%. And then you get unexpected impacts like COVID, when traffic really spiked in 2020 and then reduced in 2021. I once saw traffic year-over-year, in May, down by 28.7%. Now, the importance of scale, the ability to simply increase and decrease your infrastructure, your platform, to facilitate this, is just a basic requirement. If you can't do that, you will simply not be able to conduct business. And that's really where these two components, the cloud, AWS in this case, and Aerospike, enable customers to actually facilitate this type of scale.

Stephen Faig: Wilson, I'd love your take on this question. Scalability.

Wilson Mai: Yeah. I mean, as Gerry just mentioned, the rate of data growth is just astounding, and it's just growing day by day. This creates an ever-growing list of data points to be processed.
And to support our clients, we have to offer them solutions with insights at this scale, whether it's to drive acquisition, to generate revenue growth, or just to retain users. And today's digital modes of communication enable our clients to reach a much wider audience. But targeting and personalization are the key, and providing an omnichannel solution becomes a necessity.

Stephen Faig: How does Zeta intend to do real-time personalization at scale?

Wilson Mai: So when we designed the Recommendations engine, we did take scale into consideration. What we did was separate out the services so they can scale independently. For example, the API layer for handling the incoming requests has a different scaling characteristic than the scoring engine. It's more IO bound, whereas the scoring engine is more memory intensive and more compute bound. The services are separated out and hosted on different instance types with specs that are tailored for that purpose on AWS. We've also designed the engine to have very low latency, with a slight drop-off in accuracy as a trade-off. And we've done that by limiting the user history and by filtering out products before they get scored. We've also made sure to choose a database that can handle high load; otherwise, scaling the application would be useless if the bottleneck is on the database itself. And on the ML training pipeline side, we leverage Kafka Connect to synchronize our identity data into the data lakes much faster, to improve the accuracy of our AI model. And having this design base [inaudible 00:34:27] served us really well.

Stephen Faig: What are the lessons you learned so far, Wilson, that you can pass on to our audience today?

Wilson Mai: Wow, that's a great question. When we were building the Recommendations engine, we had to take into account that the entire pipeline is basically real time, right? The behavioral data needs to be streamed in.
It has to be processed and it has to be stored so that it can be made available to the engine within seconds. And choosing the right database and the right streaming platform was critical for us. So Aerospike is great for low-latency and high-throughput workloads, but it also needs to be run on the right hardware so it can run optimally. And AWS provided that flexibility of changing machine types within a few clicks. In our case, we learned that having NVMe drives made all the difference. And looking back, having alerts on these metrics would probably have made finding the issue much easier. As Gerry mentioned earlier, once the i4s and the Gravitons are out, we'll definitely give those a try. For the scaling part, AWS has helped us a lot, right? They have a tool called the warm pool, which basically initializes the instances on the ASG, and that helps us scale up much faster. And the last thing that I do want to mention is just keep a constant lookout for what's available, whether it's new features from Aerospike or new machine types from AWS.

Stephen Faig: Thanks, Wilson. Gerry, any lessons you can offer our audience today?

Gerry Louw: Yes. The first one is a few easy steps to reduce your cost by up to 84%, right? First, make sure your application is stateless. When you're stateless, you make sure that you can auto-scale. When you can auto-scale, convert that to use spot instances. And be aggressive on spot instances. Go for 100% spot instances. Don't be shy. And then the final step there is migrate your workload to Graviton, Graviton2. It's much easier than you think. Those steps, stateless, auto-scale, spot instances, Graviton, will reduce your cost by up to 84%. The other one I mentioned before: keep your data regional. Synchronizing data across continents increases latency and increases your data transfer costs. So keep your data regional where possible.
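Gerry's stacked savings compound multiplicatively rather than additively, which a quick calculation makes concrete (the discount percentages below are illustrative assumptions, not AWS pricing; actual spot and Graviton savings vary by workload and region):

```python
# Independent cost reductions compound: applying, say, a 60% spot
# discount and then a 40% price-performance gain from Graviton leaves
# (1 - 0.60) * (1 - 0.40) = 24% of the original bill, i.e. a 76% cut.
# The exact percentages here are placeholders.

def combined_savings(*discounts):
    remaining = 1.0
    for d in discounts:
        remaining *= (1.0 - d)
    return 1.0 - remaining

saving = combined_savings(0.60, 0.40)
print(round(saving, 2))  # 0.76
```

This is why an aggressive spot percentage matters so much in the stack: each additional discount applies to the already-reduced remainder, not the original bill.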
And then, Wilson mentioned this: experiment frequently and often, be that with new features or with new instances. So I would highly recommend, if you're not currently using them, look at the M5d instances, the i3en instances, the new i4 instances, the Graviton instances. Each and every one of you has a unique workload. And without experimentation, you cannot get to the optimal instance selection for your workload, and that can really have a significant impact on your platform performance and cost efficiency. So those are my recommendations for today.

Stephen Faig: Understood. Daniel, is there a way folks can try Aerospike?

Daniel Landsman: There is. We have multiple deployment types across AWS for ad tech. So you can do the Aerospike Quick Start, which is an automated template that can be used to deploy Aerospike clusters in your own VPC. It deploys all the ancillary elements and tools for a worry-free deployment to accelerate your app development cycle in a self-managed environment. There's also the Aerospike Cloud Managed Service, which is a risk-free approach to deploying a full environment that matches your requirements for service level, high availability, disaster recovery, and on-demand scalability, to accelerate time to value while minimizing risk and cost. And then there's the Aerospike Kubernetes Operator to deploy in hybrid-cloud or multi-cloud environments, which simplifies the installation, operation, and maintenance of Aerospike Kubernetes clusters. You can find all of these at aerospike.com/AWS, plus a reference architecture and a short video on deploying a three-node cluster in under 15 minutes. So if you want more information, you can check out those links or go to our website where you can find them. Or feel free to email me as well and I can point you in the right direction.

Stephen Faig: Thank you, Daniel. Okay, we're going to dive into questions from our viewers today.
First question: "Are there any additional ad tech trends or developments on the horizon that professionals should keep an eye on as we move further into 2022?" Gerry Louw: Okay, I'll dive in on that one. An estimated 80% of web traffic is anonymous. So without identifiers, third-party cookies, or first-party data, how will we actually be able to reach these users? Metadata about the user is becoming absolutely critical, and contextual analysis will become a key workload in the future. We already see customers investing significant time in exploring how they can obtain more contextual metadata about where a user is coming from. The second trend is legislation around user privacy. We've seen a lot of it, and it will continue, which increases the governance required. I tend to call it a tax, though I probably shouldn't. Thinking about the impact on workloads, what we see is that customers want to minimize the overhead around that governance, so they're looking to share data in ways that centralize the governance and control in one single place, rather than duplicating it across many, many copies. The final trend, again tied to the loss of identifiers and a massive increase in volume, is that optimization is now more and more dependent on machine learning for specific aspects of the pipeline. Automating your machine learning layer is a critical capability that will help you deliver that advertising intelligence in your organization, so we see a significant increase in investment in machine learning capability. Stephen Faig: Understood. Can you talk about strategies to effectively target the 80% of the open web that does not have deterministic data? Daniel Landsman: Yeah, Stephen, I'll jump in here.
So Gerry had some great points about contextual as one subsegment of targeting, but there are other possible solutions, such as interest and behavioral data, or leveraging synthetic IDs generated in the supply chain. Essentially, advertisers are going to have to take a portfolio approach, especially as identifiers become more opaque. Companies will need to try different approaches to find what works best for their particular use case, but to reach that 80% of the open web that's unidentified, you're going to have to try out the different solutions I just mentioned. Stephen Faig: Thanks for clarifying, Daniel. Gerry, is there anything you'd like to add to that? Gerry Louw: No, I think what we are seeing here is that there are multiple players. Daniel mentioned synthetic IDs, and what we see is almost an explosion of third parties providing them. The kicker is that none of them are dominant yet, so potential users of those IDs have to be flexible enough to change and adapt, working with not just one but multiple parties to provide these synthetic identifiers. Stephen Faig: Understood. And Daniel, what is the timeline for the Aerospike on Graviton release? Daniel Landsman: Sometime in '22. That's the most guidance we can give at this point. Stephen Faig: Gotcha. Well, we'll have to stay tuned. That brings us to about the top of the hour. I'd like to give a huge thank you to our speakers for coming on board today and sharing their insights and expertise. Once again: Gerry Louw, global head of Technology, Advertising and Marketing at AWS; Wilson Mai, principal software engineer at Zeta Global; and Daniel Landsman, global director of Ad Tech Solutions at Aerospike. This event will be archived, and you will receive an email once the archive is posted.
You'll be able to use the same URL to access it or share it with a colleague. Thank you everyone for joining us today, and we hope to see you again soon.

About this webinar

Today’s marketing and advertising professionals are challenged to keep pace with the ever-increasing demands of the digital consumer. For the modern consumer, generic personalization in advertising is no longer enough; they demand individualized experiences. Meeting this demand requires advertisers to develop and scale an ad tech stack able to store terabytes of data about hundreds of millions of individuals. More importantly, it requires the ability to query and act on these consumer profiles with the lowest latency possible.

During our webinar, we’ll peek under the hood of Zeta Global’s omnichannel marketing platform to see how they use Aerospike’s real-time data platform and AWS to curate real-time, individualized digital experiences, and how they envision this technology scaling to meet the exponential growth of consumer data that must then be activated across an ever-increasing network of digital touchpoints.

You’ll learn:
– How to leverage real-time behavioral data to power a recommendation engine
– How to achieve high throughput and low latency at scale without breaking the bank
– How to utilize AWS for elastic scale in the cloud

Speakers

Gerry Louw
Global Head of Technology, Advertising and Marketing, AWS
Wilson Mai
Principal Software Engineer, Zeta Global
Daniel Landsman
Global Director of Ad Tech Solutions, Aerospike