
Office Hours: What’s new in Aerospike 7


George Demorest:

So welcome, everyone. This is George Demorest from Aerospike, and welcome to the Office Hours for the Aerospike Database 7 announcement. With me is Ronen Botzer, who's going to do most of the talking, thankfully. And we're going to talk about what we just announced on November 20th, and that is Aerospike Database 7. Just a quick catch-up on Aerospike. The database features a smart client that delivers high performance with a single hop to the data. It's a developer-friendly environment. It is a real-time database that handles multiple data models and SQL access. As you'll hear today, it now features a unified storage engine format, and it works on any cloud, including private cloud and on-prem.

And it is part of the Aerospike Real-time Data Platform. So that's the commercial message, the intro for the company. But today we're going to talk about what we just announced, and that is Aerospike Database 7. You can see the headline from the press release there, but I'm not going to steal Ronen's thunder. I'm just going to talk at the highest level. The unified storage engine format is the biggest piece of work, I would say, that we accomplished for this release. What it really means is that we have bolstered our in-memory database capabilities and improved a lot of things for in-memory namespaces, including warm restart, in-memory compression, and better performance. And I'll let Ronen tell you why that is.

It's not so obvious until you explain it. But generally speaking, a lot of people actually think we're just an in-memory database, and that is actually not so. We have three different storage models, and in-memory is one of them. Of course, as we know, in-memory is about having as much DRAM as possible. It's the most expensive. It's the fastest. And it's what lets other in-memory databases talk about how real-time and how fast they are. But it is expensive. We offer three different storage engines, including All Flash, which is the most cost-effective; it takes advantage of flash and NVMe devices and delivers incredible price performance. And the third is our patented Hybrid Memory Architecture. All three of these storage engines have been in place for years, but the work on creating a unified storage format means that the in-memory model got some upgrades for this release. So about those upgrades, I'm going to turn it over to Ronen, who lives and breathes the Aerospike Database and knows it probably better than anyone else. Ronen, welcome. Tell us about Aerospike 7.

Ronen Botzer:

Okay, so Aerospike 7 is several different things. One of them is work that we've been doing in the background on our way to multi-record transactions, something we've talked about before. But the main thing with 7.0 is that it is a fantastic in-memory database. People have used Aerospike as an in-memory database for different reasons in the past, even though the most common deployment is the hybrid memory deployment, where the primary index and, typically, the secondary indexes, if somebody uses them, are in memory and the data is served from SSD. That is the most widespread deployment, even though about 30% of our customers also use in-memory namespaces for certain use cases. Now, we have several customers that are 100% in-memory with all their applications. And that typically is because they're deploying on environments where they can't get reliable SSDs; they don't really know what the hardware is.

So for example, our partners that are creating software where we are part of their tech stack: they deploy us but can't force their customers to get any specific type of hardware. The one thing you can count on, obviously, is that you're going to have memory, network cards, et cetera. So in their case, they rely on in-memory as a unifying way to deploy Aerospike without worrying about the hardware underneath it. Because otherwise, when you're in hybrid memory, we do rely on the performance of the drives in order to get sub-millisecond latencies. So you want to have decent SSDs, and not everybody can necessarily enforce that. That said, there is a big difference here.

Once we unified the storage engine, the exact same storage engine and storage format is applied to data on SSD, data on persistent memory, or data in memory, in RAM. It's the same type of write-block storage, it's the same mechanism for defrag; we unified all of that. But we get some side effects that are actually fantastic for in-memory databases. The first is that in-memory data will persist without local drives, because it now lives in shared memory segments. This means that it'll survive restarts of the Aerospike database. If you roll asd, let's say you're patching Aerospike from 7.0.0 to 7.0.1, you take down your node, that memory does not evaporate; it's not tied to the process. Just like in enterprise versions of Aerospike the indexes live in shared memory, now, if you define an in-memory namespace, the data is going to be living in shared memory as well.

This means you can now use the Aerospike Shared-Memory Tool (ASMT) to back up and restore these shared memory segments in case you need to actually reboot the whole host. So if you are patching the kernel and it requires a restart, you can use ASMT not just to back up the indexes, you can also back up the data storage. You save it to disk after you do a clean shutdown, and then when you restart the whole machine and the shared memory segments are gone, you use ASMT to restore them back into memory. And now you get a fast restart.

So in a warm start, or fast restart, there is no need to rebuild the indexes. We had a clean shutdown, so we know the index is synced to the data. All we need to do on restart is scan the index and rebuild a bunch of statistics, and we're ready to go. In the past, in-memory namespaces couldn't do that. You had to reload the data, either over the network if the namespace had no persistence, or from disk into memory if it had storage-backed persistence. And the node doesn't finish restarting until all the namespaces are recovered, so we would end up waiting on those in-memory namespaces.

Next-

George Demorest:

Actually, let me interrupt you for a moment, Ronen. For one thing, I forgot to mention that this is an office hours format so people can ask questions whenever they like and we'll try to get to them right away. So someone asked what ASMT is, for instance.

Ronen Botzer:

Okay, so ASMT: if you go to the tools section of the Aerospike docs, you'll see ASMT in there. It is the Aerospike Shared-Memory Tool. Even today, if you're on Enterprise Edition or Standard Edition, any of the non-Community editions, the indexes are stored in shared memory, and the tool allows you, after a clean shutdown, to copy them to a file. If you just take asd down and bring it back up, you did your RPM patching and brought it back up, you get a fast restart of Aerospike, no problem. But if you are rebooting the whole machine, it is likely to be faster to use ASMT to copy the shared memory segments out just before the reboot. It's all in the documentation how to do it. It's very simple.

Save them to EBS if you're in AWS, or any type of persistent storage, and then after you reboot your machine you reverse the operation with ASMT and restore them back into shared memory. Now you get a warm start of asd. So it tends to speed up the process quite a bit. And now it can also handle the data storage for an in-memory namespace.

George Demorest:

Got it, thanks.

Ronen Botzer:

Okay, next. You have the option of inline compression. Compression, in the past, was an option in the enterprise versions of Aerospike for either data in PMEM or data on SSD, but it did not apply to in-memory. Now, because we're using the exact same storage engine and storage format, you can choose Zstandard, Snappy, LZ4, or none if you don't want it. That allows you to squeeze a lot more data into the same amount of RAM.

So specifically for in-memory, where it's quite constrained, usually no more than half a terabyte or 768 gigs, that means you can put more into the same space. And then Aerospike 7, as an in-memory database, is extremely fast and very, very stable. We are now using our own continuous defragmentation for in-memory data, the same thing that is applied to disk and PMEM. One of the problems for in-memory databases that rely on jemalloc is that the fragmentation is completely out of our hands. It's done by this other system, sometimes it leaks, your heap usage grows, and you can't really do much about it; we can't really control the defragmentation, and there's this whole heap efficiency problem. Defragmentation got much, much better: the same tried and true system that Aerospike has for data on SSD or data on PMEM is now applied across the board to every storage engine, including in-memory.

The other thing is that in-memory data storage is completely mirrored to the optional disk-based persistence. So if you have in-memory and you want to use disk-based persistence, it's completely one-to-one mirrored, so capacity planning gets a lot simpler. But also, we don't really touch the disk except to write new blocks, so only when we write or defragment. We don't use the disk for defragmentation anymore. In the past we had to defragment the disk separately from in-memory, with a lot of reads of large blocks, and all of that would actually cause back pressure. It's the same thing if you're using a competitor that is persisting to disk: the disk becomes the limiting factor on how fast you can write, because it starts applying back pressure. You may write really quickly [inaudible 00:12:41]

... a log of the operations, or however long it takes to take snapshots. Now in this case, we don't [inaudible 00:12:52] disk. We do warm starts. If there's a warm restart, we don't need to look at the disk at all. If there was a crash, in the case of in-memory with persistence, then we do look at it. But again, because all of this is mirrored, it's not causing back pressure while it's running. Now, with in-memory without storage persistence, we can restart from shared memory even when you crash. So even when you crash, it's not like we toss the data; we restart off shared memory, which is basically treated like a drive. It's very cool for anybody that has used in-memory databases before. You get the fast restarts. You get to operate without disks at all, which means much cheaper instances in most cases compared to in-memory with persistence. And then you also get this whole lack of back pressure and the ability to just restart from crashes, basically treating memory as a drive. I think that's it for the features.

George Demorest:

All right, I think this next slide you've already just covered, but I want to talk about developer features, because when you and I first started talking about the release, you spoke about this release being the first of several that are going to really focus on improving life for developers. What have we done in this release, and what does it mean for the follow-on releases?

Ronen Botzer:

Okay, so on the developer side of the world, you can now index and search blobs. People have been doing that, keeping references to other records, their digests basically, but there was no way to index them. You had to cast them into a hex string or something like that, which takes a lot more space. Now you can just take your 20-byte digest reference to another record and index and query on it. So that's pretty nice. In future releases, we're going to allow you to index other types of data as well, so list and map data, et cetera. With 7.0 you can index strings, integers, and blobs, and we're going to add the other data types in upcoming releases.
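
For illustration, here's a minimal sketch of what indexing and querying a blob bin could look like with the Aerospike Python client. The index_blob_create call and passing a bytes value to the equality predicate are assumptions modeled on the existing index_string_create and index_integer_create calls, so check your client version's documentation; the namespace, set, and bin names are just placeholders.

    # Minimal sketch (assumptions noted above), not production code.
    import aerospike
    from aerospike import predicates as p

    client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

    # Store a reference to another record as its raw 20-byte digest (a blob bin).
    ref_digest = bytes(aerospike.calc_digest("test", "users", "alice"))
    client.put(("test", "edges", "edge1"), {"ref": ref_digest})

    # Build a secondary index on the blob bin (assumed call, mirroring the
    # string/integer index methods), then query for records with that digest.
    client.index_blob_create("test", "edges", "ref", "edges_ref_idx")

    query = client.query("test", "edges")
    query.where(p.equals("ref", ref_digest))
    for _key, _meta, bins in query.results():
        print(bins)

    client.close()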

Now the other cool thing is, if your data is stored in key-ordered maps, you can now choose to persist the index. It's a flag that has been added to the developer API for map operations. There's a map policy that controls the type of the map, whether it's unordered or key-ordered, and now you can also tell it to persist the index. This is a space-for-speed trade-off. If you have extra space on disk and you want faster operations, the index over the map keys, or the map values if you're key-and-value ordered, will now be saved to storage along with the record. That allows all kinds of read operations, like find by key range, find by rank, things like that, to work a lot faster, because we don't have to rebuild the index when we load the record from storage. And it doesn't matter if it's in-memory or on disk or on PMEM; again, all of these things are the same now.
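
As a rough sketch, this is how a key-ordered map and a key-range read might look from the Python client. The map_order policy key and the map operation helpers are standard, but the spelling of the new persisted-index flag is left as a commented-out assumption, since the exact policy key name should be verified against your client version's docs; names here are placeholders.

    # Minimal sketch of a key-ordered map, with the 7.0 persisted-index flag noted as an assumption.
    import aerospike
    from aerospike_helpers.operations import map_operations as mo

    client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()
    key = ("test", "profiles", "user42")

    map_policy = {
        "map_order": aerospike.MAP_KEY_ORDERED,  # store the map key-ordered
        # "persist_index": True,  # assumed spelling of the new persisted-index flag;
        #                         # uncomment once verified against your client docs
    }

    # Write a couple of entries under the key-ordered policy.
    client.operate(key, [
        mo.map_put("events", "2023-11-20", {"action": "login"}, map_policy),
        mo.map_put("events", "2023-11-21", {"action": "purchase"}, map_policy),
    ])

    # Range read of the kind that benefits from a persisted map index on cold reads.
    _key, _meta, bins = client.operate(key, [
        mo.map_get_by_key_range("events", "2023-11-01", "2023-12-01",
                                aerospike.MAP_RETURN_KEY_VALUE),
    ])
    print(bins["events"])

    client.close()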

So this is an option developers can tweak in their application, talking obviously to the operations people: trade some space for better performance when you're storing JSON documents as a map, or anything like that, which is pretty nice. Now on the other side, we have multi-tenancy improvements. We already have all kinds of multi-tenancy control mechanisms, like role-based access control, rate quotas, quotas on set size, things like that. We've added a couple of cool new features. One of them is raising the number of unique sets per namespace from 1K to 4K. And the other thing, which is a much bigger deal, is that we removed the limit of 64,000 unique bin names per namespace; that has completely gone away with Aerospike 7. Again, it doesn't matter if it's in-memory storage or on disk or PMEM, this is across the board. Your applications no longer have to worry about how many unique bin names they're using. They're unlimited now.

George Demorest:

Got it. So apparently you have a demo to show. So I'm going to stop sharing my screen and you're going to share your demo screen, is that right?

Ronen Botzer:

Yes, yes.

George Demorest:

All right, here we go.

Ronen Botzer:

Try to do that.

George Demorest:

It's us.

Ronen Botzer:

Okay, I think this is happening.

George Demorest:

Yeah, looks good.

Ronen Botzer:

Okay. So we have a cluster of six r7g Graviton3 instances with no flash drives. All the data and metadata is in shared memory. The cluster has a billion objects of about a kilobyte each, replication factor two. We're generating 3 million reads per second and 1 million writes per second, and the CPU is at 42%. We're going to take down a node.

So we took down one of the nodes in the cluster. We now have a five node cluster. You can see that the cluster quickly stabilizes. And we continue to serve the load from the clients, but with no change in the latencies.

Now, there's a slight change: the CPU has gone up to about 50%. It was 42% before, roughly, across the nodes. Note the compression ratio of 0.463 at the top right of the dashboard; we're using half the memory that we would have been using without compression. Similarly, we chose to use two thirds of the memory for storage instead of leaving half of it for defragmentation, which is the default. Both of these are a trade-off of higher CPU for better memory utilization. But with the extra cycles that this new Graviton3 CPU gives us compared to the older Graviton2s, we can afford to do that at this 4 million transactions per second workload. As you can see, the latencies haven't really been affected at all. They're pretty fantastic. We're at around 600 microseconds at the 99.9th percentile, even with a node down.

Let's assume that we took this down as part of a regular upgrade to patch Aerospike on it. Once we're done applying the new RPM or DEB package, we want the node to return to the cluster as fast as possible. So we brought the node back up, and you can see just how quickly it came back into the cluster. In general in Aerospike, the node doesn't start serving traffic until all the namespaces have been restored. So this means we did a warm start, finished warm starting, and then immediately took work back. That is an extremely fast restart for an in-memory database. We did not need to load anything from disk, did not need to replay an append-only file, or an RDB snapshot plus an append-only file, anything like that. The data is sitting in shared memory, it's treated like a drive, and we just reattached to it and it's good to go.

You can see how quickly it came back and went right back into the mix with the workload. The latencies are, again, fantastic. The CPU is going to start coming down as soon as we're done with the delta migrations, which compare what has been changing, because we're doing a million writes per second. So even though I took the node down for a short amount of time, it now needs to compare the versions of the records that have changed, in the partitions that were updated while it was away. So now we're doing delta migrations: we're just double-checking and only keeping the newest version of each record.

If you didn't have that high a write load, this would go away quickly. But you can see how the CPU is now dropping again. Once these partitions get double-checked and we make sure we only keep the latest version of the records for the partitions that were updated while the node was down, it'll go back down to around 42%. And if I wanted to take the machine down completely, I would have used ASMT after the shutdown to write the shared memory to EBS, and then recovered from it. So that was my demo.

George Demorest:

Great. We have another question: can we upgrade from 5.6 Enterprise to 7.0 directly, or do you have to migrate to 6.0 first?

Ronen Botzer:

You need to be aware of the changes in 6.0. For example, there's a new four-byte end marker for records that we use for all kinds of internal purposes, to verify the record is not corrupted by cosmic rays or whatever. That requires you to check that you don't have records that are right on the edge of the write block. So we have a description of upgrading to 6.x, and if you look at the 7.0 upgrade instructions in our docs, you will see all the previous things you need to note. But obviously, if you're an enterprise customer, please open a support case and ask. The documentation, if you want to read it first, is there. There are upgrade instructions for 7.0; you can take a look at that. It'll work.

Yeah. We also have our announcement blog for Aerospike 7, and my technical blog post is up there with a lot more details about why we did this, what you need to pay attention to in the config, metrics that have changed, things like that. It's a breaking change, so be aware of that. But the config has been made a lot simpler, and capacity planning is a lot simpler. It's still a bit weird to come from earlier versions of Aerospike and wrap your head around it. If you're completely new to Aerospike, this makes a lot more sense. It's a lot simpler to understand when do I evict, how do I set my eviction thresholds if you're using evictions, when do I stop writes, and what do the stop writes even come off of? It's all extremely clear with 7.0. It does take a bit of adjustment.

So I wrote that in the technical blog post, and I hope that it's clear. It took even me a bit to wrap my head around it and explain it well. So if you're really, really used to Aerospike, just be aware of the config changes and please read the technical blog post. The readmes and the upgrade instructions are also pretty clear. But it's a little funny how once you're used to something, you're used to all its quirks and you operate within that mental model. For new people or for new deployments, I believe it's a lot simpler to configure. So take a look at that.

George Demorest:

Okay, we have another question that has come in from Ken, and that is: with the removal of memory-size in 7.0, is it safe to assume that in a multi-tenant environment with multiple namespaces, one group could cause evictions and stop-writes for all other groups and namespaces? Unlike now, where they would have an impact on their own namespace if running-

Ronen Botzer:

Hey Ken, I'm going to switch to my phone. I'm at AWS re:Invent and, for whatever weird reason, we are all crammed into one room, and I need to leave right now. So I'm going to take you guys on a walk through re:Invent. I'm going to switch to my phone to answer these. So just a moment.

George Demorest:

All right, you got the question though, right?

Ronen Botzer:

I believe so. You may need to swap me there.

George Demorest:

I'm not sure his phone is in the... He should be back online in a second.

There he is.

Ronen Botzer:

Hey, I'm back. Pardon me, I am leaving my room because other people in the company also want to use it. All right, so let's... Sorry, I'll come get that later. All right, let me walk you through the gorgeous downstairs of AWS re:Invent. Okay, so with regards to memory, what's really different in how things work is that you've got to remember that shared memory works like a drive now. SSDs are pre-allocated: you have a partition, you get your partition. It's not something that changes on the fly. It's pre-allocated, so you do capacity planning and you decide how much disk you're going to use and all that.

So that's fine; that does not change at all if your data is on SSD. What changes is what the data storage means. The new eviction threshold and stop-writes are based on the size of the storage for your namespace, because they're tied to storage, and your storage is on disk. So it's not about how much memory you're using. You do have a global memory limit that says, when the machine overall hits a certain percentage, I just want to stop writes. And you can do that per namespace, so you can say, "I want a certain namespace to stop at 70% and I want a different namespace to stop writing at 80%." So you have the ability to prioritize based on the overall machine memory. But the evictions and the stop-writes come off of the data storage for that namespace. So if you have a terabyte on disk and you say, "I want to stop writes at 80%," then when you get to 800 gigabytes, you stop writes. Memory is similar.

If you have persistence, it's going to go off of the persistence. So if you give it a terabyte, it's going to pre-allocate that terabyte of memory. And that's really how this works. The only time you use the data-size configuration is when it's in-memory with no persistence. Then we can't go off of any disk to tell us at what point to stop, and data-size is what we use to define how big your shared memory allocations are. So it is a little weird, but think of it the same way you used to treat drives. Shared memory with no persistence now works that way.
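
For anyone who wants to watch these limits from the application side, here's a rough sketch using the Python client's info API to pull namespace statistics. The info_all call and the "namespace/&lt;ns&gt;" info command are standard, but the metric names data_used_bytes and stop_writes are assumptions based on the 7.0 metric changes mentioned above, and "test" is a placeholder namespace; adjust to your deployment.

    # Minimal sketch: inspect per-namespace usage and stop-writes state over the info protocol.
    import aerospike

    client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

    for node, (_err, resp) in client.info_all("namespace/test").items():
        if not resp:
            continue
        # Some client versions echo the request before a tab; keep only the value part.
        resp = resp.split("\t", 1)[-1]
        stats = dict(kv.split("=", 1) for kv in resp.strip().split(";") if "=" in kv)
        print(node,
              "data_used_bytes:", stats.get("data_used_bytes"),
              "stop_writes:", stats.get("stop_writes"))

    client.close()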

George Demorest:

Very good. Well, it looks like we've come to the end of our incoming questions. So as Ronen mentioned, there's his technical blog. There's also a blog from his boss, Lenley Hensarling, that explains what the release means. That announcement blog is there. The press release itself, for those of you who love press releases, don't we all? There's also a product brief that brings all this information together in one place. The other thing I would mention is that you can try Aerospike 7 free for 60 days, so check that out. We have also made announcements around Aerospike Cloud, so our database-as-a-service offering is live. The Aerospike 7 database is live and you can download it; also check out the release notes. So Ronen, any final thoughts about Aerospike 7? How should we think about Aerospike 7?

Did we lose Ronen?

Aerospike:

I think he's on mute.

George Demorest:

Oh, yeah. Ronen, come back to us.

Ronen? All right, well, I think we can wrap up, unless he barges back in. Ronen, are you there? All right, well, he's in the midst of reinventing Vegas. And what happens in Vegas, et cetera. So that brings us to the end of our office hours. I mentioned the technical blog, the announcement blog, the press release, the product brief, and the product release notes, and the product is ready for download right now. So check it out. I want to thank Ronen for talking to us today about Aerospike 7. I hope you guys will check it out.

As we mentioned, the unified storage format that we worked so hard on means that the in-memory namespaces, the all-flash namespaces, and our hybrid memory namespaces are all in the same format. It did bring some new benefits to Aerospike as an in-memory database. In-memory is not the number one use case for Aerospike, but as Ronen mentioned, fully 30% of our customers are running some sort of in-memory use case on Aerospike. So these new features provide better reliability, better performance, warm restarts, in-memory compression, and a number of other operational and developer benefits that we've covered. So with that, we're going to end this office hours. We thank you for joining, and check out more information on the Aerospike website. To all, best to you and have a good rest of your day. Thank you.

About this webinar

Join Aerospike’s Ronen Botzer, Director of Product, as he shares the new features and capabilities of the recent Aerospike Database 7 release. This presentation explores:

  • New in-memory database features in Aerospike 7

  • Unified Storage Engine Format, which simplifies application development and operations

  • Improved in-memory operations, including compression, warm restarts, faster defrag, and tomb raiding

We hope you watch this interactive learning event.

Speaker

Ronen Botzer
Director of Product