Does Redis have persistence? Understanding Redis data persistence approaches

Redis is known as a fast caching tool, but it can fall short when data persistence and durability are required.

June 10, 2025 | 10 min read
Matt Bushell
Sr. Director, Product Marketing

Redis was built for a specific task: as a fast, ephemeral in-memory caching solution. Since its initial release in 2009, it has become nearly ubiquitous in that role, but it has also expanded its capabilities and repositioned itself as a full-featured database alternative. RAM, however, is volatile and doesn't allow for data persistence, a key requirement for any true database.

So, does Redis offer data persistence? The short answer is a qualified yes, through its Redis on Flash offering. However, this capability was bolted on after the fact and layered on top of the core Redis product, making it less effective than a database designed from the beginning with persistence in mind.

Redis data persistence comes with a lot of trade-offs: high latency, large storage volumes, and limited scaling. And with multiple options that work differently across different versions, it can be hard to understand exactly what you're getting.

For the long answer, keep reading.

What is data persistence, and why is it important?

If you turn off your computer for the night, you expect your files to still be there when you turn it on in the morning. That's persistence: data keeping its state, even if the software creating it shuts off. 

Without persistence, a cloud database will only last until the server shuts down, whether that's for an update, due to power loss, or because of a critical error. Additionally, persistence is important to databases for:

  • Long-term visibility: Being able to compare recent data against past data for decision-making, AI model training, or anything that requires significant historical data

    • E.g., revenue forecasting based on historical trends

  • State permanence: Being able to turn off a server or process and getting back to where it was once it turns back on

    • E.g., restarting an application server for updates mid-calculation and having it finish the calculation when it starts back up

  • Data durability/recovery: Being able to recover data to the last working state in the event of an unexpected shutdown or crash

    • E.g., restoring data after a storm knocks out power

  • Consistency and fault tolerance: Recovering data in the event of an incorrect write or similar data fault

    • E.g., correcting a hardware fault that corrupts some or all transactions

  • Compliance and auditing: Being able to show an auditable data-trail and historical data to maintain compliance with rules and regulations

    • E.g., a financial services company needing to keep 10 years of transaction data

  • Cold starts and fast restarts: Being able to restart a server without the need to fully rebuild the last known data state instruction by instruction

    • E.g., caching services often take longer to start because they have to replay their recent transactions one by one to rebuild state

Webinar: Five reasons to replace Redis

As data volumes soar in high-workload environments, organizations often face challenges with their data management systems, such as scalability, server sprawl, and unpredictable performance. In this webinar, we explore the benefits and challenges of moving beyond Redis and how to leverage new solutions for better performance.

Does Redis offer persistence?

Redis was built as a low-latency cache, not really as a database, and initially didn't offer data persistence. Everything was stored in volatile memory, which loses its state in the event of power loss or when the process ends and releases allocated memory. This made the service incredibly fast, but didn't allow for persistence.

In response to demand and to address these shortcomings, Redis introduced Redis on Flash in 2016, a service that combines an in-memory hot cache with warm/cold flash storage (non-volatile memory, or NVM). For durability and availability, Redis also offers persistence mechanisms.

Redis' two approaches to persistence are:

  • Regular Snapshot (RDB)

  • Append Only File (AOF)

Both work a little differently and have some quirks, depending on the version of Redis you use.
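If you're not sure which of these is enabled on an existing instance, you can check directly from a client. Below is a minimal sketch using the redis-py client; the localhost connection details are an assumption, not part of any particular deployment.

```python
import redis

# Assumption: a default local instance; adjust host/port for your environment
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# RDB save points, e.g. {'save': '3600 1 300 100 60 10000'}; an empty value means RDB is off
print(r.config_get("save"))

# AOF on/off, e.g. {'appendonly': 'no'}
print(r.config_get("appendonly"))
```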

Redis persistence with Regular Snapshot 

RDB is most similar to a standard backup process. At scheduled intervals or on demand, a full point-in-time snapshot of the current data state is taken and stored in a single compact file. These intervals can be set based on time periods and number of changes (e.g., "save a snapshot every hour if there have been at least 10 changes").
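That example schedule corresponds to a save setting of "3600 10" (3,600 seconds, at least 10 changes). As a rough sketch, again using redis-py against an assumed local instance, it can be set at runtime and a snapshot triggered on demand like this:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# "Save a snapshot every hour if there have been at least 10 changes"
r.config_set("save", "3600 10")

# On-demand snapshot: Redis forks a child process and writes the RDB file in the background
r.bgsave()

# Timestamp of the last successful snapshot (the LASTSAVE command); anything written
# after this point exists only in memory until the next snapshot completes
print(r.lastsave())
```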

Because it's a point-in-time snapshot, everything that happens after the last snapshot but before the next one exists only in an ephemeral state that can go away in the event of power loss. Users of Redis RDB can:

  • Lose data between snapshots in the event of an issue

  • Have significant data loss if the rate of transactions between snapshots is high

Redis users can work around this by snapshotting more frequently, but each snapshot adds latency. To create an RDB snapshot while keeping the cache fast, every snapshot forks a child process, essentially duplicating a limited copy of the core Redis process that runs concurrently with caching. This can add significant overhead; your resources are now doing twice the work.

And the fine print is that these forks can cause Redis to pause caching for whole milliseconds, or even a full second (a lifetime in high-throughput, low-latency applications) if there's a lot of data.

This is even more problematic because Redis doesn't define what "a lot" of data is. A gig? Two? 10? Maybe a few hundred megabytes? Redis doesn't publicly provide an explanation, and Redis' software agreement forbids sharing benchmarks without formal approval. 
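While published benchmarks are off the table, nothing stops you from measuring the fork cost on your own instance. A minimal sketch with redis-py (localhost connection assumed) that reads the relevant counters from INFO persistence:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

stats = r.info("persistence")

# How long the most recent fork took, in microseconds (the window where Redis stalls)
print("latest_fork_usec:", stats["latest_fork_usec"])

# Whether the most recent background save succeeded
print("rdb_last_bgsave_status:", stats["rdb_last_bgsave_status"])

# Writes made since the last snapshot, i.e., the data at risk right now
print("rdb_changes_since_last_save:", stats["rdb_changes_since_last_save"])
```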

Redis persistence with Append Only File

The other approach Redis provides for persistence is the AOF. Instead of a full single-file snapshot, the AOF logs every data-altering command sent to your Redis instance. It's a better approach for disaster recovery because it can sync far more often than RDB: every second, or even on every write.
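Both sync policies are controlled by the appendfsync setting. A minimal sketch of enabling AOF at runtime with redis-py, assuming a local instance:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Turn on the append-only file
r.config_set("appendonly", "yes")

# Sync policy: "everysec" risks up to about a second of writes on a crash,
# while "always" fsyncs on every write at a throughput cost
r.config_set("appendfsync", "everysec")
```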

The AOF persistence approach makes your data safer from corruption and loss, but it isn't 100% safe, and this approach comes with its own problems and issues:

  • The append is performed in a background thread that tries hard not to interfere with main-thread writes, but "tries hard" is not very reassuring when discussing mission-critical databases. This is especially problematic once the AOF file gets too big and Redis has to create a new one. Rather than simply taking a snapshot or starting a new AOF, Redis rewrites the file down to only the operations needed to rebuild the current data state, wipes the old file, and starts fresh. That can be a lot of extra work for a server.

  • If it isn't configured properly, the AOF can grow to an enormous size and eat up expensive storage. This leads to bloat as your server fills up with logs, forcing users to make a difficult choice between massively expanding storage and the added latency of frequent rewrites (the rewrite thresholds that govern this are sketched after this list).

  • Even without a rewrite, all of the extra writes happening in the background increase latency.

  • While AOF is less prone to data loss than RDB, it is far slower to recover, since the AOF file isn't a state but a series of instructions. Recovery isn't "dump all the data back in and start over"; it's "execute every SET it took to get us to where we were, even if the data has been overwritten multiple times," which means a lot of duplicated, redundant work.

  • Even worse, on versions before Redis 7.0, AOF uses memory as a buffer to hold new writes until a rewrite finishes, and it writes all commands to disk twice if new ones arrive while it's rewriting. It can even stop writing and syncing entirely, requiring manual intervention to restore functionality. And only versions 2.4 and later prevent an RDB snapshot and an AOF rewrite from being triggered concurrently, which can bring the whole database to a crawl.
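The main levers for keeping AOF growth in check, mentioned in the list above, are the automatic rewrite thresholds. A sketch of tuning them with redis-py against an assumed local instance (the values shown are the Redis defaults):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Rewrite the AOF once it has doubled in size since the last rewrite...
r.config_set("auto-aof-rewrite-percentage", "100")

# ...but never for files smaller than 64 MB, to avoid constant churn on small datasets
r.config_set("auto-aof-rewrite-min-size", "64mb")

# Or kick off a rewrite manually during a low-traffic window
r.bgrewriteaof()
```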

Top 10 alternatives that outshine Redis

While Redis is a popular in-memory data store for databases, caching, and messaging, its scalability challenges and operational complexity can lead to higher ownership costs and staffing needs as workloads and data volumes increase. Other solutions are often a better fit for many organizations. Check out the top 10 alternatives that outshine Redis.

Redis persistence for Redis Enterprise users

Redis Enterprise users have problems of their own to worry about, and for users moving from the community version to the enterprise version, the experience can be frustrating since the rules and procedures vary significantly.

There are currently only six enterprise data persistence options offered:

  • None

  • AOF every second

  • AOF on every write

  • RDB every hour

  • RDB every 6 hours

  • RDB every 12 hours

Enterprise users who need more options will find that they are out of luck when it comes to Redis data persistence. And on top of all the limitations discussed above for the individual persistence options, Redis Enterprise has some quirks entirely of its own.

One potential issue is that RDB is not available for active-active architectures. If you run multiple write-enabled nodes, AOF is your only option. This pattern of limitations holds true for most complex enterprise persistence and durability strategies. As another example, Redis defaults to persistence just on replica nodes. Dual data persistence is available, but can add significant latency since Redis wasn't designed with these structures in mind.

Choosing the best Redis persistence approach

What is the right data persistence approach for Redis users? Each has pros and cons, and the best one depends on specific needs and architectures.



|                      | RDB | AOF |
| -------------------- | --- | --- |
| Save frequency       | On demand, or on a set schedule with a data-change modifier | Every second or every write |
| Save mechanism       | Spawned child process sharing resources with the parent | Background thread sharing resources with the primary cache |
| Resource use         | Medium during snapshotting, none otherwise | Relatively high and persistent |
| Durability           | Full loss of data between snapshots, which could be considerable | Zero loss with the every-write config, minimal loss with the every-second config |
| Recovery/cold start  | Relatively fast, similar to a traditional backup | Slow; has to rebuild the data state from individual instructions |
| Additional concerns  | Not supported for active-active database designs | Can rapidly fill storage volumes in busy write environments |

A better persistence cache alternative

Users looking for hyperfast speeds and comprehensive data persistence don't have to settle for one or the other. Aerospike was designed for persistence and speed: the best of both worlds. It's why the Aerospike Data Platform is trusted in use cases like low-latency algorithmic trading and real-time fraud prevention. Like the Redis on Flash product, Aerospike offers a Hybrid Memory Architecture (HMA) that splits data between DRAM for speed and NVMe flash for persistence: Keys and indexes are stored in memory for low-latency access, while values and data are stored on flash for built-in persistence.

Since only the index is on volatile memory, data loss in the event of a crash is minimal, and all common persistence approaches are supported. This includes built-in active-active persistence (in contrast to Redis) as well as fast dual data persistence. Recovery and cold starts are quick, since the full state is persistent at all times, and built-in Cross Datacenter Replication (XDR) makes the entire architecture durable and resilient. As an added bonus, flash is much cheaper than DRAM, allowing for significantly lower TCO without sacrificing speed.

But beyond better durability and availability, Aerospike's approach solves one of the biggest headaches for heavily data-driven applications: split-cache architectures. The split cache is a common pattern learned from years of having to choose between durability and speed, but it introduces failure points and expense into the application data layer. Abandoning it, as sketched after the list below, helps developers:

  • Avoid data drift between the cache and persistent layers

  • Reduce maintenance and management overhead

  • Reduce the number of required clusters and nodes

  • Significantly decrease the total cost of ownership

  • Eliminate a major engineering failure point
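To make the contrast concrete, here is a hypothetical sketch of the two patterns side by side. The db_lookup function, key names, and the "test" namespace are illustrative assumptions, not part of any real deployment; the Aerospike storage split (DRAM index, flash data) is configured server-side in the namespace definition, so the client code is the same either way.

```python
import aerospike
import redis

# Split-cache pattern: Redis in front of a separate system of record
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def db_lookup(user_id: str) -> str:
    """Hypothetical call to a separate persistent database."""
    return f"profile-for-{user_id}"

def get_profile_split(user_id: str) -> str:
    cached = cache.get(f"profile:{user_id}")
    if cached is not None:
        return cached                                # cache hit
    value = db_lookup(user_id)                       # cache miss: query the backing DB
    cache.set(f"profile:{user_id}", value, ex=300)   # repopulate the cache
    return value

# Single layer: Aerospike serves reads at cache speed and persists every write
client = aerospike.client({"hosts": [("127.0.0.1", 3000)]})
client.connect()  # explicit connect; required on older client versions

def put_profile(user_id: str, profile: str) -> None:
    client.put(("test", "profiles", user_id), {"profile": profile})

def get_profile_single(user_id: str) -> str:
    _, _, bins = client.get(("test", "profiles", user_id))
    return bins["profile"]
```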

And while it doesn't directly contribute to data persistence, Aerospike lets users benchmark performance and publicly share their results without using licenses/TOS as a shield to hide performance metrics — something Redis actively tries to prevent. This approach is more in keeping with the open internet ethos that has powered digital innovation for decades, but it also demonstrates Aerospike's confidence in its results. There's no need to hide benchmarks if the benchmarks lead the industry.

The bottom line on Redis data persistence

All of this may be a long answer to the question, "Does Redis offer data persistence?" but it illustrates the problems that arise from using tools outside of their intended purpose. 

Redis is a leader in the fast cache space, and for good reason. However, when it's sold as a full database replacement product, things quickly fall apart. Developers who attempt to use it as such quickly run into walls that require a secondary data persistence layer at best and a massive database migration and code refactor at worst. Both options are costly, difficult, and time-intensive.

Need persistence beyond snapshots and append-only files? That could be a sign that it's time to leave Redis for a full-featured database. Learn how to recognize when it's time to make a change with our white paper on the five signs you have outgrown Redis.

Redis to Aerospike: Migration guide

Redis works well for lightweight caching and quick prototypes. But when your system grows, with more data, users, and uptime requirements, Redis starts to crack. If you're hitting ceilings with DRAM costs, vertical scaling limits, or fragile clustering, it's time for a change. This migration guide provides a clear, practical path for moving from Redis to Aerospike.