Scaling for Extreme Growth? The Data Layer is Ground Zero!

Sheryl Sage
Sr. Director, Cloud Product Management
September 8, 2020 | 11 min read

To make good decisions you need good data. But it's not enough to have data: the data most valuable for today's digital apps and services is data you can access and process in real time.

With this in mind, I met with Srini Srinivasan, Chief Product Officer and Founder of Aerospike, along with Sai Devabhaktuni, Sr. Director of Engineering, and Athreya Gopalakrishna, Senior Engineering Manager, both at PayPal, to discuss how companies build a data strategy to master the art and science of scale.

Tell us a little about scale at PayPal.

(Sai): PayPal has been at the forefront of digital payments for over a decade now. Real-time data is the growth engine of the company, and the more accessible it is, the more value can be achieved. Data is growing at thirty percent each year. My team and I are responsible for managing and scaling data for digital payments and online transaction processing. We invest in a variety of databases, both relational and non-relational, in order to achieve strategic goals and generate real value for the business. System scale, meaning compute, storage, and network connections, is at the heart of our success. How the operations come together day-to-day also matters.

How do you manage for extreme data growth?

(Athreya): As the data grows, business expectations grow as well. Users must be able to log in, transact securely, and leave without a performance delay. Transaction speeds (and SLAs) must remain constant even when we add a new product line or see a spike in transaction volumes during the holidays. We originally viewed our online transaction data in terms of real-time and batch processing. We couldn't move off our relational databases, but we needed to design real-time systems around high availability and sub-millisecond processing times. So we invested in cache, key-value, and other NoSQL databases for our real-time traffic. Later, those systems were also used for backend processing for transactional analytics, and the enriched data from that backend process was fed back into real-time transaction processing. Today, one transaction kicks off two thousand concurrent requests and responds in less than 50ms, typically in less than 10ms.
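The fan-out pattern Athreya describes, one transaction triggering many concurrent backend requests under a hard latency budget, can be sketched with asyncio. This is an illustrative sketch, not PayPal's actual stack: the function names and the simulated per-call latencies are assumptions, and only the 50ms budget comes from the interview.

```python
import asyncio
import random

REQUEST_BUDGET_S = 0.050  # 50ms overall SLA from the interview

async def backend_call(i: int) -> str:
    # Simulated backend lookup; a real system would hit a cache or NoSQL store.
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return f"result-{i}"

async def process_transaction(fanout: int) -> list:
    # Fan out all backend requests concurrently and enforce the SLA as a deadline:
    # the whole batch either completes within the budget or times out as a unit.
    tasks = [backend_call(i) for i in range(fanout)]
    return await asyncio.wait_for(asyncio.gather(*tasks), timeout=REQUEST_BUDGET_S)

# Smaller fan-out for the demo; the article cites two thousand in production.
results = asyncio.run(process_transaction(fanout=100))
```

Because the calls run concurrently rather than sequentially, total latency tracks the slowest single call, not the sum, which is what makes a 2,000-way fan-out inside 50ms plausible at all.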

Why is the data layer such a critical element for so many businesses?

(Srini): Obviously managing large volumes of data is challenging, and one size doesn't fit all. As Sai and Athreya pointed out, databases are only as good as the workloads that run on them. Some workloads need real-time access to data; others, such as analytics, may tolerate minutes, and machine learning may take hours. If a workload doesn't require a real-time SLA and is perhaps read-heavy with very few writes, then many databases will do very well. If, on the other hand, you have a distributed system, the database will need to balance availability requirements with maintaining strong consistency.

What part of system scaling is an art and what part is science?

(Sai): Scaling a system, the one end users interact with, is both an art and a science. Scaling a single service or component, such as an application, a database, or the network, is more of a science. Scaling the whole system, however, requires a deeper understanding of how the services work with each other. When you look at a tech stack, "the art" is seeing how each component interacts with the others. Just like throwing a rock in a pond, there is a ripple effect: a failure in one area cascades to another, and there are always failures in a tech stack. The art is identifying how a failure impacts other areas and keeping the bigger view of how customers experience the system in the presence of failures.

Has the economics of new memory and storage technology changed the game substantially?

(Sai): With the advent of flash-based SSDs over NVMe delivering more than 50,000 IOPS per device, scale is no longer an issue for some workloads. But as datasets grow, you need to use the right technology stack, applying tiered storage policies until the economics of flash and HDD converge. Still, with data growth at 30% YoY and that data being retained longer than ever before, throwing flash at the problem is not economical. Intel Optane PMem is a step function up, and for distributed databases it was the missing piece. By moving to PMem with Aerospike, we are able to store 4X more data, reducing our database cluster server footprint by half. Operationally, storing database indexes in PMem reduced reboot times by 5X.

For now HDD continues to play a role with backup and archive.
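Sai's point about tiered storage policies comes down to arithmetic: keep only the hot data on the expensive tiers. A minimal cost-model sketch, where every per-gigabyte price is a placeholder assumption rather than a figure from the interview:

```python
# Illustrative per-GB prices; these are placeholder assumptions, not real quotes.
COST_PER_GB = {"dram": 5.00, "pmem": 2.00, "flash": 0.10, "hdd": 0.02}

def tier_cost(layout: dict) -> float:
    """Total cost of a storage layout expressed as {tier: gigabytes}."""
    return sum(COST_PER_GB[tier] * gb for tier, gb in layout.items())

# Hypothetical 100 TB dataset: hot indexes in PMem, working set on flash,
# backup/archive on HDD (the role the article assigns to HDD).
tiered = {"pmem": 1_000, "flash": 30_000, "hdd": 69_000}
all_flash = {"flash": 100_000}

print(tier_cost(tiered), tier_cost(all_flash))
```

Under these assumed prices the tiered layout costs well under the all-flash one, which is why "throwing flash at the problem" stops being economical at 30% YoY growth.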

What are the key dimensions of scale?

(Athreya): There are three dimensions of scale: you can design for scale-out, scale-up, and concurrency scale. To understand the dimensions of scale at the data layer, you need to look at the use cases and identify how each behaves in different scenarios.

  • In-memory or caching: Data is ephemeral and used just for a short period of time. The workload is very predictable.

  • Memory first: Data comes in, is read sequentially, and is persisted. The data lives in the higher (memory) layers while active; once the reads are done, it moves to cold storage.

  • Hybrid: Handles all reads and writes without sequencing.

Hardware profiles can be pre-defined for each of these patterns, but the first step is to understand the in-memory, memory-first, and hybrid dimensions. When scale is incorporated into the design, data growth is seamless, with no bottlenecks in any layer (app, data, or storage). Keeping this in mind allows us to optimize the hardware for the workload.
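The idea of pre-defined hardware profiles per workload pattern can be sketched as a simple lookup. The three pattern names come from the interview; the profile contents (which tier holds the index and data, whether writes persist) are illustrative assumptions, not PayPal's actual configurations:

```python
# Assumed profile contents for each of the three patterns named in the article.
PROFILES = {
    "in-memory":    {"index": "dram", "data": "dram",       "persistent": False},
    "memory-first": {"index": "dram", "data": "dram+flash", "persistent": True},
    "hybrid":       {"index": "dram", "data": "flash",      "persistent": True},
}

def pick_profile(pattern: str) -> dict:
    # Fail loudly on an unknown pattern rather than guessing a hardware layout.
    try:
        return PROFILES[pattern]
    except KeyError:
        raise ValueError(f"unknown workload pattern: {pattern!r}") from None

profile = pick_profile("hybrid")
```

The payoff of the pattern-first approach is exactly this determinism: once a use case is classified, the hardware follows mechanically instead of being re-argued per deployment.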

How do scale problems manifest?

(Athreya): In general, you need to understand the upper limits of each dimension: for example, the amount of storage each database node should have, the number of nodes in a cluster, and the maximum throughput. One of the things we like about Aerospike is that every key hashes to a fixed number of bytes regardless of its size, which gives us a predictable upper limit for system sizing purposes. Furthermore, you want to consider concurrency scale: how many connections a given database can handle, plus lock and memory management. This is one area that is often missed.
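The sizing property Athreya highlights is that a fixed-size key digest makes per-record index cost constant no matter how long the keys are. A hedged sketch: SHA-1 is used here purely as an illustrative 20-byte digest, and the 64-byte per-record index entry is an assumption for the arithmetic, not a statement of Aerospike's actual algorithm or overhead.

```python
import hashlib

INDEX_ENTRY_BYTES = 64  # assumed fixed per-record index entry size

def key_digest(key: str) -> bytes:
    # Illustrative stand-in: any fixed-width digest gives the same property.
    return hashlib.sha1(key.encode("utf-8")).digest()

def index_memory_gb(record_count: int) -> float:
    # With a fixed entry size, index memory is a pure function of record count.
    return record_count * INDEX_ENTRY_BYTES / 1024**3

short = key_digest("u:1")
long_ = key_digest("user:" + "x" * 500)
assert len(short) == len(long_) == 20  # digest width independent of key length
```

Because index memory depends only on record count, capacity planning reduces to "records per node times a constant", which is the predictable upper limit the answer refers to.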

What factors go into building a data architecture?

(Srini): As data grows in volume, in-memory databases become expensive, and it's not enough to scale up with additional hardware; you need software that is optimized to take advantage of it. By combining DRAM and flash, a hybrid-memory architecture preserves real-time access while increasing the volume of data in each node by 10X over a pure in-memory solution. Intel Optane Persistent Memory is a step function up from there: replacing DRAM with Intel PMem allows our customers to increase the data per node by another 3-4X. Of course you still need to scale out, but the number of nodes required is 10-20X smaller in many cases. What does this mean? It means a 200-node cluster can become a manageable 20-node cluster for very large datasets.
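Srini's 200-to-20-node claim follows from simple division. A back-of-envelope sketch using the multipliers from his answer; the 1 TB per-node DRAM capacity, the 100 TB dataset, and the replication factor of 2 are illustrative assumptions:

```python
import math

def nodes_needed(dataset_tb: float, per_node_tb: float, replication: int = 2) -> int:
    # Total stored bytes = dataset * replication; round up to whole nodes.
    return math.ceil(dataset_tb * replication / per_node_tb)

IN_MEMORY_TB_PER_NODE = 1.0                        # assumed DRAM-limited node
HYBRID_TB_PER_NODE = IN_MEMORY_TB_PER_NODE * 10    # 10X from hybrid memory
PMEM_TB_PER_NODE = HYBRID_TB_PER_NODE * 3          # low end of the further 3-4X

dataset_tb = 100
print(nodes_needed(dataset_tb, IN_MEMORY_TB_PER_NODE),  # pure in-memory cluster
      nodes_needed(dataset_tb, HYBRID_TB_PER_NODE),     # hybrid-memory cluster
      nodes_needed(dataset_tb, PMEM_TB_PER_NODE))       # with PMem on top
```

Under these assumptions the in-memory cluster needs 200 nodes and the hybrid-memory cluster 20, matching the article's example.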

How are customer expectations changing in a new Covid-19 era?

(Sai): For online transactions and digital payments, our customers expect our data to be 100% accurate, and they expect results instantly. For an application this means processing millions of transactions a second and responding in real time. This is complex: the app must verify that a user is who they claim to be, in real time, while comparing current activity against historical behavior. There are a number of application scaling considerations, such as stale reads, search strategy, data consistency, and QoS, that impact the customer experience.

Getting it right starts with the application design. How consistent does the data need to be? What is the round-trip time to the data? What are the QoS requirements, such as retry, wait, and error handling? We worked on these factors very early on.

What advice do you have about managing 30% data growth YoY with consistency and performance?

(Athreya): Change is constant. Today's cost model may be great, but the economics of data are hard to predict, so slow, incremental change allows us to stay agile. We currently keep 20-30TB per node, because packing in too much data means we cannot take nodes down for maintenance. Everything we do must be time-tested. Shiny new technology is good, but it is also important to take a slow and steady approach.

What are some considerations for global distributed applications?

(Srini): For our customers in the financial services and payments industry, these workloads often require strong consistency, and the application state must also be available globally. The database must be able to survive entire site failures, and the transaction response time needs to account for updates across regions and geographies. Aerospike multi-site clustering delivers rack-aware behavior distributed across sites, with each region holding a full copy of the data. This allows a local application to read data with very low latency, and sub-200ms response times are possible for writes across regions geographically separated by thousands of miles. When an entire site fails, the application needs to be fully resilient while still maintaining high performance and availability at terabyte-to-petabyte scale.

How do organizations avoid reacting to scale as an afterthought?

(Sai): All scale problems manifest at the data layer. Data engineers need to lead a data strategy and transformation across the company. We can't know the future of scale, but everyone needs a data strategy ready to handle any volume.

(Srini): Do your research. Many companies come up with a new idea and implement it using only the technology they already have. Many of these products fail, but when a service does take off, it hits a wall. For example, Cassandra scales out, but doesn't provide predictable high performance at scale. It is a mistake to have to choose between performance and scale. Additionally, real-time access is critical for a variety of workloads. Companies should not design consumer applications just for the first hundred thousand users, because when the product takes off, SLAs cannot be met, costs increase, and the company must look for a new solution. Architecting for scale enables companies to grow to hundreds of millions of users with extremely large datasets, and to handle hundreds of thousands to millions of reads and writes a second without any trouble.

More about the Speakers

Sai Devabhaktuni, Sr Director of Engineering, PayPal

Sai is a technology leader deeply passionate about solving petabyte-scale data problems, with over 20 years of experience in all things database. Over the past 15 years, Sai led the database modernization journey at PayPal: scaling databases horizontally and vertically to web scale, from 10,000 executions per second to over 1 million; improving availability from high three-nines to high four-nines; re-platforming database infrastructure to commodity x86; pivoting to a distributed database architecture; and building an ecosystem of NoSQL and traditional RDBMS systems to offer a fully managed, complete experience to cloud-first applications. Sai is a frequent speaker at database conferences and plays a significant role in influencing database roadmaps across vendors. Sai also blogs at


Athreya Gopalakrishna, Sr Database Engineering Manager at PayPal

Athreya is a data engineering lead at PayPal, specializing in NoSQL databases and caching technologies within the PayPal database portfolio, which includes some of the top NoSQL databases in the industry. He is responsible for evaluating, architecting, designing, developing, and operationalizing NoSQL databases and caching technologies for large-scale, performance-centric installations.

Srini Srinivasan, Chief Product Officer and Founder, Aerospike

Srini is Chief Product Officer and Founder at Aerospike. When it comes to databases, Srini Srinivasan is one of the recognized pioneers of Silicon Valley. He has two decades of experience designing, developing, and operating high-scale infrastructures, and holds over a dozen patents in database, web, mobile, and distributed systems technologies. Srini co-founded Aerospike to solve the scaling problems he experienced with Oracle databases while he was Senior Director of Engineering at Yahoo. Srini holds a B.Tech in Computer Science from the Indian Institute of Technology, Madras, and both an M.S. and a Ph.D. in Computer Science from the University of Wisconsin-Madison.