
How Forter built a fraud decision engine that responds before you click 'buy'

Forter rebuilt its fraud decision engine to deliver millisecond trust decisions at global scale. Learn how Aerospike replaced caching layers, cut latency 10×, reduced costs by 40%, and enabled always-on availability.

February 3, 2026 | 6 min read
Steve Tuohy
Director of Product Marketing

On paper, Forter is a fraud prevention company. In practice, it's a trust platform built on identity intelligence.

It’s an important distinction. Traditional fraud systems ask whether a transaction looks suspicious; Forter asks who is behind it. That shift, from flagging bad behavior to understanding identity, is why over 240,000 businesses rely on Forter to make real-time trust decisions across the entire customer journey, from login to checkout to returns.

Today, the platform processes over $350 billion in transactions annually across 180 countries, drawing on a database of over a billion consumer identities. If Forter is slow or unavailable, some merchant sites can't complete sales at all.

As Forter scaled, they faced two problems: a database that couldn't deliver consistent latency, and a global business that demanded always-on availability.

This is the story of how Forter used Aerospike to evolve from a single-region system into a global trust platform that delivers predictable performance even under unpredictable traffic and failure scenarios. Along the way, the team achieved a 10x latency improvement, a 40% cost reduction, and an architecture designed to scale to any region they need.


The challenge of real-time sessions

To understand the person behind each transaction, Forter monitors what shoppers do before they click "buy." How long do they browse? How many products do they view? What is their navigation pattern? Did they arrive through a saved bookmark or a questionable redirect? These behavioral signals show intent in ways that transaction data alone cannot. 

Forter captures this activity as small events, about 2 kilobytes each, written in real time as shoppers interact with merchant sites. The system stores 25 billion of these events, constantly writing new ones and reading them back.

When a shopper finally clicks "buy," Forter has milliseconds to retrieve that customer's session history, feed it to the fraud model, and return a trust decision. With the merchant's checkout flow and payment processing in progress, there's no buffer. If the data layer is slow, the decision either delays the customer experience or is made with incomplete information.
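
The article doesn't show Forter's code, but the workload is easy to picture: a stream of small writes per session, then one low-latency read when the shopper checks out. Below is a minimal sketch of that pattern with the Aerospike Java client Forter eventually adopted, assuming each session is a single record keyed by session ID with events appended to a list bin; the namespace, set, and bin names are illustrative, not Forter's schema.

    import com.aerospike.client.AerospikeClient;
    import com.aerospike.client.Key;
    import com.aerospike.client.Record;
    import com.aerospike.client.Value;
    import com.aerospike.client.cdt.ListOperation;
    import com.aerospike.client.policy.WritePolicy;

    import java.util.Map;

    public class SessionStoreSketch {
        // Illustrative connection details and names; not Forter's cluster or schema.
        private final AerospikeClient client = new AerospikeClient("aerospike-host", 3000);
        private static final String NS = "session_events";
        private static final String SET = "sessions";

        // Append one ~2 KB behavioral event to the shopper's session record.
        public void recordEvent(String sessionId, Map<String, Object> event) {
            WritePolicy policy = new WritePolicy();
            policy.expiration = 28 * 24 * 3600;  // four-week TTL on session data
            Key key = new Key(NS, SET, sessionId);
            client.operate(policy, key,
                    ListOperation.append("events", Value.get(event)));
        }

        // At decision time, fetch the whole session history in a single read.
        public Record loadSession(String sessionId) {
            return client.get(null, new Key(NS, SET, sessionId));
        }
    }

Keeping a session in one record means the decision path is a single round trip to the data layer, which is what a millisecond budget leaves room for.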

Forter's previous database architecture, built on Elasticsearch and Couchbase, couldn't deliver the consistency this workload demanded. Latency was wildly unpredictable; a read might return in 2 milliseconds or in 750. There was no SLA the team could trust. The P99 graph was erratic: spikes appeared even without obvious changes in load, and engineers spent cycles investigating anomalies instead of building features.

To compensate, Forter added a caching layer on top of the database just to achieve acceptable read performance. That meant two systems to operate, two sets of failure modes, and twice the complexity. The architecture required 178 servers, yet the behavior remained unpredictable.

For a platform where every millisecond of latency affects the customer's checkout experience, this unpredictability put a huge number of shoppers at risk of a degraded checkout.

How Aerospike replaced the cache and became the state store

Forter needed a system that could deliver cache-like speed without the operational burden of a separate caching layer. Aerospike's Hybrid Memory Architecture (HMA) fit the bill, reading and writing records directly on SSDs while keeping indexes in RAM. This gave Forter the read performance they needed with database-level durability.
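
Forter's actual configuration isn't shown, but the HMA model is visible in an ordinary Aerospike namespace definition: records sit on SSD while the primary index stays in DRAM. A rough sketch in Aerospike Database 6.x syntax, with an illustrative namespace name and device path:

    namespace session_events {
        replication-factor 2
        default-ttl 2419200            # four weeks, in seconds
        storage-engine device {
            device /dev/nvme0n1        # records are read and written directly on SSD
            data-in-memory false       # no full copy of records kept in RAM
        }
        # The primary index always lives in RAM, which is what keeps reads fast and flat.
    }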

P99 latency fell from 13.3 milliseconds to 1.3 milliseconds, even while shifting data from RAM to SSDs and with one system replacing two. However, the bigger story was predictability. As Raz Harush, Forter's Engineering Manager, explained, "What interests me is that the line is straight. We know what latency we expect. We stayed there relatively flat."

The infrastructure footprint shrank as well. Forter consolidated from 178 servers to 50 and moved to more efficient instance types. Fewer servers also meant less operational overhead: fewer failure modes, fewer alerts, and fewer things that could break. Overall, infrastructure spend dropped by 40%, with an additional 26% savings available by migrating compute to AWS Graviton instances.


The migration to Aerospike

Forter's migration took roughly 30 days from proof of concept to production. The key to this speed was an abstraction layer. Forter had built an "accessor" interface on top of their previous database, which meant the migration didn't require touching application code.
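
The article doesn't describe the interface itself, but the idea is a thin contract that application code calls, with one implementation per backing store. A hypothetical sketch, with invented names and method shapes:

    import java.util.List;

    // Hypothetical shape of the "accessor" layer: callers depend on this
    // interface, so swapping the database means writing a new implementation,
    // not touching application code.
    public interface SessionAccessor {
        void appendEvent(String sessionId, byte[] event);  // one ~2 KB behavioral event
        List<byte[]> loadSession(String sessionId);        // full history at decision time
    }

    // One implementation per store, e.g. an AerospikeSessionAccessor and a
    // CouchbaseSessionAccessor, each translating these calls into its client API.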

For the data itself, the team used a phased cutover: dual writes to both databases, then dual reads to validate integrity. Aerospike served as the primary database, with Couchbase as a backup. Finally, they completely phased out Couchbase. With a four-week TTL on session data, the new cluster populated naturally, so no bulk migration was needed.
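
Expressed against the hypothetical SessionAccessor interface sketched above, the cutover can live entirely inside the accessor layer: a wrapper writes to both stores, serves reads from the new primary, and shadow-reads the old store to check integrity. This is an illustrative reconstruction, not Forter's code:

    import java.util.Arrays;
    import java.util.List;

    // Dual-write / dual-read wrapper used only during the cutover window.
    // (SessionAccessor is the hypothetical interface from the sketch above.)
    public class DualWriteSessionAccessor implements SessionAccessor {
        private final SessionAccessor primary;    // Aerospike, serving production reads
        private final SessionAccessor secondary;  // Couchbase, kept as a backup

        public DualWriteSessionAccessor(SessionAccessor primary, SessionAccessor secondary) {
            this.primary = primary;
            this.secondary = secondary;
        }

        @Override
        public void appendEvent(String sessionId, byte[] event) {
            primary.appendEvent(sessionId, event);    // dual write keeps both stores current
            secondary.appendEvent(sessionId, event);
        }

        @Override
        public List<byte[]> loadSession(String sessionId) {
            List<byte[]> fromPrimary = primary.loadSession(sessionId);
            List<byte[]> fromSecondary = secondary.loadSession(sessionId);  // dual read to validate
            if (!matches(fromPrimary, fromSecondary)) {
                // Flag the mismatch for investigation; still answer from the primary.
                System.err.println("session mismatch: " + sessionId);
            }
            return fromPrimary;
        }

        private static boolean matches(List<byte[]> a, List<byte[]> b) {
            if (a == null || b == null) return a == b;
            if (a.size() != b.size()) return false;
            for (int i = 0; i < a.size(); i++) {
                if (!Arrays.equals(a.get(i), b.get(i))) return false;
            }
            return true;
        }
    }

Once the four-week TTL has cycled every live session through Aerospike, the wrapper and the Couchbase accessor come out, leaving a single store behind.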

The multi-region problem

Forter solved the latency problem. But a single-region architecture created a different risk. If that region went down, merchants couldn't process a single order.

For businesses running flash sales or high-volume campaigns, even a few minutes of downtime can translate directly into lost revenue. Forter's customers needed true redundancy as a baseline requirement, not a nice-to-have for disaster recovery. 

The first attempt: dual-region architecture

Forter's initial approach to multi-region was simple. They ran full stacks in two regions with active-active replication. If one region failed, the other took over the load. In practice, this meant duplicating everything: compute, storage, data pipelines, and operational overhead. 

The cost was obvious, but the bigger problem was consistency. Both regions were recalculating the same data independently, which introduced drift. A user's session state in one region wouldn't always match the other. Engineers found themselves debugging sync issues and chasing inconsistencies instead of building features.

Rearchitecting for global scale: Hub-and-spoke

One reason Forter's dual-region architecture failed is that it duplicated everything. So the second attempt started from a different premise: compute once, and move the results.

The team redesigned around a hub-and-spoke model. The hub handles all decision processing centrally. Spokes serve local traffic and store regional data, but they don't duplicate the core decision logic. When a user's session state or a trust decision needs to be available in another region, the data moves, but the compute doesn't.

This eliminated the drift problem. Instead of two regions independently calculating the same results and hoping they match, one region calculates, and the others receive the outcome. Consistency became a property of the architecture.
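
The article doesn't name the replication mechanism, but in Aerospike terms "the data moves, the compute doesn't" maps naturally onto Cross Datacenter Replication (XDR): the hub ships records for the relevant namespace to each spoke, and spokes serve local reads from their copy. A hypothetical hub-side configuration fragment, with invented DC names, addresses, and namespace:

    xdr {
        dc spoke_eu_west {
            node-address-port 10.20.0.11 3000   # a seed node in the EU spoke cluster
            namespace session_events {
                # ship this namespace's records to the spoke
            }
        }
        dc spoke_ap_south {
            node-address-port 10.30.0.11 3000
            namespace session_events {
            }
        }
    }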

That clarity transformed infrastructure from an obstacle to an important part of planning. Forter could now predict expansion costs, weigh tradeoffs, and make business decisions based on reliable infrastructure economics.

Outcomes at scale

Forter's migration to Aerospike delivered measurable results across latency, cost, and operational complexity:

  • 10x latency improvement: P99 dropped from 13.3 milliseconds to 1.3 milliseconds. Trust decisions that once risked delaying checkout now complete with room to spare.

  • Flat, predictable performance: The erratic latency spikes disappeared, and load increases no longer trigger unexplained slowdowns. Engineers could now trust the system's behavior under any conditions.

  • 73% server reduction: The infrastructure footprint shrank from 178 servers to 50. Fewer servers mean fewer failure modes, fewer alerts, and less operational overhead.

  • 40% cost reduction: Consolidating the cache and database layers, combined with more efficient instance types, cut infrastructure spend by 40%.
