Paper Summary: Lessons from Giant-Scale Services

Today, I read a paper titled Lessons from Giant-Scale Services. It was written by Eric Brewer, the man behind the CAP theorem, and published back in 2001. The paper helps the reader build a mental model for thinking about the availability of large-scale distributed systems.

The paper starts by defining the problem statement.

High availability is a major driving requirement behind giant-scale system design. Other infrastructures — such as the telephone, rail, and water systems — aim for perfect availability, a goal that should apply to IP-based infrastructure services as well. All these systems plan for component failures and natural disasters, but information systems must also deal with constantly evolving features and unpredictable growth.

As I wrote in an earlier blog on CAP theorem,

Availability means every request receives a non-error response from a non-failing node in a reasonable time. It is expected that the client may not receive the most recent write.

We typically measure availability in terms of uptime like 99.99%. As mentioned in the paper, uptime is defined as

uptime = (MTBF - MTTR) / MTBF

MTBF is the mean time between failures.

MTTR is the mean time it takes to recover from a failure.

For high uptime, we should either have a high MTBF or a low MTTR.

Following this equation, we can improve uptime either by reducing the frequency of failures or reducing the time to fix them. Although the former is more pleasing aesthetically, the latter is much easier to accomplish with evolving systems.
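To see this equation in action, here is a small sketch comparing the two levers. The MTBF and MTTR numbers are made up for illustration:

```python
# The paper's uptime formula, comparing two improvement strategies:
# doubling MTBF versus halving MTTR. All figures are hypothetical.

def uptime(mtbf_hours: float, mttr_hours: float) -> float:
    """uptime = (MTBF - MTTR) / MTBF"""
    return (mtbf_hours - mttr_hours) / mtbf_hours

baseline    = uptime(mtbf_hours=1000, mttr_hours=10)  # 0.990
double_mtbf = uptime(mtbf_hours=2000, mttr_hours=10)  # 0.995
halve_mttr  = uptime(mtbf_hours=1000, mttr_hours=5)   # 0.995

print(f"baseline:  {baseline:.3f}")
print(f"2x MTBF:   {double_mtbf:.3f}")
print(f"0.5x MTTR: {halve_mttr:.3f}")
```

Doubling MTBF and halving MTTR give the same uptime gain, but halving MTTR (say, through automated recovery) is usually the more attainable engineering goal.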

When you are building complex distributed systems, it is difficult to avoid failures, so MTBF will be low. The only way to achieve high availability is to make sure we can recover from failures faster, keeping MTTR low. That means a highly automated deployment and recovery process, which requires investment in platform engineering.

In addition, new features tend to reduce MTBF but have relatively little impact on MTTR, which makes MTTR the more stable quantity to optimize. Thus, giant-scale systems should focus on improving MTTR and simply apply best effort to MTBF.

The author then goes on to define two more terms – yield and harvest.

yield = queries completed / queries offered

The author makes the case that yield is more useful in practice than uptime because it maps directly to the user experience: it tells us how many users were actually affected when queries go unfulfilled.

Being down for a second when there are no queries has no impact on users or yield, but reduces uptime. Similarly, being down for one second at peak and off-peak times generates the same uptime, but vastly different yields because there might be an order-of-magnitude difference in load between the peak second and the minimum-load second. Thus we focus on yield rather than uptime.
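The paper's peak-versus-off-peak point can be made concrete with a toy calculation. The load figures below are hypothetical:

```python
# Why yield diverges from uptime: a one-second outage at peak load
# fails an order of magnitude more queries than the same outage
# off-peak, yet uptime (3599/3600 seconds up) is identical.

def compute_yield(completed: int, offered: int) -> float:
    """yield = queries completed / queries offered"""
    return completed / offered

peak_qps, offpeak_qps = 10_000, 1_000
total_offered = 3600 * 5_000  # one hour at an average of 5,000 qps

yield_peak_outage = compute_yield(total_offered - peak_qps, total_offered)
yield_offpeak_outage = compute_yield(total_offered - offpeak_qps, total_offered)

print(f"outage at peak:     yield = {yield_peak_outage:.6f}")
print(f"outage at off-peak: yield = {yield_offpeak_outage:.6f}")
```

Same uptime in both runs, but ten times as many queries are lost when the outage lands on the peak second.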

Harvest is defined by the following formula

harvest = data available / complete data

Harvest measures how much of the data is reflected in the answer. Its role will become clearer when we talk about replication and partitioning.

A perfect system would have 100 percent yield and 100 percent harvest. That is, every query would complete and would reflect the entire database.

The author makes the case that through our design choices we can influence whether faults impact yield or harvest.

Replicated systems impact yield when faults occur. Since every replica has access to all the data, harvest is not impacted. I think this is more like CP systems in the CAP theorem.

On the other hand, partitioned systems impact harvest when faults occur. They lose part of the data, so harvest is reduced while yield is maintained. I think this is more like AP systems in the CAP theorem.
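Here is a toy model of this trade-off, assuming a two-node cluster with one fault. The node and data layout is invented for illustration:

```python
# Under one fault: a replicated system keeps full harvest but loses
# half its query capacity (yield), while a partitioned system keeps
# its capacity but answers with only half the data (harvest).

def replicated(nodes_up: int, nodes_total: int):
    harvest = 1.0                       # every surviving node holds all the data
    capacity = nodes_up / nodes_total   # a lost node is lost query capacity
    return harvest, capacity

def partitioned(nodes_up: int, nodes_total: int):
    harvest = nodes_up / nodes_total    # a lost node is a lost slice of data
    capacity = 1.0                      # survivors still answer every query
    return harvest, capacity

# One of two nodes fails:
print(replicated(1, 2))   # harvest intact, yield suffers
print(partitioned(1, 2))  # harvest suffers, yield intact
```

The fault is the same in both cases; the design decides which metric absorbs it.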

The last and most important point covered in the paper is the DQ Principle.

Data per query × queries per second → constant

The intuition behind this principle is that the system's overall capacity tends to have a particular physical bottleneck, such as total I/O bandwidth or total seeks per second, which is tied to data movement. The DQ value is the total amount of data that has to be moved per second on average, and it is thus bounded by the underlying physical limitation. At the high utilization level typical of giant-scale systems, the DQ value approaches this limitation.

Overall, DQ normally scales linearly with the number of nodes, which means a small test cluster is a good predictor for DQ changes on the production system.
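Linear scaling is what makes the test-cluster prediction work. A minimal sketch, with hypothetical cluster sizes and a made-up measured DQ figure:

```python
# Because DQ scales roughly linearly with node count, DQ measured on
# a small test cluster extrapolates to production. Figures invented.

def predict_dq(test_dq: float, test_nodes: int, prod_nodes: int) -> float:
    """Linearly extrapolate DQ from a test cluster to production."""
    return test_dq * (prod_nodes / test_nodes)

test_cluster_dq = 400.0  # e.g., MB moved per second on a 4-node test cluster
predicted = predict_dq(test_cluster_dq, test_nodes=4, prod_nodes=100)
print(predicted)  # 10000.0
```

The same extrapolation also predicts the relative DQ cost of a software change: if a new feature drops DQ by 10% on the test cluster, expect roughly the same drop in production.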

The DQ principle is useful for data-intensive systems.

Replicas maintain D and reduce Q (and thus yield), while partitions keep Q constant and reduce D (and thus harvest).

The DQ principle gives us new options for graceful degradation: We can either limit Q (capacity) to maintain D, or we can reduce D and increase Q. We can focus on harvest through admission control (AC), which reduces Q, or on yield through dynamic database reduction, which reduces D, or we can use a combination of the two. Temporarily cutting the effective database size in half, for instance, should roughly double our capacity.
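The "halve the database, double the capacity" claim follows directly from holding D × Q constant. A sketch with an illustrative DQ budget:

```python
# The two degradation levers under a fixed DQ budget: with D x Q
# roughly constant, halving the data served (D) roughly doubles the
# queries we can answer (Q). The budget figure is illustrative.

DQ_BUDGET = 1_000_000  # total data moved per second (the physical bottleneck)

def max_q(data_per_query: float) -> float:
    """Queries per second sustainable within the fixed DQ budget."""
    return DQ_BUDGET / data_per_query

full_q = max_q(data_per_query=100)  # qps at full harvest
half_q = max_q(data_per_query=50)   # qps at half harvest

print(full_q, half_q)  # 10000.0 20000.0

# Option A (admission control): keep D, cap Q -> harvest kept, yield drops.
# Option B (database reduction): halve D, raise Q -> yield kept, harvest drops.
```

Either lever spends the same DQ budget; the choice is about which metric your users can better tolerate losing during the fault.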

Finally, the author talks about three different deployment strategies:

  1. Fast reboot
  2. Rolling upgrade
  3. Big flip

Big flip is essentially today's blue-green deployment. I have seen many organisations prefer blue-green deployment over rolling upgrades because of the limitations of rolling upgrades, such as having to keep old and new versions compatible while both run side by side.
