The book ‘Designing Data-Intensive Applications’ has been really popular in recent years. Why? Because it is relatively ‘easy’ for inexperienced application software engineers to follow and gives them a glimpse of what the distributed world looks like. This is my second time reading this book. The first time was a casual glance-through and I gave up halfway. After reflecting on my learning strategy, I came up with a plan to read and digest this book.

I will share my learning notes in a series of blog posts. Each post starts with a brief Chapter Summary, followed by a Mind Map that summarizes the main topics and their relations. The topics in the mind map are annotated with a list of questions that can be answered by the knowledge in the book. The questions and answers are listed in plain text in the Questions Summary section. In the Highlighted Topics section, a few interesting topics from the chapter are picked and explained in detail. In the last section, Extended Topics, one or two topics that are not covered in detail in the book are briefly discussed and explored.

This is the sixth post in the series.

Chapter Summary

This chapter discusses database partitioning, also known as sharding. Partitioning is a fundamental concept in data storage: when the data volume grows large, a single node is no longer enough to host all the data, so we have to distribute the data among multiple nodes. The chapter covers in detail the different approaches for partitioning by primary key and by secondary indexes, rebalancing strategies, and request routing strategies.

Mind Map

Questions Summary

Approach for partitioning

What is “hot spot” in partitioning?

A node with disproportionately high load

What is “skewed” in partitioning?

Some partitions have more data or queries than others

What are the benefits and downsides of key-range partitioning?

It supports range queries easily; however, some access patterns can result in hot spots

What are the benefits and downsides of hash-of-key partitioning?

Hashing the key distributes data more evenly over the partitions, avoiding data skew and hot spots. However, it makes range queries challenging: a range query needs to be sent to all the partitions.
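To make this trade-off concrete, here is a minimal sketch (the partition count, key names, and choice of MD5 are my own assumptions, not from the book) of hash-of-key partitioning, showing why a key-range query has to fan out to every partition:

```python
import hashlib

NUM_PARTITIONS = 8  # hypothetical fixed partition count

def partition_for_key(key: str) -> int:
    """Hash the key and map the hash onto one of the partitions."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# A point lookup touches exactly one partition.
print(partition_for_key("user:42"))

# Adjacent keys land on unrelated partitions, so a range query such as
# "user:40" .. "user:49" has to be sent to ALL partitions and the results
# merged by the coordinator.
print({k: partition_for_key(k) for k in ("user:40", "user:41", "user:42")})
```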

Describe the approach of partitioning secondary indexes by document

It is also called a local index. Each partition maintains its own secondary index, covering only the documents stored in that partition. A query on the secondary index therefore needs to go to all the partitions (scatter/gather)

Describe term-based partitioning of secondary indexes

Term-based partitioning refers to the approach in which the secondary indexes are stored and partitioned independently from the primary key. It is also called a global index

Why are read queries expensive with document-based secondary index partitioning?

Document-based partitioning is also termed a local index. Because the index is local to each partition, a query on the index needs to be sent to and processed by all the partitions, which makes reads more expensive

What are the advantages and disadvantages of term-based secondary index partitioning?

Term-based partitioning is more efficient for index queries, as the index is partitioned globally by term: a query only needs to reach the partitions containing the terms it is looking for. However, writes become more challenging, especially if the secondary index must be updated synchronously with the write, which would require a distributed transaction that is difficult to achieve. In practice, updates to global secondary indexes are often asynchronous
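The following toy model (the in-memory dictionaries and terms are made up for illustration) contrasts the read paths of a document-partitioned (local) index and a term-partitioned (global) index:

```python
# Each data partition indexes only its own documents (document-partitioned / local index).
local_indexes = [
    {"color:red": [1, 3]},                   # partition 0
    {"color:red": [7], "color:blue": [8]},   # partition 1
    {"color:blue": [12]},                    # partition 2
]

def query_local(term: str) -> list[int]:
    """Scatter/gather: every data partition has to be asked for its matches."""
    results = []
    for index in local_indexes:
        results.extend(index.get(term, []))
    return results

# The global index is itself partitioned by term: each term lives in exactly
# one index partition, so a lookup touches a single partition.
NUM_INDEX_PARTITIONS = 2
global_index = [dict() for _ in range(NUM_INDEX_PARTITIONS)]
for index in local_indexes:                  # build the global index for the sketch
    for term, doc_ids in index.items():
        owner = hash(term) % NUM_INDEX_PARTITIONS
        global_index[owner].setdefault(term, []).extend(doc_ids)

def query_global(term: str) -> list[int]:
    owner = hash(term) % NUM_INDEX_PARTITIONS   # only one index partition is read
    return global_index[owner].get(term, [])

print(query_local("color:red"))   # reads all 3 data partitions -> [1, 3, 7]
print(query_global("color:red"))  # reads 1 index partition     -> [1, 3, 7]
```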

Rebalancing

What is “rebalancing”?

The process of moving load (data and requests) from one node in the cluster to another

What are the minimum requirements of a ‘rebalancing’?
  1. after the rebalancing, the load should be shared fairly across the nodes
  2. during the rebalancing, the database should keep accepting reads and writes
  3. no more data than necessary should be moved, so that the rebalancing is fast and the network and disk I/O load is minimized
In which cases does a ‘rebalancing’ need to be performed?
  1. the query throughput increases, so we need to add more CPUs to handle the requests
  2. the dataset size increases, so we need to add more disks and RAM to host the data
  3. a machine fails, so its data needs to be taken over by other available machines
What is the problem with the hash mod N partitioning strategy?

When the number of nodes N changes, most of the keys need to be moved from one node to another. The simple solution is to create many more partitions than there are nodes; when a new node is added, it steals a few partitions from the other nodes until the partitions are fairly distributed once again (see the sketch below).
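A quick back-of-the-envelope check (using a hypothetical key space of 100,000 integer keys) shows how much data hash mod N rebalancing moves when a cluster grows from 4 to 5 nodes:

```python
# Hypothetical key space: 100,000 integer keys placed by hash(key) mod N.
keys = range(100_000)

# Fraction of keys whose node assignment changes when N goes from 4 to 5.
moved = sum(1 for k in keys if hash(k) % 4 != hash(k) % 5)
print(f"{moved / len(keys):.0%} of keys would have to move")  # roughly 80%
```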

Describe the fixed-number-of-partitions strategy

The number of partitions is chosen to be much larger than the number of nodes. When a new node is added, a few partitions from each existing node are moved to the new node. This ensures that only a small amount of data is moved, and whole partitions move rather than individual keys
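In contrast to the mod N scheme, here is a rough sketch (the node names and the 1,000-partition count are hypothetical) of the fixed-number-of-partitions approach: keys map to partitions forever, and adding a node only reassigns whole partitions:

```python
from collections import Counter

NUM_PARTITIONS = 1000  # hypothetical: chosen to be much larger than the node count

# Initial state: 1,000 partitions spread evenly over 4 nodes.
assignment = {p: f"n{p % 4}" for p in range(NUM_PARTITIONS)}

def add_node(assignment: dict, new_node: str) -> list:
    """The new node steals whole partitions from the busiest nodes until balanced."""
    nodes = set(assignment.values()) | {new_node}
    target = len(assignment) // len(nodes)     # partitions the new node should own
    moved = []
    while len(moved) < target:
        busiest = Counter(assignment.values()).most_common(1)[0][0]
        victim = next(p for p, n in assignment.items() if n == busiest)
        assignment[victim] = new_node          # only this partition's data moves
        moved.append(victim)
    return moved

moved = add_node(assignment, "n4")
print(f"only {len(moved)} of {NUM_PARTITIONS} partitions moved")  # 200, i.e. ~20% of the data
```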

In the fixed-number-of-partitions approach, what is the drawback if partitions become too large or too small?

Since the number of partitions is fixed, each partition grows as the dataset grows. If partitions become too large, rebalancing and recovery from node failures become expensive. If they are too small (the fixed number was chosen too large for a small dataset), they incur too much overhead

What is the caveat of dynamic partitioning?

An empty database starts off with a single partition, and while the amount of data is small no split has happened yet, so all requests are handled by a single node while the other nodes sit idle. To mitigate this issue, an initial set of partitions can be configured on an empty database (pre-splitting)

What is the advantage of dynamic partitioning?

The number of partitions adapts to the total data volume, avoiding the too-large or too-small partition size issues of the fixed-number-of-partitions strategy
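A toy illustration of dynamic key-range partitioning (the 4-key split threshold and the fruit keys are made up): a partition splits at its midpoint once it exceeds the threshold, so the partition count grows with the data:

```python
import bisect

SPLIT_THRESHOLD = 4   # hypothetical: split a partition once it holds more than 4 keys

# Each partition is (lower_bound, sorted_keys); the first one covers the whole key range.
partitions = [("", [])]

def insert(key: str) -> None:
    # Find the right-most partition whose lower bound is <= the key.
    idx = max(i for i, (lo, _) in enumerate(partitions) if lo <= key)
    lo, keys = partitions[idx]
    bisect.insort(keys, key)
    if len(keys) > SPLIT_THRESHOLD:            # split the overgrown partition at its midpoint
        mid = len(keys) // 2
        partitions[idx] = (lo, keys[:mid])
        partitions.insert(idx + 1, (keys[mid], keys[mid:]))

for k in ["apple", "banana", "cherry", "durian", "elderberry", "fig", "grape"]:
    insert(k)

print(partitions)   # the partition count grew with the data volume
```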

Request Routing

What are the three common approaches to request routing in a partitioned data system?
  1. contact any node and let it forward the request to the node that owns the partition
  2. add a routing tier that is aware of the partition assignment and forwards requests accordingly (sketched below)
  3. make the client aware of the partition assignment so it can connect directly to the appropriate node
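As a sketch of the second approach (the cluster metadata and node names below are hypothetical), a routing tier only needs a partition-to-node map, typically learned from a coordination service, to forward each request:

```python
import hashlib

NUM_PARTITIONS = 8
# Partition-to-node map, which a real routing tier would learn from a
# coordination service such as ZooKeeper (hypothetical node names here).
partition_to_node = {p: f"node-{p % 3}" for p in range(NUM_PARTITIONS)}

def route(key: str) -> str:
    """Return the node that a request for this key should be forwarded to."""
    partition = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PARTITIONS
    return partition_to_node[partition]

print(route("user:42"))   # the routing tier forwards the request to this node
```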

Highlighted Topics

Partition Strategy Comparison

Primary key partitioning strategy comparison:

| strategy | pros | cons |
| --- | --- | --- |
| key-range | efficiently supports range queries | can result in hot spots or data skew |
| key-hash | evenly distributes data and queries | less efficient at supporting range queries |

A compound primary key is supported by Cassandra: only the first part of that key is hashed to determine the partition, but the other columns are used as a concatenated index for sorting the data in Cassandra’s SSTable. A query therefore cannot search for a range of values within the first column of a compound key, but if it specifies a fixed value for the first column, it can perform an efficient range scan over the other columns of the key.

Similarly, in DynamoDB the primary key is made up of a partition key and an optional sort key. A recommended DynamoDB pattern is to design composite sort keys that capture the hierarchical (one-to-many) relationships in your data. For example, you might structure the sort key as

[country]#[region]#[state]#[county]#[city]#[neighborhood]

This would let you make efficient range queries for a list of locations at any one of these levels of aggregation, from country to a neighborhood and everything in between. (ref)
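Illustrative only, not the DynamoDB API: the in-memory stand-in below shows how a composite sort key turns a hierarchical range query into a sort-key prefix match, which DynamoDB expresses with a begins_with key condition:

```python
# In-memory stand-in for a DynamoDB table with a composite sort key.
items = [
    {"pk": "store", "sk": "US#PNW#WA#King#Seattle#Fremont",    "name": "Store A"},
    {"pk": "store", "sk": "US#PNW#WA#King#Seattle#Ballard",    "name": "Store B"},
    {"pk": "store", "sk": "US#PNW#OR#Multnomah#Portland#Pearl", "name": "Store C"},
]

def query_by_prefix(pk: str, sk_prefix: str) -> list[dict]:
    """Return items in one partition whose composite sort key starts with the prefix."""
    return [it for it in items if it["pk"] == pk and it["sk"].startswith(sk_prefix)]

print(query_by_prefix("store", "US#PNW#WA"))               # everything in Washington
print(query_by_prefix("store", "US#PNW#WA#King#Seattle"))  # only Seattle stores
```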

Secondary index partitioning strategy comparison:

| strategy | pros | cons |
| --- | --- | --- |
| document-based (local index) | a write updates the secondary index in a single partition, easy to keep transactional | a query on the secondary index needs to be sent to all partitions, hence less efficient |
| term-based (global index) | a query on the secondary index only reads from a single partition, more efficient | a write may update secondary index entries across partitions, challenging for transaction management |

Most databases use document-based (local) secondary indexes. DynamoDB supports both Global Secondary Indexes (GSI) and Local Secondary Indexes (LSI); queries on a DynamoDB GSI are eventually consistent, as stated in the documentation. (ref)

Request Routing

Three different request routing strategies:

ddia_6_partition_request_routing.excalidraw

The challenge of request routing is reaching consensus across the distributed system about changes in the assignment of partitions to nodes. A coordination service such as ZooKeeper, or a gossip protocol among the nodes, are common approaches. Below is an illustration of ZooKeeper acting as a coordination service that keeps the routing tier and the nodes in sync.

ddia_6_request_routing_zookeeper.excalidraw

Here is a summary of the request routing strategies used by popular databases:

| data store | request routing solution |
| --- | --- |
| HBase, SolrCloud, Kafka | ZooKeeper as the coordination service |
| MongoDB | its own config servers and mongos daemons as the routing tier |
| Cassandra, Riak | a gossip protocol among the nodes, putting the complexity in the nodes and avoiding a dependency on an external coordination service |
| Couchbase | a routing tier that learns about routing changes from the cluster nodes; does not rebalance automatically |

Extended Topics

Consistent Hashing - Dynamo-style DB partition strategy

Consistent hashing is useful for spreading keys across nodes so that only a small fraction of keys has to move when a node is added or removed, which is why Dynamo-style databases adopt it as their partitioning strategy.

Overview

ddia_6_partition_consistent_hashing.excalidraw
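Below is a compact consistent-hashing ring sketch in the spirit of Dynamo-style partitioning (the node names, virtual-node count, and choice of MD5 are my own assumptions): each node owns many points on the ring, a key is served by the first node clockwise from its hash, and adding a node only moves the keys adjacent to its ring points:

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring with virtual nodes (a sketch, not a production client)."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self.ring = []                     # sorted list of (position, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        # Each node owns many points ("virtual nodes") on the ring.
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node: str) -> None:
        self.ring = [(pos, n) for pos, n in self.ring if n != node]

    def get_node(self, key: str) -> str:
        # A key is served by the first node clockwise from its hash position.
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["n0", "n1", "n2", "n3"])
keys = [f"user:{i}" for i in range(10_000)]
before = {k: ring.get_node(k) for k in keys}

ring.add_node("n4")                        # only keys adjacent to n4's ring points move
moved = sum(1 for k in keys if before[k] != ring.get_node(k))
print(f"{moved / len(keys):.0%} of keys moved")   # roughly 1/5
```

Running this sketch reports roughly 20% of the keys moving when the fifth node joins, in contrast to the roughly 80% movement of the naive hash mod N scheme shown earlier.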

Further Reading

Amazon DynamoDB best practices
