Chapter Summary
This chapter discusses database partitioning, also known as sharding. Partitioning is a fundamental concept in data storage engines: when the data volume grows beyond what a single node can host, the data has to be distributed among multiple nodes. This chapter explains different approaches for partitioning by primary key and by secondary indexes, as well as rebalancing strategies and request routing strategies.
Mind Map
Question Summary
Approaches for partitioning
💡 What is “hot spot” in partitioning?
A node with disproportionately high load.
💡 What is “skewed” in partitioning?
Some partitions have more data or queries than others.
💡 What is the benefit and downside of key range partitioning?
It supports range queries efficiently; however, certain access patterns (e.g. keys based on timestamps, which send all current writes to the same partition) can result in hot spots.
💡 What is the benefit and downside of hash of key partitioning?
Partitioning by the hash of the key distributes data more evenly over the partitions, avoiding data skew and hot spots. However, it makes range queries challenging: a range query needs to be sent to all the partitions.
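As a rough sketch of this trade-off (plain Python with an MD5-based hash; the helper names are made up), hash partitioning spreads keys evenly but forces a range query to visit every partition:

```python
import hashlib

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    """Hash the key and map it to one of the partitions."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Adjacent keys land on unrelated partitions, so data spreads evenly...
for key in ["user:1000", "user:1001", "user:1002", "user:1003"]:
    print(key, "->", partition_for(key))

# ...but a range query such as "user:1000" .. "user:1003" can no longer be
# answered from a single partition; it must be sent to all of them.
partitions_to_query = set(range(NUM_PARTITIONS))
```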
💡 Describe the approach of partitioning secondary indexes by document
It is also called a local index. The secondary index for the documents in a particular partition is stored locally within that partition. A query on the secondary index therefore needs to go to all the partitions.
💡 Describe the term-based partitioning of database with secondary indexes
Term-based partitioning refers to the approach in which the secondary indexes are stored and partitioned independently from the primary key: a global index, partitioned by the indexed term.
💡 Why does document-based secondary index partitioning of database result in expensive read queries?
Document-based partitioning is also termed a local index. Because the index is local to each partition, a query on the index needs to be sent to and processed by all the partitions (scatter/gather). That is why reads are more expensive.
💡 What is the advantage and disadvantage of term-based secondary index partitioning?
Term-based partitioning is more efficient for index queries, as the index is partitioned globally by term, so a query only needs to contact the partitions that hold the relevant terms. However, writes become more challenging, especially keeping the secondary index synchronously updated with the write: a single write may touch index entries on multiple partitions, and a distributed transaction is difficult to achieve in this case.
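To make the read/write trade-off concrete, here is a minimal in-memory sketch (hypothetical structures, not any particular database's API) contrasting a document-partitioned (local) index with a term-partitioned (global) index:

```python
from collections import defaultdict

# Two partitions of "car" documents, partitioned by document id.
partitions = [
    {1: {"color": "red"}, 2: {"color": "blue"}},
    {3: {"color": "red"}, 4: {"color": "green"}},
]

# Document-partitioned (local) index: each partition indexes only its own docs.
local_indexes = []
for part in partitions:
    idx = defaultdict(list)
    for doc_id, doc in part.items():
        idx[doc["color"]].append(doc_id)
    local_indexes.append(idx)

def query_local(color):
    # Read path: scatter the query to every partition and gather the results.
    results = []
    for idx in local_indexes:
        results.extend(idx.get(color, []))
    return results

# Term-partitioned (global) index: index entries live with the partition that
# owns the term, regardless of where the document itself is stored.
global_index = defaultdict(list)
for part in partitions:
    for doc_id, doc in part.items():
        # Write path: updating the index may touch a different partition than
        # the one holding the document, hence the transaction challenge.
        global_index[doc["color"]].append(doc_id)

def query_global(color):
    # Read path: only the partition owning the term is contacted.
    return global_index.get(color, [])

print(query_local("red"))   # [1, 3] after asking both partitions
print(query_global("red"))  # [1, 3] from a single index partition
```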
Rebalancing
💡 What is “rebalancing”?
The process of moving load (data and requests) from one node in the cluster to another.
💡 What are the minimum requirements of a ‘rebalancing’?
- After the rebalancing, the load (data and requests) should be shared fairly across the nodes in the cluster
- During the rebalancing, reads and writes should not be interrupted
- No more data than necessary should be moved during the rebalancing, to keep it fast and to minimize the network and disk I/O load
💡 In which cases does a ‘rebalancing’ need to be performed?
- The query throughput increases, so more CPUs are needed to handle the requests
- The dataset size increases, so more disks and RAM are needed to host the data
- A machine fails, so its data needs to be taken over by another available machine
💡 What is the problem with the mod N partitioning strategy?
When the number of nodes N changes, most of the keys have to be moved from one node to another, because hash mod N assigns most keys to a different node. The simple solution is to create many more partitions than there are nodes; when a new node is added, it steals a few partitions from other nodes until the partitions are fairly distributed once again.
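A quick sketch (assuming a simple MD5 hash) shows how much data reshuffles under mod-N assignment when the node count changes:

```python
import hashlib

def node_for(key: str, num_nodes: int) -> int:
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % num_nodes

keys = [f"user:{i}" for i in range(10_000)]

# Count how many keys map to a different node when growing from 4 to 5 nodes.
moved = sum(node_for(k, 4) != node_for(k, 5) for k in keys)
print(f"{moved / len(keys):.0%} of keys would have to move")  # typically ~80%
```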
💡 Describe the fixed-number-of-partitions strategy
The number of partitions is fixed at a value much larger than the number of nodes, and many partitions are assigned to each node. When a new node is added, a few partitions are moved from each existing node to the new node; only whole partitions move, and the assignment of keys to partitions never changes, so only a small amount of data is transferred.
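A minimal sketch of this idea (hypothetical helper names, round-robin initial assignment) shows that adding a node only moves whole partitions and never changes which partition a key belongs to:

```python
import hashlib
from collections import defaultdict

NUM_PARTITIONS = 64          # fixed up front, much larger than the node count

def partition_for(key: str) -> int:
    # The key-to-partition mapping never changes, even during rebalancing.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PARTITIONS

def add_node(assignment: dict, new_node: str) -> dict:
    """Steal a few partitions from the existing nodes until the load is fair."""
    assignment = dict(assignment)
    nodes = set(assignment.values()) | {new_node}
    target = NUM_PARTITIONS // len(nodes)          # fair share per node
    per_node = defaultdict(list)
    for part, node in assignment.items():
        per_node[node].append(part)
    while len(per_node[new_node]) < target:
        # Take one partition from whichever node currently holds the most.
        donor = max(per_node, key=lambda n: len(per_node[n]))
        part = per_node[donor].pop()
        per_node[new_node].append(part)
        assignment[part] = new_node
    return assignment

before = {p: f"node{p % 4}" for p in range(NUM_PARTITIONS)}   # 4 nodes, 16 each
after = add_node(before, "node4")

moved = sum(before[p] != after[p] for p in range(NUM_PARTITIONS))
print(f"only {moved} of {NUM_PARTITIONS} partitions moved")
```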
💡 In the fixed-number-of-partitions approach, what is the drawback if partitions are too large or too small?
Since the number of partitions is fixed, each partition grows as the dataset grows. If partitions become too large, rebalancing and recovery from a node failure become expensive. If they are too small (the fixed number was chosen too high for a small dataset), they incur too much overhead.
💡 What is the caveat of dynamic partitioning?
An empty database starts off with a single partition, and while the dataset is small no split has happened yet, so all requests are handled by a single node while the other nodes sit idle. To mitigate this issue, an initial set of partitions can be configured on an empty database (pre-splitting).
💡 What is the advantage of dynamic partitioning?
The number of partitions adapts to the total data volume: a partition is split when it grows beyond a configured size and can be merged with a neighbor when it shrinks. This avoids the too-large/too-small partition size problem of the fixed-number strategy.
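A toy sketch of the split mechanism (hypothetical structures, key-range partitioning over string keys, an artificially small split threshold) illustrates how partition sizes stay bounded:

```python
SPLIT_THRESHOLD = 4   # toy value; real systems split at e.g. several GB

class Partition:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # key range [lo, hi)
        self.rows = {}

partitions = [Partition("", "\uffff")]      # empty database: one partition

def insert(key, value):
    part = next(p for p in partitions if p.lo <= key < p.hi)
    part.rows[key] = value
    if len(part.rows) > SPLIT_THRESHOLD:
        # Split the overgrown partition roughly in half at its median key.
        keys = sorted(part.rows)
        mid = keys[len(keys) // 2]
        left, right = Partition(part.lo, mid), Partition(mid, part.hi)
        for k, v in part.rows.items():
            (left if k < mid else right).rows[k] = v
        partitions.remove(part)
        partitions.extend([left, right])    # halves can now live on different nodes

for i in range(20):
    insert(f"user:{i:04d}", {"id": i})
print(len(partitions), "partitions after inserts")
```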
Request Routing
💡 What are the three common approaches to request routing in a partitioned data system?
- Contact any node and let that node route the request to the right node if needed
- Add a routing tier that handles the routing strategy
- Let the client be aware of partitions and connect directly to the appropriate node
Highlighted Topics
Partition Strategy Comparison
Primary key partitioning strategy comparison

| | pros | cons |
|---|---|---|
| key-range | efficiently supports range queries | can result in hot spots or data skew |
| key-hash | distributes data and queries evenly | less efficient for range queries |
A compound primary key is supported by Cassandra: only the first part of that key is hashed to determine the partition, while the other columns are used as a concatenated index for sorting the data in Cassandra’s SSTables. A query therefore cannot search for a range of values within the first column of a compound key, but if it specifies a fixed value for the first column, it can perform an efficient range scan over the other columns of the key.
Similarly, DynamoDB builds its primary key from a partition key and an optional sort key. A recommended DynamoDB pattern is to design the sort key as a composite sort key that encodes the hierarchical (one-to-many) relationships in your data. For example, you might structure the sort key as
[country]#[region]#[state]#[county]#[city]#[neighborhood]
This would let you make efficient range queries for a list of locations at any one of these levels of aggregation, from country to neighborhood and everything in between. (ref)
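As a hedged illustration (plain Python rather than the DynamoDB API, with a shortened hierarchy and made-up data), the composite sort key reduces hierarchy queries to prefix matches over the sorted keys, which is what makes the range scan efficient:

```python
# Items stored under one partition key, ordered by a composite sort key.
items = {
    "US#CA#SanFrancisco#Mission": {"population": 45000},
    "US#CA#SanFrancisco#SoMa": {"population": 30000},
    "US#CA#LosAngeles#Venice": {"population": 28000},
    "US#NY#NewYork#Harlem": {"population": 115000},
}

def query_by_prefix(prefix: str):
    """Equivalent in spirit to a begins_with() condition on the sort key."""
    return {k: v for k, v in sorted(items.items()) if k.startswith(prefix)}

print(query_by_prefix("US#CA#SanFrancisco"))   # one city
print(query_by_prefix("US#CA"))                # a whole state
```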
Secondary index partitioning strategy comparison

| | pros | cons |
|---|---|---|
| document-based (local index) | writes to the secondary index stay within a single partition, so they are easy to keep transactional | queries on the secondary index need to be sent to all partitions, hence less efficient |
| term-based (global index) | a query on the secondary index only reads from the partition(s) holding the term, more efficient | writes to the secondary index span multiple partitions, making transaction management challenging |
Most databases use document-based (local) secondary indexes. DynamoDB supports both a Global Secondary Index (GSI) and a Local Secondary Index (LSI); as stated in the referenced documentation, GSI reads are only eventually consistent. (ref)
Request Routing
Three different request routing strategies:
- Allow clients to contact any node (e.g. via a round-robin load balancer). If that node coincidentally owns the partition to which the request applies, it can handle the request directly, otherwise it forwards the request to the appropriate node, receives the reply and passes the reply to the client
- Send all requests from clients to a routing tier first, which determines the node that should handle each request and forwards it accordingly. This routing tier does not itself handle any requests; it only acts as a partition-aware load balancer.
- Require clients to be aware of the partitioning and the assignment of partitions to nodes. In this case, a client can connect directly to the appropriate node, without any intermediary.
The key challenge of request routing is achieving consensus across the distributed system about changes in the assignment of partitions to nodes. A coordination service such as ZooKeeper, or a gossip protocol among the nodes, are common approaches; ZooKeeper, for example, can act as the coordination service that keeps the routing tier and the nodes in sync.
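As a rough sketch (an in-memory map standing in for metadata that a real system would keep in ZooKeeper or gossip state), a partition-aware routing tier simply looks up the current partition-to-node assignment and forwards the request:

```python
import hashlib

# Partition-to-node assignment; in practice this metadata would be watched
# from a coordination service (e.g. ZooKeeper) or learned via gossip.
NUM_PARTITIONS = 8
assignment = {p: f"node{p % 3}" for p in range(NUM_PARTITIONS)}

def route(key: str) -> str:
    """Return the node that should handle the request for this key."""
    partition = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PARTITIONS
    return assignment[partition]

print(route("user:42"))   # the routing tier forwards the request to this node

# When rebalancing reassigns a partition, only this map needs to be updated;
# clients keep sending requests to the routing tier unchanged.
assignment[0] = "node3"
```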
Here is a summary of the request routing strategies of popular databases:

| data storage | solution |
|---|---|
| HBase, SolrCloud, Kafka | ZooKeeper as the coordination service |
| MongoDB | Its own config servers plus mongos daemons as the routing tier |
| Cassandra, Riak | Gossip protocol among the nodes, putting the complexity in the nodes and avoiding a dependency on an external coordination service |
| Couchbase | A routing tier that learns routing changes from the cluster nodes; does not support automatic rebalancing |
Extended Topics
Consistent Hashing - Dynamo-style DB partition strategy
Consistent hashing is useful for:
- Sharding: minimal rebalancing of partitions while adding or removing nodes
- Load balancing: better cache performance if the same requests go to the same servers
Overview
- Use a hash function and map its range to a ring
- Map n nodes to the same ring using the same or another hash function
- Each physical node is split into k slices (virtual nodes) through multiple hash functions, so the number of virtual nodes = n*k
- To write data: move clockwise on the ring from hash of data’s key to the nearest virtual node and write data to the physical node represented by that virtual node
- To add a server: when a server s_i is added, the affected range starts from server s_i and moves anticlockwise along the ring until a server s_j is found. Thus, keys located between s_j and s_i need to be redistributed to s_i.
- To remove a node: when a server s_i is removed, the affected range starts from s_i and moves anticlockwise along the ring until a server s_j is found. Thus, keys located between s_j and s_i need to be redistributed to s_i+1.
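Below is a minimal sketch of the ring (pure Python, MD5 as the ring hash, made-up names), with virtual nodes and a demonstration that adding a server only remaps the keys in the affected ranges:

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, virtual_nodes_per_server: int = 8):
        self.k = virtual_nodes_per_server
        self.ring = []          # sorted list of (hash position, server) pairs

    def add_server(self, server: str):
        # Place k virtual nodes for this server on the ring.
        for i in range(self.k):
            pos = ring_hash(f"{server}#vnode{i}")
            bisect.insort(self.ring, (pos, server))

    def remove_server(self, server: str):
        self.ring = [(pos, s) for pos, s in self.ring if s != server]

    def server_for(self, key: str) -> str:
        # Move clockwise from the key's hash to the nearest virtual node.
        pos = ring_hash(key)
        idx = bisect.bisect_right(self.ring, (pos, ""))
        if idx == len(self.ring):        # wrap around the ring
            idx = 0
        return self.ring[idx][1]

ring = ConsistentHashRing()
for s in ["s0", "s1", "s2", "s3"]:
    ring.add_server(s)

keys = [f"user:{i}" for i in range(10_000)]
before = {k: ring.server_for(k) for k in keys}
ring.add_server("s4")                    # add a server: only nearby keys move
moved = sum(before[k] != ring.server_for(k) for k in keys)
print(f"{moved / len(keys):.0%} of keys moved")   # roughly 1/5 in expectation
```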