Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput operations.

Note: MongoDB supports horizontal scaling through sharding.

Note: MongoDB shards data at the collection level, distributing the collection data across the shards in the cluster.

Sharded Cluster

A MongoDB sharded cluster consists of the following components:
shard: Each shard contains a subset of the sharded data. As of MongoDB 3.6, each shard must be deployed as a replica set.
mongos: The mongos acts as a query router, providing an interface between client applications and the sharded cluster.
config servers: Config servers store metadata and configuration settings for the cluster. As of MongoDB 3.4, config servers must be deployed as a replica set.
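As a rough sketch of how these components fit together, the following Python (pymongo) snippet connects to a mongos router and registers a shard replica set with the cluster using the addShard admin command. The host names and replica set name are placeholders, not part of the original article.

    from pymongo import MongoClient

    # Connect to a mongos query router (host name is a placeholder).
    client = MongoClient("mongodb://mongos0.example.net:27017")

    # Register a shard replica set with the cluster. "shardA" is the
    # replica set name; the member hosts are illustrative only.
    client.admin.command(
        "addShard",
        "shardA/shard0a.example.net:27018,shard0b.example.net:27018",
    )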

Shard Keys
To distribute the documents in a collection, MongoDB partitions the collection using the shard key. The shard key consists of an immutable field or fields that exist in every document in the target collection.
You choose the shard key when sharding a collection. The choice of shard key cannot be changed after sharding. A sharded collection can have only one shard key.
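As a minimal sketch, sharding a collection on a single field might look like this in Python with pymongo; the database name appdb, collection users, and field userId are placeholders:

    from pymongo import MongoClient

    # All sharding commands are issued through a mongos router.
    client = MongoClient("mongodb://mongos0.example.net:27017")

    # Enable sharding for the database, then shard the collection on userId.
    client.admin.command("enableSharding", "appdb")
    client.admin.command("shardCollection", "appdb.users", key={"userId": 1})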

Chunks
MongoDB partitions sharded data into chunks. Each chunk has an inclusive lower and exclusive upper range based on the shard key.
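The chunk ranges a cluster currently maintains can be inspected through the config database on a mongos. The sketch below assumes a pymongo connection; the exact set of fields in config.chunks varies slightly across MongoDB versions, but each chunk document carries its owning shard and its min and max shard key bounds.

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos0.example.net:27017")

    # Print each chunk's owning shard and its [min, max) shard key range.
    for chunk in client["config"]["chunks"].find({}, {"shard": 1, "min": 1, "max": 1}):
        print(chunk["shard"], chunk["min"], chunk["max"])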

Balancer and Even Chunk Distribution
In an attempt to achieve an even distribution of chunks across all shards in the cluster, a balancer runs in the background to migrate chunks across the shards.
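The balancer's current state can be checked from a mongos with the balancerStatus admin command, as in this brief sketch:

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos0.example.net:27017")

    # balancerStatus reports the balancer mode and whether a balancing
    # round is currently in progress.
    status = client.admin.command("balancerStatus")
    print(status["mode"], status["inBalancerRound"])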

Connecting to a Sharded Cluster
You must connect to a mongos router to interact with any collection in the sharded cluster. Clients should never connect to a single shard in order to perform read or write operations.
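For example, an application client would point at the mongos routers rather than at any individual shard; the host, database, and field names below are placeholders:

    from pymongo import MongoClient

    # Connect to one or more mongos routers, never directly to a shard.
    client = MongoClient(
        "mongodb://mongos0.example.net:27017,mongos1.example.net:27017"
    )

    # Reads and writes go through the router, which targets the correct shards.
    doc = client["appdb"]["users"].find_one({"userId": 12345})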

Sharding Strategy
MongoDB supports two sharding strategies for distributing data across sharded clusters.
1. Hashed Sharding
Hashed Sharding involves computing a hash of the shard key field’s value. Each chunk is then assigned a range based on the hashed shard key values.
Note: MongoDB automatically computes the hashes when resolving queries using hashed indexes. Applications do not need to compute hashes.
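Sharding a collection with a hashed shard key might look like the following sketch (names are placeholders):

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos0.example.net:27017")

    # Shard on a hash of userId; MongoDB computes the hash when routing
    # queries, so the application only ever works with the raw value.
    client.admin.command("shardCollection", "appdb.events", key={"userId": "hashed"})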

2. Ranged Sharding
Ranged sharding involves dividing the data into contiguous ranges determined by the shard key values. Each chunk is then assigned a range covering a span of those values, so documents with nearby shard key values tend to reside in the same chunk.
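Because chunks cover contiguous ranges, range queries on the shard key can often be routed to a subset of shards. A sketch with a placeholder compound key:

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos0.example.net:27017")

    # A ranged (compound) shard key: chunks cover contiguous ranges of
    # (customerId, orderDate), keeping nearby values on the same chunk.
    client.admin.command(
        "shardCollection", "appdb.orders", key={"customerId": 1, "orderDate": 1}
    )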

Choosing a Shard Key
The choice of shard key affects the creation and distribution of the chunks across the available shards. This affects the overall efficiency and performance of operations within the sharded cluster.

1. Shard Key Cardinality
The cardinality of a shard key determines the maximum number of chunks the balancer can create. A shard key with low cardinality limits that maximum, which can reduce or remove the effectiveness of horizontal scaling in the cluster.

A unique shard key value can exist on no more than a single chunk at any given time. If a shard key has a cardinality of 4, then there can be no more than 4 chunks within the sharded cluster, each storing one unique shard key value. This constrains the number of effective shards in the cluster to 4 as well — adding additional shards would not provide any benefit.

2. Shard Key Frequency
The frequency of the shard key represents how often a given shard key value occurs in the data. If the majority of documents contain only a small subset of the possible shard key values, then the chunks storing those documents can become a bottleneck within the cluster.

Note: Beware of Monotonically Changing Shard Keys
A shard key on a value that increases or decreases monotonically is more likely to distribute inserts to a single shard within the cluster.

This occurs because every cluster has a chunk that captures a range with an upper bound of maxKey. maxKey always compares as higher than all other values. Similarly, there is a chunk that captures a range with a lower bound of minKey. minKey always compares as lower than all other values.

If the shard key value is always increasing, all new inserts are routed to the chunk with maxKey as the upper bound. If the shard key value is always decreasing, all new inserts are routed to the chunk with minKey as the lower bound. The shard containing that chunk becomes the bottleneck for write operations.

Consider using Hashed Sharding in such cases.
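For example, ObjectId-based _id values increase monotonically, so a collection that must be keyed on _id is commonly sharded on a hash of _id instead. A minimal sketch with placeholder names:

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos0.example.net:27017")

    # Hashing the monotonically increasing _id spreads new inserts across
    # chunks instead of always targeting the chunk bounded by maxKey.
    client.admin.command("shardCollection", "appdb.logs", key={"_id": "hashed"})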
