MongoDB Limits and Thresholds
This document provides a collection of hard and soft limitations of the MongoDB system.
BSON Documents
The maximum BSON document size is 16 megabytes. The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See `mongofiles` and the documentation for your driver for more information about GridFS.
Nested Depth for BSON Documents
- MongoDB supports no more than 100 levels of nesting for BSON documents.
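The nesting limit can be estimated client-side before insertion. A minimal sketch, assuming each embedded document or array adds one level; the `bsonDepth` helper is illustrative and not part of any driver API, and the server's exact counting convention may differ slightly:

```javascript
// Sketch: compute the nesting depth of a document, counting each embedded
// document or array as one level. Illustrative only; drivers and the
// server enforce the real 100-level limit.
function bsonDepth(value) {
  if (value === null || typeof value !== "object") {
    return 0; // scalars add no nesting
  }
  const children = Array.isArray(value) ? value : Object.values(value);
  let max = 0;
  for (const child of children) {
    max = Math.max(max, bsonDepth(child));
  }
  return 1 + max;
}
```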
Naming Restrictions
Database Name Case Sensitivity
- Since database names are case insensitive in MongoDB, database names cannot differ only by the case of the characters.
Restrictions on Database Names for Windows
- For MongoDB deployments running on Windows, database names cannot contain any of the following characters:
- `/\. "$*<>:|?`
Also, database names cannot contain the null character.
Restrictions on Database Names for Unix and Linux Systems
- For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:
- `/\. "$`
Also, database names cannot contain the null character.
Restriction on Collection Names
Collection names should begin with an underscore or a letter character, and cannot:

- contain the `$`.
- be an empty string (e.g. `""`).
- contain the null character.
- begin with the `system.` prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the `db.getCollection()` method in the `mongo` shell or a similar method for your driver.

The maximum length of the collection namespace, which includes the database name, the dot (`.`) separator, and the collection name (i.e. `<database>.<collection>`), is 120 bytes.
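The naming rules above can be sketched as a small client-side check. `isValidCollectionName` and `namespaceWithinLimit` are hypothetical helpers for illustration, not driver APIs; the server performs the authoritative validation:

```javascript
// Sketch of the collection naming rules above. Illustrative only.
// (Names should also begin with an underscore or a letter, but that is
// a recommendation rather than a hard restriction.)
function isValidCollectionName(name) {
  if (name.length === 0) return false;           // not an empty string
  if (name.includes("$")) return false;          // cannot contain $
  if (name.includes("\0")) return false;         // no null character
  if (name.startsWith("system.")) return false;  // reserved prefix
  return true;
}

function namespaceWithinLimit(dbName, collName) {
  // <database>.<collection> must be at most 120 bytes
  return Buffer.byteLength(`${dbName}.${collName}`, "utf8") <= 120;
}
```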
Restrictions on Field Names
Field names cannot contain the `null` character. Top-level field names cannot start with the dollar sign (`$`) character.

Otherwise, starting in MongoDB 3.6, the server permits storage of field names that contain dots (i.e. `.`) and dollar signs (i.e. `$`).

Important

The MongoDB Query Language cannot always meaningfully express queries over documents whose field names contain these characters (see SERVER-30575). Until support is added in the query language, the use of `$` and `.` in field names is not recommended and is not supported by the official MongoDB drivers.
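The restrictions above can be linted client-side before insertion. `fieldNameIssues` is an illustrative helper, not a driver API; it flags dots as well, following the recommendation above even though 3.6+ servers can store them:

```javascript
// Sketch of the top-level field-name restrictions above: no null
// characters, no leading $, and (as recommended) no dots. Illustrative
// only; the server and drivers apply the real rules.
function fieldNameIssues(doc) {
  const issues = [];
  for (const name of Object.keys(doc)) {
    if (name.includes("\0")) issues.push(`null character in "${name}"`);
    if (name.startsWith("$")) issues.push(`top-level "${name}" starts with $`);
    if (name.includes(".")) issues.push(`"${name}" contains a dot`);
  }
  return issues;
}
```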
MongoDB does not support duplicate field names
The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.
Namespaces
Namespace Length
- The maximum length of the collection namespace, which includes the database name, the dot (`.`) separator, and the collection name (i.e. `<database>.<collection>`), is 120 bytes.
Indexes
Changed in version 4.2
Starting in version 4.2, MongoDB removes the Index Key Limit for featureCompatibilityVersion (fCV) set to `"4.2"` or greater.

For MongoDB 2.6 through MongoDB versions with fCV set to `"4.0"` or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.
When the Index Key Limit applies:

- MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
- Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the `compact` command as well as the `db.collection.reIndex()` method.

  Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
- MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.
- Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit.

  If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
- `mongorestore` and `mongoimport` will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
- In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs.

  Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit, but with warnings in the logs.

  With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
- For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.
Changed in version 4.2
Starting in version 4.2, MongoDB removes the Index Name Length Limit for MongoDB versions with featureCompatibilityVersion (fCV) set to `"4.2"` or greater.

In previous versions of MongoDB or MongoDB versions with fCV set to `"4.0"` or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. `<database name>.<collection name>.$<index name>`), cannot be longer than 127 bytes.

By default, `<index name>` is the concatenation of the field names and index type. You can explicitly specify the `<index name>` to the `createIndex()` method to ensure that the fully qualified index name does not exceed the limit.
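A sketch of the pre-4.2 length check. `defaultIndexName` and `fitsIndexNameLimit` are illustrative helpers, not driver APIs; the default-name format (field names joined with their sort directions) matches common driver behavior but is an assumption here:

```javascript
// Sketch: approximate the default index name generated from a key
// specification, and check the pre-4.2 127-byte limit on the fully
// qualified index name. Illustrative only.
function defaultIndexName(keySpec) {
  return Object.entries(keySpec)
    .map(([field, dir]) => `${field}_${dir}`)
    .join("_");
}

function fitsIndexNameLimit(dbName, collName, indexName) {
  // Fully qualified form: <database>.<collection>.$<index name>
  const qualified = `${dbName}.${collName}.$${indexName}`;
  return Buffer.byteLength(qualified, "utf8") <= 127;
}
```

If the check fails, pass an explicit shorter name in the `createIndex()` options document.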
Number of Indexed Fields in a Compound Index
- There can be no more than 32 fields in a compound index.
Queries cannot use both text and Geospatial Indexes
- You cannot combine the `$text` query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the `$text` query with the `$near` operator.
Fields with 2dsphere Indexes can only hold Geometries
- Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a `2dsphere` indexed field, or build a `2dsphere` index on a collection where the indexed field has non-geometry data, the operation will fail.
See also
The unique indexes limit in Sharding Operational Restrictions.
NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double
- If the value of a field returned from a query that is covered by an index is `NaN`, the type of that `NaN` value is always `double`.
Multikey Index
- Multikey indexes cannot cover queries over array field(s).
Geospatial Index
- Geospatial indexes cannot cover a query.
Memory Usage in Index Builds
`createIndexes` supports building one or more indexes on a collection. `createIndexes` uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for `createIndexes` is 500 megabytes, shared between all indexes built using a single `createIndexes` command. Once the memory limit is reached, `createIndexes` uses temporary disk files in a subdirectory named `_tmp` within the `--dbpath` directory to complete the build.
You can override the memory limit by setting the `maxIndexBuildMemoryUsageMegabytes` server parameter. Setting a higher memory limit may result in faster completion of index builds larger than 500 megabytes. However, setting this limit too high relative to the unused RAM on your system can result in memory errors.
Changed in version 4.2.
- For feature compatibility version (fCV) `"4.2"`, the index build memory limit applies to all index builds.
- For feature compatibility version (fCV) `"4.0"`, the index build memory limit only applies to foreground index builds.

Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by `maxIndexBuildMemoryUsageMegabytes`.

An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in `maxIndexBuildMemoryUsageMegabytes`.
Tip
To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Build Indexes on Replica Sets.
Collation and Index Types
The following index types only support simple binary comparison and do not support collation:
- text indexes,
- 2d indexes, and
- geoHaystack indexes.
Tip
To create a `text`, a `2d`, or a `geoHaystack` index on a collection that has a non-simple collation, you must explicitly specify `{ collation: { locale: "simple" } }` when creating the index.
Data
Maximum Number of Documents in a Capped Collection
- If you specify a maximum number of documents for a capped collection using the `max` parameter to `create`, the limit must be less than 2³² documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
Replica Sets
Changed in version 3.0.0.
Replica sets can have up to 50 members. See Increased Number of Replica Set Members for more information about specific driver compatibility with large replica sets.
Number of Voting Members of a Replica Set
- Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.
Changed in version 2.6.
If you do not explicitly specify an oplog size (i.e. with `oplogSizeMB` or `--oplogSize`), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
Sharded Clusters
Sharded clusters have the restrictions and thresholds described here.
Sharding Operational Restrictions
Operations Unavailable in Sharded Environments
- `$where` does not permit references to the `db` object from the `$where` function. This is uncommon in un-sharded collections.
- The `geoSearch` command is not supported in sharded environments.
Covered Queries in Sharded Clusters
- Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a `mongos` if the index does not contain the shard key, with the following exception for the `_id` index: If a query on a sharded collection only specifies a condition on the `_id` field and returns only the `_id` field, the `_id` index can cover the query when run against a `mongos` even if the `_id` field is not the shard key.

  In previous versions, an index cannot cover a query on a sharded collection when run against a `mongos`.
Sharding Existing Collection Data Size
- An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.
Important
These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.
Use the following formulas to calculate the theoretical maximum collection size.

- maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
- maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
Note
The maximum BSON document size is 16 MB or `16777216` bytes.
All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.
If `maxCollectionSize` is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.
After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.
This table illustrates the approximate maximum collection sizesusing the formulas described above:
| Average Size of Shard Key Values | 512 bytes | 256 bytes | 128 bytes | 64 bytes |
| --- | --- | --- | --- | --- |
| Maximum Number of Splits | 32,768 | 65,536 | 131,072 | 262,144 |
| Max Collection Size (64 MB Chunk Size) | 1 TB | 2 TB | 4 TB | 8 TB |
| Max Collection Size (128 MB Chunk Size) | 2 TB | 4 TB | 8 TB | 16 TB |
| Max Collection Size (256 MB Chunk Size) | 4 TB | 8 TB | 16 TB | 32 TB |
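The formulas above can be sketched directly; `maxCollectionSizeMB` is an illustrative helper, and the `16777216` constant is the 16 MB maximum BSON document size in bytes:

```javascript
// Sketch of the theoretical maximum-collection-size formula for initial
// sharding: maxSplits = 16777216 / avgShardKeySize, and
// maxCollectionSize (MB) = maxSplits * (chunkSize / 2).
function maxCollectionSizeMB(avgShardKeySizeBytes, chunkSizeMB) {
  const maxSplits = 16777216 / avgShardKeySizeBytes;
  return maxSplits * (chunkSizeMB / 2);
}

// 512-byte shard key values with the default 64 MB chunk size:
// 32768 splits * 32 MB = 1048576 MB, i.e. 1 TB, matching the table.
```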
Single Document Modification Operations in Sharded Collections
- All `update()` and `remove()` operations for a sharded collection that specify the `justOne` or `multi: false` option must include the shard key or the `_id` field in the query specification.

  `update()` and `remove()` operations specifying `justOne` or `multi: false` in a sharded collection which do not contain either the shard key or the `_id` field return an error.
Unique Indexes in Sharded Collections
- MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

  See Unique Constraints on Arbitrary Fields for an alternate approach.
Changed in version 3.4.11.
MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. `db.collection.stats()` includes the `avgObjSize` field, which represents the average document size in the collection.
Shard Key Limitations
Shard Key Index Type
- A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be an index that specifies a multikey index, a text index, or a geospatial index on the shard key fields.
Shard Key Selection is Immutable
- Once you shard a collection, the selection of the shard key is immutable; i.e. you cannot select a different shard key for that collection.
If you must change a shard key:
- Dump all data from MongoDB into an external format.
- Drop the original sharded collection.
- Configure sharding using the new shard key.
- Pre-split the shard key range to ensure initial even distribution.
- Restore the dumped data into MongoDB.
Monotonically Increasing Shard Keys Can Limit Insert Throughput
- For clusters with high insert volumes, shard keys with monotonically increasing and decreasing values can affect insert throughput. If your shard key is the `_id` field, be aware that the default values of the `_id` fields are ObjectIds which have generally increasing values.
When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.
If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.
To avoid this constraint, use a hashed shard key or select a field that does notincrease or decrease monotonically.
Hashed shard keys and hashed indexes store hashes of keys with ascending values.
Operations
Sort Operations
- If MongoDB cannot use an index to get documents in the requested sort order, the combined size of all documents in the sort operation, plus a small overhead, must be less than 32 megabytes.
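A rough feasibility check for an in-memory (non-indexed) sort can use document count times average document size as a proxy for the combined size. `mayFitInMemorySort` is a hypothetical helper, and the estimate ignores the small per-document overhead the server adds:

```javascript
// Sketch: rough check against the 32 MB in-memory sort limit, using
// documentCount * avgDocSizeBytes as a proxy for the combined size.
// Illustrative only; the server enforces the actual limit.
const SORT_MEMORY_LIMIT_BYTES = 32 * 1024 * 1024;

function mayFitInMemorySort(documentCount, avgDocSizeBytes) {
  return documentCount * avgDocSizeBytes < SORT_MEMORY_LIMIT_BYTES;
}
```

When the check fails, add an index supporting the sort order rather than relying on an in-memory sort.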
Aggregation Pipeline Operation
- Pipeline stages have a limit of 100 megabytes of RAM. If a stage exceeds this limit, MongoDB will produce an error. To allow for the handling of large datasets, use the `allowDiskUse` option to enable aggregation pipeline stages to write data to temporary files.
Changed in version 3.4.
The `$graphLookup` stage must stay within the 100 megabyte memory limit. If `allowDiskUse: true` is specified for the `aggregate()` operation, the `$graphLookup` stage ignores the option. If there are other stages in the `aggregate()` operation, the `allowDiskUse: true` option is in effect for these other stages.

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a `usedDisk` indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
See also
$sort and Memory Restrictions and $group Operator and Memory.
Aggregation and Read Concern
- Starting in MongoDB 4.2, the `$out` stage cannot be used in conjunction with read concern `"linearizable"`. That is, if you specify `"linearizable"` read concern for `db.collection.aggregate()`, you cannot include the `$out` stage in the pipeline.
- The `$merge` stage cannot be used in conjunction with read concern `"linearizable"`. That is, if you specify `"linearizable"` read concern for `db.collection.aggregate()`, you cannot include the `$merge` stage in the pipeline.
2d Geospatial Queries cannot use the $or Operator
- See `$or` and 2d Index Internals.

Geospatial Queries
- The use of a `2d` index for spherical queries may lead to incorrect results, such as the use of the `2d` index for spherical queries that wrap around the poles.
Geospatial Coordinates
- Valid longitude values are between `-180` and `180`, both inclusive.
- Valid latitude values are between `-90` and `90`, both inclusive.
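The coordinate bounds can be checked with a small validator; `isValidPosition` is an illustrative helper operating on a GeoJSON-style `[longitude, latitude]` position, not a driver API:

```javascript
// Sketch of the coordinate bounds above for a [longitude, latitude]
// position. Both bounds are inclusive. Illustrative only.
function isValidPosition([longitude, latitude]) {
  return longitude >= -180 && longitude <= 180 &&
         latitude >= -90 && latitude <= 90;
}
```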
Area of GeoJSON Polygons
- For `$geoIntersects` or `$geoWithin`, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the `$geometry` expression; otherwise, `$geoIntersects` or `$geoWithin` queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, `$geoIntersects` or `$geoWithin` queries for the complementary geometry.
Multi-document Transactions
For multi-document transactions:
- You can specify read/write (CRUD) operations on existing collections. The collections can be in different databases. For a list of CRUD operations, see CRUD Operations.
- You cannot write to capped collections. (Starting in MongoDB 4.2)
- You cannot read/write to collections in the `config`, `admin`, or `local` databases.
- You cannot write to `system.*` collections.
- You cannot return the supported operation's query plan (i.e. `explain`).
- For cursors created outside of a transaction, you cannot call `getMore` inside the transaction.
- For cursors created in a transaction, you cannot call `getMore` outside the transaction.
- Starting in MongoDB 4.2, you cannot specify `killCursors` as the first operation in a transaction.

The following operations are not allowed in transactions:

- Operations that affect the database catalog, such as creating or dropping a collection or an index. For example, a transaction cannot include an insert operation that would result in the creation of a new collection.

  The `listCollections` and `listIndexes` commands and their helper methods are also excluded.
- Non-CRUD and non-informational operations, such as `createUser`, `getParameter`, `count`, etc. and their helpers.
Transactions have a lifetime limit as specified by `transactionLifetimeLimitSeconds`. The default is 60 seconds.
Write Command Batch Limit Size
`100,000` writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit raises from `1,000` to `100,000` writes. This limit also applies to legacy `OP_INSERT` messages.

The `Bulk()` operations in the `mongo` shell and comparable methods in the drivers do not have this limit.
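If you assemble write batches manually, you can split them to respect the limit. `chunkOps` is an illustrative helper, not a driver API; most drivers and `Bulk()` handle this for you:

```javascript
// Sketch: split a large array of write operations into batches that
// respect the 100,000-writes-per-request limit. Illustrative only.
const MAX_WRITE_BATCH_SIZE = 100000;

function chunkOps(ops, batchSize = MAX_WRITE_BATCH_SIZE) {
  const batches = [];
  for (let i = 0; i < ops.length; i += batchSize) {
    batches.push(ops.slice(i, i + batchSize));
  }
  return batches;
}
```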
Views
- The view definition `pipeline` cannot include the `$out` or the `$merge` stage. If the view definition includes a nested pipeline (e.g. the view definition includes a `$lookup` or `$facet` stage), this restriction applies to the nested pipelines as well.
Views have the following operation restrictions:
- Views are read-only.
- You cannot rename views.
- `find()` operations on views do not support the following projection operators: `$`, `$elemMatch`, `$slice`, `$meta`.
- Views do not support text search.
- Views do not support map-reduce operations.
- Views do not support geoNear operations (i.e. the `$geoNear` pipeline stage).
Sessions
Changed in version 3.6.3: To use sessions with `$external` authentication users (i.e. Kerberos, LDAP, x.509 users), the usernames cannot be greater than 10k bytes.
Shell
The `mongo` shell prompt has a limit of 4095 codepoints for each line. If you enter a line with more than 4095 codepoints, the shell will truncate it.