Collection Methods
Drop
drops a collection: collection.drop(options)
Drops a collection and all its indexes and data. In order to drop a system collection, an options object with attribute isSystem set to true must be specified.
Dropping a collection in a cluster that is used as the sharding prototype for other collections is prohibited. In order to be able to drop such a collection, all dependent collections must be dropped first.
Examples
arangosh> col = db.example;
arangosh> col.drop();
arangosh> col;
[ArangoCollection 75587, "example" (type document, status loaded)]
[ArangoCollection 75587, "example" (type document, status deleted)]
arangosh> col = db._example;
arangosh> col.drop({ isSystem: true });
arangosh> col;
[ArangoCollection 75594, "_example" (type document, status loaded)]
[ArangoCollection 75594, "_example" (type document, status deleted)]
Truncate
truncates a collection: collection.truncate()
Truncates a collection, removing all documents but keeping all its indexes.
Examples
Truncates a collection:
arangosh> col = db.example;
arangosh> col.save({ "Hello" : "World" });
arangosh> col.count();
arangosh> col.truncate();
arangosh> col.count();
[ArangoCollection 75748, "example" (type document, status loaded)]
{
"_id" : "example/75753",
"_key" : "75753",
"_rev" : "_dAzP06a---"
}
1
0
Compact
Introduced in: v3.4.5
Compacts the data of a collection: collection.compact()
Compacts the data of a collection in order to reclaim disk space. For the MMFiles storage engine, the operation will reset the collection’s last compaction timestamp, so it will become a candidate for compaction. For the RocksDB storage engine, the operation will compact the document and index data by rewriting the underlying .sst files and only keeping the relevant entries.
Under normal circumstances running a compact operation is not necessary, as the collection data will eventually get compacted anyway. However, in some situations, e.g. after running lots of update/replace or remove operations, the disk data for a collection may contain a lot of outdated data whose space should be reclaimed. In this case, the compact operation can be used.
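A minimal sketch of invoking the operation from arangosh, assuming a collection named example exists:
arangosh> db.example.compact();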
Properties
gets or sets the properties of a collection: collection.properties()
Returns an object containing all collection properties.
waitForSync: If true creating a document will only return after the data was synced to disk.
journalSize: The size of the journal in bytes. This option is meaningful for the MMFiles storage engine only.
isVolatile: If true then the collection data will be kept in memory only and ArangoDB will not write or sync the data to disk. This option is meaningful for the MMFiles storage engine only.
keyOptions (optional): additional options for key generation. This is a JSON object containing the following attributes (note: some of the attributes are optional):
- type: the type of the key generator used for the collection.
- allowUserKeys: if set to true, then it is allowed to supply own key values in the _key attribute of a document. If set to false, then the key generator will solely be responsible for generating keys and supplying own key values in the _key attribute of documents is considered an error.
- increment: increment value for autoincrement key generator. Not used for other key generator types.
- offset: initial offset value for autoincrement key generator. Not used for other key generator types.
schema (optional, default is null): Object that specifies the collection-level document schema for documents. The attribute keys rule, level, and message must follow the rules documented in Document Schema Validation.
In a cluster setup, the result will also contain the following attributes:
numberOfShards: the number of shards of the collection.
shardKeys: contains the names of document attributes that are used to determine the target shard for documents.
replicationFactor: determines how many copies of each shard are kept on different DB-Servers. Has to be in the range of 1-10 or the string "satellite" for a SatelliteCollection (Enterprise Edition only). (cluster only)
writeConcern: determines how many copies of each shard are required to be in sync on the different DB-Servers. If there are fewer than this many copies in the cluster, a shard will refuse to write. Writes to shards with enough up-to-date copies will succeed at the same time, however. The value of writeConcern cannot be larger than replicationFactor. (cluster only)
shardingStrategy: the sharding strategy selected for the collection. This attribute will only be populated in cluster mode and is not populated in single-server mode. (cluster only)
collection.properties(properties)
Changes the collection properties. properties must be an object with one or more of the following attribute(s):
waitForSync: If true creating a document will only return after the data was synced to disk.
journalSize: The size of the journal in bytes. This option is meaningful for the MMFiles storage engine only.
replicationFactor: Change the number of shard copies kept on different DB-Servers. Valid values are integer numbers in the range of 1-10 or the string "satellite" for a SatelliteCollection (Enterprise Edition only). (cluster only)
writeConcern: Change how many copies of each shard are required to be in sync on the different DB-Servers. If there are fewer than this many copies in the cluster, a shard will refuse to write. Writes to shards with enough up-to-date copies will succeed at the same time, however. The value of writeConcern cannot be larger than replicationFactor. (cluster only)
Note: some other collection properties, such as type, isVolatile, keyOptions, numberOfShards or shardingStrategy cannot be changed once the collection is created.
Examples
Read all properties
arangosh> db.example.properties();
{
"isSystem" : false,
"waitForSync" : false,
"keyOptions" : {
"allowUserKeys" : true,
"type" : "traditional",
"lastValue" : 0
},
"writeConcern" : 1,
"cacheEnabled" : false,
"schema" : null
}
Change a property
arangosh> db.example.properties({ waitForSync : true });
{
"isSystem" : false,
"waitForSync" : true,
"keyOptions" : {
"allowUserKeys" : true,
"type" : "traditional",
"lastValue" : 0
},
"writeConcern" : 1,
"cacheEnabled" : false,
"schema" : null
}
Figures
returns the figures of a collection: collection.figures(details)
Returns an object containing statistics about the collection.
Setting details to true will add extended storage engine-specific details to the figures (introduced in v3.7.3). The details are intended for debugging ArangoDB itself and their format is subject to change. By default, details is set to false, so no details are returned and the behavior is identical to previous versions of ArangoDB.
- indexes.count: The total number of indexes defined for the collection, including the pre-defined indexes (e.g. primary index).
- indexes.size: The total memory allocated for indexes in bytes.
- documentsSize
- cacheInUse
- cacheSize
- cacheUsage
Examples
arangosh> db.demo.figures()
{
"indexes" : {
"count" : 1,
"size" : 1781
},
"documentsSize" : 16830,
"cacheInUse" : false,
"cacheSize" : 0,
"cacheUsage" : 0
}
arangosh> db.demo.figures(true)
{
"indexes" : {
"count" : 1,
"size" : 1781
},
"documentsSize" : 16830,
"cacheInUse" : false,
"cacheSize" : 0,
"cacheUsage" : 0,
"engine" : {
"documents" : 1,
"indexes" : [
{
"type" : "primary",
"id" : 0,
"count" : 1
}
]
}
}
GetResponsibleShard
returns the responsible shard for the given document: collection.getResponsibleShard(document)
Returns a string with the responsible shard's ID. Note that the returned shard ID is the ID of the responsible shard for the document's shard key values, and it will be returned even if no such document exists.
The getResponsibleShard() method can only be used on Coordinators in clusters.
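A minimal sketch, to be run on a Coordinator, assuming a cluster collection named example that is sharded by the default shard key _key; the document passed only needs to contain the shard key values:
arangosh> db.example.getResponsibleShard({ _key: "test" });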
Shards
returns the available shards for the collection: collection.shards(details)
If details is not set, or set to false, returns an array with the names of the available shards of the collection. If details is set to true, returns an object with the shard names as object attribute keys, and the responsible servers as an array mapped to each shard attribute key.
The leader shards are always first in the arrays of responsible servers.
The shards() method can only be used on Coordinators in clusters.
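A minimal sketch of both call variants, to be run on a Coordinator, assuming a cluster collection named example; the first call returns just the shard names, the second one also lists the responsible servers per shard:
arangosh> db.example.shards();
arangosh> db.example.shards(true);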
Load
loads a collection: collection.load()
Loads a collection into memory.
Cluster collections are loaded at all times.
Examples
arangosh> col = db.example;
arangosh> col.load();
arangosh> col;
[ArangoCollection 75675, "example" (type document, status loaded)]
[ArangoCollection 75675, "example" (type document, status loaded)]
Revision
returns the revision id of a collection: collection.revision()
Returns the revision id of the collection.
The revision id is updated when the document data is modified, either by inserting, deleting, updating or replacing documents in it.
The revision id of a collection can be used by clients to check whether data in a collection has changed or if it is still unmodified since a previous fetch of the revision id.
The revision id returned is a string value. Clients should treat this value as an opaque string, and only use it for equality/non-equality comparisons.
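A minimal sketch of using the revision id for change detection, assuming a collection named example; the variable name rev is illustrative only. After the insert, the stored revision id no longer matches the collection's current one:
arangosh> var rev = db.example.revision();
arangosh> db.example.save({ value: 1 });
arangosh> rev === db.example.revision();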
Path
returns the physical path of the collection: collection.path()
The path operation returns a string with the physical storage path for the collection data.
The path() method will return nothing meaningful in a cluster. In a single-server ArangoDB, this method will only return meaningful data for the MMFiles storage engine.
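A minimal sketch, assuming a single-server installation with the MMFiles storage engine and a collection named example:
arangosh> db.example.path();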
Checksum
calculates a checksum for the data in a collection: collection.checksum(withRevisions, withData)
The checksum operation calculates an aggregate hash value for all document keys contained in collection collection.
If the optional argument withRevisions is set to true, then the revision ids of the documents are also included in the hash calculation.
If the optional argument withData is set to true, then all user-defined document attributes are also checksummed. Including the document data in checksumming will make the calculation slower, but is more accurate.
The checksum calculation algorithm changed in ArangoDB 3.0, so checksums from 3.0 and earlier versions for the same data will differ.
The checksum() method cannot be used in clusters.
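A minimal sketch of the three variants (keys only, keys plus revision ids, keys plus revision ids and document data), assuming a single-server installation and a collection named example:
arangosh> db.example.checksum();
arangosh> db.example.checksum(true);
arangosh> db.example.checksum(true, true);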
Unload
unloads a collection: collection.unload()
Starts unloading a collection from memory. Note that unloading is deferred until all queries have finished.
Cluster collections cannot be unloaded.
Examples
arangosh> col = db.example;
arangosh> col.unload();
arangosh> col;
[ArangoCollection 65565, "example" (type document, status loaded)]
[ArangoCollection 65565, "example" (type document, status unloaded)]
Rename
renames a collection: collection.rename(new-name)
Renames a collection to new-name. The new-name must not already be used for a different collection. new-name must also be a valid collection name. For more information on valid collection names please refer to the naming conventions.
If renaming fails for any reason, an error is thrown. If renaming the collection succeeds, then the collection is also renamed in all graph definitions inside the _graphs collection in the current database.
The rename() method cannot be used in clusters.
Examples
arangosh> c = db.example;
arangosh> c.rename("better-example");
arangosh> c;
[ArangoCollection 75739, "example" (type document, status loaded)]
[ArangoCollection 75739, "better-example" (type document, status loaded)]