Feature | Description
---|---
Geo-replication | TiKV uses the Raft consensus algorithm and the Placement Driver to support geo-replication. |
Horizontal scalability | With the Placement Driver and carefully designed Raft groups, TiKV excels in horizontal scalability and can easily scale to 100+ terabytes of data. |
Consistent distributed transactions | Similar to Google’s Spanner, TiKV supports externally consistent distributed transactions (see the transactional sketch below this table). |
Coprocessor support | Similar to HBase, TiKV implements a coprocessor framework to support distributed computing. |
Automatic sharding | TiKV shards your data into regions without manual intervention, reducing maintenance burden. |
Region balance | TiKV rebalances regions across the cluster in response to faults, workload changes, or topology changes. |
Dynamic membership | Grow or shrink TiKV clusters dynamically, without the need for downtime. |
Rolling online updates | Using supported deployment methods, upgrade TiKV clusters safely while they remain online. |
Extensive metrics suite | Easily integrate TiKV into your infrastructure monitoring with extensive Prometheus reporting. |
Flexible APIs | Use transactional or raw APIs over gRPC through clients in your favorite language, or use gRPC directly (see the sketches below this table). |
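
As a taste of the raw key-value API, here is a minimal sketch using the Rust `tikv-client` crate. The PD endpoint (`127.0.0.1:2379`), the crate version, and the tokio runtime are assumptions for illustration; adjust them for your deployment and client version.

```rust
// Assumed Cargo.toml dependencies:
//   tikv-client = "0.3"
//   tokio = { version = "1", features = ["full"] }
use tikv_client::RawClient;

#[tokio::main]
async fn main() -> Result<(), tikv_client::Error> {
    // Connect through the Placement Driver (PD); the endpoint is an assumption.
    let client = RawClient::new(vec!["127.0.0.1:2379"]).await?;

    // Write and read a single key with the raw (non-transactional) API.
    client.put("hello".to_owned(), "world".to_owned()).await?;
    let value = client.get("hello".to_owned()).await?;
    println!("hello = {:?}", value);

    // Remove the key again.
    client.delete("hello".to_owned()).await?;
    Ok(())
}
```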
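
The transactional API, which backs the externally consistent transactions described above, can be sketched the same way. This assumes the same crate and PD endpoint; method names such as `begin_optimistic` follow the client's documented interface but may differ between versions.

```rust
use tikv_client::TransactionClient;

#[tokio::main]
async fn main() -> Result<(), tikv_client::Error> {
    // Connect through PD, then start an optimistic transaction.
    let client = TransactionClient::new(vec!["127.0.0.1:2379"]).await?;
    let mut txn = client.begin_optimistic().await?;

    // Reads and writes inside the transaction are isolated until commit.
    txn.put("account:1".to_owned(), "100".to_owned()).await?;
    let balance = txn.get("account:1".to_owned()).await?;
    println!("account:1 = {:?}", balance);

    // Commit; conflicts with concurrent transactions surface as errors here.
    let _commit_ts = txn.commit().await?;
    Ok(())
}
```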