- All questions on one page
- General questions
- What is YDB?
- What features does YDB provide?
- What consistency model does YDB use?
- How do I design a primary key?
- How do I evenly distribute load across table partitions?
- Can I use NULL in a key column?
- Is there an optimal size of a database row?
- How are secondary indexes used in YDB?
- How is paginated output performed?
- How do I efficiently upload large amounts of data to YDB?
- How do I delete expired data?
- SDK
- What should I do if the SDK crashes when shutting down an application?
- What should I do if, when using a fork() system call, a program does not work properly in a child process?
- What do I do if I get the “Active sessions limit exceeded” error even though the current number of active sessions is within limits?
- Is it possible to make queries to different databases from the same application?
- What should I do if a VM has failed and it’s impossible to make a query?
- Errors
- Possible causes for “Status: OVERLOADED Error: Pending previous query completion” in the C++ SDK
- What do I do if I frequently get the “Transaction locks invalidated” error?
- What causes the “Exceeded maximum allowed number of active transactions” error?
- What do I do if I get the “Datashard: Reply size limit exceeded” error in response to a query?
- What do I do if I get the “Datashard program size limit exceeded” error in response to a query?
- YQL
- Serverless
All questions on one page
General questions
What is YDB?
YDB is a distributed fault-tolerant SQL DBMS. YDB provides high availability and scalability while simultaneously ensuring strict consistency and ACID transaction support. Queries are made using an SQL dialect (YQL).
YDB is a fully managed database. DB instances are created through the YDB database management service.
What features does YDB provide?
YDB provides high availability and data security through synchronous replication across three availability zones. YDB also ensures even load distribution across the available hardware. This means you don’t need to order resources: YDB automatically provisions and releases resources based on the user load.
What consistency model does YDB use?
For data reads, YDB uses a strict consistency model.
How do I design a primary key?
To design a primary key properly, follow the rules below.
- Avoid situations where the main load falls on a single partition of a table. The more evenly the load is distributed, the easier it is to achieve high overall performance. This rule implies that you shouldn’t use a monotonically increasing sequence, such as a timestamp, as a table’s primary key.
- The fewer table partitions a query uses, the faster it runs. For greater performance, follow the one query — one partition rule.
- Avoid situations where a small part of the DB is under much heavier load than the rest of the DB.
For more information, see Schema design.
How do I evenly distribute load across table partitions?
You can use the following techniques to distribute the load evenly across table partitions and increase overall DB performance.
- To avoid uniformly increasing primary key values, you can:
- Change the order of the key components.
- Use a hash of the key column values as the primary key (see the sketch below).
- Reduce the number of partitions used in a single query.
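The hashed-key technique can be sketched as follows. The table name events, its layout, and the choice of Digest::CityHash are only illustrative assumptions; any hash function that spreads values evenly will do. The table’s primary key is assumed to be (hash, ts):
DECLARE $ts AS Uint64;
DECLARE $payload AS String;

-- Write through a hash of the natural key so that consecutive timestamps
-- land in different partitions instead of always hitting the last one.
UPSERT INTO events (hash, ts, payload)
VALUES (Digest::CityHash(CAST($ts AS String)), $ts, $payload);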
For more information, see Schema design.
Can I use NULL in a key column?
In YDB, all columns, including key ones, may contain a NULL value, but we don’t recommend using NULL as a value in key columns.
Per the SQL standard (ISO/IEC 9075), NULL cannot be compared with other values. Therefore, SQL statements that use simple comparison operators may, for example, silently skip rows containing NULL during filtering.
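For example, assuming a hypothetical table users with an optional key column group_id, a plain comparison silently drops rows where group_id is NULL, because comparing NULL with a number yields NULL rather than true:
-- Rows where group_id is NULL are not returned by this filter.
SELECT * FROM users
WHERE group_id < 100;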
Is there an optimal size of a database row?
To achieve high performance, we don’t recommend writing rows larger than 8 MB and key columns larger than 2 KB to the DB.
For more information about limits, see Database limits.
How are secondary indexes used in YDB?
Secondary indexes in YDB are global and can be non-unique.
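As a minimal sketch (the table and index names are hypothetical), a global index is declared together with the table and then referenced explicitly in queries with VIEW:
CREATE TABLE orders (
    order_id Uint64,
    customer_id Uint64,
    amount Uint64,
    PRIMARY KEY (order_id),
    INDEX idx_customer GLOBAL ON (customer_id)
);

-- Read through the secondary index instead of scanning the whole table.
SELECT * FROM orders VIEW idx_customer
WHERE customer_id = 42;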
For more information, see Secondary indexes.
How is paginated output performed?
To organize paginated output, we recommend selecting data sorted by primary key sequentially, limiting the number of rows with the LIMIT keyword. We don’t recommend using the OFFSET keyword for this purpose (see the sketch below).
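A minimal sketch of this approach, assuming a hypothetical table schools with the composite primary key (city, number): each page continues from the last key returned by the previous page instead of using OFFSET:
DECLARE $last_city AS Utf8;
DECLARE $last_number AS Uint64;
DECLARE $limit AS Uint64;

-- Fetch the next page strictly after the last (city, number) already shown.
SELECT * FROM schools
WHERE city > $last_city
   OR (city = $last_city AND number > $last_number)
ORDER BY city, number
LIMIT $limit;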
For more information, see Paginated output.
How do I efficiently upload large amounts of data to YDB?
To increase upload speed for large amounts of data, follow the recommendations below:
- When creating a table, explicitly specify the required number of partitions or their boundaries. This lets you use the system bandwidth effectively from the start of the upload by avoiding unnecessary re-partitioning of the table.
- Don’t insert data in separate transactions for each row. It’s more efficient to insert multiple rows at once (batch inserts; see the sketch below this list). This reduces the overhead of the transaction mechanism itself.
- In addition to the previous step, within each transaction (batch), insert rows from the primary key-sorted set of data to minimize the number of partitions that the transaction affects.
- Avoid writing data sequentially in ascending or descending order of the primary key value to evenly distribute the load across all table partitions.
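The batching recommendation can be sketched as follows, assuming a hypothetical table kv_table: the rows of one batch are passed as a single list parameter and written with a single UPSERT, so one transaction covers the whole batch:
DECLARE $batch AS List<Struct<Key: Uint64, Value: String>>;

-- One statement writes the whole batch; pre-sorting $batch by Key
-- keeps the number of partitions touched by the transaction small.
UPSERT INTO kv_table
SELECT Key, Value FROM AS_TABLE($batch);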
For more detail, see Uploading large volumes of data.
How do I delete expired data?
To effectively remove expired data, we recommend using TTL.
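For example, assuming a hypothetical table my_table with a Timestamp column expire_at, TTL can be enabled on an existing table like this; rows are deleted once the current time passes the value stored in expire_at:
-- Delete rows as soon as the time stored in expire_at is reached.
ALTER TABLE my_table SET (TTL = Interval("PT0S") ON expire_at);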
SDK
What should I do if the SDK crashes when shutting down an application?
Make sure not to wrap SDK components in a singleton, since their lifetime shouldn’t exceed the execution time of the main() function. When a client is destroyed, session pools are drained, which requires network interaction; however, gRPC keeps global static variables that may already have been destroyed by that time, making gRPC unusable. If you need to declare a driver as a global object, call Stop(true) on the driver before exiting the main() function.
What should I do if, when using a fork() system call, a program does not work properly in a child process?
Using fork() in multithreaded applications is an antipattern. Since both the SDK and the gRPC library are multithreaded, their correct operation in a child process after fork() is not guaranteed.
What do I do if I get the “Active sessions limit exceeded” error even though the current number of active sessions is within limits?
The limit applies to the number of active sessions. An active session is a session handed over to the client to be used in its code. A session is returned to the pool in its destructor. The session itself is a copyable object, so a copy of the session saved somewhere in your code keeps it counted as active.
Is it possible to make queries to different databases from the same application?
Yes, the C++ SDK lets you override the DB parameters and token when creating a client. There is no need to create separate drivers.
What should I do if a VM has failed and it’s impossible to make a query?
To detect that a VM is unavailable, set a client timeout. All queries contain the client timeout parameters. The timeout value should be an order of magnitude greater than the expected query execution time.
Errors
Possible causes for “Status: OVERLOADED Error: Pending previous query completion” in the C++ SDK
Q: When running two queries, I try to get a response from the future of the second one, and it returns: Status: OVERLOADED Why: <main>: Error: Pending previous query completion.
A: Sessions in the SDK are single-threaded. To run multiple queries concurrently, create multiple sessions.
What do I do if I frequently get the “Transaction locks invalidated” error?
Typically, if you get this error, retry the transaction, as YDB uses optimistic locking. If the error occurs frequently, it is the result of a transaction reading a large number of rows or of many transactions competing for the same “hot” rows. It makes sense to review the queries running in the transaction and check whether they read unnecessary rows.
What causes the “Exceeded maximum allowed number of active transactions” error?
No more than 10 active transactions are allowed per session. When starting a transaction, either set the commit flag for autocommit or finish the transaction with an explicit commit/rollback. The logic on the client side should keep transactions as short as possible.
What do I do if I get the “Datashard: Reply size limit exceeded” error in response to a query?
This error means that, as a query was running, one of the participating data shards attempted to return over 50 MB of data, which exceeds the allowed limit.
Recommendations:
- A general recommendation is to reduce the amount of data processed in a transaction.
- If a query involves a Join, make sure that the method used is an Index Lookup Join.
- If a simple selection is performed, make sure it is done by keys, or add a LIMIT clause to the query (see the sketch below).
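For instance, a key-range selection with an explicit LIMIT (the table name and values are illustrative) keeps the amount of data returned by each data shard bounded:
DECLARE $from AS Uint64;

-- Read at most 1000 rows starting from the given key.
SELECT * FROM big_table
WHERE Key >= $from
ORDER BY Key
LIMIT 1000;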
What do I do if I get the “Datashard program size limit exceeded” error in response to a query?
This error means that the size of a program (including parameter values) exceeded the 50-MB limit for one of the data shards. In most cases, this indicates an attempt to write over 50 MB of data to database tables in a single transaction. All modifying operations in a transaction, such as UPSERT, REPLACE, INSERT, or UPDATE, count toward this size.
You need to reduce the total size of records in one transaction. Normally, we don’t recommend combining queries that logically don’t require transactionality in a single transaction. When adding/updating data in batches, we recommend reducing the size of one batch to values not exceeding a few megabytes.
YQL
General questions
How do I select table rows by a list of keys?
You can select table rows based on a specified list of table primary key (or key prefix) values using the IN operator:
DECLARE $keys AS List<UInt64>;
SELECT * FROM some_table
WHERE Key1 IN $keys;
If a selection is made using a composite key, the query parameter must have the type of a list of tuples:
DECLARE $keys AS List<Tuple<UInt64, String>>;
SELECT * FROM some_table
WHERE (Key1, Key2) IN $keys;
To make the selection efficient, make sure that the value types in the parameters match the key column types in the table.
Is search by index performed for conditions containing the LIKE operator?
The LIKE operator can use a table index only when it specifies a string prefix:
SELECT * FROM string_key_table
WHERE Key LIKE "some_prefix%";
Why does a query return only 1000 rows?
A YQL query response is limited to 1000 rows. If a response is shortened, it is flagged as Truncated. To read more table rows, you can use paginated output or the ReadTable operation.
How do I update only those values whose keys are not in the table?
You can use a LEFT JOIN to identify the keys missing from a table and insert values only for them:
DECLARE $values AS List<Struct<Key: UInt64, Value: String>>;
UPSERT INTO kv_table
SELECT v.Key AS Key, v.Value AS Value
FROM AS_TABLE($values) AS v
LEFT JOIN kv_table AS t
ON v.Key = t.Key
WHERE t.Key IS NULL;
Join operations
Are there any specific features of Join operations?
A Join in YDB is performed using one of the two methods below:
- Common Join.
- Index Lookup Join.
Common Join
The contents of both tables (to the left and to the right of the Join) are sent to the requesting node, which then joins the full data sets. This is the most generic way of performing a Join; it is used whenever other optimizations are unavailable. For large tables, this method is either slow or doesn’t work at all because it exceeds the data transfer limits.
Index Lookup Join
For each row on the left side of the Join, matching values are looked up in the right table. This method is used whenever the right side is a table and the Join key is a prefix of its primary key or of a secondary index key. Instead of full reads, it performs targeted selections from the right table, which makes it suitable for large tables.
Note
For most OLTP queries, we recommend using Index Lookup Join with a small left part. Such operations read little data and run efficiently.
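As an illustrative sketch (both table names are hypothetical), the following Join can run as an Index Lookup Join because the Join key id is a prefix of the right table’s primary key, so only the matching rows of large_table are read:
SELECT s.id, l.value
FROM small_table AS s
INNER JOIN large_table AS l  -- primary key of large_table starts with id
ON l.id = s.id;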
How do I Join data from query parameters?
You can use query parameter data as a constant table. To do this, apply AS_TABLE to a parameter whose type is a list of structures:
DECLARE $data AS List<Struct<Key1: UInt64, Key2: String>>;
SELECT * FROM AS_TABLE($data) AS d
INNER JOIN some_table AS t
ON t.Key1 = d.Key1 AND t.Key2 = d.Key2;
There is no explicit limit on the number of entries in the constant table, but mind the standard limit on the total size of query parameters (50 MB).
What’s the best way to implement a query like (key1, key2) IN ((v1, v2), (v3, v4), …)?
It’s better to write it using a JOIN with a constant table:
$keys = AsList(
AsStruct(1 AS Key1, "One" AS Key2),
AsStruct(2 AS Key1, "Three" AS Key2),
AsStruct(4 AS Key1, "One" AS Key2)
);
SELECT t.* FROM AS_TABLE($keys) AS k
INNER JOIN table1 AS t
ON t.Key1 = k.Key1 AND t.Key2 = k.Key2;
Transactions
How efficient is it to run multiple queries in a transaction?
When multiple queries are run sequentially, the total transaction latency may be greater than when the same operations are executed within a single query. This is primarily due to additional network latency for each query. Therefore, if a transaction doesn’t need to be interactive, we recommend formulating all operations in a single YQL query.
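As a sketch of this recommendation (table and column names are hypothetical), two modifications that would otherwise require two round trips are sent as one YQL query and committed in a single transaction:
DECLARE $order_id AS Uint64;
DECLARE $user_id AS Uint64;

-- Both statements below are part of one YQL query text,
-- so they execute in one transaction with a single round trip.
UPSERT INTO user_orders (order_id, user_id, status)
VALUES ($order_id, $user_id, "created");

UPSERT INTO user_last_order (user_id, order_id)
VALUES ($user_id, $order_id);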
Is a separate query atomic?
In general, YQL queries can be executed in multiple consecutive phases. For example, a Join query can be executed in two phases that read data from the left and right tables, respectively. This is important when you run a query in a transaction with a lower isolation level (online_read_only), because in that case other transactions can update the data between execution phases.
Serverless
How do secondary indexes affect the request cost?
Operations with indexes are billed according to the same rules as operations with tables. They are reflected in the request statistics and included in the totals used to calculate the cost in Request Units (RU). For more information, see the pricing policy for the serverless YDB API.
When reading data from a table using an index, the request statistics will show the number of rows read from the index and their volume.
When adding a new row to a table, a record is also added to each index that exists in this table, with the number of added records and the volume of written data shown in the statistics.
Whenever a table row update occurs, the statistics will reflect a deletion operation for the old record and an insert for the new one for all indexes that include the fields being updated.
When deleting a table row, the statistics will include the deletion of records from all indexes in this table.