Quick Start

Before proceeding, please ensure you have installed GreptimeDB.

This guide will walk you through creating a metric table and a log table, highlighting the core features of GreptimeDB.

Connect to GreptimeDB

GreptimeDB supports multiple protocols for interacting with the database. In this quick start document, we use SQL for simplicity.

If your GreptimeDB instance is running on 127.0.0.1 with the default MySQL port 4002 or the default PostgreSQL port 4003, you can connect to GreptimeDB using the following commands.

By default, GreptimeDB does not have authentication enabled. You can connect to the database without providing a username and password in this section.
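
If you enable authentication later, the same clients accept credentials on the command line; a sketch with a hypothetical user named greptime_user:

  mysql -h 127.0.0.1 -P 4002 -u greptime_user -p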

  mysql -h 127.0.0.1 -P 4002

Or

  psql -h 127.0.0.1 -p 4003 -d public

Create tables

Suppose you have a table named grpc_latencies that stores the gRPC latencies of your application. The table schema is as follows:

  CREATE TABLE grpc_latencies (
    ts TIMESTAMP TIME INDEX,
    host STRING,
    method_name STRING,
    latency DOUBLE,
    PRIMARY KEY (host, method_name)
  )
  engine=mito with('append_mode'='true');

  • ts: The timestamp when the metric was collected. It is the time index column.
  • host: The hostname of the application server. It is a tag column.
  • method_name: The name of the RPC request method. It is a tag column.
  • latency: The latency of the RPC request. It is a field column.
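
To check how GreptimeDB classified each column (time index, tag, or field), you can describe the table; the exact output layout may vary by version:

  DESC TABLE grpc_latencies;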

Additionally, there is a table app_logs for storing application logs:

  CREATE TABLE app_logs (
    ts TIMESTAMP TIME INDEX,
    host STRING,
    api_path STRING FULLTEXT,
    log_level STRING,
    log STRING FULLTEXT,
    PRIMARY KEY (host, log_level)
  )
  engine=mito with('append_mode'='true');

  • ts: The timestamp of the log entry. It is the time index column.
  • host: The hostname of the application server. It is a tag column.
  • api_path: The API path, indexed with FULLTEXT for accelerated search.
  • log_level: The log level of the log entry. It is a tag column.
  • log: The log message, indexed with FULLTEXT for accelerated search.
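
Likewise, you can confirm the full table definition, including the FULLTEXT options, with:

  SHOW CREATE TABLE app_logs;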

Write data

Let’s insert some mocked data to simulate collected metrics and error logs.

Two application servers, host1 and host2, have been recording gRPC latencies. Starting from 2024-07-11 20:00:10, host1 experienced a significant increase in latency.

The following image shows the unstable latencies of host1.

[Image: unstable latencies of host1]

The following SQL statements insert the mocked data.

Before 2024-07-11 20:00:10, the hosts were functioning normally:

  INSERT INTO grpc_latencies (ts, host, method_name, latency) VALUES
    ('2024-07-11 20:00:06', 'host1', 'GetUser', 103.0),
    ('2024-07-11 20:00:06', 'host2', 'GetUser', 113.0),
    ('2024-07-11 20:00:07', 'host1', 'GetUser', 103.5),
    ('2024-07-11 20:00:07', 'host2', 'GetUser', 107.0),
    ('2024-07-11 20:00:08', 'host1', 'GetUser', 104.0),
    ('2024-07-11 20:00:08', 'host2', 'GetUser', 96.0),
    ('2024-07-11 20:00:09', 'host1', 'GetUser', 104.5),
    ('2024-07-11 20:00:09', 'host2', 'GetUser', 114.0);

After 2024-07-11 20:00:10, host1's latencies become unstable:

  INSERT INTO grpc_latencies (ts, host, method_name, latency) VALUES
    ('2024-07-11 20:00:10', 'host1', 'GetUser', 150.0),
    ('2024-07-11 20:00:10', 'host2', 'GetUser', 110.0),
    ('2024-07-11 20:00:11', 'host1', 'GetUser', 200.0),
    ('2024-07-11 20:00:11', 'host2', 'GetUser', 102.0),
    ('2024-07-11 20:00:12', 'host1', 'GetUser', 1000.0),
    ('2024-07-11 20:00:12', 'host2', 'GetUser', 108.0),
    ('2024-07-11 20:00:13', 'host1', 'GetUser', 80.0),
    ('2024-07-11 20:00:13', 'host2', 'GetUser', 111.0),
    ('2024-07-11 20:00:14', 'host1', 'GetUser', 4200.0),
    ('2024-07-11 20:00:14', 'host2', 'GetUser', 95.0),
    ('2024-07-11 20:00:15', 'host1', 'GetUser', 90.0),
    ('2024-07-11 20:00:15', 'host2', 'GetUser', 115.0),
    ('2024-07-11 20:00:16', 'host1', 'GetUser', 3000.0),
    ('2024-07-11 20:00:16', 'host2', 'GetUser', 95.0),
    ('2024-07-11 20:00:17', 'host1', 'GetUser', 320.0),
    ('2024-07-11 20:00:17', 'host2', 'GetUser', 115.0),
    ('2024-07-11 20:00:18', 'host1', 'GetUser', 3500.0),
    ('2024-07-11 20:00:18', 'host2', 'GetUser', 95.0),
    ('2024-07-11 20:00:19', 'host1', 'GetUser', 100.0),
    ('2024-07-11 20:00:19', 'host2', 'GetUser', 115.0),
    ('2024-07-11 20:00:20', 'host1', 'GetUser', 2500.0),
    ('2024-07-11 20:00:20', 'host2', 'GetUser', 95.0);

Some error logs were collected on host1 while its RPC requests were experiencing latency issues:

  INSERT INTO app_logs (ts, host, api_path, log_level, log) VALUES
    ('2024-07-11 20:00:10', 'host1', '/api/v1/resource', 'ERROR', 'Connection timeout'),
    ('2024-07-11 20:00:10', 'host1', '/api/v1/billings', 'ERROR', 'Connection timeout'),
    ('2024-07-11 20:00:11', 'host1', '/api/v1/resource', 'ERROR', 'Database unavailable'),
    ('2024-07-11 20:00:11', 'host1', '/api/v1/billings', 'ERROR', 'Database unavailable'),
    ('2024-07-11 20:00:12', 'host1', '/api/v1/resource', 'ERROR', 'Service overload'),
    ('2024-07-11 20:00:12', 'host1', '/api/v1/billings', 'ERROR', 'Service overload'),
    ('2024-07-11 20:00:13', 'host1', '/api/v1/resource', 'ERROR', 'Connection reset'),
    ('2024-07-11 20:00:13', 'host1', '/api/v1/billings', 'ERROR', 'Connection reset'),
    ('2024-07-11 20:00:14', 'host1', '/api/v1/resource', 'ERROR', 'Timeout'),
    ('2024-07-11 20:00:14', 'host1', '/api/v1/billings', 'ERROR', 'Timeout'),
    ('2024-07-11 20:00:15', 'host1', '/api/v1/resource', 'ERROR', 'Disk full'),
    ('2024-07-11 20:00:15', 'host1', '/api/v1/billings', 'ERROR', 'Disk full'),
    ('2024-07-11 20:00:16', 'host1', '/api/v1/resource', 'ERROR', 'Network issue'),
    ('2024-07-11 20:00:16', 'host1', '/api/v1/billings', 'ERROR', 'Network issue');
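
As a quick sanity check, the row counts should match the statements above: 30 rows in grpc_latencies (8 + 22) and 14 rows in app_logs.

  SELECT count(*) FROM grpc_latencies;
  SELECT count(*) FROM app_logs;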

Query data

Filter by tags and time index

You can filter data using the WHERE clause. For example, to query the latency of host1 after 2024-07-11 20:00:15:

  SELECT *
  FROM grpc_latencies
  WHERE host = 'host1' AND ts > '2024-07-11 20:00:15';

  +---------------------+-------+-------------+---------+
  | ts                  | host  | method_name | latency |
  +---------------------+-------+-------------+---------+
  | 2024-07-11 20:00:16 | host1 | GetUser     |    3000 |
  | 2024-07-11 20:00:17 | host1 | GetUser     |     320 |
  | 2024-07-11 20:00:18 | host1 | GetUser     |    3500 |
  | 2024-07-11 20:00:19 | host1 | GetUser     |     100 |
  | 2024-07-11 20:00:20 | host1 | GetUser     |    2500 |
  +---------------------+-------+-------------+---------+
  5 rows in set (0.14 sec)

You can also use functions when querying the data. For example, approx_percentile_cont calculates the 95th percentile of the latency, grouped by host:

  SELECT
    approx_percentile_cont(latency, 0.95) AS p95_latency,
    host
  FROM grpc_latencies
  WHERE ts >= '2024-07-11 20:00:10'
  GROUP BY host;

  +-------------------+-------+
  | p95_latency       | host  |
  +-------------------+-------+
  | 4164.999999999999 | host1 |
  |               115 | host2 |
  +-------------------+-------+
  2 rows in set (0.11 sec)
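
Standard SQL aggregate functions work the same way; for example, a sketch comparing the average and maximum latency per host:

  SELECT
    host,
    avg(latency) AS avg_latency,
    max(latency) AS max_latency
  FROM grpc_latencies
  GROUP BY host;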

Range query

You can use range queries to monitor latencies in real time. For example, to calculate the p95 latency of requests in a 5-second window:

  SELECT
    ts,
    host,
    approx_percentile_cont(latency, 0.95) RANGE '5s' AS p95_latency
  FROM
    grpc_latencies
  ALIGN '5s' FILL PREV;

  +---------------------+-------+-------------+
  | ts                  | host  | p95_latency |
  +---------------------+-------+-------------+
  | 2024-07-11 20:00:05 | host2 |         114 |
  | 2024-07-11 20:00:10 | host2 |         111 |
  | 2024-07-11 20:00:15 | host2 |         115 |
  | 2024-07-11 20:00:20 | host2 |          95 |
  | 2024-07-11 20:00:05 | host1 |       104.5 |
  | 2024-07-11 20:00:10 | host1 |        4200 |
  | 2024-07-11 20:00:15 | host1 |        3500 |
  | 2024-07-11 20:00:20 | host1 |        2500 |
  +---------------------+-------+-------------+
  8 rows in set (0.06 sec)
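
FILL PREV fills an empty window with the last non-empty value before it; other fill policies can be swapped in. For example, a sketch using a 10-second window and leaving empty windows as NULL (assuming your version supports FILL NULL):

  SELECT
    ts,
    host,
    approx_percentile_cont(latency, 0.95) RANGE '10s' AS p95_latency
  FROM
    grpc_latencies
  ALIGN '10s' FILL NULL;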

Full-text search

You can use matches to search columns that have a FULLTEXT index. For example, to search for logs containing the word timeout:

  SELECT
    ts,
    api_path,
    log
  FROM
    app_logs
  WHERE
    matches(log, 'timeout');

  +---------------------+------------------+--------------------+
  | ts                  | api_path         | log                |
  +---------------------+------------------+--------------------+
  | 2024-07-11 20:00:10 | /api/v1/billings | Connection timeout |
  | 2024-07-11 20:00:10 | /api/v1/resource | Connection timeout |
  +---------------------+------------------+--------------------+
  2 rows in set (0.01 sec)
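
The second argument to matches is a query string rather than a single keyword. Assuming your version supports boolean operators in the full-text query syntax, a sketch matching either of two terms:

  SELECT ts, api_path, log
  FROM app_logs
  WHERE matches(log, 'timeout OR reset');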

Correlate Metrics and Logs

By combining the data from the two tables, you can quickly determine when the failure occurred and inspect the corresponding logs. The following SQL query uses a JOIN to correlate the metrics and logs:

  WITH
    metrics AS (
      SELECT
        ts,
        host,
        approx_percentile_cont(latency, 0.95) RANGE '5s' AS p95_latency
      FROM
        grpc_latencies
      ALIGN '5s' FILL PREV
    ),
    logs AS (
      SELECT
        ts,
        host,
        count(log) RANGE '5s' AS num_errors
      FROM
        app_logs
      WHERE
        log_level = 'ERROR'
      ALIGN '5s'
    )
  -- Analyze and correlate metrics and logs
  SELECT
    metrics.ts,
    p95_latency,
    coalesce(num_errors, 0) AS num_errors,
    metrics.host
  FROM
    metrics
  LEFT JOIN logs ON metrics.host = logs.host
    AND metrics.ts = logs.ts
  ORDER BY
    metrics.ts;

  +---------------------+-------------+------------+-------+
  | ts                  | p95_latency | num_errors | host  |
  +---------------------+-------------+------------+-------+
  | 2024-07-11 20:00:05 |         114 |          0 | host2 |
  | 2024-07-11 20:00:05 |       104.5 |          0 | host1 |
  | 2024-07-11 20:00:10 |        4200 |         10 | host1 |
  | 2024-07-11 20:00:10 |         111 |          0 | host2 |
  | 2024-07-11 20:00:15 |         115 |          0 | host2 |
  | 2024-07-11 20:00:15 |        3500 |          4 | host1 |
  | 2024-07-11 20:00:20 |         110 |          0 | host2 |
  | 2024-07-11 20:00:20 |        2500 |          0 | host1 |
  +---------------------+-------------+------------+-------+
  8 rows in set (0.02 sec)

GreptimeDB Dashboard

GreptimeDB offers a dashboard for data exploration and management.

Explore data

Once GreptimeDB is started as mentioned in the installation section, you can access the dashboard through the HTTP endpoint http://localhost:4000/dashboard.

To add a new query, click the + button, write your SQL command in the command text box, and then click Run All. The following SQL retrieves all the data from the grpc_latencies table.

  SELECT * FROM grpc_latencies;

Then click on the Chart button in the result panel to visualize the data.

[Image: select gRPC latencies]
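
If you prefer scripting over the dashboard, the same HTTP port also serves an SQL API. A minimal sketch with curl, assuming the default endpoint, the public database, and no authentication:

  curl -X POST \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d 'sql=SELECT * FROM grpc_latencies LIMIT 5' \
    'http://localhost:4000/v1/sql?db=public'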

Ingest data via InfluxDB Line Protocol

Besides SQL, GreptimeDB supports multiple other protocols; one of the most popular is the InfluxDB Line Protocol. By clicking the Ingest icon in the dashboard, you can upload data in InfluxDB Line Protocol format.

For example, paste the following data into the input box:

  grpc_metrics,host=host1,method_name=GetUser latency=100,code=0 1720728021000000000
  grpc_metrics,host=host2,method_name=GetUser latency=110,code=1 1720728021000000000

Then click the Write button to ingest the data into the grpc_metrics table. The table will be created automatically if it does not exist.
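
The same ingestion also works without the dashboard through the InfluxDB-compatible HTTP endpoint; a sketch assuming the default endpoint and no authentication:

  curl -X POST 'http://localhost:4000/v1/influxdb/write?db=public' \
    -d 'grpc_metrics,host=host1,method_name=GetUser latency=100,code=0 1720728021000000000'

Afterwards, SELECT * FROM grpc_metrics; should return the ingested row.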

Next steps

You have now experienced the core features of GreptimeDB. To further explore and utilize GreptimeDB: