MySQL

Prerequisites

Before trying out GreptimeDB, you need GreptimeDB and Grafana installed and running locally.

  • GreptimeDB is used for storing and querying data.
  • Grafana is used for visualizing data.

Here we use Docker Compose to start GreptimeDB and Grafana. To do this, create a docker-compose.yml file with the following content:

  services:
    grafana:
      image: grafana/grafana-oss:9.5.15
      container_name: grafana
      ports:
        - 127.0.0.1:3000:3000
    greptime:
      image: greptime/greptimedb:v0.8.2
      container_name: greptimedb
      ports:
        - 127.0.0.1:4000:4000
        - 127.0.0.1:4001:4001
        - 127.0.0.1:4002:4002
        - 127.0.0.1:4003:4003
      command: "standalone start --http-addr 0.0.0.0:4000 --rpc-addr 0.0.0.0:4001 --mysql-addr 0.0.0.0:4002 --postgres-addr 0.0.0.0:4003"
      volumes:
        - ./greptimedb:/tmp/greptimedb
  networks: {}

Then run the following command:

  docker-compose up

NOTE

The following steps assume that you have followed the documentation above, which uses Docker Compose to install GreptimeDB and Grafana.

Once you’ve successfully started GreptimeDB, you can verify the database status using the following command:

  curl http://127.0.0.1:4000/status

If the database is running, you will see an output like the following:

  {
    "source_time": "2024-05-30T07:59:52Z",
    "commit": "05751084e7bbfc5e646df7f51bb7c3e5cbf16d58",
    "branch": "HEAD",
    "rustc_version": "rustc 1.79.0-nightly (f9b161492 2024-04-19)",
    "hostname": "977898bbda4f",
    "version": "0.8.1"
  }
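If you script your setup, the status response is easy to check programmatically. A minimal Python sketch — the hard-coded body below mirrors the sample response above; in practice you would fetch it from http://127.0.0.1:4000/status (e.g. with urllib):

```python
import json

def greptimedb_version(status_body: str) -> str:
    """Extract the server version from a /status response body."""
    return json.loads(status_body)["version"]

# In practice, read this from http://127.0.0.1:4000/status instead.
body = '{"version": "0.8.1", "hostname": "977898bbda4f"}'
print(greptimedb_version(body))  # 0.8.1
```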

Try Out Basic SQL Operations

Connect

  mysql -h 127.0.0.1 -P 4002

Alternatively, you can use the PostgreSQL client to connect to the database:

  psql -h 127.0.0.1 -p 4003 -d public

Create table

Note: GreptimeDB offers a schemaless approach to writing data that eliminates the need to manually create tables using additional protocols. See Automatic Schema Generation.
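As an example of the schemaless path, GreptimeDB's HTTP endpoint accepts InfluxDB line-protocol writes and creates the target table on first write. A rough sketch — the monitor measurement, its tag, and its field values are hypothetical, and the actual request is left commented out since it needs a running server:

```python
from urllib import request

# InfluxDB line protocol: measurement,tag_set field_set
# "monitor" and the values below are made-up examples.
payload = "monitor,host=host1 cpu=0.5,memory=0.2"
url = "http://127.0.0.1:4000/v1/influxdb/write?db=public"

req = request.Request(url, data=payload.encode(), method="POST")
# request.urlopen(req)  # uncomment against a running GreptimeDB
print(payload)
```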

Now we create a table via MySQL. Let’s start by creating the system_metrics table which contains system resource metrics, including CPU/memory/disk usage. The data is scraped every 5 seconds.

  CREATE TABLE IF NOT EXISTS system_metrics (
      host STRING,
      idc STRING,
      cpu_util DOUBLE,
      memory_util DOUBLE,
      disk_util DOUBLE,
      ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
      PRIMARY KEY(host, idc),
      TIME INDEX(ts)
  );

Field descriptions:

  Field        Type       Description
  host         string     The hostname
  idc          string     The idc to which the host belongs
  cpu_util     double     The percent use of CPU
  memory_util  double     The percent use of memory
  disk_util    double     The percent use of disks
  ts           timestamp  The timestamp of the collected metrics
  • The table can be created automatically if you are using other protocols. See Create Table.
  • For more information about creating table SQL, please refer to CREATE.
  • For data types, please check data types.

Insert data

Using the INSERT statement is an easy way to add data to your table. The following statement allows us to insert several rows into the system_metrics table.

  INSERT INTO system_metrics
  VALUES
      ("host1", "idc_a", 11.8, 10.3, 10.3, 1667446797450),
      ("host1", "idc_a", 80.1, 70.3, 90.0, 1667446797550),
      ("host1", "idc_b", 50.0, 66.7, 40.6, 1667446797650),
      ("host1", "idc_b", 51.0, 66.5, 39.6, 1667446797750),
      ("host1", "idc_b", 52.0, 66.9, 70.6, 1667446797850),
      ("host1", "idc_b", 53.0, 63.0, 50.6, 1667446797950),
      ("host1", "idc_b", 78.0, 66.7, 20.6, 1667446798050),
      ("host1", "idc_b", 68.0, 63.9, 50.6, 1667446798150),
      ("host1", "idc_b", 90.0, 39.9, 60.6, 1667446798250);

For more information about the INSERT statement, please refer to INSERT.
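The last value in each inserted row is ts, a Unix timestamp in milliseconds. To see which wall-clock time such a value corresponds to:

```python
from datetime import datetime, timezone

ts_ms = 1667446797450  # first ts value from the INSERT above
dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2022-11-03 03:39:57
```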

Query data

To select all the data from the system_metrics table, use the SELECT statement:

  SELECT * FROM system_metrics;

The query result looks like the following:

  +-------+-------+----------+-------------+-----------+---------------------+
  | host  | idc   | cpu_util | memory_util | disk_util | ts                  |
  +-------+-------+----------+-------------+-----------+---------------------+
  | host1 | idc_a |     11.8 |        10.3 |      10.3 | 2022-11-03 03:39:57 |
  | host1 | idc_a |     80.1 |        70.3 |        90 | 2022-11-03 03:39:57 |
  | host1 | idc_b |       50 |        66.7 |      40.6 | 2022-11-03 03:39:57 |
  | host1 | idc_b |       51 |        66.5 |      39.6 | 2022-11-03 03:39:57 |
  | host1 | idc_b |       52 |        66.9 |      70.6 | 2022-11-03 03:39:57 |
  | host1 | idc_b |       53 |          63 |      50.6 | 2022-11-03 03:39:57 |
  | host1 | idc_b |       78 |        66.7 |      20.6 | 2022-11-03 03:39:58 |
  | host1 | idc_b |       68 |        63.9 |      50.6 | 2022-11-03 03:39:58 |
  | host1 | idc_b |       90 |        39.9 |      60.6 | 2022-11-03 03:39:58 |
  +-------+-------+----------+-------------+-----------+---------------------+
  9 rows in set (0.00 sec)

You can use the count() function to get the total number of rows in the table:

  SELECT count(*) FROM system_metrics;

  +-----------------+
  | COUNT(UInt8(1)) |
  +-----------------+
  |               9 |
  +-----------------+

The avg() function returns the average value of a certain field:

  SELECT avg(cpu_util) FROM system_metrics;

  +------------------------------+
  | AVG(system_metrics.cpu_util) |
  +------------------------------+
  |            59.32222222222222 |
  +------------------------------+

You can use the GROUP BY clause to group rows that have the same values into summary rows. The average memory usage grouped by idc:

  SELECT idc, avg(memory_util) FROM system_metrics GROUP BY idc;

  +-------+---------------------------------+
  | idc   | AVG(system_metrics.memory_util) |
  +-------+---------------------------------+
  | idc_a |                            40.3 |
  | idc_b |              61.942857142857136 |
  +-------+---------------------------------+
  2 rows in set (0.03 sec)

For more information about the SELECT statement, please refer to SELECT.

Collect Host Metrics

To quickly get started with MySQL, we can use Bash to collect system metrics, such as CPU and memory usage, and send them to GreptimeDB via the MySQL CLI. The source code is available on GitHub.

If you have started GreptimeDB using the Prerequisites section, you can use the following command to write data:

  curl -L \
    https://raw.githubusercontent.com/GreptimeCloudStarters/quick-start-mysql/main/quick-start.sh |\
    bash -s -- -h 127.0.0.1 -d public -s DISABLED -P 4002
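Under the hood, a collector like this samples host metrics on an interval and pipes INSERT statements to the mysql client. A simplified Python sketch of that core step — the monitor table name and column set here are illustrative, not the exact schema the script creates:

```python
import time

def build_insert(table: str, host: str, cpu: float, memory: float) -> str:
    """Render one sampled data point as a SQL INSERT statement."""
    ts_ms = int(time.time() * 1000)  # millisecond timestamp, as above
    return (
        f'INSERT INTO {table} (host, cpu, memory, ts) '
        f'VALUES ("{host}", {cpu}, {memory}, {ts_ms});'
    )

# Each statement could then be piped to: mysql -h 127.0.0.1 -P 4002 public
print(build_insert("monitor", "host1", 0.42, 0.67))
```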

Visualize data

GreptimeDB Dashboard

GreptimeDB provides a user-friendly dashboard to assist users in exploring data. Once GreptimeDB is started as mentioned in the Prerequisites section, you can access the dashboard through the HTTP endpoint http://localhost:4000/dashboard.

Write SQL into the command text box, then click Run All to retrieve all the data in the system_metrics table.

  SELECT * FROM system_metrics;

dashboard-select

Grafana

Add Data Source

You can access Grafana at http://localhost:3000. Use admin as both the username and password to log in.

GreptimeDB can be configured as a MySQL data source in Grafana. Click the Add data source button and select MySQL as the type.

add-mysql-data-source

Fill in the following information:

  • Name: GreptimeDB
  • Host: greptimedb:4002. The host greptimedb is the name of the GreptimeDB container
  • Database: public
  • Session timezone: UTC

grafana-mysql-config

Click the Save & Test button to test the connection.

For more information on using MySQL as a data source for GreptimeDB, please refer to Grafana-MySQL.

Create a Dashboard

To create a new dashboard in Grafana, click the Create your first dashboard button on the home page. Then, click Add visualization and select GreptimeDB as the data source.

In the Query section, ensure that you select GreptimeDB as the data source, choose Time series as the format, switch to the Code tab, and write the following SQL statement. Note that we are using ts as the time column.

  SELECT ts AS "time", idle_cpu, sys_cpu FROM public.monitor

grafana-mysql-query-code

Click Run query to view the metric data.

grafana-mysql-run-query

Next Steps

Congratulations on quickly experiencing the basic features of GreptimeDB! Now, you can explore more of GreptimeDB’s features by visiting the User Guide documentation.