Function

Coprocessor annotation

The @coprocessor annotation specifies a Python function as a coprocessor in GreptimeDB and sets some attributes for it.

The engine allows one and only one function annotated with @coprocessor; we can't have more than one coprocessor in one script.

| Parameter | Description | Example |
| --- | --- | --- |
| `sql` | Optional. The SQL statement that the coprocessor function runs to query data from the database; the results are assigned to the input arguments declared in `args`. | `@copr(sql="select * from cpu", ..)` |
| `args` | Optional. The argument names that the coprocessor function takes as input, which are the columns in the query results of `sql`. | `@copr(args=["cpu", "mem"], ..)` |
| `returns` | The column names that the coprocessor function returns. The Coprocessor Engine uses them to generate the output schema. | `@copr(returns=["add", "sub", "mul", "div"], ..)` |
| `backend` | Optional. The engine the coprocessor function runs on: `rspy` (RustPython backend) or `pyo3` (CPython backend). Defaults to `rspy`. | `@copr(backend="rspy", ..)` |

Both sql and args are optional; they must either be provided together or not at all. They are usually used in post-query processing, described below.

returns is required for every coprocessor because the output schema is necessary.

backend is optional. RustPython can't support libraries that rely on CPython's C API, so if you want to use third-party Python libraries built on C extensions, such as numpy or pandas, set backend to pyo3.
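
For example, here is a hedged sketch of a coprocessor that selects the pyo3 backend in order to call numpy. It assumes GreptimeDB was built with the pyo3 backend enabled and numpy is installed; it is not one of the original examples:

```python
import numpy as np

# Run on the CPython backend so the C-extension library numpy is importable.
@coprocessor(args=["number"], sql="select number from numbers limit 10",
             returns=["value"], backend="pyo3")
def np_double(v) -> vector[i64]:
    # The input vector is iterable (see the examples below), so it can be
    # converted to a numpy array explicitly.
    arr = np.array(list(v), dtype="int64")
    # Return a plain Python list; the engine turns it into the output vector.
    return (arr * 2).tolist()
```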

Input of the coprocessor function

```python
@coprocessor(args=["number"], sql="select number from numbers limit 20", returns=["value"])
def normalize(v) -> vector[i64]:
    # normalize0 is defined in the Post-Query Processing section below
    return [normalize0(x) for x in v]
```

The argument v is the number column (specified by the args attribute) in the query results returned by executing the sql.

Of course, you can have several arguments:

```python
@coprocessor(args=["number", "number", "number"],
             sql="select number from numbers limit 5",
             returns=["value"])
def normalize(n1, n2, n3) -> vector[i64]:
    # returns [0, 1, 8, 27, 64]
    return n1 * n2 * n3
```

Besides args, we can also pass user-defined parameters into the coprocessor:

```python
@coprocessor(returns=['value'])
def add(**params) -> vector[i64]:
    a = params['a']
    b = params['b']
    return int(a) + int(b)
```

Then pass a and b via the HTTP API:

```sh
curl -XPOST \
  "http://localhost:4000/v1/run-script?name=add&db=public&a=42&b=99"
```

```json
{
  "code": 0,
  "output": [
    {
      "records": {
        "schema": {
          "column_schemas": [
            {
              "name": "value",
              "data_type": "Int64"
            }
          ]
        },
        "rows": [
          [
            141
          ]
        ]
      }
    }
  ],
  "execution_time_ms": 0
}
```

We pass a=42&b=99 as query params to the HTTP API, and it returns the result 141.

The user-defined parameters must be received via **kwargs in the coprocessor (here **params), and all of their values are strings. We can pass anything we want, such as a SQL string to run in the coprocessor.
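
Since every user param arrives as a string, convert explicitly and provide defaults for optional ones. A small hypothetical sketch (the factor and offset names are illustrative, not part of the API):

```python
# Hypothetical sketch: user params are always strings, so cast them
# explicitly; dict.get supplies defaults when the caller omits a param.
@coprocessor(returns=["value"])
def scale(**params) -> vector[f64]:
    factor = float(params.get("factor", "1.0"))  # e.g. "2.5" -> 2.5
    offset = float(params.get("offset", "0"))
    # A literal float return value is broadcast into the output vector.
    return factor * 10.0 + offset
```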

Output of the coprocessor function

As we have seen in the previous examples, the output must be vectors.

We can return multiple vectors:

```python
from greptime import vector

@coprocessor(returns=["a", "b", "c"])
def return_vectors() -> (vector[i64], vector[str], vector[f64]):
    a = vector([1, 2, 3])
    b = vector(["a", "b", "c"])
    c = vector([42.0, 43.0, 44.0])
    return a, b, c
```

The return type of the function return_vectors is (vector[i64], vector[str], vector[f64]).

But we must ensure that all vectors returned by the function have the same length, because when they are converted into rows, each row must have all column values present.
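
For instance, a hypothetical coprocessor like the sketch below would fail when the engine assembles rows, because the two vectors have lengths 3 and 2:

```python
from greptime import vector

# Anti-example (hypothetical): mismatched lengths (3 vs 2) cannot be
# assembled into rows, so expect an error instead of a result set.
@coprocessor(returns=["a", "b"])
def mismatched() -> (vector[i64], vector[i64]):
    return vector([1, 2, 3]), vector([4, 5])
```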

Of course, we can return literal values, and they will be turned into vectors:

```python
from greptime import vector

@coprocessor(returns=["a", "b", "c"])
def return_vectors() -> (vector[i64], vector[str], vector[i64]):
    a = 1
    b = "Hello, GreptimeDB!"
    c = 42
    return a, b, c
```

Query Data

We provide two ways to easily query data from GreptimeDB in a Python coprocessor:

  • SQL: run a SQL string and return the query result.
  • DataFrame API: a builtin module that describes and executes the query similar to a Pandas DataFrame or Spark DataFrame (see the sketch after this list).
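
The DataFrame flavor is sketched below. This is a hedged example: the PyDataFrame.from_sql and col names are assumed to be what the builtin dataframe module exposes, so verify them against your GreptimeDB version:

```python
from greptime import PyDataFrame, col

@copr(returns=["value"])
def query_with_dataframe() -> vector[f64]:
    # Describe the query as a DataFrame, filter it, then execute with collect().
    df = PyDataFrame.from_sql("select number from numbers")
    # Assumption in this sketch: collect() yields record batches, and
    # [0][0] picks the first column of the first batch.
    return df.filter(col("number") <= 5).collect()[0][0]
```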

SQL

Use the greptime module's query function to retrieve a query engine, then call its sql method to execute a SQL string. For example:

```python
@copr(returns=["value"])
def query_numbers() -> vector[f64]:
    from greptime import query
    return query().sql("select number from numbers limit 10")[0]
```

Call it via a SQL client:

```sql
SQL > select query_numbers();
+-----------------+
| query_numbers() |
+-----------------+
|               0 |
|               1 |
|               2 |
|               3 |
|               4 |
|               5 |
|               6 |
|               7 |
|               8 |
|               9 |
+-----------------+
10 rows in set (1.78 sec)
```

The sql function returns a list of columns, and each column is a vector of values.

In the above example, sql("select number from numbers limit 10") returns a list of vectors, and [0] retrieves the first column vector, which is the number column of the SELECT statement.
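
When the query selects several columns, each index picks one column vector in SELECT order. A hypothetical sketch:

```python
# Hypothetical sketch: a two-column query; each index of the result list
# is one column vector, in the order of the SELECT list.
@copr(returns=["n", "double_n"])
def two_columns() -> (vector[i64], vector[i64]):
    from greptime import query
    res = query().sql("select number, number * 2 from numbers limit 5")
    return res[0], res[1]
```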

Post-Query Processing

The coprocessor is helpful for processing a query result before it is returned to the user. For example, we want to normalize the value:

  • Return zero if the value is missing (null or NaN),
  • If it is greater than 5, return 5,
  • If it is less than zero, return zero.

Then we can create a normalize.py:

```python
import math

def normalize0(x):
    if x is None or math.isnan(x):
        return 0
    elif x > 5:
        return 5
    elif x < 0:
        return 0
    else:
        return x

@coprocessor(args=["number"], sql="select number from numbers limit 10", returns=["value"])
def normalize(v) -> vector[i64]:
    return [normalize0(x) for x in v]
```

The normalize0 function behaves as described above, and the normalize function is the coprocessor entry point:

  • Execute the SQL select number from numbers limit 10,
  • Extract the column number from the query result and use it as the argument to the normalize function, then invoke it.
  • In the function, use a list comprehension to process the number vector, applying normalize0 to every element.
  • Return the result, named as the value column.

The -> vector[i64] part specifies the return column types for generating the output schema.

This example also shows how to import the stdlib and define other functions (normalize0 here) to be invoked. The normalize coprocessor is called in a streaming fashion: the query result may contain several batches, and the engine invokes the coprocessor once for each batch. Remember that the columns extracted from the query result are all vectors; we will cover vectors in the next chapter.

Submitting and running this script generates the output:

```json
{
  "output": [
    {
      "records": {
        "schema": {
          "column_schemas": [
            {
              "name": "value",
              "data_type": "Int64"
            }
          ]
        },
        "rows": [
          [0],
          [1],
          [2],
          [3],
          [4],
          [5],
          [5],
          [5],
          [5],
          [5]
        ]
      }
    }
  ]
}
```

Insert data

Of course, you can insert data via the sql API too:

```python
from greptime import query

@copr(returns=["affected_rows"])
def insert() -> vector[i32]:
    return query().sql("insert into monitor(host, ts, cpu, memory) values('localhost', 1667446807000, 15.3, 66.6)")
```

```json
{
  "code": 0,
  "output": [
    {
      "records": {
        "schema": {
          "column_schemas": [
            {
              "name": "rows",
              "data_type": "Int32"
            }
          ]
        },
        "rows": [
          [
            1
          ]
        ]
      }
    }
  ],
  "execution_time_ms": 4
}
```

HTTP API

/scripts submits a Python script to GreptimeDB.

Save a Python script, such as test.py:

```python
@coprocessor(args=["number"],
             returns=["number"],
             sql="select number from numbers limit 5")
def square(number) -> vector[i64]:
    return number * 2
```

Submit it to the database:

```shell
curl --data-binary @test.py -XPOST \
  "http://localhost:4000/v1/scripts?db=default&name=square"
```

```json
{"code": 0}
```

The Python script is inserted into the scripts table and compiled automatically:

```shell
curl -G http://localhost:4000/v1/sql --data-urlencode "sql=select * from scripts"
```

```json
{
  "code": 0,
  "output": [{
    "records": {
      "schema": {
        "column_schemas": [
          {
            "name": "schema",
            "data_type": "String"
          },
          {
            "name": "name",
            "data_type": "String"
          },
          {
            "name": "script",
            "data_type": "String"
          },
          {
            "name": "engine",
            "data_type": "String"
          },
          {
            "name": "timestamp",
            "data_type": "TimestampMillisecond"
          },
          {
            "name": "gmt_created",
            "data_type": "TimestampMillisecond"
          },
          {
            "name": "gmt_modified",
            "data_type": "TimestampMillisecond"
          }
        ]
      },
      "rows": [
        [
          "default",
          "square",
          "@coprocessor(args = [\"number\"],\n returns = [ \"number\" ],\n sql = \"select number from numbers\")\ndef square(number):\n return number * 2\n",
          "python",
          0,
          1676032587204,
          1676032587204
        ]
      ]
    }
  }],
  "execution_time_ms": 4
}
```

You can also execute the script via /run-script:

```shell
curl -XPOST -G "http://localhost:4000/v1/run-script?db=default&name=square"
```

```json
{
  "code": 0,
  "output": [{
    "records": {
      "schema": {
        "column_schemas": [
          {
            "name": "number",
            "data_type": "Float64"
          }
        ]
      },
      "rows": [
        [
          0
        ],
        [
          2
        ],
        [
          4
        ],
        [
          6
        ],
        [
          8
        ]
      ]
    }
  }],
  "execution_time_ms": 8
}
```

Parameters and Result for Python scripts

/scripts accepts the query parameters db, which specifies the database, and name, which names the script. /scripts processes the POST request body as the script file content.

/run-script runs the compiled script identified by db and name, then returns the output, which is the same as the query result of the /sql API.

/run-script also receives other query parameters as user params passed into the coprocessor; refer to Input and Output.
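
For example, reusing the add coprocessor shown earlier, the user params ride along as extra query parameters:

```shell
# a and b are forwarded to the coprocessor's **params as strings
curl -XPOST \
  "http://localhost:4000/v1/run-script?db=public&name=add&a=42&b=99"
```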