Generate SQL according to the data source

Generates Spark SQL and JDBC SQL statements (DDL, DML, and DQL) from the data source information.

Interface address: /api/rest_j/v1/metadataQuery/getSparkSql

Request method: GET

Request data type: application/x-www-form-urlencoded

Request parameters:

| Parameter name | Description      | Required | Data type |
| -------------- | ---------------- | -------- | --------- |
| dataSourceName | data source name | yes      | String    |
| system         | system name      | yes      | String    |
| database       | database name    | yes      | String    |
| table          | table name       | yes      | String    |
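A request to this interface is a plain GET with the four query parameters above. The sketch below only assembles the request URL; the host (`http://localhost:9001`) and parameter values are placeholder assumptions, not part of the interface definition:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SparkSqlRequestDemo {
    // Builds the GET URL for /api/rest_j/v1/metadataQuery/getSparkSql.
    static String buildUrl(String host, String dataSourceName, String system,
                           String database, String table) {
        String query = String.join("&",
                "dataSourceName=" + URLEncoder.encode(dataSourceName, StandardCharsets.UTF_8),
                "system=" + URLEncoder.encode(system, StandardCharsets.UTF_8),
                "database=" + URLEncoder.encode(database, StandardCharsets.UTF_8),
                "table=" + URLEncoder.encode(table, StandardCharsets.UTF_8));
        return host + "/api/rest_j/v1/metadataQuery/getSparkSql?" + query;
    }

    public static void main(String[] args) {
        // Placeholder host and values for illustration only.
        System.out.println(buildUrl("http://localhost:9001", "mysql_test", "Linkis", "test", "test"));
    }
}
```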

Example response:

```json
{
    "method": null,
    "status": 0,
    "message": "OK",
    "data": {
        "sparkSql": {
            "ddl": "CREATE TEMPORARY TABLE test USING org.apache.spark.sql.jdbc OPTIONS ( url 'jdbc:mysql://localhost:3306/test', dbtable 'test', user 'root', password 'password' )",
            "dml": "INSERT INTO test SELECT * FROM ${resultTable}",
            "dql": "SELECT id,name FROM test"
        }
    }
}
```

Currently supports jdbc, kafka, elasticsearch, and mongo data sources. You can register a Spark table with the returned Spark SQL DDL and then query it.

Interface address: /api/rest_j/v1/metadataQuery/getJdbcSql

Request method: GET

Request data type: application/x-www-form-urlencoded

Request parameters:

| Parameter name | Description      | Required | Data type |
| -------------- | ---------------- | -------- | --------- |
| dataSourceName | data source name | yes      | String    |
| system         | system name      | yes      | String    |
| database       | database name    | yes      | String    |
| table          | table name       | yes      | String    |

Example response:

```json
{
    "method": null,
    "status": 0,
    "message": "OK",
    "data": {
        "jdbcSql": {
            "ddl": "CREATE TABLE `test` (\n\t `id` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT 'The column name is id',\n\t `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT 'The column name is name',\n\t PRIMARY KEY (`id`)\n\t) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci",
            "dml": "INSERT INTO test SELECT * FROM ${resultTable}",
            "dql": "SELECT id,name FROM test"
        }
    }
}
```

Currently supports JDBC data sources such as MySQL, Oracle, and PostgreSQL. The returned JDBC DDL can be used for front-end display.

Note: you need to register the data source first.

Define a DDL SQL template whose placeholders are replaced with the data source information:

```java
public static final String JDBC_DDL_SQL_TEMPLATE =
        "CREATE TEMPORARY TABLE %s"
            + " USING org.apache.spark.sql.jdbc"
            + " OPTIONS ("
            + " url '%s',"
            + " dbtable '%s',"
            + " user '%s',"
            + " password '%s'"
            + ")";
```
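Filled in with concrete data-source values, the template produces the Spark DDL shown in the example response. A minimal self-contained sketch (the connection values are placeholders, and spaces are kept between the concatenated parts so the generated SQL stays valid):

```java
public class JdbcDdlTemplateDemo {
    // Template for a Spark temporary table backed by a JDBC source.
    public static final String JDBC_DDL_SQL_TEMPLATE =
            "CREATE TEMPORARY TABLE %s"
                + " USING org.apache.spark.sql.jdbc"
                + " OPTIONS ("
                + " url '%s',"
                + " dbtable '%s',"
                + " user '%s',"
                + " password '%s'"
                + ")";

    // Fill in the placeholders with the data source information.
    static String buildDdl(String table, String url, String dbtable, String user, String password) {
        return String.format(JDBC_DDL_SQL_TEMPLATE, table, url, dbtable, user, password);
    }

    public static void main(String[] args) {
        // Placeholder connection values; real ones come from the registered data source.
        System.out.println(buildDdl("test", "jdbc:mysql://localhost:3306/test", "test", "root", "password"));
    }
}
```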

Splice the DDL together from the table's schema information:

```java
public String generateJdbcDdlSql(String database, String table) {
    StringBuilder ddl = new StringBuilder();
    ddl.append("CREATE TABLE ").append(String.format("%s.%s", database, table)).append(" (");
    try {
        List<MetaColumnInfo> columns = getColumns(database, table);
        if (CollectionUtils.isNotEmpty(columns)) {
            // Column definitions: name, type, optional length, nullability
            for (MetaColumnInfo column : columns) {
                ddl.append("\n\t").append(column.getName()).append(" ").append(column.getType());
                if (column.getLength() > 0) {
                    ddl.append("(").append(column.getLength()).append(")");
                }
                if (!column.isNullable()) {
                    ddl.append(" NOT NULL");
                }
                ddl.append(",");
            }
            // Primary key clause, built from all primary-key columns
            String primaryKeys =
                columns.stream()
                    .filter(MetaColumnInfo::isPrimaryKey)
                    .map(MetaColumnInfo::getName)
                    .collect(Collectors.joining(", "));
            if (StringUtils.isNotBlank(primaryKeys)) {
                ddl.append(String.format("\n\tPRIMARY KEY (%s),", primaryKeys));
            }
            // Drop the trailing comma
            ddl.deleteCharAt(ddl.length() - 1);
        }
    } catch (Exception e) {
        LOG.warn("Failed to get the column list of {}.{}", database, table, e);
    }
    ddl.append("\n)");
    return ddl.toString();
}
```
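The splicing logic can be exercised in isolation. The sketch below substitutes a stubbed `Column` record for the real `MetaColumnInfo` lookup (the column names and types are made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DdlSpliceDemo {
    // Minimal stand-in for MetaColumnInfo, just enough for the demo.
    record Column(String name, String type, int length, boolean nullable, boolean primaryKey) {}

    static String generateJdbcDdlSql(String database, String table, List<Column> columns) {
        StringBuilder ddl = new StringBuilder();
        ddl.append("CREATE TABLE ").append(String.format("%s.%s", database, table)).append(" (");
        for (Column column : columns) {
            ddl.append("\n\t").append(column.name()).append(" ").append(column.type());
            if (column.length() > 0) {
                ddl.append("(").append(column.length()).append(")");
            }
            if (!column.nullable()) {
                ddl.append(" NOT NULL");
            }
            ddl.append(",");
        }
        // Primary key clause from all primary-key columns
        String primaryKeys = columns.stream()
                .filter(Column::primaryKey)
                .map(Column::name)
                .collect(Collectors.joining(", "));
        if (!primaryKeys.isBlank()) {
            ddl.append(String.format("\n\tPRIMARY KEY (%s),", primaryKeys));
        }
        ddl.deleteCharAt(ddl.length() - 1); // drop the trailing comma
        ddl.append("\n)");
        return ddl.toString();
    }

    public static void main(String[] args) {
        List<Column> columns = List.of(
                new Column("id", "varchar", 64, false, true),
                new Column("name", "varchar", 255, false, false));
        System.out.println(generateJdbcDdlSql("test", "test", columns));
    }
}
```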

Some data sources support retrieving the DDL directly:

mysql

```sql
SHOW CREATE TABLE `table`
```

oracle

```sql
SELECT DBMS_METADATA.GET_DDL('TABLE', 'table', 'database') AS DDL FROM DUAL
```