CREATE AGGREGATE
Defines a new aggregate function.
Synopsis
CREATE [ORDERED] AGGREGATE <name> (<input_data_type> [ , ... ])
( SFUNC = <sfunc>,
STYPE = <state_data_type>
[, PREFUNC = <prefunc>]
[, FINALFUNC = <ffunc>]
[, INITCOND = <initial_condition>]
[, SORTOP = <sort_operator>] )
Description
CREATE AGGREGATE defines a new aggregate function. Some basic and commonly used aggregate functions such as count, min, max, sum, and avg are already provided in HAWQ. If one defines new types or needs an aggregate function not already provided, then CREATE AGGREGATE can be used to provide the desired features.
An aggregate function is identified by its name and input data types. Two aggregate functions in the same schema can have the same name if they operate on different input types. The name and input data types of an aggregate function must also be distinct from the name and input data types of every ordinary function in the same schema.
An aggregate function is made from one, two, or three ordinary functions (all of which must be IMMUTABLE functions):
- A state transition function
- An optional preliminary segment-level calculation function
- An optional final calculation function
These functions are used as follows:
sfunc( internal-state, next-data-values ) ---> next-internal-state
prefunc( internal-state, internal-state ) ---> next-internal-state
ffunc( internal-state ) ---> aggregate-value
You can specify PREFUNC as a method for optimizing aggregate execution. By specifying PREFUNC, the aggregate can be executed in parallel on segments first and then on the master. When a two-level execution is performed, SFUNC is executed on the segments to generate partial aggregate results, and PREFUNC is executed on the master to aggregate the partial results from segments. If single-level aggregation is performed, all the rows are sent to the master and SFUNC is applied to the rows.
Single-level aggregation and two-level aggregation are equivalent execution strategies. Either type of aggregation can be implemented in a query plan. When you implement the functions PREFUNC and SFUNC, you must ensure that the invocation of SFUNC on the segment instances followed by PREFUNC on the master produces the same result as single-level aggregation that sends all the rows to the master and then applies only SFUNC to the rows.
HAWQ creates a temporary variable of data type STYPE to hold the current internal state of the aggregate. At each input row, the aggregate argument values are calculated and the state transition function is invoked with the current state value and the new argument values to calculate a new internal state value. After all the rows have been processed, the final function is invoked once to calculate the aggregate return value. If there is no final function, then the ending state value is returned as-is.
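To illustrate the two-level strategy, here is a minimal sketch (the sumsq name and its helper functions are hypothetical, not built into HAWQ) of a sum-of-squares aggregate whose partial states can be merged simply by addition:
-- SFUNC: add the square of each input value to the running state.
-- INITCOND = 0 so the transition function also runs for the first row.
CREATE FUNCTION sumsq_accum(numeric, numeric) RETURNS numeric
    AS 'select $1 + $2 * $2'
    LANGUAGE SQL IMMUTABLE STRICT;
-- PREFUNC: merge two partial states produced on different segments.
CREATE FUNCTION sumsq_merge(numeric, numeric) RETURNS numeric
    AS 'select $1 + $2'
    LANGUAGE SQL IMMUTABLE STRICT;
CREATE AGGREGATE sumsq(numeric) (
    SFUNC = sumsq_accum,
    STYPE = numeric,
    PREFUNC = sumsq_merge,
    INITCOND = 0 );
Each segment accumulates a partial sum of squares with sumsq_accum, and the master adds the partial states together with sumsq_merge, which produces the same result as sending every row to the master and applying only sumsq_accum.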
An aggregate function can provide an optional initial condition, an initial value for the internal state value. This is specified and stored in the database as a value of type text, but it must be a valid external representation of a constant of the state value data type. If it is not supplied, then the state value starts out NULL.
If the state transition function is declared STRICT, then it cannot be called with NULL inputs. With such a transition function, aggregate execution behaves as follows. Rows with any null input values are ignored (the function is not called and the previous state value is retained). If the initial state value is NULL, then at the first row with all non-null input values, the first argument value replaces the state value, and the transition function is invoked at subsequent rows with all non-null input values. This is useful for implementing aggregates like max. Note that this behavior is only available when state_data_type is the same as the first input_data_type.
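For example, a MAX-like aggregate can rely on this behavior. In the sketch below (my_max is a hypothetical name, not a built-in), the transition function is STRICT, no INITCOND is given, and the state type equals the input type, so the first non-null input value simply becomes the initial state:
-- STRICT: never called with NULL inputs; rows with NULL are ignored.
-- No INITCOND, so the first non-null input value becomes the state.
CREATE FUNCTION my_max_accum(int, int) RETURNS int
    AS 'select greatest($1, $2)'
    LANGUAGE SQL IMMUTABLE STRICT;
CREATE AGGREGATE my_max(int) (
    SFUNC = my_max_accum,
    STYPE = int );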
If the state transition function is not declared STRICT, then it will be called unconditionally at each input row, and must deal with NULL inputs and NULL transition values for itself. This allows the aggregate author to have full control over the aggregate's handling of NULL values.
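For instance, a hypothetical count_nulls aggregate (a sketch, not provided by HAWQ) must see the NULL rows, so its transition function is deliberately not STRICT and checks the input itself:
-- Not STRICT: called for every row, including rows where the input is NULL.
-- INITCOND = '0' keeps the state non-null, so the function never sees a NULL state.
CREATE FUNCTION count_nulls_accum(bigint, text) RETURNS bigint
    AS 'select case when $2 is null then $1 + 1 else $1 end'
    LANGUAGE SQL IMMUTABLE;
CREATE AGGREGATE count_nulls(text) (
    SFUNC = count_nulls_accum,
    STYPE = bigint,
    INITCOND = '0' );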
If the final function is declared STRICT, then it will not be called when the ending state value is NULL; instead a NULL result will be returned automatically. (This is the normal behavior of STRICT functions.) In any case the final function has the option of returning a NULL value. For example, the final function for avg returns NULL when it sees there were zero input rows.
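The sketch below (hypothetical names; a simplified stand-in for the built-in avg) keeps a {sum, count} array as its state, and its final function returns NULL when the count is zero:
-- STRICT SFUNC: NULL inputs are skipped and do not affect sum or count.
CREATE FUNCTION navg_accum(numeric[], numeric) RETURNS numeric[]
    AS 'select array[$1[1] + $2, $1[2] + 1]'
    LANGUAGE SQL IMMUTABLE STRICT;
-- FINALFUNC: divides sum by count, or returns NULL if no rows were counted.
CREATE FUNCTION navg_final(numeric[]) RETURNS numeric
    AS 'select case when $1[2] = 0 then null else $1[1] / $1[2] end'
    LANGUAGE SQL IMMUTABLE;
CREATE AGGREGATE navg(numeric) (
    SFUNC = navg_accum,
    STYPE = numeric[],
    FINALFUNC = navg_final,
    INITCOND = '{0,0}' );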
Single argument aggregate functions, such as min or max, can sometimes be optimized by looking into an index instead of scanning every input row. If this aggregate can be so optimized, indicate it by specifying a sort operator. The basic requirement is that the aggregate must yield the first element in the sort ordering induced by the operator; in other words:
SELECT agg(col) FROM tab;
must be equivalent to:
SELECT col FROM tab ORDER BY col USING sortop LIMIT 1;
Further assumptions are that the aggregate function ignores NULL inputs, and that it delivers a NULL result if and only if there were no non-null inputs. Ordinarily, a data type's < operator is the proper sort operator for MIN, and > is the proper sort operator for MAX. Note that the optimization will never actually take effect unless the specified operator is the "less than" or "greater than" strategy member of a B-tree index operator class.
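For instance, a hypothetical MIN-like aggregate (a sketch, not a built-in) could declare the type's < operator as its sort operator:
CREATE FUNCTION my_min_accum(int, int) RETURNS int
    AS 'select least($1, $2)'
    LANGUAGE SQL IMMUTABLE STRICT;
-- SORTOP tells the planner that my_min(col) is equivalent to
-- ORDER BY col USING < LIMIT 1, so a B-tree index on col may be used.
CREATE AGGREGATE my_min(int) (
    SFUNC = my_min_accum,
    STYPE = int,
    SORTOP = < );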
Ordered Aggregates
If the optional qualification ORDERED appears, the created aggregate function is an ordered aggregate. In this case, the preliminary aggregation function, prefunc, cannot be specified.
An ordered aggregate is called with the following syntax.
<name> ( <arg> [ , ... ] [ORDER BY <sortspec> [ , ...]] )
If the optional ORDER BY is omitted, a system-defined ordering is used. The transition function sfunc of an ordered aggregate function is called on its input values in the specified order and on a single segment. The aggordered column in the pg_aggregate table indicates that the aggregate function is defined as an ordered aggregate.
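As a sketch (the aggregate name is hypothetical; array_append is a built-in function), an ordered aggregate that collects its inputs into an array in the requested order could be defined and called as follows, assuming a table tab with a column col:
CREATE ORDERED AGGREGATE ordered_array(anyelement) (
    SFUNC = array_append,
    STYPE = anyarray,
    INITCOND = '{}' );
-- Values are accumulated in the order given by the ORDER BY clause.
SELECT ordered_array(col ORDER BY col DESC) FROM tab;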
Parameters
<name>
The name (optionally schema-qualified) of the aggregate function to create.
<input_data_type>
An input data type on which this aggregate function operates. To create a zero-argument aggregate function, write * in place of the list of input data types. An example of such an aggregate is count(*).
<sfunc>
The name of the state transition function to be called for each input row. For an N-argument aggregate function, the sfunc must take N+1 arguments, the first being of type state_data_type and the rest matching the declared input data types of the aggregate. The function must return a value of type state_data_type. This function takes the current state value and the current input data values and returns the next state value.
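For example, a hypothetical two-argument weighted-sum aggregate (a sketch, not a built-in) needs a three-argument transition function: the state first, followed by the two declared input types:
-- N = 2 inputs (value, weight), so the sfunc takes N + 1 = 3 arguments.
CREATE FUNCTION wsum_accum(numeric, numeric, numeric) RETURNS numeric
    AS 'select $1 + $2 * $3'
    LANGUAGE SQL IMMUTABLE STRICT;
CREATE AGGREGATE wsum(numeric, numeric) (
    SFUNC = wsum_accum,
    STYPE = numeric,
    INITCOND = 0 );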
<state_data_type>
The data type for the aggregate state value.
<prefunc>
The name of a preliminary aggregation function. This is a function of two arguments, both of type state_data_type, and it must return a value of type state_data_type. A preliminary function takes two transition state values and returns a new transition state value representing the combined aggregation.
Note that this function is also called in hash aggregate mode within a segment. Therefore, if you define this aggregate function without a preliminary function, hash aggregate is never chosen. Since hash aggregate is efficient, consider defining a preliminary function whenever possible.
PREFUNC is optional. If defined, it is executed on the master. The input to PREFUNC is the partial results from segments, not the tuples. If PREFUNC is not defined, the aggregate cannot be executed in parallel. PREFUNC and gp_enable_multiphase_agg are used as follows:
- gp_enable_multiphase_agg = off: SFUNC is executed sequentially on the master. PREFUNC, even if defined, is unused.
- gp_enable_multiphase_agg = on and PREFUNC is defined: SFUNC is executed in parallel on the segments. PREFUNC is invoked on the master to aggregate the partial results from the segments.
For example:
CREATE OR REPLACE FUNCTION my_avg_accum(bytea,bigint) returns bytea as 'int8_avg_accum' language internal strict immutable;
CREATE OR REPLACE FUNCTION my_avg_merge(bytea,bytea) returns bytea as 'int8_avg_amalg' language internal strict immutable;
CREATE OR REPLACE FUNCTION my_avg_final(bytea) returns numeric as 'int8_avg' language internal strict immutable;
CREATE AGGREGATE my_avg(bigint) (
  stype = bytea,
  sfunc = my_avg_accum,
  prefunc = my_avg_merge,
  finalfunc = my_avg_final,
  initcond = '' );
<ffunc>
The name of the final function called to compute the aggregate result after all input rows have been traversed. The function must take a single argument of type state_data_type. The return data type of the aggregate is defined as the return type of this function. If ffunc is not declared, then the ending state value is used as the aggregate result, and the return type is state_data_type.
<initial_condition>
The initial setting for the state value. This must be a string constant in the form accepted for the data type state_data_type. If not specified, the state value starts out NULL.
<sort_operator>
The associated sort operator for a MIN- or MAX-like aggregate function. This is just an operator name (possibly schema-qualified). The operator is assumed to have the same input data types as the aggregate function (which must be a single-argument aggregate function).
Notes
The ordinary functions used to define a new aggregate function must be defined first. Note that in this release of HAWQ, the sfunc, ffunc, and prefunc functions used to create the aggregate must be defined as IMMUTABLE.
Any compiled code (shared library files) for custom functions must be placed in the same location on every host in your HAWQ array (master and all segments). This location must also be in the LD_LIBRARY_PATH
so that the server can locate the files.
Examples
Create a sum of cubes aggregate:
CREATE FUNCTION scube_accum(numeric, numeric) RETURNS numeric
AS 'select $1 + $2 * $2 * $2'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
CREATE AGGREGATE scube(numeric) (
SFUNC = scube_accum,
STYPE = numeric,
INITCOND = 0 );
To test this aggregate:
CREATE TABLE x(a INT);
INSERT INTO x VALUES (1),(2),(3);
SELECT scube(a) FROM x;
Correct answer for reference:
SELECT sum(a*a*a) FROM x;
Compatibility
CREATE AGGREGATE is a HAWQ language extension. The SQL standard does not provide for user-defined aggregate functions.