6.14. Aggregate Functions
Aggregate functions operate on a set of values to compute a single result.
Except for count(), count_if(), max_by(), min_by() and approx_distinct(), all of these aggregate functions ignore null values and return null for no input rows or when all values are null. For example, sum() returns null rather than zero and avg() does not include null values in the count. The coalesce function can be used to convert null into zero.
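For instance, a minimal sketch of this null handling, assuming a hypothetical table orders with a numeric price column:

```sql
-- sum() over zero rows (or all-null input) yields NULL, not 0;
-- coalesce() converts that NULL into zero.
SELECT coalesce(sum(price), 0) AS total_price
FROM orders
WHERE false; -- matches no rows, so sum(price) is NULL and the result is 0
```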
Some aggregate functions such as array_agg() produce different results depending on the order of input values. This ordering can be specified by writing an ORDER BY clause within the aggregate function:
```sql
array_agg(x ORDER BY y DESC)
array_agg(x ORDER BY x, y, z)
```
General Aggregate Functions
arbitrary(x) → [same as input]
Returns an arbitrary non-null value of x, if one exists.

array_agg(x) → array<[same as input]>
Returns an array created from the input x elements.

avg(x) → double
Returns the average (arithmetic mean) of all input values.

avg(time interval type) → time interval type
Returns the average interval length of all input values.

bool_and(boolean) → boolean
Returns TRUE if every input value is TRUE, otherwise FALSE.

bool_or(boolean) → boolean
Returns TRUE if any input value is TRUE, otherwise FALSE.

checksum(x) → varbinary
Returns an order-insensitive checksum of the given values.
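For instance, a minimal sketch of the boolean aggregates over inline data:

```sql
SELECT bool_and(b) AS all_true, -- false, since one value is false
       bool_or(b)  AS any_true  -- true, since at least one value is true
FROM (VALUES true, false, true) AS t(b);
```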
count(*) → bigint
Returns the number of input rows.

count(x) → bigint
Returns the number of non-null input values.

count_if(x) → bigint
Returns the number of TRUE input values. This function is equivalent to count(CASE WHEN x THEN 1 END).

every(boolean) → boolean
This is an alias for bool_and().

geometric_mean(x) → double
Returns the geometric mean of all input values.
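As an illustration, a minimal sketch of count() versus count_if() over inline data:

```sql
SELECT count(*) AS all_rows,          -- 4
       count_if(x > 2) AS rows_over_2 -- 2 (only 3 and 4 satisfy the predicate)
FROM (VALUES 1, 2, 3, 4) AS t(x);
```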
max_by(x, y) → [same as x]
Returns the value of x associated with the maximum value of y over all input values.

max_by(x, y, n) → array<[same as x]>
Returns n values of x associated with the n largest of all input values of y in descending order of y.

min_by(x, y) → [same as x]
Returns the value of x associated with the minimum value of y over all input values.

min_by(x, y, n) → array<[same as x]>
Returns n values of x associated with the n smallest of all input values of y in ascending order of y.
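For example, a minimal sketch of max_by() and min_by() over inline data:

```sql
SELECT max_by(region, revenue) AS top_region,   -- 'west' (highest revenue)
       min_by(region, revenue) AS bottom_region -- 'east' (lowest revenue)
FROM (VALUES ('east', 10), ('west', 30), ('north', 20)) AS t(region, revenue);
```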
max(x) → [same as input]
Returns the maximum value of all input values.

max(x, n) → array<[same as x]>
Returns n largest values of all input values of x.

min(x) → [same as input]
Returns the minimum value of all input values.

min(x, n) → array<[same as x]>
Returns n smallest values of all input values of x.
reduce_agg(inputValue T, initialState S, inputFunction(S, T, S), combineFunction(S, S, S)) → S
Reduces all input values into a single value. inputFunction will be invoked for each input value. In addition to taking the input value, inputFunction takes the current state, initially initialState, and returns the new state. combineFunction will be invoked to combine two states into a new state. The final state is returned:
```sql
SELECT id, reduce_agg(value, 0, (a, b) -> a + b, (a, b) -> a + b)
FROM (
    VALUES
        (1, 2),
        (1, 3),
        (1, 4),
        (2, 20),
        (2, 30),
        (2, 40)
) AS t(id, value)
GROUP BY id;
-- (1, 9)
-- (2, 90)
```
```sql
SELECT id, reduce_agg(value, 1, (a, b) -> a * b, (a, b) -> a * b)
FROM (
    VALUES
        (1, 2),
        (1, 3),
        (1, 4),
        (2, 20),
        (2, 30),
        (2, 40)
) AS t(id, value)
GROUP BY id;
-- (1, 24)
-- (2, 24000)
```
The state type must be a boolean, integer, floating-point, or date/time/interval.
sum(x) → [same as input]
Returns the sum of all input values.
Bitwise Aggregate Functions
bitwise_and_agg(x) → bigint
Returns the bitwise AND of all input values in 2’s complement representation.

bitwise_or_agg(x) → bigint
Returns the bitwise OR of all input values in 2’s complement representation.
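For instance, a minimal sketch over inline data:

```sql
SELECT bitwise_and_agg(x) AS and_agg, -- 12 (1100 & 1110 & 1101 = 1100)
       bitwise_or_agg(x)  AS or_agg   -- 15 (1100 | 1110 | 1101 = 1111)
FROM (VALUES 12, 14, 13) AS t(x);
```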
Map Aggregate Functions
histogram(x) → map(K, bigint)
Returns a map containing the count of the number of times each input value occurs.

map_agg(key, value) → map(K, V)
Returns a map created from the input key/value pairs.

map_union(x(K, V)) → map(K, V)
Returns the union of all the input maps. If a key is found in multiple input maps, that key’s value in the resulting map comes from an arbitrary input map.

multimap_agg(key, value) → map(K, array(V))
Returns a multimap created from the input key/value pairs. Each key can be associated with multiple values.
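As an illustration, a minimal sketch of histogram() and map_agg() over inline data:

```sql
-- Count occurrences of each value.
SELECT histogram(k) -- {a=2, b=1}
FROM (VALUES 'a', 'a', 'b') AS t(k);

-- Build a map from key/value pairs.
SELECT map_agg(k, v) -- {x=1, y=2}
FROM (VALUES ('x', 1), ('y', 2)) AS t(k, v);
```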
Approximate Aggregate Functions
approx_distinct(x) → bigint
Returns the approximate number of distinct input values. This function provides an approximation of count(DISTINCT x). Zero is returned if all input values are null.

This function should produce a standard error of 2.3%, which is the standard deviation of the (approximately normal) error distribution over all possible sets. It does not guarantee an upper bound on the error for any specific input set.
approx_distinct(x, e) → bigint
Returns the approximate number of distinct input values. This function provides an approximation of count(DISTINCT x). Zero is returned if all input values are null.

This function should produce a standard error of no more than e, which is the standard deviation of the (approximately normal) error distribution over all possible sets. It does not guarantee an upper bound on the error for any specific input set. The current implementation of this function requires that e be in the range of [0.0040625, 0.26000].
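For example, a minimal sketch assuming a hypothetical table events with a user_id column:

```sql
-- Approximate distinct count with a standard error cap of 1%
-- (the second argument must lie in [0.0040625, 0.26000]).
SELECT approx_distinct(user_id, 0.01) AS approx_users
FROM events;
```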
approx_percentile(x, percentage) → [same as x]
Returns the approximate percentile for all input values of x at the given percentage. The value of percentage must be between zero and one and must be constant for all input rows.

approx_percentile(x, percentages) → array<[same as x]>
Returns the approximate percentile for all input values of x at each of the specified percentages. Each element of the percentages array must be between zero and one, and the array must be constant for all input rows.

approx_percentile(x, w, percentage) → [same as x]
Returns the approximate weighted percentile for all input values of x using the per-item weight w at the percentage p. The weight must be an integer value of at least one. It is effectively a replication count for the value x in the percentile set. The value of p must be between zero and one and must be constant for all input rows.

approx_percentile(x, w, percentage, accuracy) → [same as x]
Returns the approximate weighted percentile for all input values of x using the per-item weight w at the percentage p, with a maximum rank error of accuracy. The weight must be an integer value of at least one. It is effectively a replication count for the value x in the percentile set. The value of p must be between zero and one and must be constant for all input rows. accuracy must be a value greater than zero and less than one, and it must be constant for all input rows.

approx_percentile(x, w, percentages) → array<[same as x]>
Returns the approximate weighted percentile for all input values of x using the per-item weight w at each of the given percentages specified in the array. The weight must be an integer value of at least one. It is effectively a replication count for the value x in the percentile set. Each element of the array must be between zero and one, and the array must be constant for all input rows.
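For instance, a minimal sketch assuming a hypothetical table requests with a latency_ms column:

```sql
SELECT approx_percentile(latency_ms, 0.95) AS p95,
       approx_percentile(latency_ms, ARRAY[0.5, 0.95, 0.99]) AS p50_p95_p99
FROM requests;
```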
approx_set(x) → HyperLogLog
See HyperLogLog Functions.

merge(x) → HyperLogLog
See HyperLogLog Functions.

merge(qdigest(T)) → qdigest(T)
See Quantile Digest Functions.

qdigest_agg(x) → qdigest<[same as x]>
See Quantile Digest Functions.

qdigest_agg(x, w) → qdigest<[same as x]>
See Quantile Digest Functions.

qdigest_agg(x, w, accuracy) → qdigest<[same as x]>
See Quantile Digest Functions.
numeric_histogram(buckets, value, weight) → map
Computes an approximate histogram with up to buckets number of buckets for all values with a per-item weight of weight. The keys of the returned map are roughly the center of the bin, and the entry is the total weight of the bin. The algorithm is based loosely on [BenHaimTomTov2010].

buckets must be a bigint. value and weight must be numeric.
numeric_histogram(buckets, value) → map
Computes an approximate histogram with up to buckets number of buckets for all values. This function is equivalent to the variant of numeric_histogram() that takes a weight, with a per-item weight of 1. In this case, the total weight in the returned map is the count of items in the bin.
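For example, a minimal sketch over inline data (bin boundaries are approximate by design, so the exact map is not guaranteed):

```sql
-- Up to 4 buckets; map keys are approximate bin centers, values are counts.
SELECT numeric_histogram(4, x)
FROM (VALUES 1.0, 2.0, 2.0, 3.0, 9.0) AS t(x);
```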
Statistical Aggregate Functions
corr(y, x) → double
Returns correlation coefficient of input values.

covar_pop(y, x) → double
Returns the population covariance of input values.

covar_samp(y, x) → double
Returns the sample covariance of input values.
entropy(c) → double
Returns the log-2 entropy of count input values.

\[
\mathrm{entropy}(c) = \sum_i \left[ \frac{c_i}{\sum_j c_j} \log_2\left(\frac{\sum_j c_j}{c_i}\right) \right].
\]

c must be a bigint column of non-negative values.

The function ignores any NULL count. If the sum of non-NULL counts is 0, it returns 0.
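As a quick check, a minimal sketch: two equal counts split the probability mass 50/50, giving exactly one bit of entropy:

```sql
SELECT entropy(c) -- 1.0, since 0.5*log2(2) + 0.5*log2(2) = 1
FROM (VALUES CAST(2 AS bigint), CAST(2 AS bigint)) AS t(c);
```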
kurtosis(x) → double
Returns the excess kurtosis of all input values. Unbiased estimate using the following expression:

\[
\mathrm{kurtosis}(x) = \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum (x_i-\mu)^4}{\sigma^4} - 3\,\frac{(n-1)^2}{(n-2)(n-3)},
\]

where \(\mu\) is the mean, and \(\sigma\) is the standard deviation.
regr_intercept(y, x) → double
Returns linear regression intercept of input values. y is the dependent value. x is the independent value.

regr_slope(y, x) → double
Returns linear regression slope of input values. y is the dependent value. x is the independent value.

skewness(x) → double
Returns the skewness of all input values.
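For instance, a minimal sketch fitting points that lie exactly on the line y = 2x + 1:

```sql
SELECT regr_slope(y, x) AS slope,        -- 2.0
       regr_intercept(y, x) AS intercept -- 1.0
FROM (VALUES (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)) AS t(x, y);
```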
stddev(x) → double
This is an alias for stddev_samp().

stddev_pop(x) → double
Returns the population standard deviation of all input values.

stddev_samp(x) → double
Returns the sample standard deviation of all input values.

variance(x) → double
This is an alias for var_samp().

var_pop(x) → double
Returns the population variance of all input values.

var_samp(x) → double
Returns the sample variance of all input values.
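As an illustration, a minimal sketch contrasting the sample and population estimators:

```sql
SELECT stddev_samp(x) AS samp, -- 1.0 (divides by n - 1 = 2)
       stddev_pop(x)  AS pop   -- ~0.816 (divides by n = 3)
FROM (VALUES 1.0, 2.0, 3.0) AS t(x);
```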
Classification Metrics Aggregate Functions
The following functions each measure how some metric of a binary confusion matrix changes as a function of classification thresholds. They are meant to be used in conjunction.
For example, to find the precision-recall curve, use
```sql
WITH
recall_precision AS (
    SELECT
        CLASSIFICATION_RECALL(10000, correct, pred) AS recalls,
        CLASSIFICATION_PRECISION(10000, correct, pred) AS precisions
    FROM classification_dataset
)
SELECT recall, precision
FROM recall_precision
CROSS JOIN UNNEST(recalls, precisions) AS t(recall, precision)
```
To get the corresponding thresholds for these values, use
```sql
WITH
recall_precision AS (
    SELECT
        CLASSIFICATION_THRESHOLDS(10000, correct, pred) AS thresholds,
        CLASSIFICATION_RECALL(10000, correct, pred) AS recalls,
        CLASSIFICATION_PRECISION(10000, correct, pred) AS precisions
    FROM classification_dataset
)
SELECT threshold, recall, precision
FROM recall_precision
CROSS JOIN UNNEST(thresholds, recalls, precisions) AS t(threshold, recall, precision)
```
To find the ROC curve, use
```sql
WITH
fallout_recall AS (
    SELECT
        CLASSIFICATION_FALLOUT(10000, correct, pred) AS fallouts,
        CLASSIFICATION_RECALL(10000, correct, pred) AS recalls
    FROM classification_dataset
)
SELECT fallout, recall
FROM fallout_recall
CROSS JOIN UNNEST(fallouts, recalls) AS t(fallout, recall)
```
classification_miss_rate(buckets, y, x, weight) → array
Computes the miss-rate with up to buckets number of buckets. Returns an array of miss-rate values.

y should be a boolean outcome value; x should be predictions, each between 0 and 1; weight should be non-negative values, indicating the weight of the instance.

The miss-rate is defined as a sequence whose \(j\)-th entry is

\[
\frac{\sum_{i \;|\; x_i \leq t_j \wedge y_i = 1} w_i}{\sum_{i \;|\; x_i \leq t_j \wedge y_i = 1} w_i \;+\; \sum_{i \;|\; x_i > t_j \wedge y_i = 1} w_i},
\]

where \(t_j\) is the \(j\)-th smallest threshold, and \(y_i\), \(x_i\), and \(w_i\) are the \(i\)-th entries of y, x, and weight, respectively.
classification_miss_rate(buckets, y, x) → array
This function is equivalent to the variant of classification_miss_rate() that takes a weight, with a per-item weight of 1.

classification_fall_out(buckets, y, x, weight) → array
Computes the fall-out with up to buckets number of buckets. Returns an array of fall-out values.

y should be a boolean outcome value; x should be predictions, each between 0 and 1; weight should be non-negative values, indicating the weight of the instance.

The fall-out is defined as a sequence whose \(j\)-th entry is

\[
\frac{\sum_{i \;|\; x_i > t_j \wedge y_i = 0} w_i}{\sum_{i \;|\; y_i = 0} w_i},
\]

where \(t_j\) is the \(j\)-th smallest threshold, and \(y_i\), \(x_i\), and \(w_i\) are the \(i\)-th entries of y, x, and weight, respectively.
classification_fall_out(buckets, y, x) → array
This function is equivalent to the variant of classification_fall_out() that takes a weight, with a per-item weight of 1.

classification_precision(buckets, y, x, weight) → array
Computes the precision with up to buckets number of buckets. Returns an array of precision values.

y should be a boolean outcome value; x should be predictions, each between 0 and 1; weight should be non-negative values, indicating the weight of the instance.

The precision is defined as a sequence whose \(j\)-th entry is

\[
\frac{\sum_{i \;|\; x_i > t_j \wedge y_i = 1} w_i}{\sum_{i \;|\; x_i > t_j} w_i},
\]

where \(t_j\) is the \(j\)-th smallest threshold, and \(y_i\), \(x_i\), and \(w_i\) are the \(i\)-th entries of y, x, and weight, respectively.
classification_precision(buckets, y, x) → array
This function is equivalent to the variant of classification_precision() that takes a weight, with a per-item weight of 1.

classification_recall(buckets, y, x, weight) → array
Computes the recall with up to buckets number of buckets. Returns an array of recall values.

y should be a boolean outcome value; x should be predictions, each between 0 and 1; weight should be non-negative values, indicating the weight of the instance.

The recall is defined as a sequence whose \(j\)-th entry is

\[
\frac{\sum_{i \;|\; x_i > t_j \wedge y_i = 1} w_i}{\sum_{i \;|\; y_i = 1} w_i},
\]

where \(t_j\) is the \(j\)-th smallest threshold, and \(y_i\), \(x_i\), and \(w_i\) are the \(i\)-th entries of y, x, and weight, respectively.
classification_recall(buckets, y, x) → array
This function is equivalent to the variant of classification_recall() that takes a weight, with a per-item weight of 1.

classification_thresholds(buckets, y, x) → array
Computes the thresholds with up to buckets number of buckets. Returns an array of threshold values.

y should be a boolean outcome value; x should be predictions, each between 0 and 1.

The thresholds are defined as a sequence whose \(j\)-th entry is the \(j\)-th smallest threshold.
Differential Entropy Functions
The following functions approximate the binary differential entropy. That is, for a random variable \(x\), they approximate

\[
H(x) = - \int f(x) \log_2\left(f(x)\right)\, dx,
\]

where \(f(x)\) is the probability density function of \(x\).
differential_entropy(sample_size, x)
Returns the approximate log-2 differential entropy from a random variable’s sample outcomes. The function internally creates a reservoir (see [Black2015]), then calculates the entropy from the sample results by approximating the derivative of the cumulative distribution (see [Alizadeh2010]).

sample_size (long) is the maximal number of reservoir samples.

x (double) is the samples.
For example, to find the differential entropy of x of data using 1000000 reservoir samples, use

```sql
SELECT differential_entropy(1000000, x)
FROM data
```
Note

If \(x\) has a known lower and upper bound, prefer the versions taking (bucket_count, x, 1.0, "fixed_histogram_mle", min, max) or (bucket_count, x, 1.0, "fixed_histogram_jacknife", min, max), as they have better convergence.
differential_entropy(sample_size, x, weight)
Returns the approximate log-2 differential entropy from a random variable’s sample outcomes. The function internally creates a weighted reservoir (see [Efraimidis2006]), then calculates the entropy from the sample results by approximating the derivative of the cumulative distribution (see [Alizadeh2010]).

sample_size is the maximal number of reservoir samples.

x (double) is the samples.

weight (double) is a non-negative double value indicating the weight of the sample.
For example, to find the differential entropy of x with weights weight of data using 1000000 reservoir samples, use

```sql
SELECT differential_entropy(1000000, x, weight)
FROM data
```
Note

If \(x\) has a known lower and upper bound, prefer the versions taking (bucket_count, x, weight, "fixed_histogram_mle", min, max) or (bucket_count, x, weight, "fixed_histogram_jacknife", min, max), as they have better convergence.
differential_entropy(bucket_count, x, weight, method, min, max) → double
Returns the approximate log-2 differential entropy from a random variable’s sample outcomes. The function internally creates a conceptual histogram of the sample values, calculates the counts, and then approximates the entropy using maximum likelihood, with or without Jacknife correction, based on the method parameter. If Jacknife correction (see [Beirlant2001]) is used, the estimate is

\[
n H(x) - \frac{n - 1}{n} \sum_{i = 1}^n H\left(x_{(i)}\right),
\]

where \(n\) is the length of the sequence, and \(x_{(i)}\) is the sequence with the \(i\)-th element removed.
bucket_count (long) determines the number of histogram buckets.

x (double) is the samples.

method (varchar) is either 'fixed_histogram_mle' (for the maximum likelihood estimate) or 'fixed_histogram_jacknife' (for the jacknife-corrected maximum likelihood estimate).

min and max (both double) are the minimal and maximal values, respectively; the function will throw if there is an input outside this range.

weight (double) is the weight of the sample, and must be non-negative.
For example, to find the differential entropy of x, each between 0.0 and 1.0, with weights 1.0 of data using 1000000 bins and jacknife estimates, use

```sql
SELECT differential_entropy(1000000, x, 1.0, 'fixed_histogram_jacknife', 0.0, 1.0)
FROM data
```
To find the differential entropy of x, each between -2.0 and 2.0, with weights weight of data using 1000000 buckets and maximum-likelihood estimates, use

```sql
SELECT differential_entropy(1000000, x, weight, 'fixed_histogram_mle', -2.0, 2.0)
FROM data
```
Note

If \(x\) doesn’t have known lower and upper bounds, prefer the versions taking (sample_size, x) (unweighted case) or (sample_size, x, weight) (weighted case), as they use reservoir sampling, which doesn’t require a known range for samples.

Otherwise, if the number of distinct weights is low, especially if the number of samples is low, consider using the version taking (bucket_count, x, weight, "fixed_histogram_jacknife", min, max), as jacknife bias correction is better than maximum likelihood estimation. However, if the number of distinct weights is high, consider using the version taking (bucket_count, x, weight, "fixed_histogram_mle", min, max), as this will reduce memory and running time.
[Alizadeh2010] Alizadeh Noughabi, Hadi & Arghami, N. (2010). “A New Estimator of Entropy”.

[Beirlant2001] Beirlant, Dudewicz, Gyorfi, and van der Meulen, “Nonparametric entropy estimation: an overview” (2001).

[BenHaimTomTov2010] Yael Ben-Haim and Elad Tom-Tov, “A streaming parallel decision tree algorithm”, J. Machine Learning Research 11 (2010), pp. 849–872.

[Black2015] Black, Paul E. (26 January 2015). “Reservoir sampling”. Dictionary of Algorithms and Data Structures.

[Efraimidis2006] Efraimidis, Pavlos S.; Spirakis, Paul G. (2006-03-16). “Weighted random sampling with a reservoir”. Information Processing Letters. 97 (5): 181–185.