T-Test Aggregation
A t_test metrics aggregation performs a statistical hypothesis test in which the test statistic follows a Student’s t-distribution under the null hypothesis. The test is run on numeric values extracted from the aggregated documents or generated by provided scripts. In practice, this tells you whether the difference between two population means is statistically significant or occurred by chance alone.
Syntax
A t_test
aggregation looks like this in isolation:
{
"t_test": {
"a": "value_before",
"b": "value_after",
"type": "paired"
}
}
Assuming that we have a record of node startup times before and after an upgrade, let’s run a t-test to see whether the upgrade affected the node startup time in a meaningful way.
GET node_upgrade/_search
{
"size": 0,
"aggs": {
"startup_time_ttest": {
"t_test": {
"a": { "field": "startup_time_before" },
"b": { "field": "startup_time_after" },
"type": "paired"
}
}
}
}
The a and b objects point to the fields containing the values to compare. Since we have data from the same nodes, we use a paired t-test.
The response returns the p-value, or probability value, for the test. It is the probability of obtaining results at least as extreme as the result processed by the aggregation, assuming that the null hypothesis is correct (meaning there is no difference between the population means). A smaller p-value means the null hypothesis is more likely to be incorrect and that the population means are indeed different.
{
...
"aggregations": {
"startup_time_ttest": {
"value": 0.1914368843365979
}
}
}
The returned value is the p-value of the test.
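To make the numbers above concrete, here is a self-contained Python sketch of what a paired t-test computes: it takes paired differences, forms the t statistic, and derives a two-sided p-value by numerically integrating the Student’s t density. This is an illustration of the statistics, not Elasticsearch code.

```python
import math

def paired_t_test(a_values, b_values):
    """Two-sided paired t-test on two equal-length samples.

    Returns (t statistic, p-value). The p-value is obtained by
    numerically integrating the Student's t probability density
    over the tail, so no statistics library is required.
    """
    diffs = [a - b for a, b in zip(a_values, b_values)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    df = n - 1

    def t_pdf(x):
        # Density of the t-distribution with df degrees of freedom.
        c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
        return c * (1 + x * x / df) ** (-(df + 1) / 2)

    # Two-sided p-value: 2 * P(T > |t|), via composite Simpson's rule
    # on the tail; the density is negligible beyond |t| + 100.
    lo, hi, steps = abs(t), abs(t) + 100.0, 20000
    h = (hi - lo) / steps
    s = t_pdf(lo) + t_pdf(hi)
    for i in range(1, steps):
        s += t_pdf(lo + i * h) * (4 if i % 2 else 2)
    return t, 2 * s * h / 3
```

For example, with before-values [10, 12, 9, 11] and after-values [9, 10, 8, 10], the paired differences are [1, 2, 1, 1], giving t = 5.0 with 3 degrees of freedom and a two-sided p-value of about 0.015.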
T-Test Types
The t_test
aggregation supports unpaired and paired two-sample t-tests. The type of the test can be specified using the type
parameter:
"type": "paired"
performs paired t-test
"type": "homoscedastic"
performs two-sample equal variance test
"type": "heteroscedastic"
performs two-sample unequal variance test (this is default)
Filters
It is also possible to run an unpaired t-test on different sets of records using filters. For example, to test the difference in startup times before the upgrade between two different groups of nodes, we use the same field, startup_time_before, and select the two groups of nodes with term filters on the group name field:
GET node_upgrade/_search
{
"size": 0,
"aggs": {
"startup_time_ttest": {
"t_test": {
"a": {
"field": "startup_time_before",
"filter": {
"term": {
"group": "A"
}
}
},
"b": {
"field": "startup_time_before",
"filter": {
"term": {
"group": "B"
}
}
},
"type": "heteroscedastic"
}
}
}
}
Both a and b use the same field, startup_time_before, but select different populations with different term filters; any query that separates the two groups can be used here. Since we have data from different nodes, we cannot use a paired t-test.
{
...
"aggregations": {
"startup_time_ttest": {
"value": 0.2981858007281437
}
}
}
The returned value is the p-value of the test.
In this example, we are using the same field for both populations. However, this is not a requirement: different fields, and even a combination of fields and scripts, can be used. The populations don’t have to be in the same index either. If the data sets are located in different indices, a term filter on the _index field can be used to select the populations.
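For example, such a request might look like the following. The index names (node_startup_before, node_startup_after) and the field name startup_time are hypothetical, chosen only to illustrate the _index filter:

```json
GET node_startup_before,node_startup_after/_search
{
  "size": 0,
  "aggs": {
    "startup_time_ttest": {
      "t_test": {
        "a": {
          "field": "startup_time",
          "filter": { "term": { "_index": "node_startup_before" } }
        },
        "b": {
          "field": "startup_time",
          "filter": { "term": { "_index": "node_startup_after" } }
        },
        "type": "heteroscedastic"
      }
    }
  }
}
```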
Script
The t_test metric supports scripting. For example, if we need to adjust the startup times for the before values, we could use a script to recalculate them on the fly:
GET node_upgrade/_search
{
"size": 0,
"aggs": {
"startup_time_ttest": {
"t_test": {
"a": {
"script": {
"lang": "painless",
"source": "doc['startup_time_before'].value - params.adjustment",
"params": {
"adjustment": 10
}
}
},
"b": {
"field": "startup_time_after"
},
"type": "paired"
}
}
}
}
The a value is produced by a script, which supports parameterized input just like any other script. We can mix scripts and fields.