Aggregations
Elasticsearch provides a complete Java API to use aggregations. See the Aggregations guide.
Use the AggregationBuilders factory to build your aggregations and add each aggregation you need to the search request:
SearchResponse sr = node.client().prepareSearch()
.setQuery( /* your query */ )
.addAggregation( /* add an aggregation */ )
.execute().actionGet();
Note that you can add more than one aggregation to the same search request. See the Search Java API for details.
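For instance, a single request can carry several independent aggregations. A minimal sketch (reusing the country and dateOfBirth fields from the example further below; any query works here):
SearchResponse sr = node.client().prepareSearch()
        .setQuery(QueryBuilders.matchAllQuery())
        .addAggregation(AggregationBuilders.terms("by_country").field("country"))
        .addAggregation(AggregationBuilders.dateHistogram("by_year")
                .field("dateOfBirth")
                .dateHistogramInterval(DateHistogramInterval.YEAR))
        .execute().actionGet();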
To build aggregation requests, use the built-in AggregationBuilders helpers.
Just import them in your class:
import org.elasticsearch.search.aggregations.AggregationBuilders;
Structuring aggregations
As explained in the Aggregations guide, you can define sub-aggregations inside an aggregation.
An aggregation can be a metrics aggregation or a bucket aggregation.
For example, here is a three-level aggregation composed of:
- Terms aggregation (bucket)
- Date Histogram aggregation (bucket)
- Average aggregation (metric)
SearchResponse sr = node.client().prepareSearch()
.addAggregation(
AggregationBuilders.terms("by_country").field("country")
.subAggregation(AggregationBuilders.dateHistogram("by_year")
.field("dateOfBirth")
.dateHistogramInterval(DateHistogramInterval.YEAR)
.subAggregation(AggregationBuilders.avg("avg_children").field("children"))
)
)
.execute().actionGet();
Metrics aggregations
Min Aggregation
See Min Aggregation.
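As a minimal sketch in the style of the Stats example below (the aggregation name agg and the field height are assumptions), the request could be:
MinAggregationBuilder aggregation =
        AggregationBuilders
                .min("agg")
                .field("height");
The response would then be unwrapped with the Min interface:
import org.elasticsearch.search.aggregations.metrics.min.Min;
// sr is here your SearchResponse object
Min agg = sr.getAggregations().get("agg");
double value = agg.getValue();   // smallest height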
Max Aggregation
See Max Aggregation.
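A hedged sketch, mirroring the Min example above (agg and height are again assumptions):
MaxAggregationBuilder aggregation =
        AggregationBuilders
                .max("agg")
                .field("height");
import org.elasticsearch.search.aggregations.metrics.max.Max;
Max agg = sr.getAggregations().get("agg");
double value = agg.getValue();   // largest height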
Sum Aggregation
See Sum Aggregation.
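Likewise, a minimal sketch (agg and height assumed):
SumAggregationBuilder aggregation =
        AggregationBuilders
                .sum("agg")
                .field("height");
import org.elasticsearch.search.aggregations.metrics.sum.Sum;
Sum agg = sr.getAggregations().get("agg");
double value = agg.getValue();   // sum of all heights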
Avg Aggregation
See Avg Aggregation.
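Likewise, a minimal sketch (agg and height assumed):
AvgAggregationBuilder aggregation =
        AggregationBuilders
                .avg("agg")
                .field("height");
import org.elasticsearch.search.aggregations.metrics.avg.Avg;
Avg agg = sr.getAggregations().get("agg");
double value = agg.getValue();   // average height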
Stats Aggregation
See Stats Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
StatsAggregationBuilder aggregation =
AggregationBuilders
.stats("agg")
.field("height");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.metrics.stats.Stats;
// sr is here your SearchResponse object
Stats agg = sr.getAggregations().get("agg");
double min = agg.getMin();
double max = agg.getMax();
double avg = agg.getAvg();
double sum = agg.getSum();
long count = agg.getCount();
Extended Stats Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
ExtendedStatsAggregationBuilder aggregation =
AggregationBuilders
.extendedStats("agg")
.field("height");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;
// sr is here your SearchResponse object
ExtendedStats agg = sr.getAggregations().get("agg");
double min = agg.getMin();
double max = agg.getMax();
double avg = agg.getAvg();
double sum = agg.getSum();
long count = agg.getCount();
double stdDeviation = agg.getStdDeviation();
double sumOfSquares = agg.getSumOfSquares();
double variance = agg.getVariance();
Value Count Aggregation
See Value Count Aggregation.
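A minimal sketch (agg and height assumed):
ValueCountAggregationBuilder aggregation =
        AggregationBuilders
                .count("agg")
                .field("height");
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount;
ValueCount agg = sr.getAggregations().get("agg");
long count = agg.getValue();   // number of height values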
Percentile Aggregation
See Percentile Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
PercentilesAggregationBuilder aggregation =
AggregationBuilders
.percentiles("agg")
.field("height");
You can of course provide your own set of percentiles instead of using the defaults:
PercentilesAggregationBuilder aggregation =
AggregationBuilders
.percentiles("agg")
.field("height")
.percentiles(1.0, 5.0, 10.0, 20.0, 30.0, 75.0, 95.0, 99.0);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles;
// sr is here your SearchResponse object
Percentiles agg = sr.getAggregations().get("agg");
// For each entry
for (Percentile entry : agg) {
double percent = entry.getPercent(); // Percent
double value = entry.getValue(); // Value
logger.info("percent [{}], value [{}]", percent, value);
}
The example above will basically produce:
percent [1.0], value [0.814338896154595]
percent [5.0], value [0.8761912455821302]
percent [25.0], value [1.173346540141847]
percent [50.0], value [1.5432023318692198]
percent [75.0], value [1.923915462033674]
percent [95.0], value [2.2273644908535335]
percent [99.0], value [2.284989339108279]
Percentile Ranks Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
PercentileRanksAggregationBuilder aggregation =
AggregationBuilders
.percentileRanks("agg")
.field("height")
.values(1.24, 1.91, 2.22);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;
import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks;
// sr is here your SearchResponse object
PercentileRanks agg = sr.getAggregations().get("agg");
// For each entry
for (Percentile entry : agg) {
double percent = entry.getPercent(); // Percent
double value = entry.getValue(); // Value
logger.info("percent [{}], value [{}]", percent, value);
}
This will basically produce:
percent [29.664353095090945], value [1.24]
percent [73.9335313461868], value [1.91]
percent [94.40095147327283], value [2.22]
Cardinality Aggregation
See Cardinality Aggregation.
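A minimal sketch (agg is assumed; the gender field is reused from the other examples):
CardinalityAggregationBuilder aggregation =
        AggregationBuilders
                .cardinality("agg")
                .field("gender");
import org.elasticsearch.search.aggregations.metrics.cardinality.Cardinality;
Cardinality agg = sr.getAggregations().get("agg");
long distinctValues = agg.getValue();   // approximate number of distinct gender values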
Geo Bounds Aggregation
See Geo Bounds Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
GeoBoundsAggregationBuilder aggregation =
GeoBoundsAggregationBuilder
.geoBounds("agg")
.field("address.location")
.wrapLongitude(true);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBounds;
// sr is here your SearchResponse object
GeoBounds agg = sr.getAggregations().get("agg");
GeoPoint bottomRight = agg.bottomRight();
GeoPoint topLeft = agg.topLeft();
logger.info("bottomRight {}, topLeft {}", bottomRight, topLeft);
This will basically produce:
bottomRight [40.70500764381921, 13.952946866893775], topLeft [53.49603022435221, -4.190029308156676]
Top Hits Aggregation
See Top Hits Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.terms("agg").field("gender")
.subAggregation(
AggregationBuilders.topHits("top")
);
Most of the standard search options can be used here as well, such as from, size, sort, highlight, explain…
AggregationBuilder aggregation =
AggregationBuilders
.terms("agg").field("gender")
.subAggregation(
AggregationBuilders.topHits("top")
.explain(true)
.size(1)
.from(10)
);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.metrics.tophits.TopHits;
// sr is here your SearchResponse object
Terms agg = sr.getAggregations().get("agg");
// For each entry
for (Terms.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // bucket key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
// We ask for top_hits for each bucket
TopHits topHits = entry.getAggregations().get("top");
for (SearchHit hit : topHits.getHits().getHits()) {
logger.info(" -> id [{}], _source [{}]", hit.getId(), hit.getSourceAsString());
}
}
This will basically produce:
key [male], doc_count [5107]
-> id [AUnzSZze9k7PKXtq04x2], _source [{"gender":"male",...}]
-> id [AUnzSZzj9k7PKXtq04x4], _source [{"gender":"male",...}]
-> id [AUnzSZzl9k7PKXtq04x5], _source [{"gender":"male",...}]
key [female], doc_count [4893]
-> id [AUnzSZzM9k7PKXtq04xy], _source [{"gender":"female",...}]
-> id [AUnzSZzp9k7PKXtq04x8], _source [{"gender":"female",...}]
-> id [AUnzSZ0W9k7PKXtq04yS], _source [{"gender":"female",...}]
Scripted Metric Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
ScriptedMetricAggregationBuilder aggregation = AggregationBuilders
.scriptedMetric("agg")
.initScript(new Script("state.heights = []"))
.mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)"));
You can also specify a combine script which will be executed on each shard:
ScriptedMetricAggregationBuilder aggregation = AggregationBuilders
.scriptedMetric("agg")
.initScript(new Script("state.heights = []"))
.mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)"))
.combineScript(new Script("double heights_sum = 0.0; for (t in state.heights) { heights_sum += t } return heights_sum"));
You can also specify a reduce script which will be executed on the node which receives the request:
ScriptedMetricAggregationBuilder aggregation = AggregationBuilders
.scriptedMetric("agg")
.initScript(new Script("state.heights = []"))
.mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)"))
.combineScript(new Script("double heights_sum = 0.0; for (t in state.heights) { heights_sum += t } return heights_sum"))
.reduceScript(new Script("double heights_sum = 0.0; for (a in states) { heights_sum += a } return heights_sum"));
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetric;
// sr is here your SearchResponse object
ScriptedMetric agg = sr.getAggregations().get("agg");
Object scriptedResult = agg.aggregation();
logger.info("scriptedResult [{}]", scriptedResult);
Note that the result depends on the scripts you built. For the first example, this will basically produce:
scriptedResult object [ArrayList]
scriptedResult [ {
"heights" : [ 1.122218480146643, -1.8148918111233887, -1.7626731575142909, ... ]
}, {
"heights" : [ -0.8046067304119863, -2.0785486707864553, -1.9183567430207953, ... ]
}, {
"heights" : [ 2.092635728868694, 1.5697545960886536, 1.8826954461968808, ... ]
}, {
"heights" : [ -2.1863201099468403, 1.6328549117346856, -1.7078288405893842, ... ]
}, {
"heights" : [ 1.6043904836424177, -2.0736538674414025, 0.9898266674373053, ... ]
} ]
The second example will produce:
scriptedResult object [ArrayList]
scriptedResult [-41.279615707402876,
-60.88007362339038,
38.823270659734256,
14.840192739445632,
11.300902755741326]
The last example will produce:
scriptedResult object [Double]
scriptedResult [2.171917696507009]
Bucket aggregations
Global Aggregation
See Global Aggregation.
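A minimal sketch (agg is assumed; the sub-aggregation reuses the gender field). A global aggregation only makes sense as a top-level aggregation:
AggregationBuilder aggregation =
        AggregationBuilders
                .global("agg")
                .subAggregation(AggregationBuilders.terms("genders").field("gender"));
import org.elasticsearch.search.aggregations.bucket.global.Global;
Global agg = sr.getAggregations().get("agg");
long docCount = agg.getDocCount();   // number of documents in the global bucket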
Filter Aggregation
See Filter Aggregation.
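A minimal sketch (agg is assumed; the filter reuses the gender query shown in the Significant Terms example):
AggregationBuilder aggregation =
        AggregationBuilders
                .filter("agg", QueryBuilders.termQuery("gender", "male"));
import org.elasticsearch.search.aggregations.bucket.filter.Filter;
Filter agg = sr.getAggregations().get("agg");
long docCount = agg.getDocCount();   // number of documents matching the filter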
Filters Aggregation
See Filters Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.filters("agg",
new FiltersAggregator.KeyedFilter("men", QueryBuilders.termQuery("gender", "male")),
new FiltersAggregator.KeyedFilter("women", QueryBuilders.termQuery("gender", "female")));
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.filters.Filters;
// sr is here your SearchResponse object
Filters agg = sr.getAggregations().get("agg");
// For each entry
for (Filters.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // bucket key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
}
This will basically produce:
key [men], doc_count [4982]
key [women], doc_count [5018]
Missing Aggregation
See Missing Aggregation.
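A minimal sketch (agg is assumed; gender is reused from the other examples):
AggregationBuilder aggregation =
        AggregationBuilders
                .missing("agg")
                .field("gender");
import org.elasticsearch.search.aggregations.bucket.missing.Missing;
Missing agg = sr.getAggregations().get("agg");
long docCount = agg.getDocCount();   // documents without a gender value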
Nested Aggregation
See Nested Aggregation.
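A minimal sketch (agg is assumed; the resellers nested path is reused from the Reverse Nested example below):
AggregationBuilder aggregation =
        AggregationBuilders
                .nested("agg", "resellers");
import org.elasticsearch.search.aggregations.bucket.nested.Nested;
Nested agg = sr.getAggregations().get("agg");
long docCount = agg.getDocCount();   // number of nested reseller documents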
Reverse Nested Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.nested("agg", "resellers")
.subAggregation(
AggregationBuilders
.terms("name").field("resellers.name")
.subAggregation(
AggregationBuilders
.reverseNested("reseller_to_product")
)
);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.nested.Nested;
import org.elasticsearch.search.aggregations.bucket.nested.ReverseNested;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
// sr is here your SearchResponse object
Nested agg = sr.getAggregations().get("agg");
Terms name = agg.getAggregations().get("name");
for (Terms.Bucket bucket : name.getBuckets()) {
ReverseNested resellerToProduct = bucket.getAggregations().get("reseller_to_product");
resellerToProduct.getDocCount(); // Doc count
}
Children Aggregation
See Children Aggregation.
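A hedged sketch, assuming the parent-join module is on the classpath and a child type named reseller exists (agg and reseller are assumptions):
AggregationBuilder aggregation =
        JoinAggregationBuilders
                .children("agg", "reseller");
import org.elasticsearch.join.aggregations.Children;
Children agg = sr.getAggregations().get("agg");
long docCount = agg.getDocCount();   // number of child documents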
Terms Aggregation
See Terms Aggregation.
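Prepare aggregation request
A matching request can be built with the terms factory (the names genders and gender mirror the response snippet below):
AggregationBuilder aggregation =
        AggregationBuilders
                .terms("genders")
                .field("gender");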
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
// sr is here your SearchResponse object
Terms genders = sr.getAggregations().get("genders");
// For each entry
for (Terms.Bucket entry : genders.getBuckets()) {
entry.getKey(); // Term
entry.getDocCount(); // Doc count
}
Order
Import the BucketOrder class:
import org.elasticsearch.search.aggregations.BucketOrder;
Ordering the buckets by their doc_count in an ascending manner:
AggregationBuilders
.terms("genders")
.field("gender")
.order(BucketOrder.count(true))
Ordering the buckets alphabetically by their terms in an ascending manner:
AggregationBuilders
.terms("genders")
.field("gender")
.order(BucketOrder.key(true))
Ordering the buckets by single value metrics sub-aggregation (identified by the aggregation name):
AggregationBuilders
.terms("genders")
.field("gender")
.order(BucketOrder.aggregation("avg_height", false))
.subAggregation(
AggregationBuilders.avg("avg_height").field("height")
)
Ordering the buckets by multiple criteria:
AggregationBuilders
.terms("genders")
.field("gender")
.order(BucketOrder.compound( // in order of priority:
BucketOrder.aggregation("avg_height", false), // sort by sub-aggregation first
BucketOrder.count(true))) // then bucket count as a tie-breaker
.subAggregation(
AggregationBuilders.avg("avg_height").field("height")
)
Significant Terms Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.significantTerms("significant_countries")
.field("address.country");
// Let's say you search for men only
SearchResponse sr = client.prepareSearch()
.setQuery(QueryBuilders.termQuery("gender", "male"))
.addAggregation(aggregation)
.get();
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;
// sr is here your SearchResponse object
SignificantTerms agg = sr.getAggregations().get("significant_countries");
// For each entry
for (SignificantTerms.Bucket entry : agg.getBuckets()) {
entry.getKey(); // Term
entry.getDocCount(); // Doc count
}
Range Aggregation
See Range Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.range("agg")
.field("height")
.addUnboundedTo(1.0f) // from -infinity to 1.0 (excluded)
.addRange(1.0f, 1.5f) // from 1.0 to 1.5 (excluded)
.addUnboundedFrom(1.5f); // from 1.5 to +infinity
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // Range as key
Number from = (Number) entry.getFrom(); // Bucket from
Number to = (Number) entry.getTo(); // Bucket to
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount);
}
This will basically produce:
key [*-1.0], from [-Infinity], to [1.0], doc_count [9]
key [1.0-1.5], from [1.0], to [1.5], doc_count [21]
key [1.5-*], from [1.5], to [Infinity], doc_count [20]
Date Range Aggregation
See Date Range Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.dateRange("agg")
.field("dateOfBirth")
.format("yyyy")
.addUnboundedTo("1950") // from -infinity to 1950 (excluded)
.addRange("1950", "1960") // from 1950 to 1960 (excluded)
.addUnboundedFrom("1960"); // from 1960 to +infinity
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // Date range as key
DateTime fromAsDate = (DateTime) entry.getFrom(); // Date bucket from as a Date
DateTime toAsDate = (DateTime) entry.getTo(); // Date bucket to as a Date
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsDate, toAsDate, docCount);
}
This will basically produce:
key [*-1950], from [null], to [1950-01-01T00:00:00.000Z], doc_count [8]
key [1950-1960], from [1950-01-01T00:00:00.000Z], to [1960-01-01T00:00:00.000Z], doc_count [5]
key [1960-*], from [1960-01-01T00:00:00.000Z], to [null], doc_count [37]
Ip Range Aggregation
See Ip Range Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.ipRange("agg")
.field("ip")
.addUnboundedTo("192.168.1.0") // from -infinity to 192.168.1.0 (excluded)
.addRange("192.168.1.0", "192.168.2.0") // from 192.168.1.0 to 192.168.2.0 (excluded)
.addUnboundedFrom("192.168.2.0"); // from 192.168.2.0 to +infinity
Note that you could also use IP masks as ranges:
AggregationBuilder aggregation =
AggregationBuilders
.ipRange("agg")
.field("ip")
.addMaskRange("192.168.0.0/32")
.addMaskRange("192.168.0.0/24")
.addMaskRange("192.168.0.0/16");
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // Ip range as key
String fromAsString = entry.getFromAsString(); // Ip bucket from as a String
String toAsString = entry.getToAsString(); // Ip bucket to as a String
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsString, toAsString, docCount);
}
The first request will basically produce:
key [*-192.168.1.0], from [null], to [192.168.1.0], doc_count [13]
key [192.168.1.0-192.168.2.0], from [192.168.1.0], to [192.168.2.0], doc_count [14]
key [192.168.2.0-*], from [192.168.2.0], to [null], doc_count [23]
The second request (with IP masks) will basically produce:
key [192.168.0.0/32], from [192.168.0.0], to [192.168.0.1], doc_count [0]
key [192.168.0.0/24], from [192.168.0.0], to [192.168.1.0], doc_count [13]
key [192.168.0.0/16], from [192.168.0.0], to [192.169.0.0], doc_count [50]
Histogram Aggregation
See Histogram Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.histogram("agg")
.field("height")
.interval(1);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");
// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
Number key = (Number) entry.getKey(); // Key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
}
Order
Supports the same order functionality as the Terms Aggregation.
Date Histogram Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.dateHistogram("agg")
.field("dateOfBirth")
.dateHistogramInterval(DateHistogramInterval.YEAR);
Or if you want to set an interval of 10 days:
AggregationBuilder aggregation =
AggregationBuilders
.dateHistogram("agg")
.field("dateOfBirth")
.dateHistogramInterval(DateHistogramInterval.days(10));
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");
// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
DateTime key = (DateTime) entry.getKey(); // Key
String keyAsString = entry.getKeyAsString(); // Key as String
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], date [{}], doc_count [{}]", keyAsString, key.getYear(), docCount);
}
This will basically produce:
key [1942-01-01T00:00:00.000Z], date [1942], doc_count [1]
key [1945-01-01T00:00:00.000Z], date [1945], doc_count [1]
key [1946-01-01T00:00:00.000Z], date [1946], doc_count [1]
...
key [2005-01-01T00:00:00.000Z], date [2005], doc_count [1]
key [2007-01-01T00:00:00.000Z], date [2007], doc_count [2]
key [2008-01-01T00:00:00.000Z], date [2008], doc_count [3]
Order
Supports the same order functionality as the Terms Aggregation.
Geo Distance Aggregation
See Geo Distance Aggregation.
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.geoDistance("agg", new GeoPoint(48.84237171118314,2.33320027692004))
.field("address.location")
.unit(DistanceUnit.KILOMETERS)
.addUnboundedTo(3.0)
.addRange(3.0, 10.0)
.addRange(10.0, 500.0);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.range.Range;
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");
// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
String key = entry.getKeyAsString(); // key as String
Number from = (Number) entry.getFrom(); // bucket from value
Number to = (Number) entry.getTo(); // bucket to value
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount);
}
This will basically produce:
key [*-3.0], from [0.0], to [3.0], doc_count [161]
key [3.0-10.0], from [3.0], to [10.0], doc_count [460]
key [10.0-500.0], from [10.0], to [500.0], doc_count [4925]
Geo Hash Grid Aggregation
Prepare aggregation request
Here is an example on how to create the aggregation request:
AggregationBuilder aggregation =
AggregationBuilders
.geohashGrid("agg")
.field("address.location")
.precision(4);
Use aggregation response
Import Aggregation definition classes:
import org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGrid;
// sr is here your SearchResponse object
GeoHashGrid agg = sr.getAggregations().get("agg");
// For each entry
for (GeoHashGrid.Bucket entry : agg.getBuckets()) {
String keyAsString = entry.getKeyAsString(); // key as String
GeoPoint key = (GeoPoint) entry.getKey(); // key as geo point
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], point {}, doc_count [{}]", keyAsString, key, docCount);
}
This will basically produce:
key [gbqu], point [47.197265625, -1.58203125], doc_count [1282]
key [gbvn], point [50.361328125, -4.04296875], doc_count [1248]
key [u1j0], point [50.712890625, 7.20703125], doc_count [1156]
key [u0j2], point [45.087890625, 7.55859375], doc_count [1138]
...