A rollup job is a periodic task that aggregates data from the indices matched by its index pattern and summarizes it into a new index. In the following example, we index sensor documents with different timestamps into a date-stamped index, then create a rollup job that periodically aggregates data from the matching indices on a cron schedule.
PUT /sensor-2018-01-01/_doc/1
{
  "timestamp": 1516729294000,
  "temperature": 200,
  "voltage": 5.2,
  "node": "a"
}
On running the above code, we get the following result −
{ "_index" : "sensor", "_type" : "_doc", "_id" : ""1" "_version" : 1, "result" : "created", "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "_seq_no" : 0, "_primary_term" : 1 }
Now, add a second document in the same way (and further documents as needed) −
PUT /sensor-2018-01-01/_doc/2
{
  "timestamp": 1413729294000,
  "temperature": 201,
  "voltage": 5.9,
  "node": "a"
}
Next, create a rollup job that groups the sensor data into hourly buckets per node and computes metrics on the temperature and voltage fields −

PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "interval": "60m"
    },
    "terms": {
      "fields": ["node"]
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": ["min", "max", "sum"]
    },
    {
      "field": "voltage",
      "metrics": ["avg"]
    }
  ]
}
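To confirm the job was created with the expected configuration, its definition, statistics, and current status can be retrieved with the get rollup jobs API −

GET _rollup/job/sensor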
The cron parameter controls when and how often the job activates. Each time the job's cron schedule triggers, it resumes summarizing from wherever it left off after the previous activation.
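Note that a newly created rollup job is in the stopped state; it must be started explicitly before the cron schedule begins triggering it −

POST _rollup/job/sensor/_start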
After the job has run and processed some data, we can search the rolled-up index using the _rollup_search endpoint, which accepts a subset of the normal query DSL −
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
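The _rollup_search endpoint can also query live and rolled-up data together: a single request may name any number of regular indices alongside at most one rollup index. For example, the same aggregation can span the raw sensor-2018-01-01 index and the sensor_rollup index in one query −

GET sensor-2018-01-01,sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}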