# Ingest data
Our primary goal with the ingestion endpoints is to keep them as fast and reliable as possible. Therefore, we've chosen an event-driven architecture. The trade-off is that ingested data may not be immediately available for query.
| Feature | Expected latency |
|---|---|
| Raw data | < 5 min |
| Analytics | < 1 hour |
| Seasonality | < 1 day |
Running our analytics pipeline is computationally expensive, so analytics on ingested data become available at higher latencies than the raw data itself.
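Because availability is eventual rather than immediate, clients that need read-after-write behaviour should poll for ingested data instead of assuming it is queryable straight away. The sketch below illustrates the pattern in Python; the base URL, Set ID, and the query endpoint with its `from`/`to` parameters are illustrative assumptions, not documented API surface.

```python
import time

import requests

API_BASE = "https://api.example.com"  # hypothetical base URL
SET_ID = "my-set"                     # hypothetical Set ID

def wait_for_datapoint(timestamp: int, timeout_s: int = 300) -> bool:
    """Poll a (hypothetical) query endpoint until a raw datapoint
    ingested at `timestamp` becomes visible, or the timeout elapses.
    Raw data is expected to be queryable within about 5 minutes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{API_BASE}/api/sets/{SET_ID}/data",         # assumed query endpoint
            params={"from": timestamp, "to": timestamp},  # assumed parameters
        )
        resp.raise_for_status()
        if resp.json():
            return True
        time.sleep(10)  # back off between polls
    return False
```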
There are three main ways to ingest data into a Set:
## Import

Using the import tool.
## [POST] /api/sets/{set_id}/data

Ingest a single datapoint through the API:

```json
{
  "timestamp": 1640995300,
  "value": 22.4,
  "tags": ["temp", "manchester", "indoors"]
}
```
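As a concrete illustration, here is a minimal Python sketch of calling this endpoint with the payload above. The base URL and Set ID are placeholders, and authentication is omitted:

```python
import requests

API_BASE = "https://api.example.com"  # hypothetical base URL
SET_ID = "my-set"                     # hypothetical Set ID

datapoint = {
    "timestamp": 1640995300,
    "value": 22.4,
    "tags": ["temp", "manchester", "indoors"],
}

# POST a single datapoint to the ingestion endpoint
resp = requests.post(f"{API_BASE}/api/sets/{SET_ID}/data", json=datapoint)
resp.raise_for_status()
```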
## [POST] /api/sets/{set_id}/data/batch

Ingest a collection of datapoints through the API:

```json
[
  {
    "timestamp": 1640995200,
    "value": 3.14,
    "tags": ["temp", "manchester", "garden"]
  },
  {
    "timestamp": 1640995300,
    "value": 22.4,
    "tags": ["temp", "manchester", "indoors"]
  }
]
```
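The batch endpoint takes the same call shape, with a JSON array as the request body. A minimal sketch, again with a placeholder base URL and Set ID:

```python
import requests

API_BASE = "https://api.example.com"  # hypothetical base URL
SET_ID = "my-set"                     # hypothetical Set ID

datapoints = [
    {"timestamp": 1640995200, "value": 3.14, "tags": ["temp", "manchester", "garden"]},
    {"timestamp": 1640995300, "value": 22.4, "tags": ["temp", "manchester", "indoors"]},
]

# POST the whole collection in a single request
resp = requests.post(f"{API_BASE}/api/sets/{SET_ID}/data/batch", json=datapoints)
resp.raise_for_status()
```

When ingesting many datapoints, a single batch request is generally preferable to looping over the single-datapoint endpoint, since it amortises per-request overhead.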
## Additional notes
- All ingested datapoints must contain at least one tag
- The special tag 'all' is ignored on ingested datapoints
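These rules can be enforced client-side before a request is sent. The validator below is an illustrative sketch, not part of any official SDK: it strips the special 'all' tag (which the server ignores anyway) and rejects datapoints left without tags.

```python
def validate_datapoint(dp: dict) -> dict:
    """Mirror the ingestion rules above on the client:
    - the special tag 'all' is ignored by the server, so strip it here;
    - every datapoint must carry at least one remaining tag."""
    tags = [t for t in dp.get("tags", []) if t != "all"]
    if not tags:
        raise ValueError("datapoint must contain at least one tag besides 'all'")
    return {**dp, "tags": tags}
```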