The Pipeline Metric Data Model
Introduction
The Mezmo Telemetry Pipeline platform enables you to quickly and easily process metrics within your pipeline using out-of-the-box functionality. This includes support for use cases such as:
- Extracting metrics embedded in logs to be sent downstream with the Parse Processor
- Aggregating metric values to reduce storage needs
- Limiting tag cardinality to reduce downstream load and preserve stability within metric storage systems
Metrics within the Telemetry Pipeline are handled in the same way as log events, except that certain Processors require the events to be in a specific format in order to function.
Metric values from supported metric Sources, such as Prometheus, are automatically created with the appropriate format to be used within any pipeline. They will also be automatically compatible with any downstream Destinations, such as Prometheus Remote Write and Datadog Metrics.
If you have metrics that are not properly formatted, you can use the Event to Metric Processor to transform them into the appropriate model for subsequent processing and sending downstream.
The Metric Data Model
Metric data within the Pipeline must follow a standard format in order to be used in any Processors or Destinations that require a metric value.
This table describes the fields of the metric data model, including each field's data type and whether it is required.
Field | Data type | Required | Description |
---|---|---|---|
name | String | Yes | The name for the metric |
kind | Enumerated set | Yes | The kind of metric, either incremental or absolute |
value | Object | Yes | An object of one of the classes gauge, counter, distribution, set, histogram, or summary |
namespace | String | No | An optional value for distinguishing metric values with the same name |
tags | Object | No | An optional set of tag keys and values |
This code block is a JSON representation of the metric model with example values:
{
  "name": "go_goroutines",
  "namespace": "myspace",
  "tags": {
    "instance": "host-address:443"
  },
  "kind": "absolute",
  "value": {
    "type": "counter",
    "value": 36
  }
}
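For comparison, this is an illustrative sketch of how an absolute gauge metric might look, assuming the gauge class follows the same type and value structure as the counter example above. The metric name and values here are examples only, not output captured from a pipeline:
{
  "name": "process_resident_memory_bytes",
  "namespace": "myspace",
  "tags": {
    "instance": "host-address:443"
  },
  "kind": "absolute",
  "value": {
    "type": "gauge",
    "value": 18219008
  }
}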
Transforming Metrics
There are multiple ways to transform data to create or manipulate a metric event. Keep in mind that a metric event is treated as a log until it is input to a metric Processor or sent to a metric Destination.
- You can use all of the standard Processors, such as Drop Fields, Filter, and Route, on any metrics, so long as none of the required fields are removed.
- You can use the Event to Metric Processor to transform any log input directly into a metric-format event at its output. However, you cannot create a histogram, distribution, set, or summary with this Processor (see the example at the end of this section).
- You can use the Map Fields Processor to move data within an event so that it matches the metric data model. This requires all of the metric fields to already be present and parsed.
- You can use the Parse Processor to extract all of the necessary fields to match the metric data model.
If you use an HTTP Source or extract a metric from a log, you will need to use the Event to Metric Processor or another suitable method to make sure the metric data is formatted to match the Mezmo metrics model.
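As a minimal sketch of that transformation, assume a parsed log event like the following, where the field names and values are hypothetical and not taken from any specific Source:
{
  "message": "GET /checkout completed",
  "status": 200
}
Using the Event to Metric Processor, or another suitable method, this event could be shaped into an incremental counter that matches the metric data model, for example by counting each matching log line and carrying the status field as a tag (again, the name, namespace, and tag shown are illustrative):
{
  "name": "http_requests",
  "namespace": "myspace",
  "tags": {
    "status": "200"
  },
  "kind": "incremental",
  "value": {
    "type": "counter",
    "value": 1
  }
}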