AWS recently announced support for a new Apache Flink connector for Prometheus. The new connector, contributed by AWS to the Flink open source project, adds Prometheus and Amazon Managed Service for Prometheus as a new destination for Flink.
In this post, we explain how the new connector works. We also show how you can manage your Prometheus metrics data cardinality by preprocessing raw data with Flink to build real-time observability with Amazon Managed Service for Prometheus and Amazon Managed Grafana.
Amazon Managed Service for Prometheus is a secure, serverless, scalable, Prometheus-compatible monitoring service. You can use the same open source Prometheus data model and query language that you use today to monitor the performance of your workloads without having to manage the underlying infrastructure. Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to send processed data to an Amazon Managed Service for Prometheus destination starting with Flink version 1.19. With Amazon Managed Service for Apache Flink, you can transform and analyze data in real time. There are no servers or clusters to manage, and there is no compute or storage infrastructure to set up.
Observability beyond compute
In an increasingly connected world, the boundary of systems extends beyond compute assets, IT infrastructure, and applications. Distributed assets such as Internet of Things (IoT) devices, connected vehicles, and end-user media streaming devices are an integral part of business operations in many sectors. The ability to observe every asset of your business is key to detecting potential issues early, improving the experience of your customers, and protecting the profitability of the business.
Metrics and time series
It's useful to think of observability as three pillars: metrics, logs, and traces. The most relevant pillar for distributed devices, like IoT, is metrics. This is because metrics can capture measurements from sensors or counts of specific events emitted by the device.
Metrics are series of samples of a given measurement at specific times. For example, in the case of a connected vehicle, they can be the readings from the electric motor RPM sensor. Metrics are normally represented as time series, or sequences of discrete data points in chronological order. Metrics' time series are usually associated with dimensions, also called labels or tags, to help classify and analyze the data. In the case of a connected vehicle, labels might be something like the following (a short sketch follows this list):
- Metric name – For example, “Electric Motor RPM”
- Vehicle ID – A unique identifier of the vehicle, like the Vehicle Identification Number (VIN)
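As a minimal illustration of this data model, a single labeled sample could be represented in a Flink application like the following. The record and field names are hypothetical; they are not part of any AWS or Flink API.

```java
/**
 * Hypothetical representation of a single metric sample from a connected
 * vehicle. The metric name and the labels (here, the VIN) identify the
 * time series; the value and timestamp identify the individual sample.
 */
public record VehicleMetricSample(
        String metricName,    // for example, "electric_motor_rpm"
        String vin,           // Vehicle Identification Number, used as a label
        double value,         // the sensor reading
        long timestampMillis  // when the sample was taken, in epoch milliseconds
) {}
```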
Prometheus as a specialized time series database
Prometheus is a popular solution for storing and analyzing metrics. Prometheus defines a standard interface for storing and querying time series. Commonly used in combination with visualization tools like Grafana, Prometheus is optimized for real-time dashboards and real-time alerting.
Often considered primarily for observing compute resources, like containers or applications, Prometheus is actually a specialized time series database that can effectively be used to observe different types of distributed assets, including IoT devices.
Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service. See What is Amazon Managed Service for Prometheus? to learn more about Amazon Managed Service for Prometheus.
Effectively processing observability events at scale
Handling observability data at scale becomes harder because of the number of assets and unique metrics, especially when observing massively distributed devices, for the following reasons:
- High cardinality – Each device emits multiple metrics or types of events, each to be tracked independently.
- High frequency – Devices might emit events very frequently, multiple times per second. This can result in a large volume of raw data. This aspect in particular represents the main difference from observing compute resources, which are usually scraped at longer intervals.
- Events arrive at irregular intervals and out of order – Unlike compute assets that are usually scraped at regular intervals, we often see transmission delays or temporarily disconnected devices, which cause events to arrive at irregular intervals. Concurrent events from different devices might follow different paths and arrive at different times.
- Lack of contextual information – Devices often transmit over channels with limited bandwidth, such as GPRS or Bluetooth. To optimize communication, events seldom contain contextual information, such as the device model or customer details. However, this information is required for effective observability.
- Derive metrics from events – Devices often emit specific events when particular conditions occur. For example, when the vehicle ignition is turned on or off, or when a warning is emitted by the onboard computer. These are not direct metrics. However, counting these events and measuring their rates are valuable metrics that can be inferred from them.
Effectively extracting value from raw events requires processing. Processing might happen on read, when you query the data, or upfront, before storing.
Storing and analyzing raw events
The common approach with observability events, and with metrics in particular, is "storing first." You can simply write the raw metrics into Prometheus. Processing, such as grouping, aggregating, and calculating derived metrics, happens "on query," when data is extracted from Prometheus.
This approach might become particularly inefficient when you're building real-time dashboards or alerting and your data has very high cardinality or high frequency. As the time series database is continuously queried, a large amount of data is repeatedly extracted from storage and processed. The following diagram illustrates this workflow.
Preprocessing raw observability events
Preprocessing raw events before storing shifts the work left, as illustrated in the following diagram. This increases the efficiency of real-time dashboards and alerts, allowing the solution to scale.
Apache Flink for preprocessing observability events
Preprocessing raw observability events requires a processing engine that allows you to do the following (a minimal code sketch follows this list):
- Enrich events efficiently, looking up reference data and adding new dimensions to the raw events. For example, adding the vehicle model based on the vehicle ID. Enrichment adds new dimensions to the time series, enabling analysis that would otherwise be impossible.
- Aggregate raw events over time windows, to reduce frequency. For example, if a vehicle emits an engine temperature measurement every second, you can emit a single sample with the average over 5 seconds. Prometheus can efficiently aggregate frequent samples on read. However, ingesting data at a frequency much higher than what is useful for dashboarding and real-time alerting is not an efficient use of Prometheus ingestion throughput and storage.
- Aggregate raw events over dimensions, to reduce cardinality. For example, aggregating some measurement per vehicle model.
- Calculate derived metrics applying arbitrary logic. For example, counting the number of warning events emitted by each vehicle. This also enables analysis that would otherwise be impossible using only Prometheus and Grafana.
- Support event-time semantics, to aggregate events from different sources over time.
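The following is a minimal sketch of some of these operations using the Flink DataStream API: events already enriched with the vehicle model are aggregated per model over 5-second event-time windows. EnrichedEvent, ModelAverageRpm, and the metric logic are hypothetical stand-ins, not the code from the example repository discussed later in this post.

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class PreprocessingSketch {

    /** Hypothetical input: a raw event already enriched with the vehicle model. */
    public record EnrichedEvent(String model, String region, double engineRpm, long timestampMillis) {}

    /** Hypothetical output: one aggregated sample per vehicle model and window. */
    public record ModelAverageRpm(String model, double averageRpm, long windowEndMillis) {}

    public static DataStream<ModelAverageRpm> aggregatePerModel(DataStream<EnrichedEvent> enriched) {
        return enriched
                // Event-time semantics: timestamps come from the payload,
                // tolerating a few seconds of out-of-order arrival.
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy.<EnrichedEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                                .withTimestampAssigner((event, ts) -> event.timestampMillis()))
                // Reduce cardinality: aggregate per vehicle model instead of per vehicle.
                .keyBy(EnrichedEvent::model)
                // Reduce frequency: emit one sample per model every 5 seconds.
                .window(TumblingEventTimeWindows.of(Time.seconds(5)))
                .aggregate(new AverageRpm(), new AttachKeyAndWindow());
    }

    /** Incrementally computes the average RPM over the window; the accumulator is {sum, count}. */
    public static class AverageRpm implements AggregateFunction<EnrichedEvent, double[], Double> {
        @Override public double[] createAccumulator() { return new double[] {0.0, 0.0}; }
        @Override public double[] add(EnrichedEvent e, double[] acc) { acc[0] += e.engineRpm(); acc[1]++; return acc; }
        @Override public Double getResult(double[] acc) { return acc[1] == 0 ? 0.0 : acc[0] / acc[1]; }
        @Override public double[] merge(double[] a, double[] b) { return new double[] {a[0] + b[0], a[1] + b[1]}; }
    }

    /** Attaches the key (vehicle model) and the window end time to the aggregated value. */
    public static class AttachKeyAndWindow
            extends ProcessWindowFunction<Double, ModelAverageRpm, String, TimeWindow> {
        @Override
        public void process(String model, Context ctx, Iterable<Double> averages, Collector<ModelAverageRpm> out) {
            out.collect(new ModelAverageRpm(model, averages.iterator().next(), ctx.window().getEnd()));
        }
    }
}
```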
Such a preprocessing engine must also be able to scale to process the large volume of input raw events, and to process data with low latency (typically subsecond or single-digit seconds) to enable real-time dashboards and alerting. To address these requirements, we see many customers using Flink.
Apache Flink meets these requirements. Flink is a framework and distributed stream processing engine designed to perform computations at in-memory speed and at scale. Amazon Managed Service for Apache Flink offers a fully managed, serverless experience, allowing you to run your Flink applications without managing infrastructure or clusters.
Amazon Managed Service for Apache Flink can process the ingested raw events. The resulting metrics, with lower cardinality and frequency, and additional dimensions, can be written to Prometheus for more effective visualization and analysis. The following diagram illustrates this workflow.
Integrating Apache Flink and Prometheus
The new Flink Prometheus connector allows Flink applications to seamlessly write preprocessed time series data to Prometheus. No intermediate component is required, and there is no need to implement a custom integration. The connector is designed to scale, using the ability of Flink to scale horizontally, and optimizing writes to a Prometheus backend using the Remote-Write interface.
Example use case
AnyCompany is a car rental company managing a fleet of hundreds of thousands of hybrid connected vehicles, in multiple regions. Each vehicle continuously transmits measurements from multiple sensors. Each sensor emits a sample every second or more frequently. Vehicles also communicate warning events when something wrong is detected by the onboard computer. The following diagram illustrates the workflow.
AnyCompany is planning to use Amazon Managed Service for Prometheus and Amazon Managed Grafana to visualize vehicle metrics and set up custom alerts.
However, building a real-time dashboard based on the raw data, as transmitted by the vehicles, might be challenging and inefficient. Each vehicle might have hundreds of sensors, each of them resulting in a separate time series to display. Furthermore, AnyCompany wants to monitor the behavior of different vehicle models. Unfortunately, the events transmitted by the vehicles only contain the VIN. The model can be inferred by looking up (joining) some reference data.
To overcome these challenges, AnyCompany has built a preprocessing stage based on Amazon Managed Service for Apache Flink. This stage has the following capabilities:
- Enrich the raw data by adding the vehicle model, looking up reference data based on the vehicle identification.
- Reduce the cardinality, aggregating the results per vehicle model, available after the enrichment step.
- Reduce the frequency of the raw metrics to reduce write bandwidth, aggregating over time windows of a few seconds.
- Calculate derived metrics based on multiple raw metrics. For example, determine whether a vehicle is in motion when either the internal combustion engine or the electric motor is rotating (a small sketch of this logic follows the list).
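As a minimal illustration of the last capability, the derived "in motion" metric can be expressed as a simple predicate over the two RPM readings. The following sketch is hypothetical; field and method names are illustrative, not taken from the example repository.

```java
// Hypothetical derived-metric logic: a vehicle is considered "in motion"
// when either the internal combustion engine or the electric motor is rotating.
static boolean isInMotion(double icEngineRpm, double electricMotorRpm) {
    return icEngineRpm > 0 || electricMotorRpm > 0;
}
```

Counting the vehicles for which this predicate is true within each aggregation window yields the derived "vehicles in motion" metric.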
The result of preprocessing is more actionable metrics. A dashboard built on these metrics can, for example, help determine whether the last software update released over-the-air to all vehicles of a particular model in specific regions is causing issues.
Using the Flink Prometheus connector, the preprocessor application can write directly to Amazon Managed Service for Prometheus, without intermediate components.
Nothing prevents you from choosing to also write raw metrics with full cardinality and frequency to Prometheus, allowing you to drill down to the single vehicle. The Flink Prometheus connector is designed to scale by batching and parallelizing writes.
Solution overview
The following GitHub repository contains a fictional end-to-end example covering this use case. The following diagram illustrates the architecture of this example.
The workflow consists of the following steps:
- Vehicles, radio transmission, and ingestion of IoT events have been abstracted away, and replaced by a data generator that produces raw events for 100,000 fictional vehicles. For simplicity, the data generator is itself an Amazon Managed Service for Apache Flink application.
- Raw vehicle events are sent to a stream storage service. In this example, we use Amazon Managed Streaming for Apache Kafka (Amazon MSK).
- The core of the system is the preprocessor application, running in Amazon Managed Service for Apache Flink. We dive deeper into the details of the preprocessor in the following sections.
- Processed metrics are written directly to the Prometheus backend, in Amazon Managed Service for Prometheus.
- Metrics are used to generate real-time dashboards in Amazon Managed Grafana.
The following screenshot shows a sample dashboard.
Raw vehicle events
Each vehicle transmits three metrics almost every second:
- Internal combustion (IC) engine RPM
- Electric motor RPM
- Number of reported warnings
The raw events are identified by the vehicle ID and the region where the vehicle is located.
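For illustration, such a raw event could be modeled with a shape like the following. This is a hypothetical sketch; the actual class used in the example repository may differ.

```java
/**
 * Hypothetical shape of a raw vehicle event as described above: three
 * measurements, identified by the vehicle ID and the region of the vehicle.
 */
public record RawVehicleEvent(
        String vehicleId,        // unique vehicle identifier (VIN)
        String region,           // region where the vehicle is located
        double icEngineRpm,      // internal combustion engine RPM
        double electricMotorRpm, // electric motor RPM
        int reportedWarnings,    // number of reported warnings
        long timestampMillis     // event time, in epoch milliseconds
) {}
```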
Preprocessor application
The following diagram illustrates the logical flow of the preprocessor application running in Amazon Managed Service for Apache Flink.
The workflow consists of the following steps:
- Raw events are ingested from Amazon MSK using the Flink Kafka source.
- An enrichment operator adds the vehicle model, which is not contained in the raw events. This additional dimension is then used to aggregate the raw events. The resulting metrics have only two dimensions: vehicle model and region.
- Raw events are then aggregated over time windows (5 seconds) to reduce frequency. In this example, the aggregation logic also generates a derived metric: the number of vehicles in motion. A new metric can be derived from raw metrics with arbitrary logic. For the sake of the example, a vehicle is considered "in motion" if either the IC engine or electric motor RPM metric is not zero.
- The processed metrics are mapped into the input data structure of the Flink Prometheus connector, which maps directly to the time series data expected by the Prometheus Remote-Write interface. Refer to the connector documentation for more details.
- Finally, the metrics are sent to Prometheus using the Flink Prometheus connector (a sketch of these last two steps follows this list). Write authentication, required by Amazon Managed Service for Prometheus, is seamlessly enabled using the Amazon Managed Service for Prometheus request signer provided with the connector. Credentials are automatically derived from the AWS Identity and Access Management (IAM) role of the Amazon Managed Service for Apache Flink application. No additional secret or credential is required.
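The following is a minimal sketch of the last two steps, based on the builder API described in the connector documentation; exact package and method names should be verified against the connector version you use, and ModelAverageRpm is the hypothetical aggregated type from the earlier preprocessing sketch.

```java
import org.apache.flink.connector.prometheus.sink.PrometheusSink;
import org.apache.flink.connector.prometheus.sink.PrometheusTimeSeries;
import org.apache.flink.connector.prometheus.sink.aws.AmazonManagedPrometheusWriteRequestSigner;
import org.apache.flink.streaming.api.datastream.DataStream;

public class PrometheusSinkSketch {

    public static void writeToPrometheus(DataStream<PreprocessingSketch.ModelAverageRpm> aggregates,
                                         String remoteWriteUrl, String awsRegion) {
        aggregates
                // Map each aggregated metric to the connector's input type, which
                // mirrors the Remote-Write time series model (name, labels, samples).
                .map(agg -> PrometheusTimeSeries.builder()
                        .withMetricName("engine_rpm_avg")      // hypothetical metric name
                        .addLabel("model", agg.model())        // label: vehicle model
                        .addSample(agg.averageRpm(), agg.windowEndMillis())
                        .build())
                // Write to Amazon Managed Service for Prometheus, signing requests
                // with the IAM credentials of the Flink application.
                .sinkTo(PrometheusSink.builder()
                        .setPrometheusRemoteWriteUrl(remoteWriteUrl)
                        .setRequestSigner(new AmazonManagedPrometheusWriteRequestSigner(remoteWriteUrl, awsRegion))
                        .build());
    }
}
```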
In the GitHub repository, you can find step-by-step instructions to set up the working example and create the Grafana dashboard.
Flink Prometheus connector key features
The Flink Prometheus connector allows Flink applications to write processed metrics to Prometheus, using the Remote-Write interface.
The connector is designed to scale write throughput by:
- Parallelizing writes, using Flink's parallelism capability
- Batching multiple samples in a single write request to the Prometheus endpoint
Error handling complies with the Prometheus Remote-Write 1.0 specifications. The specifications are particularly strict about malformed or out-of-order data, which is rejected by Prometheus.
When a malformed or out-of-order write is rejected, the connector discards the offending write request and continues, preferring data freshness over completeness. However, the connector makes data loss observable, emitting WARN log entries and exposing metrics that measure the volume of discarded data. In Amazon Managed Service for Apache Flink, these connector metrics can be automatically exported to Amazon CloudWatch.
Responsibilities of the user
The connector is optimized for efficiency, write throughput, and latency. Validation of incoming data can be particularly expensive in terms of CPU utilization. Furthermore, different Prometheus backend implementations enforce constraints differently. For these reasons, the connector does not validate incoming data before writing to Prometheus.
The user is responsible for making sure that the data sent to the Flink Prometheus connector follows the constraints enforced by the particular Prometheus implementation they are using.
Ordering
Ordering is particularly relevant. Prometheus expects that samples belonging to the same time series (samples with the same metric name and labels) are written in time order. The connector makes sure ordering is not lost when data is partitioned to parallelize writes.
However, the user is responsible for retaining the ordering upstream in the pipeline. To achieve this, the user must carefully design data partitioning across the Flink application and the stream storage. Only partitioning by key must be used, and partitioning keys must compound the metric name and all labels that will be used in Prometheus.
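As a hypothetical sketch of such partitioning, reusing the aggregated type from the earlier sketches, the stream can be keyed on a compound of the metric name and every label value before the mapping and sink stages:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;

public class PartitioningSketch {

    // Partition the stream by a key that compounds the metric name and every
    // label that will become a Prometheus label (here, the vehicle model), so
    // that all samples of the same time series stay in order within a single
    // parallel subtask writing to the sink.
    public static KeyedStream<PreprocessingSketch.ModelAverageRpm, String> partitionByTimeSeries(
            DataStream<PreprocessingSketch.ModelAverageRpm> aggregates) {
        return aggregates.keyBy(agg -> "engine_rpm_avg|" + agg.model());
    }
}
```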
Conclusion
Prometheus is a specialized time series database, designed for building real-time dashboards and alerting. Amazon Managed Service for Prometheus is a fully managed, serverless backend compatible with the Prometheus open source standard. Amazon Managed Grafana allows you to build real-time dashboards, seamlessly interfacing with Amazon Managed Service for Prometheus.
You can use Prometheus for observability use cases beyond compute resources, to observe IoT devices, connected vehicles, media streaming devices, and other highly distributed assets providing telemetry data.
Directly visualizing and analyzing high-cardinality and high-frequency data can be inefficient. Preprocessing raw observability events with Amazon Managed Service for Apache Flink shifts the work left, greatly simplifying the dashboards and alerting you can build on top of Amazon Managed Service for Prometheus.
For more information about running Flink, Prometheus, and Grafana on AWS, see the resources for these services:
For more information about the Flink Prometheus integration, see the Apache Flink documentation.
About the authors
Lorenzo Nicora works as Senior Streaming Solutions Architect at AWS, helping customers across EMEA. He has been building cloud-centered, data-intensive systems for over 25 years, working across industries both through consultancies and product companies. He has used open source technologies extensively and contributed to several projects, including Apache Flink, and is the maintainer of the Flink Prometheus connector.
Francisco Morillo is a Senior Streaming Solutions Architect at AWS. Francisco works with AWS customers, helping them design real-time analytics architectures using AWS services, supporting Amazon MSK and Amazon Managed Service for Apache Flink. He is also a main contributor to the Flink Prometheus connector.