Thursday, September 4, 2025

Deep dive into the Amazon Managed Service for Apache Flink application lifecycle – Part 2

In Part 1 of this series, we discussed the fundamental operations to manage the lifecycle of your Amazon Managed Service for Apache Flink application. If you're using higher-level tools such as AWS CloudFormation or Terraform, the tool executes these operations for you. However, understanding the fundamental operations and what the service does automatically can provide some degree of mechanical sympathy to confidently implement more robust automation.

In the first part of this series, we focused on the happy paths. In an ideal world, failures don't happen, and every change you deploy works perfectly. However, the real world is less predictable. Quoting Werner Vogels, Amazon's CTO: "Everything fails, all the time."

In this post, we explore failure scenarios that can happen during normal operations or when you deploy a change or scale the application, and how to monitor operations to detect and recover when something goes wrong.

The less happy path

A robust automation must be designed to handle failure scenarios, especially during operations. To do that, we need to understand how Apache Flink can deviate from the happy path. Because of the nature of Flink as a stateful stream processing engine, detecting and resolving failure scenarios requires different techniques compared to other long-running applications, such as microservices or short-lived serverless functions (such as AWS Lambda).

Flink's behavior on runtime errors: The fail-and-restart loop

When a Flink job encounters an unexpected error at runtime (an unhandled exception), the normal behavior is to fail, stop the processing, and restart from the latest checkpoint. Checkpoints allow Flink to provide data consistency and no data loss in case of failure. Also, because Flink is designed for stream processing applications, which run continuously, if the error happens again, the default behavior is to keep restarting, hoping the problem is transient and the application will eventually recover normal processing.

In some cases, however, the problem is not transient. For example, when you deploy a code change that contains a bug, causing the job to fail as soon as it starts processing data, or when the expected schema doesn't match the records in the source, causing deserialization or processing errors. The same scenario might also happen if you mistakenly changed a configuration that prevents a connector from reaching the external system. In these cases, the job is stuck in a fail-and-restart loop, indefinitely, or until you actively force-stop it.

When this happens, the Managed Service for Apache Flink application status might be RUNNING, but the underlying Flink job is actually failing and restarting. The AWS Management Console gives you a hint, pointing out that the application might need attention (see the following screenshot).

Application needs attention

In the following sections, we learn how to monitor the application and job status, to automatically react to this situation.

When starting or updating the application goes wrong

To understand the failure modes, let's review what happens automatically when you start the application, or when the application restarts after you issue an UpdateApplication command, as we explored in Part 1 of this series. The following diagram illustrates what happens when an application starts.

Application start process

The workflow consists of the following steps:

  1. Managed Service for Apache Flink provisions a cluster dedicated to your application.
  2. The code and configuration are submitted to the Job Manager node.
  3. The code in the main() method of your application runs, defining the dataflow of your job.
  4. Flink deploys the subtasks that make up your job to the Task Manager nodes.
  5. The job and application statuses change to RUNNING. However, subtasks are only starting to initialize at this point.
  6. Subtasks restore their state, if applicable, and initialize any resources. For example, a Kafka connector's subtask initializes the Kafka client and subscribes to the topic.
  7. When all subtasks are successfully initialized, they change to RUNNING status and the job starts processing data.

To new Flink users, it can be confusing that a RUNNING status doesn't necessarily imply the job is healthy and processing data.

When something goes wrong during the process of starting (or restarting) the application, depending on the phase when the problem arises, you might observe two different types of failure modes:

  • (a) A problem prevents the application code from being deployed – Your application might encounter this failure scenario if the deployment fails as soon as the code and configuration are passed to the Job Manager (step 2 of the process), for example if the application code package is malformed. A typical error is when the JAR is missing a mainClass, or when mainClass points to a class that doesn't exist. This failure mode might also happen if the code of your main() method throws an unhandled exception (step 3). In these cases, the application fails to change to RUNNING, and reverts to READY after the attempt.
  • (b) The application is started, but the job is stuck in a fail-and-restart loop – A problem might occur later in the process, after the application status has changed to RUNNING. For example, after the Flink job has been deployed to the cluster (step 4 of the process), a component may fail to initialize (step 6). This can happen when a connector is misconfigured, or a problem prevents it from connecting to the external system. For example, a Kafka connector might fail to connect to the Kafka cluster because of the connector's misconfiguration or networking issues. Another possible scenario is when the Flink job successfully initializes, but it throws an exception as soon as it starts processing data (step 7). When this happens, Flink reacts to the runtime error and may get stuck in a fail-and-restart loop.

The following diagram illustrates the sequence of application statuses, including the two failure scenarios just described.

Application statuses, with failure scenarios

Troubleshooting

We've examined what can go wrong during operations, in particular when you update a RUNNING application or restart an application after changing its configuration. In this section, we explore how we can act on these failure scenarios.

Roll back a change

When you deploy a change and realize something is not quite right, you typically want to roll back the change and put the application back in working order, until you investigate and fix the problem. Managed Service for Apache Flink provides a graceful way to revert (roll back) a change, also restarting the processing from the point it was stopped before applying the faulty change, providing consistency and no data loss.

In Managed Service for Apache Flink, there are two types of rollbacks:

  • Automatic – During an automatic rollback (also called system rollback), if enabled, the service automatically detects when the application fails to restart after a change, or when the job starts but immediately falls into a fail-and-restart loop. In these situations, the rollback process automatically restores the application configuration version from before the last change was applied and restarts the application from the snapshot taken when the change was deployed. See Improve the resilience of Amazon Managed Service for Apache Flink applications with the system-rollback feature for more details. This feature is disabled by default. You can enable it as part of the application configuration.
  • Manual – A manual rollback API operation is like a system rollback, but it's initiated by the user. If the application is running but you observe something not behaving as expected after applying a change, you can trigger the rollback operation using the RollbackApplication API action or the console. A manual rollback is possible when the application is RUNNING or UPDATING.

Both rollbacks work similarly, restoring the configuration version from before the change and restarting from the snapshot taken before the change. This prevents data loss and brings you back to a version of the application that was working. Also, this uses the code package that was saved at the time you created the previous configuration version (the one you are rolling back to), so there is no inconsistency between code, configuration, and snapshot, even if in the meantime you have replaced or deleted the code package from the Amazon Simple Storage Service (Amazon S3) bucket.
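If you implement your automation on the low-level API actions, a manual rollback is a single call. The following is a minimal AWS CLI sketch, assuming an application named MyApplication; the current application version ID, which RollbackApplication requires, is first retrieved with DescribeApplication:

CURRENT_VERSION=$(aws kinesisanalyticsv2 describe-application \
    --application-name MyApplication \
    | jq -r '.ApplicationDetail.ApplicationVersionId')

aws kinesisanalyticsv2 rollback-application \
    --application-name MyApplication \
    --current-application-version-id "$CURRENT_VERSION"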

Implicit rollback: Update with an older configuration

A third way to roll back a change is to simply update the configuration, bringing it back to what it was before the last change. This creates a new configuration version, and requires the correct version of the code package to be available in the S3 bucket when you issue the UpdateApplication command.

Why is there a third option when the service provides system rollback and the managed RollbackApplication action? Because most high-level infrastructure-as-code (IaC) frameworks, such as Terraform, use this method, explicitly overwriting the configuration. It's important to understand this possibility, even though you'll probably use the managed rollback if you implement your automation based on the low-level actions.
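As an illustration, the following AWS CLI sketch performs an implicit rollback by pointing the application back at the previous code package. The file key my-app-previous.jar is a hypothetical object that must still exist in the S3 bucket; runtime properties or parallelism would be reverted through analogous fields of the same update structure:

CURRENT_VERSION=$(aws kinesisanalyticsv2 describe-application \
    --application-name MyApplication \
    | jq -r '.ApplicationDetail.ApplicationVersionId')

aws kinesisanalyticsv2 update-application \
    --application-name MyApplication \
    --current-application-version-id "$CURRENT_VERSION" \
    --application-configuration-update '{
        "ApplicationCodeConfigurationUpdate": {
            "CodeContentUpdate": {
                "S3ContentLocationUpdate": {
                    "FileKeyUpdate": "my-app-previous.jar"
                }
            }
        }
    }'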

The following are two important caveats to consider for this implicit rollback:

  • You'll typically want to restart the application from the snapshot that was taken before the faulty change was deployed. If the application is currently RUNNING and healthy, this is not the latest snapshot (RESTORE_FROM_LATEST_SNAPSHOT), but rather the previous one. You need to set the restore behavior to RESTORE_FROM_CUSTOM_SNAPSHOT and select the correct snapshot.
  • UpdateApplication only works if the application is RUNNING and healthy, and the job can be gracefully stopped with a snapshot. Conversely, if the application is stuck in a fail-and-restart loop, you have to force-stop it first, change the configuration while the application is READY, and later start the application from the snapshot that was taken before the faulty change was deployed, as sketched after this list.
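The following minimal sketch starts the application from a specific snapshot; the snapshot name is a placeholder for the snapshot taken before the faulty change:

aws kinesisanalyticsv2 start-application \
    --application-name MyApplication \
    --run-configuration '{
        "ApplicationRestoreConfiguration": {
            "ApplicationRestoreType": "RESTORE_FROM_CUSTOM_SNAPSHOT",
            "SnapshotName": "snapshot-before-faulty-change"
        }
    }'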

Force-stop the application

In normal conditions, you stop the application gracefully, with automatic snapshot creation. However, this might not be possible in some scenarios, such as when the Flink job is stuck in a fail-and-restart loop. This can happen, for example, if an external system the job uses stops working, or because the AWS Identity and Access Management (IAM) configuration was erroneously modified, removing permissions required by the job.

When the Flink job gets stuck in a fail-and-restart loop after a faulty change, your first option should be using RollbackApplication, which automatically restores the previous configuration and starts from the correct snapshot. In the rare cases where you can't stop the application gracefully or use RollbackApplication, the last resort is force-stopping the application. Force-stop uses the StopApplication command with Force=true. You can also force-stop the application from the console.
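With the AWS CLI, a force-stop looks like the following (the application name is a placeholder):

aws kinesisanalyticsv2 stop-application \
    --application-name MyApplication \
    --force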

When you force-stop an application, no snapshot is taken (if that were possible, you would have been able to stop gracefully). When you restart the application, you can either skip restoring from a snapshot (SKIP_RESTORE_FROM_SNAPSHOT) or use a snapshot that was previously taken, either on a schedule using Snapshot Manager, or manually, using the console or the CreateApplicationSnapshot API action.

We strongly recommend setting up scheduled snapshots for all production applications that you can't afford to restart without state.

Monitoring Apache Flink application operations

Effective monitoring of your Apache Flink applications during and after operations is essential to verify the outcome of an operation and to allow your lifecycle automation to raise alarms or react, in case something goes wrong.

The main indicators you can use during operations include the fullRestarts metric (available in Amazon CloudWatch) and the application, job, and task statuses.

Monitoring the outcome of an operation

The simplest way to detect the outcome of an operation, such as StartApplication or UpdateApplication, is to use the ListApplicationOperations API command. This command returns a list of the most recent operations of a particular application, including maintenance events that force an application restart.

For example, to retrieve the status of the most recent operation, you can use the following command:

aws kinesisanalyticsv2 list-application-operations \
    --application-name MyApplication \
    | jq '.ApplicationOperationInfoList | sort_by(.StartTime) | last'

The output will be similar to the following code:

{
    "Operation": "UpdateApplication",
    "OperationId": "12abCDeGghIlM",
    "StartTime": "2025-08-06T09:24:22+01:00",
    "EndTime": "2025-08-06T09:26:56+01:00",
    "OperationStatus": "IN_PROGRESS"
}

OperationStatus follows the same logic as the application status reported by the console and by DescribeApplication. This means it might not detect a failure during operator initialization or while the job starts processing data. As we have learned, these failures might put the application in a fail-and-restart loop. To detect these scenarios in your automation, you have to use other methods, which we cover in the rest of this section.

Detecting the fail-and-restart loop using the fullRestarts metric

The simplest way to detect whether the application is stuck in a fail-and-restart loop is using the fullRestarts metric, available in CloudWatch Metrics. This metric counts the number of restarts of the Flink job after you started the application with a StartApplication command or restarted it with UpdateApplication.

In a healthy application, the number of full restarts should ideally be zero. A single full restart might be acceptable during deployment or planned maintenance; multiple restarts usually indicate an issue. We recommend not triggering an alarm on a single restart, or even a couple of consecutive restarts.

The alarm should only be triggered when the application is stuck in a fail-and-restart loop. This implies checking whether multiple restarts have occurred over a relatively short period of time. Deciding the interval is not trivial, because the time the Flink job takes to restart from a checkpoint depends on the size of the application state. However, if the state of your application is smaller than a few GB per KPU, you can safely assume the application should start in less than a minute.

The goal is creating a CloudWatch alarm that triggers when fullRestarts keeps increasing over a time interval sufficient for multiple restarts. For example, assuming your application restarts in less than 1 minute, you can create a CloudWatch alarm based on the DIFF math expression of the fullRestarts metric. The following screenshot shows an example of the alarm details.

CloudWatch Alarm on fullRestarts

This example is a conservative alarm, only triggering if the application keeps restarting for over 5 minutes. This means you detect the problem after at least 5 minutes. You might consider reducing the time to detect the failure earlier. However, be careful not to trigger an alarm after just one or two restarts. Occasional restarts might happen, for example during normal maintenance (patching) managed by the service, or because of a transient error of an external system. Flink is designed to recover from these cases with minimal downtime and no data loss.
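If you prefer defining the alarm programmatically, the following AWS CLI sketch builds a comparable alarm on DIFF(fullRestarts). The alarm name, application name, and the five 1-minute periods are illustrative assumptions you should adapt to your application's restart time:

aws cloudwatch put-metric-alarm \
    --alarm-name msf-fail-and-restart-loop \
    --comparison-operator GreaterThanThreshold \
    --threshold 0 \
    --evaluation-periods 5 \
    --datapoints-to-alarm 5 \
    --metrics '[
      {"Id": "restarts", "ReturnData": false,
       "MetricStat": {
         "Metric": {"Namespace": "AWS/KinesisAnalytics",
                    "MetricName": "fullRestarts",
                    "Dimensions": [{"Name": "Application", "Value": "MyApplication"}]},
         "Period": 60, "Stat": "Maximum"}},
      {"Id": "increase", "Expression": "DIFF(restarts)",
       "Label": "fullRestarts increase per minute", "ReturnData": true}
    ]'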

Detecting whether the job is up and running: Monitoring application, job, and task status

We've discussed how you have different statuses: the status of the application, the job, and the subtasks. In Managed Service for Apache Flink, the application and job statuses change to RUNNING when the subtasks are successfully deployed on the cluster. However, the job is not actually running and processing data until all the subtasks are RUNNING.

Observing the application status during operations

The application status is visible on the console, as shown in the following screenshot.

Screenshot: Application status

In your automation, you can poll the DescribeApplication API action to observe the application status. The following command shows how to use the AWS Command Line Interface (AWS CLI) and the jq command to extract the status string of an application:

aws kinesisanalyticsv2 describe-application \
    --application-name MyApplication \
    | jq -r '.ApplicationDetail.ApplicationStatus'

Observing job and subtask status

Managed Service for Apache Flink gives you access to the Flink Dashboard, which provides useful information for troubleshooting, including the status of all subtasks. The following screenshot, for example, shows a healthy job where all subtasks are RUNNING.

Job and Task status

In the following screenshot, we can see a job where subtasks are failing and restarting.

Job status: failing

In your automation, when you start the application or deploy a change, you want to make sure the job is eventually up and running and processing data. This happens when all the subtasks are RUNNING. Note that waiting for the job status to become RUNNING after an operation is not completely safe: a subtask might still fail and cause the job to restart after it was reported as RUNNING.

After you execute a lifecycle operation, your automation can poll the subtask statuses, waiting for one of two events:

  • All subtasks report RUNNING – This means the operation was successful and your Flink job is up and running.
  • Any subtask reports FAILING or CANCELED – This means something went wrong, and the application is likely stuck in a fail-and-restart loop. You need to intervene, for example, by force-stopping the application and then rolling back the change.

If you're restarting from a snapshot and the state of your application is quite large, you might observe subtasks reporting the INITIALIZING status for longer. During the initialization, Flink restores the state of the operators before changing to RUNNING.

The Flink REST API exposes the state of the subtasks, and can be used in your automation. In Managed Service for Apache Flink, this requires three steps (the first is sketched after this list):

  1. Generate a pre-signed URL to access the Flink REST API using the CreateApplicationPresignedUrl API action.
  2. Make a GET request to the /jobs endpoint of the Flink REST API to retrieve the job ID.
  3. Make a GET request to the /jobs/<job-id> endpoint to retrieve the status of the subtasks.
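The pre-signed URL can be generated with the AWS CLI, as in the following sketch; the application name and session duration are placeholders. Steps 2 and 3 are then plain HTTP GET requests, for example with curl, against the Flink REST API reachable through the returned URL:

aws kinesisanalyticsv2 create-application-presigned-url \
    --application-name MyApplication \
    --url-type FLINK_DASHBOARD_URL \
    --session-expiration-duration-in-seconds 300 \
    | jq -r '.AuthorizedUrl'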

The following GitHub repository provides a shell script to retrieve the status of the tasks of a given Managed Service for Apache Flink application.

Monitoring subtask failures while the job is running

The approach of polling the Flink REST API can be used in your automation, immediately after an operation, to observe whether the operation was eventually successful.

We strongly recommend against continuously polling the Flink REST API while the job is running to detect failures. This operation is resource consuming, and may degrade performance or cause errors.

To watch for suspicious subtask status changes during normal operations, we recommend using CloudWatch Logs instead. The following CloudWatch Logs Insights query extracts all subtask state transitions:

fields @timestamp, message
| parse message /^(?<task>.+) switched from (?<fromStatus>[A-Z]+) to (?<toStatus>[A-Z]+)\./
| filter ispresent(task) and ispresent(fromStatus) and ispresent(toStatus)
| display @timestamp, task, fromStatus, toStatus
| limit 10000

How Managed Service for Apache Flink minimizes processing downtime

We've seen how Flink is designed for strong consistency. To guarantee exactly-once state consistency, Flink temporarily stops the processing to deploy any change, including scaling. This downtime is required for Flink to take a consistent copy of the application state and save it in a savepoint. After the change is deployed, the job is restarted from the savepoint, and there is no data loss. In Managed Service for Apache Flink, updates are fully managed. When snapshots are enabled, UpdateApplication automatically stops the job and uses snapshots (based on Flink's savepoints) to retain the state.

Flink guarantees no data loss. However, your business requirements or Service Level Objectives (SLOs) might also impose a maximum delay for the data received by downstream systems, or end-to-end latency. This delay is affected by the processing downtime, or the time the job doesn't process data to allow Flink to deploy the change.

With Flink, some processing downtime is unavoidable. However, Managed Service for Apache Flink is designed to minimize the processing downtime when you deploy a change.

We've seen how the service runs your application in a dedicated cluster, for full isolation. When you issue UpdateApplication on a RUNNING application, the service prepares a new cluster with the required amount of resources. This operation might take some time. However, this doesn't affect the processing downtime, because the service keeps the job running and processing data on the original cluster until the last possible moment, when the new cluster is ready. At this point, the service stops your job with a savepoint and restarts it on the new cluster.

During this operation, you are only charged for the number of KPUs of a single cluster.

The following diagram illustrates the difference between the duration of the update operation, or the time the application status is UPDATING, and the processing downtime, observable from the job status, visible in the Flink Dashboard.

Downtime

You can observe this process by keeping both the application console and the Flink Dashboard open when you update the configuration of a running application, even with no changes. The Flink Dashboard becomes temporarily unavailable when the service switches to the new cluster. Additionally, you can't use the script we provided to check the job status for this purpose. Even though the cluster keeps serving the Flink Dashboard until it's torn down, the CreateApplicationPresignedUrl action doesn't work while the application is UPDATING.

The processing downtime (the time the job is not running on either cluster) depends on the time the job takes to stop with a savepoint (snapshot) and restore the state in the new cluster. This time largely depends on the size of the application state. Data skew might also affect the savepoint time because of the barrier alignment mechanism. For a deep dive into Flink's barrier alignment mechanism, refer to Optimize checkpointing in your Amazon Managed Service for Apache Flink applications with buffer debloating and unaligned checkpoints, keeping in mind that savepoints are always aligned.

For your automation, you typically want to wait until the job is back up and running and processing data, and you should set a timeout. If both the application and job don't return to RUNNING within this timeout, something probably went wrong and you might want to raise an alarm or force a rollback. This timeout should take into account the total duration of the update operation.
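The following minimal sketch polls DescribeApplication until the application returns to RUNNING or a timeout expires. The 15-minute timeout and application name are illustrative assumptions, and a complete automation would also check the job and subtask statuses as discussed earlier:

APP_NAME=MyApplication
DEADLINE=$(( $(date +%s) + 900 ))  # 15-minute timeout; adjust to your update duration

while true; do
    STATUS=$(aws kinesisanalyticsv2 describe-application \
        --application-name "$APP_NAME" \
        | jq -r '.ApplicationDetail.ApplicationStatus')
    [ "$STATUS" = "RUNNING" ] && break
    if [ "$(date +%s)" -ge "$DEADLINE" ]; then
        echo "Application not RUNNING within the timeout; raise an alarm or roll back" >&2
        exit 1
    fi
    sleep 10
done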

Conclusion

In this post, we discussed possible failure scenarios when you deploy a change or scale your application. We showed how the Managed Service for Apache Flink rollback capabilities can seamlessly bring you back to a safe place after a change goes wrong. We also explored how you can automate monitoring operations to observe the application, job, and subtask statuses, and how to use the fullRestarts metric to detect when the job is in a fail-and-restart loop.

For more information, see Run a Managed Service for Apache Flink application, Implement fault tolerance in Managed Service for Apache Flink, and Manage application backups using snapshots.


About the authors

Lorenzo Nicora

Lorenzo works as Senior Streaming Solutions Architect at AWS, helping customers across EMEA. He has been building cloud-centered, data-intensive systems for over 25 years, working across industries both through consultancies and product companies. He has used open-source technologies extensively and contributed to several projects, including Apache Flink, and is the maintainer of the Flink Prometheus connector.

Felix John

Felix is a Global Solutions Architect and data streaming expert at AWS, based in Germany. He focuses on supporting global automotive and manufacturing customers on their cloud journey. Outside of his professional life, Felix enjoys playing floorball and hiking in the mountains.
