Saturday, October 11, 2025

Building a real-time ICU patient analytics pipeline with AWS Lambda event source mapping

In hospital intensive care units (ICUs), continuous patient monitoring is critical. Medical devices generate vast amounts of real-time data on vital signs such as heart rate, blood pressure, and oxygen saturation. The key challenge lies in early detection of patient deterioration through vital sign trending. Healthcare teams must process thousands of data points per patient each day to identify concerning patterns, a task crucial for timely intervention and potentially life-saving care.

AWS Lambda event source mapping can help in this scenario by automatically polling data streams and triggering functions in real time without additional infrastructure management. By using AWS Lambda for real-time processing of sensor data and storing aggregated results in Iceberg tables (a table format designed for large analytic datasets) in Amazon Simple Storage Service (Amazon S3) buckets, medical teams can achieve both immediate alerting capabilities and long-term analytical insights, improving their ability to provide timely and effective care.

In this post, we demonstrate how to build a serverless architecture that processes real-time ICU patient monitoring data using Lambda event source mapping for immediate alert generation and data aggregation, followed by persistent storage in Amazon S3 with an Iceberg catalog for comprehensive healthcare analytics. The solution shows how to handle high-frequency vital sign data, implement critical threshold monitoring, and create a scalable analytics platform that can grow with your healthcare organization's needs and help reduce sensor alert fatigue in the ICU.

Architecture

The following architecture diagram illustrates a real-time ICU patient analytics system.

Architecture diagram

In this architecture, real-time patient monitoring data from hospital ICU sensors is ingested into AWS IoT Core, which then streams the data into Amazon Kinesis Data Streams. Two Lambda functions consume this streaming data concurrently for different purposes, both using the Lambda event source mapping integration with Kinesis Data Streams. The first Lambda function uses the filtering feature of event source mapping to detect critical health events where SpO2 (blood oxygen saturation) levels fall below 90%, immediately triggering notifications to caregivers through Amazon Simple Notification Service (Amazon SNS) for rapid response. The second Lambda function uses the tumbling window feature of event source mapping to aggregate sensor data over 10-minute intervals. This aggregated data is then systematically stored in S3 buckets in Apache Iceberg format for historical analysis and reporting. The entire pipeline operates in a serverless manner, providing scalable, real-time processing of critical healthcare data while maintaining both immediate alerting capabilities and long-term data storage for analytics.

Amazon S3, with its support for the Apache Iceberg table format, enables healthcare organizations to efficiently store and query large volumes of time-series patient data. This solution allows for complex analytical queries across historical patient data while maintaining high performance and cost efficiency.

Prerequisites

To implement the solution provided in this post, you should have the following:

  • An active AWS account
  • IAM permissions to deploy CloudFormation templates and provision AWS resources
  • Python installed on your machine to run the ICU patient sensor data simulator code

Deploy a real-time ICU patient analytics pipeline using CloudFormation

You use AWS CloudFormation templates to create the resources for a real-time data analytics pipeline.

  1. To get started, sign in to the console as an account user and select the appropriate Region.
  2. Download and launch the CloudFormation template in the Region where you want to host the Lambda functions.
  3. Choose Next.
  4. On the Specify stack details page, enter a Stack name (for example, IoTHealthMonitoring).
  5. For Parameters, enter the following:
    1. IoTTopic: Enter the MQTT topic for your IoT devices (for example, icu/sensors).
    2. EmailAddress: Enter an email address for receiving notifications.
  6. Choose Next, then Submit, and wait for the stack creation to complete. This process might take 5-10 minutes.
  7. After the CloudFormation stack completes, it creates the following resources:
    1. An AWS IoT Core rule that captures data from the specified IoTTopic topic and routes it to the Kinesis data stream.
    2. A Kinesis data stream for ingesting IoT sensor data.
    3. Two Lambda functions:
      • FilterSensorData: Monitors critical health metrics and sends alerts.
      • AggregateSensorData: Aggregates sensor data in 10-minute windows.
    4. An Amazon DynamoDB table (NotificationTimestamps) to store notification timestamps for rate limiting alerts.
    5. An Amazon SNS topic and subscription to send email notifications for critical patient conditions.
    6. An Amazon Data Firehose delivery stream to deliver processed data to Amazon S3 in Iceberg format.
    7. Amazon S3 buckets to store sensor data.
    8. Amazon Athena and AWS Glue resources for the database and an Iceberg table for querying aggregated data.
    9. AWS Identity and Access Management (IAM) roles and policies that grant the required permissions for the AWS IoT rule, Lambda functions, and Data Firehose stream.
    10. Amazon CloudWatch log groups to record Data Firehose and Lambda function activity.
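If you prefer to script the deployment instead of using the console, the following minimal boto3 sketch creates the same stack. The template file name, stack name, and parameter values here are placeholder assumptions; substitute your own.

import boto3

cfn = boto3.client('cloudformation', region_name='us-west-2')

# Assumed local path to the downloaded template
with open('icu-health-monitoring.yaml') as f:
    template_body = f.read()

cfn.create_stack(
    StackName='IoTHealthMonitoring',
    TemplateBody=template_body,
    Parameters=[
        {'ParameterKey': 'IoTTopic', 'ParameterValue': 'icu/sensors'},
        {'ParameterKey': 'EmailAddress', 'ParameterValue': 'oncall@example.com'},
    ],
    # The stack creates IAM roles and policies
    Capabilities=['CAPABILITY_IAM', 'CAPABILITY_NAMED_IAM'],
)

# Block until creation finishes (typically 5-10 minutes)
cfn.get_waiter('stack_create_complete').wait(StackName='IoTHealthMonitoring')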

Solution walkthrough

Now that you've deployed the solution, let's review a functional walkthrough. First, simulate patient vital sign data and send it to AWS IoT Core using the following Python code on your local machine. To run this code successfully, make sure you have the necessary IAM permissions to publish messages to the IoT topic in the AWS account where the solution is deployed.

import boto3
import json
import random
import time

# AWS IoT Data Plane client
iot_data_client = boto3.client(
    'iot-data',
    region_name="us-west-2"
)

# IoT topic to publish to
topic = "icu/sensors"

# Fixed set of patient IDs
patient_ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print("Infinite sensor data simulation...")
try:
    while True:
        for patient_id in patient_ids:
            # Generate a sensor reading with vitals in normal ranges
            message = {
                "patient_id": patient_id,
                "timestamp": int(time.time()),
                "spo2": random.randint(91, 99),
                "heart_rate": random.randint(60, 100),
                "temperature_f": round(random.uniform(97.0, 100.0), 1)
            }
            # Publish the reading to the IoT topic
            response = iot_data_client.publish(
                topic=topic,
                qos=1,
                payload=json.dumps(message)
            )
            print(f"Published: {message}")
        # Wait 30 seconds before the next round
        print("Sleeping for 30 seconds...\n")
        time.sleep(30)
except KeyboardInterrupt:
    print("\nSimulation stopped by user.")

The following is the format of a sample ICU sensor message produced by the simulator.

{     "patient_id": 1,     "timestamp": 1683000000,     "spo2": 85,     "heart_rate": 75,     "temperature_f": 98.6 }

Data is published to the icu/sensors IoT topic every 30 seconds for 10 different patients, creating a continuous stream of ICU patient monitoring data. Messages published to AWS IoT Core are passed to Kinesis Data Streams by a message routing rule deployed by our solution, as sketched below.
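The deployed rule is roughly equivalent to the following boto3 sketch. This is shown for illustration only; the rule, stream, and role names are hypothetical placeholders, and the CloudFormation template already creates the real rule for you.

import boto3

iot_client = boto3.client('iot', region_name='us-west-2')

iot_client.create_topic_rule(
    ruleName='IcuSensorsToKinesis',  # hypothetical name
    topicRulePayload={
        # Forward every message published to the icu/sensors topic
        'sql': "SELECT * FROM 'icu/sensors'",
        'awsIotSqlVersion': '2016-03-23',
        'actions': [{
            'kinesis': {
                'streamName': 'icu-sensor-stream',
                'roleArn': 'arn:aws:iam::123456789012:role/IotToKinesisRole',
                # Partition by patient so each patient's readings stay ordered
                'partitionKey': '${patient_id}'
            }
        }]
    }
)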

Two Lambda functions consume data from Kinesis Data Streams concurrently, both using the Lambda event source mapping integration with Kinesis Data Streams.

Event source mapping

Lambda event source mapping automatically triggers Lambda functions in response to data changes from supported event sources like Amazon DynamoDB Streams, Amazon Kinesis Data Streams, Amazon Simple Queue Service (Amazon SQS), Amazon MQ, and Amazon Managed Streaming for Apache Kafka. This serverless integration works by having Lambda poll these sources for new records, which are then processed in configurable batch sizes ranging from 1 to 10,000 records. When new data is detected, Lambda automatically invokes the function synchronously, handling the scaling automatically based on the workload. The service supports at-least-once delivery and provides robust error handling through retry policies and dead-letter queues for failed events. Event source mappings can be fine-tuned through various parameters such as batch windows, maximum record age, and retry attempts, making them highly adaptable to different use cases. This feature is particularly valuable in event-driven architectures, because customers can focus on business logic while AWS manages the complexities of event processing, scaling, and reliability.
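To make this concrete, the following hedged sketch creates an event source mapping with several of the tuning parameters mentioned above. The stream ARN and function name are placeholders; the CloudFormation template in this post already creates the mappings it needs.

import boto3

lambda_client = boto3.client('lambda')

lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream',
    FunctionName='FilterSensorData',
    StartingPosition='LATEST',
    BatchSize=100,                     # 1 to 10,000 records per invocation
    MaximumBatchingWindowInSeconds=5,  # wait up to 5 seconds to fill a batch
    MaximumRecordAgeInSeconds=3600,    # skip records older than 1 hour
    MaximumRetryAttempts=2,            # retries before discarding a failed batch
    BisectBatchOnFunctionError=True    # split failing batches to isolate bad records
)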

Event source mapping uses tumbling windows and filtering to process and analyze data.

Tumbling windows

Tumbling windows in Lambda event processing enable data aggregation in fixed, non-overlapping time intervals, where each event belongs to exactly one window. This is ideal for time-based analytics and periodic reporting. When combined with event source mapping, this approach allows efficient batch processing of events within defined time intervals (for example, 10-minute windows), enabling calculations such as average vital signs or cumulative fluid intake and output while optimizing function invocations and resource utilization.
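For illustration, the window duration can also be set programmatically on an existing mapping. This minimal sketch assumes you already know the mapping's UUID (the value below is a placeholder; list_event_source_mappings returns the real one).

import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_event_source_mapping(
    UUID='91eaeb7e-c976-4d1a-9451-8709db01f137',  # placeholder UUID
    TumblingWindowInSeconds=600  # fixed, non-overlapping 10-minute windows
)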

When you configure an event source mapping between Kinesis Data Streams and a Lambda function, use the Tumbling window duration setting, which appears in the trigger configuration in the Lambda console. The solution you deployed using the CloudFormation template includes the AggregateSensorData Lambda function, which uses a 10-minute tumbling window configuration. Depending on the volume of messages flowing through the Amazon Kinesis stream, the AggregateSensorData function might be invoked multiple times for each 10-minute window, sequentially, with the following attributes in the event supplied to the function.

  • Window start and end: The starting and ending timestamps for the current tumbling window.
  • State: An object containing the state returned from the previous invocation, which is initially empty. The state object can contain up to 1 MB of data.
  • isFinalInvokeForWindow: Indicates whether this is the last invocation for the tumbling window. This occurs only once per window period.
  • isWindowTerminatedEarly: A window ends early only if the state exceeds the maximum allowed size of 1 MB.
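To make these attributes concrete, the following is an illustrative (hand-written, not captured from a live stream) shape of a tumbling-window event; the data field carries the base64-encoded sensor reading, here encoding {"patient_id": 1, ...}.

{
    "eventSourceARN": "arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream",
    "window": {
        "start": "2025-05-29T20:50:00Z",
        "end": "2025-05-29T21:00:00Z"
    },
    "state": {},
    "isFinalInvokeForWindow": false,
    "isWindowTerminatedEarly": false,
    "Records": [
        {
            "kinesis": {
                "partitionKey": "1",
                "data": "eyJwYXRpZW50X2lkIjogMSwgLi4ufQ=="
            }
        }
    ]
}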

In a tumbling window, there is a sequence of Lambda invocations that follows the pattern described after the code snippet:

AggregateSensorData Lambda code snippet:

import base64
import boto3
import json
import os


def handler(event, context):
    # State carried across invocations within the same tumbling window
    state_across_window = event['state']

    # Iterate through each record and decode the base64 data
    for record in event['Records']:
        encoded_data = record['kinesis']['data']
        partition_key = record['kinesis']['partitionKey']
        decoded_bytes = base64.b64decode(encoded_data)
        decoded_str = decoded_bytes.decode('utf-8')
        decoded_json = json.loads(decoded_str)

        # Create the partition_key entry if it doesn't exist in state yet
        if partition_key not in state_across_window:
            state_across_window[partition_key] = {
                "min_spo2": decoded_json['spo2'],
                "max_spo2": decoded_json['spo2'],
                "avg_spo2": decoded_json['spo2'],
                "sum_spo2": decoded_json['spo2'],
                "min_heart_rate": decoded_json['heart_rate'],
                "max_heart_rate": decoded_json['heart_rate'],
                "avg_heart_rate": decoded_json['heart_rate'],
                "sum_heart_rate": decoded_json['heart_rate'],
                "min_temperature_f": decoded_json['temperature_f'],
                "max_temperature_f": decoded_json['temperature_f'],
                "avg_temperature_f": decoded_json['temperature_f'],
                "sum_temperature_f": decoded_json['temperature_f'],
                "record_count": 1
            }
        else:
            current = state_across_window[partition_key]
            # Update the running min/max/sum for each vital sign
            min_spo2 = min(current['min_spo2'], decoded_json['spo2'])
            max_spo2 = max(current['max_spo2'], decoded_json['spo2'])
            sum_spo2 = current['sum_spo2'] + decoded_json['spo2']

            min_heart_rate = min(current['min_heart_rate'], decoded_json['heart_rate'])
            max_heart_rate = max(current['max_heart_rate'], decoded_json['heart_rate'])
            sum_heart_rate = current['sum_heart_rate'] + decoded_json['heart_rate']

            min_temperature_f = min(current['min_temperature_f'], decoded_json['temperature_f'])
            max_temperature_f = max(current['max_temperature_f'], decoded_json['temperature_f'])
            sum_temperature_f = current['sum_temperature_f'] + decoded_json['temperature_f']

            record_count = current['record_count'] + 1
            avg_spo2 = sum_spo2 / record_count
            avg_heart_rate = sum_heart_rate / record_count
            avg_temperature_f = sum_temperature_f / record_count

            state_across_window[partition_key] = {
                "min_spo2": min_spo2, "max_spo2": max_spo2,
                "avg_spo2": avg_spo2, "sum_spo2": sum_spo2,
                "min_heart_rate": min_heart_rate, "max_heart_rate": max_heart_rate,
                "avg_heart_rate": avg_heart_rate, "sum_heart_rate": sum_heart_rate,
                "min_temperature_f": min_temperature_f, "max_temperature_f": max_temperature_f,
                "avg_temperature_f": avg_temperature_f, "sum_temperature_f": sum_temperature_f,
                "record_count": record_count
            }

    # Determine if this is the final invocation for the window (window end)
    is_final_window = event.get('isFinalInvokeForWindow', False)
    # Determine if the window was terminated early (state exceeded 1 MB)
    is_terminated_window = event.get('isWindowTerminatedEarly', False)
    window_start = event['window']['start']
    window_end = event['window']['end']

    if is_final_window or is_terminated_window:
        # Flush the per-patient aggregates to the Firehose delivery stream
        firehose_client = boto3.client('firehose')
        firehose_stream = os.environ['FIREHOSE_STREAM_NAME']
        for key, value in state_across_window.items():
            value['patient_id'] = key
            value['window_start'] = window_start
            value['window_end'] = window_end
            firehose_client.put_record(
                DeliveryStreamName=firehose_stream,
                Record={'Data': json.dumps(value)}
            )
        return {
            "state": {},
            "batchItemFailures": []
        }
    else:
        print(f"interim call for window: ws: {window_start} we: {window_end}")
        return {
            "state": state_across_window,
            "batchItemFailures": []
        }

  • The first invocation contains an empty state object in the event. The function returns a state object containing custom attributes that are specific to the custom logic in the aggregation.
  • The second invocation contains the state object returned by the first Lambda invocation. This invocation returns an updated state object with new aggregated values. Subsequent invocations follow this same sequence. The following is a sample of the aggregated state, which can be supplied to subsequent Lambda invocations within the same 10-minute tumbling window.
{     "min_spo2": 88,     "max_spo2": 90,     "avg_spo2": 89.2,     "sum_spo2": 625,     "min_heart_rate": 21,     "max_heart_rate": 22,     "avg_heart_rate": 21.1,     "sum_heart_rate": 148,     "min_temperature_f": 90,     "max_temperature_f": 91,     "avg_temperature_f": 90.1,     "sum_temperature_f": 631,     "record_count": 7,     "patient_id": "44",     "window_start": "2025-05-29T20:51:00Z",     "window_end": "2025-05-29T20:52:00Z" }

  • The final invocation in the tumbling window has the isFinalInvokeForWindow flag set to true. It contains the state returned by the most recent Lambda invocation. This invocation is responsible for passing the aggregated state messages to the Data Firehose stream, which delivers the data to the Amazon S3 bucket in Iceberg format.
  • After the aggregated data is delivered to Amazon S3, you can query the data using Athena.
Query: SELECT * FROM "cfdb_<name>"."table_<name>" (substitute the database and table names that the stack created)

The following screenshot shows a sample result of the preceding Athena query.
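You can also run queries programmatically rather than in the Athena console. The following hedged sketch submits an aggregate query with boto3; the database, table, and results-bucket names are placeholders for the resources the stack created.

import boto3

athena = boto3.client('athena')

query = """
SELECT patient_id, window_start, window_end,
       min_spo2, avg_spo2, max_heart_rate
FROM "<database_name>"."<table_name>"
ORDER BY window_start DESC
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': '<database_name>'},
    ResultConfiguration={'OutputLocation': 's3://<athena-results-bucket>/'}
)
# Poll get_query_execution with this ID to retrieve the results location
print(response['QueryExecutionId'])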

Event source mapping with filtering

Lambda event source mapping with filtering optimizes data processing from sources like Amazon Kinesis by applying JSON pattern filtering before function invocation. This is demonstrated in the ICU patient monitoring solution, where the system filters for SpO2 readings from Kinesis Data Streams that are below 90%. Instead of processing all incoming data, the filtering capability selectively processes only critical readings, significantly reducing costs and processing overhead. The solution uses DynamoDB for state management, tracking low SpO2 events through a schema that combines patient ID and timestamp-based keys within defined monitoring windows.

This state-aware implementation balances clinical urgency with operational efficiency by sending immediate Amazon SNS notifications when critical conditions are first detected, while implementing a 15-minute alert suppression window to prevent alert fatigue among healthcare providers. By maintaining state across multiple Lambda invocations, the system helps ensure rapid response to potentially life-threatening situations while minimizing unnecessary notifications for the same patient condition. The combination of Lambda event filtering, DynamoDB state management, and reliable alert delivery through Amazon SNS creates a robust, scalable healthcare monitoring solution that shows how AWS services can be strategically combined to address complex requirements while balancing technical efficiency with clinical effectiveness.
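The filter itself is expressed as a JSON pattern on the event source mapping. The following sketch shows what an equivalent configuration could look like with boto3; the stream ARN is a placeholder, and the deployed stack already attaches this filter for you. For Kinesis sources, Lambda applies the pattern to the decoded JSON payload under the data key.

import boto3
import json

lambda_client = boto3.client('lambda')

# Match only records whose decoded payload has spo2 below 90
spo2_filter = {"data": {"spo2": [{"numeric": ["<", 90]}]}}

lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream',
    FunctionName='FilterSensorData',
    StartingPosition='LATEST',
    FilterCriteria={'Filters': [{'Pattern': json.dumps(spo2_filter)}]}
)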

FilterSensorData Lambda code snippet:

import base64
import boto3
import os
import time

sns_client = boto3.client('sns')
dynamodb = boto3.resource('dynamodb')
table_name = os.environ['DYNAMODB_TABLE']
sns_topic_arn = os.environ['SNS_TOPIC_ARN']
table = dynamodb.Table(table_name)
FIFTEEN_MINUTES = 15 * 60  # 15 minutes in seconds


def handler(event, context):
    for record in event['Records']:
        print(f"Filtered event: {record}")
        encoded_data = record['kinesis']['data']
        partition_key = record['kinesis']['partitionKey']
        decoded_bytes = base64.b64decode(encoded_data)
        decoded_str = decoded_bytes.decode('utf-8')
        # Check the last notification timestamp from DynamoDB
        try:
            response = table.get_item(Key={'partition_key': partition_key})
            item = response.get('Item')
            now = int(time.time())
            if item:
                last_sent = item.get('timestamp', 0)
                if now - last_sent < FIFTEEN_MINUTES:
                    # Still inside the 15-minute suppression window; skip this
                    # alert to avoid alert fatigue. (The original snippet was
                    # truncated here; the rest is reconstructed from the
                    # behavior described in this post.)
                    continue
            # Outside the suppression window: notify caregivers and record
            # the notification time for rate limiting
            sns_client.publish(
                TopicArn=sns_topic_arn,
                Subject=f"Critical SpO2 alert for patient {partition_key}",
                Message=decoded_str
            )
            table.put_item(Item={'partition_key': partition_key, 'timestamp': now})
        except Exception as e:
            print(f"Error processing record for patient {partition_key}: {e}")

To generate an alert notification through the deployed solution, update the preceding simulator code by setting the SpO2 value to less than 90 (see the snippet that follows) and run it again. Within 1 minute, you should receive an alert notification at the email address you provided during stack creation, similar to the example alert notification shown in the image for this post.
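For example, replace the message dictionary in the simulator so every reading falls below the alert threshold:

# Generate a sensor reading that is guaranteed to trip the SpO2 filter
message = {
    "patient_id": patient_id,
    "timestamp": int(time.time()),
    "spo2": random.randint(80, 89),  # below the 90% critical threshold
    "heart_rate": random.randint(60, 100),
    "temperature_f": round(random.uniform(97.0, 100.0), 1)
}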

Clean up

To avoid ongoing costs after completing this tutorial, delete the CloudFormation stack that you deployed earlier in this post. This removes most of the AWS resources created for this solution. You might need to manually delete objects created in Amazon S3, because CloudFormation won't remove non-empty buckets during stack deletion.

Conclusion

As demonstrated in this post, you can build a serverless real-time analytics pipeline for healthcare monitoring by using AWS IoT Core, Amazon S3 buckets with Iceberg format, and the Amazon Kinesis Data Streams integration with AWS Lambda event source mapping. This architectural approach eliminates the need for complex code while enabling rapid critical patient care alerts and data aggregation for analysis using Lambda. The solution is particularly valuable for healthcare organizations looking to modernize their patient monitoring systems with real-time capabilities. The architecture can be extended to handle various medical devices and sensor data streams, making it adaptable for different healthcare monitoring scenarios. This post presents one implementation approach, and organizations adopting this solution should ensure the architecture and code meet their specific application performance, security, privacy, and regulatory compliance needs.

If this post helps you or inspires you to solve a problem, we would love to hear about it!


About the authors

Nihar Sheth


Nihar is a Senior Product Manager on the AWS Lambda team at Amazon Web Services. He is passionate about creating intuitive product experiences that solve complex customer problems and enable customers to achieve their business goals.

Pratik Patel


Pratik is a Senior Technical Account Manager and streaming analytics specialist. He works with AWS customers, providing ongoing support and technical guidance to help them plan and build solutions using best practices, and proactively helps keep their AWS environments operationally healthy.

Priyanka Chaudhary


Priyanka is a Senior Solutions Architect at AWS. She specializes in data lake and analytics services and helps many customers in this area. As a Solutions Architect, she plays a crucial role in guiding strategic customers through their cloud journey by designing scalable and secure cloud solutions. Outside of work, she loves spending time with family and friends, watching movies, and traveling.
