Thursday, June 12, 2025

Centralize Apache Spark observability on Amazon EMR on EKS with external Spark History Server

Monitoring and troubleshooting Apache Spark applications becomes increasingly complex as companies scale their data analytics workloads. As data processing requirements grow, enterprises deploy these applications across multiple Amazon EMR on EKS clusters to handle diverse workloads efficiently. However, this approach creates a challenge in maintaining comprehensive visibility into Spark applications running across these separate clusters. Data engineers and platform teams need a unified view to effectively monitor and optimize their Spark applications.

Although Spark provides powerful built-in monitoring capabilities through Spark History Server (SHS), implementing a scalable and secure observability solution across multiple clusters requires careful architectural considerations. Organizations need a solution that not only consolidates Spark application metrics but extends its features by adding other performance monitoring and troubleshooting packages, while providing secure access to these insights and maintaining operational efficiency.

This post demonstrates how to centralize Apache Spark observability using SHS running on EMR on EKS. We showcase how to enhance SHS with performance monitoring tools, with a pattern applicable to many monitoring solutions such as SparkMeasure and DataFlint. In this post, we use DataFlint as an example to demonstrate how you can integrate additional monitoring solutions. We explain how to collect Spark events from multiple EMR on EKS clusters into a central Amazon Simple Storage Service (Amazon S3) bucket; deploy SHS on a dedicated Amazon Elastic Kubernetes Service (Amazon EKS) cluster; and configure secure access using AWS Load Balancer Controller, AWS Private Certificate Authority, Amazon Route 53, and AWS Client VPN. This solution provides teams with a single, secure interface to monitor, analyze, and troubleshoot Spark applications across multiple clusters.

Overview of solution

Consider DataCorp Analytics, a data-driven enterprise operating multiple business units with diverse Spark workloads. Their Financial Analytics team processes time-sensitive trading data requiring strict processing times and dedicated resources, and their Marketing Analytics team handles customer behavior data with flexible requirements, requiring multiple EMR on EKS clusters to accommodate these distinct workload patterns. As their Spark applications grow in volume and complexity across these clusters, data and platform engineers struggle to maintain comprehensive visibility while preserving secure access to monitoring tools.

This scenario presents an ideal use case for implementing centralized observability using SHS and DataFlint. The solution deploys SHS on a dedicated EKS cluster, configured to read events from multiple EMR on EKS clusters through a centralized S3 bucket. Access is secured through Load Balancer Controller, AWS Private CA, Route 53, and Client VPN, and DataFlint enhances the monitoring capabilities with additional insights and visualizations. The following architecture diagram illustrates the components and their interactions.

Architecture diagram

The solution workflow is as follows:

  1. Spark applications on EMR on EKS use a custom EMR Docker image that includes DataFlint JARs for enhanced metrics collection. These applications generate detailed event logs containing execution metrics, performance data, and DataFlint-specific insights. The logs are written to a centralized Amazon S3 location through the following configuration (note in particular the configurationOverrides section). For more information, explore the StartJobRun guide to learn how to run Spark jobs and review the StartJobRun API reference.
{
  "name": "${SPARK_JOB_NAME}",
  "virtualClusterId": "${VIRTUAL_CLUSTER_ID}",
  "executionRoleArn": "${IAM_ROLE_ARN_FOR_JOB_EXECUTION}",
  "releaseLabel": "emr-7.2.0-latest",
  "jobDriver": {
    "sparkSubmitJobDriver": {
      "entryPoint": "s3://${S3_BUCKET_NAME}/app/${SPARK_APP_FILE}",
      "entryPointArguments": [
        "--input-path",
        "s3://${S3_BUCKET_NAME}/data/input",
        "--output-path",
        "s3://${S3_BUCKET_NAME}/data/output"
      ],
      "sparkSubmitParameters": "--conf spark.driver.cores=1 --conf spark.driver.memory=4G --conf spark.kubernetes.driver.limit.cores=1200m --conf spark.executor.cores=2 --conf spark.executor.instances=3 --conf spark.executor.memory=4G"
    }
  },
  "configurationOverrides": {
    "applicationConfiguration": [
      {
        "classification": "spark-defaults",
        "properties": {
          "spark.driver.memory": "2G",
          "spark.kubernetes.container.image": "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${EMR_REPO_NAME}:${EMR_IMAGE_TAG}",
          "spark.app.name": "${SPARK_JOB_NAME}",
          "spark.eventLog.enabled": "true",
          "spark.eventLog.dir": "s3://${S3_BUCKET_NAME}/spark-events/"
        }
      }
    ],
    "monitoringConfiguration": {
      "persistentAppUI": "ENABLED",
      "s3MonitoringConfiguration": {
        "logUri": "s3://${S3_BUCKET_NAME}/spark-events/"
      }
    }
  }
}

  2. A dedicated SHS deployed on Amazon EKS reads these centralized logs. SHS is configured to read from the central Amazon S3 location through the following code:
env:
  - name: SPARK_HISTORY_OPTS
    value: "-Dspark.history.fs.logDirectory=s3a://${S3_BUCKET}/spark-events/"

  3. We configure Load Balancer Controller, AWS Private CA, a Route 53 hosted zone, and Client VPN to securely access the SHS UI using a web browser.
  4. Finally, users can access the SHS web interface at https://spark-history-server.example.internal/.

You can find the code base in the AWS Samples GitHub repository.

Prerequisites

Before you deploy this solution, make sure that the following prerequisites are in place:

Set up the common infrastructure

Complete the following steps to set up the infrastructure:

  1. Clone the repository to your local machine and set the two environment variables. Set AWS_REGION to the AWS Region where you want to deploy these resources.
git clone git@github.com:aws-samples/sample-centralized-spark-history-server-emr-on-eks.git
cd sample-centralized-spark-history-server-emr-on-eks
export REPO_DIR=$(pwd)
export AWS_REGION=

  2. Execute the following script to create the common infrastructure. The script creates a secure virtual private cloud (VPC) networking environment with public and private subnets and an encrypted S3 bucket to store Spark application logs.
cd ${REPO_DIR}/infra
./deploy_infra.sh

  3. To verify successful infrastructure deployment, open the AWS CloudFormation console, choose your stack, and check the Events, Resources, and Outputs tabs for completion status, details, and the list of resources created.

Set up EMR on EKS clusters

This section covers building a custom EMR on EKS Docker image with DataFlint integration, launching two EMR on EKS clusters (datascience-cluster-v and analytics-cluster-v), and configuring the clusters for job submission. Additionally, we set up the required IAM roles for service accounts (IRSA) to enable Spark jobs to write events to the centralized S3 bucket. Complete the following steps:
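As an illustration of the IRSA permissions involved, the job execution role's IAM policy must at minimum allow writing event logs to the centralized bucket. The following is a hedged sketch only — the bucket name is a placeholder, and the repository's scripts may grant a broader or differently scoped set of permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteSparkEventLogs",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::EXAMPLE-emr-spark-logs",
        "arn:aws:s3:::EXAMPLE-emr-spark-logs/spark-events/*"
      ]
    }
  ]
}
```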

  1. Deploy two EMR on EKS clusters:
cd ${REPO_DIR}/emr-on-eks
./deploy_emr_on_eks.sh

  2. To verify successful creation of the EMR on EKS clusters using the AWS CLI, execute the following command:
aws emr-containers list-virtual-clusters \
    --query "virtualClusters[?state=='RUNNING']"

  3. Execute the following command for the datascience-cluster-v and analytics-cluster-v clusters to verify their respective states, container provider information, and associated EKS cluster details. Replace with the ID of each cluster obtained from the list-virtual-clusters output.
aws emr-containers describe-virtual-cluster \
    --id 

Configure and execute Spark jobs on EMR on EKS clusters

Complete the following steps to configure and execute Spark jobs on the EMR on EKS clusters:

  1. Generate the custom EMR on EKS image and StartJobRun request JSON files to run Spark jobs:
cd ${REPO_DIR}/jobs
./configure_jobs.sh

The script performs the following tasks:

  • Prepares the environment by uploading the sample Spark application spark_history_demo.py to a designated S3 bucket for job execution.
  • Creates a custom Amazon EMR container image by extending the base EMR 7.2.0 image with the DataFlint JAR for additional insights, and publishes it to an Amazon Elastic Container Registry (Amazon ECR) repository.
  • Generates cluster-specific StartJobRun request JSON files for datascience-cluster-v and analytics-cluster-v.

Review start-job-run-request-datascience-cluster-v.json and start-job-run-request-analytics-cluster-v.json for more details.
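For reference, the custom image build follows the standard EMR on EKS image customization pattern: switch to root, place the JAR in Spark's jars directory, and switch back to the hadoop user. This Dockerfile is an illustrative sketch — the base image build argument and JAR file name are placeholders rather than the repository's exact values:

```dockerfile
# Sketch only: EMR_BASE_IMAGE_URI and the DataFlint JAR name are placeholders.
ARG EMR_BASE_IMAGE_URI
FROM ${EMR_BASE_IMAGE_URI}

USER root
# Put the DataFlint JAR on Spark's classpath so it is available to each job.
COPY dataflint-spark.jar /usr/lib/spark/jars/
# EMR on EKS expects the image to run as the hadoop user.
USER hadoop:hadoop
```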

  2. Execute the following commands to submit Spark jobs to the EMR on EKS virtual clusters:
aws emr-containers start-job-run \
    --cli-input-json file://${REPO_DIR}/jobs/start-job-run/start-job-run-request-datascience-cluster-v.json
aws emr-containers start-job-run \
    --cli-input-json file://${REPO_DIR}/jobs/start-job-run/start-job-run-request-analytics-cluster-v.json

  3. Verify the successful generation of the logs in the S3 bucket:

aws s3 ls s3://emr-spark-logs--/spark-events/

You have successfully set up an EMR on EKS environment, executed Spark jobs, and collected the logs in the centralized S3 bucket. Next, we will deploy SHS, configure its secure access, and use it to visualize the logs.

Set up AWS Private CA and create a Route 53 private hosted zone

Use the following code to deploy AWS Private CA and create a Route 53 private hosted zone. This will provide a user-friendly URL to connect to SHS over HTTPS.

cd ${REPO_DIR}/ssl
./deploy_ssl.sh

Set up SHS on Amazon EKS

Complete the following steps to build a Docker image containing SHS with DataFlint, deploy it on an EKS cluster using a Helm chart, and expose it through a Kubernetes service of type LoadBalancer. We use a Spark 3.5.0 base image, which includes SHS by default. Although this simplifies deployment, it results in a larger image size. For environments where image size is critical, consider building a custom image with just the standalone SHS component instead of using the entire Spark distribution.
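The LoadBalancer service relies on AWS Load Balancer Controller annotations to provision an internal Network Load Balancer. The following manifest is an illustrative sketch under assumed names — the labels and exact annotation set in the repository's Helm chart may differ:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: spark-history-server        # illustrative name
  namespace: spark-history
  annotations:
    # Handled by AWS Load Balancer Controller: internal NLB with IP targets
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: spark-history-server       # must match the SHS pod labels
  ports:
    - port: 443
      targetPort: 18080             # default Spark History Server UI port
```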

  1. Deploy SHS on the spark-history-server EKS cluster:
cd ${REPO_DIR}/shs
./deploy_shs.sh

  2. Verify the deployment by listing the pods and viewing the pod logs:
kubectl get pods --namespace spark-history
kubectl logs  --namespace spark-history

  3. Review the logs and confirm there are no errors or exceptions.

You have successfully deployed SHS on the spark-history-server EKS cluster and configured it to read logs from the emr-spark-logs-- S3 bucket.

Deploy Client VPN and add records to Route 53 for secure access

Complete the following steps to deploy Client VPN to securely connect your client machine (such as your laptop) to SHS and configure Route 53 to generate a user-friendly URL:

  1. Deploy the Client VPN:
cd ${REPO_DIR}/vpn
./deploy_vpn.sh

  2. Add records to Route 53:
cd ${REPO_DIR}/dns
./deploy_dns.sh
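Conceptually, the DNS step maps the friendly host name to the load balancer's DNS name inside the private hosted zone. A change batch for the aws route53 change-resource-record-sets command might look like the following sketch (the record type, TTL, and load balancer DNS name are illustrative placeholders, not the script's exact output):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "spark-history-server.example.internal",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "PLACEHOLDER-nlb.elb.amazonaws.com" }
        ]
      }
    }
  ]
}
```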

Add certificates to local trusted stores

Complete the following steps to add the SSL certificate to your operating system's trusted certificate stores for secure connections:

  1. For macOS users, using Keychain Access (GUI):
    1. Open Keychain Access from Applications, Utilities, choose the System keychain in the navigation pane, and choose File, Import Items.
    2. Browse to and choose ${REPO_DIR}/ssl/certificates/ca-certificate.pem, then choose the imported certificate.
    3. Expand the Trust section and set When using this certificate to Always Trust.
    4. Close, enter your password when prompted, and save.
    5. Alternatively, you can execute the following command to add the certificate to Keychain and trust it:
sudo safety add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain "${REPO_DIR}/ssl/certificates/ca-certificate.pem"

  2. For Windows users:
    1. Rename ca-certificate.pem to ca-certificate.crt.
    2. Right-click ca-certificate.crt and choose Install Certificate.
    3. Choose Local Machine (admin rights required).
    4. Select Place all certificates in the following store.
    5. Choose Browse and select Trusted Root Certification Authorities.
    6. Complete the installation by choosing Next and Finish.

Set up Client VPN on your client machine for secure access

Complete the following steps to install and configure Client VPN on your client machine (such as your laptop) and create a VPN connection to the AWS Cloud:

  1. Download, install, and launch the Client VPN application from the official download page for your operating system.
  2. Create your VPN profile:
    1. Choose File on the menu bar, choose Manage Profiles, and choose Add Profile.
    2. Enter a name for your profile. Example: SparkHistoryServerUI
    3. Browse to ${REPO_DIR}/vpn/client_vpn_certs/client-config.ovpn, choose the certificate file, and choose Add Profile to save your configuration.
  3. Select your newly created profile, choose Connect, and wait for the connection confirmation to establish the VPN connection.

When you're connected, you'll have secure access to the AWS resources in your environment.

VPN connection details

Securely access the SHS URL

Complete the following steps to securely access SHS using a web browser:

  1. The SHS URL created by the DNS setup is as follows:

https://spark-history-server.example.internal/

  2. Copy this URL and enter it into your web browser to access the SHS UI.

The following screenshot shows an example of the UI.

Spark History Server job summary page

  3. Choose an App ID to view its detailed execution information and metrics.

Spark History Server job detail page

  4. Choose the DataFlint tab to view detailed application insights and analytics.

DataFlint insights page

DataFlint displays various useful metrics, including alerts, as shown in the following screenshot.

DataFlint alerts page

Clean up

To avoid incurring future charges from the resources created in this tutorial, clean up your environment after completing the steps. To remove all provisioned resources:

  1. Disconnect from the Client VPN.
  2. Run the cleanup.sh script:
cd ${REPO_DIR}/
./cleanup.sh

Conclusion

In this post, we demonstrated how to build centralized observability for Spark applications using SHS and enhance SHS with performance monitoring tools like DataFlint. The solution aggregates Spark events from multiple EMR on EKS clusters into a unified monitoring interface, providing comprehensive visibility into your Spark applications' performance and resource utilization. By using a custom EMR image with performance monitoring tool integration, we enhanced the standard Spark metrics to gain deeper insights into application behavior. If your environment uses a combination of EMR on EKS, Amazon EMR on EC2, or Amazon EMR Serverless, you can seamlessly extend this architecture to aggregate the logs from EMR on EC2 and EMR Serverless in a similar way and visualize them using SHS.

Although this solution provides a strong foundation for Spark monitoring, production deployments should consider implementing authentication and authorization. SHS supports custom authentication through javax servlet filters and fine-grained authorization through access control lists (ACLs). We encourage you to explore implementing authentication filters for secure access control, configuring user- and group-based ACLs for view and modify permissions, and setting up group mapping providers for role-based access. For detailed guidance, refer to Spark's web UI security documentation and SHS security features.
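As a starting point, the following spark-defaults fragment sketches what enabling history server ACLs with a servlet filter could look like. The ACL property names are standard Spark security settings, but the filter class and principals here are hypothetical — you must supply your own filter implementation and identities:

```properties
# Enforce view ACLs in the history server
spark.history.ui.acls.enable    true
# Users/groups allowed to view all applications (illustrative principals)
spark.history.ui.admin.acls     platform-admins
# Hypothetical authentication filter class implementing javax.servlet.Filter
spark.ui.filters                com.example.auth.SsoAuthFilter
# Filter init parameters follow the spark.<FilterClass>.param.<name> pattern
spark.com.example.auth.SsoAuthFilter.param.realm    corp
```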

While AWS endeavors to apply best practices for security within this example, each organization has its own policies. Please make sure to apply your organization's specific policies when deploying this solution as a starting point for implementing centralized Spark monitoring in your data processing environment.


About the authors

Sri Potluri is a Cloud Infrastructure Architect at AWS. He is passionate about solving complex problems and delivering well-structured solutions for diverse customers. His expertise spans a range of cloud technologies, providing scalable and reliable infrastructures tailored to each project's unique challenges.

Suvojit Dasgupta is a Principal Data Architect at AWS. He leads a team of skilled engineers in designing and building scalable data solutions for AWS customers. He specializes in creating and implementing innovative data architectures to address complex business challenges.
