Thursday, June 5, 2025

Build a centralized observability platform for Apache Spark on Amazon EMR on EKS using an external Spark History Server

Monitoring and troubleshooting Apache Spark applications becomes increasingly complex as companies scale their data analytics workloads. As data processing requirements grow, enterprises deploy these applications across multiple Amazon EMR on EKS clusters to handle diverse workloads efficiently. However, this approach makes it difficult to maintain comprehensive visibility into Spark applications running across these separate clusters. Data engineers and platform teams need a unified view to effectively monitor and optimize their Spark applications.

Although Spark provides powerful built-in monitoring capabilities through Spark History Server (SHS), implementing a scalable and secure observability solution across multiple clusters requires careful architectural consideration. Organizations need a solution that not only consolidates Spark application metrics but also extends SHS with additional performance monitoring and troubleshooting packages, while providing secure access to these insights and maintaining operational efficiency.

This post demonstrates how to build a centralized observability platform using SHS for Spark applications running on EMR on EKS. We showcase how to enhance SHS with performance monitoring tools, using a pattern that applies to many monitoring solutions such as SparkMeasure and DataFlint. In this post, we use DataFlint as the example to demonstrate how you can integrate additional monitoring solutions. We explain how to collect Spark events from multiple EMR on EKS clusters into a central Amazon Simple Storage Service (Amazon S3) bucket; deploy SHS on a dedicated Amazon Elastic Kubernetes Service (Amazon EKS) cluster; and configure secure access using AWS Load Balancer Controller, AWS Private Certificate Authority, Amazon Route 53, and AWS Client VPN. This solution gives teams a single, secure interface to monitor, analyze, and troubleshoot Spark applications across multiple clusters.

Overview of solution

Consider DataCorp Analytics, a data-driven enterprise running multiple business units with diverse Spark workloads. Their Financial Analytics team processes time-sensitive trading data requiring strict processing times and dedicated resources, and their Marketing Analytics team handles customer behavior data with flexible requirements, calling for multiple EMR on EKS clusters to accommodate these distinct workload patterns. As their Spark applications grow in volume and complexity across these clusters, data and platform engineers struggle to maintain comprehensive visibility while preserving secure access to monitoring tools.

This scenario presents an ideal use case for implementing a centralized observability platform using SHS and DataFlint. The solution deploys SHS on a dedicated EKS cluster, configured to read events from multiple EMR on EKS clusters through a centralized S3 bucket. Access is secured through Load Balancer Controller, AWS Private CA, Route 53, and Client VPN, and DataFlint enhances the monitoring capabilities with additional insights and visualizations. The following architecture diagram illustrates the components and their interactions.

Architecture diagram

The solution workflow is as follows:

  1. Spark applications on EMR on EKS use a custom EMR Docker image that includes the DataFlint JARs for enhanced metrics collection. These applications generate detailed event logs containing execution metrics, performance data, and DataFlint-specific insights. The logs are written to a centralized Amazon S3 location through the following configuration (note in particular the configurationOverrides section). For additional information, explore the StartJobRun guide to learn how to run Spark jobs and review the StartJobRun API reference.
{   "title": "${SPARK_JOB_NAME}",    "virtualClusterId": "${VIRTUAL_CLUSTER_ID}",     "executionRoleArn": "${IAM_ROLE_ARN_FOR_JOB_EXECUTION}",   "releaseLabel": "emr-7.2.0-latest",    "jobDriver": {     "sparkSubmitJobDriver": {       "entryPoint": "s3://${S3_BUCKET_NAME}/app/${SPARK_APP_FILE}",       "entryPointArguments": [         "--input-path",         "s3://${S3_BUCKET_NAME}/data/input",         "--output-path",         "s3://${S3_BUCKET_NAME}/data/output"       ],        "sparkSubmitParameters": "--conf spark.driver.cores=1 --conf spark.driver.reminiscence=4G --conf spark.kubernetes.driver.restrict.cores=1200m --conf spark.executor.cores=2  --conf spark.executor.situations=3  --conf spark.executor.reminiscence=4G"     }   },    "configurationOverrides": {     "applicationConfiguration": [       {         "classification": "spark-defaults",          "properties": {           "spark.driver.memory":"2G",           "spark.kubernetes.container.image": "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${EMR_REPO_NAME}:${EMR_IMAGE_TAG}",           "spark.app.name": "${SPARK_JOB_NAME}"           "spark.eventLog.enabled": "true",           "spark.eventLog.dir": "s3://${S3_BUCKET_NAME}/spark-events/"          }       }     ],      "monitoringConfiguration": {       "persistentAppUI": "ENABLED",       "s3MonitoringConfiguration": {         "logUri": "s3://${S3_BUCKET_NAME}/spark-events/"       }     }   } }

  2. A dedicated SHS deployed on Amazon EKS reads these centralized logs. SHS is configured to read from the central Amazon S3 location through the following code (a sketch of the related S3 credential settings follows the snippet):
env:
  - name: SPARK_HISTORY_OPTS
    value: "-Dspark.history.fs.logDirectory=s3a://${S3_BUCKET}/spark-events/"
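SHS reads the logs over the S3A filesystem, so the history server pod also needs AWS credentials (typically supplied through IRSA on the spark-history-server cluster). Depending on the Hadoop and AWS SDK versions in your image, you may need to point S3A at the web identity credentials provider explicitly; the following is a minimal sketch under that assumption (the provider class ships with the AWS SDK v1 bundle used by hadoop-aws):

env:
  - name: SPARK_HISTORY_OPTS
    value: >-
      -Dspark.history.fs.logDirectory=s3a://${S3_BUCKET}/spark-events/
      -Dspark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider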

  3. We configure Load Balancer Controller, AWS Private CA, a Route 53 hosted zone, and Client VPN to securely access the SHS UI using a web browser.
  4. Finally, users can access the SHS web interface at https://spark-history-server.example.internal/.

You can find the code base in the AWS Samples GitHub repository.

Prerequisites

Before you deploy this solution, make sure the following prerequisites are in place:

Set up the common infrastructure

Complete the following steps to set up the infrastructure:

  1. Clone the repository to your local machine and set the two environment variables. Set AWS_REGION to the AWS Region where you want to deploy these resources.
git clone git@github.com:aws-samples/sample-centralized-spark-history-server-emr-on-eks.git
cd sample-centralized-spark-history-server-emr-on-eks
export REPO_DIR=$(pwd)
export AWS_REGION=

  2. Execute the following script to create the common infrastructure. The script creates a secure virtual private cloud (VPC) networking environment with public and private subnets and an encrypted S3 bucket to store Spark application logs.
cd ${REPO_DIR}/infra
./deploy_infra.sh

  3. To verify successful infrastructure deployment, open the AWS CloudFormation console, choose your stack, and check the Events, Resources, and Outputs tabs for completion status, details, and the list of resources created.
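If you prefer the AWS CLI, a check along the following lines confirms the stack reached a successful status (the stack name is a placeholder; use the name shown in the console or in the script output):

aws cloudformation describe-stacks \
    --stack-name <infrastructure-stack-name> \
    --query "Stacks[0].StackStatus"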

Set up EMR on EKS clusters

This section covers building a custom EMR on EKS Docker image with DataFlint integration, launching two EMR on EKS clusters (datascience-cluster-v and analytics-cluster-v), and configuring the clusters for job submission. Additionally, we set up the required IAM roles for service accounts (IRSA) so that Spark jobs can write events to the centralized S3 bucket; a sketch of the underlying trust-policy update follows the verification steps below. Complete the following steps:

  1. Deploy two EMR on EKS clusters:
cd ${REPO_DIR}/emr-on-eks
./deploy_emr_on_eks.sh

  2. To verify successful creation of the EMR on EKS clusters using the AWS CLI, execute the following command:
aws emr-containers list-virtual-clusters \
    --query "virtualClusters[?state=='RUNNING']"

  3. Execute the following command for the datascience-cluster-v and analytics-cluster-v clusters to verify their respective states, container provider information, and associated EKS cluster details. Substitute the ID of each cluster obtained from the list-virtual-clusters output.
aws emr-containers describe-virtual-cluster \
    --id 
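The deployment script also links the job execution IAM role to the clusters so that Spark pods can assume it through IRSA. Conceptually, this is the standard EMR on EKS trust-policy update shown below; the names are placeholders, and the script's exact commands may differ:

aws emr-containers update-role-trust-policy \
    --cluster-name <eks-cluster-name> \
    --namespace <emr-job-namespace> \
    --role-name <job-execution-role-name>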

Configure and execute Spark jobs on EMR on EKS clusters

Complete the following steps to configure and execute Spark jobs on the EMR on EKS clusters:

  1. Generate the custom EMR on EKS image and the StartJobRun request JSON files to run Spark jobs:
cd ${REPO_DIR}/jobs
./configure_jobs.sh

The script performs the following tasks:

  • Prepares the environment by uploading the sample Spark application spark_history_demo.py to a designated S3 bucket for job execution.
  • Creates a custom Amazon EMR container image by extending the base EMR 7.2.0 image with the DataFlint JAR for additional insights and publishes it to an Amazon Elastic Container Registry (Amazon ECR) repository (a sketch of the underlying build-and-push flow appears after this list).
  • Generates cluster-specific StartJobRun request JSON files for datascience-cluster-v and analytics-cluster-v.

Review start-job-run-request-datascience-cluster-v.json and start-job-run-request-analytics-cluster-v.json for additional details.
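For reference, the image build that configure_jobs.sh performs boils down to a standard ECR build-and-push flow. The following sketch reuses the variables from the StartJobRun request shown earlier and assumes a Dockerfile in the working directory that layers the DataFlint JAR on top of the EMR 7.2.0 base image; it is illustrative rather than the script's exact commands.

aws ecr get-login-password --region ${AWS_REGION} | \
    docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
docker build -t ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${EMR_REPO_NAME}:${EMR_IMAGE_TAG} .
docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${EMR_REPO_NAME}:${EMR_IMAGE_TAG}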

  2. Execute the following commands to submit Spark jobs to the EMR on EKS virtual clusters:
aws emr-containers start-job-run \
    --cli-input-json file://${REPO_DIR}/jobs/start-job-run/start-job-run-request-datascience-cluster-v.json
aws emr-containers start-job-run \
    --cli-input-json file://${REPO_DIR}/jobs/start-job-run/start-job-run-request-analytics-cluster-v.json

  3. Verify that the logs were successfully generated in the S3 bucket:

aws s3 ls s3://emr-spark-logs--/spark-events/
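The jobs can take a few minutes to complete. If the spark-events prefix is still empty, you can check the run status on each virtual cluster first (substitute the ID from the earlier list-virtual-clusters output):

aws emr-containers list-job-runs \
    --virtual-cluster-id <virtual-cluster-id> \
    --query "jobRuns[].{id:id,name:name,state:state}"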

You have successfully set up an EMR on EKS environment, executed Spark jobs, and collected the logs in the centralized S3 bucket. Next, we deploy SHS, configure secure access to it, and use it to visualize the logs.

Set up AWS Private CA and create a Route 53 private hosted zone

Use the following code to deploy AWS Private CA and create a Route 53 private hosted zone. This provides a user-friendly URL to connect to SHS over HTTPS.

cd ${REPO_DIR}/ssl
./deploy_ssl.sh
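To confirm the results, you can list the private CA and the private hosted zone the script created; the checks below assume the example.internal zone that backs the SHS URL used later in this post:

aws acm-pca list-certificate-authorities \
    --query "CertificateAuthorities[].{Arn:Arn,Status:Status}"
aws route53 list-hosted-zones-by-name --dns-name example.internal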

Set up SHS on Amazon EKS

Complete the following steps to build a Docker image containing SHS with DataFlint, deploy it on an EKS cluster using a Helm chart, and expose it through a Kubernetes service of type LoadBalancer. We use a Spark 3.5.0 base image, which includes SHS by default. Although this simplifies deployment, it results in a larger image size. For environments where image size is critical, consider building a custom image with just the standalone SHS component instead of the full Spark distribution.

  1. Deploy SHS on the spark-history-server EKS cluster:
cd ${REPO_DIR}/shs
./deploy_shs.sh

  2. Verify the deployment by listing the pods and viewing the pod logs:
kubectl get pods --namespace spark-history
kubectl logs  --namespace spark-history

  3. Review the logs and confirm there are no errors or exceptions.
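You can also confirm that the Kubernetes service of type LoadBalancer fronting SHS has been assigned an external address. The service name depends on the Helm release, so listing the namespace is the simplest check:

kubectl get svc --namespace spark-history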

You have successfully deployed SHS on the spark-history-server EKS cluster and configured it to read logs from the emr-spark-logs-- S3 bucket.

Deploy Client VPN and add records to Route 53 for secure access

Complete the following steps to deploy Client VPN to securely connect your client machine (such as your laptop) to SHS and to configure Route 53 to provide a user-friendly URL:

  1. Deploy the Client VPN:
cd ${REPO_DIR}/vpn
./deploy_vpn.sh

  2. Add records to Route 53:
cd ${REPO_DIR}/dns
./deploy_dns.sh

Add the certificate to local trusted stores

Complete the following steps to add the SSL certificate to your operating system's trusted certificate stores for secure connections:

  1. For macOS users, using Keychain Access (GUI):
    1. Open Keychain Access from Applications, Utilities, choose the System keychain in the navigation pane, and choose File, Import Items.
    2. Browse to and choose ${REPO_DIR}/ssl/certificates/ca-certificate.pem, then choose the imported certificate.
    3. Expand the Trust section and set When using this certificate to Always Trust.
    4. Close the window, enter your password when prompted, and save.
    5. Alternatively, you can execute the following command to add the certificate to the Keychain and trust it:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain "${REPO_DIR}/ssl/certificates/ca-certificate.pem"

  2. For Windows users:
    1. Rename ca-certificate.pem to ca-certificate.crt.
    2. Choose (right-click) ca-certificate.crt and choose Install Certificate.
    3. Choose Local Machine (admin rights required).
    4. Select Place all certificates in the following store.
    5. Choose Browse and select Trusted Root Certification Authorities.
    6. Complete the installation by choosing Next and Finish.
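Alternatively, you can import the certificate from an elevated Command Prompt with the built-in certutil tool (run it from the directory containing the renamed ca-certificate.crt):

certutil -addstore -f "ROOT" ca-certificate.crt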

Set up Client VPN on your client machine for secure access

Complete the following steps to install and configure Client VPN on your client machine (such as your laptop) and create a VPN connection to the AWS Cloud:

  1. Download, install, and launch the Client VPN application from the official download page for your operating system.
  2. Create your VPN profile:
    1. Choose File in the menu bar, choose Manage Profiles, and choose Add Profile.
    2. Enter a name for your profile, for example, SparkHistoryServerUI.
    3. Browse to ${REPO_DIR}/vpn/client_vpn_certs/client-config.ovpn, choose the file, and choose Add Profile to save your configuration.
  3. Select your newly created profile, choose Connect, and wait for the connection confirmation to establish the VPN connection.

When you're connected, you have secure access to the AWS resources in your environment.

VPN connection details

Securely access the SHS URL

Complete the following steps to securely access SHS using a web browser:

  1. With the VPN connection established, the SHS UI is available at the following URL:

https://spark-history-server.example.internal/

  2. Copy this URL and enter it in your web browser to access the SHS UI.
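If the page doesn't load, you can confirm from your client machine that the private hosted zone resolves over the VPN and that the TLS chain validates against the private CA certificate (illustrative checks; adjust the certificate path as needed):

nslookup spark-history-server.example.internal
curl --cacert ${REPO_DIR}/ssl/certificates/ca-certificate.pem \
    -I https://spark-history-server.example.internal/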

The following screenshot shows an example of the UI.

Spark History Server job summary page

  3. Choose an App ID to view its detailed execution information and metrics.

Spark History Server job detail page

  4. Choose the DataFlint tab to view detailed application insights and analytics.

DataFlint insights page

DataFlint displays various useful metrics, including alerts, as shown in the following screenshot.

DataFlint alerts page

Clean up

To avoid incurring future charges from the resources created in this tutorial, clean up your environment after completing the steps. To remove all provisioned resources:

  1. Disconnect from the Client VPN.
  2. Run the cleanup.sh script:
cd ${REPO_DIR}/
./cleanup.sh

Conclusion

In this post, we demonstrated how to build a centralized observability platform for Spark applications using SHS and how to enhance SHS with performance monitoring tools like DataFlint. The solution aggregates Spark events from multiple EMR on EKS clusters into a unified monitoring interface, providing comprehensive visibility into your Spark applications' performance and resource utilization. By using a custom EMR image with performance monitoring tool integration, we enhanced the standard Spark metrics to gain deeper insights into application behavior. If your environment uses a mix of EMR on EKS, Amazon EMR on EC2, or Amazon EMR Serverless, you can seamlessly extend this architecture to aggregate the logs from EMR on EC2 and EMR Serverless in a similar way and visualize them using SHS.

Although this solution provides a robust foundation for Spark monitoring, production deployments should also consider authentication and authorization. SHS supports custom authentication through javax servlet filters and fine-grained authorization through access control lists (ACLs). We encourage you to explore implementing authentication filters for secure access control, configuring user- and group-based ACLs for view and modify permissions, and setting up group mapping providers for role-based access. For detailed guidance, refer to Spark's web UI security documentation and SHS security features.
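For instance, the history server's ACLs come down to a few Spark properties, which could be appended to the same SPARK_HISTORY_OPTS value used earlier. The user and group names below are purely illustrative, and an authentication filter configured through spark.ui.filters is still required to establish user identity:

-Dspark.history.ui.acls.enable=true
-Dspark.history.ui.admin.acls=platform-admin-user
-Dspark.history.ui.admin.acls.groups=spark-platform-admins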

While AWS endeavors to apply security best practices within this example, each organization has its own policies. Make sure to apply your organization's specific policies when using this solution as a starting point for implementing centralized Spark monitoring in your data processing environment.


About the authors

Sri Potluri is a Cloud Infrastructure Architect at AWS. He is passionate about solving complex problems and delivering well-structured solutions for diverse customers. His expertise spans a wide range of cloud technologies, providing scalable and reliable infrastructure tailored to each project's unique challenges.

Suvojit Dasgupta is a Principal Data Architect at AWS. He leads a team of skilled engineers in designing and building scalable data solutions for AWS customers. He specializes in developing and implementing innovative data architectures to address complex business challenges.
