Tuesday, May 20, 2025

Petabyte-scale data migration made simple: AppsFlyer's best practice journey with Amazon EMR Serverless

This post is co-written with Roy Ninio from AppsFlyer.

Organizations worldwide aim to harness the power of data to drive smarter, more informed decision-making by embedding data at the core of their processes. Using data-driven insights enables you to respond more effectively to unexpected challenges, foster innovation, and deliver enhanced experiences to your customers. In fact, data has transformed how organizations drive decision-making, but historically, managing the infrastructure to support it posed significant challenges and required specific skill sets and dedicated personnel. The complexity of setting up, scaling, and maintaining large-scale data systems impacted agility and pace of innovation. This reliance on specialists and complicated setups often diverted resources from innovation, slowed time-to-market, and hindered the ability to respond to changes in business demands.

AppsFlyer is a leading analytics and attribution company designed to help businesses measure and optimize their marketing efforts across mobile, web, and connected devices. With a focus on privacy-first innovation, AppsFlyer empowers organizations to make data-driven decisions while respecting user privacy and compliance regulations. AppsFlyer provides tools for tracking user acquisition, engagement, and retention, delivering actionable insights to enhance ROI and streamline marketing strategies.

In this post, we share how AppsFlyer successfully migrated their big data infrastructure from self-managed Hadoop clusters to Amazon EMR Serverless, detailing their best practices, challenges overcome, and lessons learned that can help guide other organizations in similar transformations.

Why AppsFlyer embraced a serverless approach for big data

AppsFlyer manages one of the largest-scale data infrastructures in the industry, processing 100 PB of data daily, handling millions of events per second, and running thousands of jobs across nearly 100 self-managed Hadoop clusters. The AppsFlyer architecture comprises many data engineering open source technologies, including but not limited to Apache Spark, Apache Kafka, Apache Iceberg, and Apache Airflow. Although this setup has powered operations for years, the growing complexity of scaling resources to meet fluctuating demands, coupled with the operational overhead of maintaining clusters, prompted AppsFlyer to rethink their big data processing strategy.

EMR Serverless is a modern, scalable solution that eliminates the need for manual cluster management while dynamically adjusting resources to match real-time workload requirements. With EMR Serverless, scaling up or down happens within seconds, minimizing idle time and interruptions like Spot terminations.

This shift has freed engineering teams to focus on innovation, improved resilience and high availability, and future-proofed the architecture to support their ever-increasing demands. By paying only for compute and memory resources used during runtime, AppsFlyer also optimized costs and minimized charges for idle resources, marking a significant step forward in efficiency and scalability.

Solution overview

AppsFlyer's previous architecture was built around self-managed Hadoop clusters running on Amazon Elastic Compute Cloud (Amazon EC2) and handled the scale and complexity of the data workflows. Although this setup supported operational needs, it required substantial manual effort to maintain, scale, and optimize.

AppsFlyer orchestrated over 100,000 daily workflows with Airflow, managing both streaming and batch operations. Streaming pipelines used Spark Streaming to ingest real-time data from Kafka, writing raw datasets to an Amazon Simple Storage Service (Amazon S3) data lake while concurrently loading them into BigQuery and Google Cloud Storage to build logical data layers. Batch jobs then processed this raw data, transforming it into actionable datasets for internal teams, dashboards, and analytics workflows. Additionally, some processed outputs were ingested into external data sources, enabling seamless delivery of AppsFlyer insights to customers across the web.

For analytics and fast queries, real-time data streams were ingested into ClickHouse and Druid to power dashboards. Additionally, Iceberg tables were created from Delta Lake raw data and made accessible through Amazon Athena for further data exploration and analytics.

With the migration to EMR Serverless, AppsFlyer replaced its self-managed Hadoop clusters, bringing significant improvements to scalability, cost-efficiency, and operational simplicity.

Spark-based workflows, including streaming and batch jobs, were migrated to run on EMR Serverless and take advantage of its elasticity, dynamically scaling to meet workload demands.

This transition has significantly reduced operational overhead, eliminating the need for manual cluster management, so teams can focus more on data processing and less on infrastructure.

The following diagram illustrates the solution architecture.

This post reviews the main challenges and lessons learned by the team at AppsFlyer from this migration.

Challenges and lessons learned

Migrating a large-scale organization like AppsFlyer, with dozens of teams, from Hadoop to EMR Serverless was a significant challenge, especially because many R&D teams had limited or no prior experience managing infrastructure. To provide a smooth transition, AppsFlyer's Data Infrastructure (DataInfra) team developed a comprehensive migration strategy that empowered the R&D teams to seamlessly migrate their pipelines.

In this section, we discuss how AppsFlyer approached the challenge and achieved success for the entire organization.

Centralized preparation by the DataInfra team

To provide a seamless transition to EMR Serverless, the DataInfra team took the lead in centralizing preparation efforts:

  • Clear ownership – Taking full responsibility for the migration, the team planned, guided, and supported R&D teams throughout the process.
  • Structured migration guide – A detailed, step-by-step guide was created to streamline the transition from Hadoop, breaking down the complexities and making it accessible to teams with limited infrastructure experience.

Building a strong support network

To make sure the R&D teams had the resources they needed, AppsFlyer established a robust support environment:

  • Data community – The primary resource for answering technical questions. It encouraged knowledge sharing across teams and was spearheaded by the DataInfra team.
  • Slack support channel – A dedicated channel where the DataInfra team actively responded to questions and guided teams through the migration process. This real-time support significantly reduced bottlenecks and helped teams resolve issues quickly.

Infrastructure templates with best practices

Recognizing the complexity of the organization's migration, the DataInfra team created standardized templates to help teams start quickly and efficiently:

  • Infrastructure as code (IaC) templates – They developed Terraform templates with best practices for building applications on EMR Serverless. These templates included code examples and real production workflows already migrated to EMR Serverless. Teams could quickly bootstrap their projects by using these ready-made templates.
  • Cross-account access solutions – Working across multiple AWS accounts required managing secure access between EMR Serverless accounts (where jobs run) and data storage accounts (where datasets reside). To streamline this, a step-by-step module was developed for setting up cross-account access using AssumeRole permissions. Additionally, a dedicated repository was created so teams can define and automate role and policy creation, providing seamless and scalable access management.

Airflow integration

As AppsFlyer's primary workflow scheduler, Airflow plays a critical role, making it essential to provide a seamless transition for its users.

AppsFlyer developed a dedicated Airflow operator for executing Spark jobs on EMR Serverless, carefully designed to replicate the functionality of the existing Hadoop-based Spark operator. In addition, a Python package was made available across all Airflow clusters with the relevant operators. This approach minimized code changes, allowing teams to transition seamlessly with minimal modifications.

Solving common permission challenges

To streamline permissions management, AppsFlyer developed targeted solutions for common use cases:

  • Comprehensive documentation – Provided detailed instructions for handling permissions for services like Athena, BigQuery, Vault, Git, Kafka, and many more.
  • Standardized Spark defaults configuration for teams to apply to their applications – Included built-in solutions for collecting lineage from Spark jobs running on EMR Serverless, providing accountability and traceability.

Continuous engagement with R&D teams

To promote progress and maintain alignment across teams, AppsFlyer introduced the following measures:

  • Weekly meetings – Weekly status meetings to review each team's migration efforts. Teams shared updates, challenges, and commitments, fostering transparency and collaboration.
  • Support – Proactive support was provided for issues raised during meetings to minimize delays. This made sure that the teams were on track and had the support they needed to meet their commitments.

By implementing these strategies, AppsFlyer transformed the migration process from a daunting challenge into a structured and well-supported journey. Key outcomes included:

  • Empowered teams – R&D teams with minimal infrastructure experience were able to confidently migrate their pipelines.
  • Standardized practices – Infrastructure templates and predefined solutions provided consistency and best practices across the organization.
  • Reduced downtime – The custom Airflow operator and detailed documentation minimized disruptions to existing workflows.
  • Cross-account compatibility – With seamless cross-account access, teams could run jobs and access data efficiently.
  • Improved collaboration – The data community and Slack support channel fostered a sense of collaboration and shared responsibility across teams.

Migrating an entire organization's data workflows to EMR Serverless is a complex task, but by investing in preparation, templates, and support, AppsFlyer successfully streamlined the process for all R&D teams in the company.

This approach can serve as a model for organizations undertaking similar migrations.

Spark application code management and deployment

For AppsFlyer data engineers, developing and deploying Spark applications is a core daily responsibility. The Data Platform team focused on identifying and implementing the right set of tools and safeguards that would not only simplify the migration to EMR Serverless, but also streamline ongoing operations.

There are two different approaches available for running Spark code on EMR Serverless: custom container images, or JARs and Python files. At the start of the exploration, custom images looked promising because they allow greater customization than JARs, which should have given the DataInfra team a smoother migration for existing workloads. After deeper evaluation, it became clear that custom images are powerful, but come with costs that would need to be evaluated at large scale. Custom images presented the following challenges:

  • Custom images are supported as of version 6.9.0, but some of AppsFlyer's workloads used earlier versions.
  • EMR Serverless resources run from the moment EMR Serverless starts downloading the image until workers are stopped. This means charges apply for aggregate vCPU, memory, and storage resources during the image download phase.
  • They required a different continuous integration and delivery (CI/CD) approach than compiling a JAR or Python file, leading to operational work that should be minimized as much as possible.

AppsFlyer decided to go all in with JARs, allowing custom images only in exceptional cases where the required customization demanded them. In the end, non-custom images proved suitable for AppsFlyer's use cases.
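As a rough illustration of the JAR-based approach, the following sketch builds the StartJobRun request for a JAR-backed Spark job on EMR Serverless; the bucket, application ID, and role ARN are hypothetical placeholders, and the actual boto3 call is shown only as a comment:

```python
"""Minimal sketch, assuming hypothetical names, of submitting a
JAR-based Spark job to EMR Serverless via the StartJobRun API."""

def build_spark_jar_job(app_id: str, role_arn: str, jar_uri: str, main_class: str) -> dict:
    """Build the StartJobRun keyword arguments for a JAR-based Spark job."""
    return {
        "applicationId": app_id,
        "executionRoleArn": role_arn,
        "jobDriver": {
            "sparkSubmit": {
                "entryPoint": jar_uri,  # S3 path to the compiled JAR
                "sparkSubmitParameters": f"--class {main_class}",
            }
        },
    }

if __name__ == "__main__":
    request = build_spark_jar_job(
        app_id="00example123",                                  # hypothetical
        role_arn="arn:aws:iam::111122223333:role/exec-role",    # hypothetical
        jar_uri="s3://example-artifacts/my-job/1.4.2/app.jar",  # hypothetical
        main_class="com.example.MyJob",
    )
    # With AWS credentials configured, the job would be submitted with:
    # boto3.client("emr-serverless").start_job_run(**request)
    print(request["jobDriver"]["sparkSubmit"]["entryPoint"])
```

Because the entry point is just an S3 URI, no image build or registry push is involved, which is the operational simplification the JAR route buys.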

CI/CD perspective

From a CI/CD perspective, AppsFlyer's DataInfra team decided to align with AppsFlyer's GitOps vision, making sure that both infrastructure and application code are version-controlled, built, and deployed using Git operations.

The following diagram illustrates the GitOps approach AppsFlyer adopted.

JAR continuous integration

For CI, the process in charge of building the application artifacts, several options were explored. The following key considerations drove the exploration process:

  • Use Amazon S3 as the native JAR source for EMR Serverless
  • Support different versions for the same job
  • Support staging and production environments
  • Allow hotfixes, patches, and rollbacks

Using AppsFlyer's existing external package repository led to challenges, because it required them to build a custom delivery into Amazon S3 or a complex runtime capability to fetch the code externally.

Using Amazon S3 directly also allowed several different approaches:

  • Buckets – Use a single bucket vs. separate buckets for staging and production
  • Versions – Use Amazon S3 native object versioning vs. uploading a new file
  • Hotfix – Overwrite the same job's JAR file vs. uploading a new one

Finally, the decision was to go with immutable builds for consistent deployment across environments.

Each push to the main branch of a Spark job's Git repository triggers a CI process that validates the semantic versioning (semver) assignment, compiles the JAR artifact, and uploads it to Amazon S3. Each artifact is uploaded to three different paths according to the version of the JAR, and also includes a version tag on the S3 object:

  • <bucket>/<job-name>/<major>.<minor>.<patch>/app.jar
  • <bucket>/<job-name>/<major>.<minor>/app.jar
  • <bucket>/<job-name>/<major>/app.jar

AppsFlyer can now have deep granularity and assign each EMR Serverless job a pinpointed version. Some jobs can run with the latest major version, while other stability- and SLA-sensitive jobs require a lock to a specific patch version.
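The three-path layout can be sketched in a few lines; the bucket layout, job name, and `app.jar` filename here are illustrative assumptions rather than AppsFlyer's actual conventions:

```python
"""Minimal sketch, under assumed path conventions, of deriving the three
immutable artifact keys (patch, minor, major granularity) from a semver tag."""

def artifact_keys(job_name: str, version: str) -> list:
    """Return S3 keys at patch, minor, and major granularity."""
    major, minor, patch = version.split(".")
    return [
        f"{job_name}/{major}.{minor}.{patch}/app.jar",  # exact pin for SLA-sensitive jobs
        f"{job_name}/{major}.{minor}/app.jar",          # floats to the latest patch
        f"{job_name}/{major}/app.jar",                  # floats to the latest minor
    ]

print(artifact_keys("events-aggregator", "1.4.2"))
# → ['events-aggregator/1.4.2/app.jar', 'events-aggregator/1.4/app.jar', 'events-aggregator/1/app.jar']
```

A job pinned to the patch-level key never changes behavior on redeploy, while a job pointed at the major-level key silently picks up each new compatible release.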

EMR Serverless continuous deployment

Uploading the files to Amazon S3 was the final step in the CI process, which then leads to a separate CD process.

CD is done by changing the infrastructure code, which is Terraform based, to point to the new JAR that was uploaded to Amazon S3. Then the staging or production application can start using the newly uploaded code and the process can be considered deployed.

Spark application rollbacks

If an application rollback is needed, AppsFlyer points the EMR Serverless job's IaC configuration from the current impaired JAR version to the previous stable JAR version in the relevant Amazon S3 path.

AppsFlyer believes that every automation impacting production, like CD, requires a break-glass mechanism for emergency situations. In such cases, AppsFlyer can manually override the needed S3 object (JAR file) while still using Amazon S3 versioning in order to have better visibility and manual version control.
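A break-glass rollback on a versioned bucket can be sketched as follows; the bucket and key names are hypothetical, the version records mimic the shape returned by S3's ListObjectVersions, and the actual copy is shown only as a comment:

```python
"""Minimal sketch, under assumed names, of a manual break-glass rollback
using S3 object versions: pick the previous version and copy it back on top."""

def previous_version_id(versions: list) -> str:
    """Pick the most recent non-latest version from
    list_object_versions-style records (assumed newest first)."""
    older = [v for v in versions if not v["IsLatest"]]
    if not older:
        raise ValueError("no previous version to roll back to")
    return older[0]["VersionId"]

# Example records as they appear under the 'Versions' key (newest first):
versions = [
    {"VersionId": "v3", "IsLatest": True},
    {"VersionId": "v2", "IsLatest": False},
    {"VersionId": "v1", "IsLatest": False},
]
vid = previous_version_id(versions)
print(vid)  # v2
# With AWS credentials, the override itself would be:
# boto3.client("s3").copy_object(
#     Bucket="example-artifacts", Key="my-job/app.jar",
#     CopySource={"Bucket": "example-artifacts", "Key": "my-job/app.jar", "VersionId": vid},
# )
```

Copying an old version on top creates a new latest version, so the emergency action itself stays visible in the version history.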

Single-job vs. multi-job applications

When using EMR Serverless, one important architectural decision is whether to create a separate application for each Spark job or use an automatically scaling application shared across multiple Spark jobs. The following table summarizes these considerations.

Aspect | Single-Job Application | Multi-Job Application
Logical nature | Dedicated application for each job. | Shared application for multiple jobs.
Shared configurations | Limited shared configurations; each application is independently configured. | Allows shared configurations through spark-defaults, including executors, memory settings, and JARs.
Isolation | Maximum isolation; each job runs independently. | Maintains job-level isolation through distinct IAM roles despite sharing the application.
Flexibility | Flexible for unique configurations or resource requirements. | Reduces overhead by reusing configurations and using automatic scaling.
Overhead | Higher setup and management overhead due to multiple applications. | Lower administrative overhead but requires careful resource contention management.
Use cases | Suitable for jobs with unique requirements or strict isolation needs. | Ideal for related workloads that benefit from shared settings and dynamic scaling.

By balancing these considerations, AppsFlyer tailored its EMR Serverless usage to efficiently meet the demands of diverse Spark workloads across their teams.

Airflow operator: Simplifying the transition to EMR Serverless

Before the migration to EMR Serverless, AppsFlyer's teams relied on a custom Airflow Spark operator created by the DataInfra team.

This operator, packaged as a Python library, was integrated into the Airflow environment and became a key component of the data workflows.

It provided essential capabilities, including:

  • Retries and alerts – Built-in retry logic and PagerDuty alert integration
  • AWS role-based access – Automatic fetching of AWS permissions based on role names
  • Custom defaults – Spark configurations and package defaults tailored for each job
  • State management – Job state tracking

This operator streamlined running Spark jobs on Hadoop and was highly tailored to AppsFlyer's requirements.

When moving to EMR Serverless, the team chose to build a custom Airflow operator to align with their existing Spark-based workflows. They already had dozens of Directed Acyclic Graphs (DAGs) in production, so with this approach, they could keep their familiar interface, including custom handling for retries, alerting, and configurations, all without requiring broad changes across the board.

This abstraction provided a smoother migration by preserving the same development patterns and minimizing the effort of adapting to the native operator semantics.

The DataInfra team developed a dedicated custom EMR Serverless operator to support the following goals:

  • Seamless migration – The operator was designed to closely mimic the interface of the existing Spark operator on Hadoop. This made sure that teams could migrate with minimal code changes.
  • Feature parity – They added the features missing from the native operator:
    • Built-in retry logic.
    • PagerDuty integration for alerts.
    • Automatic role-based permission fetching.
    • Default Spark configurations and package support for each job.
  • Simplified integration – It's packaged as a Python library available in Airflow clusters. Teams could use the operator just like they did with the previous Spark operator.

The custom operator abstracts some of the underlying configurations required to submit jobs to EMR Serverless, aligning with AppsFlyer's internal best practices and adding essential features.

The following is from an example DAG using the operator:

return SparkBatchJobEmrServerlessOperator(
    task_id=task_id,  # Unique task identifier in the DAG
    jar_file=jar_file,  # Path to the Spark job JAR file on S3
    main_class="<main-class>",  # Entry point class of the Spark job
    spark_conf=spark_conf,
    app_id=default_args["<emr-app-id>"],  # EMR Serverless app ID
    execution_role=default_args["<execution-role>"],  # IAM role for job execution
    polling_interval_sec=120,  # How often to poll for job status
    execution_timeout=timedelta(hours=1),  # Max allowed runtime
    retries=5,  # Retry attempts for failed jobs
    app_args=[],  # Arguments to pass to the Spark job
    depends_on_past=True,  # Ensure sequential task execution
    tags={'owner': '<owner>'},  # Metadata for ownership
    aws_assume_role="<assume-role>",  # Role for cross-account access
    alerting_policy=ALERT_POLICY_CRITICAL.with_slack_channel(sc),  # Alerting integration
    owner="<owner>",
    dag=dag  # DAG this task belongs to
)

Cross-account permissions on AWS: Simplifying EMR Serverless workflows

AppsFlyer operates across multiple AWS accounts, creating a need for secure and efficient cross-account access. EMR Serverless jobs are executed in the production account, and the data they process resides in a separate data account. To enable seamless operation, AssumeRole permissions are used to make sure that EMR Serverless jobs running in the production account can access the data and services in the data account. The following diagram illustrates this architecture.


Role management strategy

To manage cross-account access efficiently, three distinct roles were created and maintained:

  • EMR role – Used for executing and managing EMR Serverless applications in the production account. Integrated directly into Airflow workers to make it accessible to the DAGs on the dedicated team Airflow cluster.
  • Execution role – Assigned to the Spark job running on EMR Serverless. Passed by the EMR role in the DAG code to provide seamless integration.
  • Data role – Resides in the data account and is assumed by the execution role to access data stored in Amazon S3 and other AWS services.

To enforce access boundaries, each role and policy is tagged with team-specific identifiers.
This makes sure that teams can only access their own data and roles, minimizing unauthorized access to other teams' resources.

Simplifying Airflow migration

A streamlined process was developed to make cross-account permissions transparent for teams migrating their workloads to EMR Serverless:

  1. The EMR role is embedded into Airflow workers, making it accessible to DAGs in the dedicated Airflow cluster for each team:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": "iam:PassRole",
         "Resource": "arn:aws:iam::account-id:role/execution-role",
         "Condition": {
            "StringEquals": {
               "iam:ResourceTag/Team": "team-tag"
            }
         }
      }
   ]
}

  2. The EMR role automatically passes the execution role to the job within the DAG code:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::data-account-id:role/data-role",
      "Condition": {
        "StringEquals": {
          "iam:ResourceTag/Team": "team-tag"
        }
      }
    }
  ]
}

  3. The execution role assumes the data role dynamically during job execution to access the required data and services in the data account:

The following trust policy allows the execution role in the production account to assume the data role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::production-account-id:role/execution-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

  4. Policies, trust relationships, and role definitions are managed in a dedicated GitLab repository. GitLab CI/CD pipelines automate the creation and integration of roles and policies, providing consistency and reducing manual overhead.

Benefits of AppsFlyer's approach

This approach offered the following benefits:

  • Seamless access – Teams no longer need to handle cross-account permissions manually because these are automated through preconfigured roles and policies, providing seamless and secure access to resources across accounts.
  • Scalable and secure – Role-based and tag-based permissions provide security and scalability across multiple teams and accounts. By using roles and tags, there is no need to create separate hardcoded policies for each team or account. Instead, they can define generalized policies that scale automatically as new resources, accounts, or teams are added.
  • Automated management – GitLab CI/CD streamlines the deployment and integration of policies and roles, reducing manual effort while enhancing consistency. It also minimizes human errors, improves change transparency, and simplifies version management.
  • Flexibility for teams – Teams have the flexibility to use their own or native EMR Serverless operators while maintaining secure access to data.

By implementing a robust, automated cross-account permissions system, AppsFlyer has enabled secure and efficient access to data and services across multiple AWS accounts. This makes sure that teams can focus on their workloads without worrying about infrastructure complexities, accelerating their migration to EMR Serverless.

Integrating lineage into EMR Serverless

AppsFlyer developed a robust solution for column-level lineage collection to provide comprehensive visibility into data transformations across pipelines. Lineage data is stored in Amazon S3 and subsequently ingested into DataHub, AppsFlyer's lineage and metadata management environment.

Today, AppsFlyer collects column-level lineage from a variety of sources, including Amazon Athena, BigQuery, Spark, and more.

This section focuses on how AppsFlyer collects Spark column-level lineage specifically within the EMR Serverless infrastructure.

Collecting Spark lineage with Spline

To capture lineage from Spark jobs, AppsFlyer uses Spline, an open source tool designed for automated tracking of data lineage and pipeline structures.

AppsFlyer modified Spline's default behavior to output a customized Spline object that aligns with AppsFlyer's specific requirements. AppsFlyer adapted the Spline integration to both legacy and modern environments. In the pre-migration phase, they injected the Spline agent into Spark jobs through their customized Airflow Spark operator. In the post-migration phase, they integrated Spline directly into EMR Serverless applications.

The lineage workflow consists of the following steps:

  1. As Spark jobs execute, Spline captures detailed metadata about the queries and transformations performed.
  2. The captured metadata is exported as Spline object files to a dedicated S3 bucket.
  3. These Spline objects are processed into column-level lineage objects customized to fit AppsFlyer's data architecture and requirements.
  4. The processed lineage data is ingested into DataHub, providing a centralized and interactive view of data dependencies.
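The processing step can be sketched as follows; the input schema here is a simplified, hypothetical stand-in for Spline's actual execution-plan format, used only to show how plan output is flattened into column-level edges:

```python
"""Minimal sketch, on a simplified stand-in schema, of flattening
Spline-style plan output into column-level lineage edges."""

def column_edges(plan: dict) -> list:
    """Return (source_column, output_column) pairs from a simplified plan."""
    edges = []
    for out_col, src_cols in plan["column_mapping"].items():
        for src in src_cols:
            edges.append((f'{plan["source"]}.{src}', f'{plan["target"]}.{out_col}'))
    return edges

plan = {
    "source": "raw_events",       # hypothetical input table
    "target": "daily_revenue",    # hypothetical output table
    # output column -> contributing source columns
    "column_mapping": {"revenue_usd": ["price", "fx_rate"], "day": ["event_ts"]},
}
for edge in column_edges(plan):
    print(edge)
```

Each emitted pair becomes one edge in the DataHub lineage graph, which is why a single derived column can fan in from several source columns.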

The following figure is an example of a lineage diagram from DataHub.

Challenges and how AppsFlyer addressed them

AppsFlyer encountered the following challenges:

  • Supporting different EMR Serverless applications – Each EMR Serverless application has its own Spark and Scala version requirements.
  • Varied operator usage – Teams often use custom or native EMR Serverless operators, making uniform Spline integration challenging.
  • Ensuring universal adoption – They need to make sure Spark jobs across multiple accounts use the Spline agent for lineage tracking.

AppsFlyer addressed these challenges with the following solutions:

  • Version-specific Spline agents – AppsFlyer created a dedicated Spline agent for each EMR Serverless application version to match its Spark and Scala versions. For example, EMR Serverless application version 7.0.1 pairs with Spline 7.0.1.
  • Spark defaults integration – They integrated the Spline agent into the EMR Serverless application's Spark defaults to ensure lineage collection for jobs executed on the application, with no job-specific modifications needed.
  • Automation for compliance – This process consists of the following steps:
    • Detect a newly created EMR Serverless application across accounts.
    • Verify that Spline is properly defined in the application's Spark defaults.
    • Send a PagerDuty alert to the dedicated team if misconfigurations are detected.

Example integration with Terraform

To automate Spline integration, AppsFlyer used Terraform and local-exec to define Spark defaults for EMR Serverless applications. With Amazon EMR, you can set unified Spark configuration properties through spark-defaults, which are then applied to Spark jobs.

This configuration makes sure the Spline agent is automatically applied to every Spark job without requiring modifications to the Airflow operator or the job itself.

This robust lineage integration provides the following benefits:

  • Full visibility – Automatic lineage tracking provides detailed insights into data transformations
  • Seamless scalability – Version-specific Spline agents provide compatibility with EMR Serverless applications
  • Proactive monitoring – Automated compliance checks verify that lineage tracking is consistently enabled across accounts
  • Enhanced governance – Ingesting lineage data into DataHub provides traceability, supports audits, and fosters a deeper understanding of data dependencies

By integrating Spline with EMR Serverless applications, AppsFlyer has delivered comprehensive and automated lineage tracking, so teams can better understand their data pipelines while meeting compliance requirements. This scalable approach aligns with AppsFlyer's commitment to maintaining transparency and reliability throughout their data landscape.

Monitoring and observability

When embarking on a large migration, and as a day-to-day best practice, monitoring and observability are key elements of being able to run workloads successfully, in terms of stability, debugging, and cost.

AppsFlyer's DataInfra team set several KPIs for monitoring and observability in EMR Serverless:

  • Monitor infrastructure-level metrics and logs:
    • EMR Serverless resource usage, including cost
    • EMR Serverless API usage
  • Monitor Spark application-level metrics and logs:
    • stdout and stderr logs
    • Spark engine metrics
  • Centralized observability in the existing environment, Datadog

Metrics

Using EMR Serverless native metrics, AppsFlyer's DataInfra team set up several dashboards to support monitoring both the migration and the day-to-day usage of EMR Serverless across the company. The following are the main metrics that were monitored:

  • Service quota usage metrics:
    • vCPU usage tracking (ResourceCount with vCPU dimension)
    • API usage tracking (actual API usage vs. API limits)
  • Application status metrics:
    • RunningJobs, SuccessJobs, FailedJobs, PendingJobs, CancelledJobs
  • Resource limits tracking:
    • MaxCPUAllowed vs. CPUAllocated
    • MaxMemoryAllowed vs. MemoryAllocated
    • MaxStorageAllowed vs. StorageAllocated
  • Worker-level metrics:
    • WorkerCpuAllocated vs. WorkerCpuUsed
    • WorkerMemoryAllocated vs. WorkerMemoryUsed
    • WorkerEphemeralStorageAllocated vs. WorkerEphemeralStorageUsed
  • Capacity allocation tracking:
    • Metrics filtered by CapacityAllocationType (PreInitCapacity vs. OnDemandCapacity)
    • ResourceCount
  • Worker type distribution:
    • Metrics filtered by WorkerType (SPARK_DRIVER vs. SPARK_EXECUTORS)
  • Job success rates over time:
    • SuccessJobs vs. FailedJobs ratio
    • SubmittedJobs vs. PendingJobs

The following screenshot shows an example of the tracked metrics.
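To make the dashboard setup concrete, here is a minimal sketch of pulling the job-status metrics above from CloudWatch with boto3. The application ID is a placeholder, and the dashboard itself (Datadog in AppsFlyer’s case) would consume these metrics differently; this only illustrates the metric names and the AWS/EMRServerless namespace.

```python
from datetime import datetime, timedelta


def build_job_metric_queries(application_id):
    """Build GetMetricData queries for EMR Serverless job-status counts."""
    queries = []
    for name in ("SuccessJobs", "FailedJobs", "PendingJobs"):
        queries.append({
            "Id": name.lower(),
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EMRServerless",
                    "MetricName": name,
                    "Dimensions": [
                        {"Name": "ApplicationId", "Value": application_id}
                    ],
                },
                "Period": 300,  # 5-minute buckets
                "Stat": "Sum",
            },
        })
    return queries


def fetch_job_metrics(application_id, hours=24):
    """Query CloudWatch for the last `hours` of job-status metrics."""
    import boto3  # requires AWS credentials at call time

    cloudwatch = boto3.client("cloudwatch")
    now = datetime.utcnow()
    return cloudwatch.get_metric_data(
        MetricDataQueries=build_job_metric_queries(application_id),
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
    )
```

The same queries can back a CloudWatch dashboard widget or an alarm on the SuccessJobs vs. FailedJobs ratio.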

Logs

For log management, AppsFlyer’s DataInfra team explored several options:

Streamlining EMR Serverless log shipping to Datadog

Because AppsFlyer decided to keep their logs in an external logging environment, the DataInfra team aimed to reduce the number of components involved in the shipping process and minimize maintenance overhead. Instead of managing a Lambda-based log shipper, they developed a custom Spark plugin that seamlessly exports logs from EMR Serverless to Datadog.

Companies already storing logs in Amazon S3 or CloudWatch Logs can use EMR Serverless native support for those destinations. However, for teams needing a direct, real-time integration with Datadog, this approach removes the need for additional infrastructure, providing a more efficient and maintainable logging solution.

The custom Spark plugin offers the following capabilities:

  • Automated log export – Streams logs from EMR Serverless to Datadog
  • Fewer moving parts – Removes the need for Lambda-based log shippers
  • Secure API key management – Uses Vault instead of hardcoding credentials
  • Customizable logging – Supports custom Log4j settings and log levels
  • Full integration with Spark – Works on both driver and executor nodes

How the plugin works

In this section, we walk through the plugin’s components and provide a pseudocode overview:

  • Driver plugin – LoggerDriverPlugin runs on the Spark driver to configure logging. The plugin fetches EMR job metadata, calls Vault to retrieve the Datadog API key, and configures logging settings.

    initialize() {
      if (user provided log4j.xml) {
        use custom log configuration
      } else {
        fetch EMR job metadata (application name, job ID, tags)
        retrieve Datadog API key from Vault
        apply default logging settings
      }
    }

  • Executor plugin – LoggerExecutorPlugin provides consistent logging across executor nodes. It inherits the driver’s log configuration and makes sure the executors use consistent logging.

    initialize() {
      fetch logging config from driver
      apply log settings (log4j, log levels)
    }

  • Main plugin – LoggerSparkPlugin registers the driver and executor plugins in Spark. It serves as the entry point for Spark and applies custom logging settings dynamically.

    registerPlugin() {
      return (driverPlugin, executorPlugin);
    }

  • Vault helpers – Authenticate to Vault and fetch the Datadog API key:

    loginToVault(role, vaultAddress) {
      create AWS signed request
      authenticate with Vault
      return vault token
    }

    getDatadogApiKey(vaultToken, secretPath) {
      fetch API key from Vault
      return key
    }
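The Vault login step above uses Vault’s AWS IAM auth method: the client signs an sts:GetCallerIdentity request and sends the signed components to Vault, which verifies the caller’s identity with AWS. A minimal sketch of that exchange follows; the role name, Vault address, and secret path are assumptions, and the signed STS headers are represented by a placeholder (in the real plugin they would be SigV4-signed with the job’s AWS credentials).

```python
import base64
import json
from urllib import request

STS_BODY = b"Action=GetCallerIdentity&Version=2011-06-15"


def build_vault_login_payload(role, signed_headers):
    """Encode a signed sts:GetCallerIdentity request as Vault's AWS auth expects."""
    return {
        "role": role,
        "iam_http_request_method": "POST",
        "iam_request_url": base64.b64encode(b"https://sts.amazonaws.com/").decode(),
        "iam_request_body": base64.b64encode(STS_BODY).decode(),
        "iam_request_headers": base64.b64encode(
            json.dumps(signed_headers).encode()
        ).decode(),
    }


def login_to_vault(vault_addr, role, signed_headers):
    """Exchange the signed request for a Vault token (network call)."""
    payload = json.dumps(build_vault_login_payload(role, signed_headers)).encode()
    req = request.Request(f"{vault_addr}/v1/auth/aws/login", data=payload)
    with request.urlopen(req) as resp:
        return json.load(resp)["auth"]["client_token"]


def get_datadog_api_key(vault_addr, token, secret_path="secret/data/datadog"):
    """Read the Datadog API key from a KV v2 secret (network call)."""
    req = request.Request(
        f"{vault_addr}/v1/{secret_path}", headers={"X-Vault-Token": token}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["data"]["data"]["api_key"]
```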

Set up the plugin

To set up the plugin, complete the following steps:

  1. Add the following dependency to your project:

    <dependency>
      <groupId>com.AppsFlyer.datacom</groupId>
      <artifactId>emr-serverless-logger-plugin</artifactId>
    </dependency>

  2. Configure the Spark plugin. The following configuration enables the custom Spark plugin and assigns the Vault role used to access the Datadog API key:

--conf "spark.plugins=com.AppsFlyer.datacom.emr.plugin.LoggerSparkPlugin"

--conf "spark.datacom.emr.plugin.vaultAuthRole=your_vault_role"

  3. Use a custom or default Log4j configuration:

--conf "spark.datacom.emr.plugin.location=classpath:my_custom_log4j.xml"
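The referenced my_custom_log4j.xml is not shown in the post; a minimal hypothetical Log4j 2 configuration of the kind such a plugin could load might look like the following (the package name com.example.myjob is a placeholder):

```xml
<!-- Hypothetical my_custom_log4j.xml: everything to stdout at WARN,
     with one application package kept at DEBUG -->
<Configuration>
  <Appenders>
    <Console name="stdout" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %p %c: %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="com.example.myjob" level="debug"/>
    <Root level="warn">
      <AppenderRef ref="stdout"/>
    </Root>
  </Loggers>
</Configuration>
```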

  4. Set the environment variables for different log levels. This adjusts the logging for specific packages.

--conf "spark.emr-serverless.driverEnv.ROOT_LOG_LEVEL=WARN"

--conf "spark.executorEnv.ROOT_LOG_LEVEL=WARN"

--conf "spark.emr-serverless.driverEnv.LOG_LEVEL=DEBUG"

--conf "spark.executorEnv.LOG_LEVEL=DEBUG"

  5. Configure the Vault and Datadog API key and verify secure Datadog API key retrieval.
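Putting the steps together, a job submission carrying these Spark confs could be sketched with boto3 as follows. The application ID, execution role ARN, entry point, and Vault role are placeholders; only the conf keys come from the setup steps above.

```python
def build_spark_submit_params(vault_role, root_log_level="WARN"):
    """Assemble the --conf flags from the plugin setup steps."""
    confs = {
        "spark.plugins": "com.AppsFlyer.datacom.emr.plugin.LoggerSparkPlugin",
        "spark.datacom.emr.plugin.vaultAuthRole": vault_role,
        "spark.emr-serverless.driverEnv.ROOT_LOG_LEVEL": root_log_level,
        "spark.executorEnv.ROOT_LOG_LEVEL": root_log_level,
    }
    return " ".join(f"--conf {key}={value}" for key, value in confs.items())


def submit_job(application_id, role_arn, entry_point, vault_role):
    """Start an EMR Serverless job run with the plugin enabled (AWS call)."""
    import boto3  # requires AWS credentials at call time

    emr = boto3.client("emr-serverless")
    response = emr.start_job_run(
        applicationId=application_id,
        executionRoleArn=role_arn,
        jobDriver={
            "sparkSubmit": {
                "entryPoint": entry_point,
                "sparkSubmitParameters": build_spark_submit_params(vault_role),
            }
        },
    )
    return response["jobRunId"]
```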

By adopting this plugin, AppsFlyer was able to significantly simplify log shipping, reducing the number of moving parts while maintaining real-time log visibility in Datadog. This approach provides reliability, security, and ease of maintenance, making it a good fit for teams using EMR Serverless with Datadog.

Summary

Through their migration to EMR Serverless, AppsFlyer achieved a significant transformation in team autonomy and operational efficiency. Individual teams now have greater freedom to choose and build their own resources without relying on a central infrastructure team, and can work more independently and innovatively. Minimizing the spot interruptions that were frequent in their previous self-managed Hadoop clusters has significantly improved the stability and agility of their operations. Thanks to this autonomy and reliability, combined with the automatic scaling capabilities of EMR Serverless, AppsFlyer teams can focus more on data processing and innovation rather than infrastructure management. The result is a more efficient, flexible, and self-sufficient development environment where teams can better respond to their specific needs while maintaining high performance standards.

Ruli Weisbach, AppsFlyer EVP of R&D, says,

“EMR Serverless is a game changer for AppsFlyer; we are able to significantly reduce our costs with remarkably lower management overhead and maximal elasticity.”

If the AppsFlyer approach sparked your interest and you are keen to implement a similar solution in your organization, refer to the following resources:

Migrating to EMR Serverless can transform your organization’s data processing capabilities, offering a fully managed, cloud-based experience that automatically scales resources and reduces the operational complexity of traditional cluster management, while enabling advanced analytics and machine learning workloads with greater cost-efficiency.


About the authors

Roy Ninio is an AI Platform Lead with deep expertise in scalable data platforms and cloud-native architectures. At AppsFlyer, Roy led the design of a high-performance data lake handling petabytes of daily events, drove the adoption of EMR Serverless for dynamic big data processing, and architected lineage and governance systems across platforms.

Avichay Marciano is a Sr. Analytics Solutions Architect at Amazon Web Services. He has over a decade of experience in building large-scale data platforms using Apache Spark, modern data lake architectures, and OpenSearch. He is passionate about data-intensive systems, analytics at scale, and their intersection with machine learning.

Eitav Arditti is an AWS Senior Solutions Architect with 15 years in the AdTech industry, specializing in serverless, containers, platform engineering, and edge technologies. He designs cost-efficient, large-scale AWS architectures that leverage cloud-native and edge computing to deliver scalable, reliable solutions for business growth.

Yonatan Dolan is a Principal Analytics Specialist at Amazon Web Services. Yonatan is an Apache Iceberg evangelist, helping customers design scalable, open data lakehouse architectures and adopt modern analytics solutions across industries.
