Friday, May 16, 2025

Access Amazon Redshift Managed Storage tables through Apache Spark on AWS Glue and Amazon EMR using Amazon SageMaker Lakehouse

Data environments in data-driven organizations are evolving to meet growing demands for analytics, including business intelligence (BI) dashboarding, one-time querying, data science, machine learning (ML), and generative AI. These organizations have a strong demand for lakehouse solutions that combine the best of data warehouses and data lakes to simplify data management with easy access to all data from their preferred engines.

Amazon SageMaker Lakehouse unifies all your data across Amazon Simple Storage Service (Amazon S3) data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and artificial intelligence and machine learning (AI/ML) applications on a single copy of data. SageMaker Lakehouse gives you the flexibility to access and query your data in place with all Apache Iceberg compatible tools and engines. It secures your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines. You can bring data from operational databases and applications into your lakehouse in near real time through zero-ETL integrations. SageMaker Lakehouse also accesses and queries data in place with federated query capabilities across third-party data sources through Amazon Athena.

With SageMaker Lakehouse, you can access tables stored in Amazon Redshift Managed Storage (RMS) through Iceberg APIs, using the Iceberg REST catalog backed by the AWS Glue Data Catalog. This expands your data integration workloads across data lakes and data warehouses, enabling seamless access to diverse data sources.

Amazon SageMaker Unified Studio, Amazon EMR 7.5.0 and higher, and AWS Glue 5.0 natively support SageMaker Lakehouse. This post describes how to integrate data on RMS tables through Apache Spark using SageMaker Unified Studio, Amazon EMR 7.5.0 and higher, and AWS Glue 5.0.

Access RMS tables through Apache Spark on AWS Glue and Amazon EMR

With SageMaker Lakehouse, RMS tables are accessible through the Apache Iceberg REST catalog. Open source engines such as Apache Spark are compatible with Apache Iceberg, and they can interact with RMS tables by configuring this Iceberg REST catalog. You can learn more in Connecting to the Data Catalog using AWS Glue Iceberg REST extension endpoint.

Note that the Iceberg REST extensions endpoint is used when you access RMS tables. This endpoint is accessible through the Apache Iceberg AWS Glue Data Catalog extensions, which come preinstalled on AWS Glue 5.0 and Amazon EMR 7.5.0 or higher. The extension library enables access to RMS tables using the Amazon Redshift connector for Apache Spark.

To access RMS-backed catalog databases from Spark, each RMS database requires its own Spark session catalog configuration. The following are the required Spark configurations:

Spark config key                                  Value
spark.sql.catalog.{catalog_name}                  org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.{catalog_name}.type             glue
spark.sql.catalog.{catalog_name}.glue.id          {account_id}:{rms_catalog_name}/{database_name}
spark.sql.catalog.{catalog_name}.client.region    {aws_region}
spark.sql.extensions                              org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions

Configuration parameters (a filled-in sketch follows this list):

  • {catalog_name}: Your chosen name for referencing the RMS catalog database in your application code
  • {rms_catalog_name}: The RMS catalog name as shown in the AWS Lake Formation catalogs section
  • {database_name}: The RMS database name
  • {aws_region}: The AWS Region where the RMS catalog is located
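
The following is a minimal PySpark sketch of the configuration above with example placeholder values filled in; the account ID 111122223333, catalog rms-catalog-demo, database dev, and Region us-east-2 are illustrative only and should be replaced with your own values.

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Example placeholder values; substitute your own account ID, catalog, database, and Region
catalog_name = "rmscatalog"
rms_catalog_id = "111122223333:rms-catalog-demo/dev"  # {account_id}:{rms_catalog_name}/{database_name}
aws_region = "us-east-2"

# Assemble the session catalog configuration from the table above
conf = SparkConf().setAll([
    (f"spark.sql.catalog.{catalog_name}", "org.apache.iceberg.spark.SparkCatalog"),
    (f"spark.sql.catalog.{catalog_name}.type", "glue"),
    (f"spark.sql.catalog.{catalog_name}.glue.id", rms_catalog_id),
    (f"spark.sql.catalog.{catalog_name}.client.region", aws_region),
    ("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions"),
])
spark = SparkSession.builder.appName("rms-access").config(conf=conf).getOrCreate()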

For a deeper understanding of how the Amazon Redshift hierarchy (databases, schemas, and tables) is mapped to the AWS Glue multilevel catalogs, refer to the Bringing Amazon Redshift data into the AWS Glue Data Catalog documentation.

In the following sections, we demonstrate how to access RMS tables through Apache Spark using SageMaker Unified Studio JupyterLab notebooks with the AWS Glue 5.0 runtime and Amazon EMR Serverless.

Although we could bring existing Amazon Redshift tables into the AWS Glue Data Catalog by creating a Lakehouse Redshift catalog from an existing Redshift namespace and providing access to a SageMaker Unified Studio project, in the following example you will create a managed Amazon Redshift Lakehouse catalog directly from SageMaker Unified Studio and work with that.

Prerequisites

To follow along with these instructions, you must have the following prerequisites:

Create a SageMaker Unified Studio project

Complete the following steps to create a SageMaker Unified Studio project:

  1. Sign in to SageMaker Unified Studio.
  2. Choose Select a project on the top menu and choose Create project.
  3. For Project name, enter demo.
  4. For Project profile, choose All capabilities.
  5. Choose Continue.

  6. Leave the default values and choose Continue.
  7. Review the configurations and choose Create project.

Wait for the project to be created. Project creation can take about five minutes. When the project status changes to Active, select the project name to access the project's home page.

  8. Make a note of the project role ARN, because you will need it in subsequent steps.

You have successfully created the project and noted the project role ARN. The next step is to configure a Lakehouse catalog for your RMS.

Configure a Lakehouse catalog for your RMS

Complete the following steps to configure a Lakehouse catalog for your RMS:

  1. In the navigation pane, choose Data.
  2. Choose the + (plus) sign.
  3. Select Create Lakehouse catalog to create a new catalog and choose Next.

  4. For Lakehouse catalog name, enter rms-catalog-demo.
  5. Choose Add catalog.

  6. Wait for the catalog to be created.

  7. In SageMaker Unified Studio, choose Data in the left navigation pane, then select the three vertical dots next to Redshift (Lakehouse) and choose Refresh to make sure the Amazon Redshift compute is active.

Create a new table in the RMS Lakehouse catalog:

  1. In SageMaker Unified Studio, on the top menu, under Build, choose Query Editor.
  2. On the top right, choose Select data source.
  3. For CONNECTIONS, choose Redshift (Lakehouse).
  4. For DATABASES, choose dev@rms-catalog-demo.
  5. For SCHEMAS, choose public.
  6. Choose Choose.

  7. In the query cell, enter and run the following query to create a new schema:
create schema "dev@rms-catalog-demo".salesdb

  8. In a new cell, enter and run the following query to create a new table:
create table salesdb.store_sales (ss_sold_timestamp timestamp, ss_item text, ss_sales_price float);

  9. In a new cell, enter and run the following query to populate the table with sample data:
insert into salesdb.store_sales values ('2024-12-01T09:00:00Z', 'Product 1', 100.0), ('2024-12-01T11:00:00Z', 'Product 2', 500.0), ('2024-12-01T15:00:00Z', 'Product 3', 20.0), ('2024-12-01T17:00:00Z', 'Product 4', 1000.0), ('2024-12-01T18:00:00Z', 'Product 5', 30.0), ('2024-12-02T10:00:00Z', 'Product 6', 5000.0), ('2024-12-02T16:00:00Z', 'Product 7', 5.0);

  10. In a new cell, enter and run the following query to verify the table contents:
choose * from salesdb.store_sales;

(Optional) Create an Amazon EMR Serverless application

IMPORTANT: This section is only required if you also plan to test using Amazon EMR Serverless. If you intend to use AWS Glue only, you can skip this section entirely.

  1. Navigate to the project page. In the left navigation pane, select Compute, then select the Data processing tab and choose Add compute.

  2. Choose Create new compute resources, then choose Next.

  3. Select EMR Serverless.

  4. Specify emr_serverless_application as the Compute name, select Compatibility as the Permission mode, and choose Add compute.

  5. Monitor the deployment progress and wait for the Amazon EMR Serverless application to finish deploying; this can take a minute. A scripted alternative using the AWS SDK is sketched after these steps.
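
If you prefer to script this step, the following minimal boto3 sketch creates a comparable EMR Serverless application with the same name and an EMR 7.5.0 release. Note that an application created this way is not attached to your SageMaker Unified Studio project or its Compatibility permission mode, so the console flow above remains the recommended path for this walkthrough.

import boto3

# Minimal sketch: create a standalone EMR Serverless Spark application on EMR 7.5.0.
# Applications created outside SageMaker Unified Studio are not wired into the
# project or its Compatibility permission mode.
emr_serverless = boto3.client("emr-serverless")

response = emr_serverless.create_application(
    name="emr_serverless_application",
    releaseLabel="emr-7.5.0",
    type="SPARK",
)
print(response["applicationId"])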

Access Amazon Redshift Managed Storage tables through Apache Spark

In this section, we demonstrate how to query tables stored in RMS using a SageMaker Unified Studio notebook.

  1. In the navigation pane, choose Data.
  2. Under Lakehouse, select the down arrow next to rms-catalog-demo.
  3. Under dev, select the down arrow next to salesdb, choose store_sales, and choose the three dots.

SageMaker Lakehouse provides multiple analysis options: Query with Athena, Query with Redshift, and Open in Jupyter Lab notebook.

  4. Choose Open in Jupyter Lab notebook.
  5. On the Launcher tab, choose Python 3 (ipykernel).

In SageMaker Unified Studio JupyterLab, you can specify different compute types for each notebook cell. Although this example demonstrates using AWS Glue compute (project.spark.compatibility), the same code can be run on Amazon EMR Serverless by selecting the appropriate compute in the cell settings. The following table shows the connection type and compute values to specify when running PySpark code or Spark SQL code with different engines:

Compute option           PySpark code (Connection type / Compute)       Spark SQL (Connection type / Compute)
AWS Glue                 PySpark / project.spark.compatibility          SQL / project.spark.compatibility
Amazon EMR Serverless    PySpark / emr-s.emr_serverless_application     SQL / emr-s.emr_serverless_application
  6. In the notebook cell's top left corner, set Connection type to PySpark and select spark.compatibility (AWS Glue 5.0) as Compute.
  7. Run the following code to initialize the SparkSession and configure rmscatalog as the session catalog for accessing the dev database under the rms-catalog-demo RMS catalog:
from pyspark.sql import SparkSession

catalog_name = "rmscatalog"
# Replace with your AWS account ID
rms_catalog_id = "<account-id>:rms-catalog-demo/dev"
# Replace with your AWS Region
aws_region = "us-east-2"

spark = SparkSession.builder.appName('rms_demo') \
    .config(f'spark.sql.catalog.{catalog_name}', 'org.apache.iceberg.spark.SparkCatalog') \
    .config(f'spark.sql.catalog.{catalog_name}.type', 'glue') \
    .config(f'spark.sql.catalog.{catalog_name}.glue.id', rms_catalog_id) \
    .config(f'spark.sql.catalog.{catalog_name}.client.region', aws_region) \
    .config('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \
    .getOrCreate()

  8. Create a new cell and switch the connection type from PySpark to SQL to run Spark SQL commands directly.
  9. Enter the following SQL statement to view all tables under salesdb (the RMS schema) within rmscatalog:
SHOW TABLES IN rmscatalog.salesdb

  10. In a new SQL cell, enter the following DESCRIBE EXTENDED statement to view detailed information about the store_sales table in the salesdb schema:
DESCRIBE EXTENDED rmscatalog.salesdb.store_sales

In the output, you will observe that Provider is set to iceberg. This indicates that the table is recognized as an Iceberg table, despite being stored in Amazon Redshift managed storage.

  11. In a new SQL cell, enter the following SELECT statement to view the contents of the table (a PySpark DataFrame equivalent follows the query):
SELECT * FROM rmscatalog.salesdb.store_sales
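
If you prefer to stay in PySpark, the same table can also be read as a DataFrame in a PySpark cell of the same notebook session. Here is a minimal sketch that aggregates the sample rows inserted earlier; the daily-total computation is purely illustrative.

from pyspark.sql import functions as F

# Read the RMS-backed Iceberg table registered under the rmscatalog session catalog
store_sales = spark.table("rmscatalog.salesdb.store_sales")

# Illustrative aggregation: total sales per day over the sample rows
daily_sales = (
    store_sales
    .groupBy(F.to_date("ss_sold_timestamp").alias("sale_date"))
    .agg(F.sum("ss_sales_price").alias("total_sales"))
    .orderBy("sale_date")
)
daily_sales.show()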

Throughout this example, we demonstrated how to create a table in Amazon Redshift Serverless and seamlessly query it as an Iceberg table using Apache Spark within a SageMaker Unified Studio notebook.

Clean up

To avoid incurring future charges, clean up all created resources:

  1. Delete the created SageMaker Unified Studio project. This step automatically deletes the Amazon EMR compute (for example, the Amazon EMR Serverless application) that was provisioned from the project:
    1. Within SageMaker Unified Studio, navigate to the demo project's Project overview section.
    2. Choose Actions, then select Delete project.
    3. Enter confirm and choose Delete project.
  2. Delete the created Lakehouse catalog:
    1. Navigate to the AWS Lake Formation page, in the Catalogs section.
    2. Select the rms-catalog-demo catalog, choose Actions, then select Delete.
    3. In the confirmation window, enter rms-catalog-demo, and then choose Drop.

Conclusion

In this post, we demonstrated how to use Apache Spark to interact with Amazon Redshift Managed Storage tables through Amazon SageMaker Lakehouse using the Iceberg REST catalog. This integration provides a unified view of your data across Amazon S3 data lakes and Amazon Redshift data warehouses, so you can build powerful analytics and AI/ML applications while maintaining a single copy of your data.

For more workloads and implementations, visit Simplify data access for your enterprise using Amazon SageMaker Lakehouse.


About the Authors

Noritaka Sekiyama is a Principal Big Data Architect with Amazon Web Services (AWS) Analytics services. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling on his road bike.

Stefano Sandonà is a Senior Big Data Specialist Solutions Architect at Amazon Web Services (AWS). Passionate about data, distributed systems, and security, he helps customers worldwide architect high-performance, efficient, and secure data solutions.

Derek Liu is a Senior Solutions Architect based out of Vancouver, BC. He enjoys helping customers solve big data challenges through Amazon Web Services (AWS) analytics services.

Raj Ramasubbu is a Senior Analytics Specialist Solutions Architect focused on big data, analytics, and AI/ML with Amazon Web Services (AWS). He helps customers architect and build highly scalable, performant, and secure cloud-based solutions on AWS. Raj provided technical expertise and leadership in building data engineering, big data analytics, business intelligence, and data science solutions for over 18 years prior to joining AWS. He has helped customers in various industry verticals, including healthcare, medical devices, life sciences, retail, asset management, car insurance, residential REITs, agriculture, title insurance, supply chain, document management, and real estate.

Angel Conde Manjon is a Sr. EMEA Data & AI PSA, based in Madrid. He has previously worked on research related to data analytics and AI in diverse European research projects. In his current role, Angel helps partners develop businesses centered on data and AI.


Appendix: Sample script for a Lake Formation FGAC enabled Spark cluster

If you want to access RMS tables from a Lake Formation FGAC enabled Spark cluster on AWS Glue or Amazon EMR, refer to the following code example:

from pyspark.sql import SparkSession

catalog_name = "rmscatalog"
rms_catalog_name = "123456789012:rms-catalog-demo/dev"
account_id = "123456789012"
region = "us-east-2"

spark = SparkSession.builder.appName('rms_demo') \
    .config('spark.sql.defaultCatalog', catalog_name) \
    .config(f'spark.sql.catalog.{catalog_name}', 'org.apache.iceberg.spark.SparkCatalog') \
    .config(f'spark.sql.catalog.{catalog_name}.type', 'glue') \
    .config(f'spark.sql.catalog.{catalog_name}.glue.id', rms_catalog_name) \
    .config(f'spark.sql.catalog.{catalog_name}.client.region', region) \
    .config(f'spark.sql.catalog.{catalog_name}.glue.account-id', account_id) \
    .config(f'spark.sql.catalog.{catalog_name}.glue.catalog-arn', f'arn:aws:glue:{region}:{rms_catalog_name}') \
    .config('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \
    .getOrCreate()
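
Because spark.sql.defaultCatalog points at the RMS catalog in this script, queries can omit the catalog prefix. For example, assuming the salesdb schema and store_sales table created earlier in this post:

# The catalog prefix can be omitted because spark.sql.defaultCatalog is set to rmscatalog
spark.sql("SHOW TABLES IN salesdb").show()
spark.sql("SELECT * FROM salesdb.store_sales").show()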
