Saturday, May 10, 2025

Configure cross-account access of Amazon SageMaker Lakehouse multi-catalog tables using AWS Glue 5.0 Spark

An IAM role, Glue-execution-role, in the consumer account, with the following policies:

  1. The AWS managed policies AWSGlueServiceRole and AmazonRedshiftDataFullAccess.
  2. Create a new inline policy with the following permissions and attach it:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "iam:PassedToService": "glue.amazonaws.com"
                    }
                }
            }
        ]
    }

  3. Add the following trust policy to Glue-execution-role, allowing AWS Glue to assume this role (a scripted equivalent follows this list):
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "glue.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
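If you prefer to script the role setup instead of using the console, the following boto3 sketch creates Glue-execution-role with the trust policy and attaches the policies above. This is a minimal sketch under the post's naming, not an official setup script; the caller needs IAM permissions to create and modify roles.

import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["glue.amazonaws.com"]},
        "Action": "sts:AssumeRole",
    }],
}

inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LFandRSserverlessAccess",
            "Effect": "Allow",
            "Action": ["lakeformation:GetDataAccess", "redshift-serverless:GetCredentials"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {"StringEquals": {"iam:PassedToService": "glue.amazonaws.com"}},
        },
    ],
}

# Create the role with the Glue trust policy
iam.create_role(
    RoleName="Glue-execution-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the two AWS managed policies
for policy_arn in [
    "arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole",
    "arn:aws:iam::aws:policy/AmazonRedshiftDataFullAccess",
]:
    iam.attach_role_policy(RoleName="Glue-execution-role", PolicyArn=policy_arn)

# Add the inline policy for Lake Formation and Redshift Serverless access
iam.put_role_policy(
    RoleName="Glue-execution-role",
    PolicyName="LFandRSserverlessAccess",
    PolicyDocument=json.dumps(inline_policy),
)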

Steps for producer account setup

For the producer account setup, you can either use your IAM administrator role added as a Lake Formation administrator or use a Lake Formation administrator role with permissions added as discussed in the prerequisites. For illustration purposes, we use the IAM admin role Admin added as a Lake Formation administrator.

002-BDB 5089

Configure your catalog

Complete the following steps to set up your catalog:

  1. Log in to the AWS Management Console as Admin.
  2. On the Amazon Redshift console, follow the instructions in Registering Amazon Redshift clusters and namespaces to the AWS Glue Data Catalog.
  3. After the registration is initiated, you will see the invitation from Amazon Redshift on the Lake Formation console.
  4. Select the pending catalog invitation and choose Approve and create catalog.

003-BDB 5089

  1. On the Set catalog details page, configure your catalog:
    1. For Name, enter a name (for this post, redshiftserverless1-uswest2).
    2. Select Access this catalog from Apache Iceberg compatible engines.
    3. Choose the IAM role you created for the data transfer.
    4. Choose Next.

    004-BDB 5089

  2. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for both Catalog permissions and Grantable permissions.
    2. Choose Add.

    005-BDB 5089

  3. Verify the granted permissions on the next page and choose Next.
    006-BDB 5089
  4. Review the details on the Review and create page and choose Create catalog.
    007-BDB 5089

Wait a few seconds for the catalog to show up.

  1. Choose Catalogs in the navigation pane and verify that the redshiftserverless1-uswest2 catalog is created.
    008-BDB 5089
  2. Explore the catalog detail page to verify the ordersdb.public database.
    009-BDB 5089
  3. On the database View dropdown menu, view the tables and verify that the orderstbl table shows up.
    010-BDB 5089

As the Admin role, you can also query the orderstbl table in Amazon Athena and confirm the data is available.

011-BDB 5089

Grant permissions on the tables from the producer account to the consumer account

In this step, we share the Amazon Redshift federated catalog database redshiftserverless1-uswest2:ordersdb.public and the table orderstbl, as well as the Amazon S3 based Iceberg table returnstbl_iceberg and its database customerdb from the default catalog, with the consumer account. We can't share the entire catalog to external accounts as a catalog-level permission; we share just the database and table.

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
    012-BDB 5089
  3. Under Principals, select External accounts.
  4. Provide the consumer account ID.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose the account ID that represents the default catalog.
  7. For Databases, choose customerdb.
    013-BDB 5089
  8. Under Database permissions, select Describe for both Database permissions and Grantable permissions.
  9. Choose Grant.
    014-BDB 5089
  10. Repeat these steps and grant table-level Select and Describe permissions on returnstbl_iceberg.
  11. Repeat these steps again to grant database- and table-level permissions for the orderstbl table of the federated catalog database redshiftserverless1-uswest2/ordersdb.

The following screenshots show the configuration for database-level permissions.

015-BDB 5089

016-BDB 5089

The following screenshots show the configuration for table-level permissions.

017-BDB 5089

018-BDB 5089

  1. Choose Data permissions in the navigation pane and verify that the consumer account has been granted database- and table-level permissions for both orderstbl from the federated catalog and returnstbl_iceberg from the default catalog.
    019-BDB 5089
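The same cross-account grants can also be scripted with the Lake Formation API. The boto3 sketch below is an illustrative equivalent of a few representative console grants, run from the producer account; the account IDs are placeholders, and the catalog ID used to address the federated database is an assumption, so verify it in your environment.

import boto3

lakeformation = boto3.client("lakeformation", region_name="us-west-2")

CONSUMER_ACCOUNT_ID = "111122223333"   # placeholder: the consumer account ID
PRODUCER_ACCOUNT_ID = "444455556666"   # placeholder: the producer (current) account ID

# Grant Describe (with grant option) on the default catalog database customerdb
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT_ID},
    Resource={"Database": {"CatalogId": PRODUCER_ACCOUNT_ID, "Name": "customerdb"}},
    Permissions=["DESCRIBE"],
    PermissionsWithGrantOption=["DESCRIBE"],
)

# Grant Select and Describe on the Iceberg table returnstbl_iceberg
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT_ID},
    Resource={"Table": {
        "CatalogId": PRODUCER_ACCOUNT_ID,
        "DatabaseName": "customerdb",
        "Name": "returnstbl_iceberg",
    }},
    Permissions=["SELECT", "DESCRIBE"],
    PermissionsWithGrantOption=["SELECT", "DESCRIBE"],
)

# Grant on the federated catalog table orderstbl; the nested catalog ID format
# below (account ID plus the federated catalog path) is an assumption.
federated_catalog_id = f"{PRODUCER_ACCOUNT_ID}:redshiftserverless1-uswest2/ordersdb"
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT_ID},
    Resource={"Table": {
        "CatalogId": federated_catalog_id,
        "DatabaseName": "public",
        "Name": "orderstbl",
    }},
    Permissions=["SELECT", "DESCRIBE"],
    PermissionsWithGrantOption=["SELECT", "DESCRIBE"],
)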

Register the Amazon S3 location of returnstbl_iceberg with Lake Formation

In this step, we register the Amazon S3 based Iceberg table returnstbl_iceberg data location with Lake Formation so that it is managed by Lake Formation permissions. Complete the following steps:

  1. On the Lake Formation console, choose Data lake locations in the navigation pane.
  2. Choose Register location.
    020-BDB 5089
  3. For Amazon S3 path, enter the path for your S3 bucket that you provided while creating the Iceberg table returnstbl_iceberg.
  4. For IAM role, provide the user-defined role LakeFormationS3Registration_custom that you created as a prerequisite.
  5. For Permission mode, select Lake Formation.
  6. Choose Register location.
    021-BDB 5089
  7. Choose Data lake locations in the navigation pane to verify the Amazon S3 registration.
    022-BDB 5089
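For reference, the same registration can be performed with the Lake Formation API. A minimal boto3 sketch, assuming a placeholder bucket, prefix, and account ID, and the LakeFormationS3Registration_custom role from the prerequisites:

import boto3

lakeformation = boto3.client("lakeformation", region_name="us-west-2")

# Register the S3 path backing returnstbl_iceberg in Lake Formation permission mode.
# The bucket name, prefix, and account ID below are placeholders.
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::your-bucket/returnstbl_iceberg",
    RoleArn="arn:aws:iam::444455556666:role/LakeFormationS3Registration_custom",
    UseServiceLinkedRole=False,
)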

With this step, the producer account setup is complete.

Steps for consumer account setup

For the consumer account setup, we use the IAM admin role Admin, added as a Lake Formation administrator.

The steps in the consumer account are fairly involved. In the consumer account, a Lake Formation administrator will accept the AWS Resource Access Manager (AWS RAM) shares and create the required resource links that point to the shared catalog, database, and tables. The Lake Formation admin verifies that the shared resources are accessible by running test queries in Athena. The admin further grants permissions to the role Glue-execution-role on the resource links, database, and tables. The admin then runs a join query in AWS Glue 5.0 Spark using Glue-execution-role.

Accept and verify the shared resources

Lake Formation uses AWS RAM shares to enable cross-account sharing, with Data Catalog resource policies managed through AWS RAM. To view and verify the resources shared from the producer account, complete the following steps:

  1. Log in to the consumer AWS console and set the AWS Region to match the producer's shared resource Region. For this post, we use us-west-2.
  2. Open the Lake Formation console. You will see a message indicating there is a pending invitation and asking you to accept it on the AWS RAM console.
    023-BDB 5089
  3. Follow the instructions in Accepting a resource share invitation from AWS RAM to review and accept the pending invitations.
  4. When the invitation status changes to Accepted, choose Shared resources under Shared with me in the navigation pane.
  5. Verify that the Redshift Serverless federated catalog redshiftserverless1-uswest2, the default catalog database customerdb, the table returnstbl_iceberg, and the producer account ID under the Owner ID column display correctly.
    024-BDB 5089
  6. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  7. Search by the producer account ID.
    You should see the customerdb and public databases. You can further select each database and choose View tables on the Actions dropdown menu to verify the table names.

025-BDB 5089

You will not see an AWS RAM share invitation at the catalog level on the Lake Formation console, because catalog-level sharing isn't possible. You can review the shared federated catalog and Amazon Redshift managed catalog names on the AWS RAM console, or by using the AWS Command Line Interface (AWS CLI) or SDK.
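For example, a boto3 sketch along these lines can accept the pending invitation and then list every resource shared with the consumer account, including the catalog-level shares that don't appear on the Lake Formation console:

import boto3

ram = boto3.client("ram", region_name="us-west-2")

# Accept any pending resource share invitations from the producer account
invites = ram.get_resource_share_invitations()["resourceShareInvitations"]
for invite in invites:
    if invite["status"] == "PENDING":
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invite["resourceShareInvitationArn"]
        )

# List everything shared with this account, including catalog-level resources
resources = ram.list_resources(resourceOwner="OTHER-ACCOUNTS")["resources"]
for resource in resources:
    print(resource["type"], resource["arn"])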

Create a catalog link container and resource links

A catalog link container is a Data Catalog object that references a local or cross-account federated database-level catalog from other AWS accounts. For more details, refer to Accessing a shared federated catalog. Catalog link containers are essentially Lake Formation resource links at the catalog level that reference or point to a Redshift cluster federated catalog or an Amazon Redshift managed catalog object from another account.

In the following steps, we create a catalog link container that points to the producer's shared federated catalog redshiftserverless1-uswest2. Inside the catalog link container, we create a database. Inside the database, we create a resource link for the table that points to the shared federated catalog table <producer-account-id>:redshiftserverless1-uswest2/ordersdb.public.orderstbl.

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Catalogs.
  2. Choose Create catalog.

026-BDB 5089

  1. Provide the following details for the catalog:
    1. For Name, enter a name for the catalog (for this post, rl_link_container_ordersdb).
    2. For Type, choose Catalog Link container.
    3. For Source, choose Redshift.
    4. For Target Redshift Catalog, enter the Amazon Resource Name (ARN) of the producer federated catalog (arn:aws:glue:us-west-2:<producer-account-id>:catalog/redshiftserverless1-uswest2/ordersdb).
    5. Under Access from engines, select Access this catalog from Apache Iceberg compatible engines.
    6. For IAM role, provide the Redshift-S3 data transfer role that you created in the prerequisites.
    7. Choose Next.

027-BDB 5089

  1. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for both Catalog permissions and Grantable permissions.
    2. Choose Add and then choose Next.

028-BDB 5089

  1. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

029-BDB 5089

  1. In the navigation pane, choose Catalogs.
  2. Verify that rl_link_container_ordersdb is created.

030-BDB 5089

Create a database under rl_link_container_ordersdb

Complete the following steps:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose rl_link_container_ordersdb.
  3. Choose Create database.

Alternatively, you can choose the Create dropdown menu and then choose Database.

  1. Provide details for the database:
    1. For Name, enter a name (for this post, public_db).
    2. For Catalog, choose rl_link_container_ordersdb.
    3. Leave Location – optional blank.
    4. Under Default permissions for newly created tables, deselect Use only IAM access control for new tables in this database.
    5. Choose Create database.

031-BDB 5089

  1. Choose Catalogs in the navigation pane to verify that public_db is created under rl_link_container_ordersdb.

032-BDB 5089

Create a table resource link for the shared federated catalog table

A resource link to a shared federated catalog table can reside only inside a database of a catalog link container. A resource link for such tables will not work if created inside the default catalog. For more details on resource links, refer to Creating a resource link to a shared Data Catalog table.

Complete the following steps to create a table resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Tables.
  2. On the Create dropdown menu, choose Resource link.

033-BDB 5089

  1. Provide details for the table resource link:
    1. For Resource link name, enter a name (for this post, rl_orderstbl).
    2. For Destination catalog, choose rl_link_container_ordersdb.
    3. For Database, choose public_db.
    4. For Shared table's region, choose US West (Oregon).
    5. For Shared table, choose orderstbl.
    6. After the shared table is selected, Shared table's database and Shared table's catalog ID should be automatically populated.
    7. Choose Create.

034-BDB 5089

  1. In the navigation pane, choose Databases to verify that rl_orderstbl is created under public_db, inside rl_link_container_ordersdb.

035-BDB 5089

036-BDB 5089
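The table resource link can also be created with the Glue API. The following boto3 sketch is illustrative only: the CatalogId values used to address the catalog link container and the producer's federated catalog are assumptions, so confirm the exact identifiers in your account before relying on it.

import boto3

glue = boto3.client("glue", region_name="us-west-2")

CONSUMER_ACCOUNT_ID = "111122223333"   # placeholder
PRODUCER_ACCOUNT_ID = "444455556666"   # placeholder

# Create rl_orderstbl inside public_db of the catalog link container.
# Both CatalogId values below are assumptions about how the nested catalogs are addressed.
glue.create_table(
    CatalogId=f"{CONSUMER_ACCOUNT_ID}:rl_link_container_ordersdb",
    DatabaseName="public_db",
    TableInput={
        "Name": "rl_orderstbl",
        "TargetTable": {
            "CatalogId": f"{PRODUCER_ACCOUNT_ID}:redshiftserverless1-uswest2/ordersdb",
            "DatabaseName": "public",
            "Name": "orderstbl",
            "Region": "us-west-2",
        },
    },
)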

Create a database resource link for the shared default catalog database

Now we create a database resource link in the default catalog to query the Amazon S3 based Iceberg table shared from the producer. For details on database resource links, refer to Creating a resource link to a shared Data Catalog database.

Although we can see the shared database in the default catalog of the consumer, a resource link is required to query it from analytics engines, such as Athena, Amazon EMR, and AWS Glue. When using AWS Glue with Lake Formation tables, the resource link must be named identically to the source account's resource. For additional details on using AWS Glue with Lake Formation, refer to Considerations and limitations.

Complete the following steps to create a database resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose the account ID to select the default catalog.
  3. Search for customerdb.

You should see the shared database name customerdb with the Owner account ID matching your producer account ID.

  1. Select customerdb, and on the Create dropdown menu, choose Resource link.
  2. Provide details for the resource link:
    1. For Resource link name, enter a name (for this post, customerdb).
    2. The rest of the fields should be already populated.
    3. Choose Create.
  3. In the navigation pane, choose Databases and verify that customerdb is created under the default catalog. Resource link names appear in italicized font.

037-BDB 5089
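Creating the database resource link is also possible through the Glue API. Here is a minimal boto3 sketch, assuming a placeholder producer account ID:

import boto3

glue = boto3.client("glue", region_name="us-west-2")

PRODUCER_ACCOUNT_ID = "444455556666"   # placeholder

# Create a resource link named customerdb in the consumer's default catalog,
# pointing at the shared customerdb database in the producer account.
glue.create_database(
    DatabaseInput={
        "Name": "customerdb",
        "TargetDatabase": {
            "CatalogId": PRODUCER_ACCOUNT_ID,
            "DatabaseName": "customerdb",
            "Region": "us-west-2",
        },
    }
)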

Verify access as Admin using Athena

Now you can verify your access using Athena. Complete the following steps:

  1. Open the Athena console.
  2. Make sure an S3 bucket is available to store the Athena query results. For details, refer to Specify a query result location using the Athena console.
  3. In the navigation pane, verify both the default catalog and federated catalog tables by previewing them.
  4. You can also run a join query as follows. Note the three-part notation for referring to the tables from two different catalogs:
SELECT returns_tb.market AS Market, SUM(orders_tb.quantity) AS Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market;

038-BDB 5089
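The same join can also be submitted programmatically. A minimal boto3 sketch, assuming a placeholder query results bucket:

import time
import boto3

athena = boto3.client("athena", region_name="us-west-2")

QUERY = """
SELECT returns_tb.market AS Market, SUM(orders_tb.quantity) AS Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""

# Submit the cross-catalog join and poll until it finishes
run = athena.start_query_execution(
    QueryString=QUERY,
    ResultConfiguration={"OutputLocation": "s3://your-athena-results-bucket/"},  # placeholder
)
query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(state)
for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
    print([col.get("VarCharValue") for col in row["Data"]])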

This verifies the new capability of SageMaker Lakehouse, which allows accessing Redshift cluster tables and Amazon S3 based Iceberg tables in the same query, across AWS accounts, through the Data Catalog, using Lake Formation permissions.

Grant permissions to Glue-execution-role

Now we will share the resources from the producer account with additional IAM principals in the consumer account. Typically, the data lake admin grants permissions to data analysts, data scientists, and data engineers in the consumer account so they can do their job functions, such as processing and analyzing the data.

We set up Lake Formation permissions on the catalog link container, databases, tables, and resource links for the AWS Glue job execution role Glue-execution-role that we created in the prerequisites.

Resource links allow only Describe and Drop permissions. You need to use the Grant on target configuration to provide database Describe and table Select permissions.

Complete the following steps:

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select IAM users and roles.
  4. For IAM users and roles, enter Glue-execution-role.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose rl_link_container_ordersdb and the consumer account ID, which indicates the default catalog.
  7. Under Catalog permissions, select Describe for Catalog permissions.
  8. Choose Grant.

039-BDB 5089

040-BDB 5089

  1. Repeat these steps for the catalog rl_link_container_ordersdb:
    1. On the Databases dropdown menu, choose public_db.
    2. Under Database permissions, select Describe.
    3. Choose Grant.
  2. Repeat these steps again, but after choosing rl_link_container_ordersdb and public_db, on the Tables dropdown menu, choose rl_orderstbl.
    1. Under Resource link permissions, select Describe.
    2. Choose Grant.
  3. Repeat these steps to grant additional permissions to Glue-execution-role:
    1. For this iteration, grant Describe permissions on the default catalog databases public and customerdb.
    2. Grant Describe permission on the resource link customerdb.
    3. Grant Select permission on the tables returnstbl_iceberg and orderstbl.

The following screenshots show the configuration for the public and customerdb database permissions.

041-BDB 5089

042-BDB 5089

The following screenshots show the configuration for the customerdb resource link permissions.

043-BDB 5089

044-BDB 5089

The following screenshots show the configuration for the returnstbl_iceberg table permissions.

045-BDB 5089

046-BDB 5089

The following screenshots show the configuration for the orderstbl table permissions.

047-BDB 5089

048-BDB 5089

  1. In the navigation pane, choose Data permissions and verify the permissions on Glue-execution-role.

049-BDB 5089
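If you want to script these grants as well, the boto3 calls mirror the console choices. The sketch below covers two representative grants (Describe on the customerdb resource link and Select on its target table); the account IDs are placeholders, and the Grant on target behavior is expressed by granting directly on the shared table.

import boto3

lakeformation = boto3.client("lakeformation", region_name="us-west-2")

CONSUMER_ACCOUNT_ID = "111122223333"   # placeholder
PRODUCER_ACCOUNT_ID = "444455556666"   # placeholder
GLUE_ROLE_ARN = f"arn:aws:iam::{CONSUMER_ACCOUNT_ID}:role/Glue-execution-role"

# Describe on the customerdb resource link in the consumer's default catalog
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": GLUE_ROLE_ARN},
    Resource={"Database": {"CatalogId": CONSUMER_ACCOUNT_ID, "Name": "customerdb"}},
    Permissions=["DESCRIBE"],
)

# Select on the target table returnstbl_iceberg shared from the producer account
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": GLUE_ROLE_ARN},
    Resource={"Table": {
        "CatalogId": PRODUCER_ACCOUNT_ID,
        "DatabaseName": "customerdb",
        "Name": "returnstbl_iceberg",
    }},
    Permissions=["SELECT"],
)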

Run a PySpark job in AWS Glue 5.0

Download the PySpark script LakeHouseGlueSparkJob.py. This AWS Glue PySpark script runs Spark SQL, joining the producer-shared federated orderstbl table and the Amazon S3 based returns table in the consumer account to analyze the data and identify the total orders placed per market.

Replace the account ID placeholder in the script with your consumer account ID. Complete the following steps to create and run an AWS Glue job:

  1. On the AWS Glue console, in the navigation pane, choose ETL jobs.
  2. Choose Create job, then choose Script editor.

050-BDB 5089

  1. For Engine, choose Spark.
  2. For Options, choose Start fresh.
  3. Choose Upload script.
  4. Browse to the location where you downloaded and edited the script, select the script, and choose Open.
  5. On the Job details tab, provide the following information:
    1. For Name, enter a name (for this post, LakeHouseGlueSparkJob).
    2. Under Basic properties, for IAM role, choose Glue-execution-role.
    3. For Glue version, select Glue 5.0.
    4. Under Advanced properties, for Job parameters, choose Add new parameter.
    5. Add the parameters --datalake-formats = iceberg and --enable-lakeformation-fine-grained-access = true.
  6. Save the job.
  7. Choose Run to execute the AWS Glue job, and wait for the job to complete.
  8. Review the job run details from the Output logs.

051-BDB 5089

052-BDB 5089
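If you prefer to create and run the job with the SDK instead of the console, a hedged boto3 sketch follows; the script location bucket and worker sizing are placeholders, while the role, Glue version, and job parameters match the steps above.

import boto3

glue = boto3.client("glue", region_name="us-west-2")

# Create the Glue 5.0 Spark job with Lake Formation fine-grained access enabled
glue.create_job(
    Name="LakeHouseGlueSparkJob",
    Role="Glue-execution-role",
    GlueVersion="5.0",
    WorkerType="G.1X",          # placeholder sizing
    NumberOfWorkers=2,          # placeholder sizing
    Command={
        "Name": "glueetl",
        "PythonVersion": "3",
        "ScriptLocation": "s3://your-bucket/scripts/LakeHouseGlueSparkJob.py",  # placeholder
    },
    DefaultArguments={
        "--datalake-formats": "iceberg",
        "--enable-lakeformation-fine-grained-access": "true",
    },
)

# Start a run and print its ID; monitor it on the console or with get_job_run()
run = glue.start_job_run(JobName="LakeHouseGlueSparkJob")
print(run["JobRunId"])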

Clean up

To avoid incurring costs in your AWS accounts, clean up the resources you created:

  1. Delete the Lake Formation permissions, catalog link container, database, and tables in the consumer account.
  2. Delete the AWS Glue job in the consumer account.
  3. Delete the federated catalog, database, and table resources in the producer account.
  4. Delete the Redshift Serverless namespace in the producer account.
  5. Delete the S3 buckets you created as part of data transfer in both accounts, and the Athena query results bucket in the consumer account.
  6. Clean up the IAM roles you created for the SageMaker Lakehouse setup as part of the prerequisites.

Conclusion

In this post, we illustrated how to bring your existing Redshift tables into SageMaker Lakehouse and share them securely with external AWS accounts. We also showed how to query the shared data warehouse and data lakehouse tables in the same Spark session, from a recipient account, using Spark in AWS Glue 5.0.

We hope you find this useful for integrating your Redshift tables with an existing data mesh and accessing the tables using AWS Glue Spark. Try out this solution in your accounts and share feedback in the comments section. Stay tuned for more updates, and feel free to explore the features of SageMaker Lakehouse and newer AWS Glue versions.

Appendix: Table creation

Complete the following steps to create a returns table in the Amazon S3 based default catalog and an orders table in Amazon Redshift:

  1. Download the CSV format datasets orders and returns.
  2. Upload them to your S3 bucket under the corresponding table prefix path.
  3. Use the following SQL statements in Athena. First-time users of Athena should refer to Specify a query result location.
CREATE DATABASE customerdb;

CREATE EXTERNAL TABLE customerdb.returnstbl_csv(
  `returned` string,
  `order_id` string,
  `market` string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ';'
LOCATION
  's3://<your-bucket>/<returns-csv-prefix>/'
TBLPROPERTIES (
  'skip.header.line.count'='1'
);

SELECT * FROM customerdb.returnstbl_csv LIMIT 10;

053-BDB 5089

  1. Create an Iceberg format table in the default catalog and insert data from the CSV format table:
CREATE TABLE customerdb.returnstbl_iceberg(
  `returned` string,
  `order_id` string,
  `market` string)
LOCATION 's3://<your-bucket>/returnstbl_iceberg/'
TBLPROPERTIES (
  'table_type'='ICEBERG'
);

INSERT INTO customerdb.returnstbl_iceberg SELECT * FROM customerdb.returnstbl_csv;

SELECT * FROM customerdb.returnstbl_iceberg LIMIT 10;

054-BDB 5089

  1. To create the orders table in the Redshift Serverless namespace, open the Query Editor v2 on the Amazon Redshift console.
  2. Connect to the default namespace using your database admin user credentials.
  3. Run the following commands in the SQL editor to create the database ordersdb and the table orderstbl in it. Copy the data from your S3 location of the orders data into orderstbl:
create database ordersdb;
use ordersdb;
create table orderstbl(
  row_id int,
  order_id VARCHAR,
  order_date VARCHAR,
  ship_date VARCHAR,
  ship_mode VARCHAR,
  customer_id VARCHAR,
  customer_name VARCHAR,
  segment VARCHAR,
  city VARCHAR,
  state VARCHAR,
  country VARCHAR,
  postal_code int,
  market VARCHAR,
  region VARCHAR,
  product_id VARCHAR,
  category VARCHAR,
  sub_category VARCHAR,
  product_name VARCHAR,
  sales VARCHAR,
  quantity bigint,
  discount VARCHAR,
  profit VARCHAR,
  shipping_cost VARCHAR,
  order_priority VARCHAR
);
copy orderstbl from 's3://<your-bucket>/ordersdatacsv/orders.csv'
iam_role 'arn:aws:iam::<account-id>:role/service-role/<redshift-role-name>'
CSV
DELIMITER ';'
IGNOREHEADER 1;
select * from ordersdb.orderstbl limit 5;
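If you would rather not use Query Editor v2, the same statements can be submitted through the Redshift Data API. A minimal boto3 sketch, assuming the default Redshift Serverless workgroup name:

import boto3

redshift_data = boto3.client("redshift-data", region_name="us-west-2")

# Submit one statement at a time against the Redshift Serverless workgroup.
# "default" is an assumed workgroup name; "dev" is the initial database.
response = redshift_data.execute_statement(
    WorkgroupName="default",
    Database="dev",
    Sql="create database ordersdb;",
)

# Poll the statement status
status = redshift_data.describe_statement(Id=response["Id"])["Status"]
print(status)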

About the Authors

055-BDB 5089

Aarthi Srinivasan is a Senior Big Data Architect with Amazon SageMaker Lakehouse. She collaborates with the service team to enhance product features, works with AWS customers and partners to architect lakehouse solutions, and establishes best practices for data governance.

056-BDB 5089

Subhasis Sarkar is a Senior Data Engineer with Amazon. Subhasis thrives on solving complex technological challenges with innovative solutions. He specializes in AWS data architectures, particularly data mesh implementations using AWS CDK components.
