In today's complex enterprise landscape, organizations are challenged to consume data from a variety of sources and keep up with data that pours in all through the day. There is a demand to design applications that allow data to be portable across cloud platforms and give teams the ability to derive insights from multiple data sources to stay competitive. In this post, we demonstrate how the AWS Glue integration with Snowflake simplifies the process of connecting to Snowflake and applying data transformations without writing a single line of code. With AWS Glue Studio, you can now use a simple visual interface to compose migration jobs that move and integrate data. It enables you to subscribe to a Snowflake connector in AWS Marketplace, query Snowflake tables, and save the data in Amazon Simple Storage Service (Amazon S3) in Parquet format.
If you choose to bring your own custom connector or prefer a different connector from AWS Marketplace, follow the steps in the blog post Performing data transformations using Snowflake and AWS Glue. In this post, we use the new AWS Glue Connector for Snowflake to seamlessly connect to Snowflake without the need to install JDBC drivers. To validate the ingested data, we use Amazon Redshift Spectrum to create an external table and query the data in Amazon S3. With Amazon Redshift Spectrum, you can efficiently query and retrieve data from files in Amazon S3 without having to load the data into Amazon Redshift tables.
Solution Overview
Let's take a look at the architecture diagram showing how AWS Glue connects to Snowflake for data ingestion.
Prerequisites
Before you begin, make sure you have the following:
- An account in Snowflake, specifically a service account that has permissions on the tables to be queried.
- AWS Identity and Access Management (IAM) permissions in place to create AWS Glue and Amazon Redshift service roles and policies. To configure these, follow the instructions in Setting up IAM Permissions for AWS Glue and Create an IAM role for Amazon Redshift.
- An Amazon Redshift Serverless endpoint. If you don't have one configured, follow the instructions in Amazon Redshift Serverless Analytics.
Configure the Amazon S3 VPC Endpoint
As a first step, we configure an Amazon S3 VPC endpoint to enable AWS Glue to use a private IP address to access Amazon S3 with no exposure to the public internet. Complete the following steps; a scripted equivalent is sketched after the list.
- Open the Amazon VPC console.
- In the left navigation pane, choose Endpoints.
- Choose Create Endpoint, and follow the steps to create an Amazon S3 VPC endpoint of type Gateway.
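If you prefer to create the endpoint programmatically, here is a minimal sketch using boto3; the Region, VPC ID, and route table ID are placeholder assumptions that you would replace with your own values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Gateway endpoint for S3 so Glue traffic to S3 stays on the AWS network.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for your Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```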
Next, we create a secret using AWS Secrets Manager (a boto3 equivalent is sketched after the list).
- On the AWS Secrets Manager console, choose Store a new secret.
- For Secret type, select Other type of secret.
- Enter a key as sfUser and the value as your Snowflake user name.
- Enter a key as sfPassword and the value as your Snowflake user password.
- Choose Next.
- Name the secret snowflake_credentials and follow through the rest of the steps to store the secret.
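The same secret can be created with boto3, as in this minimal sketch; the user name and password values are placeholders.

```python
import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Store the Snowflake credentials under the keys the Glue connector expects.
secrets.create_secret(
    Name="snowflake_credentials",
    SecretString=json.dumps({
        "sfUser": "YOUR_SNOWFLAKE_USER",          # placeholder
        "sfPassword": "YOUR_SNOWFLAKE_PASSWORD",  # placeholder
    }),
)
```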
Subscribe to the AWS Marketplace Snowflake Connector
To subscribe to the connector, follow these steps to activate the Snowflake Connector for AWS Glue. This native connector simplifies the process of connecting AWS Glue jobs to extract data from Snowflake.
- Navigate to the Snowflake Connector for AWS Glue in AWS Marketplace.
- Choose Continue to Subscribe.
- Review the terms and conditions, pricing, and other details.
- Choose Continue to Configuration.
- For Delivery Method, choose your delivery method.
- For Software version, choose your software version.
- Choose Continue to Launch.
- Under Usage instructions, choose Activate the Glue connector in AWS Glue Studio. You're redirected to AWS Glue Studio.
- For Name, enter a name for your connection (for example, snowflake_s3_glue_connection).
- Optionally, choose a VPC, subnet, and security group.
- For AWS Secret, choose snowflake_credentials.
- Choose Create connection.
A message appears confirming that the connection was successfully created, and the connection is now visible on the AWS Glue Studio console.
Configure AWS Glue for Snowflake JDBC connectivity
Next, we configure an AWS Glue job by following the steps below to extract data.
- On the AWS Glue console, choose AWS Glue Studio in the left navigation pane.
- On the AWS Glue Studio console, choose Jobs in the left navigation pane.
- Create a job with "Visual with source and target" and choose the Snowflake connector for AWS Glue 3.0 as the source and Amazon S3 as the target.
- Enter a name for the job.
- Under Job details, select an IAM role.
- Create a new IAM role if you don't already have one with the required AWS Glue and AWS Secrets Manager policies.
- Under Visual, choose the Data source – Connection node and choose the connection you created.
- In Connection options, create key-value pairs with the query as shown below. Note that the CUSTOMER table in the SNOWFLAKE_SAMPLE_DATA database is used for this migration. This table comes preloaded (1.5M rows) with the Snowflake sample data.
| Key | Value |
| --- | --- |
| query | SELECT C_CUSTKEY, C_NAME, C_ADDRESS, C_NATIONKEY, C_PHONE, C_ACCTBAL, C_MKTSEGMENT, C_COMMENT FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER |
| sfUrl | MXA94638.us-east-1.snowflakecomputing.com |
| sfDatabase | SNOWFLAKE_SAMPLE_DATA |
| sfWarehouse | COMPUTE_WH |
- In the Output schema section, specify the source schema as key-value pairs as shown below.
- Choose the Transform-ApplyMapping node to view the transform details.
- Choose the Data target properties – S3 node and enter the S3 bucket details as shown below.
- Choose Save.
After you save the job, the following script is generated. It assumes the account information and credentials are stored in AWS Secrets Manager as described earlier.
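The generated script is not reproduced here; the following is a minimal sketch of what a Glue 3.0 job using the marketplace connector broadly looks like, assuming the connection name and query shown above. The connection_type value, column type mappings, and S3 output path are assumptions (the bucket name is a placeholder), so treat this as illustrative rather than the exact generated code.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read from Snowflake through the marketplace connector; credentials are
# resolved from the Secrets Manager secret attached to the connection.
source_dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="marketplace.jdbc",
    connection_options={
        "query": "SELECT C_CUSTKEY, C_NAME, C_ADDRESS, C_NATIONKEY, "
                 "C_PHONE, C_ACCTBAL, C_MKTSEGMENT, C_COMMENT "
                 "FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER",
        "sfUrl": "MXA94638.us-east-1.snowflakecomputing.com",
        "sfDatabase": "SNOWFLAKE_SAMPLE_DATA",
        "sfWarehouse": "COMPUTE_WH",
        "connectionName": "snowflake_s3_glue_connection",
    },
    transformation_ctx="source_dyf",
)

# Map source columns to output columns (types here are assumptions).
mapped_dyf = ApplyMapping.apply(
    frame=source_dyf,
    mappings=[
        ("C_CUSTKEY", "decimal", "c_custkey", "decimal"),
        ("C_NAME", "string", "c_name", "string"),
        ("C_ADDRESS", "string", "c_address", "string"),
        ("C_NATIONKEY", "decimal", "c_nationkey", "decimal"),
        ("C_PHONE", "string", "c_phone", "string"),
        ("C_ACCTBAL", "decimal", "c_acctbal", "decimal"),
        ("C_MKTSEGMENT", "string", "c_mktsegment", "string"),
        ("C_COMMENT", "string", "c_comment", "string"),
    ],
    transformation_ctx="mapped_dyf",
)

# Write to S3 as Parquet; the bucket name is a placeholder.
glueContext.write_dynamic_frame.from_options(
    frame=mapped_dyf,
    connection_type="s3",
    connection_options={"path": "s3://your-bucket/customer/"},
    format="parquet",
    transformation_ctx="sink",
)
job.commit()
```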
- Choose Run to run the job.
After the job completes successfully, the run status should change to Succeeded.
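You can also check the run state programmatically; a quick sketch with boto3, where the job name is a placeholder:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Fetch the most recent run of the job; the job name is a placeholder.
runs = glue.get_job_runs(JobName="snowflake-to-s3-migration", MaxResults=1)
print(runs["JobRuns"][0]["JobRunState"])  # e.g., RUNNING or SUCCEEDED
```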
The following screenshot shows that the data was written to Amazon S3.
Query Amazon S3 data using Amazon Redshift Spectrum
Let's query the data using Amazon Redshift Spectrum.
- On the Amazon Redshift console, choose your AWS Region.
- In the left navigation pane, choose Query Editor.
- Run the CREATE EXTERNAL TABLE DDL; a sketch is given after this list.
- Run a SELECT query on the CUSTOMER table.
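The original post's DDL is not reproduced here, so the following is a minimal sketch executed through the Redshift Data API; the workgroup name, database, IAM role ARN, external schema name, S3 path, and column sizes are all assumptions.

```python
import time

import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

# External schema backed by the AWS Glue Data Catalog (role ARN is a placeholder).
create_schema = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS snowflake_migration
FROM DATA CATALOG DATABASE 'snowflake_migration'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS
"""

# External table over the Parquet files written by the Glue job.
create_table = """
CREATE EXTERNAL TABLE snowflake_migration.customer (
    c_custkey    BIGINT,
    c_name       VARCHAR(25),
    c_address    VARCHAR(40),
    c_nationkey  BIGINT,
    c_phone      VARCHAR(15),
    c_acctbal    DECIMAL(12, 2),
    c_mktsegment VARCHAR(10),
    c_comment    VARCHAR(117)
)
STORED AS PARQUET
LOCATION 's3://your-bucket/customer/'
"""

select_query = "SELECT c_mktsegment, COUNT(*) FROM snowflake_migration.customer GROUP BY 1"


def run(sql):
    # Statements run asynchronously; poll describe_statement until done.
    stmt = client.execute_statement(
        WorkgroupName="default",  # placeholder Redshift Serverless workgroup
        Database="dev",           # placeholder database
        Sql=sql,
    )
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            return desc
        time.sleep(1)


for sql in (create_schema, create_table, select_query):
    print(run(sql)["Status"])
```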
Considerations
The AWS Glue crawler doesn't work directly with Snowflake. Crawling is a native capability that you can use for other AWS data sources that are joined or related in the AWS Glue ETL job. Instead, you can define connections in the script as shown earlier in this post.
The Snowflake source tables covered in this post focus only on structured data types, so semi-structured and unstructured data types in Snowflake (binary, varbinary, and variant) are out of scope. However, you could use AWS Glue functions such as relationalize to flatten nested schema data into semi-normalized structures, or you could use Amazon Redshift Spectrum to support these data types.
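As an illustration of the relationalize approach, here is a minimal sketch to run inside a Glue job; the input path, staging path, and nested dataset are assumptions.

```python
from awsglue.context import GlueContext
from awsglue.transforms import Relationalize
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Assumption: a JSON dataset with nested fields at this placeholder path.
nested_dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://your-bucket/nested-json/"]},
    format="json",
)

# Relationalize flattens nested structures into a collection of flat tables,
# writing intermediate data to the staging path.
flattened = Relationalize.apply(
    frame=nested_dyf,
    staging_path="s3://your-bucket/temp/",
    name="root",
)
print(flattened.keys())              # names of the flattened tables
root_dyf = flattened.select("root")  # the top-level flattened table
```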
Conclusion
In this post, we learned how to define Snowflake connection parameters in AWS Glue, connect to Snowflake from AWS Glue using the AWS native connector for Snowflake, migrate data to Amazon S3, and use Redshift Spectrum to query data in Amazon S3 to meet your business needs.
We welcome any thoughts or questions in the comments section below.
About the Authors
Sindhu Achuthan is a Data Architect with Global Financial Services at Amazon Web Services. She works with customers to provide architectural guidance for analytics solutions on AWS Glue, Amazon Redshift, AWS Lambda, and other services. Outside work, she is a DIYer, loves to go on long trails, and does yoga.
Shayon Sanyal is a Sr. Solutions Architect specializing in databases at AWS. His day job allows him to help AWS customers design scalable, secure, performant, and robust database architectures in the cloud. Outside work, you can find him hiking, traveling, or training for the next half-marathon.