Snowflake - External Data Unloading
Snowflake supports cloud storage on the client side as well, which means a client can export data from Snowflake into its own cloud storage. At present, Snowflake supports three cloud providers – AWS S3, Microsoft Azure, and Google Cloud Platform. Locations in these clouds are known as External Stages. Snowflake also provides Snowflake-managed stages, which are known as Internal Stages.
External stages are client-side locations, whereas internal stages are used when the user works with files from the local system directory.
To unload data into an external cloud, the following setup is required −
An existing database and schema in Snowflake from which the data will be unloaded into AWS S3.
An external stage set up pointing to the AWS S3 bucket.
A file format that defines the structure of the files to be loaded into AWS S3.
In this chapter, we will discuss how to set up these requirements and unload data from tables into S3.
We have already created a database named TEST_DB, a schema named TEST_SCHEMA_1, and a table named TEST_TABLE. If these are not available, please create them as explained in the previous chapters.
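If the prerequisites are missing, they can be created with a few statements like the following. This is a minimal sketch – the column list of TEST_TABLE is an assumption for illustration, since the actual columns were defined in an earlier chapter.

```sql
-- Minimal setup sketch; the columns of TEST_TABLE are illustrative only.
CREATE DATABASE IF NOT EXISTS TEST_DB;
CREATE SCHEMA IF NOT EXISTS TEST_DB.TEST_SCHEMA_1;
CREATE TABLE IF NOT EXISTS TEST_DB.TEST_SCHEMA_1.TEST_TABLE (
    ID NUMBER,
    NAME VARCHAR,
    CREATED_AT TIMESTAMP_NTZ
);
```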
An external stage can be set up through Snowflake's user interface as well as by using SQL.
Using UI
To create an external stage, follow the instructions given below −
Log in to Snowflake and click Databases present in the top ribbon. In the Database view, click the database name TEST_DB. Next, click the Stages tab and then the Create button present at the top, as shown in the following screenshot −
It pops up the Create Stage dialog box; select amazon|s3 in the list and click Next, as shown below −
It takes you to the next screen, where the user should enter the following details −
Name − A user-defined name for the external stage. The same name is used when copying data from the stage to a table.
Schema Name − Select the schema in which the table resides.
URL − Provide the S3 URL from Amazon. It is unique based on the bucket name and keys.
AWS Key ID − Enter your AWS key ID.
AWS Secret Key − Enter your secret key to connect to AWS.
Encryption Master Key − Provide an encryption master key, if any.
After providing the details, click the Finish button. The following screenshot illustrates the above steps −
The user can see the newly created external stage in the View panel.
Using SQL
It is very easy to create an external stage using SQL. Just run the following query, providing all the details such as the name, AWS key ID, secret key, and master key, and it will create the stage. Note that string values such as the URL and credentials must be enclosed in single quotes.
CREATE STAGE "TEST_DB"."TEST_SCHEMA_1".Ext_S3_stage
URL = 's3://***/*****'
CREDENTIALS = (AWS_KEY_ID = '*********' AWS_SECRET_KEY = '********')
ENCRYPTION = (MASTER_KEY = '******');
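To confirm that the stage was created and inspect its properties, the following standard Snowflake commands can be used −

```sql
-- List the stages visible in the schema
SHOW STAGES IN SCHEMA TEST_DB.TEST_SCHEMA_1;

-- Show the properties (URL, credential placeholders, etc.) of the new stage
DESC STAGE TEST_DB.TEST_SCHEMA_1.Ext_S3_stage;
```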
A file format defines the structure of the files uploaded into S3. If the file structure doesn't match the table structure, loading will fail.
Using UI
To create a file format, follow the instructions given below −
Log in to Snowflake and click Databases present in the top ribbon. In the database view, click the database name TEST_DB.
Next, click the File Format tab followed by the Create button present at the top. It pops up the Create File Format dialog box. Enter the following details −
Name − Name of the file format.
Schema Name − The created file format can be utilized in the given schema only.
Format Type − The type of the file format, for example CSV.
Column separator − The field delimiter, if the file is a delimited (CSV) file.
Row separator − How a new line is identified.
Header lines to skip − 1 if a header row is present, else 0.
Other fields can be left as they are. Click the Finish button after entering these details.
The following screenshot displays the above details −
The user will be able to see the created file format in the View panel.
Using SQL
It is very easy to create a file format using SQL. Just run the following query, providing all the necessary details as shown below. String values must be enclosed in single quotes.
CREATE FILE FORMAT "TEST_DB"."TEST_SCHEMA_1".ext_csv
TYPE = 'CSV'
COMPRESSION = 'AUTO'
FIELD_DELIMITER = ','
RECORD_DELIMITER = '\n'
SKIP_HEADER = 0
FIELD_OPTIONALLY_ENCLOSED_BY = 'NONE'
TRIM_SPACE = FALSE
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
ESCAPE = 'NONE'
ESCAPE_UNENCLOSED_FIELD = '\134'
DATE_FORMAT = 'AUTO'
TIMESTAMP_FORMAT = 'AUTO'
NULL_IF = ('\\N');
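The created file format can be verified with a standard SHOW command −

```sql
-- List the file formats defined in the schema,
-- including their type, delimiters, and other options
SHOW FILE FORMATS IN SCHEMA TEST_DB.TEST_SCHEMA_1;
```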
Unload Data into S3
In this chapter, we have discussed setting up all the required parameters like stages, file formats, and the database to unload data into S3.
Now, to unload the data, run the following query −
Syntax
COPY INTO @<database_name>.<schema_name>.<external_stage_name>
FROM (SELECT * FROM <table_name>)
FILE_FORMAT = (FORMAT_NAME = <database_name>.<schema_name>.<file_format_name>);
Example
COPY INTO @test_db.test_schema_1.Ext_S3_stage
FROM (SELECT * FROM TEST_TABLE)
FILE_FORMAT = (FORMAT_NAME = test_db.test_schema_1.ext_csv);
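COPY INTO also accepts a number of optional parameters that are frequently useful when unloading. The sketch below shows a few of them; the sub-folder name unload/ is illustrative only.

```sql
-- Sketch: unload into a sub-folder of the stage with commonly used options.
COPY INTO @test_db.test_schema_1.Ext_S3_stage/unload/
FROM (SELECT * FROM TEST_TABLE)
FILE_FORMAT = (FORMAT_NAME = test_db.test_schema_1.ext_csv)
HEADER = TRUE              -- write column names as the first row of each file
OVERWRITE = TRUE           -- replace any files already present under the prefix
MAX_FILE_SIZE = 52428800;  -- split the output into files of at most ~50 MB
```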