ETL/ELT all your data into Amazon Redshift DW


Amazon Redshift is fast, scalable, and easy to use, which makes it a popular data warehouse solution. Redshift is straightforward to query with SQL, efficient for analytical queries, and can be a simple add-on for any organization operating its tech stack on AWS.

Amazon Web Services has many benefits. Whether you choose it for the pay-as-you-go pricing, the high performance and speed, or the versatility and flexibility of its services, we are here to present the data loading approaches that work best for us.

Etlworks lets you load data from cloud storage, APIs, SQL and NoSQL databases, and web services into the Amazon Redshift data warehouse in a few simple steps. You can configure and schedule the flow using an intuitive drag-and-drop interface and let Etlworks do the rest.

Etlworks supports more than one-time data loading. It can integrate your data sources with Amazon Redshift and keep the warehouse updated with fresh data automatically, with no additional effort or involvement!

Today we are going to examine how to load data into Amazon Redshift.

A typical Redshift flow performs the following operations:

  • Extract data from the source.
  • Create CSV files.
  • Compress files using the gzip algorithm.
  • Copy files into an Amazon S3 bucket.
  • Check whether the destination Amazon Redshift table exists and, if it does not, create it using metadata from the source.
  • Execute the Amazon Redshift COPY command (sketched right after this list).
  • Clean up the remaining files.
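
Under the hood, the last few steps boil down to plain Redshift SQL. The sketch below shows roughly what they look like; the table, bucket, IAM role, and column names are hypothetical placeholders, not values produced by Etlworks.

    -- Create the destination table if it does not exist. In the real flow the
    -- columns are derived from the source metadata; these are illustrative.
    CREATE TABLE IF NOT EXISTS public.orders (
        order_id   BIGINT,
        customer   VARCHAR(256),
        amount     DECIMAL(12,2),
        created_at TIMESTAMP
    );

    -- Load the gzipped CSV files staged in S3 into the destination table.
    COPY public.orders
    FROM 's3://my-etl-bucket/staging/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-load-role'
    FORMAT AS CSV
    IGNOREHEADER 1
    GZIP;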

A few prerequisites have to be met before you can design a flow that loads data into Amazon Redshift.

Once the prerequisites are in place, you are ready to create a Redshift flow. Start by opening the Flows window, clicking the + button, and typing redshift into the search field:

[Screenshot: Redshift flow types in the search results]

Continue by selecting the flow type, adding source-to-destination transformations and entering the transformation parameters:

[Screenshot: Redshift transformation parameters]

You can select one of the following sources (FROM) for the Redshift flow:

  • API – use any appropriate string as the source (FROM) name
  • Web Service – use any appropriate string as the source (FROM) name
  • File – use the source file name or a wildcard filename as the source (FROM) name
  • Database – use the table name as the source (FROM) name
  • CDC – use the fully qualified table name as the source (FROM) name
  • Queue – use the queue topic name as the source (FROM) name

For most Redshift flows, the destination (TO) is going to be an Amazon S3 connection. To configure the final destination, click the Connections tab and select the available Amazon Redshift connection.

[Screenshot: Redshift connection]

Amazon Redshift can load data in CSV, JSON, and Avro formats, but Etlworks supports loading only from CSV, so you will need to create a new CSV format and set it as the destination format. If you are loading large datasets into Amazon Redshift, consider configuring the format to split the document into smaller files. Amazon Redshift can load files in parallel, and transferring smaller files over the network can be faster.
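
When the output is split into multiple gzipped parts, a single COPY statement with a common key prefix picks up all of them, and Redshift loads the matching files in parallel. The bucket, prefix, and role below are hypothetical placeholders.

    -- Hedged sketch: the key prefix matches every split part
    -- (orders_000.csv.gz, orders_001.csv.gz, ...), and Redshift
    -- loads the matching files in parallel.
    COPY public.orders
    FROM 's3://my-etl-bucket/staging/orders/orders_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-load-role'
    FORMAT AS CSV
    GZIP;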

If necessary, you can create a mapping between the source and destination (Redshift) fields.

Mapping is not required, but keep in mind that if a source field name is not supported by Redshift, Redshift will return an error and the data will not be loaded into the database. For example, if you are loading data from Google Analytics, the output (source) is going to include fields with the prefix ga: (ga:user, ga:browser, etc.). Unfortunately, Amazon Redshift does not support field names containing a colon, so the data will be rejected. If that happens, you can use mapping to rename the destination fields.
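
As a rough illustration, after such a mapping the destination table could end up with Redshift-friendly column names similar to the hypothetical sketch below; the table and column names are placeholders, not something Etlworks generates verbatim.

    -- Hedged sketch: Google Analytics fields renamed through mapping so that
    -- the destination columns use underscores instead of colons.
    CREATE TABLE IF NOT EXISTS public.ga_sessions (
        ga_user    VARCHAR(256),   -- mapped from ga:user
        ga_browser VARCHAR(256),   -- mapped from ga:browser
        ga_date    DATE            -- mapped from ga:date
    );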

ELT for Amazon Redshift

Amazon Redshift provides affordable and nearly unlimited computing power, which allows you to load data into Amazon Redshift as-is, without pre-aggregation, and to process and transform all of it quickly when executing analytical queries. Thus, the ETL (Extract-Transform-Load) approach becomes ELT (Extract-Load-Transform). This can greatly simplify loading data into Amazon Redshift, since you don't need to think about the necessary transformations up front.

Etlworks supports executing complex ELT scripts directly in Amazon Redshift, which greatly improves the performance and reliability of data ingestion.
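
As a rough illustration of what such a script might do, the hypothetical sketch below aggregates raw, as-loaded rows inside Redshift with plain SQL; the schema, table, and column names are illustrative only.

    -- Hedged sketch: ELT step executed inside Redshift after the raw data
    -- has been loaded into a staging table.
    INSERT INTO analytics.daily_revenue (order_date, total_amount, order_count)
    SELECT
        CAST(created_at AS DATE) AS order_date,
        SUM(amount)              AS total_amount,
        COUNT(*)                 AS order_count
    FROM staging.orders
    GROUP BY CAST(created_at AS DATE);

    -- Optionally clear the staging table once the transformation has finished.
    DELETE FROM staging.orders;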

I hope this has been helpful. Go forth and load large amounts of data.

Why Etlworks?


If you are reading this, you have most likely already answered the question:

“why do I need a data integration tool anyway?”

The answer is probably any combination of the things below:

  • I’m tired of dealing with complicated inputs and outputs and don’t want to write custom code anymore.
  • I need a way to connect to all my data, regardless of its format and location.
  • I need a simple way to transform my data from one format to another.
  • I need to track changes in my transactional database and push them to my data warehouse.
  • I need to connect to the external and internal APIs with different authentication schemas, requests, and responses.
  • I want to create new APIs with just a few mouse clicks, without writing any code.
  • Generally speaking, I’d prefer not to write any code at all.
  • I don’t want to worry about backups, performance, and job monitoring. I want someone to manage my data integration tool for me.

If this sounds familiar, you are probably in the process of selecting the best tool for the task at hand. The truth is, most modern data integration tools are quite good and can significantly simplify your life.

Etlworks is so much better.

By selecting Etlworks as your data integration tool, you will be able to implement very complex data integration flows faster and with fewer steps.

The key advantages of Etlworks:

  • It automatically parses even the most complicated JSON and XML documents (as well as other formats) and can connect to all your APIs and databases.
  • It is built for the cloud but works equally well when installed on-premises.
  • You can visualize and explore all your data, regardless of the format and location, before creating any data integration flows.
  • You probably won’t need to write any code at all, but even if you do, it will be just a few lines in a familiar language: SQL or JavaScript.
  • You can use SQL to extract and flatten data from nested documents.
  • Etlworks is a full-blown enterprise service bus (ESB) so you can create data integration APIs with just a few mouse clicks.
  • Etlworks can integrate data behind the corporate firewall when working together with the data integration agent. Read more in this blog post.
  • We provide world-class support for all customers.
  • Our service is so affordable that you won’t have to get board approval in order to use it.
  • No sales call necessary! Just sign up and start using it right away.

Test drive Etlworks for 14 days free of charge.