Salesforce Data Integration

 


What is Salesforce?

Salesforce is the world’s #1 cloud-based customer relationship management (CRM) platform.

Salesforce offers a wide range of applications for managing business processes, including sales, customer service, marketing, and e-commerce. For many organizations, Salesforce is a rich source of customer data, such as Accounts, Opportunities, Services, Community, Activities, and Leads.

On its own, Salesforce can dramatically improve how companies run their sales operations, support their customers, and provide products and services to a market. Integration makes Salesforce even more valuable by enriching it with data.

Through integration, you bring data from disparate sources, such as marketing, support, e-commerce, and sales databases or applications, into the data warehouse.

Effective and efficient integration of Salesforce with adjacent enterprise systems — such as databases, ERP and CRM systems, and custom applications — is critical to enabling sales teams, increasing revenue, and better serving customers. By integrating Salesforce with other applications, APIs and resources, you make Salesforce even more valuable to your employees and your organization.

Ready to get started?

Etlworks is a cloud-native integration platform that helps businesses automate manual data management tasks, ensure far more accurate data, accelerate business workflows, and provide greater operational visibility to an organization.

The Etlworks Salesforce connector provides fast, real-time access to Salesforce data. The connector supports all objects and metadata (fields) available through the Salesforce API and works just like any other database connector.

You can load Salesforce Contacts, Leads, Opportunities, Attachments, Accounts, custom objects, etc. directly to/from major cloud and on-premise data sources, or synchronize data in both directions. Powerful mapping settings allow you to load and synchronize Salesforce data with sources that have a different data structure. You can also schedule your integration to execute automatically.

Let’s do it!

Extracting data from Salesforce

Note: extracting data from Salesforce is similar to extracting data from a relational database.

Step 1. Create a Salesforce connection, which will be used as the source (FROM).

Step 2. Create a destination connection, for example, a connection to a relational database, and, if needed, a format (a format is not needed if the destination is a database or a well-known API).

Step 3. Create a flow where the source is a database (the Salesforce connector works like a database) and the destination is the connection created in Step 2, for example, a relational database.


Step 4. Add new source-to-destination transformation.

Step 5. Select the Salesforce connection created in Step 1 as the source connection and select the Salesforce object you are extracting data from:

[Screenshot: selecting the Salesforce source (FROM)]

Step 6. Select the TO connection, a format (if needed), and the object (for example, a database table) to load data into:

[Screenshot: selecting the destination (TO)]

Step 7. Click MAPPING and optionally enter a Source Query (you don’t need a query if you are extracting data from the Salesforce object unconditionally):

[Screenshot: entering the Source Query]
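
For example, to extract only recently created leads, the source query could be a SOQL statement like the following (the field list and cutoff date are illustrative):

  SELECT Id, FirstName, LastName, Email, CreatedDate
  FROM Lead
  WHERE CreatedDate > 2019-01-01T00:00:00Z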

Step 8. Optionally define the per-field mapping.

[Screenshot: per-field mapping]

Step 9. Add more transformations if needed.

Loading data into Salesforce

Note: loading data into Salesforce is similar to loading data into a relational database.

Step 1. Create a source connection and a format (if needed).

Step 2. Create a destination Salesforce connection.

Step 3. Create a flow where the destination is a database (the Salesforce connection acts as the database destination).

Step 4. Add new source-to-destination transformation.

Step 5. Select the FROM and TO connections and objects (and a FROM format, if needed).


Step 6. Optionally define the per-field mapping.

Step 7. Add more transformations if needed.

Browsing data in Salesforce

You must have a Salesforce connection to browse objects and run SQL queries.

Use Explorer to browse data and metadata in Salesforce, as well as execute DML and SELECT queries against the Salesforce connection.

[Screenshot: browsing Salesforce in Explorer]

Change Replication and Data Synchronization

Loading data from Salesforce to your data warehouse is just part of the problem. Real-time analytics require the data in the data warehouse to be constantly up-to-date with Salesforce. In Etlworks, you can always have the most current data from Salesforce in your data warehouse by using High Watermark (HWM) change replication techniques.

After the first replication of all the Salesforce data, subsequent replications update the data warehouse incrementally with refreshes from Salesforce, in near real-time. The data warehouse will always be up-to-date within minutes, automatically, without any user intervention.
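
Conceptually, each HWM run remembers the highest value of a designated watermark column (for Salesforce objects, SystemModstamp is a natural choice), and the next run asks only for rows above it. A pseudo-SQL sketch, where {LAST_HIGH_WATERMARK} is a placeholder for the saved value, not actual Etlworks syntax:

  -- first run: extract everything and save MAX(SystemModstamp)
  SELECT Id, Name, SystemModstamp FROM Account

  -- subsequent runs: extract only the rows changed since the last run
  SELECT Id, Name, SystemModstamp
  FROM Account
  WHERE SystemModstamp > {LAST_HIGH_WATERMARK}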

Sign up free or get a demo today to learn more: https://etlworks.com

ETL/ELT all your data into Amazon Redshift DW


Amazon Redshift is fast, scalable, and easy to use, making it a popular data warehouse solution. Redshift is straightforward to query with SQL, efficient for analytical queries, and can be a simple add-on for any organization operating its tech stack on AWS.

Amazon Web Services has many benefits. Whether you choose it for its pay-as-you-go pricing, high performance and speed, or its versatile and flexible services, we are here to present the data loading approaches that work best for us.

Etlworks allows users to load data from cloud storage, APIs, SQL and NoSQL databases, and web services into the Amazon Redshift data warehouse in a few simple steps. You can configure and schedule the flow using an intuitive drag-and-drop interface and let Etlworks do the rest.

Etlworks supports more than a one-time data loading operation. It can help you integrate your data sources with Amazon Redshift and automatically keep Amazon Redshift updated with fresh data, with no additional effort or involvement!

Today we are going to examine how to load data into Amazon Redshift.

A typical Redshift flow performs the following operations:

  • Extract data from the source.
  • Create CSV files.
  • Compress files using the gzip algorithm.
  • Copy the files into an Amazon S3 bucket.
  • Check whether the destination Amazon Redshift table exists and, if it does not, create the table using metadata from the source.
  • Execute the Amazon Redshift COPY command (see the example after this list).
  • Clean up the remaining files.
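
The generated COPY command looks roughly like the following (the bucket name, IAM role, and table name are illustrative):

  COPY staging.orders
  FROM 's3://my-bucket/stage/orders_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
  FORMAT AS CSV
  GZIP
  IGNOREHEADER 1;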

A few prerequisites have to be met before you can design a flow that loads data into Amazon Redshift: at a minimum, you will need an Amazon S3 connection for staging the files and an Amazon Redshift connection for the final destination.

Now, you are ready to create a Redshift flow. Start by opening the Flows window, clicking the + button, and typing redshift into the search field:

[Screenshot: Redshift flow types]

Continue by selecting the flow type, adding source-to-destination transformations, and entering the transformation parameters:

[Screenshot: transformation parameters]

You can select one of the following sources (FROM) for the Redshift flow:

  • API – use any appropriate string as the source (FROM) name
  • Web Service – use any appropriate string as the source (FROM) name
  • File – use the source file name or a wildcard filename as the source (FROM) name
  • Database – use the table name as the source (FROM) name
  • CDC – use the fully qualified table name as the source (FROM) name
  • Queue – use the queue topic name as the source (FROM) name

For most Redshift flows, the destination (TO) is going to be an Amazon S3 connection. To configure the final destination, click the Connections tab and select the available Amazon Redshift connection:

[Screenshot: selecting the Amazon Redshift connection]

Amazon Redshift can load data from CSV, JSON, and Avro formats, but Etlworks supports loading only from CSV, so you will need to create a new CSV format and set it as the destination format. If you are loading large datasets into Amazon Redshift, consider configuring the format to split the document into smaller files. Amazon Redshift can load files in parallel, and transferring smaller files over the network can be faster.

If necessary, you can create a mapping between the source and destination (Redshift) fields.

Mapping is not required, but remember that if a source field name is not supported by Redshift, Redshift will return an error and the data will not be loaded into the database. For example, if you are loading data from Google Analytics, the output (source) is going to include fields with the prefix ga: (ga:user, ga:browser, etc.). Unfortunately, Amazon Redshift does not support fields with a colon, so the data will be rejected. If that happens, you can use mapping to rename the destination fields.

ELT for Amazon Redshift

Amazon Redshift provides affordable and nearly unlimited computing power, which allows loading data into Amazon Redshift as-is, without pre-aggregation, and processing and transforming all the data quickly when executing analytics queries. Thus, the ETL (Extract-Transform-Load) approach turns into ELT (Extract-Load-Transform). This can greatly simplify data loading into Amazon Redshift, as you don’t need to think about the necessary transformations upfront.

Etlworks supports executing complex ELT scripts directly in Amazon Redshift, which greatly improves the performance and reliability of the data ingestion.
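
An ELT script can be as simple as a set of SQL statements executed in Redshift after the load; here is a sketch (the schema and table names are illustrative):

  -- transform raw staged data into a reporting table inside Redshift
  INSERT INTO analytics.daily_revenue (order_date, total)
  SELECT order_date, SUM(amount)
  FROM staging.orders
  GROUP BY order_date;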

I hope this has been helpful. Go forth and load large amounts of data.

Loading data into Snowflake


In this blog post, I will be talking about building a reliable data ingestion pipeline for Snowflake.

Snowflake is a data warehouse built for the cloud. It works across multiple clouds and combines the power of data warehousing, the flexibility of big data platforms, and the elasticity of the cloud.

Based on the Snowflake documentation, loading data is a two-step process:
  1. Upload (i.e. stage) one or more data files into either an internal stage (i.e. within Snowflake) or an external location.
  2. Use the COPY INTO command to load the contents of the staged file(s) into a Snowflake database table.
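
In SQL terms, the two steps look roughly like this (the stage, file, and table names are illustrative; PUT is executed from SnowSQL or another client that supports staging):

  -- step 1: stage a local file in an internal stage
  PUT file:///tmp/orders.csv.gz @my_stage;

  -- step 2: load the staged file(s) into a table
  COPY INTO staging.orders
  FROM @my_stage
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);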

One step is clearly missing here: preparing the data files to be loaded into Snowflake.

If these three steps do not look complicated to you, let’s add more details.

Typically, developers tasked with loading data into any data warehouse deal with the following issues:

  • How to build a reliable ingestion pipeline that loads hundreds of millions of records every day.
  • How to load only recent changes (incremental replication).
  • How to transform data before loading into the data warehouse.
  • How to transform data after loading into the data warehouse.
  • How to deal with changed metadata (table structure) in both the source and in the destination.
  • How to load data from nested datasets, typically returned by web services (in addition to loading data from relational databases).

This is just a short list of hand-picked problems. The good news is that Snowflake is built from the ground up to help with bulk loading, thanks to the very robust COPY INTO command, and with continuous loading using Snowpipe.

Any Snowflake ingestion pipeline should at least utilize the COPY INTO command and, possibly, Snowpipe.

The simplest ETL process that loads data into Snowflake looks like this:
  1. Extract data from the source and create CSV (or JSON, XML, and other formats) data files.
  2. Compress the files using the gzip algorithm.
  3. Copy the data files into the Snowflake stage in an Amazon S3 bucket (or Azure Blob storage, or the local file system).
  4. Execute the COPY INTO command using a wildcard file mask to load data into the Snowflake table (see the example after this list).
  5. Repeat steps 1-4 for multiple data sources. Ingestion ends here.
  6. If needed, execute SQL statements in the Snowflake database to transform the data. For example, populate dimensions from the staging tables.
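
Step 4, the workhorse of the pipeline, typically looks like the following (again, the stage and table names are illustrative):

  COPY INTO staging.orders
  FROM @my_stage/orders/
  PATTERN = '.*orders_.*[.]csv[.]gz'
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
  PURGE = TRUE;  -- remove the files from the stage after a successful load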

The part where you need to build a “reliable data ingestion pipeline” typically includes:
  • Performance considerations and data streaming.
  • Error handling and retries.
  • Notifications on success and failure.
  • Reliability when moving files to the staging area in S3 or Azure.
  • Cleaning up: what to do with all these data files after they have been loaded (or not loaded) into Snowflake.
  • Dealing with changing table structure in the source and in the destination.

The COPY INTO command can load data from files compressed using the gzip algorithm, so it makes sense to compress all the data files before copying or moving them to the staging area.

Snowflake supports transforming data while loading it into a table using the COPY INTO <table> command, but it will not allow you to load data with an inconsistent structure.
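
For example, Snowflake lets you reorder, cast, and enrich columns on the fly during the load ($1 and $2 refer to column positions in the staged CSV files; the table and column names are illustrative):

  COPY INTO staging.orders (id, amount, loaded_at)
  FROM (
    SELECT $1, $2::NUMBER(10,2), CURRENT_TIMESTAMP()
    FROM @my_stage/orders/
  )
  FILE_FORMAT = (TYPE = CSV);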

Add the need to handle incremental updates in the source (change replication) and you’ve got yourself a [relatively] complicated project on your hands.

As always, there are two options:
  1. Develop a home-grown ETL solution using a combination of scripts and in-house tools.
  2. Develop a solution using a third-party ETL tool or service.

Assuming that you are ready to choose option 2 (if not, go back to paragraph one), let’s discuss…

The requirements for the right ETL tool for the job

When selecting an ETL tool or service, the questions you should be asking yourself are:

  • How much are you willing to invest in learning?
  • Do you prefer the code-first or the drag&drop approach?
  • Do you need to extract data from semi-structured and unstructured data sources (typically web services), or is all your data in relational databases?
  • Are you looking for point-to-point integration between well-known data sources (for example, Salesforce->Snowflake) with minimal customization, or do you need to build a custom integration?
  • Do you need your tool to support change replication?
  • How about real-time or almost real-time ETL?
  • Are you looking for a hosted and managed service running in the cloud, or an on-premise solution?

Why Etlworks is the best tool for loading data into Snowflake

First, just like Snowflake, Etlworks is a cloud-first data integration service. It works perfectly well when installed on-premise, but it really shines in the cloud. When subscribing to the service, you can choose the region that is closest to your Snowflake instance, which makes all the difference as far as fast data loading is concerned. Also, you won’t have to worry about managing the service.

Second, in Etlworks you can build even the most complicated data integration flows and transformations using a simple drag-and-drop interface. There is no need to learn a new language and no complicated build-test-deploy process.

Third, if you are dealing with heterogeneous data sources, web services, semi-structured or unstructured data, or transformations that go beyond the simple point-to-point, pre-baked integrations, you are probably limited to just a few tools. Etlworks is one of them.

Last but not least, if you need your tool to support native change (incremental) replication from relational databases or web services, Etlworks can handle this as well. No programming required. And it is fast.

How it works

In Etlworks, you can choose from several highly configurable data integration flows, optimized for Snowflake:

  • Extract data from databases and load into Snowflake.
  • Extract data from data objects (including web services) and load into Snowflake.
  • Extract data from well-known APIs (such as Google Analytics) and load into Snowflake.
  • Load existing files into Snowflake.
  • Execute any SQL statement or multiple SQL statements.

Behind the scenes, the flows perform complicated transformations and create data files for Snowflake, compress the files using the gzip algorithm before copying them to the Snowflake stage in the cloud or in the server storage, automatically create and execute the COPY INTO <table> command, and much more. For example, a flow can automatically create a table in Snowflake if it does not exist, or it can purge the data files in case of an error (Snowflake can automatically purge the files in case of success).

You can find the actual, step-by-step instructions on how to build Snowflake data integration flows in Etlworks in our documentation.

The extra bonus is that in Etlworks you can connect to the Snowflake database, discover the schemas, tables, and columns, run SQL queries, and share queries with the team, all without ever using the Snowflake SQL workbench. Even better, you can connect to all your data sources and destinations, regardless of format and location, to discover the data and the metadata. Learn more about Etlworks Explorer.

Etlworks installation options and pricing model


In this blog post, I will explain the various Etlworks installation options and how our pricing model works.

Our pricing model is a three-tiered, feature-based SaaS subscription.

There are three clearly defined tiers: Startup, Business and Enterprise.

All tiers allow for an unlimited number of users and provide you access to all included connectors. Tier differences can be found in the following:

  • The total number of records that can be processed each day.
  • The total number of schedules.
  • How often the scheduled flow can run.
  • The type of instance: shared or dedicated.
  • The ability to set up a custom domain with its own SSL certificate.
  • White labeling.
  • The ability to create and use tenants.

Take a look at our pricing grid for more information.

A dedicated instance, custom domains, white labeling, and the ability to use tenants are only available with the Enterprise and on-premise plans.

FAQ

Q. What are tenants?

A. Essentially, tenants are sub-accounts under the main account.

Each tenant has a separate list of users, flows, connections, formats, and listeners, and tenants are completely isolated from each other. They can be used to separate customers and environments, for example, DEV, QA, PRODUCTION, etc. Flows and connections can be copied from one tenant to another.

Q. What happens if we exceed the maximum allowed number of processed records?

A. You will receive a notification that the limit has been exceeded and we will recommend that you switch to a higher tier.

It’s OK if you exceed the limit set for your plan once or twice a week, but systematically exceeding it violates the terms of service.

Q. How does the free trial work?

A. You get full functionality for 14 days, at which point you can turn your trial into a monthly or annual subscription.

Q. What happens when the 14-day free evaluation period ends? Do I need to ask to extend it?

A. Your Etlworks account will be disabled, but you will have the option to subscribe to the paid service or request an extension. Your evaluation period can be extended by up to 14 days.

Q. How does your on-premise offering work?

A. Unlike cloud subscriptions, the customer owns and operates the on-premise instance running Etlworks. In addition to the subscription fee, we charge $2000 USD as a one-time installation and configuration fee.

Typically we don’t have access to the on-premise instances at all, but we do provide a fully automated, one-click installation and upgrade script.

The on-premise instance must be able to connect to the Etlworks License server at least once a day.

Q. Can I have more than one dedicated instance?

A. Yes, you can. Each instance requires a separate license.

Q. Can Etlworks handle our load?

A. Etlworks Integrator is extremely fast and optimized for performance. It is also horizontally scalable. You can have multiple instances running in parallel behind the load balancer.

Q. How many instances will I need?

A. In most cases, you will need just one instance, which is included in the base price. You might need more than one if you expect a large number of parallel ETL requests (hundreds of thousands per day) or need guaranteed high availability. Always think of upgrading the amount of available RAM and CPU cores before adding an instance.

Q. Is multi-server option available for cloud Enterprise plans?

A. Yes, it is available.

Q. What does “the price starts from” mean for cloud Enterprise plans?

A. We factor in the cost of running the dedicated instance of Etlworks when calculating the price. For example:

  • 8 GB RAM, 2 CPU cores, 100 GB SSD – $800 / month.
  • 16 GB RAM, 4 CPU cores, 100 GB SSD – $1000 / month.
  • 32 GB RAM, 8 CPU cores, 200 GB SSD – $1500 / month.

Q. How can I estimate the size of the instance?

A. If you are planning to work with APIs that return large (> 100 MB) JSON or XML documents, or are planning to process hundreds of millions of records per day, we recommend an instance with 16 or 32 GB of RAM. In all other cases, 8 GB should be enough.

Q. Can I request to upgrade or downgrade the instance?

A. Yes, you can. It usually takes just a few minutes to change the size of an instance managed by Etlworks.

Q. What does “the price starts from” mean for on-premise Enterprise plans?

A. The base price includes one instance of Etlworks Integrator.

  • One 8 GB instance – $800 / month.
  • One 16 GB instance – $1000 / month.
  • One 32 GB (and up, no upper limit) instance – $1500 / month.
  • Two 8 GB instances (for example, DEV and PRODUCTION) – $1600 / month.
  • Two 16 GB instances – $2000 / month.
  • Two 32 GB (and up) instances – $3000 / month.
  • More than two instances – 50% discount for each additional instance.

Q. Can I install Etlworks from the AWS or Azure marketplace?

A. Not at the moment. We will be in both marketplaces in Q4 of 2019.

Q. How am I billed?

A. You subscribe and will be billed through our customer portal, powered by Paywhirl. You can pay by major credit card, direct wire to our bank account, or mailed check. Monthly and annual payment plans are available.

Q. What will my total cost be?

A. Your total cost will be the subscription cost listed on our website. No surprises.

Q. What regions are available for shared and dedicated instances?

A. We have shared instances in the US-EAST (us-east-2) and EU-WEST (eu-west-2) regions. Dedicated instances can be installed in any region of your choice.

Q. Can we have a dedicated instance on Microsoft Azure or Google Cloud?

A. Yes, you can. The dedicated instance of Etlworks can be installed on any cloud of your choice.

Q. On your website there is an option to buy a perpetual license. What does this mean?

A. When you buy a perpetual license, you own it forever. A perpetual license for one server costs $30K.

The renewal costs $6000 / year after the first year.

We provide a 50% discount for each additional license.

Q. How is support provided?

A. Basic support by email is included in all plans. You can purchase extended support, which includes 10 hours of professional services per month, for $1000 / month (or 6 hours for $600 / month). You can also buy ad-hoc extended support for $150 / hour.

Cloud Data Integration


In this blog post, I will discuss the definition of cloud data integration and what makes it truly useful.

Before we start, let’s get on the same page and define what cloud data integration is.

According to Wikipedia, cloud data integration software must have the following features:

  • Deployed on a multi-tenant, elastic cloud infrastructure.
  • Subscription model pricing: operating expense, not capital expenditure.
  • No software development: required connectors should already be available.
  • Users do not perform deployment or manage the platform itself.
  • Presence of integration management & monitoring features.

While I agree with the definition, something is missing:

where is the data we are supposed to be integrating?

If you are ahead of the curve, all your data is already stored in the cloud. While I think we will all be there eventually, as of today, a typical enterprise, from two guys working out of a garage to a multinational corporation, owns and operates multiple data silos. I would add: diverse and isolated data silos:

  • Cloud databases.
  • On-premise databases, available from the Internet.
  • On-premise databases, not available from the Internet.
  • Public APIs.
  • Private APIs, not available from the Internet.
  • Cloud-based third-party applications.
  • Locally hosted third-party applications.
  • Legacy applications.
  • Files stored locally.
  • Files stored in cloud data storage.
  • Office 365 and Google Docs documents.

Can your favorite data integration platform handle this vast array of data sources? If the answer is “Yes it can! We just need to deploy it in our corporate network and it will be able to connect to all our data,” then it is not cloud data integration anymore. Don’t get me wrong, there is nothing wrong with an ETL tool deployed locally. It gets the job done, but you are not getting the benefits of a true cloud-based platform, specifically this one:

users do not perform deployment or manage the platform itself.

If this is not a showstopper, my advice is to find and stick to the tool that has all the required connectors and is easy to program and operate. Sure, you will need a competent DevOps group on payroll (in addition to the ETL developers) who will be managing and monitoring the tool, installing upgrades, performing maintenance, etc., but hey… it works.

Keep reading if you want to focus on breaking the data silos in your organization instead of managing the data integration platform. The solution to the problem at hand is so-called hybrid data integration.

Hybrid data integration is a solution in which some of the connectors run on-premise, behind the corporate firewall, while the others, and the platform itself, run in the cloud.

We, at Etlworks, believe that no data silo should be left behind, so in addition to our best-in-class cloud data integration service we offer fully autonomous, zero-maintenance data integration agents that can be installed on-premise, behind the corporate firewall. Data integration agents are essentially connectors installed locally and seamlessly integrated with the cloud-based Etlworks service.

Let’s consider these typical data integration scenarios:

Source and destination are in the cloud

Example: the source is an SQL Server database in Amazon RDS and the destination is a Snowflake data warehouse.

In this case, no extra software is required. Etlworks can connect to the majority of the cloud-based databases and APIs directly. Check out available connectors.

The source is on-premise, behind the firewall and the destination is in the cloud

Example: the source is locally hosted PostgreSQL database, not available from the Internet, and the destination is Amazon Redshift.

In this scenario, you will need to install a data integration agent as a Windows or Linux service on any available computer in your corporate network (you can install multiple agents in multiple networks if needed). The agent includes a built-in scheduler, so it can run periodic extracts from your on-premise database and push changes either directly to the cloud data warehouse or to cloud-based data storage (Amazon S3, for example). You can then configure a flow in Etlworks that will pull data from the cloud data storage and load it into the cloud-based data warehouse. The flow can use the extremely fast direct data upload into Redshift, available as a task in Etlworks.

The source is in the cloud and the destination is on-premise, behind the firewall

Example: the source is a proprietary API, available from the Internet and the destination is a database in the Azure cloud.

The data integration agent can work both ways: extracting data from sources behind the firewall and loading data into local databases. In this scenario, the flow in Etlworks will extract data from the cloud-based API, then transform and store it in cloud-based storage available to the agent. The data integration agent will then load data from the cloud-based storage directly into the local database.

The source is in the cloud and the destination is third-party software hosted locally

Example: the source is a proprietary API, available from the Internet and the destination is locally hosted data analytics software.

If the third-party software can load data from services such as Google Sheets, you can configure a flow in Etlworks that will extract and transform data from the API and load it into Google Sheets. The third-party software will then load data directly from the specific worksheet. You can always find a format that is understood by both Etlworks and the third-party software.

Source and destination are on-premise, behind the firewall

Example: the source is a proprietary API, not available from the Internet, and the destination is another proprietary API, again not available from the Internet.

In this case, you probably don’t need cloud-based software at all. Look at Etlworks on-premise subscription options as well as an option to buy a perpetual license.

Why Etlworks?

If you are reading this, you have most likely already answered the question:

“why do I need a data integration tool anyway?”

The answer is probably any combination of the things below:

  • I’m tired of dealing with complicated inputs and outputs and don’t want to write custom code anymore.
  • I need a way to connect to all my data, regardless of its format and location.
  • I need a simple way to transform my data from one format to another.
  • I need to track changes in my transactional database and push them to my data warehouse.
  • I need to connect to the external and internal APIs with different authentication schemas, requests, and responses.
  • I want to create new APIs with just a few mouse clicks, without writing any code.
  • Generally speaking, I’d prefer not to write any code at all.
  • I don’t want to worry about backups, performance, and job monitoring. I want someone to manage my data integration tool for me.

If this sounds familiar, you are probably in the process of selecting the best tool for the task at hand. The truth is, most modern data integration tools are quite good and can significantly simplify your life.

Etlworks is so much better.

By selecting Etlworks as your data integration tool, you will be able to implement very complex data integration flows faster and with fewer steps.

The key advantages of Etlworks:

  • It automatically parses even the most complicated JSON and XML documents (as well as other formats) and can connect to all your APIs and databases.
  • It is built for Cloud but works equally well when installed on-premise.
  • You can visualize and explore all your data, regardless of the format and location, before creating any data integration flows.
  • You probably won’t need to write any code at all, but even if you do, it will be just a few lines in a familiar programming language: SQL, JavaScript, or Python.
  • You can use SQL to extract and flatten data from nested documents (see the sketch after this list).
  • Etlworks is a full-fledged enterprise service bus (ESB) so you can create data integration APIs with just a few mouse clicks.
  • Etlworks can integrate data behind the corporate firewall when working together with data integration agent. Read more in this blog post.
  • We provide world-class support for all customers.
  • Our service is so affordable that you won’t have to get a board approval in order to use it.
  • No sales call necessary! Just sign up and start using it right away.
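
To illustrate the flattening bullet above: given a nested JSON document with a customer object and an orders array, a flattening query could look something like this (an illustrative sketch only; see the Etlworks documentation for the exact dialect):

  -- flatten each element of the orders array together with its parent customer
  SELECT customer.id   AS customer_id,
         customer.name AS customer_name,
         orders.number AS order_number,
         orders.total  AS order_total
  FROM orders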

Test drive Etlworks for 14 days free of charge.