MongoDB to Redshift: Data Migration


In this article, we will cover the approaches used to migrate data from MongoDB to Redshift.

A Brief Overview of MongoDB and Redshift

MongoDB is an open-source NoSQL database that stores data as JSON-like documents using a document-oriented data model. Data fields can vary from document to document; MongoDB does not enforce a fixed schema, so there is no single format that the data must follow.

Amazon Redshift is essentially an enterprise-class relational database query and management system that achieves efficient storage and optimal query performance through massively parallel processing, columnar data storage, and targeted data compression encoding schemes.

Approaches to transfer data from MongoDB to Redshift

There are two ways to replicate data from MongoDB to Redshift:

  1. Using a ready-to-use Data Integration Platform
  2. Writing custom ETL code with the help of the export utility

We will cover the steps involved in writing custom code to load data from MongoDB to Redshift, as well as the limitations of that approach.

Transfer Data from MongoDB to Redshift using Custom Code

For the purpose of demonstration, assume that we need to move a ‘products’ collection, which holds the product details of a manufacturing company, into Redshift.

Two cases should be taken into consideration while transferring data:

  1. Move the data in a one-time load into Redshift.
  2. Incrementally load data into Redshift, which applies when the data volume is high.

Let us take a look at both the scenarios:

  • One-Time Load

A JSON file of the required MongoDB collection will have to be generated using the mongoexport utility as follows:

Open a command prompt and navigate to the path below to run the mongoexport command:

C:\Program Files\MongoDB\Server\4.0\bin

Run the mongoexport command to generate the output file for the products collection.

mongoexport --host localhost -u 'username' -p 'password' --db mongodb --collection products --out D:\Work\Articles\products.json

Note that there might be numerous transformations needed before loading this data into Redshift. Achieving them in hand-written code can become extremely hard; a tool that provides an easy environment to write transformations might work better for you.
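As an example of such a transformation, a common requirement is flattening nested sub-documents so that each value maps to one Redshift column. The sketch below uses only the Python standard library; the field names are illustrative and not taken from a real collection.

```python
import json

def flatten(doc, parent_key="", sep="_"):
    # Recursively flatten nested sub-documents so each value maps to a single
    # flat column, e.g. {"pricing": {"list": 120}} -> {"pricing_list": 120}.
    items = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

# Hypothetical document shaped like the 'products' collection used here.
raw = {"sku": "ABC123", "pricing": {"list": 120, "retail": 100}}
print(json.dumps(flatten(raw)))
```

Running this over each line of the mongoexport output would produce flat records that align with a tabular Redshift schema.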

  • Upload the generated .json file to an S3 bucket

Files can be uploaded from a local machine to AWS in several ways, some of which are given below.

One way is to use the file upload utility of the S3 console, which is an intuitive alternative.
You can also use the AWS CLI, which provides simple commands to upload files to an S3 bucket from your local machine.

As a prerequisite, you need to install and configure the AWS CLI. You can read the AWS CLI user guide to learn more about installation.

Run the following command to upload the file to S3 from the local machine:

aws s3 cp D:\Work\Articles\products.json s3://s3bucket011/products.json


  • Create Table schema before loading the data into Redshift
CREATE TABLE sales.products (
    sku                 VARCHAR(100),
    title               VARCHAR(100),
    description         VARCHAR(500),
    manufacture_details VARCHAR(1000),
    shipping_details    VARCHAR(1000),
    quantity            BIGINT,
    pricing             VARCHAR(100)
);
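The schema above was written by hand. For collections with many fields, a small script can sketch the DDL from one representative document. This is a rough illustration only: the type mapping and the VARCHAR length are assumptions you would tune for your data.

```python
def redshift_type(value):
    # Map a sample Python value to a rough Redshift column type.
    # bool must be checked before int, since bool is a subclass of int.
    if isinstance(value, bool):
        return "BOOLEAN"
    if isinstance(value, int):
        return "BIGINT"
    if isinstance(value, float):
        return "DOUBLE PRECISION"
    return "VARCHAR(1000)"  # fallback; the length is an arbitrary default

def build_create_table(table, sample_doc):
    # Build a CREATE TABLE statement from one representative document.
    cols = ",\n    ".join(
        f"{name} {redshift_type(value)}" for name, value in sample_doc.items()
    )
    return f"CREATE TABLE {table} (\n    {cols}\n);"

# Hypothetical document shaped like the 'products' collection above.
sample = {"sku": "ABC123", "title": "Widget", "quantity": 42}
print(build_create_table("sales.products", sample))
```

A generated statement like this is a starting point; review the column types before running it against your cluster.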

After running the CREATE TABLE statement, an empty table will be created in Redshift. To verify this, run the following query:

SELECT * FROM sales.products;


  • Load the data from S3 into Redshift using the COPY command
COPY dev.sales.products FROM 's3://s3bucket011/products.json'
iam_role 'Role_ARN' format as json 'auto';

Confirm that the data has loaded successfully by running the following query:

SELECT * FROM sales.products LIMIT 10;

This should return the records inserted from the products file.

Limitations of the Custom ETL Scripts Approach:

  1. The custom ETL script method works well when data needs to be moved once or in batches, but it becomes extremely tedious if data needs to be copied from MongoDB to Redshift in real time.
  2. When you are dealing with large volumes of data, an incremental load needs to be performed. Incremental load (change data capture) becomes tough since additional steps are needed to achieve it.
  3. Transforming data before loading it into Redshift is difficult to achieve with hand-written code.
  4. Scripts written to extract a subset of data can break as the source schema changes or evolves, resulting in data loss.
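To give a sense of the extra steps an incremental (change data capture) load involves, here is a minimal Python sketch that builds a mongoexport query filter from a stored checkpoint. The checkpoint file name and the 'updated_at' field are assumptions; a real CDC pipeline would also have to handle deletes and advance the checkpoint after each successful load.

```python
import json

CHECKPOINT_FILE = "last_sync.json"  # hypothetical location of the sync checkpoint

def load_checkpoint():
    # Return the timestamp of the last successful sync, or the epoch on first run.
    try:
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_sync"]
    except FileNotFoundError:
        return "1970-01-01T00:00:00Z"

def build_query(last_sync):
    # Build a mongoexport --query filter selecting only documents modified
    # since the previous run; assumes every document has an 'updated_at' field.
    return json.dumps({"updated_at": {"$gt": {"$date": last_sync}}})

query = build_query(load_checkpoint())
# The filter would then be passed to mongoexport, e.g.:
#   mongoexport --db mongodb --collection products --query '<query>' --out delta.json
print(query)
```

Even this simplified version shows why incremental loads add maintenance burden compared to a one-time dump.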

The process described above is brittle, error-prone, and often hard to implement and maintain, which may impact the consistency and availability of your data in Redshift.

There is an easier way to replicate data from MongoDB to Redshift.

A ready-to-use data integration solution can help you migrate this data without writing any code. This is what the process looks like when done through a tool:

  • Connect to your MongoDB.
  • Select a replication mode:

(a) Full Dump and Load (b) Incremental load for append-only data (c) Incremental load for mutable data

  • For every collection in MongoDB, select a table name in Redshift where it needs to be copied.

That’s it! You are all set. Your data integration platform will take care of gathering your data incrementally and uploading it seamlessly from MongoDB to Redshift in real time.

In addition, a data integration platform lets you bring in data from many different sources: databases, cloud applications, SDKs, and more. This will future-proof your data integration setup and give you the flexibility to immediately replicate data from any source into Redshift.

Reach out to us at Nitor Infotech to learn more about migrating data, and see how easy it is to load data from MongoDB to Redshift along with several other sources to accelerate the process of building powerful analytical workflows.