Jira to Snowflake

This page provides you with instructions on how to extract data from Jira and load it into Snowflake. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Jira?

Jira is an issue tracking tool with elements of agile project management woven into it. You can track progress, assign tasks, and report on results, all from within the product. In short, Jira helps teams collaborate to get work done quickly.

About Snowflake

Snowflake is a data warehouse solution that is entirely cloud-based. It's a managed service: if you don't want to deal with hardware, software, or upkeep for a data warehouse, you're going to love Snowflake. It runs on the wicked-fast Amazon Web Services infrastructure, using EC2 for compute and S3 for storage. Snowflake is designed to be flexible and easy to work with where other relational databases are not. One example of this is query execution: Snowflake creates virtual warehouses where query processing takes place. These virtual warehouses run on separate compute clusters, so querying one virtual warehouse doesn't slow down the others. If you have ever had to wait for a query to complete, you know the value of speed and efficiency in query processing.

Getting data out of Jira

For starters, you need to get your data out of Jira. That can be done by making calls to Jira’s REST API. The full documentation for the API is available on Atlassian's developer site.

To use the Jira REST API, your script needs to make HTTP requests and parse the response. The Jira REST API uses JSON as its communication format, and the standard HTTP methods GET, PUT, POST, and DELETE are going to be your major tools here.

Jira’s API offers access to data endpoints such as issues and comments, among many others. Using methods outlined in the API documentation, you can retrieve the data you’d like to move to your destination database.
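To make this concrete, here's a minimal Python sketch that fetches a page of issues from the search endpoint using the requests library. The domain, credentials, and JQL filter are placeholders you'd swap for your own.

# Minimal sketch: fetch one page of issues from the Jira REST API.
# JIRA_BASE, AUTH, and the JQL filter are placeholders -- substitute your own.
import requests

JIRA_BASE = "https://your-domain.atlassian.net"
AUTH = ("user@example.com", "your-api-token")  # basic auth with an API token

response = requests.get(
    JIRA_BASE + "/rest/api/2/search",
    params={"jql": "project = BULK", "startAt": 0, "maxResults": 50},
    auth=AUTH,
    headers={"Accept": "application/json"},
)
response.raise_for_status()
payload = response.json()

# The response is paginated; compare startAt + maxResults against total
# to decide whether another request is needed.
for issue in payload["issues"]:
    print(issue["key"], issue["fields"]["summary"])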

Sample Jira data

When you query the Jira API, it returns JSON-formatted data. Below is an example response from the issues endpoint.

{
    "expand": "schema,names",
    "startAt": 0,
    "maxResults": 50,
    "total": 6,
    "issues": [
        {
            "expand": "html",
            "id": "10230",
            "self": "http://kelpie9:8081/rest/api/2/issue/BULK-62",
            "key": "BULK-62",
            "fields": {
                "summary": "testing",
                "timetracking": null,
                "issuetype": {
                    "self": "http://kelpie9:8081/rest/api/2/issuetype/5",
                    "id": "5",
                    "description": "The sub-task of the issue",
                    "iconUrl": "http://kelpie9:8081/images/icons/issue_subtask.gif",
                    "name": "Sub-task",
                    "subtask": true
                },
.
.
.
                },
                "customfield_10071": null
            },
            "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-62/transitions",
        },
        {
            "expand": "html",
            "id": "10004",
            "self": "http://kelpie9:8081/rest/api/2/issue/BULK-47",
            "key": "BULK-47",
            "fields": {
                "summary": "Cheese v1 2.0 issue",
                "timetracking": null,
                "issuetype": {
                    "self": "http://kelpie9:8081/rest/api/2/issuetype/3",
                    "id": "3",
                    "description": "A task that needs to be done.",
                    "iconUrl": "http://kelpie9:8081/images/icons/task.gif",
                    "name": "Task",
                    "subtask": false
                },
.
.
.
                  "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-47/transitions",
        }
    ]
}

Preparing Jira data

With the JSON in hand, you now need to map all those data fields into a schema that can be inserted into your database. This means that, for each value in the response, you need to identify a predefined data type (e.g., INTEGER, DATETIME) and build a table that can receive them.

Check out the Stitch Jira Documentation to get a good sense of what fields and data types will be provided by each endpoint. Once you have identified all of the columns you will want to insert, go ahead and create a destination table in your database where this data can be loaded.
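As an illustration, here's a short Python sketch (continuing from the extraction script above) that flattens a few fields from the sample response into values with predictable types; which columns you keep is up to you.

# Sketch: flatten a Jira issue into a flat record with predictable types.
# The selected fields are examples; pick the columns your analysis needs.
def flatten_issue(issue):
    fields = issue["fields"]
    return {
        "id": int(issue["id"]),                        # INTEGER
        "key": issue["key"],                           # VARCHAR
        "summary": fields.get("summary"),              # VARCHAR
        "issue_type": fields["issuetype"]["name"],     # VARCHAR
        "is_subtask": fields["issuetype"]["subtask"],  # BOOLEAN
    }

rows = [flatten_issue(issue) for issue in payload["issues"]]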

Preparing data for Snowflake

Depending on the structure of your data, you may need to prepare it for loading. Take a look at the supported data types for Snowflake and make sure that the data you've got will map neatly to them. If you have a lot of data, you should compress it. Snowflake supports gzip, bzip2, Brotli, Zstandard (v0.8 and higher), and deflate/raw deflate compression.
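For example, if your script writes the extracted issues to a local file as newline-delimited JSON, Python's gzip module can compress it in the same pass; the file path here is a placeholder.

# Sketch: write issues as gzip-compressed, newline-delimited JSON.
import gzip
import json

with gzip.open("/tmp/jira_issues.json.gz", "wt", encoding="utf-8") as f:
    for issue in payload["issues"]:
        f.write(json.dumps(issue) + "\n")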

One important thing to note here is that you don't need to define a schema in advance when loading JSON data into Snowflake. Onward to loading!

Loading data into Snowflake

There is a good reference for this step in the Data Loading Overview section of the Snowflake documentation. If there isn’t much data that you’re trying to load, then you might be able to use the data loading wizard in the Snowflake web UI, but chances are the limitations of that tool will make it a non-starter as a reliable ETL solution. There are two main steps to getting data into Snowflake:

  • Use the PUT command to stage files.
  • Use the COPY INTO table command to load the prepared data into the waiting table from the prior step.

For the COPY step, you’ll have the option of copying from your local drive or from Amazon S3. One of Snowflake’s slick features lets you create a virtual warehouse to power the insertion process.
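Here's a hedged sketch of both steps using the snowflake-connector-python package, loading the gzipped file from the previous section into a single-VARIANT-column table (taking advantage of the schema-on-read behavior noted earlier). The account, credentials, warehouse, and table names are all placeholders.

# Sketch: stage a local file with PUT, then load it with COPY INTO.
# Account, credentials, and object names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="LOAD_WH",   # the virtual warehouse that powers the load
    database="ANALYTICS",
    schema="JIRA",
)
cur = conn.cursor()

# One VARIANT column holds each raw JSON issue -- no up-front schema needed.
cur.execute("CREATE TABLE IF NOT EXISTS jira_issues_raw (v VARIANT)")

# Step 1: PUT stages the local file in the table's stage. (PUT works from a
# local drive; files already in S3 can be referenced via an external stage.)
cur.execute("PUT file:///tmp/jira_issues.json.gz @%jira_issues_raw")

# Step 2: COPY INTO loads the staged file into the waiting table.
cur.execute("""
    COPY INTO jira_issues_raw
    FROM @%jira_issues_raw
    FILE_FORMAT = (TYPE = 'JSON')
""")

# Nested JSON fields are queryable with Snowflake's path syntax:
cur.execute("SELECT v:key::string, v:fields.summary::string FROM jira_issues_raw")
print(cur.fetchall())
conn.close()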

Keeping Jira data up to date

So what’s next? You've got a script that collects data from Jira and puts it where you want. This is where a lot of Jira ETL projects can fall apart. You worked hard on this script, and it only pays off if you can use it down the road.

First, you need to account for new data being generated in Jira. Look through the data you're getting from Jira and find a field that's automatically populated, such as an updated or created timestamp. Build your script to use such fields as a bookmark for finding new or updated data. Second, you need to get your script running continuously. Some folks use a loop or a cron job.
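As a sketch of the bookmark idea, Jira's updated field can drive a JQL filter: persist the timestamp of your last successful run and ask only for issues changed since then. This reuses JIRA_BASE and AUTH from the extraction sketch, and the bookmark file is a simplified stand-in for wherever you track state.

# Sketch: use a persisted timestamp as a bookmark for incremental pulls.
# Bookmark storage is simplified to a local file for illustration.
from datetime import datetime, timezone

import requests

BOOKMARK_FILE = "/tmp/jira_last_sync.txt"

try:
    with open(BOOKMARK_FILE) as f:
        last_sync = f.read().strip()   # e.g. "2023-01-01 00:00"
except FileNotFoundError:
    last_sync = "1970-01-01 00:00"     # first run: fetch everything

# Record the start time before querying, so updates that land mid-run
# are picked up again on the next pass rather than missed.
run_started = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")

jql = 'updated >= "%s" ORDER BY updated ASC' % last_sync
response = requests.get(
    JIRA_BASE + "/rest/api/2/search",
    params={"jql": jql, "startAt": 0, "maxResults": 50},
    auth=AUTH,
)
changed_issues = response.json()["issues"]
# ... stage and COPY the changed issues as shown above ...

with open(BOOKMARK_FILE, "w") as f:
    f.write(run_started)   # the next run picks up from here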

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Jira data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Snowflake data warehouse.