Using Change Data Capture and CLR to Transfer Data to the Cloud

We were three: A DBA, a developer and a tester.

Our mission: to develop an ongoing process that would transfer one of our biggest and busiest tables to the cloud. Here’s the story of how we did it:

The Requirements

  1. Transfer the table’s inserts and updates to Azure in a tab-delimited format

  2. Do it in a reasonable time – no more than a few minutes of delay between the on-premises table state and the Azure state

So we had to:

  1. Decide from which server we wanted to transfer the data

  2. Find a method to detect the changes that were made to the table

  3. Decide how to output the rows in a tab-delimited format

  4. Decide how to transfer the data to Azure.

* The actual loading of the data into Azure is outside the scope of this post.

Let’s describe the thinking process for each bullet:

Server

We needed to decide whether we wanted to transfer the data from the main server, which is the busiest in terms of reads and writes, or from one of our replica servers, which is under less load but has a slight delay due to replication.

We decided to transfer the data from the replica server, in order to reduce the possibility of hindering our production processes on the main server. Replication latency was taken into account and was treated as a risk we could live with.

Change Detection

Next, we needed to find a method that would detect which rows of the table were inserted or updated.

Our first thought was to:

  1. Add a nullable DATETIME column to the table with a default of GETDATE()

  2. Add a trigger to the table that would update the new column’s value for each updated row

  3. Add an index on the new column

  4. Add a checkpoint table that would hold the last point in time the migration job had processed

This way, we could identify inserts and updates to the table pretty easily. Adding the trigger was a little risky: a trigger is part of the transaction that fires it, which can hurt performance, or worse, roll back the transaction if the trigger fails. After some thinking, we decided we could live with it because the trigger was fairly short and straightforward.
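For illustration, a minimal sketch of that first approach might have looked something like this (the table, column, and object names here are made up for the example, not our real ones):

    -- Sketch only: dbo.BigTable and its Id column are hypothetical names
    ALTER TABLE dbo.BigTable
        ADD LastModifiedDate DATETIME NULL
            CONSTRAINT DF_BigTable_LastModifiedDate DEFAULT (GETDATE());
    GO

    -- Trigger that stamps each updated row with the current time
    CREATE TRIGGER dbo.TR_BigTable_Update
    ON dbo.BigTable
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE t
           SET LastModifiedDate = GETDATE()
          FROM dbo.BigTable AS t
          JOIN inserted AS i ON i.Id = t.Id;
    END;
    GO

    -- Index so the migration job can scan only recent changes
    CREATE INDEX IX_BigTable_LastModifiedDate
        ON dbo.BigTable (LastModifiedDate);
    GO

    -- Checkpoint table holding the last point in time the job processed
    CREATE TABLE dbo.MigrationCheckpoint (LastProcessedDate DATETIME NOT NULL);

The job would then read all rows with LastModifiedDate greater than the stored checkpoint and advance the checkpoint when done.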

But then, the following issue arose:

Our tester told us she needed an audit table that would hold each and every change made against the table, so she could check that each change was reflected on the Azure side. We had to make a U-turn: as I already said, a trigger is part of the transaction that fires it, and I refused to start building complex processes on top of that trigger – the risk to the rest of the processes on that server was too high.

We started looking for a solution that would get the job done asynchronously, and then we found Change Data Capture (CDC). Here’s how it works, in a nutshell: CDC reads the changes from the transaction log asynchronously and records them in change tables that can be queried later.

It was a perfect fit – exactly what we needed. We changed the code and the checkpoint mechanism to work with CDC, and moved on.
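To give a feel for how that looks (again, the object names are illustrative, not our real ones), enabling CDC and pulling the changes since the last checkpoint goes roughly like this:

    -- Enable CDC on the database and on the table (illustrative names)
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'BigTable',
        @role_name     = NULL;

    -- With CDC, the checkpoint becomes an LSN instead of a datetime
    DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);

    SELECT @from_lsn = sys.fn_cdc_increment_lsn(LastProcessedLsn)
      FROM dbo.MigrationCheckpoint;
    SET @to_lsn = sys.fn_cdc_get_max_lsn();

    -- All inserts and updates recorded by CDC since the last run
    SELECT *
      FROM cdc.fn_cdc_get_all_changes_dbo_BigTable(@from_lsn, @to_lsn, N'all')
     WHERE __$operation IN (2, 4);   -- 2 = insert, 4 = update (after image)

    -- Advance the checkpoint for the next run
    UPDATE dbo.MigrationCheckpoint
       SET LastProcessedLsn = @to_lsn;

On the very first run, sys.fn_cdc_get_min_lsn('dbo_BigTable') can serve as the starting point instead of a stored checkpoint.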

Tab-Delimited Formatting and Data Transfer

Pretty early in the thinking process for these bullets, we knew about an in-house transfer system that takes files and transfers them to Azure. There was no point in writing a new mechanism to do that, so we decided to use it. All we had to do was export the rows to tab-delimited files and copy them to a file share, from which the transfer system would pick them up and send them to Azure.

So now we had to decide which tool to use. Basically, we considered three options: CLR, SSIS and BCP.

Why We Didn’t Choose BCP

This option came off the table very quickly. We didn’t feel BCP offered enough flexibility or easy enough integration with the rest of our code.

Why We Didn’t Choose SSIS

SSIS was actually the leading horse in major parts of the race. It is a well-known, mature technology with tons of capabilities, can be used with SQL Server Agent, can be set up pretty quickly, and in fact there are a few good examples of migration solutions implemented with it. In short: it could get the job done.

But none of us was an SSIS expert, and none of us really liked it. Personally, I tend to get lost when SSIS packages start to get big, and they tend to do that. Also, future upgrades and comparisons between package versions seemed simpler with CLR.

So…

Why We Did Choose CLR

First, it’s .NET programming, which we were all comfortable with. This lowered development and maintenance costs. Also, we could easily integrate the CLR procedure with the other parts of the code and catch errors that occurred inside the CLR procedure in the outer procedure that called it. As written above, future upgrades and version comparisons seemed simpler. On top of that, it got the job done, very quickly.
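To illustrate the integration (the assembly, class, procedure names, and paths below are invented for the example, not our actual code), a CLR procedure is registered once and then called from a thin T-SQL wrapper, which is where the errors get caught:

    -- Register the .NET assembly; EXTERNAL_ACCESS is required for writing to a file share
    -- (the assembly must be signed, or the database marked TRUSTWORTHY, for this to work)
    CREATE ASSEMBLY CloudExport
    FROM 'D:\Deploy\CloudExport.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS;
    GO

    -- Expose the static .NET method as a stored procedure
    CREATE PROCEDURE dbo.ExportChangesToFile
        @FilePath NVARCHAR(4000)
    AS EXTERNAL NAME CloudExport.[CloudExport.Exporter].ExportChangesToFile;
    GO

    -- Outer procedure: errors thrown inside the CLR code surface here
    CREATE PROCEDURE dbo.RunCloudExport
    AS
    BEGIN
        BEGIN TRY
            EXEC dbo.ExportChangesToFile
                 @FilePath = N'\\fileshare\cloud-export\BigTable.tsv';
        END TRY
        BEGIN CATCH
            -- Log and rethrow so the SQL Server Agent job fails visibly
            THROW;
        END CATCH;
    END;

The CLR method itself is plain .NET code that formats the rows as tab-delimited text and writes the output file to the share.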

Two issues to keep in mind though:

  1. We needed to grant the account running the SQL Server service (not the SQL Server Agent service) permissions on the file share.

  2. You can’t reference just any DLL you want from a CLR procedure. We wanted to use some DLLs from our testing components, but couldn’t.

Conclusion

The process works, and works well. All of the inserted and updated rows arrive in Azure within a matter of minutes. No less importantly, we learned a lot from the thinking and evaluation process.
