Amazon RDS DMS

This is a follow-up to my previous write-up, this one specific to the Database Migration Service (DMS) that Amazon AWS introduced recently. I was finally able to set up the database in RDS and host the web app with some workarounds, and I am going to lay out what is involved in doing so. Thanks to the AWS forums and engineers.

The issue I had in migrating data is that my database had all of its objects in the "dbo" schema and no other schema. This seems to be a bug; when I posted on the forum, an engineer responded with a workaround. The solution is to use a custom mapping rather than the default mapping, with the following JSON:

```json
{
  "TableMappings": [
    {
      "Type": "Include",
      "SourceSchema": "dbo",
      "SourceTable": "%"
    }
  ]
}
```

I'm giving this for completeness' sake. I would advise against following the wizard approach: things take time, that isn't clearly communicated, and the result is unnecessary delay and confusion. Instead, create the source and destination endpoints, a replication instance (this is the server/machine that runs the task of exporting and importing data by connecting to the source and target endpoints), and a task. Creating the replication instance takes time; please wait and let it do its job.

Once you have created all of these, start the task. In this process, all tables along with their data are copied from the source to the destination, together with any primary keys or clustered indexes your tables have. That's all good news, but that's all it does. It does not bring over any of the constraints (foreign keys, default values, triggers), views, stored procedures, functions, or non-clustered indexes. You have to script those out separately and apply them after the data is migrated. If you have any identity columns, you are in for some serious work: you have to create a new identity column, turn identity insert on, copy the data over, and turn identity insert back off. However, if your tables are not that big, you can go into design mode in SSMS and turn the identity property on directly (after allowing table re-creation under Tools -> Options).
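The identity rebuild described above can be sketched in T-SQL. Everything here is illustrative: the table dbo.Orders and its columns are hypothetical stand-ins for whatever DMS copied over without the identity property.

```sql
-- Illustrative sketch only: dbo.Orders and its columns are hypothetical.
-- Rebuild a table whose OrderId column lost its IDENTITY property during
-- the DMS copy, while preserving the existing key values.
CREATE TABLE dbo.Orders_new (
    OrderId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Amount  DECIMAL(10, 2)    NOT NULL
);

SET IDENTITY_INSERT dbo.Orders_new ON;   -- allow explicit values for the identity column
INSERT INTO dbo.Orders_new (OrderId, Amount)
    SELECT OrderId, Amount FROM dbo.Orders;
SET IDENTITY_INSERT dbo.Orders_new OFF;  -- resume auto-numbering

DROP TABLE dbo.Orders;
EXEC sp_rename 'dbo.Orders_new', 'Orders';
```

After the rename you may also want `DBCC CHECKIDENT ('dbo.Orders')` to confirm the seed picks up after the highest migrated value.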

These issues could be easily resolved if we could restore from a .bak file. If Amazon offered a restore using a .bak file from S3, it would be icing on the cake.

One of the basic rules for any app we write for the cloud is that it should be independent of the local file system. So Amazon should offer a tool that starts the restore process from a .bak file in S3. Once initiated, it could copy the file from S3 to the local file system and then perform the restore (though RDS claims there is no access to a file system, one of some sort clearly exists, since it shows C:\MSSQL\Data\ when you create databases).
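On a self-managed SQL Server, the restore being asked for is a one-liner, which is what makes its absence frustrating. The database name and file path below are hypothetical:

```sql
-- Standard SQL Server native restore (not available on RDS at the time of
-- writing); the database name and path are hypothetical placeholders.
RESTORE DATABASE MyWebAppDb
FROM DISK = 'C:\MSSQL\Data\MyWebAppDb.bak'
WITH REPLACE, RECOVERY;
```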

It would be great to have this as part of DMS, or as a sub-tool similar to the "MigrationTool" they currently offer that helps in migrating data from disparate databases.

PS: Once you are done with your data migration, please delete the replication instance; otherwise it will continue to cost money.

