Archive for the ‘Cloud Computing’ Category

Amazon RDS DMS

April 3, 2016

This is a follow-up to my previous write-up. This one is specific to the Database Migration Service (DMS) that Amazon AWS introduced recently. I was finally able to set up the database in RDS and host the webapp with some workarounds, and I'm going to lay out what is involved in doing so. Thanks to the AWS forums and engineers.

The issue I had in migrating data is that my database had all of its objects in the "dbo" schema and no other schema. This seems to be a bug; when I posted on the forum, an engineer responded with a workaround. The solution is to use a custom mapping rather than the default mapping, with the following JSON:
{
  "TableMappings": [
    {
      "Type": "Include",
      "SourceSchema": "dbo",
      "SourceTable": "%"
    }
  ]
}

I'm giving this for completeness' sake. I would advise against the wizard approach: the steps take time, this is not clearly communicated, and the result is unnecessary delay and confusion. Instead, create the source and destination endpoints, a replication instance (this is the server/machine that runs the task of exporting and importing data by connecting to the source and target endpoints) and a task. Creating the replication instance takes time; please wait and let it do its job.
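If you prefer to script these steps rather than click through the console, here is a minimal sketch using the boto3 DMS client; the identifiers, server names and credentials are placeholders, and the table mapping is the workaround JSON from above. In practice you would wait for the replication instance and the task to become available between these calls.

import json
import boto3

dms = boto3.client('dms', region_name='us-east-1')  # region is an assumption

# Source and target endpoints (placeholder names and credentials).
src = dms.create_endpoint(
    EndpointIdentifier='sql-source', EndpointType='source', EngineName='sqlserver',
    ServerName='ec2-sql.example.com', Port=1433,
    Username='sa', Password='...', DatabaseName='mydb')
tgt = dms.create_endpoint(
    EndpointIdentifier='sql-target', EndpointType='target', EngineName='sqlserver',
    ServerName='myinstance.xxxx.us-east-1.rds.amazonaws.com', Port=1433,
    Username='admin', Password='...', DatabaseName='mydb')

# The replication instance: this is the step that takes time.
ri = dms.create_replication_instance(
    ReplicationInstanceIdentifier='repl-1',
    ReplicationInstanceClass='dms.t2.medium', AllocatedStorage=50)

# The task, with the custom "dbo" mapping instead of the default one.
mapping = {'TableMappings': [
    {'Type': 'Include', 'SourceSchema': 'dbo', 'SourceTable': '%'}]}
task = dms.create_replication_task(
    ReplicationTaskIdentifier='full-load-1', MigrationType='full-load',
    SourceEndpointArn=src['Endpoint']['EndpointArn'],
    TargetEndpointArn=tgt['Endpoint']['EndpointArn'],
    ReplicationInstanceArn=ri['ReplicationInstance']['ReplicationInstanceArn'],
    TableMappings=json.dumps(mapping))

dms.start_replication_task(
    ReplicationTaskArn=task['ReplicationTask']['ReplicationTaskArn'],
    StartReplicationTaskType='start-replication')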

Once you have created all these, start the task. In this process, all tables along with their data are copied from the source to the destination, along with any primary keys or clustered indexes your tables have. That's the good news, but that's all it does. It does not bring over any of the constraints (foreign keys, default values, triggers), views, stored procedures, functions or non-clustered indexes. You have to script those out separately and apply them after the data is migrated. If you have any identity columns, you are in for some serious work: you have to recreate the column with the identity property, turn IDENTITY_INSERT on, copy the data over, and turn it back off. However, if your tables are not that big, you can go into design mode in SSMS and flip the identity property there (after enabling saves that require table re-creation under Tools -> Options).
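Here is a minimal sketch of that rebuild pattern in Python with pyodbc; the table, columns and connection string are hypothetical, and the embedded T-SQL can just as well be run from SSMS.

import pyodbc

# Placeholder connection string; point this at the RDS instance.
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=myinstance.xxxx.rds.amazonaws.com;'
                      'DATABASE=mydb;UID=admin;PWD=...')

# Rebuild a (hypothetical) Orders table with an IDENTITY key,
# preserving the key values that were migrated over.
conn.execute("""
    CREATE TABLE dbo.Orders_new (
        OrderId    INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT NOT NULL
    );
    SET IDENTITY_INSERT dbo.Orders_new ON;
    INSERT INTO dbo.Orders_new (OrderId, CustomerId)
        SELECT OrderId, CustomerId FROM dbo.Orders;
    SET IDENTITY_INSERT dbo.Orders_new OFF;
    DROP TABLE dbo.Orders;
    EXEC sp_rename 'dbo.Orders_new', 'Orders';
""")
conn.commit()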

These issues could be easily resolved if we could restore from a .bak file. If Amazon offered a restore from a .bak file in S3, it would be icing on the cake.

One of the basic rules for all the apps we write for the cloud is that they should be independent of the local file system. So Amazon should offer a tool that starts the restore process from a .bak file in S3. Once initiated (though RDS claims no access to a file system, a file system of some sort exists, as it shows C:\MSSQL\Data\ when you create databases), it would copy the file from S3 to the local file system and then do the restore.

Once we have this as part of DMS, or as a sub-tool similar to the "MigrationTool" they currently offer for migrating data between disparate databases, the story will be complete.
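For concreteness, this is roughly the flow I mean, shown on a self-managed EC2 SQL Server where you do have file-system access; the bucket, paths and connection details are placeholders:

import boto3
import pyodbc

# Copy the backup from S3 down to the local disk (possible on EC2, not on RDS).
boto3.client('s3').download_file('my-backups', 'mydb.bak', r'C:\MSSQL\Backup\mydb.bak')

# Restore it; autocommit is required because RESTORE cannot run inside a transaction.
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=master;'
                      'Trusted_Connection=yes', autocommit=True)
conn.execute(r"RESTORE DATABASE mydb FROM DISK = 'C:\MSSQL\Backup\mydb.bak'")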

PS: If you are done with your data migration, please delete the replication instance; otherwise it will continue to cost money.

Amazon RDS (MSSQL Server)

April 3, 2016

The concept of RDS sounded very attractive, so I started reviewing its usability, as it can take care of so many chores (backup, availability, clustering…). As icing on the cake, Amazon announced the Database Migration Service (DMS) a week ago. I could not resist any longer, so I took a dive into RDS, and I ended up just nibbling at its frontiers.

To avoid any security or network issues, I set up two test servers: one in EC2 (Windows 2012 / MS SQL Server 2014) and an RDS instance with the same version of SQL Server. I created the test database by restoring from a .bak file onto the EC2 SQL Server (you can't restore from a backup file directly to RDS) and then set out to use DMS. It was pretty basic and the setup was easy to follow: set up the source DB and target DB connections, test them, create the replication instance (this is an EC2 server) and the migration task. The task ran successfully, no errors logged. However, to my surprise, it didn't do anything! No matter what I tried, it didn't work. I'm still working on getting this up.

In the meantime, I got the data into RDS by the following method (hold your breath). It is not an easy way, especially if you have large amounts of data; that's where I think DMS will be a killer. DMS offers not only to migrate data but also to replicate it on an ongoing basis. This will let you maintain a parallel world of your production data. How cool is that? (If only it worked! :-()

First, script out your views, stored procedures, functions, keys, constraints, indexes etc., all in separate scripts. Second, use Import/Export Data to migrate all your current data into your RDS instance (pretty fast: I migrated a 50 GB database in under 15 minutes, though I was running an m4.large instance for the source DB and 10,000 provisioned IOPS for RDS). Then use the scripts you generated to create the rest of the objects. This is a painful process, and if you still have the habit of using identity columns, you need to drop and re-create those columns! In addition to not being able to restore a database from a backup, you can't back up a specific database: the entire SQL instance, along with all of its databases, gets backed up. If you are a small shop, are starting from scratch, don't have much data, or want to use it for analytical purposes, RDS will be helpful. However, my sense so far is that RDS is not ready for large production installations.

I'll follow up with another write-up once I have DMS working.

PS: If you are using an older version of SQL Server (2005, for example), you can use an Amazon AWS EC2 / SQL Server 2014 instance to test your applications without buying the license, which costs a lot.

Amazon Lambda functions

March 6, 2016

Amazon took AWS to the next level of granularity with the introduction of Lambda functions. They started with IaaS (and are still the leader), then made it easier to create, deploy and manage your apps using Beanstalk. Now with Lambda, they just said it need not be a full-fledged app or website; it can be just functions. If you wrote a function and want it executed for production or otherwise, you can create it as a Lambda function and set it to run on AWS infrastructure. You don't have to worry about any server / machine / VM: zip, nada. This is primarily an event-driven architecture (push and pull). The function can be triggered by any event happening in your AWS infrastructure (an incoming email, a click stream, a notification, a file dropped in a specific S3 bucket, a database trigger), or it can be scheduled to run at a specific time or at regular intervals. There are myriad possibilities and use cases. Let's say you have a website hosted using Beanstalk, so you don't have access to a server where you can set up a cron job or a Windows task scheduler; you can write a Lambda function and have it run on a schedule. AWS has plenty of samples, which they call blueprints, that you can start with. These are pretty much templates that you can change and extend with your own logic.

Currently, Lambda functions can be written in Python 2.7, Node.js or Java. Now, let us look at a couple of use cases.

Use Case 1:

Let us say you have servers (EC2 instances) in AWS and some of them are for demo or staging. If it is staging, users might be accessing them only during office hours (9 to 5, or pick a time range). If it is demo, the salesperson probably needs the system only for the meeting, 12 to 2 or 9 to 11:30. You can create a Lambda function that starts your specific instances (by instance ID) and another that stops them. Then you can use a CloudWatch schedule to have the specific Lambda function called at the specified times. You could get fancier and put more logic into one function that runs every hour, checks the uptime of the instance(s) and starts or stops them after so many hours. There are many ways to skin this cat, and this is one way I did it, as a Lambda function using the AWS Boto3 library in Python. Like every other AWS service, the pricing is low: the first 400,000 GB-seconds of Lambda execution per month are free. For complete pricing, see https://aws.amazon.com/lambda/pricing
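A minimal sketch of what such a pair of functions can look like; the instance IDs, region and schedule expressions are placeholders, not the exact code I run:

import boto3

INSTANCE_IDS = ['i-0123456789abcdef0']  # placeholder IDs
ec2 = boto3.client('ec2', region_name='us-east-1')

def start_handler(event, context):
    # Wire this to a morning CloudWatch Events schedule, e.g. cron(0 9 ? * MON-FRI *).
    ec2.start_instances(InstanceIds=INSTANCE_IDS)

def stop_handler(event, context):
    # And this to an evening schedule, e.g. cron(0 17 ? * MON-FRI *).
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)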

Use Case 2:

This one is close to my heart. I believe this is the cheapest way to host a webapp, and I'm in the process of doing it. Create your HTML5 and JavaScript application that is rendered and executed in the browser, create Lambda functions and expose them as HTTP endpoints. The Lambda functions in turn access whatever database is needed; you can have your database in RDS. Now you can call these Lambda functions from your web pages through JavaScript. If you are starting up, don't have much to spend and want to test your idea, here you go: this is the cheapest option for getting a website up and running. Host the HTML/JS files in S3 using static hosting. The Lambda functions run only when someone clicks a link that accesses the database or executes server-side logic, and you pay only when they execute, unlike a server, which you pay for whenever it is up! Please drop me a line if you want to share or know more. Thanks!
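As a sketch, a Lambda function exposed as an HTTP endpoint (assuming API Gateway's Lambda proxy integration in front of it) can be as small as this; the response shape and the CORS header are what the browser-side JavaScript needs:

import json

def handler(event, context):
    # API Gateway (proxy integration) delivers the HTTP request as `event`.
    name = (event.get('queryStringParameters') or {}).get('name', 'world')
    return {
        'statusCode': 200,
        # CORS header so pages served from the S3 bucket may call this endpoint.
        'headers': {'Access-Control-Allow-Origin': '*'},
        'body': json.dumps({'message': 'Hello, %s!' % name}),
    }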

For more visit https://aws.amazon.com/lambda/

Hosting websites – the cheapest option – Amazon S3

March 6, 2014

Every entity must have a website now! There are myriad options, starting with free. The free ones are pretty much subdomains (sites.google.com/yourbusinessname). If you are handy and know how to create HTML pages, then read further: here is the cheapest option I came up with.

Create an account with Amazon AWS and sign up for S3, their cloud storage product. They charge about 10 cents per GB per month (http://aws.amazon.com/s3/pricing/) and $0.004 per 10,000 GET requests. The website's HTML files and images altogether would be less than 1 MB, so your monthly storage cost would be well under 10 cents.

Create a bucket in S3 named www.yourdomainname.com and copy the website files to it with the appropriate folder structure. Once copied, go to the bucket properties and select "Enable website hosting". Next is the tricky part: you need to edit the bucket policy. I have no idea why Amazon does this; it would have been much simpler if Amazon automatically created the policy once you enable the bucket for static website hosting. They might point to security, but this is bad user experience.

Anyway, now click on Permissions, click "Add bucket policy", paste the following script, and remember to change it to your actual website name.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.yourdomainname.com/*"
    }
  ]
}

Now go to your domain registrar (GoDaddy, Register.com, Network Solutions, 1&1…), create an alias / CNAME for "www" and point it to the S3 bucket's public "Endpoint" shown under Static Website Hosting.

Once you are done with all this, create another S3 bucket named yourdomainname.com (without the www), click on Static Website Hosting, select "Redirect all requests to another host name" and type www.yourdomainname.com. Now whether a user types your website name with or without the www, it will load correctly.
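If you would rather script these steps, here is a minimal sketch using the boto3 library; the domain name is a placeholder, and the policy is the one shown above:

import json
import boto3

s3 = boto3.client('s3')
domain = 'yourdomainname.com'  # placeholder

# Main bucket: serves the site.
s3.create_bucket(Bucket='www.' + domain)
s3.put_bucket_website(
    Bucket='www.' + domain,
    WebsiteConfiguration={'IndexDocument': {'Suffix': 'index.html'}})
s3.put_bucket_policy(
    Bucket='www.' + domain,
    Policy=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Sid': 'PublicReadForGetBucketObjects',
            'Effect': 'Allow',
            'Principal': {'AWS': '*'},
            'Action': 's3:GetObject',
            'Resource': 'arn:aws:s3:::www.%s/*' % domain}]}))

# Bare-domain bucket: redirects everything to www.
s3.create_bucket(Bucket=domain)
s3.put_bucket_website(
    Bucket=domain,
    WebsiteConfiguration={'RedirectAllRequestsTo': {'HostName': 'www.' + domain}})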

Here are the complete instructions from Amazon. Do not follow the Route 53 part; it is not required, and if you do it will add 50 cents per month to your cost.

http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html

I set up http://www.baasha.net this way, and the charge is only 8 cents a month.

How to safeguard your cash register?

February 15, 2014

There may be many cogs in a business operation, and every one of them is important and needs to be safeguarded. The most important of them is the cash register. Threats to the cash register have existed since the inception of commerce. People keep inventing new ways to protect it, and for quite a while it seemed this risk was mitigated with the introduction of credit cards and online payments. However, in the 21st century, the folks from medieval times are back again in the virtual world, and now they have a new name: hackers.

There are many reasons and ways for businesses to protect sensitive data. I will concentrate on online businesses and on only one item: how to efficiently process and secure customer credit card data, conform to all regulations and not be liable.

Currently, organizations have a wide range of choices for processing credit cards as they sell widgets, and for charging cards monthly as recurring payments. There are thousands of payment processors offering all sorts of discounts based on volume, card type and so on. To process recurring payments, you need to store credit cards, and the moment you store credit cards on your infrastructure, no matter how secure you are, you are setting yourself up to be a Target (no pun intended).

I set out to review the choices available in the marketplace for a payment processor that lets you charge your customers' credit cards, both one-time and recurring, without forcing your customers to have an account with the processor, while conforming to regulations and staying PCI compliant.

I reviewed Amazon (DevPay, FPS), Google Wallet, PayPal and Stripe. At the outset, Amazon and Google can be discounted as they force your customers to have an account with them; i.e., they are allowing you to charge their customers, not the other way around. There are enough middlemen already, and I don't like having one more between you and your customer. With PayPal, you can do both: if you want your customers to pay with their PayPal account, you can do so, or you can use PayPal Payments Pro (Direct Payment), which lets you pass credit cards from your website shopping cart to PayPal behind the scenes through APIs, providing a seamless experience where your customers never leave your website. PayPal also offers subscription / recurring payments, wherein you can set up certain customers to be billed a fixed amount at regular intervals. PayPal also offers a "Virtual Terminal", which lets your employees log in to a PayPal website to charge customer cards manually, and they offer a device to enable MOTO and physical card processing as well.

Stripe is a very interesting new player in the marketplace and does all we want in a clean, straightforward fashion. With Stripe, you create a customer profile with a default credit card and they return an ID. Every time you want to charge that customer, you just send that ID and the amount. You can set up recurring payments as well: you create a recurring plan with a set amount and attach the plan to specific customers' profiles, and it charges those customers at the interval the plan specifies. If the amount you charge at regular intervals varies, you can still charge the customers you want by sending the ID and the amount. It's that simple.
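A minimal sketch with Stripe's Python library; the test key, token and amounts are placeholders. In production, the card token comes from Stripe.js in the browser, so raw card numbers never touch your servers, which is what keeps you out of PCI scope:

import stripe

stripe.api_key = 'sk_test_...'  # placeholder test key

# Create the customer once; Stripe stores the card and hands back an ID.
customer = stripe.Customer.create(
    email='jane@example.com',
    card='tok_...')  # one-time token produced by Stripe.js

# Any time later, charge by ID and amount (in cents): $19.99 here.
stripe.Charge.create(customer=customer.id, amount=1999, currency='usd')

# For fixed recurring billing, define a plan and subscribe the customer to it.
stripe.Plan.create(id='gold-monthly', name='Gold', amount=4999,
                   currency='usd', interval='month')
customer.update_subscription(plan='gold-monthly')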

Both PayPal and Stripe offer authorizations and refunds of a specific sale (partial or complete), and both charge pretty much 2.9% + 30c per transaction. If you have higher volume, you can strike a discount with either of them.

Here is a simple comparison of PayPal Pro Direct and Stripe.

                                                    PayPal Pro    Stripe
Customers need an account with the processor        No            No
Fees                                                2.9% + 30c    2.9% + 30c
Recurring payments                                  Yes           Yes
Stores credit cards only for recurring payments     Yes           No
Charge cards without sending card info every time   No            Yes
Encrypts card info while sending                    No            Yes
Virtual terminal                                    Yes           No
You will be PCI compliant                           No            Yes

If you have your own website with pages to collect credit card information, don't want to store credit cards, want to charge varying amounts at regular intervals, don't want your customers to have an account with another vendor, want to be PCI compliant, and need a processor for your cards, then I would recommend Stripe.

If you don't want to go through the pain of designing your own web pages to collect credit cards, then you should consider all of the processors mentioned: PayPal, Amazon and Google.

Here are some quick, direct references.

https://merchant.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=merchant/erp_overview

https://developer.paypal.com/webapps/developer/docs/classic/paypal-payments-pro/integration-guide/WPRecurringPayments/

https://developer.paypal.com/webapps/developer/docs/classic/paypal-payments-pro/integration-guide/WPWebsitePaymentsPro/

https://stripe.com

Just-in-time apps – Next Generation Web hosting

April 30, 2011

From the inception of the internet through its peak, having a website became a given for every organization. In a similar vein, with the advent of the iPhone, everyone is now expected to have an app. After the .com boom, website creation was commoditized by web-hosting companies offering a variety of templates so you could build your website in a few minutes. The same business model is being extended to the app world by new entrants: iSites, Widgetbox and Bizness Apps.

Each of them offers a paid per-month, per-app plan with varying degrees of support and analytics, and the apps work on both Android and iPhone. BiznessApps is the cheapest of the lot. iSites offers a basic web-based app, BiznessApps offers the ability to create native iPad apps, and Widgetbox lets you create widgets alone.

Securing files in Cloud

April 30, 2011

Recently, I was looking into ways to secure the files we store in the cloud, for various reasons. I was surprised that Amazon, Rackspace and the other cloud vendors lack APIs to secure storage with certificates, encryption or other means. While researching, I read about three interesting solutions:

1. Microsoft Encrypting File System (EFS) – I felt this takes too much time and effort, and you become dependent on setting up certificate services and more. I gave up.

2. TrueCrypt – This is great. It is an open-source project; you just download and install the program and follow easy instructions to create a volume. The entire contents of the volume are encrypted and password protected. One needs to mount the volume by providing the password to access the files. However, once mounted, it gives free access to anyone logged on to the machine / server.

3. AxCrypt – This is good if you want to encrypt individual files, whereas TrueCrypt secures an entire disk / volume that you define.

Here is a good review on TrueCrypt and AxCrypt.

I'm still looking for a tool that offers an API I can call programmatically to secure the volume. With TrueCrypt, I can create an S3 volume or a storage volume in the cloud, define it as encrypted and mount the volume on the server it is attached to. However, if a hacker gets into the server, he or she can read the files; since the files are no longer encrypted once the volume is mounted, the protection becomes meaningless. It helps only as long as no one can get into the server (the same is true of Windows EFS).
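To illustrate the kind of programmatic control I'm after, here is a minimal sketch that encrypts a file before it ever leaves the server and uploads only the ciphertext to S3. The library choices (the cryptography package's Fernet, plus boto3) and all names are illustrative assumptions, not an API any vendor offers:

import boto3
from cryptography.fernet import Fernet

# Generate once and store the key away from the data (not in the same bucket!).
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt locally, so only ciphertext ever reaches the cloud.
with open('secrets.txt', 'rb') as fh:
    ciphertext = f.encrypt(fh.read())

boto3.client('s3').put_object(
    Bucket='my-secure-bucket', Key='secrets.txt.enc', Body=ciphertext)

# Reading it back requires the key, even for someone who breaches the bucket.
obj = boto3.client('s3').get_object(Bucket='my-secure-bucket', Key='secrets.txt.enc')
plaintext = Fernet(key).decrypt(obj['Body'].read())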

Cloud Portability

January 9, 2011

In the late 90s and early 2000s, though we were able to develop websites that were browser neutral, it was a pain point, and it has been reduced to a great extent with the advent of UI tools such as Infragistics, Telerik etc. Though the browser-neutrality issue continues, the greater pain now is in developing mobile applications for at least the big FOUR platforms. In a previous post, I blogged about the various tools available to achieve that.

Close on the heels of cross-platform mobile development and number portability among mobile operators, the next big thing would be products and solutions that make it easy to move in and out of various clouds. If you are hosting in Amazon, you want to be able to move from there to any other vendor, or to your own private cloud, without a hitch! The ultimate tool would take an Amazon AMI as input and produce a set of files that could be used to restore or recreate the machine either in your own cloud or at another vendor. It's like walking into your datacenter, walking out with your hardware, walking into your new datacenter and setting up the servers in the new place. Though we don't have such a product, there are multiple products that help you make yours work in multiple clouds.

Looking around, I noticed that EMC-VMware is working on an open-source solution, while Citrix is working on a solution called CloudBridge that lets you deliver data and applications to and from the cloud faster. Google has teamed up with VMware to announce cloud portability: applications running in VMware vCloud or vSphere can be deployed in Google App Engine, so these apps can be moved easily between Google and your private cloud running VMware.

What we need is a complete set of standards for machine images, application deployment and packaging. This will make clouds interoperable. Wipro, one of the IT solution providers, claims to have come up with a w-SaaS solution that offers cloud interoperability and portability through their Cloud Provider Fabric; if you search Google for this, you can find the white paper from Wipro.

Though there are multiple interesting products, the promising one is the cloud OS from Nimbula. It lets you shunt between private and public clouds. This might be a server-side offering competing with VMware, while Google Chrome will be on the desktop.

Darn Interesting & Fascinating

November 4, 2010

With the two latest innovations, shall I say Microsoft is a shining star? Where this will take Microsoft is a darn good question, but these two are going to become big. One is the Microsoft Kinect; this has wide-ranging implications for the technology industry, even outside IT. If the Wii revolutionized the gaming market, Microsoft has really taken it to the next level with the Kinect technology. The fact that it can sense human motion without the user having to hold anything has implications in so many areas. For example, there could be an iPhone app with this motion-sensing technology that deciphers sign language and converts it to voice on the other end, and much more.

The second one is also revolutionary, second only to Google digitizing books! Recently, Microsoft created something called DataMarket; yes, a marketplace for data. One can store information in digitized form in a SQL Azure database, expose that data, or any portion of it, as a regular web service, and publish it in this market. Some are free, some are free up to a point and others are paid services. It makes data available to computer programs, driving efficiency higher. Some examples are real-estate data from Zillow, carbon-emissions information on EU countries, MLB statistics… This is a great opportunity, and it will drive more and more services to be automated, driving costs down and improving efficiency.

These are two awesome innovations, but how far Microsoft will benefit from them remains to be seen. I'm sure the other players are going to innovate further and improvise, resulting in increased adoption and more applications of these two innovations. Whether Microsoft will again become a glowing star or a blowing-up star is a topic for historians. Truly, these present a great opportunity for the next round of entrepreneurs.