New Year Resolution – “Ready for cloud in 2017”

January 3, 2017

You all partied for the holidays, celebrating the New Year, Christmas, Kwanzaa, Hanukkah. Now what? Back to reality. Along with it comes a whole slew of New Year resolutions. How about a work-related resolution? You don’t have to wait for your boss to set goals for you; you should have a goal for yourself.

It is 2017, and in all probability you are already using the cloud, marching towards it, or will be dragged into it kicking and screaming. You most likely already use some form of cloud for your personal needs. The biggest holdbacks I see are security concerns and the investments that have already been made.

Knowledge is power, and in the new year, make a resolution to learn about the unknowns. You already have your servers in a data center of your own or co-hosted with a vendor. The data centers of the cloud are no different. Microsoft has come up with a 3D visualization of one of its Azure data centers. It is simply wonderful.

If you search the internet, you will find similar videos for Google as well. I’m sure you are backing up your photos to Google, and if you are using an Android phone, your data lives in one of those data centers anyway.

Let’s start the journey. The journey is as important as the destination.


Are your apps “Cloud Ready”?

December 11, 2016

It was mainframes before the 90s, then came the client-server architecture, and the web followed in 2000. The web architecture is somewhat similar to the mainframe architecture. I started my career in client-server, moved to the web and now I’m in the cloud. The technology departments in the corporate world and the IT services firms assisted in the migration from mainframe to client-server to the web. One needs to change one’s mindset when designing apps for a specific technology platform. It is easier to understand if you look at each of these technology frameworks as a genre. The current cloud architecture actually resembles the client-server architecture to a great extent. These days, apps are feature-rich, with the majority of the processing happening on the server side while rendering logic still runs on the client side (AngularJS websites, apps on your phone, etc.)

There are multiple cloud offerings – IaaS, PaaS – and of course there are many vendors: Amazon, Microsoft, Google, Rackspace, OpenStack, and the list keeps going. You can have a private cloud, a public cloud or a hybrid cloud. Now everything you need is provided and available as an app – the concept called SaaS, which started in the early 2000s with the advent of websites.

Let’s talk about migrating applications that currently live in your data centers, co-hosted or on your own premises. The easiest way is to follow the IaaS path, wherein the physical servers are replaced with VMs in the cloud. I call this easy because if your servers are currently in a data center, you are already connecting to them remotely from your personal device (PC, Mac, Chromebook – whatever); it doesn’t matter whether the remote server is in your DC or in an Amazon or Microsoft DC. Apart from this, your applications pretty much don’t need to undergo any change. However, this will not let you take advantage of “the cloud” offerings.

The second approach is to build your apps and make them available as SaaS by using PaaS. This is where you will reap the benefits of the cloud architecture. Wait – you will hear from everyone: oh, you are going to get locked in with a vendor! That was true if you had asked me even three years ago. Now the cloud offerings have matured, and one approach you can follow is to use containers such as Docker.
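If containers are new to you, here is a minimal, illustrative Dockerfile (the file names and the Python base image are my assumptions, not from this post). The idea is that the app and its runtime travel together in one image, so the same artifact runs on AWS, Azure or your own hardware:

```dockerfile
# Illustrative only: a small Python web app packaged as a container.
FROM python:3-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Because the image is self-contained, switching cloud vendors becomes a deployment question rather than a rewrite.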

Let us take a hypothetical app where you receive data from your various vendors; the app needs to load the data, notify the vendors / your IT / the business on failures, notify customers of changes, and make the latest information available on the website for everyone to access. In the current world (hosted on a physical or virtual server in your DC), such an app might have its own FTP server to pull the data from vendors and store the files in the file system of your server; an SMTP service running on your server to send the notification emails; failure files moved to an error folder and error logs written to a log file in the log folder; and, once the data is valid, updates applied to your database, which forms a separate cluster in your network. If such an app / website needs to be hosted on the cloud, it needs to be made “cloud ready”. What does “cloud ready” mean? Some of the salient features of a “cloud ready” app are being machine agnostic, self-aware and secure, among others. We will look at the top ones here.

  1. Machine agnostic – some examples
    • No file system – store files on AWS S3 / Azure cloud storage / Dropbox [if you really want one, try AWS EFS; it’s a network file system and you can use it like NAS]. If you have a server, you will have a file system, but you should avoid relying on it, as it will cause issues when you scale up and scale down. In Azure, the local temporary drive is exactly that – temporary – and you lose all data on it on reboot (other than the OS disk).
    • No local FTP / SMTP – use other mechanisms or third parties [FTP is a serious security issue]; use AWS SES or other providers for email.
  2. Self Aware
    • Apps should be service-oriented, and the service endpoints should be fetched from the application config files.
  3. Secure
    • Secure communications – use SSL/TLS and encrypt all communications, especially if you are in a regulated industry (e.g., HIPAA). If your app calls other allied service endpoints, pass encrypted data and follow authentication and authorization policies. This is something often discounted when you are writing apps for your own DC, though it’s a recommended practice.
    • Encrypt data at rest – don’t keep PII (Personally Identifiable Information), but if you need to, you should encrypt that stored data. This includes any document / form data that you collect or create.
    • If you let users upload to your S3 bucket, use a proper authorization policy and move the uploads into a secured bucket ASAP.
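The “machine agnostic” point above can be made concrete with a small storage abstraction: the app codes against one interface, and the backend (local disk for development, S3 in the cloud) is chosen by configuration. This is a sketch under my own naming, not a pattern from the post; the S3 backend assumes boto3 is installed and credentials are configured.

```python
import abc
import os
import tempfile

class BlobStore(abc.ABC):
    """One interface; the actual backend is chosen by configuration."""
    @abc.abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abc.abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalStore(BlobStore):
    """Dev/test backend -- fine on a laptop, avoid on cloud VMs."""
    def __init__(self, root: str):
        self.root = root
    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)
    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

class S3Store(BlobStore):
    """Cloud backend; assumes boto3 is installed and credentials are set."""
    def __init__(self, bucket: str):
        import boto3  # deferred import so local development needs no AWS SDK
        self.s3 = boto3.client("s3")
        self.bucket = bucket
    def put(self, key: str, data: bytes) -> None:
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

def make_store(cfg: dict) -> BlobStore:
    # The backend comes from config, not code -- swap it without app changes.
    if cfg["backend"] == "s3":
        return S3Store(cfg["bucket"])
    return LocalStore(cfg["root"])
```

The same idea applies to email (SMTP vs. SES) and queues: hide the vendor behind a thin interface driven by configuration.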

In addition to these there are more; my intention is to bring this to your attention, not to write a thesis. You are always welcome to write to me if you want to discuss further.

Getting your apps to the cloud opens up possibilities, with more tools & technology at your disposal. You can easily start notifying your clients through SMS (Twilio), you can cache data if you are using GCE, and you can use Firebase / AWS caching services to synchronize data and provide a seamless experience irrespective of the device form factor. Ultimately, your application will be device agnostic and your apps can follow your user from phone to PC to TV.

These thoughts are not just for migrating; you need to keep them in mind for newer applications you are building as well. Feel free to drop me a line if you need help migrating legacy apps to the cloud.

Server-less Architecture

November 26, 2016

Using AngularJS & Firebase

There are a lot of write-ups on the pros & cons of server-less architecture: how it creates vendor stickiness and high TCO, how it is restricted to certain technologies, how it is not a panacea. I’m not going to get into that argument. One thing I will buy into is that “it is not a panacea”; everything else is debatable. There are many ways of achieving a server-less architecture, and the one we discussed earlier is AWS with Lambda functions. Yes, it is still a choice, but IMHO it takes orchestration of so many individual components of AWS. You need Lambda, API Gateway (to make it REST), Cognito or other auth providers, DynamoDB or another database, and more.
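To make the Lambda + API Gateway pairing concrete, here is a minimal handler sketch. The event shape follows API Gateway’s proxy integration; the greeting logic is purely illustrative, and the Cognito / DynamoDB wiring the paragraph mentions is omitted.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler fronted by API Gateway (proxy integration)."""
    # API Gateway passes the HTTP request details in `event`;
    # query string parameters arrive as a dict (or None if absent).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Each extra concern (auth, persistence, routing) is another AWS component to configure around this function, which is the orchestration overhead being described.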

Google has come up with a platform called Firebase. It is simple and concise. What initially started as a JSON database has evolved into a full-fledged platform that encompasses all the AWS components I mentioned above. It can host, it can offer authorization, it can authenticate with 3rd-party providers such as Facebook and Google, it can store data – and if I missed something, you know what? It can do that too 🙂 (you still need to develop the front-end website). The biggest drawback is that working on Firebase makes you feel like an astronaut on the ISS doing a spacewalk. Remember, the ISS orbits at ~17k miles an hour? Well, Google is not that fast, but they keep updating the platform every few weeks if not faster. That is the constant challenge. If you need help and came across a video that is a year old, well – forget it. You need to find something that is at most six months old.

The simplest and fastest way to get up and running in production is to use AngularJS for the front-end and Firebase for the back-end. For most apps, the free tier of Firebase is good enough. You can write your code in AngularJS and minify it using tools like gulp; Google offers a JavaScript library called AngularFire to interface with Firebase from Angular. You need to do a few minutes of configuration on the console to enable authentication and authorization, then copy the configuration details (like the db connection string). No secret key / password needs to be stored on the client side – all authorization is done on the server side – so you don’t need to worry about your secret keys getting exposed. Once you develop the web app in AngularJS and minify it using gulp, you can host the website either on Firebase or on AWS S3 (How to host on S3).

If you are a startup and want to validate and get an MVP out quickly, this is a great platform. You can even continue to run on it until you reach critical mass, and if you hit a bottleneck, you can then afford to build on another platform. The Google platform can scale; where I see a problem is that if you want to download your data out of Firebase, it gets tricky. Even if you are a big corporation and need to quickly get a web-based app out, or need a website to manage an upcoming event, or anything that is not going to run for years and that IT says will take years to build, I will say nothing can be cheaper and faster than Firebase.

If you have thoughts and want to discuss, I’m always open, drop me a line.

AWS RDS & Azure SQL – Some updates

October 5, 2016

I’m continuing to explore cloud-based SQL Server offerings (AWS RDS as well as Azure SQL) vs. running SQL Server yourself in the cloud. There have been some good developments recently. Round one: Amazon AWS RDS wins against Azure SQL.

In my earlier post (Amazon RDS DMS), I mentioned that these cloud providers should come up with a mechanism to backup / restore to/from a .bak file in S3 or Azure blob storage. I wrote that in April, and AWS made it possible by the end of June while I was busy playing with Azure. Here is the documentation on how to do this.
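For context, AWS’s native restore is kicked off with a stored procedure that RDS exposes (msdb.dbo.rds_restore_database, per the AWS documentation). A tiny helper that builds the T-SQL, which you would then run through your usual SQL Server client – the database name and S3 ARN below are placeholders:

```python
def rds_restore_sql(db_name: str, s3_arn: str) -> str:
    """Build the T-SQL that starts an RDS SQL Server native restore from S3."""
    # Assumes the RDS instance has the native backup/restore option enabled.
    return (
        "exec msdb.dbo.rds_restore_database "
        f"@restore_db_name='{db_name}', "
        f"@s3_arn_to_restore_from='{s3_arn}';"
    )
```

A matching rds_backup_database procedure exists for the reverse direction; see the AWS docs linked above for the option-group setup.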

Azure has come up with something similar but fell short. They introduced a way to restore from a .bak file in Azure blob storage. If you have your own instance of SQL Server 2014 and above, there is an option to “Restore from URL” where you specify the URL of the .bak file in Azure blob storage.

However this doesn’t apply to Azure SQL.

On other fronts, Azure SQL offers features such as full-text search in complete form. Here is the link to the documentation on enabling / using the full-text feature.

The reason I’m providing links rather than explanations is that the documentation is really good and you should not have any problem following the instructions; it is pretty much straightforward. I had trouble with “Restore from URL” because I thought it was the equivalent of the AWS RDS feature – I was trying to restore an Azure SQL database from blob storage and getting frustrated!

My biggest pet peeve with Azure SQL is that the majority of its features cannot be managed using SSMS – the wonderful tool that makes SQL Server stand apart. You can’t right-click to manage full-text, or indices, keys, etc. I’m happy that MS is following the open source crowd in warming up to the community, but it should do all this without giving up its USP – the wonderful tools. Without tools such as Visual Studio and SSMS, you might as well use the open source alternatives.

Please do drop me a line if you have trouble accomplishing any of these or want to reach out to me to share your thoughts.

To Upgrade (or not?)

August 16, 2016

We see all these attractive things in life. We want this, we want that – but do we really need it? We don’t ask that question. That is the ONE question, and the OTHER question is: what would happen if we don’t get it? The answers to these two questions pretty much sum up your next course of action.

However, oftentimes it is not as simple as that. If you have a 10-year-old Toyota Camry and you want a Mercedes 300, that’s one thing; but what if you have a Windows XP PC (yeah, you read that right) that works just fine @ work or @ home, and you need to upgrade to Windows 7 or 10 (please don’t think of 8)? If it is @ home, at least you are the only one exposed, but if it is @ work, your exposure is a whole lot bigger – not just financially but also in the number of people it will impact. I’m going to focus on running unsupported software here.

I come across this scenario a lot, especially recently. The world has noticed that the 800-pound gorilla has woken up (yes, I’m talking about Microsoft). They have started churning out software that is in line with the current global market and addresses current global risks. In thinking through the process, there is no golden rule or magic wand to figure out whether it is time to bite the bullet and upgrade. However, we can come up with some guiding principles or a set of questions to help in the decision-making process. Based on experience, interactions with our clients, feedback, and industry pundits, I came up with a set of guiding principles. (Again, this is just a sample.)

  • Are you in a regulated industry? Do your systems conform to the government regulations?
  • Do you have a corporate governance committee? Do your systems conform to the governance rules laid out?
  • Is there an industry standards body that you are part of? Do your systems conform to its standards (basically, you are not just preaching but walking the talk)?
  • When did you last perform an upgrade? How many versions are you behind?
  • Do any (and how many) of your systems use software that is no longer supported by the respective vendors?
  • Are all customer-facing systems running supported software?
  • What is the cost of not performing an upgrade? What is your plan when sh*** hits the fan or things stop working?

If you are in a regulated industry, you don’t have any wiggle room; you need to follow the government regulations. You need to be honest with yourself, so you should be conforming to your own standards – these are rules that you laid down for yourself and should follow (remember those New Year resolutions about going to the gym? Well, this doesn’t belong in that category). If you are part of a world / industry standards body, you need to live up to it; if you don’t, then who will?

Then comes the question of when you last performed an upgrade. I get it – it is a chore, there is too much impact, etc. But if your answer is half a decade or more, you have reached the point where the cost of inaction exceeds the cost of action. If any of your systems are running unsupported software – if the answer is more than ZERO – and they are used in a customer-facing environment, you probably don’t have “a plan” for the last question. If it is only internal systems, you can at least set up extra security rules around the unmaintained servers – a.k.a. you’ve got a parachute.

However, the last question is the most critical of the lot for your business. Running unsupported software in customer-facing systems is not just jumping off the plane; it is jumping off the plane with no parachutes for anyone in your organization. Depending on how critical the system is, it can make jobs obsolete, in turn the employees, and as a result your own organization. Remember the stories we read in the newspapers and other media about such incidents? Do you want to be one such example?

IMHO, the ideal upgrade cycle is three years, and not more than FIVE. Beyond that, you will run into the problems of unsupported software, security, obsolete technology, a rusty workforce and more.

What are your guiding principles? Drop me a line.

Microsoft Open Source

August 8, 2016

Yes, you read that correctly. I hear you saying: isn’t that an oxymoron? Well, it is not. The world is abuzz that ever since Satya took over Microsoft, things started changing. It is more developer friendly, it is more open to new ideas, it is collaborating with various vendors and partners, yada yada yada…

This new era started with Microsoft embracing Linux, letting you create Linux instances in Azure. You all know that Visual Studio is now free to the entire world. I have been associated with technology for the past couple of decades, and one of the best tools for developers, hands down, is Visual Studio (no offense to Eclipse, Sublime and the like). They used to offer various editions – developer edition, then community edition, trial edition. These editions were either teasers, i.e., the features you wanted were not available and you needed to buy the paid version, or they were available only for 90 days or some limited period. They have moved out of that mindset and said: well, there is going to be one edition and it is free.

The best development tool in the world – the one that is tightly integrated with the Azure cloud and lets you manage, create and deploy assets – is free now. If you thought that was not enough, Microsoft recently announced that they are giving away the SQL Server Developer Edition for free. This is huge! I remember struggling with the so-called SQL Server Developer Edition a couple of years back because it didn’t have Analysis Services, Integration Services, SSRS or SQL Agent. Now all of that is available. So what is the catch? Nothing. This edition is exactly the same as the SQL Server Enterprise Edition. The download link is here; you just need to sign up / create an account with VS Essentials. The only limitation is that Microsoft says you cannot use the Developer Edition in any production environment. At least you don’t have to worry about purchasing while you are developing your product; you can worry about licensing when it comes time to deploy or go live. Even then, both Microsoft and Amazon offer various “start-up” initiatives that you can benefit from.

I hear you asking: so what brought about this change? IMHO, this is how Microsoft operated from inception; they probably forgot for a while when Steve was at the helm. It was Novell NetWare (you guys remember?) that pioneered local networks in the corporate world; then came Windows NT. You all remember what happened to Netscape? Well, Microsoft offered Internet Explorer for free. We didn’t get anything free for a while, and now it has started again: you get Visual Studio, SQL Server Developer, Xamarin for mobile apps, the start-up initiatives, Linux.

Don’t over-analyze. Start working on your next idea; the ecosystem is available for you with the best tools out there. What are you waiting for? Microsoft is open to sourcing it from / to you. Happy developing.

PS: Here is the complete licensing guide for SQL Server 2014. If you are in doubt, don’t hesitate to drop me a line.

You have (want) an app for that?

August 4, 2016

It’s déjà vu all over again. The question used to be “do you have a website?” in the early 2000s; now it’s “do you have an app for that?”

It’s a given that everyone and everything has a website, thanks to the various website builder tools and content management systems such as WordPress, Joomla and Drupal. Building websites used to be a big business; these tools democratized the process. You don’t have to be a pro: they all offer a WYSIWYG editor – you drag, you drop, you buy images from iStockphoto, and a few minutes (maybe an hour) later you have your own website without thinning your wallet. These content management tools took the pain out of not just building a website but also maintaining it on a regular basis.

There are specialized websites / tools bordering on SaaS (Software as a Service) and website builders for specific categories. For example, when I was looking to manage my son’s school’s PTO / PTA, I found a number of sites, prominent among them being ParentOrbit. They not only allow a PTO to register and collect funds using credit cards or ACH, but also let you set up class parents, broadcast emails and collaborate with other parents. I even noticed them letting independent business owners (robotics, chess classes, anything to do with kids) run a virtual shop: they can manage their business, communicate, collaborate, accept payments, etc.

The same democratization process is happening in the app marketplace. Unless you are looking to develop a unique, ultra-functional app, you can pretty much create it, publish it to the store and maintain it in the same fashion. Content management tools like WordPress have their own mobile plugins that let you convert a website built with WordPress into an app. If you want something more specific, like a social-media-based mobile app for your band’s fan club, there are myriad engines like Anahita, HumHub and SocialEngine. These let you create a website and convert it into mobile apps.

In addition to these content-management-style tools, there are mobile app builders like GoodBarber, Appy Pie and more. Then there are cross-platform app development tools such as Microsoft Xamarin and Appcelerator Titanium. Microsoft recently bought Xamarin and bundled it free with Visual Studio.

You don’t have an app yet? Want an app? Pick one – any one is better than not having one. Still confused? Drop me an email.

Microsoft Azure – first impression is a better impression

August 4, 2016

I have been working on AWS for the past few years and have now started looking at Microsoft Azure, as it has become number two and is growing fast. This is my initial post, and I’m going to concentrate on the basics and some comparisons. I don’t want to bore you with too many details; if you want details, you can reach me.

Microsoft has done a few things well, and I see and understand why this service is growing so fast. To start with, their interface to manage resources – the management console, in AWS lingo – is good, though they could do better with the placement of the network security groups (my pet peeve). When I started off, there was immediately a phone number and a person to chat with if I needed help, and I reached out, as the terminologies are different. I found it surprising to find a contact number and a person to talk to, and then I was thrilled that the person knew what he was talking about, which is getting rare these days! (Try finding a phone number on the AWS or Amazon website.) Microsoft has got its act together, and its customer service & tech support are coherent and good. Everyone has heard about the cloud, but it is still a mystery to many, and having a person to talk to really helps. Score one for MS on this front.

I was able to set up a server, install a SQL database and host a web app in an hour, which I found impressive for a first attempt. Next up: pricing. Microsoft’s pricing is slightly higher than AWS’s, and the documentation on pricing is scattered – you can’t easily go from the pricing on VMs to storage to Azure SQL; it’s a matter of hopping from one place to another. However, if you already have your setup in a data center and want to figure out how much it will cost, Microsoft has a tool (I didn’t try it yet) called Site Recovery. Using it, you can even create an image of your server and move it to Azure, so you can move your virtualized server between your on-premises environment and the cloud easily; you can’t do that with AWS. If MS scores with this tool, AWS scores with the ability to create an image of your servers within AWS. That process, which I will address in a later post, is cumbersome at best.

One other thing Azure does better is the way you pay for the service. If you have different cost centers / business units, you can create a subscription for each of them and associate a different credit card with each. You can give the subscriptions appropriate names and associate a MS partner if you are working with one. This is a big win for big corporations and for accounting folks. You can do this in AWS as well, but it is not as straightforward.

I want to wrap up this initial post by mentioning one very important piece of information. Once you create your server and are no longer using it, please make sure you go to the management console (the Azure portal) and “STOP” it from there. If you skip this step and just shut down your instance from inside the RDP or SSH session, you will continue to get charged for it! Though Microsoft offers a feature to put a cap on your spending, it is not offered on the most popular “pay-as-you-go” subscription.

Overall, I had only one rough experience: once you have one Azure account, it is not easy to sign up for another. Let us say you created one account (your personal one) to develop and test. Now you are happy, but you want to create a new account for production or under your corporate email – you can’t do it. The two workarounds are to use a different browser, or to open an incognito session in Google Chrome and sign up there. Why? Go figure! All in all, Azure is a comparable service to AWS, with better customer service, better tools and better SQL on the cloud, but a slightly higher price. If you want SQL Server on the pay-as-you-go model, Azure is cheaper than AWS, and feature-wise, Azure SQL is better than Amazon RDS for SQL Server.


Amazon RDS DMS

April 3, 2016

This is a follow-up to my previous write-up, specific to the Database Migration Service (DMS) that Amazon AWS introduced recently. I’m finally able to set up the database in RDS and host the webapp with some workarounds, and I’m going to lay out what is involved in doing so. Thanks to the AWS forums and engineers.

The issue I had in migrating data is that my database had all its objects in the “dbo” schema and no other schema. This seems to be a bug, and when I posted on the forum, an engineer responded with a workaround: use a custom mapping rather than the default mapping, with the following JSON.

{
  "TableMappings": [
    {
      "Type": "Include",
      "SourceSchema": "dbo",
      "SourceTable": "%"
    }
  ]
}
I’m giving this for completeness’ sake. I would advise against following the wizard approach: things take time, this is not clearly mentioned, and it results in unnecessary delay and confusion. Instead, create the source and destination endpoints, a replication instance (the server / machine that runs the task of exporting and importing data by connecting to the source and target endpoints) and a task. Creating the replication instance takes time; please wait and let it do its job.

Once you have created all these, start the task. In this process, all tables along with their data are copied from source to destination, together with any primary keys or clustered indexes your tables have. That’s the good news, but that’s all it does. It does not bring over any of the constraints (foreign keys, default values, triggers), views, stored procedures, functions or non-clustered indexes. You have to script them out separately and apply them after the data is migrated. If you have any identity columns, you are in for some serious work: you have to create a new identity column, turn identity off, update the data and turn identity back on. However, if your tables are not that big, you can go to design mode and toggle the identity property (after enabling that from Tools -> Options).
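One related trick when re-loading such tables by script is T-SQL’s SET IDENTITY_INSERT, which lets an INSERT supply explicit identity values. A small helper that generates the wrap-around statements – the table and column names below are purely illustrative:

```python
def identity_insert_wrap(table: str, insert_sql: str) -> str:
    """Wrap an INSERT script so explicit identity values are accepted.

    Only one table per session can have IDENTITY_INSERT ON, hence the
    ON/OFF bracketing around the insert statements.
    """
    return (
        f"SET IDENTITY_INSERT {table} ON;\n"
        f"{insert_sql}\n"
        f"SET IDENTITY_INSERT {table} OFF;"
    )
```

Run the generated script against the target through SSMS or your usual client; it keeps the original key values intact without dropping and re-creating columns.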

These issues could have been easily avoided if we could restore from a .bak file. If Amazon offered restore from a .bak file in S3, it would be icing on the cake.

One of the basic rules for all the apps we write for the cloud is that they should be independent of the local file system. So Amazon should have a tool to start the restore process using a .bak file from S3. Once initiated (though RDS claims no access to the file system, a file system of some sort exists, as it shows c:\MSSSQL\Data\ when you create databases), it should copy the file from S3 to the local file system and then do the restore.

Once we have this as part of DMS – or as a sub-tool similar to the “MigrationTool” they currently offer for migrating data from disparate databases – migrations will become far less painful.

PS: Once you are done with your data migration, please delete the replication instance; otherwise it will continue to cost you money.

Amazon RDS (MSSQL Server)

April 3, 2016

The concept of RDS sounded very attractive, so I started reviewing its usability, as it can take care of so many chores (backup, availability, clustering…). As icing on top, Amazon announced the Database Migration Service (DMS) a week ago. I could not resist any longer, so I took a dive into RDS and ended up nibbling at its frontiers.

To avoid any security or network issues, I set up two test servers: one in EC2 (Windows 2012 / MS SQL Server 2014) and an RDS instance with the same version of SQL Server. I created the test database by restoring from a .bak file to the EC2 SQL Server (as you can’t restore from a backup file directly to RDS) and then wanted to use DMS. It was pretty basic and the setup was easy to follow: set up the source DB and target DB connections, test them, create the replication instance (this is an EC2 server) and the migration task. The task ran successfully, no errors logged. However, to my surprise, it didn’t do anything! No matter what I tried, it wouldn’t. I’m still working on getting this to work.

In the meantime, I got the data into RDS by the following method (hold your breath) – not an easy way, especially if you have large amounts of data; that’s where I think DMS will be a killer. DMS offers not only to migrate data but also to replicate data on an ongoing basis. This will let you maintain a parallel world of your production data – how cool is that? (If only it worked! :-()

First, script out your views, stored procedures, functions, keys, constraints, indexes, etc., all in separate scripts. Second, use the Import / Export wizard to migrate all your current data into your RDS instance (pretty fast – I migrated a 50 GB database in under 15 minutes, though I was running an m4.large instance for the source DB and used 10,000 IOPS for RDS). Then use the scripts you generated to create the rest of the objects. This is a painful process: if you still have the habit of using identity columns, you need to drop and re-create those columns! In addition to not being able to restore a database from a backup, you can’t back up “a specific” database – the entire SQL instance along with all its databases gets backed up. If you are a small shop, are starting from scratch, don’t have much data, or want to use it for analytical purposes, RDS will be helpful. However, my sense so far is that RDS is not ready for large production installations.

I’ll follow up with a write-up after a successful DMS usage.

PS: If you are using an older version of SQL Server (2005, for example), you can use an Amazon AWS EC2 / SQL Server 2014 instance to test your applications without buying the license, which costs a lot.