Cloud Computing for All! (virtually)
tomorrow's technology today
Gary Cornell and his views from across the pond…
I was recently talking to a friend who runs a small company. He told me he was thinking of ditching Gmail for their corporate email because of the recent Gmail outages and returning to hosting it themselves. He reckoned they lost so much productivity when their email went down that it was probably worth the cost to go back to doing it in-house.
This seemed a bit much to me, and I told him so. Of course, my friend wasn’t behaving in an unusual manner at all; behavioural economists and cognitive psychologists have documented this “anchoring” phenomenon in many different experiments. We seize on something that has just occurred and then use it to extrapolate future situations – even if the numbers going back further into the past don’t come close to supporting the extrapolation.
The trouble in this case was that, by focusing on the now, he was no longer taking into account how often his internal email server went down before they adopted Gmail! (One estimate says that email goes down around 10 hours a year at small and medium-sized companies that do it themselves.) Now don’t get me wrong: I’m certainly not saying Gmail is the solution to every small business’s email needs. You could use a hosted Exchange server, for example, or some other hosted email service. But what I do think is that, more and more, it makes less and less sense for small and medium-sized businesses to devote the time and resources to hosting their own email.
Another example of people getting scared for no good reason – ascribing to a failure of the Cloud what turned out to be stupidity in the extreme – is the recent potential loss of all the data on T-Mobile’s Sidekick smartphones here in the United States. Right after the news broke, I heard a couple of people comment that this shows the Cloud cannot be “trusted” for vital data storage. But it wasn’t a failure of the Cloud; it was a human failure. The data loss was caused by a lack of backups. If whoever holds the data doesn’t keep backups at locations other than the primary one, you can bet Murphy’s law will apply and sooner or later you’ll pay the piper. In the case of the Sidekick, T-Mobile discovered, after a storage upgrade went awry, that backups weren’t readily (or perhaps ever going to be) available. But in spite of this potential data loss in one unusual situation, for any small to medium-sized company with large data needs, avoiding the Cloud is a bad mistake.

Anyway, up to now in this column, and in all my previous columns on the importance of Cloud Computing, I have been talking only about the Cloud for small to medium-sized businesses. I have become convinced that it is the choice to make. However, in my previous columns I was equally clear that using the Cloud for a large company’s IT needs was not yet a viable choice.
Larger companies have special needs for security, fault tolerance and uptime: hosted Clouds simply can’t provide what you need – nor do I think they will ever be in a position to do so. You need to control your Cloud if you are a large enough shop!
Nonetheless, the business case for moving more and more of your IT to the Cloud just keeps on growing. There are not only potentially large and immediate direct savings from moving data centres to cheaper locations; there are also long-run savings from a much lower carbon footprint. For example, at the recent VMworld show in early September, VMware claimed that duplicating the power of their “Local Cloud” in physical hardware would have required 25 megawatts of power to drive 37,248 machines, plus the equivalent of three football fields to hold the equipment. Instead they ran it all (virtually) on 776 servers (admittedly with 37 terabytes(!) of internal memory, 348 terabytes of shared storage and 6,208 processing cores). And while there were some glitches at the show, it was a powerful example of the power and promise of vSphere 4, whose release in the second quarter of 2009 seems to me to be the most exciting development for Private Clouds thus far. For the first time, I can see large businesses moving to build Private Clouds that will be as secure and as fault tolerant as they need. In particular, the fault-tolerance features of vSphere are really impressive: there’s a demo where they have a BlackBerry server running, pull out the blade it was using, and there’s no downtime whatsoever.
VMware claims that vSphere will:
- Decrease your capital and operating costs by over 50%
- Run a greener datacentre and reduce your energy costs
- Control your application services levels with advanced availability and security features
- Streamline IT operations and improve flexibility
And while they are no doubt overestimating the savings and underestimating the cost (and pain) of the transition, they may not be far off.
Since you can evaluate vSphere for free, I think every company that is thinking of moving to the Cloud should check it out. What about other alternatives for enterprise Private Clouds? XenServer is free, of course, but doesn’t seem to match vSphere 4’s level of sophistication. And while I am looking forward to the Azure announcements that will be made at Microsoft’s PDC (and will, I suspect, report on them in my next column), Azure is still a work in progress whose final form is not completely clear, whereas vSphere 4 is out now.
(ITadviser, Issue 60, Winter 2009)