Let’s take a minute to reflect on the IT industry. Heraclitus had it right: “The only thing that is constant is change.” We’ve all seen disruptive waves of technology sweep through the IT world, and with every wave there are winners and losers. In the compute environment alone, there have been mainframes, microprocessors, open systems, virtualization, and now cloud computing.
“I think there is a world market for maybe five computers.” — Thomas Watson, chairman of IBM, 1943. (The irony here is that IBM is the only company to survive every disruptive wave to date).
Hold on one minute. I know cloud is a buzzword, but let’s be clear. It isn’t simply photo sharing by Microsoft. It’s not just having your videos instantly available to share online. You don’t even need to go over the internet to reach the cloud! That’s right, there are such things as internal clouds (becoming more popular now, thanks to the NSA). The cloud isn’t where your storage and compute reside, but how they are delivered to the end user. Cloud is making IT a utility. Cloud is the embodiment of as-a-service offerings. It’s taking a pool of resources, virtualizing it, and then automating it for elastic provisioning to your end users (likely with a chargeback model).
The companies with the best public cloud offerings today are Amazon, Google, AT&T, Verizon, and the like. Amazon and Google helped pioneer this trend after finding themselves with huge pools of underutilized infrastructure. Well, we all know what they did with it. They pooled the resources together, carved them up, and rented out pieces to the public. AT&T and Verizon came late to the party, but owning the Internet’s infrastructure stacks the deck in their favor; expect most ISPs to offer cloud services to end users.
What about private clouds? I thought you’d never ask. If I’m the CTO of a corporation, I’m sitting on a big pile of IT infrastructure asking myself why I’m viewed internally as a cost center when there are all these companies doing what I’m doing, but for a profit. So I pool all my resources together, add a virtualization layer, and automate my infrastructure to align with my business’s service levels. If I’m very good, I create a self-service portal for my end users and implement a chargeback model (or at least a showback model). Voilà, I’ve brought my DC up to 2013!
The journey to utility IT is closer than most people think. If we compare IT to the power industry, AT&T and Verizon would be the local power stations. Amazon and Google, the giants, would be the nuclear power plants. We haven’t even covered the plethora of cloud service provider startups: software-as-a-service shops, hosting companies, and thousands of other as-a-service players. Think of these as solar and wind power. Then there are generators for producing your own power: private clouds. All of these services will come with different price points, features, and SLAs. We’re close with IT, but something is missing…
Extending the metaphor, let’s compare a household appliance with a business application. An appliance gets plugged into the wall and can draw on solar energy, nuclear energy, local power plants, and generators; it makes no difference to the appliance where the energy comes from. It doesn’t need to be reconfigured to take advantage of the lowest-priced power, or to draw from all sources at once if demand happens to surge. It all happens automatically. The key piece missing for the application is orchestration. We already have the technology to plug an application into a cloud and let it start consuming resources, and many companies have that part down pat. What is lacking is the ability for an application to consume IT by policy, regardless of which cloud it lives in. For example, an application might need low latency and should therefore run close to the end user. If demand for my application crosses a threshold in some region, I want to spin up an instance at the local mom-and-pop cloud to be as close to those users as possible. On the other hand, I might want cheap storage for archiving old data or bulk batch processing. In cases like these, I might have a policy that lets the application (and its underlying resources) move around the globe and between clouds, following wherever power prices are trending down.
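To make the idea concrete, here is a minimal sketch of what such a placement policy might look like. Everything here is invented for illustration: the provider names, latencies, and storage prices are hypothetical, and a real orchestrator would obviously weigh far more signals than two.

```python
# Hypothetical sketch of policy-driven placement across clouds.
# Provider names, latencies, and prices below are made up for illustration.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Cloud:
    name: str
    latency_ms: float    # observed latency to the target user region
    storage_cost: float  # $/GB-month


@dataclass
class Policy:
    max_latency_ms: Optional[float] = None   # for latency-sensitive apps
    prefer_cheapest_storage: bool = False    # for archival / batch workloads


def choose_cloud(clouds: List[Cloud], policy: Policy) -> Cloud:
    """Pick whichever cloud satisfies the policy, not a specific vendor."""
    candidates = clouds
    if policy.max_latency_ms is not None:
        candidates = [c for c in candidates
                      if c.latency_ms <= policy.max_latency_ms]
    if policy.prefer_cheapest_storage:
        return min(candidates, key=lambda c: c.storage_cost)
    # Default tiebreaker: lowest latency wins.
    return min(candidates, key=lambda c: c.latency_ms)


clouds = [
    Cloud("big-public-cloud", latency_ms=80, storage_cost=0.02),
    Cloud("mom-and-pop-local", latency_ms=12, storage_cost=0.10),
    Cloud("private-dc", latency_ms=35, storage_cost=0.05),
]

# A latency-sensitive app lands on the local cloud.
print(choose_cloud(clouds, Policy(max_latency_ms=50)).name)
# An archival workload lands wherever storage is cheapest.
print(choose_cloud(clouds, Policy(prefer_cheapest_storage=True)).name)
```

The point of the sketch is that the application declares intent (a latency bound, a cost preference) and the orchestration layer decides where it runs; swap the numbers and the same policy moves the workload somewhere else.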
I’m really excited to see what happens in the industry over the next 3-5 years. What will be possible when these clouds start becoming interconnected? How will the competitive landscape change? Will some clouds stay proprietary, and how will they play into orchestration? Lots of questions remain.
P.S. I wrote this two years ago and never published it publicly until now.
This was pretty good. I’m working with my sole IT guy right now (legit sitting in his office while he’s in a meeting in another room) to try and hash out a plan to virtualize all of our servers (we’re really behind), and also figure out how to convince our clients that hosting their instance in a cloud is better (cost/SLA/security) than having us host it in one of our “datacenters.”