There were a lot of messages that came out of the recent Burton Group Catalyst conference in San Diego surrounding the public cloud.
But one resonated more than the others: You need to get a grip on your own assets, meaning what data is stored on which servers and what it really costs to build or deploy and maintain an application, before you can figure out whether cloud computing is a more cost-effective route.
Burton analyst Chris Howard compared the state of enterprise IT to that of Rome: Are we just building and building upon an old architecture? When is it time to start getting rid of some of the old stuff? And how do we decide what should stay and what should go?
Bill Peer, chief enterprise architect at InterContinental Hotels Group, who presented at the show, talked about building an internal cloud. In the process, he is moving data from two mainframes predating the 1960s to new servers on a private cloud.
This is a multibillion-dollar company making the move to get rid of old systems, and there are probably other enterprises out there sick of maintaining mainframes and code created by people who are no longer with the company.
The list of cloud computing benefits and risks is long and varies depending on whom you ask, but one benefit is clear: It could force CIOs to assess what they need and what they can do without, and, if nothing else, build more efficient data centers of their own.
If you are not faint of heart, there is a test for figuring out what can stay and what can go. Howard shared a story of how Ken Anderson, former CIO of Novell, used to go into the company’s data centers at night and randomly turn systems off.
If no one noticed in three weeks, the system stayed off.