Why Cloud?

IT leaders:

It’s time to take an honest look at your organization’s business and its goals. How does IT drive business objectives? Can you honestly say that your IT infrastructure contributes to your company’s bottom line, or are you still a “cost center”? What you will find is that there are big opportunities to enhance business strategy, free up real dollars in hard savings, and reduce soft costs. Although out-of-pocket savings are the current focus of the Cloud’s benefits, it’s the soft costs that may provide the biggest business impact.

Freeing up “facilities”

Moving systems to the Cloud will allow key resources to focus on the projects that directly impact the business. Your IT group will better serve the organization as a whole by providing the foundation to grow and expand. So what do I mean by facilities? Think on a broader scale. I am not talking about a couple of racks; I’m talking ALL of your physical facilities. Just think of the benefits of not being tied to a physical space:

  • Production and/or Disaster Recovery: you don’t have to house the majority of your hardware onsite. The Cloud can potentially house both primary production AND disaster recovery. Two different locations in the Cloud, nothing in your building.
  • Utilities: electricity, phone, wireless connectivity. Every square foot has associated costs, and much of it can be Cloud based. No more need for the long-term contracts and responsibilities a company’s physical space carries. Your utilities don’t have to change when your address does.

The goal of the Cloud is to provide efficiencies to the business, both from a cost and a support perspective. So why wouldn’t you want:

  • Quicker turns on IT projects
  • Stability across the application base
  • More efficient use of skilled resources
  • Mobility

Shifting applications and functions only makes sense. Consider Microsoft Office 365 as a starting point. Even if you only use Outlook and not the other applications included (SharePoint, CRM, SkyDrive), consider what you WON’T have to worry about:

  • Licensing
  • Version control
  • Hardware life cycles
  • Facility space and costs

And look at the benefits:

  • Ease of access regardless of location
  • Plays right into Disaster Recovery and Business Continuity plans
  • Latest and greatest versioning / functionality

The bottom line is that the Cloud provides significant benefits to any business. It’s time to take a hard look at how your IT footprint can contribute to your company’s success.

Happy Birthday Office 365, what’s next?

It sure looks like it’s been around for a lot longer, but Office 365 is officially celebrating its first anniversary this week.

It’s true that some aspects of earlier Microsoft cloud efforts have been around for 4-5 years under different names, like BPOS, but the new branding and the consumer side were introduced last year, and SharePoint Online took a huge step forward. So how is it doing?

Not bad, according to various reports: 3.5 million consumers have signed up, and 15% of Exchange users are in the cloud (a 6% increase over the last year). Microsoft is clearly betting the farm on the cloud, and the recent choice of its cloud chief, Satya Nadella, as the next CEO is a telling sign.

A recent technical summary at ZDNet and a financial analysis at Seeking Alpha both view the stability and profitability of this model very positively.

We’ve been using Microsoft Office 365 email for a number of years and SharePoint for the last few months, and our experience has been very positive. Our customers have been reporting similar satisfaction with reliability and performance. The main advantages we see are:

  • Reduced IT costs: no need to allocate servers or VMs, no need for redundancy and backups, and no need for regular installation of patches and updates and all the testing involved.
  • Faster provisioning: we invested in provisioning processes that dramatically reduced the timeframe for creating new sites and the administrative effort involved.
  • Mobile and iPad access through Office Web Apps.
  • Social: the new newsfeed, Yammer integration and Communities bring out of the box enhanced collaboration and social interaction.

Looking ahead, there are definitely some concerns and wish-list items I’d like to see Microsoft address for Office 365 and SharePoint Online:

  • Stronger security and privacy commitments. Not that the NSA would have a problem getting to most information anyway, but knowing that all corporate secrets are basically available to them upon request is disquieting. Multinationals may not be willing, or legally able, to make the jump and trust Microsoft with their data. This may be the biggest obstacle to mass adoption among larger companies; small to mid-size companies may be less concerned.
  • More control. From an IT point of view, this is scary. An in-house server you can test, tweak, add memory to, reboot when needed, and extend with third-party add-ons. You know, control. Giving away the ability to jump in and intervene is hard. Even when Microsoft delivers reliability and reasonable performance, our natural impulse is to try to make it better, to tweak and optimize. There is not much you can do here. I do hope Microsoft expands the controls given to customers; it would give a lot of distrustful IT folks a level of comfort that is not there now.
  • Support for Web Content Management. If we are giving up a local SharePoint environment, why force users to keep one if they want to take full advantage of SharePoint as a content management tool for a public website?
  • Add native migration tools. Not that I have anything against our partners the migration-tool makers, but releasing a platform with no out-of-the-box method of upgrading to it was very odd, and the fact that no support has been offered since is disappointing. It leaves the natural audience of small to mid-size businesses with an additional expense to migrate.
  • Cleaner social toolset. I wrote earlier in the year that the Yammer acquisition created some confusion among users. The promised SSO is still outstanding, and small incremental steps like the one released this week are a little confusing.

Cloud 101: Understand the Plan

Moving to the Cloud is a good move in most cases. However, it’s not as easy as most service providers want you to believe. If the analysis isn’t done properly up front, it can lead to poor performance, interruptions to the business, and, as I am currently seeing, costs that get out of control quickly.

CIOs and CFOs are rightly asking:

Why are our IT budgets significantly higher?

Wasn’t the Cloud supposed to save us money?

The Reality – The Cloud is not for everything and everybody!

You need two things from your service provider:

  1. First and most important – Due diligence
    Your service provider should understand your business and make that priority number one. For example, I have recently seen two companies, an engineering firm and one in the insurance industry, with very dynamic IT needs. Those needs were clearly not understood or documented in the detail required to ensure a successful cloud endeavor. Both companies need to spin environments up and down for predetermined periods. So who’s managing this?
  2. Which leads to my second point – Education
    During the discovery phase, service providers need to make sure that whoever manages the cloud provider/vendor understands the pricing model and the supported content needed to manage the environment properly, knows what to expect, and knows what controls need to be implemented to ensure environments are managed correctly.

The bottom line: many providers are on the bandwagon to sell Cloud. A lot of them don’t have preferred hosting partners and focus only on the transitional services. So clients must understand:

  • whether discovery or due diligence services are provided
  • whether that report includes recommendations regarding which applications should move to the Cloud and which should stay on premise
  • what hosting partner or Cloud service is recommended
  • estimated ROI

Cloud strategy is critical to Cloud success, even if clients have to enter these uncharted waters on their own.

The Cloud Has No Clothes!

Everybody remembers the classic fairy tale in which an emperor and his people are conned into believing he is attired in a fantastically beautiful set of clothes, when in fact he is in the buff. No one was willing to admit they lacked the refined taste and intelligence to see the spectacular cloth and splendid robes. It took the strength of innocence in a child to point out the truth. I am about as far from an innocent child as one can get, but it appears to me the cloud is parading about naked.

Every vendor has a cloud offering, every pundit “agrees” the cloud is the future, investors value every cloud company with a premium, every data center operator is “born again” as a cloud player. Every CIO has a cloud initiative and budget line. Really, I have seen this movie plot before, and it does not end well, especially for the Emperor (and the con-men vendors too).

We have worked internally on projects, as well as externally with clients, to implement aspects of the “cloud”. Results have been mixed, and in the process we have gathered some hard-won experience, which I will condense here (while protecting both the clothed and the naked).

First, Software as a Service (SaaS) will work if adopted with minimal software modification and maximum adoption of its native business processes. It is very cost effective if it precludes investment in internal IT infrastructure and personnel, and not bad if it slows the growth of the same. Outsourcing well-defined rote functions via the SaaS route works well (email, for example). Adopting SaaS for new, non-strategic functions tends to be successful where there are few users and a high degree of specialization. Data backup into the cloud is an excellent example of a highly specialized solution that takes advantage of economies of scale in hardware.

SaaS fails in terms of cost or functionality when it is subject to customization and extension. Service costs tend to swamp the effort, from initial modification through long-term maintenance (humans = $$$$). Costs will especially spiral when you combine many users and many customizations. Remember: the “Keep It Simple, Stupid” (KISS) principle saves money and points to success.

Buying virtual machines in the cloud works well if the configuration is simple: few software products, few users, straightforward integration. Development and early deployment are particularly attractive, as is usage by start-up companies and for software proofs, tests, and trials. Again, the KISS principle reigns supreme. Remember that hardware continues to drop in price and increase in capacity, while packaged software costs are stable. Understand the billing algorithms of the key “clouds”. Each has its cost advantages and drawbacks, and they change rapidly under increasing competition and hype. Always benchmark medium- to long-term cloud virtual machines against native hardware virtual machine implementations; the results may surprise you (I have been surprised over and over).
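The benchmarking advice above boils down to simple arithmetic. Here is a minimal sketch of how rented-VM and owned-hardware costs diverge over time; every price in it is an illustrative assumption, not any vendor’s actual rate:

```python
# Rough cost comparison: cloud VM billed hourly vs. owned hardware.
# All prices are illustrative assumptions, not real vendor rates.

def cloud_cost(hourly_rate, hours_per_month, months):
    """Cumulative cost of renting a VM, billed by the hour."""
    return hourly_rate * hours_per_month * months

def onprem_cost(hardware_price, monthly_opex, months):
    """Up-front hardware purchase plus power/space/admin overhead."""
    return hardware_price + monthly_opex * months

# Example: a mid-size VM at $0.20/hr running 24x7, vs. a $4,000 server
# with $60/month in power and space overhead.
for months in (12, 24, 48):
    cloud = cloud_cost(0.20, 730, months)
    onprem = onprem_cost(4000, 60, months)
    print(f"{months:>2} months: cloud ${cloud:,.0f} vs. on-prem ${onprem:,.0f}")
```

With these made-up numbers the cloud wins early and loses by month 48; shifting the rates slightly moves the crossover point by years, which is exactly why the benchmark has to be rerun against current billing algorithms.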

The Emperor’s story is an old one, and so is the cloud concept in principle; remember, its first turn on the karmic wheel of optimizing the highest-cost component was time-sharing. That strategy optimized the high cost of proprietary hardware and software (remember IBM and the Seven Dwarfs, but I digress into another fairy tale). As minicomputers (Digital, Data General, Wang) dropped the price of hardware through competition with IBM, software packages became the gating factor. Workstations continued the trend with another factor-of-10 reduction in the cost of hardware and packaged software (while human service costs kept rising). Wintel and the Internet have driven the marginal cost of raw computing to almost zero compared to the service component. As hardware has followed Moore’s law and software package economies of scale have moved to millions of copies, the human costs have skyrocketed in both relative and absolute terms.

If we can keep history as our lens and focus on our cost pressure points, we can maintain our child-like innocence and see others prancing naked while we keep our kilts and heads about us.

Cloud 2012 Redux

You shouldn’t have to commit everything at once

This year will be remembered as the year the cloud moved beyond the realm of “back to time-sharing” or a curio for greenfields and start-ups. While Software as a Service (SaaS) is interesting, it cannot be a centerpiece of your IT infrastructure or strategy, due to its limited scope and cost/scalability metrics. By the same token, not every IT system is a greenfield opportunity, and most require a steady, evolutionary response that incorporates the existing infrastructure’s DNA and standards.

Just illustrating a “private cloud” with a “public cloud” next to it does not cut it. What does that really mean? Ever wonder what is really in that cloud (or clouds)? Better yet, explain it in safe, understandable steps: cost-benefit projections over 3, 5, and 7 years; organizational impact on IT and business process; procedural impact on disaster recovery; and so on. Sorry, “Just buy my product because it is what I have to sell!” does not work; I need a tested, time-phased architectural plan, with contingencies, before I commit my company and my job.

For the first time in the continuing cloud saga, we have been able to put together and test a “non-aligned” approach, which allows an organization to keep IT infrastructure best practices and not “sign in blood” to any individual vendor’s ecosystem. With the proper design, virtual machines (VMs) can be run on multiple vendors’ platforms (Microsoft®, Amazon.com®, etc.) and on premise, optimized for cost, performance, and security. This effectively puts control of cost and performance in the hands of the CIO and the consuming company.

In addition, credible capabilities exist in the cloud to handle disaster recovery and business continuity, regardless of whether the supporting VMs are provisioned on premise or in the cloud. Certain discrete capabilities, like email and Microsoft Office™ automation, can be “outsourced” to the cloud while integration with consuming application systems is maintained, in the same manner many organizations have historically outsourced functions like payroll.

The greatest benefit of cloud 2012 is the ability to phase it in over time, as existing servers are fully amortized and software licenses roll off and require renewal. Now we can start to put our plans together and take advantage of the coming margin-cutting wars of the Cloud Titans in 2013 and beyond.
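The phase-in idea above is easy to operationalize as a simple schedule. A minimal sketch, using a hypothetical server inventory and an assumed 36-month amortization period, of flagging which machines are ready to become migration candidates:

```python
# Sketch: a server becomes a cloud-migration candidate only once its
# amortization period has fully elapsed. The inventory, hostnames, and
# 36-month period are illustrative assumptions.

from datetime import date

AMORT_MONTHS = 36

def months_between(start, end):
    """Whole months elapsed between two dates (day-of-month ignored)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def migration_candidates(inventory, as_of):
    """Return servers whose purchase cost is fully amortized as of a date."""
    return [name for name, purchased in inventory
            if months_between(purchased, as_of) >= AMORT_MONTHS]

inventory = [
    ("mail-01", date(2009, 1, 15)),
    ("web-01",  date(2010, 6, 1)),
    ("db-01",   date(2011, 11, 20)),
]

print(migration_candidates(inventory, date(2012, 7, 1)))  # ['mail-01']
```

Run quarterly, a schedule like this turns “phase it in over time” into a concrete migration queue instead of a forklift project.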

IT Cost Cutting and Revenue Enhancing Projects

In the current economic climate, CIOs and IT managers are constantly pushed to “do more with less”. However, blindly following this mantra can be a recipe for disaster. IT budgets are getting squeezed and there are fewer resources to go around; even so, literally trying to “do more with less” is the wrong approach. The “do more” framing implies that IT operations were not running efficiently and there was a lot of fat to trim; quite often that is simply not the case. It is not always possible to find a person or a piece of hardware sitting idle that can be cut from the budget without impacting something. Yet in most IT departments there are still plenty of opportunities to save cost. The “do more with less” mantra, taken literally, is flawed. The right slogan should be something along the lines of “work smarter” or “smart utilization of shrinking resources”; not exactly catchy, but it conveys what is really needed.

When times are tough, IT departments tend to hunker down and act like hibernating bears: they reduce all activity (especially new projects) to a minimum and try to ride out the winter, not recognizing the opportunity that a recession brings. A more productive approach is to rethink your IT strategy, initiate new projects that enhance your competitive advantage, cut those that don’t, and reinvigorate the IT department in better alignment with business needs and a more efficient cost structure. The economic climate and the renewed focus on cost reduction provide the much-needed impetus to push through new initiatives that couldn’t be done before. Corporate strategy guru Richard Rumelt says,

“There are only two paths to substantially higher performance, one is through continued new inventions and the other requires exploiting changes in your environment.”

Inventing something substantial and new is not always easy, or even possible, but as luck would have it the winds of change are blowing pretty hard these days, both in technology and in the business environment. Cloud computing has emerged as a disruptive technology and is changing the way applications are built and deployed. Virtualization is changing the way IT departments buy hardware and build data centers. There is a renewed focus on enterprise-wide information systems, and the emergence of new software and techniques has made business intelligence affordable and easy to deploy. These are all signs of major changes afoot in the IT industry. On the business side of the equation, the current economic climate is reshaping the landscape, and a new breed of winners and losers is sure to emerge. What is needed is the vision, strategy, and will to capitalize on these opportunities and turn them into competitive advantage. Recently a health care client of ours spent roughly $1 million on a BI and data strategy initiative and realized $5 million in savings in the first year through increased operational efficiency.
 
Broadly speaking, IT initiatives can be evaluated along two dimensions: cost efficiency and competitive advantage. Cost efficiency describes a project’s ability to lower your cost structure and help you run operations more efficiently. Projects along the competitive-advantage dimension provide greater insight into your business and/or market trends and help you gain an edge on the competition. Quite often projects along this dimension rely on an early mover’s advantage, which over time may turn into a “me too” as competitors jump aboard the same bandwagon. The life of such a competitive advantage can be extended by superior execution, but over time it will fade; think of the supply-chain automation that gave Dell its competitive advantage in the early years. Such projects should therefore be approached with a sense of urgency, as each passing day erodes the potential for higher profits. In this framework, each project has a component of each dimension and can be plotted along both to help you prioritize the projects that can turn a recession into an opportunity for gaining a competitive edge. Here are six initiatives that can help you break the IT hibernation, lower your cost structure, and gain an edge on the competition:

Figure 1: Categorization of IT Projects

Figure 2: Key Benefits

In the current economic climate, no project can go far without an ROI justification, and calculating ROI for an IT project, especially one that does not directly produce revenue, can be notoriously hard. While calculating ROI for these projects is beyond the scope of this article, I hope to return to the issue soon with templates to help you get through the scrutiny of the CFO’s office. For now I will leave you with the thought that ROI can be framed in terms of three components:

  • A value statement
  • Hard ROI (direct ROI)
  • Soft ROI (indirect ROI)

Each one is progressively harder to calculate and requires an additional level of rigor and detail, but each improves the accuracy of the calculation. I hope to discuss this subject in more detail in future blog entries.
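As a rough illustration of the three components above, here is a sketch with hypothetical figures; the 50% discount applied to soft savings is my own assumption, standing in for their lower certainty, not a standard factor:

```python
# Sketch of hard vs. soft ROI. All dollar figures and the soft-benefit
# discount factor are illustrative assumptions.

def roi(gain, cost):
    """Simple return on investment as a percentage of cost."""
    return (gain - cost) / cost * 100

project_cost = 250_000
hard_savings = 300_000   # direct: retired hardware, dropped licenses
soft_savings = 180_000   # indirect: staff hours freed for other work
soft_confidence = 0.5    # discount soft benefits for their uncertainty

hard_roi = roi(hard_savings, project_cost)
blended_roi = roi(hard_savings + soft_savings * soft_confidence, project_cost)

print(f"Hard ROI:    {hard_roi:.0f}%")     # direct ROI only
print(f"Blended ROI: {blended_roi:.0f}%")  # hard + discounted soft
```

Presenting the hard ROI alone, with the discounted soft ROI as upside, tends to survive a CFO’s scrutiny better than a single blended number.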

Cloud Computing Trends: Thinking Ahead (Part 3)

In the first part of this series we discussed the definition of cloud computing and its various flavors. The second part focused on the offerings of three major players: Microsoft, Amazon, and Google. This third and final part discusses the issues and concerns related to the cloud, as well as possible future directions.

A company may someday decide to bring an application back in-house due to data security or cost concerns. An ideal solution would allow creation of a “private in-house cloud”, just as some product/ASP companies offer the option of running a licensed version in-house or as a hosted service. A major rewrite of existing applications in order to run in a cloud is probably also a non-starter for most organizations. Monitoring and diagnosing applications in the cloud is a concern: developers must be able to diagnose and debug in the cloud itself, not just in a simulation on a local desktop. Anyone who has spent enough time in the trenches coding and supporting complex applications knows that trying to diagnose complex, intermittent production problems by debugging in a simulated environment on a desktop is an uphill battle, to say the least. A credible and sophisticated mechanism is needed to support complex applications running in the cloud. Data and metadata ownership and security may also give companies dealing with sensitive information pause. The laws and the technology are still playing catch-up on some thorny issues around data collection, distribution rights, liability, and so on.

If cloud computing is to truly fulfill its promise the technology has to evolve and the major players have to ensure that a cloud can be treated like a commodity and allow applications to move seamlessly between the clouds, without requiring a major overhaul of the code. At least some of the major players in cloud computing today don’t have a good history of allowing cross-vendor compatibility and are unlikely to jump on this bandwagon anytime soon. They will likely fight any efforts or trends to commoditize cloud computing. However, based on the history of other platform paradigm shifts they would be fighting against the market forces and the desires of their clients. Similar situations in the past have created opportunities for other vendors and startups to offer solutions that bypass the entrenched interests and offer what the market is looking for. It is not too hard to imagine an offering or a service that can abstract away the actual cloud running the application.

New design patterns and techniques may also emerge to make the transition from one cloud vendor to another easier. Not too long ago this role was played by design patterns like the DAO (data access object) and various OR (object relational) layers to reduce the database vendor lock-in. A similar trend could evolve in the cloud based applications.

All of the above is not meant to condemn cloud computing as an immature technology not ready for prime time. The discussion is meant to arm the organization with the potential pitfalls of a leading-edge technology that can still be a great asset under the right circumstances. Even today’s offerings fit the classic definition of a disruptive technology. Any organization that is creating a new application or overhauling an existing one must seriously consider architecting the application for the cloud. The benefits of instant scalability and “pay for only what you use” are too significant to ignore, especially for small to mid-size companies. Not having to tie up your cash in servers and infrastructure alone warrants serious consideration, and not having to worry about setting up a data center that can handle the load in case your application goes viral is liberating, to say the least. Any application with seasonal demand can also benefit greatly. If you are an online retailer, the load on your website probably surges to several times its average volume during the holiday shopping season. Buying servers to handle the holiday load that then sit idle the rest of the year ties up capital unnecessarily when it could have been used to grow the business. Cloud computing at its current maturity may not make sense for every enterprise, but you should get a solid understanding of what it has to offer and adjust the way you approach IT today. That will position you to capitalize more cost-effectively on what it has to offer today and tomorrow.
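The holiday-season argument above can be sketched numerically: size owned hardware for the peak, or pay only for what each month actually needs. The demand profile and both prices below are illustrative assumptions:

```python
# Sketch: owning servers sized for peak demand vs. renting per month.
# Server counts and prices are illustrative assumptions.

monthly_demand = [4, 4, 4, 4, 4, 4, 4, 4, 5, 8, 12, 12]  # servers needed

server_annual_cost = 3_000   # owning one server for a year (amortized)
cloud_monthly_cost = 280     # renting one comparable VM for a month

owned = max(monthly_demand) * server_annual_cost   # buy for the peak
cloud = sum(monthly_demand) * cloud_monthly_cost   # pay month by month

print(f"Own for peak: ${owned:,}")   # idle capacity most of the year
print(f"Cloud:        ${cloud:,}")
```

The flatter the demand curve, the weaker this argument gets, which is consistent with the point above that the cloud is not for everything and everybody.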

Cloud Computing Trends: Thinking Ahead (Part 1)

The aim of this three-part series is to gain insight into the capabilities of cloud computing, some of the major vendors involved, and an assessment of their offerings. The series will help you assess whether cloud computing makes sense for your organization today and how it can help or hurt you. The first part focuses on defining cloud computing and its various flavors, the second part focuses on offerings from some of the major players, and the third part discusses how it can be used today and possible future directions.

Today cloud computing and its sub-categories do not have precise definitions; different groups and companies define them in different and overlapping ways. While it is hard to find a single useful definition of the term “cloud computing”, it is somewhat easier to dissect some of its better-known sub-categories, such as:

  • SaaS (software as a service)
  • PaaS (platform as a service)
  • IaaS (infrastructure as a service)
  • HaaS (hardware as a service)

Among these categories SaaS is the most well known, since it has been around for a while and enjoys a well-established reputation as a solid way of providing enterprise-quality business software and services. Well-known examples include SalesForce.com, Google Apps, and SAP. HaaS is the older term for IaaS and is typically considered synonymous with it. Compared to SaaS, PaaS and IaaS are relatively new, less understood, and less used today. In this series we will mostly focus on PaaS and IaaS as the up-and-coming forms of cloud computing for the enterprise.

The aim of IaaS is to abstract away the hardware (network, servers, etc.) and allow applications to run virtual instances of servers without ever touching a piece of hardware. PaaS takes the abstraction further and eliminates the need to worry about the operating system and other foundational software. If the aim of virtualization is to make a single large computer appear as multiple small, dedicated computers, one of the aims of PaaS is to make multiple computers appear as one and to make it simple to scale from a single server to many. PaaS aims to abstract away the complexity of the platform and let your application scale automatically as load grows, without worrying about adding more servers, disks, or bandwidth. This presents significant benefits for companies poised for aggressive organic growth or growth by acquisition.

Cloud Computing: Abstraction Layers

So which category/abstraction level (IaaS, PaaS, SaaS) of the cloud is right for you? The answer depends on many factors: the kind of applications your organization runs (proprietary vs. commodity), the development stage of those applications (legacy vs. newly developed), the time and cost of deployment (immediate/low vs. long/high), scalability requirements (low vs. high), and vendor lock-in (low vs. high). PaaS is highly suited to applications that inherently have seasonal or highly variable demand and thus require a high degree of scalability. However, PaaS may require a major rewrite or redesign of the application to fit the vendor’s framework, and as a result it may cost more and cause vendor lock-in. IaaS is great if your application’s scalability needs are predictable and can be fully satisfied with a single instance. SaaS has been a tried-and-true way of getting access to software and services that follow industry standards. If you are looking for a good CRM, HR management, or leads management system, your best bet is to go with a SaaS vendor. The relative strengths and weaknesses of these categories are summarized in the following table.

 

       App Type                 Scalability   Vendor    Development   Time & Cost
       (Prop. vs. Commodity)                  Lock-in   Stage         (of deployment)

IaaS   Proprietary              Low           Low       Late/Legacy   Low
PaaS   Proprietary              High          High      Early         High
SaaS   Commodity                High          High      NA            Low

Cloud Computing: Category Comparison
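As a toy illustration, the comparison can be encoded as a lookup table; the attribute values mirror the comparison above, while the matching rule and all names are my own assumptions for the sketch:

```python
# Toy encoding of the category comparison as a decision helper.
# Attribute values mirror the comparison; the exact-match rule is an
# assumption (a real decision weighs trade-offs, not strict equality).

CATEGORIES = {
    "IaaS": {"app": "proprietary", "scalability": "low",  "lock_in": "low",
             "stage": "late/legacy", "cost": "low"},
    "PaaS": {"app": "proprietary", "scalability": "high", "lock_in": "high",
             "stage": "early",       "cost": "high"},
    "SaaS": {"app": "commodity",   "scalability": "high", "lock_in": "high",
             "stage": "n/a",         "cost": "low"},
}

def shortlist(**needs):
    """Return the categories matching every requested attribute."""
    return [name for name, attrs in CATEGORIES.items()
            if all(attrs.get(k) == v for k, v in needs.items())]

# A legacy proprietary app with modest scalability needs points at IaaS:
print(shortlist(app="proprietary", scalability="low"))  # ['IaaS']
```

Even this crude filter captures the gist: commodity functions point at SaaS, proprietary apps with volatile demand point at PaaS (with its lock-in cost), and the rest land on IaaS.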

Cloud Computing: Where is the Killer App?

As an avid reader, I have seen too many articles lately about how the bleak economy is going to drive more IT teams to cloud computing. The real question: what are the proper applications for cloud computing? For the more conservative IT leader, there must be a starting point that isn’t throwing one of your mission-critical applications into the cloud.

One of the best applications of cloud computing that I have seen implemented recently is content management software. One of the challenges with content management is that it is difficult to predict the ultimate storage needs. If the implementation is very successful, the storage needs start small and immediately zoom into hundreds of gigabytes as users learn to store spreadsheets, drawings, video and other key corporate documents. Open source content management software can be deployed quickly on cloud computing servers and the cost of storage will ramp up in line with the actual usage. Instead of guessing what the processor needs and storage will be, the IT leader can simply start the implementation and the cloud computing environment will scale as needed. My suggestion is to combine wiki, content management and Web 2.0 project management tools running in the cloud computing space for your next major software implementation project or large corporate project.

A second “killer” application for cloud computing is software development and testing. One of the headaches and major costs of software development is the infamous need for multiple environments for developing and testing. This need is compounded when your development team is using Agile development methodologies and the testing department is dealing with daily builds. The cloud computing environment provides a low-cost means of quickly “spinning up” a development environment and multiple test environments. This use of the cloud is especially beneficial for web-based development and for testing load balancing on high-traffic web sites. The ability to “move up” on processor speeds, number of processors, memory, and storage helps establish real baselines for when the software project moves to actual production, versus the traditional SWAG approach. The best part is that once the development is complete, the cloud computing environment can be scaled back to maintenance mode, and there isn’t unused hardware waiting for re-deployment.

The third “killer” application is data migration. Typically, an IT leader will need large processing and storage capacity for a short term, to rapidly migrate data from an older application to a new one. Before the cloud, companies would rent hardware, use it briefly, and ship it back to the vendor. The issue was guessing the necessary CPU power and storage to meet the time constraints for the dreaded cut-over date. The scalability of the cloud computing environment reduces the hardware cost of data migrations and allows the flexibility to quickly add processors on that all-important weekend. There is simply no hardware to dispose of when the migration is complete. Now that is a “killer” application, in my humble opinion. By the way, cloud computing would be an excellent choice for re-platforming an application, too, especially if the goal is to make the application scalable.

In summary, if your IT team has a short term hardware need, then carefully consider cloud computing as a cost effective alternative. In the process, you might discover your “killer app” for cloud computing.

IBM Announces Certification in Cloud Computing .. Huh?

IBM first announced a competency center in Cloud Computing, then a certification, over the past couple of weeks.  Well, I guess the old Druid Priests of Mainframes should recognize the resurrection of their old god, Time-Sharing.  Here we are, back and rested from the Breast of Gaia, Greener than Green (Drum Roll Please…….): Cloud Computing!  (Cloud Computing quickly adjusts his costume makeover to hide Ye Olde Time-Sharing’s wrinkled roots.)  Yes! Here I am: fresh, new, exciting, Web 2.0, Chrome Ready!  With me are the only guys (Big Smile from IBM!) who can Certify and Consult in My Mysteries…. IBM!

The more things change, the more they stay the same, but this pushes the Great Hype Engine to a new high (or low.. ha ha ha).  I can understand IBM wanting to jump on the Cloud Computing bandwagon, but are we really ready for a certification?  No one is really sure what is in the Cloud or how it operates, but IBM is ready to lock it and load it.  Yep, they are Certifiable! (ha ha ha!)  While one can admire the desire to define and take a stand on Cloud Computing, this is one topic that requires a bit more “cook time” before full-scale avarice takes hold.

Cloud Computing is too “cloudy” and amorphous to define today.  While expertise and advice are required, there needs to be more independent vetting and best-of-breed, component-level competition.  Full solution demo platforms need to be put together to elicit ideas and act as pilots.  Case studies need to spring from these efforts, and from early adopters, before an organization bets the farm on a Cloud solution.  The existing ERP platforms did not come into being overnight, and Cloud solutions share an element of their interdependency and complexity (Rome was not built in a day!).  All of the elements of backup, disaster recoverability, auditability, service-level assurance, and security need to be in place before there can be total buy-in to the platform.  The reputation of Cloud Computing hangs in the balance; all it would take is one high-visibility failure to set things back for potentially years (especially given the current macro environment).

Above all, at this stage a certain level of independence is required for the evaluation and construction of possible solutions.  Evolution mandates independent competition (Nature Red in Tooth and Claw, Cage Fighting, Yes!).  Maturity brings vendor ecosystems and the all-consuming Application Stack, but not yet.  More innovation is required; we may not even have heard of the start-up who could win this game.