Little innovations lead to big change

When carriers think about innovation and achieving competitive advantage, most focus on big changes: introducing new products like Usage-Based Insurance, redesigning business processes, or deploying major technology initiatives like customer and agent portals or new policy administration systems. They think of trends like social media, predictive analytics, and big data.

But innovation and competitive advantage can be achieved by making smaller improvements in everyday processes. For example, understanding and keeping current with ISO changes is critical to most carriers. Consider ISO’s recent Commercial Lines updates. Per their October briefing, ISO CL Update, the Commercial Automobile Program alone included a new auto dealers program and 26 new optional endorsements.

For most carriers, news that major updates are coming elicits shudders. Carriers know that analyzing ISO updates, comparing them to the rates and forms currently in use, and identifying the changes is a challenge. It takes significant time and knowledge to pore through the various ISO materials and determine what has changed and how the changes impact the carrier’s book of business. Because so many carriers already have too much on their plate, they often let ISO updates slide, failing to adopt them in a timely manner. Thus they miss the small innovations that could improve their product offerings and their bottom line. When they do choose to catch up, it is often a daunting effort to jump several versions in one leap, introducing massive change to their systems, processes, and books of business.

Applying automation to the comparison of ISO changes could improve the efficiency of the analysis process, allowing carriers to determine more quickly the impact of adopting or not adopting ISO changes. This targeted solution is not a major system implementation, but it is an innovation that allows a carrier to better leverage its investment in ISO content and streamline a cumbersome process. It is a small innovation that dramatically improves a carrier’s ability to analyze and react to ISO changes: a small innovation with a big payoff in more efficient internal processes, which can translate into improved products, better product pricing, and a stronger bottom line.
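To make the idea concrete, here is a minimal sketch of what automated rate-book comparison might look like. Everything here is illustrative: the file layout, the `class_code` key, and the function names are assumptions for the sketch, not the structure of actual ISO rate content.

```python
import csv

def load_rates(path):
    """Load a rate table keyed by a class identifier.

    The column names ("class_code", etc.) are illustrative, not actual
    ISO fields; a real tool would map the published rate-book layout.
    """
    with open(path, newline="") as f:
        return {row["class_code"]: row for row in csv.DictReader(f)}

def compare_rate_books(old_path, new_path):
    """Return entries added, removed, and changed between two rate books."""
    old, new = load_rates(old_path), load_rates(new_path)
    added = [new[k] for k in new.keys() - old.keys()]
    removed = [old[k] for k in old.keys() - new.keys()]
    # Pair old and new versions of every entry whose contents differ.
    changed = [(old[k], new[k]) for k in old.keys() & new.keys()
               if old[k] != new[k]]
    return added, removed, changed
```

The output of a comparison like this is exactly what an analyst would want to review in a spreadsheet: what is new, what disappeared, and what moved.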

A solution already exists. Edgewater Consulting has leveraged its deep industry and technical knowledge and long-standing ISO relationship to develop a cloud-based solution to address this critical business need. The solution compares ISO rate books and quickly identifies what has changed, and presents results in Microsoft Excel, a familiar yet powerful analytical tool.

We will be hosting a webinar with ISO to demonstrate the tool, using it to analyze ISO’s commercial auto changes released on October 1. If you’re interested in attending, register here.

Cloud 2012 Redux


You shouldn’t have to commit everything at once

This year will be remembered as the year the cloud moved beyond the realm of “Back to Time-Sharing” or a curio for greenfields and start-ups.  While Software as a Service (SaaS) is interesting, it cannot be the centerpiece of your IT infrastructure or strategy, due to its limited scope and cost/scalability characteristics.  By the same token, not every IT system is a greenfield opportunity; most require a steady, evolutionary response that incorporates the existing infrastructure’s DNA and standards.

Just illustrating a “private cloud” with a “public cloud” next to it does not cut it.  What does that really mean?  Ever wonder what is really in those clouds?  Better yet, explain it in safe, understandable steps: cost-benefit projections over three, five, and seven years; organizational impact on IT and business processes; procedural impact on disaster recovery; and so on.  Sorry, “Just buy my product because it is what I have to sell!” does not work; I need a tested, time-phased architectural plan, with contingencies, before I commit my company and my job.

For the first time in the continuing cloud saga, we have been able to put together and test a “non-aligned” approach, which allows an organization to keep IT infrastructure best practices without “signing in blood” to any individual vendor’s ecosystem.  With the proper design, virtual machines (VMs) can be run on multiple vendors’ platforms (Microsoft®, Amazon.com®, etc.) and on premise, optimized for cost, performance, and security. This effectively puts control of cost and performance in the hands of the CIO and the consuming company.
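As a toy illustration of “optimized to cost, performance, and security,” the placement decision can be thought of as a constrained minimization over candidate platforms. The provider names, prices, and security flags below are invented for the sketch; a real design would draw them from vendor price lists and a security classification of each workload.

```python
# Illustrative only: provider names, hourly costs, and security ratings
# are made up for this sketch, not real vendor figures.
PROVIDERS = {
    "on_premise": {"cost_per_hour": 0.20, "meets_security": True},
    "vendor_a":   {"cost_per_hour": 0.12, "meets_security": False},
    "vendor_b":   {"cost_per_hour": 0.15, "meets_security": True},
}

def place_vm(requires_secure_data: bool) -> str:
    """Pick the cheapest platform that satisfies the workload's constraints."""
    # Filter out platforms that fail the security requirement...
    candidates = {name: p for name, p in PROVIDERS.items()
                  if p["meets_security"] or not requires_secure_data}
    # ...then minimize cost over what remains.
    return min(candidates, key=lambda n: candidates[n]["cost_per_hour"])
```

The point is that once workloads are portable, placement becomes a policy the CIO controls rather than a consequence of a single vendor relationship.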

In addition, credible capabilities exist in the cloud to handle disaster recovery and business continuity, regardless of whether the supporting VMs are provisioned on premise or in the cloud. Certain discrete capabilities, like email and Microsoft Office™ automation, can be “outsourced” to the cloud while integration with consuming application systems is maintained, in the same manner many organizations have historically outsourced functions like payroll.

The greatest benefit of cloud 2012 is the ability to phase it in over time, as existing servers are fully amortized and software licenses roll off and come up for renewal.  Now we can start to put our plans together and position ourselves to take advantage of the coming margin-cutting wars of the Cloud Titans in 2013 and beyond.

Cloud Computing Trends: Thinking Ahead (Part 3)

In the first part of this series we discussed the definition of cloud computing and its various flavors. The second part focused on the offerings from three major players: Microsoft, Amazon, and Google. This third and final part discusses the issues and concerns related to the cloud, as well as possible future directions.

A company may someday decide to bring an application in-house due to data security or cost concerns. An ideal solution would allow creation of a private, in-house cloud, just as some product/ASP companies offer the option of running a licensed version in-house or as a hosted service. A major rewrite of existing applications in order to run in a cloud is probably a non-starter for most organizations. Monitoring and diagnosing applications in the cloud is another concern. Developers must be able to diagnose and debug in the cloud itself, not just in a simulation on a local desktop. Anyone who has spent enough time in the trenches coding and supporting complex applications knows that trying to diagnose complex, intermittent production problems by debugging in a simulated environment on a desktop is an uphill battle, to say the least. A credible, sophisticated mechanism is needed to support complex applications running in the cloud. Data and metadata ownership and security may also give companies dealing with sensitive information pause. The laws and the technology are still playing catch-up on some thorny issues around data collection, distribution rights, liability, and more.

If cloud computing is to truly fulfill its promise, the technology has to evolve, and the major players have to ensure that a cloud can be treated like a commodity, allowing applications to move seamlessly between clouds without a major overhaul of the code. At least some of the major players in cloud computing today don’t have a good history of allowing cross-vendor compatibility and are unlikely to jump on this bandwagon anytime soon. They will likely fight any efforts or trends to commoditize cloud computing. However, based on the history of other platform paradigm shifts, they would be fighting against market forces and the desires of their clients. Similar situations in the past have created opportunities for other vendors and startups to offer solutions that bypass the entrenched interests and deliver what the market is looking for. It is not too hard to imagine an offering or a service that can abstract away the actual cloud running the application.

New design patterns and techniques may also emerge to make the transition from one cloud vendor to another easier. Not too long ago this role was played by design patterns like the DAO (data access object) and various OR (object-relational) layers, which reduced database vendor lock-in. A similar trend could evolve in cloud-based applications.
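A minimal sketch of the DAO-style idea applied to the cloud: application code depends on a vendor-neutral interface, and each cloud gets its own adapter behind it. The interface and class names here are hypothetical; real adapters would wrap the vendor SDKs (S3, Azure Blob Storage, and so on).

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Vendor-neutral storage interface; application code depends only on this."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in backend; a real adapter would wrap a specific vendor's SDK."""
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

def archive_report(store: BlobStore, name: str, body: bytes) -> None:
    # Application logic never names a vendor; switching clouds means
    # swapping the BlobStore implementation, not rewriting callers.
    store.put(f"reports/{name}", body)
```

Just as DAO layers made the database a replaceable component, an abstraction like this makes the cloud behind it replaceable, which is precisely the commoditization the entrenched vendors would resist.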

All of the above is not meant to condemn cloud computing as an immature technology that is not ready for prime time. Rather, the discussion is meant to arm the organization with the potential pitfalls of a leading-edge technology that can still be a great asset under the right circumstances. Even today’s offerings fit the classic definition of a disruptive technology. Any organization that is creating a new application or overhauling an existing one should seriously consider architecting the application for the cloud. The benefits of instant scalability and “pay only for what you use” are too significant to ignore, especially for small to mid-size companies. Not having to tie up your cash in servers and infrastructure alone warrants serious consideration. Not having to worry about building a data center that can handle the load in case your application goes viral is liberating, to say the least. Any application with seasonal demand can also benefit greatly. If you are an online retailer, the load on your website probably surges to several times its average volume during the holiday shopping season. Buying servers to handle the holiday peak that then sit idle for the rest of the year ties up capital unnecessarily when it could have been used to grow the business. Cloud computing in its current maturity may not make sense for every enterprise. However, you should gain a solid understanding of what cloud computing has to offer and adjust the way you approach IT today. This will position you to capitalize more cost-effectively on what it has to offer today and tomorrow.

Cloud Computing Trends: Thinking Ahead (Part 1)

The aim of this three-part series is to provide insight into the capabilities of cloud computing, some of the major vendors involved, and an assessment of their offerings.  The series will help you assess whether cloud computing makes sense for your organization today and how it can help or hurt you. The first part focuses on defining cloud computing and its various flavors, the second part covers offerings from some of the major players, and the third part discusses how it can be used today and possible future directions.

Today cloud computing and its sub-categories do not have precise definitions. Different groups and companies define them in different and overlapping ways. While it is hard to find a single useful definition of the term “cloud computing” it is somewhat easier to dissect some of its better known sub-categories such as:

  • SaaS (software as a service)
  • PaaS (platform as a service)
  • IaaS (infrastructure as a service)
  • HaaS (hardware as a service)

Among these categories, SaaS is the best known, since it has been around for a while and enjoys a well-established reputation as a solid way of providing enterprise-quality business software and services. Well-known examples include SalesForce.com, Google Apps, and SAP. HaaS is an older term for IaaS and is typically considered synonymous with it.  Compared to SaaS, PaaS and IaaS are relatively new, less understood, and less used today. In this series we will mostly focus on PaaS and IaaS as the up-and-coming forms of cloud computing for the enterprise.

The aim of IaaS is to abstract away the hardware (network, servers, etc.) and allow applications to run on virtual server instances without ever touching a piece of hardware. PaaS takes the abstraction further, eliminating the need to worry about the operating system and other foundational software. If the aim of virtualization is to make a single large computer appear as multiple small dedicated computers, one of the aims of PaaS is to make multiple computers appear as one and to make it simple to scale from a single server to many. PaaS aims to abstract away the complexity of the platform and allow your application to scale automatically as load grows, without worrying about adding more servers, disks, or bandwidth. PaaS presents significant benefits for companies that are poised for aggressive organic growth or growth by acquisition.

Cloud Computing: Abstraction Layers

So which category/abstraction level (IaaS, PaaS, SaaS) of the cloud is right for you? The answer depends on many factors: the kind of applications your organization runs (proprietary vs. commodity), the development stage of those applications (legacy vs. newly developed), the time and cost of deployment (immediate/low vs. long/high), scalability requirements (low vs. high), and tolerance for vendor lock-in (low vs. high). PaaS is well suited to applications that inherently have seasonal or highly variable demand and thus require a high degree of scalability.  However, PaaS may require a major rewrite or redesign of the application to fit the vendor’s framework; as a result it may cost more and cause vendor lock-in. IaaS is great if your application’s scalability needs are predictable and can be fully satisfied with a single instance. SaaS has been a tried-and-true way of getting access to software and services that follow industry standards. If you are looking for a good CRM, HR management, or leads management system, your best bet is to go with a SaaS vendor. The relative strengths and weaknesses of these categories are summarized in the following table.

 

        App Type                Scalability   Vendor     Development    Time & Cost
        (Prop. vs. Commodity)                 Lock-in    Stage          (of deployment)
IaaS    Proprietary             Low           Low        Late/Legacy    Low
PaaS    Proprietary             High          High       Early          High
SaaS    Commodity               High          High       NA             Low

Cloud Computing: Category Comparison
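The table’s qualitative ratings can even be encoded directly, which makes the trade-offs queryable. This is only a sketch of how one might operationalize the comparison; the trait values simply transcribe the table above, and the helper function and its inputs are invented for illustration.

```python
# Encodes the category-comparison table; values transcribe the
# article's qualitative ratings, nothing more.
CATEGORY_TRAITS = {
    "IaaS": {"app_type": "proprietary", "scalability": "low",
             "lock_in": "low",  "stage": "late/legacy", "cost": "low"},
    "PaaS": {"app_type": "proprietary", "scalability": "high",
             "lock_in": "high", "stage": "early",       "cost": "high"},
    "SaaS": {"app_type": "commodity",   "scalability": "high",
             "lock_in": "high", "stage": "n/a",         "cost": "low"},
}

def suggest_category(app_type: str, needs_high_scalability: bool) -> list:
    """Return the categories whose profile matches two key inputs."""
    return [c for c, t in CATEGORY_TRAITS.items()
            if t["app_type"] == app_type
            and (t["scalability"] == "high") == needs_high_scalability]
```

For example, a commodity application with highly variable demand points to SaaS, while a proprietary application with predictable load points to IaaS, matching the guidance in the text above.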

IBM Announces Certification in Cloud Computing .. Huh?

IBM first announced a competency center in Cloud Computing, then a Certification, over the past couple of weeks.  Well, I guess the old Druid Priests of Mainframes should recognize the resurrection of their old God, TimeSharing.  Here we are, back and rested from the Breast of Gaia, Greener than Green (Drum Roll Please…….): Cloud Computing!  (Cloud Computing quickly adjusts his costume makeover to hide Ye Olde TimeSharing’s wrinkled roots.)  Yes! Here I am: fresh, new, exciting, Web 2.0, Chrome-ready!  With me are the only guys (Big Smile from IBM!) who can Certify and Consult in My Mysteries…. IBM!

The more things change, the more they stay the same, but this pushes the Great Hype Engine to a new high (or low… ha ha ha).  I can understand IBM wanting to jump on the Cloud Computing bandwagon, but are we really ready for a certification?  No one is really sure what is in the Cloud, or how it operates, but IBM is ready to lock it and load it.  Yep, they are Certifiable! (ha ha ha!)  While one can admire the desire to define and take a stand on Cloud Computing, this is one topic that requires a bit more “cook time” before full-scale avarice takes hold.

Cloud Computing is too “cloudy” and “amorphous” to define today.  While expertise and advice are required, there needs to be more independent vetting and more best-of-breed, component-level competition.  Full solution demo platforms need to be put together to elicit ideas and act as pilots.  Case studies need to spring from these efforts, and from early adopters, before an organization bets the farm on a Cloud solution.  The existing ERP platforms did not come into being overnight, and cloud solutions share an element of their interdependency and complexity (Rome was not built in a day!).  All of the elements of backup, disaster recoverability, auditability, service-level assurance, and security need to be in place before there can be total buy-in to the platform.  The reputation of Cloud Computing hangs in the balance; all it would take is one high-visibility failure to set things back for years, especially given the current macro environment.

Above all, at this stage a certain level of independence is required for evaluating and constructing possible solutions.  Evolution mandates independent competition (Nature Red in Tooth and Claw, Cage Fighting, Yes!).  Maturity brings vendor ecosystems and the all-consuming Application Stack, but not yet.  More innovation is required; we may not even have heard of the start-up that could win this game.

Reducing IT Costs for New Acquisitions

Over at CIO magazine, Bernard Golden recently published an update on Cloud Computing. In his list of the types of companies that can benefit substantially from computing in the cloud, he left off one situation that can reap tremendous benefits from this approach: newly acquired private equity portfolio companies that are being carved out from larger businesses.

For these companies, cloud computing offers the following benefits:

  • Accelerated implementation timeline that dramatically reduces implementation costs
  • Significant savings on support costs, which typically represent 60% of the IT budget
  • Eliminates the dependency on staffing and retaining IT support staff
  • Costs scale with number of users
  • Repeatable implementation playbook
  • Easily extensible for tuck-in acquisitions

One of our senior architects, Martin Sizemore, has laid out the broad strokes of this approach in a short slide show.

It’s an especially attractive M&A technology approach in the middle market, where it can help drive annual IT budgets to under 4% of revenue. While it is most advantageous for creating a new operating platform to accelerate transition services agreement (TSA) migrations, the move to cloud computing makes sense as a value driver at any point in the asset lifecycle.