Data Darwinism – Evolving your data environment

In my previous posts, the concept of Data Darwinism was introduced, as well as the types of capabilities that allow a company to set itself apart from its competition.   Data Darwinism is the practice of using an organization’s data to survive, adapt, compete and innovate in a constantly changing and increasingly competitive business environment.   If you take an honest and objective look at how and why you are using data, you might find out that you are on the wrong side of the equation.  So the question is “how do I move up the food chain?”

The goal of evolving your data environment is to shift from using your data reactively, merely to survive, to using it proactively as a foundational component of constant innovation and competitive advantage.

The plan is simple on the surface, but not always so easy in execution.   It requires an objective assessment of where you are compared to where you need to be, a plan/blueprint/roadmap to get from here to there, and flexible, iterative execution.

Assess

As mentioned before, taking an objective look at where you are compared to where you need to be is the first critical step.  This is often an interesting conversation among different parts of the organization that have competing interests and objectives. Many organizations can’t get past this first step. People get caught up in politics and self-interest and lose sight of the goal: moving the organization toward a competitive advantage. Other organizations don’t have the in-house expertise or discipline to conduct the assessment. However, until this step is completed, you remain vulnerable to other organizations that have moved past it.

Plan

Great, now you’ve done the assessment: you know what your situation is and what your strengths and weaknesses are.  But without a roadmap of how to get to your data utopia, you’re going nowhere.   The roadmap is really a blueprint of inter-related capabilities that need to be implemented incrementally over time to constantly move the organization forward.   I’ve seen this step end very badly for organizations that make some fundamental mistakes.  They try to do too much at once.  They make the roadmap too rigid to adapt to changing business needs.   They take a form-over-substance approach.  Any of these can be fatal to an organization.   The key to the roadmap is three-fold:

  • Flexible – This is not a sprint.   Evolving your data environment takes time.   Your business priorities will change, the external environment in which you operate will change, and so on.   The roadmap needs to be flexible enough to adapt to these kinds of challenges.
  • Prioritized – There will be an impulse to move quickly and do everything at once.   That almost never works.   It is important to align the roadmap’s priorities with the overall priorities of the organization.
  • Realistic – Just as you had to take an objective, and possibly painful, look at where you were with respect to your data, you have to take a similar look at what can be done given the constraints all organizations face.   Funding, people, discipline, etc. are all factors that need to be considered when developing the roadmap.   In some cases, you might not have the necessary skill sets in-house and will have to leverage outside talent.   In other cases, you will have to implement new processes, organizational constructs and enabling technologies to move to the next level.

Execute Iteratively

The capabilities you need to implement will build upon each other and it will take time for the organization to adapt to the changes.   Taking an iterative approach that focuses on building capabilities based on the organization’s business priorities will greatly increase your chance of success.  It also gives you a chance to evaluate the capabilities to see if they are working as anticipated and generating the expected returns.   Since you are taking an iterative approach, you have the opportunity to make the necessary changes to continue moving forward.

The path to innovation is not always an easy one.   It requires a solid, yet flexible, plan to get there and persistence to overcome the obstacles that you will encounter.   However, in the end, it’s a journey well worth the effort.

Data Darwinism – Are you on the path to extinction?

Most people are familiar with Darwinism.  We’ve all heard the term survival of the fittest.   There is even a humorous take on the subject with the annual Darwin Awards, given to those individuals who have removed themselves from the gene pool through, shall we say, less than intelligent choices.

Businesses go through ups and downs, transformations, up-sizing/down-sizing, centralization/decentralization, etc.   In other words, they are trying to adapt to current and future events in order to grow.   Just as in the animal kingdom, some will survive and dominate, and some will not fare as well.   In today’s challenging business environment, while many are merely trying to survive, others are prospering, growing and dominating.

So what makes the difference between being the king of the jungle and being prey?   The ability to make the right decisions in the face of uncertainty.   This is often easier said than done.   However, at the core of making the best decisions is making sure you have the right data.   That brings us back to the topic at hand:  Data Darwinism.   Data Darwinism can be defined as:

“The practice of using an organization’s data to survive, adapt, compete and innovate in a constantly changing and increasingly competitive business environment.”

When asked to assess where they are on the Data Darwinism continuum, many companies will say that they are at the top of the food chain, that they are very fast at getting data to make decisions, that they don’t see data as a problem, and so on.   However, when pressed to evaluate their situation objectively, they often come up with a very different, and frequently frightening, picture.

  It’s as simple as looking at your behavior when dealing with data:

If you find yourself exhibiting more of the behaviors on the left side of the picture above, you might be a candidate for the next Data Darwin Awards.

Check back for the next installment of this series “Data Darwinism – Capabilities that Provide a Competitive Advantage.”

The Patient Centered Medical Home (PCMH)


In a recent article released by IBM, an argument is made for a transition in the U.S. healthcare system to a team-based approach built on the Patient Centered Medical Home (PCMH) model. A strong case is made through a description of the model, its players, its technology, and its benefits. The critical change that must be established first, though, is the healthcare system’s evolution into a data-driven system. Access to higher-quality, integrated data across disparate silos of information will provide the foundation for this change. Only then can the position of Dr. Douglas Henley, EVP and CEO of the American Academy of Family Physicians, be achieved: “A smarter health system is one based in comprehensive patient centered primary care which improves patient/physician communication, the coordination and integration of care, and the quality and cost efficiency of care.”

The quality and cost of care are what we hear the most about in news headlines. However, the success of each piece of Dr. Henley’s statement rests on the ability of a team of providers to access accurate and up-to-date patient data across care settings and over time, in order to proactively suggest lifestyle improvements and reactively diagnose and recommend appropriate treatments.  Fundamentally, each decision maker and operating entity needs a data strategy for how it will achieve the ambitious and often ambiguous goals it likes to claim.

I’ll recite a popular management mantra I’ve heard numerous times: “you can’t manage what you can’t measure.” The healthcare system is a data-rich environment. Cleaning, manipulating, and leveraging the huge volume of data available will become the critical success factors that link education, research, and the delivery of care to its outcomes, and that make it possible to benchmark and monitor the continuous improvements necessary to bring costs down and quality up.

Players in the healthcare world will soon find out (if they haven’t already) a principle all those in the data world already know:

  • Good data, appropriately aggregated and manipulated, drives accurate information;
  • Accurate information is a luxury that most decision makers do not have;
  • The executives, managers, physicians, nurses, nurse practitioners, educators, pharmacists, researchers, and other stakeholders who do have access to accurate information are in a position to evolve that data and information from merely satisfying compliance and regulatory requirements into an organizational knowledge-based asset.

Actionable data will drive the improvements you see scattered across headlines and mentioned in political speeches, in the past and, no doubt, in the future.

Image courtesy of Texas Family Physician

Cutting costs should not mean cutting revenue.


Image courtesy of BusinessWeek 9/25/08: "AmerisourceBergen's Scrimp-and-Save Dave"

The recent financial panic has focused a great deal of attention on cutting costs. From frivolities like pens at customer service counters to headcount, organizations are slowing spending. Bad times force management to review every expense, and in times like these, to obsess over them. The financial picture, however, has two sides: expense and revenue.

A side effect of cost cutting can be stunted revenue, over both the short and long term. It is easier to evaluate costs than to uncover revenue opportunities, such as determining which offerings are truly profitable and adapting your strategies to maximize sales. Just as difficult to quantify are the true losses from unprofitable transactions and the competitive strategies that could negatively impact your competition.

The answers to many of these questions can be unearthed from data scattered around an organization, from a deep understanding of customers, and from knowledge shared instantly between disciplines. For example, by combining:

  • customer survey data;
  • external observations;
  • clues left on web visits;
  • and other correspondence within the corporation;

…an organization can uncover unmet needs to satisfy before the competition does, and at reduced investment cost. A minimal sketch of this kind of data combination follows.
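To make this concrete, here is a minimal sketch in Python of how those scattered sources might be combined. The file names (surveys.csv, web_visits.csv, correspondence.csv), column names, and thresholds are all hypothetical stand-ins for whatever extracts your organization actually has, joined on a shared customer identifier.

```python
# Hypothetical sketch: combining scattered customer signals to surface unmet needs.
# File names, columns, and thresholds are illustrative assumptions, not a real schema.
import pandas as pd

surveys = pd.read_csv("surveys.csv")          # customer_id, satisfaction (1-5), requested_feature
visits  = pd.read_csv("web_visits.csv")       # customer_id, page, visit_date
emails  = pd.read_csv("correspondence.csv")   # customer_id, subject, body

# Signal 1: low satisfaction scores from survey data
low_sat = surveys.loc[surveys["satisfaction"] <= 2, ["customer_id", "requested_feature"]]

# Signal 2: repeated visits to help or feature pages (clues left on web visits)
help_visits = (
    visits[visits["page"].str.contains("help|feature", case=False, na=False)]
    .groupby("customer_id").size().rename("help_visit_count").reset_index()
)

# Signal 3: correspondence that hints at a missing capability
asks = emails[emails["body"].str.contains("wish|missing|cannot", case=False, na=False)]
ask_counts = asks.groupby("customer_id").size().rename("ask_count").reset_index()

# Combine the signals; customers appearing in more than one source are the strongest leads
combined = (
    low_sat.merge(help_visits, on="customer_id", how="outer")
           .merge(ask_counts, on="customer_id", how="outer")
           .fillna(0)
)
combined["signal_strength"] = (
    (combined["requested_feature"] != 0).astype(int)
    + (combined["help_visit_count"] > 3).astype(int)
    + (combined["ask_count"] > 0).astype(int)
)

# The top of this list is a starting point for product and marketing conversations
print(combined.sort_values("signal_strength", ascending=False).head(10))
```

Even a crude score like this hands Marketing and Sales a short list of customers worth a closer look.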

When external factors, like a gloomy job outlook, cause customers to change behavior, it is time to use all the information at your disposal. Those prospects’ changing preferences for your offerings can provide golden intelligence about the competition or about unmet needs.

Pumping information like this is the heart of business intelligence. Marketing and Sales can uncover the opportunity; however, it is up to the enterprise to determine how to execute a timely offering. Finance, human capital planning, and operations work in concert to develop the strategy, which requires forecasting data, operational statistics and capacity planning data to line up.

A good strategist views all the angles, not just cost reduction.

Why Analytics Projects Fail

During a recent informal forum (whose members shall remain nameless to protect my sorry existence a few more years), analytics projects came up as a topic.  The question was a simple one.  All of the industry analysts and surveys said analytic products and projects would be hot and would soak up the bulk of the meager discretionary funds made available to a CIO by his grateful company.  If true, why were things so quiet?  Why no “thundering” successes?

My answer was to put forward the “typical” project plan of a hypothetical predictive analytics project as a straw man to explore the topic:

  • First, spend $50 to $100K on product selection.
  • Second, hire a contractor skilled in the selected product and tell him you want a forecasting model for revenue and cost.
  • The contractor says, “Fine, I’ll set up the default questions. By the way, where is the data?”
  • The contractor is pointed to the users. He successively moves down the organization until he reaches the hands-on users actually driving the applications and reporting (ultimately fingering IT as the source of all data).  Along the way the contractor finds a fair amount of the data he needs in Excel spreadsheets and Access databases on users’ PCs (at this point a CFO in the group hails me as Nostradamus, because that is where his data resides).
  • IT gets some extracts together containing the remaining required data that seem to meet the needs the contractor described (as far as they can tell; then IT hits the Staples Easy Button: got to get back to keeping the lights on and the mainline applications running!).
  • The contractor puts the extracts into the analytics product, does some back testing with whatever data he has, makes some neat graphics and charts, and declares victory.
  • Senior management is thrilled: the application is quite cool and predicts last month spot on.  Next month even looks close to the current Excel spreadsheet forecast.
  • During the ensuing quarter, the cool charts and graphs look stranger and stranger until the model flames out with bizarre error messages.
  • The conclusion is drawn that the technology is obviously not ready for prime time and that lazy CIO should have warned us.  It’s his problem and he should fix it; isn’t that why we keep him around?

At this point there are a number of shaking heads and muffled chuckles; we have seen this passion play before.  The problem is not any product’s fault or really any individual’s fault (it is that evil nobody again, the bane of my life).  The problem lies in the project approach.

So what would a better approach be?  The following straw man ensued from the discussion:

  • First, in this case, skip the product selection.  There are only two leading commercial products for predictive analytic modeling (SAS and SPSS).  Flip a coin (if you have a three-headed coin, look at an open source solution such as R or ESS); maybe one is already on your shelf, so blow the dust off.  Better yet, would a standard planning and budgeting package fit (Oracle/Hyperion)?  The next step should give us that answer anyway; there is no need to rush to buy, since vendors are always ready to sell you something (especially at month or quarter end: my, that big a discount!).
  • Use the money saved for a strategic look at the questions that will be asked of the model: What are the key performance indicators for the industry?  Are there any internal benchmarks, industry benchmarks or measures?  Will any external data be needed to ensure optimal (correct?) answers to the projected questions?
  • Now take this information and do some data analysis (much like dumpster diving).  The goal is to find the correct data in a format that is properly governed and updated (no Excel or Access need apply).  The key is the sustained accuracy of all data inputs; remember our friend GIGO (I feel 20 years old all over again!).  This should sound very much like a standard Data Quality and Governance project (a boring but necessary evil that prevents future embarrassment to the guilty).
  • Now that all of the data is dropped into a cozy data mart and supporting extracts are targeted there, set up all production jobs to keep everything fresh.
  • This is also a great time to give that contractor or consultant the questions and analysis done earlier, so they are at hand alongside a companion, sustainable data mart.  Now the iterations begin: computation, aggregation, correlation, derivation, deviation, visualization (oh my!); a minimal back-testing sketch follows this list.  The controlled environment holds everybody’s feet to the fire and provides excellent history to tune the model with.
  • A reasonable model should result; enjoy!
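To illustrate the iteration step, here is a minimal sketch in Python of a rolling-origin back test against history pulled from the governed data mart. The revenue.csv extract, its columns, the seasonal-naive baseline, and the season/holdout settings are all assumptions for illustration; the point is the evaluation loop, not the model.

```python
# Hypothetical sketch: rolling-origin back test of a forecasting baseline.
# Assumes a monthly revenue extract from the governed data mart named revenue.csv
# with columns month and revenue, and at least 18 months of history.
import pandas as pd

df = pd.read_csv("revenue.csv", parse_dates=["month"]).sort_values("month")
y = df.set_index("month")["revenue"]

SEASON = 12   # months in one seasonal cycle
HOLDOUT = 6   # how many month-ends to roll the forecast origin across
errors = []

for cutoff in range(len(y) - HOLDOUT, len(y)):
    train, actual = y.iloc[:cutoff], y.iloc[cutoff]
    # Seasonal-naive baseline: predict the value observed one season earlier
    forecast = train.iloc[-SEASON]
    errors.append(abs(actual - forecast) / abs(actual))

mape = 100 * sum(errors) / len(errors)
print(f"Seasonal-naive baseline MAPE over the last {HOLDOUT} months: {mape:.1f}%")
```

The seasonal-naive baseline is deliberately dumb; its job is to keep everyone honest, because a fancier model earns trust only by beating that number against the same governed history, not by predicting last month spot on.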

No approach is perfect, and all have their risks, but this one has a better probability of success than most.