Customer Intelligence – Analyzing and Acting on the Data

Part one of this topic addressed leveraging social media to improve customer satisfaction. This is the initial step toward a broader goal: creating a robust Customer Intelligence framework that allows P/C insurers to listen, connect, analyze, respond, and market to customers in a far more proactive and targeted way.

Customer Intelligence is the process of collecting relevant, timely information about customers and prospects, consolidating the data from all the different sources into a cohesive structure, and providing the sales, service, and marketing functions with tools that can leverage this intelligence. The sources of this data include the obvious ones, such as a carrier’s Customer Service Center and Policy or Claims Admin systems, but should also include the Agent, marketing surveys, telematics, and social media such as Twitter and Facebook – all mashed up to produce a Balanced Scorecard and Predictive Analytics.
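As a rough illustration of that consolidation step, here is a minimal Python sketch (the source systems and field names are hypothetical) that merges per-system records into a single customer view:

```python
# Hypothetical per-source records for one customer; in practice these would
# come from the policy admin system, claims system, social listening tool, etc.
sources = {
    "policy_admin": {"customer_id": "C-1001", "premium": 1250.0},
    "claims": {"customer_id": "C-1001", "open_claims": 1},
    "social": {"customer_id": "C-1001", "twitter_handle": "@jane_doe"},
}

def consolidate(source_records: dict) -> dict:
    """Fold every source's fields into one consolidated customer view."""
    view = {}
    for source, record in source_records.items():
        for field_name, value in record.items():
            view[field_name] = value  # simplistic rule: last source wins on conflicts
    return view

print(consolidate(sources))
```

A real implementation would need matching rules (the records here conveniently share a `customer_id`) and conflict-resolution policies far beyond "last source wins," but the shape of the problem is the same.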

Most CRM systems need to be updated with new user-profile fields beyond email and phone number, such as Facebook name and Twitter handle. With social listening and response management connected to your CRM, a social inquiry can be viewed in context and the activity recorded for future interactions, available to Customer Service Reps or even agency personnel. This level of social customer intelligence will differentiate the companies that do it right, becoming a key element of a carrier’s business strategy.
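A minimal sketch of what those extra profile fields and the recorded social activity might look like (all field and method names here are hypothetical, not any particular CRM's schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CustomerProfile:
    # Traditional CRM contact fields
    customer_id: str
    email: Optional[str] = None
    phone: Optional[str] = None
    # New social identifiers, so listening tools can match
    # inbound mentions back to a known customer
    twitter_handle: Optional[str] = None
    facebook_name: Optional[str] = None
    interaction_log: list = field(default_factory=list)

    def log_social_interaction(self, channel: str, note: str) -> None:
        """Record a social touchpoint so CSRs and agents see it in context."""
        self.interaction_log.append({"channel": channel, "note": note})

profile = CustomerProfile("C-1001", email="jane@example.com",
                          twitter_handle="@jane_doe")
profile.log_social_interaction("twitter", "Asked about claim status")
```

The point is simply that the social handle lives alongside email and phone, and each social touch is logged against the same record every other channel uses.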

A fully integrated Customer Intelligence platform provides benefits such as:

  • A single integrated interface to many social media outlets
  • The ability to manage multiple writing companies
  • The ability to create and track cases, contacts, accounts, and leads from real-time conversations
  • Tools to manage marketing campaigns and track social media marketing ROI
  • Cues for CSRs on upsell and cross-sell opportunities

A carrier should determine the Key Performance Indicators (KPIs) that matter most to its business goals, then view the appropriate data in graphical dashboards to track the effectiveness of its efforts. It’s important to tie those KPIs to their influence on customer behaviors such as loyalty and increased sales. But carriers must also take care not to misread positive or negative changes, and must fully understand the reasons for success or failure. Reacting to a sales increase with more online advertising in certain media outlets may not produce the desired results if the real driver of that increase was the upsell and cross-sell efforts of CSRs.
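The caution about misattributing success can be made concrete with a small sketch: before reacting to a sales increase, break it down by originating channel. The records and channel names below are invented for illustration:

```python
# Hypothetical sales records tagged with the channel that originated them.
sales = [
    {"channel": "online_ad", "amount": 1200.0},
    {"channel": "csr_cross_sell", "amount": 3400.0},
    {"channel": "csr_cross_sell", "amount": 2100.0},
    {"channel": "agent", "amount": 900.0},
]

def sales_by_channel(records):
    """Total sales per originating channel."""
    totals = {}
    for r in records:
        totals[r["channel"]] = totals.get(r["channel"], 0.0) + r["amount"]
    return totals

totals = sales_by_channel(sales)
top = max(totals, key=totals.get)
print(top)  # here the lift comes from CSR cross-sell, not online ads
```

In this toy data, doubling down on online advertising would be exactly the misread described above: the dashboard would show growth, but the KPI breakdown shows CSRs driving it.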

Data Darwinism – Capabilities that provide a competitive advantage

In my previous post, I introduced the concept of Data Darwinism, which states that for a company to become (and remain) the ‘king of the jungle’, it needs the ability to continually innovate. Let’s be clear, though: innovation must be aligned with the strategic goals and objectives of the company. The landscape is littered with examples of innovative ideas that didn’t have a market.

So that begs the question: “What are the behaviors and characteristics of companies that are at the top of the food chain?” The answer can go in many different directions. With respect to Data Darwinism, the following hierarchy illustrates the categories of capabilities that an organization needs to demonstrate to truly become a dominant force.

Foundational

The impulse for an organization will be to jump immediately to implementing the capabilities it thinks will put it at the top of the pyramid. While this is possible to a certain extent, you must put certain foundational capabilities in place to have a sustainable model. Examples of capabilities at this level include data integration, data standardization, data quality, and basic reporting.

Without clean, integrated, accurate data that is aligned with the intended business goals, the ability to implement the more advanced capabilities is severely limited. This does not mean that all foundational capabilities must be implemented before moving on to the next level; quite the opposite, actually. You must balance the need for the foundational components against the return that the more advanced capabilities will enable.
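One foundational capability, data standardization, can be sketched in a few lines. The rules below are simplified examples, not a complete cleansing routine:

```python
import re

def standardize_phone(raw: str) -> str:
    """Foundational-level cleanup: reduce a phone number to its 10 digits."""
    digits = re.sub(r"\D", "", raw)  # strip everything that isn't a digit
    return digits[-10:] if len(digits) >= 10 else digits

def standardize_record(rec: dict) -> dict:
    """Normalize whitespace/casing on names and formatting on phone numbers."""
    return {
        "name": " ".join(rec.get("name", "").split()).title(),
        "phone": standardize_phone(rec.get("phone", "")),
    }

print(standardize_record({"name": "  sMITH,   JOHN", "phone": "(555) 123-4567"}))
# {'name': 'Smith, John', 'phone': '5551234567'}
```

Trivial as they look, rules like these are what make the downstream matching, reporting, and analytics layers trustworthy.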

Transitional

Transitional capabilities are those that allow an organization to move from siloed, isolated, often duplicative efforts to a more ‘centralized’ platform for leveraging its data. Capabilities at this level of the hierarchy start to migrate toward an enterprise view of data and include such things as a more complete, integrated data set, increased collaboration, basic analytics, and ‘coordinated governance’.

Again, you don’t need to fully instantiate the capabilities at this level before building capabilities at the next level.   It continues to be a balancing act.

Transformational

Transformational capabilities are those that allow a company to start truly differentiating itself from the competition. They don’t fully deliver the innovative capabilities that set a company head and shoulders above other companies, but rather set the stage for them. This stage can be challenging for organizations, as it can require a significant change in mind-set compared to the way the company currently conducts its operations. Capabilities at this level of the hierarchy include more advanced analytical capabilities (such as true data mining), targeted access to data by users, and ‘managed governance’.

Innovative

Innovative capabilities are those that truly set a company apart from its competitors. They allow for innovative product offerings, unique ways of handling the customer experience, and new ways of conducting business operations. Amazon is a great example: its ability to customize the user experience and offer ‘recommendations’ based on a wealth of buying-trend data has set it apart from most other online retailers. Capabilities at this level of the hierarchy include predictive analytics, enterprise governance, and user self-service access to data.
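As a toy illustration of the recommendation idea (not Amazon's actual method), even simple co-occurrence counts over purchase histories can suggest "customers who bought X also bought Y." The product names below are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase "baskets" -- each set is one customer's products.
baskets = [
    {"auto", "umbrella"},
    {"auto", "home"},
    {"auto", "home", "umbrella"},
    {"home", "flood"},
]

# Count how often each pair of products appears together.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(product: str, k: int = 2):
    """Return the k products most often bought alongside the given one."""
    scored = [(other, n) for (p, other), n in co_counts.items() if p == product]
    return [other for other, _ in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("auto"))  # home and umbrella co-occur most often (tie order may vary)
```

Production recommenders are far more sophisticated, but the innovative capability is the same in kind: mining accumulated behavior data to personalize the next interaction.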

The bottom line is that moving up the hierarchy requires vision, discipline and a pragmatic approach.   The journey is not always an easy one, but the rewards more than justify the effort.

Check back for the next installment of this series “Data Darwinism – Evolving Your Data Environment.”

Data Darwinism – Are you on the path to extinction?

Most people are familiar with Darwinism.  We’ve all heard the term survival of the fittest.   There is even a humorous take on the subject with the annual Darwin Awards, given to those individuals who have removed themselves from the gene pool through, shall we say, less than intelligent choices.

Businesses go through ups and downs, transformations, up-sizing/down-sizing, centralization/ decentralization, etc.   In other words, they are trying to adapt to the current and future events in order to grow.   Just as in the animal kingdom, some will survive and dominate, some will not fare as well.   In today’s challenging business environment, while many are trying to merely survive, others are prospering, growing and dominating.  

So what makes the difference between being the king of the jungle or being prey?   The ability to make the right decisions in the face of uncertainty.     This is often easier said than done.   However, at the core of making the best decisions is making sure you have the right data.   That brings us back to the topic at hand:  Data Darwinism.   Data Darwinism can be defined as:

“The practice of using an organization’s data to survive, adapt, compete and innovate in a constantly changing and increasingly competitive business environment.”

When asked to assess where they are on the Data Darwinism continuum, many companies will say that they are at the top of the food chain, that they are very fast at getting data to make decisions, that they don’t see data as a problem, etc.   However, when truly asked to objectively evaluate their situation, they often come up with a very different, and often frightening, picture. 

It’s as simple as looking at your behavior when dealing with data:

If you find yourself exhibiting more of the behaviors on the left side of the picture above, you might be a candidate for the next Data Darwin Awards.

Check back for the next installment of this series “Data Darwinism – Capabilities that Provide a Competitive Advantage.”

Are your KPIs creating a no-win situation?

Businesses that start implementing KPIs at a departmental level, without an enterprise wide effort to define a balanced set of key performance indicators, can unwittingly push their businesses into a no-win situation, as in these real-world scenarios:

  • Customer Call Centers (often ahead of the curve in setting metrics) track and incentivize their call center agents to keep call times short. Agents, in an effort to shave seconds off each call, skip the crucial step of searching for an existing customer before entering a new one when logging interactions. Result: duplicate customer records, which may even be pushed to other systems, creating pain across multiple departments.
  • In the push to meet monthly sales quotas, hyper-discounting becomes the norm among the sales team. If the pricebook is complex and no one can get a true read on profitability, inappropriate discounting may be approved because management doesn’t have access to the right information to make an informed decision.
  • Some businesses steer only by financial performance measures, but these are lagging indicators, and can seldom, in and of themselves, provide the required agility to succeed in rapidly changing situations.
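The duplicate-record scenario in the first bullet comes down to a missing "search before insert" step, which can be sketched as follows (field names are hypothetical):

```python
# A toy in-memory customer store; a real CRM would query its database here.
customers = [{"id": 1, "email": "jane@example.com", "name": "Jane Doe"}]

def find_or_create(email: str, name: str) -> dict:
    """Search for an existing customer before creating a new record --
    the step agents skip when call time is the only metric that counts."""
    for c in customers:
        if c["email"].lower() == email.lower():
            return c  # existing record found: no duplicate created
    new = {"id": len(customers) + 1, "email": email, "name": name}
    customers.append(new)
    return new

rec = find_or_create("JANE@example.com", "Jane Doe")
print(rec["id"], len(customers))  # 1 1 -- matched the existing record
```

The lookup costs seconds on the call; skipping it costs every downstream department that inherits the duplicates.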

The key, of course, is to strive for balance when implementing KPIs:

  • Balance between leading (forward-looking) and lagging (backward-looking) indicators.
  • Balance across stakeholder perspectives. The Balanced Scorecard as a starting point works well to achieve balance across core stakeholder viewpoints of financial, customer, process, and learning/growth.
  • Balance across levels in your business hierarchy. Kaplan and Norton expanded on the balanced scorecard approach to help businesses drive metrics down through their organizations via strategy maps.
  • Balance between speed metrics and quality metrics.

Image courtesy of memory-alpha.org

The alternative to a balanced approach at the outset is usually a technology desperation move: manually cobbling together key reports, manually scrubbing out duplicate data, or implementing undesirable or even temporary customizations to packaged programs. There’s usually at least one person in the IT department who’s enough of a Star Trek fan to want to reprogram that no-win scenario, just like the young James Kirk did with the Kobayashi Maru.

Why Analytics Projects Fail

During a recent informal forum (whose members shall remain nameless, to protect my sorry existence a few more years), analytics projects came up as a topic. The question was a simple one. All of the industry analysts and surveys said analytic products and projects would be hot and would soak up the bulk of the meager discretionary funds a grateful company avails its CIO. If true, why were things so quiet? Why no “thundering” successes?

My answer was to put forward the “typical” project plan of a hypothetical predictive analytics project as a straw man to explore the topic:

  • First, spend $50 to $100K on product selection.
  • Second, hire a contractor in the product selected and tell him you want a forecasting model for revenue and cost. 
  • The contractor says fine, I’ll set up default questions, by the way where is the data?
  • The contractor is pointed to the users. He successively moves down the organization until he passes through the hands-on user actually driving the applications and reporting (ultimately fingering IT as the source of all data).  On the way the contractor finds a fair amount of the data he needs in Excel spreadsheets and Access databases on the user’s PCs (at this point a CFO in the group hails me as Nostradamus because that is where his data resides).
  • IT pulls together some extracts containing the remaining required data that seem to meet the needs the contractor described (as far as they can tell; then IT hits the Staples Easy Button — got to get back to keeping the lights on and the mainline applications running!).
  • Contractor puts the extracts into the analytics product, does some back-testing with whatever data he has, makes some neat graphics and charts, and declares victory.
  • Senior management is thrilled, the application is quite cool and predicts last month spot on.  Next month even looks close to the current Excel spreadsheet forecast.
  • During the ensuing quarter, the cool charts and graphs look stranger and stranger until the model flames out with bizarre error messages.
  • The conclusion is drawn that the technology is obviously not ready for prime time and that lazy CIO should have warned us.  It’s his problem and he should fix it, isn’t that why we keep him around?

At this point there are a number of shaking heads and muffled chuckles; we have seen this passion play before.  The problem is not any product’s fault or really any individual’s fault (it is that evil nobody again, the bane of my life).  The problem lies in the project approach.

So what would a better approach be?  The following straw man ensued from the discussion:

  • First, in this case, skip the product selection.  There are only two leading commercial products for predictive analytic modeling (SAS, SPSS).  Flip a coin (if you have a three-headed coin look at an open source solution, R or ESS), maybe it’s already on your shelf, blow the dust off.  Better yet, would a standard planning and budgeting package fit (Oracle/Hyperion)?  The next step should give us that answer anyway, no need to rush to buy, vendors are always ready to sell you something (especially at month/quarter end — my, that big a discount!).
  • Use the money saved for a strategic look at the questions that will be asked of the model: What are the key performance indicators for the industry?  Are there any internal benchmarks, industry benchmarks or measures?  Will any external data be needed to ensure optimal (correct?) answers to the projected questions?
  • Now take this information and do some data analysis (much like dumpster diving).  The key is to find the correct data in a format that is properly governed and updated (no Excel or Access need apply).  The key is accurate sustainability of all data inputs, remember our friend GIGO (I feel 20 years old all over again!).  This should sound very much like a standard Data Quality and Governance Project (boring, but necessary evil to prevent future embarrassment to the guilty).  
  • Now that all of the data is dropped into a cozy data mart and supporting extracts are targeted there, set up all production jobs to keep everything fresh.
  • This is also a great time to give that contractor or consultant the questions and analysis done earlier, so it will be at hand with a companion sustainable datamart.  Now iterations begin — computation, aggregation, correlation, derivation, deviation, visualization, (Oh My!). The controlled environment holds everybody’s feet to the fire and provides excellent history to tune the model with.
  • A reasonable model should result, enjoy!
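The back-testing discipline implied by these steps (validating on held-out periods rather than admiring the fit to last month) can be sketched with even a trivial model. The revenue figures below are invented:

```python
# Hypothetical monthly revenue series -- twelve months of history.
revenue = [100, 104, 110, 108, 115, 121, 119, 126, 130, 128, 135, 141]

def moving_average_forecast(history, window=3):
    """Deliberately simple model: forecast next month as the trailing average."""
    return sum(history[-window:]) / window

# Hold out the last three months and walk forward through them,
# scoring each forecast against the actual before rolling it in.
train, holdout = revenue[:9], revenue[9:]
errors = []
history = list(train)
for actual in holdout:
    pred = moving_average_forecast(history)
    errors.append(abs(pred - actual))
    history.append(actual)  # roll forward with the real value

mae = sum(errors) / len(errors)
print(round(mae, 2))  # 6.67 -- an honest out-of-sample error estimate
```

A model that only "predicts last month spot on" has been scored in-sample; the walk-forward holdout above is what tells you whether it will survive the ensuing quarter.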

No approach is perfect, and all have their risks, but this one has a better probability of success than most.