Accountable Care Analytics: A data-driven approach to achieving value-based healthcare

Edgewater’s Accountable Care Analytics application is a comprehensive set of data integration and business intelligence capabilities for clinical, financial, and care management professionals, empowering organizations to improve quality and reduce costs across a spectrum of care delivery settings. The application streamlines many of the labor-intensive aspects of capturing and reporting the quality and financial performance of accountable care, alternative quality contract, and similar risk-based arrangements operating in healthcare today. It achieves this by enabling healthcare providers to take a data-driven approach to understanding the impact of quality, cost, and outcomes on performance across the extended ACO enterprise.

In this podcast, Edgewater provides a high level overview of the Accountable Care Analytics application.

Edgewater Healthcare Analytics

I recently read an article called “The 4 Biggest Obstacles ACOs Face” on Forbes.com that I found really interesting. In it, the author identifies what I think are the primary challenges for Accountable Care Organizations (ACOs). But I would change the order.

ACOs need a management structure in place to make critical operational decisions. But those decisions should be made leveraging enterprise-wide data. So the primary challenge for ACOs is providing management with accurate, actionable data for those decisions before all of the technical integration challenges have been addressed.

To learn more about Edgewater’s Accountable Care Analytics application, and how it can help you get meaningful data to ACO decision makers, email us.

Are you Paralyzed by a Hoard of Big Data?

Lured by the promise of big data benefits, many organizations are leveraging cheap storage to hoard vast amounts of structured and unstructured data. Without a clear framework for big data governance and use, businesses run the risk of becoming paralyzed under an unorganized jumble of data, much of which has become stale and past its expiration date. Stale data is toxic to your business – it could lead you into taking the wrong action based on data that is no longer relevant.

You know there’s valuable stuff in there, but the thought of wading through all THAT to find it stops you dead in your tracks. There goes your goal of business process improvement, which, according to a recent Informatica survey, most businesses cite as their number one big data initiative goal.

Just as the individual hoarder often requires a professional organizer to help them pare the hoard and institute acquisition and retention rules for preventing hoard-induced paralysis in the future, organizations should seek outside help when they find themselves unable to turn their data hoard into actionable information.

An effective big data strategy needs to include the following components:

  1. An appropriate toolset for analyzing big data and making it actionable by the right people. Avoid building an ivory tower big data bureaucracy, and remember, insight has to turn into action.
  2. A clear and flexible framework, such as social master data management, for integrating big data with enterprise applications, one that can quickly leverage new sources of information about your customers and your market.
  3. Information lifecycle management rules and practices, so that insight and action are based on relevant, as opposed to stale, information.
  4. Consideration of how the enterprise application portfolio might need to be refined to maximize the availability and relevance of big data. In today’s world, that will involve grappling with the flow of information between cloud and internally hosted applications as well.
  5. A comprehensive data security framework that defines who is entitled to use, change, and delete the data, along with encryption requirements and any required upgrades in network security.

Get the picture? Your big data strategy isn’t just a data strategy. It has to be a comprehensive technology-process-people strategy.

All of these elements should, of course, be considered when building your big data business case and estimating return on investment.

Adobe, IBM, WebTrends, and comScore named leaders in Web Analytics

Independent research firm Forrester recently released its annual “Forrester Wave: Web Analytics, Q4 2011” report, naming Adobe, IBM, comScore, and WebTrends as the current leaders of the web analytics industry. AT Internet and Google Analytics were also included as “strong performers,” while Yahoo Analytics took 7th place as the lone wolf in the “contender” category.

Not surprisingly, Adobe SiteCatalyst and IBM Coremetrics stood out with the top two scores overall, but WebTrends Analytics 10 and comScore Digital Analytix showed major strengths as well. Unica NetInsight, another offering from IBM, did not make the list because it is being folded into Coremetrics. In 2010, IBM acquired both Unica and Coremetrics. The Forrester report states, “IBM is incorporating the complementary and notable features of Unica NetInsight into a merged web analytics solution based on the Coremetrics platform.”

The full report can be downloaded from Adobe or WebTrends and will likely show up on other vendor sites soon.

Multi-Touch Attribution Campaign Tracking with WebTrends

This article is a follow-up to the webinar.

All web analytics platforms have some way of tracking marketing campaign performance, usually out of the box or with a little setup. Generally they all do a pretty good job of this and provide key reports for making important business decisions about which campaigns to invest more money in, which to reduce spending on, and which to get rid of altogether. But often these decisions are made without insight into the whole picture. Why? Simply because most campaign reports are set up in the industry-standard way of attributing all conversions to the last, or most recent, campaign clicked. That has long been the standard, but it is time for a change, because this method ignores the fact that people often go through multiple campaigns before converting.

So what other attribution options are there? And why wouldn’t I want to attribute conversion credit to the most recent campaign? There are typically three options for campaign attribution:

  1. Last Touch (Most recent campaign)
  2. First Touch (Original campaign)
  3. Multi-touch (All campaign touches)

Technically there are two options for multi-touch attribution. One option is to give full credit to every campaign touch, and the other is to give partial credit to each touch. For example, if three different campaign touches resulted in a sale of $30, you could credit each touch with $10. For the purposes of this article, we will focus on the full-credit option. As for the question “why wouldn’t I want to attribute conversion credit to the most recent campaign?”, this is not really the right question to ask. The better question is, “Do I have the best possible insight into the performance of my marketing campaigns?” The answer to that question is almost always “no” if you are only analyzing a single attribution method. So rather than replacing industry-standard last touch reports, adding first touch and multi-touch to your arsenal of reports is the best course of action.
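To make these options concrete, here is a minimal Python sketch, using an entirely hypothetical order and made-up campaign IDs, of how the same $30 sale would be credited under last touch, first touch, full-credit multi-touch, and fractional multi-touch attribution:

```python
# Minimal sketch of the attribution options, using a hypothetical order:
# a $30 sale preceded by three campaign touches (campaign IDs are made up).
touches = ["email_1", "ppc_brand", "display_retarget"]  # in chronological order
order_value = 30.0

credit = {}

# 1. Last touch: the most recent campaign gets full credit.
credit["last_touch"] = {touches[-1]: order_value}

# 2. First touch: the original campaign gets full credit.
credit["first_touch"] = {touches[0]: order_value}

# 3a. Multi-touch, full credit: every touch is credited the full amount
#     (report totals will exceed actual revenue, which is expected).
credit["multi_full"] = {t: order_value for t in touches}

# 3b. Multi-touch, fractional credit: the sale is split evenly across touches.
credit["multi_fractional"] = {t: order_value / len(touches) for t in touches}

for model, allocation in credit.items():
    print(model, allocation)
```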

Fortunately for WebTrends users, there has been a great method for gaining insight into all campaign touches for quite some time, although a little work up front is necessary to get the full benefit. If you are already doing basic campaign tracking within WebTrends, then the visitor history table is already turned on, and with minimal effort you can set up two new custom reports covering the first touch campaign and all campaign touches, respectively. To do this, you make use of two features of the visitor history table and create two new custom dimensions: one based on WT.vr.fc (the “fc” stands for “first campaign”) and another based on WT.vr.ac (the “ac” stands for “all campaigns”). Once you have the dimensions set up, you create custom reports using those dimensions and whichever metrics you want applied. To make things easier, copy the existing campaign ID report and just change the dimension the report is based on.

The “first touch” report ends up looking nearly identical to the existing campaign ID report, but the rows of data will be different, since the revenue and other conversion credit are applied to the first campaign that referred the conversion rather than the last.

Standard Campaign ID Report Sample
First Touch Campaign ID Sample

The “all touches” report is where you’ll notice more differences. You will see some or many (depending on the date range you have selected) rows of data that contain multiple campaign IDs separated by semicolons. To view only the rows that contain multiple campaign touches, just filter the report on a semicolon.

Multi-Touch Campaign ID Report Sample
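If you export that report, the same semicolon filter is easy to reproduce outside of WebTrends. Here is a minimal Python sketch that assumes a CSV export with a “Campaign ID” column; the file and column names are assumptions for illustration, not WebTrends’ actual export layout:

```python
import pandas as pd

# Hypothetical CSV export of the "all touches" custom report; the file and
# column names are assumptions, not WebTrends' actual export layout.
all_touches = pd.read_csv("all_touches_report.csv")  # e.g. columns: Campaign ID, Revenue

# Rows whose Campaign ID contains a semicolon involved more than one campaign.
multi_only = all_touches[all_touches["Campaign ID"].str.contains(";", na=False)]

print(f"{len(multi_only)} of {len(all_touches)} rows involve multiple campaign touches")
```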

So what do you do with this information? What does it all mean?
Spending some time with this new data will likely reveal some patterns you never had insight into before. For example, you may notice that a campaign appears to perform poorly according to your traditional last touch reports but performs much better as a first touch, or vice versa. Since the first touch report is so similar to the out-of-the-box campaign ID report, it is fairly straightforward; the only difference is that the first touch gets the credit. The all touches report is more complicated, though. What I find most useful about this report is the ability to determine a campaign’s total reach and compare it to its absolute reach. Take, for example, campaign ID 32. In the above screenshots you will notice that this campaign ID has $63,441 attributed to it as a last touch campaign, $35,839 attributed to it as a first touch campaign, and $82,036 attributed to it when you search for it in the all touches report (see fig. 4 below). What this data is telling us in this particular case is that:

  • $63,441 in revenue was most recently referred by campaign 32
  • Only $35,839 in revenue was initially referred by campaign 32
  • But overall campaign 32 at least partially referred $82,036 in revenue

As you can see, there can be very significant differences in campaign performance depending on how you look at the data. Taking the easy way out and looking only at a single attribution method can lead to less than fully informed decisions about your campaigns. What if you were relying solely on first-touch reports in this example? That could lead you to reduce your budget for campaign 32 when in reality it was performing much better than the first-touch report alone suggested.

Multi-Touch Report Filtered by Campaign ID 32

Ok, so all that is well and good, but manually analyzing campaign IDs one at a time is a lot of work! Yes, it certainly is using the methods I just provided as examples. But there is a much better way to approach this. Taking things a step further, we can export each of these reports and combine them in Excel using the campaign IDs as our key values. What we want to end up with is something like the following, which allows us to analyze first, last, and multi-touch all within a single interface.

Multi-Touch Reporting in Excel Sample
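If you would rather script the combination step than build it by hand, here is one possible approach in Python with pandas, assuming each of the three reports has been exported to CSV with “Campaign ID” and “Revenue” columns (the file and column names are assumptions for illustration):

```python
import pandas as pd

# Hypothetical CSV exports of the three custom reports; file and column
# names are assumptions used for illustration.
last = pd.read_csv("last_touch.csv")[["Campaign ID", "Revenue"]]
first = pd.read_csv("first_touch.csv")[["Campaign ID", "Revenue"]]
multi = pd.read_csv("all_touches.csv")[["Campaign ID", "Revenue"]]

# The all-touches report lists every campaign in a path (semicolon-separated),
# so split it to one row per campaign before summing that campaign's revenue.
multi["Campaign ID"] = multi["Campaign ID"].str.split(";")
multi = multi.explode("Campaign ID")
multi["Campaign ID"] = multi["Campaign ID"].str.strip()
multi = multi.groupby("Campaign ID", as_index=False)["Revenue"].sum()

# Join the three views on campaign ID so each row shows last-, first-, and
# multi-touch revenue side by side.
combined = (
    last.rename(columns={"Revenue": "Last Touch Revenue"})
    .merge(first.rename(columns={"Revenue": "First Touch Revenue"}), on="Campaign ID", how="outer")
    .merge(multi.rename(columns={"Revenue": "All Touches Revenue"}), on="Campaign ID", how="outer")
    .fillna(0)
)

combined.to_excel("multi_touch_attribution.xlsx", index=False)
```

Because the full-credit multi-touch report repeats revenue for every campaign in a path, the “All Touches Revenue” column will total more than actual revenue; that is expected, and it is what lets you compare a campaign’s total reach against its last touch and first touch credit.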

In part two of this article I’ll show you how to set this all up in WebTrends. But for now, follow the steps discussed in this article to get these super handy reports in place so you’ll be ready for the next part.

10 Actionable Web Metrics You Can Use – Part 1

Make your web analytics actionable

The end goal of a web analytics report should be to provide guidance on actions you can take to improve how well your website is meeting its goals. However, many analysts simply generate canned reports from their analytics tool and send them to management for review. In this two-part post, I will share 10 different web metrics that can tell your management “at a glance” how well a particular campaign or goal is performing, plus provide some relevant actions that can be taken to improve the underlying performance of each metric.

In Part 1, I will look at five metrics that are expressed in percentages. In Part 2, I will look at five metrics that are expressed as an index. Ideally, these metrics would be designed to be seen as gauges on a dashboard, and some can have the ranges color-coded (green/yellow/red) to quickly show the impact of that metric. Here are the first five actionable metrics.

1. Campaign Margin.

If you are running any paid campaigns for an ecommerce or lead-generation site, you need to know your margin. In simple terms, your campaign margin is defined as your revenue from a campaign less its cost, divided by the revenue. Your goal is to stay as close to 100% as possible. You can create a report that shows the campaign margin for any campaign that involves external spend (banners, paid search, sponsorships, etc.) or internal spend on employees’ time (social media marketing, forum and article posts, etc.). The smaller your margin, the less money you are making. With this metric, 0% is breakeven: if you have a negative margin, you are losing money on that campaign; if you have a positive margin, you are making money. This type of margin can be shown as a gauge and placed on your analytics dashboard. If your margin is negative or near zero, you need to investigate why the campaign is costing so much or how you can increase its effectiveness.
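As a quick illustration, here is a minimal Python sketch of the calculation and one possible gauge color-coding; the revenue, cost, and thresholds are hypothetical:

```python
def campaign_margin(revenue: float, cost: float) -> float:
    """Campaign margin: (revenue - cost) / revenue, expressed as a percentage."""
    return (revenue - cost) / revenue * 100

def gauge_color(margin_pct: float) -> str:
    # Hypothetical gauge thresholds; tune these to your own business.
    if margin_pct <= 0:
        return "red"     # 0% is breakeven; negative means the campaign loses money
    if margin_pct < 30:
        return "yellow"
    return "green"

# Example: a paid search campaign that generated $12,500 and cost $4,000 to run.
margin = campaign_margin(revenue=12_500, cost=4_000)
print(f"Campaign margin: {margin:.1f}% ({gauge_color(margin)})")  # 68.0% (green)
```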

2. Percent Revenue from New Visitors.

This metric tells you how likely visitors are to order from you on their first visit, compared to ordering on subsequent visits. In order to create this metric, you need to be able to segment your traffic by new vs. repeat visitors. To calculate the metric, take the revenue generated from new visitors and divide it by the total revenue. If the percentage is more than 50%, you get more of your sales from first-time visitors; if it is less than 50%, you get more orders from repeat visitors. If this percentage is low and you have limited repeat buyers, then perhaps you want to do a better job of getting visitors to purchase on their initial visit. If you have a low percentage of revenue from new visitors and a more expansive product line, then this metric is telling you that you get more of your sales from repeat visitors or customers, and you may want to focus on keeping your content fresh and maintaining campaigns such as email or social networking to keep your visitors coming back.

3. Engaged Visitor Percentage (EVP)

This metric is defined as the number of visits that contain an action or event indicating engagement, divided by the total number of visits. To use this metric, you must first determine what counts as engagement. This can be any of the following: visiting a specific number of pages, visiting particular pages of interest, subscribing or registering on your site, posting a comment, rating something, clicking on an ad, using a tool, navigating a map, downloading something, playing a video, forwarding to a friend, or anything else you wish to treat as engagement. By monitoring this metric over time, you can determine whether your site is doing a better or worse job of engaging your visitors, if this is one of the goals of your site.
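For illustration, here is a minimal Python sketch of the EVP calculation over a handful of hypothetical visit records, where each visit lists the engagement events it contained:

```python
# Minimal EVP sketch over hypothetical visit-level records, where each visit
# lists the engagement events (if any) it contained.
visits = [
    {"visit_id": 1, "events": ["video_play"]},
    {"visit_id": 2, "events": []},
    {"visit_id": 3, "events": ["comment", "rating"]},
    {"visit_id": 4, "events": []},
]

engaged_visits = sum(1 for v in visits if v["events"])  # visits with at least one engagement event
evp = engaged_visits / len(visits) * 100
print(f"Engaged Visitor Percentage: {evp:.0f}%")  # 50%
```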

4. Utilization Factor (UF)

Some organizations have developed their website to encourage users to conduct business through it instead of calling or submitting paperwork. For example, an insurance company may want claims to be processed via the web, and a financial agency may want its brokers to process transactions via the web instead of sending in forms. If one of your goals is to encourage the use of your site to accomplish tasks, one way to measure this is the number of activities conducted on the web divided by the total number of activities conducted online and offline. This metric is a bit more complicated: to calculate it entirely online, you need to import the offline data into your web analytics program. Alternatively, you can export the online data and create an Excel-based report that combines the online and offline data. Your UF can also be used to measure the percentage of registered users who use the site to transact business. By monitoring the Utilization Factor over time, you can determine how well your efforts to shift transactions to the web are working. Specific actions can include training your users on how to use the site to process transactions, or ongoing communications reminding them to use it.

5. Self Service Factor (SSF)

If your site is meant to provide customer service, one of your goals could be to reduce the percentage of customer service issues that are handled over the phone. Thus, the SSF would be calculated as the number of service issues that were resolved on the web divided by the total number of service issues (web + phone + chat + email). In order to do this, you would either need to import your offline data into your web analytics program, or export your online data into a spreadsheet and combine it with your offline data. If your company has a target for resolving service issues via the site, you can create a gauge that shows the actual percentage against the goal, or color-code the result as red or green to show whether the SSF is above or below the target. Part of your site’s optimization effort would include analyzing the issues that are most often called in and updating the website content accordingly, or making the top 10 most frequent issues a sidebar on the customer service section of the site.
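As a sketch, the SSF calculation might look like the following in Python; the channel counts and target are hypothetical, and in practice the offline numbers would come from your call-center or CRM exports:

```python
# Self Service Factor sketch: web-resolved issues over all service issues.
# The channel counts are hypothetical; in practice the offline numbers would
# come from call-center / CRM exports combined with your web analytics data.
resolved = {"web": 1_840, "phone": 2_310, "chat": 420, "email": 530}

ssf = resolved["web"] / sum(resolved.values()) * 100
target = 40.0  # assumed target for the dashboard gauge
status = "green" if ssf >= target else "red"
print(f"Self Service Factor: {ssf:.1f}% (target {target:.0f}%, {status})")  # 36.1% ... red
```

The Utilization Factor from the previous section follows the same pattern, with web-completed activities in the numerator and all online plus offline activities in the denominator.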

In Part 2 of this article, I will show you how to use these five additional actionable metrics:

  • New Customer Index
  • Campaign Quality Index
  • Return Visitor Index
  • Branded Search Index
  • Site Search Impact

Data Darwinism – Capabilities that provide a competitive advantage

In my previous post, I introduced the concept of Data Darwinism, which states that for a company to become the ‘king of the jungle’ (and remain so), it needs the ability to continually innovate. Let’s be clear, though: innovation must be aligned with the strategic goals and objectives of the company. The landscape is littered with examples of innovative ideas that didn’t have a market.

So that raises the question, “What are the behaviors and characteristics of companies that are at the top of the food chain?” The answer can go in many different directions. With respect to Data Darwinism, the following hierarchy illustrates the categories of capabilities that an organization needs to demonstrate to truly become a dominant force.

Foundational

The impulse will be for an organization to jump immediately to implementing the capabilities it thinks will put it at the top of the pyramid. And while this is possible to a certain extent, you must put certain foundational capabilities in place to have a sustainable model. Examples of capabilities at this level include data integration, data standardization, data quality, and basic reporting.

Without clean, integrated, accurate data that is aligned with the intended business goals, the ability to implement the more advanced capabilities is severely limited. This does not mean that all foundational capabilities must be implemented before moving on to the next level. Quite the opposite, actually: you must balance the need for the foundational components with the return that the more advanced capabilities will enable.

Transitional

Transitional capabilities are those that allow an organization to move from siloed, isolated, often duplicative efforts to a more ‘centralized’ platform from which to leverage its data. Capabilities at this level of the hierarchy start to migrate toward an enterprise view of data and include such things as a more complete, integrated data set, increased collaboration, basic analytics, and ‘coordinated governance’.

Again, you don’t need to fully instantiate the capabilities at this level before building capabilities at the next level.   It continues to be a balancing act.

Transformational

Transformational capabilities are those that allow a company to start to truly differentiate itself from the competition. This level doesn’t fully deliver the innovative capabilities that set a company head and shoulders above the others, but rather sets the stage for them. This stage can be challenging for organizations because it can require a significant change in mindset compared with the way they currently conduct their operations. Capabilities at this level of the hierarchy include more advanced analytical capabilities (such as true data mining), targeted access to data by users, and ‘managed governance’.

Innovative

Innovative capabilities are those that truly set a company apart from its competitors. They allow for innovative product offerings, unique methods of handling the customer experience, and new ways to conduct business operations. Amazon is a great example: its ability to customize the user experience and offer ‘recommendations’ based on a wealth of user buying-trend data has set it apart from most other online retailers. Capabilities at this level of the hierarchy include predictive analytics, enterprise governance, and user self-service access to data.

The bottom line is that moving up the hierarchy requires vision, discipline and a pragmatic approach.   The journey is not always an easy one, but the rewards more than justify the effort.

Check back for the next installment of this series “Data Darwinism – Evolving Your Data Environment.”

Data Darwinism – Are you on the path to extinction?

Most people are familiar with Darwinism.  We’ve all heard the term survival of the fittest.   There is even a humorous take on the subject with the annual Darwin Awards, given to those individuals who have removed themselves from the gene pool through, shall we say, less than intelligent choices.

Businesses go through ups and downs, transformations, up-sizing/down-sizing, centralization/decentralization, etc. In other words, they are trying to adapt to current and future events in order to grow. Just as in the animal kingdom, some will survive and dominate, and some will not fare as well. In today’s challenging business environment, while many are merely trying to survive, others are prospering, growing, and dominating.

So what makes the difference between being the king of the jungle and being prey? The ability to make the right decisions in the face of uncertainty. This is often easier said than done. At the core of making the best decisions, however, is making sure you have the right data. That brings us back to the topic at hand, Data Darwinism, which can be defined as:

“The practice of using an organization’s data to survive, adapt, compete and innovate in a constantly changing and increasingly competitive business environment.”

When asked to assess where they are on the Data Darwinism continuum, many companies will say that they are at the top of the food chain, that they are very fast at getting data to make decisions, that they don’t see data as a problem, etc.   However, when truly asked to objectively evaluate their situation, they often come up with a very different, and often frightening, picture. 

It’s as simple as looking at your behavior when dealing with data:

If you find yourself exhibiting more of the behaviors on the left side of the picture above, you might be a candidate for the next Data Darwin Awards.

Check back for the next installment of this series “Data Darwinism – Capabilities that Provide a Competitive Advantage.”

It’s the End of the World As We Know It!

The holidays are a great time for watching “End of the World” shows on the History Channel. They were a great comfort, actually almost encouraging, because all of the prophecies target 2012. “The Bible Code II”, “The Mayan Prophecies”, and the big 2012 special compendium of End of the World scenarios, covering Nostradamus to obscure German prophets, all agree that 2012 is the big one (Dec 21, to be exact!). What a relief! The rest of the news reports are trending toward canned goods, shotguns, and gold by the end of the year. We really have almost a whole 48 months before everything goes bang (I wasn’t ready anyway; procrastination rules!).

Unfortunately, we need to do some IT planning and budgeting for the new year, and we probably should have some thoughts going out 36 months (after that, see the first paragraph). As I discussed in a prior blog, the reporting, BI/CPM/EPM, and analytics efforts are the strongest priority, followed by rational short-term cost-savings efforts. All organizations must see where they are heading and keep as much water bailed out of the corporate boat as possible. Easy call, job done!

Then again, a horrifying thought occurred to me: what if one of these initiatives should fail? (See my nightmares in prior blog posts on data and analytics.) I am not saying I’m the Mad Hatter and the CEO is the Red Queen, but my head is feeling a bit loosely attached at the moment. Management cannot afford a failed project in this environment, and neither can the CIO in any company (remember, CIO = Career Is Over).

The best way to ensure successful project delivery (and guarantee my ringside lawn chair and six-pack at Armageddon in 2012) lies in building on best practice and solid technical architecture. For example, the most effective architecture is to use a layer of indirection between the CPM application (like Planning & Budgeting) and the source data systems (ERP, custom transactional). This layer of indirection would be used for data staging, allowing transfer to and from fixed layouts for simplified initial installation and maintenance. In addition, this staging area would be used for data cleansing and rationalization operations to prevent polluting CPM cubes with uncontrolled errors and changes. In terms of best practice, libraries and tools should be used in all circumstances to encapsulate knowledge, rather than custom procedures or manual operations. Another best practice is to get procedural control of the Excel and Access jungle of wild and woolly data, which stands ready to crash any implementation and cause failure and embarrassment to the IT staff (and former CIO). When systems fail, it is usually a failure of confidence in the validity or timeliness of the information, whether presented by dashboard or simple report.
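To make the staging idea concrete, here is a minimal Python sketch of the kind of cleansing and rationalization pass described above, under assumed file layouts and rules; nothing here reflects a specific ERP or CPM product:

```python
import pandas as pd

# Illustrative staging step between a source extract and the CPM load:
# fixed layout in, cleansing and rationalization applied, fixed layout out.
# File names, column names, and rules are assumptions, not any product's layout.
RAW_COLUMNS = ["cost_center", "account", "period", "amount"]

raw = pd.read_csv("erp_extract.csv", header=0, names=RAW_COLUMNS, dtype=str)

staged = raw.copy()
staged["cost_center"] = staged["cost_center"].str.strip().str.upper()
staged["account"] = staged["account"].str.strip().str.zfill(6)  # standardize account codes
staged["amount"] = pd.to_numeric(staged["amount"], errors="coerce")

# Quarantine rows that would pollute the planning cubes instead of loading them.
valid_period = staged["period"].fillna("").str.match(r"^\d{4}-(0[1-9]|1[0-2])$")
rejects = staged[staged["amount"].isna() | ~valid_period]
clean = staged.drop(rejects.index)

rejects.to_csv("staging_rejects.csv", index=False)
clean.to_csv("cpm_load_ready.csv", index=False)
```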

CPM, EPM, and analytics convey highly refined information, and organizations are making decisions of significant consequence, restructuring and investing, based on that information. The information and the decisions are only as good as the underlying data going into them. So skimping on a proper implementation can put the CIO’s paycheck at serious risk (Ouch!).