Driving Value from Your Healthcare Analytics Program – Key Program Components

If you are a healthcare provider or payer organization contemplating an initial implementation of a Business Intelligence (BI) Analytics system, there are several areas to keep in mind as you plan your program.  The following key components appear in every successful BI Analytics program.  And the sooner you can bring focus and attention to these critical areas, the sooner you will improve your own chances for success.

Key Program Components

Last time we reviewed the primary, top-level technical building blocks.  However, the technical components are not the starting point for these solutions.  Technical form must follow business function.  The technical components come to life only when the primary mission and drivers of the specific enterprise are well understood.  And these must be further developed into a program for defining, designing, implementing and evangelizing the needs and capabilities of BI and related analytics tuned to the particular needs and readiness of the organization.

Key areas that require careful attention in every implementation include the following:

We have found that healthcare organizations (and solution vendors!) have contrasting opinions on how best to align the operational data store (ODS) and enterprise data warehouse (EDW) portions of their strategy with the needs of their key stakeholders and constituencies.  The “supply-driven” approach encourages a broad-based uptake of virtually all data that originates from one or more authoritative source systems, without any real pre-qualification of the usefulness of that information for a particular purpose.  This is the hope-laden “build it and they will come” strategy.  Conversely, the “demand-driven” approach encourages a particular focus on analytic objectives and scope, and uses this focus to concentrate the initial data uptake on satisfying a defined set of analytic subject areas and contexts.  The challenge here is not to focus the incoming data stream so narrowly that it limits related exploratory analysis.

For example, a supply-driven initiative might choose to tap into an existing enterprise application integration (EAI) bus and siphon all published HL7 messages into the EDW or ODS data collection pipe.  The proponents might reason that if these messages are being published on an enterprise bus, they should be generally useful; and if they are reasonably compliant with the HL7 RIM, their integration should be relatively straightforward.  However, their usefulness for a particular analytic purpose would still need to be investigated separately.
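To make the supply-driven pattern a bit more concrete, here is a minimal sketch (in Python) of what landing every published HL7 v2 message into a staging area might look like. The pipe-delimited parsing follows the standard HL7 v2 segment layout, but the staging structure and the sample message are illustrative assumptions, not a reference to any particular integration engine.

    # A minimal sketch of supply-driven intake: accept every HL7 v2 message
    # published on the bus and land it, lightly parsed, in a staging area.
    # The staging list and the example message are illustrative assumptions.

    def parse_hl7_v2(raw_message: str) -> dict:
        """Split a pipe-delimited HL7 v2 message into segments keyed by segment ID."""
        segments = {}
        for line in raw_message.strip().split("\r"):
            fields = line.split("|")
            segments.setdefault(fields[0], []).append(fields)
        return segments

    def stage_message(raw_message: str, staging_area: list) -> None:
        """Land the message with minimal qualification -- 'build it and they will come'."""
        parsed = parse_hl7_v2(raw_message)
        msh = parsed.get("MSH", [[]])[0]
        staging_area.append({
            "message_type": msh[8] if len(msh) > 8 else None,  # e.g. ADT^A01
            "raw": raw_message,                                 # keep the original for later use
            "parsed": parsed,
        })

    staging = []
    example = "MSH|^~\\&|ADT|HOSP|EDW|HOSP|202001011200||ADT^A01|123|P|2.3\rPID|1||MRN001||DOE^JANE"
    stage_message(example, staging)
    print(staging[0]["message_type"])  # ADT^A01

Whether those staged messages are actually useful for a given analysis still has to be evaluated separately, which is exactly the trade-off described above.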

Conversely, a demand-driven project might start with a required set of representative analytic question instances or archetypes, and drive the data sourcing effort backward toward the potentially diverging points of origin within the business operations.  For example, a surgical analytics platform to discern patterns between or among surgical cost components, OR schedule adherence, outcomes variability, payer mix, or the impact of specific material choices would depend on specific data elements that might originate from potentially disparate locations and settings.  The need here is to ensure that the data sets required to support the specific identified analyses are covered; but the collection strategy should not be so exclusive that it prevents exploration of unanticipated inquiries or analyses.
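As a rough illustration of working backward from the questions, here is a small sketch of an analytic-archetype catalog that ties each question to the data elements and candidate source systems it implies. The archetype names, elements and systems are assumptions for illustration only.

    # A sketch of driving data sourcing backward from analytic questions.
    # The question archetypes, data elements and source systems below are
    # illustrative assumptions, not a fixed catalog.

    ANALYTIC_ARCHETYPES = {
        "surgical_cost_by_component": {
            "data_elements": ["case_id", "procedure_code", "implant_cost",
                              "supply_cost", "labor_minutes", "payer"],
            "candidate_sources": ["OR_system", "supply_chain", "billing"],
        },
        "or_schedule_adherence": {
            "data_elements": ["case_id", "scheduled_start", "actual_start",
                              "scheduled_end", "actual_end", "room"],
            "candidate_sources": ["OR_system", "scheduling"],
        },
    }

    def required_sources(archetypes: dict) -> set:
        """Union of source systems implied by the selected analytic scope."""
        return {src for spec in archetypes.values() for src in spec["candidate_sources"]}

    print(sorted(required_sources(ANALYTIC_ARCHETYPES)))
    # ['OR_system', 'billing', 'scheduling', 'supply_chain']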

I’ll have a future blog topic on a methodology we have used successfully to progressively decompose, elaborate and refine stakeholder analytic needs into the data architecture needed to support them.

In many cases, a key objective for implementing healthcare analytics will be to bring focus to specific areas of enterprise operations: to drive improvements in quality, performance or outcomes; to drive down costs of service delivery; or to increase resource efficiency, productivity or throughput, while maintaining quality, cost and compliance.  A common element in all of these is a focus on process.  You must identify the specific processes (or workflows) that you wish to measure and monitor.  Any given process, however simple or complex, will have a finite number of “pulse points,” any one of which will provide a natural locus for control or analysis to inform decision makers about the state of operations and progress toward measured objectives or targets.  These loci become the raw data collection points, where the primary data elements and observations (and accompanying meta-data) are captured for downstream transformation and consumption.

For example, if a health system is trying to gain insight into opportunities for flexible scheduling of OR suites and surgical teams, the base-level data collection must probe into the start and stop times for each segment in the “setup and teardown” of a surgical case, and all the resource types and instances needed to support those processes.  Each individual process segment (i.e. OR ready/busy, patient in/out, anesthesia start/end, surgeon in/out, cut/close, PACU in/out, etc.) has distinct control loci, the measurement of which comprises the foundational data on which such analyses must be built.  You won’t gain visibility into optimization opportunities if you don’t measure the primary processes at sufficient granularity to facilitate inquiry and action.
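To show what capturing those pulse points at usable granularity might look like, here is a brief sketch that derives segment durations for a single case from raw event timestamps. The event and segment names echo the list above; the timestamps are invented.

    from datetime import datetime

    # Raw event timestamps for one surgical case (illustrative values).
    events = {
        "or_ready":         datetime(2020, 1, 15, 7, 10),
        "patient_in":       datetime(2020, 1, 15, 7, 25),
        "anesthesia_start": datetime(2020, 1, 15, 7, 35),
        "cut":              datetime(2020, 1, 15, 7, 55),
        "close":            datetime(2020, 1, 15, 9, 5),
        "patient_out":      datetime(2020, 1, 15, 9, 20),
        "pacu_in":          datetime(2020, 1, 15, 9, 30),
    }

    # Each process segment is defined by a start and an end pulse point.
    SEGMENTS = [
        ("room_ready_to_patient_in", "or_ready", "patient_in"),
        ("anesthesia_induction",     "patient_in", "anesthesia_start"),
        ("prep_to_incision",         "anesthesia_start", "cut"),
        ("surgical_time",            "cut", "close"),
        ("emergence",                "close", "patient_out"),
        ("transport_to_pacu",        "patient_out", "pacu_in"),
    ]

    for name, start, end in SEGMENTS:
        minutes = (events[end] - events[start]).total_seconds() / 60
        print(f"{name}: {minutes:.0f} min")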

Each pulse point reveals a critical success component in the overall operation.  Management must decide how each process will be measured, and how the specific data to be captured will enable both visibility and action: visibility that the critical process elements being performed are within tolerance and on target, or that they are deviating from a standard or plan and require corrective action.  The information must also enable and facilitate focused action that will bring performance and outcomes back into compliance with the desired or required standards or objectives.

A key aspect of metric design is defining the needed granularity and dimensionality.  The former ensures the proper focus and resolution on the action needed.  The latter facilitates traceability and exploration into the contexts in which performance and quality issues arise.  If any measured areas under-perform, the granularity and dimensionality will provide a focus for appropriate corrective actions.  If they achieve superior performance, they can be studied and characterized for possible designation as best practices.

For example, how does a surgical services line that does 2,500 total knee cases penetrate this monolithic volume and differentiate these cases in a way that enables usable insights and focused action?  The short answer is to characterize each instance to enable flexible-but-usable segmentation (and sub-segmentation); when a segment of interest is identified (under-performing, over-performing, or some other pattern), the n-tuple of categorical attributes that was used to establish the segment becomes a roadmap defining the context and setting for the action: either corrective action (i.e., for deviation from a standard) or reinforcing action (i.e., for characterizing best practices).  So dimensions such as surgical team, facility, care setting, procedure, implant type and model, supplier, starting ordinal position, day of week, and many others can be part of your surgical analytics metric design.
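Here is a minimal sketch of that n-tuple segmentation idea: group cases by a chosen tuple of categorical attributes and summarize each segment. The case records and dollar figures are invented for illustration.

    from collections import defaultdict
    from statistics import mean

    # Illustrative total-knee cases; attribute names follow the dimensions above.
    cases = [
        {"surgeon": "A", "facility": "Main", "implant_model": "X1", "day_of_week": "Mon", "cost": 11200},
        {"surgeon": "A", "facility": "Main", "implant_model": "X2", "day_of_week": "Mon", "cost": 13900},
        {"surgeon": "B", "facility": "East", "implant_model": "X1", "day_of_week": "Fri", "cost": 10400},
        {"surgeon": "B", "facility": "East", "implant_model": "X1", "day_of_week": "Fri", "cost": 10900},
    ]

    def segment(cases, dims):
        """Group cases by the n-tuple of categorical attributes in `dims`."""
        buckets = defaultdict(list)
        for case in cases:
            key = tuple(case[d] for d in dims)
            buckets[key].append(case["cost"])
        return buckets

    for key, costs in segment(cases, ("surgeon", "implant_model")).items():
        print(key, f"n={len(costs)}", f"avg cost=${mean(costs):,.0f}")
    # The n-tuple key of an outlying segment is the roadmap back to its context.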

Each metric must ultimately be deconstructed into the specific raw data elements, observations and quantities (and units) that are needed to support the computation of the corresponding metric.  This includes the definition, granularity and dimensionality of each data element; its point of origin in the operation and its position within the process to be measured; the required frequency for its capture and timeliness for its delivery; and the constraints on acceptable values or other quality standards to ensure that the data will reflect accurately the state of the operation or process, and will enable (and ideally facilitate) a focused response once its meaning is understood.
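One lightweight way to keep that deconstruction honest is to record it as a metric specification. The sketch below simply restates the attributes listed above for a single, assumed metric; the field values are illustrative.

    # An illustrative metric specification capturing the attributes described above.
    first_case_on_time_start = {
        "metric": "first_case_on_time_start_rate",
        "definition": "Share of first cases of the day with actual start within 5 minutes of scheduled start",
        "raw_elements": ["case_id", "room", "scheduled_start", "actual_start"],
        "granularity": "per case",
        "dimensionality": ["surgeon", "facility", "room", "day_of_week"],
        "point_of_origin": "OR scheduling and intra-op documentation systems",
        "capture_frequency": "per case, available by end of day",
        "quality_constraints": {
            "actual_start": "non-null, recorded to the minute",
            "scheduled_start": "non-null, from the frozen day-of schedule",
        },
    }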

An interesting consideration is how to choose the source for a collected data element when multiple legitimate sources exist (this issue spills over into data governance; see below), and what rules are needed to arbitrate such conflicts.  Arbitration can be based on: whether each source is legitimately designated as authoritative; where each conflicting (or overlapping) data element (and its contents) resides in a life cycle that impacts its usability; what access controls or proprietary rights pertain to the specific instance of data consumption; and the purpose for, or context in which, the data element is obtained.  Resolving these conflicts is not always as simple as designating a single authoritative source.
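The sketch below shows one way such an arbitration rule might be expressed: prefer candidates in a usable lifecycle state, then the more authoritative designation, then recency. The source names, rankings and lifecycle states are assumptions, and the purpose or context of consumption could be layered in as a further tie-breaker.

    from datetime import datetime

    # Candidate values for the same data element from multiple legitimate sources.
    # Source names, rankings and lifecycle states are illustrative assumptions.
    candidates = [
        {"source": "registration", "value": 82.0, "recorded": datetime(2020, 1, 15, 6, 50), "lifecycle": "final"},
        {"source": "or_system",    "value": 81.5, "recorded": datetime(2020, 1, 15, 7, 20), "lifecycle": "final"},
        {"source": "emr",          "value": 83.1, "recorded": datetime(2020, 1, 14, 9, 0),  "lifecycle": "preliminary"},
    ]

    AUTHORITATIVE_RANK = {"emr": 0, "or_system": 1, "registration": 2}  # lower is more authoritative

    def arbitrate(candidates):
        """Prefer usable lifecycle states, then authoritative designation, then recency.
        Purpose or context of consumption could refine this further; all candidates
        remain available upstream regardless of which one is chosen here."""
        usable = [c for c in candidates if c["lifecycle"] == "final"] or candidates
        return min(usable, key=lambda c: (AUTHORITATIVE_RANK[c["source"]], -c["recorded"].timestamp()))

    print(arbitrate(candidates)["source"])  # or_system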

Controlling data quality at its source is essential.  All downstream consumers and transformation operations are critically dependent on the quality of each data element at its point of origin or introduction into the data stream.  Data cleansing becomes much more problematic if it occurs downstream of the authoritative source, during subsequent data transformation or data presentation operations.  Doing so effectively allows data to “originate” at virtually any position in the data stream, making traceability and quality tracking more difficult and increasing the burden of holding the data that originates at these various points to the required quality standard.  On the other hand, downstream consumers may have little or no influence or authority to impose data cleansing or capture constraints on those who actually collect the data.

Organizations are often unreceptive to the suggestion that their data may have quality issues.  “The data’s good.  It has to be; we run the business on it!”  Although this might be true, when you remove data from its primary operating context, and attempt to use it for different purposes such as aggregation, segmentation, forecasting and integrated analytics, problems with data quality rise to the surface and become visible.

Elements of data quality include accuracy; integrity; timeliness; timing and dynamics; clear semantics; and rules for capture, transformation and distribution.  Your strategy must include establishing and then enforcing definitions, measures, policies and procedures to ensure that your data meets the necessary quality standards.
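As a small illustration, here is a sketch of capture-time checks covering a few of those dimensions (required fields, plausible ranges, arrival lag). The rules and thresholds are invented placeholders, not recommended values.

    from datetime import datetime, timedelta

    # Illustrative capture-time rules touching a few quality dimensions:
    # integrity (required fields), accuracy (plausible range), timeliness (arrival lag).
    RULES = {
        "required": ["patient_id", "weight_kg", "recorded_at"],
        "ranges": {"weight_kg": (1.0, 400.0)},
        "max_arrival_lag": timedelta(hours=24),
    }

    def quality_issues(record, now=None):
        """Return a list of rule violations for a single captured record."""
        now = now or datetime.now()
        issues = []
        for field in RULES["required"]:
            if record.get(field) in (None, ""):
                issues.append(f"missing {field}")
        for field, (lo, hi) in RULES["ranges"].items():
            value = record.get(field)
            if value is not None and not lo <= value <= hi:
                issues.append(f"{field} out of range: {value}")
        recorded = record.get("recorded_at")
        if recorded and now - recorded > RULES["max_arrival_lag"]:
            issues.append("stale: exceeded acceptable arrival lag")
        return issues

    record = {"patient_id": "MRN001", "weight_kg": 812.0, "recorded_at": datetime.now()}
    print(quality_issues(record))  # ['weight_kg out of range: 812.0']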

The data architecture must anticipate the structure and relationships of the primary data elements, including the required granularity, dimensionality, and alignment with other identifying or describing elements (e.g. master and reference data); and the nature and positioning of the transformation and consumption patterns within the various user bases.

For example, to analyze the range of variation in schedule integrity in our surgical services example, for each case we must capture micro-architectural elements such as the scheduled and actual start and end times for each critical participant and resource type (e.g. surgeon, anesthesiologist, patient, technician, facility, room, schedule block, equipment, supplies, medications, prior and following case, etc.), each of which becomes a dimension in the hierarchical analytic contexts that will reveal and help to characterize where under-performance or over-performance is occurring.  The corresponding macro-architectural components will address requirements such as scalability, the distinction between retrieval and occurrence latency, data volumes, data lineage, and data delivery.
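A tiny sketch of that micro-architectural capture: scheduled versus actual start times per participant or resource for one case, reduced to a start-variance measure in minutes. The resources and times are illustrative.

    from datetime import datetime

    # Scheduled vs. actual start times per participant/resource for one case (illustrative).
    case_times = {
        "surgeon": {"scheduled_start": datetime(2020, 1, 15, 7, 30), "actual_start": datetime(2020, 1, 15, 7, 48)},
        "room":    {"scheduled_start": datetime(2020, 1, 15, 7, 0),  "actual_start": datetime(2020, 1, 15, 7, 10)},
        "patient": {"scheduled_start": datetime(2020, 1, 15, 6, 30), "actual_start": datetime(2020, 1, 15, 6, 32)},
    }

    def start_variance_minutes(times):
        """Minutes late (positive) or early (negative) for each resource dimension."""
        return {
            resource: (t["actual_start"] - t["scheduled_start"]).total_seconds() / 60
            for resource, t in times.items()
        }

    print(start_variance_minutes(case_times))
    # {'surgeon': 18.0, 'room': 10.0, 'patient': 2.0}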

By the way: none of this presumes a “daily batch” system.  Your data architecture might need to anticipate and accommodate complex hybrid models for federating and staging incremental data sets to resolve unavoidable differences in arrival dynamics, granularity, dimensionality, key alignment, or perishability.  I’ll have another blog on this topic, separately.

You should definitely anticipate that the incorporation and integration of additional subject areas and data sets will increase the value of the data; in many instances, far beyond that for which it was originally collected.  As the awareness and use of this resource begins to grow, both the value and sensitivity attributed to these data will increase commensurately.  The primary purpose of data governance is to ensure that the highest quality data assets obtained from all relevant sources are available to all consumers who need them, after all the necessary controls have been put in place.

Key components of an effective strategy are the recognition of data as an enterprise asset; the designation of authoritative sources; commitment to data quality standards and processes; and recognition that data proceeds through a life cycle of origination, transformation and distribution, with varying degrees of ownership, stewardship and guardianship, on its way to various consumers for various purposes.  Specific characteristics, such as the level of aggregation, the degree of protection required (e.g. PHI), the need for de-identification and re-identification, the designation of “snapshots” and “versions” of data sets, and the constraints imposed by proprietary rights, will all impact the policies and governance structures needed to ensure proper usage of this critical asset.
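As one small illustration of those governance mechanics, here is a sketch of keyed de-identification with a governed re-identification map. The keying approach, field names and key handling are assumptions for illustration; real PHI handling must follow your organization’s HIPAA policies and security standards.

    import hmac
    import hashlib

    # A sketch of de-identification with a governed re-identification path.
    # The secret key would be held by the data governance function, not the consumer.
    SECRET_KEY = b"governance-held-secret"   # illustrative; never hard-code keys in practice

    def deidentify(patient_id: str) -> str:
        """Replace a direct identifier with a stable pseudonym (keyed hash)."""
        return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

    reidentification_map = {}   # retained only under governance controls

    def release_record(record: dict) -> dict:
        """Return a copy of the record with the identifier replaced by its pseudonym."""
        pseudonym = deidentify(record["patient_id"])
        reidentification_map[pseudonym] = record["patient_id"]
        released = dict(record)
        released["patient_id"] = pseudonym
        return released

    print(release_record({"patient_id": "MRN001", "weight_kg": 82.0}))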

Are you positioned for success?

Successful implementation of BI analytics requires more than a careful selection of technology platforms, tools and applications.  The selection of technical components will ideally follow the definition of the organization’s needs for these capabilities.  The program components outlined here are a good start on the journey to embedded analytics, proactively driving the desired improvements throughout your enterprise.

From Free Text Clinical Documentation to Data-rich Actionable Information

Hey, healthcare providers! Yeah, you, the “little guy”, the rural community hospital; or you, the “average Joe”, the few-hundred-bed hub hospital with outpatient clinics, an ED, and some sub-specialties; or you, the “behemoth”, the one with the health plan, physician group, outpatient, inpatient, and multi-discipline, multi-care-setting institution. Is your EMR really just an electronic filing cabinet? Do nursing and physician notes, standard lab and imaging orders, registration and other critical documents just get scanned into a central system that can’t be referenced later on to meet your analytic needs? Don’t worry, you’re not alone…

Recently, I blogged about some of the advantages of Microsoft’s new Amalga platform; I want to emphasize a capability of Amalga Life Sciences that I hope finds its way into the range of healthcare provider organizations mentioned above, and quick! That is, the ability to create a standard ontology for displaying and navigating the unstructured information collected by providers across care settings and patient visits (see my response to a comment about Amalga Life Sciences’ utilization of UMLS for a model of standardized terminology). I don’t have to make this case to the huge group of clinicians already too familiar with this process in hospitals across the country; but the argument (and likely ROI) clearly needs to be articulated for those individuals responsible for transitioning from paper to digital records at the organizations that are dragging their feet (>90%). The question I have for these individuals is, “why is this taking so long? Why haven’t you been able to identify the clear-cut benefits of moving from paper-laden manual processes to automated, digital interfaces and streamlined workflows?” These folks should ask the corporate executives at hospitals in New Orleans after Hurricane Katrina whether they had hoped to have this debate long before their entire patient population’s medical records drowned; just one reason why “all paper” is a strategy of the past.

Let’s take one example most provider organizations can conceptualize: a pneumonia patient flowing through the Emergency Department. There are numerous points throughout this process that could be considered “data collection points”. These, collectively and over time, paint a vivid picture of the patient experience from registration to triage to physical exam and diagnostic testing to possible admission or discharge. With this data you can do things like real- or near-real-time clinical alerting that would improve patient outcomes and compliance with regulations like CMS Core Measures; you can identify weak points or bottlenecks in the process to allocate additional resources; and you can model best practices identified over time to improve clinical and operational efficiencies. With this data written on a piece of paper, though (and remember: one piece of paper for registration, a separate piece for the “Core Measure Checklist”, another for the physician exam, another for the lab/X-ray report, etc.), and maybe scanned into a central system, this information tells you very little. You are also then at the mercy of your ability to actually read a physician’s handwriting and to analyze scanned documents, versus delineated data fields that can be trended over time, summarized, visualized, drilled into, and so on.
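To make the alerting idea tangible, here is a small sketch of a rule that watches one of those discrete data collection points. I am using a six-hour antibiotic window purely as an illustrative threshold; consult the current Core Measure specification for the actual requirement.

    from datetime import datetime, timedelta

    # A sketch of near-real-time alerting off discrete ED data collection points.
    # The 6-hour antibiotic window is an illustrative threshold only.
    ANTIBIOTIC_WINDOW = timedelta(hours=6)

    def antibiotic_alert(ed_arrival, antibiotic_given, now=None):
        """Return an alert message if the antibiotic window is at risk or missed."""
        now = now or datetime.now()
        if antibiotic_given is not None:
            return None  # antibiotic documented; no alert in this sketch
        deadline = ed_arrival + ANTIBIOTIC_WINDOW
        if now > deadline:
            return "MISSED: antibiotic not documented within window"
        if deadline - now <= timedelta(hours=1):
            return f"AT RISK: {int((deadline - now).total_seconds() // 60)} minutes remaining"
        return None

    arrival = datetime(2020, 1, 15, 14, 0)
    print(antibiotic_alert(arrival, antibiotic_given=None, now=datetime(2020, 1, 15, 19, 20)))
    # AT RISK: 40 minutes remaining

None of this is possible when the underlying observations live only on scanned paper forms.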

Vulnerabilities and Liabilities from Poor Documentation

Relying on poor documentation, such as illegible penmanship, incomplete charting and unapproved abbreviations, burdens nurses and creates a huge liability. With all of the requirements and suggestions for the proper way to document, it’s no wonder this area is so prone to errors. There are a variety of consequences of performing patient care based on “best guesses” when reading clinical documentation. Fortunately, improving documentation directly correlates with reduced medical errors. The value proposition for improved data collection and standardized terminology for that data makes sense operationally, financially, and clinically.

So Let’s Get On With It, Shall We?

Advancing clinical care through the use of technology is seemingly one component of the larger healthcare debate in this country centered on “how do we improve the system?” Unfortunately, too many providers want to sprint before they can crawl. Moving off of paper helps you crawl first; it is a valuable, achievable goal for the majority of organizations burdened with manual processes and their costs, and if done properly, the ROI can be realized in a short amount of time with manageable effort. Having said this, the question quickly becomes, “are we prepared to do what it takes to actually make the system improve?” Are you?

Healthcare Analytics to the Rescue!

Why does my health insurance cost so much?

It’s that time of the year again. No, I am not talking about the holidays. It’s the time of the year when you figure out how much more money you need to make in order to afford the rise in your healthcare costs. It’s Annual Enrollment time! But as most folks have already realized, there probably won’t be any raises, bonuses, etc., this year to help offset the rise in healthcare premiums. The economy is experiencing its biggest downturn since the Great Depression, and yet our quoted health insurance cost for next year is rising at a double-digit pace. How is that possible?

“Over the last decade, employer-sponsored health insurance premiums have increased 131 percent”.

My wife and I calculated that, pre-tax, she would need to earn another $1,200 this year to offset the rise in the monthly premiums being charged for an HMO plan with family coverage. Currently, we belong to the #1 ranked health plan in the country, which is increasing its rates to the tune of $100 a month for the same level of coverage as last year. Unfortunately, we have been experiencing this trend for more than 20 years.

I realize it’s not a simple answer, and there are several external factors, including rising pharmacy costs, inflation, etc. However, one could argue that since the economy is in a tail-spin, unemployment is sitting just under 10%, and the federal government is wasting time and my tax dollars trying to create a new public option for health coverage, the best option for insurers is to hold premiums steady and to finally get a handle on the true drivers of cost and utilization. That way, they would not risk losing their most important constituents, the employer groups and members who are now faced every year with the prospect of reducing their level of healthcare coverage just to make ends meet.

If the #1 health plan in the country is raising its premiums by $100 a month for a basic HMO plan, can you imagine what the lower-ranking health plans will charge their members? There are no quick-fix solutions for the healthcare industry. However, with so many inefficient processes, fraud, overhead, flawed reimbursement methodologies, expensive compliance and technology projects, etc., the industry is ripe with opportunities to become more analytics focused. With today’s business intelligence and data warehousing technologies, health plans now have the ability to create high-value metrics that integrate disparate data sources from key areas such as sales & marketing, operations (e.g., claims processing), and cost and utilization across members, providers, and employer groups.
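As one example of such an integrated metric, here is a sketch of a per-member-per-month (PMPM) cost calculation by employer group from paid claims and enrollment. The field names and figures are invented.

    from collections import defaultdict

    # Illustrative paid claims and enrollment, integrated into a per-member-per-month
    # (PMPM) cost metric by employer group. Field names and figures are assumptions.
    claims = [
        {"employer_group": "Acme", "member_id": "M1", "paid_amount": 420.0},
        {"employer_group": "Acme", "member_id": "M2", "paid_amount": 1310.0},
        {"employer_group": "Beta", "member_id": "M3", "paid_amount": 95.0},
    ]
    member_months = {"Acme": 24, "Beta": 12}   # members enrolled x months covered

    def pmpm_by_group(claims, member_months):
        """Total paid amount divided by member-months, per employer group."""
        paid = defaultdict(float)
        for claim in claims:
            paid[claim["employer_group"]] += claim["paid_amount"]
        return {group: paid[group] / months for group, months in member_months.items()}

    print(pmpm_by_group(claims, member_months))
    # {'Acme': 72.08..., 'Beta': 7.91...}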

Despite the quoted savings health plans achieve from a variety of medical management programs, disease management, formularies, network discounts, etc., why are those savings never passed on to subscribers’ premiums? Are health plans not evaluating the right metrics? Pushing the boundaries of payer analytics will allow health plans to truly understand the drivers of cost and utilization and thus to migrate their business model to become more predictive in nature. Maybe this is wishful thinking, but a health plan could actually reduce its monthly premiums if it could drive out the unknown costs and inefficiencies. A futuristic but intriguing thought would be benefit plans that are created and priced for each member, with monthly premiums based on both historical utilization and predictive analytics.

At a minimum, can we stop the double-digit price increases?

Physicians Insist, Leave No Data Behind

“I want it all.” This sentiment is shared by nearly all of the clinicians we’ve met with, from the largest integrated health systems (IHS) to the smallest physician practices, in reference to what data they want access to once an aggregation solution like a data warehouse is implemented.  From discussions with organizations throughout the country and across care settings, we understand a problem that plagues many of these solutions: the disparity between what clinical users would like and what technical support staff can provide.

For instance, when building a Surgical Data Mart, an IHS can collect standard patient demographics from a number of its transactional systems.  When asked, “Which ‘patient weight’ would you like to keep: the one from your OR system (Picis), your registration system (HBOC), or your EMR (Epic)?”, sure enough, the doctors will respond, “all 3”. Unfortunately, the doctors often do not consider the cost and effort associated with providing three versions of the same data element to end consumers before answering, “I want it all”.  And therein lies our theory for accommodating this request: Leave No Data Behind. In support of this principle, we are not alone.
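Here is a sketch of what honoring “all 3” might look like in practice: retain every source-tagged observation and apply a preference order only at presentation time. The preference order and timestamps are assumptions for illustration.

    from datetime import datetime

    # "All 3": retain every source-tagged observation rather than discarding two of them.
    # The preference order and timestamps are illustrative assumptions.
    patient_weights = [
        {"source": "Picis", "value_kg": 81.5, "recorded": datetime(2020, 1, 15, 7, 20)},
        {"source": "HBOC",  "value_kg": 82.0, "recorded": datetime(2020, 1, 15, 6, 50)},
        {"source": "Epic",  "value_kg": 83.1, "recorded": datetime(2020, 1, 14, 9, 0)},
    ]

    def preferred_weight(observations, preference=("Epic", "Picis", "HBOC")):
        """Choose a value for presentation without deleting the others."""
        rank = {source: i for i, source in enumerate(preference)}
        return min(observations, key=lambda obs: rank.get(obs["source"], len(rank)))

    print(preferred_weight(patient_weights)["source"])   # Epic
    # All three observations remain available for consumers who need them.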

By now you’ve all heard that Microsoft is making a play in healthcare with its Amalga platform. MS will continue its strategy of integrating expertise through acquisition and so far, it seems to be working. MS claims an advantage of Amalga is its ability to store and manage an infinite amount of data associated with a patient encounter, across care settings and over time, for a truly horizontal and vertical view of the patient experience. Simply put, No Data Left Behind.  The other major players (GE, Siemens, Google) are shoring up their offerings through partnerships that highlight the importance of access to and management of huge volumes of clinical and patient data.

Why is the concept of No Data Left Behind important? Clinicians have stated emphatically, “we do not know what questions we’ll be expected to answer in 3-5 years, either based on new quality initiatives or regulatory compliance, and therefore we’d like all the raw and unfiltered data we can get.” Additionally, the recent popularity of using clinical dashboards and alerts (or “interventional informatics”) in clinical settings further supports this claim. While alerts can be useful and help prevent errors, decrease cost and improve quality, studies suggest that the accuracy of alerts is critical for clinician acceptance; the type of alert and its placement and integration in the clinical workflow is also very important in determining its usefulness. As mentioned above, many organizations understand the need to accommodate the “I want it all” claim, but few combine this with expertise in the aggregation, presentation, and appropriate distribution of this information for improved decision making and tangible quality, compliance, and bottom-line impacts. Fortunately, there are a few of us who’ve witnessed and collaborated with institutions to help evolve from theory to strategy to solution.

Providers must formulate a strategy to capitalize on the mountains of data that will come once the healthcare industry figures out how to integrate technology across its outdated, paper-laden landscape.  Providers and payers must implement the proper technology and processes to consume this data via enterprise performance management front-ends so that the entire value chain becomes more seamless. The emphasis on data presentation (think BI, alerting, and predictive analytics) continues to dominate the headlines and budget requests. Healthcare institutions, though, understand that these kinds of advanced analytics require the appropriate clinical and technical expertise for implementation. Organizations, now more than ever, are embarking on this journey. We’ve had the opportunity to help overcome the challenges of siloed systems, latent data, and an incomplete view of the patient experience to help institutions realize the promise of an EMR, the benefits of integrated data sets, and the decision-making power of consolidated, timely reporting. None of these initiatives will be successful, though, with incomplete data sets; a successful enterprise data strategy, therefore, always embraces the principle of “No Data Left Behind”.

ICD-10: Apocalypse or Advantage?

With humanity coming up fast on 2012, the media is counting down to this mysterious — some even call it apocalyptic — date that ancient Mayan societies were anticipating thousands of years ago.  However, the really interesting date in healthcare will happen one year earlier. In 2011, per the mandate of Senate Bill 628, the United States will move from the ICD-9 coding system to ICD-10, a much more complex scheme of classifying diseases that reflects recent advances in disease detection and treatment via biomedical informatics, genetic research and international data-sharing. For healthcare payers and providers that have used the ICD-9 coding system for submitting and paying healthcare claims for the last 30 years, it could be apocalyptic without proper planning and execution.  Conservative estimates put the cost of switching to ICD-10 at $1.5 to $3 billion for the healthcare industry as a whole and nearly $70,000 for each doctor’s practice.

Since 1900, regulators of the U.S. health care system have endeavored to give care providers a systematic way to classify diseases so that care processes could be standardized and appropriate payments made. Like many of the world’s developed health care systems, the United States follows the World Health Organization’s (WHO) International Statistical Classification of Diseases and Related Health Problems (ICD) code standard that is typically used internationally to classify morbidity and mortality data for vital health statistics tracking and in the U.S. for health insurance claim reimbursement. In 2011, technically, healthcare providers and payers will be moving from ICD-9-CM to ICD-10-CM and ICD-10-PCS.  To meet this federal mandate, it will be essential that information systems used by U.S. health plans, physicians and hospitals, ambulatory providers and allied health professionals also become ICD-10 compliant. The scale of this effort for healthcare IT professionals could rival the Y2K problem and needs immediate planning.

The challenge is that the U.S. adoption of ICD-10 will undoubtedly require a major overhaul of the nation’s medical coding system because the current ICD-9 codes are deeply embedded in the coding, reporting and reimbursement analysis performed today. In everyday terms, the ICD-9 codes were placed in the middle of a room and healthcare IT systems were built around them. It will require a massive wave of system reviews, new medical coding or extensive updates to existing software, and changes to many system interfaces. Because of the complex structure of ICD-10 codes, implementing and testing the changes in Electronic Medical Records (EMRs), billing systems, reporting packages, and decision and analytical systems will require more effort than simply testing data fields – it will involve installing new code sets, training coders, re-mapping interfaces and recreating reports/extracts used by all constituents who access diagnosis codes. In short, ICD-10 implementation has the potential to be so invasive that it could touch nearly all operational systems and procedures of the core payer administration process and the provider revenue cycle.
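For a feel of what the re-mapping work involves, here is a sketch of a GEM-style crosswalk lookup that flags one-to-many mappings for coder review. The handful of codes shown are illustrative only; actual remediation must rely on the published ICD-9-CM to ICD-10 mappings.

    # A sketch of a GEM-style ICD-9 -> ICD-10 crosswalk lookup. The codes below are
    # illustrative only; real remediation must use the published mapping files.
    ICD9_TO_ICD10 = {
        "486":    ["J18.9"],                              # pneumonia, organism unspecified
        "250.00": ["E11.9"],                              # type 2 diabetes without complications
        "820.8":  ["S72.001A", "S72.002A", "S72.009A"],   # femoral neck fracture: one-to-many
    }

    def remap(icd9_code: str) -> dict:
        """Return candidate ICD-10 codes and flag mappings that need coder review."""
        targets = ICD9_TO_ICD10.get(icd9_code)
        if targets is None:
            return {"status": "unmapped", "targets": []}
        status = "review_required" if len(targets) > 1 else "direct"
        return {"status": status, "targets": targets}

    print(remap("486"))     # {'status': 'direct', 'targets': ['J18.9']}
    print(remap("820.8"))   # {'status': 'review_required', 'targets': ['S72.001A', ...]}

Multiply that review burden across every interface, report and extract that touches a diagnosis code and the scale of the remediation effort becomes clear.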

A small percentage of healthcare organizations, maybe 10 to 15 percent, will use ICD-10 compliance as a way to gain competitive advantage – to further their market agendas, business models and clinical capabilities. By making use of the new code set, these innovators will seek to derive strategic value from the remediation effort instead of procrastinating or trying to avoid the costs. An example will be healthcare plans that seek to manage costs at a more granular level and implement pay for performance programs for their healthcare providers. In addition, ICD-10 offers an opportunity to develop new business partnerships, create new care procedures, and change their business models to grow overall revenue streams. Healthcare organizations looking for these new business opportunities will employ ICD-10 as a marketing differentiator to create a more competitive market position.

There are three key areas for healthcare organizations wanting to convert regulatory compliance into strategic advantage with ICD-10 remediation:

  1. Information and Data Opportunities – Healthcare entities that are early adopters of ICD-10 will be in a position to partner with their peers and constituents to improve data capture, cleansing and analytics. This could lead to the development of advanced analytical capabilities such as physician score cards, insightful drug and pharmaceutical research, and improved disease and medical management support programs, all of which create competitive advantage.
  2. Personal Health Records Opportunities – Using ICD-10 codes, innovative healthcare entities will have access to information at a level of detail never before available, making regional and personal health records (PHRs) more achievable for the provider and member communities. Organizations that align themselves appropriately can provide a service that will differentiate them in the marketplace.
  3. Clinical Documentation Excellence Program – Developing and implementing a Clinical Documentation Excellence (CDE) program is a critical component of organizational preparedness to respond to future regulatory changes because there could be an ICD-11 on the horizon.

Healthcare organizations need to understand the financial impact that ICD-10 will have on their bottom line and begin the operational readiness assessments, gap analyses and process improvement plans to facilitate accurate and appropriate reimbursement. Without action, a healthcare organization can expect to endure “data fog” as the industry moves through the transition from one code set to another. Now is the time to choose to gain the advantage or procrastinate on the coming code apocalypse.