Strategic Finance for Service Lines: Finding Opportunities for Growth

Healthcare providers are always seeking innovations and evaluating strategic alternatives to meet growing demand, even as healthcare legislation adds challenges to an already complex industry. As the population continues to age and demand for high-quality healthcare grows, providers must put themselves in the optimal financial position to deliver the best care to the communities that depend on them.

To do this, many are turning to a service line model to identify profitable areas of their organization that will generate future growth and capture market share. To assess the strategic value of each service line, organizations need a long-range planning tool that enables them to quickly forecast each of their service lines over the next 3-5 years and evaluate growth areas, so that investments can be made in the service lines that will generate the greatest long-term economic value for the organization.

Utilizing Oracle’s Hyperion Strategic Finance, Edgewater Ranzal has helped many organizations chart a realistic financial plan to achieve their long-range goals and vision.  Some of the ways that we have helped organizations are as follows:

  • Forecast detailed P&Ls for each service line, using revenue and cost drivers such as number of patients, revenue per procedure, FTEs, and payer mix to accurately forecast the profit level of each service line.
  • Easily consolidate the forecasted service line P&Ls to view the expected financial results at the care center level or for the healthcare organization as a whole.
  • Layer potential new service lines under evaluation into the consolidation structure to understand the incremental financial impact of adding each one.
  • Run scenarios on the key business drivers of each service line to understand how sensitive profitability, EPS, and other key metrics are to changes in variables like number of patients, payer mix, FTEs, and salary levels.
  • Compare multiple scenarios side by side to evaluate the risks and benefits of specific strategies.
  • Evaluate the economic value of large capital projects needed to grow specific service lines beyond their current capacity, comparing the NPV and IRR of various projects to determine which ones should be funded (see the sketch following this list).
  • Layer into the consolidation structure specific capital projects and view their incremental impact on revenue growth and profitability at the service line level as well as the healthcare organization as a whole.
  • Use the built-in funding routine of HSF to allocate cash surpluses to new investments and to analyze when the organization will need to secure additional debt financing to fund its operations and its capital investments in specific service lines.
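
To make the NPV and IRR comparison from the list above concrete, here is a minimal Python sketch of the underlying math. The two projects, their cash flows, and the 8% discount rate are invented for illustration; they are not drawn from any actual HSF model.

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] occurs today (period 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return found by bisection on the NPV function.

    Assumes a conventional profile (outflow first, inflows after), so
    NPV decreases as the rate increases.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical service-line projects: initial outlay, then annual net inflows.
projects = {
    "Cardiology OR expansion":  [-5_000_000, 1_200_000, 1_500_000, 1_800_000, 2_000_000],
    "Oncology infusion center": [-3_000_000,   900_000, 1_000_000, 1_100_000, 1_200_000],
}

for name, flows in projects.items():
    print(f"{name}: NPV @ 8% = {npv(0.08, flows):,.0f}  IRR = {irr(flows):.1%}")
```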

Regardless of where you are in your understanding, analysis, or implementation of service lines, a viable long-term strategy must include a critical evaluation of how you will identify the market drivers for growth, measure sustainable financial success, and adjust to changing economic, regulatory, and financial conditions.

Service Line Evolution: From Definition to Growth and Everything In Between

Are you a healthcare provider that is currently considering organizing your clinical service offerings around formally structured service lines? Have you already attempted to implement service lines in one area and come across some unforeseen difficulties, turf wars, and clinician pushback? Have you established a service line structure across your enterprise and are wondering how to grow particular lines to increase profitability or lower costs? These scenarios depict the various states of maturity that healthcare providers find themselves in as they struggle to define, implement, build, measure, plan/forecast and grow service lines. Regardless of where you sit in this Maturity Model, there are obstacles impeding your progress and growth.

There is no easy way to implement service lines. There are too many egos involved, too many possible hurt feelings, and not enough resources to go around. Why? Because when it comes down to it, you will be favoring one set of surgeons, or doctors, or nurses in a department and therefore giving them more attention, time, money, and other limited resources. And you're doing this because you've determined that partnering with certain specialties or surgical teams or departments will reap greater financial reward, improve clinical outcomes and position your organization for greater potential growth. Hospitals are finally starting to realize that running healthcare like a business makes good cents. And aligning your organization around a defined set of service lines is much like maintaining a diversified portfolio of investments: some investments will be more profitable than others; some will have higher risk; some will receive greater scrutiny and public attention; and some will undoubtedly lose you money. Regardless of the way you choose to do it, though, service lines are a great strategy for healthcare organizations to focus on what they're best at, align their clinical services with those of their target markets, realize the best return on their marketing dollars, and position their institutions for the greatest possible growth both financially and clinically. In these tough economic times, with rising healthcare costs and dwindling reimbursements, along with looming regulatory changes mandating bundled payments, service lines offer a framework for providers to align their clinical, financial, and operational objectives with dynamic markets and an aging population.

Over the past two years we have worked with institutions across the country, ranging from academic medical centers to integrated delivery networks, comprehensive cancer centers and multi-hospital organizations, helping them progress along the Service Line Maturity Model. Similar obstacles continue to pop up, including clinician buy-in and investment, incentives for progress and adaptation, cultural change management, and the inherent data management challenges associated with defining, monitoring and measuring success and growth. In addition, there continue to be more technical challenges like, "How do I allocate individual patient visits to each service line?" We have helped create business logic with hierarchies that include data points like DRG (and MS-DRG), ICD-9 diagnosis codes, discharge service, and others that clearly define which patients go where. We've established goals for key performance indicators like Net Patient Revenue (NPR), Units of Service (UOS)/Rates, and patient/payer mix. In addition, there are frequent discussions about employee planning and how to determine where critical skill set deficiencies will exist in the next 2, 5, and 10 years as the clinical workforce ages and retires. Whether at the enterprise level dealing with strategic goals or at the department level dealing with tactical goals, a successful service line model requires a comprehensive, integrated, and coordinated mission from all levels of an organization. The worst thing you can do is try to go at this alone or in a silo; you'll soon find out that not everyone agrees this is the best path forward, especially if you're stepping on their turf.
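
As an illustration of the kind of business logic described above, the following Python sketch shows a first-match-wins rule hierarchy for assigning patient visits to service lines. The rules, codes, and service line names are hypothetical and far simpler than a production mapping.

```python
# First match wins, so rule order is how overlap conflicts get resolved.
SERVICE_LINE_RULES = [
    ("Cardiovascular", lambda v: v["ms_drg"] in {"216", "217", "218"}),
    ("Oncology",       lambda v: any(dx.startswith(("17", "18"))
                                     for dx in v["icd9_dx"])),
    ("Orthopedics",    lambda v: v["discharge_service"] == "ORTHO"),
]

def assign_service_line(visit):
    """Assign one visit to the first service line whose rule matches."""
    for line, rule in SERVICE_LINE_RULES:
        if rule(visit):
            return line
    return "General Medicine"  # default bucket for unmatched visits

visit = {"ms_drg": "470", "icd9_dx": ["715.16"], "discharge_service": "ORTHO"}
print(assign_service_line(visit))  # -> Orthopedics
```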

Picis Exchange Global Customer Conference – “It’s All About the Data”

The Picis Exchange Global Customer Conference went off without a hitch last week in Miami. The main information sessions were categorized by the four areas of a hospital Picis specializes in: Anesthesia and Critical Care, Emergency Department, Perioperative Services, and Revenue Management Solutions (via its acquisition of LYNX Medical Systems). I was able to attend a number of sessions, network with both the company and its customers, and hear what the top priorities for this diverse group are over the next few years. As I reviewed my notes this weekend, thinking back to all the conversations I had with OR Directors, Quality Compliance Managers, Clinical Analysts, Billing and Coding Auditors, Anesthesiologists, and IS/IT Directors, one theme emerged – it’s all about the data!

The most frequent discussions centered on a few major challenges that the healthcare industry, not just Picis clients, must deal with in the coming months and years. These challenges vary in complexity and in their impact on the 5 P's [Patients, Providers, Physicians, Payers, and Pharmaceutical Manufacturers]. The Picis customers and users who most efficiently and effectively collect, analyze, present, and distribute data related to the following challenges will position themselves as stable players in an increasingly turbulent industry:

  • Meaningful Use – "What data must I show to demonstrate I'm a meaningful user of Healthcare IT to realize the greatest number of financial incentives available? How can I get away from free-text narrative documentation and start collecting discrete data in anticipation of the newly announced HIMSS Analytics expanded criteria?"
  • Quality & Regulatory Compliance – “How can I improve my quality metrics such as Core Measures and keep them consistently high over time? How can I reduce the amount of time it takes for me to report my data? How can I improve my data collection, analysis, and presentation to enable decision makers with actionable data?”
  • ICD-9 to ICD-10 Conversion – “What data and processes must I have in place to demonstrate use of ICD-10 before the looming deadline? Is my technical landscape integrated and robust enough to handle the dramatic increase in ICD-10 codes? Does my user community understand the implications of the changes associated with this conversion?”
  • Resource Productivity – “How can I reduce the amount of time my staff spends chasing paper, manually abstracting charts, and analyzing free-text narrative documentation? What percentage of these processes can I automate so my staff is focused on value-added tasks?”
  • Revenue Cycle Improvement & Cost Transparency – "How can I integrate my clinical, operational, and financial data sets to understand where my opportunities are for enhanced revenue? How can I standardize these as best practices? Can I cut costs by reducing inventory on hand and redundant vendor/supply contracts, or by improving resource utilization and provider productivity? How will this impact patient volume? Am I prepared for healthcare reform's call for transparency?"

All of these challenges, although unique, have fundamental components in common that must be established before any progress is made. Each requires that processes be established to standardize the collection of data, ensuring accuracy and consistency so users can "trust the data". A "single version of the truth" is essential; without it, your hospital will remain a collection of siloed pockets of expertise living in Excel spreadsheets and Access databases (best case), or paper charts and scanned documents (worst case), laboriously re-validated at every step in the information lifecycle.

Picis did a wonderful job of reinforcing its commitment to its customer base. It promised improved product features, more intuitive user interfaces, an enhanced user community for collaboration and idea sharing, and more opportunities for training. Fundamentally, Picis is a strong player in a market that seems ripe for consolidation, and its potential for growth is very high. Yet Picis will always be just that: a product company. The healthcare industry no doubt needs strong products such as Picis to drive critical operations, collect the data necessary for improved decision making, and transition from paper to automation. But Picis acknowledged, through its evolving collaboration with partners such as Edgewater Technology that understand both the technical landscape and the clinical domain, that the true spark for change will come when people and processes align with these products more effectively. This combination will be the foundation for a heightened level of care: an integrated data strategy that delivers superior patient outcomes for every dollar spent.

It’s our data that’s hurting us! – Transparency in a consumer-oriented healthcare marketplace

The Internet is enabling the healthcare consumer to shop and compare like never before. Why would there be a need to shop for a better hospital, doctor or nursing home? The need to shop for quality care has never been more important, and the competition between hospitals and other healthcare providers is heating up. Consumers want the best surgeon to do their procedure in the safest hospital, and they are turning to healthcare rating sites in record numbers. The irony of rating sites is that they depend on data provided by the hospitals, doctors and other allied providers: it is their very own data by which they are being judged.

Today, it is easy for the consumer to Google "compare doctors" or "compare hospitals" and locate numerous websites with detailed information for comparisons. Two notable examples are leapfroggroup.com and ucomparehealthcare.com. Leapfrog does not just rely on publicly reported data from regulatory agencies; it extends that information with detailed surveys of hospitals on key issues such as central line infections and infection control. One comparison website reports that the top 5% of its reporting hospitals have a 29% lower mortality rate.

No individual or healthcare organization wants an unfair report card. With medical mistakes a leading cause of death each year, surpassing car accidents, breast cancer and AIDS, the report card also serves healthcare organizations as guidance on critical areas for improvement. The process of collecting regulatory reporting information in many healthcare organizations is tedious, time-consuming and often manual: the classic case of "we have the data somewhere, but not in the format that we need it." There are several key problems with collecting core measures and other key metrics for reporting:

  • Key data elements for the calculations are paper-based or manually compiled
  • Manual process fatigue from paper form processing
  • Automated reporting systems use sample patient populations that are too small, resulting in possible statistical errors
  • Errors in the data transfer process, especially in the hand-off of information from one area of the hospital to another, skew results
  • Inability to track a diagnosis code early enough in the patient encounter to improve on the measure outcomes
  • Lack of staff training on collecting the right information at the right time in the right format

Consumers use the reported results to compare hospital performance and make decisions about where to receive care.  As a result, healthcare organizations need to focus on data governance to address treating data as an asset, ensuring data quality and tracking the right key metrics.  Addressing this challenge will not only improve the ratings report card for healthcare organizations but will demonstrate the commitment to quality data as well as patient safety.  Better data equals better results.  In the consumer-oriented healthcare marketplace, transparency of key metrics will yield competitive advantage.

Driving Value from Your Healthcare Analytics Program – Key Program Components

If you are a healthcare provider or payer organization contemplating an initial implementation of a Business Intelligence (BI) Analytics system, there are several areas to keep in mind as you plan your program.  The following key components appear in every successful BI Analytics program.  And the sooner you can bring focus and attention to these critical areas, the sooner you will improve your own chances for success.

Key Program Components

Last time we reviewed the primary, top-level technical building blocks.  However, the technical components are not the starting point for these solutions.  Technical form must follow business function.  The technical components come to life only when the primary mission and drivers of the specific enterprise are well understood.  And these must be further developed into a program for defining, designing, implementing and evangelizing the needs and capabilities of BI and related analytics tuned to the particular needs and readiness of the organization.

Key areas that require careful attention in every implementation include the following:

We have found that healthcare organizations (and solution vendors!) have contrasting opinions on how best to align the operational data store (ODS) and enterprise data warehouse (EDW) portions of their strategy with the needs of their key stakeholders and constituencies.  The “supply-driven” approach encourages a broad-based uptake of virtually all data that originates from one or more authoritative source system, without any real pre-qualification of the usefulness of that information for a particular purpose.  This is the hope-laden “build it and they will come” strategy.  Conversely, the “demand-driven” approach encourages a particular focus on analytic objectives and scope, and uses this focus to concentrate the initial data uptake to satisfy a defined set of analytic subject areas and contexts.  The challenge here is to not so narrowly focus the incoming data stream that it limits related exploratory analysis.

For example, a supply-driven initiative might choose to tap into an existing enterprise application integration (EAI) bus and siphon all published HL7 messages into the EDW or ODS data collection pipe.  The proponents might reason that if these messages are being published on an enterprise bus, they should be generally useful; and if they are reasonably compliant with the HL7 RIM, their integration should be relatively straightforward.  However, their usefulness for a particular analytic purpose would still need to be investigated separately.
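
As a rough illustration of what "siphoning" HL7 traffic into a data collection pipe might involve, here is a minimal Python sketch that splits an HL7 v2 message into segments and extracts a few fields for an EDW landing row. The ADT message is contrived, and a real feed would use a proper HL7 parsing library with far more defensive logic.

```python
# A contrived ADT^A01 message; segments are separated by carriage returns
# and fields by pipes, per HL7 v2 conventions.
RAW_ADT = (
    "MSH|^~\\&|EMR|HOSP|EDW|HOSP|202001011230||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||123456^^^HOSP^MR||DOE^JANE||19600101|F\r"
    "PV1|1|I|4W^401^A|||||||MED\r"
)

def parse_hl7(raw):
    """Split an HL7 v2 message into {segment_id: [field lists]}."""
    segments = {}
    for seg in filter(None, raw.split("\r")):
        fields = seg.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_hl7(RAW_ADT)
staging_row = {
    "event_type": msg["MSH"][0][8],                # ADT^A01
    "mrn":        msg["PID"][0][3].split("^")[0],  # 123456
    "patient":    msg["PID"][0][5],                # DOE^JANE
    "unit_bed":   msg["PV1"][0][3],                # 4W^401^A
}
print(staging_row)
```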

Conversely, a demand-driven project might start with a required set of representative analytic question instances or archetypes, and drive the data sourcing effort backward toward the potentially diverging points of origin within the business operations.  For example, a surgical analytics platform to discern patterns between or among surgical cost components, OR schedule adherence, outcomes variability, payer mix, or the impact of specific material choices would depend on specific data elements that might originate from potentially disparate locations and settings.  The need here is to ensure that the data sets required to support the specific identified analyses are covered; but the collection strategy should not be so exclusive that it prevents exploration of unanticipated inquiries or analyses.

I’ll have a future blog topic on a methodology we have used successfully to progressively decompose, elaborate and refine stakeholder analytic needs into the data architecture needed to support them.

In many cases, a key objective for implementing healthcare analytics will be to bring focus to specific areas of enterprise operations: to drive improvements in quality, performance or outcomes; to drive down costs of service delivery; or to increase resource efficiency, productivity or throughput, while maintaining quality, cost and compliance.  A common element in all of these is a focus on process.  You must identify the specific processes (or workflows) that you wish to measure and monitor.  Any given process, however simple or complex, will have a finite number of “pulse points,” any one of which will provide a natural locus for control or analysis to inform decision makers about the state of operations and progress toward measured objectives or targets.  These loci become the raw data collection points, where the primary data elements and observations (and accompanying meta-data) are captured for downstream transformation and consumption.

For example, if a health system is trying to gain insight into opportunities for flexible scheduling of OR suites and surgical teams, the base level data collection must probe into the start and stop times for each segment in the “setup and teardown” of a surgical case, and all the resource types and instances needed to support those processes.  Each individual process segment (i.e. OR ready/busy, patient in/out, anesthesia start/end, surgeon in/out, cut/close, PACU in/out, etc.) has distinct control loci the measurement of which comprises the foundational data on which such analyses must be built.  You won’t gain visibility into optimization opportunities if you don’t measure the primary processes at sufficient granularity to facilitate inquiry and action.
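
One way to picture these pulse points is as timestamped events from which segment durations are derived downstream. The following Python sketch is illustrative; the event names simply mirror the segments listed above.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PulsePoint:
    case_id: str
    event: str          # e.g. "patient_in", "cut", "close", "patient_out"
    timestamp: datetime

def interval_minutes(events, start_event, end_event):
    """Duration between two pulse points of one case, in minutes."""
    times = {e.event: e.timestamp for e in events}
    return (times[end_event] - times[start_event]).total_seconds() / 60

case = [
    PulsePoint("OR-1001", "patient_in",  datetime(2020, 1, 6, 7, 30)),
    PulsePoint("OR-1001", "cut",         datetime(2020, 1, 6, 8, 5)),
    PulsePoint("OR-1001", "close",       datetime(2020, 1, 6, 9, 40)),
    PulsePoint("OR-1001", "patient_out", datetime(2020, 1, 6, 10, 0)),
]

print(interval_minutes(case, "patient_in", "cut"))  # setup segment: 35.0
print(interval_minutes(case, "cut", "close"))       # surgical segment: 95.0
```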

Each pulse point reveals a critical success component in the overall operation. Management must decide how each process will be measured, and how the specific data to be captured will enable both visibility and action: visibility that the critical process elements being performed are within tolerance and on target, or that they are deviating from a standard or plan and require corrective action; and information that both enables and facilitates the focused action that will bring performance and outcomes back into compliance with the desired or required standards or objectives.

A key aspect of metric design is defining the needed granularity and dimensionality.  The former ensures the proper focus and resolution on the action needed.  The latter facilitates traceability and exploration into the contexts in which performance and quality issues arise.  If any measured areas under-perform, the granularity and dimensionality will provide a focus for appropriate corrective actions.  If they achieve superior performance, they can be studied and characterized for possible designation as best practices.

For example, how does a surgical services line that does 2500 total knees penetrate this monolithic volume and differentiate these cases in a way that enables usable insights and focused action?  The short answer is to characterize each instance to enable flexible-but-usable segmentation (and sub-segmentation); and when a segment of interest is identified (under-performing; over-performing; or some other pattern), the n-tuple of categorical attributes that was used to establish the segment becomes a roadmap defining the context and setting for the action: either corrective action (i.e. for deviation from standard) or reinforcing action (i.e. for characterizing best practices).  So, dimensions of surgical team, facility, care setting, procedure, implant type and model, supplier, starting ordinal position, day of week, and many others can be part of your surgical analytics metrics design.
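
Here is a hedged sketch of that segmentation idea in Python (using pandas): roll the cases up by an n-tuple of categorical attributes and flag segments whose metric deviates from a benchmark. The columns, data, and 10% threshold are all invented for illustration.

```python
import pandas as pd

# A handful of fabricated knee cases; real data would have thousands of rows
# and many more categorical attributes.
cases = pd.DataFrame({
    "surgeon":      ["A", "A", "B", "B", "B", "C"],
    "implant":      ["X", "Y", "X", "X", "Y", "Y"],
    "day_of_week":  ["Mon", "Tue", "Mon", "Fri", "Fri", "Tue"],
    "duration_min": [95, 110, 130, 145, 150, 100],
})

# Roll cases up by an n-tuple of categorical attributes.
segments = (cases.groupby(["surgeon", "implant"])["duration_min"]
                 .agg(n="count", mean_duration="mean")
                 .reset_index())

# Flag segments more than 10% above the overall benchmark; the flagged
# (surgeon, implant) tuple is the "roadmap" for corrective action.
benchmark = cases["duration_min"].mean()
print(segments[segments["mean_duration"] > benchmark * 1.10])
```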

Each metric must ultimately be deconstructed into the specific raw data elements, observations and quantities (and units) that are needed to support the computation of the corresponding metric.  This includes the definition, granularity and dimensionality of each data element; its point of origin in the operation and its position within the process to be measured; the required frequency for its capture and timeliness for its delivery; and the constraints on acceptable values or other quality standards to ensure that the data will reflect accurately the state of the operation or process, and will enable (and ideally facilitate) a focused response once its meaning is understood.

An interesting consideration is how to choose the source for a collected data element when multiple legitimate sources exist (this issue spills over into data governance; see below), and what rules are needed to arbitrate such conflicts. Arbitration can be based on: whether each source is legitimately designated as authoritative; where each conflicting (or overlapping) data element (and its contents) resides in a life cycle that impacts its usability; what access controls or proprietary rights pertain to the specific instance of data consumption; and the purpose for or context in which the data element is obtained. Resolving these conflicts is not always as simple as designating a single authoritative source.
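
One possible arbitration rule, sketched in Python under the assumption that sources have been ranked and carry a last-updated timestamp: prefer the designated authoritative source, then break ties by recency. The source rankings here are hypothetical.

```python
from datetime import datetime

# Hypothetical source ranking: lower number wins. The designations are
# assumptions for illustration, not a recommended hierarchy.
SOURCE_PRIORITY = {"EMR": 1, "billing": 2, "legacy_feed": 3}

def arbitrate(candidates):
    """Pick one value when multiple sources supply the same data element.

    candidates: dicts with 'source', 'updated_at' (datetime), and 'value'.
    Prefers the highest-ranked source, then the most recent update.
    """
    return min(
        candidates,
        key=lambda c: (SOURCE_PRIORITY.get(c["source"], 99),
                       -c["updated_at"].timestamp()),
    )["value"]

print(arbitrate([
    {"source": "billing", "updated_at": datetime(2020, 1, 2), "value": "O+"},
    {"source": "EMR",     "updated_at": datetime(2020, 1, 1), "value": "O-"},
]))  # -> O-  (the EMR outranks billing, despite being older)
```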

Controlling data quality at its source is essential.  All downstream consumers and transformation operations are critically dependent on the quality of each data element at its point of origin or introduction into the data stream.  Data cleansing becomes much more problematic if it occurs downstream of the authoritative source, during subsequent data transformation or data presentation operations.  Doing so effectively allows data to "originate" at virtually any position in the data stream, making traceability and quality tracking more difficult, and increasing the burden of holding the data that originates at these various points to the quality standard.  On the other hand, downstream consumers may have little or no influence or authority to impose data cleansing or capture constraints on those who actually collect the data.

Organizations are often unreceptive to the suggestion that their data may have quality issues.  “The data’s good.  It has to be; we run the business on it!”  Although this might be true, when you remove data from its primary operating context, and attempt to use it for different purposes such as aggregation, segmentation, forecasting and integrated analytics, problems with data quality rise to the surface and become visible.

Elements of data quality include accuracy; integrity; timeliness, timing and dynamics; clear semantics; and rules for capture, transformation and distribution.  Your strategy must include establishing and then enforcing definitions, measures, policies and procedures to ensure that your data meets the necessary quality standards.
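
As a small illustration of enforcing capture rules at the point of origin, the sketch below validates a record against per-field constraints covering accuracy, semantics, and timeliness. The field names, value sets, and two-day timeliness window are assumptions, not a published standard.

```python
from datetime import datetime, timedelta

# Per-field capture rules; all illustrative assumptions.
RULES = {
    "mrn":        lambda v: isinstance(v, str) and v.isdigit(),
    "payer":      lambda v: v in {"medicare", "medicaid", "commercial", "self_pay"},
    "event_time": lambda v: isinstance(v, datetime)
                            and datetime.now() - v < timedelta(days=2),
}

def quality_errors(record):
    """Return the fields that are missing or violate their capture rule."""
    return [field for field, rule in RULES.items()
            if field not in record or not rule(record[field])]

rec = {"mrn": "123456", "payer": "hmo", "event_time": datetime.now()}
print(quality_errors(rec))  # -> ['payer']
```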

The data architecture must anticipate the structure and relationships of the primary data elements, including the required granularity, dimensionality, and alignment with other identifying or describing elements (e.g. master and reference data); and the nature and positioning of the transformation and consumption patterns within the various user bases.

For example, to analyze the range of variation in maintaining schedule integrity in our surgical services example, for each case we must capture micro-architectural elements such as the scheduled and actual start and end times for each critical participant and resource type (e.g. surgeon, anesthesiologist, patient, technician, facility, room, schedule block, equipment, supplies, medications, prior and following case, etc.).  Each of these becomes a dimension in the hierarchical analytic contexts that will reveal and help to characterize where under-performance or over-performance is occurring.  The corresponding macro-architectural components will address requirements such as scalability, the distinction between retrieval and occurrence latency, data volumes, data lineage, and data delivery.

By the way: none of this presumes a “daily batch” system.  Your data architecture might need to anticipate and accommodate complex hybrid models for federating and staging incremental data sets to resolve unavoidable differences in arrival dynamics, granularity, dimensionality, key alignment, or perishability.  I’ll have another blog on this topic, separately.

You should definitely anticipate that the incorporation and integration of additional subject areas and data sets will increase the value of the data; in many instances, far beyond that for which it was originally collected.  As the awareness and use of this resource begins to grow, both the value and sensitivity attributed to these data will increase commensurately.  The primary purpose of data governance is to ensure that the highest quality data assets obtained from all relevant sources are available to all consumers who need them, after all the necessary controls have been put in place.

Key components of an effective strategy include the recognition of data as an enterprise asset; the designation of authoritative sources; commitment to data quality standards and processes; and recognition that data proceeds through a life cycle of origination, transformation and distribution, with varying degrees of ownership, stewardship and guardianship, on its way to various consumers for various purposes.  Specific characteristics, such as the level of aggregation, the degree of protection required (e.g. PHI), the need for de-identification and re-identification, the designation of "snapshots" and "versions" of data sets, and the constraints imposed by proprietary rights, will all impact the policies and governance structures needed to ensure proper usage of this critical asset.

Are you positioned for success?

Successful implementation of BI analytics requires more than a careful selection of technology platforms, tools and applications.  The selection of technical components will ideally follow the definition of the organization's needs for these capabilities.  The program components outlined here are a good start on the journey to embedded analytics, proactively driving the desired improvement throughout your enterprise.

Driving Value from Your Healthcare Analytics Program

If you are a healthcare provider or payer organization contemplating an initial implementation of a Business Intelligence (BI) Analytics system, there are several areas to keep in mind as you plan your program.  The following key components appear in every successful BI Analytics program.  And the sooner you can bring focus and attention to these critical areas, the sooner you will improve your own chances for success.

First, a small note on terminology: we often hear the term "business intelligence" used as the overarching label for the now-familiar graphical dashboard UIs that enable direct, end-user access to actionable metrics, insightful analytics and the rich, multi-dimensional substantiating data that underlie the top-level presentation.  Interactive drill-down; rules-based transformation of data, derivation of facts, and classification of events; hierarchical navigation; and role-based access to progressively disaggregated and increasingly identified data sets are some of the commonly implemented capabilities.  However, we have found that most client organizations will re-cast the "BI" moniker to a more subject- or mission-related name for the system and its intended objectives.  "Clinical Intelligence", "Quality Analytics" and "Financial Performance Management" are examples.  These names are chosen to apply more directly to the mission, focus and objectives of the specific subject area(s) for which the system is being designed and deployed.  But the systems often carry similar expectations with regard to the data and the end-user capabilities that will be present, and the following principles apply equally.

Key Technical Components

Often, the first questions the organization asks focus on the technical components; the platforms, tools and applications that will form the eventual solution.  Which ETL tools should we use?  Which cube design tools are best for our needs?  How do we integrate with standard vocabulary services?  What do we use to design and deploy our dashboards?  These “how to” choices will follow consideration of the “what” needs for the system.

The key technical macro-components for a BI analytics system will invariably include one form or another of each of the following capabilities:

The primary components for data receipt and uptake, most frequently receiving raw transaction data from upstream (source) operational and transactional (OLTP) systems, such as an electronic medical record (EMR) system, surgery scheduling system, billing and reimbursement system, or claims processing system.  The portfolio of source systems can be diverse and extensive, and can include the messaging traffic that travels between and synchronizes these individual systems.  Primary data that is captured and exchanged can range from standard, discrete data elements, to less structured collections (e.g. documents), to diverse binary formats originating from virtually any point of care device type.  The primary purpose of these components is to capture the raw data (and/or meta-data) elements that reflect the domain-specific (e.g. clinical, financial) operational or event context in which the primary data originated, propagating this data downstream for consumption in an analytic context.

Data obtained from a mission-focused transactional system will almost invariably need to be computed, mapped, translated, combined, aggregated, aligned or otherwise transformed to enable its consumption for more analytic (OLAP) applications.  Mapping the raw source data elements onto a standard taxonomy and aligning them with a designated ontology or other conceptual model of the subject domain can enhance the stability and extend the useful life of your data.  Various forms of derived data elements will arise, including various asymmetric relationships between elements existing in a data lineage.  This serves to enrich the raw source data, both increasing its relevance and improving its consume-ability for the anticipated (and often unanticipated) analytic contexts.

Pipelining raw data elements through various “enrichment engines” can increase their value and usefulness to a broader set of audiences.  Mapping atomic-level clinical procedure encodings to higher levels in a hierarchy; assigning cases to service lines and resolving overlap conflicts to support differing analytic objectives; and linking isolated events to a network of terms in a standard vocabulary are all examples that can improve the consume-ability of individual or collections of data elements.
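
A minimal sketch of one such enrichment step, assuming a simple three-level hierarchy keyed by procedure code. The hierarchy shown is invented for illustration, though the example codes resemble real CPT codes.

```python
# Invented three-level hierarchy keyed by procedure code.
PROCEDURE_HIERARCHY = {
    "27447": ("Total knee arthroplasty", "Joint replacement", "Orthopedics"),
    "33533": ("CABG, arterial graft",    "Cardiac surgery",   "Cardiovascular"),
}

def enrich(event):
    """Attach each hierarchy level to a raw procedure event."""
    sub, category, service_line = PROCEDURE_HIERARCHY.get(
        event["procedure_code"], ("Unknown", "Unknown", "Unassigned"))
    return {**event,
            "sub_category": sub,
            "category": category,
            "service_line": service_line}

print(enrich({"case_id": "OR-1001", "procedure_code": "27447"}))
```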

The volumes of data gathered and organized into an enterprise-scale data warehouse (EDW) or other integrated repository will require a spectrum of storage approaches to meet the competing demands of user consumption patterns and dynamics.  The demands of some data consumers will require operational data stores (ODS) that combine data from multiple sources, and deliver it to the point of consumption in a timely manner, using various high-throughput staging strategies to align key elements and minimize the latency between the occurrence of the primary event and the availability of the combined data in the ODS.  EDWs integrate (and often rationalize) information from multiple sources (and potentially multiple ODSs) often according to a comprehensive universal data model that reflects the primary data entities and their relationships, facilitating and enhancing the future consumption of the data in a wide variety of unanticipated use cases.  Data cubes can emphasize and optimize the dimensional characteristics of the data entities, and facilitate the hierarchical segmentation, navigation and exploration of the captured subject domain.

Unstructured data elements or collections, including binary data types (e.g. images, voltage tracings, videos, audios, etc.) present different storage requirements, often including explicit separation of indexing and other meta-data from the primary data.

The delivery of data is complete when the end-consumer has gained access to the desired data sets, using the required retrieval, analytics and presentation tools or applications, under the proper controls.  The user experience might include dashboards or other interactive graphical data displays and navigational UIs; rule-driven alerts to notify critical parties of specific conditions or escalating events; analytic and predictive models for exploring hypotheses or projecting "what if" scenarios; and communication tools for distributing information, corresponding with other stakeholders or trading partners on the underlying issues reflected in the data being consumed, and even tracking their resolution.

These are the recurring, primary, top-level building blocks.  My next blog will delve into the program components you can apply to drive the definition, design and implementation of your analytics system.

Are you positioned for success?

Successful implementation of BI analytics requires more than a careful selection of technology platforms, tools and applications.  The selection of technical components will ideally follow the definition of the organization’s needs for these capabilities.  The program components outlined next time will offer a start on the journey to proactive embedded analytics, driving the desired improvement throughout your enterprise.

Clinical Alerts – Why Good Intentions Must Start as Good Ideas

As the heated debate continues about ways to decrease the costs of our healthcare system while simultaneously improving its quality, it is critical to consider the most appropriate place to start, which depends on who you are. Much has been made about the advantages of clinical alerts, especially their use in areas high on the national radar like quality of care, medication use and allergic reactions, and adverse events. Common sense, though, says walk before you run; in this case, it's crawl before you run.

Clinical alerts are most often electronic messages sent via email, text, page, or even automated voice to notify a clinician or group of clinicians to conduct a course of action related to their patient care, based on data retrieved in a Clinical Decision Support System (CDSS) designed for optimal outcomes. The rules engine that generates alerts is created specifically for various areas of patient safety and quality, like administering vaccines to children, core measure compliance, and preventing complications like venous thromboembolism (VTE) (also a core measure). The benefits of using clinical alerts in various care settings are obvious if the right people, processes, and systems are in place to consume and manage the alerts appropriately. Numerous studies have highlighted the right and wrong ways of implementing and utilizing alerts. The best criteria I've seen consider five major themes when designing alerts: Efficiency, Usefulness, Information Content, User Interface, and Workflow (I've personally confirmed each of these in numerous discussions with clinicians, ranging from ED nurses to Anesthesiologists in the OR to hospitalists on the floors). And don't forget one huge piece of the alerting discussion that often gets overlooked… the patient! While some of these may be obvious, all must be considered as the design and implementation phases of the alerts progress.
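
To make the rules-engine idea concrete, here is a toy Python sketch in which each rule inspects patient data and, when triggered, emits an alert routed to a recipient role. The rules, thresholds, and roles are illustrative only and are not clinical guidance.

```python
# Each rule is a predicate over a patient record; all values are invented.
def vte_prophylaxis_due(pt):
    return pt["admitted_hours"] >= 24 and not pt["vte_prophylaxis_ordered"]

def childhood_vaccine_due(pt):
    return pt["age_years"] < 6 and pt["vaccines_overdue"]

ALERT_RULES = [
    (vte_prophylaxis_due,   "attending_physician", "VTE prophylaxis not ordered"),
    (childhood_vaccine_due, "pediatric_nurse",     "Childhood vaccination overdue"),
]

def generate_alerts(patient):
    """Evaluate every rule; return (recipient role, message) for each that fires."""
    return [(role, message) for rule, role, message in ALERT_RULES
            if rule(patient)]

pt = {"admitted_hours": 30, "vte_prophylaxis_ordered": False,
      "age_years": 45, "vaccines_overdue": False}
print(generate_alerts(pt))  # -> [('attending_physician', 'VTE prophylaxis not ordered')]
```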

OK, Now Back to Reality

A discussion about how clinical alerting can improve the quality of care is one limited to the very few provider organizations that already have the infrastructure setup and resources to implement such an initiative. This means that if you are seriously considering such a task, you should already have:

  • an Enterprise Data Strategy and Roadmap that tells you how alerts tie into the broader mission;
  • Data Governance to assign ownership and accountability for the quality of your data and implement standards (especially when it comes to clinical documentation and data entry);
  • standardized process flows that identify points for consistent, discrete data collection;
  • surgeon, physician, anesthesiology, nursing, researcher, and hospitalist champions to gather support from various constituencies and facilitate education and buy-in; and
  • oh yeah, the technology and skilled staff to support a multi-system, highly integrated, complex rules-based environment that will likely change over time and come under increasing scrutiny…

Or a strong relationship with an experienced consulting partner capable of handling all of these requirements and transferring the necessary knowledge along the way.

I must emphasize the second bullet for just a moment; data governance is critical to ensure that the quality of the data being collected passes the highest level of scrutiny, from doctors to administrators. This is of the utmost importance because the data forms the basis of the information that decision makers act on. The quickest way to lose momentum and buy-in on any project is to put bad data in front of a group of doctors and clinicians; trust me when I say it is infinitely more difficult to win their trust back once you've made that mistake. On the other hand, if they trust the data and understand its value in near real time across their spectrum of care, you quickly turn them into leaders willing to champion your efforts. And now you have a solid foundation for any healthcare analytics program.

If you are like the majority of healthcare organizations in this country, you may have some pieces of this puzzle in various stages of design, development, deployment or implementation. In all likelihood, though, you are at the early stages of the Clinical Alerts Maturity Model and, all things considered, should have alerting functionality in the later years of your strategic roadmap. There are, however, many projects with low costs, fast implementations, quick ROIs, and ample lessons learned, such as Computerized Physician Order Entry (CPOE), electronic nursing and physician documentation, Picture Archiving and Communication Systems (PACS), and a clinical data repository (CDR), that can use alerting as a prototype or proof of concept to demonstrate the broader value proposition. Clinical alerting, to start, should be incorporated alongside projects with proven impact across the Clinical Alerts Maturity Model before it is rolled out as a stand-alone initiative.