What I Learned Last Week in Cambridge, MA at the World Congress Health Care Quality Conference

The subtitle for last week’s conference was “Moving from Volume to VALUE Based Care”. The themes that emerged from the speaker panels, presentations, and one-off conversations I had seemed well aligned:

  1. Healthcare is currently experiencing a paradigm shift from the traditional provider-centric mentality to a patient-centric framework
  2. One of the biggest challenges providers face in the pursuit of higher quality is figuring out how to appropriately leverage all of the data they’re currently collecting, manually and electronically
  3. Emerging opportunities for reining in costs and improving quality, including ACOs, AQCs, PCMHs, and others, will only be effective if implementation and effectiveness measurement are standardized consistently across the country
  4. There are a handful of healthcare providers and payers who have taken significant strides in controlling costs while improving quality by implementing technology solutions that integrate data from across the continuum of patient care.

I was encouraged by the level of enthusiasm in the room. Dr. Allan H. Gorroll from Massachusetts General Hospital and Harvard Medical School made it clear that advancing the quality agenda will require significant investments in primary care; Dr. Kate Koplan spoke about Atrius Health’s push to reduce the problems of over-testing and unnecessary treatments; Dr. John Butterly from Dartmouth Hitchcock Health discussed the Patient Centered Medical Home (PCMH) and suggested to all providers that they “have a patient on the team responsible for understanding how to establish the PCMH”; and Micky Tripathi, President and CEO of the Massachusetts e-Health Collaborative, mentioned the challenges of turning data into actionable information, with problems like free-text data, inconsistent data collection across care settings, and the fear many clinicians have of “change” getting in the way.

I was also a co-presenter at the conference and was delighted by the response to our presentation. My counterpart, Neil Ravitz, Chief Operating Officer for the Office of the Chief Medical Officer at the University of Pennsylvania Health System, and I discussed a recent solution we designed and implemented. We were able to automate the collection, integration, calculation, presentation, and dissemination of 132 inpatient safety and quality measures across 3 hospitals and 7 source application systems. This new tool consolidates measures from across these hospitals and systems into one place for reporting and analysis through the use of dashboards and dynamic, drill-down reports. The major benefits of the solution include:

  1. Changed the focus of quality and decision support analysts from data production to data analysis and action;
  2. Automated quality data collection to enable better accuracy and more timely data; and
  3. Enabled faster quality improvement cycle times for front-line leaders

Dr. Atul Gawande recently suggested in an article in the New Yorker that healthcare should be prepared to start implementing standards for nearly all of the care delivered, from total hip replacements to blood transfusions. As we all know, he is a fan of checklists, one logical tool for standardization. He also states, “Scaling good ideas has been one of our deepest problems in medicine”. When I attend healthcare conferences like the one last week in Cambridge, I’m excited by the progress I see organizations making. When I leave the conference, though, I’m quickly reminded of the grim reality of healthcare and Dr. Gawande’s point. And then I wonder: at what point will “patient-centric”, “accountable care”, “value-based purchasing”, and all the other catchphrases of the past few years become the industry standard – and not the exception, limited to conferences, New Yorker magazines, and headlines that are only ever heard or read, and rarely ever experienced?

Increase the File Size Limit in CRM 2011 in a Supported Manner

In Microsoft CRM 2011, the attachment file size limit can be increased through the CRM interface from 5MB (5,120 KB) to a maximum of 32MB (32,768 KB). In CRM 4.0, by contrast, file size changes had to be made in an unsupported manner via the web.config file, which required an IIS restart.

The default attachment file size limit in Microsoft Dynamics CRM 2011 is 5MB. Because files attached in CRM are stored in the database, restrictions should be placed on the amount of file data entering CRM so that the database maintains a reasonable size and performance is not degraded.

However, business changes may create the need to increase this limit in order to handle larger file attachments. While files up to the 32MB limit can be attached, the infrastructure of the CRM application needs to be considered so that performance is not impacted.

Here are the instructions to increase the file size limit to 10MB (10,240 KB):

Step 1: Select ‘Settings’

Step 2: Select ‘Administration’

Step 3: Select ‘System Settings’

Step 4: Select the ‘Email’ tab

Step 5: Proceed to the section titled ‘Set file size limit for attachments’

Step 6: Change the file size to 10,240
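For reference, the same limit can also be raised programmatically through the CRM 2011 SDK by updating the organization record. The snippet below is a minimal sketch, not a drop-in implementation: it assumes you already have an IOrganizationService connection and the organization record’s GUID, and that the maxuploadfilesize attribute is stored in bytes.

  using System;
  using Microsoft.Xrm.Sdk;

  static class AttachmentLimit
  {
      // Minimal sketch: raise the attachment limit to 10 MB (10,240 KB) by updating
      // the organization record. Connection setup and error handling are omitted.
      public static void SetTo10Mb(IOrganizationService service, Guid organizationId)
      {
          var org = new Entity("organization") { Id = organizationId };
          org["maxuploadfilesize"] = 10240 * 1024;   // assumed to be stored in bytes
          service.Update(org);
      }
  }

Either way, remember the earlier caveat: just because the platform allows 32MB attachments does not mean your database and infrastructure are sized for them.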

Processes (Workflows) Best Practices

Enabling new workflow options in Microsoft Dynamics CRM 2011 can affect the overall performance of the implementation. Keep in mind the following best practices when considering how to ensure that Microsoft Dynamics CRM workflow functionality performs optimally for a particular implementation.

  • Determine the business purposes for implementing workflow prior to enabling the functionality. During planning, analyze the business scenario and define the goals of workflow within the solution. Workflow functionality can provide businesses with process automation, exception handling, and end-user alerts.

  • Decide on the appropriate security/permissions model for workflow. With established business goals in place, determine the scope of users that will be affected by the workflow implementation. Identify who will create and maintain workflows, who will apply and track them, and who will troubleshoot workflow issues.

  • Use the Scope property sensibly. The Scope property associated with workflow rules defines the extent of records affected by that rule. For example, rules configured with a User scope affect only the records owned by that user, while rules configured with an Organization scope affect all records within the organization, regardless of which user owns each record. When creating workflows, make sure to identify the appropriate scope value for each workflow rule to minimize the number of related system events.
  • Take into consideration the overall load associated with workflow within a deployment. Think about the number of instances that each workflow definition triggers. Then consider these additional factors, which affect workflow load:
    • the number of workflows
    • the entities they run against
    • the number of records affected
    • the size of the data
    • the overall data load

Evaluating the factors above as they apply to the system on a typical day provides a better understanding of the processes and load variance. Based on this analysis, workflows can be optimized as required.

  • Review workflow logic carefully. Consider the following factors:
    • Workflows that include infinite loopbacks due to semantic or logic errors never terminate through normal means and therefore greatly affect overall workflow performance.
    • When implementing workflow functionality within a CRM 2011 deployment, be sure to review the logic in workflow rules and any associated plug-ins for potential loopback issues.
    • As part of ongoing maintenance efforts, periodically review published workflow rules to ensure that duplicated workflow rules are not affecting the same records.
  • Be cautious when defining workflows that are triggered on update events. Taking into account the frequency at which ‘Update’ events occur, be very specific about which attribute changes trigger the workflow. Also, avoid using ‘Wait’ states in workflows that are triggered on Update events.
  • Scale out as necessary to improve performance in large deployments. Use dedicated computers to run the Async service for large-scale deployments. That being said, increasing the number of servers running the Async service creates additional stress on the server running Microsoft SQL Server. Therefore, make sure to follow appropriate optimization and tuning strategies on the data tier and investigate the possibility of increasing the number of computers running Microsoft SQL Server.
  • Test workflows. Make sure to test and monitor the performance of new workflow functionality before implementing it in a production environment.
  • Async plug-ins. Think through whether plug-ins should run synchronously or asynchronously. When the priority is user responsiveness, running a plug-in asynchronously lets the user interface respond to the user more quickly. However, asynchronous plug-ins add load to the server, which must persist the asynchronous operation to the database and process it with the Async service. When scalability is essential, running plug-ins synchronously typically places less load on the servers than running plug-ins asynchronously.
  • Balancing workflows and asynchronous plug-ins. Asynchronous plug-in and workflow records in the asyncoperationbase table are managed with the same priority. Hence, introducing a large number of asynchronous plug-ins into a system can reduce overall throughput or increase the time between triggering and processing of individual workflows. For that reason, make sure to consider the relative importance of the workflows in the system before adding numerous asynchronous plug-ins to the solution.
  • Child Workflows. Child workflows run as workflow instances independent of their parents. This can facilitate parallel processing on a system with spare capacity, which can be useful for workflows with multiple independent threads of high processing activity. However, if the parallel processing is not critical – for example, because other workflow logic threads are blocked waiting for external events to occur – child workflows only introduce additional overhead.

NOTE: If workflow functionality within a CRM 2011 implementation is not acting as expected, verify that the Async service is running properly. Often, restarting the Async service will unblock workflow processing without affecting the functionality of the workflows that were in the pipeline.

  • Monitor the Microsoft Dynamics CRM 2011 database for excess workflow log records. Workflow processing in Microsoft Dynamics CRM depends on the Asynchronous Service, which logs its activity in both the AsyncOperationBase and WorkflowLogBase tables. Performance may be affected as the number of workflow records in the CRM 2011 database grows over time.

Microsoft Dynamics CRM 2011 includes two specific settings, ‘AsyncRemoveCompletedJobs’ and ‘AsyncRemoveCompletedWorkflows’, which can be configured to ensure that the Asynchronous Service automatically removes log entries from the AsyncOperationBase and WorkflowLogBase tables. These settings are as follows:

    • The ‘AsyncRemoveCompletedWorkflows’ setting is visible to users in the interface for defining new workflows, and users can set the removal flag independently on each of the workflows they define. NOTE: When registering an Async plug-in, users can also specify that successfully completed Async plug-in execution records be deleted from the system (a brief SDK sketch follows this list).
    • Users can change the ‘AsyncRemoveCompletedJobs’ setting by using the deployment Web service. However, the setting is configured to True by default, which ensures automatic removal of entries for successfully completed jobs from the AsyncOperationBase table.
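To complement the settings above, the per-step cleanup flag mentioned in the note can also be set through the SDK. The snippet below is a minimal sketch under a couple of assumptions: the step GUID is a placeholder you would look up (for example, with the Plug-in Registration Tool), and the asyncautodelete attribute on sdkmessageprocessingstep is assumed to be the flag behind the “delete successful async operations” option.

  using System;
  using Microsoft.Xrm.Sdk;

  static class AsyncStepCleanup
  {
      // Minimal sketch: flag a registered asynchronous plug-in step so that its
      // successfully completed system jobs are removed automatically.
      public static void EnableAutoDelete(IOrganizationService service, Guid stepId)
      {
          var step = new Entity("sdkmessageprocessingstep") { Id = stepId };
          step["asyncautodelete"] = true;   // assumed flag; verify against your deployment
          service.Update(step);
      }
  }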

Epic Clarity Is Not a Data Warehouse

It’s not even the reporting tool for which your clinicians have been asking!

I have attended between four and eight patient safety and quality healthcare conferences a year for the past five years. Personally, I enjoy the opportunities to learn from what others are doing in the space. My expertise lies at the intersection of quality and technology; therefore, it’s what I’m eager to discuss at these events. I am most interested in understanding how health systems are addressing the burgeoning financial burden of reporting more (both internal and external compliance and regulatory mandates) with less (from tightening budgets and, quite honestly, allocating resources to the wrong places for the wrong reasons).

Let me be frank: there is job security for health care analysts, “report writers,” and decision support staff. They continue to plug away at reports, churn out dated spreadsheets, and present static, stale data without context or much value to the decision makers they serve. In my opinion, patient safety and quality departments are the worst culprits of this waste and inefficiency.

When I walk around these conferences and ask people, “How are you reporting your quality measures across the litany of applications, vendors, and care settings at your institution?”, you want to know the most frequent answer I get? “Oh, we have Epic (Clarity)”, “Oh, we have McKesson (HBI)”, or “Oh, we have a decision support staff that does that”. I literally have to hold back a combination of emotions – amusement (because I’m so frustrated) and frustration (because all I can do is laugh). I’ll poke holes in just one example: if you have Epic and use Clarity to report, here is what you have to look forward to, straight from the mouth of a former Epic technical consultant:

“It is impossible to use Epic ‘out of the box’ because the tables in Clarity must be joined together to present meaningful data. That may mean (probably will mean) a significant runtime burden because of the processing required. Unless you defer this burden to an overnight process (ETL) the end users will experience significant wait times as their report proceeds to execute these joins. Further, they will wait every time the report runs. Bear in mind that this applies to all of the reports that Epic provides. All of them are based directly on Clarity. Clarity is not a data warehouse. It is merely a relational version of the Chronicles data structures, and as such, is tied closely to the Chronicles architecture rather than a reporting structure. Report customers require de-normalized data marts for simplicity, and you need star schema behind them for performance and code re-use.”

You can’t pretend something is what it isn’t.

Translation that healthcare people will understand: Clarity only reports data in Epic. Clarity is not the best solution for providing users with fast query and report responses. There are better solutions (data marts) that provide faster reporting and allow for integration across systems. Patient safety and quality people know that you need to get data out of more than just your EMR to report quality measures. So why do so many of you think an EMR reporting tool is your answer?

There is a growing sense of urgency at the highest levels in large health systems to start holding quality departments accountable for the operational dollars they continue to waste on non-value added data crunching, report creation, and spreadsheets. Don’t believe me? Ask yourself, “Does my quality team spend more time collecting data and creating reports/spreadsheets or interacting with the organization to improve quality and, consequently, the data?”

Be honest with yourself. At best, the ratio is 70% of an FTE spent on collection and 30% on analysis and action. So – get your people out of the basement, out from behind their computer screens, and put them to work. And by work, I mean acting on data and improving quality, not just reporting it.

ASP.NET Master Pages

In ASP.NET MVC3, “master pages” are handled in the _ViewStart.cshtml file. As the name suggests, the code in this file is executed before each view is rendered (see Scott Gu’s blog post for more details).

Now that you understand the basic use of the _ViewStart.cshtml file, let’s go over the scope applied to these files. The _ViewStart.cshtml file affects all views in the same directory as the file and in directories below it. You can also have another _ViewStart.cshtml file under a sub-folder, which will be executed after the top-level _ViewStart.cshtml. Using this feature, you can in effect override the top-level _ViewStart.cshtml with one closer to the view.
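As a concrete illustration, here is a minimal sketch of what the two files might contain. The layout file names are placeholders (your project’s layouts may be named differently), and the DifferentMasterPage folder matches the example that follows:

  @* /Views/_ViewStart.cshtml – applies to every view under /Views *@
  @{
      Layout = "~/Views/Shared/_Layout.cshtml";
  }

  @* /Views/DifferentMasterPage/_ViewStart.cshtml – overrides the layout for views in this folder *@
  @{
      Layout = "~/Views/Shared/_DifferentLayout.cshtml";
  }

With these two files in place, the execution order described below determines which layout each view picks up.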

Now when the Index.cshtml View under the Home folder is rendered, it will first execute the /Views/_ViewStart.cshtml file and then it will render the Index.cshtml View.

However, when the Index.cshtml View under the DifferentMasterPage folder is rendered, it will first execute the /Views/_ViewStart.cshtml file, then it will execute the /Views/DifferentMasterPage/_ViewStart.cshtml file, and then it will render the Index.cshtml View.


BIG DATA in Healthcare? Not quite yet…

Let’s be honest with ourselves. First –

“who thinks the healthcare industry is ready for Big Data?”

Me either…

Ok, second question,

“who thinks providers can tackle Big Data on their own without the help of healthcare IT consulting firms?”

Better yet,

“can your organization?”

“Big data” seems to be yet another catch phrase that has caught many in healthcare by surprise. They’re surprised for the same reason I am, which was recently summed up for me by a VP of Enterprise Informatics at a 10-hospital health system: “how can we be talking about managing big data when very few [providers] embrace true enterprise information management principles and can’t even manage to implement tools like enterprise data warehouses for our existing data?” Most people in healthcare who have come from telecommunications, banking, retail, and other industries that embraced Big Data long ago agree the industry still has a long way to go. In addition, vendors like Informatica, which have a proven track record of helping industries manage Big Data with their technology solutions, have yet to see significant traction with their tools in healthcare. There are plenty of other things that need to be done first before the benefits of managing Big Data come to fruition.

Have we been here before? Didn’t we previously think that EMRs were somehow going to transform the industry and “make everything simpler” to document, report from, and analyze? We now know that isn’t the case, but it should be noted that EMRs will eventually help with these initiatives IF providers have an enterprise data strategy and infrastructure in place to integrate EMR data with all the other data that litters their information landscape AND they have the right people to leverage enterprise data.

The same can be said of Big Data. It should be relatively easy for providers to develop a technical foundation that can store and manage Big Data, compared to the time and effort needed to leverage and capitalize on Big Data once you have it. For the significant majority of the industry, the focus right now should be on realizing returns – in the form of lower costs and improved quality – from integrating small samples of data across applications, workflows, care settings, and entities. The many opportunities for demonstrable improvement in the existing data landscape should be the top priority for mobilizing stakeholders to action. Big Data will have to wait…for now.

Please Stop Telling Everyone You Have an Enterprise Data Warehouse – Because You Don’t

One of the biggest misconceptions amongst business and clinical leaders in healthcare is the notion that most organizations have an enterprise data warehouse. Let me be the bearer of bad news – they don’t, which means you also may not. There are very few organizations that actually have a true enterprise data warehouse; that is, a place where all of their data is integrated and modeled for analysis, from source systems across the organization independent of care settings, technology platform, how it’s collected, or how it’s used. Some organizations have data warehouses, but these are often limited to the vendor source system they’re sitting on and the data within the vendor application (e.g., McKesson’s HBI and Epic’s Clarity). This means that you are warehousing data from only one source and thus only analyzing and making decisions from one piece of a big puzzle. I’d also bet that the data you’ve started integrating is financial and maybe operational. I understand – save the hard stuff (quality and clinical data) for last.

This misconception is not limited to a single group in healthcare. I’ve heard this from OR Managers, Patient Safety & Quality staff, Service Line Directors, physicians, nurses, and executives.

You say, “Yes we have a data warehouse”…

I say, “Tell me some of the benefits” and “what is your ROI in this technology?”

So, what is it? Can you provide quantitative evidence of the benefits you’ve realized from your investment and use of your “data warehouse”?  If you’re struggling, consider this:

  • When you ask for a performance metric, say Length of Stay (LOS), do you get the same results every time you ask independent of where your supporting data came from or who you asked?
  • Do you have to ask for pieces of information from disparate places or “data handlers” in order to answer your questions? A report from an analyst, a spreadsheet from a source system SME, a tweak here and a tweak there, and voila! A number whose calculation you can’t easily recreate, that changes over time, and that requires proprietary knowledge from the report writer to produce.
  • What is the loss in your productivity, as a manager or decision maker, in getting access to this data? More importantly, how much time do you have left to actually analyze, understand and act on the data once you’ve received it?
  • Can you quickly and easily track, measure, and report all patient data throughout the continuum of care? Clinical, quality, financial, and operational? Third-party collected (e.g., HCAHPS Patient Satisfaction)? Third-party calculated (e.g., CMS Core Measures)? Market share?

Aside from the loss in productivity and the manual, time-consuming process of piecing together data from disparate places and sources, a true enterprise data warehouse is a single version of the truth. Independent of the number of new applications and source systems you add, business rules you create, definitions you standardize, and analyses you perform, you will get the same answer every time. You can ask any question of an enterprise data warehouse. You don’t have to consider, “Wait, what source system will give me this data? And who knows how to get that data for me?”

In the event you do have an enterprise data warehouse, you should be seeing some of these benefits:

  1. Accurate and trusted, real-time, data-driven decision making
    • Savings: Allocate and deploy resources for localized intervention, ensuring the most efficient use of scarce resources based upon the trusted information available.
  2. Consistent definition and understanding of data and measures reported across the organization
    • Savings: Less time and money spent resolving differences in how people report the same information from different source systems
  3. Strong master data – you have a single, consistent definition for a Patient, Provider, Location, Service Line, and Specialty.
    • Savings: less time resolving differences in patient and provider identifiers when measuring performance; elimination of duplicate or incomplete patient records
  4. A return on the money you spend in your operating budget for analysts and decision support
    • Savings: quantitative improvements from projects and initiatives targeted at clinical outcomes, cost reductions, lean process efficiencies, and others
    • Savings: less time collecting data, more time analyzing and improving processes, operations and outcomes
  5. More informed and evidence-based negotiations with surgeons, anesthesiologists, payers, vendors, and suppliers

In the end, you want an enterprise data warehouse that can accommodate the enterprise data pipeline from when data is captured, through its transformations, to its consumption. Can yours?

Does Claims BI Just Mean “Bodily Injury?”

Anyone in the insurance claims industry who works on BI is not talking about business intelligence. BI is rarely applied in insurance claims to mean business intelligence, because most carriers only use business intelligence generically to examine closure rates, expense payments, and contact rates. Business intelligence is most often used to analyze data in other business units, such as agent performance, product profitability, and policy discounts.

By properly applying business intelligence and measuring analytics in the claim handling process, carriers have the opportunity to review and grade adjusters for improvement and development of claim adjudication best practices. Monitoring and reviewing claim handling practices will ensure adjusters are performing quality investigations resulting in fair and proper claim settlements for the carrier and the insureds.

A claim is the core of why people purchase insurance products – to get reimbursed when they incur a loss. A claim becomes a personal touch point with the insured, as well as a prospective insured when third parties are involved. How many carriers have used claimants switching to them after a claim to advertise their service? Leveraging analytics to generate business intelligence on claims processes, insured retention, and claimant satisfaction, as well as measuring things like allocated loss expenses, the number of claimants with attorneys, and post closure actions, can be used more directly and efficiently to impact the success of claims handling.

Of course, you may not want all of your insureds, since some are working to use insurance claims to make money. Properly applied analytics and techniques can detect patterns and trends in claim participation, injuries, supplemental repairs, etc. I know of one specific case where analytics found that a claimant was paid five times for a single leg amputation, and another where a doctor was treating an average of 1,600 patients per day. Business intelligence can also capture the effectiveness of independent medical exams on claim settlements, provide better understanding and control of reserves, and track back-to-work rates and therapies that move claimants from total disability to partial disability.

The next logical step is moving into predictive modeling. Properly applied claims analytics helped one western insurer realize its return on investment in a matter of months, when it was able to proactively augment and deploy needed field staff to respond to several catastrophic storms.

By improving best practices, identifying fraud early, and employing predictive modeling, not only will customer satisfaction be affected, but claims will also close more quickly and at lower costs, increasing the number of claim files adjusters can handle and lowering loss ratios. In this tough economy, lowering loss ratios by even as little as 1% can have a big impact on a company’s bottom line.

The Unknown Cost of “High Quality Outcomes” in Healthcare

“You were recently acknowledged for having high quality outcomes compared to your peers, how much is it costing you to report this information?”

I recently read an article on healthcareitnews.com, “What Makes a High Performing Hospital? Ask Premier”. So many healthcare providers are quick to tout their “quality credentials,” yet very few understand how much it costs their organization in wasted time and money to run around collecting the data behind these claims – and this article sparked the following thoughts…

The easiest way to describe it, I’ve been told after many attempts at describing it myself, is “the tip of the iceberg”. That is the best analogy to give a group of patient safety and quality executives, staffers, and analysts when describing the effort, patience, time, and money needed to build a “patient safety and quality dashboard” with all types of quality measures and different forms of drill-down and roll-up.

What most patient safety and quality folks want is a sexy dashboard or scorecard that can help them report and analyze, in a single place and tool, all of their patient safety and quality measures. It has dials and colors and all sorts of bells and whistles. From Press Ganey patient satisfaction scores, to AHRQ PSIs, Thomson Reuters and Quantros Core Measures, TheraDoc and Midas infection control measures, UHC Academic Medical Center measures… you name it. They want one place to go to see this information aggregated at the enterprise level, with the ability to drill down to the patient detail. They want to see it by Location, Physician, Service Line, or Procedure/Diagnosis. This can be very helpful and extremely valuable to organizations that continue to waste money on quality analysts and abstractors who simply “collect data” instead of analyzing and acting on it. How much time do you think your PS&Q people spend finding data and plugging away at spreadsheets? How much time is left for actual value-added analysis? I would bet very little…

So that’s what they want, but what are they willing to pay for? The answer is very little. Why?

People in patient safety and quality are experts…in patient safety and quality. What they’re not experts in is data integration, enterprise information management, metadata strategy, data quality, ETL, data storage, database design, and so on. Why do I mention all these technical principles? Because they ALL go into a robust, comprehensive, scalable, and extensible data integration strategy…which sits underneath that sexy dashboard you think you want. So, it is easy for providers to be attracted to someone offering a “sexy dashboard” who knows diddly-squat about the foundation – the part of the iceberg you can’t see under the water – that’s required to build it. Didn’t anyone ever tell you, “if it sounds too good to be true, it is”!?

Agent Mobility As A Customer Touch Point Opportunity

Agents still say ease of doing business is the key to working with a carrier. But that means different things to different people, and certainly different things to agents and carriers. For years carriers have been working to streamline operations within their organizations to make life easier for agents. Recognizing and implementing standardization, such as the use of ACORD forms for applications, was an initial step. Then came integration between the carrier’s systems and the agency’s management systems, which allowed agents to submit applications electronically. Finally came the age of the real-time online portal, where agents can log in to carrier systems and submit applications directly. How much easier can it get than that – A LOT.

All of these technological advancements are offered by almost every carrier.  So what becomes the differentiator to an agent when they can place business with multiple carriers?  It’s still ease of doing business.  Which carrier allows me to get a quote the easiest by entering the fewest data points and then complete that application and close the business fastest?  Many agents try to close business in volume because more volume means more premiums, which means more commission.

Most of this work is done by agents within the confines of their office. They can make visits to customers and prospects to talk about other offerings, but then many have to make a follow-up appointment to review the quote requested in the meeting. How about the chance encounter in the supermarket or at the church social, when you don’t have a computer with you? This is where insurance agent mobility comes in.

The ubiquitous smartphone is always available and at the ready in its holster. There are many carriers, such as Amica, Nationwide, and Travelers, that have developed smartphone apps for insureds, but not as many allow agents to access information that way. MassMutual, as an example, developed E4 (Electronic Enhanced Enrollment Experience), which allows agents to enroll retirement plan participants entirely over their smartphones.

If I can check in for, or change, my flight on an airline’s mobile web site using my smartphone, shouldn’t an agent be able to get a quick quote for a prospect, file an endorsement for an insured, or even bind coverage and email the policy documentation to their customer? Imagine the response from the insured to the agent when, after about a two-minute conversation, the newly insured’s phone beeps because the email with all the policy documentation has just arrived in their inbox.

Wow, that was easy.

This is a major opportunity for both the agent and the carrier to use the latest technology to make things easier not only for the agent but also for the insured.