Why Office 365 over Google Apps?

Competitive companies have CIOs who are focused on solving business problems, not on day-to-day IT tasks. Technology is business, and if you don't master it, your competition will. To focus on the business as a CIO, you need to rely on products that essentially take care of themselves. This, I believe, is the critical benefit of Microsoft Office 365, and it is clearly explained in Microsoft's new white paper, Top 10 Reasons for Choosing Office 365 over Google Apps.

There are dozens of product comparisons out there, but the decision points in this white paper really boil down to three reasons:

1. Privacy Matters

Microsoft hits this concern first. Why should we believe our information will be safe? Well, Microsoft touts its $9 billion network of data centers, which may or may not be impressive to you.

Google is an advertising company. Why would I trust a company whose business model relies on ad revenue? It creates a motive for selling personal information. While Microsoft's Bing does sell ads, advertising is an ancillary rather than primary revenue generator for Microsoft, representing less than 8% of Q4 FY2014 total revenue. Google, on the other hand, generated 91% of its 2013 revenue from advertising, according to its 10-K. For more on Google's latest run-in with personal information privacy issues, here is a recent Reuters article.

2. Allow Users Access to Their Content Anytime, Anywhere

Duh. This one is a no-brainer. Employees increasingly want to work from home, keep flexible hours, work seamlessly while traveling, and stay connected to everything they need 24/7. Because of this growing demand, mobile functionality is becoming more and more critical to today's workforce. Office 365 works well online and offline (even email), which is certainly important for business users who travel.

Google Apps offers limited offline functionality for email. Google is "committed" to mobility, but what about when you don't have internet access? I find it useful to have access to my information whether I am connected to the internet or not.

3. Less Training Required

Find me someone who has been in the workforce and hasn't used the Microsoft Office applications. Because the applications are so familiar, training is minimal, and your employees will likely feel relatively comfortable with the change. Microsoft worked hard to create an online platform that mirrors what employees are already doing with their on-premises versions. Switching to Google Apps at an enterprise level could be a costly training nightmare, because the interface is completely different from Office. And conversion of desktop documents to Google Apps isn't always accurate.

From my perspective as an end user and mobile employee, these are the most important of the reasons Microsoft cites in its recent white paper. Honestly, though, I would pick Office 365 on the first point alone.

Have you had to make this choice? What was the tipping point for you?

5 things you need to migrate web analytics from on-premises to SaaS

Remember the IBM announcement back in April to sunset NetInsight? The truth is that on-premises web analytics is a dying art. The only other well-known vendor providing an on-premises solution is WebTrends, and that product is destined for a slow death: there have not been any new releases for the last few years, and customers are encouraged to move to the SaaS version. So, the next best thing is to prepare and plan for the inevitable: migration to SaaS.

  1. Assess Web Analytics vendors. Sunsetting the on-premises solution presents a good opportunity to reassess the web analytics landscape instead of blindly sticking with the same vendor and moving your data from on-premises to SaaS.
  2. Documentation. Documentation. Documentation. I said it three times and I meant it. No one likes to create it. I get it – it's a boring, monotonous task to write down every variable, every processing rule, every customization. However, switching your web analytics tool without documentation is like going into battle and forgetting your ammunition. You need to ensure you have documented the following:
    • A List of Key Custom-Built Reports and Layouts. This is your foundation for stakeholder expectation management and therefore the golden key to your sanity during migration. This task can become a project of its own, since the exercise will confirm which reports and metrics are critical to the business and which ones you can delete because no one uses them anyway.
    • Current Tool Configuration Rules. This is a big one and can cost you dearly. I have taken many urgent calls from clients desperately trying to understand why their data tanked 20–30% when they switched to a new tool or data collection method. In 99.9% of cases the answer was configuration: page view definition, filtering, visitor tracking methods, and so on. The configuration topic certainly warrants a separate post, so check back in a while.
    • Site/Metric Matrix. List all custom collected metrics per site, including metric definition, collection method and syntax. This exercise should be completed hand-in-hand with report documentation to ensure every single custom metric is documented. This is your bible. Keep it on your nightstand and refer to it often. If you need a sample template, come back in a few weeks – I will be posting further on this subject.
  3. A Project Manager and a Project Plan. Web analytics migration is like surgery – you need to make sure that your patient (aka reporting) does not die during the operation (migration). You may get away without a plan if you are dealing with a simple site and basic data collection (wart removal), but if you are dealing with multiple sites, channels, applications and vendors (open-heart surgery), a solid plan is required for successful execution.
  4. Assessment against Current Reporting Capabilities. Migration from on-premises to a SaaS solution (or from one vendor to another) almost always results in the loss and/or gain of features. For example, a SaaS solution is likely to have greater social and mobile tracking capabilities, but you may need to make data collection tradeoffs to adhere to your company's data collection policies, especially in the healthcare or financial industries. Create a loss/gain matrix and use it to manage change. No one likes to give up things they already have, so over-communicate any changes to stakeholders in advance, quantify the impact of those changes on the business, and help define a mitigation plan if necessary.
  5. Assess Current Roles, Responsibilities and Processes. Switching from on-premises to SaaS will impact current roles and processes; for example, there will be no need to maintain servers. Process-wise, you will not be able to re-analyze collected data, which affects new report deployment as well as the ability to apply new reporting requests to historical data. Don't wait for a bear to get you – review current roles and processes, outline necessary adjustments, and manage change before the migration occurs.
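To make the documentation step concrete, here is one hypothetical shape a Site/Metric Matrix entry could take, sketched in JavaScript. The field names, the trackEvent syntax, and the helper function are my own illustration, not a vendor standard or a formal template:

```javascript
// Hypothetical Site/Metric Matrix: one record per custom metric per site.
// All field names and values here are illustrative, not a vendor standard.
var siteMetricMatrix = [
  {
    site: "www.example.com",
    metric: "newsletter_signup",
    definition: "Fires once per completed signup form submission",
    collectionMethod: "on-page tag",
    syntax: "trackEvent('newsletter_signup')", // whatever your vendor's call looks like
    usedInReports: ["Weekly Marketing Dashboard"]
  }
];

// A quick completeness check: every metric that appears in your documented
// reports should appear somewhere in the matrix.
function undocumentedMetrics(reportMetrics, matrix) {
  var known = matrix.map(function (row) { return row.metric; });
  return reportMetrics.filter(function (m) { return known.indexOf(m) < 0; });
}

console.log(undocumentedMetrics(["newsletter_signup", "pdf_download"], siteMetricMatrix));
// -> ["pdf_download"]: a metric still waiting to be documented
```

Even a sketch this small pays off during migration, because the gap check tells you exactly which metrics would silently disappear in the new tool.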

Feel free to comment or add to my list!

One Size Does Not Fit All

Have you ever bought a shirt at the mall labeled "one size fits all"? While the shirt may fit some of us perfectly, it might be too large or too small for others. The same goes for a project. A "one size fits all" mentality can put your smaller projects at great risk by bogging them down in a project management methodology that is too rigorous for their size.

 So what can you do?

Establish a flexible project management methodology framework

  1. Define what a "small" and "large" project is in your organization (e.g., a small project runs 6 to 12 weeks, and a large project is anything longer than 12 weeks)
  2. Identify the deliverables or documents needed for each project type
  3. Monitor smaller projects to validate the success/failure rate of these projects and adjust the deliverables within the framework as necessary
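As a toy illustration of such a framework, the sizing rule from step 1 can be captured as a simple lookup, so there is no ambiguity about which deliverables a project of a given length owes. The thresholds come from the example above; the deliverable names are invented for illustration:

```javascript
// Toy sketch of a flexible methodology framework: map project size to the
// deliverables required. The thresholds use the example definition above
// (small <= 12 weeks); the deliverable names are purely illustrative.
function requiredDeliverables(durationWeeks) {
  if (durationWeeks <= 12) {
    // "Small" project: lightweight paperwork only.
    return ["charter (one page)", "status updates", "closeout summary"];
  }
  // "Large" project: the full methodology applies.
  return ["charter", "project plan", "risk register",
          "status reports", "change log", "closeout report"];
}

console.log(requiredDeliverables(8).length < requiredDeliverables(20).length); // true
```

The point of writing it down this explicitly, even on a wiki page rather than in code, is that a small project can never accidentally inherit the large-project paperwork.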

The key point to remember is that the project management methodology is a framework for all projects, not a straitjacket.  The framework needs flexibility to support all projects, no matter their size, while producing results.

One size really does not fit all, so find the size that fits your needs to successfully manage your small project.

Are You an Effective Leader?

I'm a bit of a history buff, and I recently finished reading Jeff Shaara's new book "The Smoke at Dawn," which focuses on the Civil War battle for Chattanooga.

The book got me thinking about what makes an effective leader. At the beginning of the novel, one general has every advantage but focuses on the wrong things, while the other begins at a major disadvantage, focuses on the right things, and ends up winning the battle.

The novel reinforced some core leadership principles that were good reminders for me.

  • First and foremost – where you decide to focus your energy matters. You can allow your attention to be distracted and squandered on the petty minutia or you can keep yourself focused on key goals. An effective leader doesn’t ignore the details, but does know what is important and what is not. An effective leader actively chooses to spend most of his or her energy on what is important.
  • Second, you need to identify a goal to be accomplished and share that vision. An effective leader ensures that everyone on the team understands what the goal is, why the goal is important, and the part they play in making the goal a reality. Even the “reserve forces” play an important role, and they need to be told what it is.
  • Third, you need to listen to and trust the people in the trenches. An effective leader listens to the team’s problems and removes roadblocks. He or she also listens to their ideas and lets them experiment with different ways to reach the goal.
  • Fourth, you need to recognize and acknowledge the efforts of the team, even when they don’t succeed. An effective leader holds people accountable, but also helps them learn from mistakes.
  • Finally, you need to recognize, acknowledge, and act to correct your own mis-steps.

So, in brief, that is the refresher leadership course I gained from reading a novel. It seems that others have found similar inspiration: http://blogs.hbr.org/2014/07/what-made-a-great-leader-in-1776/ and http://theweek.com/article/index/259151/lessons-from-lincoln-5-leadership-tips-history-and-science-agree-on

So — What leadership lessons have you drawn from unexpected sources?

 

5 Warning signs that your methodology needs a reset

Project methodologies tend to grow dysfunctional over time. The breadth of their standardization increases until the only person who really knows how to use the methodology is its owner.

Too many templates, too many standards, too many hours required for initial training and for retraining on new templates and standards – and perhaps too many good resources moving on to employers with a less rigid approach.

To find out if your project management methodology is heading down the wrong path, look for these warning signs:

  1. You have a full time position dedicated to policing methodology compliance
  2. Your methodology manages by standard and template instead of by desired outcome and requirements
  3. Your methodology continues to get bigger over time, and details with little or minor influence on success have never been pared away
  4. Your projects are taking longer to implement
  5. Your project sponsors are growing more frustrated with each project you attempt to implement

The methodology should be a guideline, not a noose, for organizational projects – supporting the strategic goals of the organizational ecosystem instead of drowning them in a pool of standardization. If you see any of these warning signs, maybe it's time to hit the reset button.

 

Avoiding Agile Anarchy


Conventional Agile Methodology Wisdom lists three factors that define an Agile-ready project:

  1. High Uniqueness
  2. High Complexity
  3. Aggressive Deadlines

After using these three parameters to select your first agile project, there is still legwork to be done before sprints are humming along.

Many agile initiatives are announced by fiat with the team structure, sprint length and other basic rules of the road mandated by the Agile Initiative Sponsor. They dive right into development sprints, gathering user stories along the way to build a backlog. Here are some ways this approach could backfire:

  • In a rigid, hierarchical organization, the ability of teams to self-organize is often historically non-existent, and the change management hurdle might be a gap too big to jump. There are many ways that interoffice personalities and politics can sink an agile initiative in its early stages, or at any point along the way.
  • Complex, unique projects require some upfront work on architecture before the development sprints can begin. Agile teams can best manage this by making the first few sprints architecture sprints. Time and again, we have seen horror stories when the overall design or architecture is glossed over:
    • Parallel agile teams within a business design disparate UIs to enable functionality that is essentially the same but serves the needs of one particular product group. Before long, it's obvious that external stakeholders are confused and put off by having to remember two different ways of interacting with the same company.
    • User stories are taken down as the basis for development sprints, but they fail to consider the secondary stakeholders. BI reporting needs are often missed.
    • Prioritization of the backlog is driven by business need, without any attention to building foundational pieces first, then layering on transactions.

In short, Agile without Architecture leads to Anarchy, and a lasting bad impression that will taint future Agile efforts. It’s best to look before you leap and take time to address any Agile readiness gaps.

 

Part 3: Creating an Editable Grid in CRM 2013 Using Knockout JS

This is the third installment following Part 2, which demonstrated the editable grid from inside Microsoft Dynamics CRM 2013, and Part 1, which introduced the editable grid in CRM 2011. This blog introduces paging.

I will first demo what the grid looks like with paging in a CRM 2013 environment. Afterwards, I will walk through the main block of code.

I adopted the concepts from this great blog post from Ryan Vanderpol, about adding a pager to a grid.

Demo

The following screen shot demonstrates the pager inside the grid.

CRM editable grid

The above demonstrates:

  • The “Previous” and “Next” buttons allow the user to move forward and backward one page at a time. Currently, the “Previous” button is disabled because the first page is being displayed.
  • The numbers “1”, “2”, and “3” represent the page numbers.

Code Walkthrough

The following code represents the additional changes required to the source originally introduced in Part 2 of my blog.

I have added a new resource to the mix:

  • new_bootstrap_no_icons.css

I have made changes to the following source.

  • The HTML web resource below.
    *Look for code marked in green.
    *Struck-out code is either replaced or removed.

See Code Walkthrough here

Code Comments

Snippet comments:

  • `<tbody data-bind="foreach: pagedList">` – Loops through the `pagedList` collection instead of the original products collection.
  • `<div class="pagination">` – Represents the paging UI controls; this markup uses styles from the Bootstrap CSS.
  • `self.pageSize = ko.observable(3);` – Establishes the number of rows to display per page.
  • `self.pageIndex = ko.observable(0);` – Determines which page to show when the form loads.
  • `self.moveToPage(self.maxPageIndex());` – Called after a new record is inserted.
  • `if (self.pageIndex() > self.maxPageIndex()) self.moveToPage(self.maxPageIndex());` – Called after a record is removed.
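To make the mechanics concrete, here is a minimal plain-JavaScript sketch of the paging logic described above. It deliberately omits Knockout's observables (in the real grid, pageIndex and pageSize are ko.observables and pagedList is a ko.computed over the backing array), and the helper name makePager is my own:

```javascript
// Minimal sketch of the pager logic, without Knockout observables.
// In the real grid, pageIndex/pageSize are ko.observable()s and
// pagedList/maxPageIndex are ko.computed()s over the grid's row array.
function makePager(items, pageSize) {
  var pageIndex = 0;
  return {
    // Last valid zero-based page index for the current item count.
    maxPageIndex: function () {
      return Math.max(0, Math.ceil(items.length / pageSize) - 1);
    },
    // The slice of items shown on the current page (what foreach binds to).
    pagedList: function () {
      var start = pageIndex * pageSize;
      return items.slice(start, start + pageSize);
    },
    // Clamp the requested page into the valid range.
    moveToPage: function (index) {
      pageIndex = Math.min(Math.max(0, index), this.maxPageIndex());
    },
    pageIndexValue: function () { return pageIndex; }
  };
}

// After an insert, jump to the last page so the new row is visible;
// after a remove, the same clamp keeps the index valid if the last
// page just disappeared.
var pager = makePager(["a", "b", "c", "d", "e", "f", "g"], 3);
pager.moveToPage(pager.maxPageIndex());
console.log(pager.pagedList()); // last page: ["g"]
```

The clamp in moveToPage is exactly why the snippet above re-checks pageIndex after a record is removed: deleting the only row on the last page would otherwise leave the grid pointing at a page that no longer exists.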

What’s next?

In future blog posts:

  • Resolving deadlocks when saving multiple records from the editable grid.
  • Sorting.
  • Do you have any suggestions on what you would like to see?

Diagnose Your Inefficiency Potholes

Many employees complain about work-related inefficiencies as readily as Wisconsinites bemoan the craters (aka potholes) left in the roads each winter. In response, companies usually acknowledge that making improvements is critical and do their part by researching Enterprise Resource Planning (ERP) options. But are all work-related inefficiencies really due to a legacy system? Are people jumping the gun in assuming so, or misidentifying a process problem? Could some of these issues disappear with a few simple process adjustments? Without empowerment and support, all the technology in the world won't move your business forward.

There is no exact formula for determining whether a problem stems from a bad system or a bad process, but asking yourself some basic questions can help you figure out where the problem lies. For example:

  • Would implementing new process improvements really resolve the problem?
  • Could implementing new system functionality resolve the problem and also provide a competitive edge?
  • Do the system benefits outweigh process benefits?

The following steps should aid you in your diagnosis and decision-making:

Create a problem inventory

Interview Subject Matter Experts (SMEs) from the various departments affected to develop a problem inventory list.

Identify process-related problems

Identify all process-related issues from your inventory list. Ask yourself: What is the root cause of the problem? Is it a lack of communication, a lack of enforcement, or the lack of an actual process? If the answer to any of these is yes, the problem likely stems from a process issue.

Examples of process-related problems include:

  • A customer is upset that they’re getting bounced around
  • Sales Agents aren’t required to track or manage lead information
  • No official process for returns exists. (If an actual documented process cannot be provided, there probably isn’t one.)

These items may also range in severity. While going through this process, consider assigning priority levels or at least identify quick fixes.
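One lightweight way to keep such an inventory is sketched below in JavaScript, purely for illustration; the fields, categories, and helper name are my own invention, not a formal method:

```javascript
// Illustrative problem inventory: tag each item with a suspected root
// cause and a priority. Fields and categories are invented for this sketch.
var problemInventory = [
  { problem: "Customers bounced between departments",    suspectedCause: "process", priority: "high" },
  { problem: "No visibility into inventory availability", suspectedCause: "system",  priority: "high" },
  { problem: "Leads not tracked by sales agents",         suspectedCause: "process", priority: "medium" }
];

// Work the process-related items first: fixing them is cheap, and what
// remains afterward is the real requirements list for a new ERP system.
function byCause(inventory, cause) {
  return inventory.filter(function (p) { return p.suspectedCause === cause; });
}

console.log(byCause(problemInventory, "process").length); // 2
```

Even this crude tagging forces the question the article raises: of everything people complain about, how much would a new system actually fix?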

Make process improvements where possible

This step is important because it improves overall business processes and productivity through the identified improvements. It also validates which problems can realistically be resolved. It may take a few weeks to a few months to complete, but it provides important insight and sets up the next step.

Focus on system-related problems

Once process-related problems are identified and resolved, you can be reasonably confident that the remaining problems are system-related, and decide whether a new ERP system would be advantageous.

Examples of system-related problems include:

  • No visibility to inventory availability
  • Multiple customer masters, item masters, and vendor masters
  • Reports require manual manipulation (the current system lacks reporting functionality)

This step will not completely resolve a company’s problems and inefficiencies, nor will it guarantee employee satisfaction. It will, however, allow for a more focused approach when considering solutions. It also provides the added benefit of some inexpensive process improvements along the way.

Total Recall: The True Cost of Foodborne Illness

All eyes are on Tyson this week after its recall of chicken nuggets found to contain traces of plastic. Unfortunately, it's not just the makers of highly processed foods that are struggling with recalls right now.

As April unfolds, we see that the organic food industry is not immune either:

  • Three purveyors of organic black peppercorns here, here and here have also announced recalls this week.
  • And, the real shocker is this one: Tea Tree Oil mouthwash is recalled because of bacterial contamination, despite the many websites and even an NIH article touting tea tree oil’s antibacterial properties!

Traceability of the root cause is difficult for both contaminated food and hazardous consumer products, as the recent Fitbit Force recall shows. There still doesn’t seem to be an answer as to what material in the wristbands caused so many users to break out in a rash.

As the following infographic shows, foodborne illness is a serious issue, and some companies are better than others at weathering a recall crisis. As we have said in earlier blog posts, social media has been a real game changer during recent recall crises, in ways both positive (providing a way to tap into rising consumer concerns to spot quality issues early) and negative (the viral consumer frustration response at any lag in response or mis-step during a recall crisis).

[Infographic: Total Recall – The True Cost of Foodborne Illness]

 

 

The Politics of Data in an ACO

Imagine the following scenario. You discover that you are the victim of identity theft: purchases have been made in your name, and your personal credit has been ruined. But you are saved – you have paid a watchdog organization to monitor your credit, and it has information that clears your good name! So, when you apply for a loan with a bank, you ask the credit monitoring agency to share the details of your prior credit problems and their resolution with the bank. But the agency refuses, because that information might help the bank understand your needs and price the competing credit monitoring service the bank resells to its own customers – i.e., you. The monitoring service won't release your information.

Would you put up with this conflict of interest? NO!

In healthcare, we routinely tolerate this conflict of interest in many different forms. Even though health insurers do not provide patient care directly, these payers accumulate a very useful holistic view of each patient's history, assembled from the claims submitted for payment by many different care providers. There are numerous instances where, if this information were shared with other providers, it could improve the care management plan in a more timely manner, increasing the likelihood of better quality of care and possibly reducing the overall cost of care across an extended episode.

Here is an example: a patient is admitted to the hospital and receives a pacemaker to address his atrial fibrillation. After discharge, the patient follows up with his cardiologist, who diagnoses digoxin toxicity and reduces the digoxin dose. However, the patient tries to save a few dollars by finishing his current prescription, only to be admitted to the hospital a couple of weeks later for the toxicity. This is an opportunity where a care manager could have intervened: the cardiologist's toxicity diagnosis was submitted to the payer, yet no new prescription was filled within a few days. The care manager could have helped the patient comply with the cardiologist's instructions, avoiding an inpatient admission.

Healthcare provider organizations and payers (and in some cases regulators) are working together to break down these walls in an effort to increase value across the spectrum of care delivery and the industry in general. However, the sometimes conflicting vested interests of these interacting payers and providers can still be an obstacle, influencing the politics of information disclosure and sharing in the emerging environment of accountable care delivery models.

There is great diversity in the participating organizations that collaborate to make up an ACO. This is definitely not one size fits all. Viewed from the perspective of sharing risks across parties without the immediate concern about maximizing volumes, the integrated provider-based health plans, such as Kaiser Permanente, Geisinger Health System, and Presbyterian Healthcare Services, are already inherently sharing this risk and are reaping the rewards as a single organization. That’s great for the few organizations and patients that are already members participating in one of these plans.

Unfortunately, for other organizations there is still much to be worked out regarding proactive sharing of data, both within an accountable network of providers acting across care settings and with the payer(s). Within the network, hospital systems usually have some of the infrastructure in place, and they know how to routinely share data between systems and applications using standard data exchange conventions such as HL7 and CCD. In collaboration with HIEs, these systems can help facilitate active data distribution, and they very often push the organization to address some of the more common aspects of data governance. However, even when this routine transactional and operational data is being exchanged and coordinated, there is still a great unmet need for the ACO to buy or build a data repository where this data can be integrated and consumed to support reporting and analytics across various functional areas.

Many organizations encounter further challenges in defining and agreeing on which are the authoritative sources of specific elements of data, what are the rights and limits on the use of these data, and how can these assets be used most effectively to facilitate the diverse objectives of this still-emerging new organizational model.

An even greater challenge for some ACOs is collecting the required data from the smaller participating provider networks. These organizations often have less capability to customize their EHRs (if they even have EHRs in place) and less resource capacity to enable the data sharing that is required. To get around this, some ACOs are:

  • Standardizing on a small number of EHRs (ideally one, though not always possible). This provides the potential to increase economies of scale and leverage shared learnings across the extended organization.
  • Manually collecting data in registries. Although not always timely, this addresses some rudimentary needs of population-focused care delivery and overcomes common barriers such as a given provider's willingness to collect additional required data and comply with standards.
  • Not collecting desired data at all. While this seems hazardous, progress toward the overall clinical and/or financial goals of the ACO can still be positive, even if the organization cannot directly attribute credit for beneficial outcomes or improvements, and the ACO avoids the overhead of collecting and manually managing that data.

Regardless of what data is collected and shared within the ACO, the payers participating often have the highest quality, most broadly useful longitudinal data because:

  1. The data is ‘omniscient’ – it represents, in most cases, all of the services received by (or at least paid for on behalf of) the patient, provided a claim for those services has been submitted to and paid by the participating payer.
  2. Some of this data is standardized and consolidated making it easier to manage.
  3. The data is often enriched by mature information systems with additional attributes such as risk-model scores and membership in various disease-focused or geographic populations and segments.

Consequently, payer data very often forms the longitudinal backbone that most consistently extends across the various episodes constituting a patient’s medical history and is very important to the success of the ACO’s mission to drive up quality and drive down costs. Despite this opportunity for an ACO to improve its delivery of care to targeted populations, sharing of data is still achieved unevenly across these organizations because some payers feel the utilization, cost and performance data they have could be used to negatively impact their position and weaken their negotiations with the hospitals and other provider organizations.

While claims have traditionally been the de facto standard and basis for many of the risk and performance measures of the ACO, more progressive payers are also now sharing timely data pertaining to services received outside the provider network, referrals between and among providers, authorizations for services, and discharges, further enabling ACOs to utilize this information proactively to implement and measure various improvements in care management across the spectrum of care settings visited by patients under their care.

Collaboration between provider organizations and payers at a data level is moving in a positive direction because of the effort given to ACO development. These efforts should continue to be encouraged so as to realize the possibilities of leveraging timely distribution of data for better treatment of patients and healthcare cost management.