Processes (Workflows) Best Practices

Enabling new workflow options in Microsoft Dynamics CRM 2011 can affect the overall performance of the implementation. Keep the following best practices in mind when considering how to ensure that Microsoft Dynamics CRM workflow functionality performs optimally for a particular implementation.

  • Determine the business purposes for implementing workflow before enabling the functionality. During planning, analyze the business scenario and define the goals of workflow within the solution. Workflow functionality can provide business process automation, exception handling, and end-user alerts.

  • Decide on the appropriate security/permissions model for workflow. With established business goals in place, determine the scope of users that will be affected by the workflow implementation, and identify who will create and maintain workflows, who will apply and track them, and who will troubleshoot workflow issues.

  • Use the Scope property sensibly. The Scope property associated with workflow rules defines the extent of records affected by that rule. For example, rules configured with a User scope affect only the records owned by that user, while rules configured with an Organization scope affect all records within the organization, regardless of which user owns each record. When creating workflows, make sure to identify the appropriate scope value for each workflow rule to minimize the number of related system events.
  • Take into consideration the overall load associated with workflow within a deployment. Think about the number of instances that each workflow definition triggers, and then also consider these factors, which affect workflow load:
    • the number of workflow definitions
    • the entities they are triggered on
    • the number of records affected
    • the size of the data
    • the overall data load

Considering how these factors play out in the system on a typical day provides a better understanding of the processes and of how the load varies. Based on this analysis, the workflows can be optimized as required.

  • Review workflow logic wisely. Consider the following factors:
    • Workflows that include infinite loopbacks due to semantic or logic errors never terminate through normal means and therefore greatly affect overall workflow performance.
    • When implementing workflow functionality within a CRM 2011 deployment, be sure to review the logic in workflow rules and any associated plug-ins for potential loopback issues.
    • As part of ongoing maintenance efforts, periodically review published workflow rules to ensure that duplicate rules are not affecting the same records.
  • Be cautious when defining workflows that are triggered on update events. Because ‘Update’ events occur frequently, be very specific about which attributes the system monitors to trigger the workflow, and avoid using ‘Wait’ states in workflows that are triggered on update events.
  • Scale out as necessary to improve performance in large deployments. Use dedicated computers to run the Async service for large-scale deployments. That said, increasing the number of servers running the Async service creates additional stress on the server running Microsoft SQL Server, so make sure to follow appropriate optimization and tuning strategies on the data tier and investigate the possibility of increasing the number of computers running Microsoft SQL Server.
  • Test workflows. Make sure to test and monitor the performance of new workflow functionality before implementing it in a production environment.
  • Async plug-ins. Think through whether plug-ins should run synchronously or asynchronously. When the priority is user responsiveness, running a plug-in asynchronously lets the user interface respond to the user more quickly. However, asynchronous plug-ins add load to the server, which must persist the asynchronous operation to the database and process it with the Async service. When scalability is essential, running plug-ins synchronously typically places less load on the servers than running them asynchronously.
  • Balancing workflows and asynchronous plug-ins. Asynchronous plug-in records and workflow records in the AsyncOperationBase table are managed with the same priority. Hence, introducing a large number of asynchronous plug-ins into a system can reduce overall throughput or increase the time between the triggering and the processing of individual workflows. For that reason, make sure to consider the relative importance of the workflows in the system before adding numerous asynchronous plug-ins to the solution.
  • Child workflows. Child workflows run as workflow instances that are independent of their parents. This can facilitate parallel processing on a system with spare capacity, which can be useful for workflows with multiple independent threads of high processing activity. However, if parallel processing is not critical, for example when other threads of workflow logic are simply blocked waiting for external events to occur, child workflows introduce additional overhead.

NOTE: If workflow functionality within a CRM 2011 implementation is not acting as expected, verify that the Async service is running properly. Often, restarting the Async service will unblock workflow processing without affecting the functionality of the workflows that were in the pipeline.

  • Monitor the Microsoft Dynamics CRM 2011 database for excess workflow log records. Workflow processing in Microsoft Dynamics CRM depends on the Asynchronous Service, which logs its activity in the AsyncOperationBase and WorkflowLogBase tables. Performance may degrade as the number of workflow records in the CRM 2011 database grows over time; a quick way to spot-check this is sketched after the settings below.

Microsoft Dynamics CRM 2011 includes two specific settings, ‘AsyncRemoveCompletedJobs’ and ‘AsyncRemoveCompletedWorkflows’, which can be configured to ensure that the Asynchronous Service automatically removes log entries from the AsyncOperationBase and WorkflowLogBase tables. These settings are as follows:

    • The ‘AsyncRemoveCompletedWorkflows’ setting is visible to users in the interface for defining new workflows, and users can set the removal flag independently on each of the workflows they define. Note: when registering an Async plug-in, users can also specify that successfully completed Async plug-in execution records be deleted from the system.
    • Users can change the ‘AsyncRemoveCompletedJobs’ setting by using the deployment Web service. The setting is configured to True by default, however, which ensures automatic removal of entries for successfully completed jobs from the AsyncOperationBase table.
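
To spot-check whether these removal settings are keeping the log tables in check, the table growth can be queried directly in an on-premises deployment. The following is a minimal, hedged sketch rather than a supported administration tool: it assumes SQL access to a hypothetical Contoso_MSCRM organization database, the Python pyodbc package with the SQL Server ODBC driver, and that StateCode 3 marks completed asynchronous operations; verify those assumptions against your own deployment before relying on the numbers.

    import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

    # Hypothetical server and organization database names, for illustration only.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=CRMSQL01;DATABASE=Contoso_MSCRM;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # Overall row counts show how quickly the two log tables are growing.
    for table in ("AsyncOperationBase", "WorkflowLogBase"):
        cursor.execute(f"SELECT COUNT(*) FROM {table}")
        print(table, cursor.fetchone()[0])

    # Completed asynchronous operations that are still being retained.
    # StateCode 3 = Completed is an assumption; confirm against your metadata.
    cursor.execute("SELECT COUNT(*) FROM AsyncOperationBase WHERE StateCode = 3")
    print("Retained completed async operations:", cursor.fetchone()[0])

    conn.close()

If the completed-operation count keeps climbing even with the removal settings enabled, revisit the settings and the volume of workflows being triggered.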

Is the 1-9-90 rule for social participation dead?

It has long been an axiom that getting people to participate in online communities is hard, and the 1/9/90 rule helped explain why: 1% will be die-hard content creators, 9% will participate, and 90% will be passive consumers sitting on the sidelines.

A recent BBC study claims the old rules are dead and that a whopping 77% of adults should be considered participators in some capacity. Interestingly, GigaOm pounced and claimed the old rules still apply.

I think the BBC research is on to something and that online participation patterns have changed. A few things may have contributed:

  • Consolidation: social networks such as Facebook and Twitter consolidate updates and posts from multiple communities for us and allow us to respond directly from there. You no longer need to go and check 7 different communities to see what is going on.
  • Ease of content creation and sharing, especially from mobile devices. Probably too easy, if you ask me: if you allow it, your phone will post your location, the pictures you take, and more without even asking. The success of Instagram is just one example. Being connected 100% of the time allows us to interact 100% of the day.
  • We are not anonymous anymore. It has been a slow change, but if the late ’90s were about virtual identities and avatars, now we interact as real people. It may look like a small change, but the whole nature of online interaction has shifted: what used to be an outlet for interactions we wanted to have outside of our normal (and sometimes restrictive) social circle is now mostly interaction with that very circle. More and more, online communities and social networks augment and extend our real relationships with people and brands.
  • Participation is expected. Some people who came to the party felt a bit out of place and stayed close to the wall for a while, but after some time you realize that keeping to yourself in a social setting is not very nice and that people actually notice. If you are part of the community, participation is now expected.

So if the BBC is right and we should be expecting more participation, what does it mean for businesses?

Business social participation may still be closer to the old rules, because business communities do not reflect a close-knit social group, but as more people become comfortable with sharing, it will start to have an impact.

Internally, collaboration and social networking with colleagues will eventually follow the same pattern of heightened participation if you provide the same enablers: aggregate and consolidate activities and updates so they are easy to access, make it easy to respond to them, and embed interaction and sharing everywhere in internal web applications, sites, tools, etc. Making sharing a social norm may not be too far off.

Externally, in addition to the brand enthusiasts and deal seekers, there is now potential to turn many more people into participants:

  • Think about creating content that people would want to share. Too many websites and social media sites focus on the marketing side (“what we have to sell”). Cool or useful things to do with the product, or content that is simply related to the category, will go viral more easily.
  • Many websites have added sharing and likes to their pages, but few take it to the level of actually allowing specific questions or comments on content or products through social networks.
  • Think mobile sharing, from QR codes in trade show booths to special coupons for scanning or photographing in the store. Even my dentist has a promotion offering a free whitening pen if you scan a code and like him on Facebook. Brilliant.

Share More: a framework for enhancing collaboration

In a great study published last year, McKinsey and Company showed how companies that use social and collaborative technologies extensively (networked companies, in their terminology) outperformed traditional companies. They called it “Web 2.0 finds its payday.”

So if you work for a networked company, congratulations. Now, if your company is part of the vast majority of companies struggling through some form of collaboration but not seeing enough benefits, how do you get to the payoff stage?

In the following series of posts, I’ll try to offer a methodology and examples for how to do just that: elevate the level of collaboration and create a fully networked organization, one step at a time.

We call this process Share More.

The premise is simple: for each business area or function, find a real-world business challenge where collaboration can make a difference. Implement it. Move on to the next one.

Creating the overall framework is like creating an association wheel with the term “Share” in the middle.

Sharing can be with just a few team members or with the whole company. It can be internal or external. If you stop and think about all the interactions you have in a week, which ones cause you the most pain and take the most time? Can these interactions be made simpler using technology? Can you Share More?

The first Share More area I’d like to address is process and workflow solutions.

Share Process

Process and form automation is all about tracking and control. The really dramatic change is in giving managers and administrators visibility into every step and a log of every change and update. It can also speed the process up and save the effort of typing information into other systems, initiating emails, or filing paper into physical files.

We’ve worked with a large hospitality organization to automate all HR- and payroll-related forms using InfoPath and SharePoint, and we learned a lot of valuable lessons that apply to many process automation efforts:

  • Strongly enforce data integrity: most forms are created to collect data that will eventually be fed into another system. Therefore, data input must come from the same source system it will end up in (a small sketch of this idea follows the list). Values and choices have to be restricted to valid combinations, and open text fields kept to a minimum. The cleaner the data is, the less trouble it will cause down the road.
  • Know how the organizational and reporting hierarchy is maintained: while you may know which system holds the organizational reporting structure, knowing that it is 100% accurate and kept up to date is a lot harder. Since some forms require sending confidential information like salary for approval, the wrong reporting relationship can compromise important information. Consider masking personal or confidential information if it is not essential for the approval requested (the data, encrypted, can still be part of the form).
  • Don’t over-customize: like our beloved tax code, approval workflows can get extremely complicated and convoluted as organizational politics that evolved over the years create special cases and more exceptions than rules. Codifying these special cases is expensive and prone to change. Consider the project an opportunity to streamline and simplify the rules.
  • Augment with stronger 3rd-party tools: while core systems like SharePoint contain a built-in (and free) workflow mechanism, it is limited in control, flexibility, scalability, and management as it comes out of the box. Third-party tools like Nintex and K2 BlackPoint provide added flexibility and scalability, for a price.
  • Version deployment: forms and processes will change. How will updates be deployed without interfering with running flows and processes?
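
Since the first lesson above is really about keeping a form’s inputs consistent with the system the data will eventually land in, here is a minimal sketch of that idea in Python. The server, database, table, and column names are placeholders invented for illustration; the point is that the form’s choice list is generated from the source system rather than typed by hand.

    import pyodbc  # assumes an ODBC driver for the source system's database
    import xml.etree.ElementTree as ET

    # Hypothetical connection to the HR source system that the form data will feed.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=HRSQL01;DATABASE=HRSource;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # Pull the currently valid cost centers; table and column names are placeholders.
    cursor.execute("SELECT CostCenterCode, CostCenterName FROM CostCenter WHERE IsActive = 1")
    valid_cost_centers = [(code, name) for code, name in cursor.fetchall()]
    conn.close()

    # Emit a simple XML fragment a form template can use as its drop-down source,
    # so the form only ever offers values the target system will accept.
    root = ET.Element("costCenters")
    for code, name in valid_cost_centers:
        ET.SubElement(root, "costCenter", {"code": code, "name": name})
    ET.ElementTree(root).write("cost_centers.xml", encoding="utf-8", xml_declaration=True)

Regenerating this list on a schedule keeps the form choices in step with the source system instead of letting them drift.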

In future posts I’ll explore other opportunities for Sharing More, including Sharing Insight and Sharing Responsibly, and we’ll look into specific opportunities for collaboration and sharing in insurance and healthcare.

Virtualization in Insurance

The Power of a Desktop in the Palm of Your Hand

Is Desktop-as-a-Service a Subset of IT-as-a-Service?

I read this blog recently, and it prompted some reflection on the possible applications for time- and cost-saving benefits in the insurance industry.

There are two basic types of insurance carriers from an IT perspective:

  1. Carriers that sell insurance and use IT to support their business goals.
  2. Carriers that are an IT shop that also sells insurance.

Though these types of carriers are very different, virtualization is a concept that benefits both.  Virtualization enables carriers with smaller IT shops to effectively leverage improved support efficiencies and more flexibility and allows larger IT organizations to redeploy resources for bigger projects like core system upgrades.

“Virtual desktops,” the keystone of virtualization, free a user from hardware burdens by introducing “greater synergy, efficiency, and agility.” This allows users to embrace a mobile and more flexible work style.  This versatile technology applies to a variety of scenarios. With the help of an iPad or Galaxy tablet connected via WiFi to the local area network (LAN) and radio-frequency identification (RFID) tags, doctors have all of their patients’ records at their fingertips. A similar approach benefits insurance agents when visiting customers. With a mobile desktop in tow, Claims Adjusters carry their office with them, and Underwriters spend more time in the field reviewing referrals with Agents.

Desktop-as-a-Service as a Subset of IT-as-a-Service has its own benefits. With virtual desktops, new users easily and quickly enter an established network with their own legacy systems already on their desktop.  It becomes easier for an agent to catch a plane to another office, log in, and there’s his desktop, ready to provide personal office functionality.

Lastly, as part of efficiency improvements, virtualization minimizes the cost of hardware upgrades not only for those who work remotely, but for all users in an office.  Because all applications run on servers, users can operate smaller systems without a large hard drive and processor.  In addition, any application and operating system problems users experience can be addressed without requiring IT to visit the remote machine.

Sorry, Nick Burns the computer guy! You’ll be out of a job.

Doublin’ Down in Hard Times

Hard times are definitely here.  By this time everybody in IT-land has done the obvious: frozen maintenance where possible, put off hardware and software upgrades, outsourced where possible, trimmed heads (contractors, consultants, staff), pushed BI/CPM/EPM analytics projects forward, and tuned up data and web resources.

Now is the time to think outside the bunker!

IT needs to consider what must be done to nurture the green shoots poking through the nuclear fallout. All of the talking heads and pundits see them (glowing with radiation or whatever), and the utmost must be done to make sure they survive and grow, or we shall all sink into the abyss!

This is the time to double down in IT (poker speak).  It is not about heavily hyped Cloud Computing or the latest must-have tech gadget, but about something much more mundane and boring: improving the business process.  There, I’ve said it; what could possibly be more boring?  It doesn’t even plug in.  In fact (shudder!), it may be partially manual.

Business process is what gets the job done (feeding our paychecks!).  Recessions are historically the perfect time to revise and streamline (supercharge ’em!) existing business processes, because doing so allows the company to accelerate ahead of the pack coming out of the recession.  In addition, a recession acts as something of a time-out for everybody (I only got beatings, no time-outs for me), like the yellow flag during a NASCAR race.  When the yellow flag is out, it’s time to hit the pits for gas and tires.  Double down when it is slow to go faster when things speed up again; obviously the only thing to do.

How? is usually the question.  The best first step is to have existing business processes documented and reviewed.  Neither the staff driving the process nor the business analysts (internal or consultants) are that busy at the moment.  That means any economic or dollar cost of doubling down will be minimized under the economic yellow flag.  The second step is to look for best practices, then glance outside-the-box to maximize improvement.  The third step is to look for supporting technology to supercharge the newly streamlined business process (I knew I could get some IT in there to justify my miserable existence!).

Small and medium businesses get the biggest bang for the buck with this strategy (just picture trying to gas up and change the tires on the Exxon Valdez at Daytona).  It allows SMBs to leapfrog the best-practice and technology research the Global 2000 have done and cut to the chase without the pioneer’s cost (damn, those arrows in the backside hurt!).  Plus, implementation is cheaper during a recession (I love to be on the buy-side).  The hardware, software, and integration guys have to keep busy, so they cut prices to the bone.

The way forward is clear: IT only needs to lead the way. Following is kind of boring anyway.

Practical Project Management

In times like these, every PMP needs a healthy dose of a new and improved PMP, that is, project management practicality. As the recession lingers, those of us who drive the success of projects, programs, and any corporate initiative are going to have to find new ways of doing more with less.  Here are seven practical tips for cutting corners without sacrificing project success.

1. Curtail time-consuming interviews for requirements-gathering. There are several easy ways to cut the effort required to gather information from subject matter experts:

  • Group them by functional area (when appropriate) and avoid interviewing single stakeholders.
  • Use structured information-gathering templates and require that subject matter experts take a pass through them and begin filling in the required information before the meeting. The keyword here is structured. I prefer Excel templates with restricted ranges of responses, rigidly enforced with data validation limiting those responses to a list.  Structure the information you need into columns, apply data validation, and put explanatory notes as cell comments in the column headers, as sketched below.
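
To make the “structured template” idea concrete, here is a minimal sketch using the openpyxl Python library. The column names, response lists, and file name are made up for illustration; the pattern that matters is restricting responses with data validation and putting guidance in header comments.

    from openpyxl import Workbook
    from openpyxl.comments import Comment
    from openpyxl.worksheet.datavalidation import DataValidation

    wb = Workbook()
    ws = wb.active
    ws.title = "Requirements"

    # Column headers, with guidance for the subject matter expert in cell comments.
    ws["A1"] = "Functional Area"
    ws["A1"].comment = Comment("Pick the area this requirement belongs to.", "PM Template")
    ws["B1"] = "Priority"
    ws["B1"].comment = Comment("Must have, Should have, or Nice to have only.", "PM Template")
    ws["C1"] = "Requirement"
    ws["C1"].comment = Comment("One requirement per row, stated as a single sentence.", "PM Template")

    # Data validation restricts responses to a fixed list, keeping the data clean.
    area_list = DataValidation(type="list", formula1='"Claims,Underwriting,Billing,Reporting"', allow_blank=False)
    priority_list = DataValidation(type="list", formula1='"Must have,Should have,Nice to have"', allow_blank=False)
    ws.add_data_validation(area_list)
    ws.add_data_validation(priority_list)
    area_list.add("A2:A200")
    priority_list.add("B2:B200")

    wb.save("requirements_template.xlsx")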

2. Make your meetings more productive.

  • Know your goals. Have an agenda and be ruthless about sticking to it.
  • Limit the attendees to those people with decision-making authority and real subject matter expertise. Bigger meetings cost more and waste more time.
  • Appoint a live note-taker. The note-taker should type the notes live during the meeting and send them out before the end of the day. Transcribing from written notes is wasted effort.

3. Restructure your project team. Combine roles and responsibilities, because fewer roles mean fewer handoffs. It’s better to have a smaller team running above 100% utilization than a larger team at or under 100% utilization.

4. Carefully define the scope of your analysis/requirements gathering effort. Don’t waste time documenting standard business processes in excessive detail; concentrate on the areas that have unique and/or critical requirements.

5. Hold the line on customizations. They add cost to the current project, and will complicate upgrade and migration projects down the road.

6. Request a mini-business case for custom reports. Every custom report should have a place in the spec that describes the business action that the report enables, as well as a list of alternative sources for the requested information if the custom report is not available. This will help the project sponsor make an informed decision when approving the custom report request.

7. Make project status more transparent. To reiterate an earlier post on PMOs: A well-defined, user-friendly, and well-maintained project portal site can cut down on the need for lengthy status meetings. Milestone status, next week’s key tasks, and open action items can be posted to the portal site. A weekly meeting can be used for exception-based reporting on lagging milestones and critical issues, allowing the project sponsor and key stakeholders to participate in resolution during the meeting.

Enterprise e-Commerce on a Shoe String Budget?


While inexpensively built and operated mom-and-pop e-commerce websites are as common as snow in New England in January, is it possible to build and operate an enterprise-grade e-commerce site on a shoestring budget? E-commerce at the enterprise level is not simply slapping a shopping cart onto your website and calling it e-commerce enabled. An enterprise solution may require:

  • Integration with legacy systems
  • Integration with supply-chain systems
  • Support for multiple currencies and tax codes
  • Multiple store-fronts
  • Profile and history driven offer management
  • Integration with a content management system
  • Business user control over promotions and pricing
  • …and more

The challenges of integrating with existing systems alone are daunting enough, never mind the fancy e-commerce functionality that is often considered vital for competitive differentiation. No wonder starting an e-commerce venture or an upgrade is considered a seven-figure expense. The cost of an enterprise-grade e-commerce product alone can easily account for twenty to forty percent of the budget. The other option is to go with a hosted or SaaS-based approach and avoid capital expense for software and infrastructure; not a bad approach for testing the waters, but in the long run the charges and fees can really add up.

A well executed e-commerce site can provide great returns on the investment by generating new revenue streams, enhancing existing ones, or reducing operational expenses – and that can’t be too bad for the budget or your career. However, in tough economic times the challenge becomes harder as getting approval for large complex projects becomes difficult and even the approved budgets can get slashed. If your budget gets cut, is there a way to still implement enterprise grade e-commerce? Can an open source e-commerce solution be the answer to the “do more with less” mantra? Is open source e-commerce ready to play with the big boys in the enterprise domain? Let’s explore these questions and the capabilities of the open source e-commerce solutions.

Let’s start with a common misconception: that an open source e-commerce product requires significant customizations, and that the cost of those customizations more than offsets any savings from not having to pay license fees. Implicit in this assumption is the notion that a commercial product requires little or no customization. However, real-world experience shows that this is not the case. Even the best commercial products cannot be used out of the box unless you decide to adopt their look and feel and their model of e-commerce. The cost of customizations can add up just as rapidly in a commercial product as it can in an open source one. Therefore, a prudent approach is to adhere to industry standards and best practices and use out-of-the-box functionality in areas that are not competitive differentiators. Heavy customization should be limited to the aspects of the website that are true differentiators and result in a unique user experience. This guiding principle applies regardless of the decision to use an open source or a commercial product.

There are a lot of inexpensive and open source e-commerce products out there; however, most of them are nothing more than a simple shopping cart, suitable only for the most basic needs of a simple web site. Apache OFBiz and Magento are two promising contenders that break from the pack and compete in the enterprise space. In this article we will primarily focus on OFBiz.

Apache OFBiz is actually an integrated suite of products that not only includes e-commerce capabilities but also provides support for accounting, order management, warehouse management, content management, and more. An enterprise e-commerce implementation cannot exist as a point solution; it has to integrate and work well with other back office processes and applications. OFBiz’s integrated suite can be used to automate and integrate most back office functions. Even if you decide not to use the built-in functionality, OFBiz can still be integrated with other existing systems, albeit with more effort and cost. It provides enough e-commerce functionality out of the box to match most enterprise needs, and the rest can be customized if needed. Here is a summary of our assessment of OFBiz:

Technical Capabilities

  1. E-commerce capabilities (Rating: B+): Robust e-commerce capabilities, including catalog management, promotion and pricing management, order management, customer management, warehouse management, fulfillment, accounting, content management, and more.
  2. Sign-on and security (Rating: B): Granular and robust security framework. The OFBiz security framework provides fine-grained control, including multiple security roles and privileges; roles can be used to control access to screens, business methods, web requests (URLs), and entire applications.
  3. Technical flexibility and ease of use (Rating: B): Very flexible but complex. OFBiz is an application development platform that can be used to build applications, and as such it provides a tremendous amount of flexibility. Use of the entire framework (which includes the database, an Object Relational Mapping (ORM) layer, a business object layer, scripting support, and UI tools) is optional.
  4. Integration with other apps and locations (Rating: A): Multiple integration methods. OFBiz business services can be exposed and accessed by multiple methods, including Remote Method Invocation (RMI) and XML Web Services (see the sketch at the end of this assessment). Integration directly with the OFBiz relational database is also possible.
  5. Scalability (Rating: A): Highly scalable. Java systems are highly scalable provided the production architecture is designed to support heavy load; a load-balancing device and redundancy at the web, application, and database tiers can provide both redundancy and scalability.
  6. Relational database integration (Rating: A): Support for all major database platforms. The most popular OFBiz database platforms are PostgreSQL and MySQL (both of which are open source). OFBiz has also been tested with Oracle, DB2, Sybase, and MS SQL Server. The default installation uses an Apache Derby database, which is not recommended for production use. Our research indicates some problems with the MS SQL Server database; this should be investigated further before selecting that platform.
  7. Skill set to support (Rating: N/A): The OFBiz framework and application are based on the following technology components:

  • XML
  • Web Development: HTML, CSS, AJAX/JavaScript, Apache
  • Java Development: Java, JSP, Freemarker, BeanShell, Tomcat application server (possibly)
  • Database Development and Administration: MS SQL Server (possibly), SQL, JDBC

Long term support of the application would require knowledge and familiarity in each of these technology sets.  While these technologies are mainstream and skills should be readily available in the future, skills and experience with the OFBiz framework that is built upon these technologies may not be.
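
As a rough illustration of the integration point noted in item 4 above, here is a minimal sketch of invoking an exported OFBiz service over XML-RPC from Python. It assumes the XML-RPC handler is enabled at the conventional /webtools/control/xmlrpc path, that the target service is marked for export, and that it accepts login.username and login.password for authentication; the host, credentials, service name, and parameters are all placeholders, so check the actual service definition before using anything like this.

    import xmlrpc.client

    # Placeholder endpoint; adjust host, port, and path for the actual deployment.
    OFBIZ_XMLRPC_URL = "https://ofbiz.example.com/webtools/control/xmlrpc"

    proxy = xmlrpc.client.ServerProxy(OFBIZ_XMLRPC_URL)

    # OFBiz services typically take a single map of named parameters.
    # 'createExampleOrder' and its inputs are hypothetical placeholders.
    params = {
        "login.username": "admin",        # service authentication (placeholder)
        "login.password": "ofbiz",        # service authentication (placeholder)
        "productStoreId": "DEMO_STORE",   # hypothetical input parameter
        "quantity": 1,                    # hypothetical input parameter
    }

    result = proxy.createExampleOrder(params)
    print(result)

Calling services over a web interface like this keeps an external e-commerce front end loosely coupled to the rest of the suite, with direct database integration available as a heavier-weight alternative.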

Business Position

  1. Financial stability (Rating: B): OFBiz is a “top level” project in the Apache Software Foundation, which provides support for the Apache community of open-source software projects. Apache projects are characterized by a collaborative, consensus-based development process, an open and pragmatic software license, and a desire to create high-quality software that leads the way in its field.
  2. Maturity of product suite (Rating: B): Open For Business (OFBiz) was initially launched in 2001. In early 2006, the project went through the Apache Foundation’s “Incubation” process, which reviews projects for quality and open source commitment, and OFBiz was promoted to a top-level Apache project in December 2006. The community for OFBiz is very active: the major posting board receives between 20 and 40 postings per day relating to OFBiz, and the original contributors actively monitor these sites and share knowledge.
  3. Reference accounts (Rating: B-): The total number of installations is unknown due to the nature of open source software. The OFBiz website lists more than 70 companies that use the software, but there are very few marquee names.

Implementing an enterprise e-commerce solution can be an expensive and complex process that requires analysis and investment in people, processes, and technology. While it would be insincere to say that an enterprise e-commerce solution can be implemented on a budget in the ballpark of a mom-and-pop e-commerce store, the budget can be significantly reduced by:

  • Carefully crafting business requirements
  • Adapting the business model to match the industry’s best practices
  • Reducing and carefully planning data migration and application integration
  • Keeping customizations to a minimum
  • Using an open source e-commerce platform

OFBiz provides a viable open source e-commerce stack that can be used to implement enterprise-grade e-commerce. Combined with good implementation practices and solid execution, it can slash costs by twenty to forty percent, which can sometimes make the difference between a project getting funded and getting shelved.

Insurance litigation in the economic downturn

I heard a report on the news this morning that in a recent survey, lawyers have indicated that they expect a dramatic decrease in business in 2009 and do not anticipate earning income at the same levels they earned in 2008.  Really?

That may be true for mergers & acquisitions and other similar purchase-related transactions, but I do not believe the current economic downturn will have a similar effect on the insurance industry.  In fact, I believe it will have the opposite effect.

I think the upsurge in litigation stemming from the collapse of the credit markets and the mortgage industry could surpass levels ever seen before.  Litigation during these times could involve some of the highest settlement amounts, and the largest numbers of parties sued and parties suing, we have ever seen.  Insurers are bound to get caught up in it, not only because they must defend their own interests, but mainly because of their policy responsibility to defend insureds against litigation brought against them.

Some insurance carriers are gearing up for that increase in defense costs.  The Hartford is already battening down the hatches in preparation for a litigation hurricane.  As the insurer for the Peanut Corporation of America, it has gone to federal court for clarification of the liability coverage in its policy, in preparation for the litigation defense costs and settlement payments related to the more than 1,800 product recalls and related illnesses.

People are losing their jobs and can’t make the payments on their Lexus because they overextended in the boom of ’07.  So those vehicles end up on eBay, on fire, or in a chop shop.  Insurance SIU departments see a swell in claim counts.  The number of injuries in car accidents goes up.  These are times when an insurer’s Corporate Performance Management (CPM), the ability to analyze its own data against its goals, and the incorporation of automated processes can really pay off and keep expenses down.  The identification of fraud also becomes key to insurers’ weathering the storm.  Lawyers send people to the same doctors and vice versa.  I remember a case of fraud where a doctor was reported to be treating 1,600 people in one day.  So, who gets involved in all these areas? Lawyers, both on the claimant side and on the carrier side.

Traditionally, economic downturns are the biggest catalyst for increases in insurance claims and insurance fraud; people need money.  The decrease in policies written, coupled with the increase in policies cancelled for non-payment of premium, is not as dramatic a cost change as the increase in claims.  People still recognize the need for insurance and the importance of maintaining their policies.  However, insureds and claimants feel they’ve been paying the premiums on their policies and now they need to get some money back.

I can’t see insurance lawyers experiencing much, if any, drop in revenue during this recession.

The Benefits to Insurance Carriers of Automated Workflow Processing

Do you still distribute paper files and mail the old-fashioned way?  I see this all the time.  Even Underwriting departments have people who distribute paper policy files to Underwriters for review of applications, renewals, MVR and CLUE reports.

Why do so many insurance organizations still use a manual distribution method for workflow, especially in the Claims arena, where transactions are so heavily paper based? Paper files and mail stacked on adjusters’ desks for handling without regard to priority create many problems.  An insurance organization takes on too much risk:

  • Increased error rates
  • Increased operating costs
  • Slower service response times
  • Longer claim lifecycles
  • Higher adjuster “burn out” rates, increasing employee turnover and training costs

When I was a claims adjuster, every day was the same.  About 10:30, after the morning mail was opened (which I had to go to the post office and retrieve, because I was a “field adjuster”), a stack about 3 inches tall, wrapped in a rubber band, would be dropped on my desk like a ton of bricks.  At least the claim file numbers were written on the mail, which the administrative staff would spend about 90 minutes researching.  Then I would have to take that stack of mail and start retrieving all the associated paper files from the cabinets: PIP applications, damage appraisals, attorney correspondence, medical bills, etc.  How was I supposed to go out in the field when I had all those paper files back in the office?  You couldn’t take them with you, because they weren’t allowed to leave the office IN CASE THEY GOT LOST.

Granted, this was a long time ago, and I had to consider myself lucky that at least I had a mainframe system into which I could enter my reserves, payments, and notes, and confirm coverage.  But these days, not storing files electronically and making them accessible remotely is almost inexcusable.  All that wasted time and productivity.  I probably could’ve handled twice the caseload and closed files twice as fast if I could have been out in the field all the time.

Like so many of their policy brethren, many modern claim systems include automated workflow and straight-through processing features that insurance organizations with legacy systems cannot, or do not, utilize.  But these legacy systems don’t necessarily have to be replaced in order to implement these types of functions.  Many independent automated workflow systems can work right alongside existing legacy systems and push work forward.  I know carriers that implement a simple document management system with high-speed scanners that scan and distribute 10,000 (yes, ten thousand) pieces of mail every day.

Claim managers who are considering a change to their claim administration system may want to increase the priority of automated workflow functions in their search criteria.  By introducing automated workflow, many insurance organizations have improved productivity by as much as 100%, recognized savings in the hundreds of thousands of dollars, and supported a 20% increase in business with existing staffing levels.  The additional benefits to an insurance organization of a workflow utility are that it can:

  • Implement continuity in processing,
  • Decrease processing costs, and
  • Increase efficiencies to improve Service-level Agreements (SLAs) with customers, agents, and company departments. 

Insurance organizations can also benefit by increasing the collaboration of resources using a document repository. A single repository enables organizations to reduce the resource costs associated with searching for non-existent data or recreating data that cannot be found, such as loss control guidelines, rating specifications, or even just the office fire procedures.  Call center and other service-related expenses can also be reduced by providing customers with Web access to their policy documentation and/or claims forms.  In addition, field workers become more efficient when they can review and transfer documents remotely, reducing claim processing times and expenses, allowing claim payments to be issued to customers more promptly, and freeing adjusters to spend more face time with insureds, claimants, and agents.  Face time is always good for business.

One final note: Enterprise Content Management (ECM) and workflow can also be utilized as a knowledge broker between the many systems and departments within an insurance company, and can become an important source for Business Intelligence (BI). It can provide consistent, searchable metadata for proper document retrieval that can be used to support dashboards and other BI reporting tools for executive management, resulting in improved productivity even at those levels.

But that’s all right.  You keep paying rent on that office space for file cabinets and maintaining resources to pass paper around.  I’m sure you’re not losing market share or unnecessarily increasing your expense ratios.

Collaboration Style Revisited

When we looked at the results of our last poll on collaboration styles, several things jumped out at us.

1. Nearly a third of the respondents are either still relying on email collaboration or under-utilizing basic portal functionality (document checkout/checkin for version control).

2. Among users of collaboration portals, there was an even split between SharePoint and other tools.

This led us to wonder how broad corporate adoption of collaboration tools might be. And that leads us, of course, to another poll.

Comments are always welcome, and in case you missed the first post in this series, it’s still open and you can vote here.