Beware What Lies Beyond the Valley of Despair

If you’ve implemented an ERP system in the last few decades, you have surely seen one of the many representations of the traditional ERP change management curve, along with copious advice for avoiding, or at least reducing the depth of, the Valley of Despair. The graph is somewhat misleading in that it typically ends with a plateau or pinnacle of success, implying that your troubles are over as soon as you go live.

If only that were true.

A more comprehensive graph would look like this:

[Graph: the change curve extended beyond go-live, descending into the Desert of Disillusionment]

Notice the descent into what I will refer to as the Desert of Disillusionment, that awful place where every “productivity improvement” line item in your rosy ROI analysis (the one that you used to justify the project) is revealed as a mirage.

Why does this happen, and does it have to be this way?  More importantly, what are the warning signs, and how should you deal with them?  We will dig into specific topics in future posts, but for now, we invite you to take our short survey on diagnosing enterprise system impact on business productivity.


Hunting Down an Apache PermGen Issue

We have been experiencing a PermGen out-of-memory problem in an application we have been working on: redeploying our .war to a Tomcat 7.0.33 application server would cause Tomcat to become unresponsive and crash after about 4 or 5 redeploys.

Our application uses Spring (Core, MVC, Security), Hibernate, and JPA.  The database pool is maintained by Tomcat, and we obtain connections from it via JNDI.
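
For context, here is a minimal sketch of the kind of JNDI lookup involved; the resource name “jdbc/myAppDS” and the class name are illustrative, not our actual ones:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Illustrative helper: looks up a Tomcat-managed pool via JNDI and hands out connections.
public class PooledConnectionFactory {

    public Connection getConnection() throws NamingException, SQLException {
        Context initCtx = new InitialContext();
        // Tomcat exposes container-managed resources under java:comp/env
        Context envCtx = (Context) initCtx.lookup("java:comp/env");
        DataSource ds = (DataSource) envCtx.lookup("jdbc/myAppDS");
        return ds.getConnection();
    }
}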

When redeploying our .war by copying it to $CATALINA_HOME/webapps/ we would see a few lines in our logs that looked something like this:

SEVERE: The web application [/myApp] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@d73b31]) and a value of type [java.lang.Class] (value [class oracle.sql.TypeDescriptorFactory]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

So in our case it appeared to be an issue with the Oracle drivers.  Tomcat 7 does a better job than Tomcat 6 of detecting and reporting these leaks, so the logs of an older Tomcat install may or may not contain useful information.

Background

The PermGen space is a typically small region of memory that the JVM uses to store class information and metadata.  It differs from the heap in that your application code has no real control over its contents.  When an application starts, every class that is loaded stores information in the PermGen space; when the application is undeployed, that space should be reclaimed.

Cause

In general, running out of PermGen space when redeploying a web app happens because some references to the application’s classes cannot be released by the JVM when the application is unregistered.  That leaves all of the classloader’s information sitting in PermGen, since live references still exist.  Common culprits are threads started by the application that are never stopped, database connections that are never closed, and so on; sometimes the offender is a third-party library.

A good article on this can be found here:

http://plumbr.eu/blog/what-is-a-permgen-leak
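
One commonly suggested mitigation for leaks like these (not the fix we ended up needing, as described below) is to clean up leak-prone resources explicitly when the web application shuts down. As a rough sketch, a ServletContextListener along these lines deregisters any JDBC drivers that were loaded by the webapp’s own classloader; the class name is illustrative:

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Illustrative cleanup listener; register it in web.xml or annotate with @WebListener (Servlet 3.0).
public class JdbcCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do on startup.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ClassLoader webappClassLoader = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // Only deregister drivers loaded by this webapp, not by the container.
            if (driver.getClass().getClassLoader() == webappClassLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    // Best-effort cleanup; log and move on.
                }
            }
        }
    }
}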

However, in our case the leak was caused by the way our application was deployed.  Maven was configured with ojdbc6.jar as a dependency, like this:

<dependency>
    <groupId>oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
</dependency>

This caused our .war to include ojdbc6.jar, so WEB-INF/lib in the deployed application contained its own copy of the driver.  This is where the problem starts.  When our code requests a database connection, the DataSource object is created using our *local* ojdbc6.jar (really our local ClassLoader, which loads our local .jar) rather than the copy installed in Tomcat’s shared library directory (loaded by Tomcat’s own ClassLoader).  That creates an object that Tomcat itself holds on to, but that references code in our application and thus our ClassLoader.  Now when our application is destroyed we get a leak, since our ClassLoader can never be released.

Solution

In this scenario the solution is to tell Maven that the Oracle library will be provided by the container: it will be available at compile time for our code but will NOT be included in our .war.

<dependency>
    <groupId>oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
    <scope>provided</scope>
</dependency>

Now when our application requests a DataSource, the server’s ojdbc6.jar is used, removing the dependency on our application’s ClassLoader.  This allows Tomcat to clean up properly after “myApp.war” when it is removed.
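
For reference, the Tomcat-managed pool itself lives outside the .war, roughly like the sketch below; the resource name, URL, credentials, and pool sizes are illustrative. With ojdbc6.jar sitting in $CATALINA_HOME/lib, the driver class is loaded by Tomcat’s common classloader rather than by our application’s:

<!-- Illustrative Resource definition in $CATALINA_BASE/conf/context.xml
     (or the application's META-INF/context.xml); all values are examples. -->
<Context>
  <Resource name="jdbc/myAppDS"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@dbhost:1521:ORCL"
            username="app_user"
            password="changeit"
            maxActive="20"
            maxIdle="5"
            maxWait="10000"/>
</Context>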

Cloud 2012 Redux


You shouldn’t have to commit everything at once

This year will be remembered as the year the cloud moved beyond the realm of “Back to Time-Sharing” or a curio for greenfields and start-ups.  While Software as a Service (SaaS) is interesting, it cannot be the centerpiece of your IT infrastructure or strategy, due to its limited scope and cost/scalability metrics.  By the same token, not every IT system is a greenfield opportunity, and most require a steady evolutionary response that incorporates the existing infrastructure’s DNA and standards.

Just illustrating a “private cloud” with a “public cloud” next to it does not cut it.  What does that really mean?  Ever wonder what is really in those clouds?  Better yet, explain it in safe, understandable steps: cost/benefit projections over 3, 5, and 7 years; organizational impact for IT and business processes; procedural impact for disaster recovery; and so on.  Sorry, “Just buy my product because it is what I have to sell!” does not work; I need a tested, time-phased architectural plan, with contingencies, before I commit my company and my job.

For the first time in the continuing cloud saga, we have been able to put together and test a “non-aligned” approach, which allows an organization to keep IT infrastructure best practices and not “sign in blood” to any individual vendor’s ecosystem.  With the proper design, virtual machines (VMs) can be run on multiple vendors’ platforms (Microsoft®, Amazon.com®, etc.) and on-premise, optimized for cost, performance, and security. This effectively puts control of cost and performance in the hands of the CIO and the consuming company.

In addition, credible capabilities exist in the cloud to handle disaster recovery and business continuity, regardless of whether the supporting VMs are provisioned on-premise or in the cloud. Certain discrete capabilities, like email and Microsoft Office™ automation, can be “outsourced” to the cloud while integration with consuming application systems is maintained, in the same manner many organizations have historically outsourced functions like payroll.

The greatest benefit of cloud 2012 is the ability to phase it in over time, as existing servers are fully amortized and software licenses roll off and require renewal.  Now we can put our plans together and start to take advantage of the coming margin-cutting wars of the Cloud Titans in 2013 and beyond.

Time to Remodel the Kitchen?

Although determining full and realistic corporate valuation is a task I’ll leave to people made of sterner stuff than I (since Facebook went public, not many could begin to speculate on the bigger picture of even small-enterprise valuation), I’ve recently been working with a few clients who have reminded me of why one sometimes needs to remodel.

Nowadays, information technology is often seen as a means to an end. It’s a necessary evil. It’s overhead to your real business. You joined the technological revolution, and your competitors who didn’t, well… sank. Or… you entered the market with the proper technology in place and, seatbelt fastened, have taken your place in the market. Good for you. You’ve got this… right?

I’m a software system architect. I envision and build out information technology. I often like to model ideas around analogies to communicate them, because it takes the tech jargon out of it. Now that I’ve painted the picture, let’s think about what’s cooking behind the office doors.

It’s been said that the kitchen is the heart of the home. When it comes to the enterprise (big and small) your company’s production might get done in the shop, but sooner or later, everyone gets fed business processes, which are often cooked in the kitchen of technology. In fact, technology is often so integral to what many companies do nowadays that it’s usually hard to tell where, in your technology stack, business and production processes begin. Indeed, processes all cycle back around, and they almost certainly end with information technology again.

Truly, we’ve come a long way since the ’70s, when implementing any form of “revolutionary” information technology was the basis of a competitive advantage. Nowadays, if you don’t have information technology in the process somewhere, you’re probably only toying with a hobby. It’s not news. Technology graduated from a revolutionary competitive advantage to the realm of commoditized overhead well over a decade ago.

Ok… ok… You have the obligatory kitchen in your home. So what?

If you think of the kitchen in your home as commoditized overhead, you are probably missing out on the even bigger value an update could bring you at appraisal time. Like a home assessment, due diligence as part of corporate valuation will turn up the rusty mouse traps behind the avocado refrigerator and under the porcelain sink:

  • Still rocking 2000 Server with ActiveX?
  • ColdFusion skills are becoming a specialty; the local talent pool is probably thin, and finding resources to maintain those components could get expensive.
  • Did you say you can spell iSeries? Great, can you administer it?
  • No one’s even touched the SharePoint Team Services server since it was installed by folks from overseas.
  • The community that supported your Open Source components… dried up?
  • Cloud SLAs, Serviceability?
  • Compliance?
  • Disaster Management?
  • Scalability?
  • Security?
  • Documentation…?
    • Don’t even go there.

As you can see… “Everything but the kitchen sink” no longer applies. The kitchen sink is transparently accounted for as well. A well-designed information technology infrastructure needs to go beyond hardware and software. It considers redundancy and disaster management, security, operating conditions (such as room to operate and grow), and, of course, whether any undue risks or burdens are placed on particular technologies, vendors, or even employees. Full valuation goes further, looking outside the walls to cloud providers and social media outlets. Finally, no inspection would be complete without a look at compliance, of course.

If your information technology does not serve your investors’ needs, your CEO’s needs, your VP of Marketing and Sales’ needs, and production’s… but most importantly your customers’, it is detracting from the valuation of your company.

If the work has been done, due diligence will show off the working utility, maintainability, security, scalability, and superior added value of the well-designed enterprise IT infrastructure refresh.

To elaborate on that, a good information technology infrastructure provides a superior customer experience no matter how a customer chooses to interact with your company. Whether it’s at the concierge’s counter, in the drive-through, at a kiosk, on the phone, at your reseller’s office, in a browser or mobile app, your customers should be satisfied with their experience.

Don’t stop with simply tossing dated appliances and replacing them. Really think about how the technologies work together, and how people work with them. This is key… if you take replacement appliances off the shelf and simply plug them in, you are (at best) merely keeping up with your competitors. If you want the full value add, you need to specialize. You need to bend the components to your processes. It’s not just what you’ve got.  It’s how you use it.  It’s the critical difference between overhead and advantage.

Maybe the Augmented Reality Kitchen won’t provide a good return on investment (yet), but… there’s probably a lot that will.

ASP.NET Master Pages

In ASP.NET MVC3, “master pages” are handled via the _ViewStart.cshtml file.  As the name suggests, the code in this file is executed before each view is rendered (see Scott Gu’s blog post for more details).

Now that you understand the basic use of the _ViewStart.cshtml file, let’s go over the scope applied to these files.  A _ViewStart.cshtml file affects all views in its own directory and below.  You can also have another _ViewStart.cshtml file in a sub-folder, which will be executed after the top-level _ViewStart.cshtml.  Using this feature you can, in effect, override the top-level _ViewStart.cshtml with one closer to the view.
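
As a hedged illustration of that scoping (the layout file names are hypothetical; the Home and DifferentMasterPage folders match the example that follows), consider this view folder structure:

/Views/_ViewStart.cshtml
/Views/Home/Index.cshtml
/Views/DifferentMasterPage/_ViewStart.cshtml
/Views/DifferentMasterPage/Index.cshtml

The top-level /Views/_ViewStart.cshtml sets the default layout for every view:

@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}

while /Views/DifferentMasterPage/_ViewStart.cshtml runs after it and overrides the layout for views in that folder:

@{
    Layout = "~/Views/Shared/_DifferentLayout.cshtml";
}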

Now when the Index.cshtml View under the Home folder is rendered, it will first execute the /Views/_ViewStart.cshtml file and then it will render the Index.cshtml View.

However, when the Index.cshtml View under the DifferentMasterPage folder is rendered, it will first execute the /Views/_ViewStart.cshtml file, then it will execute the /Views/DifferentMasterPage/_ViewStart.cshtml file, and then it will render the Index.cshtml View.


Share More: a framework for enhancing collaboration

In a great study published last year, McKinsey and Company showed how companies that use social and collaborative technologies extensively (“networked companies” in their terminology) outperformed traditional companies. They called it “Web 2.0 finds its payday.”

So if you work for a networked company, congratulations. But if your company is part of the vast majority struggling through some form of collaboration without seeing enough benefit, how do you get to the payoff stage?

In the following series of posts, I’ll try to offer a methodology and examples for how to do just that: elevate the level of collaboration and create a fully networked organization, one step at a time.

We call this process Share More.

The premise is simple: for each business area or function, find a real-world business challenge where collaboration can make a difference. Implement it. Move on to the next one.

Creating the overall framework is like building an association wheel with the term “Share” in the middle.

Sharing can be with just a few team members or with the whole company. It can be internal or external. If you stop and think about all the interactions you have in a week, which ones cost you the most pain and time? Can those interactions be made simpler using technology? Can you Share More?

The first Share More opportunity I’d like to address is process and workflow solutions.

Share Process

Process and form automation is all about tracking and control. The real dramatic change is in giving managers and administrators visibility into every step and a log of every change and update. It can also speed the process up and save the effort of re-typing information into other systems, initiating emails, or filing paper into physical files.

We’ve worked with a large hospitality organization to automate all HR- and payroll-related forms using InfoPath and SharePoint, and learned valuable lessons that apply to many process automation efforts:

  • Strongly enforce data integrity: Most forms are created to collect data that will be fed eventually into another system. Therefore data input must come from the same source system it will end up in. Values and choices have to be restricted to valid combinations and open text fields limited to a minimum. The cleaner the data is, the less trouble it will cause down the road.
  • Know how the organizational and reporting hierarchy is maintained: While you may know which system holds the organizational reporting structure, knowing that it is 100% accurate and kept up to date is a lot harder. Since some forms require sending confidential information like salary for approval, the wrong reporting relationship can compromise important information. Consider masking personal or confidential information if it is not essential for the approval requested (the data, encrypted, can still be part of the form).
  • Don’t over-customize: Like our beloved tax code, approval workflows can get extremely complicated and convoluted, as organizational politics evolved over the years have created special cases and more exceptions than rules. Codifying these special cases is expensive, and they are prone to change. Consider this an opportunity to streamline and simplify the rules.
  • Augment with stronger 3rd-party tools: While core systems like SharePoint contain a built-in (and free) workflow mechanism, it is limited in control, flexibility, scalability, and manageability as it comes out of the box. Some 3rd-party tools, like Nintex and K2 BlackPoint, provide added flexibility and scalability. For a price.
  • Plan for version deployment: Forms and processes will change. How will updates be deployed without interfering with running flows and processes?

In future posts I’ll explore other opportunities for Sharing More, including Sharing Insight and Sharing Responsibly, and we’ll look into specific opportunities for collaboration and sharing in insurance and healthcare.


Hello, Newman.

The U.S. Postal Service is in dire financial straits and, like many businesses, in danger of shutting down completely if it doesn’t cut costs.

By now, most carriers have been conducting business with their policyholders and agents by offering electronic delivery of everything from applications to renewal notices. But there are those that still require some transactions to be conducted by “snail mail.” If the U.S. Postal Service were to shut down as early as this year, what would they do?  Would the burden of those transactions then rest with the agents, and if so, how would they react?  Would those carriers remain the carriers of choice when they’re no longer so easy to do business with because of these necessary manual steps?

Obviously, the first step for most carriers is delivering policy documentation using a modern document generation tool, very quickly followed by a method to accept premium payments online and set up automated account withdrawals. Both are fairly straightforward projects that can be done relatively quickly and easily.  But the more complicated transactions of policy amendments and audit reporting are where many carriers lag behind, and that is where a postal shutdown would really hurt. Policies like Workers’ Comp or General Liability, which require businesses to report remuneration and sales figures on a regular basis, rely on that communication; if it cannot happen electronically, the process becomes far more complicated and drawn out as paper documents are shuttled around.

You don’t want your agents to become your Underwriting Assistants and bill collectors.  You want them to stay out there pushing your products and generating business.  You want to be the carrier that’s easy to do business with and the independent agent’s carrier of choice.  If not, you may find that you’ll need to start a postal department to handle all those deliveries to policyholders.


Is Legacy Modernization Just Procrastination?

There is no doubt that replacement of core systems for insurers has been very popular over the past six years or so.  With advancements in technology enabling vendors to provide solutions that are configurable and more easily maintained, with “plug and play” technologies that can be upgraded by less technical resources, insurers are taking advantage, moving into new lines of business and new territories and expanding their footprint.  This allows many small and mid-size insurers to better compete with the leviathans who once staved off competition thanks to their enormous IT staffs.

But many of these insurers have been in business for scores of years, and have successfully relied on their older technology.  Does the advancement in technology along with ubiquitous connectivity mean that the mainframes and older technology systems just have to go?  Does just refacing the green screens with new web-based user interfaces mean that the carriers that do so are just procrastinating and putting off the inevitable?

A recent blog post in Tech Digest posed that question, to which I would reply, “Why?  If it ain’t broke, don’t fix it.”  In this horrible economy, many people who need a bigger house aren’t dumping the one they have and buying another; they simply add on.  The core systems within a carrier are very similar.  If the system you have now works well for its purpose and you want to expand into new lines, you don’t need to rip out that old system and pay for an expensive funeral; just add on and integrate.  This will start your company down the path to more flexibility, supported by a system that is specifically designed to bring all your information into one place: Policy360, based on CRM.

Utilizing a system designed to bring data together from multiple sources allows you to keep your existing technology, leverage the capabilities of new systems, and present and manage that information in a much more accessible and user-friendly manner.

Is plastic surgery on your legacy systems really just putting off the inevitable?  Or is presenting a fresh look that sees into the future a way to keep costs down while expanding services and capabilities?

Paying Too Much for Custom Application Implementation

Face it. Even if you have a team of entry-level coders implementing custom application software, you’re probably still paying too much.

Here’s what I mean:

You already pay upfront for foolproof design and detailed requirements.  If you leverage more technology to implement your application, rather than spending more on coders, your ROI can go up significantly.

In order for entry-level coders to implement software, they need extra-detailed designs, typically detailed enough that a coder can simply repeat patterns and fill in blanks from reasonably structured requirements. Coders make mistakes, suffer misunderstandings and other costly failures, and take months to complete the work (assuming nothing in the requirements changes during that time).

But, again… if you have requirements and designs that are already sufficiently structured and detailed… how much more effort is it to get a computer to repeat the patterns and fill in the blanks instead?  Leveraging technology through code generation can help a lot.

Code generation becomes a much less expensive option in cases like that because:

  • There’s dramatically less human error and misunderstanding.
  • Generators can do the work of a team of offshored implementers in moments… and repeat the performance over and over again at the whim of business analysts.
  • Quality Assurance gets much easier…  it’s just a matter of testing each pattern, rather than each detail.  (and while you’re at it, you can generate unit tests as well.)

Code generation is not perfect: it requires very experienced developers to architect and implement an intelligent code generation solution. Naturally, such solutions tend to require experienced people to maintain them (because in sufficiently dynamic systems, there will always be implementation pattern changes).  There’s also the one-off stuff that just doesn’t make sense to generate… (but that all has to be done anyway).

Actual savings will vary (and in some cases may not be realized until a later iteration of the application), but typically depend on how large your metadata (data dictionary) is, how well it is structured, and how well your designs lend themselves to code generation.  If you plan for code generation early on, you’ll probably get more out of the experience.  Retrofitting generation can definitely be done (been there, done that, too), but it can be painful.

Projects I’ve worked on that used code generation focused the technique mostly on database and data-access-layer components and/or the UI.  Within those components, we were able to achieve 75-80% generated code in the target assemblies.  In one case, this meant that from a data dictionary we were able to generate all of our database schema and most of our stored procedures.  In that case, for every item in our data dictionary, we estimated that we were generating about 250 lines of compilable, tested code.  Across our data dictionary of about 170 items, that translated into over 400,000 lines of code.

By contrast, projects where code generation was not used generally took longer to build, especially in cases where the data dictionaries changed during the development process.  There’s no solid apples-to-apples comparison, but consider hand-writing about 300,000 lines of UI code while the requirements are changing.  Trying to nail down every detail (and change) by hand was a painstaking process, and the changes forced us to adjust the QA cycle accordingly, as well.

Code generation is not a new concept.  There are tons of tools out there, as demonstrated by this comparison of a number of them on Wikipedia.  Interestingly, some of the best tools for code generation can be as simple as XSL transforms (which opens the tool set up even more).  Code generation may also already be built into your favorite dev tools.  For example, Microsoft’s Visual Studio has had a code generation utility known as T4 built in for the past few versions now.  That’s just scratching the surface.
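
As a hedged, minimal illustration of the idea (the entity names are made up; a real template would read them from your data dictionary rather than hard-coding them), a T4 template that stamps out one class per dictionary entry looks roughly like this:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Hypothetical data dictionary entries; in practice these would be read
    // from a metadata file or database rather than hard-coded.
    var entities = new[] { "Customer", "Order", "Invoice" };
#>
<# foreach (var entity in entities) { #>
// Generated data-access stub for <#= entity #>.
public partial class <#= entity #>Repository
{
    // Generated CRUD members for <#= entity #> would go here.
}
<# } #>

Re-running the template regenerates every stub whenever the dictionary changes, which is exactly the “repeat the pattern and fill in the blanks” step described above.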

So it’s true…  Code generation is not for every project, but any project that has a large data dictionary (one that might need to change midstream) is an immediate candidate in my mind.  It’s especially great for user interfaces, database schemas and access layers, and even a lot of transform code, among others.

It’s definitely a thought worth considering.

Product-based Solutions Versus Custom Solutions: Tomb Raider or Genesis?

The Product-based Solution is where most of Corporate America is going for IT today.  Providing a successful implementation (one you actually renew license maintenance on rather than let quietly die an ignominious death) requires the tenacity, deep specialized product knowledge (read: arcane dark arts), and courage of a cinema Tomb Raider.  The team has to know the target product as well as Indiana Jones knows Egyptology, with equivalent courage, problem-solving skills, and morals (one can’t be squeamish about hacking a solution into submission) to achieve a usable solution rather than an embarrassing product snap-in.  In addition to their product skills, the team must be able to quickly navigate the jungle of existing applications, with their mysterious artifacts, to find the proper integration points and data (Gold! Gold, I say!).

What if the team can’t navigate your jungle of existing applications, or doesn’t know all of the idiosyncrasies of the product to be installed?  Well, you get an Embarrassing Product Snap-In (Do Not Pass Go, Do Not Collect $200, Do Flush Career).  Every seasoned IT professional has seen one of these puppies; they are the applications you can’t get anyone to use, usually because they don’t connect to anything users currently work with, or have real usability issues (Harry Potter vs. MIT interface).  Yes, the product is in.  Yes, it tests to the test plan criteria.  Yes, it looks like post-apocalypse Siberia as far as users are concerned (What if we install CRM and no one comes?  Ouch! No renewal for Microsoft/Oracle, bummer).

Custom Solutions are more like Genesis: Let there be Light! (ERP, CRM, Order Entry, you get the picture).  It is a Greenfield Opportunity!  The team you need is just as talented as a Product-based Solution team, but very different.  They need to be able to create a blueprint of your desires, like a rock-star architect for a signature building.  The team needs to be expert in software engineering and technology best practices, and able to translate your users’ meandering descriptions of what they do (or don’t do) into rational features resembling business process best practice.  That was easy!

In the custom case, the risk is creating Frankenstein’s monster rather than new life (It’s alive! It’s alive!).  Again, every seasoned IT professional has seen one of these embarrassing creations (Master, the peasants/users are at the gate with pitchforks and torches!).  The end result of one of these bad trips (Fear and Loathing in ERP) is the same as, but usually more expensive than, the Product-based alternative.

Debby Downer, what should I do?  Reality is as simple as it is hard: pick the right solution for the organization, Product-based or Custom.  Then get the right team: Tomb Raider or the Great Architect of Giza.