We all have messes to clean up. Our inventory of applications includes tired, old products that were never properly architected, that we designed to handle myriad exceptions, that do things they were never intended to do, and that we have customized so heavily they are no longer recognizable and are a nightmare to manage. As the pace of technology change accelerates, these applications become a direct barrier to IT and enterprise agility. Whenever we consider a new process, product or market opportunity, we picture the application labyrinth it would have to navigate, and we get depressed and give up. Even if that old application is only a few years old, we have made it so complex that only the brave dare touch it.
Even worse, when we do define a technology path forward to modernize, simplify or replace that troublesome application, some of the users of the application mutiny. They love the exceptions it handles. They are enamored with its complexity. They cannot imagine ever doing things differently.
Application modernization, simplification and replacement are broad and deep topics. Here are some approaches I use that seem to help.
Isolate and introduce 'data brokers'
Often, our legacy applications contain lots of direct data connections. Application elements exchange data directly with each other. Over time, we have built more direct connections onto existing direct connections. When we attempt to revise one portion of the application, we first have to find and dissect all those connections. This work can be so daunting that we simply choose not to update the application.
There is a way out of this -- we can select and implement frameworks that insert a data broker between application elements. The data broker handles how application elements publish and subscribe to data. The broker knows not only which other application elements need the data but also whether the data is replicated in other elements. Knowing this, the framework lets us manage the data flow into and out of each element.
As we do this work, the application element becomes its own little (or big) island, with data flowing in and out as managed by the broker. Once we have isolated the element, we can safely update or replace it (and its data). The data broker approach also helps us identify and resolve data replication issues and define the one true source of the data (rather than maintaining elements with duplicated data that we have to synchronize).
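To make the idea concrete, here is a minimal sketch of an in-process data broker. All names (`DataBroker`, `subscribe`, `publish`, `consumers_of`) are hypothetical illustrations, not a specific product or framework; a real implementation would typically sit on message-oriented middleware rather than in a single process. The point is the shape: elements exchange data only through the broker, and the broker can report which elements depend on each data flow -- exactly the knowledge you need before safely updating or replacing an element.

```python
from collections import defaultdict

class DataBroker:
    """Minimal in-process data broker: application elements publish and
    subscribe to named topics instead of calling each other directly.
    (Illustrative sketch only -- names and API are hypothetical.)"""

    def __init__(self):
        # topic -> list of (element_name, callback)
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, element_name, callback):
        """An element declares interest in a data flow."""
        self._subscribers[topic].append((element_name, callback))

    def publish(self, topic, payload):
        """Fan the data out to every element that declared interest."""
        for _, callback in self._subscribers[topic]:
            callback(payload)

    def consumers_of(self, topic):
        """The broker knows which elements depend on each data flow --
        useful when deciding whether an element is safe to replace."""
        return [name for name, _ in self._subscribers[topic]]


# Two elements subscribe to the same flow; neither knows about the other.
broker = DataBroker()
received = []
broker.subscribe("order.created", "billing", received.append)
broker.subscribe("order.created", "inventory", received.append)
broker.publish("order.created", {"order_id": 42})
print(broker.consumers_of("order.created"))  # ['billing', 'inventory']
```

Because the dependency map lives in the broker rather than in scattered point-to-point connections, replacing the "billing" element means re-registering one subscription, not dissecting every direct link that grew up around it.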
Segregate based on purpose
I like to segregate all activities -- including the functionality of applications -- into two broad categories based on the purpose of those activities. The purpose of a few (very few) activities is to create competitive advantage. Only these activities deserve our innovation and creativity.
Almost everything else we do is important but does not deserve innovation. I call these the parity activities. They are vitally important -- we cannot survive without them -- but no one does business with us because we do these things in a unique, creative way. For example, for the vast majority of us, our accounting system is mission-critical but it is highly unlikely that we win customers and grow market share because of our unique, innovative approach to accounting. (Besides, don't we sort of frown on creative accounting?)
Yet how many of us have customized or modified our accounting systems, or embedded into them idiosyncratic ways to do procurement, payables, receivables or inventory costing? If our users are hesitant to give up the lovable (but painful) features of a legacy application, I talk to them about the purpose of the application and convince them that we optimize business value by doing the parity activities in a standardized, simplified, best-practices way -- and we do that through a vanilla implementation and adoption of standard functionality.
I also apply this idea of purpose (be better than anyone, or be as good as everyone) to application consolidation. We once replaced seven customized versions of CRM with a single configured (not coded, not customized) CRM system. Not one customer on the face of the earth had chosen us over our competitors because we had seven customized CRM systems. We took the idea of process parity all the way down to specific functionality: we defined an organizational standard for case management workflow, and every case looked and was handled the same way. Even better, with this level of standardization, it was simple to move applications and other services to the cloud, and upgrades were a breeze.
Given the pace of technology and market change, we need to move fast. To do that, let's get serious about making obvious, conscious plans for modernization.
About the author:
Niel Nickolaisen is CTO at O.C. Tanner Co., a Salt Lake City-based human resources consulting company that designs and implements employee recognition programs. A frequent writer and speaker on transforming IT and IT leadership, Niel holds an M.S. degree in engineering from MIT, as well as an MBA degree and a B.S. degree in physics from Utah State University. You can contact Niel at firstname.lastname@example.org.