As software seeps into every nook and cranny of the enterprise, the metrics to which IT software developers are held tend to focus on function and time to completion: Does the application do what the customer needs it to do? Can the developer get it done faster?
On the other hand, the structural integrity of the software -- its maintainability, reliability, performance efficiency and security -- often goes unanalyzed until a problem arises. This happens because software quality measures that reach all the way down to the code have been, well, hard to measure.
At least, that's the view of the Consortium for IT Software Quality (CISQ). This global group of IT executives, software vendors, outsourcing providers and systems integrators was formed two years ago under the auspices of two organizations dedicated to improving software engineering: the Object Management Group (OMG) and the Software Engineering Institute, or SEI.
CISQ's goal is to introduce standard metrics for evaluating and benchmarking software quality and size, with an emphasis on automation. One major hurdle in software measurement has been the difficulty of analyzing the structural properties of the code manually, said Bill Curtis, director of CISQ.
"That is very expensive and as a result, people simply don't take these measures as often as they should in order to use them to control their software quality," said Curtis, who is also co-author of the Capability Maturity Model, or CMM, approach to software development and chief scientist at CAST Inc., a software analysis and measurement vendor headquartered in New York. Current measures for assessing software quality manually are not defined to a level where they can be automated, and they end up being subjective -- therefore, not repeatable. "It is impossible to do benchmarking when we have measures that have a lot of subjectivity in them," he said.
Reducing the cost and risk of flawed software code
If it costs a lot to assess software quality manually, glossing over structural issues in software development also exacts a cost. In some shops, rework is 30% to 50% of the total cost of application development, according to CISQ. Maintaining flawed software code accounts for another big chunk of the total cost. Hackers find ways into the system through security holes. The implications are not lost on CIOs, Curtis said. IT leaders cite software failure as their largest risk, "far more than terrorism or natural disaster or other kinds of risks," he contends. These failures include software outages, performance degradation, security breaches and data corruption.
"We need to raise the awareness of what software quality is, how it can be achieved, what these attributes mean and the cost to the business of failures in IT software quality," Curtis said.
With the business demand for IT cost transparency growing more intense, CISQ believes that IT organizations will welcome the new, automatable software quality metrics and use them in these ways:
- IT executives will use the software quality metrics to assess their application portfolio and in particular, gain insight into the applications the business depends on. Benchmarking against an industry standard will help CIOs decide which applications to move forward and which to retire, as well as estimate the cost of those actions.
- Project managers will use the quality metrics and the identification of anti-patterns, or best practice violations, to manage their projects and measure the growth of quality over the life of an application.
- Developers will use the metrics to drill down into and remediate problems in code.
- Vendor managers will use the metrics in service-level agreements with outsourcers and system integrators to gain visibility into the quality of the code they are getting back.
Is there resistance to software quality metrics and tools?
The flip side of CISQ's mission is to create an infrastructure of authorized assessors and drive a market for assessment products that diagnose the quality of IT software. It's not clear whether the assessors will be accepted or whether the metrics and tools will be adopted.
CAST, which provides tools to measure the structural quality of IT applications, was selected as one of Gartner Inc.'s 2011 Cool Vendors in application services. Helen Huntley, who covers outsourcing at the Stamford, Conn., consultancy, sees CAST's technology as an effective tool for both service providers and IT executives in her analysis of the vendor.
Still, IT managers who want to use the tool in their IT organizations face a big challenge. "Many organizations view change -- particularly change that could create new metrics for success -- with suspicion," Huntley noted, adding that "fortunately, time and familiarity with the technology" reduce that risk: "Once IT organizations and C-level executives work within this paradigm for a short period, they soon see the value in transparency and objectivity."
Are the CISQ metrics necessary? CIOs always need software quality metrics to justify application decisions, said Margo Visitacion, vice president and principal analyst in the application development and delivery group at Forrester Research Inc. That's true in tight times, when they are under pressure to cut costs, and in flush times, when they're being pushed to get things to market. Historically, however, software quality hasn't been at the forefront at companies in either case. But that is starting to change, she said, especially at companies with complex, heterogeneous software and systems.
"What the industry is beginning to realize is that quality counts and the lack of quality becomes very expensive. Making sure an application is structurally sound can actually be a cost-save in the long run," Visitacion said.
CIOs are looking for pragmatic metrics, "essentially defect prevention and removal," at the structural level, Visitacion said. She suspects, nevertheless, that companies won't jump on the metrics bandwagon "for goodness' sake" but instead will require a strong bottom-line push.
How software code is judged
Judging who writes the best code is an "artistic discussion, sort of like asking who's better, Picasso or Rembrandt," Curtis said. "But we can describe things that we know are bad." The frantic pace of software development today, in his view, has made matters worse.
"Programmers are under intense pressure to get it to run now," Curtis said. They pile up "technical debt" as they barrel along, knowing that sooner or later a reckoning will come. At low-maturity shops, the word slapdash pretty much describes the software, he said. However, even at more disciplined organizations, big compromises are made, more knowingly, perhaps, and with the goal of making improvements over time. "Our problem is not bad developers. The process is out of control."
Reducing the cost and risk of substandard software requires detecting structural flaws early on. That's doable now with static analysis, Curtis said, referring to the method of examining a program's code for defects without executing it. He ventures that automatable metrics will usher in a "fourth wave" in software development focused not on higher-level languages, design or process -- the hallmarks of earlier phases -- but on engineering software products that are more structurally sound and cost less to use.
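A minimal sketch of what static analysis means in practice: the checker below walks a program's syntax tree, never running the code, and flags a classic structural reliability flaw -- a bare `except:` clause that silently swallows every error. The `load_config` snippet and the checker itself are illustrative inventions, not part of any CISQ standard or commercial tool.

```python
import ast

# Example input: code with a structural flaw. The bare `except:`
# catches everything, including SystemExit and KeyboardInterrupt,
# hiding failures instead of handling them.
SOURCE = """
def load_config(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` handlers,
    found by inspecting the syntax tree only -- the program
    is never executed."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

print(find_bare_excepts(SOURCE))  # prints [5]: the flaw sits on line 5
```

Because the flaw is found in the tree rather than at runtime, it surfaces during development -- long before a failed `open()` call would quietly return `None` in production.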
Meantime, CISQ technical experts have submitted their first set of definitions to the OMG for approval as a standard, which Curtis hopes will be approved by year's end.
Let us know what you think about the story; email Linda Tucci, Senior News Writer.