This is the pinnacle component of the software quality stack because it is the most challenging of all the aspects of software quality, it is the most potentially damaging, and it is the least understood.
This is not the case for security, where you could spend that same 30% testing an application, and the single security defect that you missed could still cause a major breach.
You know what happens next: you see it every day in the news! As with reliability issues, security problems are almost always found under the covers of an application: in its conversations with the database server, the Web server, and the other applications it depends on to operate.
So, the solution ought to be simple, right? Let's incorporate security into our daily intake of good development and testing practices, and let's purchase some sophisticated tools to assist us in our efforts.
However, unlike other software quality test tools, automation for security assessment is dangerous and fraught with accuracy challenges. Both false positives and false negatives plague existing security tools.
Automated tools are great at validating that functionality works: they know exactly what to look for, and GUI-driven tests are easy to program. An automated tool is mostly clueless, however, when it comes to environment testing, where it finds itself in foreign territory and human knowledge is the only efficient interpreter of clues.
The most effective security assessment tools are the highly specialized ones that do a single thing and do it well. Even then, you still need substantial human interaction to interpret results for both the application under test and the tool.
Reliability and security defects also reveal themselves in dependency testing. Applications operate in a highly co-dependent environment in which they may load dozens of libraries and interface with several third-party or OS components.
It is in these interdependencies that reliability and security defects hide. This is where the crafty assessor needs to look, and it is why such defects are so elusive. Even if reliability and security requirements are outlined meticulously, they can still be implemented in ways that introduce insecurity.
This highlights the importance of thoroughly testing negative requirements and asking the what-if questions constantly during your software quality assessments.
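One concrete way to test a negative requirement is to assert what an application must reject, not just what it must accept. The sketch below is a hypothetical illustration: the username validator and its rules are assumptions for the example, not drawn from any particular product, but the shape of the what-if checks is the point.

```python
import re

def is_valid_username(name: str) -> bool:
    """Hypothetical validator: accept only 3-20 letters, digits, or underscores."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name))

# Positive test: the "does it work" check that automated tools excel at.
assert is_valid_username("alice_01")

# Negative tests: the what-if questions. What if the input carries a
# SQL injection payload, a path traversal, or nothing at all?
hostile_inputs = [
    "alice'; DROP TABLE users;--",  # SQL injection attempt
    "../../etc/passwd",             # path traversal attempt
    "",                             # empty input
    "a" * 1000,                     # oversized input
]
for payload in hostile_inputs:
    assert not is_valid_username(payload), f"accepted hostile input: {payload!r}"
```

The positive assertion is what a GUI-driven tool can script; the hostile list is where a human assessor earns his keep, because the interesting payloads come from asking what an attacker would try, not from the requirements document.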
Thorough Software QA
We see the extremely damaging impact of reliability and security defects on businesses all the time. The mind-boggling part is that so many tool vendors still call themselves quality assurance companies when they don't deliver any solutions for reliability and security. How can that be? We're building, using and buying applications with which we entrust our personal information, and many of which have modest security gates at best.
We as consumers assume that the software we use has gone through the software quality stack with all aspects covered. How is it, then, that we know so little about the true quality that went into building and testing it?
Imagine the same principle applied to a different industry. Would we ever allow General Motors to rant and rave about the quality of their cars without reviewing their safety rating? Would we be satisfied knowing that the brakes were tested only prior to assembly, and not after the car left the assembly line?
Most other industries require vendors to demonstrate that all aspects of quality have been addressed or documented in the products they release to the public. That day will come for software, but I'm impatient, so I'll get the ball rolling with a few specific, easily obtainable recommendations:
Ed Adams is CEO of Security Innovation, an independent provider of application security services that include security testing and training.