The difficulty of identifying precisely what software resides on desktops, coupled with the fear of costly audits by software publishers, creates a no-win situation for the IT department. Purchase too few licenses and you expose yourself to legal risk; purchase too many and you squander your software budget.
Eighteen months ago an international standard for software tagging emerged to address this issue and finally bring consistency to application identification. ISO/IEC 19770-2 defines a simple process by which publishers create and install an XML-based "tag" for each software product they release. The data contained within each tag adheres to a documented, standard format that leaves no doubt as to the exact name of the application, the publisher, the version, and the release date. (The next phase of the standard, 19770-3, currently under development, will associate software entitlements, or product use rights, with each application, making compliance efforts even more straightforward.)
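In practice, such a tag is just a small XML file installed alongside the product. The fragment below is a purely illustrative sketch of the idea: the element names, values, and structure are simplified here, and the normative schema in the standard defines the exact names, namespace, and required fields.

```xml
<!-- Illustrative sketch of an ISO/IEC 19770-2 software ID tag.
     Element names and values are hypothetical and simplified;
     consult the standard's schema for the normative format. -->
<software_identification_tag>
  <product_title>Example Product</product_title>
  <product_version>9.0</product_version>
  <software_creator>Example Publisher, Inc.</software_creator>
  <software_licensor>Example Publisher, Inc.</software_licensor>
  <release_date>2009-06-01</release_date>
</software_identification_tag>
```

Because every publisher would emit the same fields in the same format, an inventory tool could read these files directly instead of guessing at identities from registry entries or file headers.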
Adobe and Symantec are beginning to tag new releases according to the standard, and some branches of the U.S. federal government, such as the DoD and GSA, are moving toward requiring tags in their software procurements. A handful of other publishers have pledged to tag future releases of their software.
Alternatively, some vendors have created their own models for tagging their applications. While this may help end users identify software developed by those specific manufacturers, it is far from an industry-wide standard and thus leaves the global problem untouched.
The reality for corporate IT departments is that the ISO tagging standard will not be a practical solution until most, if not all, software publishers fully adopt it. Even if they begin now in earnest, license audits based on tagging alone will provide little value until virtually all installed applications are replaced with newer "tagged" versions.
Let's look at why it is so difficult to ascertain information about installed software. After all, isn't it simply a matter of examining Add/Remove Programs? The answer is no. In fact, there exists no single methodology by which applications can be consistently identified and normalized across all titles and manufacturers, not even by the automated discovery tools that claim to do exactly this.
Software inventory products that examine Add/Remove Programs information (stored in the Windows registry), for example, are notorious for incorrectly counting some applications. This is because registry entries are frequently present only for those applications installed using the Windows Installer.
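A small sketch makes the problem concrete. The dictionary below stands in for subkeys of the registry's Uninstall branch; on a real Windows machine these would be enumerated with the `winreg` module, but the entries here are hypothetical, chosen to show how a registry-only scan both drops and over-counts items.

```python
# Hypothetical sample of Add/Remove Programs registry data, mimicking
# subkeys under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall.
# Value names (DisplayName, DisplayVersion) mirror the real registry;
# the entries themselves are invented for illustration.
SAMPLE_UNINSTALL_KEYS = {
    "{AC76BA86-0000}": {"DisplayName": "Adobe Reader 9",
                        "DisplayVersion": "9.0.0"},
    "AddressBook":     {},  # OS component with no DisplayName at all
    "MyTool_is1":      {"DisplayName": "MyTool 2.1 (remove only)"},  # no version
    "KB958690":        {"DisplayName": "Security Update"},  # not a licensable product
}

def naive_inventory(keys):
    """Return the application names a registry-only scan would report."""
    return sorted(
        values["DisplayName"]
        for values in keys.values()
        if "DisplayName" in values
    )

apps = naive_inventory(SAMPLE_UNINSTALL_KEYS)
# The scan silently skips the entry with no DisplayName yet happily counts
# the security update, so neither the total nor the titles map cleanly
# onto what actually requires a license.
print(apps)
```

The point of the sketch is not the code itself but the data: even clean-looking registry entries mix licensable products, updates, and OS components, which is why human intervention or a curated catalog ends up being required.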
Worse yet, for applications that are present, the registry data doesn't always correlate what's installed with what actually requires a license. This makes responsible licensing decisions virtually impossible. Other methodologies utilized by inventory tools, such as examining the installer (MSI) database or application file headers, provide similarly incomplete or misleading information, leading to equally problematic outcomes. Because of such shortcomings, asset management tools that rely on the above methodologies require varying levels of human intervention by end users to translate presented data into truly reliable information.
To circumvent these issues, some discovery tools rely on proprietary software catalogs to identify installed programs. These databases are generally compiled using multiple identification methodologies, but their contents are manually validated and normalized in such a way that they correspond one-to-one with licensed application titles. But even with the potential for greater accuracy, software catalogs aren't necessarily the perfect solution for all companies, as it's virtually impossible for any database to contain information about every application ever released to the desktop.
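Conceptually, a catalog is a many-to-one mapping from the raw strings that discovery tools collect to normalized, licensable titles. The sketch below assumes a tiny hypothetical catalog; the product names and mapping logic are invented purely to illustrate both the normalization step and the catalog's inevitable blind spot.

```python
# Hypothetical catalog mapping raw discovered names (lowercased) to
# normalized, licensable titles. Real catalogs hold many thousands of
# manually validated entries; these three are invented for illustration.
CATALOG = {
    "acrobat 9.0":              "Adobe Acrobat 9 Standard",
    "adobe acrobat 9 standard": "Adobe Acrobat 9 Standard",
    "winzip 12.1 (32-bit)":     "WinZip 12 Standard",
}

def normalize(raw_name):
    """Map a discovered name to its catalog title, or flag it as unknown."""
    return CATALOG.get(raw_name.strip().lower(), "UNRECOGNIZED: " + raw_name)

discovered = ["Acrobat 9.0", "WinZip 12.1 (32-bit)", "In-House Payroll Tool"]
results = [normalize(name) for name in discovered]
# Two differently formatted Acrobat strings would collapse to one title,
# but the in-house tool falls through: no database can cover every
# application ever released to the desktop.
```

This is why catalog-driven tools still surface an "unrecognized" bucket for human review, and why a universal tagging standard remains attractive: it would move identification from after-the-fact matching to authoritative data shipped with the product.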
There's no doubt that software identification creates a lot of pain within the enterprise. So why aren't software publishers doing more to ease the burden? Let's face it: when it comes to educating the market about software piracy and enforcing compliance among end users, the industry is aggressive and well organized. But vendors continue to place the burden of verification and proof of compliance squarely on their customers.
Perhaps the failure to act on tagging stems from a lack of awareness of the ISO standard or a lack of conviction that it will solve the problem. Maybe it's due to the inherent difficulty of justifying product enhancements that don't contribute to "marketable functionality," organizational barriers that hinder coordination of efforts across broad product lines, or a perceived lack of demand among end users. Or maybe it's simply easier to capitulate to the chicken-and-egg paradox: software vendors may not commit to the ISO standard until they see critical mass, yet critical mass won't exist until most vendors are firmly on board.
Whatever the reason, until publishers demonstrate they are committed to labeling their software in a way that allows license analysis and compliance reporting to be turned into reliable, automated routine tasks, IT departments will continue to devote countless resources -- and a great deal of anxiety -- to obtaining accurate views of their license positions.
Meanwhile, CIOs must recognize and appreciate the challenges and risks faced by their IT staffs in evaluating their license positions, and ensure they have both the knowledge and tools needed to limp along until the promise of software tagging is fulfilled.
Kris Barker is the co-founder and CEO of Express Metrix, a leader in IT asset management solutions. Kris was an early participant in the ISO 19770-2 standards work and continues his involvement by participating in the 19770-3 entitlement standard. Kris is also a professional educator, with more than 10 years of experience teaching higher-education students Web development and software programming skills. An aeronautical engineer by training, Kris has worked in both development and management positions at WRQ (now Attachmate), DEC and Boeing.