
The Quest for Secure Software

Counts of reported vulnerabilities in software products per year vary, but averaging the available figures suggests that approximately 2,500 vulnerabilities were reported in 2002. By 2006 that count had risen to something like 8,000. According to several sources, the number of vulnerabilities reported in 2007 fell slightly, but it is safe to say that over the years the number of reported vulnerabilities has risen substantially. Why? For one thing, there are simply that many software vulnerabilities waiting to be discovered. Additionally, the number of people trying to discover vulnerabilities has grown (discovering them can bring fame and glory, as well as substantial financial remuneration), and discovery methods have improved considerably, too.

Most experts agree that the solution is to build security into the programming process itself: programmers avoid introducing bugs in their code by allocating sufficient memory whenever programs take in input, ensuring that all state transitions are recognized and handled in a way that avoids abnormal conditions, ensuring that any elevated privileges with which one routine runs are rescinded before control is passed to the next routine, and so on (a short code sketch following the list below illustrates the first of these practices). However, while experts agree on what must in principle be done, they vigorously disagree on how to achieve it. Some possible approaches include:

  1. Training undergraduate and graduate computer science students in secure programming methods.
  2. Training programmers through courses offered by security training organizations such as SANS.
  3. Offering in-house training not on a one-time basis, but rather systematically (e.g., two days every quarter).
  4. Setting up a system of rewards and punishments given for the presence or lack of vulnerabilities in code produced by each developer throughout an organization.
  5. Performing a manual analysis of code shortly after it is written, often through peer reviews of code.
  6. Using technology that spots potential bugs in code, thereby allowing quality assurance and other staff to investigate whether or not vulnerabilities exist before the code is released to the public.
  7. Subjecting alpha and beta code to vigorous security testing, thereby also enabling vulnerabilities in the code to be identified before it is released to the public.
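
Before weighing these approaches, it may help to make the first of the secure-programming practices mentioned above concrete. The fragment below is a minimal sketch in C, not drawn from any particular product or curriculum; the buffer size, function name, and prompt are illustrative assumptions. It reads a line of input into a fixed-size buffer with fgets(), which cannot overrun the buffer, and then allocates exactly as much memory as the input actually requires.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: the buffer size and function name are assumptions
     * made for this sketch, not part of any particular methodology. */
    #define LINE_MAX_LEN 256

    /* Returns a newly allocated copy of one input line, or NULL on failure. */
    static char *read_line_safely(FILE *in)
    {
        char buf[LINE_MAX_LEN];

        /* fgets() never writes more than sizeof(buf) bytes, unlike the
         * unbounded gets(), which cannot know how large the destination is. */
        if (fgets(buf, sizeof(buf), in) == NULL)
            return NULL;

        /* Strip the trailing newline, if any. */
        buf[strcspn(buf, "\n")] = '\0';

        /* Allocate exactly as much memory as the input actually needs. */
        char *copy = malloc(strlen(buf) + 1);
        if (copy == NULL)
            return NULL;        /* handle the abnormal condition rather than ignore it */
        strcpy(copy, buf);      /* safe: destination was sized from strlen(buf) */
        return copy;
    }

    int main(void)
    {
        printf("Enter your name: ");
        char *name = read_line_safely(stdin);
        if (name == NULL) {
            fprintf(stderr, "input error\n");
            return EXIT_FAILURE;
        }
        printf("Hello, %s\n", name);
        free(name);
        return EXIT_SUCCESS;
    }

The same discipline of checking every size and every return value underlies the other practices listed above; the question the rest of this post takes up is how to get programmers to apply it routinely.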

Some of these solutions appear to be much more satisfactory than others. I fear that one of the more unsatisfactory approaches is training undergraduate and graduate computer science students (approach 1). I am not saying that this kind of training should be withheld; quite the contrary. What I am saying is that although this approach benefits those who receive the training, it does not provide a sufficiently pervasive solution, because many if not most individuals who become programmers do not major in computer science, let alone go to college at all. Holding training courses for professionals (approach 2) seems somewhat better in that a wider range of programmers can potentially be reached, but the fact that such courses generally last at most two to five days considerably limits their usefulness; that is hardly enough time to reverse a lifetime of programming habits. Additionally, think of all the vulnerable code that programmers will produce before they finally get the training. In-house training (approach 3) seems better yet, provided that an organization actually recognizes that secure programming is highly desirable and is willing to invest the needed resources, as Microsoft has in connection with its Trustworthy Computing Initiative (TCI). Rewarding or punishing employees through salary increases or reductions and bonuses (approach 4) is very likely to work, and has in fact ostensibly worked well in connection with Microsoft's TCI. Finding bugs after they have been created, as prescribed by approaches 5, 6, and 7 above, can also work, but the fact that these solutions are post hoc renders them only partially effective. The best way to get rid of vulnerabilities is to avoid introducing them in the first place. I thus view approaches 5, 6, and 7 as "defense in depth" mechanisms: used as a second safety net beneath the primary net of secure programming methods, they are valuable, but on their own they will never go very far in decreasing vulnerabilities in software.

Debating which approach to producing more secure software is best may, however, be largely a moot issue. Any approach, or combination of approaches, is better than what the software industry currently uses (or, better said, so often fails to use). Security in software must start with a commitment by senior management; without adequate backing and resources from senior management, it is extremely unlikely that vulnerabilities in software will be minimized. And until the public stops buying, or at least buys less of, vulnerability-ridden software, senior management will continue to have little reason to commit to producing more secure software.
