Network Access Control Technology: Will it Succeed?
I recently attended the Interop Conference in Las Vegas. As I strolled around the exhibition area, I could not help noticing all the booths at which Network Access Control (NAC) products were being promoted. NAC technology is used to prevent potentially dangerous systems from connecting to networks. NAC tools examine various aspects of each system that tries to connect to a network and then, on the basis of the results, either allow or deny network access to that system. So, for example, if the user of a Windows XP system tries to connect to a network, but that system is infected by a worm, NAC technology is supposed to detect the infection, sever the existing network connection, and apply a defensive measure such as denying any IP address to that system until it is healthy again. Some NAC implementations perform tests such as determining whether a system connecting to the network has unpatched vulnerabilities, whether it can connect using IPv6, whether certain security-related settings are enabled, and more.
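The admission decision described above can be reduced to a simple posture check. The sketch below is purely illustrative and assumes nothing about any vendor's actual product: the `Endpoint` fields and the specific checks are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Endpoint:
    """Hypothetical posture snapshot of a system requesting network access."""
    os_patched: bool        # e.g., current security patches installed
    antivirus_running: bool # e.g., required security settings enabled
    infected: bool          # e.g., known worm signature detected


def posture_check(ep: Endpoint) -> bool:
    """Return True to grant access, False to quarantine the system
    (for instance, by withholding an IP address lease)."""
    if ep.infected:
        return False  # known-compromised: sever/deny the connection
    if not ep.os_patched or not ep.antivirus_running:
        return False  # unpatched or unprotected systems are quarantined
    return True


# A clean, patched system is admitted; an infected one is quarantined.
print(posture_check(Endpoint(os_patched=True, antivirus_running=True, infected=False)))  # True
print(posture_check(Endpoint(os_patched=True, antivirus_running=True, infected=True)))   # False
```

Real NAC products gather this posture data via agents or network scans; the point here is only that admission reduces to an allow/deny decision over a set of checks.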
One of the first implementations of NAC technology was at the University of California, Santa Barbara. Network administrators there were experiencing nightmare scenarios due to massive virus and worm infections that often started when a single infected PC connected to the network. By implementing a NAC solution, the institution was able to greatly reduce the time spent identifying and cleaning up infected systems.
In theory, NAC technology is a wonderful idea. In reality, however, NAC is not always as ideal as it sounds. For example, false alarms are a frequently reported problem with many of today’s NAC products. A perfectly healthy system may trigger a false alarm and be locked out unjustly, resulting in user frustration, lost productivity, and escalating help desk costs. Conversely, compromised systems sometimes pass a NAC examination and are granted network access even though they pose considerable risk to other machines on the network. Some NAC products are capable of performing only an initial examination; a system may become compromised after NAC has deemed it worthy of connecting to the network, at which point NAC can do nothing more. Still another concern, possibly the greatest one, is what happens if someone compromises one or more NAC servers. Last fall I attended a conference talk in which I heard of a penetration test in which the tester targeted the NAC servers in a large network. By exploiting vulnerabilities in these servers, he was able to gain root access to them; once he had root access, he locked all the administrators of these systems out of any network access whatsoever. (Humorously, the talk title included the phrase “self-defeating networks.”)
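The “initial examination only” weakness can be made concrete: a NAC that evaluates posture once at connect time keeps trusting a system that becomes compromised afterward, whereas a design that re-evaluates periodically can revoke access mid-session. A minimal sketch, with invented function names that stand in for no particular product:

```python
def admit_once(posture_ok):
    """One-time NAC: decide at connect time, never re-check."""
    return posture_ok()


def monitor_continuously(posture_ok, checks=3):
    """Continuous NAC: re-evaluate posture periodically;
    revoke access on the first failed check."""
    for _ in range(checks):
        if not posture_ok():
            return False  # revoked mid-session
    return True


# Simulate a system that is clean at connect time but compromised afterward:
# the first two posture checks pass, the third fails.
states = iter([True, True, False])
posture = lambda: next(states)

print(admit_once(lambda: True))       # True  -- admitted once, never re-checked
print(monitor_continuously(posture))  # False -- revoked on the third check
```

The one-time design never sees the third state at all, which is exactly the gap described above.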
It occurred to me that many of the problems with today’s NAC technology are the same problems found in today’s intrusion prevention technology, especially with respect to what happens when false alarms and missed detections occur. One must, therefore, seriously question the continued viability of NAC technology. On the other hand, the world of IT needs something like NAC, because allowing compromised and vulnerability-ridden systems to connect to healthy networks is one of the worst things that can happen from a security standpoint. The big question in my mind, therefore, is whether NAC technology will grow and improve in accuracy and functionality soon enough, or whether it will stall because of limited acceptance. As with just about everything else in the world of information security, only time will tell.