Malware Detection: The Case for a New Approach
I am not saying that you should let your endpoint software expire. Something is better than nothing, and defense-in-depth (e.g., a virus wall plus endpoint software plus integrity-checking software) is better yet. But a very serious problem exists: anti-malware tool vendors are not keeping up with the proliferation and sophistication of today's malware. And information security professionals, being fundamentally suspicious and paranoid people at heart, should perhaps not expect them to.
Many of the most powerful rootkits (Rootkit_J, Rootkit/Win32.Agent.Generic.dx, RTXT_AGENT.ebk and Alureon, to name a few) are programmed to bypass or defeat the processes of software designed to find them. These rootkits often modify kernel data structures that report the health of running processes, such that when a system administrator enters a command to display all the processes running on a system, the output selectively excludes rootkit-initiated processes. In other cases, rootkits run via maliciously altered libraries or other system programs. But even the best anti-malware tools miss an unacceptable amount of the malware loaded on the machines on which they run.
Given the limitations of anti-malware tools, another approach may become more viable. What if information security practices started supplementing their anti-malware efforts with random inspections of hosts throughout the enterprise, with an emphasis on malware that could reside on those hosts? Some kinds of surreptitious malware, rootkits in particular, embed themselves in systems such that the only reliable way to identify them is to have a highly proficient technical staff member enter commands and examine log entries that less initiated staff would be less inclined to use. For example, the output of netstat on a host compromised by a Trojan may be doctored to omit the processes and connections running on the Trojan's behalf; a connection to a hostile Web site will simply never be displayed. A sophisticated technical staff member might, however, be aware that the /proc pseudofile system exists on numerous Linux and Unix operating systems. An investigator can thus cd to /proc and "drill down" to find information about the host in question, information gathered independently of any malware that may be co-resident. A Linux and/or Unix guru could thereby determine what is really occurring by probing each potentially infected system and examining how it responds to /proc and other facilities.
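The cross-checking idea above can be sketched in a few lines of shell. This is a minimal illustration, assuming a Linux host with /proc mounted: a user-land rootkit that doctors ps (or the libraries it depends on) often fails to hide the matching /proc/<pid> directory, so PIDs visible in /proc but absent from ps output deserve a closer look.

```shell
#!/bin/bash
# Sketch only: compare the PID list reported by ps against the
# numeric directories actually present in /proc.

ps_pids=$(ps -e -o pid= | tr -d ' ' | sort)
proc_pids=$(ls /proc | grep -E '^[0-9]+$' | sort)

# comm -13 prints lines found only in the second list,
# i.e., PIDs present in /proc but missing from ps output.
hidden=$(comm -13 <(echo "$ps_pids") <(echo "$proc_pids"))

if [ -n "$hidden" ]; then
    echo "PIDs present in /proc but hidden from ps:"
    echo "$hidden"
else
    echo "No discrepancy between ps and /proc."
fi
```

Because processes start and exit between the two snapshots, a single discrepancy is not proof of compromise; rerun the check and investigate only PIDs that remain hidden consistently. A kernel-level rootkit can, of course, doctor /proc as well, which is why such checks supplement rather than replace other controls.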
I hope that my point here has not been lost. In the 21st century, if you want security, you have to make it happen. Malware is a particularly difficult problem. Worse yet, today's anti-malware tools do not perform up to expectations. But you still have an alternative: creating and running a manual malware inspection program. Randomly sample X percent of your organization's hosts and ask senior management for permission to have one of your gurus look for evidence of malware on those hosts. The gurus will, without a doubt, discover malware they never expected. Keep doing this. Security based purely or mostly on software does not work. Ensure that when it comes to operational security (not just the side of operational security that attempts to discover malware), your organization has procedures for regular inspection of hosts to determine whether they are infected. And if you take me up on this suggestion, please do not be too disappointed when you find that a significant percentage of hosts are infected with at least one Trojan program.
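The random-sampling step can be as simple as the following hypothetical sketch. The inventory file name (hosts.txt) and the 5 percent sampling rate are assumptions for illustration; substitute your own asset list and whatever value of X your organization chooses.

```shell
#!/bin/bash
# Build a toy inventory for demonstration; in practice this file
# would come from your asset-management system.
printf 'host%02d.example.com\n' $(seq 1 40) > hosts.txt

rate=5   # assumed sampling rate (percent); pick your own X
total=$(wc -l < hosts.txt)

# Round up so small inventories still yield at least one host.
sample=$(( (total * rate + 99) / 100 ))

# Print a random sample of hosts for manual inspection.
shuf -n "$sample" hosts.txt
```

Random selection matters here: if the same "convenient" hosts are inspected every cycle, attackers who compromise the rest of the fleet face no inspection risk at all.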