The New Intrusion Detection: Part 2
How must intrusion detection change to keep up with the ever nastier and subtler attacks that have occurred over the last few years? There are several possible answers to this question, but in my mind the common denominator in today’s security breaches is the presence of malware. At a minimum, attackers want back doors on the systems they have compromised, but why settle for just a back door when you can make the owned system part of a botnet that generates spam to bring in some cash? Or why not rent the botnet out to a less technically sophisticated perpetrator? Malware can also be used to infect still other systems, something that helps keep perpetrators almost entirely out of the spotlight.
Conventional ways of discovering malware include: running anti-virus and other malware detection software; running Tripwire-type tools on hosts; inspecting logs; running host-based intrusion prevention systems (IPSs) that spot symptoms of attempted malware installation and thwart it; deploying network-based intrusion detection systems (IDSs) that can at least spot traffic to and from telltale ports known to be used by certain types of malware; and more. With the exception of host-based IPSs, which are capable of actually countering malware before it can infect a system, these methods are of limited value against well-written rootkits and other better-than-average malware, for the same reasons I presented in part 1 of this series of blog entries.
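To make the Tripwire-type approach concrete, here is a minimal sketch of a file-integrity check: record a cryptographic hash of each monitored file as a baseline, then report any file whose current hash differs. The function names and the use of a temporary file as a stand-in for a monitored binary are illustrative assumptions, not any particular tool's implementation.

```python
# Hypothetical sketch of a Tripwire-style integrity check.
import hashlib
import os
import tempfile

def file_hash(path):
    """SHA-256 hash of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline):
    """baseline: dict of path -> previously recorded hash.
    Returns the paths whose contents no longer match the baseline."""
    return [p for p, old in baseline.items() if file_hash(p) != old]

# Demo: a temporary file stands in for a monitored system binary.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original contents")
    path = f.name
baseline = {path: file_hash(path)}      # record the trusted state
with open(path, "wb") as f:
    f.write(b"tampered contents")       # simulate a malware modification
print(changed_files(baseline))          # lists the tampered file's path
os.unlink(path)
```

As the article notes, a well-written rootkit that hooks the OS's file-reading APIs can feed such a checker the original bytes, which is exactly why host-resident checks alone are of limited value.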
The new intrusion detection will have functionality that enables it to obtain information from multiple sources and then correlate it to determine both that something is wrong and what its exact nature is. We can no longer afford the luxury of depending on any single source of information, such as an IDS, because it is too easy for perpetrators and malware to silence or bypass that source. Additionally, much of today’s malware can hide itself so well that it leaves only minuscule evidence of its existence. For example, a rootkit-infected host may be prevented from showing that TCP port 1214 is open and listening when commands such as netstat and lsof are entered on it. However, the border firewall can spot and report traffic that goes to that port on the victim host, and an uncompromised host near the compromised one can spot and report slow, gradual scans coming from it.
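The port 1214 example above can be sketched in a few lines of correlation logic: flag any port that the firewall sees traffic to but that the host's own netstat output does not report as open. The log and netstat formats here are simplified assumptions for illustration; a real correlation engine would parse its vendors' actual formats.

```python
# Hypothetical sketch: correlate firewall logs with host-reported open
# ports. A port visible on the wire but denied by the host is a classic
# rootkit symptom.

def ports_from_firewall_log(lines, host_ip):
    """Destination ports of allowed traffic to host_ip.
    Assumes a simplified 'src dst dport action' log format."""
    ports = set()
    for line in lines:
        src, dst, dport, action = line.split()
        if dst == host_ip and action == "ALLOW":
            ports.add(int(dport))
    return ports

def ports_from_netstat(lines):
    """Listening ports from simplified 'proto local_address state' output."""
    return {int(line.split()[1].rsplit(":", 1)[1])
            for line in lines if line.split()[2] == "LISTEN"}

def hidden_ports(firewall_lines, netstat_lines, host_ip):
    """Ports the network sees in use but the host itself denies having open."""
    return ports_from_firewall_log(firewall_lines, host_ip) - \
           ports_from_netstat(netstat_lines)

fw = ["10.0.0.5 192.168.1.20 1214 ALLOW",
      "10.0.0.9 192.168.1.20 22 ALLOW"]
ns = ["tcp 0.0.0.0:22 LISTEN"]       # the rootkit hides port 1214
print(hidden_ports(fw, ns, "192.168.1.20"))  # {1214}
```

The point is not the parsing but the cross-checking: neither data source alone proves anything, while the discrepancy between them is strong evidence.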
Because of the growing proportion of attacks against applications, Web applications in particular, application firewalls also need to be part of the new intrusion detection. The majority of these firewalls today are aware not only of typical types of attacks against applications, but also of the semantics of the applications they protect. This makes them able not only to keep attack packets from reaching their destination, but also to send data about each attack to a central log aggregation and analysis host.
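A minimal sketch of what such a semantics-aware check might look like: the firewall knows the types each application parameter should have, blocks requests that violate that schema, and emits a structured event for the central log host. The schema, function names, and event format are all assumptions made up for this example.

```python
# Hypothetical sketch of an application-firewall check that enforces
# application semantics and reports blocked requests as structured events.
import json

# Assumed application schema: each parameter's expected Python type.
EXPECTED_PARAM_TYPES = {"account_id": int, "amount": float}

def inspect_request(params, events):
    """Return True if the request may pass; otherwise append a
    blocked-request event destined for the central log aggregator."""
    for name, value in params.items():
        expected = EXPECTED_PARAM_TYPES.get(name)
        if expected is None or not isinstance(value, expected):
            events.append(json.dumps({
                "verdict": "blocked",
                "param": name,
                "reason": "violates application semantics",
            }))
            return False
    return True

events = []
# A well-formed request passes; an injection-style string where an
# integer belongs is blocked and logged.
assert inspect_request({"account_id": 42, "amount": 9.99}, events)
assert not inspect_request({"account_id": "1 OR 1=1", "amount": 0.0}, events)
```

Note that the blocked request is not merely dropped: the event record is what feeds the correlation engines discussed above.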
Additionally, information about data extrusion will be a necessary part of the new intrusion detection. The fact that megabyte after megabyte of information is being sent from a host on an internal network to a contextually implausible destination (e.g., an IP address within the range of addresses assigned to South Korea) may not be proof positive that a data extrusion incident has occurred, but it would be a compelling reason to investigate further. Likewise, attempts to read and then transfer files that have been classified as proprietary or sensitive should trigger data extrusion detection and reporting mechanisms. Information concerning possible data extrusion attempts should also be sent to event correlation engines, because this information may be the missing piece of the puzzle concerning a possible attack or set of attacks.
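The volume-based part of this idea can be sketched simply: accumulate outbound bytes per (internal source, external destination) pair and flag any pair that crosses a threshold. The flow format and the 50 MB threshold are illustrative assumptions; real deployments would tune the threshold and add destination-plausibility checks.

```python
# Hypothetical sketch: flag outbound flows whose cumulative volume to a
# single external destination exceeds a threshold.
from collections import defaultdict

VOLUME_THRESHOLD = 50 * 1024 * 1024  # 50 MB; tune per environment

def suspicious_destinations(flows):
    """flows: iterable of (internal_src, external_dst, bytes_sent).
    Returns the (src, dst) pairs whose total outbound volume exceeds
    the threshold."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    return {pair for pair, total in totals.items()
            if total > VOLUME_THRESHOLD}

flows = [("10.1.2.3", "203.0.113.9", 30 * 1024 * 1024),
         ("10.1.2.3", "203.0.113.9", 30 * 1024 * 1024),  # 60 MB total
         ("10.1.2.4", "198.51.100.7", 1024)]
print(suspicious_destinations(flows))  # {('10.1.2.3', '203.0.113.9')}
```

As the article stresses, a hit here is not proof of extrusion, only a trigger for investigation and an event worth forwarding to the correlation engine.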
Finally, detailed analysis of potentially compromised systems for signs of malware will be part of the new intrusion detection. When there are a few subtle indications that a host may be compromised, but no conclusive evidence, software and hardware tools that detect malware (rootkits very much included) can often provide an efficient and cost-effective means of malware detection. Unfortunately, however, no malware detection software is 100 percent effective; instead, multiple tools need to be run in the hope that at least one will be able to detect and report the presence of malware. Sometimes, however, all tools will fail at detection because a particular type of malware is so adept at cloaking its presence. Manual inspection by one or more technical experts is appropriate at this point.

On Windows hosts, rootkits are very difficult to discover manually because they tend to hook operating system (OS) Application Programming Interfaces (APIs), Event Log-related APIs, and other critical APIs. However, because of some diversity among different Windows OSs in functions such as Event Logging and Performance Monitoring, rootkits may not always be able to delete all evidence of their installation. Checking logs (especially the System Log) for signs such as suspicious service starts and/or stops and device driver errors may therefore indicate that a rootkit has been installed.

Linux and Unix experts have a wider variety of powerful manual analysis techniques from which to choose. One of the best is using the /proc file system to spot illogical execution paths, discrepancies between reported open ports (as indicated by netstat and lsof output) and actual open ports, stack execution errors, and much more. The bottom line is that computer forensics methods and the new intrusion detection are bound to become increasingly intertwined.
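The /proc port-discrepancy technique mentioned above can be sketched as follows: parse the kernel's own view of listening TCP sockets from /proc/net/tcp (where ports are hexadecimal and state 0A means LISTEN) and subtract the ports that a possibly trojaned netstat binary admits to. The netstat parsing assumes a simplified 'proto local_address state' output format for illustration.

```python
# Hypothetical sketch: compare the kernel's socket table in /proc/net/tcp
# with the output of a (possibly trojaned) netstat binary.

def listening_ports_from_proc(proc_net_tcp_lines):
    """Parse /proc/net/tcp content: the local address field holds a
    hex 'addr:port' pair, and state 0A means LISTEN."""
    ports = set()
    for line in proc_net_tcp_lines[1:]:        # skip the header row
        fields = line.split()
        local_address, state = fields[1], fields[3]
        if state == "0A":
            ports.add(int(local_address.split(":")[1], 16))
    return ports

def listening_ports_from_netstat(lines):
    """Listening ports from simplified 'proto local_address state' output."""
    return {int(l.split()[1].rsplit(":", 1)[1])
            for l in lines if l.split()[2] == "LISTEN"}

proc = ["  sl  local_address rem_address   st ...",
        "   0: 00000000:0016 00000000:0000 0A ...",   # 0x16  == port 22
        "   1: 00000000:04BE 00000000:0000 0A ..."]   # 0x4BE == port 1214
ns = ["tcp 0.0.0.0:22 LISTEN"]                        # port 1214 is hidden
print(listening_ports_from_proc(proc) -
      listening_ports_from_netstat(ns))               # {1214}
```

A kernel-level rootkit can of course lie to /proc as well, which is why the article treats this as one technique among many rather than a definitive test.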
The new intrusion detection is right around the corner. Its logic will be to “notice anything that moves and then dig in deeper, as deep as necessary.” Its main focus will be on finding any evidence of malware, no matter how apparently small. Event correlation will, if anything, become an even more essential function. And detailed manual inspection of certain potential victim hosts, no matter how time consuming this may be, will be another essential component.