To say that rootkits present more risk than any other type of malware is hardly debatable. Estimates that one out of every five Windows systems on the Internet is infected with some kind of rootkit are not uncommon, and although proof is hard to come by, these estimates are probably not unreasonable. As bad as the rootkit problem has been, we have to some extent been spared because, to the best of our knowledge, rootkits for 64-bit operating systems have not been developed–at least until now. Read more…
I was fortunate enough to be able to attend the SANS Virtualization Summit in Washington DC last week. To me, the name SANS equates to quality, and the SANS Virtualization Summit did not prove to be any kind of exception to this rule.
Not unexpectedly, a great deal of the content covered in this workshop had to do with cloud computing. Whether we like it or not, many people equate virtualization and cloud computing. The difference at the SANS Virtualization Summit was that the cloud computing talks actually had some substance to them, unlike most of the other talks on this subject that I have attended at other meetings and conferences. My favorite was a talk by Matt Linton, who described an elegant network architecture in which security processes were well integrated into cloud services at NASA-Ames Research Center. In another session Alexander Meisel, CEO of Art of the Defense, pointed out an often overlooked advantage of cloud computing, namely that it can offer such extensive computing power that some complex business processes that may not run well in conventional IT environments may be able to run well in the cloud.
My main interest is not cloud computing, however, but rather virtualization, especially virtualization security. A few sessions focused on the difficulty of knowing just how many virtual machines (VMs) are running in virtualized environments. One IT director talked about an audit that was conducted. The auditors found a total of 250 VMs, but the director knew of only 50 of them. Imagine the security (let alone auditing) issues that this “VM sprawl” created! Other sessions focused on the differences between conventional physical networking and virtual networking. Traffic between VMs may travel routes that network and system administrators have never envisioned. Conventional security barriers such as firewalls and intrusion prevention systems might thus not be able to inspect and in some cases block some of the traffic in virtual networks. Fortunately, technology such as virtual firewalls that mitigate this problem has become available in the past few years.
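The audit gap described above–250 VMs running, only 50 known to the director–is exactly the kind of discrepancy that can be caught by routinely diffing an authoritative inventory against what the hypervisors actually report. Below is a minimal illustrative sketch; the VM names and the idea of a flat inventory list are hypothetical, and a real deployment would pull the "discovered" list from a hypervisor management API.

```python
# Illustrative sketch: detecting "VM sprawl" by diffing an authoritative
# inventory (e.g., a CMDB export) against VMs discovered on the hypervisors.
# All names and data below are hypothetical.

def find_sprawl(known_vms, discovered_vms):
    """Return (unknown, stale): VMs running but absent from the inventory,
    and inventory entries no longer found running."""
    known = set(known_vms)
    discovered = set(discovered_vms)
    unknown = sorted(discovered - known)   # running but undocumented
    stale = sorted(known - discovered)     # documented but not found
    return unknown, stale

known = ["web-01", "db-01", "mail-01"]
discovered = ["web-01", "db-01", "mail-01", "test-77", "dev-sandbox"]
unknown, stale = find_sprawl(known, discovered)
print(unknown)  # the VMs an auditor would find that nobody knew about
```

Run on a regular schedule, even a crude comparison like this turns sprawl from an audit-day surprise into a routine finding.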
One of the most interesting talks was by the conference chair, Tom Liston. He described his research efforts in trying to discover and exploit a vulnerability in VMware that would allow someone with access to a guest VM to obtain unauthorized access to the host. Many virtualization specialists claim that “bare metal” virtualization prevents this kind of thing from happening. He refuted this claim, saying that it is impossible for a virtual machine monitor (VMM) to run directly on hardware–there must be at least some operating system instructions available if the VMM layer is to function. This gives an attacker the ability to identify and exploit vulnerabilities in that operating system layer, even if it is razor thin. Tom found that these operating system instructions were written in assembly language, and that some of them were not part of the conventional instruction set, but were instead custom instructions. He discovered several exploitable coding flaws; one allowed him to crash a guest VM and thereby gain unauthorized access to the host on the same physical machine.
A good part of the second day of the conference dealt with compliance issues. Speakers and panelists described numerous complications that cloud computing and virtualization have presented in achieving compliance with various regulations. They then advocated solutions ranging from the use of certain technology products to forming partnerships with providers and/or writing and enforcing SLA provisions that help ensure that compliance requirements are being met.
Ed Ray and I co-presented a talk in which we asserted that the most fundamental problem with security in virtualized environments is vulnerabilities in virtualization software. We described vulnerabilities that have surfaced in virtualization products such as various flavors of VMware, Denali, and Windows Server 2008 Hyper-V over the years, and said that if you do other things right for security in virtualized environments, but do not create and implement a vulnerability patching process, virtualized environments are wide open to attackers. If you would like a copy of this talk, just send email to email@example.com.
I entered the virtualization security arena over four years ago, and have found it to be fascinating. I have done my best to learn as much as I can, and I have learned quite a bit over the years, but the SANS Virtualization Summit did more to accelerate my knowledge and understanding than anything prior to it. Good job, SANS, good job, Tom Liston.
One of the things we information security professionals are constantly told is to be sure to anticipate risks with extreme consequences when we perform risk analyses. Events such as Hurricane Katrina, the destruction of the World Trade Center by terrorists, and the meltdown at the Chernobyl nuclear power plant, with its gargantuan release of radioactivity, show that events far out of the ordinary sometimes occur and that we are usually ill-prepared to deal with them. Read more…
Despite my passion for intrusion detection, I am deeply concerned about what is happening in the intrusion detection arena. Nearly a year ago I wrote a short blog series called “The new intrusion detection.” I asserted that intrusion detection systems are missing many current types of attacks, such as social engineering attacks in which malicious attachments and URLs of malicious Web sites are contained in email messages that appear to come from someone the intended recipient knows. Statistics presented by Christopher Novak of Verizon Business Security Solutions’ Investigative Response team at the ISSA-Silicon Valley meeting earlier this week suggest that the problem is even worse than I previously suspected. According to Novak, only three percent of all attacks identified among Verizon Business’s clients were detected by signature-based intrusion detection systems (IDSs). In contrast, clients became aware of 69 percent of these attacks only after they were informed by a third party. Read more…
Five months ago I wrote in these pages a blog entitled “The Death of Risk.” It was a rant against the recent developments in banking which have featured fat cat bankers up to their ankles in the “moral hazard” miasma and happily getting their bailouts and bonuses from the overburdened taxpayers. The “mortgage meltdown” combined with the subsequent “credit crisis” happened mostly because a few bankers knew their firms were “too big to fail” and took advantage of this insight. I called it “the death of risk” because one of the things we all count on in information security is that the inadvisability of taking inappropriate risk is an almost universally accepted principle. It is not good to “bet the company” because that is usually an inappropriate risk. Some companies have gone out of business because their security was inadequate. But when Lehman Brothers or Bear Stearns went down, it was because somebody else bet the company and then got surprised when they didn’t get bailed out. Risk is everywhere, and coping with it is a major preoccupation of a legion of analysts from economists to CISSPs.
Now comes Donn Parker, one of the preeminent researchers and thinkers in information security, who may have taken the idea of the death of risk a little bit too far. Parker writes in a recent journal article,
“Think of security as a necessary overhead cost of doing business just as are facilities management, legal, audit, human resources, payroll, and accounting, and like the other overheads, it does not produce a return on investment (The return of savings from expenditures for security is unknown since the incidents that would have caused the savings did not occur.) I suggest that you gradually limit the extent of your risk assessments and reporting to meet only the minimum requirements of the law and regulations and remove the word “risk” from your writings, job titles, and job descriptions.”
[emphasis mine] Aaaaaaacckk! (that was me running screaming from the room!).
The article appeared in last month’s issue of The ISSA Journal (volume 8, issue 7, page 12), entitled “Our Excessively Simplistic Information Security Model and How to Fix It.” The main point of the article is Parker’s proposal to expand our traditional “CIA” model (confidentiality, integrity, and availability) into the new “Parkerian Hexad,” in which he introduces three additional attributes or objectives of security–utility, authenticity, and possession–explains why these three new attributes cover things that weren’t covered before, and shows how they complete the model of information security. Parker also adds many new types of controls (yes, Virginia, prevention, detection, and correction are not enough anymore). And he proposes new objectives for information security, including avoidance of negligence; an orderly and protected society; compliance with laws, regulations, and audits; ethical conduct; and successful commerce and competition. The problem is, these new objectives replace risk reduction. Unfortunately, “removing risk” from the definitional model of information security is impossible and absurd, and advocating it turns our whole profession on its ear.
There are many things in Parker’s article that I agree with. The three new attributes he proposes can be shown to relate to “CIA” in a very complementary way. As the velocity of information through our systems and networks continues to accelerate, we will very likely see more instances in which a breach of possession occurs while confidentiality is unaffected, availability is fine but utility is impaired, and integrity is intact but authenticity is compromised. Parker’s point here is that we have oversimplified the things we are striving for, and in so doing have missed important elements. However, Parker’s arguments against risk are themselves simplistic and rhetorically throw the baby out with the bathwater. He mostly rails against the calculation of aggregate risk for a company or organization and notes, quite correctly, that the actual risks from disparate and ostensibly independent threats may in many cases be highly correlated, yet we have no idea exactly how they interrelate.
Parker correctly exposes the computation of aggregate risk for a company–or, for that matter, the combination of precise risk results from dissimilar threats (say, the sum of the ALEs from airplane disasters and earthquakes)–as at best an intellectual adventure or, at worst, a dangerous illusion. But “remove risk from your writings”? No way does that make sense.
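For readers who have not run into the acronym, ALE (annualized loss expectancy) is conventionally computed as single loss expectancy times annualized rate of occurrence. The sketch below, with entirely hypothetical dollar figures, shows both the computation and the kind of naive summation across unrelated threats that Parker warns against:

```python
# Annualized Loss Expectancy: ALE = SLE * ARO
#   SLE = single loss expectancy ($ per incident)
#   ARO = annualized rate of occurrence (expected incidents per year)
# All figures below are hypothetical.

def ale(sle, aro):
    return sle * aro

# Two ostensibly independent threats:
earthquake_ale = ale(sle=5_000_000, aro=0.01)   # rare but expensive
malware_ale    = ale(sle=20_000, aro=12)        # frequent but cheap

# The naive aggregate Parker rails against: simply summing ALEs assumes the
# threats are independent and the point estimates are precise -- neither
# assumption usually holds in practice.
naive_aggregate = earthquake_ale + malware_ale
print(earthquake_ale, malware_ale, naive_aggregate)
```

Each individual ALE is a reasonable planning number; it is the pretense of precision in the sum that deserves the skepticism, not the vocabulary of risk itself.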
In the late 1970s Alfred Kahn, an economist from Cornell University, intemperately referred to a potential outcome of unwise policy as likely to lead to a recession, or a deep depression. White House advisors to President Carter objected, and thereafter Kahn promised never to refer to a “recession” by name. He later jokingly told reporters that while he could not comment on the likelihood of an economic downturn, he could talk about a “banana.” Later, he changed the fruit to a kumquat after a large banana company complained (I’m not making this up…). What are we to do without “risk” to talk about? The ideas of active acceptance of risk, residual risk, and inappropriate risk that we have worked so diligently to nurture are good, and we benefit from that common language. Risk is important…no, it is crucial. Without it we are simply mumbling about the abstruse. If I start talking about an orderly society and ethics, as Donn Parker argues in his article, I’ll use up all of my boardroom time slot explaining terms, and then my controls proposal will get voted down because no one will have a clue what I’m talking about. In the movie “Ghostbusters,” Egon describes the elevated level of psychokinetic energy in New York as a “Twinkie 35 feet long, weighing approximately 600 pounds.” When your latest pen test report discloses multiple severity-one exposures and shocking unpatched vulnerabilities, instead of referring to this as inappropriate risk, you can quote Winston Zeddemore from the movie,
“That’s a big Twinkie.”
In part 2 of this blog, I’ll talk about why I think risk has an undeservedly bad name, what problems have emerged from careless use of risk terms, and how to deal with all of this in a way that helps your program. If you are busy “removing risk” from your writings, at least turn “Track Changes” on so you can get it back later…
News concerning BP’s oil spill has made the daily headlines for the last several months. Fortunately, the leaking well has now been successfully sealed, yet experts tell us that the lingering effects of the spread of crude oil over so much land and sea are likely to affect our environment for years. What should concern the US public even more, however, is the possibility that another catastrophic event of the size of the recent one may occur sometime in the not-too-distant future because of the risk management approach that BP’s executive management has by all appearances adopted.
The recent Gulf of Mexico catastrophe is only one of a number of closely related incidents involving BP’s failure to mitigate operational and safety risks. You may, for instance, have read about the explosion at a BP refinery in Texas in 2005 that resulted in the deaths of 15 workers and injuries to another 170. In 2005 the Occupational Safety and Health Administration (OSHA) ordered BP to pay a fine of $21 million for numerous safety infractions at the refinery. This fine was increased to $50.6 million last year because of BP’s failure to implement safety measures mutually agreed upon by BP and OSHA, and at the same time BP was ordered to set aside at least $500 million more to implement safety and other measures that OSHA has determined are still missing. Labor Secretary Hilda Solis went so far as to say that the amount of these fines parallels “BP’s disregard for workplace safety and shows that we will enforce the law so workers can return home safe at the end of their day.” OSHA has in fact over the years cited BP for many hundreds of safety infractions in Texas and in other locations; recently BP had to pay another $5.9 million for the most recent round of these infractions. Remember, too, the BP oil spill in Prudhoe Bay, Alaska in 2006, which involved over 210,000 US gallons of crude oil. In late 2007, BP Exploration-Alaska pled guilty to negligent discharge of oil in violation of the US Clean Water Act and was ordered to pay a fine of $20 million.
With hate talk about BP being very popular nowadays, you might think that I am just following suit, but I really am not. My focus is instead on enterprise governance within BP (or the apparent lack thereof). BP’s executive management must have an incredibly high level of risk tolerance. BP’s goal ostensibly is to bring in the money, lots of it, without adequately dealing with a variety of serious operational and safety-related risks. If catastrophic events occur, BP then uses legal maneuvers, blames its contractors and affiliates to reduce the PR damage, and deploys other tricks in an attempt to reduce legal and regulatory liability and other negative consequences. For all practical purposes BP was caught completely off guard when the Gulf well blew up; apparently BP’s executive management felt that the probability of this or any other well blowing up in the fashion that it did was so small that it did not justify the cost involved in meeting safety standards and creating and testing a disaster recovery plan. When forced to appear before a Congressional committee investigating the oil spill, the then president of BP attempted to put the blame on the contractor who supplied the drilling rig equipment. Hmmm–the BP risk management modus operandi should be becoming very apparent.
Is BP’s executive management stupid? Although there may be ethical issues within this team, on the surface it appears to be anything but stupid. So far BP has had to pay a total of less than $600 million in fines over the years, but in the first quarter of this year this company was making $93 million each day. Risk mitigation costs should never exceed the expected losses they prevent. Face the facts–BP’s executive management has ostensibly weighed the costs against the risks and has decided that risk mitigation is too costly. So BP operations continue without adequate concern for the safety of its workers and for the welfare of those who are unfortunate enough to live close enough to BP oil rigs and refineries to be affected by each catastrophic incident.
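Using only the figures quoted above, the arithmetic behind that executive calculus is stark. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope calculation using the figures cited in the text:
# roughly $600 million in cumulative fines, versus first-quarter
# earnings of $93 million per day.
total_fines = 600_000_000    # cumulative fines, all years
daily_profit = 93_000_000    # earnings per day, first quarter

# How many days of profit would cover every fine ever paid?
days_of_profit_to_cover_fines = total_fines / daily_profit
print(round(days_of_profit_to_cover_fines, 1))  # roughly a week of profit
```

Seen this way, decades of fines amount to less than a week of earnings, which goes a long way toward explaining the risk tolerance on display.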
BUT–BP’s executive management may have failed to consider a very pertinent and serious risk–stock price devaluation. The price of BP stock has plummeted since the Gulf oil spill. Perhaps it is thus time for BP’s executive management team to rethink its risk management strategy.
So what does all this have to do with information security? As I have said before, if an organization has little or no enterprise governance, it is nearly impossible for an information security program to achieve high levels of governance. If BP’s executive management tolerates the amount of risk that it does, and if safety and the welfare of the public are of little or no consideration within executive ranks, BP’s CISO, whoever that may be, must not be getting far in managing information security-related risk. Support from executive management is critical in obtaining the authority and resources needed to manage security-related risk to a truly acceptable level. I wish the BP CISO lots of luck, but it might be a good time for this person to do a résumé update.
A little over a decade ago the option to conduct banking and financial transactions over the Internet started to become available to customers. Security experts cautioned that potentially serious security risks permeated such electronic transactions, but banks, merchants and their customers “took the plunge and drank the Kool-Aid.” I do not have any statistics concerning the number of users who regularly conduct Internet-based banking and other financial transactions, but I am confident that it is extremely high. At the same time, naiveté concerning security risks in such transactions is also incredibly high. Read more…
Last week I did something that I do not usually do any more–I took a certification exam, the exam for SANS GIAC Security Leadership Certification (GSLC). I had to not only take this exam, but also obtain a high score to continue in my role as SANS instructor for this course. I’d like to share some of my impressions with you.
First, SANS offers the advantage of allowing certification candidates to schedule the date and location of the exams they must take. This is extremely advantageous to examinees, as it allows them to take an exam when they are ready to do so–but with a few constraints, too. For example, those who sign up for the GSLC exam when they enroll in the GSLC course have four months from the time they took the course to take the exam. I really do not like mass exams that must be taken only on certain dates and times and at limited locations–especially as I grow older. So I very much appreciated being allowed to schedule the exam on the date of my choice. Read more…
In this, the last blog entry in the current series on pen testing, I’d like you to consider with me how an organization that needs a pen test performed should choose a service provider. This decision is extremely important because not everyone who claims to be a proficient pen tester genuinely fits that bill. It is all too easy to hire an individual or consultancy that is incompetent and/or incomplete in its testing. Additionally, not every service provider stays within the rules of engagement. Some get carried away, and in the process cause disruption and even damage. Read more…