Archive

Archive for August, 2009

Presidential Control over the Internet?

I just read an extremely interesting Web posting at http://news.cnet.com/8301-13578_3-10320096-38.html. A bill (S.773) in the US Senate would give the US President the ability to take control of the Internet if a cybersecurity emergency were to occur. Specifically, the bill proposes that the President be given the authority to declare a cybersecurity emergency and to do whatever may be necessary to deal with it. The President could, for example, order that commercial-sector computers be disconnected from the Internet.

As one would expect, this bill has already understandably elicited negative reactions from civil libertarians as well as Internet interest groups and organizations. The idea of the US government, which has a “big brother” image among a fair percentage of the US population, having the authority to disrupt commercial entities’ Internet connections has some fairly frightening potential repercussions. Think, for example, of what happened when the well-known distributed denial of service (DDoS) attacks against companies such as eTrade, eBay, ZDNet, and Amazon occurred in February of 2000. Loss estimates went as high as $20 million, even though the disruption lasted no more than a few hours. Think of the potential damage to the US economy if the commercial sector’s connections to the Internet were severed for, say, a day or two because the President declared an emergency.

Ironically, the US government (through DARPA) created the Internet, and yet this entity to which it gave birth has turned against its creator in the form of one attack after another against US government computers over decades. To say that the government is winning the war against cyberattackers is laughable; to say that the gap between risk and mitigated risk in government computing environments has been unacceptably large and growing every year is, in contrast, completely credible. The US government is somehow going to have to come to grips with the real nature of the Internet and the risks that the Internet creates. In a way, S.773 seems to show that at least someone in the US government is trying to get a grip on these issues. Bedlam and anarchy, best friends to cyberattackers, characterize the Internet. Having the ability to take control of the Internet under extreme circumstances is clearly in the best interest of the US; it could, for instance, help thwart massive DDoS attacks designed to render US government computers and networks inoperable.

BUT—because the Internet is such a diverse entity, is it really possible for one person or organization to achieve the level of control necessary to stave off a cybersecurity catastrophe? I honestly doubt it. In my mind this is a much more fundamental issue than the idea of the US government having the ability to disconnect the private sector from the Internet. So even if S.773 passes in the Senate, a similar version of this bill is drafted and passes in the House, and ultimately President Obama signs it, I honestly do not believe that the legislation would make as much difference as those opposed to S.773 would have us believe.

And just one more thing—some individuals who praised the PATRIOT Act are now crying bloody murder over the proposed provisions of S.773. This makes absolutely no sense to me. The PATRIOT Act gives law enforcement an incredible amount of power, more than ever before in the US. All S.773 would do is give the President the ability to declare and react to a cybersecurity emergency. But given that a large percentage of the US population is functionally illiterate and doesn’t even know that the US Bill of Rights guarantees freedom of speech and religion or when the American Civil War was fought, I guess the inconsistency in the reactions to the PATRIOT Act and S.773 is understandable.

Categories: Uncategorized Tags:

The Windows 7 XP Virtual Machine: A Step in the Wrong Direction

In the past I’ve incurred Microsoft’s wrath plenty of times. A decade ago, for example, I was highly critical of the security (or lack thereof) in Windows NT. Microsoft’s attempts to convince the public that NT security was strong and that only people prone to exaggeration pointed out major security problems in that operating system were often aimed at individuals such as myself. Then several years later Microsoft initiated its Trustworthy Computing Initiative (TCI), something that greatly changed my view of Microsoft’s willingness and ability to provide adequate security in its products. And if you have been reading my blogs over the past few years, you’ll have seen that more than a few times I have given Microsoft kudos for the substantially improved security in its products that resulted from the TCI.

But much of this is going to change with Windows 7. As I have said before, I am eagerly awaiting the release of what I am sure will be a superb operating system. Let’s face it—Vista was a failure, an operating system that just didn’t hit the mark, even though it offered some new and potentially very worthwhile security-related features. The main problem with Windows 7 from a security perspective is that it will have a built-in XP virtual machine, one that provides XP-Windows 7 compatibility. In “XP mode,” special processes run completely independently of mainstream processes. Because of this independence, these XP-mode processes will not be affected by the normal security-related Group Policy Objects (GPOs) and hotfixes that are installed. System administrators will thus have to set up separate GPOs for XP mode as well as install separate patches if this mode is to run securely. Instead of having to install and update one copy of anti-virus software, system administrators will have to install two. The same applies to the Windows Firewall and many other security-related features and functions.
Honestly, how many system administrators will be sufficiently motivated to expend double the effort needed to secure a system when they often expend little or no effort on security at all?

In the same way that a single torpedo can sink an entire ship, an undefended XP virtual machine can and will serve as the unauthorized entry point for intruders into a Windows 7 machine. Furthermore, the fact that it will have to be separately configured and maintained for security greatly increases the likelihood that it will be wide open to perpetrators. The Conficker worm has shown just how many systems are still not patched against a vulnerability that first surfaced in October 2008. If people do not install a patch that has been available for such a long time, what do you think the chances are that they will try to secure a default virtual machine that has no built-in security?

Over the years, millions of virus, worm, and Trojan infections of Windows systems and an untold number of break-ins into these systems have occurred. Compromised Windows systems all over the Internet are routinely used to send spam and to participate in distributed denial of service (DDoS) attacks. Microsoft has not been sitting idly by—this vendor has over the years implemented numerous functions and features designed to substantially tighten the security of Windows systems. But I fear that many of the gains Microsoft has made will be negated in Windows 7 by the huge vulnerability that the XP virtual machine constitutes. The ball is clearly in Microsoft’s court—hopefully Microsoft will wake up to the severity of this situation and do something appropriate about it.

Categories: Uncategorized Tags:

US Department of Agriculture Chooses Internet Explorer

This morning I encountered a news item that caused me to do a double take. The US Department of Agriculture (DoA) announced that it will allow only the Internet Explorer (IE) browser to be used. Five or six years ago, this would have been virtual suicide. This snippet from a paper that I published in Computer Fraud and Security five years ago sums up the prevailing attitude at that time:

“The Web is still very much a dangerous place, now to a large degree because of a myriad of vulnerabilities in Web browsers, particularly in Microsoft’s Internet Explorer (IE). The average number of announced vulnerabilities in IE per month is virtually unparalleled, with nearly three per month in IE6 over the last two years, according to Secunia. Perhaps worse yet, of the announced IE vulnerabilities, 14 percent have been rated as extremely critical and 34 percent have been rated highly critical. Although IE is currently the most widely used Web browser, the regular stream of Microsoft and other bulletins describing yet more IE security vulnerabilities and media accounts of real-life incidents in which IE vulnerabilities have been exploited have hurt the popularity of IE considerably. Attacks on systems in which IE vulnerabilities are exploited are commonplace and are growing at a rapid pace.”

Then Microsoft “got with it” and started to improve security in IE. Recent versions such as IE 7 have had some impressive security features and capabilities, some of the most notable of which include:

• IE 7 Protected Mode, a special mode that helps limit the damage from software vulnerabilities in browser extensions by preventing them from being used to install malicious software or change system files without a user’s knowledge or consent.

• ActiveX Opt-in, a function that disables all ActiveX controls that are not explicitly allowed by the user.

• Cross-site scripting attack protection, which provides obstacles that help limit the ability of malicious Web sites to exploit cross-site scripting vulnerabilities in other Web sites.

• A phishing filter that compares the addresses of Web sites that a user attempts to visit against a list of reported legitimate sites stored on the user’s computer. It also analyzes the Web sites that users visit, checking them for characteristics common to phishing sites, and sends the address of each visited site to a Microsoft on-line service that checks the site against a constantly updated list of known phishing sites.

• User Interface Privilege Isolation (UIPI), a function that keeps lower-integrity processes from reaching higher-integrity processes.
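To make the phishing filter’s two-stage design concrete, here is a deliberately simplified sketch in Python. It is not Microsoft’s actual data or algorithm—the site lists, the heuristics, and every name in it (`check_url`, `looks_suspicious`, the `.test` hostnames) are invented for illustration—but it shows the general shape described above: a local list of known-legitimate sites is consulted first, then a (here simulated) online blocklist of reported phishing sites, with cheap local heuristics as a fallback.

```python
# Hypothetical two-stage phishing check, loosely modeled on the behavior
# described above. All lists and heuristics are made up for illustration.

from urllib.parse import urlparse

# Stage 1: a small local list of known-legitimate sites (shipped with the browser).
LOCAL_LEGITIMATE = {"example-bank.com", "example-shop.com"}

# Stage 2: a simulated online service listing reported phishing sites.
REPORTED_PHISHING = {"examp1e-bank.com"}

def looks_suspicious(url: str) -> bool:
    """Cheap local heuristics common to phishing URLs (illustrative only)."""
    host = urlparse(url).hostname or ""
    return (
        "@" in url                # userinfo tricks like http://bank.com@evil.test
        or host.count(".") > 3    # deeply nested lookalike subdomains
    )

def check_url(url: str) -> str:
    """Return 'allow', 'block', or 'warn' for a URL the user tries to visit."""
    host = urlparse(url).hostname or ""
    if host in LOCAL_LEGITIMATE:
        return "allow"            # known-good: no online lookup needed
    if host in REPORTED_PHISHING:
        return "block"            # reported phishing site
    if looks_suspicious(url):
        return "warn"             # heuristics fired, but site not yet reported
    return "allow"
```

The real filter reportedly did far more (reputation services, user reporting, caching); the point here is only the layered local-then-online lookup order, which avoids sending every known-good address to the online service.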

IE 8 offers several additional security features and capabilities, such as:

• Improvements in Protected Mode that allow medium-integrity applications to access low-integrity cookies without user intervention, as well as a new capability for users to control browser behavior even when the browser is started by a medium-integrity process.

• New RSS functions, including enabling the Windows RSS Platform to perform authentication without user involvement, and assigning each feed item an effective ID based on its hash value so that the read/unread status of an item stored on multiple computers can be checked and, if necessary, synchronized.

• Protection against clickjacking.

• Improved protection against cross-site scripting attacks.
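The RSS hash-ID idea in the list above is worth a short sketch. The code below is an assumption-laden illustration, not the Windows RSS Platform’s actual algorithm: the field names, the SHA-256 choice, and the helper names (`effective_id`, `sync_read_state`) are all mine. The point is simply that hashing a feed item’s identifying content yields the same ID on every computer, so read/unread state can be merged without the machines agreeing on anything beyond the item content itself.

```python
# Illustrative sketch (not Microsoft's actual algorithm): derive a stable
# "effective ID" for each feed item by hashing its content, then use the
# IDs to synchronize read/unread state across machines.

import hashlib

def effective_id(item: dict) -> str:
    """Hash the fields that identify an item; the same item hashes identically everywhere."""
    material = "\x1f".join(
        (item.get("title", ""), item.get("link", ""), item.get("pubdate", ""))
    )
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

def sync_read_state(read_on_a: set, read_on_b: set) -> set:
    """An item marked read on either machine is considered read everywhere."""
    return read_on_a | read_on_b

# The same item fetched independently on two computers gets the same ID,
# so its read status can be reconciled by comparing ID sets.
item = {"title": "Post", "link": "http://example.test/1", "pubdate": "2009-08-01"}
assert effective_id(item) == effective_id(dict(item))
```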

The advantages resulting from the new and effective security features and functions in recent versions of IE are lamentably offset to a large degree by the large number of vulnerabilities that are still being found in this browser. However, IE cannot be singled out as the browser with the most vulnerabilities any more. According to numerous studies, Firefox has approximately the same number of vulnerabilities, and Chrome, while relatively new, has also had more than its fair share of vulnerabilities.

I’m not endorsing any particular browser. What I am trying to say is that five years ago, if you used IE, you certainly did not choose security as one of your more important criteria. But IE has gotten better as far as its security capabilities go, and given that IE installations across an organization can be managed from a single point, there are also some practical advantages to using this browser. So despite the fact that the DoA will probably never win any awards for excellence in information security, this department at least made a very justifiable decision when it standardized on the IE browser.

Categories: Uncategorized Tags:

An Update on the Conficker Worm

Nearly six months ago I wrote a blog entry on the Conficker worm. At the time, Conficker D had just surfaced. Although similar in many ways to Conficker A, B and C, this version added several new troublesome functions:

• A new custom peer-to-peer protocol used to scan for infected hosts and push or pull (depending on which is appropriate) the latest version of its code.

• Blocking of DNS lookups that would otherwise allow an infected host to connect to and download security-related updates (e.g., anti-virus updates and Windows Updates).

• Disabling of Windows Safe Mode, anti-virus software, and update mechanisms in security-related products.

• Creation of a botnet by installing bots in infected systems. The botnet is programmed to launch distributed denial of service (DDoS) attacks on demand.
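The DNS-blocking behavior in the list above can be sketched quickly. The substring list below is invented for illustration—it is not Conficker’s actual list, and the hook point in the real worm was in the Windows name-resolution path rather than a Python function—but it shows why the technique is so effective at keeping an infected host away from help: a single case-insensitive substring match silently fails every lookup for an update or anti-virus site.

```python
# Hedged illustration of the DNS-lookup blocking described above. The worm
# reportedly filtered name lookups containing security-related substrings so
# that an infected host could not fetch updates. This list is invented.

SECURITY_SUBSTRINGS = ("windowsupdate", "symantec", "mcafee", "kaspersky", "avast")

def lookup_allowed(hostname: str) -> bool:
    """Return False for lookups the (hypothetical) hook would silently fail."""
    name = hostname.lower()
    return not any(s in name for s in SECURITY_SUBSTRINGS)
```

Note that the defense-side lesson is the same one drawn in this post: a host that cannot resolve its vendor’s update site will stay unpatched indefinitely, which is exactly how a six-month-old vulnerability keeps yielding new infections.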

Conficker’s authors were not through yet, however; version E surfaced only a few weeks after Conficker D was released. Version E is very similar to Conficker D, except that version E also sends spam and pops up scareware.

What I find so interesting about the evolution of this worm is that a version designed to make money for the authors did not surface until almost six months after this worm was first released. Clearly, profit was not originally a motive for creating and releasing it, something that very much bucks a pronounced trend in the malware arena. There is considerable money to be made from Conficker E—spam is quite profitable for those who learn to do it right, and because “a sucker is born every minute,” many gullible users have already fallen prey to Conficker’s scareware tactics.

No new version of Conficker has emerged in nearly five months. What are its authors doing, and what are they planning to do in the future? According to one estimate, this worm is still infecting 90,000 new Windows systems every day. Furthermore, the authors are almost certainly making at least some money from the functionality of the latest version. Are the authors content, or have they simply become uninterested in continuing their sordid activity? Are they “on the lam” because they fear being caught by law enforcement? I fear that the answer to each of these questions is “no.” Conficker’s authors may just be “taking the summer off.” Or they may well be planning the next version, one so potent in reproducing itself and so malicious that it will constitute an Internet-wide pandemic.

One thing is for sure—Conficker is no “child’s toy.” Its code is exceptionally well written and the worm runs well on systems that it infects. Additionally, many Windows systems are not patched and are poorly configured with respect to security settings. Accordingly, whether or not a pandemic version of Conficker ever emerges, it is safe to bet that Conficker will be around a very long time.

Categories: Uncategorized Tags:

Does Heartland Blame its QSAs?

Rich Mogull’s stern admonitions (http://securosis.com/blog/an-open-letter-to-robert-carr-ceo-of-heartland-payment-systems/) to Robert Carr, the CEO of Heartland Payment Systems, after Carr’s interview by CSO Online (http://www.csoonline.com/article/499527/Heartland_CEO_on_Data_Breach_QSAs_Let_Us_Down) are shrill and overstated. Mogull took several paragraphs to refute something attributed to Carr that never appeared in the Carr interview: that Carr blames his QSAs for his breach.

In the article, Carr never blamed his QSAs for the breach. Carr does say, “The audits done by our QSAs were of no value whatsoever,” and he later says, “The false reports we got for 6 years (from our QSAs), we have no recourse. No grounds for litigation.” While these statements might imply that the QSAs were to blame, they are a long way from making that assertion.

Carr expresses frustration that nothing his QSAs did in any way prepared Heartland to discover or defend against the attack. An aggrieved shareholder might have said the same thing about financial audits at Enron; why didn’t the auditors tell us about the massive fraud? Or how about Lehman Brothers shareholders’ plight? Who should have told them about the impending crash? Carr’s frustration seems to be that when he looked at the contract with his QSA, he realized that the QSA had no obligation to tell Heartland about the possible existence of vulnerabilities that might be exploited.

It is ironic that Heartland will rely on those very same QSA reports as a key part of their defense against those who are suing them over the incident. Claimants will say Heartland “knew or should have known about the existence of an exploitable vulnerability.” Obviously, Carr intends to argue that Heartland relied absolutely on the QSA reports and was shocked, SHOCKED, when the breach was discovered only moments after their most recent PCI clean bill of health.

The weakness of Mogull’s argument is obvious from Carr’s and Heartland’s new commitment to leading edge security, focused on “data in transit” as well as major new initiatives in data loss prevention (DLP) that go well beyond the scope of the PCI-DSS. If Carr really thought his QSAs and not Heartland were to blame for the breach – as Mogull claims – why would Carr now embark on these initiatives? Would Carr not claim, “look at our PCI report and see that we are clearly doing enough to prevent breaches to cardholder data,” if Mogull is right?

Mogull’s patronizing letter goes on and on, pontificating about Carr’s role as CEO and explaining issues of accountability, roles, and independence with which Carr obviously needs no help.

Mogull criticizes Carr for “rely(ing) completely on an annual external assessment to define the whole security posture of his organization.” This is an outrageous accusation against Heartland; one might infer from it that the company had no security staff or that they were dolts. The fact is that Heartland did not “rely completely” on its QSAs, and it can prove it didn’t. Of course, we do not know the contents of the confidential report given to Heartland by its QSAs, and we do not know whether the QSAs gave them a soothing reassurance that all was OK. We do know that Carr thought he paid his QSAs for more, and only after he read the fine print did he discover that “more” was a pipe dream.

All of this lather shifts attention away from PCI-DSS. PCI-SSC and the card brands might consider a reduced system of fines for cases where the processor or merchant passed their PCI assessment but still experienced a breach. As we’ve seen with Heartland and CardSystems Solutions, having a breach is no fun and costs a lot. Adding punitive PCI fines seems to be piling on, a substantial infraction in the NFL, yielding a 15-yard penalty and automatic first down. And it would really help if PCI-SSC would stop saying “We’ve never seen anyone who was breached that was PCI compliant.” This is an absurdity that perpetuates the fiction that (a) having a PCI certificate equates to being secure, and (b) following PCI is enough.

Bottom line, rants like Mogull’s do not help because they unfairly characterize the Heartlands of the world as trying to weasel out of accountability for security. We should instead be asking how companies like Heartland can better avoid breaches and provide them incentives for a positive track record, something that may be more effective than penalties and fines at this point. Like that old joke about 5,000 lawyers at the bottom of the ocean, PCI-DSS is a good start. But PCI-DSS is not the final word and PCI-SSC knows that. They have put out a request for comments and suggestions for improvements to version 1.2.

Categories: Uncategorized Tags:

False Information about the Cars.Gov Web Site

Apparently there is not a sufficient amount of ferment and discontent among those who are unhappy with the Obama Administration. Glenn Beck, the Fox News analyst, recently announced that the US government has declared the right to “seize” and “own” (declare as US government property) any computer that connects to the Cars.Gov Web site, the site that the government has set up in connection with the “Cash for Clunkers” program.

I seldom get caught up in all the Republican-Democrat “spitting wars,” but the idea of the government “owning” someone’s computer if that person connects to Cars.Gov just seemed so implausible that I asked someone who works as a cybersecurity expert in high places in the US government what the truth about this site was. I received the following reply:

>”I checked the site and found their privacy policy:
>http://www.dot.gov/privacy.html
>
>I also found more info on Glenn Beck:
>
>http://www.eff.org/deeplinks/2009/08/cars-gov-terms-service
>
>Which states:
>
>Clicking “continue” on a poorly worded Terms of Service on a government site will not give the government the ability to “tap into your system… any time they want.” The seizure of the personal and private information stored on your computer through a one-sided click-through terms of service “is not conscionable,” as lawyers say, and would not be enforceable even if the cars.gov website was capable of doing it, which we seriously doubt.
>Moreover, the law has long forbidden the government from requiring you to give up unrelated constitutional rights (here the 4th Amendment right to be free from search and seizure) as a condition of receiving discretionary government benefits like participation in the Cars for Clunkers program.”

I sincerely thank my friend (who shall remain anonymous) for this clarification. What I find so interesting is that in the midst of all this clamor, anything, regardless of whether it is true, can be turned into an unfounded accusation that spreads widely not only all over the Internet but also over television; cybersecurity-related falsehoods and embellishments are no exception. Worse yet, there is no indication whatsoever that Mr. Beck or his staff exerted any kind of effort to delve into the truth of Mr. Beck’s so-called “discovery.”

Internet hoaxes surface all the time. Fortunately, we have sites such as snopes.com to debunk these hoaxes. If you go to snopes.com and other similar sites, you’ll find that they have declared Mr. Beck’s accusations to be a hoax. The bottom line is that there appears to be no reason to be fearful about connecting to and interacting with the Cars.Gov Web site.

Categories: Uncategorized Tags:

Dangerous Internet Services and Web Site Access in the US Department of Defense

Much to my amazement, last week I read a news item alleging that the US Army had originally banned Twitter, but that it had recently reversed its stance. This brought to mind a range of popular Internet services, including other social networking services—Facebook and MySpace—as well as chat, instant messaging (IM), and more, and how they are often blindly tolerated in critical business and operational contexts despite the many security-related liabilities associated with such sites and services.

I have previously pointed out the security-related dangers of social networking. Not surprisingly, Twitter has recently experienced extended outages due to denial of service attacks. But the risk of denial of Twitter services pales in comparison to other risks. In the case of the military, national defense secrets are potentially at stake. Data leakage through social networking sites is rapidly escalating. Individuals who connect to such sites may overlook prohibitions against disclosing classified information because they are so engrossed in interacting with others that they become open and candid with them. Having something interesting to share with friends and social affiliates may result in unintentionally leaking secrets.

But social networking sites are in reality only part of the overall security risk problem that so many popular network-based services create. For example, IM poses another set of very serious risks, ones that parallel the risks associated with participating in social networking sites. IM sessions also provide ample opportunity to leak secrets. The same is true of chat sessions.

The US Department of Defense (DoD) has not fared well against attacks on its systems. Consider, for example, the highly successful “Titan Rain” attacks against so many of its systems, which are widely believed to have originated in the People’s Republic of China. The DoD has since been through multiple, sustained attacks, allegedly from the same source, and now by all appearances more recently also from North Korea. There is no end in sight. So now what happens? The US Army has said that accessing and interacting with a number of vulnerability- and exposure-prone Web sites and services from its networks is just fine.

Has the US Army taken leave of its senses? Apparently so. At the same time, however, there is some comfort. Just two days ago the US Marine Corps banned the use of social networking services such as Twitter and Facebook. But how could the Army decide on one policy while the Marine Corps decided on one that is entirely different? I have no answer, but by all appearances there is a leadership vacuum in high places within the Department of Defense when it comes to information security policy. Defense Secretary Robert Gates should take notice and act accordingly.

Categories: Uncategorized Tags: