
Archive for December, 2009

What We Learned About Security in 2009

2009 was a tumultuous year for the country, the economy, and for many information security programs and professionals.  Although Forrester’s Andrew Jaquith (Twitter @arj) surveyed security practitioners in March and concluded that three out of four programs had not been cut, my own experience talking with colleagues and clients over the year has been different.  Many organizations have cut back severely, decided not to fill open positions, or limited financial resources that might otherwise have gone to information security functions.  There’s nothing wrong with this; organizations and economies ebb and flow, and practitioners and leaders in information security need to be ready for the inevitable cutbacks, just as they prepare for and advocate the important new initiatives.

But we did learn something very important about information security in 2009.  How firms and their senior leaders internalize risk and make decisions about risk was laid open to public view in 2009 in a way that has never before been possible.  When discussing risk management programs in the past, I’ve always pointed to the financial industry, with its chief risk officers, chief investment officers (the other “CIO”), and generally sober and serious approach to all things risk including audit and compliance, as the paradigm for risk management.  But in 2009 we found out that was not necessarily true.  Senior managers throughout the financial industry made risky decisions, “bet the farm,” and otherwise increased their firms’ exposure far beyond the levels of risk typically underwritten by information security departments, and did so in the face of clear evidence (clear now with 20/20 hindsight, I admit) that a crushing downturn was coming.  Several senior leaders are no longer in their positions, in part because of the fallout from these decisions and a general leadership style that ignored or winked at this risky orientation. And all of this against a backdrop of what have been argued to be unjustifiable compensation packages, given the poor performance of many financial institutions (car companies, too) and the taxpayer bailouts that resulted.

What wisdom should we take from this?  I believe information security professionals have been given some of the best data points yet available about how firms and senior executives are likely to internalize risk that affects their organizations and their organizations’ major stakeholders.  This should influence how we communicate about information security risks and the other risks inherent in the information technology function.  Many senior executives were paid for taking too much risk, and paid very, very well for it.  The upshot of the mortgage meltdown, credit crisis, and resulting economic malaise is that unless organizations change dramatically, a risk-based approach to persuading business leaders of the advisability of implementing new information security controls and tools is less relevant and less likely to succeed than ever before. In short, it’s not enough to frighten them with the implications of the big breach or the potential expense of a forced remedial compliance effort after some other security incident.  When it comes to security and other technology risks, which are far more esoteric and difficult to estimate than the kinds of financial risks that brought down some of Wall Street’s biggest names, senior leaders are likely to be even more freewheeling with corporate resources than ever before.

I reiterate that this conclusion depends on a general continuation of the trend toward more aggressive risk-taking with company resources. If something happens to change the culture of how organizations view risk and accept risk on behalf of the firm, its shareholders, and other major stakeholder groups, this could turn out to be an incorrect conclusion.  However, there is no evidence whatsoever that the incentives for taking excessive risk have lessened, nor do we see increases in the penalties and disincentives for taking too much risk or for bearing the inevitable losses that come with it.  No, it will become easier, not harder, for managers to say “we can’t afford that level of security,” or “we’ll run noncompliant for another year and see what happens,” after you present the implications of not being compliant with PCI again this year.  There is simply nothing to counterbalance the tendency for organizations to take too much risk and let others underwrite the losses.

In fact, what used to be “career-limiting decisions” in the vein of accepting too much risk are now clearly in the realm of “moral hazard.” Top executives make so much money today that if something bad happens on their watch, they simply retire and go into consulting.  Or maybe someone will bail them out, too.  The millions they’ve been paid in cash and options will more than easily sustain a comfortable retirement, even for the yachting crowd.  And as for those “clawbacks” of excessive compensation we’ve heard about, the inevitable litigation will likely be almost as painful as the losses themselves, so we won’t see many of those either.

As a profession, information security must get better at defining and quantifying the risks inherent in not attending to information risk management. Simultaneously, we must continue to shift the emphasis from a risk-based justification for info security to a revenue-based justification. If the 1990s were years of “information security enabling the business,” then the decade just completed has been about learning that enablement wasn’t enough.  And the decade to come will be the one in which information security managers will be forced to take their place among those who generate revenue for the business and in so doing closely align information security with the products, services and customers of the company.

I’ve always advocated that information security managers keep a fresh copy of their resume at home. This is less humorous than it used to be.  Information security managers are increasingly the “designated scapegoats” for the kinds of breaches and losses that occur all too frequently in IT today.  But if there continue to be no real barriers to the moral hazard of accepting too much risk on behalf of shareholders, and senior executives continue to be paid handsomely for short-term revenue, profit, and stock price objectives, then selling security based on risk alone will become “old hat” this year.

Here’s to a new year filled with new assurances that the vital information we manage is well protected against the increasing threats to it.  With that I know we’ll all have a very Happy New Year in 2010.


Smartphone Attacks Are Increasing

Smartphones have become increasingly sophisticated over the years, and not surprisingly their functionality is built on top of operating systems that look more and more like the mainstream operating systems in common use today. Unfortunately, more functionality combined with more complete operating systems has resulted in a growing number of vulnerabilities. And the presence of these vulnerabilities has increasingly made cell phones the target of cyberattacks.

Last month users of jailbroken* iPhones in the Netherlands started seeing pop-up messages informing them that their phones had been compromised and that malware had been installed on them. The messages directed the users to a Web site where they could pay a five-dollar ransom to get the malicious code deleted from their phones. The author of this code had found that many of the jailbroken iPhones had the secure shell (SSH) program installed to provide secure network connectivity and that the root account on these phones still had its default password (“alpine”). He scanned the IP address range assigned to these phones for ones on which SSH was running and then used this access to install the malicious code on them. Fortunately, the perpetrator was caught and the money was returned to the victims.

Not long afterwards an Australian man developed a worm named “iKee.A” that exploited the same vulnerability in jailbroken iPhones. Instead of demanding a ransom payment, this worm simply installed wallpaper showing a picture of the 1980s pop star Rick Astley on the iPhones it infected and then attempted to infect other iPhones. About 21,000 of these phones were infected within one week of the worm’s release. Sadly, the author of this malware was not arrested; instead he was offered a job with a software company in Australia.

Shortly after iKee.A surfaced, a new version of this malware started infecting iPhones. Named “iKee.B” (and nicknamed “duh”), this variant of iKee.A installed bots on the iPhones it infected. The perpetrator’s intention appears to have been to build a massive botnet within Europe, although malware experts point out that the worm’s ability to spread is rather limited due to a number of factors.

New malware that targets smartphones is really nothing new. Anyone familiar with the proceedings of the recent Black Hat conference in Las Vegas will know not only that exploit software targeting iPhones in particular is becoming more prevalent, but also that the perpetrator community’s interest in exploiting smartphone vulnerabilities is increasing dramatically.

Smartphones are not very secure by default. They are about as secure out of the box as Windows NT systems were in the mid-1990s. They can be made much more secure with a few changes to configuration settings and default passwords, as well as the installation of security software, which is now widely available at a reasonable price. But somehow smartphone users tend to completely overlook the need for security in these devices, leaving them wide open to exploitation by malicious code such as that described earlier in this posting. Let’s face it—if users do not secure their own computers, why would they give any thought to securing their smartphones?
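For readers who administer (or own) jailbroken iPhones, one quick sanity check is to verify that the device no longer accepts the well-known default credential. Here is a minimal sketch in Python using the third-party paramiko SSH library; the device address below is a placeholder, and you should of course only probe devices you own or manage.

# check_default_ssh.py -- flag a jailbroken device that still accepts the
# default root password ("alpine"). Requires the paramiko library.
import paramiko

DEVICE = "192.168.1.50"              # placeholder: your phone's Wi-Fi address
DEFAULT_USER, DEFAULT_PASS = "root", "alpine"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(DEVICE, username=DEFAULT_USER,
                   password=DEFAULT_PASS, timeout=5)
    print("WARNING: device still accepts the default password -- change it now")
except paramiko.AuthenticationException:
    print("Good: the default credential was rejected")
except Exception as exc:
    print("Could not reach SSH on the device:", exc)
finally:
    client.close()

On the device itself, running passwd for both the root and mobile accounts (or simply removing OpenSSH if it is not needed) closes this particular hole.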

Big trouble is brewing. Stay tuned—it’s going to get ugly.

_____
* – Wikipedia defines jailbreaking as “a process that allows iPhone and iPod Touch users to run unofficial code on their devices bypassing Apple’s official distribution mechanism.”


Employee Surveillance Hysteria and Other Musings

Today’s news includes a story about the ORCA card in the greater Seattle area. It seems that the new unified transit card being made available to area residents includes a provision giving employers access to transit ride information for employees whose transit card purchases they subsidize. The annual benefit to a subsidized employee is nearly $1000. Not surprisingly, however, many people are whining about the perceived breach of “privacy rights” and the egregious behavior of employers who “snoop” into their employees’ transit ride information.

Some employers have stated that they intend to access the transit ride data — which includes dates, times, and other information about rides — only when there is a need to investigate after receiving other information about potential abuse. For example, a person claims five hours of overtime on a given day, but transit ride information reveals that only a normal shift was worked. Or a person calls in sick, but transit ride information reveals that they traveled to see a ballgame that day. Other employers, like Boeing, have stated that they view transit ride information in a more “hands off” manner and do not plan to access it even if it might be relevant to investigating fraud. Of course, there will always be people in our country who are seriously confused about ethics, rights, and legalities. It is, after all, a complicated world we live in. However, this issue reminds me of something that happened to me twenty-some years ago and of a continuing lesson for information security professionals.

I once worked for a company that installed a new card reader system for door control, which required everyone to carry their own picture badge. There was a concern that it was too easy for unauthorized people to enter the premises of a company that at the time was growing rapidly and would soon have six or seven buildings on its main campus plus quite a number of smaller sales offices around the country. Automatic door control also allowed some doors to become unmanned rather than requiring 24/7 guard staffing and the attendant high cost. At the time of installation, there was great controversy about the potential for employer abuse of the door control system. People were moaning and whining about how the company was going to mine the door control data and catch or penalize them for a few minutes of tardiness or other such minuscule infractions.

In the several years I managed that system, there was not one single complaint of abuse of the door control data. In fact, to the best of my knowledge, the door control data was never accessed and used for anything other than (a) determining how an individual got through a particular entry door when their card had not been programmed for it (this usually meant they had used someone else’s card); and (b) determining whether someone was at work after independent suspicions of absenteeism or timecard fraud had been raised. On a number of occasions, door control data was used successfully to pursue disciplinary action against an employee who was committing attendance fraud. But looking back, I can think of no one who would now claim that having picture badges and automatic door control systems at this company’s many points of entry in any way infringed on employees’ privacy rights. In fact, most would have to admit that the system actually promoted efficiency and the free flow of traffic throughout the offices.

The ongoing lesson for security professionals here is that when implementing a system that might be used for intrusive surveillance, define an ironclad policy covering how the data will be collected, stored, and destroyed, along with all permissible uses of it. Communicate this policy clearly to all of those affected. Then walk the talk. Don’t use the data for any purpose other than that for which it is being collected. This also includes deleting the data when you know it will no longer be necessary. Ask yourself: have you ever been asked to conduct an attendance investigation using door control data covering events older than a few months? Probably not. If you’re holding door control data, I strongly urge you to delete everything older than, say, 90 or 180 days. At the same company we implemented ironclad data control policies in other areas, forcing the automated deletion of data after a certain aging threshold had been reached. This policy paid for itself time and time again when we proved that the data no longer existed after outside agencies — including those armed with subpoenas — demanded we produce it. Unless employee data is specifically required to be retained by law or regulation, there should be a policy covering its collection, storage, use, and destruction. And make sure you follow those rules.
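As a concrete illustration, if your door control system exports its logs to files, a small scheduled job along the following lines is all it takes to make the retention policy self-enforcing. The directory name and the 180-day threshold are assumptions for the sketch; substitute whatever your written policy specifies.

# purge_door_logs.py -- delete exported door control logs that have aged
# past the retention threshold. Paths and threshold are illustrative.
import os
import time

LOG_DIR = r"D:\door_control\exports"     # hypothetical export directory
RETENTION_DAYS = 180                     # per your written retention policy
cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60

for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    # anything last modified before the cutoff has aged past the policy
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        os.remove(path)
        print("purged", path)

Run from the Windows Task Scheduler or cron, a job like this turns “we delete after 180 days” from a promise into a routine.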

One other lesson I’m reminded of in this incident is that not all data use issues are the sole purview of the information security manager. I frequently see managers struggling to get control of controversial issues like detailed tracking of Web use, e-mail surveillance, cell phone and mobile surveillance, IM tracking, and so on. These are not — repeat NOT — information security issues. They are policies that should be defined, justified, and carried out based on the needs of the business; whether information security needs to be involved because of the tools chosen to enforce these policies is a totally separate matter. It’s not up to the information security guy — or gal — to decide whether certain religiously oriented websites should be accessible over the employer-owned network. That task should fall to someone in human resources according to the cultural needs of the company. All too often, information security tools are misused in a way that increases confusion and anxiety in the minds of employees, and the information security manager bears the blame.

Case in point: I once implemented a web tracking system at a major investment bank. Initially 10 or 15 categories of “inappropriate” sites were blocked, and on day one my phone began ringing. “Why can’t I access university research data?” “I can’t get to brewery sales information.” And so on. Within the space of about one week we found many cases where an ostensibly “inappropriate” category of information turned out to be necessary for business. In the end, the categories we stuck with were hate, pornography, and gambling; these sites were never needed for business. But we did have to tinker with the filtering system, because the word “sex” also appears in many contexts that are most decidedly not pornographic but are in fact necessary for business. I also learned from my UK colleagues that online betting on the ponies is not considered in any way inappropriate in much of UK culture, and so the restriction against gambling-related sites had to be fiddled with as well.
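The “sex” problem, by the way, is a classic substring-matching pitfall, and a toy example shows why the filter needed tinkering. The sketch below is purely illustrative (real filtering products use categories, scoring, and human review, not a single keyword); it simply contrasts naive substring matching with whole-word matching:

# keyword_filter_demo.py -- why naive substring blocking misfires.
import re

pages = [
    "Middlesex County economic development report",
    "University of Sussex admissions office",
    "Explicit adult content: sex pictures",        # the page actually targeted
]

for text in pages:
    naive = "sex" in text.lower()                         # substring match
    whole_word = bool(re.search(r"\bsex\b", text, re.I))  # word-boundary match
    print(naive, whole_word, "-", text)

The first two pages trip the naive filter but pass the word-boundary test; even so, whole-word matching still blocks legitimate health and biology content, which is why category lists and exception handling ended up being part of the answer.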

As an information security manager, it is important to be able to separate the tools we use from the policies we enforce. In a world of increasingly powerful tools such as data leakage prevention, it is very important to have pre-established policies, and methods to enforce those policies, well in advance of implementing the tools. If a tool can be used for intrusive “snooping,” then be prepared to show that such snooping never happened and to demonstrate conclusively that effective controls limit all potentially intrusive use of the tool to only those instances approved under the company’s policy.


Good luck, Howard

The news is out—after all the waiting and anticipation, Howard Schmidt was named the U.S. Cybersecurity Coordinator today. His name is not exactly new within the world of information security. He has served as an investigator for the U.S. Air Force Office of Special Investigations (AFOSI), chief security officer for Microsoft and for eBay, cybersecurity advisor in the George W. Bush administration, president of the Information Security Forum (ISF) and of the Information Systems Security Association (ISSA), and board member of the International Information Systems Security Certification Consortium, Inc. (ISC2).

I’ve known Howard for the better part of 20 years now. Strangely, I had few dealings with him when he was a member of AFOSI while I headed the Department of Energy’s response team, despite all the break-ins into Air Force systems from compromised DOE systems during Operations Desert Shield and Desert Storm. But I got to know him better once he was at Microsoft. From what I have seen of him, he is more than capable of being extremely successful in his new role.

My concern is with the position that he has filled. Melissa Hathaway functionally served in this position before the Obama Administration came into being, and she continued in the role for approximately seven months after President Obama took office. Insiders say that although Hathaway is not really a cybersecurity expert, she fulfilled her role unusually well. Working with the U.S. government is not, after all, an easy thing to do. Someone I know well worked for her and, during that time, sang her praises as both an exceptional manager and someone who knew how to get things done within the extremely difficult confines of the U.S. government. But instead of waiting to be named the U.S. cybersecurity czar, Hathaway quit out of reported frustration over the inner workings of the government.

Before Hathaway, Amit Yoran, formerly of Symantec, was appointed the cybersecurity czar for the Department of Homeland Security. Yoran’s appointment did not work out at all; he was a Washington outsider who got picked to pieces by experienced and deadly government bureaucrats. He barely lasted one year before resigning. Schmidt’s tour of duty with the George W. Bush administration did not fare much better; reports indicate that he was in a position in which he had a great deal of responsibility without a commensurate amount of authority.

So what might be different about Schmidt’s second tour of duty in the U.S. government? I fear that the answer is nothing. Schmidt’s title is impressive, but the fact that he must report to a deputy national security advisor indicates the actual importance of his position. Howard is a skillful person; perhaps he can work his way around many of the obstacles that will inevitably surround him. But sitting, from an organizational viewpoint, multiple levels down from the top will create the same hurdles that a chief security officer faces when positioned two levels below the chief information officer. Furthermore, the scuttlebutt is that the cybersecurity coordinator position within the Obama administration is not intended to be all that high a position. Schmidt may thus be badly overqualified for it.

So Howard—I wish you the best of luck. The cards are stacked against you, but then again, if anyone can succeed, you can and will!


Expectation of Privacy in Text Messaging?

Should an employer be able to read text messages that its employees have sent from devices and accounts that belong to the employer? The U.S. Supreme Court has agreed to hear an appeal of a federal appeals court ruling that the Ontario, California police department went beyond its rights when it obtained and read personal text messages that its officers had sent.

This case is not the first of its kind. In the early 1990s three employees of Epson in El Segundo, California were fired because of messages they sent to each other in which they boasted of having ignored or disobeyed their managers’ orders. The messages were shown to the managers, who then fired the employees for insubordination. The employees sued on the basis that they had not been informed that their messages were being read, and they won an out-of-court settlement. This case helped establish an important principle—that users of computing systems need to be informed if they are going to be monitored. Their expectation of privacy needs to be defeated.

Another case did not occur within the U.S. legal system, but rather within the U.S. Navy. At about the same time as the Epson case, an enlisted man broke into numerous Navy computing systems and was subsequently court-martialed for doing so. Many of the compromised systems were VMS systems that displayed a logon banner beginning with “Welcome to my world of VMS.” The defense claimed that the defendant had been welcomed into the computing systems; the military tribunal agreed and ruled in favor of the accused.

Cases such as the Epson and U.S. Navy ones helped accelerate a now widely accepted and implemented principle—to display warning banners when users connect to systems. These banners need to advise users that:

1. Their actions during interaction with the system that they have accessed will be monitored,

2. Continuing with the login process constitutes consent to be monitored, and

3. Unauthorized actions, including gaining access without permission, may result in punishment, including legal prosecution.

Warning banners are designed to defeat the expectation of privacy on the part of users of computing systems and networks as well as to obtain their informed consent. Doing so helps protect organizations against invasion of privacy and other civil lawsuits, and at the same time negates malicious users’ defense that they did not know that what they were doing was wrong.
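For what it’s worth, a banner along the following lines covers all three points. The wording is illustrative only; have your legal counsel approve whatever text you actually deploy.

WARNING: This is a [Company Name] information system provided for authorized business use only. All activity on this system may be monitored and recorded. By proceeding with the login process you consent to such monitoring. Unauthorized access or use is prohibited and may result in disciplinary action and/or civil and criminal prosecution.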

So what about the Ontario police case? The officers involved claim that they were “informally” told that personal use of the smart phones was allowed and that the text messages they sent would not be monitored. The city of Ontario denies this assertion, saying instead that its policy does not allow personal use of such equipment and that this policy is documented in writing and available to the officers.

The City of Ontario could have spared itself a lot of trouble and grief if someone there had realized that the smart phones used by the police are capable of displaying a warning banner whenever the phones are powered on. Displaying such a banner would not only have defeated the expectation of privacy on the part of the police officers, but would also have discouraged the police officers from sending personal text messages. What a pity, but hindsight is 20/20.

The Supreme Court’s ruling is bound to profoundly affect future rulings on cases of this nature. And the fact that a case of this nature has gone all the way to the Supreme Court shows that slowly but surely, issues with which we information security professionals deal are making their way to the forefront of the legal system as well as other important places.


Heartland Gets off the Hook

By now you have probably heard that last week US District Court Judge Anne Thompson granted a motion filed by Heartland Payment Systems to throw out a shareholder-initiated class-action lawsuit that followed what many call the worst data security breach ever. The plaintiffs claimed that the company made “false and/or misleading statements and failed to disclose material adverse facts about the company’s business, operations and prospects” and that the company’s cyber security measures were “inadequate and ineffective.” Heartland stock had plunged to a mere 20 percent of its value after news of the massive data security breach surfaced. In her ruling, the judge said that no evidence existed that senior management at Heartland was not paying suitable attention to security issues there.

Was Heartland senior management really not paying sufficient attention to security issues? I could make a case either way. On one hand, nobody at Heartland noticed all the intrusions and the infestation of malware for six months. In the commercial world, six months of failure to detect potentially catastrophic financial events of the magnitude Heartland was experiencing is inconceivable. This says to me that Heartland may have had an intrusion detection effort, but it was not at all effective, a strong (but not 100 percent certain) indication that at least part of its practice of security was more perfunctory than anything else. On the other hand, the fact that Heartland Payment Systems had passed a PCI-DSS audit not all that long before the massive data security breach would, in my judgment, show that the company had at least exercised due care with respect to its credit card-related security practices. We all know that the PCI-DSS standard prescribes “minimum security practices” more than anything else. Still, in the US there is a strong precedent for due care as a compelling defense argument, dating back at least to the 1940s and the United States v. Carroll Towing ruling (if not before). But I would also want to know whether information security issues had been regular agenda topics in Heartland’s senior management and board of directors meetings. If the answer were no, I’d be less inclined to agree with the view that Heartland was paying sufficient attention to information security issues.

Finally, the fact that the company’s stock value fell drastically once news of the breach reached the public should come as no surprise to anyone. This outcome has happened repeatedly in the past, and the trend is now well documented and analyzed in research studies at institutions such as the University of Maryland and SUNY-Albany (see http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VD0-4X1J73T-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1134142558&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b9d2800998af7ec68cb191b5b7d19187, for example). I seriously doubt that Heartland’s senior management and board were even vaguely aware of the effect of security breaches on stock value. There is a lesson to be learned here—if you are an information security manager, you should be educating your senior management about this relationship. It will be one of the best things you can do when it comes to making the “sale” for security.

So Heartland got off the hook, at least with respect to this particular lawsuit. I remain undecided concerning whether the judge’s ruling in the stockholders’ lawsuit was right. But this lawsuit was only one of many that are still pending. If I were a betting person I’d bet that Heartland will not fare quite as well in all of the upcoming lawsuits against it.


The New SQL Injection Attack

Just in time for the Holidays—a new, extremely sophisticated SQL injection attack that may have already infected up to 300,000 Web pages has been detected. Perpetrators are using SQL injection to push a malicious iframe (carrying the reference script src=hxxp://318x.com) into pages on compromised Web servers. (An iframe is an HTML structure that enables one HTML document to be embedded in another HTML page.) When a Windows user connects to one of these servers, this iframe redirects the user to a malicious Web site, www.318x .com, without the user’s knowledge. (Important note: please DO NOT CLICK ON THIS URL OR GO TO THIS SITE unless you want your system to become a victim of this attack!) It then runs a script that creates a new iframe that redirects the user’s browser to www.318x.com/a .htm (again, please DO NOT CLICK ON THIS URL OR GO TO THIS SITE), installs a second malicious iframe from aa1100.2288.org/htmlasp/dasp/alt.html, and then installs and executes a script named js.tongji.linezing.com/1358779/tongji.js (which is used for tracking where the victim system is located). This script creates a third malicious iframe that redirects the victim system’s browser to aa1100.2288.org/htmlasp/dasp/share.html, which loads and executes yet another script, js.tongji.linezing.com/1364067/tongji.js, and also determines the type of browser on the victim system. It then installs several more iframes that point to hidden scripts residing in the same directory. These scripts determine what version of Adobe Flash Player the victim system has and then attempt to exploit five vulnerabilities in the user’s system:
• A vulnerability in MDAC ADODB.connection ActiveX (see Microsoft Bulletin MS07-009)
• A vulnerability in Internet Explorer uninitialized memory (see Microsoft Bulletin MS09-002)
• A vulnerability in Microsoft video ActiveX (see Microsoft Bulletin MS09-032)
• Vulnerabilities in Microsoft Office Web Components (see Microsoft Bulletin MS09-043)
• An integer overflow vulnerability in Adobe Flash Player (see CVE-2007-0071)
If any of these vulnerabilities is present, a Trojan program, Backdoor.Win32.croo, is installed on the victim system. This piece of malware, which is a variant of the Buzus Trojan, steals banking information including, among other things, account names, passwords, and PINs. Backdoor.Win32.croo also has rootkit functionality, enabling it to avoid detection by anti-virus and other software on infected systems. The Trojan creates the following two files:

%ProgramFiles%\Common Files\Syesm.exe
%UserProfile%\ammxv.drv

It then makes changes to the following Registry keys to ensure that the malware starts whenever the system is booted:

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\DrvKiller
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\DrvKiller\Security
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet002\Services\DrvKiller
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet002\Services\DrvKiller\Security

Finally, the Trojan tries to connect to port 80 of IP address 121.14.136.5 to send information it gleans.
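If you want a quick way to sweep Windows systems for these indicators, a sketch along the following lines will flag the dropped files and the DrvKiller service keys. It uses only the Python standard library, the artifact names are taken straight from the lists above, and it only reports; cleaning an infected system is another matter entirely.

# croo_ioc_check.py -- look for the files and Registry keys described above.
import os

try:
    import winreg                    # Python 3
except ImportError:
    import _winreg as winreg         # Python 2

FILES = [
    os.path.expandvars(r"%ProgramFiles%\Common Files\Syesm.exe"),
    os.path.expandvars(r"%UserProfile%\ammxv.drv"),
]
KEYS = [
    r"SYSTEM\ControlSet001\Services\DrvKiller",
    r"SYSTEM\ControlSet001\Services\DrvKiller\Security",
    r"SYSTEM\ControlSet002\Services\DrvKiller",
    r"SYSTEM\ControlSet002\Services\DrvKiller\Security",
]

for path in FILES:
    if os.path.exists(path):
        print("SUSPECT file found:", path)

for key in KEYS:
    try:
        handle = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key)
        winreg.CloseKey(handle)
        print("SUSPECT Registry key found:", key)
    except OSError:
        pass                         # key absent -- nothing to report

Given the rootkit functionality described above, a hit on any of these indicators is grounds for rebuilding the machine, not merely deleting the files.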

A wide variety of Web sites is being used to inject this malware into users’ systems; parisattitude .com, knowledgespeak .com, and yementimes.com are just a few of these sites. Even the Iowa City, Iowa municipal site was taken over by the perpetrators. Security vendor ScanSafe reported that about two percent of the Web connections it had analyzed were to infected sites—a statistic that I consider alarming!
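For the operators of sites on the receiving end, the root cause is the one it has always been: concatenating untrusted input into SQL statements. Parameterized queries close that door. The snippet below illustrates the idea in Python with the standard sqlite3 module; whatever platform and database the affected sites actually run, the same principle (bind parameters, never string concatenation) applies.

# parameterized_query_demo.py -- the generic fix for SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'widget')")

user_input = "1; DROP TABLE products--"    # hostile value from a request

# Vulnerable pattern: the attacker's text becomes part of the statement.
#   conn.execute("SELECT name FROM products WHERE id = " + user_input)

# Safe pattern: the driver treats user_input strictly as data, not as SQL.
rows = conn.execute(
    "SELECT name FROM products WHERE id = ?", (user_input,)
).fetchall()
print(rows)    # no rows match, and no injected SQL ever runs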

I checked and found that most anti-virus and anti-malware vendors have updates that detect and eradicate this ugly Trojan. If I were you, I’d thus make sure that all your anti-virus and anti-malware software has the latest update. Additionally, you should ensure that all five vulnerabilities that are being exploited in these attacks are patched in all your Windows systems. Furthermore, you should block outbound traffic from your network that is bound for port 80 of IP address 121.14.136.5 and all URLs for sites that the malicious iframes visit. Finally, you should configure intrusion detection and intrusion prevention systems as well as firewalls to monitor all attempts to reach this IP address and the URLs, as these attempts will almost certainly indicate the existence of compromised Windows systems.
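To find machines that have already been compromised, it also pays to mine whatever proxy or firewall logs you keep for contact with the indicators above. A rough sketch follows; the log file name and format are assumptions, so adjust the parsing to whatever your devices actually emit.

# find_croo_victims.py -- search outbound logs for contact with the
# command-and-control indicators mentioned in this posting.
INDICATORS = ["121.14.136.5", "318x.com", "2288.org", "linezing.com"]
LOGFILE = "proxy.log"          # hypothetical export of your proxy/firewall log

suspects = set()
with open(LOGFILE) as log:
    for line in log:
        if any(ioc in line for ioc in INDICATORS):
            # assume the first field of each log line is the client address
            suspects.add(line.split()[0])

for client in sorted(suspects):
    print("possible compromised host:", client)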


Windows Security: Part 7

I could go on talking about Windows security forever, but I won’t, as we need to move on to other important issues. In this last posting in the series, though, I’d like to discuss and evaluate Windows object permissions. In Unix and Linux systems there are only three permissions: read, write, and execute. Critics have long complained that having only three permissions does not provide the granularity needed for precise access control when the need for security is high.

Windows file and folder permissions go a long way in addressing this concern. Although the exact permissions available depend on the particular version of Windows, these systems have two types of permissions, Molecular and Atomic. Molecular permissions, which are more high-level in nature, generally include ones such as the following:

– Full Control
– Modify
– Read & Execute
– Read
– Write
– Special Permissions (e.g., Take Ownership)

In contrast, Atomic (or Advanced) permissions are very granular in nature. They generally include the following types of access rights:

– Full Control
– Traverse Folder / Execute File
– List Folder / Read Data
– Read Attributes
– Read Extended Attributes
– Create Files/ Write Data
– Create Folders / Append Data
– Write Attributes
– Delete
– Read Permissions
– Change Permissions
– Take Ownership

I suspect that most Windows administrators and users do not think much about permissions for files and folders. After all, the default permissions for critical files and folders are almost without exception good from a security standpoint. For example, the default installation directory (generally C:\WINDOWS) in Windows XP systems by default allows Full Control to Administrators, but only Read, Read & Execute, and List Folder Contents permissions to Users. And the fact that permissions are by default inherited from higher-level objects (e.g., folders) by lower-level objects (e.g., subfolders and files within folders) helps keep unsafe permissions from being assigned to newly created objects.
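If you have never actually looked at these defaults, a short script makes it easy. The sketch below uses the pywin32 package (win32security) to list who holds what on a folder; it is a bare-bones illustration that prints raw access masks rather than friendly permission names.

# dump_dacl.py -- list the accounts in a folder's DACL (requires pywin32).
import win32security

PATH = r"C:\WINDOWS"    # or any file or folder you want to inspect

sd = win32security.GetFileSecurity(
    PATH, win32security.DACL_SECURITY_INFORMATION)
dacl = sd.GetSecurityDescriptorDacl()

for i in range(dacl.GetAceCount()):
    (ace_type, ace_flags), mask, sid = dacl.GetAce(i)
    name, domain, _ = win32security.LookupAccountSid(None, sid)
    print(domain + "\\" + name, "type:", ace_type, "mask:", hex(mask))

Running it against C:\WINDOWS is an easy way to confirm the Administrators/Users split described above.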

Permissions do not apply exclusively to files and folders. Active Directory containers and their attributes also have permissions that control how much access each group and person is allowed. Groups such as Enterprise Admins and Domain Admins are usually assigned Full Control access (or nearly as much) by default, whereas Users are usually assigned only something like Read access. Once again, the default Active Directory object and attribute permissions are good from a security perspective. The only trouble with them that I have ever seen comes when a new application that creates its own Active Directory objects and attributes has been installed. Sometimes faulty permissions, such as Full Control for Everyone, are assigned. Administrators thus need to find and change permissions created in this manner.

The only real limitation in Windows permissions is that they are in a certain sense overwhelming. Consider the widely accepted information security principle of keeping information classification schemes and labels simple. Failure to observe this principle causes confusion and complications that result in people failing to comply with standards related to labeling and handling sensitive information. The same basic principle applies to permissions. When there are too many of them and when there are tens of thousands of objects (files, directories, Active Directory containers and attributes, and shares) to which they apply, people will and do simply ignore them.

There is an alternative—Simple File Sharing (which is by default enabled on Windows XP Professional systems). This way of setting permissions offers a much simpler way of controlling share access to individual files, folders and even an entire hard drive, but critics complain that it is too simplistic and that it leads to errors that can leave files and file systems wide open to anyone.

I must credit Microsoft for at least offering meaningful choices when it comes to access control. For those who need high levels of security, there are Atomic and Molecular permissions. For those who don’t, there is Simple File Sharing. Really–what more could we ask?


Windows Security: Part 6

In my last blog posting I discussed the Encrypting File System (EFS) that is built into every Windows operating system since Windows 2000 and how EFS works. Although EFS is effective as a security control against data security breach-related risks, a major limitation is that it does not provide whole disk encryption, making it susceptible to certain kinds of attacks. A perpetrator who has local access to the same hard drive on which Windows resides can, for example, boot a non-Windows operating system to access EFS-encrypted files and directories or copy the entire encrypted contents of a lost or stolen PC’s hard drive to a completely different computer to view the information in clear text. Windows BitLocker encryption, which is available in Vista (see footnote below) and Windows Server 2008, addresses this limitation nicely by encrypting the entire contents of a Windows volume, thereby protecting all the data therein from a wider variety of attacks. Read more…


Windows Security: Part 5

With all the data security breaches that have occurred over the last half decade and also with the advent of data protection requirements such as the PCI-DSS standard, even the most security-resistant organizations have been forced to assess and at least to some degree deal with data extrusion-related risks. Accordingly, vendors of security products as well as some operating system vendors, Microsoft included, have incorporated data extrusion prevention controls into their products. Starting with Windows 2000, Microsoft has provided the Encrypting File System (EFS) in its operating systems. EFS, which works only with the NTFS-5 file system, encrypts files and directories in a manner that is transparent to users. Read more…
