Author Archive

Employee Surveillance Hysteria and Other Musings

Today’s news includes a story about the ORCA card, a new unified transit card being made available to residents of the greater Seattle area. Under the program, employers have access to transit ride information for employees whose card purchases they subsidize. The annual benefit to a subsidized employee is nearly $1,000. Not surprisingly, however, many people are whining about the perceived breach of “privacy rights” and the egregious behavior of employers who “snoop” into their employees’ transit ride information.

Some employers have stated that they intend to access the transit ride data, which includes the dates and times of rides, only when other information raises a suspicion of abuse. For example, a person claims five hours of overtime on a given day, but transit ride information reveals that only a normal shift was worked. Or a person calls in sick, but transit ride information reveals a trip to a ballgame that day. Other employers, such as Boeing, take a more hands-off view and do not plan to access the data even when it might be relevant to a fraud investigation. Of course, there will always be people in our country who are seriously confused about ethics, rights, and legalities. It is, after all, a complicated world we live in. This issue, however, reminds me of something that happened to me twenty-some years ago, and of a continuing lesson for information security professionals.

I once worked for a company that installed a new card reader system for door control, which required everyone to carry a picture badge. The concern was that it had become too easy for unauthorized people to enter the premises of a company that was growing rapidly; it would soon have six or seven buildings on the main campus and quite a number of smaller sales offices around the country. Automatic door control also allowed some doors to go unmanned rather than requiring 24/7 guard staffing and the attendant high cost. At the time of installation, there was great controversy about the potential for employer abuse of the door control system. People were moaning and whining about how the company would mine the data to catch and penalize them for a few minutes of tardiness or other minuscule infractions.

In the several years I managed that system, there was not a single complaint of abuse of the door control data. In fact, to the best of my knowledge, the data was only ever accessed for two purposes: (a) determining how an individual got through an entry door their card had not been programmed for (which usually means they used someone else’s card); and (b) determining whether someone was at work after independent suspicions of absenteeism or timecard fraud had been raised. On a number of occasions, door control data was used successfully to pursue disciplinary action against an employee who was committing attendance fraud. Looking back, I can think of no one who would now claim that the picture badges and automatic door controls at the company’s many points of entry in any way infringed on employees’ privacy rights. In fact, most would have to admit that the system actually promoted efficiency and the free flow of traffic throughout the offices.

The ongoing lesson for security professionals: when implementing a system that might be used for intrusive surveillance, define an ironclad policy covering how the data will be collected, stored, and destroyed, and every permissible use. Communicate this policy clearly to everyone affected. Then walk the talk: don’t use the data for any purpose other than those for which it is collected, and delete the data once you know it will no longer be needed. Ask yourself: have you ever been asked to conduct an attendance investigation using door control data more than a few months old? Probably not. If you’re holding door control data, I strongly urge you to delete everything older than, say, 90 or 180 days. At the same company, we implemented ironclad data control policies in other areas, forcing automated deletion of data once it reached a certain age. That policy paid for itself time and again when we could prove the data no longer existed after outside parties, including those armed with subpoenas, demanded we produce it. Unless employee data is specifically required to be retained by law or regulation, there should be a policy covering its collection, storage, use, and destruction. And make sure you follow those rules.

One other lesson this incident reminds me of is that not all data use issues are the sole purview of the information security manager. I frequently see managers struggling to get control of controversial issues like detailed Web usage tracking, e-mail surveillance, cell phone and mobile surveillance, IM tracking, and so on. These are not — repeat NOT — information security issues. They are policies that should be defined, justified, and carried out based on the needs of the business; whether information security needs to be involved, given the tools chosen to enforce those policies, is a separate matter entirely. It’s not up to the information security guy — or gal — to decide whether certain religiously oriented websites should be accessible over the employer-owned intranet. That call belongs to someone in human resources, according to the cultural needs of the company. All too often, information security tools are misused in ways that increase confusion and anxiety among employees, and the information security manager bears the blame.

Case in point: I once implemented a web tracking system at a major investment bank. Initially, 10 or 15 categories of “inappropriate site” were blocked, and on day one my phone began ringing. “Why can’t I access university research data?” “I can’t get to brewery sales information.” Within about a week we found many cases where an ostensibly “inappropriate” category of information turned out to be necessary for business. In the end, the policy we stuck with blocked hate, porn, and gambling; sites in those categories were never needed for business. But we did have to tune the filtering system, because the word “sex” also appears in many contexts that are decidedly not pornographic and are in fact necessary for business. I also learned from my UK colleagues that online betting on the ponies is not considered in any way inappropriate in UK culture, so the restriction on gambling-related sites had to be adjusted as well.
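The tuning described above, blocking only a few categories while honoring business and regional exceptions, might look like this in miniature. Every name here (the category list, the allowlisted host, the per-region exception table) is hypothetical, invented to show the shape of the policy rather than any vendor's API.

```python
# Block only categories never needed for business; carve out regional
# exceptions; and let an explicit business allowlist override everything,
# so keyword matches on "sex" never block legitimate research sites.
BLOCKED_CATEGORIES = {"hate", "porn", "gambling"}
REGIONAL_EXCEPTIONS = {"UK": {"gambling"}}           # assumed for illustration
BUSINESS_ALLOWLIST = {"middlesex-research.example"}  # hypothetical host

def is_blocked(host: str, category: str, region: str = "US") -> bool:
    """Return True if policy blocks this request."""
    if host in BUSINESS_ALLOWLIST:
        return False  # explicit business exception always wins
    effective = BLOCKED_CATEGORIES - REGIONAL_EXCEPTIONS.get(region, set())
    return category in effective
```

The point of the structure is that HR or the business, not the security team, decides what goes in each set; the tool merely enforces the decision.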

As an information security manager, it is important to separate the tools we use from the policies we enforce. In a world of increasingly powerful tools such as data leakage prevention, it is vital to have policies, and methods to enforce them, established well in advance of implementing the tools. If a tool can be used for intrusive “snooping,” be prepared to show that such snooping never happened, and to demonstrate conclusively that effective controls limit all potentially intrusive access to instances approved under the company’s policy.

Categories: Uncategorized Tags:

Consumer Education Is No Longer Enough to Fight Phishing

In Thursday’s Bank Info Security newsletter, Linda McGlasson writes about the need for more consumer education and awareness as the primary strategy a bank should employ against phishing and malware. I don’t want to criticize in any way the efforts made to date to educate consumers about malware and phishing attacks. They’re a good start. But we are fighting a losing battle. When FBI director Robert Mueller ends his personal use of online banking after getting burned because he thought he could tell the difference between a genuine e-mail and a phishing attack, that should be a giant signal that we have reached the limits of fighting this war through consumer education.

Banks have generally innovated and provided reasonably good security for online banking websites. However, this technology continues to depend on static passwords and shared secrets for authentication. In an age when a significant proportion of PCs are infected by malware, including keyloggers, this is a demonstrably inappropriate strategy for banks to take.

Banks need to improve the customer experience so that using a bank’s website involves less marketing and more assistance. If I think the next window is geared toward selling me a product I do not want or have time to consider, I am likely to click any button that gets me past it. The tiny “no thanks” button hidden somewhere on the window plainly demonstrates that banks think marketing is more important than security. And indeed it may be: banks expect consumers to shoulder a disproportionate burden for resolving fraudulent use of accounts, and what banks spend on security themselves is a tiny rounding error compared to what they earn as a result of fraud. How about devoting half of the $35 billion banks make each year on overdraft fees to new anti-fraud initiatives?

Next, banks should take a much more aggressive, industrial-strength approach to attacking those who misuse the Internet to propagate malware and fraud. Decoy accounts should be used to isolate fraudulent activity and provide early warning of it. Aggressive forensic investigation should be used to trace those responsible. And the banking industry must pursue uncompromising cease-and-desist action against all who profit from or encourage malware and fraud.

As anyone who has ever experienced fraudulent use of their bank account knows, banks tend to adopt a rather negative attitude toward customers who identify fraud. The attitude is very much that of “we’ll investigate and come to our own conclusion about whether or not these transactions are legitimate.” Banks need to recognize that their customers are the ones who discover fraud, and who bear the greatest burden for the resolution of fraud. Bank customers are banks’ greatest assets in fighting fraud. Why do banks persist in acting as though customers are somehow responsible for fraud? Yes, they may have allowed a sophisticated malware attack to infect their PC leading to fraudulent use of online banking credentials – but if the FBI Director himself gets fooled, doesn’t that show that consumers may be doing all they can do? Criminals are responsible for fraud, not consumers who’ve been fooled.

Statistics about this are hard to come by, but I suspect banks benefit from fraudulent activity far more than they would care to admit. For example, I recently had $3,500 of fraudulent airline tickets charged to my account. Thankfully, bank security flagged this the day the charges were processed and sent an email, which I received on my Blackberry. The following day I went into my bank to resolve the matter: I was overdrawn and needed the fraudulent charges and $175 in overdraft fees reversed. The manager who helped me had me speak by phone with the bank’s fraud office, and reversing transactions were put through on a temporary basis, taking effect the following business day (a Monday :-)), until a permanent resolution could be approved. For that business day the bank had the use of my funds, and the net effect on its balance sheet was to overstate the bank’s cash position by $3,500 until availability was restored in my account. The bank knew it was fraud but waited a day to restore my balance, and I could not use that money. Multiply this by the thousands (millions?) of similar transactions that succeed against bank customers every day, and you have a rather significant bit of dirty laundry to add to the pile already accumulated next to the banks’ washer in the basement. In short, this incident, together with such things as overdraft fee abuse, illustrates a significant moral hazard in how banks handle fraud on their accounts.

This argument rests on the premise that widespread use of online banking is a significant positive for the banking industry, and I believe that to be true. Banks have achieved significant productivity gains from electronic banking of all types. But if consumers come to perceive that banks don’t care enough about phishing and malware to work hard at stopping them, the electronic banking revolution will fade before it reaches its full potential. One thing banks have learned over the decades is that customer perceptions of banks are very hard to change, and banks are rather clumsy at developing and managing their own brands. If consumers come to believe that banks are content to let fraud happen and leave customers to pick up the pieces, that could turn into a huge negative taking years to reverse.

Hiding from the reality of organized phishing and malware attacks by pretending that all is well will not be productive. In the current climate of significant mismanagement of risk by banks (sub-prime mortgages, credit default swaps, etc. – dare I say wrongdoing?) banks should realize that the same old “safety and soundness” message they offer regarding handling of fraud creates a real cognitive dissonance among consumers. The notion that banks play the market like they’re in Vegas, then accept taxpayer bailouts, then pay themselves millions while they place a hold on your money as they “investigate possible fraud” should be killed with a stake through the heart by all banks who care about keeping their deposit base.

Banks should be known as the primary fighters against the phishing, malware, and fraud that cause consumers to think twice about using electronic banking services. When consumers face financial pressures like never before, banks should be their friend and advocate in fighting fraud, taking much more of a “we’re on your side” attitude. I would argue that if half of the unneeded and unwanted marketing messages I receive from banks were converted into helpful, empowering messages about information security, that would be a good start toward improving our chances in the war against phishing, malware, and fraud. Perhaps banks should offer a bounty to consumers who identify a fraudulent transaction on their online banking statement. I’d like to see more headlines about banks cooperating with authorities and filing criminal and civil complaints against the individuals and organized crime groups engaged in these activities. Only when banks, together with the credit card companies, take the lead in this war will we stand any chance of stemming the tide of phishing, fraud, and malware.


TJX and the Problem of Opportunity Cost

When blogging earlier about the aftermath of the TJX breach, I was reminded of something that happened to me years ago that expanded my understanding of the true cost of information security. I managed a department whose security engineers operated the firm’s global Kerberos-based authentication system. One day, at about 10 AM, the system went down around the world. Sessions already logged in were unaffected, but no one could log on anywhere on the planet. This is a fairly major outage, and potentially a career-limiting one. After about 45 minutes we were able to restore service and began accounting for the impact of this potentially catastrophic outage. This was a large Wall Street investment bank, and as it turned out, the most profoundly affected unit was a group of foreign currency futures traders; had the outage occurred earlier in the day, it would have been much broader and more damaging. We determined that approximately 75 users around the world were affected by their inability to log on. Armed with this information, I went hat in hand to the managing director in charge of the futures trading unit, a man who makes about $20 million a year (somewhat more than I made that year 🙂 ). He opened the meeting by saying, “Jim, this is a very serious outage, and we can’t overestimate the impact of such a service problem on the firm.” I told him I understood that very well, and that my objective was to quantify in dollar terms the actual financial impact of this particular outage. We might use the figure in a variety of ways, such as computing the return on investment of an HA cluster or another architectural approach to avoiding a future global outage.

The managing director reiterated how serious an outage it was, and when I pressed him for precise dollar estimates, he said, “That morning, when the foreign currency traders couldn’t log on, they were unable to make certain bets in the marketplace. However, had they been able to make bets, they probably would’ve made the wrong ones, given what happened later in the trading day. Therefore, we actually made money from the outage.” I must’ve blinked my lack of understanding, because he went on to say, “That’s right: had my people been able to log on, they would have made the wrong bets and lost money for the bank.”

It’s hard to build this into a computation of an outage’s impact on the firm’s economic success. When we made our own estimates later, we simply ignored this incident, because including a positive number would imply that it is possible to make money from a system outage, which is not a premise on which a high-availability business case can rest. We did, however, try to calculate how much the outage might have cost had it come two and a half hours earlier, and that was a big number…

This illustrates several problems with computing the business impact of an adverse incident. Even though an outage can occasionally produce a positive outcome, we ignore those cases. By rights they are just as statistically significant as the negative outcomes, but our job is to protect against the negative outcomes, not to count on the lucky ones.
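The convention just described, counting the losses and discarding the lucky windfalls, amounts to clamping each incident's impact at zero before totaling. A tiny sketch, with all figures hypothetical:

```python
def expected_outage_loss(incident_impacts):
    """Total outage impact across incidents, ignoring lucky ones.

    Each entry is the estimated dollar impact of one incident; a
    negative value means the firm came out ahead (as with the futures
    traders). Favorable outcomes are clamped to zero, per the
    convention described above.
    """
    return sum(max(impact, 0.0) for impact in incident_impacts)
```

So an outage that happened to save the traders money contributes nothing to the estimate, rather than subtracting from it; the high-availability business case is built only on the downside.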

Justification for information security rests heavily on “soft dollars.” Attacks that weren’t successful, outages that didn’t happen, confidence that was improved, and lower overhead from better security interfaces are all quantified in soft dollars. But soft dollars don’t put food on the table or money in shareholders’ pockets. In fact, we must always assume the firm has something useful to do with the money we’d like to spend on information security. This is what lies behind the concept of the “internal rate of return.” If TJX had not experienced its breach, what would it have done with the extra earnings in 2007 and 2008, after all those customers did not desert it and all those fines and penalties did not need to be paid? Maybe TJX would have wasted the money on inventory or new stores that would have proved disastrous once the mortgage meltdown and the credit crunch reached their climax. The point is, you have to assume the money you’d like to invest in security (or any other project, for that matter) is precious and would otherwise be put to good use. The way to represent this in an ROI spreadsheet model is to use a middling return on invested capital, rather than basing the hurdle rate on the most successful outcomes seen for other projects. By using a middle-range threshold, you build in the chance that some investments will go bad and not pay off. In business school, the joke was that when you asked the professor about the hurdle rate, the answer was that it was a very complex calculation, unique to each firm or industry; in short, “10%.”
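The hurdle-rate idea reduces to a simple net-present-value test: discount a security project's projected benefits at a middling rate (the professor's "10%") and fund it only if the result is positive, i.e. only if it beats the firm's other uses for the money. The cash-flow figures below are invented for illustration.

```python
def npv(rate, initial_cost, annual_benefits):
    """Net present value of a project at the given hurdle rate.

    `annual_benefits[t]` is the benefit received at the end of year
    t+1; each is discounted back to today and netted against the
    up-front cost.
    """
    return -initial_cost + sum(
        b / (1 + rate) ** (t + 1) for t, b in enumerate(annual_benefits)
    )

def approve(rate, initial_cost, annual_benefits):
    """Fund the project only if it clears the hurdle rate."""
    return npv(rate, initial_cost, annual_benefits) > 0
```

Note how the hurdle rate does the work the text describes: the same benefit stream that clears a 10% rate may fail at the inflated rates implied by a firm's best-performing projects, which is exactly why a middling rate is the honest choice.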

TJX spent tens of millions of dollars on fines, penalties, and damages resulting from its 2007 breach of more than 40 million credit card numbers. It then spent a great deal more upgrading its security infrastructure, and may in fact have overpaid for those investments, because they were made under duress and perhaps lacked the architectural thoughtfulness that would have attended less pressure-filled spending. Assuming excellent security would have prevented the breach, one would also have to count the lost margins, legal fees, and other softer opportunity costs among the total benefits of avoiding a devastating information breach. The stockholders might even like some of that stock price back as well.

TJX did not spend the money to have excellent security, and instead suffered a breach. We do not know whether that decision was based on an underestimate of the actual costs of a breach, including the soft dollar costs, or on real and pressing investment demands elsewhere in the business that upstaged security.

There are two important lessons here for security leaders and architects. The first is that there is always something else to do with the money when considering security investments. The consideration is more complex because security is often part of the overall IT organization, so a security investment may displace not an investment elsewhere in the firm but an investment in some other technology within IT. During budgeting and planning, or during a mid-year reallocation, it’s useful to look at the next project on the list and make certain the opportunity cost of not funding that project is properly figured into the security investment.

The second lesson is that the more you can move benefits from the soft dollar side of the equation to the hard dollar side (real revenues, margins, or committed cost savings), the more clear-cut the investment decision becomes. This is not to say you should ignore soft dollar benefits or treat them as trivial (that would be a mistake, especially when such benefits can be quite substantial), but it does focus attention on the challenge of actually capturing the benefits after an investment in security infrastructure. When the benefits are all soft, capturing and documenting financial success is a difficult exercise that can breed cynicism and distrust within the organization when done poorly. When two candidate projects have equal benefits but one offers soft dollars and the other hard dollars, the hard dollars, in higher revenues or committed cost reductions, will win every time. Employees who can measure their own value to the organization by the profits their transactions generate in any given day or month want to see every promised benefit of new security infrastructure captured.

We can all think of projects that never reached their full potential: the PKI implementation that never reached full roll-out, the voice-activated password self-service tool that nobody uses, the IDS data that is never aggregated, and so on. These projects were justified on substantial soft-dollar benefits and likely carried untold opportunity costs beyond their out-of-pocket implementation costs. If those opportunity costs had been included, would we have tried harder to capture the benefits?

Know your opportunity costs. These include the financial costs that we’ve discussed as well as the costs of having people devoted to your project versus other security or non-security priorities. Understanding the depth and character of opportunity costs can significantly improve your ability to justify and win approval for information security projects.  It can also galvanize the organization to drive the project successfully and capture the full measure of benefits.


Should companies spend to avoid breaches??

I was shocked by the blog posted September 4 by Robert Westervelt of SearchSecurity.com, re-forwarded today to subscribers of “SecurityBytes Roundup,” concerning the aftermath of the TJX credit card breach. As readers of this blog will no doubt recall, TJX experienced a breach in early 2007 that exposed over 45 million credit cards, and the company has been busy cleaning up the mess ever since. Now, 2 1/2 years later, after a 42% decline in stock price (in 2008), Westervelt sees TJX’s financial performance as an indicator that spending on advanced information security tools is apparently unjustified.


Alignment is one key to long-term security success

Many information security programs are languishing on a plateau, or on a mild downward trend, when viewed from the perspective of budget and resource allocation. There are many reasons for this, but one of the most important is a congenital lack of alignment between the information security program and the overall business. Simply stated, if security is not viewed as part of the organization’s top-line success, it’s just another cost to be minimized. And as infosec leaders know all too well, there are plenty of people inside the corporate organization who know how to drive costs down ruthlessly.


Encryption is Evidence of Illegal Activity

Most of our readers will be aware that the Customs Service has a program to search the laptops of selected travelers returning to the United States. Typically, a traveler is asked to step aside, power on the computer, and provide the password so that it can be perused, ostensibly for contraband. Anyone who experiences this will, at best, find it a huge hassle. Moreover, if you also happen to be trafficking in child pornography or jihadist writings, your trip may get a lot worse at this point. But what if you’re a mild-mannered businessman, or woman, who has been abroad on business and just wants to get home with a company-provided laptop?

The answer is: it’s not so pretty. There are many reasons you might not want the government to know the contents of your laptop. It might contain confidential information about clients for whom you provide highly sensitive advice; privileged communications between you and your attorney; or the confidential intellectual property of your employer, which you are bound to keep secret under your employment contract unless compelled to reveal it through judicial due process. The little kabuki drama that unfolds at Customs is not judicial due process. So you may be tempted simply to refuse to provide the password to unlock and decrypt the computer. Now what?


Does Heartland Blame its QSAs?

Rich Mogull’s stern admonitions (http://securosis.com/blog/an-open-letter-to-robert-carr-ceo-of-heartland-payment-systems/) to Robert Carr, the CEO of Heartland Payment Systems, after Carr’s interview by CSO Online (http://www.csoonline.com/article/499527/Heartland_CEO_on_Data_Breach_QSAs_Let_Us_Down) are shrill and overstated. Mogull took several paragraphs to refute something attributed to Carr that never appeared in the interview: the claim that Carr blames his QSAs for the breach.

In the article, Carr never blamed his QSAs for the breach. Carr does say, “The audits done by our QSAs were of no value whatsoever,” and he later says, “The false reports we got for 6 years (from our QSAs), we have no recourse. No grounds for litigation.” While these statements might imply that the QSAs were to blame, they are a long way from making that assertion.

Carr expresses frustration that nothing his QSAs did in any way prepared Heartland to discover or defend against the attack. An aggrieved shareholder might have said the same thing about financial audits at Enron; why didn’t the auditors tell us about the massive fraud? Or how about Lehman Brothers shareholders’ plight? Who should have told them about the impending crash? Carr’s frustration seems to be that when he looked at the contract with his QSA, he realized that the QSA had no obligation to tell Heartland about the possible existence of vulnerabilities that might be exploited.

It is ironic that Heartland will rely on those very same QSA reports as a key part of their defense against those who are suing them over the incident. Claimants will say Heartland “knew or should have known about the existence of an exploitable vulnerability.” Obviously, Carr intends to argue that Heartland relied absolutely on the QSA reports and was shocked, SHOCKED, when the breach was discovered only moments after their most recent PCI clean bill of health.

The weakness of Mogull’s argument is obvious from Carr’s and Heartland’s new commitment to leading edge security, focused on “data in transit” as well as major new initiatives in data loss prevention (DLP) that go well beyond the scope of the PCI-DSS. If Carr really thought his QSAs and not Heartland were to blame for the breach – as Mogull claims – why would Carr now embark on these initiatives? Would Carr not claim, “look at our PCI report and see that we are clearly doing enough to prevent breaches to cardholder data,” if Mogull is right?

Mogull’s patronizing letter goes on and on pontificating about Carr’s role as CEO and explaining the issues with accountability, roles and independence with which Carr obviously needs no help.

Mogull criticizes Carr for “rely(ing) completely on an annual external assessment to define the whole security posture of his organization.” This is an outrageous accusation against Heartland; one might infer that they had no security staff or that they were dolts. The fact is that Heartland did not “rely completely” on their QSAs and they can prove they didn’t. Of course, we do not know the contents of the confidential report given Heartland by its QSAs and we do not know whether the QSAs gave them a soothing reassurance that all was OK. We do know that Carr thought he paid his QSAs for more and only after he read the fine print did he discover that more was a pipe dream.

All of this lather shifts attention away from PCI-DSS. PCI-SSC and the card brands might consider a reduced system of fines for cases where the processor or merchant passed their PCI assessment but still experienced a breach. As we’ve seen with Heartland and CardSystems Solutions, having a breach is no fun and costs a lot. Adding punitive PCI fines seems to be piling on, a substantial infraction in the NFL, yielding a 15-yard penalty and automatic first down. And it would really help if PCI-SSC would stop saying “We’ve never seen anyone who was breached that was PCI compliant.” This is an absurdity that perpetuates the fiction that (a) having a PCI certificate equates to being secure, and (b) following PCI is enough.

Bottom line, rants like Mogull’s do not help because they unfairly characterize the Heartlands of the world as trying to weasel out of accountability for security. We should instead be asking how companies like Heartland can better avoid breaches and provide them incentives for a positive track record, something that may be more effective than penalties and fines at this point. Like that old joke about 5,000 lawyers at the bottom of the ocean, PCI-DSS is a good start. But PCI-DSS is not the final word and PCI-SSC knows that. They have put out a request for comments and suggestions for improvements to version 1.2.

Categories: Uncategorized Tags:

Information Security Can Support Human Ecosystems

The Knights of Columbus and Marist recently completed a survey of consumers and managers about ethics in today’s business world: http://news.yahoo.com/s/usnw/20090226/pl_usnw/majority_of_public_believes_corporate_america_needs_new_moral_direction. One of the striking outcomes of the survey is how much consumers and business managers agree that a lack of ethical standards lies behind today’s financial mess. This raises a fascinating topic: the ecology of human values within organizations, and information security’s role in preserving and promoting that ecology.

Years ago I worked for an organization that pioneered aspects of modern information technology and of information security research. One of the things information security consultants were expected to do back then was to routinely mount “social engineering” attacks on client organizations in order to expose the likelihood that information might be shared inappropriately by members of those organizations. Generally speaking, social engineering requires misrepresenting the social engineer’s identity and role in order to trick the victim, in this case an unsuspecting employee of the organization, into revealing secrets. At our company, however, we defined a hard and fast rule that no consultant should ever be required to tell a lie as a normal or routine part of the day-to-day job. Lying is unethical, even when done as a means to a worthy end, and many people observe religious values that prohibit them from telling a lie. Honesty is the underpinning of much of today’s common law and contract law; although counterbalanced by the principle of “caveat emptor,” it still holds a centrally important role in virtually everything we do in business. Yet in the Knights of Columbus and Marist poll, consumers and business managers alike saw honesty and ethical behavior as lacking, or at least declining, in today’s business organizations. Read more…


The System Shall Be Secure

Twenty-plus years after starting a career focused on information security and on issues of risk acceptance, assurance, and implementation of appropriate controls within organizations, I am still seeing the statement “the system shall be secure” offered as the sole security requirement for a system proposed for development. I recently had occasion to ask the senior development executive at a leading provider of software to pharmaceutical firms to define the aspects or elements of his system that support information security. What I got back was a hastily drafted statement, complete with typographical errors, that contained several hand-waving mentions of information security jargon but amounted to essentially nothing in the way of a substantive statement on information security. I got the distinct impression that this company had never before been asked for a definitive statement on information security. Read more…


What Is Information Security? Really??

In the current issue of IEEE Security & Privacy, Silver Bullet editor Gary McGraw, CTO of Cigital, asks an interesting question: “What is security?” His interviewee, Gunnar Peterson, founder of Arctec Group, mentions Dan Geer’s statement that security is “risk management.” Later in the interview, McGraw asks whether security is “a thing”; Peterson says it is a “set of services.” Butler Lampson, for his part, defines security as authentication, authorization, and auditing (“Au,” the Gold Standard, ha ha). All of this mumbling about security illustrates an essential problem with the profession and the intellectual domain of information security: there is no generally accepted definition of information security. For years, I have proposed that a better definition of information security than “CIA” is

a well-informed sense of assurance that
information risks and controls are in balance.

Read more…
