
Archive Everything Forever, Part 1

What do the Chinese Communist Party and FINRA (Financial Industry Regulatory Authority) have in common? They both want to control and/or censor all communications by their communities. In the case of the Red Chinese, of course, this affects things like whether Tiananmen Square gets sprayed with machine gun fire or Google gets to do business in China without shame. In the case of FINRA in the US, this affects whether registered representatives and their financial firm employers can use social media unfettered. Free speech? What free speech?
Recently, FINRA announced that financial firms are responsible for “monitoring” and “archiving” all communications on social media sites such as Facebook and Twitter by people in their employ. The guidance mostly targets registered representatives: those authorized to trade securities for their firms or their clients, or who advise individuals about securities and financial markets. In fairness, FINRA’s guidance sounds pretty reasonable: “supervise the use of social networking sites to ensure that recommendations are suitable and their customers are not misled.” And they also state that “FINRA does not endorse any particular technology to keep such records, nor are we certain that adequate technology currently exists.” OK, fair enough. But what to do?
This reminds me of deliberations I participated in back in the mid-1990s, in which the security and operations people in regulated financial firms were told to “archive everything forever,” a kind of “shot across the bow” from regulators frozen in the headlights of the exponentially growing phenomenon called The Internet. No known technology then satisfied “archive everything forever.” But that didn’t stop the regulators. There has always been a requirement to archive communications made on paper. Later, it was realized that a lot of faxed communications might be bypassing postal mail-based controls. Later still, recorded phone lines were required, creating a kind of “hot line” class of phones within trading rooms (if you needed to make a personal call, better use a pay phone or a big, clunky cell phone like the ones used by the “LowScore Band” in those commercials), which generated lots of coping behavior among those who needed to communicate about non-firm business. Trouble is, as was well documented in the original “Wall Street” movie (Oliver Stone plans to release the sequel to the 1987 classic this year), fraudsters could still escape monitoring by using the same coping mechanisms. Remember Charlie Sheen breathing into his phone, “Blue Horseshoe loves Anacott Steel”?
This also evokes memories of a case I worked on early in my Wall Street career. A young trader had posted a comment on a Yankees bulletin board (now there’s an arcane term for you in 2010…) in response to an inappropriate posting of a credit card offer on the same board. The credit card offer was not in any way illegal, but it so angered the young trader that he posted an expletive-laced rant about how “this board is for Yankees fans,” etc., from his firm email account. We got five or six sternly worded complaints from people, some of whose children were users of the Yankee-fan board themselves, who were worried that our firm would tolerate such language. OK, it was personal speech by a trader on his lunch hour. But: he was using a firm-provided and firm-identified email origin. This damaged the firm’s reputation. The young trader even said to us, “I knew I should have waited until I was home” to make the angry post. He was not surprised to be fired. Fast forward to today, though. The distinction between personal and firm-identified email is way fuzzier. Could someone have researched the IP address used for a typical HTTP session and linked the firm with the bad language in the same way? Maybe. Would the firm arrive at the same conclusion about perceived damage to its reputation? Seriously open to question. This vivifies the problem regulators face today, though it has nothing to do with fraud.
“Archive everything forever” was a great example of the kind of clueless regulation securities professionals have faced for a long, long time. Remember, this statement came at a time when Bernie Madoff was probably into the second decade of his little scheme, and the SEC had already conducted its first investigation of Madoff Securities and found nothing untoward. The problem is that in today’s climate of “get the greedy bankers,” it is likely that regulation designed to prevent fraud will get more draconian and less effective. What’s called for is for banks and securities firms to take the initiative and provide tools to their employees and agents that help keep everybody out of trouble.
The answer, I think, is found in emergent information technologies today. Information security has reached a great watershed in its evolution from preventive, inwardly focused tools to externally focused, product and value enhancing tools. I foresee a day when it will truly be possible to differentiate firms by the security they demonstrate, not just dubious self-assertions. In Part 2 of this blog, we’ll develop this idea more completely.

Categories: Network Security Tags:

What We Learned About Security in 2009

2009 was a tumultuous year for the country, the economy, and for many information security programs and professionals. Although Forrester’s Andy Jaquith (Twitter @arj) surveyed security practitioners in March and concluded that three out of four programs had not been cut, my own experience talking with colleagues and clients over the year has been different. Many organizations have severely cut back, decided not to fill open positions, or otherwise limited financial resources that might otherwise have been available to information security functions. There’s nothing wrong with this; organizations and economies ebb and flow, and practitioners and leaders in information security need to be ready for the inevitable cutbacks, just as they prepare for and advocate the important new initiatives.

But we did learn something very important about information security in 2009. How firms and their senior leaders internalize risk and make decisions about risk was, in many important ways, laid open to public view in a way that has never before been possible. When discussing risk management programs in the past, I’ve always pointed to the financial industry, with its chief risk officers, chief investment officers (the other “CIO”), and generally sober and serious approach to all things risk, including audit and compliance, as the paradigm for risk management. But in 2009 we found out that was not necessarily true. Senior managers throughout the financial industry made risky decisions, “bet the farm,” and otherwise increased their firms’ exposure way beyond the levels of risk typically underwritten by information security departments, and did so in the face of clear evidence (now, with 20/20 hindsight, I admit) that a crushing downturn was coming. Several senior leaders are no longer in their positions, in part because of the fallout of these decisions and a general leadership style that ignored or winked at this risky orientation. And all of this against a backdrop of what have been argued to be unjustifiable compensation packages, given the poor performance of many financial institutions (car companies, too) and the resultant taxpayer bailouts.

What wisdom should we take from this? I believe information security professionals have been given some of the best data points yet available about how firms and senior executives are likely to internalize risk that affects their organizations and their organizations’ major stakeholders. This should influence how we communicate about information security risks and other risks inherent in the information technology function. Many senior executives were paid for taking too much risk, and paid very, very well for it. The upshot of the mortgage meltdown, credit crisis, and resultant economic malaise is that unless organizations change dramatically, a risk-based approach to persuading business leaders about the advisability of implementing new information security controls and tools is less relevant and less likely to succeed than ever before. In short, it’s not enough to frighten them with the implications of the big breach or the potential expense of a forced remedial compliance effort after a security incident. Senior leaders’ treatment of security and other technology risks (which are far more esoteric and difficult to estimate than the kinds of financial risks that have brought down some of Wall Street’s biggest names) is likely to be even more freewheeling with corporate resources than ever before. I reiterate that this conclusion depends on a general continuation of the trend toward more aggressive risk-taking with company resources. If something happens to change the culture of how organizations view risk and accept risk on behalf of the firm, its shareholders, and other major stakeholder groups, this could turn out to be an incorrect conclusion. However, there is no evidence whatsoever that the incentives for taking excessive risk have lessened, nor do we see increases in the penalties and disincentives for taking too much risk or for bearing the inevitable losses that come with it.
No, it will become easier, not harder, for managers to say “we can’t afford that level of security,” or “we’ll run noncompliant for another year and see what happens,” after you present the implications of not being compliant with PCI again this year. There is simply nothing to counterbalance the tendency for organizations to take too much risk and let others underwrite the losses. In fact, what used to be “career limiting decisions” in the vein of accepting too much risk are now clearly in the realm of “moral hazard.” Top executives make so much money today that if something bad happens on their watch, they simply retire and go into consulting. Or maybe someone will bail them out, too. The millions they’ve been paid in cash and options will more than easily sustain a comfortable retirement, even for the yachting crowd. And about those “clawbacks” (of excessive compensation) we’ve heard about: the inevitable litigation will likely be almost as painful as the losses themselves, so we won’t see many of those either.

As a profession, information security must get better at defining and quantifying the risks inherent in not attending to information risk management. Simultaneously, we must continue to shift the emphasis from a risk-based justification for info security to a revenue-based justification. If the 1990s were years of “information security enabling the business,” then the decade just completed has been about learning that enablement wasn’t enough.  And the decade to come will be the one in which information security managers will be forced to take their place among those who generate revenue for the business and in so doing closely align information security with the products, services and customers of the company.

I’ve always advocated that information security managers keep a fresh copy of their resume at home. This is less humorous than it used to be. Information security managers are increasingly the “designated scapegoats” for the kinds of breaches and losses that are all too frequently occurring in IT today. But if there continue to be no real barriers to the moral hazards of accepting too much risk on behalf of shareholders, and senior executives continue to be paid handsomely for short-term revenue, profit, and stock price objectives, then selling security based on risk alone will become “old hat” this year.

Here’s to a new year filled with new assurances that the vital information we manage is well protected against the increasing threats to it.  With that I know we’ll all have a very Happy New Year in 2010.


Employee Surveillance Hysteria and Other Musings

Today’s news includes a story about the ORCA card in the greater Seattle area. It seems that the new unified transit card being made available to area residents includes a provision giving employers access to transit ride information for those employees whose transit card purchase they subsidize. The annual benefit to a subsidized employee is nearly $1000. However, not surprisingly, many people are whining about the perceived breach of “privacy rights” and the egregious behavior of their employers when they “snoop” into their employees’ transit ride information.

Some employers have stated that they intend to access the transit ride data — which includes the date, time, and other information about rides — only when there is a need to investigate after receiving other information about potential abuse. For example, a person claims five hours of overtime on a given day, but transit ride information reveals that only a normal shift was worked. Or a person calls in sick, but transit ride information reveals that they traveled to see a ballgame that day. Other employers, like Boeing, have stated that they view transit ride information in a more “hands off” manner and do not plan to access it even if it might be relevant to investigating fraud. Of course, there will always be people in our country who are seriously confused about ethics, rights, and legalities. It is, after all, a complicated world we live in. However, this issue reminds me of something that happened to me twenty-some years ago and a continuing lesson for information security professionals.

I once worked for a company that installed a new card reader system for door control. This required everyone to carry their own picture badge. There was a concern that it was too easy for unauthorized people to enter the premises of a company that at the time was growing rapidly and would soon have six or seven buildings on the main campus and quite a number of smaller sales offices around the country. Automatic door control also allowed some doors to become unmanned rather than requiring 24/7 guard staffing and the attendant high cost. At the time of installation, there was great controversy about the potential for employer abuse of the door control system. People were moaning and whining about how the company was going to mine the door control data and find them out or penalize them for a few minutes of tardiness or other such minuscule infractions. In the several years I managed that system, there was not one single complaint of abuse of the door control data. In fact, to the best of my knowledge, the door control data was never accessed and used for anything other than (a) determining how an individual accessed a particular entry door their card had not been programmed for (this usually meant they used someone else’s card); and (b) determining whether someone was at work after independent suspicions of absenteeism or timecard fraud had been raised. On a number of occasions, door control data was used successfully to pursue disciplinary action against an employee who was committing fraud about their attendance. But looking back, I can think of no one who would now claim that having picture badges and automatic door control systems at the many points of entry for this company in any way infringed on employee rights to privacy. In fact, most would have to admit that the system actually promoted efficiency and the free flow of traffic throughout the offices.

The ongoing lesson for security professionals here is that when implementing a system that might be used for intrusive surveillance, define an ironclad policy covering how the data will be collected, stored, and destroyed, and all permissible uses of it. Communicate this policy clearly to all of those affected. Then walk the talk. Don’t use the data for any purpose other than that for which it is being collected. This also includes deleting the data when you know it will no longer be necessary. Ask yourself: have you ever been asked to conduct an attendance investigation using door control data against events older than a few months? Probably not. If you’re holding door control data, I strongly urge you to delete all data older than, say, 90 or 180 days. At the same company, we implemented ironclad data control policies in other areas, forcing the automated deletion of data after a certain aging threshold had been reached. This policy paid for itself time and time again when we proved that the data no longer existed after outside agencies — including those armed with subpoenas — demanded we produce it. Unless employee data is specifically required to be retained by law or regulation, there should be a policy covering its collection, storage, use, and destruction. And make sure you follow those rules.
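The automated-deletion rule can be sketched as a simple purge job. The 90-day window, the record layout, and the function name below are all illustrative, not any particular product's behavior:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 90  # assumed retention window; set per your written policy

def purge_expired(records, now=None):
    """Drop door-control events older than the retention window.

    `records` is a list of (timestamp, badge_id, door_id) tuples.
    Returns only the events still inside the retention window.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r[0] >= cutoff]

# Example: one fresh event survives, one stale event is purged.
now = datetime(2010, 1, 15)
events = [
    (datetime(2010, 1, 10), "badge-101", "door-3"),  # 5 days old: kept
    (datetime(2009, 9, 1),  "badge-102", "door-1"),  # >90 days old: purged
]
kept = purge_expired(events, now=now)
print(len(kept))  # 1
```

In practice a job like this would run on a schedule against the door-control database, so that when a subpoena arrives you can truthfully say the old data no longer exists.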

One other lesson I’m reminded of by this incident is that not all data use issues are the sole purview of the information security manager. I frequently see managers struggling to get control of controversial issues like detailed tracking of Web use, e-mail surveillance, cell phone and mobile surveillance, IM tracking, and so on. These are not — repeat NOT — information security issues. They are policies that should be defined, justified, and carried out based on the needs of the business; whether information security needs to be involved because of the tools chosen to enforce those policies is a totally separate matter. It’s not up to the information security guy — or gal — to decide whether certain religiously oriented websites should be accessible over the employer-owned intranet. That should be decided by someone in human resources according to the cultural needs of the company. All too often, information security tools are misused in a way that increases confusion and anxiety in the minds of employees, and the information security manager bears the blame. Case in point: I once implemented a web tracking system at a major investment bank. Initially, 10 or 15 categories of “inappropriate site” were blocked, and on day one my phone began ringing. “Why can’t I access university research data?” “I can’t get to brewery sales information.” Etc., etc. In the space of about one week we found many cases where an ostensibly “inappropriate” category of information turned out to be necessary for business. In the end, the categories we stuck with in the implemented policy were hate, porn, and gambling. These sites were never needed for business. But we did have to tinker with the filtering system, because the word “sex” also appears in many contexts that are most decidedly not pornographic but are in fact necessary for business.
Also, I learned from my UK colleagues that online betting on the ponies is not considered in any way inappropriate in many UK circles, and so the restriction against gambling-related sites also had to be fiddled with.
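The "sex"-substring problem is easy to demonstrate. The blocklist and URL below are invented for illustration, and real filtering products use far more sophisticated categorization, but the naive-versus-tokenized contrast is the heart of the tinkering described above:

```python
import re

BLOCKED_WORDS = ["sex", "poker"]  # illustrative blocklist, not a real product's

def naive_block(url):
    # Substring match: flags any URL merely containing a blocked word.
    return any(w in url.lower() for w in BLOCKED_WORDS)

def word_boundary_block(url):
    # Match only whole tokens, so embedded occurrences pass through.
    tokens = re.split(r"[^a-z]+", url.lower())
    return any(w in tokens for w in BLOCKED_WORDS)

url = "http://research.example.edu/middlesex-county-demographics"
print(naive_block(url))          # True  -- false positive on "middlesex"
print(word_boundary_block(url))  # False -- legitimate research site passes
```

Even the tokenized version needs human review of what it blocks; no keyword scheme substitutes for a policy decision about which categories matter to the business.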

As an information security manager, it is important to be able to separate the concept of the tools we use from the policies we enforce. In a world of increasingly powerful tools such as data leakage prevention it is very important to have pre-established policies and methods to enforce those policies well in advance of implementing the tools. If a tool can be used for intrusive “snooping” then be prepared to show how such snooping never happened and demonstrate conclusively that effective controls exist for use of the tool that limit all potentially intrusive access to only those instances that are approved according to the company’s policy.


Consumer Education Is No Longer Enough to Fight Phishing

In Thursday’s Bank Info Security newsletter, Linda McGlasson writes about the need for more consumer education and awareness as the primary strategy a bank should employ against phishing and malware. I don’t want to in any way criticize the efforts that have been made to date to educate consumers and individuals about malware and phishing attacks. It’s a good start. However, we are fighting a losing battle. When someone like FBI director Robert Mueller ends his personal use of online banking after getting burned when he thought he could tell the difference between a genuine e-mail and a phishing attack, that should be a giant signal that we have reached the end of our ability to fight this war through consumer education.

Banks have generally innovated and provided reasonably good security concerning the use of websites for online banking. However, this technology continues to depend on the static password and shared secrets for authentication security. In an age when a significant proportion of PCs have been infected by malware, including key loggers, this is a demonstrably inappropriate strategy for banks to take.

Banks need to improve the customer experience so that using a bank’s website involves less marketing and more assistance. If I think that the next window is geared toward selling me a product I do not want and have no time to consider, I am likely to click any button that will get me past it. The use of a tiny “no thanks” button hidden somewhere on the window plainly demonstrates that banks think marketing is more important than security. And indeed it may be. Banks expect consumers to shoulder a disproportionate burden for resolving fraudulent use of accounts, and what banks themselves are spending on security is a tiny rounding error compared to what they earn as a result of fraud. How about devoting half of the $35 billion banks make each year on overdraft fees to new anti-fraud initiatives?

Next, banks should adopt a much more aggressive, industrial-strength approach to attacking those who misuse the Internet to propagate malware and fraud. Decoy accounts should be used to isolate and provide early warning of fraudulent activity. Aggressive forensic investigation should be used to trace back to those responsible for malware and fraud. And aggressive, uncompromising use of cease-and-desist orders against all who profit from or encourage the use of malware and fraud must be pursued by the banking industry.
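The decoy-account idea is worth a sketch. The appeal is that a decoy account has no legitimate owner, so any transaction touching it is fraud by definition and needs no behavioral modeling to flag. The account numbers and record shape below are made up for illustration:

```python
# Decoy ("honeytoken") accounts: no real customer uses them, so a single
# transaction against one is an immediate early-warning signal of fraud.
DECOY_ACCOUNTS = {"000-1122334", "000-5566778"}  # illustrative numbers only

def screen_transactions(transactions):
    """Return alerts for any transaction touching a decoy account.

    transactions: iterable of (account_number, amount) pairs.
    """
    return [
        (acct, amt)
        for acct, amt in transactions
        if acct in DECOY_ACCOUNTS
    ]

alerts = screen_transactions([
    ("111-9999999", 42.50),   # real customer: ignored
    ("000-1122334", 975.00),  # decoy account: fraud in progress
])
print(alerts)  # [('000-1122334', 975.0)]
```

Because the false-positive rate on a true decoy is essentially zero, these alerts can safely trigger immediate forensic follow-up rather than waiting in an analyst's queue.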

As anyone who has ever experienced fraudulent use of their bank account knows, banks tend to adopt a rather negative attitude toward customers who identify fraud. The attitude is very much that of “we’ll investigate and come to our own conclusion about whether or not these transactions are legitimate.” Banks need to recognize that their customers are the ones who discover fraud, and who bear the greatest burden for the resolution of fraud. Bank customers are banks’ greatest assets in fighting fraud. Why do banks persist in acting as though customers are somehow responsible for fraud? Yes, they may have allowed a sophisticated malware attack to infect their PC leading to fraudulent use of online banking credentials – but if the FBI Director himself gets fooled, doesn’t that show that consumers may be doing all they can do? Criminals are responsible for fraud, not consumers who’ve been fooled.

Statistics about this are hard to come by; however, I have a suspicion that banks are benefiting from fraudulent activity far more than they would care to admit. For example, I recently had $3500 of fraudulent airline tickets charged to my account. Thankfully, bank security flagged this on the day the charges were processed and sent me an email, which I received on my Blackberry. The following day, I went into my bank to resolve the matter. I was overdrawn and needed to have the fraudulent charges and $175 of overdraft fees reversed. The manager who helped me had me speak by phone with the bank’s fraud office to get this accomplished. Reversing transactions were put through that took effect on the following business day (a Monday :-)) on a temporary basis, until a permanent resolution could be approved by the bank. For that business day, the bank had use of my funds, and the net effect on its balance sheet was to overstate the bank’s cash position by $3500 until the funds were again available in my account. The bank knew it was fraud but waited a day to restore my balance. I could not use this money. Multiply this by the thousands (millions?) of transactions that succeed in a similar way against bank customers every day, and you have a rather significant bit of dirty laundry to add to the pile already accumulated next to the banks’ washer in the basement. In short, this incident — together with such things as overdraft fee abuse — illustrates that there is a significant moral hazard involved in banks’ handling of fraud related to their accounts.

This article depends on the premise that widespread use of online banking is a significant positive for the banking industry. I believe this to be true. Banks have achieved significant productivity benefits from implementation of electronic banking measures of all types. But if consumers develop the perception that banks don’t care enough about phishing and malware to really work hard to stop it, then the electronic banking revolution will fade before it reaches its full potential. One thing banks have learned over the decades is that customer perception about banks is very hard to change. And for their part, banks are rather clumsy in their own approaches to developing and managing their brands. If consumers believe that banks are content to let fraud take place and leave customers to pick up the pieces, that could turn into a huge negative that could take years for banks to reverse.

Hiding from the reality of organized phishing and malware attacks by pretending that all is well will not be productive. In the current climate of significant mismanagement of risk by banks (sub-prime mortgages, credit default swaps, etc. – dare I say wrongdoing?) banks should realize that the same old “safety and soundness” message they offer regarding handling of fraud creates a real cognitive dissonance among consumers. The notion that banks play the market like they’re in Vegas, then accept taxpayer bailouts, then pay themselves millions while they place a hold on your money as they “investigate possible fraud” should be killed with a stake through the heart by all banks who care about keeping their deposit base.

Banks should be known as the primary fighters against the phishing, malware, and fraud that are out there causing consumers to think twice about using electronic banking services. When consumers are facing financial pressures like never before, banks should be their friend and advocate in fighting fraud, taking much more of a “we’re on your side” attitude. I would argue that if one half of the unneeded and unwanted marketing messages I receive from banks were converted to helpful and empowering messages about information security, that would be a good start toward improving our chances in the war against phishing, malware, and fraud. Perhaps banks should offer a bounty to consumers who identify a fraudulent transaction on their online banking statement. I’d like to see more headlines about banks cooperating with authorities and filing criminal and civil complaints against the individuals and organized crime groups engaged in these activities. Only when banks, together with the credit card companies, take the lead in this war will we stand any chance of stemming the tide of phishing, fraud, and malware.

TJX and the Problem of Opportunity Cost

When blogging earlier about the aftermath of the TJX breach, I was reminded of something that happened to me years ago that expanded my perspective on the true cost of information security. I managed a department that included the security engineers who operated the firm’s global Kerberos-based authentication system. One day, at about 10 AM, the system went down around the world. Sessions already logged in were unaffected, but no one could log on anywhere on the planet. This is a fairly major outage, and potentially a career-limiting one. After about 45 minutes, we were able to restore service and began accounting for the impact of this potentially catastrophic outage. This was a large Wall Street investment bank, and as it turned out, the most profoundly affected unit included foreign currency futures traders. Had the outage occurred earlier in the day, it would have been much broader and more impactful. We determined that approximately 75 users around the world were affected by their inability to log onto the system. Armed with this information, I went hat in hand to the managing director in charge of this futures trading unit. This is a person who makes about $20 million a year (somewhat more than I made that year :-) ). He opened the meeting by saying, “Jim, this is a very serious outage, and we can’t overestimate the impact of such a service problem on the firm.” I told him I understood this very well, and that my objective was to quantify in dollar terms the actual financial impact of this particular outage. We might use this calculation in a variety of ways, such as computing the return on investment from an HA cluster or another architectural approach to avoiding a global outage in the future.

The managing director reiterated how serious an outage it was and when I pressed him for precise dollar estimates, he said “that morning, when foreign currency traders couldn’t logon they were unable to make certain bets in the marketplace. However, had they been able to make bets, they probably would’ve made the wrong ones given what happened later in the trading day. Therefore, we actually made money from the outage.”  I must’ve blinked my lack of understanding because he went on to say “that’s right, had my people been able to logon they would have made the wrong bets and lost money for the bank.”

It’s kind of hard to build this into the computation of the impact of an outage on the economic success of the firm. When we made our own economic estimates later, we simply ignored this incident, because including a positive number would have implied that it is possible to make money from having a system outage — not a premise on which an investment in high availability can be based. We did, however, try to calculate how much the outage might have cost had it come two and a half hours earlier, and that was a big number…

This illustrates several problems with computing the business impact of an adverse incident. Even though, statistically, there is the possibility that an outage will produce a positive outcome, we ignore those outcomes. By rights we should treat them as just as statistically significant as the negative ones, but our job is to provide protection against the negative outcomes, not the lucky ones.
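One way to express that convention in an expected-loss estimate is to truncate the occasional positive outcome at zero before averaging, so the figure reflects only the downside we are paid to protect against. The dollar figures below are invented for illustration; the lone positive outcome mirrors the FX desk that "made money" from the outage:

```python
# Hypothetical per-outage outcome estimates in dollars (negative = loss).
outcomes = [-250_000, -400_000, -120_000, +300_000, -80_000]

# Naive average: the positive outcome offsets real losses.
naive_expected_loss = -sum(outcomes) / len(outcomes)

# Truncate gains to zero: protection budgets are sized against losses only.
truncated = [min(o, 0) for o in outcomes]
expected_loss = -sum(truncated) / len(truncated)

print(round(naive_expected_loss))  # 110000
print(round(expected_loss))        # 170000
```

The gap between the two numbers is exactly the "lucky outage" effect: averaging it in would understate the loss exposure the HA investment is supposed to cover.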

Justification for information security is heavily biased toward “soft dollars”. Attacks that weren’t successful, outages that didn’t happen, confidence that was improved, and lower overhead from improved security interfaces are all quantified in soft dollars. However, soft dollars don’t put food on the table or money into the shareholders’ pockets. In fact, we always assume that the firm has something useful to do with the money we’d like to spend on information security if for some reason we didn’t need to spend it. This is what is behind the concept of “internal rate of return.” If TJX had not experienced its breach, what would it have done with the extra earnings made in 2007 and 2008 after all those customers did not desert it and all of those fines and penalties did not need to be paid? Maybe TJX would have wasted that money on inventory or new stores that would have proved disastrous once the mortgage meltdown and the credit crunch reached their climax. The point is, you have to assume that the money you’d like to invest in security (or any other project, for that matter) is precious and would otherwise be put to good use. The way to represent this in an ROI spreadsheet model is to use a middling return on invested capital rather than basing the hurdle rate on the most successful outcomes seen for other projects. By using a middle-range threshold, you build in the chance that some investments will go bad and not pay off. In business school, the joke was that when you asked the professor about the hurdle rate, the answer was that it was a very complex calculation, unique for each firm or industry — in short, “10%.”
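As a rough sketch of the arithmetic, here is the present-value test a security project has to clear against that middling hurdle rate. The cost and benefit figures are invented for illustration; the point is that discounted benefits must exceed the capital the firm could deploy elsewhere:

```python
HURDLE_RATE = 0.10  # the business-school "10%" middling hurdle rate

def npv(initial_cost, annual_benefits, rate=HURDLE_RATE):
    """Net present value of a security investment.

    annual_benefits: expected avoided-loss or hard-dollar benefit per year,
    discounted back at the hurdle rate; positive NPV clears the hurdle.
    """
    pv = sum(b / (1 + rate) ** (t + 1) for t, b in enumerate(annual_benefits))
    return pv - initial_cost

# Illustrative only: a $2M control expected to avert $900K/yr for 3 years.
value = npv(2_000_000, [900_000, 900_000, 900_000])
print(round(value))  # 238167
```

A positive result says the project beats the firm's next-best use of the money at 10%; run the same model with the hard-dollar benefits only, and you see quickly how much the case leans on soft dollars.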

TJX spent tens of millions of dollars on fines, penalties, and damages resulting from its 2007 breach, which exposed more than 40 million credit card numbers. It also spent a great deal more upgrading its security infrastructure, and may in fact have overpaid for those investments because they were made under duress and perhaps lacked the architectural thoughtfulness that might have attended less pressure-filled decisions. Assuming excellent security would have prevented the breach, one would also have to count the lost margins, legal fees, and other softer opportunity costs among the total benefits of avoiding a devastating information breach. The stockholders might even like to get some of that stock price back as well.

TJX did not spend the money to have excellent security, and instead suffered a breach. We do not know whether that decision was based on an underestimate of the actual costs of a breach (including the soft-dollar costs) or on real and pressing investments demanded elsewhere in the business that upstaged security.

There are two important lessons here for security leaders and architects. The first is that there is always something else to do with the money when considering security investments. The calculation is more complex when security sits within the overall IT organization, because the security budget may not substitute for investments elsewhere in the firm but rather for other technology investments within IT. During the budgeting and planning process — or during a mid-year reallocation — it's useful to consider the next project on the list and make certain that the opportunity cost of not investing in that project is appropriately figured into the security investment.
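One way to make that comparison concrete is to net the security project's expected return against the next project on the list. The returns and amounts below are hypothetical placeholders:

```python
# Hypothetical sketch: the opportunity cost of funding security is the return
# the displaced project (say, inventory or new stores) would have earned.
# Both return figures are illustrative assumptions.

investment = 1_000_000
security_return = 0.18   # assumed expected return of the security project
next_best_return = 0.12  # assumed return of the next project on the list

# Judge the security project on its advantage over the displaced
# alternative, not on its gross return alone.
net_advantage = (security_return - next_best_return) * investment
print(f"Net advantage over next-best use: ${net_advantage:,.0f}")
```

If the difference is negative, the "always something else to do with the money" argument wins, and the security spend needs a stronger case.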

The second lesson is that the more you can move benefits from the soft-dollar side of the equation to the hard-dollar side (real revenues, margins, or committed cost savings), the more clear-cut the investment decision becomes. This is not to say you should ignore soft-dollar benefits or treat them as trivial; that would be a mistake, especially when such benefits can be quite substantial. But it does focus attention on the challenge of actually capturing the benefits after an investment in security infrastructure. When the benefits are all soft, capturing and documenting financial success is a difficult exercise that can breed cynicism and distrust within the organization when done poorly. When two projects under consideration have equal nominal benefits, but one offers only soft dollars while the other offers hard dollars, the hard dollars (higher revenues or committed cost reductions) will win every time. Employees who can measure their own value to the organization by the profits their transactions generate in any given day or month want to see all of the promised benefits from new security infrastructure captured.
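The soft-versus-hard distinction can be modeled by discounting soft-dollar benefits by the odds they are actually captured. The 40% capture probability below is an assumption chosen purely for illustration:

```python
# Illustrative sketch: weight soft-dollar benefits by a capture probability
# before comparing two "equal" projects. All figures are hypothetical.

def effective_benefit(hard_dollars, soft_dollars, capture_probability):
    """Hard dollars count at face value; soft dollars count only to the
    extent the organization actually captures and documents them."""
    return hard_dollars + soft_dollars * capture_probability

# Two projects with nominally equal $1M benefits:
all_hard = effective_benefit(1_000_000, 0, 0.4)  # committed cost savings
all_soft = effective_benefit(0, 1_000_000, 0.4)  # avoided-attack estimates

print(f"All-hard project: ${all_hard:,.0f}; all-soft project: ${all_soft:,.0f}")
# The hard-dollar project wins even though the nominal benefits are equal.
```

Raising the capture probability — by committing to measure and harvest the soft benefits — is precisely what "moving benefits to the hard-dollar side" means in practice.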

We can all think of projects that never reached their full potential: the PKI implementation that never reached full roll-out; the voice-activated password self-service tool that nobody uses; the IDS data that never gets aggregated. These are all projects that were justified on substantial soft-dollar benefits and that likely carried untold opportunity costs beyond their out-of-pocket implementation costs. If those opportunity costs had been included, would we have tried harder to capture the benefits?

Know your opportunity costs. These include the financial costs that we’ve discussed as well as the costs of having people devoted to your project versus other security or non-security priorities. Understanding the depth and character of opportunity costs can significantly improve your ability to justify and win approval for information security projects.  It can also galvanize the organization to drive the project successfully and capture the full measure of benefits.

Should companies spend to avoid breaches?

I was shocked by the blog posted September 4 by Robert Westervelt of SearchSecurity.com, and re-forwarded today to subscribers of "SecurityBytes Roundup," concerning the aftermath of the TJX credit card breach. As readers of this blog will no doubt recall, TJX experienced a breach in early 2007 that exposed over 45 million credit cards, and the company has been busy cleaning up the mess ever since. Now, two and a half years later, and after a 42% decline in stock price (in 2008), Westervelt sees TJX's financial performance as evidence that spending on advanced information security tools is apparently unjustified. Read more…

Alignment is one key to long-term security success

Many information security programs are languishing on a plateau or a mild downward trend when viewed from the perspective of budget and resource allocation. There are many reasons this is true but one of the most important ones is a congenital lack of alignment between the information security program and the overall business. Simply stated, if security is not viewed as part of the top line success of any organization, it’s just another cost to be minimized. And as infosec leaders know all too well, there are plenty of people inside the corporate organization who know how to drive costs down ruthlessly. Read more…

Encryption is Evidence of Illegal Activity

Most of our readers will be aware that the Customs Service has a program to search the laptops of selected travelers returning to the United States. Typically, a traveler is asked to step aside, power on the computer, and provide the password so that the machine can be perused, ostensibly for contraband. Anyone who experiences this will, at best, find it a huge hassle. And if you happen to be trafficking in child pornography or jihadist writings, your trip may get a lot worse at this point. But what if you're a mild-mannered businessman (or woman) who has been abroad on business and just wants to get home with a company-provided laptop?

The answer is: it's not so pretty. There are many reasons you might not want the government to know the contents of your laptop. It might contain confidential information about clients for whom you provide highly sensitive advice; it might contain privileged communications between you and your attorney; or it might contain your employer's confidential intellectual property, which you are bound to keep secret under the terms of your employment contract unless compelled to reveal it through judicial due process. The little kabuki drama that unfolds at Customs is not judicial due process. So you may be tempted to simply refuse to provide the password to unlock and/or decrypt the computer. Now what? Read more…