Employee Surveillance Hysteria and Other Musings
Today’s news includes a story about the ORCA card, a new unified transit card being made available to residents of the greater Seattle area. The program includes a provision giving employers access to transit ride information for employees whose transit card purchases they subsidize. The annual benefit to a subsidized employee is nearly $1,000. Not surprisingly, however, many people are whining about the perceived breach of “privacy rights” and the egregious behavior of employers who “snoop” into their employees’ transit ride information.
Some employers have stated that they intend to access the transit ride data — which includes dates, times, and ride details — only when there is a need to investigate after receiving other information about potential abuse. For example, a person claims five hours of overtime on a given day, but transit ride information reveals that only a normal shift was worked. Or a person calls in sick, but transit ride information reveals that they traveled to see a ballgame that day. Other employers, like Boeing, have stated that they take a more “hands off” view and do not plan to access the data even if it might be relevant to a fraud investigation. Of course, there will always be people in our country who are seriously confused about ethics, rights, and legalities. It is, after all, a complicated world we live in. But this issue reminds me of something that happened to me twenty-some years ago, and of an enduring lesson for information security professionals.
I once worked for a company that installed a new card reader system for door control, which required everyone to carry a picture badge. There was a concern that it was too easy for unauthorized people to enter the premises; the company at that time was growing rapidly and would eventually have six or seven buildings on the main campus plus quite a number of smaller sales offices around the country. Automatic door control also allowed some doors to become unmanned, eliminating the need for 24/7 guard staffing and the attendant high cost. At the time of installation, there was great controversy about the potential for employer abuse of the door control system. People were moaning and whining about how the company was going to mine the door control data and penalize them for a few minutes of tardiness or other such minuscule infractions.

In the several years I managed that system, there was not one single complaint of abuse of the door control data. In fact, to the best of my knowledge, the data was never accessed and used for anything other than (a) determining how an individual entered through a door their card had not been programmed for (which usually meant they used someone else’s card); and (b) determining whether someone was at work after independent suspicions of absenteeism or timecard fraud had been raised. On a number of occasions, door control data was used successfully to pursue disciplinary action against an employee committing attendance fraud. But looking back, I can think of no one who would now claim that picture badges and automatic door control at the company’s many points of entry in any way infringed on employees’ right to privacy. In fact, most would have to admit that the system actually promoted efficiency and the free flow of traffic throughout the offices.
The ongoing lesson for security professionals is this: when implementing a system that might be used for intrusive surveillance, define an ironclad policy covering how the data will be collected, stored, and destroyed, and all permissible uses of it. Communicate this policy clearly to everyone affected. Then walk the talk. Don’t use the data for any purpose other than that for which it is collected, and delete the data once you know it will no longer be necessary. Ask yourself: have you ever been asked to conduct an attendance investigation using door control data for events older than a few months? Probably not. If you’re holding door control data, I strongly urge you to delete everything older than, say, 90 or 180 days. At the same company we implemented ironclad data control policies in other areas, forcing the automated deletion of data after a certain aging threshold had been reached. This policy paid for itself time and time again when we could prove that the data no longer existed after outside agencies — including those armed with subpoenas — demanded we produce it. Unless employee data is specifically required to be retained by law or regulation, there should be a policy covering its collection, storage, use, and destruction. And make sure you follow those rules.
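The automated-deletion idea above is simple to put into practice. As a minimal sketch — assuming, purely for illustration, that badge-reader events live in a SQLite table named `badge_events` with an ISO-formatted `event_time` column — a scheduled job might look like this:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical retention window; set this per your written policy.
RETENTION_DAYS = 180

def purge_old_events(db_path: str, retention_days: int = RETENTION_DAYS) -> int:
    """Delete badge-reader events older than the retention window.

    Returns the number of rows removed, so the purge itself can be logged.
    """
    cutoff = (datetime.utcnow() - timedelta(days=retention_days)).isoformat()
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "DELETE FROM badge_events WHERE event_time < ?", (cutoff,)
        )
        conn.commit()
        return cur.rowcount
    finally:
        conn.close()
```

Run from cron or a task scheduler, a routine like this makes the retention policy self-executing rather than dependent on someone remembering to clean up.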
One other lesson this incident reminds me of is that not all data use issues are the sole purview of the information security manager. I frequently see managers struggling to get control of controversial issues like detailed tracking of Web use, e-mail surveillance, cell phone and mobile surveillance, IM tracking, and so on. These are not — repeat NOT — information security issues. They are policies that should be defined, justified, and carried out based on the needs of the business; whether information security needs to be involved because of the tools chosen to enforce those policies is a totally separate matter. It’s not up to the information security guy — or gal — to decide whether certain religiously oriented websites should be accessible over the employer-owned network. That call belongs to someone in human resources, according to the cultural needs of the company. All too often, information security tools are misused in ways that increase confusion and anxiety among employees, and the information security manager bears the blame. Case in point: I once implemented a web tracking system at a major investment bank. Initially, 10 or 15 categories of “inappropriate site” were blocked, and on day one my phone began ringing. “Why can’t I access university research data?” “I can’t get to brewery sales information.” And so on. Within about a week we found many cases where an ostensibly “inappropriate” category of information turned out to be necessary for business. In the end, the categories we stuck with were hate, porn, and gambling. These sites were never needed for business. But we did have to tinker with the filtering system, because the word “sex” also appears in many contexts that are decidedly not pornographic and are in fact necessary for business.
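The “sex” problem above is a classic substring-matching failure: a naive filter that looks for the term anywhere in a URL also catches “Middlesex” or “Essex.” A hedged sketch of the fix — matching whole words only, with an entirely made-up blocklist for illustration — looks like this:

```python
import re

# Illustrative blocklist only; real category lists come from the filter vendor.
BLOCKED_TERMS = {"sex", "casino", "poker"}

def is_blocked(url_text: str, blocked_terms=BLOCKED_TERMS) -> bool:
    """Return True if any whole word in the URL is a blocked term.

    Tokenizing into words first means 'middlesex.ac.uk' does not trip
    the 'sex' rule the way a bare substring search would.
    """
    words = re.findall(r"[a-z]+", url_text.lower())
    return any(w in blocked_terms for w in words)
```

Word-level matching does not eliminate tuning — some legitimate business sites will still use blocked words directly — but it removes the most embarrassing class of false positives.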
Also, I learned from my UK colleagues that online betting on the ponies is not considered at all inappropriate in much of UK culture, so the restriction on gambling-related sites also had to be fiddled with.
As an information security manager, it is important to separate the tools we use from the policies we enforce. In a world of increasingly powerful tools such as data leakage prevention, it is essential to have pre-established policies, and methods to enforce those policies, well in advance of implementing the tools. If a tool can be used for intrusive “snooping,” be prepared to show that such snooping never happened, and to demonstrate conclusively that effective controls limit all potentially intrusive access to only those instances approved under the company’s policy.
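Demonstrating that intrusive access happened only when approved requires a record of every access tied to its authorization. As a minimal sketch — all field names here are illustrative, not from any particular product — each use of a monitoring tool could append one line to an audit log referencing the ticket or HR case that authorized it:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AccessRecord:
    """One audited use of a monitoring tool (illustrative schema)."""
    operator: str      # who ran the query
    subject: str       # whose data was examined
    reason: str        # stated business purpose
    approval_ref: str  # hypothetical ticket / HR case number authorizing it
    timestamp: str = ""

def log_access(log_path: str, record: AccessRecord) -> None:
    """Append the record as one JSON line, stamped at write time."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

A log like this, kept under the same retention rules as everything else, is what lets you answer “prove the tool was never misused” with evidence rather than assurances.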