Your “Digital Tattoo” – Help for Minors and Victims of Revenge Porn

“I hope you know this will go down on your Permanent Record.”
– Numerous teachers, parents (including mine) and the Violent Femmes

I’ve been talking for a while about the digital detritus of our lives, and how that information is used/misused in ways we don’t even think about.

A recent article from Bernard Marr provided the analogy I’ve been looking for to tie it all together – your online data creates a “digital tattoo.”

People get tattoos for a lot of reasons – some good (as a memorial, a celebration, or a statement of spirituality); some bad (peer pressure, or the influence of drugs, alcohol or emotionally-induced bad judgment); and sometimes, as history has shown, unwillingly and for truly evil reasons. According to a study reported by the British Association of Dermatologists, “nearly 1/3 of people who get tattoos regret it afterwards.” In the US some states even regulate tattooing of minors.

Even if you’re doing it for a good reason, a tattoo can end up being done poorly or have unintended consequences – including on your ability to get a job.

“…[A]ppearance matters. For the most part, we don’t live in a very ‘forgiving’ society. Studies have been done where people are asked to judge a Doctor’s level of competence by looking at photographs. Study after study has surmised that our brains are wired to equate good looks with competence. If someone has visible tattoos there is an automatic lifestyle judgment (whether we admit it or not). …[V]isible tattoos or piercings prompt assumptions about the person.”

Moving into the online world, like their real-world analogs, “digital tattoos” can adversely impact your ability to get a job. A recently released survey looked into the use of social media in evaluating potential employees, and the results show that what you put out on social media can hurt your chances of getting hired. According to the survey:

43% of responding hiring managers and HR professionals said they found information on social media that caused them not to hire a particular applicant. This number is up from 35% in 2012.

The main reasons cited for concern were:

– Provocative/inappropriate photos/information: 50%
– Information posted about drinking/using drugs: 48%
– Candidate bad-mouthed previous employer: 33%
– Poor communication skills: 30%
– Discriminatory comments about race, gender, etc.: 28%
– Lies about qualifications: 24%

(Note to those who actually do the hiring: using social media is a dangerous way to evaluate candidates. Read this before you look at the Facebook profile of that applicant.) Social media is also now being used to evaluate creditworthiness.

The “Permanent Record” our parents warned us about is here and, for the most part, we’re creating it ourselves.

Even though many states regulate minors getting real tattoos, teens are getting “digital tattoos” – including the equivalent of a “Scarlet A” – without being old enough to appreciate the consequences. Adults who ought to know better are doing it, too, but then many adults who ought to know better get real tattoos they come to regret, too.

At the most basic level, kids, teens and adults who ought to know better are posting things online without realizing that, like a tattoo, unless you make it small and put it in a very private place, more people than you planned are going to see it and judge you based on it.

    One Small Step Towards Tattoo Removal for Minors

Legislation recently signed by California Gov. Jerry Brown will require web companies, starting in 2015, to remove online activity – whether it be scandalous or simply embarrassing – upon request from a California minor.

Unfortunately, a law like this can only do so much. Like removing a real tattoo, you can’t erase the memory of everyone who saw it, and you can’t erase every picture your friends took that shows it. If that picture of you hammered in the French Quarter at Mardi Gras, or that nude/nearly nude picture you sent to your boyfriend/girlfriend is posted by someone else, it’s not covered by the law. If the image is copied and posted to another web site, that’s not covered, either. The game of internet “Whack-a-Mole” trying to remove material will continue.

The law also doesn’t require web companies to remove the data from their systems; it just requires them to remove the requested item from public viewing. Under the law, sites can offer ways for users to make the redaction directly, or provide an avenue for users to request one.

The protection under the law is a nice concept, but since it applies only to sites where users posted their own information, it doesn’t actually provide much help: most social media companies already allow users to delete their own data. For the sites that don’t, there’s an additional catch: the law doesn’t extend to adults who want to go back and delete material they posted as minors.

Some have argued that laws like this will cause social media companies to lock out minors, or force minors to claim to be adults, as happened with under-13s and the Children’s Online Privacy Protection Act. Another potential problem, opponents say, is that California will have a different policy than other states, creating a patchwork of regulation that could be difficult for the industry to navigate. While this is true, California has led the way in privacy and data breach notification laws, and that leadership has caused a number of states to enact laws following California’s model.

    Dealing with Revenge Porn

Another area where California is leading is dealing with what’s called “revenge porn”. “Revenge porn” sites feature explicit photos frequently posted by ex-boyfriends/husbands/lovers, and are often accompanied by identifying details, like where the women live and work, as well as links to their social media sites – Twitter accounts, Facebook pages, etc. And, according to this article from the NY Times, the effects on the victims can be substantial –

Victims say they have lost jobs, been approached in stores by strangers who recognized their photographs, and watched close friendships and family relationships dissolve. Some have changed their names or altered their appearance.

That was the case for Holly Jacobs, who discovered, a month after she and her boyfriend broke up, that her naked picture had been posted on her Facebook profile. (As noted in the Today show video, her ex-boyfriend claims his computer was hacked.) Regardless of how they got online, the photos went viral, and less than a year later Jacobs’ photo was on as many as 200 websites, as were her name, email address and place of business. She tried to get the photos removed, she changed her phone number, she changed her name, she quit her job, but she wasn’t able to escape the digital tattoo that had been applied to her online persona.

Despite repeated stories of teens and adults who share intimate photos with people they believe they can trust, only to have that trust violated, as another revenge porn victim said in the NY Times:

“You don’t want to really think that five years down the line, your boyfriend at the time could be your not-boyfriend and do something really bad to you,”

so people keep sharing intimate photos. And like every other aspect of your digital tattoo, once the images are online they spread to other web sites, and these sites are, unfortunately, largely immune to legal action.

But that may be changing. California recently became the first state to enact a law specifically aimed at revenge porn. Other states reportedly considering revenge porn legislation include Florida, Texas, Wisconsin and Georgia.

The California law makes it a misdemeanor to distribute sexual images “with the intent to cause serious emotional distress,” punishable by a fine of up to $1,000 and up to six months in jail – even if the pictures were originally taken with consent. The law covers only images taken by the person posting them, meaning that self-photos taken by the victim and sent to the poster aren’t protected. While there are some issues with criminalizing the posting of material that was created with consent at the time, and laws like California’s will have to be carefully crafted, this is behavior that shouldn’t be protected.

Unfortunately, the majority of postings of this type come from pictures taken by the subject in a mirror (“selfies”) and sent to the person’s love interest. Despite the myriad stories about how this can go wrong, people continue to send these pictures out – with predictably unfortunate results:

“My friend, she was VC-ing,” or video chatting, “this guy she was kind of dating,” Melissa said. “He sent so many nudes to her, but she wasn’t trusting that he wouldn’t show the pictures to other people. So she Skyped him and showed him nudes that way. He took a screenshot without her knowing it. He sent it to so many people and the entire baseball team. She was whispered about and called names. It’s never gone away. He still has it and won’t delete it.”

I asked if they knew girls who posted provocative pictures of themselves. They all said yes.

So, in the absence of discretion, the next step in legislation will be figuring out how to extend the type of protection offered by this new California law to “selfies.”

    Changing Behavior

Hopefully the tattoo analogy will help people think about the long-term consequences of what they post. Even a besotted teen would think twice about getting a real tattoo for his/her love interest, and we should all consider the permanence of what we place online.

Posted in privacy, social media

Beware the Robocop

Today I received an official notice from the Social Security Administration saying they had overpaid me on some benefits and that, if I don’t pay them back, the Treasury Dept. may withhold the money from any future tax refund due under my SSN, which they conveniently list in the letter. Fortunately, the SSN they listed isn’t mine, but it’s my name on the letter, and my new address – so clearly something has gotten screwed up.

While I was pondering the potentially Kafkaesque phone call I’ll have with Social Security on Monday, I saw some news about the new version of Robocop coming out in February 2014, where, among other things, Samuel L. Jackson’s character talks about Robocop and the use of armed drones as the “future of law enforcement.”

That got me thinking about the trend towards the use of automated tools – from speed and red light cameras to the use of black boxes in cars and facial recognition – for law enforcement. Computers are good for a lot of things, but without a human element to check the result and examine the context of certain behaviors, we may be setting ourselves up for a world we won’t like.

As is clearly the case with the letter I got from the SSA, computers can make errors. Sometimes the errors are the result of human input, sometimes they’re just bugs, but the consequences can be significant, and they move us toward a “guilty until proven innocent” world where the computer decides you’re guilty and it’s up to you to prove the computer is wrong.

That was the case for John Gass. As reported by the Boston Globe:

John H. Gass hadn’t had a traffic ticket in years, so the Natick resident was surprised this spring when he received a letter from the Massachusetts Registry of Motor Vehicles informing him to cease driving because his license had been revoked.

After frantic calls and a hearing with Registry officials, Gass learned the problem: An antiterrorism computerized facial recognition system that scans a database of millions of state driver’s license images had picked his as a possible fraud.

It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.

But even if everything operates perfectly, do we really want a world where we can be monitored 24×7 by cameras with facial recognition software that automatically churn out fines for misdemeanor-level behavior?

Jaywalking is illegal in many cities, but if there’s no one around and no traffic, we all cross the street outside of a crosswalk or even against the light from time to time. A police officer seeing that would evaluate the situation and decide whether it was appropriate to strictly enforce the law. Software, on the other hand, sees the violation, uses facial recognition to identify the perpetrator, and mails out the ticket – just like automated speed and red light cameras do with license plates.

Right now while driving we only deal with automated enforcement of speed and red lights, but as the technology expands, think about all of the violations you could possibly be cited for while driving that an automated system could be used to monitor:

– Passing on the right
– Failure to turn on headlights when windshield wipers are on
– Failure to use a turn signal
– Failure to come to a complete stop at EVERY stop sign
– and the list goes on…

Pervasive automated enforcement could turn a drive of almost any distance into an expensive venture.

And for bikers the issue could be even worse. Many bikers use discretion when deciding which traffic laws to follow at any given time, in part because stopping, losing your momentum, and unclipping from clipless pedals at every stop sign and stoplight when there’s no traffic makes biking a whole lot harder.

Going beyond the realm of simple traffic enforcement, you also have to be careful what you admit online. While the case of Jacob Cox-Brown seems pretty clear-cut, as reported by TechCrunch:

Police made an example out of a teenager from Oregon who boasted about driving drunk on Facebook. “Drivin drunk… classic 😉 but whoever’s vehicle i hit i am sorry. 😛 ,” wrote the clueless 18-year-old. According to local news channel KGW, two people tipped the officers via Facebook about the post. After inspecting the most-likely-profusely-sweating/hungover teen’s car, the damage on his vehicle matched that of two other vehicles hit earlier that New Year’s morning.

And, with their powers of deduction…bam! Handcuffs. The suspect was charged with two counts of “failing to perform the duties of a driver,” but not drunk driving, because a Facebook post is apparently not sufficient evidence of intoxication, according to KGW’s report from Deputy Chief Brad Johnston.

there are plenty of other things people confess online, and an algorithm that searched pictures and posts for confessions of illegal activity, automatically identified the culprits via Facebook photo tagging, and sent them a ticket isn’t that far-fetched.
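To make the point concrete, here is a toy sketch (entirely hypothetical, not any real system’s code) of how such a confession-scanning algorithm might flag posts; the pattern list and post data are made up for illustration:

```python
import re

# Toy illustration only: a naive scanner that flags posts containing
# phrases commonly associated with admissions of illegal driving.
# A real system would need NLP, context, and human review -- which is
# exactly the point this post is making.
CONFESSION_PATTERNS = [
    r"\bdrivin'?g? drunk\b",
    r"\bhit and run\b",
]

def flag_posts(posts):
    """Return the authors of posts matching any confession-like pattern."""
    flagged = []
    for author, text in posts:
        if any(re.search(p, text, re.IGNORECASE) for p in CONFESSION_PATTERNS):
            flagged.append(author)
    return flagged

posts = [
    ("jacob", "Drivin drunk... classic ;) but whoever's vehicle i hit i am sorry."),
    ("alice", "Great brunch today!"),
]
print(flag_posts(posts))  # ['jacob']
```

Even this crude keyword matcher would catch the Facebook post quoted above – and would just as happily misfire on jokes, song lyrics, or sarcasm, with no officer in the loop to notice.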

Technology has the power to assist law enforcement – but it should be an assistant rather than an automated evidence collector and punishment distributor.

Posted in privacy, social media

Social Media Companies Object to Government Monitoring – Principled Concern or Commercial Hypocrisy?

In response to the revelations about NSA monitoring, companies like Google have stepped up their efforts to protect the privacy of their users. Among other things, Google announced that they are encrypting all of their data to try to protect against spying by the NSA and other state actors.

In addition, Google, Facebook, Microsoft and Yahoo have filed motions in the now somewhat famous (but still secretive) Foreign Intelligence Surveillance Court (usually called the FISA Court in the media) seeking permission to publish more information about the NSA’s PRISM program and, presumably, how they tried to resist it. Google and Facebook have both argued that media descriptions of tech companies as pliant partners in the NSA’s monitoring have hurt their image and their business. Google’s motion claims that Google’s reputation has been harmed by what it calls “false or misleading” press reports about the company’s relationship with the National Security Agency.

If this is genuine concern for protecting their customers’ data from intrusive and unwarranted surveillance (which, in the case of Google at least, you’d expect from a company whose original motto was “Don’t be evil”), then that’s a good thing.

But this sudden focus on protecting consumer data might have a little self-interest in it, as well.

The scope and extent of NSA monitoring of online activity has many people considering the degree to which they expose information online. According to a new survey from the Pew Internet and American Life project:

[M]ost internet users would like to be anonymous online, but many think it is not possible to be completely anonymous online. Some of the key findings:

– 86% of internet users have taken steps online to remove or mask their digital footprints—ranging from clearing cookies to encrypting their email.
– 55% of internet users have taken steps to avoid observation by specific people, organizations, or the government.

In the same vein, Omnicom Media’s Annalect service recently reported results of a survey that showed, among other things:

– 51% of users are unclear about how their information is collected and used.
– 57% are worried that their personal information is shared without their consent.
– 66% feel a lack of control over how their information is collected, tracked and shared.

Most notably for social media companies:

– 74% of users say their perception of a company would be negatively impacted if they discovered the company was tracking them without their knowledge.
– 77% of consumers have altered their online and offline behaviors to protect their online privacy.

This type of behavior can have significant consequences for businesses that make their money from online advertisers, and recent actions of some of these players indicate that these companies might be more interested in the threat to their profits than the principle of protecting customer privacy. (For steps you can take to protect your information online, see this previous post – Privacy and Data Security for the Normal Person.)

Consumer Watchdog has reported on a motion filed by Google on Sept. 5 in the United States District Court for the Northern District of California regarding ongoing litigation challenging how the company operates its free email service.

The motion, which asks the court to dismiss a class action complaint against the company, says Gmail users should assume that any electronic correspondence that passes through Google’s servers can be accessed and used for an array of purposes, such as selling ads to customers.

“Just as a sender of a letter to a business colleague cannot be surprised that the recipient’s assistant opens the letter, people who use Web-based email today cannot be surprised if their emails are processed by the recipient’s [email provider] in the course of delivery,” the motion reads in part. “Indeed, ‘a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.’”

So the essence of Google’s argument is that users voluntarily gave their information to Google, so Google should be able to use it in any way it wants; yet Google doesn’t want the government to have unlimited access to that same information. It’s easy to imagine Facebook and other social media companies whose business model is based on targeting advertising to user-disclosed information making the same argument.

As they say, “If you’re not paying for it, you’re not the customer – you’re the product.” It seems that Google (and probably the others, as well) are objecting to the government diminishing the value of their product rather than taking a principled view on consumer privacy.

Posted in privacy, social media

Don’t Underestimate the Technology

Patrick McGuire has a good article analyzing a new combination of hardware and app called “Tile” that bills itself as “the world’s largest lost and found.”

Tiles are little 1″x1″ Bluetooth-enabled tags that you can stick to things you might lose. The tiles connect to smartphones through the Tile app, and the app reports the GPS location of the tile to Tile’s cloud service. When you’re within 50-150 feet of the tile, the app shows you a warmer/cooler-type display, and each tile has a speaker that can chirp to help you find something that’s not in plain view.

Tile lets you share access to your tiles, so multiple people (families, roommates, etc.) can find communal objects, and if you report something marked with a tile as stolen, the Tile app on every device scans for it; if it’s detected, Tile reports its location back to you.
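Tile hasn’t published how its proximity display works; here’s a minimal sketch of how a warmer/cooler hint could be derived from received signal strength (RSSI), with threshold values that are purely illustrative guesses, not Tile’s:

```python
def proximity_hint(rssi_dbm):
    """Map received signal strength (in dBm; higher = closer) to a
    warmer/cooler hint. Thresholds are illustrative guesses, not Tile's."""
    if rssi_dbm >= -50:
        return "hot"
    elif rssi_dbm >= -70:
        return "warm"
    elif rssi_dbm >= -90:
        return "cool"
    return "out of range"

print(proximity_hint(-45))   # hot
print(proximity_hint(-80))   # cool
print(proximity_hint(-100))  # out of range
```

Signal strength is a noisy stand-in for distance (walls, bodies and interference all attenuate it), which is part of why the app can only offer “warmer/cooler” rather than an exact position.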

Sounds pretty handy, but, as McGuire says:

Anyway, when I first saw Tile being advertised, I got an itchy, all-over heebie jeebies feeling. Firstly, adding a whole new matrix of location data to our digital world of over-sharing has some potentially scary implications. Do we really need a brand new social network, set up to monitor the whereabouts of our personal property? And what could be done with that data if it were to be placed in the hands of someone with malicious intent?

So, McGuire did what any privacy-sensitive person would do – he checked the FAQ, and since that was a little light on the privacy and security side of things, he called up Tile. According to Tile:

A Tile does not contain a GPS unit or a cellular radio and cannot provide continuous automatic location updates. Therefore it is not a good solution for real-time tracking of moving objects. The goal of Tile is to help people keep track of or find items they are likely to lose, and will not support long-distance tracking of moving items.

So basically, if you lose your keys and there’s a Tile attached to it, you will need to be within 100-150 feet of your lost property for your phone to recognize it’s in the presence of your precious, lost Tile. This means if you were to clandestinely put a Tile on someone or something you wanted to keep secret tabs on, you would need to be so close to them in the first place that you’d essentially be stalking them anyway. And while you could, theoretically, boost the range of your Tile tracking capabilities by having a bunch of co-conspirators with the Tile app, tracking that same Tile you put onto someone else’s property—that would be a wildly inefficient criminal operation, and again, would be tantamount to stalking anyway.

Even at 50-150′ a group of dedicated individuals could follow a tile without being noticed, but presumably the police and the government have better tools for that sort of thing anyway. So McGuire concludes that “All in all, it doesn’t sound like Tile is going to produce a serious security vulnerability given its poor location range, nor does it sound like the perfect solution to finding stolen property, given its limited range.”

But that assumes Tile’s range is only 50-150 feet.

Over at Quora there’s an article about Flutter, an alternative to Wi-Fi that can cover 100 times as great an area, with a range of 3,200 feet, using relatively little power. At half a mile or more, the network created by Tiles or something like them has a lot more insidious potential for disclosing information about where we are, what we’re doing, and who and what we’re doing it with.
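The 100× figure follows from simple geometry: the coverage area of a broadcast grows with the square of its range. Assuming a ~320-foot baseline range for comparison (an assumption on my part, since the article doesn’t state one):

```python
# Coverage area of a circular broadcast grows with the square of the range,
# so a 10x increase in range means a 100x increase in area covered.
wifi_range_ft = 320       # assumed baseline range for comparison
flutter_range_ft = 3200   # range cited for Flutter

area_ratio = (flutter_range_ft / wifi_range_ft) ** 2
print(area_ratio)  # 100.0
```

That quadratic scaling is why even modest improvements in radio range dramatically expand how much of a city a single listener can cover.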

So, while McGuire is probably right that the current incarnation of Tile doesn’t create a significant privacy threat (at least not more of one than our current infatuation with location based services and general flooding of the world with personal information), the privacy implications of technologies like this need to be evaluated assuming that they will improve.

What happens when each of our things is reporting its location to anyone within half a mile who can listen? Tiles seem to be reasonably well designed in terms of limiting access by others to the information being sent out by your stuff, but it wouldn’t take much for there to be a “skeleton key,” and it wouldn’t be surprising if the government were willing to pay for it.

Like so many conveniences, tools like Tiles create backdoors into the privacy and security of our lives. Worried that you might lock yourself out of your house? Put a key in a secret place – the kind that thieves know to look for. If you forget your password, there’s an easy way to reset it with a “secret” code – one that is much weaker than your password and frequently disclosed by you on social media (as Paris Hilton and Sarah Palin both discovered). We make it so easy to recover from forgetting things and losing things that we make it that much easier for someone else to take advantage of the same tricks.
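A rough back-of-the-envelope comparison shows why those “secret” reset codes are so much weaker. The numbers here are illustrative assumptions: a random 8-character password drawn from 94 printable characters versus a security question with perhaps 10,000 plausible answers:

```python
import math

# Guessing-space comparison, measured in bits of entropy.
# Assumptions (illustrative only): an 8-character password drawn uniformly
# from 94 printable ASCII characters, vs. a "mother's maiden name"-style
# question with roughly 10,000 plausible answers.
password_entropy_bits = 8 * math.log2(94)   # ~52.4 bits
question_entropy_bits = math.log2(10_000)   # ~13.3 bits

print(round(password_entropy_bits, 1))  # 52.4
print(round(question_entropy_bits, 1))  # 13.3
```

Each bit doubles the attacker’s work, so a ~39-bit gap means the reset question is trillions of times easier to guess than the password it overrides – and its answer is often sitting in your public profile.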

Once again, Pogo was right: “We have met the enemy and he is us.”

Posted in cybersecurity, mobile apps, privacy, social media

Monitoring via Mobile Devices’ Unconnected WiFi

I’ve written a lot about the amount of data we willingly disclose, and how to try to minimize it. A couple of recent articles caught my eye about the degree to which we shed information, even when we don’t mean to.

Most mobile devices can connect to wireless networks, and that’s generally a good thing: we minimize the amount of cellular data we use, we get faster connections, and we get functionality like FaceTime. Like most people, I leave the wifi on my phone turned on all the time because it’s too big a hassle to toggle it when I leave the house, and I never thought much about it. However, a couple of other people have.

First, there was the news out of the UK that recycling bins produced by a company called Renew were monitoring the wifi signals put out by mobile devices carried by pedestrians so the video screens on the bins could figure out the best ads to show.

Every device that can connect to a network has a unique code assigned to it called a MAC address. When you have the wifi turned on, your mobile device is constantly looking to connect to networks and broadcasting the device’s MAC address to any wireless network that is listening.

The idea behind the Renew bins is something like this:
Say a coffee chain wanted to win customers from a rival. If it had the same tracking devices in its stores, it could tell whether you’re already loyal to the brand and tailor its ads on the recycling bins accordingly. “Why not Pret?” the screen might say to you. Over time, the bins could also tell whether you’ve altered your habits.

The company analogizes the collection of the MAC address to a website dropping a tracking cookie on a person’s computer. This might’ve been a poor choice of analogy, because in the EU the use of tracking cookies is regulated under each country’s data protection laws and users have to be informed about the cookie and given the opportunity to opt out. See the UK Information Commissioner’s website for an example. As a result, Renew has been told to remove the bins and a complaint about them has been submitted to the UK ICO.

But at least the Renew recycling bins were in the open. The recent Black Hat conference saw a demonstration of a tool called the “Creepy Distributed Object Locator” or “CreepyDOL”. The conference session description for CreepyDOL gives a flavor for it:

Are you a person with a few hundred dollars and an insatiable curiosity about your neighbors, who is fed up with the hard work of tracking your target’s every move in person? Good news! You, too, can learn the intimate secrets and continuous physical location of an entire city from the comfort of your desk! CreepyDOL is a distributed sensing and data mining system combining very-low-cost sensors, open-source software, and a focus on user experience to provide personnel identification, tracking, and analysis without sending any data to the targets. In other words, it takes you from hand-crafted, artisan skeeviness to big-box commodity creepiness, and enables government-level total awareness for about $500 of off-the-shelf hardware.

When the wifi connectivity on your mobile device is turned on, it broadcasts MAC addresses, the names of recently connected networks, and other data. Together, the MAC address and that other data create a “fingerprint” of your device that can be used to track its physical movement through a neighborhood or an entire city over an extended period of time. If the names of the wireless networks a device has recently connected to are sufficiently unique (e.g., “1000Mass” – which could be an apartment building at 1000 Massachusetts Ave. in Washington, DC), the CreepyDOL system may be able to figure out where the owner of the device works, lives, or hangs out.

The CreepyDOL system uses a bunch of small black boxes that can be plugged into a wall socket. Each device monitors the signals of all wifi devices within range, automatically connects to any available wireless networks, sends the data it collects to all other nodes on the CreepyDOL network, and receives any data collected by other nodes. A person with access to this kind of system could use it for stalking, or simply to vacuum up data to be sorted out later. When the developer of the system used it on his own iPhone, he discovered that it captured his use of a dating website where his photo was displayed, along with other services broadcasting over his wifi connection that disclosed his full name and other information.
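Here’s a hypothetical sketch (not CreepyDOL’s actual code) of how a MAC address plus probed network names could be combined into a fingerprint that multiple sensor nodes can match up; the MAC addresses and network names are invented for illustration:

```python
import hashlib

# Illustrative sketch only: a device probing for Wi-Fi networks leaks its
# MAC address plus the names of networks it has joined before. Together
# those make a reasonably unique, trackable fingerprint.
def device_fingerprint(mac, probed_ssids):
    """Hash the MAC plus a sorted list of probed network names, so the
    same device yields the same fingerprint regardless of probe order."""
    material = mac + "|" + "|".join(sorted(probed_ssids))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# Two sensor nodes in different neighborhoods see probes from one device...
sighting_a = device_fingerprint("a4:5e:60:aa:bb:cc", ["1000Mass", "CoffeeShopWiFi"])
sighting_b = device_fingerprint("a4:5e:60:aa:bb:cc", ["CoffeeShopWiFi", "1000Mass"])

# ...and can link the sightings to the same device, i.e. the same person.
print(sighting_a == sighting_b)  # True
```

No camera, no GPS, no cooperation from the phone’s owner: just passively collected broadcasts, correlated across cheap listening nodes.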

So, another entry into the list of things the normal person can do to protect privacy:

Turn off the wifi on your mobile device unless you’re at home.

Posted in cybersecurity, mobile apps, privacy

The Lack of NSA Supervision

As I noted in my last post, President Obama and the NSA are saying, “Trust us, we’re not doing anything wrong and we’re not spying on Americans.” Even if that were the case, I argued, the wholesale collection of data on Americans’ communications “just in case” is contrary to the principles enshrined in the 4th Amendment and the concept of innocent until proven guilty.

The latest hand grenade tossed by Edward Snowden, reported by the Washington Post, is an internal NSA audit disclosing that:

The National Security Agency has broken privacy rules or overstepped its legal authority thousands of times each year since Congress granted the agency broad new powers in 2008. Most of the infractions involve unauthorized surveillance of Americans or foreign intelligence targets in the United States, both of which are restricted by law and executive order. They range from significant violations of law to typographical errors that resulted in unintended interception of U.S. emails and telephone calls…

The audit … dated May 2012, counted 2,776 incidents in the preceding 12 months of unauthorized collection, storage, access to or distribution of legally protected communications. Most were unintended. Many involved failures of due diligence or violations of standard operating procedure. The most serious incidents included a violation of a court order and unauthorized use of data about more than 3,000 Americans and green-card holders.

According to the audit, in one case the NSA implemented a new collection methodology and operated it for several months before disclosing it to the FISA court, which then held the new methodology unconstitutional.

But the most interesting item (to me) is the 2008 interception of a “large number” of calls placed from Washington due to a supposed “programming error” that confused the Washington, DC area code 202 with 20, the international dialing code for Egypt. While it’s possible that was a programming error, the people at Ft. Meade live and work within local calling distance of Washington, DC. If I wanted data on calls being made (perhaps by a spouse, an ex, whomever) and didn’t want to get fired for it, I’d commit a plausible “programming error” that just happened to capture the data I wanted along with a large amount of other noise.

Consistent with the principle of innocent until proven guilty, Jennifer Rubin at the Post makes some excellent points about why we should investigate further before we hyperventilate too much about the audit. Among other things, she says:

In percentage or absolute terms, what is the error rate compared to the total number of bits of data being collected? The Post notes that there were 2,776 incidents of error since 2008. Was this a 5 percent error rate or a 0.0000005 percent error rate? An information sheet put out by the NSA on Aug. 9 indicates that “According to figures published by a major tech provider, the Internet carries 1,826 Petabytes of information per day. In its foreign intelligence mission, NSA touches about 1.6% of that. However, of the 1.6% of the data, only 0.025% is actually selected for review.” That is still tons and tons of data. If there were only 2,776 errors in five years, it may be one of the best-run programs anywhere in government.
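Rubin’s question about rates can at least be bounded with the NSA’s own published figures. A quick back-of-the-envelope calculation (the percentages are the NSA’s, not independently verified, and the petabyte-to-terabyte conversion here is decimal):

```python
# Back-of-the-envelope arithmetic using the NSA's published figures:
# the Internet carries ~1,826 PB/day, the NSA "touches" 1.6% of that,
# and 0.025% of what it touches is selected for review.
internet_pb_per_day = 1826
touched_pb = internet_pb_per_day * 0.016    # PB/day "touched" (~29.2)
reviewed_pb = touched_pb * 0.00025          # PB/day actually reviewed

print(f"Touched:  {touched_pb:.1f} PB/day")
print(f"Reviewed: {reviewed_pb * 1000:.1f} TB/day")  # still terabytes, daily
```

Even the 0.025% slice works out to roughly 7 terabytes per day – “tons and tons of data,” as Rubin says – which is why a raw count of 2,776 incidents tells us little without knowing how many records each incident swept up.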

An important piece of information is that a single error can cause a significant amount of unauthorized data to be collected. The 202 vs. 20 error alone could have swept up thousands, if not millions, of calls. Similarly, as one commentator joked, “This is why somehow the NSA wound up with every e-mail sent to or from the state of Georgia, after that same programming error mistook it for the former Soviet republic.”

Rubin continues:

What happened when an error occurred? Was a U.S. citizen’s e-mail read? Was a phone call listened to? When the error was identified what action was taken to make sure the bit of data was not used in an improper way?

When were the most significant problems identified and did the serious error rate drop significantly after the fix?

Both of these are excellent questions, and we should get answers to them from the Administration. However, neither answer would justify the level of intrusion that has already been committed.

Finally, she asks, “What sorts of problems were reported to Congress and what were not? Were items that were not reported trivial?” She notes that the NSA decided the 202 vs. 20 “error” was sufficiently trivial that it did not need to be reported to Congress. Given that Congress itself sits in the 202 area code, and many members live there and carry cell phones with 202 numbers, it seems a little self-serving to decide not to disclose to your supervising authority that you “accidentally” might have captured data about all of their phone calls because that was “trivial.”

Up to this point, the government has claimed that the NSA is well supervised. Leaving aside the “fox guarding the hen house” nature of the NSA deciding which issues it should report to Congress and which ones are trivial, it also turns out that Congress’ ability to supervise the NSA involves looking at heavily redacted reports while sitting in a locked room without being allowed to take notes. And only 10% of members of Congress have staffers with the security clearance necessary to even enter the room. As Glenn Greenwald recently documented, “Members of Congress have been repeatedly thwarted when attempting to learn basic information about the National Security Agency (NSA) and the secret FISA court…”

But, as reported by the Post, according to the Chief Judge of the FISA court – the court that is supposed to supervise the NSA’s monitoring – “the court lacks the tools to independently verify how often the government’s surveillance breaks the court’s rules that aim to protect Americans’ privacy. Without taking drastic steps, it also cannot check the veracity of the government’s assertions that the violations its staff members report are unintentional mistakes.”

Before the Post story about the NSA audit, President Obama said, “In other words, it’s not enough for me as president to have confidence in these programs. The American people need to have confidence in them, as well.”

The people at the NSA are human. They make mistakes and they are trying to do a difficult job. However, for us to have confidence that our Constitutional and human rights are not being violated, there is supposed to be significant monitoring of these programs. If the recent disclosures are correct, that monitoring is poorly designed (at best) and is being actively thwarted (at worst).

– The NSA itself decides which violations are “trivial” and which should be reported.
– The FISA court relies on the NSA to disclose the things the FISA court should review and lacks the staff and the technical ability to actually supervise what the NSA is doing.
– Congress relies on the NSA to disclose things; only a few members of Congress have staffers cleared to review the NSA’s reports on itself, and members without such staffers appear to be refused access to the information they need.

As I noted before, the basic premise of the monitoring currently performed by the NSA is contrary to some of our country’s founding principles and should be stopped. Even if it isn’t stopped, it appears that the oversight structure associated with these programs needs to be significantly restructured so that the NSA is asking specific permission to do things rather than asking forgiveness after the fact.

As President Reagan once said, “Trust, but verify.”

Posted in privacy

Government Surveillance – Should We “Get Over It”?

As I’ve noted before, in 1999 Scott McNealy said, “You have zero privacy anyway. Get over it.” Roughly 14 years later (multiple lifetimes in terms of politics and the internet), President Obama said pretty much the same thing, albeit disguised in more erudite terms, when talking about the “conversation” we ought to have about government monitoring.

The interesting thing about his proposals for “improving” how the government monitors all of our communications is what he didn’t say. He didn’t say we should talk about whether the government should be monitoring our communications; he said we need to talk about the steps the government should take so that we trust it when it monitors all of our communications – so that we can have the same kind of confidence he does that it’s the right thing to do. In other words: the government is going to monitor all of your communications – get over it.

But should we get over it?

Now that we’ve had a glimpse into what the NSA is doing by “touching 1.6% of Internet information” – with programs like PRISM (which facilitates the government’s collection of data from companies like Google, Yahoo, Microsoft and Facebook) and XKeyScore (which, according to the NSA training materials disclosed by Edward Snowden, is the NSA’s “widest reaching” system for developing intelligence from the Internet and covers “nearly everything a typical user does on the Internet”) – we need to ask ourselves: what level of government surveillance and data collection should we actually permit?

We live in an “information economy,” and the essence of an economy is transactions in some kind of currency – in this case, personal data. We regularly evaluate and criticize what the government does with our tax dollars because we understand the current and future value of that currency. With these programs, we are evaluating what the government does with the currency of personal data without any meaningful idea of what it’s actually worth to us or what it costs us to give it up.

Many people look at what the government is doing and say things like, “Personally, I have nothing to hide, so it’s not really affecting me. It’s not like they’re invading my privacy. I worry about New York because it’s such a target.” In Britain, for example, the government has installed millions of public-surveillance cameras in cities and towns, which are watched by officials via closed-circuit television. In a campaign slogan for the program, the government declares: “If you’ve got nothing to hide, you’ve got nothing to fear.”

Although fairly common, that’s a very myopic view. As Cardinal Richelieu said, “If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him,” and Aleksandr Solzhenitsyn declared, “Everyone is guilty of something or has something to conceal. All one has to do is look hard enough to find what it is.” Almost everyone has “something to hide” — some intimate corners of our lives we don’t want exposed to strangers (or even worse, in some cases, people who know us), even if we’re not doing anything we think is “wrong” or “illegal.” As Billy Joel put it:

Well we all fall in love
But we disregard the danger
Though we share so many secrets
There are some we never tell
Why were you so surprised
That you never saw the stranger
Did you ever let your lover see
The stranger in yourself?

That desire for “privacy” comes out in polls, which also show that people aren’t nearly as relaxed about the idea of the government reading their emails and online chats. So when people say they’re OK with government monitoring, what they really mean is that they don’t think the information the government is collecting (a list of phone numbers, IP addresses, etc.) will expose any of those sensitive secrets. As President Obama said, “When it comes to telephone calls, nobody is listening to your telephone calls.” Instead, the government was just “sifting through this so-called metadata.”

So, if the disclosure of metadata isn’t a big deal, let’s look at the kinds of things we’re ok with having the government “know”:

• You received a call from a private investigator, then called your mother, spent some time on the website, then called a divorce lawyer. But the actual conversations were not recorded.
• You regularly receive email from even though you’re married (but no one knows what’s inside the email because they only collect the metadata, of course) and you are frequently recorded clicking through to the website where you frequently look at profiles of women looking for dates.
• You received a call from your doctor (but the conversation was not recorded), then went on and looked at books on cancer, then went to a website that specializes in realistic looking wigs.
• You called a suicide hotline from near the Golden Gate Bridge (since law enforcement can also access the location of your cell phone without a warrant). But the actual conversation remains a secret.
• You called a gynecologist, spoke for a half hour, and then called the Planned Parenthood chapter in a nearby state later that day. But the actual conversations were not recorded.
• Someone using a computer in your house or your mobile device regularly connects to a website with the URL

And if we add the type of data collected by license plate readers and cell phone tower records and your own GPS-tagged pictures and Tweets from your mobile device, the profile of your activities becomes fairly detailed.

But, while the idea that someone at the NSA could conceivably find these things out about you might be a little creepy, none of these things is illegal. It might make life awkward if it were disclosed, but we’re not talking about criminal behavior or matters of national security. But this is where President Obama’s “trust us” factor comes into play.

As long as the government is only using their access to “1.6 percent of all internet traffic” (which is more than Google “touches”) and the other sources in their database to look for specific terror or espionage suspects, very few people are actually going to pop up as targets. But since the existence of the program was secret before Snowden, and even the authority under which the program operated was classified, if they change their minds about the rules governing access to the database or how it’s put to use – we’re unlikely to ever know. Were someone to decide that the NSA database should be used whenever there is evidence of a crime, we could all be in trouble – for example as discussed in “How You’re Breaking the Law Every Day (and What You Can Do About It)” and the book “Technically That’s Illegal”.

This kind of data collection and analysis, hidden from the view (or even the knowledge) of the public and the data subject, creates a structural imbalance between the people and the government. The government has, under secret interpretations of laws, created secret programs that are capable of collecting and assembling enormous and frighteningly accurate dossiers on people’s personal activities. This is not a question of what people want to hide; it is a fundamental question about the power relationship between the government and the people.

The problems with government surveillance and aggregation of data go much further than the Orwellian “Big Brother” implications. As Prof. Daniel Solove has noted, they move to the Kafkaesque:

Government information-gathering programs are problematic even if no information that people want to hide is uncovered. In The Trial, the problem is not inhibited behavior but rather a suffocating powerlessness and vulnerability created by the court system’s use of personal data and its denial to the protagonist of any knowledge of or participation in the process. The harms are bureaucratic ones—indifference, error, abuse, frustration, and lack of transparency and accountability.

The best example of this is the TSA’s infamous “No-Fly List.” People arrived at the airport only to discover they were “on the list.” No one could tell them why they were on it or how to get off it, as described in great detail in this piece, which includes the following quotes from the FBI’s FAQ about the No-Fly List:

Can I find out if I am in the TSDB? [that’s Terrorist Screening Database]

The TSC cannot reveal whether a particular person is in the TSDB. The TSDB remains an effective tool in the government’s counterterrorism efforts because its contents are not disclosed. If TSC revealed who was in the TSDB, terrorist organizations would be able to circumvent the purpose of the terrorist watchlist by determining in advance which of their members are likely to be questioned or detained.

I am having trouble when I try to fly or cross the border into the United States. Does this mean I am in the TSDB?

No. At security checkpoints like our nation’s borders, there are many law enforcement or security reasons that an individual may be singled out for additional screening. Most agencies have redress offices (e.g., Ombudsman) where individuals who are experiencing repeated problems can seek help. If an individual is experiencing these kinds of difficulties, he/she should cooperate with the agency screeners and explain the recurring problems. The screeners can supply instructions on how to raise concerns to the appropriate agency redress office.

As the article points out, “So they won’t tell you if you’re on the list, and if you’re denied the ability to fly you could be on it… but then again, not necessarily, because other government agencies have their own secret blacklists.”

Solove goes on to highlight two more problems with this type of governmental data collection:

A related problem involves secondary use. Secondary use is the exploitation of data obtained for one purpose for an unrelated purpose without the subject’s consent. How long will personal data be stored? How will the information be used? What could it be used for in the future? The potential uses of any piece of personal information are vast. Without limits on or accountability for how that information is used, it is hard for people to assess the dangers of the data’s being in the government’s control.

In other words, without understanding the value of the currency the government has collected, we can’t accurately decide what we’ve given up and whether the government is making good use of this currency.

Yet another problem with government gathering and use of personal data is distortion. Although personal information can reveal quite a lot about people’s personalities and activities, it often fails to reflect the whole person. It can paint a distorted picture, especially since records are reductive—they often capture information in a standardized format with many details omitted.

For example, suppose government officials learn that a person has bought a number of books on how to manufacture methamphetamine. That information makes them suspect that he’s building a meth lab. What is missing from the records is the full story: The person is writing a novel about a character who makes meth. When he bought the books, he didn’t consider how suspicious the purchase might appear to government officials, and his records didn’t reveal the reason for the purchases. Should he have to worry about government scrutiny of all his purchases and actions? Should he have to be concerned that he’ll wind up on a suspicious-persons list? Even if he isn’t doing anything wrong, he may want to keep his records away from government officials who might make faulty inferences from them. He might not want to have to worry about how everything he does will be perceived by officials nervously monitoring for criminal activity. He might not want to have a computer flag him as suspicious because he has an unusual pattern of behavior.

Our privacy is not being stripped from us in one fell swoop – that would cause an uproar. And, as I’ve mentioned before, we are complicit in the erosion of our own personal privacy given the amount of data we willingly give out. But look at where we already are and where we could easily go:

• The government is monitoring the phone numbers you call and the ones that call you. “Well,” you say, “that helps them catch terrorists and criminals, and they’re just checking phone numbers, not listening to my calls.” But they don’t tell you how long they’re keeping your phone records or who is looking at them. So you just assume that they are all being logged. So do the criminals/terrorists, so they use methods to make this process more difficult for the government. The cleanest, easiest-to-analyze records belong to the law-abiding citizens who have nothing to hide.

• Then the government might start monitoring some phone calls. But, of course, in order to prevent the criminals/terrorists from circumventing this, they keep the monitoring program a secret. When it becomes public knowledge, you say, “Well, my life is boring. If they want to listen to me talk to Marie about her bunions, well that’s just fine. I’ve got nothing to hide and it’ll help them catch the criminals and the terrorists.” The sophisticated criminals and terrorists move on to other, more secure means of communicating, and the only calls that get recorded are the general public and the criminals too dumb to use a different tool.

• The local and state police are monitoring where your car goes by photographing the license plate and putting it into a database. “Well,” you say, “that helps them identify stolen cars, and it might help them with one of those Amber alerts.” Again, it’s difficult to find out how long this information is stored, who has access to it and how it is being used.

• The government might install more video cameras in public places (or if you’re in the UK, they already did). “Well,” you say, “those cameras prevent crime and they help solve crimes. I’m not doing anything wrong – I don’t care if someone’s watching me.” Studies have shown that the cameras in the UK don’t have a significant impact on solving crimes, but as with each of the other measures, it’s difficult to find out how long this information is stored, who has access to it and how it is being used.

• The government is capable of monitoring who you email and who emails you. “Well,” you say, “I feel sorry for the poor shmuck who has to look at that. I don’t do anything interesting, and if it helps them protect us from the terrorists, that’s ok.” As with the phone records, they don’t tell you how long they’re keeping your email history or who is looking at it. So you just assume that it is all being logged. So do the criminals/terrorists, so they use methods to make this process more difficult for the government. The cleanest, easiest-to-analyze records belong to the law-abiding citizens who have nothing to hide.

• The government is capable of monitoring every website you look at. They record them and never delete them, creating a history of what you were thinking and interested in at any given time that is far more accurate than your own memory. “Well,” you say, “that’s a little creepy. But no one actually looks at that stuff unless they think you’ve done something wrong.”

Each step seems trivial, and each is only a minor intrusion given the previous steps – and, after all, it’s for a good cause. But after a while the government will be watching, and will know, nearly everything about us in near real-time. We will have willingly walked into the Panopticon.

As Julian Sanchez wrote here,

It’s slow and subtle, but surveillance societies inexorably train us for helplessness, anxiety and compliance. Maybe they’ll never look at your call logs, read your emails or listen in on your intimate conversations. You’ll just live with the knowledge that they always could — and if you ever had anything worth hiding, there would be nowhere left to hide it.

So, if you’ve read this far, I think you know my answer. We should not “go gentle into that good night” of the Panopticon just because the government says it’s for our own good. The 4th Amendment to the US Constitution says,

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The wholesale collection of data and personal information by the government “just in case they need it in the future” is inconsistent with the principles of the 4th Amendment and it should be stopped. The government’s approach treats us all as potential criminals/terrorists whose information needs to be collected just in case they discover a reason to put us under the microscope. This premise is anathema to the concept of innocent until proven guilty. Information should only be collected about a person who is legitimately under investigation or who is identified as being reasonably related to an investigation. Government data collection should be subject to reasonable limitations on scope, duration of retention, use and access, and there should be a more reasonable way for the public to know about and approve (or disapprove) these programs. We should not have to rely on people like Edward Snowden (whatever you think of him) to bring these issues to the public consciousness.

Posted in privacy, social media