Entries Tagged "ACLU"

How Did Facebook Beat a Federal Wiretap Demand?

This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge’s decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[…]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here’s the 2018 story. Slashdot thread.

Posted on April 29, 2020 at 12:29 PM

Internet Voting in Puerto Rico

Puerto Rico is considering allowing Internet voting. I have joined a group of security experts in a letter opposing the bill.

Cybersecurity experts agree that under current technology, no practically proven method exists to securely, verifiably, or privately return voted materials over the internet. That means that votes could be manipulated or deleted on the voter’s computer without the voter’s knowledge, local elections officials cannot verify that the voter’s ballot reflects the voter’s intent, and the voter’s selections could be traceable back to the individual voter. Such a system could violate protections guaranteeing a secret ballot, as outlined in Section 2, Article II of the Puerto Rico Constitution.

The ACLU agrees.

Posted on March 24, 2020 at 6:01 AM

Computers and Video Surveillance

It used to be that surveillance cameras were passive. Maybe they just recorded, and no one looked at the video unless they needed to. Maybe a bored guard watched a dozen different screens, scanning for something interesting. In either case, the video was only stored for a few days because storage was expensive.

Increasingly, none of that is true. Recent developments in video analytics—fueled by artificial intelligence techniques like machine learning—enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human issues like distraction, fatigue, training, or needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

An ACLU report published Thursday called “the Dawn of Robot Surveillance” says AI-aided video surveillance “won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated ‘video analytics’ technologies threaten to fundamentally change the nature of surveillance.”

Let’s take the technologies one at a time. First: video analytics. Computers are getting better at recognizing what’s going on in a video. Detecting when a person or vehicle enters a forbidden area is easy. Modern systems can alarm when someone is walking in the wrong direction—going in through an exit-only corridor, for example. They can count people or cars. They can detect when luggage is left unattended, or when previously unattended luggage is picked up and removed. They can detect when someone is loitering in an area, is lying down, or is running. Increasingly, they can detect particular actions by people. Amazon’s cashier-less stores rely on video analytics to figure out when someone picks an item off a shelf and doesn’t put it back.
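
To make that concrete, here is a minimal sketch of one of the simpler analytics primitives described above: flagging motion inside a restricted region of the frame. It uses OpenCV background subtraction in Python; the video file name, zone coordinates, and pixel threshold are hypothetical placeholders, and commercial systems use far more sophisticated models.

```python
# Sketch: detect movement inside a "forbidden zone" using background subtraction.
# The file name, zone coordinates, and threshold are hypothetical placeholders.
import cv2

FORBIDDEN_ZONE = (100, 100, 300, 300)  # x1, y1, x2, y2 (hypothetical)

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical recorded feed
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)          # foreground (moving) pixels
    x1, y1, x2, y2 = FORBIDDEN_ZONE
    zone = mask[y1:y2, x1:x2]
    if cv2.countNonZero(zone) > 500:        # enough motion inside the zone
        print("alert: movement detected in restricted area")

cap.release()
```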

More than identifying actions, video analytics allow computers to understand what’s going on in a video: They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Those same Amazon in-store cameras can analyze customer sentiment. Other systems can describe what’s happening in a video scene.

Computers can also identify people. AIs are getting better at identifying people in those videos. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we give to Facebook and other social media sites, and the photos governments collect in the process of issuing ID cards and driver's licenses. The technology already exists to automatically identify everyone a camera "sees" in real time. Even without video identification, we can be identified by the unique information continuously broadcast by the smartphones we carry with us everywhere, or by our laptops or Bluetooth-connected devices. Police have been tracking phones for years, and this practice can now be combined with video analytics.
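
As a rough illustration of the identification step, here is a sketch using the open-source face_recognition library as a stand-in for the commercial systems described above. The image file names and the one-person "watchlist" are hypothetical.

```python
# Sketch: match faces found in a camera frame against an enrolled photo.
# Image paths are hypothetical; real systems enroll from ID photos or social media.
import face_recognition

# "Enrollment": compute a face encoding for a known person.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# "Surveillance": encode every face found in a captured frame and compare.
frame = face_recognition.load_image_file("camera_frame.jpg")
for candidate in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], candidate)[0]:
        print("probable match with enrolled identity")
```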

Once a monitoring system identifies people, their data can be combined with other data, either collected or purchased: from cell phone records, GPS surveillance history, purchasing data, and so on. Social media companies like Facebook have spent years learning about our personalities and beliefs by what we post, comment on, and “like.” This is “data inference,” and when combined with video it offers a powerful window into people’s behaviors and motivations.
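
A toy example of that kind of data inference, with entirely hypothetical column names and records, is just a database join: once camera detections carry an identity, a single merge links them to whatever else has been collected or purchased about that person.

```python
# Sketch: link camera identifications to purchased records on a shared ID.
# All data and column names here are hypothetical.
import pandas as pd

sightings = pd.DataFrame({
    "person_id": ["A17", "B42"],
    "location": ["transit station", "pharmacy"],
    "timestamp": ["2019-06-01 08:03", "2019-06-01 17:40"],
})

purchased = pd.DataFrame({
    "person_id": ["A17", "B42"],
    "inferred_interest": ["political rallies", "health products"],
})

# One join turns anonymous detections into a behavioral profile.
profile = sightings.merge(purchased, on="person_id")
print(profile)
```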

Camera resolution is also improving. Gigapixel cameras are so good that they can capture individual faces and identify license plates in photos taken miles away. "Wide-area surveillance" cameras can be mounted on airplanes and drones, and can operate continuously. On the ground, cameras can be hidden in street lights and other regular objects. In space, satellite cameras have also dramatically improved.

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention—or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.

This is bad, because ubiquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives—when the surveillance system gets it wrong—will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. A recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to deliberately and intelligently make decisions about it.

Some jurisdictions are starting to notice. Last month, San Francisco became the first city to ban facial recognition technology by police and other government agencies. Similar bans are being considered in Somerville, MA, and Oakland, CA. These are exceptions, and limited to the more liberal areas of the country.

We often believe that technological change is inevitable, and that there’s nothing we can do to stop it—or even to steer it. That’s simply not true. We’re led to believe this because we don’t often see it, understand it, or have a say in how or when it is deployed. The problem is that the underlying technologies of cameras, image resolution, machine learning, and artificial intelligence are complex and specialized.

Laws like what was just passed in San Francisco won’t stop the development of these technologies, but they’re not intended to. They’re intended as pauses, so our policy making can catch up with technology. As a general rule, the US government tends to ignore technologies as they’re being developed and deployed, so as not to stifle innovation. But as the rate of technological change increases, so do the unanticipated effects on our lives. Just as we’ve been surprised by the threats to democracy caused by surveillance capitalism, AI-enabled video surveillance will have similar surprising effects. Maybe a pause in our headlong deployment of these technologies will allow us the time to discuss what kind of society we want to live in, and then enact rules to bring that kind of society about.

This essay previously appeared on Vice Motherboard.

Posted on June 14, 2019 at 12:04 PM

Former Mozilla CTO Harassed at the US Border

This is a pretty awful story of how Andreas Gal, former Mozilla CTO and US citizen, was detained and threatened at the US border. CBP agents demanded that he unlock his phone and computer.

Know your rights when you enter the US. The EFF publishes a handy guide. And if you want to encrypt your computer so that you are unable to unlock it on demand, here’s my guide. Remember not to lie to a customs officer; that’s a crime all by itself.

Posted on April 4, 2019 at 2:10 PM

Merry Christmas from the NSA

On Christmas Eve, the NSA released a bunch of audit reports on illegal spying using EO 12333 from 2001 to 2013.

Bloomberg article.

The heavily-redacted reports include examples of data on Americans being e-mailed to unauthorized recipients, stored in unsecured computers and retained after it was supposed to be destroyed, according to the documents. They were posted on the NSA’s website at around 1:30 p.m. on Christmas Eve.

In a 2012 case, for example, an NSA analyst “searched her spouse’s personal telephone directory without his knowledge to obtain names and telephone numbers for targeting,” according to one report. The analyst “has been advised to cease her activities,” it said.

The documents were released in response to an ACLU lawsuit.

Another article.

EDITED TO ADD (12/27): Remember Edward Snowden’s comment that he could eavesdrop on anybody? “I, sitting at my desk, certainly had the authorities to wiretap anyone, from you, or your accountant, to a federal judge, to even the President if I had a personal email.” Lots of people have accused him of lying. Here’s former NSA General Counsel Stewart Baker: “All that makes Snowden’s claim about being able to wiretap anyone extremely unlikely—and certainly not demonstrated by the latest disclosures, despite Glenn Greenwald’s claims to the contrary.”

These documents demonstrate that Snowden is probably correct. In these documents, NSA agents target all sorts of random Americans.

Posted on December 26, 2014 at 6:29 AM
