Are Apple’s Tools Against Child Abuse Bad for Your Privacy?
Apple unveiled a plan two weeks ago founded in good intentions: Root out images of child sexual abuse from iPhones.
But as is often the case when changes are made to digital privacy and security, technology experts quickly identified the downside: Apple’s approach to scanning people’s private photos could give law enforcement authorities and governments a new way to surveil citizens and persecute dissidents. Once one chip in privacy armor is identified, anyone can attack it, they argued.
The conflicting concerns laid bare an intractable issue that the tech industry seems no closer to solving today than when Apple first fought with the F.B.I. over a dead terrorist’s iPhone five years ago.
The technology that protects the ordinary person’s privacy can also hamstring criminal investigations. But the alternative, according to privacy groups and many security experts, would be worse.
“Once you create that back door, it will be used by people whom you don’t want to use it,” said Eva Galperin, the cybersecurity director at the Electronic Frontier Foundation, a digital-rights group. “That is not a theoretical harm. That is a harm we’ve seen happen time and time again.”
Apple was not expecting such backlash. When the company announced the changes, it sent reporters complex technical explainers and laudatory statements from child-safety groups, computer scientists and Eric H. Holder Jr., the former U.S. attorney general. After the news went public, an Apple spokesman emailed a reporter a tweet from Ashton Kutcher, the actor who helped found a group that fights child sexual abuse, cheering the moves.
But his voice was largely drowned out. Cybersecurity experts, the head of the messaging app WhatsApp and Edward J. Snowden, the former intelligence contractor who leaked classified documents about government surveillance, all denounced the move as setting a dangerous precedent that could enable governments to look into people’s private phones. Apple scheduled four more press briefings to combat what it said were misunderstandings, admitted it had bungled its messaging and announced new safeguards meant to address some concerns. More than 8,000 people responded with an open letter calling on Apple to halt its moves.
As of now, Apple has said it is going forward with the plans. But the company is in a precarious position. It has for years worked to make iPhones more secure, and in turn, it has made privacy central to its marketing pitch. But what has been good for business also turned out to be bad for abused children.
A few years ago, the National Center for Missing and Exploited Children began disclosing how often tech companies reported cases of child sexual abuse material, commonly known as child pornography, on their products.
Apple said it would soon allow parents to turn on a feature that can flag when their children send or receive nude photos in text messages. Credit: Apple
Apple was near the bottom of the pack. The company reported 265 cases to the authorities last year, compared with Facebook’s 20.3 million. That enormous gap was largely because Apple elected not to look for such images in order to protect the privacy of its users.
In late 2019, after reports in The New York Times about the proliferation of child sexual abuse images online, members of Congress told Apple that it had better do more to help law enforcement officials or they would force the company to do so. Eighteen months later, Apple announced that it had figured out a way to tackle the problem on iPhones, while, in its view, protecting the privacy of its users.
The plan included modifying its virtual assistant, Siri, to direct people who ask about child sexual abuse to appropriate resources. Apple said it would also soon enable parents to turn on technology that scans images in their children’s text messages for nudity. Children 13 and older would be warned before sending or viewing a nude photo, while parents could ask to be notified if children under 13 did so.
Those changes were met with little controversy compared with Apple’s third new tool: software that scans users’ iPhone photos and compares them against a database of known child sexual abuse images.
To prevent false positives and hide the images of abuse, Apple took a complex approach. Its software reduces each photo to a unique set of numbers — a sort of image fingerprint called a hash — and then runs them against hashes of known images of child abuse provided by groups like the National Center for Missing and Exploited Children.
If 30 or more of a user’s photos appear to match the abuse images, an Apple employee reviews the matches. If any of the photos show child sexual abuse, Apple sends them to the authorities and locks the user’s account. Apple said it would turn on the feature in the United States over the next several months.
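To make the fingerprint-and-threshold logic concrete, here is a minimal sketch of the scheme as the article describes it. It is not Apple’s implementation: Apple’s published design relies on a perceptual image hash and cryptographic safeguards so that matches below the threshold remain hidden, whereas this toy version uses an ordinary SHA-256 digest and plain set lookups purely to illustrate the matching and threshold steps. The function names, the stand-in hash, and the helper structure are all illustrative assumptions.

```python
import hashlib
from pathlib import Path

# Threshold drawn from the article: an account is flagged for human
# review only after 30 or more apparent matches.
MATCH_THRESHOLD = 30


def fingerprint(photo_bytes: bytes) -> str:
    """Stand-in for an image fingerprint.

    Apple's design uses a perceptual hash so that visually identical
    images map to the same value; a SHA-256 digest is used here only
    to keep the example self-contained.
    """
    return hashlib.sha256(photo_bytes).hexdigest()


def count_matches(photo_paths: list[Path], known_hashes: set[str]) -> int:
    """Count how many of a user's photos match the known-image database."""
    return sum(
        1
        for path in photo_paths
        if fingerprint(path.read_bytes()) in known_hashes
    )


def should_flag_for_review(photo_paths: list[Path],
                           known_hashes: set[str]) -> bool:
    """Return True only once the match count crosses the threshold.

    In the system the article describes, crossing the threshold triggers
    human review rather than automatic reporting.
    """
    return count_matches(photo_paths, known_hashes) >= MATCH_THRESHOLD
```

The key design point the sketch captures is that no single match triggers any action; only an accumulation of matches against a fixed database of known images does, which is how Apple argued the system limits false positives.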
Law enforcement officials, child-safety groups, abuse survivors and some computer scientists praised the moves. In statements provided by Apple, the president of the National Center for Missing and Exploited Children called it a “game changer,” while David Forsyth, chairman of computer science at the University of Illinois at Urbana-Champaign, said that the technology would catch child abusers and that “harmless users should experience minimal to no loss of privacy.”
But other computer scientists, as well as privacy groups and civil-liberty lawyers, immediately condemned the approach.
Other tech companies, like Facebook, Google and Microsoft, also scan users’ photos to look for child sexual abuse, but they do so only on images that are on the companies’ computer servers. In Apple’s case, much of the scanning happens directly on people’s iPhones. (Apple said it would scan photos that users had chosen to upload to its iCloud storage service, but the scanning would still happen on the phone.)
To many technologists, Apple has opened a Pandora’s box. The tool would be the first technology built into a phone’s operating system that can look at a person’s private data and report it to law enforcement authorities. Privacy groups and security experts are worried that governments looking for criminals, opponents or other targets could find plenty of ways to use such a system.
“As we now understand it, I’m not so worried about Apple’s specific implementation being abused,” said Alex Stamos, a Stanford University researcher who previously led Facebook’s cybersecurity efforts. “The problem is, they’ve now opened the door to a class of surveillance that was never open before.”
If governments had previously asked Apple to analyze people’s photos, the company could have responded that it couldn’t. Now that it has built a system that can, Apple must argue that it won’t.
“I think Apple has clearly tried to do this as responsibly as possible, but the fact they’re doing it at all is the problem,” Ms. Galperin said. “Once you build a system that can be aimed at any database, you will be asked to aim the system at a database.”
In response, Apple has assured the public that it will not accede to such requests. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future,” the company said in a statement.
Apple has indeed fought demands to weaken smartphone encryption in the United States, but it has also bowed to governments in other cases. In China, where Apple makes nearly all of its products, it stores its Chinese customers’ data on computer servers owned and run by a state-owned company, at the demand of the government.
In the United States, Apple has been able to avoid more intense fights with the government because it still turns over plenty of data to law enforcement officials. From January 2018 through June 2020, the most recent data available, Apple turned over the contents of 340 customers’ iCloud accounts a month to American authorities with warrants. Apple still hasn’t fully encrypted iCloud, allowing it to have access to its customers’ data, and the company scrapped plans to add more encryption when the F.B.I. balked, according to Reuters.
Apple’s fights with the F.B.I. over smartphone encryption have also been defused because other companies have regularly been able to hack into iPhones for the police. It is still expensive and time-consuming to get into a locked iPhone, which has created an effective middle ground: the police can gain access to the devices they need for investigations, but the cost and effort make it harder for them to abuse the technology.
That stalemate on encryption has also enabled Apple to retain its brand as a champion of privacy, because it is not actively giving the police a way in. But that compounds the potential harm of its new tools, security experts said.
For years, technologists have argued that giving the police a way into phones would fundamentally undermine the devices’ security, but now governments can point to Apple’s endorsement of its photo-scanning tools as a method that helps the police while preserving privacy.
Apple has “taken all their platinum privacy branding and they’ve applied it to this idea,” Mr. Stamos said. “This Apple solution screws up the entire debate and sets us back years.”