The brouhaha over Apple's plan to stop child porn

Assif Shameen • 9 min read
How does Apple's child sexual abuse material (CSAM) detection technology work and what are its unintended consequences?

Two weeks ago, Apple Inc released new documents related to its ongoing security threat review of its child sexual abuse material (CSAM) scanning and detection initiative, which is essentially aimed at keeping child abuse material off iPhones and iPads. The documents gave new details about how the tech giant had designed its CSAM detection system to combat child pornography.

CSAM scanning aims to identify images of child sexual abuse using a process called hashing, which turns images into numbers that can be compared against a database of known abuse images. Apple announced the CSAM initiative alongside enhanced communication safety features in Messages, including parental controls that warn young children before they view explicit photos.
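In plain terms, hashing maps an image to a fixed-length number that can be checked against a database of hashes of known abuse images, without anyone ever looking at the picture itself. The sketch below, in Python, is purely illustrative: the database value is invented and an ordinary cryptographic hash is used, whereas Apple's system relies on a perceptual hash designed to keep matching even after an image has been resized or recompressed.

```python
import hashlib

# Hypothetical database of hashes of known abuse images. In reality these
# hashes come from child-safety organisations and are not public; the value
# below is simply the SHA-256 of the string "test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def image_hash(image_bytes: bytes) -> str:
    """Turn an image (raw bytes) into a fixed-length number, here a hex string."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_image(image_bytes: bytes) -> bool:
    """Check whether the image's hash appears in the known-hash database."""
    return image_hash(image_bytes) in KNOWN_HASHES
```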

You might think almost every rational human being finds CSAM abhorrent. But in the cut-throat world of Big Tech where five giants worth between US$1 trillion ($1.36 trillion) and US$2.5 trillion are battling for dominance, even keeping CSAM off your smartphone has become an excuse to hammer rivals and bring them down a notch or two.

‘Backdoor’ controversy

In part because of unclear messaging, the iPhone maker’s CSAM scanning and detection initiative has become a battle over privacy and the potential for governments and law enforcement agencies to gain “backdoor” access to data. Top executives at rival tech giants have used the controversy to accuse Apple of colluding with China to provide a backdoor that would allow Beijing to peep into the smartphone data of its citizens.

Facebook senior executive and WhatsApp CEO Will Cathcart warned earlier this month that, by allowing such pattern-matching technology to be used on encrypted photos on iPhones, Apple was essentially opening itself and others to pressure from governments like China to do the same for other types of content, such as images of opposition protests. Apple says this is not possible because no such backdoor exists. Apple has for years been forced to store Chinese users’ iCloud data, such as photos, in a data centre inside the country as part of a quid pro quo that allows it to sell devices in China. Even so, most of the data of Chinese iPhone owners stays on the phone, and Beijing does not have access to it.

Apple has long differentiated itself from other tech players like Google’s owner Alphabet Inc, Facebook, Amazon.com and Microsoft by vowing never to surreptitiously collect data or to monetise the data it does collect legally. Apple wants to be known for controlling an ecosystem that protects both its users and their data. Because people trust Apple, they are willing to pay premium prices for its products and services.

Apple says it wants to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of CSAM. New software updates to its virtual assistant Siri and its Search feature will provide parents and children more information and help if they encounter unsafe situations. When a user searches for topics related to child sexual abuse, Apple will redirect the user to resources for reporting CSAM, or getting help for an attraction to such content.

A more controversial feature is its neuralMatch software, which scans on-device images to find CSAM before they are uploaded to its cloud service iCloud. If potential CSAM is found, it is reported to Apple’s moderators, who can then turn the images over to the National Center for Missing and Exploited Children in the case of a confirmed match. Apple claims the feature protects users’ privacy while still allowing it to find illegal content. Its detractors say the provision is basically a security backdoor.

To be sure, the iPhone maker has a long history of resisting government pressure — even in extreme circumstances like terrorist attacks and mass shootings — to insert a backdoor in its code that would let law enforcement agencies access its devices. After the terrorist shooting in San Bernardino, California, six years ago, the FBI pressed Apple to unlock the iPhone of one of the attackers to expedite its investigation and prevent further attacks. Apple refused, saying that protecting customers’ privacy and data is the key reason they trust the iPhone maker over rival brands and tech firms that are quick to monetise any data they collect. In 2019, after another shooting, this time in Florida, the FBI again approached Apple to help unlock an iPhone so it could gather more information on the shooter and his motives. Apple again turned down the FBI’s request.

Over the years, end-to-end encryption has become key to many products and services, particularly messaging. Apple and privacy advocates have long argued that if the FBI were allowed “backdoor” access to the data of terrorists or shooters, it would open a Pandora’s box, leading to increased snooping into a device that most people consider their most personal possession. It is better to protect data and keep it far from the prying eyes of law enforcement agencies and governments than to install a backdoor key that can be abused at any time on the flimsiest of excuses.

For their part, law enforcement agencies say encrypted devices and messaging apps like Apple’s iMessage and Facebook’s WhatsApp are used by terrorists, organised crime syndicates and child abusers. Apple and others have argued that creating a backdoor will only open the way for hackers, cyber criminals and unscrupulous governments to abuse it.

The term “freedom of speech” is embedded in the First Amendment of the US Constitution. Most Americans are proud of their key rights as citizens, including the free and public expression of opinions without censorship, interference or restraint by the government. As disgusting as child sexual abuse imagery is, most Americans find the idea that law enforcement agencies like the local police or the FBI could read or scan content on their smartphones almost equally abhorrent.

Apple’s software chief Craig Federighi has argued in recent media interviews that its neuralMatch tool is not a security backdoor because it does not provide direct access to content through the operating system. The technology proactively scans images on iPhones, looking for matches with those in a US database of known child abuse images. Matches are flagged only when photos are uploaded to iCloud, studied by human reviewers and, if verified, sent to law enforcement. If you have iCloud Photos turned off, Apple cannot scan any child porn images. Moreover, having just one such image is unlikely to get you immediately reported to law enforcement authorities; you would be reported only if you had several such images.

Much to Apple’s chagrin, instead of applauding it for taking a bold initiative to keep child porn off smartphones and following its lead, lobbyists for Facebook, Google and other companies that surreptitiously gather our data and sell advertising against it quickly turned the CSAM initiative into a debate over privacy, claiming Apple was deliberately scanning or searching through people’s iPhones and iPads to find CSAM.

What constitutes child porn? Will a young mother taking a photo of her toddler being bathed be hauled in by the FBI because the new CSAM software flags her as a peddler of child porn? Apple says a bathtub snap of a toddler taken by a mom will not match the known abuse images its algorithms flag and, as such, won’t trigger a child abuse alert.

The way Federighi tells it, Apple has come up with a way to find child sex abuse images uploaded to iPhone owners’ iCloud Photos without actually dissecting and probing photo libraries. Apple’s software chief notes that it is not going through iPhone photo libraries because it uses unreadable hashes, or strings of numbers that represent known CSAM images validated by child safety organisations. The hashes are stored on the device. Once a certain threshold of CSAM matches is met, Apple is alerted to the iCloud Photo library in question — and only those matching images can be seen by Apple.
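A minimal sketch of the threshold step Federighi describes, again with invented names and an illustrative cut-off (the article says only that "several" matching images are needed): hashes of uploaded photos are compared against the on-device list, matches are merely counted, and nothing is escalated for human review until the count crosses the threshold.

```python
from typing import Iterable, Set

# Illustrative threshold only; Apple sets the real figure and the article
# says just that "several" matching images are required.
MATCH_THRESHOLD = 30


def should_alert_reviewers(photo_hashes: Iterable[str], known_hashes: Set[str]) -> bool:
    """Count uploaded photos whose hashes match known CSAM hashes.

    Individual matches reveal nothing on their own; only once the count
    crosses the threshold would the matching images become visible to
    Apple's human reviewers.
    """
    matches = sum(1 for h in photo_hashes if h in known_hashes)
    return matches >= MATCH_THRESHOLD
```

In Apple's published design the counting is done cryptographically, via "safety vouchers" attached to each upload, so neither the device nor Apple learns anything about sub-threshold matches; the plain counter above is only meant to convey the logic.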

Why is Apple trying to stamp out child porn now? Over the past three years, regulators in Washington, London, Canberra, Wellington and Ottawa have urged global tech giants to enable governments — with appropriate legal authority — to gain access to data on devices, particularly when it comes to heinous crimes or urgent matters of national security like a terrorist attack.

By making the first move in its home territory, some analysts believe, Apple is hoping that legislators around the world will back off or water down legislation that would tie the hands of tech giants on privacy. Rivals like Facebook and Google are upset that, by making such a preemptive move, Apple may have forced their hands on compliance.

Slippery slope

The Electronic Frontier Foundation (EFF), a digital rights group, believes the CSAM initiative could be a slippery slope. “One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content,” it said in a statement last week. That database, managed by the Global Internet Forum to Counter Terrorism, is troubling without external oversight, EFF said. “While it is impossible to know whether the database has overreached, we do know that platforms regularly flag critical content, including documentation of violence and repression, counter-speech, art and satire,” EFF noted.

The initiative’s main critic has been Facebook, along with its WhatsApp subsidiary. One of the biggest battles between Apple and Facebook is over instant messaging. WhatsApp aside, Facebook also has another app, Messenger. Apple is the undisputed champion of messaging in America, the most coveted market of all, while Facebook dominates much of the rest of the world. Apple’s iMessage is the largest messaging app in the US because 55% of smartphone-owning Americans have an iPhone; if you include people who don’t have an iPhone but do have another Apple device, that number is close to 60%. iMessage is safe and secure, with end-to-end encryption.

WhatsApp is more popular in Europe and Asia, where penetration of iPhones, iPads and Apple’s Mac PCs is low. But Facebook has in recent years caught up with Apple on WhatsApp’s safety and security features, even if many users remain concerned that their personal data is being mined and monetised to boost the social media giant’s ad revenues and bottom line.

Clearly, Apple’s messaging and PR on CSAM could have been a lot better, but privacy and CSAM are issues that need to be taken seriously and tackled. Unfortunately, the longstanding Apple-Facebook rivalry has made it all a kerfuffle.

Assif Shameen is a technology and business writer based in North America

Photo: Bloomberg
