Child Protection vs User Privacy: Apple's Dilemma
Last week, Apple announced some new features, to be launched for its US users later this year, which are meant, according to the tech giant, to improve child safety by reducing the amount of child sexual abuse material on its products and services. The features introduced include:
- CSAM (Child Sexual Abuse Material) Detection in iCloud Photos: Using a technology called NeuralHash, Apple will be able to compare an Apple device user's iCloud Photos against known child sexual abuse material from databases such as the one maintained by the National Center for Missing and Exploited Children. If a threshold number of such images is identified, Apple's human moderators review the photos in question to make a final confirmation. Users can turn off iCloud Photos to opt out of this feature.
- Communication Safety in Messages: This feature allows parents to be notified when their underage children send or receive sexually explicit images on their Apple devices. The feature also obscures the received image in question and warns the underage user that their parent will be alerted if they open it. All of this happens on the device, meaning Apple's moderators never see the images in question (see the sketch after this list).
- Interventions in Siri and Search: This feature will intervene and warn users who try to search for child sexual abuse material using Siri or Apple's Search functionality.
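Since the Communication Safety flow is described as happening entirely on the device, a minimal sketch of that decision logic might look like the following. This is Python written purely for illustration; the `IncomingImage` type, the `is_explicit` flag and the settings flags are invented placeholders for Apple's on-device classifier and parental-control settings, not anything Apple has published.

```python
# Hypothetical sketch of the on-device Communication Safety flow described above.
# The classifier verdict and the settings flags are placeholders, not Apple's APIs.

from dataclasses import dataclass


@dataclass
class IncomingImage:
    data: bytes
    is_explicit: bool  # stand-in for an on-device ML classifier's verdict


def handle_incoming_image(image: IncomingImage, user_is_minor: bool,
                          parental_alerts_enabled: bool) -> dict:
    """Decide, entirely on the device, how to present a received image."""
    if not (user_is_minor and image.is_explicit):
        # Nothing to do: show the image normally, nobody is notified.
        return {"obscure": False, "warn_user": False, "alert_parent_if_opened": False}
    # Explicit image sent to a minor: blur it and warn before it can be viewed.
    return {
        "obscure": True,
        "warn_user": True,
        # The parent is only alerted if the child chooses to open the image.
        "alert_parent_if_opened": parental_alerts_enabled,
    }


if __name__ == "__main__":
    decision = handle_incoming_image(
        IncomingImage(data=b"...", is_explicit=True),
        user_is_minor=True,
        parental_alerts_enabled=True,
    )
    print(decision)
```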
With regard to the first feature, which is meant to detect CSAM on Apple devices, critics argue that, apart from invading users' privacy on their own devices, it opens a gateway for Apple to eventually expand the scope of its photo scanning from CSAM to other, less noble causes. One example cited is that oppressive governments might in the future pressure Apple to scan the devices of protesters and dissidents, an argument Apple has vehemently rebutted by pointing to its record of refusing to decrypt users' iPhones to assist law enforcement officials.
The company goes on to explain that, the way the feature works, Apple never sees the actual user photo that is compared against known CSAM, because what NeuralHash compares is hashes (unique numerical fingerprints representing the characteristics of an image) of the photos in question. Only when a user's image hashes have been matched against known CSAM images will Apple be able to see the actual photos, so that human moderators can confirm them as CSAM and notify both the user and the relevant authorities.
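As a rough illustration of what hash-based matching means here, consider the sketch below. NeuralHash is a perceptual hash designed to give visually identical images the same fingerprint even after resizing or recompression; the `hashlib.sha256` call below is only a stand-in so the example runs, and the known-hash set is invented for illustration.

```python
# Minimal sketch of hash-based matching, assuming a placeholder hash function.
# NeuralHash is a perceptual hash; sha256 is used here only as a stand-in.

import hashlib

# In the real system this would be a database of hashes supplied by
# organisations such as NCMEC; these entries are invented for illustration.
KNOWN_CSAM_HASHES = {
    "hypothetical-hash-1",
    "hypothetical-hash-2",
}


def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash such as NeuralHash."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_material(image_bytes: bytes) -> bool:
    """Only a match/no-match signal is produced; the photo itself isn't inspected."""
    return image_hash(image_bytes) in KNOWN_CSAM_HASHES


print(matches_known_material(b"holiday photo bytes"))  # False
```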
Apple also argues that it doesn't unhash a user photo based on a single match; instead it uses a voucher system to count the number of matched CSAM images and only unhashes them once a particular threshold has been reached. Simply put, if Apple matches one child abuse image on a user's device, it won't immediately forward it to human moderators; in Apple's telling, if you have no child pornography on your Apple device, you have nothing to worry about. The algorithm keeps counting identified CSAM on a user's device until it reaches a threshold (let's say it's 10 images for argument's sake), and only then will it unhash the user photos and make them available to Apple's human moderators.
Apple explains that although even one image of child pornography is deplorable enough to warrant a report to the relevant authorities, having a threshold instead of flagging each individual match is meant to prevent accidental flagging of user images as CSAM: one image match might be a mistake, but 10 matches are far less likely to be.
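Sticking with the hypothetical threshold of 10 used above, the counting logic Apple describes can be sketched as follows. The `MatchCounter` class and the opaque voucher bytes are illustrative stand-ins, not Apple's actual cryptographic safety-voucher scheme.

```python
# Hedged sketch of the threshold idea: matches are only counted, and nothing is
# surfaced for human review until the count crosses the threshold.

THRESHOLD = 10  # hypothetical figure, following the article's example


class MatchCounter:
    def __init__(self, threshold: int = THRESHOLD):
        self.threshold = threshold
        self.vouchers = []  # one opaque voucher per matched image

    def record_match(self, voucher: bytes) -> bool:
        """Store a voucher; return True once enough matches have accumulated."""
        self.vouchers.append(voucher)
        return len(self.vouchers) >= self.threshold


counter = MatchCounter()
for i in range(1, 11):
    ready_for_review = counter.record_match(f"voucher-{i}".encode())
    print(i, ready_for_review)  # True is printed only on the 10th match
```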
With regard to the second feature, alerting parents when underage children send or receive explicit material, critics argue that Apple is enabling parents to invade children's privacy by essentially "snitching" on what they send or receive on their phones. They argue that children, especially queer or transgender ones who are still learning about their bodies, might send explicit images to friends, and that by informing parents, Apple is intruding on this communication with their peers.
Apple's new features present quite the dilemma for the tech giant, which up to now has been regarded as the beacon of privacy among big tech companies like Google and Facebook, firms that have been accused of getting rich on the back of a lack of regard for their users' privacy. On the other hand, Apple can argue that it is finally doing something about the proliferation of child sexual abuse material on its products and services, something it has been accused of being slow to do in the past compared to its big tech competitors.
Whether Apple comes out of this dilemma labeled as just another big tech privacy trampler or as a beacon of child protection will depend on how well the $2 trillion tech giant manages to justify these new features. Judging by the reception of the features and the aftermath, so far, so not good.