Apple announced late Thursday evening that, with the launch of iOS 15 and its related OS updates, it will begin scanning iCloud Photos in the US for known Child Sexual Abuse Material (CSAM) and plans to report the results to the National Center for Missing and Exploited Children (NCMEC).
Even before Apple presented its plans in detail, news of the CSAM initiative had leaked out, as the Financial Times now reports. As a result, security researchers have begun to raise concerns about how Apple's new image scanning protocol could be used in the future. Apple uses a "NeuralHash" system to compare known CSAM images with photos on a user's iPhone before they are uploaded to iCloud.
Matthew Green on Apple's plan: "A really bad idea"
If there's a match, the photo is uploaded along with a cryptographic safety voucher, and once a certain threshold of matches is reached, a review is triggered to verify that the person actually has CSAM on their devices. For now, Apple uses this technology only to scan and match images against known child abuse material. But security researchers fear it could be adapted in the future to look for other types of images, such as anti-government signs at protests. In a series of tweets, Johns Hopkins cryptography researcher Matthew Green called CSAM scanning a "really bad idea" because it could later be expanded to scan end-to-end encrypted photos rather than just content uploaded to iCloud. Side note: for children, Apple is implementing a separate scanning feature that looks for sexually explicit content directly in messages that are end-to-end encrypted.
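To make the described flow easier to follow, here is a minimal sketch in Swift of how on-device hash matching with safety vouchers and a reporting threshold could conceptually work. This is not Apple's actual code or API; every name here (neuralHash, knownCSAMHashes, SafetyVoucher, matchThreshold) is a hypothetical placeholder, and the threshold value is purely illustrative.

```swift
import Foundation

// Hypothetical stand-in types; real NeuralHash output and vouchers are more complex.
typealias PerceptualHash = Data

struct SafetyVoucher {
    let encryptedMatchInfo: Data  // cryptographic material tied to the match
}

// Assumed to exist on the device: a database of known CSAM hashes (empty here).
let knownCSAMHashes: Set<PerceptualHash> = []

// Placeholder for a neural-network-based perceptual hash ("NeuralHash").
func neuralHash(of image: Data) -> PerceptualHash {
    return Data(image.prefix(32))  // dummy value for illustration only
}

// Client side: hash the photo before upload and attach a voucher on a match.
func prepareForUpload(image: Data) -> (payload: Data, voucher: SafetyVoucher?) {
    let hash = neuralHash(of: image)
    if knownCSAMHashes.contains(hash) {
        // A single match reveals nothing on its own; it only adds a voucher.
        return (image, SafetyVoucher(encryptedMatchInfo: hash))
    }
    return (image, nil)
}

// Server side (conceptually): act only once an account's voucher count
// crosses a threshold, then hand the case to manual review.
let matchThreshold = 10  // arbitrary illustrative value
func shouldTriggerReview(voucherCount: Int) -> Bool {
    return voucherCount >= matchThreshold
}
```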
Scanning technology is said to be highly accurate
Green also raised concerns about the hashes Apple plans to use, as there could potentially be "collisions" in which a harmless file shares a hash with known CSAM, resulting in a false positive. As a reminder, Apple says its scanning technology has an "extremely high level of accuracy" to ensure accounts are not falsely flagged. Reports are manually reviewed before a person's iCloud account is disabled and a report is sent to the NCMEC. Green believes Apple's implementation will push other tech companies to adopt similar techniques.
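As a rough back-of-the-envelope illustration of why a match threshold matters for the collision concern (not Apple's published analysis), the sketch below assumes each harmless photo collides with a known hash with some small, independent probability and computes how unlikely it is that an account accumulates enough matches to be flagged. All numbers are made up for the example.

```swift
import Foundation

// Illustrative only: if each harmless photo collides with probability p,
// independently, the chance that at least t photos out of a library of n
// collide follows a binomial tail. Requiring t matches before any review
// drives the accidental-flag probability down sharply.
func falseFlagProbability(perImageCollisionRate p: Double,
                          librarySize n: Int,
                          threshold t: Int) -> Double {
    func binomialPMF(_ k: Int) -> Double {
        let logChoose = lgamma(Double(n + 1)) - lgamma(Double(k + 1)) - lgamma(Double(n - k + 1))
        return exp(logChoose + Double(k) * log(p) + Double(n - k) * log(1 - p))
    }
    let belowThreshold = (0..<t).reduce(0.0) { $0 + binomialPMF($1) }
    return max(0.0, 1.0 - belowThreshold)
}

// Example with invented numbers: a one-in-a-million collision rate,
// a 10,000-photo library, and a threshold of 10 matches.
print(falseFlagProbability(perImageCollisionRate: 1e-6, librarySize: 10_000, threshold: 10))
// Prints a vanishingly small probability.
```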
That will break the dam. Governments will demand it from everyone.
Security researcher Alec Muffett, who formerly worked at Facebook, said Apple's decision to implement this type of image scanning is a "huge and regressive step for individual privacy."
Apple has been using screening technology for some time
As many noted on Twitter, several tech companies already scan images for CSAM. Google, Twitter, Microsoft, Facebook, and others use image hashing methods to search for and report known child abuse images. It's also worth noting that Apple was already scanning some content for child abuse images before the new CSAM initiative rolled out. In 2020, Apple's Chief Privacy Officer Jane Horvath confirmed that Apple uses screening technology to look for illegal images and disables accounts if evidence of CSAM is discovered. Apple had also already updated its privacy policy in 2019 to indicate that it scans uploaded content for "potentially illegal content, including child sexual exploitation material." So yesterday's announcement is not entirely new. (Photo by weyo / Bigstockphoto)