The European Commission will reportedly soon publish draft legislation that could require tech companies such as Apple and Google to identify, remove and report illegal child abuse images found on their platforms to law enforcement authorities.
According to a leaked document obtained by Politico, the European Commission believes that the voluntary measures taken by some digital companies have so far proved insufficient to combat the growing misuse of online services for sharing child sexual abuse material (CSAM), which is why the Commission wants to make the detection of such material mandatory. After months of lobbying, groups representing tech companies and child rights organisations are waiting to see how strict the rules will be and how they could work without requiring tech companies to scan the full range of user content - a practice the European Court of Justice ruled illegal in 2016.
Is this the end of end-to-end encryption?
Beyond the question of how the detection of illegal material would work under the law, privacy groups and tech companies are concerned that the EU could force backdoors into end-to-end encrypted messaging services, whose content the hosting platform cannot access. EU Home Affairs Commissioner Ylva Johansson has stated that there are technical solutions to keep conversations secure while still finding illegal content, but cybersecurity experts disagree. MEPs, for their part, are far from united on the issue. Reacting to the leaked proposal, Renew Europe MEP Moritz Körner told Politico that the Commission's plan would mean "that the privacy of digital correspondence is dead." The heated debate echoes last year's controversy surrounding Apple's plan to scan iPhones and iPads for CSAM.
Apple under criticism
In August 2021, Apple announced a set of new child safety features, including the scanning of users' iCloud photo libraries for CSAM and a Communication Safety feature that alerts children and their parents when they receive or send sexually explicit photos. The latter, arguably less controversial, feature is already live on Apple's iMessage platform in the US, while Apple's method of scanning for CSAM has yet to be deployed. Following Apple's announcement, the features were criticized by numerous individuals and organizations, including security researchers, the Electronic Frontier Foundation (EFF), Facebook's former security chief, politicians, political groups, university researchers, and even some Apple employees. Most of the criticism was directed at Apple's planned on-device CSAM detection, which researchers attacked as dangerous technology bordering on surveillance and derided as ineffective at identifying images of child sexual abuse.
CSAM detection will come at some point
Apple initially tried to clear up misconceptions and reassure users by publishing detailed information and making company executives available for interviews. But despite these efforts, the controversy did not subside, and after the barrage of criticism Apple decided to postpone the rollout of CSAM detection indefinitely. In December 2021, Apple quietly removed all mentions of CSAM detection from its child safety webpage, suggesting that its controversial plan to detect child sexual abuse images on iPhones and iPads was in limbo after its methods were heavily criticized. However, Apple says its plans for CSAM detection have not changed since September, which suggests that the feature will arrive in some form in the future.

(Photo by kckate16 / Bigstockphoto)