Researchers say they built a CSAM detection system like Apple’s and discovered flaws

Since Apple announced it was working on a technology for detecting child sexual abuse material (CSAM), the system has been a lightning rod for controversy. Now, two Princeton University academics say they know the tool Apple built is open to abuse because they spent years developing almost precisely the same system. “We wrote the only peer-reviewed publication on how to build a system like Apple’s — and we concluded the technology was dangerous,” assistant professor Jonathan Mayer and graduate researcher Anunay Kulshrestha wrote in an op-ed The Washington Post published this week.

The two worked together on a system for identifying CSAM in end-to-end encrypted online services. Like Apple, they wanted to find a way to limit the proliferation of CSAM while maintaining user privacy. Part of their motivation was to encourage more online services to adopt end-to-end encryption. “We worry online services are reluctant to use encryption without additional tools to combat CSAM,” the researchers said.

The two spent years working on the idea, eventually creating a working prototype. However, they quickly determined there was a “glaring problem” with their tech. “Our system could be easily repurposed for surveillance and censorship,” Mayer and Kulshrestha wrote. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”
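To make that point concrete, here is a minimal, hypothetical sketch of how a hash-matching client of this kind operates. The function names and the trivial hash are assumptions for illustration only, not Apple's or the researchers' actual design; the point is that nothing in the matching logic knows what the database represents, so swapping one hash set for another changes what gets flagged without any visible change on the user's device.

```python
import hashlib

def perceptual_hash(image_bytes: bytes) -> str:
    # Placeholder: production systems use a perceptual hash (Apple's NeuralHash,
    # Microsoft's PhotoDNA) that survives resizing and re-encoding. A plain
    # cryptographic hash is used here only to keep the sketch self-contained.
    return hashlib.sha256(image_bytes).hexdigest()

def scan_upload(image_bytes: bytes, blocked_hashes: set[str]) -> bool:
    # The matching step is entirely content-agnostic: the same code flags
    # CSAM, dissident imagery, or anything else, depending only on which
    # hash database the operator loads.
    return perceptual_hash(image_bytes) in blocked_hashes

# Hypothetical databases -- the operator, not the user, decides what goes in them.
csam_hashes = {perceptual_hash(b"known-abuse-image-bytes")}
dissident_hashes = {perceptual_hash(b"banned-political-image-bytes")}

upload = b"known-abuse-image-bytes"
print(scan_upload(upload, csam_hashes))       # True
print(scan_upload(upload, dissident_hashes))  # False -- identical code, different database
```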

That’s not a hypothetical worry either, they warn. The two researchers point to examples like WeChat, which the University of Toronto’s Citizen Lab found uses content-matching algorithms to detect dissident material. “China is Apple’s second-largest market, with probably hundreds of millions of devices. What stops the Chinese government from demanding Apple scan those devices for pro-democracy materials?” Mayer and Kulshrestha ask, pointing to several instances where Apple acquiesced to demands from the Chinese government. For example, the company previously handed control of Chinese customers’ iCloud data over to a state-owned local partner.

“We spotted other shortcomings,” Mayer and Kulshrestha continue. “The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.” Those are concerns privacy advocates have also raised about Apple’s system.

For the most part, Apple has attempted to downplay many of the concerns Mayer and Kulshrestha raise in their opinion piece. Senior vice president of software engineering Craig Federighi recently attributed the controversy to poor messaging. He rejected the idea that the system could be used to scan for other kinds of material, noting the database of images comes from multiple child safety groups. And on the subject of false positives, he said the system only triggers a manual review after an account accumulates roughly 30 images that match the known CSAM database. We’ve reached out to Apple for comment on the op-ed.
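For readers wondering what that safeguard amounts to, the sketch below is a loose, assumed simplification: Apple’s actual design wraps matches in cryptographic “safety vouchers” that can only be decrypted once roughly 30 of them accumulate, whereas this snippet reduces the idea to a plain counter.

```python
# Approximate threshold Federighi cited; the real protocol relies on threshold
# secret sharing and encrypted "safety vouchers", which this sketch omits.
REVIEW_THRESHOLD = 30

def should_trigger_review(match_count: int, threshold: int = REVIEW_THRESHOLD) -> bool:
    # Matches below the threshold remain invisible to the operator; only once
    # an account crosses the threshold is a human review queued.
    return match_count >= threshold

print(should_trigger_review(5))   # False
print(should_trigger_review(31))  # True
```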

Despite those statements, Mayer and Kulshrestha note their reservations don’t come from a lack of understanding. They said they had planned to discuss the pitfalls of their system at an academic conference but never got a chance because Apple announced its tech a week before the presentation. “Apple’s motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours,” they said. “But we were baffled to see that Apple had few answers for the hard questions we’d surfaced.”
