Apple Announces Photo-Scanning Feature For Child Exploitation Images

Photo courtesy Sopa/LightRocket/Getty

Apple recently unveiled plans to scan iPhones for images of child sexual abuse in its next iOS software update. The announcement was met with mixed reactions: some applauded Apple for its efforts to combat child abuse, while others raised security and privacy concerns. The tool, known as neuralMatch, will scan images before they are uploaded to iCloud. NeuralMatch assigns each image a hash, a digital fingerprint derived from the photo, and compares it against hashes of known child abuse material in preexisting databases. If the application flags a photo, a human reviewer then examines it. If the picture is confirmed to contain abuse-related content, the user’s Apple account will be disabled and the National Center for Missing and Exploited Children (NCMEC) will be notified.
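To illustrate the general idea of hash-based matching, the sketch below (in Python) uses a cryptographic hash as a stand-in for Apple’s undisclosed perceptual hash and a made-up database of known fingerprints; all names and values here are hypothetical, not Apple’s actual implementation.

```python
# Illustrative sketch only: a SHA-256 digest stands in for Apple's
# perceptual image hash, and the "database" below is a placeholder.
import hashlib

# Hypothetical set of fingerprints of known abuse material (dummy entry).
KNOWN_HASHES = {hashlib.sha256(b"placeholder-known-image").hexdigest()}

def image_fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    """Check whether the image's fingerprint appears in the known database."""
    return image_fingerprint(image_bytes) in KNOWN_HASHES
```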

To prevent incorrect flagging, neuralMatch will only alert a human reviewer after it detects more than 30 images that match known abuse content. Apple states that the system will have less than a one-in-one-trillion chance of incorrectly flagging a given account in any year.
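Assuming the 30-image figure Apple has described, the reporting logic amounts to a simple threshold check, sketched below; the function name is illustrative.

```python
# Assumed threshold from Apple's public description: more than 30 matches.
MATCH_THRESHOLD = 30

def should_alert_reviewer(matched_image_count: int) -> bool:
    """A human reviewer is alerted only once an account accumulates
    more matched images than the threshold allows."""
    return matched_image_count > MATCH_THRESHOLD
```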

In addition to scanning photos, Apple also plans to scan iMessage for sexually explicit content as a child-safety measure. This has drawn concern from security experts, who warn that Apple’s ability to scan photos on devices could eventually give the U.S. government a path to accessing private photos as well. Other researchers worry that Apple’s system could be used to frame innocent users by sending them images that deliberately trigger the application’s matching system. Matthew Green, a cryptography researcher at Johns Hopkins University, adds that “Researchers have been able to do this pretty easily.” As a result, innocent users’ accounts could be disabled, and they could be punished for crimes they did not commit.

Other experts believe that the system’s positive impact will outweigh any potentially harmful effects. Hany Farid, a researcher at the University of California, Berkeley, argues that other services, such as WhatsApp, have implemented similar systems to detect malware and viruses in links sent through their messaging platforms. However, those link-scanning features typically warn the recipient about a potentially malicious website rather than scanning the site’s content and reporting it to the authorities.

After receiving criticism for this feature, Apple decided to delay its photo-scanning plans in order to make adjustments before releasing them in an update. The company’s statement reads, “We have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.” While it is unclear how Apple plans to resolve the privacy concerns, the backlash it has received from various groups suggests that the company will at least continue to refuse the government access to images while addressing user concerns.

While Apple’s potential new feature could be extremely important and valuable, it also carries a complicated set of possible downsides that could damage users’ privacy. Apple has previously resisted the U.S. government’s requests to give law enforcement the ability to unlock and decrypt iOS devices, but it is uncertain whether the company will continue to refuse or will cave in to the government’s demands. Alex Stamos, the former chief security officer at Facebook, believes that it is important for Apple to include more of the research community in developing and testing new features related to personal security. Although Apple is facing criticism from many people regarding its photo-scanning feature, it is important for the company to gather input from researchers and current customers to improve this update and address its shortcomings.