Apple has announced a new system that will scan photographs uploaded to iCloud to detect potential child abuse material.
This comes soon after rumours that Apple would carry out surveillance of millions of users’ data and photos to check for “illegal property”, a claim the company has refuted.
Last week, it began initial testing of this system, which detects CSAM (Child Sexual Abuse Material) without learning the precise contents of users’ photos.
How Does This Work?
The system relies on sophisticated cryptography. Before an image is uploaded to iCloud, Apple’s cloud service, a hash of the image is computed on the device using Apple’s NeuralHash algorithm, a perceptual hash designed so that visually similar images produce the same value.
This hash is then compared against a list of hashes derived from the National Center for Missing and Exploited Children (NCMEC) database.
Each match produces an encrypted “safety voucher” that is uploaded alongside the image; only once the number of matches crosses a preset threshold can Apple decrypt the vouchers for the flagged account.
This enables Apple to see the flagged images, which can then be independently and manually verified for a match. For now, the matching process takes place only on the iPhone itself, not on Apple’s servers.
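The matching-and-threshold flow described above can be sketched as follows. This is a simplified illustration only, not Apple’s implementation: the real system uses the NeuralHash perceptual hash, blinded hash tables, and threshold secret sharing, none of which are reproduced here, and the threshold value below is hypothetical.

```python
# Simplified illustration of on-device hash matching with a review
# threshold. Plain string hashes stand in for NeuralHash values, and a
# plain set stands in for Apple's blinded NCMEC-derived hash table.

MATCH_THRESHOLD = 3  # hypothetical value, for illustration only

# Stand-in for the hash list derived from the NCMEC database.
KNOWN_HASHES = {"hash_a", "hash_b", "hash_c"}

def evaluate_uploads(image_hashes):
    """Count uploaded images whose hash matches the known list and
    report whether the account crosses the manual-review threshold."""
    matches = sum(1 for h in image_hashes if h in KNOWN_HASHES)
    return matches, matches >= MATCH_THRESHOLD

# Two matches stay below the threshold, so no human review is triggered.
print(evaluate_uploads(["hash_a", "holiday_photo", "hash_b"]))  # (2, False)
```

In the real protocol the device never learns which images matched, and Apple learns nothing at all until the threshold is crossed; the sketch above collapses that cryptography into a simple counter.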
Apple claims the system detects only images that match material already known and reported to the database. It cannot snoop on private data, for instance personal photos from beach vacations or family trips.
If manual review confirms that the system was not in error, Apple will disable the iCloud account and report the case to the relevant law enforcement authorities.
The user can also challenge this decision if they have reason to believe they were falsely flagged, an Apple representative explained.
For the time being, the system works only on photographs, and only those uploaded to iCloud. It cannot perform this detection on videos, or on images and files that never reach Apple’s servers.
Security researchers and cybersecurity experts have raised legitimate concerns that this type of technology can potentially be used to intrude upon privacy.
For example, it could be repurposed politically to single out people deemed dissidents of a ruling government’s ideology.
Apple claims the system has been built so that it can match only images catalogued by the NCMEC, which it argues limits the scope for misuse.
The company also asserts that it does not have the power to unilaterally add further hashes to the list.
Independent security researchers will also be able to test the system to verify that it detects CSAM images without compromising user privacy.
Another concern is that a user on whose device or cloud account such images are found might themselves be a victim of hacking: a malicious actor with remote access to the device or cloud account could plant known child abuse material in a bid to implicate them falsely. Although such occurrences are rare, they add a layer of scrutiny that the manual review process must account for.
Apple has repeatedly come under criticism and international pressure for its low rate of reporting child abuse material compared with other providers.
Some European legislatures are contemplating laws to hold such platforms more accountable.