The recent unveiling of a program that can block facial recognition (FR) software and AI identification of images received a lot of industry attention, but we should be mindful of limitations that seriously curb its effectiveness against law enforcement investigators or compliance officers.
The new product was created by individuals who were unable to post their photographs on social media websites due to the sensitive nature of their military occupations. After leaving the service, they formed a company, ID-D, whose product digitally alters images to render them unreadable by facial recognition and artificial intelligence algorithms. To an untrained eye, an image so rendered reportedly does not appear to have been altered, yet it cannot be read or matched by an FR program.
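ID-D has not published how its alteration works, so the following is only a loose illustration of the general idea: pixel-level changes small enough to be invisible to a person can still be applied across an image. Real "cloaking" tools compute their perturbations against an FR model's internals rather than using random noise, and the function name and bound below are hypothetical.

```python
# Hypothetical sketch only -- ID-D's actual method is not public.
# Illustrates a bounded, visually imperceptible pixel perturbation;
# real anti-FR tools derive the perturbation from an FR model, not randomness.
import numpy as np

def cloak_image(image: np.ndarray, epsilon: float = 4.0, seed: int = 0) -> np.ndarray:
    """Return a copy of an 8-bit RGB image with each pixel shifted
    by at most `epsilon` intensity levels (out of 255)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)
```

Because no pixel moves more than a few intensity levels, the altered copy looks unchanged to a viewer, which matches the reported behavior of the product even though the mechanism here is only illustrative.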
The effectiveness of the product, which has multiple potential applications for corporations and governmental entities, is limited to those images to which it has been applied. Compliance officers and law enforcement investigators employing FR programs to identify targets draw on images in every category: social media and social networking images taken of, but not by, the target; news and CCTV footage; and official and government-issue photos. None of these images were posted by the target, and he cannot alter them. Generally, we are not even aware that many of these photos are being taken, or they may have been taken in the past and remain available to investigators.
Therefore, all the untouched images of the target, on the web or elsewhere, will be identified by the FR program user; only the altered photos are blocked. Given these limitations, and for the appropriate purposes, the program can be an effective anti-crime tool, but it will never defeat the use of FR in criminal or civil investigations.