Researchers at the University of Maryland in the USA have developed an "invisibility" sweater that defeats image and facial recognition systems. A pattern printed on clothing fools neural networks and machine learning systems; it was originally created to probe such systems for vulnerabilities. The approach is based on a principle called an adversarial attack: deceiving a neural network into producing a false result.
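To illustrate the adversarial-attack principle in its simplest form, here is a minimal sketch using a toy linear "detector" and the fast gradient sign method (a well-known adversarial technique, not necessarily the one used in the sweater work). All values and names here are illustrative assumptions, not taken from the research.

```python
import numpy as np

# Toy linear "classifier": a positive score means "person detected".
# The weights and input are illustrative, not from the actual study.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.4, -0.3, 0.2])

def score(v):
    return float(w @ v)

# Fast gradient sign method (FGSM): nudge the input against the
# gradient of the score so the detection drops below the threshold.
# For a linear model, the gradient of (w @ x) w.r.t. x is simply w.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(score(x))      # original input: positive score, "detected"
print(score(x_adv))  # perturbed input: score pushed negative
```

The same idea scales up to deep networks, where the gradient is obtained by backpropagation instead of being read off directly.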
The pattern was generated using a large dataset and then applied to the sweater. Each time a person wearing it is processed, the output is reviewed to assess how much the pattern degrades recognition. As a result, the researchers produced a pattern that prevented people from being recognized. So far this works against the YOLOv2 detector.
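The generate-then-evaluate loop described above can be sketched as a gradient-descent optimization of a patch that minimizes a detector's "person" confidence. The detector below is a toy linear stand-in (a real attack would query YOLOv2 and backpropagate through it); all names and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector's "person" confidence over an 8x8 patch.
# A real pipeline would render the patch onto training images and run
# the actual detector; this linear score just makes the loop runnable.
w = rng.normal(size=(8, 8))

def person_confidence(patch):
    return float(np.sum(w * patch))

# Optimization loop: adjust the patch to drive detection confidence
# down, keeping pixel values in a printable range.
patch = rng.normal(size=(8, 8)) * 0.1
lr = 0.1
for step in range(100):
    # Gradient of the confidence w.r.t. the patch is w for this toy model.
    patch -= lr * w
    patch = np.clip(patch, -1.0, 1.0)

print(person_confidence(patch))  # driven well below zero
```

In the real system the loop additionally averages over many images and poses so the printed pattern keeps working in the physical world.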
The pattern resembles the work of impressionist painters and looks blurry, yet it works. The sweater's template was created using the Common Objects in Context (COCO) dataset, which contains more than 330 thousand images with more than 1.5 million labeled objects.