Cheating an AI Monitoring System with a Simple Picture

Posted by trammel at 2020-03-20

Artificial intelligence, especially machine learning for computer vision, relies heavily on long-term training. An AI system is given a large amount of practice material (photos) so it learns exactly what kinds of objects it needs to identify. The more training material, the higher the system's final recognition rate. In practical applications, the system can then quickly determine whether a specific object appears in a photo or video. But researchers at KU Leuven in Belgium recently found a very simple way to trick a trained AI system of this kind, one used to detect everyday objects, including people.

This kind of AI system for human detection is used by law enforcement agencies and large companies all over the world. The most common use is counting how many people enter or leave a public place; it can also be used to locate and track specific people, or certain categories of people. Such applications have prompted much debate about whether they violate civil rights.

However, the recent discovery by KU Leuven shows that, legal or not, these AI systems are at least not completely reliable: the researchers can fool them with nothing more than a single printed picture.

As the picture above shows, the AI system identifies the researcher and the chair on the left, but fails to recognize the researcher on the right. The key difference between the two is that the researcher on the right holds, at waist level, a blurry printed picture of people holding umbrellas. Note that the blurring in this picture has been specially processed.

When the researcher flipped the picture over (showing its blank white back), the AI system immediately identified the "human" on the right.

Once the picture was turned back around, the AI system again failed to recognize the person.

Even when the researcher stood sideways, the AI system could not recognize them.

When the picture was moved slightly to the left, the AI system immediately recognized "human" again.

When the picture was handed to the researcher on the left, the AI system could no longer identify that researcher either.

Even with their back turned to the camera, the researcher holding the picture went unrecognized by the AI system. Full video: https://youtu

This AI system was trained with the YOLOv2 algorithm to identify all the objects it recognizes, yet a single printed picture is enough to defeat that recognition: you and I can still see the person clearly, but the system cannot. Of course, this particular vulnerability is easy to fix, but it is safe to predict that many other pictures can fool AI systems in the same way.
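Conceptually, such a fooling picture (an "adversarial patch") is produced by gradient descent: the attacker repeatedly adjusts the patch's pixels to push down the detector's "person" score. The sketch below is only a toy illustration of that loop; a random linear scorer stands in for YOLOv2, and the weights `W`, the patch location, and the step size are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))  # stand-in "detector" weights (not a real network)

def objectness(image):
    # Toy score: higher means "person detected" in this stand-in model.
    return float(np.sum(W * image))

def grad_objectness(image):
    # Gradient of the linear toy score with respect to every pixel.
    return W

image = rng.uniform(0.4, 0.6, size=(16, 16))  # the "person" image
patch = np.full((6, 6), 0.5)                  # patch pasted at a fixed spot

before = None
for step in range(50):
    img = image.copy()
    img[5:11, 5:11] = patch                   # apply the patch to the image
    score = objectness(img)
    if before is None:
        before = score                        # score with the initial patch
    g = grad_objectness(img)[5:11, 5:11]      # gradient w.r.t. patch pixels only
    patch = np.clip(patch - 0.05 * g, 0.0, 1.0)  # descend the objectness score

img = image.copy()
img[5:11, 5:11] = patch
after = objectness(img)
print(before, after)  # the optimized patch lowers the detector's score
```

The real attack follows the same pattern, but backpropagates through the full YOLOv2 network, averages over many person images and patch placements, and adds terms that keep the patch printable.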

This article was compiled and translated by Baimaohui (白帽汇) and does not represent any view or position of Baimaohui. Source: