A group of security researchers has found that, by modifying street signs, attackers could confuse self-driving cars, causing their image-recognition systems to misclassify signs and make wrong decisions, possibly leading to accidents.
The researchers demonstrated several methods of disrupting the way autonomous cars read and classify road signs using nothing more than a colour printer and a camera. They found that strategically placed stickers are enough to trick the image-recognition system in autonomous cars: in one experiment, stickers attached to a stop sign caused the system to misidentify it as a speed-limit sign.
According to the research paper “Robust Physical-World Attacks on Machine Learning Models”:
“Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world–they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper.”
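To make the quoted idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way of crafting such small-magnitude perturbations. It is not the paper's own attack, and the tiny untrained network and random input below are placeholders for a real sign classifier and photograph.

```python
import torch
import torch.nn as nn

# Placeholder sign classifier with 3 output classes; a real system would use a
# trained network such as the ones the researchers attacked.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 3),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign" photo
label = torch.tensor([0])                              # pretend class 0 is "stop"

# Compute the loss gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a small step (epsilon) in the direction that increases the
# loss; the change is tiny per pixel but can push the classifier to a wrong class.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

The point the paper makes is that this kind of digital perturbation tends to break down once the image is printed and photographed in the real world, which is the gap their physical attacks are designed to close.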
Most autonomous-driving systems compare what the car's cameras are “recognizing” against the sign images their classifiers were trained on, so modifying the appearance of an object can cause the software to make a mistake.
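In practice that recognition step means running each camera frame through a trained classifier that outputs one of a fixed set of sign classes. The sketch below shows that pipeline in miniature; the untrained stand-in model and the class names are illustrative assumptions, not any vendor's actual system.

```python
import torch
import torch.nn as nn

SIGN_CLASSES = ["stop", "speed limit", "right turn"]   # illustrative label set

classifier = nn.Sequential(                            # stand-in for a trained model
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, len(SIGN_CLASSES)),
)
classifier.eval()

frame = torch.rand(1, 3, 32, 32)                       # stand-in camera frame

with torch.no_grad():
    scores = classifier(frame)                          # one score per known class
print("classified as:", SIGN_CLASSES[scores.argmax(dim=1).item()])
```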
“We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.”
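The attacks quoted above restrict the perturbation to sticker-shaped regions and optimise it across many photographs of the same sign, so that the classifier is steered toward a chosen wrong class. The sketch below shows only that general recipe; the model, mask shape, images, and hyperparameters are placeholders and this is not the authors' RP2 implementation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3))  # stand-in classifier
model.eval()

views = torch.rand(8, 3, 32, 32)                 # placeholder photos of one stop sign
target = torch.full((8,), 1, dtype=torch.long)   # class 1 stands in for "speed limit"

# Sticker mask: the perturbation may only touch these two rectangular patches.
mask = torch.zeros(1, 3, 32, 32)
mask[..., 4:10, 8:24] = 1.0
mask[..., 22:28, 8:24] = 1.0

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.05)

# Optimise the sticker colours so every view is classified as the target class.
for _ in range(200):
    perturbed = (views + mask * delta).clamp(0, 1)
    loss = nn.functional.cross_entropy(model(perturbed), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    final = model((views + mask * delta).clamp(0, 1)).argmax(dim=1)
print("predictions after attack:", final.tolist())
```

In the paper, the equivalent optimisation also accounts for physical-world conditions such as viewing angle and distance, which is what lets the printed stickers keep fooling the classifier outside the lab.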