Phantom Attack Fools Self-Driving Cars By Displaying Simulated Objects

Recent research has demonstrated a ‘phantom’ attack that fools self-driving cars by displaying virtual objects. Such attacks can trigger sudden actions by a car, potentially causing serious disruption.

Phantom Attack Targets Self-Driving Cars

Researchers from Ben-Gurion University of the Negev, Israel, have disclosed an interesting cyberattack targeting self-driving cars.

Dubbed ‘Phantom’, the attack involves displaying digital objects that confuse the AI systems of self-driving cars. As a result, the cars treat the virtual objects as real and act accordingly.

Briefly, the researchers tricked two commercial Advanced Driver-Assistance Systems (ADASs) – the Tesla Model X (HW 2.5 and HW 3) and the Mobileye 630 – by displaying ‘phantoms’ – digital objects used to trick the ADASs – for split seconds.

For this, they displayed two road signs, each for only a fraction of a second, within an advertisement playing on a digital billboard. By flashing the “Stop” sign during the ad, they were able to make the Tesla stop.

Likewise, they could trick the cars’ systems by projecting virtual pedestrians onto the road with projectors.

The following video shows how the phantom attack fools self-driving car systems in real time.

Proposed Remediation

With the phantom attack, the researchers demonstrated a limitation of the AI models powering these cars. An adversary could easily perform such split-second phantom attacks in the real world, with little fear of getting caught, for four reasons:

(1) there is no need to physically approach the attack scene (a drone can project the image or a digital billboard can display it),
(2) there will be no identifying physical evidence left behind,
(3) there will likely be few witnesses, since the attack is so brief,
(4) it is unlikely that the target of the attack will try to prevent the attack (by taking control of the vehicle), since he/she won’t notice anything out of the ordinary.

Hence, these optical illusions against computer vision algorithms challenge the reliability of self-driving vehicles, because the cars’ systems deliberately treat detected objects as real as a safety rule.

For instance, the Tesla Model X can detect via depth sensing that a projected object is flat, yet it still treats the object as real.

Exploiting this safety feature can thus cause the car to take undesirable actions in real time, such as suddenly stopping in the middle of the road.
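To make the tradeoff concrete, here is a deliberately simplified sketch of such a safety-first decision rule. This is a hypothetical illustration, not Tesla’s actual logic; the Detection fields and threshold are assumptions made for the example.

```python
# Illustrative sketch (not Tesla's actual code) of the safety-first rule the
# researchers describe: a detection is acted on even when depth sensing
# suggests the object is flat, because missing a real obstacle is deemed
# worse than a needless stop.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str              # e.g. "pedestrian", "stop_sign"
    confidence: float       # 2D detector confidence in [0, 1]
    depth_consistent: bool  # does depth sensing agree the object has volume?

def should_react(det: Detection, threshold: float = 0.5) -> bool:
    # Safety-over-security tradeoff: react to any confident detection,
    # even if depth_consistent is False -- the opening a phantom exploits.
    return det.confidence >= threshold

# A projected "pedestrian" with no physical depth still triggers a reaction:
phantom = Detection("pedestrian", confidence=0.92, depth_consistent=False)
assert should_react(phantom)
```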

As a remediation, the researchers propose a system built from multiple convolutional neural networks (CNNs).

Dubbed ‘GhostBusters’, it consists of four lightweight CNNs that assess the realism and authenticity of a detected object by examining its reflected light, context, surface, and depth. A fifth model then uses the four models’ embeddings to identify phantom objects.
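The paper describes this as a committee-of-experts design. Below is a minimal PyTorch sketch of that idea; the module names, layer sizes, and input preparation are illustrative assumptions rather than the researchers’ actual implementation.

```python
# Minimal committee-of-experts sketch: four lightweight CNNs, one per aspect
# (reflected light, context, surface, depth), plus a combiner over their
# embeddings. Layer sizes are arbitrary placeholders.
import torch
import torch.nn as nn

class ExpertCNN(nn.Module):
    """Lightweight CNN scoring one aspect of an object's realism and
    producing a fixed-size embedding."""
    def __init__(self, in_channels: int = 3, embed_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size vector
        )
        self.embed = nn.Linear(32, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.embed(self.features(x).flatten(1))

class PhantomDetector(nn.Module):
    """Fifth model: combines the four experts' embeddings to classify a
    detected object as real or phantom."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.experts = nn.ModuleDict({
            name: ExpertCNN(embed_dim=embed_dim)
            for name in ("light", "context", "surface", "depth")
        })
        self.combiner = nn.Sequential(
            nn.Linear(4 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: [real, phantom]
        )

    def forward(self, views: dict) -> torch.Tensor:
        embeddings = [self.experts[name](views[name])
                      for name in ("light", "context", "surface", "depth")]
        return self.combiner(torch.cat(embeddings, dim=1))

# Usage: each "view" would be a crop of the detected object preprocessed to
# highlight one aspect (random tensors stand in here).
views = {name: torch.randn(1, 3, 64, 64)
         for name in ("light", "context", "surface", "depth")}
logits = PhantomDetector()(views)
```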

The technical details of the attack are available in the researchers’ paper, ‘Phantom of the ADAS’.
