New Attack Strategy Against Smart Assistants Dubbed ‘LightCommands’

by Abeerah Hashim
Researchers have come up with a new attack strategy against smart assistants that threatens any device featuring a voice assistant. Dubbed 'LightCommands', these attacks enable a potential attacker to inject voice commands into the devices and take control of them.

LightCommands Attacks On Voice Assistants

Researchers have developed new attacks, named 'LightCommands', that allow meddling with smart assistants by injecting audio signals into their voice interfaces.

LightCommands attacks exploit a weakness in MEMS (microelectromechanical systems) microphones, which respond to light aimed at them as if it were sound. By modulating the intensity of a laser beam with an audio signal, the researchers could inject voice commands into the microphones using nothing but light.
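To illustrate the principle, here is a minimal sketch of how a recorded voice command could be amplitude-modulated onto a laser's intensity. It assumes a hypothetical setup in which an array of drive levels is later sent through a DAC to a laser-diode current driver; the file name, bias level, and modulation depth are illustrative choices, not the researchers' actual parameters.

```python
import numpy as np
from scipy.io import wavfile

# Read the recorded voice command (e.g., the wake word plus "open the garage door").
sample_rate, audio = wavfile.read("command.wav")
audio = audio.astype(np.float64)
audio /= np.max(np.abs(audio))          # normalize to [-1, 1]

# Amplitude-modulate the laser: keep a constant bias so the diode stays lit,
# and let the audio waveform ride on top of it as small intensity variations.
bias = 0.5                               # mean drive level (fraction of full power), illustrative
depth = 0.4                              # modulation depth, illustrative
drive = bias + depth * bias * audio      # per-sample laser drive level in [0, 1]
drive = np.clip(drive, 0.0, 1.0)

# 'drive' would be fed to a laser-diode current driver via a DAC; the target's
# MEMS microphone interprets the resulting intensity variations as sound.
np.save("laser_drive.npy", drive)
```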

The researchers also noted the absence of robust user authentication in most voice assistants. A potential attacker could therefore issue commands to the assistant and meddle with smart home systems, smart cars, and other connected devices.

In this way, LightCommands attacks exploit both a hardware and a software weakness to compromise Google Assistant, Amazon Alexa, Apple Siri, and Facebook Portal.

Conducting the attack requires no physical interaction with the target device. An attacker can aim the laser from a distance of up to 110 meters, even from a separate building.

In their experiments, the researchers chose four different voice commands to assess the attack against various speakers, phones, a tablet, a thermostat, and a camera. These devices ran Google Assistant, Siri, Alexa, or Facebook Portal at the backend.

The voice commands were: "what time is it", "set the volume to zero", "purchase a laser pointer", and "open the garage door". The researchers prefixed each command with the wake word that calls the voice assistant, then injected the commands into the devices' microphones by aiming a laser at the microphone ports.

The researchers left all devices in their default settings and experimented in a closed environment, aiming through clear glass windows. They found all of the devices vulnerable to light-based audio command injection attacks.

They have published their findings in detail in a research paper, set up a website describing LightCommands, and shared a short explanatory video.

Possible Mitigations

The researchers confirmed that these attacks have not yet been observed in the wild.

As for possible mitigations, users can watch their devices for any stray laser spots. They can also adjust their devices' settings to restrict how the devices respond to voice commands.

Enforcing user authentication and speaker-verification requirements also helps fend off LightCommands.
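One way to implement such authentication is to challenge the user with a simple randomized spoken question before executing sensitive commands, since a remote light-injection attacker cannot easily hear the assistant's challenge and answer it. The sketch below illustrates the idea only; the `ask_user` and `execute` callbacks are placeholders for an assistant's speech I/O and action layer, not a real assistant API.

```python
import random

# Commands that should require confirmation before execution (illustrative list).
SENSITIVE_COMMANDS = {"open the garage door", "unlock the front door"}

def confirm_before_executing(command: str, ask_user, execute) -> None:
    """Run 'command', but gate sensitive commands behind a randomized spoken challenge.

    ask_user(prompt) -> str and execute(command) are placeholder callbacks
    standing in for the assistant's speech interface and action layer.
    """
    if command.lower() not in SENSITIVE_COMMANDS:
        execute(command)
        return

    # Randomized challenge: a pre-recorded laser payload cannot answer a
    # question it has never heard, so the injected command is not executed.
    a, b = random.randint(2, 9), random.randint(2, 9)
    answer = ask_user(f"Before I do that, what is {a} plus {b}?")
    if answer.strip() == str(a + b):
        execute(command)
    else:
        print("Challenge failed; command ignored.")
```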

Furthermore, users may apply physical barriers that reduce the amount of light reaching the microphone's diaphragm. The researchers caution, however, that such barriers are only effective up to a point: an attacker can always increase the laser power to compensate for the cover-induced attenuation, or even burn through the barrier and create a new light path.

Let us know your thoughts in the comments.
