Popular speech recognition systems are vulnerable to DolphinAttack

A group of security researchers from Zhejiang University in China has demonstrated how several popular speech recognition systems can be controlled with ultrasound, using an attack method they call “DolphinAttack.”

The attack is effective against popular speech recognition systems including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa. Pretty much all of them, then. The researchers modulated voice commands onto ultrasonic carriers, at frequencies of 20,000 Hz or above, to make them inaudible to humans while still being picked up by the devices' microphones.
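The core trick is fairly simple signal processing: the audible voice command is amplitude-modulated onto an ultrasonic carrier, and nonlinearities in the target device's microphone circuitry demodulate it back into the audible range. As a rough sketch only (not the researchers' actual tooling), the Python snippet below shows what that modulation step could look like; the input file name, 96 kHz sample rate, 25 kHz carrier, and modulation depth are assumptions chosen for illustration, not values taken from the paper.

```python
# Illustrative sketch: amplitude-modulate a recorded voice command onto an
# ultrasonic carrier. File names, rates, and modulation depth are assumed
# values for this example, not parameters from the DolphinAttack paper.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # above the ~20 kHz upper limit of human hearing
OUTPUT_RATE = 96_000  # must exceed 2 * (carrier + voice bandwidth) to avoid aliasing

# Load a mono voice-command recording (hypothetical file name).
in_rate, voice = wavfile.read("voice_command.wav")
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))  # normalize to [-1, 1]

# Resample the baseband voice to the higher output rate by linear interpolation.
t_in = np.arange(len(voice)) / in_rate
t_out = np.arange(0, t_in[-1], 1 / OUTPUT_RATE)
baseband = np.interp(t_out, t_in, voice)

# Classic AM: carrier scaled by (1 + depth * baseband).
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
modulated = (1.0 + 0.8 * baseband) * carrier  # 0.8 = assumed modulation depth

# Write the result; playing it back would require ultrasonic-capable hardware.
wavfile.write("ultrasonic_command.wav", OUTPUT_RATE,
              (modulated * 32767 / np.max(np.abs(modulated))).astype(np.int16))
```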

According to the researchers:
“This paper aims at examining the feasibility of the attacks that are difficult to detect, and the paper is driven by the following key questions: Can voice commands be inaudible to human while still being audible to devices and intelligible to speech recognition systems? Can injecting a sequence of inaudible voice commands lead to unnoticed security breaches to the voice controllable systems? To answer these questions, we designed DolphinAttack”

The researchers tested DolphinAttack on 16 devices spanning seven different speech recognition systems, and the attack succeeded against all of them, at distances that varied from device to device.

“By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on iPhone, activating Google Now to switch the phone to the airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions,” the researchers added.

The attack may not be entirely practical in the wild, but it demonstrates yet another avenue attackers can use to target our devices.
