How Can We Secure Our Use of Smart Assistants?
Smart assistants are one of the most intriguing and confounding technologies developed over the past decade. At the time of this writing, over 150 million smart speakers sit in 60 million homes in the United States; add in the smart assistants available on mobile devices and other smart devices, and roughly a billion people actively use some type of smart assistant. Over the past couple of years, these assistants have increasingly been used for business, and this has made certain security-minded people a little wary of them. Let's take a look at some of the security questions surrounding smart assistants.
What Do Our Smart Assistants Actually Hear?
We all know that person who claims the government has hacked into smart assistants and is listening in on our conversations. For most of us, that conspiracy doesn't make a whole lot of sense. That said, these devices do listen, but only when they are prompted to. Here is how to trigger four of the most popular assistants, followed by a simplified sketch of how this wake-word detection works:
- Amazon Alexa devices respond to the term “Alexa,” “Computer,” “Amazon,” or “Echo.”
- Google Home devices wake up to “Okay/Hey, Google.”
- Apple’s Siri responds to “Hey Siri.”
- Microsoft’s Cortana reacts to its name, “Cortana,” or “Hey, Cortana.”
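For the curious, here is a minimal sketch in Python of the gating idea, assuming a hypothetical upstream speech recognizer has already produced a text transcript. Real assistants match wake words against audio features on the device itself, so treat this purely as an illustration:

```python
from typing import Optional

# Simplified trigger lists from above. Real assistants do this matching
# on-device against audio features, not text transcripts; this sketch
# (with a hypothetical upstream transcriber) only illustrates the gating.
WAKE_WORDS = {
    "Alexa":   ["alexa", "computer", "amazon", "echo"],
    "Google":  ["okay google", "hey google"],
    "Siri":    ["hey siri"],
    "Cortana": ["cortana", "hey cortana"],
}

def detect_wake_word(transcript: str) -> Optional[str]:
    """Return the assistant whose wake word appears in the transcript, if any."""
    text = transcript.lower()
    for assistant, triggers in WAKE_WORDS.items():
        if any(trigger in text for trigger in triggers):
            return assistant
    return None

# The device stays idle until a trigger matches; only then does it start
# recording the query that follows.
print(detect_wake_word("Alexa, what's the weather?"))  # Alexa
print(detect_wake_word("Just an ordinary sentence."))  # None
```

The important point is the gate: nothing past the trigger check is supposed to be processed or transmitted until a wake word matches.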
There have, in fact, been instances where these smart assistants, especially the smart speakers, have picked up things they weren't supposed to. If you have one of these speakers in your home, some security concerns are natural, but the manufacturers themselves probably aren't the issue.
The Analysis
Researchers looked into the question of what exactly these smart assistants hear and published a paper titled “Unacceptable, where is my privacy? Exploring Accidental Triggers of Smart Speakers.” They collected the terms that successfully activated the assistants, ending up with over a thousand phrases, and then broke those phrases down into their phonetic sounds to try to determine why there were so many false positives.
Depending on how a user pronounced a word, accidental triggers were found, including the following (a rough sketch of why these phonetic near-misses happen appears after the list):
- Alexa devices also responded to “unacceptable” and “election,” while “tobacco” could stand in for the wake word “Echo.” Furthermore, “and the zone” was mistaken for “Amazon.”
- Google Home devices would wake up to “Okay, cool.”
- Apple’s Siri also reacted to “a city.”
- Microsoft’s Cortana could be activated by “Montana.”
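To see why these near-misses happen, consider that a wake-word spotter scores incoming sound against the trigger's phonemes and fires when the score clears a threshold. Below is a rough Python sketch of that idea; the simplified phoneme spellings and the 0.55 cutoff are assumptions for illustration, not how any shipping assistant actually works:

```python
from difflib import SequenceMatcher

# Hand-simplified, ARPAbet-style phoneme sequences. These spellings are
# assumptions for illustration; a real system would use a pronunciation
# dictionary or a grapheme-to-phoneme model, and would match on acoustic
# features rather than symbols.
PHONEMES = {
    "echo":    ["EH", "K", "OW"],
    "cortana": ["K", "AO", "R", "T", "AE", "N", "AH"],
    "tobacco": ["T", "AH", "B", "AE", "K", "OW"],
    "montana": ["M", "AA", "N", "T", "AE", "N", "AH"],
    "weather": ["W", "EH", "DH", "ER"],
}

def tail_similarity(word: str, wake_word: str) -> float:
    """Compare the end of a spoken word against a wake word's phonemes.

    Spotters run on streaming audio, so the tail of a longer word can
    sound like an entire wake word (e.g. "to-BACCO" vs. "Echo").
    """
    wake = PHONEMES[wake_word]
    tail = PHONEMES[word][-len(wake):]
    return SequenceMatcher(None, tail, wake).ratio()

THRESHOLD = 0.55  # arbitrary illustrative cutoff

for word, wake_word in [("tobacco", "echo"),
                        ("montana", "cortana"),
                        ("weather", "echo")]:
    score = tail_similarity(word, wake_word)
    fired = "TRIGGERS" if score > THRESHOLD else "ignored"
    print(f"{word!r} vs. {wake_word!r}: {score:.2f} -> {fired}")
```

Running this, “tobacco” and “Montana” clear the threshold while an unrelated word does not, which mirrors the study's observation that phonetically similar words are what cause accidental wake-ups.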
Of course, these assistants are used on devices all over the world, and the researchers found that other languages suffered from many of the same issues. For example, the German phrase for “on Sunday” (“Am Sonntag”) was commonly mistaken for “Amazon.”
What Does This Mean for Individual Privacy?
As interesting as this analysis is, the findings are a little disconcerting. The study shows that once the wake word or phrase is recognized, the device immediately starts listening for queries, commands, and the like. So even though manufacturers claim these devices only start listening when prompted, many unintended phrases can cause an assistant to start listening.
The complications don't end there. The captured audio is reviewed manually by people, a practice that already undermines any notion of privacy, and one of those technicians could be handed information that was never intended to be captured by an assistant. This could be devastating if a technician whose job is to manually review this data were to gain access to account information or other personally identifiable information (PII) and use it in an unethical way.
Smart speakers and smart assistants are useful products that need a little more refinement before we can completely trust them. To learn more about new technology and how it is being used, check back to our blog regularly.