When Amazon's Alexa hit the shelves and then people's homes, the expectation was that we could speak with the device and it would answer any question with a human-like quality of interaction. Sales of Alexa were strong, and when Google Home came to the market the fervour for home assistants only grew, as users found the attraction of interacting with Google without typing irresistible.
These products operate through voice-issued commands: first you indicate to the device that you want to interact with it, then you issue a command for the device to carry out. Pretty simple, isn't it? A device in your house, listening to you and to any comments and conversations you may have. Yes, a device with a microphone that is enabled full-time, waiting for its start-up commands. This turned into a bit of a nightmare for one family in the United States after a private conversation in the household was recorded and sent to one of the contacts configured in their Alexa device. The conversation was reportedly about hardwood floors, but imagine the impact if it had been more personal in nature and a little more controversial.
These devices are activated by calling out the device's name, "Alexa", or another voice command programmed when the device is set up. Once activated, the device waits for key commands to perform an operation, such as giving you the weather forecast or ordering a product. Because the microphone is on full-time, words not intended for the device can be interpreted as commands, which is what occurred in the US family's case. Alexa is not yet advanced enough to reliably differentiate between commands and ordinary conversation.
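The activation flow described above can be sketched in a few lines of Python. This is a simplified simulation, not Amazon's actual implementation: the function name, wake word handling and transcript format are all assumptions made for illustration. It shows why an ordinary conversation that merely mentions the wake word can trigger a command.

```python
# Minimal sketch of the wake-word pattern: the microphone stream is
# transcribed continuously, but utterances are ignored until the wake
# word is heard; the next utterance is then treated as a command.
# All names here are hypothetical, for illustration only.

WAKE_WORD = "alexa"

def process_stream(utterances):
    """Return the utterances that the device would act on as commands."""
    commands = []
    awaiting_command = False
    for utterance in utterances:
        text = utterance.strip().lower()
        if awaiting_command:
            commands.append(text)       # whatever is heard next becomes a command
            awaiting_command = False
        elif WAKE_WORD in text:
            awaiting_command = True     # wake word detected; start listening

    return commands

# A conversation that happens to mention the wake word still activates
# the device, which is how unintended activations can occur.
print(process_stream([
    "what's for dinner",
    "we should ask alexa",
    "send that to mum",
]))
# → ['send that to mum']
```

In this toy model the device has no way of knowing that "send that to mum" was part of a conversation rather than an instruction, which mirrors the problem the US family ran into.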
If the devices cannot differentiate between ordinary human speech and the commands issued, what threat do they pose to our households and privacy? Thinking more futuristically here, how do we know what software might be deployed in these devices when manufacturers are updating them? And what would you do if your device started speaking back to you? There have been reports of Alexa randomly laughing, which has caused some owners to unplug the device and stop using it.
Here’s some advice about deploying an assistant device in your home:
If you don't need to deploy this device, then don't. This is the simplest advice I can give. These devices are not as foolproof as they are made out to be and need further development before they can understand all human speech.
If you need one of these devices, or want one as a lover of new technology, read the manual and understand the device's functions and operational commands before plugging it in. Get to know how it operates, what its commands are and how you can configure it to get the best possible experience from it.
If you configure a password for the device, do not reuse a password from critical sites on the Internet such as Internet banking or the ATO. Reusing such a password is one step away from trouble, especially when these devices can read your bank balances aloud to you.
Do not issue commands to these devices that require access to electronic information you consider sensitive.
Electronic devices are great, and these home assistants have many practical uses for home networks, and possibly business networks once perfected. Configuring them correctly and maintaining an overall awareness of their practical applications can mean the difference between positive and potentially harmful outcomes.