Last month, the Guardian reported that Apple contractors, as part of their job reviewing Siri recordings, regularly listen to confidential medical information, drug deals, and private recordings of couples.
The contractors are responsible for grading Siri’s responses on a variety of factors, such as whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri could be expected to help with, and whether Siri’s response was appropriate.
According to the Guardian’s report, one Apple contractor explained the grading process: audio snippets, which are not linked to the names or IDs of individuals, are played to contractors, who check whether Siri heard the request accurately or was invoked by mistake.
In a statement to the Guardian, Apple said, “A small portion of Siri requests are analysed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analysed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements.”
Additionally, Apple said that the data “is used to help Siri and dictation … understand you better and recognise what you say.”
Siri can also be activated accidentally when it mistakenly hears its wake phrase, “Hey Siri”. The Apple contractor explained, “The sound of a zip, Siri often hears as a trigger.”
This month, Apple plans to suspend Siri’s response grading while it reviews the process, likely as a response to the Guardian’s report.
Apple will also issue a future software update giving Siri users the option to choose whether or not to participate in the grading process.
In a statement to TechCrunch, Apple said, “We are committed to delivering a great Siri experience while protecting user privacy.” The company further added, “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”
Companies like Amazon and Google have also come under scrutiny for involving humans in reviewing their voice assistants’ recordings. Reports stated that Amazon staff were listening to some Alexa recordings, and a similar incident occurred with Google Assistant. This month, Amazon introduced an option to disable human review of Alexa recordings. Users would likely appreciate being asked for consent before their personal recordings are reviewed.
These recordings are also stored on servers, and in the event of a data breach or a targeted attack on the server or data center, there is a real possibility of such data falling into the wrong hands. This raises the question of whether our personal data is really secure.
In a recent Threatpost podcast on voice assistant privacy issues, Tim Mackey, principal security strategist at the cybersecurity research center at Synopsys, said, “The biggest concern that I have is actually around data retention policies and disclosure.”
Mackey further added, “So we have an expectation that these are connected devices, and that perhaps short of the Alexa-then-perform-action activity, that the communication, the actual processing of our request is going to occur on an Amazon server, Google server or so forth…. And what we’re learning is that the providers tend to keep this data for an indeterminate amount of time. And that’s a significant risk, because the volume of data itself means that it’s potentially very interesting to a malicious actor someplace who wishes to say, target an individual.”