
Before buying a voice assistant for Christmas, you should worry about misinformation

Smart speakers with voice assistants like Alexa are a popular Christmas gift, and thanks to recent developments in generative AI, conversations with voice assistants are becoming more natural and “human” than ever.

Author

  • Sonja Utz

    Professor of Communication Via Social Media, University of Tübingen

No longer treated as mere servants that switch off the lights or play music, voice assistants are now marketed as companions suited to more sophisticated tasks. For example, more and more people turn to them to look up information and facts.

Voice assistants are especially convenient for people who have problems with writing or reading, such as children, blind people or some older adults.

But before buying a grandparent or child one of these devices, you should consider the risks. Voice assistants sometimes provide misinformation – and this is harder to detect when it’s delivered by voice.

Research from my lab has shown that the same information is perceived as more credible when it is read by a voice assistant than when it is formatted like a Google search snippet or like a Wikipedia article.

In our experiments, we also varied the accuracy of the information. The inaccuracies were internal inconsistencies in the text, so they could be detected without any knowledge of the topic. For example, a piece on appendicitis first listed the common symptoms and then stated in the next sentence that about 80% of people do not have the typical symptoms.

People generally judged the inaccurate information pieces as less credible than the accurate information. However, the credibility ratings for the inaccurate information were still surprisingly high.

More importantly, the subjects in our experiments judged the inaccurate information as significantly more credible when it was presented by a voice assistant than when they read it as text. Two processes play a role in this phenomenon. First, the conversational nature of the interaction gives people the feeling that they are interacting with an intelligent being.

They experience a kind of “social presence” with the voice assistant that is related to perceived credibility. A theory known as the “media as social actors paradigm” proposes that social cues like language use or a human-like voice let us perceive voice assistants or generative AI such as ChatGPT as social beings.

That leads us to apply the rules we use for interactions with real people. When we talk to another person, we usually believe the answer we receive and do not ask for sources or additional information – though with a voice assistant, that might be exactly the appropriate thing to do.

Second, spoken language is more difficult to process. Written text can be read again, making it easier to spot inconsistencies.

AI hallucinations

In two experiments comparing information read aloud by voice assistants with snippets presented as Google search results, we also varied the source of the information: there was either no source, a trustworthy source or a less trustworthy source.

People reading the information that was presented as the result of a web search treated the information without a source as if it was information from an untrustworthy source. Meanwhile, for participants receiving the information from voice assistants, information without a source was considered just as credible as information from a trustworthy source.

The media often report that AI chatbots and voice assistants produce hallucinations, generating inaccurate or misleading information in response to a user’s request. In fact, many AI chatbots tell users they should fact-check the information they receive.

What people are not aware of, however, is that the conversational nature of the interaction with these devices makes them seem more credible. We have not yet developed the skills needed to deal with these technologies.

Most internet users, meanwhile, have learned to check the source of the information they read on the web. Many of us now routinely question whether a website is trustworthy and objective, or whether it was made by someone who wants to promote their own product.

But we don’t do this habitually when talking to a voice assistant. So when you give a voice assistant as a present, consider whether the person will most likely use it for switching off the lights and playing music or for searching for information. If it’s the latter, tell them to remain sceptical about the answers they receive.

The increasing prevalence of voice assistants and generative AI in our daily lives brings both opportunities and challenges. On one hand, these technologies offer unprecedented convenience and access to information, especially for people with special needs.

On the other hand, we must be aware of the potential risks and learn how to use these technologies responsibly. It is important to recognise the limitations and weaknesses of these systems and not to rely blindly on their outputs.

By remaining critical and questioning the sources of information, we can harness the benefits of these technologies without exposing ourselves to their risks.

The Conversation

Courtesy of The Conversation.