So-called generative AI – algorithms that can be used to create content using machine learning – has been much in the news of late. In particular, ChatGPT, a tool created by OpenAI, has attracted considerable attention.
In Australia, this has no doubt been fuelled by a recent visit to our country by one of the technology's prominent backers, who is keen to promote its benefits.
Most of the debate has been about ChatGPT and similar AI tools – such as those used for creating supposedly realistic images and art, or those offering "speech recognition in natural language" – and their potential to facilitate student cheating or to replace workers.
They are also seen as having the potential to free up people's time for other tasks.
In a recent commentary, an entrepreneur who founded a company specialising in tech start-ups enthused:
“ChatGPT feels like the introduction of the PC – a tool that allows us to work smarter and enhances the ability of humans to do what they do best, which is create, dream and innovate.”
However, thus far little has been said about the potential use and misuse of these tools in medicine and healthcare, including for diagnosis, self-diagnosis and prescribing.
Arguably, these are the areas in which these technologies are likely to find early widespread application. They are also areas with much potential for exploitation and misuse, especially given the ready accessibility of the tools online.
It was recently reported that Bill Gates saw "obvious benefits [of ChatGPT] in the medical profession, and across other industries where a lot of information needed to be understood".
According to the report, "AI could help a doctor write prescriptions, and explain medical bills to patients, for example, or also assist in both writing and understanding legal documents".
As a health sociologist interested in new and emerging health technologies, I was struck by these comments. They are typical of the promissory discourse surrounding such technologies. They are also deeply worrying.
On the face of it, there’s much that is appealing about the near-instantaneous production of information using what is in effect a sophisticated chatbot.
Chatbots have been widely used for some time and, although sometimes useful, their limitations are well understood. As a form of AI, they rely on information harvested from many online sources – some of questionable reliability – which also carries biases, including those based on gender and other social differences.
Much of the information available online is "personalised", algorithm-driven advertising, designed to engage users. These personalised messages are crafted to be "emotionally resonant".
In my own research, I have explored the mechanisms by which emotions are exploited online; for example, through deceptive designs or "dark patterns" intended to trick users into feeling and acting in certain ways, generally with the aim of keeping them online and encouraging them to purchase advertised goods and services.
Emotions are being exploited as never before with affective computing, a field founded by Rosalind Picard and oriented to making machines more "human-like" and "conversational". Picard co-founded Affectiva, an MIT Media Lab spinoff that claims to be on "a mission to humanise technology" – work with potentially vast applications in advertising. This is where generative AI, like ChatGPT, is of great concern.
OpenAI claims that ChatGPT "interacts in a conversational way", and that "the dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests".
This claim, which suggests that AI can think, feel and respond like a human, is significant and has evidently attracted much interest: the tool reportedly gained about one million users within a week of its release.
The prospect of a thinking, feeling, sentient AI is far-fetched, but well-entrenched in science fiction and the popular imagination.
What greatly concerns me is people using ChatGPT, and similar tools, for routine medical procedures, including self-diagnoses.
Generative AI is being enthusiastically promoted by some of the biggest names in tech, including Gates (co-founder of Microsoft), Elon Musk (current owner of Twitter), Peter Thiel (founder of PayPal), and other big tech entrepreneurs.
They are hardly disinterested players when they talk up the benefits of generative AI. They, and other billionaire entrepreneurs, will no doubt be eyeing the huge profits to be made from generative AI in health and medicine, among other areas.