A new UK study suggests 20 per cent of general practitioners use artificial intelligence not only for administrative tasks but also in diagnosis and treatment. RMIT experts comment on whether AI technology is ready for healthcare settings, and whether the sector is ready for it.
Professor Karin Verspoor, Dean, School of Computing Technologies
“AI technologies are not yet ‘medical grade’ and the use of generative AI as a ‘Dr ChatGPT’ resource is dangerous – particularly commercial systems that have not been trained specifically for clinical knowledge.
“These models integrate text from the whole internet, including both highly anecdotal information and more reliable sources such as the medical literature, but critically lack access to electronic health record systems.
“This mix of sources, and the absence of important real-world clinical data, means the trustworthiness of the information these models provide is unclear.
“Technology-assisted dictation tools have been in use in clinical practice for many years, and the capabilities of modern AI scribes certainly exceed those of prior versions thanks to the sophisticated language models that now underpin them. However, the new versions introduce significant uncertainty due to their reliance on generative AI, which explicitly incorporates text prediction – essentially educated guesses.
“While this can work well when systems are directly adapted to specific contexts and use cases, there is insufficient evidence about just how well they work in real-world clinical contexts.
“Recent research has documented instances of confabulations and errors in real-world clinical settings. Beyond technical performance, research is also needed on their impact on patient experience, clinician behaviour and clinical workflows.
“More targeted tools are needed that can be tailored to specific types of clinical questions and probed for accuracy and robustness.”
Professor Karin Verspoor is Dean of the School of Computing Technologies at RMIT University in Melbourne, Australia, and an expert on the use of artificial intelligence in biomedical applications. She is a co-founder and the Victoria node lead of the Australian Alliance for Artificial Intelligence in Healthcare, and a Director of BioGrid Australia.
Professor Vishaal Kishore, Executive Director, RMIT-Cisco Health Transformation Lab
“Done right, AI holds the promise of genuinely smart healthcare. But while its potential in healthcare is immense, widespread adoption comes with significant challenges.
“Clinicians are reaching for existing AI tools not merely in the administrative periphery of their work (scribing, scheduling and data entry), but also at the very core of their roles (diagnosis, decision-making, treatment planning, patient engagement and education).
“In a recent report, we highlighted that AI can augment ‘healthcare intelligences’ at multiple levels: predicting and preventing health needs; radically improving diagnosis and analysis; analysing and prioritising healthcare, emergency response, resource usage or specific patient needs; and improving care and treatment planning and management.
“Clinicians and healthcare leaders must recognise that AI, particularly in its generative form, is not a magic bullet. It can support clinical decisions, but caution is warranted in the following areas:
- Efficacy and interpretation: AI, particularly in complex cases, is not infallible. Without proper training, clinicians risk over-reliance or misinterpretation of AI outputs, which can lead to critical errors. A deep understanding of AI’s strengths and limitations is essential.
- Cybersecurity: As AI processes sensitive patient data, the risk of cyber threats increases exponentially. Strong cybersecurity measures are vital to protect patient information and ensure resilience against breaches or system failures.
- Interoperability: AI must seamlessly integrate with existing healthcare technologies like electronic health records. Without this, fragmentation and data silos will hinder AI’s efficiency, limiting its effectiveness in clinical practice.
- Model of care innovation: AI’s potential to reshape care delivery requires more than simple integration into current workflows; it demands a fundamental redesign of care models, blending digital tools with human expertise to create a more responsive system.
- The irreducibility of human touch: Despite AI’s advances, it cannot replace the critical human elements of judgment, empathy and connection. The human touch in healthcare remains irreplaceable, and essential to effective care.
“The healthcare sector is already teeming with emerging AI-driven innovations, from automated diagnostics to predictive analytics. The real challenge lies in integration and humility: recognising AI’s limitations, while building resilient, adaptable systems that are prepared to harness its power without losing sight of the human element.”
Professor Vishaal Kishore is the Executive Chair of the RMIT-Cisco Health Transformation Lab and Professor of Innovation and Public Policy at RMIT. A former Deputy Secretary at the Department of Health and Human Services, he has extensive experience in public systems, health innovation and digital transformation, and is a leading voice on the intersection of healthcare, technology and policy innovation.
***