Many doctors are already using AI in medical care
One in five UK doctors use a generative artificial intelligence (GenAI) tool – such as OpenAI’s ChatGPT or Google’s Gemini – to help with clinical practice. This is according to a recent survey of nearly 1,000 GPs.
Doctors have reported using GenAI to generate documentation after appointments, to help make clinical decisions and to provide information to patients – such as understandable discharge summaries and treatment plans.
Given the huge excitement around artificial intelligence and the challenges facing health systems, it’s no wonder that doctors and policymakers alike see AI as key to modernising and transforming our healthcare services.
But GenAI is the latest innovation that is challenging the way we think about patient safety. There is still much we need to know about GenAI before it can be safely used in everyday medical practice.
The challenges of GenAI
Typically, AI applications are designed to perform a very specific task. For example, deep learning neural networks have been used for image classification and analysis. Such systems are proving effective in analyzing mammograms to aid in the diagnosis of breast cancer.
But GenAI is not trained to perform a narrowly defined task. The technology is based on so-called foundation models, which have generic capabilities. This means they can generate text, pixels, audio or even a combination of these.
These capabilities are fine-tuned for different applications – such as answering user questions, generating code or creating images. The potential to interact with this type of AI seems to be limited only by the user’s imagination.
Importantly, because the technology has not been developed for a specific use or purpose, we do not know for sure how doctors can use it safely. This is just one reason why GenAI should not be widely used in healthcare just yet.
Another problem with using GenAI in healthcare is the well-documented phenomenon of “hallucinations”. Hallucinations are outputs that are nonsensical or untrue given the input that was provided.
Hallucinations have been studied in the context of using GenAI to create summaries of text. One study found that various GenAI tools produced outputs that made incorrect links between statements in the text, or summaries that included information not referred to in the text at all.
Hallucinations occur because GenAI works on the principle of probability – such as predicting which word will follow in a given context – rather than on “understanding” in the human sense. This means that the outputs produced by GenAI are plausible rather than necessarily truthful.
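To make this concrete, here is a minimal toy sketch of that principle – not any real GenAI system, and with made-up, hand-written probabilities purely for illustration. The model simply samples the next word in proportion to how likely it seems given the context, with no check on whether the result is true.

```python
import random

# Hypothetical probabilities for which word follows a fragment of text.
# A real model learns billions of such conditional probabilities from data;
# these values are invented for illustration only.
next_word_probs = {
    "the patient reports chest": {"pain": 0.85, "tightness": 0.10, "rash": 0.05},
    "symptoms began three": {"days": 0.6, "weeks": 0.3, "months": 0.1},
}

def predict_next_word(context: str) -> str:
    """Sample the next word in proportion to its probability."""
    options = next_word_probs[context]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Every continuation is plausible, but nothing guarantees it matches what the
# patient actually said - which is how a fluent summary can still be wrong.
print("the patient reports chest", predict_next_word("the patient reports chest"))
print("symptoms began three", predict_next_word("symptoms began three"))
```

The point of the sketch is that the output is chosen because it is statistically likely, not because it has been verified against the facts of the consultation.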
This plausibility is one reason why it is still too early to use GenAI safely in routine medical practice.
Consider a GenAI tool that listens in on a patient’s consultation and generates an electronic summary note. On one hand, this frees up the GP or nurse to engage better with their patient. On the other hand, the GenAI could produce notes based on what it thinks is plausible.
For example, a GenAI summary may change the frequency or severity of a patient’s symptoms, add symptoms that the patient did not complain about or include information that the patient or doctor did not mention.
Doctors and nurses would need to proofread any AI-generated notes with an eagle eye and have an excellent memory to distinguish the factual information from the plausible – but made-up – information.
This might be fine in a traditional family doctor setting, where the GP knows the patient well enough to spot inaccuracies. But in our fragmented health system, where patients are often seen by different healthcare workers, any inaccuracies in patient notes could pose significant risks to their health – including delays, improper treatment and misdiagnosis.
The risks posed by hallucinations are significant. But it is worth noting that researchers and developers are currently working to reduce the likelihood of hallucinations.
Patient safety
Another reason why it is too soon to use GenAI in healthcare is that patient safety depends on interactions with the AI to determine how well it works in a particular context and setting – looking at how the technology works with people, how it fits with the rules and pressures and the cultures and priorities within the larger health system. Such a systems perspective would determine whether the use of GenAI is safe.
But because GenAI isn’t designed for a specific application, this means it’s adaptable and can be used in ways we can’t predict. On top of this, developers are constantly improving their technology, adding new capabilities that change the behavior of GenAI applications.
Furthermore, harm can occur even if the technology appears to work safely and as intended – again, depending on the context of use.
For example, introducing GenAI conversational agents for triage could affect different patients’ willingness to engage with the healthcare system. Patients with lower digital literacy, people whose first language is not English, and non-verbal patients may find GenAI difficult to use. So even if the technology “works” in principle, it could still contribute to harm if it does not work equally well for all users.
The point here is that such risks with GenAI are much harder to anticipate through traditional safety analysis methods, which are concerned with understanding how a failure in the technology might cause harm in specific contexts. Healthcare could benefit greatly from the adoption of GenAI and other AI tools.
But before these technologies can be used in healthcare more broadly, safety assurance and regulation will need to become more responsive to developments in where and how these technologies are used.
It is also necessary for developers of GenAI tools and regulators to work with the communities that use these technologies to develop tools that can be used routinely and safely in clinical practice.
Mark Sujan, Chair in Safety Science, University of York
This article is republished from The Conversation under a Creative Commons license. Read the original article.