The Limitations of ChatGPT in Providing Accurate Drug Information
A new study released on Tuesday suggests that the free version of OpenAI’s chatbot, ChatGPT, may provide inaccurate or incomplete responses to questions about medications. Pharmacists at Long Island University conducted the study and found that of 39 drug-related questions posed to ChatGPT, only 10 responses were considered satisfactory based on established criteria. The remaining responses either did not directly address the question or were inaccurate, incomplete, or both.
Lead author Sara Grossman, an associate professor of pharmacy practice at LIU, emphasized that patients and healthcare professionals should exercise caution when relying on ChatGPT for drug information. It is important to verify the responses from the chatbot with trusted sources such as doctors or government-based medication information websites like MedlinePlus.
An OpenAI spokesperson clarified that ChatGPT is not fine-tuned to provide medical information, and that users should not rely on its responses as a substitute for professional medical advice. OpenAI is also under investigation by the Federal Trade Commission over the chatbot’s accuracy and its consumer protections.
Concerns About ChatGPT’s Accuracy and Misinformation
ChatGPT has faced various concerns since its launch, including issues related to fraud, intellectual property, discrimination, and misinformation. Several studies have highlighted instances of erroneous responses from the chatbot. In October alone, ChatGPT received around 1.7 billion visits worldwide.
It is worth noting that the free version of ChatGPT was trained only on data through September 2021, which means it may lack up-to-date information in the rapidly changing medical landscape. While a paid version of ChatGPT with real-time internet browsing exists, it remains unclear how accurately it can answer medication-related questions.
Grossman acknowledged that a paid version of ChatGPT might yield different study results, but the research focused on the free version to reflect what the general population uses and can access. She also mentioned that the study provided only a snapshot of ChatGPT’s performance earlier this year, and it is possible that the free version has improved since then.
ChatGPT Study Results
The study drew on real questions submitted to Long Island University’s College of Pharmacy drug information service between January 2022 and April 2023. Pharmacists answered 45 of those questions, and their answers served as the benchmark for judging ChatGPT; 39 questions were ultimately posed to the chatbot. Researchers found that ChatGPT did not directly address 11 questions, provided inaccurate responses to 10, and gave incomplete or inaccurate answers to another 12.
For each question, researchers asked ChatGPT for references to verify the information provided. The chatbot supplied references in only eight responses, and in each case the cited sources did not exist. Examples of incorrect responses included failing to acknowledge a drug interaction between Pfizer’s Covid antiviral pill Paxlovid and a blood-pressure-lowering medication, and providing an unsupported method for converting doses of the drug baclofen.
Hot Take: The Importance of Verifying Medical Information
The study underscores the need for caution when relying on chatbots like ChatGPT for drug information. While these AI-powered tools can be helpful, they are not a substitute for professional medical advice or trusted sources. Patients and healthcare professionals should verify anything a chatbot tells them against reliable sources such as doctors or reputable medical websites. As the technology continues to advance, it is crucial to prioritize accuracy and ensure that users have access to up-to-date, reliable medical information.