According to a Long Island University study, the artificial intelligence tool ChatGPT gave incomplete or wrong answers to nearly 75% of drug-related questions, and in some cases it even generated fake citations to support its answers.
Researchers challenged the free version of ChatGPT over a 16-month period, comparing its responses with those of pharmacists who were asked the same questions. Of the 39 responses ChatGPT provided, only 10 were judged satisfactory under the study's criteria. Moreover, just eight of the responses included references, all of which cited non-existent sources.
“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” said Sara Grossman, PharmD, a pharmacy professor at the university and a lead author of the study. “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”