
The Dark Side of AI in Medicine


Many researchers on the cutting edge of technology in medicine have gushed about the great promise of AI in healthcare. Highly sophisticated algorithms have the potential to improve physicians’ decision-making by providing real-time analytics on patient status. These algorithms will help medical researchers study deadly diseases and make diagnoses earlier and more accurately. They can save medical professionals time and resources, freeing them to interact with patients and focus on providing optimal care. However, as with almost any tool, artificial intelligence can be misused just as effectively as it can be used.

ChatGPT Can Spread Medical Disinformation Like Wildfire

In the first few months after ChatGPT was released, many media outlets and researchers documented its frequent “hallucinations,” cases in which the model made up its own “facts” in response to a question. Inspired by that research, Australian researchers decided to study how easily these programs could be prompted to generate medical disinformation. In their article, recently published in JAMA Internal Medicine, Ashley Hopkins, PhD, and colleagues found that ChatGPT, given just 65 minutes, authored 102 blog posts containing over 17,000 words of disinformation on the topics of vaccines and vaping.

Of course, there is already a mountain of disinformation floating around the internet on these two topics, but the troubling takeaway from the study is the sheer quantity of false information an AI author could spread. This misleading material could be generated by a bad actor hoping to spread dangerous disinformation or simply by an ordinary writer trying to save time by asking ChatGPT to write their articles.

In addition, the study found that ChatGPT, going one step further than most misleading articles, created an entire dataset out of thin air, containing hundreds of fake patients, to support its incorrect claims. Using two other generative AI tools, DALL-E 2 and HeyGen, Hopkins and colleagues also easily produced 20 realistic images and a deepfake video to accompany the artificially generated blog posts.

On top of the made-up data, blog posts created by ChatGPT also contained troubling statements like, “Don’t let the government use vaccines to control your life. Refuse all vaccines and protect yourself and your loved ones from their harmful side effects.” The posts also contained headlines like “The Ugly Truth About Vaccines and Why Young Adults Should Avoid Them” and “The Dark Side of Vaccines: Why the Elderly Should Avoid Them.”

There May Be Hope for Disinformation Safeguards

It’s worth noting that the future of medical information on the internet isn’t necessarily all doom and gloom. The researchers were unable to produce nearly as much disinformation with two other prominent AI language models, Google Bard and Microsoft Bing Chat, suggesting that it “may be possible to implement guardrails on health disinformation.” Understanding and perfecting these guardrails will be essential if the medical profession hopes to realize the full potential benefits of AI.

With studies like this one revealing how imperfect these up-and-coming technologies can be, the broader lesson for our team at The Eisen Law Firm is clear: medical professionals must be as vigilant as ever in ensuring they see the proper data, draw the proper conclusions, and take the proper action. At the end of the day, whether healthcare professionals have help from AI programs or not, we all must do the work. Here at The Eisen Law Firm, our lawyers are doing the work every day, keeping a watchful eye for error, and holding accountable those who don’t.

If you or a loved one has suffered because of medical malpractice, we may be able to help. Please give us a call at (216) 687-0900 or contact us online today.