Library staff can assist you with your research project. Contact or visit your local CLIN Library to find out more about our full range of services.
Haupt, C. E. and M. Marks (2024). "FTC Regulation of AI-Generated Medical Disinformation." JAMA 332(23): 1975-1976. https://doi.org/10.1001/jama.2024.19971
Artificial intelligence (AI) has made unprecedented advancements. Large language models (LLMs), such as OpenAI’s ChatGPT, allow people to interact with computers as if speaking with a friend. LLMs can rapidly perform monotonous tasks, enhance learning, and create engaging text, image, and video content. The potential social and medical benefits are substantial.

However, these achievements bring new risks. AI can produce authoritative-looking content that is false or misleading, including deceptive medical text and illustrations or persuasive but inaccurate health videos. Voice-cloning AI can synthesize realistic-sounding human voices or imitate individuals’ speech. Pairing cloned voices with AI-generated avatars can produce deepfakes: manipulated videos presented as authentic. Deepfakes can impersonate famous or respected authorities, including physicians and public health officials. Although they can be used to entertain, deepfakes are often used to mislead and manipulate. They can spread misinformation with the appearance of professional authority.
Welcome to the Grand Rounds Further Reading List.
This library guide is designed to support you in your professional development.
If you have any questions, please contact the Clinical Library on 9722 8250 or email SWSLHD-BankstownLibrary@health.nsw.gov.
"The Missing Link"
Do you need the full text of a journal article that CIAP doesn't supply?
Ask the Library! Use our online journal request form, or use the Request an Article link in the Medline and Embase databases.