
Archive: https://archive.today/iG6XA

From the post:

>Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools. The proliferation of the tech has repeatedly been hampered by rampant "hallucinations," a euphemistic term for the bots' made-up facts and convincingly-told lies. One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions.

Bet some people would want that new part added, just because the computer recommended it.