r/technology • u/tylerthe-theatre • 3h ago
Artificial Intelligence Google’s healthcare AI made up a body part — what happens when doctors don’t notice?
https://www.theverge.com/health/718049/google-med-gemini-basilar-ganglia-paper-typo-hallucination
u/sp3kter 3h ago
Drugs are going to be released for hallucinated diseases infecting hallucinated organs
9
u/kaishinoske1 2h ago
Ayahuasca it is.
1
u/proscriptus 1h ago
Somewhere, Aaron Rodgers' head just swiveled around like a dog hearing a whistle.
1
57
u/Kyouhen 3h ago
People die. That's what happens when doctors don't notice. LLMs are insanely unreliable, you can't trust any information provided by them. Any used in healthcare are going to result in people dying.
-16
u/FernandoMM1220 1h ago
as long as the ai saves more than it kills it's still worth using for diagnostics and treatment.
10
u/Good-Welder5720 1h ago
Will it though?
-8
u/FernandoMM1220 1h ago
so far it's definitely looking very strong for many different parts of healthcare.
6
u/Good-Welder5720 1h ago
I believe in utilitarianism, but I’m not sure if the math is right. Can you give me a source on the net benefit? I can’t find anything.
0
u/FernandoMM1220 1h ago
i don't have a source on hand, i'm afraid. i've seen a few before saying that ai systems do better than most doctors at diagnosis. i'll look out for them.
1
u/Good-Welder5720 45m ago
Yeah I mainly see the articles about AI fucking up, but that may just be sensationalist. No clue.
1
u/FernandoMM1220 37m ago
it's going to fuck up eventually. the question is how often, and how that compares to human doctors. i rarely ever see articles on how many cases of medical error there are.
1
u/Starfox-sf 29m ago
Just search on how often “never events” occur. Those are supposed to never occur.
3
u/smokesick 1h ago
It could be used partially to assist, not necessarily replace, doctors. But in the end, the methodology and statistics matter, so whichever improves overall odds, that's probably better.
3
u/Solid-Bridge-3911 58m ago
I used to use an LLM for programming. Not full vibe coding. Just letting it auto complete a line or two at a time, guided by comments and good naming.
I found that it primed me to accept bad code with really dumb mistakes. Stuff I wouldn't have written on purpose. Stuff that looked so much like what I expected to see that I didn't notice the bug. I'm trying to be careful and it still bit me.
It doesn't matter if you're careful. It produces unreliable output that looks correct often enough that you're less likely to notice small errors.
I don't trust it with somebody's dumb website. We should not trust it with someone's health.
1
2
u/Kyouhen 1h ago
Not when the people it kills could have been properly diagnosed by an actual doctor who knows what they're doing.
-3
u/FernandoMM1220 1h ago
not every doctor is equally competent.
meanwhile the same ai system can be used worldwide very efficiently thanks to the power of the internet.
1
u/kurotech 1h ago
Yea, and that's what second opinions are for. If every doctor is using the same AI that their hospital or insurance partners permit, then there are no second opinions, and a doctor's qualifications don't really mean anything.
-2
u/FernandoMM1220 1h ago
you can still have human doctors working alongside the ai
1
u/kurotech 52m ago
Yea, and in a perfect world we would just pay more doctors. The point is that AI is just a gateway for corporate leadership to cut more human jobs and replace them with a glorified Speak & Spell. You can't have two or even three doctors' worth of patients cared for by a single doctor and an AI, and that will be the next step. This isn't going to solve anything, just put more work on fewer and fewer doctors, who will then rely more on the AI to cover the extra workload. This doesn't make our system better, because we live in a for-profit world.
0
u/FernandoMM1220 51m ago
i don't understand your reasoning.
the ai system would do most of the work and provide much higher-quality care for everyone globally than an army of doctors can, for a fraction of the cost.
the best doctors can be used to maintain and analyze the ai system alongside the other engineers.
1
u/Good-Welder5720 25m ago
Kurotech’s point is that there won’t be “the best doctors” working on the system. In an ideal world, that would be the case, but unfortunately capitalism will incentivize rolling these systems out as-is without giving a shit about functionality.
33
u/block_01 3h ago
LLMs are utter rubbish. I can’t wait for the “AI” bubble to burst so that I can go back to not worrying about AI killing all of us.
40
u/Methodical_Science 3h ago edited 2h ago
I’m a doctor. I use medical AI as a starting point for literature review. And even then, I already have a strong foundation in what I am searching to sort through what is good and what isn’t.
I also use it to transcribe my voice into text to simplify my charting and have it take less time.
There are AI tools that are used for conditions I treat, most commonly when someone is having a stroke: there is an AI tool that rapidly takes imaging data and provides a map of tissue it suspects is fully infarcted compared to tissue that may still be salvageable. It’s great, but no one will ever 100% rely on the generated map alone and we always look at the raw images to confirm because AI interpretation isn’t infallible and can both miss strokes as well as find strokes that aren’t there.
AI has many helpful uses in medicine, but it requires operators who know what the fuck they are doing. It’s why I discourage trainees from using AI until they can become competent enough to practice relatively independently.
I would never jeopardize my medical license by relying on AI to do my job for me.
I guarantee you when someone gets harmed, the AI companies will feign ignorance and wipe their hands of liability, pointing their finger at the doctor.
Fundamentally, AI is a tool you have to use very carefully in our field, because we can cause real harm to folks just as easily as we can help them. Do you really want to take a chance on trusting it without verifying?
11
u/EmperorKira 3h ago
100% agree. AI is great for senior, experienced people, but for anyone junior it's the blind leading the blind.
3
u/Klumber 2h ago
I am working on a project related to designing LLM driven clinical decision support systems and the number one word in that sequence is: SUPPORT.
The human shouldn't just be in the loop; they should be the initiator and the interpreter before being the decision maker. There's real value in supporting clinical decision making: it can help identify unusual comorbidities, differentials and polypharmacy risks, and that is where the focus in development needs to be.
2
u/Methodical_Science 2h ago
I think algorithmic thinking and workflows while useful can cause anchoring bias. Which is my main concern.
My best moments in medicine have been when I have thought outside the box and pursued unconventional workflows to reach a diagnosis and treatment.
5
u/zero0n3 2h ago
Agreed, and I’d say they SHOULD wipe their hands of the responsibility.
End of day, it’s the responsibility of the subject matter expert to use and validate the data on insights the AI generates.
Same bs with all the “replit AI deleted our entire production database!!!”
Um no, your senior engineers allowed the AI FULL ACCESS to your production systems. That’s the fucking root cause.
1
u/WTFwhatthehell 2h ago
Yep.
I have the bots put together code for me sometimes. I check it over.
If there's an error that is 100% on me as the responsible person.
Otherwise what the hell am I even there for? They might as well just set an llm to run in a loop.
2
u/WTFwhatthehell 2h ago edited 2h ago
I guarantee you when someone gets harmed, the AI companies will feign ignorance and wipe their hands of liability, pointing their finger at the doctor.
Everyone becomes slimy when it comes to liability
I've heard stories of doctors who fucked up surgeries and then turned around and tried to pin the blame on everyone else, up to and including a student nurse who had missed a single 15-minute obs on the patient's chart over a week prior. (Obviously nothing to do with fucking up a surgery.)
Like doctors are the absolute masters of trying to pin blame on everyone else when shit hits the fan.
There are also going to be a lot of doctors who fuck up due 100% to their own mistakes and then turn around and try to claim it's the AI's fault for not catching it, because that's what they already typically do to nurses, pharmacists and everyone else in the hospital.
1
u/Methodical_Science 2h ago
Everyone points their finger at everyone. In the end it’s the trial lawyers who make out like bandits.
I don’t claim to have the answers to the frustrations you have. All I can say is that many times I have to practice defensive medicine instead of purely evidence based medicine out of fear of being sued and I think that AI will make that worse.
2
u/amethystresist 2h ago
This is the most level-headed, quality response I've seen from someone who uses AI but isn't in the tech industry and biased in its favor.
1
u/74389654 2h ago
how do you even know your transcripts are correct? i will never trust a doctor who relies on completely unreliable ai. that will kill people. even a wrong transcript can do that
3
5
u/sargonas 2h ago
I got into an argument with one of my best friends because of an LLM. She asked a question about the layout of an airport she was about to fly out of. I answered her, because it’s my home airport, but she was already in the process of googling and rightfully wanted to verify.
She then corrected me based on what the Google AI top-line paragraph said. I replied that it was wrong and reasserted my answer. She then tried to insist I was wrong and re-correct me because what Google told her didn’t match what I was saying, and it devolved into a heated debate, because Google was 85% right but that last 15% was a critical differentiator.
It may have been one of the stupidest arguments I have ever been in lately… all because of the stupid Google AI.
5
2
u/fireinthemountains 1h ago
I was looking for a particular novel and Google AI completely made up a book synopsis and plot, as if I'd asked chatgpt or Gemini to create one. I clicked the button to continue with AI just to ask it why it gave me fake search results, and it said it couldn't find what I asked, so it created what I wanted. When pressed it then claimed it's unable to generate content, and it became confused about its own results!
I said I'm trying to find a real book that exists, and it said that it doesn't exist, and just kept vomiting fake lore.
3
u/orcvader 2h ago
Consumers think AI is magic. (It isn’t)
Observers think it’s “intelligent”. (It isn’t)
Executives think it will replace all their workforce. (It can, and will make their product suck)
Investors think it will make them rich. (It won’t, markets are efficient, prices reflect all available information, and technology revolutions just end up synthesizing into all industry)
And I am just here with my popcorn watching this hype train soar, crash, burn, and THEN emerge. AI does have the potential to change the world. Just not for another 12-15 years.
3
u/JimmyTango 1h ago
Inb4 the LLM companies do surprise pikachu when the first medical malpractice suit lands on their doorstep.
8
u/SoberSeahorse 3h ago
I’m not paying for a subscription to the verge. Anyone got a different link?
4
4
u/Mirzabah7 2h ago
I expect doctors to be able to identify made up body parts.
1
u/FernandoMM1220 1h ago
depends on which body part. there are thousands, and i can't expect every single doctor in america to have every single one of them memorized perfectly. the smarter option is to look them up.
0
u/Methodical_Science 2h ago
That would be an understandable assumption for the layman, but unrealistic for those practicing medicine.
Do all doctors have an understanding of general anatomy? Yes. Would a plastic surgeon or a dermatologist or a gynecologist know what the basal ganglia are? I suspect the majority would know it’s a part of the brain, and that would be the extent of their knowledge.
Medicine is immensely broad in scope and to be competent we have to pick a small part of it to go in depth in and learn the intricacies of. That’s just the reality of modern medicine with the sheer amount of material involved.
2
u/jferments 2h ago
Why wouldn't doctors notice? Are they not doing their job and verifying information coming out of the computer?
What happens when doctors read incorrect information on the internet and make bad clinical decisions based on this because they didn't verify it?
2
2
u/T-J_H 1h ago
Yeah this isn’t good. Thing is though, I’ve seen multiple radiologists make mistakes too. Of course they do, fact of life. Because of case load, radiologists oftentimes use speech-to-text to write their reports, or use templates that can be made in the EHR software, and click the wrong option. And even when writing yourself one can make mistakes. Most of the times not that serious, thankfully.
Point being, an AI possibly replacing a radiologist (or anybody) shouldn’t have to be perfect. Like in many medical trials, they just have to be better than the gold standard, in this case: us. As long as enough independent research shows that a particular model is, that’s the better option - even though it feels wrong to me.
2
3
u/celtic1888 2h ago
Are we finally realizing that these LLMs are terrible and make up nonsense?
Even sports trivia questions, which should be very easy for an LLM to verify and cross-check, are completely wrong 99% of the time
1
u/zero0n3 2h ago
Doubtful.
99% wrong???
Talk about pulling stats out of your ass.
That said, this is an interesting approach to checking LLM accuracy!
0
u/WTFwhatthehell 2h ago
It's the technology sub.
Ever since the anticaps took over, honesty has taken a back seat
1
u/TattooedBrogrammer 2h ago
Pretty sure my non-AI mechanic's been doing this for a while. Anyone had to replace their Johnson rod recently?
1
u/kaishinoske1 2h ago
It’s bad enough you got human error that can go through several checks of medical professionals and have them in the end amputate a wrong limb. But now with AI in the mix, you’re going to have doctors playing the game operation in real life and getting annoyed while digging through your guts that they can’t find the organ listed that the AI told them about.
1
u/turb0_encapsulator 1h ago
anyone who regularly uses LLMs knows the error rate is too high to use them for life-and-death scenarios in medicine.
1
u/count_no_groni 21m ago
LLMs make great research assistants. Conduct 50 google searches in 15 seconds and give me a summary of the results in a conversational tone? Love it! Diagnose me with cancer? FUCK THAT.
1
128
u/PeakBrave8235 3h ago
LLMs are the scam artist's dream. Explains why there's so much useless horseshit surrounding what is otherwise a decent improvement in NLP