Doctor Chat GPT

yfarm

WKR
Joined
Apr 24, 2018
Messages
984
Location
Arroyo City, Tx
Spoke with a physician today. Subspecialist, saw a new female patient in the office Friday for a problem that was minor, not life-threatening. The woman said, "A friend of mine has a few questions for you about my care." Doc said sure, what can I answer? She looks at her phone and asks a question off the screen. Doc answers, and that answer, heard by her phone, immediately generates a response and five more questions. At this point he realizes the "friend" is ChatGPT, tells her he doesn't have discussions with ChatGPT, and tells her to find another doctor.
I believe in having good back-and-forth conversations with people in any type of business, but at some point you just say enough, goodbye.
 
I don't blame the doctor for being upset or feeling undermined by AI, but at the same time if the questions were valid he basically fired a client for wanting more information. Would he have had the same reaction if the client was knowledgeable enough to come up with these questions organically? If the answer is no, I don't think he was justified in his response. If she was just seeking more information and using the AI to help determine what kind of questions might be helpful to ask, I don't see a problem with it, though the way she handled it was questionable.

AI is everywhere. The veterinary practice my wife works at has an AI "assistant" that basically records and summarizes the conversations, diagnoses, and treatment plans as discussed between client/provider rather than vets and techs having to manually record notes.

The technicians (good ones at least) still end up reviewing and updating the AI notes as they are usually far from perfect. It wouldn't surprise me at all to find out the same type of thing being employed in human healthcare.
 
This is the kind of shit that will be commonplace in the next 18 months and totally standard by the end of two years.

Look at the quality of AI short films being made on YouTube. Two years ago they looked weird. Now they're 98% of the way to photographic realism.

The future of a lot of positions will be in turmoil in 5 years.

I am a wildlife biologist by education and a safety and occupational health manager by profession. I don't see any reason for actual employees within 5 years.

Drones with AI can already do a good job at finding problems in buildings.

Drones with AI can do complete wildlife counts in any weather any time of day without need for overtime, or anything else.

I always say please and thank you to AI, I am hoping that my death will be swift and painless when they take over.
 
Awaiting AI to control robotic surgery systems, no need for a surgeon. Whoops, had a hallucination and your liver or heart is gone. A morcellator run amok inside your body.
 
His point is that there was no end to it, and ChatGPT was asking him questions the patient couldn't understand the answers to, so what's the point?
 
It won't be long before the AI is much, much smarter than any doctor, provided it's given the proper inputs, which the doctor is likely providing. Much better than the alternative of the lady describing the problem to the AI herself.
 
In my business AI is everywhere. I wouldn't say I see the end for a lot of my colleagues in the near future, but there's a real chance a lot of those jobs will be gone at some point.

The only thing that worries me is if we take these skills away from people, or people stop training for them and practicing in, say, surgery or programming. What happens if, or when, the robot can't do it?
 
No question that having a database of all known medical knowledge at your fingertips will disrupt processes currently in use. The problem will be the user of the information being able to understand it and make appropriate decisions from it.
 
So AI replaces jobs all over the place. People hope that they will be retired by the time AI takes their job. It’s inevitable. In the performance of work, it will only take 5 people to do the work of 500. AI will be smarter than any combined generation of people.

What, exactly, is the vision for the future of humans in this scenario? I don't want to hear "learn to code": AI will outcode you all. I don't want to hear "everyone will just adjust." Adjust to what? Obsolescence? Do we institute a basic minimum income so high that we simply bore ourselves to death from lack of purpose, with only the Soma to make it OK?

I see a lot of excited people talking about the possibilities of AI and robotics etc. I see almost NO ONE concerned about humanity. That is the elephant in the room…what will all the people actually DO in the future some are predicting?
 

A guy I work with had to program a new network switch and decided to let AI have a crack at it. It would normally take him a day to provision and test a device like this; by the time he was back from getting a cup of coffee, it was provisioned and tested, and, he said, better than he would have done it.

Yeah, nobody wants to talk about what you're actually going to do with yourself. I think we're going to see an unsettled society where anything goes (kind of there now) in the name of entertainment and not getting bored.
 

I also watched a co-worker use AI for a project... he nearly ruined our worldwide GS1/UPC/EAN dataset because he didn't know it was spitting out the wrong info. It took me 45 minutes to convince him it was wrong.

Gell-Mann Amnesia and the Dunning-Kruger effect are going to combine in a frightening way with AI.

It's from a different realm, but I have always loved this C.S. Lewis quote, and I think it applies here. We do need some signposts, else the results will be similar.

"Indeed, the safest road to Hell is the gradual one—the gentle slope, soft underfoot, without sudden turnings, without milestones, without signposts."
 
I am a physician and have played around with ChatGPT and also "OpenEvidence," which is AI supposedly for physicians.

Sometimes the answers are correct, sometimes they are completely bonkers. Unless you already know the answer to the question, I don't trust it at all.

I have found that more or less the same information comes up in a Google search, except there you can screen for reliable sources and go back to familiar resources. With the AI it "feels" like the information is coming from an authority, but in reality it's pulling from some trash, bot-generated webpage.

And yes, a lot of my patients are on Google, ChatGPT, etc. Can't tell you how many times a person comes to the ER in the middle of the night because they started googling their symptoms and then couldn't sleep. For the most part I appreciate when patients take an interest in their own health and medications. It's rarer but more annoying when they already "know" what they have and "want" some treatment or test that is kooky, harmful, or unnecessary.
 
Not gonna lie I have used ChatGPT to generate coyote call sequences and killed coyotes with them lol

Edit: this was in response to the comment about AI becoming commonplace, not necessarily about AI in the medical field.
 
I saw a comment about the advancement of robotics and AI that I felt was pretty accurate:

"I wanted AI to mow the lawn, clean the house, and get the groceries so that I can focus on the interesting parts of my work.

But now AI is doing my work and I'm left to mowing the lawn, cleaning the house, and getting groceries..."
 
Same in the legal field: it's sometimes right on, sometimes slightly off, and sometimes wildly wrong. If you didn't already know the answer, or do backup research to confirm, you couldn't tell the three apart. But if it can get to 99% spot-on, or do a better job "showing its work" so you can double-check the information live, it will change the game.
 
Perhaps it's crystal clear, but how do you know the conclusions being drawn are accurate or appropriate?
It's not a replacement for the doctor; he still has a job to do. It's about getting more information, articulated better than he can. She learned a lot more about her high TSH levels through ChatGPT, and the doctor confirmed what it said and put her on medication for it.

It is a tool to use, not a replacement for doctors.
 