My wife takes her blood test results and punches them into ChatGPT, and she gets a crystal-clear answer from it, way better than our own doctor can explain. It uses plain language and is very easy to understand.
It’s unfortunate that the high fence of gatekeepers who answer the phones at most physicians’ offices has led to this. The doc would be happy to call you back and explain things; half the time they don’t even know someone is asking.
AI will definitely have a large presence in healthcare within the next 3-5 years.
That being said, I don’t think it will be replacing that many people. In certain areas, yes, it will replace a lot, but not in healthcare.
Also, one thing AI doesn’t have is that “gut feeling.” Studies have suggested that a healthcare provider’s gut feeling is often a more accurate indicator of problems than the objective findings.
It won't be long before the AI is much, much smarter than any doctor, provided it is given the proper inputs, which the doctor is likely providing. That's much better than the alternative of the patient describing the problem to the AI herself.
I use ChatGPT a lot for analyzing backtests and live performance in my day trading. As with any math or logic problem, you should have a good idea of where you are heading so that you can identify where the AI engine (process) goes astray and rein it in.
Those that know enough and have enough experience in the field to see when things make sense and when they don't will have much more success using any AI engine than those that just blindly ask questions and trust the answers.
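The "know where you're heading" advice can be made concrete: before trusting an AI's read of a backtest, recompute the headline numbers yourself and compare them to what it claims. A minimal sketch under my own assumptions (per-trade fractional returns, a simple non-annualized Sharpe-like ratio), not anything specific to the poster's workflow:

```python
# Sanity-check an AI's summary of a backtest by recomputing the
# headline stats yourself. Assumes a list of per-trade returns
# expressed as fractions (e.g. 0.01 = +1%).
import math

def backtest_stats(returns):
    n = len(returns)
    wins = sum(1 for r in returns if r > 0)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return {
        "trades": n,
        "win_rate": wins / n,
        "avg_return": mean,
        "sharpe_like": mean / math.sqrt(var) if var > 0 else float("inf"),
    }

trades = [0.012, -0.004, 0.008, -0.010, 0.015, 0.003, -0.002]
stats = backtest_stats(trades)
# If the AI engine claims a 90% win rate and your own count says
# ~57%, that's exactly where you rein it in.
print(stats)
```

The point isn't the statistics themselves; it's having an independent number to hold the AI's answer against.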
Today I had to write a recommendation letter for a career extension for one of my people. It normally takes me about 45 minutes to draft one.
I had a sample letter, and I started by retyping it, adding the individual's name and changing out the parts that did and didn't fit. But this was for a different agency, so it required quite a bit of rewriting.
I then decided to let the military CHAT AI work on it. I gave it parameters for what I needed it to say, then gave it the sample and the individual's resume.
It was done in 9 seconds.
I changed 2 words in the letter it wrote and sent it up for approval. My boss deleted one word and signed it, and we were both done in 15 minutes.
Was great because I actually had time to complete a ton of other pressing issues.
I didn't say please and thank you to it this time.
So AI replaces jobs all over the place, and people hope they will be retired by the time AI takes theirs. It’s inevitable. In the performance of work, it will take only 5 people to do the work of 500. AI will be smarter than any combined generation of people.
What, exactly, is the vision for the future of humans in this scenario? I don’t want to hear “learn to code”: AI will outcode you all. I don’t want to hear “everyone will just adjust.” Adjust to what? Obsolescence? Do we institute a basic minimum income so high that we simply bore ourselves to death from lack of purpose, with only the Soma making it OK?
I see a lot of excited people talking about the possibilities of AI and robotics etc. I see almost NO ONE concerned about humanity. That is the elephant in the room…what will all the people actually DO in the future some are predicting?
So I posted this in another thread about AI, but the future reality is that AI will be making diagnoses of conditions exclusively. AI is already better and faster at identifying illnesses. Robots are already performing surgeries with higher success rates than some of the most skilled surgeons, and they can perform brain surgery with precision and speed that humans can’t. At some point insurance companies won’t cover doc visits or surgeries without AI/robots, because the risk of issues otherwise will be too high.
@ODB very well said re:concerns about AI's potential impact on society and culture.
One thing the OP illustrates to me is that our social interactions are going to get increasingly weird/cumbersome as a growing subset of folks take AI outputs at face value/as gospel and trust the AI to do their thinking for them.
So I posted this in another thread about AI, but the future reality is that AI will be making diagnoses of conditions exclusively. AI is already better and faster at identifying illnesses. Robots are already performing surgeries with higher success rates than some of the most skilled surgeons, and they can perform brain surgery with precision and speed that humans can’t. At some point insurance companies won’t cover doc visits or surgeries without AI/robots, because the risk of issues otherwise will be too high.
This is only the future if we allow it to be. This mentality is sort of defeatist, IMO. I also think this line of thinking neglects the implications of allowing sometimes life-or-death decisions to be made by non-human actors, and the fact that AI can be (and often is) wrong too.
Who is ultimately held accountable when SHTF? Bureaucracy already makes it difficult to get issues resolved; take customer service or arguing insurance claims, for example. Now imagine that there is nobody to appeal to (except maybe an AI customer-service bot) to resolve your concern about a decision that was made by AI in the first place. Diffusion of responsibility into the ether.
I don’t think you or I have the ability to stop it; there’s way too much money involved. Just ask all the folks in California whose houses burned down. Odd that insurance companies pulled out of there beforehand due to “high risk” and no one could do a thing about it. Also, no one was held accountable for any of that.
I do think there will still be a doctor to go over the information with a patient, but all of it will be checked by AI before the doctor can even read the first line of suggested treatments.
I am also afraid that people will rely heavily on AI, in some cases giving it a god-like idol status.
I’m not excited about all the new issues AI will bring, but there will also be benefits. Folks need to know this is coming faster than anyone expects, because it will impact all of our lives.
I think the future with AI is misunderstood. AI needs something to learn from, either itself via iteration or something else via data inputs. I see AI removing the need for, say, 50 HR positions at a company. You will still need HR, just not 50 people. AI is only going to be as successful as its user interface is usable by most people. If it's really specialized, it loses utility and, like everyone says, produces a lot of shitty answers.
So I posted this in another thread about AI, but the future reality is that AI will be making diagnoses of conditions exclusively. AI is already better and faster at identifying illnesses. Robots are already performing surgeries with higher success rates than some of the most skilled surgeons, and they can perform brain surgery with precision and speed that humans can’t. At some point insurance companies won’t cover doc visits or surgeries without AI/robots, because the risk of issues otherwise will be too high.
Zero chance. Medicine is incredibly nuanced. It’s going to be the absolute last thing to become automated.
Speaking of risk and liability. Would you rather be maimed/killed in the OR by a surgeon or by a robot? I don’t think the average American would be able to handle the inevitable robot/AI “malpractice” situation.
Not sure where you’re seeing robots do surgery but robotic surgery is just another tool in the hands of a surgeon. The robot is not deciding where/how to cut and sew.
And finally, who is going to teach these AI algorithms how to practice medicine? Even if you had a ton of doctors willing to teach them (can’t imagine many would agree, on principle), it’s again too nuanced, with too many decision points, possibilities, acceptable answers, and intangibles.
I just don’t see it in our lifetimes, probably never. It’s a pipe dream, and it would be a waste of time and effort to attempt when there are certainly more profitable and attainable ways to use AI.
Edit: here’s a real-life example of where AI would fall flat.
Patient has an EKG with somewhat peaked T waves, suggestive of hyperkalemia but not classic. Compared to the previous EKG there is no significant change; same with the EKG before that. AI might assume the EKG is normal or a normal variant. It would probably order blood tests and wait for the results. The doctor knows the context of those previous EKGs: the patient was critically hyperkalemic and almost died both times. The doctor rushes the patient back from the waiting room, administers medications, and averts a crisis. Repeat this kind of thing over and over with the myriad of medical issues out there, and I just don’t see it happening.
You probably don’t believe cars can drive themselves, rockets can be caught out of the air, or that we are even having a conversation on a device in our hands. To say this will never happen is very naive.
Will it take time? Yes, but AI is self-learning, never sleeps, never gets sick, and doesn’t take vacations. Will there still be doctors to consult with? Yes, but instead of 100 docs at a hospital you’ll have maybe 50.
Your example shows that for some medical evaluations having a human administer them is best, and I think those types of things would be limited to just that.
I personally would rather speak with a real person when it comes to medical, I don’t even like doing telemedicine.
Just a quick google search will show you where the tech is going.
When you read a marketing blurb about AI and robots doing this or that better than the old way, look for an actual study proving the point. Da Vinci arrived on the scene around 2000; to date, the studies comparing robotic surgery to conventional open surgery show little difference in cost, recovery time, or complications. Part of the problem is the surgeon developing the skill to use the robot; a secondary, well-documented problem is building the knowledge base in hospital personnel for the maintenance, use, and troubleshooting of the equipment from a technical standpoint. Despite laparoscopic techniques being around even longer, as of 2023 there are no studies comparing the two modalities for outcomes, costs, and complications.

Now we are suggesting completely replacing the operator with an AI system without human guidance; that will be a ways down the pike, if ever. A surgeon can always convert to an open procedure in the event of a problem with a robot or scope. I see AI in medicine making providers more aware of zebras lurking in the herds of horses. I live by the saying “you don’t know what you don’t know,” and it applies to all of this. When Elon says a robot can guide a Neuralink implant more precisely than a human, that may be true right up until the robot cannot judge that a patient has variant anatomy and transects a circuit, resulting in death. As always, choose wisely.
When a study about such comparisons is published, always look for the sponsors. Early adopters are often sponsored by the manufacturer, and the early studies in any area of medicine report outcomes more optimistic than what is subsequently proven over time. Buyer beware.
Edit: here’s a real-life example of where AI would fall flat.
Patient has an EKG with somewhat peaked T waves, suggestive of hyperkalemia but not classic. Compared to the previous EKG there is no significant change; same with the EKG before that. AI might assume the EKG is normal or a normal variant. It would probably order blood tests and wait for the results. The doctor knows the context of those previous EKGs: the patient was critically hyperkalemic and almost died both times. The doctor rushes the patient back from the waiting room, administers medications, and averts a crisis. Repeat this kind of thing over and over with the myriad of medical issues out there, and I just don’t see it happening.
You honestly don't think that AI would have access to past patient data, visits, notes, conclusions, etc.? That it can't analyze that faster and with less loss of details than a person?
Not only that but it has access immediately to the entire world's worth of similar cases which the physician does not.
Can AI replace that physician today? No. Will it be able to in the future? Absolutely. Any other conclusion is simply ignorant of the current abilities of advanced AI and the speed with which it is improving.
AI would have access to that, but connecting the dots here is pretty complex third- or fourth-order reasoning. Not to mention there is a certain gestalt you develop about how time-sensitive something is despite the vital signs or numbers. Yes, a chart from the previous visit is there, but it “lives” in a different part of the EMR. Synthesizing the information, reading between the lines, quickly grasping what happened, and knowing what to do with that information when there are 5 different options... You might be able to teach AI how to deal with hyperkalemia, but the real world of patient care has infinitely more complexities and nuances, at a much higher level of reasoning than I’ve ever seen any AI able to do.
Edit: I may be ignorant to what’s currently possible and what might be possible in the future but my experience with healthcare related AI is that there is significant hype without any real substance.
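As an aside, encoding any one of these dot-connections is trivial; the real dispute in this thread is whether a system would reliably know to fire the right rule among thousands. A toy sketch, where every field name and condition is invented purely for illustration (nothing here is a real EMR schema or clinical protocol):

```python
# Toy triage rule: a borderline EKG finding plus a history of
# critical hyperkalemia escalates to "treat now" instead of
# "order labs and wait". All field names and thresholds are
# invented for illustration, not real clinical logic.

def triage(ekg_finding, prior_visits):
    # prior_visits: list of dicts, each with a "critical_hyperk" flag
    history_of_crisis = any(v.get("critical_hyperk") for v in prior_visits)
    if ekg_finding == "peaked_t_waves" and history_of_crisis:
        return "treat-now"      # don't wait for labs
    if ekg_finding == "peaked_t_waves":
        return "order-labs"     # borderline finding, no scary history
    return "routine"

visits = [{"critical_hyperk": True}, {"critical_hyperk": True}]
print(triage("peaked_t_waves", visits))
```

Writing the rule is the easy part; knowing that this particular rule matters for this particular patient, out of the myriad cases described above, is the hard part both sides are arguing about.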
You probably don’t believe cars can drive themselves, rockets can be caught out of the air, or that we are even having a conversation on a device in our hands. To say this will never happen is very naive.
Will it take time? Yes, but AI is self-learning, never sleeps, never gets sick, and doesn’t take vacations. Will there still be doctors to consult with? Yes, but instead of 100 docs at a hospital you’ll have maybe 50.
Your example shows that for some medical evaluations having a human administer them is best, and I think those types of things would be limited to just that.
I personally would rather speak with a real person when it comes to medical, I don’t even like doing telemedicine.
Just a quick google search will show you where the tech is going.
Sure, cars can drive themselves. They also crash and do stupid things, and there is more outrage in those situations than when an idiot driver does the same thing. I do not think that, socially, we are ready for widespread AI use when human lives are involved.
Using a robot to perform a specific mechanical task in a surgery is one thing. The neurosurgeons are using a “new” tool but the robot is not “doing the surgery” in that application.
And the robot doing a simulated surgery on a dummy is cool, but highly doubt it makes it past that stage. How many IRBs are going to allow this to be studied on real humans with complex anatomy? Who is going to sign up to have their open heart surgery done by a robot in the early stages? You want the surgeon who knows what to do when things go wrong or are not textbook. When I google as you said, I see essentially advertising and sensationalizing something without the legs to truly go the distance.
There’s a huge difference between using a robot or any other technology as a tool during a procedure and replacing physicians with a robot; the latter is far-fetched even if technically possible, someday.
Medicine is its own complicated, political, and often self-serving system. Equally complicated is how embedded the legal system is within medicine. Even IF AI advanced to a place where it could do something relevant in healthcare, the actual social and political changes needed to apply it are, IMO, an even larger hurdle. The various governing bodies in medicine will never go for it, and I don’t think the populace would either. Maybe once robots are in households changing diapers and wiping butts people might warm up to the idea, but that is generations away, IMO.
The simulation, over and over again, is where the precision comes from. The equipment being used by doctors today is built by the same companies that are building these medical robots. I’d be surprised if they aren’t logging every move a doc makes with the current equipment and then feeding that into AI to speed learning, then in turn using multiple robots running the exact same simulation, taking that info, and doing it again, learning again and again.
It’s really the ultimate compound interest in learning tech, or what Albert Einstein supposedly called the eighth wonder of the world. The idea was about gaining wealth, but it applies here: by compounding the information already known and investing it back into itself, the growth of knowledge and capability accelerates over time. When the tool doing the learning (AI) has no concept of time and instead just does, what does that look like? I don’t know.
How many simulated surgeries do doctors do on pigs and cadavers? How long are they mentored by an experienced surgeon? There are some super talented doctors out there, but it took time to get there.
I do believe there will still be a doctor overseeing the entire process for those decisions that need to be addressed.
I don’t think comparing loss of life from self-driving versus non-self-driving vehicles is going to win supporters on any level. In both cases it’s unfortunate.
Believe it’s possible or don’t, makes no difference to me.
I am a physician and have played around with ChatGPT and also OpenEvidence, which is AI supposedly for physicians.
Sometimes the answers are correct, sometimes they are completely bonkers. Unless you already know the answer to the question I don’t trust it at all.
I have found that more or less the same information comes up in a Google search, except there you can screen for reliable sources and go back to familiar resources. With the AI, it “feels” like the information is coming from an authority, but in reality it’s pulling from some trash, bot-generated webpage.
And yes, a lot of my patients are on Google, ChatGPT, etc. I can’t tell you how many times a person comes to the ER in the middle of the night because they started googling their symptoms and then couldn’t sleep. For the most part I appreciate when patients take an interest in their own health and medications. It’s rarer, but more annoying, when they already “know” what they have and “want” some treatment or test that is kooky, harmful, or unnecessary.
Agree, I have not been impressed with AI, other than how confident it can come across while being absurdly wrong.
Too many people are getting impressed. This is like seeing Deep Blue in 1997 and thinking computers would be winning Go in 5 years (it took 19). AI may get there, but not in its current format and probably not with today's technology.
Musk promised autonomous driving in 2016; we still don't have that from Tesla. Of course we do have it elsewhere, but only in defined geographic locations with everything mapped and pre-planned. Not exactly creative problem solving.
Medicine is asking that self-driving car to be dropped on a remote dirt road with washouts, lots of side roads, no signs, downed trees, and only a napkin-sketch map. And even that would be easier than some cases in medicine.
This, 100%. AI, in its current form, is best in a well-defined sandbox, not the full complexity of life. Using AI to analyze echoes or EKGs to flag for amyloid is one great example Mayo is developing, but it doesn't make the diagnosis; it only cues it to be considered.
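That "flag, don't diagnose" design can be sketched in a few lines. Everything here is my own illustration (the threshold, score, and label are invented, not Mayo's actual system): the model's output only ever adds a "consider this" prompt to a clinician's worklist; it never writes a diagnosis.

```python
# "Flag, don't diagnose": a screening score above a threshold adds
# a prompt for the clinician to consider a rare disease. The model
# never records a diagnosis. Threshold and scores are illustrative.

FLAG_THRESHOLD = 0.15   # tuned for sensitivity over specificity

def screen(patient_id, model_score, worklist):
    if model_score >= FLAG_THRESHOLD:
        worklist.append((patient_id, "consider cardiac amyloidosis"))
    return worklist

worklist = []
screen("pt-001", 0.42, worklist)   # flagged for human review
screen("pt-002", 0.03, worklist)   # below threshold, nothing added
print(worklist)
```

The design choice matches the sandbox point above: the AI's scope is a single, well-defined question, and the open-ended judgment stays with the human.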