
ChatGPT and other artificial intelligence (AI)–driven natural language processing platforms are here to stay, so like them or not, physicians might as well figure out how to optimize the technology's role in medicine and health care. That’s the takeaway from a three-expert panel session about the technology held at the annual Digestive Disease Week® (DDW).

The chatbot can help doctors to a certain extent by suggesting differential diagnoses, assisting with clinical note-taking, and producing rapid, easy-to-understand patient communications and educational materials, they noted. However, it also makes mistakes. And, unlike a medical trainee, who might give a clinical answer while expressing some doubt, ChatGPT (OpenAI/Microsoft) states its findings as fact, even when it is wrong.

This problem of AI inaccuracy, known as “hallucination,” was on display at the packed DDW session.

When asked when Leonardo da Vinci painted the Mona Lisa, for example, ChatGPT replied 1815. That’s off by about 300 years; the masterpiece was created sometime between 1503 and 1519. Asked for a fact about George Washington, ChatGPT said he invented the cotton gin. Also not true. (Eli Whitney patented the cotton gin.)

In an example better suited to the gastroenterologists at DDW, ChatGPT correctly stated that Barrett’s esophagus can lead to adenocarcinoma of the esophagus in some cases. However, the technology also said that the condition could lead to prostate cancer.

So, if someone asked ChatGPT for a list of possible complications of Barrett’s esophagus, it would include prostate cancer. A person without medical knowledge “could take it at face value that it causes prostate cancer,” said panelist Sravanthi Parasa, MD, a gastroenterologist at Swedish Medical Center, Seattle.

“That is a lot of misinformation that is going to come our way,” she added at the session, which was sponsored by the American Society for Gastrointestinal Endoscopy (ASGE).

The potential for inaccuracy is a downside to ChatGPT, agreed panelist Prateek Sharma, MD, a gastroenterologist at the University of Kansas Medical Center in Kansas City, Kansas.

“There is no quality control. You have to double-check its answers,” said Dr. Sharma, who is president-elect of ASGE.

ChatGPT is not going to replace physicians in general or gastroenterologists doing endoscopies, said Ian Gralnek, MD, chief of the Institute of Gastroenterology and Hepatology at Emek Medical Center in Afula, Israel.

Even though the tool could play a role in medicine, “we need to be very careful as a society going forward ... and see where things are going,” Dr. Gralnek said.

How you ask makes a difference

Future iterations of ChatGPT are likely to produce fewer hallucinations, the experts said. In the meantime, users can lower the risk by paying attention to how they’re wording their queries, a practice known as “prompt engineering.”

It’s best to ask a question that has a firm answer. If you ask a vague question, you’ll likely get a vague answer, Dr. Sharma said.
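As a purely hypothetical illustration of that advice (neither prompt comes from the panel; both are invented), compare a vague query with one that pins down scope, audience, format, and how to handle uncertainty:

```python
# Invented example prompts illustrating "prompt engineering" — the
# wording here is hypothetical, not drawn from the DDW session.
vague_prompt = "Tell me about Barrett's esophagus."

specific_prompt = (
    "List the established complications of Barrett's esophagus "
    "in three bullet points, at an eighth-grade reading level, "
    "and label anything uncertain as 'not established'."
)

# The vague prompt leaves the model free to ramble or hallucinate;
# the specific one constrains topic, format, audience, and the
# handling of uncertainty — the essence of prompt engineering.
print(specific_prompt)
```

The same question, worded two ways, can yield very different answers; the constrained version gives the model far less room to state a hallucination as fact.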

ChatGPT is a large language model (LLM). GPT stands for “generative pre-trained transformer,” a type of neural network that finds long-range patterns in sequences of data. LLMs work by predicting the next word in a sentence.

“That’s why this is also called generative AI,” Dr. Sharma said. “For example, if you put in ‘Where are we?’, it will predict for you that perhaps the next word is ‘going?’ ”
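Dr. Sharma’s “Where are we ...” example is next-word prediction in miniature. Here is a toy sketch of that idea using a hand-built bigram table over an invented mini-corpus; real LLMs learn these statistics with transformer networks trained on vast amounts of text, not lookup tables:

```python
from collections import Counter, defaultdict

# Invented mini-corpus; real models train on billions of words.
corpus = "where are we going where are we headed where are you".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("where"))  # "are" — it follows "where" every time
```

Scaled up enormously and conditioned on the entire preceding conversation rather than a single word, this next-word guessing is what lets the model generate fluent text, and also why it can generate fluent, confident-sounding errors.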

The current public version is ChatGPT 3.5, which was trained on publicly available online information up until early 2022. It can gather information from open-access scientific journals and medical society guidelines, as well as from Twitter, Reddit, and other social media. It does not have access to private information, such as electronic health records.

The use of ChatGPT has exploded in the past 6 months, Dr. Sharma said.

“ChatGPT has been the most-searched website or platform ever in history since it was launched in December of 2022,” he said.

What’s in it for doctors?

Although ChatGPT was not specifically trained for health care–related tasks, the panelists noted that it has potential as a virtual medical assistant, chatbot, clinical decision-support tool, source of medical education, natural language processor for documentation, or medical note-taker.

ChatGPT can help physicians write a letter of support to a patient who, for example, was just diagnosed with stage IV colon cancer. It can do that in only 15 seconds, whereas it would take a physician much longer, Dr. Sharma said.

ChatGPT is the “next frontier” for generating patient education materials, Dr. Parasa said. It can help time-constrained health care providers, as long as the information is accurate.

ChatGPT 4.0, now available by subscription, can do “almost real-time note-taking during patient encounters,” she added.

Another reason to be familiar with the technology: “Many of your patients are using it, even if you don’t know about it,” Dr. Sharma said.

Questions abound

A conference attendee asked the panel what to do when a patient comes in with ChatGPT medical advice that does not align with official guidelines.

Dr. Gralnek said that he would explain to patients that guideline-based medical information is not “black and white.” The panel likened the situation to patients who already arrive at appointments armed with information from the Internet, which is not always correct and must then be countered by their doctors. The same will likely happen with ChatGPT.

Another attendee asked whether ChatGPT eventually will integrate with electronic health record systems.

“OpenAI and Microsoft are already working with Epic,” Dr. Parasa said.

A question arose about the reading level of information provided by ChatGPT. Dr. Parasa noted that it is not standardized. However, a user can prompt ChatGPT to provide an answer at an eighth-grade reading level or at a level appropriate for a well-trained physician.

Dr. Sharma offered a final warning: The technology learns over time.

“It knows what your habits are. It will learn what you’re doing,” Dr. Sharma said. “Everything else on your browsers that are open, it’s learning from that also. So be careful what websites you visit before you go to ChatGPT.”

Dr. Sharma is a stock shareholder in Microsoft. Dr. Parasa and Dr. Gralnek reported no relevant financial relationships.

DDW is sponsored by the American Association for the Study of Liver Diseases, the American Gastroenterological Association, the American Society for Gastrointestinal Endoscopy, and The Society for Surgery of the Alimentary Tract.

A version of this article originally appeared on Medscape.com.

Article Source: AT DDW 2023