More on AI-generated content

In his recent editorial (“A ‘guest editorial’ … generated by ChatGPT?,” Current Psychiatry, April 2023, p. 6-7, doi:10.12788/cp.0348), Dr. Nasrallah asked for feedback on the ChatGPT-generated editorial on myths surrounding psychiatry. I found the “product” equivalent to a diligent high schooler’s homework assignment. ChatGPT lacks the nuance of a historical perspective, one that has observed the ever-changing enthusiasms (from Freud’s “cure” for posttraumatic stress disorder through dopamine, then serotonin, and now glutamate and psychedelics) that arise precisely because mental illness is so difficult to treat. For the guest editorial on myths, a little googling would have yielded the same content, if not a similar list of myths. Surely such an editorial would never be accepted in any psychiatry journal; maybe in Reader’s Digest!

Sara Hartley, MD
Berkeley, California

I just read the “guest editorial” generated by ChatGPT. Thank you for this article. Although ChatGPT is truly an amazing advancement in artificial intelligence (AI), I found this guest editorial very basic. It did not read like scientific writing; it read more like it was written at an 11th- or 12th-grade level, though I am fully aware that the question was simple, and thus the answer was not very deep. I can’t deny that if I had been tested, chances are good I would have fallen among the 32% of my peers who would not have recognized it as AI. I appreciate that you (and your team) are working on a protocol for how to handle content generated by or with the help of AI. God knows whether (or, most likely, when) people with evil intent will use AI to spread false information that contradicts the accredited scientific data and research that guide the medical world and many other fields. I wonder whether AI can serve as a search engine that is better or easier to use than PubMed (for example) and the other services we use for research and learning.

Alex Mustachi, PMHNP-BC
Suffern, New York

I wanted to let you know how much I enjoyed reading your recent editorial on AI and scientific writing. Sharing the 4 AI-generated “articles” with readers (“For artificial intelligence, the future is finally here,” Current Psychiatry, April 2023, p. 8-11, 29, doi:10.12788/cp.0354) was a delightfully clever and engaging exercise. Other journals need to take a more proactive, targeted stand on this very important issue.

Martha Sajatovic, MD
Cleveland, Ohio

The AI-generated samples were fascinating. As far as I superficially noted, the spelling, grammar, and punctuation were correct. That is better than one gets from most student compositions. However, the articles were completely lacking in depth or apparent insight. The article on anosognosia mentioned it can be present in up to 50% of cases of schizophrenia. In my experience, it is present in approximately 99.9% of cases. It clearly did not consider whether anosognosia is also present in alcoholics, codependents, abusers, or people with bizarre political beliefs. But I guess the “intelligence” wasn’t asked that. The other samples also show shallow thinking and repetitive wording—pretty much like my high school junior compositions.

Perhaps an appropriate use for AI is a task such as evaluating suicide notes, though AI’s success at that task leaves one nonplussed. Much more disconcerting was a recent news article reporting that AI invented a sexual harassment allegation against a professor and then generated citations to its own made-up reference.1 That is indeed frightening new territory. How does one fight a machine to clear one’s own name?

Linda Miller, NP
Harrisonburg, Virginia

References

1. Verma P, Oremus W. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused. The Washington Post. April 5, 2023. Accessed May 8, 2023. https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

Thank you, Dr. Nasrallah, for your latest thought-provoking articles on AI. Time and again you provide the profession with cutting-edge, relevant food for thought. Caveat emptor, indeed.

Lawrence E. Cormier, MD
Denver, Colorado


We read with interest Dr. Nasrallah’s editorial that invited readers to share their take on the quality of an AI-generated writing sample. I (MZP) was a computational neuroscience major at Columbia University and was accepted to medical school in 2022 at age 19. I identify with the character traits common among many young tech entrepreneurs driving the AI revolution—social awkwardness; discomfort with subjective emotions; restricted areas of interest; algorithmic thinking; strict, naive idealism; and an obsession with data. To gain a deeper understanding of Sam Altman, the CEO of OpenAI (the company that created ChatGPT), we analyzed a 2.5-hour interview that MIT research scientist Lex Fridman conducted with Altman.1 As a result, we began to discern why AI-generated text feels so stiff and bland compared to the superior fluidity and expressiveness of human communication. As of now, the creation is a reflection of its creator.

Generally speaking, computer scientists are not warm and fuzzy types. Hence, ChatGPT strives to be neutral, accurate, and objective compared to more biased and fallible humans, and, consequently, its language lacks the emotive flair we have come to relish in normal human interactions. In the interview, Altman discusses several solutions that will soon raise ChatGPT’s currently deficient emotional quotient toward its superior IQ. Altruistically, Altman has opened ChatGPT to all, so we can freely interact with it and harness its potential to increase our productivity exponentially. As a result, ChatGPT interfaces with millions of humans through reinforcement learning from human feedback (RLHF), which makes each iteration more attuned to our sensibilities.2 Altman is also undertaking a road trip out of his Silicon Valley bubble to interact with “regular people” and gain a better sense of how to make ChatGPT more user-friendly.1

What is so saddening about Dr. Nasrallah’s homework assignment is that he asked us to apply our mature adult standards to an article written at the emotional stage of an early high schooler. But our hubris and complacency are entirely unfounded, because ChatGPT is learning much faster than we ever could, and it will quickly surpass us all as it continues to evolve.

It is also quite disconcerting to hear how naively Altman relies upon governmental regulation and corporate responsibility to manage the potential misuse of future artificial general intelligence for social, economic, and political control and upheaval. We know well the harmful effects of the internet and social media, particularly on our youth, yet our laws still lag far behind these technological innovations, which enhance our knowledge even as they erode our souls. As custodians of our world, dedicated to promoting and preserving mental well-being, we cannot wait much longer to intervene and parent AI along its wisest developmental trajectory before it is too late.

Maxwell Zachary Price, BA
Nutley, New Jersey

Richard Louis Price, MD
New York, New York

References

1. Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI. Lex Fridman Podcast #367. March 25, 2023. Accessed April 5, 2023. https://www.youtube.com/watch?v=L_Guz73e6fw

2. Heikkilä M. How OpenAI is trying to make ChatGPT safer and less biased. MIT Technology Review. February 21, 2023. Accessed April 5, 2023. https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/

Disclosures

The authors report no financial relationships with any companies whose products are mentioned in their letters, or with manufacturers of competing products.
