Artificial intelligence in psychiatry

For many people, artificial intelligence (AI) brings to mind some form of humanoid robot that speaks and acts like a human. However, AI is much more than robotics and machines. Professor John McCarthy of Stanford University, who coined the term “artificial intelligence” in the mid-1950s, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs”; he defined intelligence as “the computational part of the ability to achieve goals.”1 Artificial intelligence also is commonly defined as the development of computer systems able to perform tasks that normally require human intelligence.2 The English mathematician Alan Turing is considered one of the forefathers of AI research; he devised the first test to determine whether a computer program was intelligent (Box 1).3 Today, AI has established itself as an integral part of medicine and psychiatry.

Box 1

The Turing Test: How to tell if a computer program is intelligent

During World War II, the English mathematician Alan Turing helped the British government crack the Enigma machine, a coding device used by the Nazi army. He went on to pioneer many research projects in the field of artificial intelligence, including developing the Turing Test, which can determine if a computer program is intelligent.3 In this test, a human questioner uses a computer interface to pose questions to 2 respondents in different rooms; one of the respondents is a human and the other a computer program. If the questioner cannot tell the difference between the 2 respondents’ answers, then the computer program is deemed to be “artificially intelligent” because it can pass as a human.

The semantics of AI

Two subsets of AI are machine learning and deep learning.4,5 Machine learning is defined as a set of methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data.4 Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts.5

Machine learning can be supervised, semi-supervised, or unsupervised. The majority of practical machine learning uses supervised learning, where all data are labeled and an algorithm is used to learn the mapping function from the input to the output. In unsupervised learning, all data are unlabeled and the algorithm models the underlying structure of the data by itself. Semi-supervised learning is a mixture of both.6
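
As a minimal sketch of this distinction (in Python with scikit-learn, on synthetic data rather than anything clinical), a supervised classifier learns from labeled examples, while an unsupervised clustering algorithm must find structure on its own:

# Minimal sketch contrasting supervised and unsupervised learning;
# the data here are synthetic placeholders, not clinical data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Synthetic "patients": 200 samples, 5 numeric features, binary label
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: labels guide the mapping from input to output
clf = LogisticRegression().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels; the algorithm models structure on its own
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(clusters == k).sum() for k in (0, 1)])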

Many researchers also categorize AI into 2 types: general or “strong” AI, and narrow or “weak” AI. Strong AI is defined as computers that can think on a level at least equal to humans and are able to experience emotions and even consciousness.7 Weak AI includes adding “thinking-like” features to computers to make them more useful tools. Almost all AI technologies available today are considered to be weak AI.

AI in medicine

AI is being developed for a broad range of applications in medicine. These include informatics approaches, such as machine learning within health management systems like electronic health records, as well as systems that actively guide physicians in their treatment decisions.8

AI has been applied to assist administrative workflows that reach beyond automated non-patient care activities such as chart documentation and placing orders. One example is the Judy Reitz Capacity Command Center, which was designed and built with GE Healthcare Partners.9 It combines AI technology in the form of systems engineering and predictive analytics to better manage multiple workflows in different administrative settings, including patient safety, volume, flow, and access to care.9

In April 2018, Intel Corporation surveyed 200 health-care decision makers in the United States regarding their use of AI in practice and their attitudes toward it.10 Overall, 37% of respondents reported using AI and 54% expected to increase their use of AI in the next 5 years. Clinical use of AI (77%) was more common than administrative use (41%) or financial use (26%).10

Box 2 describes studies that evaluated the clinical use of AI in specialties other than psychiatry.11-19 A brief sketch following the box shows how the performance metrics these studies report are computed.

Box 2

Beyond psychiatry: Using artificial intelligence in other specialties

Ophthalmology. Multiple studies have evaluated using artificial intelligence (AI) to screen for diabetic retinopathy, which is one of the fastest growing causes of blindness worldwide.11 In a recent study, researchers used a deep learning algorithm to automatically detect diabetic retinopathy and diabetic macular edema by analyzing retinal images. The algorithm was trained on a dataset of 128,000 images that had each been evaluated by 3 to 7 ophthalmologists, and it showed high sensitivity and specificity for detecting referable diabetic retinopathy.11

Cardiology. One study looked at training a deep learning algorithm to predict cardiovascular risk based on analysis of retinal fundus images from 284,335 patients. In this study, the algorithm was able to predict a cardiovascular event in the next 5 years with 70% accuracy.12 The predictions drew on risk factors not previously thought to be quantifiable from retinal images, such as age, gender, smoking status, and systolic blood pressure, which were then used to predict major adverse cardiac events.12 Similarly, researchers in the United Kingdom assessed AI’s ability to predict a first cardiovascular event over 10 years by comparing a machine-learning algorithm with current guidelines from the American College of Cardiology, which consider age, smoking history, cholesterol levels, and diabetes history.13 Applied to data from approximately 82,000 patients known to have a future cardiac event, the algorithm was able to significantly improve the accuracy of cardiovascular risk prediction.13

Radiology. Researchers in the Department of Radiology at Thomas Jefferson University Hospital trained 2 convolutional neural networks (CNNs), AlexNet and GoogLeNet, on 150 chest X-ray images to diagnose the presence or absence of tuberculosis (TB).14 They found that the CNNs could accurately classify TB on chest X-ray, with an area under the curve of 0.99.14 The best-performing AI model was a combination of the 2 networks, which had an accuracy of 96%.14

Stroke. The ALADIN trial compared an AI algorithm vs 2 trained neuroradiologists for detecting large artery occlusions on 300 CT scans.15 The algorithm had a sensitivity of 97%, a specificity of 52%, and an accuracy of 78%.15

Surgery. AI in the form of surgical robots has been around for many decades. Probably the best-known surgical robot is the da Vinci Surgical System, which was FDA-approved in 2000 for laparoscopic procedures.16 The da Vinci Surgical System functions as an extension of the human surgeon, who controls the device from a nearby console. Researchers at McGill University developed an anesthesia robot called “McSleepy” that can analyze biological information and recognize malfunctions while constantly adapting its own behavior.17

Dermatology. One study compared the use of deep CNNs vs 21 board-certified dermatologists to identify skin cancer on 2,000 biopsy-proven clinical images.18 The CNNs were capable of classifying skin cancer with a level of competence comparable to that of the dermatologists.18

Pathology. One study compared the efficacy of a CNN to that of human pathologists in detecting breast cancer metastasis to lymph nodes on microscopy images.19 The CNN detected 92.4% of the tumors, whereas the pathologists had a sensitivity of 73.2%.19
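
The figures quoted throughout Box 2—sensitivity, specificity, accuracy, and area under the curve—all derive from comparing a model’s predictions with ground truth. As a minimal Python sketch, using invented predictions rather than data from any cited study, each metric can be computed from a confusion matrix:

# Minimal sketch of the performance metrics quoted in Box 2, computed
# with scikit-learn on invented predictions (not data from any study).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])                   # hypothetical ground truth
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.2, 0.1, 0.7])  # hypothetical model scores
y_pred = (y_score >= 0.5).astype(int)                         # label = score above threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))        # true-positive rate
print("specificity:", tn / (tn + fp))        # true-negative rate
print("accuracy:", (tp + tn) / len(y_true))
print("AUC:", roc_auc_score(y_true, y_score))  # threshold-independent ranking measure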

How can AI be used in psychiatry?

Artificially intelligent technologies have been used in psychiatry for several decades. One of the earliest examples is ELIZA, a computer program published by Professor Joseph Weizenbaum of the Massachusetts Institute of Technology in 1966.20 ELIZA consisted of a language analyzer and a script or a set of rules to improvise around a certain theme; the script DOCTOR was used to simulate a Rogerian psychotherapist.20
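
A minimal Python sketch conveys the flavor of such a rule-based design; the patterns and responses below are invented illustrations, far simpler than Weizenbaum’s actual DOCTOR script:

# Illustrative ELIZA-style responder: a few regular-expression rules
# improvise around the user's own words. Patterns are invented here.
import re

RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default Rogerian-style prompt

print(respond("I feel hopeless about work"))
# -> Why do you feel hopeless about work?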

The application of AI in psychiatry has come a long way since the pioneering work of Weizenbaum. A recent study examined AI’s ability to distinguish individuals with suicidal ideation from controls. Machine-learning algorithms were used to evaluate functional MRI scans of 34 participants (17 who had suicidal ideation and 17 controls) to identify neural signatures of concepts related to life and death.21 The algorithms were able to distinguish between these 2 groups with 91% accuracy, and to distinguish between individuals who had attempted suicide and those who had not with 94% accuracy.21

A study from the University of Cincinnati looked at using machine learning and natural language processing to distinguish genuine suicide notes from “fake” suicide notes that had been written by a healthy control group.22 Sixty-six notes were evaluated and categorized by 11 mental health professionals (psychiatrists, social workers, and emergency medicine physicians) and 31 PGY-3 residents. The accuracy of their results was compared with that of 9 machine-learning algorithms.22 The best machine-learning algorithm accurately classified the notes 78% of the time, compared with 63% of the time for the mental health professionals and 49% of the time for the residents.22
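
The cited study’s exact features and algorithms are not detailed here, but a text-classification pipeline of this general kind can be sketched in a few lines of Python; the TF-IDF features, logistic-regression classifier, and placeholder texts below are assumptions for illustration only:

# Hedged sketch of a genuine-vs-simulated text classifier; the notes
# here are placeholder strings, not real clinical material, and the
# pipeline is an assumed stand-in for the cited study's methods.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["placeholder genuine note text", "placeholder simulated note text"] * 10
labels = [1, 0] * 10  # 1 = genuine, 0 = simulated

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["another placeholder note"]))  # predicted class label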

Researchers at Vanderbilt University examined using machine learning to predict suicide risk.23 They developed algorithms to scan electronic health records of 5,167 adults, 3,250 of whom had attempted suicide. In a review of the patients’ data from 1 week to 2 years before the attempt, the algorithms looked for certain predictors of suicide attempts, including recurrent depression, psychotic disorder, and substance use. The algorithm was 80% accurate at predicting whether a patient would attempt suicide within the next 2 years, and 84% accurate at predicting an attempt within the next week.23
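
A hedged sketch of this general approach appears below: binary indicators for hypothetical EHR-derived predictors feed a gradient-boosted classifier. The variables, data, and model are invented stand-ins, not the Vanderbilt team’s actual features or algorithm:

# Sketch of risk prediction from coded EHR features; all data are
# synthetic and the predictors are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
# Columns: recurrent depression, psychotic disorder, substance use
X = rng.integers(0, 2, size=(n, 3))
y = (X.sum(axis=1) + rng.random(n) > 2).astype(int)  # synthetic outcome

model = GradientBoostingClassifier().fit(X, y)
print("predicted risk:", model.predict_proba([[1, 0, 1]])[0, 1])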

In a prospective study, researchers at Cincinnati Children’s Hospital used a machine-learning algorithm to evaluate 379 patients who were categorized into 3 groups: suicidal, mentally ill but not suicidal, or controls.24 All participants completed a standardized behavioral rating scale and participated in a semi-structured interview. Based on the participants’ linguistic and acoustic characteristics, the algorithm was able to classify them into the 3 groups with 85% accuracy.24

Many studies have looked at using language analysis to predict the risk of psychosis in at-risk individuals. In one study, researchers evaluated individuals known to be at high risk for developing psychosis, some of whom eventually did develop psychosis.25 Participants were asked to retell a story and to answer questions about that story. Researchers fed the transcripts of these interviews into a language analysis program that examined semantic coherence, syntactic complexity, and other factors. The algorithm was able to predict the future occurrence of psychosis with 82% accuracy; participants who converted to psychosis had decreased semantic coherence and reduced syntactic complexity.25
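
One such feature, semantic coherence, is commonly operationalized as the average similarity between adjacent sentences. In the sketch below, TF-IDF vectors stand in for the latent semantic analysis embeddings used in this line of research, and the transcript is an invented placeholder:

# Sketch of semantic coherence as mean cosine similarity between
# adjacent sentences; TF-IDF is an assumed stand-in for the latent
# semantic analysis used in the cited studies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The girl walked to the school.",
    "She met her friends at the gate.",
    "They talked about the weekend ahead.",
]
vectors = TfidfVectorizer().fit_transform(sentences)
adjacent = [
    cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
    for i in range(len(sentences) - 1)
]
print("semantic coherence:", sum(adjacent) / len(adjacent))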

A similar study looked at 34 at-risk youth in an attempt to predict who would develop psychosis based on speech pattern analysis.26 The participants underwent baseline interviews and were assessed quarterly for 2.5 years. The algorithm was able to predict who would develop psychosis with 100% accuracy.26

Challenges and limitations

Research applying machine learning to various fields of psychiatry continues to grow. With this increased interest have come reports of bias and human influence at various stages of machine learning. It is therefore necessary to be aware of these challenges and to engage in practices that minimize their effects, such as providing more detail on data collection and processing, and continually evaluating machine learning models for their relevance and utility to the research question at hand.27

As is the case with most innovative, fast-growing technologies, AI has its fair share of criticisms and concerns. Critics have focused on the potential threat of privacy issues, medical errors, and ethical concerns. Researchers at the Stanford Center for Biomedical Ethics emphasize the importance of being aware of the different types of bias that humans and algorithm designs can introduce into health data.28

The Nuffield Council on Bioethics also emphasizes the importance of identifying the ethical issues raised by using AI in health care. Concerns include erroneous decisions made by AI and determining who is responsible for such errors, difficulty in validating the outputs of AI systems, and the potential for AI to be used for malicious purposes.29

For clinicians who are considering implementing AI into their practice, it is vital to recognize where this technology belongs in a workflow and in the decision-making process. Jeffery Axt, a researcher on the clinical applications of AI, encourages clinicians to view using AI as a consulting tool to eliminate the element of fear associated with not having control over diagnostics and management.30

What’s on the horizon

Research into using AI in psychiatry has drawn the attention of large companies. IBM is building an automated speech analysis application that uses machine learning to provide a real-time overview of a patient’s mental health.31 Social media platforms are also starting to incorporate AI technologies to scan posts for language and image patterns suggestive of suicidal thoughts or behavior.32

“Chat bots”—AI that can conduct a conversation in natural language—are becoming popular as well. Woebot is a cognitive-behavioral therapy–based chat bot designed by a Stanford psychologist that can be accessed through Facebook Messenger. In a 2-week study, 70 young adults (age 18 to 28) with depression were randomly assigned to use Woebot or to read mental health e-books.33 Participants who used Woebot experienced a significant reduction in depressive symptoms as measured by change in score on the Patient Health Questionnaire-9, while those assigned to the reading group did not.33

Other researchers have focused on identifying patterns of inattention, hyperactivity, and impulsivity in children using AI technologies such as computer vision, machine learning, and data mining. For example, researchers at the University of Texas at Arlington and Yale University are analyzing data collected while children perform certain tasks involving attention, decision making, and emotion management.34 There have also been several advances in using AI to detect abnormalities in a child’s gaze pattern that might suggest autism.35

A project at the University of Southern California called SimSensei/Multisense uses software to track real-time behavior descriptors such as facial expressions, body postures, and acoustic features that can help identify psychological distress.36 This software is combined with a virtual human platform that communicates with the patient as a therapist would.36

The future of AI in health care holds great possibilities. Putting aside irrational fears of one day being replaced by computers, AI may prove highly transformative, leading to vast improvements in patient care.

Bottom Line

Artificial intelligence (AI)—the development of computer systems able to perform tasks that normally require human intelligence—is being developed for use in a wide range of medical specialties. Potential uses in psychiatry include predicting a patient’s risk for suicide or psychosis. Privacy concerns, ethical issues, and the potential for medical errors are among the challenges of AI use in psychiatry.

Related Resources

  • Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry. 2019. doi:10.1038/s41380-019-0365-9.
  • Kretzschmar K, Tyroll H, Pavarini G, et al; NeurOx Young People’s Advisory Group. Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed Inform Insights. 2019;11:1178222619829083. doi:10.1177/1178222619829083.
References

1. McCarthy J. What is AI? Basic questions. http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html. Accessed July 19, 2019.
2. Oxford Reference. Artificial intelligence. http://www.oxfordreference.com/view/10.1093/oi/authority.20110803095426960. Accessed July 19, 2019.
3. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.
4. Robert C. Book review: machine learning, a probabilistic perspective. CHANCE. 2014;27(2):62-63.
5. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge, MA: The MIT Press; 2016.
6. Brownlee J. Supervised and unsupervised machine learning algorithms. https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/. Published March 16, 2016. Accessed July 19, 2019.
7. Russell S, Norvig P. Artificial intelligence: a modern approach. Upper Saddle River, NJ: Pearson; 1995.
8. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69S:S36-S40.
9. The Johns Hopkins Hospital launches capacity command center to enhance hospital operations. Johns Hopkins Medicine. https://www.hopkinsmedicine.org/news/media/releases/the_johns_hopkins_hospital_launches_capacity_command_center_to_enhance_hospital_operations. Published October 26, 2016. Accessed July 19, 2019.
10. U.S. healthcare leaders expect widespread adoption of artificial intelligence by 2023. Intel. https://newsroom.intel.com/news-releases/u-s-healthcare-leaders-expect-widespread-adoption-artificial-intelligence-2023/#gs.7j7yjk. Published July 2, 2018. Accessed July 19, 2019.
11. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410.
12. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering. 2018;2:158-164.
13. Weng SF, Reps J, Kai J, et al. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One. 2017;12(4):e0174944. doi:10.1371/journal.pone.0174944.
14. Lakhani P, Sundaram B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582.
15. Bluemke DA. Radiology in 2018: Are you working with AI or being replaced by AI? Radiology. 2018;287(2):365-366.
16. Kakar PN, Das J, Roy PM, et al. Robotic invasion of operation theatre and associated anaesthetic issues: A review. Indian J Anaesth. 2011;55(1):18-25.
17. World first: researchers develop completely automated anesthesia system. McGill University. https://www.mcgill.ca/newsroom/channels/news/world-first-researchers-develop-completely-automated-anesthesia-system-100263. Published May 1, 2008. Accessed July 19, 2019.
18. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118.
19. Liu Y, Gadepalli K, Norouzi M, et al. Detecting cancer metastases on gigapixel pathology images. https://arxiv.org/abs/1703.02442. Published March 8, 2017. Accessed July 19, 2019.
20. Bassett C. The computational therapeutic: exploring Weizenbaum’s ELIZA as a history of the present. AI & Soc. 2018. https://doi.org/10.1007/s00146-018-0825-9.
21. Just MA, Pan L, Cherkassky VL, et al. Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nat Hum Behav. 2017;1:911-919.
22. Pestian J, Nasrallah H, Matykiewicz P, et al. Suicide note classification using natural language processing: a content analysis. Biomed Inform Insights. 2010;2010(3):19-28.
23. Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clinical Psychological Science. 2017;5(3):457-469.
24. Pestian JP, Sorter M, Connolly B, et al; STM Research Group. A machine learning approach to identifying the thought markers of suicidal subjects: a prospective multicenter trial. Suicide Life Threat Behav. 2017;47(1):112-121.
25. Corcoran CM, Carrillo F, Fernández-Slezak D, et al. Prediction of psychosis across protocols and risk cohorts using automated language analysis. World Psychiatry. 2018;17(1):67-75.
26. Bedi G, Carrillo F, Cecchi GA, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. NPJ Schizophr. 2015;1:15030. doi:10.1038/npjschz.2015.30.
27. Tandon N, Tandon R. Will machine learning enable us to finally cut the Gordian Knot of schizophrenia. Schizophr Bull. 2018;44(5):939-941.
28. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. 2018;378(11):981-983.
29. Nuffield Council on Bioethics. The big ethical questions for artificial intelligence (AI) in healthcare. http://nuffieldbioethics.org/news/2018/big-ethical-questions-artificial-intelligence-ai-healthcare. Published May 15, 2018. Accessed July 19, 2019.
30. Axt J. Artificial neural networks: a systematic review of their efficacy as an innovative resource for health care practice managers. https://www.researchgate.net/publication/322101587_Running_head_ANN_EFFICACY_IN_HEALTHCARE-A_SYSTEMATIC_REVIEW_1_Artificial_Neural_Networks_A_systematic_review_of_their_efficacy_as_an_innovative_resource_for_healthcare_practice_managers. Published October 2017. Accessed July 19, 2019.
31. Cecchi G. IBM 5 in 5: with AI, our words will be a window into our mental health. IBM Research Blog. https://www.ibm.com/blogs/research/2017/1/ibm-5-in-5-our-words-will-be-the-windows-to-our-mental-health/. Published January 5, 2017. Accessed July 19, 2019.
32. Constine J. Facebook rolls out AI to detect suicidal posts before they’re reported. TechCrunch. http://tcrn.ch/2hUBi3B. Published November 27, 2017. Accessed July 19, 2019.
33. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017;4(2):e19. doi:10.2196/mental.7785.
34. UTA researchers use artificial intelligence to assess, enhance cognitive abilities in school-aged children. University of Texas at Arlington. https://www.uta.edu/news/releases/2016/10/makedon-children-learning-difficulties.php. Published October 13, 2016. Accessed July 19, 2019.
35. Nealon C. App for early autism detection launched on World Autism Awareness Day, April 2. University at Buffalo. http://www.buffalo.edu/news/releases/2018/04/001.html. Published April 2, 2018. Accessed July 19, 2019.
36. SimSensei. University of Southern California Institute for Creative Technologies. http://ict.usc.edu/prototypes/simsensei/. Accessed July 19, 2019.

Author and Disclosure Information

Hripsime Kalanderian, MD
Psychiatrist
The Vancouver Clinic
Vancouver, Washington

Henry A. Nasrallah, MD
Professor of Psychiatry, Neurology, and Neuroscience
Medical Director: Neuropsychiatry
Director, Schizophrenia and Neuropsychiatry Programs
University of Cincinnati College of Medicine
Cincinnati, Ohio
Professor Emeritus, Saint Louis University
St. Louis, Missouri

Disclosures
The authors report no financial relationships with any companies whose products are mentioned in this article, or with manufacturers of competing products.

Issue
Current Psychiatry - 18(8)
Publications
Topics
Page Number
33-38
Sections
Author and Disclosure Information

Hripsime Kalanderian, MD
Psychiatrist
The Vancouver Clinic
Vancouver, Washington

Henry A. Nasrallah, MD
Professor of Psychiatry, Neurology, and Neuroscience
Medical Director: Neuropsychiatry
Director, Schizophrenia and Neuropsychiatry Programs
University of Cincinnati College of Medicine
Cincinnati, Ohio
Professor Emeritus, Saint Louis University
St. Louis. Missouri

Disclosures
The authors report no financial relationships with any companies whose products are mentioned in this article, or with manufacturers of competing products

Author and Disclosure Information

Hripsime Kalanderian, MD
Psychiatrist
The Vancouver Clinic
Vancouver, Washington

Henry A. Nasrallah, MD
Professor of Psychiatry, Neurology, and Neuroscience
Medical Director: Neuropsychiatry
Director, Schizophrenia and Neuropsychiatry Programs
University of Cincinnati College of Medicine
Cincinnati, Ohio
Professor Emeritus, Saint Louis University
St. Louis. Missouri

Disclosures
The authors report no financial relationships with any companies whose products are mentioned in this article, or with manufacturers of competing products

Article PDF
Article PDF

For many people, artificial intelligence (AI) brings to mind some form of humanoid robot that speaks and acts like a human. However, AI is much more than merely robotics and machines. Professor John McCarthy of Stanford University, who first coined the term “artificial intelligence” in the early 1950s, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs”; he defined intelligence as “the computational part of the ability to achieve goals.”1 Artificial intelligence also is commonly defined as the development of computer systems able to perform tasks that normally require human intelligence.2 English Mathematician Alan Turing is considered one of the forefathers of AI research, and devised the first test to determine if a computer program was intelligent (Box 13). Today, AI has established itself as an integral part of medicine and psychiatry.

Box 1

The Turing Test: How to tell if a computer program is intelligent

During World War II, the English Mathematician Alan Turing helped the British government crack the Enigma machine, a coding device used by the Nazi army. He went on to pioneer many research projects in the field of artificial intelligence, including developing the Turing Test, which can determine if a computer program is intelligent.3 In this test, a human questioner uses a computer interface to pose questions to 2 respondents in different rooms; one of the respondents is a human and the other a computer program. If the questioner cannot tell the difference between the 2 respondents’ answers, then the computer program is deemed to be “artificially intelligent” because it can pass

The semantics of AI

Two subsets of AI are machine learning and deep learning.4,5 Machine learning is defined as a set of methods that can automatically detect patterns in data and then use the uncovered pattern to predict future data.4 Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts.5

Machine learning can be supervised, semi-supervised, or unsupervised. The majority of practical machine learning uses supervised learning, where all data are labeled and an algorithm is used to learn the mapping function from the input to the output. In unsupervised learning, all data are unlabeled and the algorithm models the underlying structure of the data by itself. Semi-supervised learning is a mixture of both.6

Many researchers also categorize AI into 2 types: general or “strong” AI, and narrow or “weak” AI. Strong AI is defined as computers that can think on a level at least equal to humans and are able to experience emotions and even consciousness.7 Weak AI includes adding “thinking-like” features to computers to make them more useful tools. Almost all AI technologies available today are considered to be weak AI.

AI in medicine

AI is being developed for a broad range of applications in medicine. This includes informatics approaches, including learning in health management systems such as electronic health records, and actively guiding physicians in their treatment decisions.8

AI has been applied to assist administrative workflows that reach beyond automated non-patient care activities such as chart documentation and placing orders. One example is the Judy Reitz Capacity Command Center, which was designed and built with GE Healthcare Partners.9 It combines AI technology in the form of systems engineering and predictive analytics to better manage multiple workflows in different administrative settings, including patient safety, volume, flow, and access to care.9

In April 2018, Intel Corporation surveyed 200 health-care decision makers in the United States regarding their use of AI in practice and their attitudes toward it.10 Overall, 37% of respondents reported using AI and 54% expected to increase their use of AI in the next 5 years. Clinical use of AI (77%) was more common than administrative use (41%) or financial use (26 %).10

Continue to: Box 2

 

 

Box 211-19 describes studies that evaluated the clinical use of AI in specialties other than psychiatry.

Box 2

Beyond psychiatry: Using artificial intelligence in other specialties

Ophthalmology. Multiple studies have evaluated using artificial intelligence (AI) to screen for diabetic retinopathy, which is one of the fastest growing causes of blindness worldwide.11 In a recent study, researchers used a deep learning algorithm to automatically detect diabetic retinopathy and diabetic macular edema by analyzing retinal images. It was trained over a dataset of 128,000 images that were evaluated by 3 to 7 ophthalmologists. The algorithm showed high sensitivity and specificity for detecting referable diabetic retinopathy.11

Cardiology. One study looked at training a deep learning algorithm to predict cardiovascular risk based on analysis of retinal fundus images from 284,335 patients. In this study, the algorithm was able to predict a cardiovascular event in the next 5 years with 70% accuracy.12 The results were based on risk factors not previously thought to be quantifiable in retinal images, such as age, gender, smoking status, systolic blood pressure, and major adverse cardiac events.12 Similarly, researchers in the United Kingdom wanted to assess AI’s ability to predict a first cardiovascular event over 10 years by comparing a machine-learning algorithm to current guidelines from the American College of Cardiology, which include age, smoking history, cholesterol levels, and diabetes history.13 The algorithm was applied to data from approximately 82,000 patients known to have a future cardiac event. It was able to significantly improve the accuracy of cardiovascular risk prediction.13

Radiology. Researchers in the Department of Radiology at Thomas Jefferson University Hospital trained 2 convolutional neural networks (CNNs), AlexNet and GoogleNet, on 150 chest X-ray images to diagnose the presence or absence of tuberculosis (TB).14 They found that the CNNs could accurately classify TB on chest X-ray, with an area under the curve of 0.99.14 The best-performing AI model was a combination of the 2 networks, which had an accuracy of 96%.14

Stroke. The ALADIN trial compared an AI algorithm vs 2 trained neuroradiologists for detecting large artery occlusions on 300 CT scans.15 The algorithm had a sensitivity of 97%, a specificity of 52%, and an accuracy of 78%.15

Surgery. AI in the form of surgical robots has been around for many decades. Probably the best-known surgical robot is the da Vinci Surgical System, which was FDA-approved in 2000 for laparoscopic procedures.16 The da Vinci Surgical System functions as an extension of the human surgeon, who controls the device from a nearby console. Researchers at McGill University developed an anesthesia robot called “McSleepy” that can analyze biological information and recognize malfunctions while constantly adapting its own behavior.17

Dermatology. One study compared the use of deep CNNs vs 21 board-certified dermatologists to identify skin cancer on 2,000 biopsy-proven clinical images.18 The CNNs were capable of classifying skin cancer with a level of competence comparable to that of the dermatologists.18

Pathology. One study compared the efficacy of a CNN to that of human pathologists in detecting breast cancer metastasis to lymph nodes on microscopy images.19 The CNN detected 92.4% of the tumors, whereas the pathologists had a sensitivity of 73.2%.19

How can AI be used in psychiatry?

Artificially intelligent technologies have been used in psychiatry for several decades. One of the earliest examples is ELIZA, a computer program published by Professor Joseph Weizenbaum of the Massachusetts Institute of Technology in 1966.20 ELIZA consisted of a language analyzer and a script or a set of rules to improvise around a certain theme; the script DOCTOR was used to simulate a Rogerian psychotherapist.20

The application of AI in psychiatry has come a long way since the pioneering work of Weizenbaum. A recent study examined AI’s ability to distinguish between an individual who had suicidal ideation vs a control group. Machine-learning algorithms were used to evaluate functional MRI scans of 34 participants (17 who had suicidal ideation and 17 controls) to identify certain neural signatures of concepts related to life and death.21 The machine-learning algorithms were able to distinguish between these 2 groups with 91% accuracy. They also were able to distinguish between individuals who attempted suicide and those who did not with 94% accuracy.21

A study from the University of Cincinnati looked at using machine learning and natural language processing to distinguish genuine suicide notes from “fake” suicide notes that had been written by a healthy control group.22 Sixty-six notes were evaluated and categorized by 11 mental health professionals (psychiatrists, social workers, and emergency medicine physicians) and 31 PGY-3 residents. The accuracy of their results was compared with that of 9 machine-learning algorithms.22 The best machine-learning algorithm accurately classified the notes 78% of the time, compared with 63% of the time for the mental health professionals and 49% of the time for the residents.22

Researchers at Vanderbilt University examined using machine learning to predict suicide risk.23 They developed algorithms to scan electronic health records of 5,167 adults, 3,250 of whom had attempted suicide. In a review of the patients’ data from 1 week to 2 years before the attempt, the algorithms looked for certain predictors of suicide attempts, including recurrent depression, psychotic disorder, and substance use. The algorithm was 80% accurate at predicting whether a patient would attempt suicide within the next 2 years, and 84% accurate at predicting an attempt within the next week.23

Continue to: In a prospective study...

 

 

In a prospective study, researchers at Cincinnati Children’s Hospital used a machine-learning algorithm to evaluate 379 patients who were categorized into 3 groups: suicidal, mentally ill but not suicidal, or controls.24 All participants completed a standardized behavioral rating scale and participated in a semi-structured interview. Based on the participants’ linguistic and acoustic characteristics, the algorithm was able to classify them into the 3 groups with 85% accuracy.24

Many studies have looked at using language analysis to predicting the risk of psychosis in at-risk individuals. In one study, researchers evaluated individuals known to be at high risk for developing psychosis, some of whom eventually did develop psychosis.25 Participants were asked to retell a story and to answer questions about that story. Researchers fed the transcripts of these interviews into a language analysis program that looked at semantic coherence, syntactic complexity, and other factors. The algorithm was able to predict the future occurrence of psychosis with 82% accuracy. Participants who converted to psychosis had decreased semantic coherence and reduced syntactic complexity.25

A similar study looked at 34 at-risk youth in an attempt to predict who would develop psychosis based on speech pattern analysis.26 The participants underwent baseline interviews and were assessed quarterly for 2.5 years. The algorithm was able to predict who would develop psychosis with 100% accuracy.26

 

Challenges and limitations

The amount of research about applying machine learning to various fields of psychiatry continues to grow. With this increased interest, there have been reports of bias and human influence in the various stages of machine learning. Therefore, being aware of these challenges and engaging in practices to minimize their effects are necessary. Such practices include providing more details on data collection and processing, and constantly evaluating machine learning models for their relevance and utility to the research question proposed.27

As is the case with most innovative, fast-growing technologies, AI has its fair share of criticisms and concerns. Critics have focused on the potential threat of privacy issues, medical errors, and ethical concerns. Researchers at the Stanford Center for Biomedical Ethics emphasize the importance of being aware of the different types of bias that humans and algorithm designs can introduce into health data.28

Continue to: The Nuffield Council on Bioethics...

 

 

The Nuffield Council on Bioethics also emphasizes the importance of identifying the ethical issues raised by using AI in health care. Concerns include erroneous decisions made by AI and determining who is responsible for such errors, difficulty in validating the outputs of AI systems, and the potential for AI to be used for malicious purposes.29

For clinicians who are considering implementing AI into their practice, it is vital to recognize where this technology belongs in a workflow and in the decision-making process. Jeffery Axt, a researcher on the clinical applications of AI, encourages clinicians to view using AI as a consulting tool to eliminate the element of fear associated with not having control over diagnostics and management.30

What’s on the horizon

Research into using AI in psychiatry has drawn the attention of large companies. IBM is building an automated speech analysis application that uses machine learning to provide a real-time overview of a patient’s mental health.31 Social media platforms are also starting to incorporate AI technologies to scan posts for language and image patterns suggestive of suicidal thoughts or behavior.32

“Chat bots”—AI that can conduct a conversation in natural language—are becoming popular as well. Woebot is a cognitive-behavioral therapy–based chat bot designed by a Stanford psychologist that can be accessed through Facebook Messenger. In a 2-week study, 70 young adults (age 18 to 28) with depression were randomly assigned to use Woebot or to read mental health e-books.33 Participants who used Woebot experienced a significant reduction in depressive symptoms as measured by change in score on the Patient Health Questionnaire-9, while those assigned to the reading group did not.33

Other researchers have focused on identifying patterns of inattention, hyperactivity, and impulsivity in children using AI technologies such as computer vision, machine learning, and data mining. For example, researchers at the University of Texas at Arlington and Yale University are analyzing data from watching children perform certain tasks involving attention, decision making, and emotion management.34 There have been several advances in using AI to note abnormalities in a child’s gaze pattern that might suggest autism.35

Continue to: A project at...

 

 

A project at the University of Southern California called SimSensei/Multisense uses software to track real-time behavior descriptors such as facial expressions, body postures, and acoustic features that can help identify psychological distress.36 This software is combined with a virtual human platform that communicates with the patient as a therapist would.36

The future of AI in health care appears to have great possibilities. Putting aside irrational fears of being replaced by computers one day, AI may someday be highly transformative, leading to vast improvements in patient care.

Bottom Line

Artificial intelligence (AI) —the development of computer systems able to perform tasks that normally require human intelligence—is being developed for use in a wide range of medical specialties. Potential uses in psychiatry include predicting a patient’s risk for suicide or psychosis. Privacy concerns, ethical issues, and the potential for medical errors are among the challenges of AI use in psychiatry.

Related Resources

  • Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry. 2019. doi:10.1038/s41380-019-0365-9.
  • Kretzschmar K, Tyroll H, Pavarini G, et al; NeurOx Young People’s Advisory Group. Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed Inform Insights. 2019;11:1178222619829083. doi: 10.1177/1178222619829083.

For many people, artificial intelligence (AI) brings to mind some form of humanoid robot that speaks and acts like a human. However, AI is much more than merely robotics and machines. Professor John McCarthy of Stanford University, who first coined the term “artificial intelligence” in the early 1950s, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs”; he defined intelligence as “the computational part of the ability to achieve goals.”1 Artificial intelligence also is commonly defined as the development of computer systems able to perform tasks that normally require human intelligence.2 English Mathematician Alan Turing is considered one of the forefathers of AI research, and devised the first test to determine if a computer program was intelligent (Box 13). Today, AI has established itself as an integral part of medicine and psychiatry.

Box 1

The Turing Test: How to tell if a computer program is intelligent

During World War II, the English Mathematician Alan Turing helped the British government crack the Enigma machine, a coding device used by the Nazi army. He went on to pioneer many research projects in the field of artificial intelligence, including developing the Turing Test, which can determine if a computer program is intelligent.3 In this test, a human questioner uses a computer interface to pose questions to 2 respondents in different rooms; one of the respondents is a human and the other a computer program. If the questioner cannot tell the difference between the 2 respondents’ answers, then the computer program is deemed to be “artificially intelligent” because it can pass

The semantics of AI

Two subsets of AI are machine learning and deep learning.4,5 Machine learning is defined as a set of methods that can automatically detect patterns in data and then use the uncovered pattern to predict future data.4 Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts.5

Machine learning can be supervised, semi-supervised, or unsupervised. The majority of practical machine learning uses supervised learning, where all data are labeled and an algorithm is used to learn the mapping function from the input to the output. In unsupervised learning, all data are unlabeled and the algorithm models the underlying structure of the data by itself. Semi-supervised learning is a mixture of both.6

Many researchers also categorize AI into 2 types: general or “strong” AI, and narrow or “weak” AI. Strong AI is defined as computers that can think on a level at least equal to humans and are able to experience emotions and even consciousness.7 Weak AI includes adding “thinking-like” features to computers to make them more useful tools. Almost all AI technologies available today are considered to be weak AI.

AI in medicine

AI is being developed for a broad range of applications in medicine. This includes informatics approaches, including learning in health management systems such as electronic health records, and actively guiding physicians in their treatment decisions.8

AI has been applied to assist administrative workflows that reach beyond automated non-patient care activities such as chart documentation and placing orders. One example is the Judy Reitz Capacity Command Center, which was designed and built with GE Healthcare Partners.9 It combines AI technology in the form of systems engineering and predictive analytics to better manage multiple workflows in different administrative settings, including patient safety, volume, flow, and access to care.9

In April 2018, Intel Corporation surveyed 200 health-care decision makers in the United States regarding their use of AI in practice and their attitudes toward it.10 Overall, 37% of respondents reported using AI and 54% expected to increase their use of AI in the next 5 years. Clinical use of AI (77%) was more common than administrative use (41%) or financial use (26 %).10

Continue to: Box 2

 

 

Box 211-19 describes studies that evaluated the clinical use of AI in specialties other than psychiatry.

Box 2

Beyond psychiatry: Using artificial intelligence in other specialties

Ophthalmology. Multiple studies have evaluated using artificial intelligence (AI) to screen for diabetic retinopathy, which is one of the fastest growing causes of blindness worldwide.11 In a recent study, researchers used a deep learning algorithm to automatically detect diabetic retinopathy and diabetic macular edema by analyzing retinal images. It was trained over a dataset of 128,000 images that were evaluated by 3 to 7 ophthalmologists. The algorithm showed high sensitivity and specificity for detecting referable diabetic retinopathy.11

Cardiology. One study looked at training a deep learning algorithm to predict cardiovascular risk based on analysis of retinal fundus images from 284,335 patients. In this study, the algorithm was able to predict a cardiovascular event in the next 5 years with 70% accuracy.12 The results were based on risk factors not previously thought to be quantifiable in retinal images, such as age, gender, smoking status, systolic blood pressure, and major adverse cardiac events.12 Similarly, researchers in the United Kingdom wanted to assess AI’s ability to predict a first cardiovascular event over 10 years by comparing a machine-learning algorithm to current guidelines from the American College of Cardiology, which include age, smoking history, cholesterol levels, and diabetes history.13 The algorithm was applied to data from approximately 82,000 patients known to have a future cardiac event. It was able to significantly improve the accuracy of cardiovascular risk prediction.13

Radiology. Researchers in the Department of Radiology at Thomas Jefferson University Hospital trained 2 convolutional neural networks (CNNs), AlexNet and GoogleNet, on 150 chest X-ray images to diagnose the presence or absence of tuberculosis (TB).14 They found that the CNNs could accurately classify TB on chest X-ray, with an area under the curve of 0.99.14 The best-performing AI model was a combination of the 2 networks, which had an accuracy of 96%.14

Stroke. The ALADIN trial compared an AI algorithm vs 2 trained neuroradiologists for detecting large artery occlusions on 300 CT scans.15 The algorithm had a sensitivity of 97%, a specificity of 52%, and an accuracy of 78%.15

Surgery. AI in the form of surgical robots has been around for many decades. Probably the best-known surgical robot is the da Vinci Surgical System, which was FDA-approved in 2000 for laparoscopic procedures.16 The da Vinci Surgical System functions as an extension of the human surgeon, who controls the device from a nearby console. Researchers at McGill University developed an anesthesia robot called “McSleepy” that can analyze biological information and recognize malfunctions while constantly adapting its own behavior.17

Dermatology. One study compared the use of deep CNNs vs 21 board-certified dermatologists to identify skin cancer on 2,000 biopsy-proven clinical images.18 The CNNs were capable of classifying skin cancer with a level of competence comparable to that of the dermatologists.18

Pathology. One study compared the efficacy of a CNN to that of human pathologists in detecting breast cancer metastasis to lymph nodes on microscopy images.19 The CNN detected 92.4% of the tumors, whereas the pathologists had a sensitivity of 73.2%.19

How can AI be used in psychiatry?

Artificially intelligent technologies have been used in psychiatry for several decades. One of the earliest examples is ELIZA, a computer program published by Professor Joseph Weizenbaum of the Massachusetts Institute of Technology in 1966.20 ELIZA consisted of a language analyzer and a script or a set of rules to improvise around a certain theme; the script DOCTOR was used to simulate a Rogerian psychotherapist.20

The application of AI in psychiatry has come a long way since the pioneering work of Weizenbaum. A recent study examined AI’s ability to distinguish between an individual who had suicidal ideation vs a control group. Machine-learning algorithms were used to evaluate functional MRI scans of 34 participants (17 who had suicidal ideation and 17 controls) to identify certain neural signatures of concepts related to life and death.21 The machine-learning algorithms were able to distinguish between these 2 groups with 91% accuracy. They also were able to distinguish between individuals who attempted suicide and those who did not with 94% accuracy.21

A study from the University of Cincinnati looked at using machine learning and natural language processing to distinguish genuine suicide notes from “fake” suicide notes that had been written by a healthy control group.22 Sixty-six notes were evaluated and categorized by 11 mental health professionals (psychiatrists, social workers, and emergency medicine physicians) and 31 PGY-3 residents. The accuracy of their results was compared with that of 9 machine-learning algorithms.22 The best machine-learning algorithm accurately classified the notes 78% of the time, compared with 63% of the time for the mental health professionals and 49% of the time for the residents.22

Researchers at Vanderbilt University examined using machine learning to predict suicide risk.23 They developed algorithms to scan electronic health records of 5,167 adults, 3,250 of whom had attempted suicide. In a review of the patients’ data from 1 week to 2 years before the attempt, the algorithms looked for certain predictors of suicide attempts, including recurrent depression, psychotic disorder, and substance use. The algorithm was 80% accurate at predicting whether a patient would attempt suicide within the next 2 years, and 84% accurate at predicting an attempt within the next week.23

Continue to: In a prospective study...

 

 

In a prospective study, researchers at Cincinnati Children’s Hospital used a machine-learning algorithm to evaluate 379 patients who were categorized into 3 groups: suicidal, mentally ill but not suicidal, or controls.24 All participants completed a standardized behavioral rating scale and participated in a semi-structured interview. Based on the participants’ linguistic and acoustic characteristics, the algorithm was able to classify them into the 3 groups with 85% accuracy.24

Many studies have looked at using language analysis to predicting the risk of psychosis in at-risk individuals. In one study, researchers evaluated individuals known to be at high risk for developing psychosis, some of whom eventually did develop psychosis.25 Participants were asked to retell a story and to answer questions about that story. Researchers fed the transcripts of these interviews into a language analysis program that looked at semantic coherence, syntactic complexity, and other factors. The algorithm was able to predict the future occurrence of psychosis with 82% accuracy. Participants who converted to psychosis had decreased semantic coherence and reduced syntactic complexity.25

A similar study looked at 34 at-risk youth in an attempt to predict who would develop psychosis based on speech pattern analysis.26 The participants underwent baseline interviews and were assessed quarterly for 2.5 years. The algorithm was able to predict who would develop psychosis with 100% accuracy.26

 

Challenges and limitations

The amount of research about applying machine learning to various fields of psychiatry continues to grow. With this increased interest, there have been reports of bias and human influence in the various stages of machine learning. Therefore, being aware of these challenges and engaging in practices to minimize their effects are necessary. Such practices include providing more details on data collection and processing, and constantly evaluating machine learning models for their relevance and utility to the research question proposed.27

As is the case with most innovative, fast-growing technologies, AI has its fair share of criticisms and concerns. Critics have focused on the potential threat of privacy issues, medical errors, and ethical concerns. Researchers at the Stanford Center for Biomedical Ethics emphasize the importance of being aware of the different types of bias that humans and algorithm designs can introduce into health data.28

Continue to: The Nuffield Council on Bioethics...

 

 

The Nuffield Council on Bioethics also emphasizes the importance of identifying the ethical issues raised by using AI in health care. Concerns include erroneous decisions made by AI and determining who is responsible for such errors, difficulty in validating the outputs of AI systems, and the potential for AI to be used for malicious purposes.29

For clinicians who are considering implementing AI into their practice, it is vital to recognize where this technology belongs in a workflow and in the decision-making process. Jeffery Axt, a researcher on the clinical applications of AI, encourages clinicians to view using AI as a consulting tool to eliminate the element of fear associated with not having control over diagnostics and management.30

What’s on the horizon

Research into using AI in psychiatry has drawn the attention of large companies. IBM is building an automated speech analysis application that uses machine learning to provide a real-time overview of a patient’s mental health.31 Social media platforms are also starting to incorporate AI technologies to scan posts for language and image patterns suggestive of suicidal thoughts or behavior.32

“Chat bots”—AI that can conduct a conversation in natural language—are becoming popular as well. Woebot is a cognitive-behavioral therapy–based chat bot designed by a Stanford psychologist that can be accessed through Facebook Messenger. In a 2-week study, 70 young adults (age 18 to 28) with depression were randomly assigned to use Woebot or to read mental health e-books.33 Participants who used Woebot experienced a significant reduction in depressive symptoms as measured by change in score on the Patient Health Questionnaire-9, while those assigned to the reading group did not.33

Other researchers have focused on identifying patterns of inattention, hyperactivity, and impulsivity in children using AI technologies such as computer vision, machine learning, and data mining. For example, researchers at the University of Texas at Arlington and Yale University are analyzing data from watching children perform certain tasks involving attention, decision making, and emotion management.34 There have been several advances in using AI to note abnormalities in a child’s gaze pattern that might suggest autism.35

Continue to: A project at...

 

 

A project at the University of Southern California called SimSensei/Multisense uses software to track real-time behavior descriptors such as facial expressions, body postures, and acoustic features that can help identify psychological distress.36 This software is combined with a virtual human platform that communicates with the patient as a therapist would.36

The future of AI in health care appears to have great possibilities. Putting aside irrational fears of being replaced by computers one day, AI may someday be highly transformative, leading to vast improvements in patient care.

Bottom Line

Artificial intelligence (AI)—the development of computer systems able to perform tasks that normally require human intelligence—is being developed for use in a wide range of medical specialties. Potential uses in psychiatry include predicting a patient’s risk for suicide or psychosis. Privacy concerns, ethical issues, and the potential for medical errors are among the challenges of AI use in psychiatry.

Related Resources

  • Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry. 2019. doi:10.1038/s41380-019-0365-9.
  • Kretzschmar K, Tyroll H, Pavarini G, et al; NeurOx Young People’s Advisory Group. Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed Inform Insights. 2019;11:1178222619829083. doi:10.1177/1178222619829083.
References

1. McCarthy J. What is AI? Basic questions. http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html. Accessed July 19, 2019.
2. Oxford Reference. Artificial intelligence. http://www.oxfordreference.com/view/10.1093/oi/authority.20110803095426960. Accessed July 19, 2019.
3. Turing AM. Computing machinery and intelligence. Mind. 1950;59:433-460.
4. Robert C. Book review: machine learning, a probabilistic perspective. CHANCE. 2014;27(2):62-63.
5. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge, MA: The MIT Press; 2016.
6. Brownlee J. Supervised and unsupervised machine learning algorithms. https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/. Published March 16, 2016. Accessed July 19, 2019.
7. Russell S, Norvig P. Artificial intelligence: a modern approach. Upper Saddle River, NJ: Pearson; 1995.
8. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69S:S36-S40.
9. The Johns Hopkins hospital launches capacity command center to enhance hospital operations. Johns Hopkins Medicine. https://www.hopkinsmedicine.org/news/media/releases/the_johns_hopkins_hospital_launches_capacity_command_center_to_enhance_hospital_operations. Published October 26, 2016. Accessed July 19, 2019.
10. U.S. healthcare leaders expect widespread adoption of artificial intelligence by 2023. Intel. https://newsroom.intel.com/news-releases/u-s-healthcare-leaders-expect-widespread-adoption-artificial-intelligence-2023/#gs.7j7yjk. Published July 2, 2018. Accessed July 19, 2019.
11. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410.
12. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering. 2018;2:158-164.
13. Weng SF, Reps J, Kai J, et al. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One. 2017;12(4):e0174944. doi:10.1371/journal.pone.0174944.
14. Lakhani P, Sundaram B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582.
15. Bluemke DA. Radiology in 2018: are you working with AI or being replaced by AI? Radiology. 2018;287(2):365-366.
16. Kakar PN, Das J, Roy PM, et al. Robotic invasion of operation theatre and associated anaesthetic issues: A review. Indian J Anaesth. 2011;55(1):18-25.
17. World first: researchers develop completely automated anesthesia system. McGill University. https://www.mcgill.ca/newsroom/channels/news/world-first-researchers-develop-completely-automated-anesthesia-system-100263. Published May 1, 2008. Accessed July 19, 2019.
18. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118.
19. Liu Y, Gadepalli K, Norouzi M, et al. Detecting cancer metastases on gigapixel pathology images. https://arxiv.org/abs/1703.02442. Published March 8, 2017. Accessed July 19, 2019.
20. Bassett C. The computational therapeutic: exploring Weizenbaum’s ELIZA as a history of the present. AI & Soc. 2018. https://doi.org/10.1007/s00146-018-0825-9.
21. Just MA, Pan L, Cherkassky VL, et al. Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nat Hum Behav. 2017;1:911-919.
22. Pestian J, Nasrallah H, Matykiewicz P, et al. Suicide note classification using natural language processing: a content analysis. Biomed Inform Insights. 2010;2010(3):19-28.
23. Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clinical Psychological Science. 2017;5(3):457-469.
24. Pestian JP, Sorter M, Connolly B, et al; STM Research Group. A machine learning approach to identifying the thought markers of suicidal subjects: a prospective multicenter trial. Suicide Life Threat Behav. 2017;47(1):112-121.
25. Corcoran CM, Carrillo F, Fernández-Slezak D, et al. Prediction of psychosis across protocols and risk cohorts using automated language analysis. World Psychiatry. 2018;17(1):67-75.
26. Bedi G, Carrillo F, Cecchi GA, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. NPJ Schizophr. 2015;1:15030. doi:10.1038/npjschz.2015.30.
27. Tandon N, Tandon R. Will machine learning enable us to finally cut the Gordian Knot of schizophrenia. Schizophr Bull. 2018;44(5):939-941.
28. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. 2018;378(11):981-983.
29. Nuffield Council on Bioethics. The big ethical questions for artificial intelligence (AI) in healthcare. http://nuffieldbioethics.org/news/2018/big-ethical-questions-artificial-intelligence-ai-healthcare. Published May 15, 2018. Accessed July 19, 2019.
30. Axt J. Artificial neural networks: a systematic review of their efficacy as an innovative resource for health care practice managers. https://www.researchgate.net/publication/322101587_Running_head_ANN_EFFICACY_IN_HEALTHCARE-A_SYSTEMATIC_REVIEW_1_Artificial_Neural_Networks_A_systematic_review_of_their_efficacy_as_an_innovative_resource_for_healthcare_practice_managers. Published October 2017. Accessed July 19, 2019.
31. Cecchi G. IBM 5 in 5: with AI, our words will be a window into our mental health. IBM Research Blog. https://www.ibm.com/blogs/research/2017/1/ibm-5-in-5-our-words-will-be-the-windows-to-our-mental-health/. Published January 5, 2017. Accessed July 19, 2019.
32. Constine J. Facebook rolls out AI to detect suicidal posts before they’re reported. TechCrunch. http://tcrn.ch/2hUBi3B. Published November 27, 2017. Accessed July 19, 2019.
33. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017;4(2):e19. doi:10.2196/mental.7785.
34. UTA researchers use artificial intelligence to assess, enhance cognitive abilities in school-aged children. University of Texas at Arlington. https://www.uta.edu/news/releases/2016/10/makedon-children-learning-difficulties.php. Published October 13, 2016. Accessed July 19, 2019.
35. Nealon C. App for early autism detection launched on World Autism Awareness Day, April 2. University at Buffalo. http://www.buffalo.edu/news/releases/2018/04/001.html. Published April 2, 2018. Accessed July 19, 2019.
36. SimSensei. University of Southern California Institute for Creative Technologies. http://ict.usc.edu/prototypes/simsensei/. Accessed July 19, 2019.

