The algorithm less traveled

Quite a while ago, when I used to moonlight as the medicine attending in a university medical center emergency department, I took a course, passed an exam, and became certified in advanced trauma life support. I guess that I am one of the few board-certified rheumatologists to hold such certification, as there is little apparent clinical crossover between the management of patients with lupus or vasculitis and those with life-threatening trauma.

To this day I remain impressed by the algorithmic nature of trauma management. A routine that to the internist could appear mindless and slavish was to the trauma physician a protocol designed to take no chances on missing a life-threatening complication in the heat of the moment. The trauma physician cannot afford to wait for a cognitively derived epiphany in a clinical setting that often rapidly unfolds as a series of “never-miss” scenarios. The appropriate algorithm, rigorously followed, offers the best chance of avoiding a catastrophe of omission. This was long before Atul Gawande published his Checklist Manifesto.

Reviewing the article by Sussman et al, “Eyes of the mimicker,” in this issue of the Journal got me thinking about the power of algorithmic thinking and practice in internal medicine, about how the patient they describe relates to my own practice experiences over the years, and about how much the context of where we practice and whom we treat informs (and can misinform) our clinical reasoning. When I was a medical student at Bellevue Hospital in New York City (in the pre-HIV era), the rapid plasma reagin (RPR) was a routine blood test, as syphilis routinely earned its moniker as the “great imitator.” When I did my residency at the Hospital of the University of Pennsylvania, my ingrained habit of ordering this test was extinguished, along with my previously learned habit of obtaining blood cultures in all patients who presented with new heart failure not explained by the electrocardiogram. These habits disappeared not because of arguments steeped in evidence-based medicine or an emphasis on Bayesian test-ordering, but because in Philadelphia at that time we were not seeing patients with occult syphilis and endocarditis with the same frequency as at Bellevue. Context can and should play a role in our diagnostic reasoning.

But I still remember the patient I saw in the Philadelphia emergency room, a second visit for a man in his 20s with a diffuse, mostly macular rash on his trunk, palms, and soles (visible when the light was turned up in his darkened room, as he felt uncomfortable with bright light), diffuse adenopathy, and enlarged, doughy, and minimally tender wrists and finger (metacarpophalangeal) joints. I recall wondering why no one had thought to obtain an RPR test on him the first time he had presented to the emergency room; if he had been at Bellevue, the test results would already have returned.

Without appropriate algorithms, things get missed. But using algorithms indiscriminately is cost-ineffective and can lead to cascades of inappropriate tests and interventions. Striking the appropriate balance is part of what goes into writing useful clinical care paths.

As I read the article by Sussman et al, I wondered who first looked at the patient’s retinas and what initially prompted the testing that was ordered. The presentation was not typical of ocular syphilis, and I would guess that an ophthalmologist or infectious disease consultant evaluating the blurred vision observed the retinal findings, suspected the diagnosis, and ordered serologies, as well as other studies searching for infections and systemic autoimmune disorders that can also cause Roth spots. Gone are the days when internists (and residents) routinely examine the eyes as part of a full physical examination. I am certain an evidence-based study of this practice would find it time-ineffective, with inappropriately low sensitivity.

I don’t think the retinal examination will return to the internist’s checklist. Yet that is where the algorithm that led to this patient’s diagnosis likely began. One can “google” the causes of Roth spots, but as yet there is no app for demonstrating that they are present.
