In reply: Cognitive bias and diagnostic error

Article Type
Changed
Wed, 08/16/2017 - 13:53
Display Headline
In reply: Cognitive bias and diagnostic error

In Reply: We thank Dr. Field for his insights and personal observations related to diagnosis and biases that contribute to diagnostic errors.

Dr. Field’s comment about the importance of revisiting one’s initial working diagnosis is consistent with our proposed diagnostic time out. A diagnostic time out can incorporate a short checklist and aid in debiasing clinicians when findings do not fit the case presentation, such as lack of response to diuretic therapy. Being mindful of slowing down and not necessarily rushing to judgment is another important component.1 Of note, the residents in our case did revisit their initial working diagnosis, as suggested by Dr. Field. Questions from learners have great potential to serve as debiasing instruments and should always be encouraged. Those who do not work with students can do the same by speaking with nurses or other members of the healthcare team, who offer observations that busy physicians might miss.

Our case highlights the problem that we lack objective criteria to diagnose symptomatic heart failure. While B-type natriuretic peptide (BNP) testing has a strong negative predictive value, serial BNP measurements have not been established as helpful in the management of heart failure.2 Although certain chest radiography findings carry strong positive and negative likelihood ratios, the role of serial chest radiographs is less clear.3 Thus, heart failure remains a clinical diagnosis in current practice.

As Dr. Field points out, the accuracy and performance characteristics of diagnostic tests and measurements, such as the respiratory rate, need to be considered alongside debiasing strategies to achieve higher diagnostic accuracy. Multiple factors can contribute to low-performing or misinterpreted diagnostic tests, and vital sign measurements have been shown to be similarly prone to error.4

Finally, we wholeheartedly agree with Dr. Field’s comment on unnecessary testing. High-value care is appropriate care. Using Bayesian reasoning to guide testing, monitoring the treatment course appropriately, and eliminating waste are all likely to improve both value and diagnostic accuracy. Automated, ritual ordering of daily tests can indicate that thinking has been shut off, leaving clinicians susceptible to premature closure of the diagnostic process and to “incidentalomas” that distract them from the right diagnosis, all the while producing low-value care: wasteful spending, patient dissatisfaction, and hospital-acquired anemia.5 We believe that deciding each day what the next day’s tests will be can be another powerful debiasing habit, one with benefits beyond diagnosis.
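The Bayesian arithmetic behind test-guided reasoning can be sketched in a few lines. The pretest probability and likelihood ratios below are hypothetical placeholders chosen for illustration, not values drawn from the cited studies.

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Update a pretest probability with a test's likelihood ratio (odds form of Bayes' rule)."""
    pretest_odds = pretest_prob / (1 - pretest_prob)   # probability -> odds
    posttest_odds = pretest_odds * likelihood_ratio    # apply the test result
    return posttest_odds / (1 + posttest_odds)         # odds -> probability

# Hypothetical example: 30% pretest probability, positive result on a test with LR+ = 5
print(round(posttest_probability(0.30, 5.0), 2))  # 0.68
# The same pretest probability with a negative result and LR- = 0.1 nearly rules the diagnosis out
print(round(posttest_probability(0.30, 0.1), 2))  # 0.04
```

The odds form makes the clinical point explicit: a test result only shifts, rather than replaces, the clinician’s prior estimate, which is why ordering a test without first committing to a pretest probability adds little diagnostic value.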

References
  1. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008; 121(suppl):S38–S42.
  2. Yancy CW, Jessup M, Bozkurt B, et al. 2013 ACCF/AHA guideline for the management of heart failure. Circulation 2013; 128:e240–e327.
  3. Wang CS, FitzGerald JM, Schulzer M, Mak E, Ayas NT. Does this dyspneic patient in the emergency department have congestive heart failure? JAMA 2005; 294:1944–1956.
  4. Philip KE, Pack E, Cambiano V, Rollmann H, Weil S, O’Beirne J. The accuracy of respiratory rate assessment by doctors in a London teaching hospital: a cross-sectional study. J Clin Monit Comput 2015; 29:455–460.
  5. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: prevalence, outcomes, and healthcare implications. J Hosp Med 2013; 8:506–512. 
Author and Disclosure Information

Nikhil Mull, MD
University of Pennsylvania, Philadelphia

James B. Reilly, MD, MS
Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
University of Pennsylvania, Philadelphia

Issue
Cleveland Clinic Journal of Medicine - 83(6)
Page Number
407-408
Legacy Keywords
heart failure, cognitive bias, diagnostic error, Emergency medicine, General internal medicine, Hospital medicine, morton field

An elderly woman with ‘heart failure’: Cognitive biases and diagnostic error

Article Type
Changed
Tue, 09/12/2017 - 14:22
Display Headline
An elderly woman with ‘heart failure’: Cognitive biases and diagnostic error

An elderly Spanish-speaking woman with morbid obesity, diabetes, hypertension, and rheumatoid arthritis presents to the emergency department with worsening shortness of breath and cough. She speaks only Spanish, so her son provides the history without the aid of an interpreter.

Her shortness of breath is most noticeable with exertion and has increased gradually over the past 2 months. She has a nonproductive cough. Her son has noticed decreased oral intake and weight loss over the past few weeks. She has neither traveled recently nor been in contact with anyone known to have an infectious disease.

A review of systems is otherwise negative: specifically, she denies chest pain, fevers, or chills. She saw her primary care physician 3 weeks ago for these complaints and was prescribed a 3-day course of azithromycin with no improvement.

Her medications include lisinopril, atenolol, glipizide, and metformin; her son believes she may be taking others as well but is not sure. He is also unsure of what treatment his mother has received for her rheumatoid arthritis, and most of her medical records are within another health system.

On physical examination, the patient is coughing and appears ill. Her temperature is 99.9°F (37.7°C), heart rate 105 beats per minute, blood pressure 140/70 mm Hg, res­piratory rate 24 per minute, and oxygen saturation by pulse oximetry 89% on room air. Heart sounds are normal, jugular venous pressure cannot be assessed because of her obese body habitus, pulmonary examination demonstrates crackles in all lung fields, and lower-extremity edema is not present. Her extremities are warm and well perfused. Musculoskeletal examination reveals deformities of the joints in both hands consistent with rheumatoid arthritis.

Laboratory data:

  • White blood cell count 13.0 × 109/L (reference range 3.7–11.0)
  • Hemoglobin level 10 g/dL (11.5–15)
  • Serum creatinine 1.0 mg/dL (0.7–1.4)
  • Pro-B-type natriuretic peptide (pro-BNP) level greater than the upper limit of normal.

A chest radiograph is obtained, and the resident radiologist’s preliminary impression is that it is consistent with pulmonary vascular congestion.

The patient is admitted for further diagnostic evaluation. The emergency department resident orders intravenous furosemide and signs out to the night float medicine resident that this is an “elderly woman with hypertension, diabetes, and heart failure being admitted for a heart failure exacerbation.”

What is the accuracy of a physician’s initial working diagnosis?

Diagnostic accuracy requires both clinical knowledge and problem-solving skills.1

A decade ago, a National Patient Safety Foundation survey2 found that one in six patients had suffered a medical error related to misdiagnosis. In a large systematic review of autopsy-based diagnostic errors, the theorized rate of major errors ranged from 8.4% to as high as 24.4%.3 A study by Neale et al4 found that admitting diagnoses were incorrect in 6% of cases. In emergency departments, inaccuracy rates of up to 12% have been described.5

What factors influence the prevalence of diagnostic errors?

Initial empiric treatments, such as intravenous furosemide in the above scenario, add to the challenge of diagnosis in acute care settings and can influence clinical decisions made by subsequent providers.6

Nonspecific or vague symptoms make diagnosis especially challenging. Shortness of breath, for example, is a common chief complaint in medical patients, as in this case. Green et al7 found that emergency department physicians reported clinical uncertainty about a diagnosis of heart failure in 31% of patients evaluated for “dyspnea.” Pulmonary embolism and pulmonary tuberculosis are also in the differential diagnosis for our patient, with studies reporting misdiagnosis rates of 55% for pulmonary embolism8 and 50% for pulmonary tuberculosis.9

Hertwig et al,10 describing the diagnostic process in patients presenting to emergency departments with a nonspecific constellation of symptoms, found particularly low rates of agreement between the initial diagnostic impression and the final, correct one. In fact, the actual diagnosis was only in the physician’s initial “top three” differential diagnoses 29% to 83% of the time.

Atypical presentations of common diseases, initial nonspecific presentations of common diseases, and confounding comorbid conditions have also been associated with misdiagnosis.11 Our case scenario illustrates the frequent challenges physicians face when diagnosing patients who present with nonspecific symptoms and signs on a background of multiple, chronic comorbidities.

Contextual factors in the system and environment contribute to the potential for error.12 Examples include frequent interruptions, time pressure, poor handoffs, insufficient data, and multitasking.

In our scenario, incomplete data, time constraints, and multitasking in a busy work environment compelled the emergency department resident to rapidly synthesize information to establish a working diagnosis. Interpretations of radiographs by on-call radiology residents are similarly at risk of diagnostic error for the same reasons.13

Physician factors also influence diagnosis. Interestingly, physician certainty or uncertainty at the time of initial diagnosis does not uniformly appear to correlate with diagnostic accuracy. A recent study showed that physician confidence remained high regardless of the degree of difficulty in a given case, and degree of confidence also correlated poorly with whether the physician’s diagnosis was accurate.14

For patients admitted with a chief complaint of dyspnea, as in our scenario, Zwaan et al15 showed that “inappropriate selectivity” in reasoning contributed to an inaccurate diagnosis 23% of the time. Inappropriate selectivity, as defined by these authors, occurs when a probable diagnosis is not sufficiently considered and therefore is neither confirmed nor ruled out.

In our patient scenario, the failure to consider diagnoses other than heart failure and the inability to confirm a prior diagnosis of heart failure in the emergency department may contribute to a diagnostic error.

CASE CONTINUED: NO IMPROVEMENT OVER 3 DAYS

The night float resident, who has six other admissions this night, cannot ask the resident who evaluated this patient in the emergency department for further information because the shift has ended. The patient’s son left at the time of admission and is not available when the patient arrives on the medical ward.

The night float resident quickly examines the patient, enters admission orders, and signs the patient out to the intern and resident who will be caring for her during her hospitalization. The verbal handoff notes that the history was limited due to a language barrier. The initial problem list includes heart failure without a differential diagnosis, but notes that an elevated pro-BNP and chest radiograph confirm heart failure as the likely diagnosis.

Several hours after the night float resident has left, the resident presents this history to the attending physician, and together they decide to order her regular at-home medications, as well as deep vein thrombosis prophylaxis and echocardiography. In writing the orders, subcutaneous heparin once daily is erroneously entered instead of low-molecular-weight heparin daily, as this is the default in the medical record system. The tired resident fails to recognize this, and the pharmacist does not question it.

Over the next 2 days, the patient’s cough and shortness of breath persist.

On hospital day 3, two junior residents on the team (who finished their internship 2 weeks ago) review the attending radiologist’s interpretation of the chest radiograph. Unflagged, it confirms the resident’s interpretation but notes ill-defined, scattered, faint opacities. The residents believe that an interstitial pattern may be present and suggest that the patient may not have heart failure but rather a primary pulmonary disease. They bring this to the attention of their attending physician, who dismisses their concerns and comments that heart failure is a clinical diagnosis. The residents do not bring this idea up again to the attending physician.

That night, the float team is called by the nursing staff because of worsening oxygenation and cough. They add an intravenous corticosteroid, a broad-spectrum antibiotic, and an inhaled bronchodilator to the patient’s drug regimen.

How do cognitive errors predispose physicians to diagnostic errors?

When errors in diagnosis are reviewed retrospectively, cognitive or “thinking” errors are generally found, especially in nonprocedural or primary care specialties such as internal medicine, pediatrics, and emergency medicine.16,17

A widely accepted theory of how humans make decisions was described by the psychologists Tversky and Kahneman in 197418 and has more recently been applied to physicians’ diagnostic processes.19 This dual-process model holds that persons with a requisite level of expertise use either the intuitive “system 1” process of thinking, based on pattern recognition and heuristics, or the slower, more analytical “system 2” process.20 Experts disagree as to whether in medicine these processes represent a binary either-or model or a continuum,21 with the relative contribution of each process determined by the physician and the task.

What are some common types of cognitive error?

Experts agree that many diagnostic errors in medicine stem from decisions arrived at by inappropriate system 1 thinking due to biases. These biases have been identified and described as they relate to medicine, most notably by Croskerry.22

Several cognitive biases are illustrated in our clinical scenario:

The framing effect occurred when the emergency department resident listed the patient’s admitting diagnosis as heart failure during the clinical handoff of care.

Anchoring bias, as defined by Croskerry,22 is the tendency to lock onto salient features of the case too early in the diagnostic process and then to fail to adjust this initial diagnostic impression. This bias affected the admitting night float resident, primary intern, resident, and attending physician.

Diagnostic momentum, in turn, is the well-described tendency of a diagnostic label to gather strength as it passes from one provider to the next until it becomes accepted fact. Clinicians are especially vulnerable to it in today’s environment of “copy-and-paste” medical records and numerous handovers of care as a consequence of residency duty-hour restrictions.23

Availability bias refers to the tendency for commonly seen diagnoses such as heart failure, or recently seen ones, to be more “available” to human memory. Because these diagnoses spring to mind quickly, they can trick providers into assuming they are also more common or more likely.

Confirmation bias. The initial working diagnosis of heart failure may have led the medical team to place greater emphasis on the elevated pro-BNP and the chest radiograph to support the initial impression while ignoring findings such as weight loss that do not support this impression.

Blind obedience. Although the residents recognized the possibility of a primary pulmonary disease, they did not investigate this further. And when the attending physician dismissed their suggestion, they thus deferred to the person in authority or with a reputation of expertise.

Overconfidence bias. Despite minimal improvement in the patient’s clinical status after effective diuresis and the suggestion of alternative diagnoses by the residents, the attending physician remained confident—perhaps overconfident—in the diagnosis of heart failure and would not consider alternatives. Overconfidence bias has been well described and occurs when a medical provider believes too strongly in his or her ability to be correct and therefore fails to consider alternative diagnoses.24

Despite succumbing to overconfidence bias, the attending physician was able to overcome base-rate neglect, ie, failure to consider the prevalence of potential diagnoses in diagnostic reasoning.
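Base-rate neglect can be made concrete with Bayes’ rule: the same positive result on the same test implies very different posttest probabilities depending on how prevalent the disease is in the population at hand. The sensitivity and specificity figures below are hypothetical, chosen only to illustrate the effect.

```python
def prob_disease_given_positive(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive test) by Bayes' rule."""
    true_positives = prevalence * sensitivity          # diseased patients who test positive
    false_positives = (1 - prevalence) * (1 - specificity)  # healthy patients who test positive
    return true_positives / (true_positives + false_positives)

# Hypothetical test with 90% sensitivity and 90% specificity:
print(round(prob_disease_given_positive(0.30, 0.90, 0.90), 2))  # common disease: 0.79
print(round(prob_disease_given_positive(0.01, 0.90, 0.90), 2))  # rare disease:   0.08
```

Ignoring the prevalence term is exactly base-rate neglect: a clinician who trusts the 90% figures alone will grossly overestimate the probability of a rare diagnosis after a positive result.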

Table 1. Definitions and representative examples of cognitive biases in the case

Each of these biases, and others not mentioned, can lead to premature closure, which is the unfortunate root cause of many diagnostic errors and delays. We have illustrated several biases in our case scenario that led several physicians on the medical team to prematurely “close” on the diagnosis of heart failure (Table 1).

CASE CONTINUED: SURPRISES AND REASSESSMENT

On hospital day 4, the patient’s medication lists from her previous hospitalizations arrive, and the team is surprised to discover that she has been receiving infliximab for the past 3 to 4 months for her rheumatoid arthritis.

Additionally, an echocardiogram that was ordered on hospital day 1 but was lost in the cardiologist’s reading queue comes in and shows a normal ejection fraction with no evidence of elevated filling pressures.

Computed tomography of the chest reveals a reticular pattern with innumerable, tiny, 1- to 2-mm pulmonary nodules. The differential diagnosis is expanded to include hypersensitivity pneumonitis, lymphoma, fungal infection, and miliary tuberculosis.

How do faulty systems contribute to diagnostic error?

It is increasingly recognized that diagnostic errors can occur as a result of cognitive error, systems-based error, or, quite commonly, both. Graber et al17 analyzed 100 cases of diagnostic error and determined that while cognitive errors did occur in most of them, nearly half the time cognitive and systems-based errors contributed simultaneously. Observers have further delineated the importance of the systems context and how it affects our thinking.25

In this case, the language barrier, lack of availability of family, and inability to promptly utilize interpreter services contributed to early problems in acquiring a detailed history and a complete medication list that included the immunosuppressant infliximab. Later, a systems error led to a delay in the interpretation of an echocardiogram. Each of these factors, if prevented, would have presumably resulted in expansion of the differential diagnosis and earlier arrival at the correct diagnosis.

CASE CONTINUED: THE PATIENT DIES OF TUBERCULOSIS

The patient is moved to a negative pressure room, and the pulmonary consultants recommend bronchoscopy. During the procedure, the patient suffers acute respiratory failure, is intubated, and is transferred to the medical intensive care unit, where a saddle pulmonary embolism is diagnosed by computed tomographic angiography.

One day later, the sputum culture from the bronchoscopy returns as positive for acid-fast bacilli. A four-drug regimen for tuberculosis is started. The patient continues to have a downward course and expires 2 weeks later. Autopsy reveals miliary tuberculosis.

What is the frequency of diagnostic error in medicine?

Diagnostic error is estimated to have a frequency of 10% to 20%.24 Rates of diagnostic error are similar irrespective of method of determination, eg, from autopsy,3 standardized patients (ie, actors presenting with scripted scenarios),26 or case reviews.27 Patient surveys report patient-perceived harm from diagnostic error at a rate of 35% to 42%.28,29 The landmark Harvard Medical Practice Study found that 17% of all adverse events were attributable to diagnostic error.30

Diagnostic error is the most common type of medical error in nonprocedural medical fields.31 It causes a disproportionately large amount of morbidity and death.

Diagnostic error is also the most common cause of malpractice claims in the United States; across inpatient and outpatient settings, and for both medical and surgical patients, it accounted for 45.9% of all outpatient malpractice claims in 2009.32 A 2013 study indicated that diagnostic error is more common, more expensive, and two times more likely to result in death than any other category of error.33

CASE CONTINUED: MORBIDITY AND MORTALITY CONFERENCE

The patient’s case is brought to a morbidity and mortality conference for discussion. The systems issues in the case, including medication reconciliation, availability of interpreters, and the timing and process of echocardiogram readings, are all discussed, but the clinical reasoning and cognitive errors made in the case are not addressed.

Why are cognitive errors often neglected in discussions of medical error?

Historically, openly discussing error in medicine has been difficult. Over the past decade, however, and fueled by the landmark Institute of Medicine report To Err is Human,34 the healthcare community has made substantial strides in identifying and talking about systems factors as a cause of preventable medical error.34,35

While systems contributions to medical error are inherently “external” to physicians and other healthcare providers, the cognitive contributions to error are inherently “internal” and are often considered personal. This has led to diagnostic error being kept out of many patient safety conversations. Further, while the solutions to systems errors are often tangible, such as implementing a fall prevention program or changing the physical packaging of a medication to reduce a medication dispensing or administration error, solutions to cognitive errors are generally considered more challenging to address by organizations trying to improve patient safety.

How can hospitals and department leaders do better?

Healthcare organizations and leaders of clinical teams or departments can implement several strategies.36

First, they can seek out and analyze the causes of diagnostic errors that are occurring locally in their institution and learn from their diagnostic errors, such as the one in our clinical scenario.

Second, they can promote a culture of open communication and questioning around diagnosis. Trainees, physicians, and nurses should be comfortable questioning each other, including those higher up in the hierarchy, by saying, “I’m not sure” or “What else could this be?” to help reduce cognitive bias and expand the diagnostic possibilities.

Similarly, developing strategies to promote feedback on diagnosis among physicians will allow us all to learn from our diagnostic mistakes.

Use of the electronic medical record to assist in follow-up of pending diagnostic studies and patient return visits is yet another strategy.

Finally, healthcare organizations can adopt strategies to promote patient involvement in diagnosis, such as providing patients with copies of their test results and discharge summaries, encouraging the use of electronic patient communication portals, and empowering patients to ask questions related to their diagnosis. When context and environment make it impractical to implement every proposed intervention, prioritizing the solutions most likely to reduce diagnostic error can help.

CASE CONTINUED: LEARNING FROM MISTAKES

The attending physician and resident in the case meet after the conference to review their clinical decision-making. Both are interested in learning from this case and improving their diagnostic skills in the future.

What specific steps can clinicians take to mitigate cognitive bias in daily practice?

In addition to continuing to expand one’s medical knowledge and gain more clinical experience, we can suggest several small steps that busy clinicians can take, individually or in combination, to improve diagnostic skills by reducing the potential for biased thinking in clinical practice.

Approaches to decision-making
From Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ 2009; 14:27–35. With kind permission from Springer Science and Business Media.
Figure 1. Approaches to decision-making can be located along a continuum, with unconscious, intuitive ones clustering at one end and deliberate, analytical ones at the other.

Think about your thinking. Our first recommendation would be to become more familiar with the dual process theory of clinical cognition (Figure 1).37,38 This theoretical framework may be very helpful as a foundation from which to build better thinking skills. Physicians, especially residents and students, can be taught these concepts and their potential to contribute to diagnostic errors, and can use these skills to recognize those contributions in others’ diagnostic practices and even in their own.39

Facilitating metacognition, or “thinking about one’s thinking,” may help clinicians catch themselves in thinking traps and provide the opportunity to reflect on biases retrospectively, as a double check or an opportunity to learn from a mistake.

Recognize your emotions. Gaining an understanding of the effect of one’s emotions on decision-making also can help clinicians free themselves of bias. As human beings, healthcare professionals are susceptible to emotion, and the best approach to mitigate the emotional influences may be to consciously name them and adjust for them.40

Because it is impractical to apply slow, analytical system 2 approaches to every case, skills that hone and develop more accurate, reliable system 1 thinking are crucial. Gaining broad exposure to increased numbers of cases may be the most reliable way to build an experiential repertoire of “illness scripts,” but there are ways to increase the experiential value of any case with a few techniques that have potential to promote better intuition.41

Embracing uncertainty early in the diagnostic process and envisioning the worst-case scenario allow the clinician to consider diagnostic paths outside the current working diagnosis. This may prime the clinician to look for and recognize early warning signs that argue against the initial diagnosis, at a time when an adjustment can still be made to prevent a bad outcome.

Practice progressive problem-solving,42 a technique in which the physician creates additional challenges to increase the cognitive burden of a “routine” case in an effort to train his or her mind and sharpen intuition. An example of this practice is contemplating a backup treatment plan in advance in the event of a poor response to or an adverse effect of treatment. Highly rated physicians and teachers perform this regularly.43,44 Other ways to maximize the learning value of an individual case include seeking feedback on patient outcomes, especially when a patient has been discharged or transferred to another provider’s care, or when the physician goes off service.

Simulation, traditionally used for procedural training, has potential as well. Cognitive simulation, such as case reports or virtual patient modules, may also enhance clinical reasoning skills, though possibly at a greater cost in time and expense.

Decreased reliance on memory is likely to improve diagnostic reasoning. Systems tools such as checklists45 and health information technology46 have potential to reduce diagnostic errors, not by taking thinking away from the clinician but by relieving the cognitive load enough to facilitate greater effort toward reasoning.

Slow down. Finally, and perhaps most important, recent models of clinical expertise have suggested that mastery comes from having a robust intuitive method, with a sense of the limitations of the intuitive approach, an ability to recognize the need to perform more analytical reasoning in select cases, and the willingness to do so. In short, it may well be that the hallmark of a master clinician is the propensity to slow down when necessary.47

If one considers diagnosis a cognitive procedure, perhaps a brief “diagnostic time-out” for safety might afford an opportunity to recognize and mitigate biases and errors. There are likely many potential scripts for a good diagnostic time-out, but to be functional it should be brief and simple to facilitate consistent use. We have recommended the following four questions to our residents as a starting point, any of which could signal the need to switch to a slower, analytic approach.

Four-step diagnostic time-out

  • What else can it be?
  • Is there anything about the case that does not fit?
  • Is it possible that multiple processes are going on?
  • Do I need to slow down?

These questions can serve as a double check for an intuitively formed initial working diagnosis, incorporating many of the principles discussed above, in a way that would hopefully avoid undue burden on a busy clinician. These techniques, it must be acknowledged, have not yet been directly tied to reductions in diagnostic errors. However, diagnostic errors, as discussed, are very difficult to identify and study, and these techniques will serve mainly to improve habits that are likely to show benefits over much longer time periods than most studies can measure.

References
  1. Kassirer JP. Diagnostic reasoning. Ann Intern Med 1989; 110:893–900.
  2. Golodner L. How the public perceives patient safety. Newsletter of the National Patient Safety Foundation 1997; 1:1–6.
  3. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003; 289:2849–2856.
  4. Neale G, Woloshynowych M, Vincent C. Exploring the causes of adverse events in NHS hospital practice. J R Soc Med 2001; 94:322–330.
  5. Chellis M, Olson J, Augustine J, Hamilton G. Evaluation of missed diagnoses for patients admitted from the emergency department. Acad Emerg Med 2001; 8:125–130.
  6. Tallentire VR, Smith SE, Skinner J, Cameron HS. Exploring error in team-based acute care scenarios: an observational study from the United Kingdom. Acad Med 2012; 87:792–798.
  7. Green SM, Martinez-Rumayor A, Gregory SA, et al. Clinical uncertainty, diagnostic accuracy, and outcomes in emergency department patients presenting with dyspnea. Arch Intern Med 2008; 168:741–748.
  8. Pineda LA, Hathwar VS, Grant BJ. Clinical suspicion of fatal pulmonary embolism. Chest 2001; 120:791–795.
  9. Shojania KG, Burton EC, McDonald KM, Goldman L. The autopsy as an outcome and performance measure. Evid Rep Technol Assess (Summ) 2002; 58:1–5.
  10. Hertwig R, Meier N, Nickel C, et al. Correlates of diagnostic accuracy in patients with nonspecific complaints. Med Decis Making 2013; 33:533–543.
  11. Kostopoulou O, Delaney BC, Munro CW. Diagnostic difficulty and error in primary care—a systematic review. Fam Pract 2008; 25:400–413.
  12. Ogdie AR, Reilly JB, Pang WG, et al. Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med 2012; 87:1361–1367.
  13. Feldmann EJ, Jain VR, Rakoff S, Haramati LB. Radiology residents’ on-call interpretation of chest radiographs for congestive heart failure. Acad Radiol 2007; 14:1264–1270.
  14. Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013; 173:1952–1958.
  15. Zwaan L, Thijs A, Wagner C, Timmermans DR. Does inappropriate selectivity in information use relate to diagnostic errors and patient harm? The diagnosis of patients with dyspnea. Soc Sci Med 2013; 91:32–38.
  16. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009; 169:1881–1887.
  17. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005; 165:1493–1499.
  18. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974; 185:1124–1131.
  19. Kahneman D. Thinking, fast and slow. New York, NY: Farrar, Straus, and Giroux; 2011.
  20. Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009; 84:1022–1028.
  21. Custers EJ. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning. Acad Med 2013; 88:1074–1080.
  22. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003; 78:775–780.
  23. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA 2006; 295:2335–2336.
  24. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(suppl 5):S2–S23.
  25. Henriksen K, Brady J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013; 22(suppl 2):ii1–ii5.
  26. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care 2004; 42:1066–1072.
  27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012; 21:737–745.
  28. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med 2002; 347:1933–1940.
  29. Burroughs TE, Waterman AD, Gallagher TH, et al. Patient concerns about medical errors in emergency departments. Acad Emerg Med 2005; 12:57–64.
  30. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324:377–384.
  31. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000; 38:261–271.
  32. Bishop TF, Ryan AM, Casalino LP. Paid malpractice claims for adverse events in inpatient and outpatient settings. JAMA 2011; 305:2427–2431.
  33. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the national practitioner data bank. BMJ Qual Saf 2013; 22:672–680.
  34. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, DC: The National Academies Press; 2000.
  35. Singh H. Diagnostic errors: moving beyond ‘no respect’ and getting ready for prime time. BMJ Qual Saf 2013; 22:789–792.
  36. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014; 40:102–110.
  37. Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):27–35.
  38. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):37–49.
  39. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf 2013; 22:1044–1050.
  40. Croskerry P, Abbass A, Wu AW. Emotional influences in patient safety. J Patient Saf 2010; 6:199–205.
  41. Rajkomar A, Dhaliwal G. Improving diagnostic reasoning to improve patient safety. Perm J 2011; 15:68–73.
  42. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf 2013; 22(suppl 2):ii28–ii32.
  43. Sargeant J, Mann K, Sinclair D, et al. Learning in practice: experiences and perceptions of high-scoring physicians. Acad Med 2006; 81:655–660.
  44. Mylopoulos M, Lohfeld L, Norman GR, Dhaliwal G, Eva KW. Renowned physicians' perceptions of expert diagnostic practice. Acad Med 2012; 87:1413–1417.
  45. Sibbald M, de Bruin AB, van Merrienboer JJ. Checklists improve experts' diagnostic decisions. Med Educ 2013; 47:301–308.
  46. El-Kareh R, Hasan O, Schiff GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013; 22(suppl 2):ii40–ii51.
  47. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med 2007; 82(suppl 10):S109–S116.
Author and Disclosure Information

Nikhil Mull, MD
Assistant Professor of Clinical Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia; Assistant Director, Center for Evidence-based Practice, University of Pennsylvania Health System, Philadelphia, PA

James B. Reilly, MD, MS
Director, Internal Medicine Residency Program, Allegheny Health Network, Pittsburgh, PA; Assistant Professor of Medicine, Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
Associate Professor of Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia

Address: Nikhil Mull, MD, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Penn Tower 2009, Philadelphia, PA 19104; e-mail: Nikhil.Mull@uphs.upenn.edu

Cleveland Clinic Journal of Medicine - 82(11):745–753

Keywords: cognitive bias, diagnostic error, medical error, misdiagnosis, heart failure, tuberculosis, Nikhil Mull, James Reilly, Jennifer Myers

An elderly Spanish-speaking woman with morbid obesity, diabetes, hypertension, and rheumatoid arthritis presents to the emergency department with worsening shortness of breath and cough. She speaks only Spanish, so her son provides the history without the aid of an interpreter.

Her shortness of breath is most noticeable with exertion and has increased gradually over the past 2 months. She has a nonproductive cough. Her son has noticed decreased oral intake and weight loss over the past few weeks. She has neither traveled recently nor been in contact with anyone known to have an infectious disease.

A review of systems is otherwise negative: specifically, she denies chest pain, fevers, or chills. She saw her primary care physician 3 weeks ago for these complaints and was prescribed a 3-day course of azithromycin with no improvement.

Her medications include lisinopril, atenolol, glipizide, and metformin; her son believes she may be taking others as well but is not sure. He is also unsure of what treatment his mother has received for her rheumatoid arthritis, and most of her medical records are within another health system.

On physical examination, the patient is coughing and appears ill. Her temperature is 99.9°F (37.7°C), heart rate 105 beats per minute, blood pressure 140/70 mm Hg, respiratory rate 24 per minute, and oxygen saturation by pulse oximetry 89% on room air. Heart sounds are normal, jugular venous pressure cannot be assessed because of her obese body habitus, pulmonary examination demonstrates crackles in all lung fields, and lower-extremity edema is not present. Her extremities are warm and well perfused. Musculoskeletal examination reveals deformities of the joints in both hands consistent with rheumatoid arthritis.

Laboratory data:

  • White blood cell count 13.0 × 109/L (reference range 3.7–11.0)
  • Hemoglobin level 10 g/dL (11.5–15)
  • Serum creatinine 1.0 mg/dL (0.7–1.4)
  • Pro-brain-type natriuretic peptide (pro-BNP) level greater than the upper limit of normal.

A chest radiograph is obtained, and the resident radiologist’s preliminary impression is that it is consistent with pulmonary vascular congestion.

The patient is admitted for further diagnostic evaluation. The emergency department resident orders intravenous furosemide and signs out to the night float medicine resident that this is an “elderly woman with hypertension, diabetes, and heart failure being admitted for a heart failure exacerbation.”

What is the accuracy of a physician’s initial working diagnosis?

Diagnostic accuracy requires both clinical knowledge and problem-solving skills.1

A decade ago, a National Patient Safety Foundation survey2 found that one in six patients had suffered a medical error related to misdiagnosis. In a large systematic review of autopsy-based diagnostic errors, the theorized rate of major errors ranged from 8.4% to as high as 24.4%.3 A study by Neale et al4 found that admitting diagnoses were incorrect in 6% of cases. In emergency departments, inaccuracy rates of up to 12% have been described.5

What factors influence the prevalence of diagnostic errors?

Initial empiric treatments, such as intravenous furosemide in the above scenario, add to the challenge of diagnosis in acute care settings and can influence clinical decisions made by subsequent providers.6

Nonspecific or vague symptoms make diagnosis especially challenging. Shortness of breath, for example, is a common chief complaint in medical patients, as in this case. Green et al7 found emergency department physicians reported clinical uncertainty for a diagnosis of heart failure in 31% of patients evaluated for “dyspnea.” Pulmonary embolism and pulmonary tuberculosis are also in the differential diagnosis for our patient, with studies reporting a misdiagnosis rate of 55% for pulmonary embolism8 and 50% for pulmonary tuberculosis.9

Hertwig et al,10 describing the diagnostic process in patients presenting to emergency departments with a nonspecific constellation of symptoms, found particularly low rates of agreement between the initial diagnostic impression and the final, correct one. In fact, the actual diagnosis appeared among the physician’s initial “top three” differential diagnoses only 29% to 83% of the time.

Atypical presentations of common diseases, initial nonspecific presentations of common diseases, and confounding comorbid conditions have also been associated with misdiagnosis.11 Our case scenario illustrates the frequent challenges physicians face when diagnosing patients who present with nonspecific symptoms and signs on a background of multiple, chronic comorbidities.

Contextual factors in the system and environment contribute to the potential for error.12 Examples include frequent interruptions, time pressure, poor handoffs, insufficient data, and multitasking.

In our scenario, incomplete data, time constraints, and multitasking in a busy work environment compelled the emergency department resident to rapidly synthesize information to establish a working diagnosis. Interpretations of radiographs by on-call radiology residents are similarly at risk of diagnostic error for the same reasons.13

Physician factors also influence diagnosis. Interestingly, physician certainty or uncertainty at the time of initial diagnosis does not uniformly appear to correlate with diagnostic accuracy. A recent study showed that physician confidence remained high regardless of the degree of difficulty in a given case, and degree of confidence also correlated poorly with whether the physician’s diagnosis was accurate.14

For patients admitted with a chief complaint of dyspnea, as in our scenario, Zwaan et al15 showed that “inappropriate selectivity” in reasoning contributed to an inaccurate diagnosis 23% of the time. Inappropriate selectivity, as defined by these authors, occurs when a probable diagnosis is not sufficiently considered and therefore is neither confirmed nor ruled out.

In our patient scenario, the failure to consider diagnoses other than heart failure and the inability to confirm a prior diagnosis of heart failure in the emergency department may contribute to a diagnostic error.

CASE CONTINUED: NO IMPROVEMENT OVER 3 DAYS

The night float resident, who has six other admissions this night, cannot ask the resident who evaluated this patient in the emergency department for further information because the shift has ended. The patient’s son left at the time of admission and is not available when the patient arrives on the medical ward.

The night float resident quickly examines the patient, enters admission orders, and signs the patient out to the intern and resident who will be caring for her during her hospitalization. The verbal handoff notes that the history was limited due to a language barrier. The initial problem list includes heart failure without a differential diagnosis, but notes that an elevated pro-BNP and chest radiograph confirm heart failure as the likely diagnosis.

Several hours after the night float resident has left, the resident presents this history to the attending physician, and together they decide to order her regular at-home medications, as well as deep vein thrombosis prophylaxis and echocardiography. In writing the orders, subcutaneous heparin once daily is erroneously entered instead of low-molecular-weight heparin daily, as this is the default in the medical record system. The tired resident fails to recognize this, and the pharmacist does not question it.

Over the next 2 days, the patient’s cough and shortness of breath persist.

On hospital day 3, two junior residents on the team (who finished their internship 2 weeks ago) review the attending radiologist’s interpretation of the chest radiograph. Unflagged, it confirms the resident’s interpretation but notes ill-defined, scattered, faint opacities. The residents believe that an interstitial pattern may be present and suggest that the patient may not have heart failure but rather a primary pulmonary disease. They bring this to the attention of their attending physician, who dismisses their concerns and comments that heart failure is a clinical diagnosis. The residents do not bring this idea up again to the attending physician.

That night, the float team is called by the nursing staff because of worsening oxygenation and cough. They add an intravenous corticosteroid, a broad-spectrum antibiotic, and an inhaled bronchodilator to the patient’s drug regimen.

How do cognitive errors predispose physicians to diagnostic errors?

When errors in diagnosis are reviewed retrospectively, cognitive or “thinking” errors are generally found, especially in nonprocedural or primary care specialties such as internal medicine, pediatrics, and emergency medicine.16,17

A widely accepted theory on how humans make decisions was described by the psychologists Tversky and Kahneman in 197418 and has been applied more recently to physicians’ diagnostic processes.19 Their dual process model theory states that persons with a requisite level of expertise use either the intuitive “system 1” process of thinking, based on pattern-recognition and heuristics, or the slower, more analytical “system 2” process.20 Experts disagree as to whether in medicine these processes represent a binary either-or model or a continuum21 with relative contributions of each process determined by the physician and the task.

What are some common types of cognitive error?

Experts agree that many diagnostic errors in medicine stem from decisions arrived at by inappropriate system 1 thinking due to biases. These biases have been identified and described as they relate to medicine, most notably by Croskerry.22

Several cognitive biases are illustrated in our clinical scenario:

The framing effect occurred when the emergency department resident listed the patient’s admitting diagnosis as heart failure during the clinical handoff of care.

Anchoring bias, as defined by Croskerry,22 is the tendency to lock onto salient features of the case too early in the diagnostic process and then to fail to adjust this initial diagnostic impression. This bias affected the admitting night float resident, primary intern, resident, and attending physician.

Diagnostic momentum, in turn, is a well-described phenomenon that clinical providers are especially vulnerable to in today’s environment of “copy-and-paste” medical records and numerous handovers of care as a consequence of residency duty-hour restrictions.23

Availability bias occurs because commonly seen diagnoses, such as heart failure, and recently seen diagnoses are more “available” to human memory. These diagnoses spring to mind quickly and often trick providers into thinking that because they are more easily recalled, they are also more common or more likely.

Confirmation bias. The initial working diagnosis of heart failure may have led the medical team to place greater emphasis on the elevated pro-BNP and the chest radiograph to support the initial impression while ignoring findings such as weight loss that do not support this impression.

Blind obedience. Although the residents recognized the possibility of a primary pulmonary disease, they did not investigate this further. And when the attending physician dismissed their suggestion, they thus deferred to the person in authority or with a reputation of expertise.

Overconfidence bias. Despite minimal improvement in the patient’s clinical status after effective diuresis and the suggestion of alternative diagnoses by the residents, the attending physician remained confident—perhaps overconfident—in the diagnosis of heart failure and would not consider alternatives. Overconfidence bias has been well described and occurs when a medical provider believes too strongly in his or her ability to be correct and therefore fails to consider alternative diagnoses.24

Despite succumbing to overconfidence bias, the attending physician was able to overcome base-rate neglect, ie, failure to consider the prevalence of potential diagnoses in diagnostic reasoning.

Table 1. Definitions and representative examples of cognitive biases in the case

Each of these biases, and others not mentioned, can lead to premature closure, which is the unfortunate root cause of many diagnostic errors and delays. We have illustrated several biases in our case scenario that led several physicians on the medical team to prematurely “close” on the diagnosis of heart failure (Table 1).

CASE CONTINUED: SURPRISES AND REASSESSMENT

On hospital day 4, the patient’s medication lists from her previous hospitalizations arrive, and the team is surprised to discover that she has been receiving infliximab for the past 3 to 4 months for her rheumatoid arthritis.

Additionally, an echocardiogram that was ordered on hospital day 1 but was lost in the cardiologist’s reading queue comes in and shows a normal ejection fraction with no evidence of elevated filling pressures.

Computed tomography of the chest reveals a reticular pattern with innumerable, tiny, 1- to 2-mm pulmonary nodules. The differential diagnosis is expanded to include hypersensitivity pneumonitis, lymphoma, fungal infection, and miliary tuberculosis.

How do faulty systems contribute to diagnostic error?

It is increasingly recognized that diagnostic errors can occur as a result of cognitive error, systems-based error, or, quite commonly, both. Graber et al17 analyzed 100 cases of diagnostic error and determined that while cognitive errors occurred in most of them, nearly half the time both cognitive and systems-based errors contributed simultaneously. Observers have further delineated the importance of the systems context and how it affects our thinking.25

In this case, the language barrier, lack of availability of family, and inability to promptly utilize interpreter services contributed to early problems in acquiring a detailed history and a complete medication list that included the immunosuppressant infliximab. Later, a systems error led to a delay in the interpretation of an echocardiogram. Each of these factors, if prevented, would have presumably resulted in expansion of the differential diagnosis and earlier arrival at the correct diagnosis.

CASE CONTINUED: THE PATIENT DIES OF TUBERCULOSIS

The patient is moved to a negative pressure room, and the pulmonary consultants recommend bronchoscopy. During the procedure, the patient suffers acute respiratory failure, is intubated, and is transferred to the medical intensive care unit, where a saddle pulmonary embolism is diagnosed by computed tomographic angiography.

One day later, the sputum culture from the bronchoscopy returns as positive for acid-fast bacilli. A four-drug regimen for tuberculosis is started. The patient continues to have a downward course and expires 2 weeks later. Autopsy reveals miliary tuberculosis.

What is the frequency of diagnostic error in medicine?

Diagnostic error is estimated to have a frequency of 10% to 20%.24 Rates of diagnostic error are similar irrespective of method of determination, eg, from autopsy,3 standardized patients (ie, actors presenting with scripted scenarios),26 or case reviews.27 Patient surveys report patient-perceived harm from diagnostic error at a rate of 35% to 42%.28,29 The landmark Harvard Medical Practice Study found that 17% of all adverse events were attributable to diagnostic error.30

Diagnostic error is the most common type of medical error in nonprocedural medical fields.31 It causes a disproportionately large amount of morbidity and death.

Diagnostic error is the most common cause of malpractice claims in the United States. In an analysis of paid malpractice claims in both inpatient and outpatient settings, for both medical and surgical patients, it accounted for 45.9% of all outpatient claims in 2009, making it the most common reason for medical malpractice litigation.32 A 2013 study indicated that diagnostic error is more common, more expensive, and two times more likely to result in death than any other category of error.33

CASE CONTINUED: MORBIDITY AND MORTALITY CONFERENCE

The patient’s case is brought to a morbidity and mortality conference for discussion. The systems issues in the case—including medication reconciliation, availability of interpreters, and timing and process of echocardiogram readings—are all discussed, but clinical reasoning and cognitive errors made in the case are avoided.

Why are cognitive errors often neglected in discussions of medical error?

Historically, openly discussing error in medicine has been difficult. Over the past decade, however, and fueled by the landmark Institute of Medicine report To Err is Human,34 the healthcare community has made substantial strides in identifying and talking about systems factors as a cause of preventable medical error.34,35

While systems contributions to medical error are inherently “external” to physicians and other healthcare providers, the cognitive contributions to error are inherently “internal” and are often considered personal. This has led to diagnostic error being kept out of many patient safety conversations. Further, while the solutions to systems errors are often tangible, such as implementing a fall prevention program or changing the physical packaging of a medication to reduce a medication dispensing or administration error, solutions to cognitive errors are generally considered more challenging to address by organizations trying to improve patient safety.

How can hospitals and department leaders do better?

Healthcare organizations and leaders of clinical teams or departments can implement several strategies.36

First, they can seek out and analyze the causes of diagnostic errors that are occurring locally in their institution and learn from their diagnostic errors, such as the one in our clinical scenario.

Trainees, physicians, and nurses should be comfortable questioning each other

Second, they can promote a culture of open communication and questioning around diagnosis. Trainees, physicians, and nurses should be comfortable questioning each other, including those higher up in the hierarchy, by saying, “I’m not sure” or “What else could this be?” to help reduce cognitive bias and expand the diagnostic possibilities.

Similarly, developing strategies to promote feedback on diagnosis among physicians will allow us all to learn from our diagnostic mistakes.

Use of the electronic medical record to assist in follow-up of pending diagnostic studies and patient return visits is yet another strategy.

Finally, healthcare organizations can adopt strategies to promote patient involvement in diagnosis, such as providing patients with copies of their test results and discharge summaries, encouraging the use of electronic patient communication portals, and empowering patients to ask questions related to their diagnosis. Prioritizing potential solutions to reduce diagnostic errors may be helpful in situations, depending on the context and environment, in which all proposed interventions may not be possible.

CASE CONTINUED: LEARNING FROM MISTAKES

The attending physician and resident in the case meet after the conference to review their clinical decision-making. Both are interested in learning from this case and improving their diagnostic skills in the future.

What specific steps can clinicians take to mitigate cognitive bias in daily practice?

In addition to continuing to expand one’s medical knowledge and gain more clinical experience, we can suggest several small steps to busy clinicians, taken individually or in combination with others that may improve diagnostic skills by reducing the potential for biased thinking in clinical practice.

Approaches to decision-making
From Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ 2009; 14:27–35. With kind permission from Springer Science and Business Media.
Figure 1. Approaches to decision-making can be located along a continuum, with unconscious, intuitive ones clustering at one end and deliberate, analytical ones at the other.

Think about your thinking. Our first recommendation would be to become more familiar with the dual process theory of clinical cognition (Figure 1).37,38 This theoretical framework may be very helpful as a foundation from which to build better thinking skills. Physicians, especially residents, and students can be taught these concepts and their potential to contribute to diagnostic errors, and can use these skills to recognize those contributions in others’ diagnostic practices and even in their own.39

Facilitating metacognition, or “thinking about one’s thinking,” may help clinicians catch themselves in thinking traps and provide the opportunity to reflect on biases retrospectively, as a double check or an opportunity to learn from a mistake.

Recognize your emotions. Gaining an understanding of the effect of one’s emotions on decision-making also can help clinicians free themselves of bias. As human beings, healthcare professionals are  susceptible to emotion, and the best approach to mitigate the emotional influences may be to consciously name them and adjust for them.40

Because it is impractical to apply slow, analytical system 2 approaches to every case, skills that hone and develop more accurate, reliable system 1 thinking are crucial. Gaining broad exposure to increased numbers of cases may be the most reliable way to build an experiential repertoire of “illness scripts,” but there are ways to increase the experiential value of any case with a few techniques that have potential to promote better intuition.41

Embracing uncertainty in the early diagnostic process and envisioning the worst-case scenario in a case allows the consideration of additional diagnostic paths outside of the current working diagnosis, potentially priming the clinician to look for and recognize early warning signs that could argue against the initial diagnosis at a time when an adjustment could be made to prevent a bad outcome.

Practice progressive problem-solving,42 a technique in which the physician creates additional challenges to increase the cognitive burden of a “routine” case in an effort to train his or her mind and sharpen intuition. An example of this practice is contemplating a backup treatment plan in advance in the event of a poor response to or an adverse effect of treatment. Highly rated physicians and teachers perform this regularly.43,44 Other ways to maximize the learning value of an individual case include seeking feedback on patient outcomes, especially when a patient has been discharged or transferred to another provider’s care, or when the physician goes off service.

Simulation, traditionally used for procedural training, has potential as well. Cognitive simulation, such as case reports or virtual patient modules, have potential to enhance clinical reasoning skills as well, though possibly at greater cost of time and expense.

Decreased reliance on memory is likely to improve diagnostic reasoning. Systems tools such as checklists45 and health information technology46 have potential to reduce diagnostic errors, not by taking thinking away from the clinician but by relieving the cognitive load enough to facilitate greater effort toward reasoning.

Slow down. Finally, and perhaps most important, recent models of clinical expertise have suggested that mastery comes from having a robust intuitive method, with a sense of the limitations of the intuitive approach, an ability to recognize the need to perform more analytical reasoning in select cases, and the willingness to do so. In short, it may well be that the hallmark of a master clinician is the propensity to slow down when necessary.47

A ‘diagnostic time-out’ for safety might catch opportunities to recognize and mitigate biases and errors

If one considers diagnosis a cognitive procedure, perhaps a brief “diagnostic time-out” for safety might afford an opportunity to recognize and mitigate biases and errors. There are likely many potential scripts for a good diagnostic time-out, but to be functional it should be brief and simple to facilitate consistent use. We have recommended the following four questions to our residents as a starting point, any of which could signal the need to switch to a slower, analytic approach.

Four-step diagnostic time-out

  • What else can it be?
  • Is there anything about the case that does not fit?
  • Is it possible that multiple processes are going on?
  • Do I need to slow down?

These questions can serve as a double check for an intuitively formed initial working diagnosis, incorporating many of the principles discussed above, in a way that would hopefully avoid undue burden on a busy clinician. These techniques, it must be acknowledged, have not yet been directly tied to reductions in diagnostic errors. However, diagnostic errors, as discussed, are very difficult to identify and study, and these techniques will serve mainly to improve habits that are likely to show benefits over much longer time periods than most studies can measure.

An elderly Spanish-speaking woman with morbid obesity, diabetes, hypertension, and rheumatoid arthritis presents to the emergency department with worsening shortness of breath and cough. She speaks only Spanish, so her son provides the history without the aid of an interpreter.

Her shortness of breath is most noticeable with exertion and has increased gradually over the past 2 months. She has a nonproductive cough. Her son has noticed decreased oral intake and weight loss over the past few weeks. She has neither traveled recently nor been in contact with anyone known to have an infectious disease.

A review of systems is otherwise negative: specifically, she denies chest pain, fevers, or chills. She saw her primary care physician 3 weeks ago for these complaints and was prescribed a 3-day course of azithromycin with no improvement.

Her medications include lisinopril, atenolol, glipizide, and metformin; her son believes she may be taking others as well but is not sure. He is also unsure of what treatment his mother has received for her rheumatoid arthritis, and most of her medical records are within another health system.

On physical examination, the patient is coughing and appears ill. Her temperature is 99.9°F (37.7°C), heart rate 105 beats per minute, blood pressure 140/70 mm Hg, respiratory rate 24 per minute, and oxygen saturation by pulse oximetry 89% on room air. Heart sounds are normal, jugular venous pressure cannot be assessed because of her obese body habitus, pulmonary examination demonstrates crackles in all lung fields, and lower-extremity edema is not present. Her extremities are warm and well perfused. Musculoskeletal examination reveals deformities of the joints in both hands consistent with rheumatoid arthritis.

Laboratory data:

  • White blood cell count 13.0 × 109/L (reference range 3.7–11.0)
  • Hemoglobin level 10 g/dL (11.5–15)
  • Serum creatinine 1.0 mg/dL (0.7–1.4)
  • Pro-brain-type natriuretic peptide (pro-BNP) level greater than the upper limit of normal.

A chest radiograph is obtained, and the resident radiologist’s preliminary impression is that it is consistent with pulmonary vascular congestion.

The patient is admitted for further diagnostic evaluation. The emergency department resident orders intravenous furosemide and signs out to the night float medicine resident that this is an “elderly woman with hypertension, diabetes, and heart failure being admitted for a heart failure exacerbation.”

What is the accuracy of a physician’s initial working diagnosis?

Diagnostic accuracy requires both clinical knowledge and problem-solving skills.1

A decade ago, a National Patient Safety Foundation survey2 found that one in six patients had suffered a medical error related to misdiagnosis. In a large systematic review of autopsy-based diagnostic errors, the theorized rate of major errors ranged from 8.4% to as high as 24.4%.3 A study by Neale et al4 found that admitting diagnoses were incorrect in 6% of cases. In emergency departments, inaccuracy rates of up to 12% have been described.5

What factors influence the prevalence of diagnostic errors?

Initial empiric treatments, such as intravenous furosemide in the above scenario, add to the challenge of diagnosis in acute care settings and can influence clinical decisions made by subsequent providers.6

Nonspecific or vague symptoms make diagnosis especially challenging. Shortness of breath, for example, is a common chief complaint in medical patients, as in this case. Green et al7 found emergency department physicians reported clinical uncertainty for a diagnosis of heart failure in 31% of patients evaluated for “dyspnea.” Pulmonary embolism and pulmonary tuberculosis are also in the differential diagnosis for our patient, with studies reporting a misdiagnosis rate of 55% for pulmonary embolism8 and 50% for pulmonary tuberculosis.9

Hertwig et al,10 describing the diagnostic process in patients presenting to emergency departments with a nonspecific constellation of symptoms, found particularly low rates of agreement between the initial diagnostic impression and the final, correct one. In fact, the actual diagnosis was among the physician’s initial “top three” differential diagnoses only 29% to 83% of the time.

Atypical presentations of common diseases, initial nonspecific presentations of common diseases, and confounding comorbid conditions have also been associated with misdiagnosis.11 Our case scenario illustrates the frequent challenges physicians face when diagnosing patients who present with nonspecific symptoms and signs on a background of multiple, chronic comorbidities.

Contextual factors in the system and environment contribute to the potential for error.12 Examples include frequent interruptions, time pressure, poor handoffs, insufficient data, and multitasking.

In our scenario, incomplete data, time constraints, and multitasking in a busy work environment compelled the emergency department resident to rapidly synthesize information to establish a working diagnosis. Interpretations of radiographs by on-call radiology residents are similarly at risk of diagnostic error for the same reasons.13

Physician factors also influence diagnosis. Interestingly, physician certainty or uncertainty at the time of initial diagnosis does not uniformly appear to correlate with diagnostic accuracy. A recent study showed that physician confidence remained high regardless of the degree of difficulty in a given case, and degree of confidence also correlated poorly with whether the physician’s diagnosis was accurate.14

For patients admitted with a chief complaint of dyspnea, as in our scenario, Zwaan et al15 showed that “inappropriate selectivity” in reasoning contributed to an inaccurate diagnosis 23% of the time. Inappropriate selectivity, as defined by these authors, occurs when a probable diagnosis is not sufficiently considered and therefore is neither confirmed nor ruled out.

In our patient scenario, the failure to consider diagnoses other than heart failure and the inability to confirm a prior diagnosis of heart failure in the emergency department may have contributed to a diagnostic error.


CASE CONTINUED: NO IMPROVEMENT OVER 3 DAYS

The night float resident, who has six other admissions this night, cannot ask the resident who evaluated this patient in the emergency department for further information because the shift has ended. The patient’s son left at the time of admission and is not available when the patient arrives on the medical ward.

The night float resident quickly examines the patient, enters admission orders, and signs the patient out to the intern and resident who will be caring for her during her hospitalization. The verbal handoff notes that the history was limited by a language barrier. The initial problem list includes heart failure without a differential diagnosis and notes that an elevated pro-BNP and the chest radiograph confirm heart failure as the likely diagnosis.

Several hours after the night float resident has left, the resident presents this history to the attending physician, and together they decide to continue her regular at-home medications and to order deep vein thrombosis prophylaxis and echocardiography. In writing the orders, subcutaneous heparin once daily is erroneously entered instead of low-molecular-weight heparin daily, as this is the default in the medical record system. The tired resident fails to recognize this, and the pharmacist does not question it.

Over the next 2 days, the patient’s cough and shortness of breath persist.

On hospital day 3, two junior residents on the team (who finished their internship 2 weeks ago) review the attending radiologist’s interpretation of the chest radiograph. The final report, which was not flagged as discrepant, confirms the preliminary interpretation but notes ill-defined, scattered, faint opacities. The residents believe that an interstitial pattern may be present and suggest that the patient may not have heart failure but rather a primary pulmonary disease. They bring this to the attention of their attending physician, who dismisses their concerns and comments that heart failure is a clinical diagnosis. The residents do not bring this idea up again to the attending physician.

That night, the float team is called by the nursing staff because of worsening oxygenation and cough. They add an intravenous corticosteroid, a broad-spectrum antibiotic, and an inhaled bronchodilator to the patient’s drug regimen.

How do cognitive errors predispose physicians to diagnostic errors?

When errors in diagnosis are reviewed retrospectively, cognitive or “thinking” errors are generally found, especially in nonprocedural or primary care specialties such as internal medicine, pediatrics, and emergency medicine.16,17

A widely accepted theory of how humans make decisions was described by the psychologists Tversky and Kahneman in 1974,18 and it has more recently been applied to physicians’ diagnostic processes.19 Dual-process theory states that persons with a requisite level of expertise use either the intuitive “system 1” process of thinking, based on pattern recognition and heuristics, or the slower, more analytical “system 2” process.20 Experts disagree as to whether in medicine these processes represent a binary either-or model or a continuum,21 with the relative contribution of each process determined by the physician and the task.

What are some common types of cognitive error?

Experts agree that many diagnostic errors in medicine stem from decisions arrived at by inappropriate system 1 thinking due to biases. These biases have been identified and described as they relate to medicine, most notably by Croskerry.22

Several cognitive biases are illustrated in our clinical scenario:

The framing effect occurred when the emergency department resident listed the patient’s admitting diagnosis as heart failure during the clinical handoff of care.

Anchoring bias, as defined by Croskerry,22 is the tendency to lock onto salient features of the case too early in the diagnostic process and then to fail to adjust this initial diagnostic impression. This bias affected the admitting night float resident, primary intern, resident, and attending physician.

Diagnostic momentum, in turn, is a well-described phenomenon that clinical providers are especially vulnerable to in today’s environment of “copy-and-paste” medical records and numerous handovers of care as a consequence of residency duty-hour restrictions.23

Availability bias refers to the tendency to favor diagnoses that are commonly seen (such as heart failure) or recently seen, because they are more “available” to the human memory. Diagnoses that spring to mind quickly often trick providers into thinking that, because they are more easily recalled, they are also more common or more likely.

Confirmation bias. The initial working diagnosis of heart failure may have led the medical team to place greater emphasis on the elevated pro-BNP and the chest radiograph to support the initial impression while ignoring findings such as weight loss that do not support this impression.

Blind obedience. Although the residents recognized the possibility of a primary pulmonary disease, they did not investigate it further. When the attending physician dismissed their suggestion, they deferred to the person in authority or with a reputation of expertise.

Overconfidence bias. Despite minimal improvement in the patient’s clinical status after effective diuresis and the suggestion of alternative diagnoses by the residents, the attending physician remained confident—perhaps overconfident—in the diagnosis of heart failure and would not consider alternatives. Overconfidence bias has been well described and occurs when a medical provider believes too strongly in his or her ability to be correct and therefore fails to consider alternative diagnoses.24

Despite succumbing to overconfidence bias, the attending physician was able to overcome base-rate neglect, ie, failure to consider the prevalence of potential diagnoses in diagnostic reasoning.

Table 1. Definitions and representative examples of cognitive biases in the case

Each of these biases, and others not mentioned, can lead to premature closure, which is the unfortunate root cause of many diagnostic errors and delays. We have illustrated several biases in our case scenario that led several physicians on the medical team to prematurely “close” on the diagnosis of heart failure (Table 1).
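Base-rate reasoning can be made concrete with likelihood ratios. The sketch below is purely illustrative (the prevalence and likelihood-ratio values are hypothetical, not drawn from the case or the cited studies); it shows how a pretest probability anchored to the base rate is updated into a post-test probability:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Update a pretest probability with a test's likelihood ratio.

    Works through odds: odds = p / (1 - p), post-odds = pre-odds * LR,
    then converts back to a probability.
    """
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical numbers: a disease with a 2% base rate and a test
# with a positive likelihood ratio of 10.
print(round(post_test_probability(0.02, 10), 3))  # prints 0.169
```

Even a strongly positive test leaves the post-test probability below 20% here, which is the point of attending to base rates: a positive result for a rare disease is often still more likely to be a false positive than a true one.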

CASE CONTINUED: SURPRISES AND REASSESSMENT

On hospital day 4, the patient’s medication lists from her previous hospitalizations arrive, and the team is surprised to discover that she has been receiving infliximab for the past 3 to 4 months for her rheumatoid arthritis.

Additionally, an echocardiogram that was ordered on hospital day 1 but was lost in the cardiologist’s reading queue comes in and shows a normal ejection fraction with no evidence of elevated filling pressures.

Computed tomography of the chest reveals a reticular pattern with innumerable, tiny, 1- to 2-mm pulmonary nodules. The differential diagnosis is expanded to include hypersensitivity pneumonitis, lymphoma, fungal infection, and miliary tuberculosis.

How do faulty systems contribute to diagnostic error?

It is increasingly recognized that diagnostic errors can occur as a result of cognitive error, systems-based error, or, quite commonly, both. Graber et al17 analyzed 100 cases of diagnostic error and determined that while cognitive errors occurred in most of them, nearly half the time cognitive and systems-based errors contributed simultaneously. Observers have further delineated the importance of the systems context and how it affects our thinking.25

In this case, the language barrier, lack of availability of family, and inability to promptly utilize interpreter services contributed to early problems in acquiring a detailed history and a complete medication list that included the immunosuppressant infliximab. Later, a systems error led to a delay in the interpretation of an echocardiogram. Each of these factors, if prevented, would have presumably resulted in expansion of the differential diagnosis and earlier arrival at the correct diagnosis.

CASE CONTINUED: THE PATIENT DIES OF TUBERCULOSIS

The patient is moved to a negative pressure room, and the pulmonary consultants recommend bronchoscopy. During the procedure, the patient suffers acute respiratory failure, is intubated, and is transferred to the medical intensive care unit, where a saddle pulmonary embolism is diagnosed by computed tomographic angiography.

One day later, the sputum culture from the bronchoscopy returns as positive for acid-fast bacilli. A four-drug regimen for tuberculosis is started. The patient continues to have a downward course and expires 2 weeks later. Autopsy reveals miliary tuberculosis.

What is the frequency of diagnostic error in medicine?

Diagnostic error is estimated to have a frequency of 10% to 20%.24 Rates of diagnostic error are similar irrespective of method of determination, eg, from autopsy,3 standardized patients (ie, actors presenting with scripted scenarios),26 or case reviews.27 Patient surveys report patient-perceived harm from diagnostic error at a rate of 35% to 42%.28,29 The landmark Harvard Medical Practice Study found that 17% of all adverse events were attributable to diagnostic error.30

Diagnostic error is the most common type of medical error in nonprocedural medical fields.31 It causes a disproportionately large amount of morbidity and death.

Diagnostic error is the most common cause of malpractice claims in the United States, in inpatient and outpatient settings and for both medical and surgical patients; in 2009 it accounted for 45.9% of all outpatient malpractice claims.32 A 2013 study indicated that diagnostic error is more common, more expensive, and twice as likely to result in death than any other category of error.33


CASE CONTINUED: MORBIDITY AND MORTALITY CONFERENCE

The patient’s case is brought to a morbidity and mortality conference for discussion. The systems issues in the case—including medication reconciliation, availability of interpreters, and timing and process of echocardiogram readings—are all discussed, but clinical reasoning and cognitive errors made in the case are avoided.

Why are cognitive errors often neglected in discussions of medical error?

Historically, openly discussing error in medicine has been difficult. Over the past decade, however, and fueled by the landmark Institute of Medicine report To Err is Human,34 the healthcare community has made substantial strides in identifying and talking about systems factors as a cause of preventable medical error.34,35

While systems contributions to medical error are inherently “external” to physicians and other healthcare providers, the cognitive contributions to error are inherently “internal” and are often considered personal. This has led to diagnostic error being kept out of many patient safety conversations. Further, while the solutions to systems errors are often tangible, such as implementing a fall prevention program or changing the physical packaging of a medication to reduce a medication dispensing or administration error, solutions to cognitive errors are generally considered more challenging to address by organizations trying to improve patient safety.

How can hospitals and department leaders do better?

Healthcare organizations and leaders of clinical teams or departments can implement several strategies.36

First, they can seek out and analyze the causes of diagnostic errors that are occurring locally in their institution and learn from their diagnostic errors, such as the one in our clinical scenario.

Second, they can promote a culture of open communication and questioning around diagnosis. Trainees, physicians, and nurses should be comfortable questioning each other, including those higher up in the hierarchy, by saying, “I’m not sure” or “What else could this be?” to help reduce cognitive bias and expand the diagnostic possibilities.

Third, developing strategies to promote feedback on diagnosis among physicians will allow all of us to learn from our diagnostic mistakes.

Fourth, the electronic medical record can be used to assist in follow-up of pending diagnostic studies and patient return visits.

Finally, healthcare organizations can adopt strategies to promote patient involvement in diagnosis, such as providing patients with copies of their test results and discharge summaries, encouraging the use of electronic patient communication portals, and empowering patients to ask questions related to their diagnosis. When the context and environment make it impossible to implement every proposed intervention, prioritizing the potential solutions to reduce diagnostic error may be helpful.

CASE CONTINUED: LEARNING FROM MISTAKES

The attending physician and resident in the case meet after the conference to review their clinical decision-making. Both are interested in learning from this case and improving their diagnostic skills in the future.

What specific steps can clinicians take to mitigate cognitive bias in daily practice?

In addition to continuing to expand one’s medical knowledge and gain clinical experience, we suggest several small steps that busy clinicians can take, individually or in combination, to improve diagnostic skills by reducing the potential for biased thinking in clinical practice.

Figure 1. Approaches to decision-making can be located along a continuum, with unconscious, intuitive approaches clustering at one end and deliberate, analytical ones at the other.
From Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ 2009; 14:27–35. With kind permission from Springer Science and Business Media.

Think about your thinking. Our first recommendation is to become more familiar with the dual process theory of clinical cognition (Figure 1).37,38 This theoretical framework can serve as a foundation from which to build better thinking skills. Physicians, residents, and students can be taught these concepts and how cognitive processes contribute to diagnostic errors, and can use these skills to recognize those contributions in others’ diagnostic practices and even in their own.39

Facilitating metacognition, or “thinking about one’s thinking,” may help clinicians catch themselves in thinking traps and provide the opportunity to reflect on biases retrospectively, as a double check or an opportunity to learn from a mistake.

Recognize your emotions. Gaining an understanding of the effect of one’s emotions on decision-making also can help clinicians free themselves of bias. As human beings, healthcare professionals are susceptible to emotion, and the best approach to mitigating emotional influences may be to consciously name them and adjust for them.40

Because it is impractical to apply slow, analytical system 2 approaches to every case, skills that hone and develop more accurate, reliable system 1 thinking are crucial. Gaining broad exposure to increased numbers of cases may be the most reliable way to build an experiential repertoire of “illness scripts,” but there are ways to increase the experiential value of any case with a few techniques that have potential to promote better intuition.41

Embrace uncertainty. Acknowledging uncertainty early in the diagnostic process and envisioning the worst-case scenario allow the clinician to consider diagnostic paths outside the current working diagnosis. This can prime the clinician to look for and recognize early warning signs that argue against the initial diagnosis, at a time when an adjustment can still be made to prevent a bad outcome.

Practice progressive problem-solving,42 a technique in which the physician creates additional challenges to increase the cognitive burden of a “routine” case in an effort to train his or her mind and sharpen intuition. An example of this practice is contemplating a backup treatment plan in advance in the event of a poor response to or an adverse effect of treatment. Highly rated physicians and teachers perform this regularly.43,44 Other ways to maximize the learning value of an individual case include seeking feedback on patient outcomes, especially when a patient has been discharged or transferred to another provider’s care, or when the physician goes off service.

Simulation, traditionally used for procedural training, has potential as well. Cognitive simulation, such as case reports or virtual patient modules, may enhance clinical reasoning skills, though possibly at a greater cost in time and expense.

Decreased reliance on memory is likely to improve diagnostic reasoning. Systems tools such as checklists45 and health information technology46 have potential to reduce diagnostic errors, not by taking thinking away from the clinician but by relieving the cognitive load enough to facilitate greater effort toward reasoning.

Slow down. Finally, and perhaps most important, recent models of clinical expertise have suggested that mastery comes from having a robust intuitive method, with a sense of the limitations of the intuitive approach, an ability to recognize the need to perform more analytical reasoning in select cases, and the willingness to do so. In short, it may well be that the hallmark of a master clinician is the propensity to slow down when necessary.47

If one considers diagnosis a cognitive procedure, perhaps a brief “diagnostic time-out” for safety might afford an opportunity to recognize and mitigate biases and errors. There are likely many potential scripts for a good diagnostic time-out, but to be functional it should be brief and simple to facilitate consistent use. We have recommended the following four questions to our residents as a starting point, any of which could signal the need to switch to a slower, analytic approach.

Four-step diagnostic time-out

  • What else can it be?
  • Is there anything about the case that does not fit?
  • Is it possible that multiple processes are going on?
  • Do I need to slow down?

These questions can serve as a double check of an intuitively formed initial working diagnosis, incorporating many of the principles discussed above without placing undue burden on a busy clinician. These techniques, it must be acknowledged, have not yet been directly tied to reductions in diagnostic errors. However, diagnostic errors, as discussed, are very difficult to identify and study, and these techniques serve mainly to build habits whose benefits are likely to accrue over much longer time periods than most studies can measure.

References
  1. Kassirer JP. Diagnostic reasoning. Ann Intern Med 1989; 110:893–900.
  2. Golodner L. How the public perceives patient safety. Newsletter of the National Patient Safety Foundation 2004; 1997:1–6.
  3. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003; 289:2849–2856.
  4. Neale G, Woloshynowych M, Vincent C. Exploring the causes of adverse events in NHS hospital practice. J R Soc Med 2001; 94:322–330.
  5. Chellis M, Olson J, Augustine J, Hamilton G. Evaluation of missed diagnoses for patients admitted from the emergency department. Acad Emerg Med 2001; 8:125–130.
  6. Tallentire VR, Smith SE, Skinner J, Cameron HS. Exploring error in team-based acute care scenarios: an observational study from the United Kingdom. Acad Med 2012; 87:792–798.
  7. Green SM, Martinez-Rumayor A, Gregory SA, et al. Clinical uncertainty, diagnostic accuracy, and outcomes in emergency department patients presenting with dyspnea. Arch Intern Med 2008; 168:741–748.
  8. Pineda LA, Hathwar VS, Grant BJ. Clinical suspicion of fatal pulmonary embolism. Chest 2001; 120:791–795.
  9. Shojania KG, Burton EC, McDonald KM, Goldman L. The autopsy as an outcome and performance measure. Evid Rep Technol Assess (Summ) 2002; 58:1–5.
  10. Hertwig R, Meier N, Nickel C, et al. Correlates of diagnostic accuracy in patients with nonspecific complaints. Med Decis Making 2013; 33:533–543.
  11. Kostopoulou O, Delaney BC, Munro CW. Diagnostic difficulty and error in primary care—a systematic review. Fam Pract 2008; 25:400–413.
  12. Ogdie AR, Reilly JB, Pang WG, et al. Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med 2012; 87:1361–1367.
  13. Feldmann EJ, Jain VR, Rakoff S, Haramati LB. Radiology residents’ on-call interpretation of chest radiographs for congestive heart failure. Acad Radiol 2007; 14:1264–1270.
  14. Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013; 173:1952–1958.
  15. Zwaan L, Thijs A, Wagner C, Timmermans DR. Does inappropriate selectivity in information use relate to diagnostic errors and patient harm? The diagnosis of patients with dyspnea. Soc Sci Med 2013; 91:32–38.
  16. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009; 169:1881–1887.
  17. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005; 165:1493–1499.
  18. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974; 185:1124–1131.
  19. Kahneman D. Thinking, fast and slow. New York, NY: Farrar, Straus, and Giroux; 2011.
  20. Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009; 84:1022–1028.
  21. Custers EJ. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning. Acad Med 2013; 88:1074–1080.
  22. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003; 78:775–780.
  23. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA 2006; 295:2335–2336.
  24. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(suppl 5):S2–S23.
  25. Henriksen K, Brady J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013; 22(suppl 2):ii1–ii5.
  26. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care 2004; 42:1066–1072.
  27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012; 21:737–745.
  28. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med 2002; 347:1933–1940.
  29. Burroughs TE, Waterman AD, Gallagher TH, et al. Patient concerns about medical errors in emergency departments. Acad Emerg Med 2005; 12:57–64.
  30. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324:377–384.
  31. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000; 38:261–271.
  32. Bishop TF, Ryan AM, Casalino LP. Paid malpractice claims for adverse events in inpatient and outpatient settings. JAMA 2011; 305:2427–2431.
  33. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the national practitioner data bank. BMJ Qual Saf 2013; 22:672–680.
  34. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, DC: The National Academies Press; 2000.
  35. Singh H. Diagnostic errors: moving beyond ‘no respect’ and getting ready for prime time. BMJ Qual Saf 2013; 22:789–792.
  36. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014; 40:102–110.
  37. Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):27–35.
  38. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):37–49.
  39. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf 2013; 22:1044–1050.
  40. Croskerry P, Abbass A, Wu AW. Emotional influences in patient safety. J Patient Saf 2010; 6:199–205.
  41. Rajkomar A, Dhaliwal G. Improving diagnostic reasoning to improve patient safety. Perm J 2011; 15:68–73.
  42. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf 2013; 22(suppl 2):ii28–ii32.
  43. Sargeant J, Mann K, Sinclair D, et al. Learning in practice: experiences and perceptions of high-scoring physicians. Acad Med 2006; 81:655–660.
  44. Mylopoulos M, Lohfeld L, Norman GR, Dhaliwal G, Eva KW. Renowned physicians' perceptions of expert diagnostic practice. Acad Med 2012; 87:1413–1417.
  45. Sibbald M, de Bruin AB, van Merrienboer JJ. Checklists improve experts' diagnostic decisions. Med Educ 2013; 47:301–308.
  46. El-Kareh R, Hasan O, Schiff GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013; 22(suppl 2):ii40–ii51.
  47. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med 2007; 82(suppl 10):S109–S116.
Issue
Cleveland Clinic Journal of Medicine - 82(11)
Page Number
745-753
Display Headline
An elderly woman with ‘heart failure’: Cognitive biases and diagnostic error
Legacy Keywords
Cognitive bias, diagnostic error, medical error, misdiagnosis, heart failure, tuberculosis, Nikhil Mull, James Reilly, Jennifer Myers

KEY POINTS

  • Diagnostic errors are common and lead to bad outcomes.
  • Factors that increase the risk of diagnostic error include initial empiric treatment, nonspecific or vague symptoms, atypical presentation, confounding comorbid conditions, contextual factors, and physician factors.
  • Common types of cognitive error include the framing effect, anchoring bias, diagnostic momentum, availability bias, confirmation bias, blind obedience, overconfidence bias, base-rate neglect, and premature closure.
  • Organizations and leaders can implement strategies to reduce diagnostic errors.