Incorporating metacognition into morbidity and mortality rounds: The next frontier in quality improvement

A 71‐year‐old man with widely metastatic non–small cell lung cancer presented to the emergency department of a teaching hospital at 7 pm with a chief complaint of severe chest pain relieved by sitting upright and leaning forward. A senior cardiologist with expertise in echocardiography assessed the patient and performed a bedside echocardiogram. He found a large pericardial effusion but concluded there was no cardiac tamponade. Given the patient's other medical problems, he referred him to internal medicine for admission to their service. The attending internist agreed to admit the patient, suggesting close cardiac monitoring and reevaluation with a formal echocardiogram in the morning. At 9 am, the team and the cardiologist were urgently summoned to the echo lab by the technician, who now diagnosed tamponade. After looking at the images, the cardiologist disagreed with the technician's interpretation and declared that there was no sign of tamponade.

After leaving the echo lab, the attending internist led a team discussion on the phenomenon of, and reasons for, interobserver variation. The residents initially focused on the difference in expertise between the cardiologist and the technician. The attending, who felt this was unlikely because the technician was very experienced, introduced the possibility of a cognitive misstep. Having staked out an opinion on the lack of tamponade the night before and acted on that interpretation by declining admission to his own service, the cardiologist was susceptible to anchoring bias, in which adjustments to a preliminary diagnosis are insufficient because of the influence of the initial interpretation.[1] The following day, the cardiologist performed a pericardiocentesis and reported that the fluid came out under pressure. In the face of this definitive information, he concluded that his prior assessment was incorrect and that tamponade had been present from the start.

The origins of medical error reduction lie in the practice, spearheaded by Karl Rokitansky at the Vienna Medical School in the 1800s, of using autopsies to determine the cause of death.[2] Ernest Amory Codman expanded the effort by linking treatment decisions to subsequent outcomes through follow‐up of patients after hospital discharge.[3] The advent of modern imaging techniques, coupled with interventional methods of obtaining pathological specimens, has dramatically improved diagnostic accuracy over the past 40 years. As a result, the practice of using autopsies to improve clinical acumen and reduce diagnostic error has virtually disappeared, even as the focus on medical error has increased. The forum for reducing error shifted to morbidity and mortality rounds (MMRs), which have been relabeled quality‐improvement rounds in many hospitals.

In these regularly scheduled meetings, interprofessional clinicians discuss errors and adverse outcomes. Because deaths are rarely unexpected and often occur outside of the acute care setting, the focus is usually on errors in the execution of complex clinical plans that combine the wide array of modern laboratory, imaging, pharmaceutical, interventional, surgical, and pathological tools available to clinicians today. In the era of patient safety and quality improvement, errors are mostly blamed on systems‐based issues that lead to hospital complications, despite evidence that cognitive factors play a large role.[4] Systems‐based analysis was popularized by the landmark report of the Institute of Medicine.[5] In our local institutions (the University of Toronto teaching hospitals), improving diagnostic accuracy is almost never on the agenda. We suspect the same is true elsewhere. Common themes include mistakes in medication administration and dosing, communication, and physician handover. The Swiss cheese model[6] is often invoked to diffuse blame across a number of individuals, processes, and even machines. However, as Wachter and Pronovost point out, reengineering of systems has limited capacity for solving all safety and quality improvement issues when people are involved; human error can still sabotage the effort.[7]

Discussions centered on a physician's raw thinking ability have become a third rail, even though clinical reasoning lies at the core of patient safety. Human error is rarely discussed, in part because it is mistakenly believed to be uncommon and to result from deficits in knowledge or incompetence. Furthermore, the fear of assigning blame to individuals in front of their peers may be counterproductive, discouraging identification of future errors. However, the fields of cognitive psychology and medical decision making have clearly established that cognitive errors occur predictably and often, especially at times of high cognitive load (eg, when many high‐stakes, complex decisions need to be made in a short period of time). Errors do not usually result from a lack of knowledge (although they can), but rather because people rely on instincts that include common biases called heuristics.[8] Most of the time, heuristics are a helpful and necessary evolutionary adaptation of the human thought process, but by their inherent nature, they can lead to predictable and repeatable errors. Because the effects of cognitive biases are inherent to all decision makers, using this framework for discussing individual error may be a way of decreasing the second victim effect[9] and avoiding demoralization of the individual.

MMRs thus represent fertile ground for introducing cognitive psychology into medical education and quality improvement. The existing format is useful for teaching cognitive psychology because it is an open forum where discussions center on errors of omission and commission, many of which result from both systems issues and decision‐making heuristics. Several studies have attempted to describe methods for improving MMRs[10, 11, 12]; however, none have incorporated concepts from cognitive psychology. This type of analysis has penetrated several cases in the WebM&M series created by the Agency for Healthcare Research and Quality, which can be used as a model for hospital‐based MMRs.[13] For the vignette described above, an MMR that considers systems‐based approaches might discuss how a busy emergency room, limited capacity on the cardiology service, and closure of the echo lab at night played a role in this story. However, although it is difficult to replay another person's mental processing, ignoring the possibility that the cardiologist in this case may have fallen prey to a common cognitive error would be a missed opportunity to learn how frequently heuristics can be faulty. A cognitive approach applied to this example would explore explanations such as anchoring, ego, and hassle biases. Front‐line clinicians in busy hospital settings will recognize the interaction between workload pressures and cognitive mistakes common to examples like this one.

Cognitive heuristics should first be introduced to MMRs by experienced clinicians, well respected for their clinical acumen, who tell specific personal stories of heuristics that led to errors in their own practices and explain why the shortcut in thinking occurred. Thereafter, the traditional MMR format can be used: presenting a case, describing how an experienced clinician might manage the case, and then asking the audience members for comment. Incorporating discussions of cognitive missteps, in medical and nonmedical contexts, would help normalize the understanding that even the most experienced and smartest people fall prey to them. The tone must be positive.

Attendees could be encouraged to review their own thought processes through diagnostic verification for cases where their initial diagnosis was incorrect. This would involve assessment for adequacy, ensuring that potential diagnoses account for all abnormal and normal clinical findings, and for coherency, ensuring that the diagnoses are pathophysiologically consistent with all clinical findings. Another strategy may be to illustrate cognitive forcing strategies for particular biases.[14] For example, in the case of anchoring bias, trainees may be encouraged to replay the clinical scenario with a different priming stem and evaluate whether they would come to the same clinical conclusion. A challenge for all MMRs is how best to select cases; given the difficulties in replaying one's cognitive processes, this problem may be magnified. Potential selection methods could utilize anonymous reporting systems or patient complaints; however, the optimal strategy is yet to be determined.

Graber et al. have summarized the limited research on attempts to improve cognitive processes through educational interventions and illustrate its mixed results.[15] The most positive study was a randomized controlled trial using combined pattern recognition and deliberative reasoning to improve diagnostic accuracy in the face of biasing information.[16] Despite these positive results, others have suggested that cognitive biases, because of their subconscious nature, cannot be overcome through teaching.[17] They argue that training physicians to avoid heuristics will simply lead to overinvestigation. These polarizing views highlight the need for research to evaluate interventions like the cognitive autopsy suggested here.

Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition: cognition about cognition, or, more colloquially, thinking about thinking. Clinicians understand that bias can easily occur in research and accept mechanisms, such as double blinding of outcome assessments, that protect studies from these potential threats to validity. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analyzing these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and the renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences,[19] it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelmeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for their contributions.

Disclosure: Nothing to report.

References
  1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–1131.
  2. Nuland SB. Doctors: The Biography of Medicine. New York, NY: Vintage Books; 1995.
  3. Codman EA. The classic: a study in hospital efficiency: as demonstrated by the case report of first five years of a private hospital. Clin Orthop Relat Res. 2013;471(6):1778–1783.
  4. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493–1499.
  5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999.
  6. Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond B Biol Sci. 1990;327(1241):475–484.
  7. Wachter RM, Pronovost PJ. Balancing “no blame” with accountability in patient safety. N Engl J Med. 2009;361(14):1401–1406.
  8. Croskerry P. From mindless to mindful practice—cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445–2448.
  9. Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000;320(7237):726–727.
  10. Ksouri H, Balanant PY, Tadie JM, et al. Impact of morbidity and mortality conferences on analysis of mortality and critical events in intensive care practice. Am J Crit Care. 2010;19(2):135–145.
  11. Szekendi MK, Barnard C, Creamer J, Noskin GA. Using patient safety morbidity and mortality conferences to promote transparency and a culture of safety. Jt Comm J Qual Patient Saf. 2010;36(1):3–9.
  12. Calder LA, Kwok ESH, Adam Cwinn A, et al. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21(3):314–321.
  13. Agency for Healthcare Research and Quality. AHRQ WebM&M.
  14. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41(1):110–120.
  15. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535–557.
  16. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41(12):1152–1158.
  17. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94–100.
  18. Cain DM, Detsky AS. Everyone's a little bit biased (even physicians). JAMA. 2008;299(24):2893–2895.
  19. Balogh EP, Miller BT, Ball JR. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
© 2015 Society of Hospital Medicine
Address for correspondence: Dr. Allan Detsky, MD, Mount Sinai Hospital, Room 429, 600 University Ave., Toronto, Ontario M5G 1X5, Canada; Telephone: 416‐586‐8507; Fax: 416‐586‐8350; E‐mail: adetsky@mtsinai.on.ca