A Randomized Cohort Controlled Trial to Compare Intern Sign-Out Training Interventions

Sanjay V. Desai, MD
Department of Medicine, University of Maryland School of Medicine

Patient sign-outs are defined as the transition of patient care that includes the transfer of information, task accountability, and personal responsibility between providers.1-3 Mnemonics have been adopted as memory aids to improve the transfer of patient information between providers.4 In the transfer of task accountability, providers hand follow-up tasks to on-call or coverage providers and ensure that directives are understood. Joint task accountability is enhanced when providers collaboratively give and cross-check the information received, using assertive questioning to detect errors; this also enables the receiver to codevelop an understanding of a patient’s condition.5-8 In the transfer of personal responsibility for the primary team’s patients, anticipatory guidance gives the coverage provider prospective information about potential upcoming issues to facilitate care plans.6 Enabling coverage providers to anticipate overnight events helps them exercise responsibility for patients under their temporary care.2

The Accreditation Council for Graduate Medical Education requires residency programs to provide formal instruction on sign-outs.9 Yet variability across training programs exists,8,10 with training emphasizing the transfer of information over accountability or responsibility.11 Previous studies have demonstrated the efficacy of sign-out training bundles, such as the illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver (I-PASS) bundle.3 Yet uptake is far from universal because the I-PASS bundle requires in-person workshops, e-learning platforms, organizational change campaigns, and faculty participation,12 involving resource and time commitments that few programs can afford. To address this issue, we sought to compare 4 resource-efficient sign-out training pedagogies: knowledge-based, skill-based, compliance-based, and learner-initiated. We focused on the evening sign-out because it is a high-risk period when care for inpatients is transferred to smaller coverage intern teams.

METHODS

Setting and Study Design

A prospective, randomized cohort trial of 4 training interventions was conducted at an internal medicine residency program at a Mid-Atlantic, academic, tertiary-care hospital with 1192 inpatient beds. The 52 interns admitted to the program were randomly assigned to 4 firms, each caring for up to 25 inpatients on a floor of the hospital. The case mix faced by each firm was similar because patients were randomly assigned to firms based on bed availability. Teams of 5 interns in each firm worked in 5-day duty cycles, during which each intern rotated as the night cover for his or her firm. Interns remained in their firms throughout their residency. Sign-outs were conducted face to face at a computer: receivers printed sign-out sheets populated with patient information and took notes as senders communicated information from the computer. The hospital’s institutional review board approved this study.

Interventions

The firms were randomly assigned to 1 of 4 one-hour quality-improvement training interventions delivered at the same time and day in November 2014 at each firm’s office, located on different floors of the hospital. There was virtually no cross-talk among the firms in the first year, which ensured the integrity of the cohort randomization and interventions. Faculty from an affiliated business school of the academic center worked with attending physicians to train the firms.

All interventions took 1 hour at noontime. Firm 1 (the control) received a repeat of the didactic lecture on sign-out that participants had heard during orientation, which reinforced their knowledge of sign-outs. Firm 2 was trained on the I-PASS mnemonic, with its predictable progression of information elements to transfer.3,12 Interns role-played 3 scenarios to practice sign-out3 and received skills feedback and a debriefing to link I-PASS with the information elements to transfer. Firm 3 was given a policy mandate by the interns’ attending physician to perform specific tasks at sign-out: senders were to provide the night cover with to-do tasks, and receivers were to actively discuss and verify these tasks to ensure task accountability.13 Firm 4 was trained on a Plan-Do-Study-Act (PDSA) protocol to identify and solve perceived barriers to sign-outs. Firm 4 agreed to address the lack of care plans provided by the day team to the night cover; an ad hoc team in Firm 4 refined, pilot tested, and rolled out a solution within a month. Its protocol emphasized information on anticipated changes in patient status, contingency plans and their rationale, and discussions to clarify care plans. Details of the 4 interventions are shown in the Table.


Data Collection Process

Eight trained senior residents, recruited by the last author (S.V.D.), volunteered to observe 10 evening sign-outs in each firm 1 month prior to the intervention and another 10 nights 4 months after training. Observations were standardized with a sign-out checklist developed from a literature review and the Joint Commission’s 2006 National Patient Safety Goal 2E, which followed the Situation, Background, Assessment, and Recommendation communication structure with opportunities for questioning and information verification.14,15 Observers marked “1” for each of the 17 checklist sign-out elements they observed, as shown in the supporting Table. Observers did not have supervisory relationships with the interns. The composition of the observer pairs occasionally varied depending on availability.

Outcomes

We measured improvements in sign-out quality by the mean percentage differences for each of the 3 dimensions of sign-out, as well as for a multidimensional measure of sign-out comprising the 3 dimensions, for each firm in 2 ways: (1) pre- versus postintervention and (2) vis-à-vis the control group postintervention.
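To make these outcome measures concrete, the sketch below shows how per-dimension percentage scores and the 2 contrasts could be computed from binary checklist data. This is an illustration only, not the study’s analysis code; the grouping of checklist columns into dimensions, the simulated data, and all variable names are assumptions.

```python
import numpy as np

# Illustrative sketch only: each observed sign-out is a 0/1 vector over the 17
# checklist elements. The column-to-dimension grouping below is a placeholder,
# not the study's actual factor structure.
DIMENSIONS = {
    "patient_information": list(range(0, 7)),
    "task_accountability": list(range(7, 12)),
    "responsibility": list(range(12, 17)),
}

def dimension_scores(signouts: np.ndarray) -> dict:
    """Mean percentage of checklist elements observed, per dimension.

    `signouts` is an (n_signouts x 17) array of 0/1 checklist marks.
    """
    scores = {name: 100.0 * signouts[:, cols].mean()
              for name, cols in DIMENSIONS.items()}
    # Simple multidimensional summary: average of the 3 dimension scores.
    scores["multidimensional"] = float(np.mean(list(scores.values())))
    return scores

def improvements(firm_pre, firm_post, control_post):
    """The 2 contrasts used here: pre vs post, and post vs control-group post."""
    pre_s, post_s, ctrl_s = map(dimension_scores,
                                (firm_pre, firm_post, control_post))
    return {name: {"pre_vs_post": post_s[name] - pre_s[name],
                   "vs_control_post": post_s[name] - ctrl_s[name]}
            for name in post_s}

# Simulated stand-in data for one intervention firm and the control firm.
rng = np.random.default_rng(0)
print(improvements(rng.integers(0, 2, (140, 17)),
                   rng.integers(0, 2, (155, 17)),
                   rng.integers(0, 2, (150, 17))))
```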

Statistical Analysis

Using the Statistical Package for the Social Sciences (SPSS) version 24 (IBM, North Castle, NY), we factor analyzed the 17 sign-out elements with principal components analysis and varimax rotation to confirm their groupings within the 3 dimensions of sign-out. We calculated the mean percentage differences and used Student t tests to evaluate statistical differences at P < 0.05.
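The analysis was performed in SPSS; for readers who prefer an open-source equivalent, the following is a minimal sketch of the same steps in Python, using scikit-learn for the principal components, a standard varimax rotation implemented with NumPy, and SciPy for the t test. The binary matrix is simulated, and the choice of 3 components simply mirrors the 3 dimensions described above.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

def varimax(loadings: np.ndarray, gamma: float = 1.0,
            max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation of an (n_items x n_factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - (gamma / p) * rotated
                          @ np.diag(np.sum(rotated ** 2, axis=0))))
        rotation = u @ vt
        new_var = np.sum(s)
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ rotation

# Simulated stand-in for the (n_signouts x 17) binary checklist matrix.
rng = np.random.default_rng(1)
elements = rng.integers(0, 2, size=(563, 17)).astype(float)

pca = PCA(n_components=3).fit(elements)
# Loadings: components scaled by the square roots of their eigenvalues.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = varimax(loadings)  # inspect which elements load on which factor

# Student t test comparing two firms' per-sign-out scores
# (illustrative normal draws standing in for the observed scores).
firm_a = rng.normal(10, 5, size=40)
firm_b = rng.normal(5, 5, size=40)
t_stat, p_value = stats.ttest_ind(firm_a, firm_b)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```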

RESULTS

Five hundred and sixty-three patient sign-outs were observed prior to the training interventions (κ = 0.646), and 620 patient sign-outs were observed after the interventions (κ = 0.648). Kappa values derived from SPSS were within acceptable interrater agreement ranges. Factor analysis of the 17 sign-out elements yielded 3 factors that we named patient information, task accountability, and responsibility, as shown in the supporting Table.
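The kappa values above summarize agreement between paired observers. As a brief illustration (not the study’s computation, which was done in SPSS), Cohen’s kappa for one hypothetical pair of observers rating the same sign-out could be calculated as follows; the ratings are invented for the example.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired ratings: one 0/1 mark per checklist element, indicating
# whether each observer saw that element performed during the same sign-out.
observer_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
observer_b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa = {kappa:.3f}")  # roughly 0.6-0.8 is generally read as substantial agreement
```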

The supporting Figure reports 2 sets of results. The line graphs show the pre- and postintervention differences for each firm, while the bar charts show the postintervention differences between each firm and the control group on the sign-out dimensions. The line graphs indicate the greatest improvements in patient information, task accountability, and responsibility for the I-PASS, policy mandate, and PDSA groups, respectively. The mandate and PDSA groups showed low relative scores on the sign-out dimensions that were not the foci of their training, while the didactics group scored around 0 both pre- and postintervention. I-PASS had the highest improvement on the multidimensional measure of sign-out quality but was not significantly different from the PDSA group at P < 0.05 (see the supporting Figure for the calculations). The bar charts indicate that all groups had greater improvements than the control in task accountability, responsibility, and the multidimensional measure of sign-out quality. I-PASS had the highest improvement vis-à-vis the control but was not statistically different from the PDSA group at P < 0.05. No sentinel events were reported during the entire study period.

DISCUSSION

The results indicated that, after only 1 hour of training, skill-based, compliance-based, and learner-initiated sign-out training improved sign-out quality beyond knowledge-based didactics, even though the number of sign-out elements taught in the compliance-based and learner-initiated interventions was lower than in the didactics group. Different training emphases influenced different dimensions of sign-out quality, so that training interns to focus on task accountability or responsibility led to improvements in those dimensions only. The lower scores in the other dimensions suggest potential risks to sign-out quality from focusing attention on 1 dimension at the expense of the others. I-PASS, which covered the most sign-out elements and utilized 5 facilitators, led to the best overall improvement in sign-out quality, which is consistent with previous studies.3,12 We demonstrated that only 1 hour of training on the I-PASS mnemonic using video, role-playing, and feedback led to significant improvements. This approach is portable and easily applied to any program. Further gains in I-PASS training might be obtained by emphasizing task accountability and responsibility, because the mandate and PDSA groups obtained higher scores than the I-PASS group in these dimensions.

Limitations

We measured sign-out quality in the evening at this site because it was the period at greatest risk for errors. Future studies should consider daytime sign-outs, interunit handoffs, and other hospital settings, such as community or rural hospitals and nonacute patient settings, to ascertain generalizability. Data were collected from observations, so Hawthorne effects may have introduced bias; however, we believe that using a standardized checklist and a control group and assessing relative changes minimized this risk. Although we observed almost 1200 patient sign-outs over 80 shift changes, we were not able to observe every intern in every firm. Finally, no sentinel events were reported during the study period, and we did not include other measures of clinical outcomes, which represents an opportunity for future researchers to test which specific sign-out elements or dimensions are related to clinical outcomes or are relevant to specific patient types.


CONCLUSION

The results of this study indicate that 1 hour of formal training can improve sign-out quality. Program directors should consider including I-PASS with additional focus on task accountability and personal responsibility in their sign-out training plans.

Disclosure

The authors have nothing to disclose.

References

1. Darbyshire D, Gordon M, Baker P. Teaching handover of care to medical students. Clin Teach. 2013;10:32-37.
2. Lee SH, Phan PH, Dorman T, Weaver SJ, Pronovost PJ. Handoffs, safety culture, and practices: evidence from the hospital survey on patient safety culture. BMC Health Serv Res. 2016;16:254. doi:10.1186/s12913-016-1502-7.
3. Starmer AJ, O’Toole JK, Rosenbluth G, et al. Development, implementation, and dissemination of the I-PASS handoff curriculum: a multisite educational intervention to improve patient handoffs. Acad Med. 2014;89:876-884.
4. Riesenberg LA, Leitzsch J, Little BW. Systematic review of handoff mnemonics literature. Am J Med Qual. 2009;24:196-204.
5. Cohen MD, Hilligoss B, Kajdacsy-Balla A. A handoff is not a telegram: an understanding of the patient is co-constructed. Crit Care. 2012;16:303.
6. McMullan A, Parush A, Momtahan K. Transferring patient care: patterns of synchronous bidisciplinary communication between physicians and nurses during handoffs in a critical care unit. J Perianesth Nurs. 2015;30:92-104.
7. Rayo MF, Mount-Campbell AF, O’Brien JM, et al. Interactive questioning in critical care during handovers: a transcript analysis of communication behaviours by physicians, nurses and nurse practitioners. BMJ Qual Saf. 2014;23:483-489.
8. Gordon M, Findley R. Educational interventions to improve handover in health care: a systematic review. Med Educ. 2011;45:1081-1089.
9. Nasca TJ, Day SH, Amis ES Jr; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363:e3.
10. Wohlauer MV, Arora VM, Horwitz LI, et al. The patient handoff: a comprehensive curricular blueprint for resident education to improve continuity of care. Acad Med. 2012;87:411-418.
11. Riesenberg LA, Leitzsch J, Massucci JL, et al. Residents’ and attending physicians’ handoffs: a systematic review of the literature. Acad Med. 2009;84:1775-1787.
12. Huth K, Hart F, Moreau K, et al. Real-world implementation of a standardized handover program (I-PASS) on a pediatric clinical teaching unit. Acad Pediatr. 2016;16:532-539.
13. Jonas E, Schulz-Hardt S, Frey D, Thelen N. Confirmation bias in sequential information search after preliminary decisions: an expansion of dissonance theoretical research on selective exposure to information. J Pers Soc Psychol. 2001;80:557-571.
14. The Joint Commission. Improving handoff communications: meeting National Patient Safety Goal 2E. Jt Comm Perspect Patient Saf. 2006;6:9-15.
15. Joint Commission Resources. Improving Hand-off Communication. 2007.

Journal of Hospital Medicine 12(12):979-983

Correspondence
Phillip H. Phan, PhD, 100 International Drive, Baltimore, MD 21202; Telephone: 410-234-9434; E-mail: pphan@jhu.edu

© 2017 Society of Hospital Medicine

Primary Care Provider Preferences for Communication with Inpatient Teams: One Size Does Not Fit All


As the hospitalist’s role in medicine grows, the transition of care from inpatient providers to primary care providers (PCPs, including primary care physicians, nurse practitioners, and physician assistants) becomes increasingly important. Inadequate communication at this transition is associated with preventable adverse events leading to rehospitalization, disability, and death.1-3 While professional societies recommend that PCPs be notified at every care transition, the specific timing and modality of this communication are not well defined.4

Providing PCPs access to the inpatient electronic health record (EHR) may reduce the need for active communication. However, a recent survey of PCPs in the general internal medicine division of an academic hospital found a strong preference for additional communication with inpatient providers, despite a shared EHR.5

We examined communication preferences of general internal medicine PCPs at a different academic institution and extended our study to include community-based PCPs who were both affiliated and unaffiliated with the institution.

METHODS

Between October 2015 and June 2016, we surveyed PCPs from 3 practice groups with institutional affiliation or proximity to The Johns Hopkins Hospital: all general internal medicine faculty with outpatient practices (“academic,” 2 practice sites, n = 35), all community-based PCPs affiliated with the health system (“community,” 36 practice sites, n = 220), and all PCPs from an unaffiliated managed care organization (“unaffiliated,” 5 practice sites ranging from 0.3 to 4 miles from The Johns Hopkins Hospital, n = 29).

All groups had work-sponsored e-mail services. At the time of the survey, both the academic and community groups used an EHR that allowed access to inpatient laboratory and radiology data and discharge summaries. The unaffiliated group used paper health records. The hospital faxes discharge summaries to all PCPs who are identified by patients.

The investigators and representatives from each practice group collaborated to develop 15 questions with mutually exclusive answers to evaluate PCP experiences with and preferences for communication with inpatient teams. The survey was constructed and administered through the Qualtrics online platform (Qualtrics, Provo, UT) and distributed via e-mail. The study was reviewed and acknowledged by the Johns Hopkins institutional review board as a quality improvement activity.

The survey contained branching logic: only respondents who indicated a preference for communication received questions regarding their preferred mode of communication. We used the preferred mode of communication for initial contact from the inpatient team in our analysis. χ2 and Fisher’s exact tests were performed with JMP 12 software (SAS Institute Inc, Cary, NC).
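For illustration, the sketch below runs the same 2 tests in Python with SciPy on a hypothetical 2 × 2 table of communication-mode preferences; the counts are invented for the example and are not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: rows are practice groups, columns are counts
# of respondents preferring telephone vs EHR-based communication.
table = np.array([
    [7, 3],   # "academic" group (illustrative counts only)
    [3, 30],  # "community" group (illustrative counts only)
])

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)  # exact test, useful for small cell counts

print(f"chi-square P = {p_chi2:.4f}; Fisher exact P = {p_fisher:.4f}")
```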

RESULTS

Fourteen (40%) academic, 43 (14%) community, and 16 (55%) unaffiliated PCPs completed the survey, for 73 total responses from 284 surveys distributed (26%).

Among the 73 responding PCPs, 31 (42%) reported receiving notification of admission during “every” or “almost every” hospitalization, with no significant variation across practice groups (P = 0.5).

Across all groups, 64 PCPs (88%) preferred communication at 1 or more points during hospitalizations (panel A of Figure). “Both upon admission and prior to discharge” was selected most frequently, and there were no differences between practice groups (P = 0.2).



Preferred mode of communication, however, differed significantly between groups (panel B of Figure). The academic group had a greater preference for telephone (54%) than the community (8%; P < 0.001) and unaffiliated groups (8%; P < 0.001), the community group a greater preference for EHR (77%) than the academic (23%; P = 0.002) and unaffiliated groups (0%; P < 0.001), and the unaffiliated group a greater preference for fax (58%) than the other groups (both 0%; P < 0.001).

DISCUSSION

Our findings add to previous evidence of low rates of communication between inpatient providers and PCPs6 and a preference from PCPs for communication during hospitalizations despite shared EHRs.5 We extend previous work by demonstrating that PCP preferences for mode of communication vary by practice setting. Our findings lead us to hypothesize that identifying and incorporating PCP preferences may improve communication, though at the potential expense of standardization and efficiency.

There may be several reasons for the differing communication preferences observed. Most academic PCPs are located near the hospital or have admitting privileges at it and are not in clinic full time. Their preference for the telephone may thus result from interpersonal relationships born of proximity and greater availability for telephone calls, or from reduced fluency with the EHR compared with full-time community clinicians.

The unaffiliated group’s preference for fax may reflect a desire for communication that integrates easily with paper charts and is least disruptive to workflow, or concerns about health information confidentiality in e-mails.

Our study’s generalizability is limited by a low response rate, though it is comparable to those of prior studies.7 The unaffiliated group was recruited by convenience (through an acquaintance with the medical director); however, we note that it had the highest response rate (55%).

In summary, we found low rates of communication between inpatient providers and PCPs, despite a strong preference from most PCPs for such communication during hospitalizations. PCPs’ preferred mode of communication differed based on practice setting. Addressing PCP communication preferences may be important to future care transition interventions.


Disclosure

The authors report no conflicts of interest.

 

References

1. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161-174.
2. Moore C, Wisnivesky J, Williams S, McGinn T. Medical errors related to discontinuity of care from an inpatient to an outpatient setting. J Gen Intern Med. 2003;18(8):646-651.
3. van Walraven C, Mamdani M, Fang J, Austin PC. Continuity of care and patient outcomes after hospital discharge. J Gen Intern Med. 2004;19(6):624-631.
4. Snow V, Beck D, Budnitz T, et al. Transitions of Care Consensus policy statement: American College of Physicians, Society of General Internal Medicine, Society of Hospital Medicine, American Geriatrics Society, American College of Emergency Physicians, and Society for Academic Emergency Medicine. J Hosp Med. 2009;4(6):364-370.
5. Sheu L, Fung K, Mourad M, Ranji S, Wu E. We need to talk: primary care provider communication at discharge in the era of a shared electronic medical record. J Hosp Med. 2015;10(5):307-310.
6. Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians. JAMA. 2007;297(8):831-841.
7. Pantilat SZ, Lindenauer PK, Katz PP, Wachter RM. Primary care physician attitudes regarding communication with hospitalists. Am J Med. 2001;111(9B):15-20.

Journal of Hospital Medicine 13(3):177-178

As the hospitalist’s role in medicine grows, the transition of care from inpatient to primary care providers (PCPs, including primary care physicians, nurse practitioners, or physician assistants), becomes increasingly important. Inadequate communication at this transition is associated with preventable adverse events leading to rehospitalization, disability, and death.1-3 While professional societies recommend PCPs be notified at every care transition, the specific timing and modality of this communication is not well defined.4

Providing PCPs access to the inpatient electronic health record (EHR) may reduce the need for active communication. However, a recent survey of PCPs in the general internal medicine division of an academic hospital found a strong preference for additional communication with inpatient providers, despite a shared EHR.5

We examined communication preferences of general internal medicine PCPs at a different academic institution and extended our study to include community-based PCPs who were both affiliated and unaffiliated with the institution.

METHODS

Between October 2015 and June 2016, we surveyed PCPs from 3 practice groups with institutional affiliation or proximity to The Johns Hopkins Hospital: all general internal medicine faculty with outpatient practices (“academic,” 2 practice sites, n = 35), all community-based PCPs affiliated with the health system (“community,” 36 practice sites, n = 220), and all PCPs from an unaffiliated managed care organization (“unaffiliated,” 5 practice sites ranging from 0.3 to 4 miles from The Johns Hopkins Hospital, n = 29).

All groups have work-sponsored e-mail services. At the time of the survey, both the academic and community groups used an EHR that allowed access to inpatient laboratory and radiology data and discharge summaries. The unaffiliated group used paper health records. The hospital faxes discharge summaries to all PCPs who are identified by patients.

The investigators and representatives from each practice group collaborated to develop 15 questions with mutually exclusive answers to evaluate PCP experiences with and preferences for communication with inpatient teams. The survey was constructed and administered through Qualtrics’ online platform (Qualtrics, Provo, UT) and distributed via e-mail. The study was reviewed and acknowledged by the Johns Hopkins institutional review board as quality improvement activity.

The survey contained branching logic. Only respondents who indicated preference for communication received questions regarding preferred mode of communication. We used the preferred mode of communication for initial contact from the inpatient team in our analysis. χ2 and Fischer’s exact tests were performed with JMP 12 software (SAS Institute Inc, Cary, NC).

RESULTS

Fourteen (40%) academic, 43 (14%) community, and 16 (55%) unaffiliated PCPs completed the survey, for 73 total responses from 284 surveys distributed (26%).

Among the 73 responding PCPs, 31 (42%) reported receiving notification of admission during “every” or “almost every” hospitalization, with no significant variation across practice groups (P = 0.5).

Across all groups, 64 PCPs (88%) preferred communication at 1 or more points during hospitalizations (panel A of Figure). “Both upon admission and prior to discharge” was selected most frequently, and there were no differences between practice groups (P = 0.2).



Preferred mode of communication, however, differed significantly between groups (panel B of Figure). The academic group had a greater preference for telephone (54%) than the community (8%; P < 0.001) and unaffiliated groups (8%; P < 0.001), the community group a greater preference for EHR (77%) than the academic (23%; P = 0.002) and unaffiliated groups (0%; P < 0.001), and the unaffiliated group a greater preference for fax (58%) than the other groups (both 0%; P < 0.001).

DISCUSSION

Our findings add to previous evidence of low rates of communication between inpatient providers and PCPs6 and a preference from PCPs for communication during hospitalizations despite shared EHRs.5 We extend previous work by demonstrating that PCP preferences for mode of communication vary by practice setting. Our findings lead us to hypothesize that identifying and incorporating PCP preferences may improve communication, though at the potential expense of standardization and efficiency.

There may be several reasons for the differing communication preferences observed. Most academic PCPs are located near or have admitting privileges to the hospital and are not in clinic full time. Their preference for the telephone may thus result from interpersonal relationships born from proximity and greater availability for telephone calls, or reduced fluency with the EHR compared to full-time community clinicians.

The unaffiliated group’s preference for fax may reflect a desire for communication that integrates easily with paper charts and is least disruptive to workflow, or concerns about health information confidentiality in e-mails.

Our study’s generalizability is limited by a low response rate, though it is comparable to prior studies.7 The unaffiliated group was accessed by convenience (acquaintance with the medical director); however, we note it had the highest response rate (55%).

In summary, we found low rates of communication between inpatient providers and PCPs, despite a strong preference from most PCPs for such communication during hospitalizations. PCPs’ preferred mode of communication differed based on practice setting. Addressing PCP communication preferences may be important to future care transition interventions.

 

 

 

Disclosure

The authors report no conflicts of interest.

 

As the hospitalist’s role in medicine grows, the transition of care from inpatient to primary care providers (PCPs, including primary care physicians, nurse practitioners, or physician assistants), becomes increasingly important. Inadequate communication at this transition is associated with preventable adverse events leading to rehospitalization, disability, and death.1-3 While professional societies recommend PCPs be notified at every care transition, the specific timing and modality of this communication is not well defined.4

Providing PCPs access to the inpatient electronic health record (EHR) may reduce the need for active communication. However, a recent survey of PCPs in the general internal medicine division of an academic hospital found a strong preference for additional communication with inpatient providers, despite a shared EHR.5

We examined communication preferences of general internal medicine PCPs at a different academic institution and extended our study to include community-based PCPs who were both affiliated and unaffiliated with the institution.

METHODS

Between October 2015 and June 2016, we surveyed PCPs from 3 practice groups with institutional affiliation or proximity to The Johns Hopkins Hospital: all general internal medicine faculty with outpatient practices (“academic,” 2 practice sites, n = 35), all community-based PCPs affiliated with the health system (“community,” 36 practice sites, n = 220), and all PCPs from an unaffiliated managed care organization (“unaffiliated,” 5 practice sites ranging from 0.3 to 4 miles from The Johns Hopkins Hospital, n = 29).

All groups have work-sponsored e-mail services. At the time of the survey, both the academic and community groups used an EHR that allowed access to inpatient laboratory and radiology data and discharge summaries. The unaffiliated group used paper health records. The hospital faxes discharge summaries to all PCPs who are identified by patients.

The investigators and representatives from each practice group collaborated to develop 15 questions with mutually exclusive answers to evaluate PCP experiences with and preferences for communication with inpatient teams. The survey was constructed and administered through Qualtrics’ online platform (Qualtrics, Provo, UT) and distributed via e-mail. The study was reviewed and acknowledged by the Johns Hopkins institutional review board as quality improvement activity.

The survey contained branching logic. Only respondents who indicated preference for communication received questions regarding preferred mode of communication. We used the preferred mode of communication for initial contact from the inpatient team in our analysis. χ2 and Fischer’s exact tests were performed with JMP 12 software (SAS Institute Inc, Cary, NC).

RESULTS

Fourteen (40%) academic, 43 (14%) community, and 16 (55%) unaffiliated PCPs completed the survey, for 73 total responses from 284 surveys distributed (26%).

Among the 73 responding PCPs, 31 (42%) reported receiving notification of admission during “every” or “almost every” hospitalization, with no significant variation across practice groups (P = 0.5).

Across all groups, 64 PCPs (88%) preferred communication at 1 or more points during hospitalizations (panel A of Figure). “Both upon admission and prior to discharge” was selected most frequently, and there were no differences between practice groups (P = 0.2).



Preferred mode of communication, however, differed significantly between groups (panel B of Figure). The academic group had a greater preference for telephone (54%) than the community (8%; P < 0.001) and unaffiliated groups (8%; P < 0.001), the community group a greater preference for EHR (77%) than the academic (23%; P = 0.002) and unaffiliated groups (0%; P < 0.001), and the unaffiliated group a greater preference for fax (58%) than the other groups (both 0%; P < 0.001).
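For readers who want to reproduce this kind of pairwise comparison outside JMP, the sketch below runs a χ2 test and a Fisher's exact test in Python with scipy. The 2 × 2 counts are approximate values back-calculated from the reported group sizes and percentages, included only for illustration; they are not the study's raw survey data.

```python
# Pairwise comparison of telephone preference, academic vs community PCPs.
# Counts are illustrative reconstructions from the reported percentages,
# not the study's raw data.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: academic, community. Columns: prefers telephone, prefers another mode.
table = [[7, 6],    # ~54% of roughly 13 academic respondents preferring contact
         [3, 37]]   # ~8% of roughly 40 community respondents preferring contact

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # exact test, preferred with small expected counts

print(f"chi-square p = {p_chi2:.4f}; Fisher exact p = {p_fisher:.4f}")
```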

DISCUSSION

Our findings add to previous evidence of low rates of communication between inpatient providers and PCPs6 and a preference from PCPs for communication during hospitalizations despite shared EHRs.5 We extend previous work by demonstrating that PCP preferences for mode of communication vary by practice setting. Our findings lead us to hypothesize that identifying and incorporating PCP preferences may improve communication, though at the potential expense of standardization and efficiency.

There may be several reasons for the differing communication preferences observed. Most academic PCPs are located near the hospital or have admitting privileges there and are not in clinic full time. Their preference for the telephone may thus result from interpersonal relationships born of proximity and greater availability for telephone calls, or from reduced fluency with the EHR compared with full-time community clinicians.

The unaffiliated group’s preference for fax may reflect a desire for communication that integrates easily with paper charts and is least disruptive to workflow, or concerns about health information confidentiality in e-mails.

Our study’s generalizability is limited by a low response rate, though it is comparable to prior studies.7 The unaffiliated group was accessed by convenience (acquaintance with the medical director); however, we note it had the highest response rate (55%).

In summary, we found low rates of communication between inpatient providers and PCPs, despite a strong preference from most PCPs for such communication during hospitalizations. PCPs’ preferred mode of communication differed based on practice setting. Addressing PCP communication preferences may be important to future care transition interventions.

 

 

 

Disclosure

The authors report no conflicts of interest.

 

References

1. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161-174.
2. Moore C, Wisnivesky J, Williams S, McGinn T. Medical errors related to discontinuity of care from an inpatient to an outpatient setting. J Gen Intern Med. 2003;18(8):646-651.
3. van Walraven C, Mamdani M, Fang J, Austin PC. Continuity of care and patient outcomes after hospital discharge. J Gen Intern Med. 2004;19(6):624-631.
4. Snow V, Beck D, Budnitz T, et al. Transitions of Care Consensus policy statement: American College of Physicians, Society of General Internal Medicine, Society of Hospital Medicine, American Geriatrics Society, American College of Emergency Physicians, and Society for Academic Emergency Medicine. J Hosp Med. 2009;4(6):364-370.
5. Sheu L, Fung K, Mourad M, Ranji S, Wu E. We need to talk: Primary care provider communication at discharge in the era of a shared electronic medical record. J Hosp Med. 2015;10(5):307-310.
6. Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians. JAMA. 2007;297(8):831-841.
7. Pantilat SZ, Lindenauer PK, Katz PP, Wachter RM. Primary care physician attitudes regarding communication with hospitalists. Am J Med. 2001;111(9B):15-20.



Inpatient Safety Outcomes Following the 2011 Residency Work-Hour Reform

The Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements implemented in July 2011 increased supervision requirements and limited continuous work hours for first‐year residents.[1] Similar to the 2003 mandates, these requirements were introduced to improve patient safety and education at academic medical centers.[2] Work‐hour reforms have been associated with decreased resident burnout and improved sleep.[3, 4, 5] However, national observational studies and systematic reviews of the impact of the 2003 reforms on patient safety and quality of care have yielded mixed results.[6, 7, 8, 9, 10] Small studies of the 2011 recommendations have shown increased sleep duration and decreased burnout, but also an increased number of handoffs and increased resident concerns about making a serious medical error.[11, 12, 13, 14] Although national surveys of residents and program directors have not indicated improvements in education or quality of life, 1 observational study did show improvement in clinical exposure and conference attendance.[15, 16, 17, 18] The impact of the 2011 reforms on patient safety remains unclear.[19, 20]

The objective of this study was to evaluate the association between implementation of the 2011 residency work‐hour mandates and patient safety outcomes at a large academic medical center.

METHODS

Study Design

This observational study used a quasi‐experimental difference‐in‐differences approach to evaluate whether residency work‐hour changes were associated with patient safety outcomes among general medicine inpatients. We compared safety outcomes among adult patients discharged from resident general medical services (referred to as resident) to safety outcomes among patients discharged by the hospitalist general medical service (referred to as hospitalist) before and after the 2011 residency work‐hour reforms at a large academic medical center. Differences in outcomes for the resident group were compared to differences observed in the hospitalist group, with adjustment for relevant demographic and case mix factors.[21] We used the hospitalist service as a control group, because ACGME changes applied only to resident services. The strength of this design is that it controls for secular trends that are correlated with patient safety, impacting both residents and hospitalists similarly.[9]

Approval for this study and a Health Insurance Portability and Accountability Act waiver were granted by the Johns Hopkins University School of Medicine institutional review board. We retrospectively examined administrative data on all patient discharges from the general medicine services at Johns Hopkins Hospital between July 1, 2008 and June 30, 2012 that were identified as pertaining to resident or hospitalist services.

Patient Allocation and Physician Scheduling

Patient admission to the resident or hospitalist service was decided by a number of factors. To maintain continuity of care, patients were preferentially admitted to the same service as for prior admissions. New patients were admitted to a service based on bed availability, nurse staffing, patient gender, isolation precautions, and cardiac monitor availability.

The inpatient resident services were staffed prior to July 2011 using a traditional 30‐hour overnight call system. Following July 2011, the inpatient resident services were staffed using a modified overnight call system, in which interns took overnight call from 8 pm until noon the following day, once every 5 nights, with supervision by upper‐level residents. These interns rotated through daytime admitting and coverage roles on the intervening days. The hospitalist service was organized into a 3‐physician rotation of day shift, evening shift, and overnight shift.

Data and Outcomes

Twenty‐nine percent of patients in the sample were admitted more than once during the study period, and patients were generally admitted to the same resident team during each admission. Patients with multiple admissions were counted multiple times in the model. We categorized admissions as prereform (July 1, 2008–June 30, 2011) and postreform (July 1, 2011–June 30, 2012). Outcomes evaluated included hospital length of stay, 30‐day readmission, intensive care unit (ICU) stay, inpatient mortality, and number of Maryland Hospital Acquired Conditions (MHACs). ICU stay pertained to any ICU admission, including initial admission and transfer from the inpatient floor. MHACs are a set of inpatient performance indicators derived from a list of 64 inpatient Potentially Preventable Complications developed by 3M Health Information Systems.[22] MHACs are used by the Maryland Health Services Cost Review Commission to link hospital payment to performance for costly, preventable, and clinically relevant complications. MHACs were coded in our analysis as a dichotomous variable. Independent variables included patient age at admission, race, gender, and case mix index. Case mix index (CMI) is a numeric score that measures resource utilization for a specific patient population. CMI is a weighted value assigned to patients based on resource utilization and All Patient Refined Diagnosis Related Group and was included as an indicator of patient illness severity and risk of mortality.[23] Data were obtained from administrative records from the case mix research team at Johns Hopkins Medicine.
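As a concrete illustration of the variable coding described above, the sketch below builds the analysis flags with pandas. The file name and column names (discharge_date, mhac_count, icu_flag, readmit_30d_flag, service, and so on) are hypothetical stand-ins, since the administrative extract's actual schema is not reported.

```python
import pandas as pd

# Hypothetical administrative extract; actual field names are not reported.
df = pd.read_csv("discharges_2008_2012.csv", parse_dates=["discharge_date"])

# Pre/post categorization by discharge date (reform effective July 1, 2011).
df["postreform"] = (df["discharge_date"] >= "2011-07-01").astype(int)

# Dichotomous outcome coding described above.
df["any_mhac"] = (df["mhac_count"] > 0).astype(int)     # any MHAC vs none
df["any_icu_stay"] = df["icu_flag"].astype(int)         # any ICU admission or transfer
df["readmit_30d"] = df["readmit_30d_flag"].astype(int)

# Service indicator: 1 = resident general medicine service, 0 = hospitalist.
df["resident"] = (df["service"] == "resident").astype(int)
```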

To account for transitional differences that may have coincided with the opening of a new hospital wing in late April 2012, we conducted a sensitivity analysis in which we excluded from analysis any visits that took place in May or June 2012.

Data Analysis

Based on historical studies, we calculated that a sample size of at least 3600 discharges would allow us to detect a difference of 5% between the pre‐ and postreform periods, assuming a baseline 20% occurrence of dichotomous outcomes (α=0.05; β=0.2; r=4).[21]
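The exact formula behind this calculation is not reported, but a two-proportion power calculation with the stated parameters (α = 0.05, power = 0.80, 20% vs 15%, allocation ratio r = 4) can be sketched with statsmodels; treat the result as a rough check of the same general magnitude rather than a reproduction of the published figure.

```python
# Illustrative two-proportion power calculation with the stated parameters;
# the authors' exact procedure is not reported, so this is only a sketch.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.20, 0.15)   # Cohen's h for a 5-percentage-point difference
n_smaller = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=4, alternative="two-sided"
)
print(f"smaller group n ≈ {n_smaller:.0f}; total discharges ≈ {n_smaller * 5:.0f}")
```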

The primary unit of analysis was the hospital discharge. Similar to Horwitz et al., we analyzed data using a difference‐in‐differences estimation strategy.[21] We used multivariable linear regression for length of stay measured as a continuous variable, and multivariable logistic regression for inpatient mortality, 30‐day readmission, MHACs coded as a dichotomous variable, and ICU stay coded as a dichotomous variable.[9] The difference‐in‐differences estimation was used to determine whether the postreform period relative to prereform period was associated with differences in outcomes comparing resident and hospitalist services. In the regression models, the independent variables of interest included an indicator variable for whether a patient was treated on a resident service, an indicator variable for whether a patient was discharged in the postreform period, and the interaction of these 2 variables (resident*postreform). The interaction term can be interpreted as a differential change over time comparing resident and hospitalist services. In all models, we adjusted for patient age, gender, race, and case mix index.
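A minimal sketch of these models in statsmodels formula syntax is shown below, continuing the hypothetical column names from the earlier sketch (length_of_stay, age, gender, race, and cmi are likewise assumed); the resident:postreform interaction is the difference-in-differences term of interest.

```python
import statsmodels.formula.api as smf

covariates = "age + C(gender) + C(race) + cmi"

# Logistic regression for a dichotomous outcome (30-day readmission shown here);
# resident * postreform expands to both main effects plus their interaction.
logit_fit = smf.logit(
    f"readmit_30d ~ resident * postreform + {covariates}", data=df
).fit()

# Linear regression for length of stay as a continuous outcome.
ols_fit = smf.ols(
    f"length_of_stay ~ resident * postreform + {covariates}", data=df
).fit()

# Differential change over time, resident vs hospitalist (log-odds scale here).
print(logit_fit.params["resident:postreform"])
```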

To determine whether prereform trends were similar among the resident and hospitalist services, we performed a test of controls as described by Volpp and colleagues.[6] Interaction terms for resident service and prereform years 2010 and 2011 were added to the model. A Wald test was then used to test for improved model fit, which would indicate differential trends among resident and hospitalist services during the prereform period. Where such trends were found, postreform results were compared only to 2011 rather than the 2009 to 2011 prereform period.[6]
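One way to code this test of controls, continuing the sketch above, is to add explicit resident-by-prereform-year indicators and test them jointly with a Wald test; the academic_year column is another assumed field.

```python
# Test of controls: do resident and hospitalist services trend differently
# before the reform? Year indicators are built from an assumed academic_year field.
df["ay2010"] = (df["academic_year"] == 2010).astype(int)
df["ay2011"] = (df["academic_year"] == 2011).astype(int)
df["res_x_ay2010"] = df["resident"] * df["ay2010"]
df["res_x_ay2011"] = df["resident"] * df["ay2011"]

trend_fit = smf.logit(
    f"readmit_30d ~ resident * postreform + res_x_ay2010 + res_x_ay2011 + {covariates}",
    data=df,
).fit()

# A significant joint Wald test suggests differential prereform trends, in which
# case the prereform comparison window is restricted to 2011 for that outcome.
print(trend_fit.wald_test("res_x_ay2010 = 0, res_x_ay2011 = 0"))
```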

To account for correlation within patients who had multiple discharges, we used a clustering approach and estimated robust variances.[24] From the regression model results, we calculated predicted probabilities adjusted for relevant covariates and pre‐post differences, and used linear probability models to estimate percentage‐point differences in outcomes, comparing residents and hospitalists in the pre‐ and postreform periods.[25] All analyses were performed using Stata/IC version 11 (StataCorp, College Station, TX).
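The sketch below, again under the assumed schema (including a patient_id field), adds patient-level cluster-robust variances, computes adjusted predicted probabilities in the spirit of Stata's margins, and fits a linear probability model whose interaction coefficient gives the percentage-point difference in differences.

```python
# Cluster-robust variances at the patient level for the logistic model.
logit_cl = smf.logit(
    f"readmit_30d ~ resident * postreform + {covariates}", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["patient_id"]})

# Adjusted predicted probabilities for resident discharges, averaged over the
# observed covariate distribution (analogous to margins in Stata).
pred_pre = logit_cl.predict(df.assign(resident=1, postreform=0)).mean()
pred_post = logit_cl.predict(df.assign(resident=1, postreform=1)).mean()
print(f"resident 30-day readmission: pre {pred_pre:.3f}, post {pred_post:.3f}")

# Linear probability model: the interaction coefficient, multiplied by 100,
# is the difference in differences in percentage points.
lpm_cl = smf.ols(
    f"readmit_30d ~ resident * postreform + {covariates}", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["patient_id"]})
print(lpm_cl.params["resident:postreform"] * 100)
```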

RESULTS

In the 3 years before the 2011 residency work‐hour reforms were implemented (prereform), there were a total of 15,688 discharges for 8983 patients to the resident services and 4622 discharges for 3649 patients to the hospitalist services. In the year following implementation of residency work‐hour changes (postreform), there were 5253 discharges for 3805 patients to the resident services and 1767 discharges for 1454 patients to the hospitalist service. Table 1 shows the characteristics of patients discharged from the resident and hospitalist services in the pre‐ and postreform periods. Patients discharged from the resident services were more likely to be older, male, African American, and have a higher CMI.

Table 1. Demographics and Case Mix Index of Patients Discharged From Resident and Hospitalist (Nonresident) General Medicine Services, 2009–2012, at Johns Hopkins Hospital

| Characteristic | Resident 2009 | Resident 2010 | Resident 2011 | Resident 2012 | Hospitalist 2009 | Hospitalist 2010 | Hospitalist 2011 | Hospitalist 2012 | P Value* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Discharges, n | 5345 | 5299 | 5044 | 5253 | 1366 | 1492 | 1764 | 1767 |  |
| Unique patients, n | 3082 | 2968 | 2933 | 3805 | 1106 | 1180 | 1363 | 1454 |  |
| Age, y, mean (SD) | 55.1 (17.7) | 55.7 (17.4) | 56.4 (17.9) | 56.7 (17.1) | 55.9 (17.9) | 56.2 (18.4) | 55.5 (18.8) | 54 (18.7) | 0.02 |
| Male sex, n (%) | 1503 (48.8) | 1397 (47.1) | 1432 (48.8) | 1837 (48.3) | 520 (47) | 550 (46.6) | 613 (45) | 654 (45) | <0.01 |
| Race: African American, n (%) | 2072 (67.2) | 1922 (64.8) | 1820 (62.1) | 2507 (65.9) | 500 (45.2) | 592 (50.2) | 652 (47.8) | 747 (51.4) | <0.01 |
| Race: White, n (%) | 897 (29.1) | 892 (30.1) | 957 (32.6) | 1118 (29.4) | 534 (48.3) | 527 (44.7) | 621 (45.6) | 619 (42.6) |  |
| Race: Asian, n (%) | 19 (0.6) | 35 (1.2) | 28 (1) | 32 (0.8) | 11 (1) | 7 (0.6) | 25 (1.8) | 12 (0.8) |  |
| Race: Other, n (%) | 94 (3.1) | 119 (4) | 128 (4.4) | 148 (3.9) | 61 (5.5) | 54 (4.6) | 65 (4.8) | 76 (5.2) |  |
| Case mix index, mean (SD) | 1.2 (1) | 1.1 (0.9) | 1.1 (0.9) | 1.1 (1.2) | 1.2 (1) | 1.1 (1) | 1.1 (1) | 1 (0.7) | <0.01 |

NOTE: SD, standard deviation. *The P value compares patients admitted to resident versus hospitalist services over the study period, 2009 to 2012. The case mix index range for this sample was 0.2 to 21.9 (SD 0.9); a higher case mix index indicates a higher risk of mortality.

Differences in Outcomes Among Resident and Hospitalist Services Pre‐ and Postreform

Table 2 shows unadjusted results. Patients discharged from the resident services in the postreform period as compared to the prereform period had a higher likelihood of an ICU stay (5.9% vs 4.5%, P<0.01) and a lower likelihood of 30‐day readmission (17.1% vs 20.1%, P<0.01). Patients discharged from the hospitalist service in the postreform period as compared to the prereform period had a significantly shorter mean length of stay (4.51 vs 4.88 days, P=0.03).

Table 2. Unadjusted Patient Safety Outcomes by Year and Service

| Outcome | Resident Prereform* | Resident Postreform | P Value | Hospitalist Prereform* | Hospitalist Postreform | P Value |
| --- | --- | --- | --- | --- | --- | --- |
| Length of stay, d (mean) | 4.55 (5.39) | 4.50 (5.47) | 0.61 | 4.88 (5.36) | 4.51 (4.64) | 0.03 |
| Any ICU stay, n (%) | 225 (4.5) | 310 (5.9) | <0.01 | 82 (4.7) | 83 (4.7) | 0.95 |
| Any MHACs, n (%) | 560 (3.6) | 180 (3.4) | 0.62 | 210 (4.5) | 64 (3.6) | 0.09 |
| Readmission within 30 days, n (%) | 3155 (20.1) | 900 (17.1) | <0.01 | 852 (18.4) | 296 (16.8) | 0.11 |
| Inpatient mortality, n (%) | 71 (0.5) | 28 (0.5) | 0.48 | 18 (0.4) | 7 (0.4) | 0.97 |

NOTE: ICU, intensive care unit; MHACs, Maryland Hospital Acquired Conditions. *For length of stay and ICU admission, the postreform period was compared to 2011 only; for MHACs, readmissions, and mortality, the postreform period was compared to 2009 to 2011.

Table 3 presents the results of regression analyses examining correlates of patient safety outcomes, adjusted for age, gender, race, and CMI. As the test of controls indicated differential prereform trends for ICU admission and length of stay, the prereform period was limited to 2011 for these outcomes. After adjustment for covariates, the probability of an ICU stay remained greater, and the 30‐day readmission rate was lower among patients discharged from resident services in the postreform period than the prereform period. Among patients discharged from the hospitalist services, there were no significant differences in length of stay, readmissions, ICU admissions, MHACs, or inpatient mortality comparing the pre‐ and postreform periods.

Table 3. Adjusted Changes in Patient Safety Outcomes by Year and Service

| Outcome | Resident Prereform* | Resident Postreform | Resident Difference | Hospitalist Prereform | Hospitalist Postreform | Hospitalist Difference | Difference in Differences (Resident − Hospitalist) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ICU stay | 4.5% (4.0% to 5.1%) | 5.7% (5.1% to 6.3%) | 1.4% (0.5% to 2.2%) | 4.4% (3.5% to 5.3%) | 5.3% (4.3% to 6.3%) | 1.1% (−0.2% to 2.4%) | 0.3% (−1.1% to 1.8%) |
| Inpatient mortality | 0.5% (0.4% to 0.6%) | 0.5% (0.3% to 0.7%) | 0 (−0.2% to 0.2%) | 0.3% (0.2% to 0.6%) | 0.5% (0.1% to 0.8%) | 0.1% (−0.3% to 0.5%) | −0.1% (−0.5% to 0.3%) |
| MHACs | 3.6% (3.3% to 3.9%) | 3.3% (2.9% to 3.7%) | −0.4% (−0.9% to 0.2%) | 4.5% (3.9% to 5.1%) | 4.1% (3.2% to 5.1%) | −0.3% (−1.4% to 0.7%) | 0.2% (−1.0% to 1.3%) |
| Readmission within 30 days | 20.1% (19.1% to 21.1%) | 17.2% (15.9% to 18.5%) | −2.8% (−4.3% to −1.3%) | 18.4% (16.5% to 20.2%) | 16.6% (14.7% to 18.5%) | −1.7% (−4.1% to 0.8%) | 1.8% (−0.2% to 3.7%) |
| Length of stay, d | 4.6 (4.4 to 4.7) | 4.4 (4.3 to 4.6) | −0.1 (−0.3 to 0.1) | 4.9 (4.6 to 5.1) | 4.7 (4.5 to 5.0) | −0.1 (−0.4 to 0.2) | 0.01 (−0.37 to 0.34) |

NOTE: Predicted probabilities and 95% confidence intervals were obtained via the margins command. Logistic regression was used for dichotomous outcomes and linear regression for continuous outcomes, adjusted for case mix index, age, race, gender, and clustering at the patient level. ICU, intensive care unit; MHACs, Maryland Hospital Acquired Conditions. *For length of stay and ICU admission, the postreform period was compared to 2011 only; for MHACs, readmissions, and mortality, the postreform period was compared to 2009 to 2011.

Differences in Outcomes Comparing Resident and Hospitalist Services Pre‐ and Postreform

Comparing pre‐ and postreform periods in the resident and hospitalist services, there were no significant differences in ICU admission, length of stay, MHACs, 30‐day readmissions, or inpatient mortality. In the sensitivity analysis, in which we excluded all discharges in May and June 2012, results were not significantly different for any of the outcomes examined.

DISCUSSION

Using difference‐in‐differences estimation, we evaluated whether the implementation of the 2011 residency work‐hour mandate was associated with differences in patient safety outcomes including length of stay, 30‐day readmission, inpatient mortality, MHACs, and ICU admissions comparing resident and hospitalist services at a large academic medical center. Adjusting for patient age, race, gender, and clinical complexity, we found no significant changes in any of the patient safety outcomes indicators in the postreform period comparing resident to hospitalist services.

Our quasiexperimental study design allowed us to gauge differences in patient safety outcomes while reducing bias due to unmeasured confounders that might impact patient safety indicators.[9] We were able to examine all discharges from the resident and hospitalist general medicine services during the academic years 2009 to 2012, while adjusting for age, race, gender, and clinical complexity. Though ICU admission was higher and readmission rates were lower on the resident services post‐2011, the difference‐in‐differences comparison with the hospitalist service showed no significant change in ICU admission or 30‐day readmission rates in the postreform period.

Our neutral findings differ from some other single‐institution evaluations of reduced resident work hours, several of which have shown improved quality of life, education, and patient safety indicators.[18, 21, 26, 27, 28] It is unclear why improvements in patient safety were not identified in the current study. The 2011 reforms were more broad‐based than some of the preliminary studies of reduced work hours, and therefore additional variables may be at play. For instance, challenges related to decreased work hours, including the increased number of handoffs in care and work compression, may require specific interventions to produce sustained improvements in patient safety.[3, 14, 29, 30]

Improving patient safety requires more than changing resident work hours. Blum et al. recommended enhanced funding to increase supervision, decrease resident caseload, and incentivize achievement of quality indicators to achieve the goal of improved patient safety within work‐hour reform.[31] Schumacher et al. proposed a focus on supervision, professionalism, safe transitions of care, and optimizing workloads as a means to improve patient safety and education within the new residency training paradigm.[29]

Limitations of this study include limited follow‐up time after implementation of the work‐hour reforms. It may take more time to optimize systems of care to see benefits in patient safety indicators. This was a single‐institution study of a limited number of outcomes in a single department, which limits generalizability and may reflect local experience rather than broader trends. The call schedule on the resident service in this study differs from programs that have adopted night float schedules.[27] This may have had an effect on patient care outcomes.[32] Because we sought to conduct a timely study of inpatient safety indicators following the 2011 changes, the study was not powered to detect small changes in low‐frequency outcomes such as mortality; longer‐term studies at multiple institutions will be needed to answer these key questions. We limited the prereform period where our test of controls indicated differential prereform trends, which reduced power.

As this was an observational study rather than an experiment, there may have been both measured and unmeasured differences in patient characteristics and comorbidity between the intervention and control group. For example, CMI was lower on the hospitalist service than the resident services. Demographics varied somewhat between services; male and African American patients were more likely to be discharged from resident services than hospitalist services for unknown reasons. Although we adjusted for demographics and CMI in our model, there may be residual confounding. Limitations in data collection did not allow us to separate patients initially admitted to the ICU from patients transferred to the ICU from the inpatient floors. We attempted to overcome this limitation through use of a difference‐in‐differences model to account for secular trends, but factors other than residency work hours may have impacted the resident and hospitalist services differentially. For example, hospital quality‐improvement programs or provider‐level factors may have differentially impacted the resident versus hospitalist services during the study period.

Work‐hour limitations for residents were established to improve residency education and patient safety. As noted by the Institute of Medicine, improving patient safety will require significant investment by program directors, hospitals, and the public to keep resident caseloads manageable, ensure adequate supervision of first‐year residents, train residents on safe handoffs in care, and conduct ongoing evaluations of patient safety and any unintended consequences of the regulations.[33] In the first year after implementation of the 2011 work‐hour reforms, we found no change in ICU admission, inpatient mortality, 30‐day readmission rates, length of stay, or MHACs compared with patients treated by hospitalists. Studies of the long‐term impact of residency work‐hour reform are necessary to determine whether changes in work hours have been associated with improvement in resident education and patient safety.

Disclosure: Nothing to report.

References
  1. Accreditation Council for Graduate Medical Education. Common program requirements effective: July 1, 2011. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramResources/Common_Program_Requirements_07012011[1].pdf. Accessed February 10, 2014.
  2. Nasca TJ, Day SH, Amis ES. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363:e3.
  3. Landrigan CP, Barger LK, Cade BE, Ayas NT, Czeisler CA. Interns' compliance with Accreditation Council for Graduate Medical Education work-hour limits. JAMA. 2006;296(9):1063-1070.
  4. Fletcher KE, Underwood W, Davis SQ, Mangulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents' lives: a systematic review. JAMA. 2005;294(9):1088-1100.
  5. Landrigan CP, Fahrenkopf AM, Lewin D, et al. Effects of the ACGME duty hour limits on sleep, work hours, and safety. Pediatrics. 2008;122(2):250-258.
  6. Volpp KG, Small DS, Romano PS. Teaching hospital five-year mortality trends in the wake of duty hour reforms. J Gen Intern Med. 2013;28(8):1048-1055.
  7. Philibert I, Nasca T, Brigham T, Shapiro J. Duty hour limits and patient care and resident outcomes: can high-quality studies offer insight into complex relationships? Ann Rev Med. 2013;64:467-483.
  8. Fletcher KE, Reed DA, Arora VM. Patient safety, resident education and resident well-being following implementation of the 2003 ACGME duty hour rules. J Gen Intern Med. 2011;26(8):907-919.
  9. Volpp KG, Rosen AK, Rosenbaum PR, et al. Mortality among hospitalized Medicare beneficiaries in the first 2 years following ACGME resident duty hour reform. JAMA. 2007;298(9):975-983.
  10. Rosen AK, Loveland SA, Romano PS, et al. Effects of resident duty hour reform on surgical and procedural patient safety indicators among hospitalized Veterans Health Administration and Medicare patients. Med Care. 2009;47(7):723-731.
  11. Schuh LA, Khan MA, Harle H, et al. Pilot trial of IOM duty hour recommendations in neurology residency programs. Neurology. 2011;77(9):883-887.
  12. McCoy CP, Halvorsen AJ, Loftus CG, et al. Effect of 16-hour duty periods on patient care and resident education. Mayo Clin Proc. 2011;86:192-196.
  13. Sen S, Kranzler HR, Didwania AK, et al. Effects of the 2011 duty hour reforms on interns and their patients: a prospective longitudinal cohort study. JAMA Intern Med. 2013;173(8):657-662.
  14. Desai SV, Feldman L, Brown L, et al. Effect of the 2011 vs 2003 duty hour regulation-compliant models on sleep duration, trainee education, and continuity of patient care among internal medicine house staff. JAMA Intern Med. 2013;173(8):649-655.
  15. Drolet BC, Christopher DA, Fischer SA. Residents' response to duty-hour regulations—a follow-up national survey. N Engl J Med. 2012;366:e35.
  16. Drolet BS, Sangisetty S, Tracy TF, Cioffi WG. Surgical residents' perceptions of 2011 Accreditation Council for Graduate Medical Education duty hour regulations. JAMA Surg. 2013;148(5):427-433.
  17. Drolet BC, Khokhar MT, Fischer SA. The 2011 duty hour requirements—a survey of residency program directors. N Engl J Med. 2013;368:694-697.
  18. Theobald CN, Stover DG, Choma NN, et al. The effect of reducing maximum shift lengths to 16 hours on internal medicine interns' educational opportunities. Acad Med. 2013;88(4):512-518.
  19. Nuckols TK, Escarce JJ. Residency work-hours reform. A cost analysis including preventable adverse events. J Gen Intern Med. 2005;20(10):873-878.
  20. Nuckols TK, Bhattacharya J, Wolman DM, Ulmer C, Escarce JJ. Cost implications of reduced work hours and workloads for resident physicians. N Engl J Med. 2009;360:2202-2215.
  21. Horwitz LI, Kosiborod M, Lin Z, Krumholz HM. Changes in outcomes for internal medicine inpatients after work-hour regulations. Ann Intern Med. 2007;147:97-103.
  22. Maryland Health Services Cost Review Commission. Complications: Maryland Hospital Acquired Conditions. Available at: http://www.hscrc.state.md.us/init_qi_MHAC.cfm. Accessed May 23, 2013.
  23. Averill R, Goldfield N, Hughes J, et al. What are APR-DRGs? An introduction to severity of illness and risk of mortality adjustment methodology. 3M Health Information Systems. Available at: http://solutions.3m.com/3MContentRetrievalAPI/BlobServlet?locale=it_IT44(4):10491060.
  24. Ross JS, Wang R, Long JB, Gross CP, Ma X. Impact of the 2008 US Preventive Services Task Force recommendation to discontinue prostate cancer screening among male Medicare beneficiaries. Arch Intern Med. 2012;172(20):1601-1603.
  25. Landrigan CP, Rothschild JM, Cronin JW, et al. Effect of reducing interns' work hours on serious medical errors in intensive care units. N Engl J Med. 2004;351(18):1838-1848.
  26. Levine AC, Adusumilli J, Landrigan CP. Effects of reducing or eliminating resident work shifts over 16 hours: a systematic review. Sleep. 2010;33(8):1043-1053.
  27. Bhavsar J, Montgomery D, Li J, et al. Impact of duty hours restrictions on quality of care and clinical outcomes. Am J Med. 2007;120(11):968-974.
  28. Schumacher DJ, Slovein SR, Riebschleger MP, Englander R, Hicks P, Carraccio C. Beyond counting hours: the importance of supervision, professionalism, transitions in care, and workload in residency training. Acad Med. 2012;87(7):883-888.
  29. Tessing S, Amendt A, Jennings J, Thomson J, Auger KA, Gonzalez del Rey JA. One possible future for resident hours: interns' perspective on a one-month trial of the Institute of Medicine recommended duty hour limits. J Grad Med Educ. 2009;1(2):185-187.
  30. Blum AB, Shea S, Czeisler CA, Landrigan CP, Leape L. Implementing the 2009 Institute of Medicine recommendations on resident physician work hours, supervision, and safety. Nat Sci Sleep. 2011;3:47-85.
  31. Bricker DA, Markert RJ. Night float teaching and learning: perceptions of residents and faculty. J Grad Med Educ. 2010;2(2):236-241.
  32. Institute of Medicine. Resident duty hours: enhancing sleep, supervision, and safety. Report brief. Washington, DC: National Academies; 2008. Available at: http://www.iom.edu/∼/media/Files/Report Files/2008/Resident‐Duty‐Hours/residency hours revised for web.pdf. Accessed May 23, 2013.
The Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements implemented in July 2011 increased supervision requirements and limited continuous work hours for first‐year residents.[1] Similar to the 2003 mandates, these requirements were introduced to improve patient safety and education at academic medical centers.[2] Work‐hour reforms have been associated with decreased resident burnout and improved sleep.[3, 4, 5] However, national observational studies and systematic reviews of the impact of the 2003 reforms on patient safety and quality of care have been varied in terms of outcome.[6, 7, 8, 9, 10] Small studies of the 2011 recommendations have shown increased sleep duration and decreased burnout, but also an increased number of handoffs and increased resident concerns about making a serious medical error.[11, 12, 13, 14] Although national surveys of residents and program directors have not indicated improvements in education or quality of life, 1 observational study did show improvement in clinical exposure and conference attendance.[15, 16, 17, 18] The impact of the 2011 reforms on patient safety remains unclear.[19, 20]

The objective of this study was to evaluate the association between implementation of the 2011 residency work‐hour mandates and patient safety outcomes at a large academic medical center.

METHODS

Study Design

This observational study used a quasi‐experimental difference‐in‐differences approach to evaluate whether residency work‐hour changes were associated with patient safety outcomes among general medicine inpatients. We compared safety outcomes among adult patients discharged from resident general medical services (referred to as resident) to safety outcomes among patients discharged by the hospitalist general medical service (referred to as hospitalist) before and after the 2011 residency work‐hour reforms at a large academic medical center. Differences in outcomes for the resident group were compared to differences observed in the hospitalist group, with adjustment for relevant demographic and case mix factors.[21] We used the hospitalist service as a control group, because ACGME changes applied only to resident services. The strength of this design is that it controls for secular trends that are correlated with patient safety, impacting both residents and hospitalists similarly.[9]

Approval for this study and a Health Insurance Portability and Accountability Act waiver were granted by the Johns Hopkins University School of Medicine institutional review board. We retrospectively examined administrative data on all patient discharges from the general medicine services at Johns Hopkins Hospital between July 1, 2008 and June 30, 2012 that were identified as pertaining to resident or hospitalist services.

Patient Allocation and Physician Scheduling

Patient admission to the resident or hospitalist service was decided by a number of factors. To maintain continuity of care, patients were preferentially admitted to the same service as for prior admissions. New patients were admitted to a service based on bed availability, nurse staffing, patient gender, isolation precautions, and cardiac monitor availability.

The inpatient resident services were staffed prior to July 2011 using a traditional 30‐hour overnight call system. Following July 2011, the inpatient resident services were staffed using a modified overnight call system, in which interns took overnight calls from 8 pm until 12 pm the following day, once every 5 nights with supervision by upper‐level residents. These interns rotated through daytime admitting and coverage roles on the intervening days. The hospitalist service was organized into a 3‐physician rotation of day shift, evening shift, and overnight shift.

Data and Outcomes

Twenty‐nine percent of patients in the sample were admitted more than once during the study period, and patients were generally admitted to the same resident team during each admission. Patients with multiple admissions were counted multiple times in the model. We categorized admissions as prereform (July 1, 2008June 30, 2011) and postreform (July 1, 2011June 30, 2012). Outcomes evaluated included hospital length of stay, 30‐day readmission, intensive care unit stay (ICU) stay, inpatient mortality, and number of Maryland Hospital Acquired Conditions (MHACs). ICU stay pertained to any ICU admission including initial admission and transfer from the inpatient floor. MHACs are a set of inpatient performance indicators derived from a list of 64 inpatient Potentially Preventable Complications developed by 3M Health Information Systems.[22] MHACs are used by the Maryland Health Services Cost Review Commission to link hospital payment to performance for costly, preventable, and clinically relevant complications. MHACs were coded in our analysis as a dichotomous variable. Independent variables included patient age at admission, race, gender, and case mix index. Case mix index (CMI) is a numeric score that measures resource utilization for a specific patient population. CMI is a weighted value assigned to patients based on resource utilization and All Patient Refined Diagnostic Related Group and was included as an indicator of patient illness severity and risk of mortality.[23] Data were obtained from administrative records from the case mix research team at Johns Hopkins Medicine.

To account for transitional differences that may have coincided with the opening of a new hospital wing in late April 2012, we conducted a sensitivity analysis in which we excluded from the analysis any visits that took place during May or June 2012.

Data Analysis

Based on historical studies, we calculated that a sample size of at least 3600 discharges would allow us to detect a difference of 5% between the pre‐ and postreform periods, assuming a baseline 20% occurrence of dichotomous outcomes (α=0.05; β=0.2; r=4).[21]
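For reference, a standard closed‐form expression for the sample size needed to compare two proportions with unequal allocation is shown below; this is one common formulation, presented in our notation, and the exact calculation may differ:

\[
n_2 = \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2 \left[\frac{p_1(1-p_1)}{r} + p_2(1-p_2)\right]}{(p_1 - p_2)^2}, \qquad n_1 = r\,n_2,
\]

where \(p_1 = 0.20\) is the baseline outcome rate, \(p_1 - p_2 = 0.05\) is the detectable difference, \(z_{1-\alpha/2}\) and \(z_{1-\beta}\) are standard normal quantiles corresponding to \(\alpha = 0.05\) and \(\beta = 0.2\), and \(r = 4\) is the ratio of the larger to the smaller group.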

The primary unit of analysis was the hospital discharge. Similar to Horwitz et al., we analyzed data using a difference‐in‐differences estimation strategy.[21] We used multivariable linear regression for length of stay measured as a continuous variable, and multivariable logistic regression for inpatient mortality, 30‐day readmission, MHACs coded as a dichotomous variable, and ICU stay coded as a dichotomous variable.[9] The difference‐in‐differences estimation was used to determine whether the postreform period relative to prereform period was associated with differences in outcomes comparing resident and hospitalist services. In the regression models, the independent variables of interest included an indicator variable for whether a patient was treated on a resident service, an indicator variable for whether a patient was discharged in the postreform period, and the interaction of these 2 variables (resident*postreform). The interaction term can be interpreted as a differential change over time comparing resident and hospitalist services. In all models, we adjusted for patient age, gender, race, and case mix index.
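Written out, a minimal sketch of the adjusted logistic difference‐in‐differences specification for a dichotomous outcome (variable names are illustrative) is:

\[
\operatorname{logit} P(Y_i = 1) = \beta_0 + \beta_1\,\mathrm{resident}_i + \beta_2\,\mathrm{post}_i + \beta_3\,(\mathrm{resident}_i \times \mathrm{post}_i) + \boldsymbol{\gamma}^\top \mathbf{x}_i,
\]

where \(\mathbf{x}_i\) contains age, gender, race, and CMI, and \(\beta_3\), the coefficient on the interaction term, is the difference‐in‐differences estimate on the log‐odds scale. For length of stay, the same right‐hand side is used in a linear model.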

To determine whether prereform trends were similar among the resident and hospitalist services, we performed a test of controls as described by Volpp and colleagues.[6] Interaction terms for resident service and prereform years 2010 and 2011 were added to the model. A Wald test was then used to test for improved model fit, which would indicate differential trends among resident and hospitalist services during the prereform period. Where such trends were found, postreform results were compared only to 2011 rather than the 2009 to 2011 prereform period.[6]
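Concretely, the test of controls augments the model with prereform service‐by‐year interaction terms (our notation):

\[
\operatorname{logit} P(Y_i = 1) = \cdots + \delta_1\,\big(\mathrm{resident}_i \times \mathbb{1}[\mathrm{year}_i = 2010]\big) + \delta_2\,\big(\mathrm{resident}_i \times \mathbb{1}[\mathrm{year}_i = 2011]\big),
\]

and the Wald test evaluates \(H_0\colon \delta_1 = \delta_2 = 0\); rejection indicates that the resident and hospitalist services were on different trajectories before the reform.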

To account for correlation within patients who had multiple discharges, we used a clustering approach and estimated robust variances.[24] From the regression model results, we calculated predicted probabilities adjusted for relevant covariates and prepost differences, and used linear probability models to estimate percentage‐point differences in outcomes, comparing residents and hospitalists in the pre‐ and postreform periods.[25] All analyses were performed using Stata/IC version 11 (StataCorp, College Station, TX).
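Conceptually, the quantity estimated for each dichotomous outcome is the difference of pre‐to‐post differences in adjusted probabilities (our notation; the reported percentage‐point estimates come from the linear probability models described above):

\[
\mathrm{DiD} = \left(\hat{P}^{\mathrm{res}}_{\mathrm{post}} - \hat{P}^{\mathrm{res}}_{\mathrm{pre}}\right) - \left(\hat{P}^{\mathrm{hosp}}_{\mathrm{post}} - \hat{P}^{\mathrm{hosp}}_{\mathrm{pre}}\right).
\]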

RESULTS

In the 3 years before the 2011 residency work‐hour reforms were implemented (prereform), there were a total of 15,688 discharges for 8983 patients to the resident services and 4622 discharges for 3649 patients to the hospitalist services. In the year following implementation of residency work‐hour changes (postreform), there were 5253 discharges for 3805 patients to the resident services and 1767 discharges for 1454 patients to the hospitalist service. Table 1 shows the characteristics of patients discharged from the resident and hospitalist services in the pre‐ and postreform periods. Patients discharged from the resident services were more likely to be older, male, African American, and have a higher CMI.

Demographics and Case Mix Index of Patients Discharged From Resident and Hospitalist (Nonresident) General Medicine Services, 2009–2012, at Johns Hopkins Hospital

| Characteristic | Resident 2009 | Resident 2010 | Resident 2011 | Resident 2012 | Hospitalist 2009 | Hospitalist 2010 | Hospitalist 2011 | Hospitalist 2012 | P Value* |
| Discharges, n | 5345 | 5299 | 5044 | 5253 | 1366 | 1492 | 1764 | 1767 | |
| Unique patients, n | 3082 | 2968 | 2933 | 3805 | 1106 | 1180 | 1363 | 1454 | |
| Age, y, mean (SD) | 55.1 (17.7) | 55.7 (17.4) | 56.4 (17.9) | 56.7 (17.1) | 55.9 (17.9) | 56.2 (18.4) | 55.5 (18.8) | 54 (18.7) | 0.02 |
| Sex, male, n (%) | 1503 (48.8) | 1397 (47.1) | 1432 (48.8) | 1837 (48.3) | 520 (47) | 550 (46.6) | 613 (45) | 654 (45) | <0.01 |
| Race, n (%) | | | | | | | | | |
|  African American | 2072 (67.2) | 1922 (64.8) | 1820 (62.1) | 2507 (65.9) | 500 (45.2) | 592 (50.2) | 652 (47.8) | 747 (51.4) | <0.01 |
|  White | 897 (29.1) | 892 (30.1) | 957 (32.6) | 1118 (29.4) | 534 (48.3) | 527 (44.7) | 621 (45.6) | 619 (42.6) | |
|  Asian | 19 (0.6) | 35 (1.2) | 28 (1) | 32 (0.8) | 11 (1) | 7 (0.6) | 25 (1.8) | 12 (0.8) | |
|  Other | 94 (3.1) | 119 (4) | 128 (4.4) | 148 (3.9) | 61 (5.5) | 54 (4.6) | 65 (4.8) | 76 (5.2) | |
| Case mix index, mean (SD) | 1.2 (1) | 1.1 (0.9) | 1.1 (0.9) | 1.1 (1.2) | 1.2 (1) | 1.1 (1) | 1.1 (1) | 1 (0.7) | <0.01 |

NOTE: Abbreviations: SD, standard deviation.
*Comparing patients admitted to the resident versus hospitalist service over the length of the study period, 2009 to 2012. The case mix index range for this sample was 0.2 to 21.9 (SD 0.9); a higher case mix index indicates a higher risk of mortality.

Differences in Outcomes Among Resident and Hospitalist Services Pre‐ and Postreform

Table 2 shows the unadjusted results. Patients discharged from the resident services in the postreform period, as compared with the prereform period, had a higher likelihood of an ICU stay (5.9% vs 4.5%, P<0.01) and a lower likelihood of 30‐day readmission (17.1% vs 20.1%, P<0.01). Patients discharged from the hospitalist service in the postreform period, as compared with the prereform period, had a significantly shorter mean length of stay (4.51 vs 4.88 days, P=0.03).

Unadjusted Patient Safety Outcomes by Year and Service

| Outcome | Resident Prereform* | Resident Postreform | P Value | Hospitalist Prereform* | Hospitalist Postreform | P Value |
| Length of stay, d, mean (SD) | 4.55 (5.39) | 4.50 (5.47) | 0.61 | 4.88 (5.36) | 4.51 (4.64) | 0.03 |
| Any ICU stay, n (%) | 225 (4.5) | 310 (5.9) | <0.01 | 82 (4.7) | 83 (4.7) | 0.95 |
| Any MHACs, n (%) | 560 (3.6) | 180 (3.4) | 0.62 | 210 (4.5) | 64 (3.6) | 0.09 |
| Readmission within 30 days, n (%) | 3155 (20.1) | 900 (17.1) | <0.01 | 852 (18.4) | 296 (16.8) | 0.11 |
| Inpatient mortality, n (%) | 71 (0.5) | 28 (0.5) | 0.48 | 18 (0.4) | 7 (0.4) | 0.97 |

NOTE: Abbreviations: ICU, intensive care unit; MHACs, Maryland Hospital Acquired Conditions.
*For the outcomes length of stay and ICU admission, the postreform period was compared to 2011 only. For MHACs, readmissions, and mortality, the postreform period was compared to 2009 to 2011.

Table 3 presents the results of regression analyses examining correlates of patient safety outcomes, adjusted for age, gender, race, and CMI. As the test of controls indicated differential prereform trends for ICU admission and length of stay, the prereform period was limited to 2011 for these outcomes. After adjustment for covariates, the probability of an ICU stay remained greater, and the 30‐day readmission rate was lower among patients discharged from resident services in the postreform period than the prereform period. Among patients discharged from the hospitalist services, there were no significant differences in length of stay, readmissions, ICU admissions, MHACs, or inpatient mortality comparing the pre‐ and postreform periods.

Adjusted Changes in Patient Safety Outcomes by Year and Service

| Outcome | Resident Prereform* | Resident Postreform | Resident Difference | Hospitalist Prereform | Hospitalist Postreform | Hospitalist Difference | Difference in Differences (Resident − Hospitalist) |
| ICU stay | 4.5% (4.0% to 5.1%) | 5.7% (5.1% to 6.3%) | 1.4% (0.5% to 2.2%) | 4.4% (3.5% to 5.3%) | 5.3% (4.3% to 6.3%) | 1.1% (−0.2% to 2.4%) | 0.3% (−1.1% to 1.8%) |
| Inpatient mortality | 0.5% (0.4% to 0.6%) | 0.5% (0.3% to 0.7%) | 0 (−0.2% to 0.2%) | 0.3% (0.2% to 0.6%) | 0.5% (0.1% to 0.8%) | 0.1% (−0.3% to 0.5%) | −0.1% (−0.5% to 0.3%) |
| MHACs | 3.6% (3.3% to 3.9%) | 3.3% (2.9% to 3.7%) | −0.4% (−0.9% to 0.2%) | 4.5% (3.9% to 5.1%) | 4.1% (3.2% to 5.1%) | −0.3% (−1.4% to 0.7%) | 0.2% (−1.0% to 1.3%) |
| Readmission within 30 days | 20.1% (19.1% to 21.1%) | 17.2% (15.9% to 18.5%) | −2.8% (−4.3% to −1.3%) | 18.4% (16.5% to 20.2%) | 16.6% (14.7% to 18.5%) | −1.7% (−4.1% to 0.8%) | 1.8% (−0.2% to 3.7%) |
| Length of stay, d | 4.6 (4.4 to 4.7) | 4.4 (4.3 to 4.6) | −0.1 (−0.3 to 0.1) | 4.9 (4.6 to 5.1) | 4.7 (4.5 to 5.0) | −0.1 (−0.4 to 0.2) | 0.01 (−0.37 to 0.34) |

NOTE: Predicted probabilities (or adjusted means, for length of stay) and 95% confidence intervals were obtained via the margins command. Logistic regression was used for dichotomous outcomes and linear regression for continuous outcomes, adjusted for case mix index, age, race, gender, and clustering at the patient level. Abbreviations: ICU, intensive care unit; MHACs, Maryland Hospital Acquired Conditions.
*For the outcomes length of stay and ICU admission, the postreform period was compared to 2011 only. For MHACs, readmissions, and mortality, the postreform period was compared to 2009 to 2011.

Differences in Outcomes Comparing Resident and Hospitalist Services Pre‐ and Postreform

Comparing the change from the pre‐ to the postreform period between the resident and hospitalist services, there were no significant differences in ICU admission, length of stay, MHACs, 30‐day readmissions, or inpatient mortality. In the sensitivity analysis, in which we excluded all discharges during May and June 2012, results were not significantly different for any of the outcomes examined.

DISCUSSION

Using difference‐in‐differences estimation, we evaluated whether implementation of the 2011 residency work‐hour mandate was associated with differences in patient safety outcomes, including length of stay, 30‐day readmission, inpatient mortality, MHACs, and ICU admissions, comparing resident and hospitalist services at a large academic medical center. Adjusting for patient age, race, gender, and clinical complexity, we found no significant changes in any of the patient safety indicators in the postreform period comparing resident to hospitalist services.

Our quasi‐experimental study design allowed us to gauge differences in patient safety outcomes while reducing bias due to unmeasured confounders that might affect patient safety indicators.[9] We were able to examine all discharges from the resident and hospitalist general medicine services during the academic years 2009 to 2012, while adjusting for age, race, gender, and clinical complexity. Although ICU admission was higher and readmission rates were lower on the resident services after 2011, these changes did not differ significantly from the changes observed on the hospitalist service over the same period.

Our neutral findings differ from some other single‐institution evaluations of reduced resident work hours, several of which have shown improvements in quality of life, education, and patient safety indicators.[18, 21, 26, 27, 28] It is unclear why improvements in patient safety were not identified in the current study. The 2011 reforms were broader in scope than the interventions examined in some preliminary studies of reduced work hours, so additional variables may be at play. For instance, challenges related to decreased work hours, including the increased number of handoffs in care and work compression, may require specific interventions to produce sustained improvements in patient safety.[3, 14, 29, 30]

Improving patient safety requires more than changing resident work hours. Blum et al. recommended enhanced funding to increase supervision, decrease resident caseload, and incentivize achievement of quality indicators to achieve the goal of improved patient safety within work‐hour reform.[31] Schumacher et al. proposed a focus on supervision, professionalism, safe transitions of care, and optimizing workloads as a means to improve patient safety and education within the new residency training paradigm.[29]

Limitations of this study include the limited follow‐up time after implementation of the work‐hour reforms; it may take more time to optimize systems of care before benefits appear in patient safety indicators. This was a single‐institution study of a limited number of outcomes in a single department, which limits generalizability and may reflect local experience rather than broader trends. The call schedule on the resident service in this study also differs from that of programs that have adopted night float schedules,[27] which may have affected patient care outcomes.[32] Because we sought to conduct a timely study of inpatient safety indicators following the 2011 changes, the study was not powered to detect small changes in low‐frequency outcomes such as mortality; longer‐term studies at multiple institutions will be needed to answer these key questions. We limited the prereform period where our test of controls indicated differential prereform trends, which further reduced power.

As this was an observational study rather than an experiment, there may have been both measured and unmeasured differences in patient characteristics and comorbidity between the intervention and control group. For example, CMI was lower on the hospitalist service than the resident services. Demographics varied somewhat between services; male and African American patients were more likely to be discharged from resident services than hospitalist services for unknown reasons. Although we adjusted for demographics and CMI in our model, there may be residual confounding. Limitations in data collection did not allow us to separate patients initially admitted to the ICU from patients transferred to the ICU from the inpatient floors. We attempted to overcome this limitation through use of a difference‐in‐differences model to account for secular trends, but factors other than residency work hours may have impacted the resident and hospitalist services differentially. For example, hospital quality‐improvement programs or provider‐level factors may have differentially impacted the resident versus hospitalist services during the study period.

Work‐hour limitations for residents were established to improve residency education and patient safety. As noted by the Institute of Medicine, improving patient safety will require significant investment by program directors, hospitals, and the public to keep resident caseloads manageable, ensure adequate supervision of first‐year residents, train residents on safe handoffs in care, and conduct ongoing evaluations of patient safety and any unintended consequences of the regulations.[33] In the first year after implementation of the 2011 work‐hour reforms, we found no change in ICU admission, inpatient mortality, 30‐day readmission rates, length of stay, or MHACs compared with patients treated by hospitalists. Studies of the long‐term impact of residency work‐hour reform are necessary to determine whether changes in work hours have been associated with improvement in resident education and patient safety.

Disclosure: Nothing to report.

References
  1. Accreditation Council for Graduate Medical Education. Common program requirements effective: July 1, 2011. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramResources/Common_Program_Requirements_07012011[1].pdf. Accessed February 10, 2014.
  2. Nasca TJ, Day SH, Amis ES. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363:e3.
  3. Landrigan CP, Barger LK, Cade BE, Ayas NT, Czeisler CA. Interns' compliance with Accreditation Council for Graduate Medical Education work‐hour limits. JAMA. 2006;296(9):1063–1070.
  4. Fletcher KE, Underwood W, Davis SQ, Mangulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents' lives: a systematic review. JAMA. 2005;294(9):1088–1100.
  5. Landrigan CP, Fahrenkopf AM, Lewin D, et al. Effects of the ACGME duty hour limits on sleep, work hours, and safety. Pediatrics. 2008;122(2):250–258.
  6. Volpp KG, Small DS, Romano PS. Teaching hospital five‐year mortality trends in the wake of duty hour reforms. J Gen Intern Med. 2013;28(8):1048–1055.
  7. Philibert I, Nasca T, Brigham T, Shapiro J. Duty hour limits and patient care and resident outcomes: can high‐quality studies offer insight into complex relationships? Annu Rev Med. 2013;64:467–483.
  8. Fletcher KE, Reed DA, Arora VM. Patient safety, resident education and resident well‐being following implementation of the 2003 ACGME duty hour rules. J Gen Intern Med. 2011;26(8):907–919.
  9. Volpp KG, Rosen AK, Rosenbaum PR, et al. Mortality among hospitalized Medicare beneficiaries in the first 2 years following ACGME resident duty hour reform. JAMA. 2007;298(9):975–983.
  10. Rosen AK, Loveland SA, Romano PS, et al. Effects of resident duty hour reform on surgical and procedural patient safety indicators among hospitalized Veterans Health Administration and Medicare patients. Med Care. 2009;47(7):723–731.
  11. Schuh LA, Khan MA, Harle H, et al. Pilot trial of IOM duty hour recommendations in neurology residency programs. Neurology. 2011;77(9):883–887.
  12. McCoy CP, Halvorsen AJ, Loftus CG, et al. Effect of 16‐hour duty periods on patient care and resident education. Mayo Clin Proc. 2011;86:192–196.
  13. Sen S, Kranzler HR, Didwania AK, et al. Effects of the 2011 duty hour reforms on interns and their patients: a prospective longitudinal cohort study. JAMA Intern Med. 2013;173(8):657–662.
  14. Desai SV, Feldman L, Brown L, et al. Effect of the 2011 vs 2003 duty hour regulation—compliant models on sleep duration, trainee education, and continuity of patient care among internal medicine house staff. JAMA Intern Med. 2013;173(8):649–655.
  15. Drolet BC, Christopher DA, Fischer SA. Residents' response to duty‐hour regulations—a follow‐up national survey. N Engl J Med. 2012;366:e35.
  16. Drolet BC, Sangisetty S, Tracy TF, Cioffi WG. Surgical residents' perceptions of 2011 Accreditation Council for Graduate Medical Education duty hour regulations. JAMA Surg. 2013;148(5):427–433.
  17. Drolet BC, Khokhar MT, Fischer SA. The 2011 duty hour requirements—a survey of residency program directors. N Engl J Med. 2013;368:694–697.
  18. Theobald CN, Stover DG, Choma NN, et al. The effect of reducing maximum shift lengths to 16 hours on internal medicine interns' educational opportunities. Acad Med. 2013;88(4):512–518.
  19. Nuckols TK, Escarce JJ. Residency work‐hours reform: a cost analysis including preventable adverse events. J Gen Intern Med. 2005;20(10):873–878.
  20. Nuckols TK, Bhattacharya J, Wolman DM, Ulmer C, Escarce JJ. Cost implications of reduced work hours and workloads for resident physicians. N Engl J Med. 2009;360:2202–2215.
  21. Horwitz LI, Kosiborod M, Lin Z, Krumholz HM. Changes in outcomes for internal medicine inpatients after work‐hour regulations. Ann Intern Med. 2007;147:97–103.
  22. Maryland Health Services Cost Review Commission. Complications: Maryland Hospital Acquired Conditions. Available at: http://www.hscrc.state.md.us/init_qi_MHAC.cfm. Accessed May 23, 2013.
  23. Averill R, Goldfield N, Hughes J, et al. What are APR‐DRGs? An introduction to severity of illness and risk of mortality adjustment methodology. 3M Health Information Systems. Available at: http://solutions.3m.com/3MContentRetrievalAPI/BlobServlet?locale=it_IT. Accessed May 23, 2013.
  24. Ross JS, Wang R, Long JB, Gross CP, Ma X. Impact of the 2008 US Preventive Services Task Force recommendation to discontinue prostate cancer screening among male Medicare beneficiaries. Arch Intern Med. 2012;172(20):1601–1603.
  25. Landrigan CP, Rothschild JM, Cronin JW, et al. Effect of reducing interns' work hours on serious medical errors in intensive care units. N Engl J Med. 2004;351(18):1838–1848.
  26. Levine AC, Adusumilli J, Landrigan CP. Effects of reducing or eliminating resident work shifts over 16 hours: a systematic review. Sleep. 2010;33(8):1043–1053.
  27. Bhavsar J, Montgomery D, Li J, et al. Impact of duty hours restrictions on quality of care and clinical outcomes. Am J Med. 2007;120(11):968–974.
  28. Schumacher DJ, Slovein SR, Riebschleger MP, Englander R, Hicks P, Carraccio C. Beyond counting hours: the importance of supervision, professionalism, transitions in care, and workload in residency training. Acad Med. 2012;87(7):883–888.
  29. Tessing S, Amendt A, Jennings J, Thomson J, Auger KA, Gonzalez del Rey JA. One possible future for resident hours: interns' perspective on a one‐month trial of the Institute of Medicine recommended duty hour limits. J Grad Med Educ. 2009;1(2):185–187.
  30. Blum AB, Shea S, Czeisler CA, Landrigan CP, Leape L. Implementing the 2009 Institute of Medicine recommendations on resident physician work hours, supervision, and safety. Nat Sci Sleep. 2011;3:47–85.
  31. Bricker DA, Markert RJ. Night float teaching and learning: perceptions of residents and faculty. J Grad Med Educ. 2010;2(2):236–241.
  32. Institute of Medicine. Resident duty hours: enhancing sleep, supervision, and safety. Report brief. Washington, DC: National Academies; 2008. Available at: http://www.iom.edu/∼/media/Files/Report Files/2008/Resident‐Duty‐Hours/residency hours revised for web.pdf. Accessed May 23, 2013.
Issue
Journal of Hospital Medicine - 9(6)
Page Number
347-352
Display Headline
Inpatient safety outcomes following the 2011 residency work‐hour reform
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Lauren Block, MD, Hofstra North Shore‐LIJ School of Medicine, 2001 Marcus Ave, Suite S160, Lake Success NY 11042; Telephone: 516‐519‐5600; Fax: 516‐519‐5601; E‐mail: lblock2@nshs.edu

Etiquette‐Based Medicine Among Interns

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Do internal medicine interns practice etiquette‐based communication? A critical look at the inpatient encounter

Patient‐centered communication may impact several aspects of the patient–doctor relationship including patient disclosure of illness‐related information, patient satisfaction, anxiety, and compliance with medical recommendations.[1, 2, 3, 4] Etiquette‐based medicine, a term coined by Kahn, involves simple patient‐centered communication strategies that convey professionalism and respect to patients.[5] Studies have confirmed that patients prefer physicians who practice etiquette‐based medicine behaviors, including sitting down and introducing one's self.[6, 7, 8, 9] Performance of etiquette‐based medicine is associated with higher Press Ganey patient satisfaction scores. However, these easy‐to‐practice behaviors may not be modeled commonly in the inpatient setting.[10] We sought to understand whether etiquette‐based communication behaviors are practiced by trainees on inpatient medicine rotations.

METHODS

Design

This was a prospective study incorporating direct observation of intern interactions with patients during January 2012 at 2 internal medicine residency programs in Baltimore, Maryland: Johns Hopkins Hospital (JHH) and the University of Maryland Medical Center (UMMC). We then surveyed participants from JHH in June 2012 to assess perceptions of their practice of etiquette‐based communication.

Participants and Setting

We observed a convenience sample of 29 internal medicine interns from the 2 institutions. We sought to observe interns over an equal number of hours at both sites and to sample shifts in proportion to the amount of time interns spend on each of these shifts. All interns who were asked to participate in the study agreed and comprised a total of 27% of the 108 interns in the 2 programs. The institutional review board at Johns Hopkins School of Medicine approved the study; the University of Maryland institutional review board deemed it not human subjects research. All observed interns provided informed consent to be observed during 1 to 4 inpatient shifts.

Observers

Twenty‐two undergraduate university students served as the observers for the study and were trained to collect data with the iPod Touch (Apple, Cupertino, CA) without interrupting patient care. We then tested the observers to ensure an 85% concordance rate with the researchers in mock observations. Four hours of quality assurance were completed at both institutions during the study, and congruence between the observer and a research team member was >85% for each hour of observation.

Observation

Observers recorded intern activities on the iPod Touch spreadsheet application. The application allowed for real‐time data entry and direct export of results. The primary dependent variables for this study were 5 behaviors that were assessed each time an intern went into a patient's room. The 5 observed behaviors included (1) introducing one's self, (2) introducing one's role on the medical team, (3) touching the patient, (4) sitting down, and (5) asking the patient at least 1 open‐ended question. These behaviors were chosen for observation because they are central to Kahn's framework of etiquette‐based medicine, applicable to each inpatient encounter, and readily observed by trained nonmedical observers. These behaviors are defined in Table 1. Use of open‐ended questions was observed as a more general form of Kahn's recommendation to ask how the patient is feeling. Interns were not aware of which behaviors were being evaluated.

Observed Behaviors and Definitions

| Behavior | Definition |
| Introduced self | Provided a name |
| Introduced role | Used the term "doctor," "resident," "intern," or "medical team" |
| Sat down | Sat on the bed or in a chair, or crouched if no chair was available, during at least part of the encounter |
| Touched the patient | Any form of physical contact that occurred at least once during the encounter, including shaking a patient's hand, touching a patient on the shoulder, or performing any part of the physical exam |
| Asked open‐ended question | Asked the patient any question that required more than a yes/no answer |

Each time an observed intern entered a patient room, the observer recorded whether or not each of the 5 behaviors was performed, coded as a dichotomous variable. Although data collection was anonymous, observers recorded the team, hospital site, gender of the intern, and whether the intern was admitting new patients during the shift.

Survey

Following the observational portion of the study, participants at JHH completed a cross‐sectional, anonymous survey that asked them to estimate how frequently they currently performed each of the behaviors observed in this study. Response options included the following categories: <20%, 20% to 40%, 40% to 60%, 60% to 80%, or 80% to 100%.

Data Analysis

We determined the percentage of patient visits during which each behavior was performed. Data were analyzed using Student t tests and χ2 tests to evaluate differences by hospital, intern gender, type of shift, and time of day. To account for correlation within subjects and observers, we performed multilevel logistic regression analysis adjusted for clustering at the intern and observer levels. For the survey analysis, the midpoint of each selected response category was used as the basis for comparison. All quantitative analyses were performed in Excel 2010 (Microsoft Corp., Redmond, WA) and Stata/IC version 11 (StataCorp, College Station, TX).
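One way to write the multilevel model, assuming crossed random intercepts for intern and observer (one possible specification; the analysis above is described only as adjustment for clustering at those levels), is:

\[
\operatorname{logit} P(Y_{ijk} = 1) = \beta_0 + \boldsymbol{\beta}^\top \mathbf{x}_{ijk} + u_j + v_k, \qquad u_j \sim N(0, \sigma_u^2),\; v_k \sim N(0, \sigma_v^2),
\]

where \(Y_{ijk}\) indicates whether a given behavior was performed during encounter \(i\) conducted by intern \(j\) and recorded by observer \(k\), and \(\mathbf{x}_{ijk}\) contains the encounter‐level covariates (hospital, intern gender, shift type, time of day).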

RESULTS

A total of 732 inpatient encounters were observed during 118 intern shifts. Interns were observed for a mean of 25 patient encounters each (range, 3–61; standard deviation [SD] 17). Overall, interns introduced themselves 40% of the time and stated their role 37% of the time (Table 2). Interns touched patients on 65% of visits, sat down with patients during 9% of visits, and asked open‐ended questions on 75% of visits. Interns performed all 5 of the behaviors during 4% of the total encounters. The percentage of the 5 behaviors performed by each intern during all observed visits ranged from 24% to 100%, with a mean of 51% (SD 17%) per intern.
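As we interpret it, the per‐intern percentage reflects the share of possible behaviors performed across that intern's observed encounters (our notation):

\[
\mathrm{Score}_j = \frac{\sum_{i=1}^{n_j} \sum_{b=1}^{5} Y_{ijb}}{5\,n_j} \times 100\%,
\]

where \(n_j\) is the number of encounters observed for intern \(j\) and \(Y_{ijb}\) indicates whether behavior \(b\) was performed during encounter \(i\).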

Frequency of Performing Behaviors During Patient Encounters by Site, Intern Gender, and Shift Type

| | Total Encounters, N (%) | Introduced Self (%) | Introduced Role (%) | Touched Patient (%) | Sat Down (%) | Open‐Ended Question (%) |
| Overall | 732 | 40 | 37 | 65 | 9 | 75 |
| JHH | 373 (51) | 35 a,b | 29 a,b | 62 a | 10 | 70 a |
| UMMC | 359 (49) | 45 | 44 | 69 | 8 | 81 |
| Male | 284 (39) | 39 | 35 | 64 | 9 | 74 |
| Female | 448 (61) | 41 | 38 | 67 | 10 | 76 |
| Day shift | 551 (75) | 37 a | 34 a | 65 | 9 | 77 |
| Night shift | 181 (25) | 48 | 45 | 67 | 12 | 71 |
| Admitting shift | 377 (52) | 46 a | 42 a | 63 | 10 | 75 |
| Nonadmitting shift | 355 (48) | 34 | 30 | 69 | 9 | 76 |

NOTE: Abbreviations: JHH, Johns Hopkins Hospital; UMMC, University of Maryland Medical Center.
a P<0.05 in unadjusted bivariate analysis.
b P<0.05 in analysis adjusted for clustering at observer and intern levels.

During night shifts as compared to day shifts, interns were more likely to introduce themselves (48% vs 37%, P=0.01) and their role (45% vs 34%, P<0.01). During shifts in which they admitted patients, as compared to coverage shifts, interns were more likely to introduce themselves (46% vs 34%, P<0.01) and their role (42% vs 30%, P<0.01). Interns at UMMC, as compared to JHH interns, were more likely to introduce themselves (45% vs 35%, P<0.01) and describe their role to patients (44% vs 29%, P<0.01). Interns at UMMC were also more likely to ask open‐ended questions (81% vs 70%, P<0.01) and to touch patients (69% vs 62%, P=0.04). No behaviors varied significantly by intern gender, and performance of the remaining behaviors (touching the patient, sitting down, and asking open‐ended questions) did not vary significantly by time of day or shift type. After adjustment for clustering at the observer and intern levels, differences by institution persisted in the rates of introducing oneself and one's role.

We performed a sensitivity analysis examining the first patient encounters of the day, and found that interns were somewhat more likely to introduce themselves (50% vs 40%, P=0.03) but were not significantly more likely to introduce their role, sit down, ask open‐ended questions, or touch the patient.

Nine of the 10 interns at JHH who participated in the study completed the survey (response rate=90%). Interns estimated introducing themselves and their role and sitting with patients significantly more frequently than was observed (80% vs 40%, P<0.01; 80% vs 37%, P<0.01; and 58% vs 9%, P<0.01, respectively) (Figure 1).

Figure 1
Comparison of observed and self‐reported performance of etiquette‐based communication behaviors among interns at Johns Hopkins Hospital. *P < 0.01 comparing observed and reported values.

DISCUSSION

The interns we observed in 2 urban academic internal medicine residency programs did not routinely practice etiquette‐based communication, and the interns surveyed tended to overestimate their performance of these behaviors. These behaviors are simple to perform and are each associated with improved patient experiences of hospital care. Tackett et al. recently demonstrated that interns are not alone: hospitalist physicians also do not universally practice etiquette‐based medicine, even though these behaviors correlate with patient satisfaction scores.[10]

Introducing oneself to patients may improve patient satisfaction and acceptance of trainee involvement in care.[6] However, only 10% of hospitalized patients in 1 study correctly identified a physician on their inpatient team, demonstrating the need for introductions during each and every inpatient encounter.[11] The interns we observed introduced themselves to patients in only 40% of encounters. During admitting shifts, when the first encounter with a patient likely took place, interns introduced themselves during 46% of encounters.

A comforting touch has been shown to reduce anxiety levels among patients and improve compliance with treatment regimens, but the interns did not touch patients in one‐third of visits, including during admitting shifts. Sixty‐six percent of patients consider a physician's touch comforting, and 58% believe it to be healing.[8]

A randomized trial found that most patients preferred a sitting physician, and believed that practitioners who sat were more compassionate and spent more time with them.[9] Unfortunately, interns sat down with patients in fewer than 10% of encounters.

We do not know why interns do not engage in these simple behaviors, but it is not surprising given that their role models, including hospitalist physicians, do not practice them universally.[10] Personality differences, medical school experiences, and hospital factors such as patient volume and complexity may explain variability in performance.

Importantly, we know that habits learned in residency tend to be retained when physicians enter independent practice.[12] If we want attending physicians to practice etiquette‐based communication, then it must be role modeled, taught, and evaluated during residency by clinical educators and hospitalist physicians. The gap between intern perceptions and actual practice of these behaviors provides a window of opportunity for education and feedback in bedside communication. Attending physicians rate communication skills as 1 of the top values they seek to pass on to house officers.[13] Curricula on communication skills improve physician attitudes and beliefs about the importance of good communication as well as long‐term performance of communication skills.[14]

Our study had several limitations. First, all 732 patient encounters were assessed, regardless of whether the intern had seen the patient previously. This differed slightly from Kahn's assertion that these behaviors be performed at least on the first encounter with the patient. We believe that the need for common courtesy does not diminish after the first visit, and although certain behaviors may not be indicated on 100% of visits, our sensitivity analysis indicated performance of these behaviors was not likely even on the first visit of the day.

Second, our observations were limited to medicine interns at 2 programs in Baltimore during a single month, limiting generalizability. A convenience sample of interns was chosen for recruitment based on rotation on a general medicine rotation during the study month. We observed interns over the course of several shifts and throughout various positions in the call cycle.

Third, in any observational study, the Hawthorne effect is a potential limitation. We attempted to limit this bias by collecting information anonymously and not indicating to the interns which aspects of the patient encounter were being recorded.

Fourth, we defined the behaviors broadly in an attempt to measure the outcomes conservatively and maximize inter‐rater reliability. For instance, we did not differentiate in data collection between comforting touch and physical examination. Because chairs may not be readily available in all patient rooms, we included sitting on the patient's bed or crouching next to the bed as sitting with the patient. Use of open‐ended questions was observed as a more general form of Kahn's recommendation to ask how the patient is feeling.

Fifth, our poststudy survey was conducted 6 months after the observations were performed, used an ordinal rather than continuous response scale, and was limited to only 1 of the 2 programs and 9 of the 29 participants. Given this small sample size, generalizability of the results is limited. Additionally, intern practice of etiquette‐based communication may have improved between the observations and survey that took place 6 months later.

As hospital admissions are a time of vulnerability for patients, physicians can take a basic etiquette‐based communication approach to comfort patients and help them feel more secure. We found that even though interns believed they were practicing Kahn's recommended etiquette‐based communication, only a minority actually were. Curricula on communication styles or environmental changes, such as providing chairs in patient rooms or photographs identifying members of the medical team, may encourage performance of these behaviors.[15]

Acknowledgments

The authors acknowledge Dr. Lisa Cooper, MD, MPH, and Dr. Mary Catherine Beach, MD, MPH, who provided tremendous help in editing. The authors also thank Kevin Wang, whose assistance with observer hiring, training, and management was essential.

Disclosures: The Osler Center for Clinical Excellence at Johns Hopkins and the Johns Hopkins Hospitalist Scholars Fund provided stipends for our observers as well as transportation and logistical costs of the study. The authors report no conflicts of interest.

References
  1. Beck RS, Daughtridge R, Sloane PD. Physician‐patient communication in the primary care office: a systematic review. J Am Board Fam Pract. 2002;15:25–38.
  2. Duggan P, Parrott L. Physicians' nonverbal rapport building and patients' talk about the subjective component of illness. Hum Commun Res. 2001;27:299–311.
  3. Fogarty LA, Curbow BA, Wingard JR, McDonnell K, Somerfield MR. Can 40 seconds of compassion reduce patient anxiety? J Clin Oncol. 1999;17:371–379.
  4. Griffith CH, Wilson J, Langer S, Haist SA. House staff nonverbal communication skills and patient satisfaction. J Gen Intern Med. 2003;18:170–174.
  5. Kahn MW. Etiquette‐based medicine. N Engl J Med. 2008;358:1988–1989.
  6. Francis JJ, Pankratz VS, Huddleston JM. Patient satisfaction associated with correct identification of physician's photographs. Mayo Clin Proc. 2001;76:604–608.
  7. Stewart MA. Effective physician‐patient communication and health outcomes: a review. CMAJ. 1995;152:1423–1433.
  8. Osmun WE, Brown JB, Stewart M, Graham S. Patients' attitudes to comforting touch in family practice. Can Fam Physician. 2000;46:2411–2416.
  9. Strasser F, Palmer JL, Willey J, et al. Impact of physician sitting versus standing during inpatient oncology consultations: patients' preference and perception of compassion and duration. A randomized controlled trial. J Pain Symptom Manage. 2005;29:489–497.
  10. Tackett S, Tad‐Y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908–913.
  11. Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in‐hospital physicians. Arch Intern Med. 2009;169:199–201.
  12. Martin GJ, Curry RH, Yarnold PR. The content of internal medicine residency training and its relevance to the practice of medicine. J Gen Intern Med. 1989;4:304–308.
  13. Wright SM, Carrese JA. Which values do attending physicians try to pass on to house officers? Med Educ. 2001;35:941–945.
  14. Laidlaw TS, Kaufman DM, MacLeod H, Zanten SV, Simpson D, Wrixon W. Relationship of resident characteristics, attitudes, prior training, and clinical knowledge to communication skills performance. Med Educ. 2006;40:18–25.
  15. Dudas R, Lemerman H, Barone M, Serwint J. PHACES (Photographs of academic clinicians and their educational status): a tool to improve delivery of family‐centered care. Acad Pediatr. 2010;10:138–145.
Issue
Journal of Hospital Medicine - 8(11)
Page Number
631-634

Patient‐centered communication may impact several aspects of the patientdoctor relationship including patient disclosure of illness‐related information, patient satisfaction, anxiety, and compliance with medical recommendations.[1, 2, 3, 4] Etiquette‐based medicine, a term coined by Kahn, involves simple patient‐centered communication strategies that convey professionalism and respect to patients.[5] Studies have confirmed that patients prefer physicians who practice etiquette‐based medicine behaviors, including sitting down and introducing one's self.[6, 7, 8, 9] Performance of etiquette‐based medicine is associated with higher Press Ganey patient satisfaction scores. However, these easy‐to‐practice behaviors may not be modeled commonly in the inpatient setting.[10] We sought to understand whether etiquette‐based communication behaviors are practiced by trainees on inpatient medicine rotations.

METHODS

Design

This was a prospective study incorporating direct observation of intern interactions with patients during January 2012 at 2 internal medicine residency programs in Baltimore Maryland, Johns Hopkins Hospital (JHH) and the University of Maryland Medical Center (UMMC). We then surveyed participants from JHH in June 2012 to assess perceptions of their practice of etiquette‐based communication.

Participants and Setting

We observed a convenience sample of 29 internal medicine interns from the 2 institutions. We sought to observe interns over an equal number of hours at both sites and to sample shifts in proportion to the amount of time interns spend on each of these shifts. All interns who were asked to participate in the study agreed and comprised a total of 27% of the 108 interns in the 2 programs. The institutional review board at Johns Hopkins School of Medicine approved the study; the University of Maryland institutional review board deemed it not human subjects research. All observed interns provided informed consent to be observed during 1 to 4 inpatient shifts.

Observers

Twenty‐two undergraduate university students served as the observers for the study and were trained to collect data with the iPod Touch (Apple, Cupertino, CA) without interrupting patient care. We then tested the observers to ensure 85% concordance rate with the researchers in mock observation. Four hours of quality assurance were completed at both institutions during the study. Congruence between observer and research team member was >85% for each hour of observation.

Observation

Observers recorded intern activities on the iPod Touch spreadsheet application. The application allowed for real‐time data entry and direct export of results. The primary dependent variables for this study were 5 behaviors that were assessed each time an intern went into a patient's room. The 5 observed behaviors included (1) introducing one's self, (2) introducing one's role on the medical team, (3) touching the patient, (4) sitting down, and (5) asking the patient at least 1 open‐ended question. These behaviors were chosen for observation because they are central to Kahn's framework of etiquette‐based medicine, applicable to each inpatient encounter, and readily observed by trained nonmedical observers. These behaviors are defined in Table 1. Use of open‐ended questions was observed as a more general form of Kahn's recommendation to ask how the patient is feeling. Interns were not aware of which behaviors were being evaluated.

Observed Behaviors and Definitions
Behavior Definition
Introduced self Providing a name
Introduced role Uses term doctor, resident, intern, or medical team
Sat down Sitting on the bed, in a chair, or crouching if no chair was available during at least part of the encounter
Touched the patient Any form of physical contact that occurred at least once during the encounter including shaking a patient's hand, touching a patient on the shoulder, or performing any part of the physical exam
Asked open‐ended question Asked the patient any question that required more than a yes/no answer

Each time an observed intern entered a patient room, the observer recorded whether or not each of the 5 behaviors was performed, coded as a dichotomous variable. Although data collection was anonymous, observers recorded the team, hospital site, gender of the intern, and whether the intern was admitting new patients during the shift.

Survey

Following the observational portion of the study, participants at JHH completed a cross‐sectional, anonymous survey that asked them to estimate how frequently they currently performed each of the behaviors observed in this study. Response options included the following categories: <20%, 20% to 40%, 40% to 60%, 60% to 80%, or 80% to 100%.

Data Analysis

We determined the percent of patient visits during which each behavior was performed. Data were analyzed using Student t and [2] tests evaluating differences by hospital, intern gender, type of shift, and time of day. To account for correlation within subjects and observers, we performed multilevel logistic regression analysis adjusted for clustering at the intern and observer levels. For the survey analysis, the mean of the response category was used as the basis for comparison. All quantitative analyses were performed in Excel 2010 (Microsoft Corp., Redmond, WA) and Stata/IC version 11 (StataCorp, College Station, TX).

RESULTS

A total of 732 inpatient encounters were observed during 118 intern shifts. Interns were observed for a mean of 25 patient encounters each (range, 361; standard deviation [SD] 17). Overall, interns introduced themselves 40% of the time and stated their role 37% of the time (Table 2). Interns touched patients on 65% of visits, sat down with patients during 9% of visits, and asked open‐ended questions on 75% of visits. Interns performed all 5 of the behaviors during 4% of the total encounters. The percentage of the 5 behaviors performed by each intern during all observed visits ranged from 24% to 100%, with a mean of 51% (SD 17%) per intern.

Frequency of Performing Behaviors During Patient Encounters by Intern Gender and Shift Type
Total Encounters, N (%) Introduced Self (%) Introduced Role (%) Touched Patient (%) Sat Down (%) Open‐Ended Question (%)
  • NOTE: Abbreviations: JHH, Johns Hopkins Hospital; UMMC, University of Maryland Medical Center.

  • P<0.05 in unadjusted bivariate analysis.

  • P<0.05 in analysis adjusted for clustering at observer and intern levels.

Overall 732 40 37 65 9 75
JHH 373 (51) 35ab 29ab 62a 10 70a
UMMC 359 (49) 45 44 69 8 81
Male 284 (39) 39 35 64 9 74
Female 448 (61) 41 38 67 10 76
Day shift 551 (75) 37a 34a 65 9 77
Night shift 181 (25) 48 45 67 12 71
Admitting shift 377 (52) 46a 42a 63 10 75
Nonadmitting shift 355 (48) 34 30 69 9 76

During night shifts as compared to day shifts, interns were more likely to introduce themselves (48% vs 37%, P=0.01) and their role (45% vs 34%, P<0.01). During shifts in which they admitted patients as compared to coverage shifts, interns were more likely to introduce themselves (46% vs 34%, P<0.01) and their role (42% vs 30%, P<0.01). Interns at UMMC as compared to JHH interns were more likely to introduce themselves (45% vs 35%, P<0.01) and describe their role to patients (44% vs 29%, P<0.01). Interns at UMMC were also more likely to ask open‐ended questions (81% vs 70%, P<0.01) and to touch patients (69% vs 62%, P=0.04). Performance of these behaviors did not vary significantly by gender, time of day, or shift. After adjustment for clustering at the observer and intern levels, differences by institution persisted in the rate of introducing oneself and one's role.

We performed a sensitivity analysis examining the first patient encounters of the day, and found that interns were somewhat more likely to introduce themselves (50% vs 40%, P=0.03) but were not significantly more likely to introduce their role, sit down, ask open‐ended questions, or touch the patient.

Nine of the 10 interns at JHH who participated in the study completed the survey (response rate=90%). Interns estimated introducing themselves and their role and sitting with patients significantly more frequently than was observed (80% vs 40%, P<0.01; 80% vs 37%, P<0.01; and 58% vs 9%, P<0.01, respectively) (Figure 1).

Figure 1
Comparison of observed and self‐reported performance of etiquette‐based communication behaviors among interns at Johns Hopkins Hospital. *P < 0.01 comparing observed and reported values.

DISCUSSION

Patient‐centered communication may impact several aspects of the patient‐doctor relationship, including patient disclosure of illness‐related information, patient satisfaction, anxiety, and compliance with medical recommendations.[1, 2, 3, 4] Etiquette‐based medicine, a term coined by Kahn, involves simple patient‐centered communication strategies that convey professionalism and respect to patients.[5] Studies have confirmed that patients prefer physicians who practice etiquette‐based medicine behaviors, including sitting down and introducing oneself.[6, 7, 8, 9] Performance of etiquette‐based medicine is associated with higher Press Ganey patient satisfaction scores. However, these easy‐to‐practice behaviors may not be modeled commonly in the inpatient setting.[10] We sought to understand whether etiquette‐based communication behaviors are practiced by trainees on inpatient medicine rotations.

METHODS

Design

This was a prospective study incorporating direct observation of intern interactions with patients during January 2012 at 2 internal medicine residency programs in Baltimore, Maryland: Johns Hopkins Hospital (JHH) and the University of Maryland Medical Center (UMMC). We then surveyed participants from JHH in June 2012 to assess their perceptions of their practice of etiquette‐based communication.

Participants and Setting

We observed a convenience sample of 29 internal medicine interns from the 2 institutions. We sought to observe interns over an equal number of hours at both sites and to sample shifts in proportion to the amount of time interns spend on each of these shifts. All interns who were asked to participate in the study agreed and comprised a total of 27% of the 108 interns in the 2 programs. The institutional review board at Johns Hopkins School of Medicine approved the study; the University of Maryland institutional review board deemed it not human subjects research. All observed interns provided informed consent to be observed during 1 to 4 inpatient shifts.

Observers

Twenty‐two undergraduate university students served as observers for the study and were trained to collect data with the iPod Touch (Apple, Cupertino, CA) without interrupting patient care. We then tested the observers to ensure an 85% concordance rate with the researchers in mock observations. Four hours of quality assurance were completed at both institutions during the study; congruence between the observer and a research team member was >85% for each hour of observation.
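The study does not describe the tooling behind the concordance check, but the underlying calculation is simple percent agreement. A minimal Python sketch, assuming hypothetical mock‐observation codings and item‐level agreement across the 5 behaviors:

```python
# Minimal sketch of a mock-observation concordance check (illustrative data;
# the study does not report how agreement was tabulated).
observer_codes = [  # one tuple of 5 dichotomous behavior codes per mock encounter
    (1, 1, 0, 0, 1),
    (0, 0, 1, 0, 1),
    (1, 1, 1, 0, 1),
]
researcher_codes = [
    (1, 1, 0, 0, 1),
    (0, 0, 1, 0, 1),
    (1, 0, 1, 0, 1),
]

# Item-level agreement across all behaviors and all encounters.
pairs = [
    (obs_item, res_item)
    for obs_enc, res_enc in zip(observer_codes, researcher_codes)
    for obs_item, res_item in zip(obs_enc, res_enc)
]
concordance = sum(obs == res for obs, res in pairs) / len(pairs)
print(f"Concordance: {concordance:.0%}")  # observers needed to reach at least 85%
```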

Observation

Observers recorded intern activities on the iPod Touch spreadsheet application. The application allowed for real‐time data entry and direct export of results. The primary dependent variables for this study were 5 behaviors that were assessed each time an intern went into a patient's room. The 5 observed behaviors included (1) introducing one's self, (2) introducing one's role on the medical team, (3) touching the patient, (4) sitting down, and (5) asking the patient at least 1 open‐ended question. These behaviors were chosen for observation because they are central to Kahn's framework of etiquette‐based medicine, applicable to each inpatient encounter, and readily observed by trained nonmedical observers. These behaviors are defined in Table 1. Use of open‐ended questions was observed as a more general form of Kahn's recommendation to ask how the patient is feeling. Interns were not aware of which behaviors were being evaluated.

Table 1. Observed Behaviors and Definitions
Introduced self: providing a name.
Introduced role: using the term doctor, resident, intern, or medical team.
Sat down: sitting on the bed, in a chair, or crouching if no chair was available, during at least part of the encounter.
Touched the patient: any form of physical contact that occurred at least once during the encounter, including shaking a patient's hand, touching a patient on the shoulder, or performing any part of the physical exam.
Asked open‐ended question: asking the patient any question that required more than a yes/no answer.

Each time an observed intern entered a patient room, the observer recorded whether or not each of the 5 behaviors was performed, coded as a dichotomous variable. Although data collection was anonymous, observers recorded the team, hospital site, gender of the intern, and whether the intern was admitting new patients during the shift.
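One way to picture the resulting encounter‐level record is sketched below; the field names are hypothetical and stand in only for the dichotomous behavior codes plus the contextual variables (team, site, intern gender, admitting status) that observers recorded anonymously.

```python
from dataclasses import dataclass

@dataclass
class EncounterRecord:
    """One row per time an observed intern entered a patient room.

    Field names are illustrative; they are not taken from the study's instrument.
    """
    team: str                # inpatient team identifier
    hospital: str            # "JHH" or "UMMC"
    intern_gender: str       # recorded without identifying the intern
    admitting_shift: bool    # whether the intern was admitting new patients that shift
    introduced_self: bool    # each of the 5 behaviors coded as performed / not performed
    introduced_role: bool
    touched_patient: bool
    sat_down: bool
    asked_open_ended: bool

# Example encounter: the intern introduced herself, touched the patient, and
# asked an open-ended question, but did not state her role or sit down.
example = EncounterRecord(
    team="A", hospital="JHH", intern_gender="F", admitting_shift=True,
    introduced_self=True, introduced_role=False, touched_patient=True,
    sat_down=False, asked_open_ended=True,
)
```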

Survey

Following the observational portion of the study, participants at JHH completed a cross‐sectional, anonymous survey that asked them to estimate how frequently they currently performed each of the behaviors observed in this study. Response options included the following categories: <20%, 20% to 40%, 40% to 60%, 60% to 80%, or 80% to 100%.

Data Analysis

We determined the percentage of patient visits during which each behavior was performed. Data were analyzed using Student t and chi‐square tests to evaluate differences by hospital, intern gender, type of shift, and time of day. To account for correlation within subjects and observers, we performed multilevel logistic regression analysis adjusted for clustering at the intern and observer levels. For the survey analysis, the mean of each response category was used as the basis for comparison. All quantitative analyses were performed in Excel 2010 (Microsoft Corp., Redmond, WA) and Stata/IC version 11 (StataCorp, College Station, TX).
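The published analyses were run in Excel and Stata; the Python sketch below illustrates roughly equivalent steps on synthetic data with hypothetical column names. The cluster‐robust logistic regression is a simplification of the multilevel model (it adjusts only for clustering on the intern, not the observer), and the ordinal‐to‐midpoint mapping for the survey is an assumption about how the category means were assigned.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Synthetic encounter-level data standing in for the study's records.
rng = np.random.default_rng(0)
n = 732
df = pd.DataFrame({
    "hospital": rng.choice(["JHH", "UMMC"], size=n),
    "intern_id": rng.integers(0, 29, size=n),
})
df["introduced_self"] = rng.binomial(1, np.where(df["hospital"] == "UMMC", 0.45, 0.35))

# Unadjusted bivariate comparison: chi-square test of introduction rates by hospital.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["hospital"], df["introduced_self"]))
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Logistic regression with standard errors clustered on the intern; a
# simplification of the multilevel model described above.
fit = smf.logit("introduced_self ~ C(hospital)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["intern_id"]}, disp=False
)
print(fit.summary())

# Survey comparison: represent each ordinal response category by its midpoint
# before comparing self-reported with observed frequencies (assumed mapping).
midpoints = {"<20%": 10, "20-40%": 30, "40-60%": 50, "60-80%": 70, "80-100%": 90}
```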

RESULTS

A total of 732 inpatient encounters were observed during 118 intern shifts. Interns were observed for a mean of 25 patient encounters each (range, 3-61; standard deviation [SD] 17). Overall, interns introduced themselves 40% of the time and stated their role 37% of the time (Table 2). Interns touched patients on 65% of visits, sat down with patients during 9% of visits, and asked open‐ended questions on 75% of visits. Interns performed all 5 of the behaviors during 4% of the total encounters. The percentage of the 5 behaviors performed by each intern during all observed visits ranged from 24% to 100%, with a mean of 51% (SD 17%) per intern.

Table 2. Frequency of Performing Behaviors During Patient Encounters by Intern Gender and Shift Type

Group | Total Encounters, N (%) | Introduced Self (%) | Introduced Role (%) | Touched Patient (%) | Sat Down (%) | Open‐Ended Question (%)
Overall | 732 | 40 | 37 | 65 | 9 | 75
JHH | 373 (51) | 35 (a,b) | 29 (a,b) | 62 (a) | 10 | 70 (a)
UMMC | 359 (49) | 45 | 44 | 69 | 8 | 81
Male | 284 (39) | 39 | 35 | 64 | 9 | 74
Female | 448 (61) | 41 | 38 | 67 | 10 | 76
Day shift | 551 (75) | 37 (a) | 34 (a) | 65 | 9 | 77
Night shift | 181 (25) | 48 | 45 | 67 | 12 | 71
Admitting shift | 377 (52) | 46 (a) | 42 (a) | 63 | 10 | 75
Nonadmitting shift | 355 (48) | 34 | 30 | 69 | 9 | 76

NOTE: Abbreviations: JHH, Johns Hopkins Hospital; UMMC, University of Maryland Medical Center. (a) P<0.05 in unadjusted bivariate analysis; (b) P<0.05 in analysis adjusted for clustering at observer and intern levels.

During night shifts as compared to day shifts, interns were more likely to introduce themselves (48% vs 37%, P=0.01) and their role (45% vs 34%, P<0.01). During shifts in which they admitted patients as compared to coverage shifts, interns were more likely to introduce themselves (46% vs 34%, P<0.01) and their role (42% vs 30%, P<0.01). Interns at UMMC were more likely than interns at JHH to introduce themselves (45% vs 35%, P<0.01) and to describe their role to patients (44% vs 29%, P<0.01). Interns at UMMC were also more likely to ask open‐ended questions (81% vs 70%, P<0.01) and to touch patients (69% vs 62%, P=0.04). Performance of the behaviors did not vary significantly by intern gender, and rates of touching patients, sitting down, and asking open‐ended questions did not differ by time of day or shift type. After adjustment for clustering at the observer and intern levels, the differences by institution in the rates of introducing oneself and one's role persisted.

We performed a sensitivity analysis examining the first patient encounters of the day, and found that interns were somewhat more likely to introduce themselves (50% vs 40%, P=0.03) but were not significantly more likely to introduce their role, sit down, ask open‐ended questions, or touch the patient.

Nine of the 10 interns at JHH who participated in the study completed the survey (response rate=90%). Interns estimated introducing themselves and their role and sitting with patients significantly more frequently than was observed (80% vs 40%, P<0.01; 80% vs 37%, P<0.01; and 58% vs 9%, P<0.01, respectively) (Figure 1).

Figure 1. Comparison of observed and self‐reported performance of etiquette‐based communication behaviors among interns at Johns Hopkins Hospital. *P<0.01 comparing observed and reported values.

DISCUSSION

The interns we observed in 2 urban academic internal medicine residency programs did not routinely practice etiquette‐based communication, and the interns surveyed tended to overestimate their performance of these behaviors. These behaviors are simple to perform and are each associated with improved patient experiences of hospital care. Tackett et al. recently demonstrated that interns are not alone: hospitalist physicians do not universally practice etiquette‐based medicine, even though these behaviors correlate with patient satisfaction scores.[10]

Introducing oneself to patients may improve patient satisfaction and acceptance of trainee involvement in care.[6] However, only 10% of hospitalized patients in 1 study correctly identified a physician on their inpatient team, demonstrating the need for introductions during each and every inpatient encounter.[11] The interns we observed introduced themselves to patients in only 40% of encounters. During admitting shifts, when the first encounter with a patient likely took place, interns introduced themselves during 46% of encounters.

A comforting touch has been shown to reduce anxiety levels among patients and improve compliance with treatment regimens, but the interns did not touch patients in one‐third of visits, including during admitting shifts. Sixty‐six percent of patients consider a physician's touch comforting, and 58% believe it to be healing.[8]

A randomized trial found that most patients preferred a sitting physician, and believed that practitioners who sat were more compassionate and spent more time with them.[9] Unfortunately, interns sat down with patients in fewer than 10% of encounters.

We do not know why interns do not engage in these simple behaviors, but it is not surprising given that their role models, including hospitalist physicians, do not practice them universally.[10] Personality differences, medical school experiences, and hospital factors such as patient volume and complexity may explain variability in performance.

Importantly, we know that habits learned in residency tend to be retained when physicians enter independent practice.[12] If we want attending physicians to practice etiquette‐based communication, then it must be role modeled, taught, and evaluated during residency by clinical educators and hospitalist physicians. The gap between intern perceptions and actual practice of these behaviors provides a window of opportunity for education and feedback in bedside communication. Attending physicians rate communication skills as 1 of the top values they seek to pass on to house officers.[13] Curricula on communication skills improve physician attitudes and beliefs about the importance of good communication as well as long‐term performance of communication skills.[14]

Our study had several limitations. First, all 732 patient encounters were assessed, regardless of whether the intern had seen the patient previously. This differed slightly from Kahn's assertion that these behaviors be performed at least on the first encounter with the patient. We believe that the need for common courtesy does not diminish after the first visit, and although certain behaviors may not be indicated on every visit, our sensitivity analysis indicated that these behaviors were performed infrequently even on the first visit of the day.

Second, our observations were limited to medicine interns at 2 programs in Baltimore during a single month, limiting generalizability. Interns were recruited as a convenience sample based on their assignment to a general medicine rotation during the study month. However, we observed interns over the course of several shifts and at various positions in the call cycle.

Third, in any observational study, the Hawthorne effect is a potential limitation. We attempted to limit this bias by collecting information anonymously and not indicating to the interns which aspects of the patient encounter were being recorded.

Fourth, we defined the behaviors broadly in an attempt to measure the outcomes conservatively and maximize inter‐rater reliability. For instance, we did not differentiate in data collection between comforting touch and physical examination. Because chairs may not be readily available in all patient rooms, we included sitting on the patient's bed or crouching next to the bed as sitting with the patient. Use of open‐ended questions was observed as a more general form of Kahn's recommendation to ask how the patient is feeling.

Fifth, our poststudy survey was conducted 6 months after the observations were performed, used an ordinal rather than continuous response scale, and was limited to only 1 of the 2 programs and 9 of the 29 participants. Given this small sample size, the generalizability of the survey results is limited. Additionally, intern practice of etiquette‐based communication may have improved during the 6 months between the observations and the survey.

As hospital admissions are a time of vulnerability for patients, physicians can take a basic etiquette‐based communication approach to comfort patients and help them feel more secure. We found that even though interns believed they were practicing Kahn's recommended etiquette‐based communication, only a minority actually were. Curricula on communication styles or environmental changes, such as providing chairs in patient rooms or photographs identifying members of the medical team, may encourage performance of these behaviors.[15]

Acknowledgments

The authors acknowledge Lisa Cooper, MD, MPH, and Mary Catherine Beach, MD, MPH, for their tremendous help in editing. The authors also thank Kevin Wang, whose assistance with observer hiring, training, and management was essential.

Disclosures: The Osler Center for Clinical Excellence at Johns Hopkins and the Johns Hopkins Hospitalist Scholars Fund provided stipends for our observers as well as transportation and logistical costs of the study. The authors report no conflicts of interest.

References
  1. Beck RS, Daughtridge R, Sloane PD. Physician‐patient communication in the primary care office: a systematic review. J Am Board Fam Pract. 2002;15:25-38.
  2. Duggan P, Parrott L. Physicians' nonverbal rapport building and patients' talk about the subjective component of illness. Hum Commun Res. 2001;27:299-311.
  3. Fogarty LA, Curbow BA, Wingard JR, McDonnell K, Somerfield MR. Can 40 seconds of compassion reduce patient anxiety? J Clin Oncol. 1999;17:371-379.
  4. Griffith CH, Wilson J, Langer S, Haist SA. House staff nonverbal communication skills and patient satisfaction. J Gen Intern Med. 2003;18:170-174.
  5. Kahn MW. Etiquette‐based medicine. N Engl J Med. 2008;358:1988-1989.
  6. Francis JJ, Pankratz VS, Huddleston JM. Patient satisfaction associated with correct identification of physician's photographs. Mayo Clin Proc. 2001;76:604-608.
  7. Stewart MA. Effective physician‐patient communication and health outcomes: a review. CMAJ. 1995;152:1423-1433.
  8. Osmun WE, Brown JB, Stewart M, Graham S. Patients' attitudes to comforting touch in family practice. Can Fam Physician. 2000;46:2411-2416.
  9. Strasser F, Palmer JL, Willey J, et al. Impact of physician sitting versus standing during inpatient oncology consultations: patients' preference and perception of compassion and duration. A randomized controlled trial. J Pain Symptom Manage. 2005;29:489-497.
  10. Tackett S, Tad‐Y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  11. Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in‐hospital physicians. Arch Intern Med. 2009;169:199-201.
  12. Martin GJ, Curry RH, Yarnold PR. The content of internal medicine residency training and its relevance to the practice of medicine. J Gen Intern Med. 1989;4:304-308.
  13. Wright SM, Carrese JA. Which values do attending physicians try to pass on to house officers? Med Educ. 2001;35:941-945.
  14. Laidlaw TS, Kaufman DM, MacLeod H, Zanten SV, Simpson D, Wrixon W. Relationship of resident characteristics, attitudes, prior training, and clinical knowledge to communication skills performance. Med Educ. 2006;40:18-25.
  15. Dudas R, Lemerman H, Barone M, Serwint J. PHACES (Photographs of academic clinicians and their educational status): a tool to improve delivery of family‐centered care. Acad Pediatr. 2010;10:138-145.
© 2013 Society of Hospital Medicine
Address for correspondence and reprint requests: Lauren Block, MD, Assistant Professor, North Shore–LIJ Hofstra School of Medicine, 2001 Marcus Ave., Suite S160, Lake Success, NY 11042; Telephone: 516–519‐5600; Fax: 516–519‐5601; E‐mail: lblock2@nshs.edu