Pocket card and dedicated feedback session to improve feedback to ward residents: A randomized trial

Lauren Peccoralo, MD
Brookdale Department of Geriatrics, Samuel M. Bronfman Department of Internal Medicine, Mount Sinai School of Medicine, New York, New York
Department of Medical Education, Mount Sinai School of Medicine, New York, New York
Lauren.peccoralo@mssm.edu

Feedback has long been recognized as pivotal to the attainment of clinical acumen and skills in medical training.1 Formative feedback can give trainees insight into their strengths and weaknesses, and provide them with clear goals and methods to attain those goals.1, 2 In fact, feedback given regularly over time by a respected figure has been shown to improve physician performance.3 However, most faculty are not trained to provide effective feedback. As a result, supervisors often believe they are giving more feedback than trainees believe they are receiving, and residents receive little feedback that they perceive as useful.4 Most residents receive little to no feedback on their communication skills4 or professionalism,5 and rarely receive corrective feedback.6, 7

Faculty may fail to give feedback to residents for a number of reasons. The barriers most commonly cited in the literature are discomfort with criticizing residents,6, 7 lack of time,4 and lack of direct observation of residents in clinical settings.8-10 Several studies have examined tools to guide feedback and address the barrier of discomfort with criticism.6, 7, 11 Some showed improvements in overall feedback, though often supervisors gave only positive feedback and avoided giving feedback about weaknesses.6, 7, 11 Despite the recognition of lack of time as a barrier to feedback,4 most studies of feedback interventions thus far have not included setting aside time for the feedback to occur.6, 7, 11, 12 Finally, a number of studies utilized objective structured clinical examinations (OSCEs) coupled with immediate feedback to improve direct observation of residents, with success in improving feedback related to the encounter.9, 10, 13 To fill these gaps in the literature, our study targeted 2 specific barriers to feedback for residents: lack of time and discomfort with giving feedback.

The aim of this study was to improve Internal Medicine (IM) residents' and attendings' experiences with feedback on the wards using a pocket card and a dedicated feedback session. We developed and evaluated the pocket feedback card and session for faculty to improve the quality and frequency of their feedback to residents in the inpatient setting. We performed a randomized trial to evaluate our intervention. We hypothesized that the intervention would: 1) improve the quality and quantity of attendings' feedback given to IM ward residents; and 2) improve attendings' comfort with feedback delivery on the wards.

PARTICIPANTS AND METHODS

Setting

The study was performed at Mount Sinai Medical Center in New York City, New York, between July 2008 and January 2009.

Participants

Participants in this study were IM residents and ward teaching attendings on inpatient ward teams at Mount Sinai Medical Center from July 2008 to January 2009. There are 12 ward teams on 3 inpatient services (each service has 4 teams) during each block at our hospital. Ward teams are made up of 1 teaching attending, 1 resident, 1 to 3 interns, and 1 to 2 medical students. The majority of attendings are on the ward service for 4‐week blocks, but some are only on for 1 or 2 weeks. Teams included in the randomization were the General Medicine and Gastroenterology/Cardiology service teams. Half of the General Medicine service attendings are hospitalists. Ward teams were excluded from the study randomization if the attending on the team was on the wards for less than 2 weeks, or if the attending had already been assigned to the experimental group in a previous block, given the influence of having used the card and feedback session previously. Since residents were unaware of the intervention and random assignments were based on attendings, residents could be assigned to the intervention group or the control group on any given inpatient rotation. Therefore, a resident could be in the control group in 1 block and the intervention group in his/her next block on the wards or vice versa, or could be assigned to either the intervention or the control group on more than 1 occasion. Because resident participants were blinded to their team's assignment (as intervention or control) and all surveys were anonymous (tracked as intervention or control by the team name only), it was not possible to exclude residents based on their prior participation or to match the surveys completed by the same residents.

Study Design

We performed a prospective randomized study to evaluate our educational innovation. The unit of randomization was the ward team. For each block, approximately half of the 6-8 eligible teams were randomized to the intervention group and half to the control group. Randomization assignments were completed the day before the start of the block using random allocation software applied to the ward team letters (blind to the attending and resident names). Of the 48 possible ward teams (8 teams per block over 6 blocks), 36 were randomized to the intervention or control groups, and 12 were excluded based on the criteria above. Of the 36 teams, 16 (composed of 16 attendings and 48 residents and interns) were randomized to the intervention group, and 20 (composed of 20 attendings and 63 residents and interns) were randomized to the control group.
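The team-level allocation described above can be sketched as follows. This is an illustrative reconstruction only: the study used dedicated random allocation software, and the team letters and seed here are hypothetical.

```python
import random

def randomize_teams(team_letters, seed=None):
    """Shuffle ward team letters and split them into intervention and
    control halves, blind to attending and resident names."""
    rng = random.Random(seed)
    shuffled = rng.sample(list(team_letters), k=len(team_letters))
    half = len(shuffled) // 2
    return sorted(shuffled[:half]), sorted(shuffled[half:])

# Example: 8 eligible teams in a block, roughly half to each arm
intervention, control = randomize_teams("ABCDEFGH", seed=42)
```

Randomizing on team letters rather than names is what keeps the allocation blind to the individuals staffing each team.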

The study was blinded such that residents and attendings in the control group were unaware of the study. The Mount Sinai Institutional Review Board and Grants and Contracts Office exempted the study from review as an evaluation of the effectiveness of an instructional technique in medical education.

Intervention Design

We designed a pocket feedback card to guide a feedback session and assist attendings in giving useful feedback to IM residents on the wards (Figure 1).14 The individual items and categories were adapted from the Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements Core Competencies section and were revised via the expert consensus of the authors.14 We included 20 items related to resident skills, knowledge, attitudes, and behaviors important to the care of hospitalized patients, grouped under the 6 ACGME core competency domains.14 Many of these items correspond to competencies in the Society of Hospital Medicine (SHM) Core Competencies; in particular, the categories of Systems‐Based Practice and Practice‐Based Learning mirror competencies in the SHM Core Competencies Healthcare Systems chapter.15 Each item utilized a 5‐point Likert scale (1 = very poor, 3 = at expected level, 5 = superior) to evaluate resident performance (Figure 1). We created this card to serve as a directive and specific guide for attendings to provide feedback about specific domains and to give more constructive feedback. The card was to be used during a specific dedicated feedback session in order to overcome the commonly cited barrier of lack of time.

Figure 1
Inpatient housestaff feedback guide—mid‐rotation.

Program Implementation

On the first day of the block, both groups of attendings received the standard inpatient ward orientation given by the program director, including instructions about teaching and administrative responsibilities, and explicit instructions to provide mid‐rotation feedback to residents. Attendings randomized to the intervention group had an additional 5‐minute orientation given by 1 of the investigators. The orientation included a brief discussion on the importance of feedback and an introduction to the items on the card.2 In addition, faculty were instructed to dedicate 1 mid‐rotation attending rounds as a feedback session, to meet individually for 10‐15 minutes with each of the 3‐4 residents on their team, and to use the card to provide feedback on skills in each domain. As noted on the feedback card, if a resident scored less than 3 on a skill set, the attending was instructed to give examples of skills within that domain needing improvement and to offer suggestions for improvement. The intervention group was also asked not to discuss the card or session with others. No other instructions were provided.
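The card's "score below 3" rule lends itself to a simple sketch. The item names and ratings below are hypothetical, not the card's actual wording; the logic is just the instruction given to attendings: any domain rated below "at expected level" should be discussed with concrete examples and suggestions.

```python
def items_needing_improvement(scores):
    """Given a mapping of card item -> 1-5 Likert rating
    (1 = very poor, 3 = at expected level, 5 = superior),
    return the items the attending should address with specific
    examples and suggestions for improvement."""
    return sorted(item for item, rating in scores.items() if rating < 3)

# Hypothetical mid-rotation ratings for one resident
ratings = {"Patient Care": 4, "Communication": 2, "Medical Knowledge": 3}
flagged = items_needing_improvement(ratings)
```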

Survey Design

At the end of each block, residents and attendings in both groups completed questionnaires to assess satisfaction with, and attitudes toward, feedback (Supporting Information Appendices 1 and 2 in the online version of this article). Survey questions were based on the competency areas included in the feedback card, previously published surveys evaluating feedback interventions,5, 9, 11 and expert opinion. The resident survey was designed to address the impact of feedback on the domains of resident knowledge, clinical and communication skills, and attitudes about feedback from supervisors and peers. We utilized a 5‐point Likert scale including: strongly disagree, disagree, neutral, agree, and strongly agree. The attending survey addressed attendings' satisfaction with feedback encounters and resident performance. At the completion of the study, investigators compared responses in intervention and control groups.

Statistical Analysis

For purposes of analysis, because certain answer choices drew relatively few responses, the Likert scale was converted to a dichotomous variable. The responses of agree and strongly agree were coded as agree; disagree, strongly disagree, and neutral were coded as disagree. Neutral was coded as disagree in order to avoid overestimating positive attitudes, in effect biasing our results toward the null hypothesis. Differences between groups were analyzed using a 2-sided Fisher's exact test.
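The dichotomization and significance test can be reproduced with a short stdlib-only sketch. The study used standard statistical software; this hand-rolled implementation of the 2-sided Fisher's exact test is ours, included only to make the analysis concrete.

```python
from math import comb

def dichotomize(responses):
    """Collapse 5-point Likert responses: agree/strongly agree -> agree;
    neutral, disagree, strongly disagree -> disagree (toward the null)."""
    agree = sum(r in ("agree", "strongly agree") for r in responses)
    return agree, len(responses) - agree

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def prob(x):  # P(cell (0,0) = x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Table 1, "skills I did well": intervention 35/39 agree, control 35/55 agree
p = fisher_exact_2sided(35, 4, 35, 20)
```

Run on the first significant row of Table 1, this reproduces the reported significance (P < 0.05) for the intervention-control difference.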

Qualitative Interviews

To understand the relative contributions of the feedback card versus the feedback session, we performed a qualitative survey of attendings in the intervention group. Following the conclusion of the study period, we selected a convenience sample of 8 attendings from the intervention group for brief qualitative interviews. We asked 3 basic questions: Was the intervention of the feedback card and dedicated time for feedback useful? Did you find one component, either the card or the dedicated time for feedback, more useful than the other? Were there any negative effects on patient care, education, or other areas from using an attending rounds as a feedback session? These data were coded and analyzed for common themes.

RESULTS

During the 6‐month study period, 34 teaching attendings (over 36 attending inpatient blocks) and 93 IM residents (over 111 resident inpatient blocks) participated in the study. Thirty‐four of 36 attending surveys and 96 of 111 resident surveys were completed. The overall survey response rates for residents and attendings were 85% and 94%, respectively. Two attendings participated during 2 separate blocks, first in the control group and then in the intervention group, and 18 residents participated during 2 separate blocks. No attendings or residents participated more than twice.

The resident survey response rate was 81.2% in the intervention group and 87.3% in the control group (Table 1). Residents in the intervention group reported receiving more feedback regarding skills they did well (89.7% vs 63.6%, P = 0.004) and skills needing improvement (51.3% vs 25.5%, P = 0.02) than those in the control group. In addition, more intervention residents reported receiving useful information regarding how to improve their skills (53.8% vs 27.3%, P = 0.01), and reported actually improving both their clinical skills (61.5% vs 27.8%, P = 0.001) and their professionalism/communication skills (51.3% vs 29.1%, P = 0.03) based on feedback received from attendings.

Resident Responses on the End of Block Feedback Survey

Survey Item | Resident Intervention Agree* % (No.), N = 39 | Resident Control Agree* % (No.), N = 55 | P Value
I did NOT receive a sufficient amount of feedback from my attending supervisor(s) this block. | 20.5 (8) | 38.2 (21) | 0.08
I received feedback from my attending regarding skills I did well during this block. | 89.7 (35) | 63.6 (35) | 0.004
I received feedback from my attending regarding specific skills that needed improvement during this block. | 51.3 (20) | 25.5 (14) | 0.02
I received useful information from my attending about how to improve my skills during this block. | 53.8 (21) | 27.3 (15) | 0.01
I improved my clinical skills based on feedback I received from my attending this block. | 61.5 (24) | 27.8 (15) | 0.001
I improved my professionalism/communication skills based on feedback I received from my attending this block. | 51.3 (20) | 29.1 (16) | 0.03
I improved my knowledge base because of feedback I received from my attending this block. | 64.1 (25) | 60.0 (33) | 0.83
The feedback I received from my attending this block gave me an overall sense of my performance more than it helped me identify specific areas for improvement. | 64.1 (25) | 65.5 (36) | 1.0
Feedback from colleagues (other interns and residents) is more helpful than feedback from attendings. | 41.0 (16) | 43.6 (24) | 0.84
Independent of feedback received from others, I am able to identify areas in which I need improvement. | 84.6 (33) | 80.0 (44) | 0.60

  • Agree is the collapsed variable including the responses of agree and strongly agree.

  • Data analyzed using a 2-sided Fisher's exact test.

The attending survey response rates for the intervention and control groups were 100% and 90%, respectively. In general, both groups of attendings reported that they were comfortable giving feedback and that they did, in fact, give feedback in each area during their ward block (Table 2). More intervention attendings felt that at least 1 of their residents improved their professionalism/communication skills based on the feedback given (76.9% vs 31.1%, P = 0.02). There were no other significant differences between the groups of attendings.

Attending Responses on the End of Block Feedback Survey

Survey Item | Attending Intervention Agree* % (No.), N = 16 | Attending Control Agree* % (No.), N = 18 | P Value
Giving feedback to housestaff was DIFFICULT this block. | 6.3 (1) | 16.7 (3) | 0.60
I was comfortable giving feedback to my housestaff this block. | 93.8 (15) | 94.4 (17) | 1.00
I did NOT give a sufficient amount of feedback to my housestaff this block. | 18.8 (3) | 38.9 (7) | 0.27
My skills in giving feedback improved during this block. | 50.0 (8) | 16.7 (3) | 0.07
I gave feedback to housestaff regarding skills they did well during this block. | 100 (16) | 94.4 (17) | 1.00
I gave feedback to housestaff which targeted specific areas for their improvement. | 81.3 (13) | 70.6 (12) | 0.69
At least one of my housestaff improved his/her clinical skills based on feedback I gave this block. | 68.8 (11) | 47.1 (8) | 0.30
At least one of my housestaff improved his/her professionalism/communication skills based on feedback I gave this block. | 76.9 (10) | 31.1 (5) | 0.02
At least one of my housestaff improved his/her fund of knowledge based on feedback I gave this block. | 50.0 (8) | 52.9 (9) | 1.00
Housestaff found the feedback I gave them useful. | 66.7 (10) | 62.5 (10) | 1.00
I find it DIFFICULT to find time during inpatient rotations to give feedback to residents regarding their performance. | 50.0 (8) | 33.3 (6) | 0.49

  • Agree is the collapsed variable including the responses of agree and strongly agree.

  • Data analyzed using a 2-sided Fisher's exact test.

Intervention attendings also shared their attitudes toward the feedback card and session. A majority felt that using 1 attending rounds as a feedback session helped create a dedicated time for giving feedback (68.8%), and that the feedback card helped them to give specific, constructive feedback (62.5%). Most attendings reported they would use the feedback card and session again during future inpatient blocks (81%), and would recommend them to other attendings (75%).

Qualitative data from the intervention attending interviews revealed further thoughts about the feedback card and feedback session. Most attendings interviewed (7/8) felt that the card was useful for the structure and topic guidance it provided. Half felt that setting aside time for feedback was the more useful component. The other half reported that, because they usually set aside time for feedback regardless, the card was more useful. None of the attendings felt that the feedback card or session was detrimental to patient care or education, and many said that the intervention had positive effects on these areas. For example, 1 attending said that the session added to patient care "because I used particular [patient] cases as examples for giving feedback."

DISCUSSION

In this randomized study, we found that a simple pocket feedback card and dedicated feedback session were acceptable to ward attendings and improved resident satisfaction with feedback. Unlike most prior studies of feedback, we demonstrated more feedback around skills needing improvement, and intervention residents felt the feedback they received helped them improve their skills. Our educational intervention was unique in that it combined a pocket card to structure feedback content with dedicated time to structure the feedback process, addressing 2 of the major barriers to giving feedback: lack of time and lack of comfort.

The pocket card itself as a tool for improving feedback is innovative and valuable. As a short but directive guide, the card supports attendings' delivery of relevant and specific feedback about residents' performance, and because it is based on the ACGME competencies, it may help attendings focus feedback on areas in which they will later evaluate residents. The inclusion of a prespecified time for giving feedback was important as well, in that it allowed for face‐to‐face feedback to occur, as opposed to a passing comment after a presentation or brief notes in a written final evaluation. Both the card and the feedback session seemed equally important for the success of this intervention, with attitudes varying based on individual attending preferences. Those who usually set aside time for feedback on their own found the card more useful, whereas those who had more trouble finding time for feedback found the specific session more useful. Most attendings found the intervention as a whole helpful, and without any detrimental effects on patient care or education. The card and session may be particularly valuable for hospital attendings, given their growing presence as teachers and supervisors for residents, and their busy days on the wards.

Our study results have important implications for resident training in the hospital. Improving resident receipt of feedback about strengths and weaknesses is an ACGME training requirement, and specific guidance about how to improve skills is critical for focusing improvement efforts. Previous studies have demonstrated that directive feedback in medical training can lead to a variety of performance improvements, including better evaluations by other professionals,9, 16 and objective improvements in resident communication skills,17 chart documentation,18 and clinical management of patients.11, 15, 19 By improving the quality of feedback across several domains and facilitating the feedback process, our intervention may lead to similar improvements. Future studies should examine the global impact of guided feedback as in our study. Perhaps most importantly, attendings found the intervention acceptable and would recommend its use, implying longer term sustainability of its integration into the hospital routine.

One strength of our study was its prospective randomized design. Despite the importance of rigor in medical education research, there remains a paucity of randomized studies evaluating educational interventions for residents in inpatient settings. Few studies of feedback interventions in particular have used randomized trials,5, 6, 11 and only one has examined a feedback intervention in a randomized fashion in the inpatient setting.12 That study evaluated a 20-minute educational intervention and a reminder card for supervising attendings to improve written and verbal feedback to residents; it modestly increased the amount of verbal feedback given to residents but, unlike our study, did not by report affect the number of residents receiving mid-rotation feedback or feedback overall.12

There were several important limitations to our study. First, because this was a single institution study, we only achieved modest sample sizes, particularly in the attending groups, and were unable to assess all of the differences in attending attitudes related to feedback. Second, control and intervention participants were on service simultaneously, which may have led to contamination of the control group and an underestimation of the true impact of our intervention. Since residents were not exclusive to 1 study group on 1 occasion (18 of the 93 residents participated during 2 separate blocks), our results may be biased. In particular, those residents who had the intervention first, and were subsequently in the control group, may have rated the control experience worse than they would have otherwise, creating a bias in favor of a positive result for our intervention. Nonetheless, we believe this situation was uncommon and the potential associated bias minimal. Further, this study assessed attitudes related to feedback and self‐reported knowledge and skills, but did not directly assess resident knowledge, skills, or patient outcomes. We recognize the importance of these outcomes and hope that future interventions can determine these important downstream effects of feedback. We were also unable to assess the card and session's impact on attendings' comfort with feedback, because most attendings in both groups reported feeling comfortable giving feedback. This result may indicate that attendings actually are comfortable giving feedback, or may suggest some element of social desirability bias. Finally, in this study, we designed an intervention which combined the pocket card and dedicated feedback time. We did not quantitatively examine the effect of either component alone, and it is unclear if offering the feedback card without protected time or offering protected time without a guide would have impacted feedback on the wards. 
However, qualitative data from our study support the use of both components, and implementing the 2 components together is feasible in any inpatient teaching setting.

Despite these limitations, protected time for feedback guided by a pocket feedback card is a simple intervention that appears to improve feedback quantity and quality for ward residents, and guides them to improve their performance. Our low‐intensity intervention helped attendings give residents the tools to improve their clinical and communication skills. An opportunity to make a positive impact on resident education with such a small intervention is rare. The use of a feedback card with protected feedback time could be easily implemented in any training program, and is a valuable tool for busy hospitalists who are more commonly supervising residents on their inpatient rotations.

References
  1. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-781.
  2. Hewson MG, Little ML. Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998;13(2):111-116.
  3. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach. 2006;28(2):117-128.
  4. Hutul OA, Carpenter RO, Tarpley JL, Lomis KD. Missed opportunities: a descriptive assessment of teaching and attitudes regarding communication skills in a surgical residency. Curr Surg. 2006;63(6):401-409.
  5. Stark R, Korenstein D, Karani R. Impact of a 360-degree professionalism assessment on faculty comfort and skills in feedback delivery. J Gen Intern Med. 2008;23(7):969-972.
  6. Bandiera G, Lendrum D. Daily encounter cards facilitate competency-based feedback while leniency bias persists. CJEM. 2008;10(1):44-50.
  7. Burack JH, Irby DM, Carline JD, Root RK, Larson EB. Teaching compassion and respect. Attending physicians' responses to problematic behaviors. J Gen Intern Med. 1999;14(1):49-55.
  8. Holmboe ES. Faculty and the observation of trainees' clinical skills: problems and opportunities. Acad Med. 2004;79(1):16-22.
  9. Dorfsman ML, Wolfson AB. Direct observation of residents in the emergency department: a structured educational program. Acad Emerg Med. 2009;16(4):343-351.
  10. Donato AA, Pangaro L, Smith C, et al. Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial. Med Educ. 2008;42(12):1234-1242.
  11. Humphrey-Murto S, Khalidi N, Smith CD, et al. Resident evaluations: the use of daily evaluation forms in rheumatology ambulatory care. J Rheumatol. 2009;36(6):1298-1303.
  12. Holmboe ES, Fiebach NH, Galaty LA, Huot S. Effectiveness of a focused educational intervention on resident evaluations from faculty: a randomized controlled trial. J Gen Intern Med. 2001;16(7):427-434.
  13. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874-881.
  14. Internal Medicine Program Requirements. ACGME. July 1, 2009. Available at: http://www.acgme.org/acWebsite/downloads/RRC_progReq/140_internal_medicine_07012009.pdf. Accessed November 8, 2009.
  15. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ. How to use the core competencies in hospital medicine: a framework for curriculum development. J Hosp Med. 2006;1(suppl 1):57-67.
  16. Clay AS, Que L, Petrusa ER, Sebastian M, Govert J. Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching. Crit Care Med. 2007;35(3):738-754.
  17. Roter DL, Larson S, Shinitzky H, et al. Use of an innovative video feedback technique to enhance communication skills training. Med Educ. 2004;38(2):145-157.
  18. Opila DA. The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352-356.
  19. Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 pt 2):558-561.
Issue
Journal of Hospital Medicine - 7(1)
Page Number
35-40

Feedback has long been recognized as pivotal to the attainment of clinical acumen and skills in medical training.1 Formative feedback can give trainees insight into their strengths and weaknesses, and provide them with clear goals and methods to attain those goals.1, 2 In fact, feedback given regularly over time by a respected figure has shown to improve physician performance.3 However, most faculty are not trained to provide effective feedback. As a result, supervisors often believe they are giving more feedback than trainees believe they are receiving, and residents receive little feedback that they perceive as useful.4 Most residents receive little to no feedback on their communications skills4 or professionalism,5 and rarely receive corrective feedback.6, 7

Faculty may fail to give feedback to residents for a number of reasons. Those barriers most commonly cited in the literature are discomfort with criticizing residents,6, 7 lack of time,4 and lack of direct observation of residents in clinical settings.810 Several studies have looked at tools to guide feedback and address the barrier of discomfort with criticism.6, 7, 11 Some showed improvements in overall feedback, though often supervisors gave only positive feedback and avoided giving feedback about weaknesses.6, 7, 11 Despite the recognition of lack of time as a barrier to feedback,4 most studies on feedback interventions thus far have not included setting aside time for the feedback to occur.6, 7, 11, 12 Finally, a number of studies utilized objective structured clinical examinations (OSCEs) coupled with immediate feedback to improve direct observation of residents, with success in improving feedback related to the encounter.9, 10, 13 To address the gaps in the current literature, the goals of our study were to address 2 specific barriers to feedback for residents: lack of time and discomfort with giving feedback.

The aim of this study was to improve Internal Medicine (IM) residents' and attendings' experiences with feedback on the wards using a pocket card and a dedicated feedback session. We developed and evaluated the pocket feedback card and session for faculty to improve the quality and frequency of their feedback to residents in the inpatient setting. We performed a randomized trial to evaluate our intervention. We hypothesized that the intervention would: 1) improve the quality and quantity of attendings' feedback given to IM ward residents; and 2) improve attendings' comfort with feedback delivery on the wards.

PARTICIPANTS AND METHODS

Setting

The study was performed at Mount Sinai Medical Center in New York City, New York, between July 2008 and January 2009.

Participants

Participants in this study were IM residents and ward teaching attendings on inpatient ward teams at Mount Sinai Medical Center from July 2008 to January 2009. There are 12 ward teams on 3 inpatient services (each service has 4 teams) during each block at our hospital. Ward teams are made up of 1 teaching attending, 1 resident, 1 to 3 interns, and 1 to 2 medical students. The majority of attendings are on the ward service for 4‐week blocks, but some are only on for 1 or 2 weeks. Teams included in the randomization were the General Medicine and Gastroenterology/Cardiology service teams. Half of the General Medicine service attendings are hospitalists. Ward teams were excluded from the study randomization if the attending on the team was on the wards for less than 2 weeks, or if the attending had already been assigned to the experimental group in a previous block, given the influence of having used the card and feedback session previously. Since residents were unaware of the intervention and random assignments were based on attendings, residents could be assigned to the intervention group or the control group on any given inpatient rotation. Therefore, a resident could be in the control group in 1 block and the intervention group in his/her next block on the wards or vice versa, or could be assigned to either the intervention or the control group on more than 1 occasion. Because resident participants were blinded to their team's assignment (as intervention or control) and all surveys were anonymous (tracked as intervention or control by the team name only), it was not possible to exclude residents based on their prior participation or to match the surveys completed by the same residents.

Study Design

We performed a prospective randomized study to evaluate our educational innovation. The unit of randomization was the ward team. For each block, approximately half of the 6‐8 teams were randomized to the intervention group and half to the control group. Randomization assignments were completed the day prior to the start of the block using the random allocation software based on the ward team letters (blind to the attending and resident names). Of the 48 possible ward teams (8 teams per block over 6 blocks), 36 teams were randomized to the intervention or control groups, and 12 teams were not based on the above exclusion criteria. Of the 36 teams, 16 (composed of 16 attendings and 48 residents and interns) were randomized to the intervention group, and 20 (composed of 20 attendings and 63 residents and interns) were randomized to the control group.

The study was blinded such that residents and control-group attendings were unaware of the study. The Mount Sinai Institutional Review Board and Grants and Contracts Office deemed the study exempt from review as an evaluation of the effectiveness of an instructional technique in medical education.

Intervention Design

We designed a pocket feedback card to guide a feedback session and assist attendings in giving useful feedback to IM residents on the wards (Figure 1).14 The individual items and categories were adapted from the Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements Core Competencies section and revised via the expert consensus of the authors.14 The card includes 20 items covering resident skills, knowledge, attitudes, and behaviors important to the care of hospitalized patients, grouped under the 6 ACGME core competency domains.14 Many of these items correspond to competencies in the Society of Hospital Medicine (SHM) Core Competencies; in particular, the categories of Systems-Based Practice and Practice-Based Learning mirror competencies in the SHM Core Competencies Healthcare Systems chapter.15 Each item is rated on a 5-point Likert scale (1 = very poor, 3 = at expected level, 5 = superior) to evaluate resident performance (Figure 1). We created the card as a directive, specific guide to help attendings deliver feedback in each domain and to encourage more constructive feedback. The card was to be used during a dedicated feedback session in order to overcome the commonly cited barrier of lack of time.

Figure 1
Inpatient housestaff feedback guide—mid‐rotation.

Program Implementation

On the first day of the block, both groups of attendings received the standard inpatient ward orientation given by the program director, including instructions about teaching and administrative responsibilities and explicit instructions to provide mid-rotation feedback to residents. Attendings randomized to the intervention group received an additional 5-minute orientation from 1 of the investigators, comprising a brief discussion of the importance of feedback and an introduction to the items on the card.2 These faculty were instructed to dedicate 1 mid-rotation attending rounds to a feedback session, to meet individually for 10 to 15 minutes with each of the 3 to 4 residents on their team, and to use the card to provide feedback on skills in each domain. As noted on the feedback card, if a resident scored less than 3 on a skill set, the attending was instructed to give examples of skills within that domain needing improvement and to offer suggestions for improvement. The intervention group was also asked not to discuss the card or session with others. No other instructions were provided.
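The card's rating rule can be sketched as a small helper: any item scored below 3 ("at expected level") is flagged so the attending can give concrete examples and suggestions. This is a minimal illustration with hypothetical item names; the card's actual 20 items are not reproduced here.

```python
# Hypothetical encoding of the pocket card's structure; the 6 ACGME
# domains are from the card, but the item wording below is invented.
ACGME_DOMAINS = [
    "Patient Care", "Medical Knowledge", "Practice-Based Learning",
    "Interpersonal and Communication Skills", "Professionalism",
    "Systems-Based Practice",
]

def items_needing_improvement(scores):
    """Return the domains and items scored below 3 on the card's 1-5 scale.

    `scores` maps (domain, item) -> rating. Per the card's instructions,
    any rating below 3 should prompt the attending to offer specific
    examples and suggestions for improvement.
    """
    flagged = {}
    for (domain, item), rating in scores.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating must be 1-5, got {rating}")
        if rating < 3:
            flagged.setdefault(domain, []).append(item)
    return flagged

# Example session: one strong item, one item triggering the <3 rule
flagged = items_needing_improvement({
    ("Patient Care", "history taking"): 4,
    ("Professionalism", "timeliness"): 2,
})
```

Here `flagged` would contain only the Professionalism item, mirroring the card's prompt to discuss that domain in the feedback session.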

Survey Design

At the end of each block, residents and attendings in both groups completed questionnaires assessing satisfaction with, and attitudes toward, feedback (Supporting Information Appendices 1 and 2 in the online version of this article). Survey questions were based on the competency areas included in the feedback card, previously published surveys evaluating feedback interventions,5, 9, 11 and expert opinion. The resident survey was designed to address the impact of feedback on the domains of resident knowledge, clinical and communication skills, and attitudes about feedback from supervisors and peers. Responses used a 5-point Likert scale (strongly disagree, disagree, neutral, agree, strongly agree). The attending survey addressed attendings' satisfaction with feedback encounters and resident performance. At the completion of the study, investigators compared responses in the intervention and control groups.

Statistical Analysis

For purposes of analysis, given the relatively small number of responses for certain answer choices, the Likert scale was collapsed into a dichotomous variable: agree and strongly agree were coded as agree; disagree, strongly disagree, and neutral were coded as disagree. Neutral was coded as disagree to avoid overestimating positive attitudes, in effect biasing our results toward the null hypothesis. Differences between groups were analyzed using chi-square and Fisher's exact tests (2-sided).
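The dichotomization and a 2-sided Fisher's exact test can be illustrated with a short, self-contained sketch. The implementation below is our own stdlib-only illustration, not the study's statistical software; the example contrast reuses the counts reported for residents receiving feedback on skills done well (35/39 intervention vs 35/55 control).

```python
from math import comb

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Probability of x "agree" responses in group 1, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    tol = 1 + 1e-9  # guard against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * tol)

def dichotomize(responses):
    """Collapse Likert responses as in the analysis: agree/strongly agree
    vs everything else (neutral grouped with disagree)."""
    agree = sum(1 for r in responses if r in {"agree", "strongly agree"})
    return agree, len(responses) - agree

# Reported contrast: 35 of 39 intervention vs 35 of 55 control residents
p_value = fisher_exact_2sided(35, 4, 35, 20)
```

This p-value falls below 0.05, consistent with the significant difference reported for that survey item.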

Qualitative Interviews

To understand the relative contributions of the feedback card and the feedback session, we performed a qualitative survey of attendings in the intervention group. Following the conclusion of the study period, we selected a convenience sample of 8 intervention attendings for brief qualitative interviews. We asked 3 basic questions: (1) Was the intervention of the feedback card and dedicated time for feedback useful? (2) Did you find one component, either the card or the dedicated time for feedback, more useful than the other? (3) Were there any negative effects on patient care, education, or other areas from using an attending rounds as a feedback session? These data were coded and analyzed for common themes.
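The thematic analysis step can be sketched as a simple tally of coded themes across interviews. The theme labels below are hypothetical illustrations, not the study's actual codes.

```python
from collections import Counter

def tally_themes(coded_interviews):
    """Count how many interviews mention each coded theme.

    `coded_interviews` is a list of per-attending theme sets produced by
    manual coding of interview responses.
    """
    counts = Counter()
    for themes in coded_interviews:
        counts.update(set(themes))  # count each theme once per interview
    return counts

# Three hypothetical coded interviews
interviews = [
    {"card useful for structure", "time more useful"},
    {"card useful for structure", "card more useful"},
    {"card useful for structure", "no negative effects"},
]
theme_counts = tally_themes(interviews)
```

Themes mentioned by most interviewees (here, the card's structural usefulness) then surface as the common themes reported in the Results.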

RESULTS

During the 6‐month study period, 34 teaching attendings (over 36 attending inpatient blocks) and 93 IM residents (over 111 resident inpatient blocks) participated in the study. Thirty‐four of 36 attending surveys and 96 of 111 resident surveys were completed. The overall survey response rates for residents and attendings were 85% and 94%, respectively. Two attendings participated during 2 separate blocks, first in the control group and then in the intervention group, and 18 residents participated during 2 separate blocks. No attendings or residents participated more than twice.

Resident survey response rate was 81.2% in the intervention group and 87.3% in the control group (Table 1). Residents in the intervention group reported receiving more feedback regarding skills they did well (89.7% vs 63.6%, P = 0.004) and skills needing improvement (51.3% vs 25.5%, P = 0.02) than those in the control group. In addition, more intervention residents reported receiving useful information regarding how to improve their skills (53.8% vs 27.3%, P = 0.01), and reported actually improving both their clinical skills (61.5% vs 27.8%, P = 0.001) and their professionalism/communication skills (51.3% vs 29.1%, P = 0.03) based on feedback received from attendings.

Table 1. Resident Responses on the End of Block Feedback Survey

Survey Item | Resident Intervention Agree* % (No.), N = 39 | Resident Control Agree* % (No.), N = 55 | P Value
I did NOT receive a sufficient amount of feedback from my attending supervisor(s) this block. | 20.5 (8) | 38.2 (21) | 0.08
I received feedback from my attending regarding skills I did well during this block. | 89.7 (35) | 63.6 (35) | 0.004
I received feedback from my attending regarding specific skills that needed improvement during this block. | 51.3 (20) | 25.5 (14) | 0.02
I received useful information from my attending about how to improve my skills during this block. | 53.8 (21) | 27.3 (15) | 0.01
I improved my clinical skills based on feedback I received from my attending this block. | 61.5 (24) | 27.8 (15) | 0.001
I improved my professionalism/communication skills based on feedback I received from my attending this block. | 51.3 (20) | 29.1 (16) | 0.03
I improved my knowledge base because of feedback I received from my attending this block. | 64.1 (25) | 60.0 (33) | 0.83
The feedback I received from my attending this block gave me an overall sense of my performance more than it helped me identify specific areas for improvement. | 64.1 (25) | 65.5 (36) | 1.0
Feedback from colleagues (other interns and residents) is more helpful than feedback from attendings. | 41.0 (16) | 43.6 (24) | 0.84
Independent of feedback received from others, I am able to identify areas in which I need improvement. | 84.6 (33) | 80.0 (44) | 0.60

* Agree is the collapsed variable including the responses of agree and strongly agree. Data analyzed using chi-square and Fisher's exact tests (2-sided).

The attending survey response rates for the intervention and control groups were 100% and 90%, respectively. In general, both groups of attendings reported that they were comfortable giving feedback and that they did, in fact, give feedback in each area during their ward block (Table 2). More intervention attendings felt that at least 1 of their residents improved their professionalism/communication skills based on the feedback given (76.9% vs 31.1%, P = 0.02). There were no other significant differences between the groups of attendings.

Table 2. Attending Responses on the End of Block Feedback Survey

Survey Item | Attending Intervention Agree* % (No.), N = 16 | Attending Control Agree* % (No.), N = 18 | P Value
Giving feedback to housestaff was DIFFICULT this block. | 6.3 (1) | 16.7 (3) | 0.60
I was comfortable giving feedback to my housestaff this block. | 93.8 (15) | 94.4 (17) | 1.00
I did NOT give a sufficient amount of feedback to my housestaff this block. | 18.8 (3) | 38.9 (7) | 0.27
My skills in giving feedback improved during this block. | 50.0 (8) | 16.7 (3) | 0.07
I gave feedback to housestaff regarding skills they did well during this block. | 100 (16) | 94.4 (17) | 1.00
I gave feedback to housestaff which targeted specific areas for their improvement. | 81.3 (13) | 70.6 (12) | 0.69
At least one of my housestaff improved his/her clinical skills based on feedback I gave this block. | 68.8 (11) | 47.1 (8) | 0.30
At least one of my housestaff improved his/her professionalism/communication skills based on feedback I gave this block. | 76.9 (10) | 31.1 (5) | 0.02
At least one of my housestaff improved his/her fund of knowledge based on feedback I gave this block. | 50.0 (8) | 52.9 (9) | 1.00
Housestaff found the feedback I gave them useful. | 66.7 (10) | 62.5 (10) | 1.00
I find it DIFFICULT to find time during inpatient rotations to give feedback to residents regarding their performance. | 50.0 (8) | 33.3 (6) | 0.49

* Agree is the collapsed variable including the responses of agree and strongly agree. Data analyzed using chi-square and Fisher's exact tests (2-sided).

Intervention attendings also shared their attitudes toward the feedback card and session. A majority felt that using 1 attending rounds as a feedback session helped create a dedicated time for giving feedback (68.8%), and that the feedback card helped them to give specific, constructive feedback (62.5%). Most attendings reported they would use the feedback card and session again during future inpatient blocks (81%), and would recommend them to other attendings (75%).

Qualitative data from the intervention attending interviews revealed further thoughts about the feedback card and session. Most attendings interviewed (7/8) felt the card was useful for the structure and topic guidance it provided. Half felt that setting aside time for feedback was the more useful component; the other half, who usually set aside time for feedback regardless, found the card more useful. None of the attendings felt the card or session was detrimental to patient care or education, and many said the intervention had positive effects in these areas. For example, 1 attending said the session added to patient care because "I used particular [patient] cases as examples for giving feedback."

DISCUSSION

In this randomized study, we found that a simple pocket feedback card and a dedicated feedback session were acceptable to ward attendings and improved resident satisfaction with feedback. Unlike most prior studies of feedback, we demonstrated more feedback about skills needing improvement, and intervention residents felt the feedback they received helped them improve their skills. Our educational intervention was unique in that it combined a pocket card to structure feedback content with dedicated time to structure the feedback process, addressing 2 of the major barriers to giving feedback: lack of time and lack of comfort.

The pocket card itself as a tool for improving feedback is innovative and valuable. As a short but directive guide, the card supports attendings' delivery of relevant and specific feedback about residents' performance, and because it is based on the ACGME competencies, it may help attendings focus feedback on areas in which they will later evaluate residents. The inclusion of a prespecified time for giving feedback was important as well, in that it allowed for face‐to‐face feedback to occur, as opposed to a passing comment after a presentation or brief notes in a written final evaluation. Both the card and the feedback session seemed equally important for the success of this intervention, with attitudes varying based on individual attending preferences. Those who usually set aside time for feedback on their own found the card more useful, whereas those who had more trouble finding time for feedback found the specific session more useful. Most attendings found the intervention as a whole helpful, and without any detrimental effects on patient care or education. The card and session may be particularly valuable for hospital attendings, given their growing presence as teachers and supervisors for residents, and their busy days on the wards.

Our study results have important implications for resident training in the hospital. Improving resident receipt of feedback about strengths and weaknesses is an ACGME training requirement, and specific guidance about how to improve skills is critical for focusing improvement efforts. Previous studies have demonstrated that directive feedback in medical training can lead to a variety of performance improvements, including better evaluations by other professionals,9, 16 and objective improvements in resident communication skills,17 chart documentation,18 and clinical management of patients.11, 15, 19 By improving the quality of feedback across several domains and facilitating the feedback process, our intervention may lead to similar improvements. Future studies should examine the global impact of guided feedback as in our study. Perhaps most importantly, attendings found the intervention acceptable and would recommend its use, implying longer term sustainability of its integration into the hospital routine.

One strength of our study was its prospective randomized design. Despite the importance of rigor in medical education research, there remains a paucity of randomized studies evaluating educational interventions for residents in inpatient settings. Few studies of feedback interventions have used randomized trials,5, 6, 11 and only one has examined a feedback intervention in a randomized fashion in the inpatient setting.12 That study evaluated a 20-minute intervention and a reminder card for supervising attendings intended to improve written and verbal feedback to residents; it modestly improved the amount of verbal feedback given to residents but, unlike our study, did not increase the number of residents reporting mid-rotation feedback or feedback overall.12

There were several important limitations to our study. First, because this was a single-institution study, we achieved only modest sample sizes, particularly in the attending groups, and were unable to assess all of the differences in attending attitudes related to feedback. Second, control and intervention participants were on service simultaneously, which may have led to contamination of the control group and an underestimation of the true impact of our intervention. Because residents were not restricted to 1 study group on a single occasion (18 of the 93 residents participated during 2 separate blocks), our results may be biased. In particular, residents who had the intervention first and were subsequently in the control group may have rated the control experience worse than they would have otherwise, creating a bias in favor of a positive result for our intervention. Nonetheless, we believe this situation was uncommon and the potential associated bias minimal. Further, this study assessed attitudes related to feedback and self-reported knowledge and skills, but did not directly assess resident knowledge, skills, or patient outcomes. We recognize the importance of these outcomes and hope that future interventions can determine these important downstream effects of feedback. We were also unable to assess the card and session's impact on attendings' comfort with feedback, because most attendings in both groups reported feeling comfortable giving feedback; this may indicate that attendings actually are comfortable giving feedback, or may reflect some element of social desirability bias. Finally, we designed an intervention that combined the pocket card and dedicated feedback time, and we did not quantitatively examine the effect of either component alone; it is unclear whether offering the feedback card without protected time, or protected time without a guide, would have improved feedback on the wards. However, qualitative data from our study support the use of both components, and implementing the 2 components together is feasible in any inpatient teaching setting.

Despite these limitations, protected time for feedback guided by a pocket feedback card is a simple intervention that appears to improve feedback quantity and quality for ward residents, and guides them to improve their performance. Our low‐intensity intervention helped attendings give residents the tools to improve their clinical and communication skills. An opportunity to make a positive impact on resident education with such a small intervention is rare. The use of a feedback card with protected feedback time could be easily implemented in any training program, and is a valuable tool for busy hospitalists who are more commonly supervising residents on their inpatient rotations.


The aim of this study was to improve Internal Medicine (IM) residents' and attendings' experiences with feedback on the wards using a pocket card and a dedicated feedback session. We developed and evaluated the pocket feedback card and session for faculty to improve the quality and frequency of their feedback to residents in the inpatient setting. We performed a randomized trial to evaluate our intervention. We hypothesized that the intervention would: 1) improve the quality and quantity of attendings' feedback given to IM ward residents; and 2) improve attendings' comfort with feedback delivery on the wards.

PARTICIPANTS AND METHODS

Setting

The study was performed at Mount Sinai Medical Center in New York City, New York, between July 2008 and January 2009.

Participants

Participants in this study were IM residents and ward teaching attendings on inpatient ward teams at Mount Sinai Medical Center from July 2008 to January 2009. There are 12 ward teams on 3 inpatient services (each service has 4 teams) during each block at our hospital. Ward teams are made up of 1 teaching attending, 1 resident, 1 to 3 interns, and 1 to 2 medical students. The majority of attendings are on the ward service for 4‐week blocks, but some are only on for 1 or 2 weeks. Teams included in the randomization were the General Medicine and Gastroenterology/Cardiology service teams. Half of the General Medicine service attendings are hospitalists. Ward teams were excluded from the study randomization if the attending on the team was on the wards for less than 2 weeks, or if the attending had already been assigned to the experimental group in a previous block, given the influence of having used the card and feedback session previously. Since residents were unaware of the intervention and random assignments were based on attendings, residents could be assigned to the intervention group or the control group on any given inpatient rotation. Therefore, a resident could be in the control group in 1 block and the intervention group in his/her next block on the wards or vice versa, or could be assigned to either the intervention or the control group on more than 1 occasion. Because resident participants were blinded to their team's assignment (as intervention or control) and all surveys were anonymous (tracked as intervention or control by the team name only), it was not possible to exclude residents based on their prior participation or to match the surveys completed by the same residents.

Study Design

We performed a prospective randomized study to evaluate our educational innovation. The unit of randomization was the ward team. For each block, approximately half of the 6‐8 teams were randomized to the intervention group and half to the control group. Randomization assignments were completed the day prior to the start of the block using the random allocation software based on the ward team letters (blind to the attending and resident names). Of the 48 possible ward teams (8 teams per block over 6 blocks), 36 teams were randomized to the intervention or control groups, and 12 teams were not based on the above exclusion criteria. Of the 36 teams, 16 (composed of 16 attendings and 48 residents and interns) were randomized to the intervention group, and 20 (composed of 20 attendings and 63 residents and interns) were randomized to the control group.

The study was blinded such that residents and attendings in the control group were unaware of the study. The study was exempt from IRB review by the Mount Sinai Institutional Review Board, and Grants and Contracts Office, as an evaluation of the effectiveness of an instructional technique in medical education.

Intervention Design

We designed a pocket feedback card to guide a feedback session and assist attendings in giving useful feedback to IM residents on the wards (Figure 1).14 The individual items and categories were adapted from the Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements Core Competencies section and were revised via the expert consensus of the authors.14 We included 20 items related to resident skills, knowledge, attitudes, and behaviors important to the care of hospitalized patients, grouped under the 6 ACGME core competency domains.14 Many of these items correspond to competencies in the Society of Hospital Medicine (SHM) Core Competencies; in particular, the categories of Systems‐Based Practice and Practice‐Based Learning mirror competencies in the SHM Core Competencies Healthcare Systems chapter.15 Each item utilized a 5‐point Likert scale (1 = very poor, 3 = at expected level, 5 = superior) to evaluate resident performance (Figure 1). We created this card to serve as a directive and specific guide for attendings to provide feedback about specific domains and to give more constructive feedback. The card was to be used during a specific dedicated feedback session in order to overcome the commonly cited barrier of lack of time.

Figure 1
Inpatient housestaff feedback guide—mid‐rotation.

Program Implementation

On the first day of the block, both groups of attendings received the standard inpatient ward orientation given by the program director, including instructions about teaching and administrative responsibilities, and explicit instructions to provide mid‐rotation feedback to residents. Attendings randomized to the intervention group had an additional 5‐minute orientation given by 1 of the investigators. The orientation included a brief discussion on the importance of feedback and an introduction to the items on the card.2 In addition, faculty were instructed to dedicate 1 mid‐rotation attending rounds as a feedback session, to meet individually for 10‐15 minutes with each of the 3‐4 residents on their team, and to use the card to provide feedback on skills in each domain. As noted on the feedback card, if a resident scored less than 3 on a skill set, the attending was instructed to give examples of skills within that domain needing improvement and to offer suggestions for improvement. The intervention group was also asked not to discuss the card or session with others. No other instructions were provided.

Survey Design

At the end of each block, residents and attendings in both groups completed questionnaires to assess satisfaction with, and attitudes toward, feedback (Supporting Information Appendices 1 and 2 in the online version of this article). Survey questions were based on the competency areas included in the feedback card, previously published surveys evaluating feedback interventions,5, 9, 11 and expert opinion. The resident survey was designed to address the impact of feedback on the domains of resident knowledge, clinical and communication skills, and attitudes about feedback from supervisors and peers. We utilized a 5‐point Likert scale including: strongly disagree, disagree, neutral, agree, and strongly agree. The attending survey addressed attendings' satisfaction with feedback encounters and resident performance. At the completion of the study, investigators compared responses in intervention and control groups.

Statistical Analysis

For purposes of analysis, due to the relatively small number of responses for certain answer choices, the Likert scale was converted to a dichotomous variable. The responses of agree and strongly agree were coded as agree; and disagree, strongly disagree, and neutral were coded as disagree. Neutral was coded as disagree in order to avoid overestimating positive attitudes and, in effect, bias our results toward the null hypothesis. Differences between groups were analyzed using chi‐square Fisher's exact test (2‐sided).

Qualitative Interviews

In order to understand the relative contribution of the feedback card versus the feedback session, we performed a qualitative survey of attendings in the intervention group. Following the conclusion of the study period, we selected a convenience sample of 8 attendings from the intervention group for these brief qualitative interviews. We asked 3 basic questions. Was the intervention of the feedback card and dedicated time for feedback useful? Did you find one component, either the card or the dedicated time for feedback, more useful than the other? Were there any negative effects on patient care, education, or other areas, from using an attending rounds as a feedback session? This data was coded and analyzed for common themes.

RESULTS

During the 6‐month study period, 34 teaching attendings (over 36 attending inpatient blocks) and 93 IM residents (over 111 resident inpatient blocks) participated in the study. Thirty‐four of 36 attending surveys and 96 of 111 resident surveys were completed. The overall survey response rates for residents and attendings were 85% and 94%, respectively. Two attendings participated during 2 separate blocks, first in the control group and then in the intervention group, and 18 residents participated during 2 separate blocks. No attendings or residents participated more than twice.

Resident survey response rate was 81.2% in the intervention group and 87.3% in the control group (Table 1). Residents in the intervention group reported receiving more feedback regarding skills they did well (89.7% vs 63.6%, P = 0.004) and skills needing improvement (51.3% vs 25.5%, P = 0.02) than those in the control group. In addition, more intervention residents reported receiving useful information regarding how to improve their skills (53.8% vs 27.3%, P = 0.01), and reported actually improving both their clinical skills (61.5% vs 27.8%, P = 0.001) and their professionalism/communication skills (51.3% vs 29.1%, P = 0.03) based on feedback received from attendings.

Resident Responses on the End of Block Feedback Survey
Survey ItemResident Intervention Agree* % (No.) N = 39Resident Control Agree*% (No.) N = 55P Value
  • Agree is the collapsed variable including the responses of agree and strongly agree.

  • Data analyzed using the chi‐square Fisher's exact test (2‐sided).

I did NOT receive a sufficient amount of feedback from my attending supervisor(s) this block.20.5 (8)38.2 (21)0.08
I received feedback from my attending regarding skills I did well during this block.89.7 (35)63.6 (35)0.004
I received feedback from my attending regarding specific skills that needed improvement during this block.51.3 (20)25.5 (14)0.02
I received useful information from my attending about how to improve my skills during this block.53.8 (21)27.3 (15)0.01
I improved my clinical skills based on feedback I received from my attending this block.61.5 (24)27.8 (15)0.001
I improved my professionalism/communication skills based on feedback I received from my attending this block.51.3 (20)29.1 (16)0.03
I improved my knowledge base because of feedback I received from my attending this block.64.1 (25)60.0 (33)0.83
The feedback I received from my attending this block gave me an overall sense of my performance more than it helped me identify specific areas for improvement.64.1 (25)65.5 (36)1.0
Feedback from colleagues (other interns and residents) is more helpful than feedback from attendings.41.0 (16)43.6 (24)0.84
Independent of feedback received from others, I am able to identify areas in which I need improvement.84.6 (33)80.0 (44)0.60

The attending survey response rates for the intervention and control groups were 100% and 90%, respectively. In general, both groups of attendings reported that they were comfortable giving feedback and that they did, in fact, give feedback in each area during their ward block (Table 2). More intervention attendings felt that at least 1 of their residents improved their professionalism/communication skills based on the feedback given (76.9% vs 31.1%, P = 0.02). There were no other significant differences between the groups of attendings.

Attending Reponses on the End of Block Feedback Survey
Survey ItemAttending Intervention Agree* % (No.) N = 16Attending Control Agree* % (No.) N = 18P Value
  • Agree is the collapsed variable including the responses of agree and strongly agree.

  • Data analyzed using the chi‐square Fisher's exact test (2‐sided).

Giving feedback to housestaff was DIFFICULT this block. | 6.3 (1) | 16.7 (3) | 0.60
I was comfortable giving feedback to my housestaff this block. | 93.8 (15) | 94.4 (17) | 1.00
I did NOT give a sufficient amount of feedback to my housestaff this block. | 18.8 (3) | 38.9 (7) | 0.27
My skills in giving feedback improved during this block. | 50.0 (8) | 16.7 (3) | 0.07
I gave feedback to housestaff regarding skills they did well during this block. | 100 (16) | 94.4 (17) | 1.00
I gave feedback to housestaff which targeted specific areas for their improvement. | 81.3 (13) | 70.6 (12) | 0.69
At least one of my housestaff improved his/her clinical skills based on feedback I gave this block. | 68.8 (11) | 47.1 (8) | 0.30
At least one of my housestaff improved his/her professionalism/communication skills based on feedback I gave this block. | 76.9 (10) | 31.1 (5) | 0.02
At least one of my housestaff improved his/her fund of knowledge based on feedback I gave this block. | 50.0 (8) | 52.9 (9) | 1.00
Housestaff found the feedback I gave them useful. | 66.7 (10) | 62.5 (10) | 1.00
I find it DIFFICULT to find time during inpatient rotations to give feedback to residents regarding their performance. | 50.0 (8) | 33.3 (6) | 0.49

Intervention attendings also shared their attitudes toward the feedback card and session. A majority felt that devoting 1 attending rounds session to feedback helped create a dedicated time for giving feedback (68.8%), and that the feedback card helped them give specific, constructive feedback (62.5%). Most attendings reported that they would use the feedback card and session again during future inpatient blocks (81%), and that they would recommend them to other attendings (75%).

Qualitative data from intervention attending interviews revealed further thoughts about the feedback card and feedback session. Most attendings interviewed (7/8) felt that the card was useful for the structure and topic guidance it provided. Half felt that setting aside time for feedback was the more useful component. The other half reported that, because they usually set aside time for feedback regardless, the card was more useful. None of the attendings felt that the feedback card or session was detrimental to patient care or education, and many said that the intervention had positive effects on these areas. For example, 1 attending said that the session added to patient care because "I used particular [patient] cases as examples for giving feedback."

DISCUSSION

In this randomized study, we found that a simple pocket feedback card and a dedicated feedback session were acceptable to ward attendings and improved resident satisfaction with feedback. Unlike most prior studies of feedback, we demonstrated more feedback about skills needing improvement, and intervention residents felt the feedback they received helped them improve their skills. Our educational intervention was unique in that it combined a pocket card to structure feedback content with dedicated time to structure the feedback process, addressing 2 of the major barriers to giving feedback: lack of time and lack of comfort.

The pocket card itself as a tool for improving feedback is innovative and valuable. As a short but directive guide, the card supports attendings' delivery of relevant and specific feedback about residents' performance, and because it is based on the ACGME competencies, it may help attendings focus feedback on areas in which they will later evaluate residents. The inclusion of a prespecified time for giving feedback was important as well, in that it allowed for face‐to‐face feedback to occur, as opposed to a passing comment after a presentation or brief notes in a written final evaluation. Both the card and the feedback session seemed equally important for the success of this intervention, with attitudes varying based on individual attending preferences. Those who usually set aside time for feedback on their own found the card more useful, whereas those who had more trouble finding time for feedback found the specific session more useful. Most attendings found the intervention as a whole helpful, and without any detrimental effects on patient care or education. The card and session may be particularly valuable for hospital attendings, given their growing presence as teachers and supervisors for residents, and their busy days on the wards.

Our study results have important implications for resident training in the hospital. Improving resident receipt of feedback about strengths and weaknesses is an ACGME training requirement, and specific guidance about how to improve skills is critical for focusing improvement efforts. Previous studies have demonstrated that directive feedback in medical training can lead to a variety of performance improvements, including better evaluations by other professionals,9, 16 and objective improvements in resident communication skills,17 chart documentation,18 and clinical management of patients.11, 15, 19 By improving the quality of feedback across several domains and facilitating the feedback process, our intervention may lead to similar improvements. Future studies should examine the global impact of guided feedback as in our study. Perhaps most importantly, attendings found the intervention acceptable and would recommend its use, implying longer term sustainability of its integration into the hospital routine.

One strength of our study was its prospective randomized design. Despite the importance of rigor in medical education research, there remains a paucity of randomized studies evaluating educational interventions for residents in inpatient settings. Few studies of feedback interventions have been randomized trials,5, 6, 11 and only 1 has examined a feedback intervention in a randomized fashion in the inpatient setting.12 That study evaluated a 20‐minute educational intervention and a reminder card for supervising attendings, designed to improve written and verbal feedback to residents; it modestly increased the amount of verbal feedback given to residents but, unlike our study, did not affect the proportion of residents reporting mid‐rotation feedback or feedback overall.12

There were several important limitations to our study. First, because this was a single‐institution study, we achieved only modest sample sizes, particularly in the attending groups, and were unable to assess all of the differences in attending attitudes related to feedback. Second, control and intervention participants were on service simultaneously, which may have led to contamination of the control group and an underestimation of the true impact of our intervention. Because residents were not always exclusive to 1 study group (18 of the 93 residents participated during 2 separate blocks), our results may be biased. In particular, residents who had the intervention first and were subsequently in the control group may have rated the control experience worse than they would have otherwise, creating a bias in favor of a positive result for our intervention. Nonetheless, we believe this situation was uncommon and the potential associated bias minimal. Further, this study assessed attitudes related to feedback and self‐reported knowledge and skills, but did not directly assess resident knowledge, skills, or patient outcomes. We recognize the importance of these outcomes and hope that future interventions can determine these important downstream effects of feedback. We were also unable to assess the card and session's impact on attendings' comfort with feedback, because most attendings in both groups reported feeling comfortable giving feedback. This result may indicate that attendings actually are comfortable giving feedback, or may suggest some element of social desirability bias. Finally, in this study, we designed an intervention that combined the pocket card and dedicated feedback time. We did not quantitatively examine the effect of either component alone, and it is unclear whether offering the feedback card without protected time, or protected time without a guide, would have impacted feedback on the wards.
However, qualitative data from our study support the use of both components, and implementing the 2 components together is feasible in any inpatient teaching setting.

Despite these limitations, protected time for feedback guided by a pocket feedback card is a simple intervention that appears to improve feedback quantity and quality for ward residents, and guides them to improve their performance. Our low‐intensity intervention helped attendings give residents the tools to improve their clinical and communication skills. An opportunity to make a positive impact on resident education with such a small intervention is rare. The use of a feedback card with protected feedback time could be easily implemented in any training program, and is a valuable tool for busy hospitalists who are more commonly supervising residents on their inpatient rotations.

References
  1. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-781.
  2. Hewson MG, Little ML. Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998;13(2):111-116.
  3. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach. 2006;28(2):117-128.
  4. Hutul OA, Carpenter RO, Tarpley JL, Lomis KD. Missed opportunities: a descriptive assessment of teaching and attitudes regarding communication skills in a surgical residency. Curr Surg. 2006;63(6):401-409.
  5. Stark R, Korenstein D, Karani R. Impact of a 360-degree professionalism assessment on faculty comfort and skills in feedback delivery. J Gen Intern Med. 2008;23(7):969-972.
  6. Bandiera G, Lendrum D. Daily encounter cards facilitate competency-based feedback while leniency bias persists. CJEM. 2008;10(1):44-50.
  7. Burack JH, Irby DM, Carline JD, Root RK, Larson EB. Teaching compassion and respect. Attending physicians' responses to problematic behaviors. J Gen Intern Med. 1999;14(1):49-55.
  8. Holmboe ES. Faculty and the observation of trainees' clinical skills: problems and opportunities. Acad Med. 2004;79(1):16-22.
  9. Dorfsman ML, Wolfson AB. Direct observation of residents in the emergency department: a structured educational program. Acad Emerg Med. 2009;16(4):343-351.
  10. Donato AA, Pangaro L, Smith C, et al. Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial. Med Educ. 2008;42(12):1234-1242.
  11. Humphrey-Murto S, Khalidi N, Smith CD, et al. Resident evaluations: the use of daily evaluation forms in rheumatology ambulatory care. J Rheumatol. 2009;36(6):1298-1303.
  12. Holmboe ES, Fiebach NH, Galaty LA, Huot S. Effectiveness of a focused educational intervention on resident evaluations from faculty: a randomized controlled trial. J Gen Intern Med. 2001;16(7):427-434.
  13. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874-881.
  14. Internal Medicine Program Requirements. ACGME. July 1, 2009. Available at: http://www.acgme.org/acWebsite/downloads/RRC_progReq/140_internal_medicine_07012009.pdf. Accessed November 8, 2009.
  15. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ. How to use the core competencies in hospital medicine: a framework for curriculum development. J Hosp Med. 2006;1(suppl 1):57-67.
  16. Clay AS, Que L, Petrusa ER, Sebastian M, Govert J. Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching. Crit Care Med. 2007;35(3):738-754.
  17. Roter DL, Larson S, Shinitzky H, et al. Use of an innovative video feedback technique to enhance communication skills training. Med Educ. 2004;38(2):145-157.
  18. Opila DA. The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352-356.
  19. Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 pt 2):558-561.
Issue
Journal of Hospital Medicine - 7(1)
Page Number
35-40
Article Source

Copyright © 2011 Society of Hospital Medicine

Correspondence Location
Mount Sinai School of Medicine, 1 Gustave L. Levy Place, Box 1087, New York, NY 10029