Quality of care of hospitalized infective endocarditis patients: Report from a tertiary medical center

Peter Lindenauer, MD, MSc
Center for Quality and Safety Research, Baystate Medical Center and Tufts University School of Medicine, Springfield, Massachusetts


Infective endocarditis (IE) affected an estimated 46,800 Americans in 2011, and its incidence is increasing due to greater numbers of invasive procedures and a higher prevalence of IE risk factors.1-3 Despite recent advances in the treatment of IE, morbidity and mortality remain high: in-hospital mortality in IE patients is 15% to 20%, and the 1-year mortality rate is approximately 40%.2,4,5

Poor IE outcomes may result from the difficulty of diagnosing IE and identifying optimal treatment. The American Heart Association (AHA), the American College of Cardiology (ACC), and the European Society of Cardiology (ESC) have published guidelines to address these challenges. Recent guidelines recommend a multidisciplinary approach that includes cardiology, cardiac surgery, and infectious disease (ID) specialty involvement in decision-making.5,6

In the absence of published quality measures for IE management, guidelines can be used to evaluate the quality of IE care. Studies have shown poor concordance with guideline recommendations but did not examine agreement with more recently published guidelines.7,8 Furthermore, few studies have examined the management, outcomes, and quality of care received by IE patients. Therefore, we aimed to describe a modern cohort of patients with IE admitted to a tertiary medical center over a 4-year period. In particular, we aimed to assess the quality of care received by this cohort, as measured by concordance with AHA/ACC guidelines, to identify gaps in care and spur quality improvement (QI) efforts.

METHODS

Design and Study Population

We conducted a retrospective cohort study of adult IE patients admitted to Baystate Medical Center (BMC), a 716-bed tertiary academic center that covers a population of 800,000 people throughout western New England. We used International Classification of Diseases, Ninth Revision (ICD-9) codes 421.0, 421.1, 421.9, 424.9, 424.90, and 424.91 to identify patients discharged with a principal or secondary diagnosis of IE between 2007 and 2011. Three co-authors confirmed the diagnosis by reviewing the electronic health records.
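
For illustration, a minimal sketch of this case-finding step, assuming a hypothetical discharge table with one row per hospitalization (the column names `dx_codes` and `discharge_date` are assumptions; the actual billing database schema is not described here):

```python
import pandas as pd

# ICD-9 codes used to flag a principal or secondary diagnosis of IE (from the text)
IE_CODES = {"421.0", "421.1", "421.9", "424.9", "424.90", "424.91"}

def select_candidate_hospitalizations(discharges: pd.DataFrame) -> pd.DataFrame:
    """Return hospitalizations discharged 2007-2011 carrying any IE diagnosis code.

    Assumes one row per hospitalization, with all diagnosis codes stored as a
    list of strings in `dx_codes` and a datetime `discharge_date` column.
    """
    in_window = discharges["discharge_date"].dt.year.between(2007, 2011)
    has_ie_code = discharges["dx_codes"].apply(
        lambda codes: bool(IE_CODES.intersection(codes))
    )
    return discharges[in_window & has_ie_code]
```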

We included only patients who met modified Duke criteria for definite or possible IE.5 Definite IE requires either pathological criteria (microorganisms demonstrated by culture or histologic examination, or histologic evidence of active endocarditis) or clinical criteria: 2 major criteria (positive blood culture and evidence of endocardial involvement on echocardiogram), 1 major criterion plus 3 minor criteria, or 5 minor criteria (minor criteria: predisposing heart condition or intravenous drug [IVD] use, fever, vascular phenomena, immunologic phenomena, and microbiologic evidence not meeting a major criterion). Possible IE requires 1 major and 1 minor criterion or 3 minor criteria.5
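
To make the classification rule concrete, a minimal sketch of the counting logic described above (a hypothetical helper; judging whether each individual criterion is met happens upstream, during chart review):

```python
def classify_duke(pathological: bool, n_major: int, n_minor: int) -> str:
    """Classify a case by the modified Duke criteria as summarized above.

    `n_major` counts major clinical criteria (0-2); `n_minor` counts minor
    criteria (0-5). Pathological criteria alone establish definite IE.
    Cases meeting neither definition fall outside this study's cohort.
    """
    if pathological or n_major == 2 or (n_major == 1 and n_minor >= 3) or n_minor >= 5:
        return "definite"
    if (n_major == 1 and n_minor >= 1) or n_minor >= 3:
        return "possible"
    return "not definite or possible IE"
```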


Data Collection

We used billing and clinical databases to collect demographics, comorbidities, antibiotic treatment, 6-month readmission, and 1-year mortality. Comorbid conditions were classified into Elixhauser comorbidities using software provided by the Healthcare Cost and Utilization Project of the Agency for Healthcare Research and Quality.9,10
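
The kind of code-to-comorbidity mapping such software performs can be sketched as follows; this is a heavily simplified, hypothetical subset for illustration, not the HCUP software's actual code lists (which cover roughly 30 categories with detailed ICD-9-CM ranges):

```python
# Illustrative subset only: each Elixhauser-style category is flagged when a
# patient carries any diagnosis code beginning with one of its ICD-9 prefixes.
ELIXHAUSER_PREFIXES = {
    "hypertension_uncomplicated": ("401",),  # essential hypertension
    "diabetes": ("250",),                    # diabetes mellitus (simplified)
    "renal_failure": ("585",),               # chronic kidney disease
    "congestive_heart_failure": ("428",),    # heart failure
}

def elixhauser_flags(dx_codes: list[str]) -> dict[str, bool]:
    """Flag comorbidity categories present among a patient's diagnosis codes."""
    return {
        category: any(code.startswith(prefixes) for code in dx_codes)
        for category, prefixes in ELIXHAUSER_PREFIXES.items()
    }

print(elixhauser_flags(["401.9", "585.6", "421.0"]))
```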

We obtained all other data through electronic health record abstraction. These included microbiology, type of endocarditis (native valve endocarditis [NVE] or prosthetic valve endocarditis [PVE]), echocardiographic location of the vegetation, and complications involving the valve (eg, valve perforation, ruptured chorda, perivalvular abscess, or valvular insufficiency).

Using the 2006 AHA/ACC guidelines,11 we identified quality metrics, including collection of at least 2 sets of blood cultures before the start of antibiotics and use of transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE). The guidelines recommend TTE as the first-line test to detect valvular vegetations and assess IE complications; TEE is recommended when TTE is nondiagnostic and as the first-line test for suspected PVE. We also assessed rates of consultation with ID, cardiology, and cardiac surgery. Although these consultations were not explicitly emphasized in the 2006 AHA/ACC guidelines, the 2014 AHA/ACC guidelines5 include a class I recommendation to manage IE patients in consultation with all 3 specialties.
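
As an illustration of how these metrics translate into per-patient indicators, a minimal sketch follows (the field names are hypothetical, not the study's actual abstraction variables):

```python
import pandas as pd

def quality_flags(pt: pd.Series) -> pd.Series:
    """Derive guideline-concordance indicators for one abstracted record.

    Assumes boolean and timestamp fields captured during chart abstraction;
    these names are illustrative placeholders.
    """
    cultures_before_abx = (
        pt["n_blood_culture_sets"] >= 2
        and pt["first_culture_time"] <= pt["first_antibiotic_time"]
    )
    echo_done = pt["had_tte"] or pt["had_tee"]
    n_consults = sum([pt["id_consult"], pt["cardiology_consult"], pt["ct_surgery_consult"]])
    return pd.Series(
        {
            "cultures_before_abx": cultures_before_abx,
            "echo_done": echo_done,
            "n_consults": n_consults,
            "all_three_consults": n_consults == 3,
        }
    )
```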

We reported the number of patients with intracardiac leads (pacemaker or defibrillator) who had documentation of intracardiac lead removal. Complete removal of intracardiac leads is indicated in IE patients with infection of the leads or device (class I) and suggested for IE caused by Staphylococcus aureus or fungi (even without evidence of device or lead infection) and for patients undergoing valve surgery (class IIa).5 We entered abstracted data elements into a REDCap database hosted by the Tufts Clinical and Translational Science Institute.12

Outcomes

Outcomes included embolic events, stroke, need for cardiac surgery, length of stay (LOS), in-hospital mortality, 6-month readmission, and 1-year mortality. We identified embolic events using documented clinical or imaging evidence of embolism to the cerebral, coronary, peripheral arterial, renal, splenic, or pulmonary vasculature. We used record extraction to identify the incidence of valve surgery; nearly all patients who require surgery at BMC have it done onsite. We compared outcomes between patients who received fewer than 3 consultations and those who received all 3 (ID, cardiology, and cardiac surgery). We also compared outcomes among patients who received 0, 1, 2, or 3 consultations to look for a trend.

Statistical Analysis

We divided the cohort into patients with NVE and those with PVE because these groups differ in pathophysiology, treatment, and outcomes. We calculated descriptive statistics, including mean (standard deviation [SD]) and n (%). We conducted univariable analyses using the Fisher exact test (categorical variables), unpaired t tests (Gaussian), or the Kruskal-Wallis equality-of-populations rank test (non-Gaussian). Common language effect sizes were also calculated to quantify group differences without respect to sample size.13,14 Analyses were performed using Stata 14.1 (StataCorp LLC, College Station, Texas). The BMC Institutional Review Board approved the protocol.
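
Analyses were run in Stata, but the same univariable tests and the common language effect size can be sketched in Python for illustration (toy data, not study values; `scipy` provides the tests named above):

```python
import numpy as np
from scipy import stats

def common_language_effect_size(x, y):
    """P(a random draw from x exceeds a random draw from y), ties counted as 1/2.

    Nonparametric version of the McGraw-Wong common language effect size:
    a sample-size-free summary of how often one group outranks the other.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    diffs = x[:, None] - y[None, :]  # all pairwise differences between groups
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

# Illustrative comparisons with toy LOS data (not study data):
nve_los = [10, 14, 9, 21, 13]
pve_los = [15, 22, 12, 30]
print(stats.ttest_ind(nve_los, pve_los, equal_var=True))  # Gaussian outcome
print(stats.kruskal(nve_los, pve_los))                    # non-Gaussian outcome
print(stats.fisher_exact([[25, 102], [9, 34]]))           # 2x2 categorical table
print(common_language_effect_size(pve_los, nve_los))
```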

RESULTS

We identified a total of 317 hospitalizations at BMC meeting criteria for IE. Of these, 147 hospitalizations were readmissions or did not meet the clinical criteria of definite or possible IE. Thus, we included a total of 170 patients in the final analysis. Definite IE was present in 135 (79.4%) and possible IE in 35 (20.6%) patients.

Table 1. Characteristics of 170 Hospitalized Patients with Infective Endocarditis

Patient Characteristics

Of 170 patients, 127 (74.7%) had NVE and 43 (25.3%) had PVE. Mean ± SD age was 60.0 ± 17.9 years, 66.5% (n = 113) of patients were male, and 79.4% (n = 135) were white (Table 1). Hypertension and chronic kidney disease were the most common comorbidities. The median Gagne score15 was 4, corresponding to a 1-year mortality risk of 15%. Predisposing factors for IE included a previous history of IE (n = 14, 8.2%), IVD use (n = 23, 13.5%), and presence of long-term venous catheters (n = 19, 11.2%). Intracardiac leads were present in 17.1% (n = 29) of patients. Bicuspid aortic valve was reported in 6.5% (n = 11) of patients with NVE. Patients with PVE were older (+11.5 years; 95% confidence interval [CI], 5.5-17.5) and more likely to have intracardiac leads (44.2% vs. 7.9%; P < 0.001; Table 1).

Microbiology and Antibiotics

Staphylococcus aureus was isolated in 40.0% of patients (methicillin-sensitive: 21.2%, n = 36; methicillin-resistant: 18.8%, n = 32), and vancomycin was the most common initial antibiotic (88.2%, n = 150). Nearly half of patients (44.7%, n = 76) received gentamicin as part of their initial antibiotic regimen. Appendix 1 provides final blood culture results, prosthetic versus native valve status, and the antimicrobial agents each patient received. PVE patients were more likely than NVE patients to receive gentamicin in their initial regimen (58.1% vs. 40.2%; P = 0.051; Table 1).


Echocardiography and Affected Valves

Per study inclusion criteria, all patients underwent echocardiography (TTE, TEE, or both). Overall, the most commonly infected valve was the mitral valve (41.3%, n = 59), followed by the aortic valve (28.7%, n = 41). Patients in whom the location of the infected valve could not be determined (15.9%, n = 27) had echocardiographic features of intracardiac device infection or an intracardiac mass (Table 1).

Quality of Care

Nearly all patients (n = 165, 97.1%) had at least 2 sets of blood cultures drawn, most (71.2%) on the first day of admission. The vast majority (n = 152, 89.4%) also received their first dose of antibiotics on the day of admission. Ten patients (5.9%) did not receive any consults, and 160 (94.1%) received at least 1 consultation. An ID consultation was obtained for most patients (n = 147, 86.5%); cardiac surgery consultation was obtained for about half (n = 92, 54.1%), and cardiology consultation for nearly half (n = 80, 47.1%). One-third (n = 53, 31.2%) did not receive a cardiology or cardiac surgery consult, two-thirds (n = 117, 68.8%) received at least 1 of the 2, and one-third (n = 55, 32.4%) received both.

Of the 29 patients who had an intracardiac lead, 6 had documentation of device removal during the index hospitalization (5 of 10 [50.0%] patients with NVE and 1 of 19 [5.3%] patients with PVE; P = 0.02; Table 2).

Table 2. Quality of Care of Patients Hospitalized with Infective Endocarditis

Outcomes

Evidence of any embolic event was seen in 27.7% (n = 47) of patients, including stroke in 17.1% (n = 29). Median LOS for all patients was 13.5 days, and 6-month readmission among patients who survived their index admission was 51.0% (74/145; 95% CI, 45.9%-62.7%). In-hospital mortality was 14.7% (n = 25; 95% CI, 10.1%-20.9%) and 12-month mortality was 22.4% (n = 38; 95% CI, 16.7%-29.3%). In-hospital mortality was more frequent among patients with PVE than NVE (20.9% vs. 12.6%; P = 0.21), although this difference was not statistically significant. Complications were more common in NVE than PVE (any embolic event: 32.3% vs. 14.0%, P = 0.03; stroke: 20.5% vs. 7.0%, P = 0.06; Table 3).

Table 3. Outcomes of Hospitalized Patients with Infective Endocarditis

Although there was a trend toward lower 6-month readmission and 12-month mortality with each incremental increase in the number of specialties consulted (ID, cardiology, and cardiac surgery), the differences were not statistically significant (Figure). In addition, comparing embolic events, stroke, 6-month readmission, and 12-month mortality between patients who received all 3 consults (28.8%, n = 49) and those who received fewer than 3 (71.2%, n = 121) showed no statistically significant differences.

Figure. Comparison of outcomes of any embolic event, stroke, 6-month readmission, and 12-month mortality between infective endocarditis patients who received infectious disease, cardiology, and cardiac surgery consultations.


Of 92 patients who received a cardiac surgery consult, 73 had NVE and 19 had PVE. Of these, 47 underwent valve surgery: 39 of 73 (53.4%) with NVE and 8 of 19 (42.1%) with PVE. Most of the NVE patients (73.2%) had more than 1 indication for surgery. The most common indications among NVE patients were significant valvular dysfunction resulting in heart failure (65.9%), followed by mobile vegetation (56.1%) and recurrent embolic events (26.8%). The most common indication in PVE was persistent bacteremia or recurrent embolic events (75.0%).

DISCUSSION

In this study, we described the management, quality of care, and outcomes of IE patients in a tertiary medical center. We found that the majority of hospitalized patients with IE were older white men with comorbidities and IE risk factors. The complication rate was high (27.7% with embolic events), and the in-hospital mortality rate was at the lower end of the range reported by prior studies (14.7% vs. 15%-20%).5 Nearly one-third of patients (n = 47, 27.7%) underwent valve surgery. Quality of care was generally good, with most patients receiving early blood cultures, echocardiograms, early antibiotics, and timely ID consultation. We identified important gaps in care, however, including failure to consult cardiac surgery in nearly half of patients and failure to consult cardiology in more than half.

Our findings support work suggesting that IE is no longer primarily a chronic or subacute disease of younger patients with IVD use, positive human immunodeficiency virus status, or bicuspid aortic valves.1,4,16,17 The International Collaboration on Endocarditis-Prospective Cohort Study,4 a multinational prospective cohort study (2000-2005) of 2781 adults with IE, reported a higher prevalence of patients with diabetes or on hemodialysis, IVD users, and patients with long-term venous catheters and intracardiac leads than we found. Yet both studies suggest that the demographics of IE are changing. This may partially explain why IE mortality has not improved in recent years:2,3 older patients with a higher comorbidity burden may not be considered good surgical candidates.

This study is among the first to contribute information on concordance with IE guidelines in a cohort of U.S. patients. Our findings suggest that most patients received timely blood cultures, same-day administration of empiric antibiotics, and ID consultation, similar to European studies.7,18 Guideline concordance could be improved in some areas, particularly documentation of the management plan for intracardiac leads: only 6 of 29 patients with intracardiac leads had documentation of their removal during the index hospitalization.

The 2014 AHA/ACC guidelines5 and the ESC guidelines6 emphasized the importance of multidisciplinary management of IE. As part of the Heart Valve Team at BMC, cardiologists provide expertise in the diagnosis, imaging, and clinical management of IE, and cardiac surgeons provide consultation on whether to pursue surgery and on its optimal timing. Early discussion with the surgical team is considered mandatory in all complicated cases of IE.6,18 Infectious disease consultation has been shown to improve the rate of IE diagnosis, reduce the 6-month relapse rate,19 and improve outcomes in patients with S aureus bacteremia.20 In our study, 86.5% of patients had documentation of an ID consultation; cardiac surgery consultation was obtained in 54.1% and cardiology consultation in 47.1% of patients.

We observed a trend toward lower rates of 6-month readmission and 12-month mortality among patients who received all 3 consults (Figure), even though rates of embolic events and stroke were higher in patients with 3 consults than in those with fewer than 3. The lack of confounder adjustment and of statistical power limits our ability to make inferences about this association, but it generates hypotheses for future work. Because subjects in our study were cared for before 2014, multidisciplinary management of IE with involvement of cardiology, cardiac surgery, and ID physicians was observed in only one-third of patients. However, 117 (68.8%) patients received either a cardiology or a cardiac surgery consult. It is possible that some physicians considered involving both cardiology and cardiac surgery consultants unnecessary and therefore did not consult both specialties. We will focus future QI efforts at our institution on educating physicians about the benefits of multidisciplinary care and the importance of fully implementing the 2014 AHA/ACC guidelines.

Our findings on quality of care should be placed in the context of 2 studies, by González de Molina et al8 and Delahaye et al.7 These studies described considerable discordance between guideline recommendations and real-world IE care. However, they were performed more than a decade ago, before current recommendations to consult cardiology and cardiac surgery were published.

In the 2014 AHA/ACC guidelines, surgery prior to completion of antibiotics is indicated in patients with valve dysfunction resulting in heart failure; left-sided IE caused by highly resistant organisms (including fungi or S aureus); IE complicated by heart block, aortic abscess, or penetrating lesions; and persistent infection (bacteremia or fever lasting longer than 5 to 7 days) after onset of appropriate antimicrobial therapy. In addition, there is a class IIa indication for early surgery in patients with recurrent emboli and persistent vegetation despite appropriate antibiotic therapy, and a class IIb indication for early surgery in patients with NVE and a mobile vegetation greater than 10 mm in length. Surgery is also recommended for patients with PVE and relapsing infection.

It is recommended that IE patients be cared for in centers with immediate access to cardiac surgery because the need for surgical intervention can arise rapidly.5 We found that nearly one-third of included patients underwent surgery. Although we did not collect data on indications for surgery in patients who did not undergo it, about half (54.1%) had a cardiac surgery consult, suggesting the presence of 1 or more surgical indications, and of these, half underwent valve surgery. Most of the NVE patients who underwent surgery had more than 1 indication. Our surgical rate is similar to that reported in a study from Italy3 and at the lower end of the range (25%-50%) reported by other studies.21 The low rate of surgery at our center may reflect the ongoing debate in the literature over the role of surgery in IE,21 and may also be due to the low rate of cardiac surgery consultation.

Our study has several limitations. We identified eligible patients using discharge ICD-9 codes for IE and then confirmed the presence of Duke criteria by record review. Use of discharge diagnosis codes for endocarditis has been validated, and our additional manual chart review to confirm Duke criteria likely improved specificity substantially. However, by excluding patients who did not have documented evidence of Duke criteria, we may have missed some cases, lowering sensitivity. Performance on selected quality metrics may also have been affected by our inclusion criteria: because we included only patients who met Duke criteria, we tended to include patients who had received blood cultures and echocardiograms, which are part of the criteria. Thus, we cannot comment on the use of diagnostic testing or specialty consultation in patients with suspected IE. This was a single-center study and may not represent patients or current practices at other institutions. We did not collect data on some predisposing factors for NVE (for example, baseline rheumatic heart disease or preexisting valvular heart disease) because it is estimated that less than 5% of IE in the U.S. is superimposed on rheumatic heart disease.4 We likely underestimated the 12-month mortality rate because we did not cross-reference our findings against the National Death Index; however, this should not affect the comparison of this outcome between groups.


CONCLUSION

Our study confirms reports that IE epidemiology has changed significantly in recent years. It also suggests that concordance with guideline recommendations is good for some aspects of care (eg, echocardiography, blood cultures) but can be improved in others, particularly the use of specialty consultation during hospitalization. Future QI efforts should emphasize the role of a heart valve team or endocarditis team consisting of an internist, an ID physician, a cardiologist, a cardiac surgeon, and nursing staff. Finally, efforts should be made to develop strategies (eg, early transfer, telehealth) for community hospitals that lack access to all of these specialists.

Disclosure

Nothing to report.

References

1. Pant S, Patel NJ, Deshmukh A, Golwala H, Patel N, Badheka A, et al. Trends in infective endocarditis incidence, microbiology, and valve replacement in the United States from 2000 to 2011. J Am Coll Cardiol. 2015;65(19):2070-2076.
2. Bor DH, Woolhandler S, Nardin R, Brusch J, Himmelstein DU. Infective endocarditis in the U.S., 1998-2009: a nationwide study. PLoS One. 2013;8(3):e60033.
3. Fedeli U, Schievano E, Buonfrate D, Pellizzer G, Spolaore P. Increasing incidence and mortality of infective endocarditis: a population-based study through a record-linkage system. BMC Infect Dis. 2011;11:48.
4. Murdoch DR, Corey GR, Hoen B, Miró JM, Fowler VG, Bayer AS, et al. Clinical presentation, etiology, and outcome of infective endocarditis in the 21st century: the International Collaboration on Endocarditis-Prospective Cohort Study. Arch Intern Med. 2009;169(5):463-473.
5. Nishimura RA, Otto CM, Bonow RO, Carabello BA, Erwin JP, Guyton RA, et al. 2014 AHA/ACC guideline for the management of patients with valvular heart disease: executive summary: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2014;63(22):2438-2488.
6. Habib G, Lancellotti P, Antunes MJ, Bongiorni MG, Casalta J-P, Del Zotti F, et al. 2015 ESC Guidelines for the management of infective endocarditis: The Task Force for the Management of Infective Endocarditis of the European Society of Cardiology (ESC). Eur Heart J. 2015;36(44):3075-3128.
7. Delahaye F, Rial MO, de Gevigney G, Ecochard R, Delaye J. A critical appraisal of the quality of the management of infective endocarditis. J Am Coll Cardiol. 1999;33(3):788-793.
8. González de Molina M, Fernández-Guerrero JC, Azpitarte J. [Infectious endocarditis: degree of discordance between clinical guidelines recommendations and clinical practice]. Rev Esp Cardiol. 2002;55(8):793-800.
9. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
10. Quan H, Parsons GA, Ghali WA. Validity of information on comorbidity derived from ICD-9-CM administrative data. Med Care. 2002;40(8):675-685.
11. American College of Cardiology/American Heart Association Task Force on Practice Guidelines, Society of Cardiovascular Anesthesiologists, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, Bonow RO, Carabello BA, Kanu C, deLeon AC Jr, Faxon DP, Freed MD, et al. ACC/AHA 2006 guidelines for the management of patients with valvular heart disease: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (writing committee to revise the 1998 Guidelines for the Management of Patients With Valvular Heart Disease). Circulation. 2006;114(5):e84-e231.
12. REDCap [Internet]. [cited 2016 May 14]. Available from: https://collaborate.tuftsctsi.org/redcap/.
13. McGraw KO, Wong SP. A common language effect-size statistic. Psychol Bull. 1992;111:361-365.
14. Cohen J. The statistical power of abnormal-social psychological research: a review. J Abnorm Soc Psychol. 1962;65:145-153.
15. Gagne JJ, Glynn RJ, Avorn J, Levin R, Schneeweiss S. A combined comorbidity score predicted mortality in elderly patients better than existing scores. J Clin Epidemiol. 2011;64(7):749-759.
16. Baddour LM, Wilson WR, Bayer AS, Fowler VG Jr, Tleyjeh IM, Rybak MJ, et al. Infective endocarditis in adults: Diagnosis, antimicrobial therapy, and management of complications: A scientific statement for healthcare professionals from the American Heart Association. Circulation. 2015;132(15):1435-1486.
17. Cecchi E, Chirillo F, Castiglione A, Faggiano P, Cecconi M, Moreo A, et al. Clinical epidemiology in Italian Registry of Infective Endocarditis (RIEI): Focus on age, intravascular devices and enterococci. Int J Cardiol. 2015;190:151-156.
18. Tornos P, Iung B, Permanyer-Miralda G, Baron G, Delahaye F, Gohlke-Bärwolf Ch, et al. Infective endocarditis in Europe: lessons from the Euro heart survey. Heart. 2005;91(5):571-575.
19. Yamamoto S, Hosokawa N, Sogi M, Inakaku M, Imoto K, Ohji G, et al. Impact of infectious diseases service consultation on diagnosis of infective endocarditis. Scand J Infect Dis. 2012;44(4):270-275.
20. Rieg S, Küpper MF. Infectious diseases consultations can make the difference: a brief review and a plea for more infectious diseases specialists in Germany. Infection. 2016;(2):159-166.
21. Prendergast BD, Tornos P. Surgery for infective endocarditis: who and when? Circulation. 2010;121(9):1141-1152.

Journal of Hospital Medicine. 2017;12(6):414-420.

Infective endocarditis (IE) affected an estimated 46,800 Americans in 2011, and its incidence is increasing due to greater numbers of invasive procedures and prevalence of IE risk factors.1-3 Despite recent advances in the treatment of IE, morbidity and mortality remain high: in-hospital mortality in IE patients is 15% to 20%, and the 1-year mortality rate is approximately 40%.2,4,5

Poor IE outcomes may be the result of difficulty in diagnosing IE and identifying its optimal treatments. The American Heart Association (AHA), the American College of Cardiology (ACC), and the European Society of Cardiology (ESC) have published guidelines to address these challenges. Recent guidelines recommend a multidisciplinary approach that includes cardiology, cardiac surgery, and infectious disease (ID) specialty involvement in decision-making.5,6

In the absence of published quality measures for IE management, guidelines can be used to evaluate the quality of care of IE. Studies have showed poor concordance with guideline recommendations but did not examine agreement with more recently published guidelines.7,8 Furthermore, few studies have examined the management, outcomes, and quality of care received by IE patients. Therefore, we aimed to describe a modern cohort of patients with IE admitted to a tertiary medical center over a 4-year period. In particular, we aimed to assess quality of care received by this cohort, as measured by concordance with AHA and ACC guidelines to identify gaps in care and spur quality improvement (QI) efforts.

METHODS

Design and Study Population

We conducted a retrospective cohort study of adult IE patients admitted to Baystate Medical Center (BMC), a 716-bed tertiary academic center that covers a population of 800,000 people throughout western New England. We used the International Classification of Diseases (ICD)–Ninth Revision, to identify IE patients discharged with a principal or secondary diagnosis of IE between 2007 and 2011 (codes 421.0, 421.1, 421.9, 424.9, 424.90, and 424.91). Three co-authors confirmed the diagnosis by conducting a review of the electronic health records.

We included only patients who met modified Duke criteria for definite or possible IE.5 Definite IE defines patients with pathological criteria (microorganisms demonstrated by culture or histologic examination or a histologic examination showing active endocarditis); or patients with 2 major criteria (positive blood culture and evidence of endocardial involvement by echocardiogram), 1 major criterion and 3 minor criteria (minor criteria: predisposing heart conditions or intravenous drug (IVD) use, fever, vascular phenomena, immunologic phenomena, and microbiologic evidence that do not meet the major criteria) or 5 minor criteria. Possible IE defines patients with 1 major and 1 minor criterion or 3 minor criteria.5

 

 

Data Collection

We used billing and clinical databases to collect demographics, comorbidities, antibiotic treatment, 6-month readmission and 1-year mortality. Comorbid conditions were classified into Elixhauser comorbidities using software provided by the Healthcare Costs and Utilization Project of the Agency for Healthcare Research and Quality.9,10

We obtained all other data through electronic health record abstraction. These included microbiology, type of endocarditis (native valve endocarditis [NVE] or prosthetic valve endocarditis [PVE]), echocardiographic location of the vegetation, and complications involving the valve (eg, valve perforation, ruptured chorda, perivalvular abscess, or valvular insufficiency).

Using 2006 AHA/ACC guidelines,11 we identified quality metrics, including the presence of at least 2 sets of blood cultures prior to start of antibiotics and use of transthoracic echocardiogram (TTE) and transesophageal echocardiogram (TEE). Guidelines recommend using TTE as first-line to detect valvular vegetations and assess IE complications. TEE is recommended if TTE is nondiagnostic and also as first-line to diagnose PVE. We assessed the rate of consultation with ID, cardiology, and cardiac surgery specialties. While these consultations were not explicitly emphasized in the 2006 AHA/ACC guidelines, there is a class I recommendation in 2014 AHA/ACC guidelines5 to manage IE patients with consultation of all these specialties.

We reported the number of patients with intracardiac leads (pacemaker or defibrillator) who had documentation of intracardiac lead removal. Complete removal of intracardiac leads is indicated in IE patients with infection of leads or device (class I) and suggested for IE caused by Staphylococcus aureus or fungi (even without evidence of device or lead infection), and for patients undergoing valve surgery (class IIa).5 We entered abstracted data elements into a RedCap database, hosted by Tufts Clinical and Translational Science Institute.12

Outcomes

Outcomes included embolic events, strokes, need for cardiac surgery, LOS, inhospital mortality, 6-month readmission, and 1-year mortality. We identified embolic events using documentation of clinical or imaging evidence of an embolic event to the cerebral, coronary, peripheral arterial, renal, splenic, or pulmonary vasculature. We used record extraction to identify incidence of valve surgery. Nearly all patients who require surgery at BMC have it done onsite. We compared outcomes among patients who received less than 3 vs. 3 consultations provided by ID, cardiology, and cardiac surgery specialties. We also compared outcomes among patients who received 0, 1, 2, or 3 consultations to look for a trend.

Statistical Analysis

We divided the cohort into patients with NVE and PVE because there are differences in pathophysiology, treatment, and outcomes of these groups. We calculated descriptive statistics, including means/standard deviation (SD) and n (%). We conducted univariable analyses using Fisher exact (categorical), unpaired t tests (Gaussian), or Kruskal-Wallis equality-of-populations rank test (non-Gaussian). Common language effect sizes were also calculated to quantify group differences without respect to sample size.13,14 Analyses were performed using Stata 14.1. (StataCorp LLC, College Station, Texas). The BMC Institutional Review Board approved the protocol.

RESULTS

We identified a total of 317 hospitalizations at BMC meeting criteria for IE. Of these, 147 hospitalizations were readmissions or did not meet the clinical criteria of definite or possible IE. Thus, we included a total of 170 patients in the final analysis. Definite IE was present in 135 (79.4%) and possible IE in 35 (20.6%) patients.

Characteristics of 170 Hospitalized Patients with Infective Endocarditis
Table 1

Patient Characteristics

Of 170 patients, 127 (74.7%) had NVE and 43 (25.3%) had PVE. Mean ± SD age was 60.0 ± 17.9 years, 66.5% (n = 113) of patients were male, and 79.4% (n = 135) were white (Table 1). Hypertension and chronic kidney disease were the most common comorbidities. The median Gagne score15 was 4, corresponding to a 1-year mortality risk of 15%. Predisposing factors for IE included previous history of IE (n = 14 or 8.2%), IVD use (n = 23 or 13.5%), and presence of long-term venous catheters (n = 19 or 11.2%). Intracardiac leads were present in 17.1% (n = 29) of patients. Bicuspid aortic valve was reported in 6.5% (n = 11) of patients with NVE. Patients with PVE were older (+11.5 years, 95% confidence interval [CI] 5.5, 17.5) and more likely to have intracardiac leads (44.2% vs. 7.9%; P < 0.001; Table 1).

Microbiology and Antibiotics

Staphylococcus aureus was isolated in 40.0% of patients (methicillin-sensitive: 21.2%, n = 36; methicillin-resistant: 18.8%, n = 32) and vancomycin (88.2%, n = 150) was the most common initial antibiotic used. Nearly half (44.7%, n = 76) of patients received gentamicin as part of their initial antibiotic regimen. Appendix 1 provides information on final blood culture results, prosthetic versus native valve IE, and antimicrobial agents that each patient received. PVE patients were more likely to receive gentamicin as their initial antibiotic regimen than NVE (58.1% vs. 40.2%; P = 0.051; Table 1).

 

 

Echocardiography and Affected Valves

As per study inclusion criteria, all patients received echocardiography (either TTE, TEE, or both). Overall, the most common infected valve was mitral (41.3%), n = 59), followed by aortic valve (28.7%), n = 41). Patients in whom the location of infected valve could not be determined (15.9%, n = 27) had echocardiographic features of intracardiac device infection or intracardiac mass (Table 1).

Quality of Care

Nearly all (n = 165, 97.1%) of patients had at least 2 sets of blood cultures drawn, most on the first day of admission (71.2%). The vast majority of patients (n = 152, 89.4%) also received their first dose of antibiotics on the day of admission. Ten (5.9%) patients did not receive any consults, and 160 (94.1%) received at least 1 consultation. An ID consultation was obtained for most (147, 86.5%) patients; cardiac surgery consultation was obtained for about half of patients (92, 54.1%), and cardiology consultation was also obtained for nearly half of patients (80, 47.1%). One-third (53, 31.2%) did not receive a cardiology or cardiac surgery consult, two-thirds (117, 68.8%) received either a cardiology or a cardiac surgery consult, and one-third (55, 32.4%) received both.

Of the 29 patients who had an intracardiac lead, 6 patients had documentation of the device removal during the index hospitalization (5 or 50.0% of patients with NVE and 1 or 5.3% of patients with PVE; P = 0.02; Table 2).

Quality of Care of Patients Hospitalized with Infective Endocarditis
Table 2

Outcomes

Evidence of any embolic events was seen in 27.7% (n = 47) of patients, including stroke in 17.1% (n = 29). Median LOS for all patients was 13.5 days, and 6-month readmission among patients who survived their index admission was 51.0% (n = 74/145; 95% CI, 45.9%-62.7%). Inhospital mortality was 14.7% (n = 25; 95% CI: 10.1%-20.9%) and 12-month mortality was 22.4% (n = 38; 95% CI, 16.7%-29.3%). Inhospital mortality was more frequent among patients with PVE than NVE (20.9% vs. 12.6%; P = 0.21), although this difference was not statistically significant. Complications were more common in NVE than PVE (any embolic event: 32.3% vs. 14.0%, P = 0.03; stroke, 20.5% vs. 7.0%, P = 0.06; Table 3).

Outcome of Hospitalized Patients with Infective Endocarditis
Table 3

Although there was a trend toward reduction in 6-month readmission and 12-month mortality with incremental increase in the number of specialties consulted (ID, cardiology and cardiac surgery), the difference was not statistically significant (Figure). In addition, comparing outcomes of embolic events, stroke, 6-month readmission, and 12-month mortality between those who received 3 consults (28.8%, n = 49) to those with fewer than 3 (71.2%, n = 121) did not show statistically significant differences.

Comparison of outcomes of any embolic event, stroke, 6-month readmission and 12-month mortality between infective endocarditis patients who received infectious disease, cardiology, and cardiac surgery consultations.
Figure


Of 92 patients who received a cardiac surgery consult, 73 had NVE and 19 had PVE. Of these, 47 underwent valve surgery, 39 (of 73) with NVE (53.42%) and 8 (of 19) with PVE (42.1%). Most of the NVE patients (73.2%) had more than 1 indication for surgery. The most common indications for surgery among NVE patients were significant valvular dysfunction resulting in heart failure (65.9%), followed by mobile vegetation (56.1%) and recurrent embolic events (26.8%). The most common indication for surgery in PVE was persistent bacteremia or recurrent embolic events (75.0%).

DISCUSSION

In this study, we described the management, quality of care, and outcomes of IE patients in a tertiary medical center. We found that the majority of hospitalized patients with IE were older white men with comorbidities and IE risk factors. The complication rate was high (27.7% with embolic events) and the inhospital mortality rate was in the lower range reported by prior studies [14.7% vs. 15%-20%].5 Nearly one-third of patients (n = 47, 27.7%) received valve surgery. Quality of care received was generally good, with most patients receiving early blood cultures, echocardiograms, early antibiotics, and timely ID consultation. We identified important gaps in care, including a failure to consult cardiac surgery in nearly half of patients and failure to consult cardiology in more than half of patients.

Our findings support work suggesting that IE is no longer primarily a chronic or subacute disease of younger patients with IVD use, positive human immunodeficiency virus status, or bicuspid aortic valves.1,4,16,17 The International Collaboration on Endocarditis-Prospective Cohort Study,4 a multinational prospective cohort study (2000-2005) of 2781 adults with IE, reported a higher prevalence of patients with diabetes or on hemodialysis, IVD users, and patients with long-term venous catheter and intracardiac leads than we found. Yet both studies suggest that the demographics of IE are changing. This may partially explain why IE mortality has not improved in recent years:2,3 patients with older age and higher comorbidity burden may not be considered good surgical candidates.

This study is among the first to contribute information on concordance with IE guidelines in a cohort of U.S. patients. Our findings suggest that most patients received timely blood culture, same-day administration of empiric antibiotics, and ID consultation, which is similar to European studies.7,18 Guideline concordance could be improved in some areas. Overall documentation of the management plan regarding the intracardiac leads could be improved. Only 6 of 29 patients with intracardiac leads had documentation of their removal during the index hospitalization.

The 2014 AHA/ACC guidelines5 and the ESC guidelines6 emphasized the importance of multidisciplinary management of IE. As part of the Heart Valve Team at BMC, cardiologists provide expertise in diagnosis, imaging and clinical management of IE, and cardiac surgeons provide consultation on whether to pursue surgery and optimal timing of surgery. Early discussion with surgical team is considered mandatory in all complicated cases of IE.6,18 Infectious disease consultation has been shown to improve the rate of IE diagnosis, reduce the 6-month relapse rate,19 and improve outcomes in patients with S aureus bacteremia.20 In our study 86.5% of patients had documentation of an ID consultation; cardiac surgery consultation was obtained in 54.1% and cardiology consultation in 47.1% of patients.

We observed a trend towards lower rates of 6-month readmission and 12-month mortality among patients who received all 3 consults (Figure 1), despite the fact that rates of embolic events and stroke were higher in patients with 3 consults compared to those with fewer than 3. Obviously, the lack of confounder adjustment and lack of power limits our ability to make inferences about this association, but it generates hypotheses for future work. Because subjects in our study were cared for prior to 2014, multidisciplinary management of IE with involvement of cardiology, cardiac surgery, and ID physicians was observed in only one-third of patients. However, 117 (68.8%) patients received either cardiology or cardiac surgery consults. It is possible that some physicians considered involving both cardiology and cardiac surgery consultants as unnecessary and, therefore, did not consult both specialties. We will focus future QI efforts in our institution on educating physicians about the benefits of multidisciplinary care and the importance of fully implementing the 2014 AHA/ACC guidelines.

Our findings around quality of care should be placed in the context of 2 studies by González de Molina et al8 and Delahaye et al7 These studies described considerable discordance between guideline recommendations and real-world IE care. However, these studies were performed more than a decade ago and were conducted before current recommendations to consult cardiology and cardiac surgery were published.

In the 2014 AHA/ACC guidelines, surgery prior to completion of antibiotics is indicated in patients with valve dysfunction resulting in heart failure; left-sided IE caused by highly resistant organisms (including fungus or S aureus); IE complicated by heart block, aortic abscess, or penetrating lesions; and presence of persistent infection (bacteremia or fever lasting longer than 5 to 7 days) after onset of appropriate antimicrobial therapy. In addition, there is a Class IIa indication of early surgery in patients with recurrent emboli and persistent vegetation despite appropriate antibiotic therapy and a Class IIb indication of early surgery in patients with NVE with mobile vegetation greater than 10 mm in length. Surgery is recommended for patients with PVE and relapsing infection.

It is recommended that IE patients be cared for in centers with immediate access to cardiac surgery because the urgent need for surgical intervention can arise rapidly.5 We found that nearly one-third of included patients underwent surgery. Although we did not collect data on indications for surgery in patients who did not receive surgery, we observed that 50% had a surgery consult, suggesting the presence of 1 or more surgical indications. Of these, half underwent valve surgery. Most of the NVE patients who underwent surgery had more than 1 indication for surgery. Our surgical rate is similar to a study from Italy3 and overall in the lower range of reported surgical rate (25%-50%) from other studies.21 The low rate of surgery at our center may be related to the fact that the use of surgery for IE has been hotly debated in the literature,21 and may also be due to the low rate of cardiac surgery consultation.

Our study has several limitations. We identified eligible patients using a discharge ICD-9 coding of IE and then confirmed the presence of Duke criteria using record review. Using discharge diagnosis codes for endocarditis has been validated, and our additional manual chart review to confirm Duke criteria likely improved the specificity significantly. However, by excluding patients who did not have documented evidence of Duke criteria, we may have missed some cases, lowering sensitivity. The performance on selected quality metrics may also have been affected by our inclusion criteria. Because we included only patients who met Duke criteria, we tended to include patients who had received blood cultures and echocardiograms, which are part of the criteria. Thus, we cannot comment on use of diagnostic testing or specialty consultation in patients with suspected IE. This was a single-center study and may not represent patients or current practices seen in other institutions. We did not collect data on some of the predisposing factors to NVE (for example, baseline rheumatic heart disease or preexisting valvular heart disease) because it is estimated that less than 5% of IE in the U.S. is superimposed on rheumatic heart disease.4 We likely underestimated 12-month mortality rate because we did not cross-reference our findings again the National Death Index; however, this should not affect the comparison of this outcome between groups.

 

 

CONCLUSION

Our study confirms reports that IE epidemiology has changed significantly in recent years. It also suggests that concordance with guideline recommendations is good for some aspects of care (eg, echocardiogram, blood cultures), but can be improved in other areas, particularly in use of specialty consultation during the hospitalization. Future QI efforts should emphasize the role of the heart valve team or endocarditis team that consists of an internist, ID physician, cardiologist, cardiac surgeon, and nursing. Finally, efforts should be made to develop strategies for community hospitals that do not have access to all of these specialists (eg, early transfer, telehealth).

Disclosure

Nothing to report.

Infective endocarditis (IE) affected an estimated 46,800 Americans in 2011, and its incidence is increasing due to greater numbers of invasive procedures and prevalence of IE risk factors.1-3 Despite recent advances in the treatment of IE, morbidity and mortality remain high: in-hospital mortality in IE patients is 15% to 20%, and the 1-year mortality rate is approximately 40%.2,4,5

Poor IE outcomes may be the result of difficulty in diagnosing IE and identifying its optimal treatments. The American Heart Association (AHA), the American College of Cardiology (ACC), and the European Society of Cardiology (ESC) have published guidelines to address these challenges. Recent guidelines recommend a multidisciplinary approach that includes cardiology, cardiac surgery, and infectious disease (ID) specialty involvement in decision-making.5,6

In the absence of published quality measures for IE management, guidelines can be used to evaluate the quality of care of IE. Studies have showed poor concordance with guideline recommendations but did not examine agreement with more recently published guidelines.7,8 Furthermore, few studies have examined the management, outcomes, and quality of care received by IE patients. Therefore, we aimed to describe a modern cohort of patients with IE admitted to a tertiary medical center over a 4-year period. In particular, we aimed to assess quality of care received by this cohort, as measured by concordance with AHA and ACC guidelines to identify gaps in care and spur quality improvement (QI) efforts.

METHODS

Design and Study Population

We conducted a retrospective cohort study of adult IE patients admitted to Baystate Medical Center (BMC), a 716-bed tertiary academic center that covers a population of 800,000 people throughout western New England. We used the International Classification of Diseases (ICD)–Ninth Revision, to identify IE patients discharged with a principal or secondary diagnosis of IE between 2007 and 2011 (codes 421.0, 421.1, 421.9, 424.9, 424.90, and 424.91). Three co-authors confirmed the diagnosis by conducting a review of the electronic health records.

We included only patients who met modified Duke criteria for definite or possible IE.5 Definite IE defines patients with pathological criteria (microorganisms demonstrated by culture or histologic examination or a histologic examination showing active endocarditis); or patients with 2 major criteria (positive blood culture and evidence of endocardial involvement by echocardiogram), 1 major criterion and 3 minor criteria (minor criteria: predisposing heart conditions or intravenous drug (IVD) use, fever, vascular phenomena, immunologic phenomena, and microbiologic evidence that do not meet the major criteria) or 5 minor criteria. Possible IE defines patients with 1 major and 1 minor criterion or 3 minor criteria.5

 

 

Data Collection

We used billing and clinical databases to collect demographics, comorbidities, antibiotic treatment, 6-month readmission and 1-year mortality. Comorbid conditions were classified into Elixhauser comorbidities using software provided by the Healthcare Costs and Utilization Project of the Agency for Healthcare Research and Quality.9,10

We obtained all other data through electronic health record abstraction. These included microbiology, type of endocarditis (native valve endocarditis [NVE] or prosthetic valve endocarditis [PVE]), echocardiographic location of the vegetation, and complications involving the valve (eg, valve perforation, ruptured chorda, perivalvular abscess, or valvular insufficiency).

Using 2006 AHA/ACC guidelines,11 we identified quality metrics, including the presence of at least 2 sets of blood cultures prior to start of antibiotics and use of transthoracic echocardiogram (TTE) and transesophageal echocardiogram (TEE). Guidelines recommend using TTE as first-line to detect valvular vegetations and assess IE complications. TEE is recommended if TTE is nondiagnostic and also as first-line to diagnose PVE. We assessed the rate of consultation with ID, cardiology, and cardiac surgery specialties. While these consultations were not explicitly emphasized in the 2006 AHA/ACC guidelines, there is a class I recommendation in 2014 AHA/ACC guidelines5 to manage IE patients with consultation of all these specialties.

We reported the number of patients with intracardiac leads (pacemaker or defibrillator) who had documentation of intracardiac lead removal. Complete removal of intracardiac leads is indicated in IE patients with infection of leads or device (class I) and suggested for IE caused by Staphylococcus aureus or fungi (even without evidence of device or lead infection), and for patients undergoing valve surgery (class IIa).5 We entered abstracted data elements into a RedCap database, hosted by Tufts Clinical and Translational Science Institute.12

Outcomes

Outcomes included embolic events, strokes, need for cardiac surgery, LOS, inhospital mortality, 6-month readmission, and 1-year mortality. We identified embolic events using documentation of clinical or imaging evidence of an embolic event to the cerebral, coronary, peripheral arterial, renal, splenic, or pulmonary vasculature. We used record extraction to identify incidence of valve surgery. Nearly all patients who require surgery at BMC have it done onsite. We compared outcomes among patients who received less than 3 vs. 3 consultations provided by ID, cardiology, and cardiac surgery specialties. We also compared outcomes among patients who received 0, 1, 2, or 3 consultations to look for a trend.

Statistical Analysis

We divided the cohort into patients with NVE and PVE because there are differences in pathophysiology, treatment, and outcomes of these groups. We calculated descriptive statistics, including means/standard deviation (SD) and n (%). We conducted univariable analyses using Fisher exact (categorical), unpaired t tests (Gaussian), or Kruskal-Wallis equality-of-populations rank test (non-Gaussian). Common language effect sizes were also calculated to quantify group differences without respect to sample size.13,14 Analyses were performed using Stata 14.1. (StataCorp LLC, College Station, Texas). The BMC Institutional Review Board approved the protocol.

RESULTS

We identified a total of 317 hospitalizations at BMC meeting criteria for IE. Of these, 147 hospitalizations were readmissions or did not meet the clinical criteria of definite or possible IE. Thus, we included a total of 170 patients in the final analysis. Definite IE was present in 135 (79.4%) and possible IE in 35 (20.6%) patients.

Characteristics of 170 Hospitalized Patients with Infective Endocarditis
Table 1

Patient Characteristics

Of 170 patients, 127 (74.7%) had NVE and 43 (25.3%) had PVE. Mean ± SD age was 60.0 ± 17.9 years, 66.5% (n = 113) of patients were male, and 79.4% (n = 135) were white (Table 1). Hypertension and chronic kidney disease were the most common comorbidities. The median Gagne score15 was 4, corresponding to a 1-year mortality risk of 15%. Predisposing factors for IE included previous history of IE (n = 14 or 8.2%), IVD use (n = 23 or 13.5%), and presence of long-term venous catheters (n = 19 or 11.2%). Intracardiac leads were present in 17.1% (n = 29) of patients. Bicuspid aortic valve was reported in 6.5% (n = 11) of patients with NVE. Patients with PVE were older (+11.5 years, 95% confidence interval [CI] 5.5, 17.5) and more likely to have intracardiac leads (44.2% vs. 7.9%; P < 0.001; Table 1).

Microbiology and Antibiotics

Staphylococcus aureus was isolated in 40.0% of patients (methicillin-sensitive: 21.2%, n = 36; methicillin-resistant: 18.8%, n = 32) and vancomycin (88.2%, n = 150) was the most common initial antibiotic used. Nearly half (44.7%, n = 76) of patients received gentamicin as part of their initial antibiotic regimen. Appendix 1 provides information on final blood culture results, prosthetic versus native valve IE, and antimicrobial agents that each patient received. PVE patients were more likely to receive gentamicin as their initial antibiotic regimen than NVE (58.1% vs. 40.2%; P = 0.051; Table 1).

 

 

Echocardiography and Affected Valves

As per study inclusion criteria, all patients received echocardiography (either TTE, TEE, or both). Overall, the most common infected valve was mitral (41.3%), n = 59), followed by aortic valve (28.7%), n = 41). Patients in whom the location of infected valve could not be determined (15.9%, n = 27) had echocardiographic features of intracardiac device infection or intracardiac mass (Table 1).

Quality of Care

Nearly all (n = 165, 97.1%) of patients had at least 2 sets of blood cultures drawn, most on the first day of admission (71.2%). The vast majority of patients (n = 152, 89.4%) also received their first dose of antibiotics on the day of admission. Ten (5.9%) patients did not receive any consults, and 160 (94.1%) received at least 1 consultation. An ID consultation was obtained for most (147, 86.5%) patients; cardiac surgery consultation was obtained for about half of patients (92, 54.1%), and cardiology consultation was also obtained for nearly half of patients (80, 47.1%). One-third (53, 31.2%) did not receive a cardiology or cardiac surgery consult, two-thirds (117, 68.8%) received either a cardiology or a cardiac surgery consult, and one-third (55, 32.4%) received both.

Of the 29 patients who had an intracardiac lead, 6 patients had documentation of the device removal during the index hospitalization (5 or 50.0% of patients with NVE and 1 or 5.3% of patients with PVE; P = 0.02; Table 2).

Quality of Care of Patients Hospitalized with Infective Endocarditis
Table 2

Outcomes

Evidence of any embolic events was seen in 27.7% (n = 47) of patients, including stroke in 17.1% (n = 29). Median LOS for all patients was 13.5 days, and 6-month readmission among patients who survived their index admission was 51.0% (n = 74/145; 95% CI, 45.9%-62.7%). Inhospital mortality was 14.7% (n = 25; 95% CI: 10.1%-20.9%) and 12-month mortality was 22.4% (n = 38; 95% CI, 16.7%-29.3%). Inhospital mortality was more frequent among patients with PVE than NVE (20.9% vs. 12.6%; P = 0.21), although this difference was not statistically significant. Complications were more common in NVE than PVE (any embolic event: 32.3% vs. 14.0%, P = 0.03; stroke, 20.5% vs. 7.0%, P = 0.06; Table 3).

Outcome of Hospitalized Patients with Infective Endocarditis
Table 3

Although there was a trend toward reduction in 6-month readmission and 12-month mortality with incremental increase in the number of specialties consulted (ID, cardiology and cardiac surgery), the difference was not statistically significant (Figure). In addition, comparing outcomes of embolic events, stroke, 6-month readmission, and 12-month mortality between those who received 3 consults (28.8%, n = 49) to those with fewer than 3 (71.2%, n = 121) did not show statistically significant differences.

Figure. Comparison of outcomes of any embolic event, stroke, 6-month readmission, and 12-month mortality between infective endocarditis patients who received infectious disease, cardiology, and cardiac surgery consultations.


Of 92 patients who received a cardiac surgery consult, 73 had NVE and 19 had PVE. Of these, 47 underwent valve surgery: 39 of 73 (53.4%) with NVE and 8 of 19 (42.1%) with PVE. Most of the NVE patients (73.2%) had more than 1 indication for surgery. The most common indications for surgery among NVE patients were significant valvular dysfunction resulting in heart failure (65.9%), followed by mobile vegetation (56.1%) and recurrent embolic events (26.8%). The most common indication for surgery in PVE was persistent bacteremia or recurrent embolic events (75.0%).

DISCUSSION

In this study, we described the management, quality of care, and outcomes of IE patients in a tertiary medical center. We found that the majority of hospitalized patients with IE were older white men with comorbidities and IE risk factors. The complication rate was high (27.7% with embolic events), and the in-hospital mortality rate (14.7%) was at the lower end of the range reported by prior studies (15%-20%).5 Nearly one-third of patients (n = 47, 27.7%) underwent valve surgery. Quality of care was generally good, with most patients receiving early blood cultures, early antibiotics, echocardiography, and timely ID consultation. We identified important gaps in care, however, including failure to consult cardiac surgery in nearly half of patients and failure to consult cardiology in more than half.

Our findings support work suggesting that IE is no longer primarily a chronic or subacute disease of younger patients with IVD use, positive human immunodeficiency virus status, or bicuspid aortic valves.1,4,16,17 The International Collaboration on Endocarditis-Prospective Cohort Study,4 a multinational prospective cohort study (2000-2005) of 2781 adults with IE, reported a higher prevalence of patients with diabetes or on hemodialysis, IVD users, and patients with long-term venous catheters and intracardiac leads than we found. Yet both studies suggest that the demographics of IE are changing. This may partially explain why IE mortality has not improved in recent years:2,3 patients of older age and with higher comorbidity burden may not be considered good surgical candidates.

This study is among the first to report on concordance with IE guidelines in a cohort of U.S. patients. Our findings suggest that most patients received timely blood cultures, same-day administration of empiric antibiotics, and ID consultation, similar to European studies.7,18 Guideline concordance could be improved in other areas, however, particularly documentation of the management plan for intracardiac leads: only 6 of 29 patients with intracardiac leads had documentation of device removal during the index hospitalization.

The 2014 AHA/ACC guidelines5 and the ESC guidelines6 emphasize the importance of multidisciplinary management of IE. As part of the Heart Valve Team at BMC, cardiologists provide expertise in the diagnosis, imaging, and clinical management of IE, and cardiac surgeons advise on whether to pursue surgery and on its optimal timing. Early discussion with the surgical team is considered mandatory in all complicated cases of IE.6,18 Infectious disease consultation has been shown to improve the rate of IE diagnosis, reduce the 6-month relapse rate,19 and improve outcomes in patients with S aureus bacteremia.20 In our study, 86.5% of patients had documentation of an ID consultation; cardiac surgery consultation was obtained in 54.1% and cardiology consultation in 47.1% of patients.

We observed a trend toward lower rates of 6-month readmission and 12-month mortality among patients who received all 3 consults (Figure), despite higher rates of embolic events and stroke in that group. The lack of confounder adjustment and limited statistical power preclude inferences about this association, but it generates hypotheses for future work. Because subjects in our study were cared for prior to 2014, multidisciplinary management of IE with involvement of cardiology, cardiac surgery, and ID physicians was observed in only one-third of patients. However, 117 (68.8%) patients received either a cardiology or a cardiac surgery consult. It is possible that some physicians considered involving both cardiology and cardiac surgery consultants unnecessary and therefore did not consult both specialties. We will focus future QI efforts at our institution on educating physicians about the benefits of multidisciplinary care and the importance of fully implementing the 2014 AHA/ACC guidelines.

Our findings on quality of care should be placed in the context of 2 studies, by González de Molina et al8 and Delahaye et al,7 which described considerable discordance between guideline recommendations and real-world IE care. However, both studies were performed more than a decade ago, before current recommendations to consult cardiology and cardiac surgery were published.

In the 2014 AHA/ACC guidelines, surgery prior to completion of antibiotics is indicated in patients with valve dysfunction resulting in heart failure; left-sided IE caused by highly resistant organisms (including fungi or S aureus); IE complicated by heart block, aortic abscess, or penetrating lesions; and persistent infection (bacteremia or fever lasting longer than 5 to 7 days) after onset of appropriate antimicrobial therapy. In addition, there is a Class IIa indication for early surgery in patients with recurrent emboli and persistent vegetation despite appropriate antibiotic therapy, and a Class IIb indication for early surgery in patients with NVE and a mobile vegetation greater than 10 mm in length. Surgery is also recommended for patients with PVE and relapsing infection.

It is recommended that IE patients be cared for in centers with immediate access to cardiac surgery because the need for surgical intervention can arise rapidly.5 We found that nearly one-third of included patients underwent surgery. Although we did not collect data on surgical indications in patients who did not undergo surgery, slightly more than half of patients (54.1%) received a cardiac surgery consult, suggesting the presence of 1 or more surgical indications, and about half of those (51.1%) underwent valve surgery. Most of the NVE patients who underwent surgery had more than 1 indication. Our surgical rate is similar to that of a study from Italy3 and at the lower end of the 25% to 50% range reported by other studies.21 The low rate of surgery at our center may reflect the ongoing debate in the literature about the role of surgery in IE21 and may also be due to the low rate of cardiac surgery consultation.

Our study has several limitations. We identified eligible patients using discharge ICD-9 codes for IE and then confirmed the presence of Duke criteria by record review. Use of discharge diagnosis codes for endocarditis has been validated, and our additional manual chart review to confirm Duke criteria likely improved specificity substantially. However, by excluding patients who did not have documented evidence of Duke criteria, we may have missed some cases, lowering sensitivity. Performance on selected quality metrics may also have been affected by our inclusion criteria: because we included only patients who met Duke criteria, we tended to include patients who had received blood cultures and echocardiograms, which are part of the criteria. Thus, we cannot comment on the use of diagnostic testing or specialty consultation in patients with suspected IE. This was a single-center study and may not represent patients or current practices at other institutions. We did not collect data on some predisposing factors for NVE (for example, baseline rheumatic heart disease or preexisting valvular heart disease) because it is estimated that less than 5% of IE in the U.S. is superimposed on rheumatic heart disease.4 Finally, we likely underestimated the 12-month mortality rate because we did not cross-reference our findings against the National Death Index; however, this should not affect comparisons of this outcome between groups.

CONCLUSION

Our study confirms reports that IE epidemiology has changed significantly in recent years. It also suggests that concordance with guideline recommendations is good for some aspects of care (eg, echocardiography, blood cultures) but can be improved in others, particularly the use of specialty consultation during hospitalization. Future QI efforts should emphasize the role of a heart valve or endocarditis team consisting of an internist, ID physician, cardiologist, cardiac surgeon, and nursing staff. Finally, strategies should be developed for community hospitals that do not have access to all of these specialists (eg, early transfer, telehealth).

Disclosure

Nothing to report.

References

1. Pant S, Patel NJ, Deshmukh A, Golwala H, Patel N, Badheka A, et al. Trends in infective endocarditis incidence, microbiology, and valve replacement in the United States from 2000 to 2011. J Am Coll Cardiol. 2015;65(19):2070-2076. PubMed
2. Bor DH, Woolhandler S, Nardin R, Brusch J, Himmelstein DU. Infective endocarditis in the U.S., 1998-2009: a nationwide study. PloS One. 2013;8(3):e60033. PubMed
3. Fedeli U, Schievano E, Buonfrate D, Pellizzer G, Spolaore P. Increasing incidence and mortality of infective endocarditis: a population-based study through a record-linkage system. BMC Infect Dis. 2011;11:48. PubMed
4. Murdoch DR, Corey GR, Hoen B, Miró JM, Fowler VG, Bayer AS, et al. Clinical presentation, etiology, and outcome of infective endocarditis in the 21st century: the International Collaboration on Endocarditis-Prospective Cohort Study. Arch Intern Med. 2009;169(5):463-473. PubMed
5. Nishimura RA, Otto CM, Bonow RO, Carabello BA, Erwin JP, Guyton RA, et al. 2014 AHA/ACC guideline for the management of patients with valvular heart disease: executive summary: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2014;63(22):2438-2488. PubMed
6. Habib G, Lancellotti P, Antunes MJ, Bongiorni MG, Casalta J-P, Del Zotti F, et al. 2015 ESC Guidelines for the management of infective endocarditis: The Task Force for the Management of Infective Endocarditis of the European Society of Cardiology (ESC). Endorsed by: European Association for Cardio-Thoracic Surgery (EACTS), the European Association of Nuclear Medicine (EANM). Eur Heart J. 2015;36(44):3075-3128. PubMed
7. Delahaye F, Rial MO, de Gevigney G, Ecochard R, Delaye J. A critical appraisal of the quality of the management of infective endocarditis. J Am Coll Cardiol. 1999;33(3):788-793. PubMed
8. González De Molina M, Fernández-Guerrero JC, Azpitarte J. [Infectious endocarditis: degree of discordance between clinical guidelines recommendations and clinical practice]. Rev Esp Cardiol. 2002;55(8):793-800. PubMed
9. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. PubMed
10. Quan H, Parsons GA, Ghali WA. Validity of information on comorbidity derived from ICD-9-CM administrative data. Med Care. 2002;40(8):675-685. PubMed
11. American College of Cardiology/American Heart Association Task Force on Practice Guidelines, Society of Cardiovascular Anesthesiologists, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, Bonow RO, Carabello BA, Kanu C, deLeon AC Jr, Faxon DP, Freed MD, et al. ACC/AHA 2006 guidelines for the management of patients with valvular heart disease: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (writing committee to revise the 1998 Guidelines for the Management of Patients With Valvular Heart Disease): developed in collaboration with the Society of Cardiovascular Anesthesiologists: endorsed by the Society for Cardiovascular Angiography and Interventions and the Society of Thoracic Surgeons. Circulation. 2006;114(5):e84-e231.
12. REDCap [Internet]. [cited 2016 May 14]. Available from: https://collaborate.tuftsctsi.org/redcap/.
13. McGraw KO, Wong SP. A common language effect-size statistic. Psychol Bull. 1992;111:361-365. 
14. Cohen J. The statistical power of abnormal-social psychological research: a review. J Abnorm Soc Psychol. 1962;65:145-153. PubMed
15. Gagne JJ, Glynn RJ, Avorn J, Levin R, Schneeweiss S. A combined comorbidity score predicted mortality in elderly patients better than existing scores. J Clin Epidemiol. 2011;64(7):749-759. PubMed
16. Baddour LM, Wilson WR, Bayer AS, Fowler VG Jr, Tleyjeh IM, Rybak MJ, et al. Infective endocarditis in adults: Diagnosis, antimicrobial therapy, and management of complications: A scientific statement for healthcare professionals from the American Heart Association. Circulation. 2015;132(15):1435-1486. PubMed
17. Cecchi E, Chirillo F, Castiglione A, Faggiano P, Cecconi M, Moreo A, et al. Clinical epidemiology in Italian Registry of Infective Endocarditis (RIEI): Focus on age, intravascular devices and enterococci. Int J Cardiol. 2015;190:151-156. PubMed
18. Tornos P, Iung B, Permanyer-Miralda G, Baron G, Delahaye F, Gohlke-Bärwolf Ch, et al. Infective endocarditis in Europe: lessons from the Euro heart survey. Heart. 2005;91(5):571-575. PubMed
19. Yamamoto S, Hosokawa N, Sogi M, Inakaku M, Imoto K, Ohji G, et al. Impact of infectious diseases service consultation on diagnosis of infective endocarditis. Scand J Infect Dis. 2012;44(4):270-275. PubMed
20. Rieg S, Küpper MF. Infectious diseases consultations can make the difference: a brief review and a plea for more infectious diseases specialists in Germany. Infection. 2016;(2):159-166. PubMed
21. Prendergast BD, Tornos P. Surgery for infective endocarditis: who and when? Circulation. 2010;121(9):1141-1152. PubMed

Issue
Journal of Hospital Medicine 12(6)
Page Number
414-420
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Mohammad Amin Kashef, MD, Division of Cardiovascular Disease, Baystate Medical Center, Tufts University School of Medicine, 759 Chestnut Street, Springfield, MA 01199; Telephone: 860-989-6444; Fax: 413-794-8866; E-mail: mohammadamin.kashef@baystatehealth.org

Hospitalist Utilization and Performance

Article Type
Changed
Mon, 05/22/2017 - 18:43
Display Headline
Hospitalist utilization and hospital performance on 6 publicly reported patient outcomes

The past several years have seen a dramatic increase in the percentage of patients cared for by hospitalists, yet an emerging body of literature examining the association between care given by hospitalists and performance on a number of process measures has shown mixed results. Hospitalists do not appear to provide higher quality of care for pneumonia,1,2 while results in heart failure are mixed.3-5 Each of these studies was conducted at a single site and examined patient-level effects. More recently, Vasilevskis et al6 assessed the association between the intensity of hospitalist use (measured as the percentage of patients admitted by hospitalists) and performance on process measures. In a cohort of 208 California hospitals, they found a significant improvement in performance on process measures in patients with acute myocardial infarction, heart failure, and pneumonia with increasing percentages of patients admitted by hospitalists.6

To date, no study has examined the association between the use of hospitalists and the publicly reported 30-day mortality and readmission measures. Specifically, the Centers for Medicare and Medicaid Services (CMS) have developed and now publicly report risk-standardized 30-day mortality (RSMR) and readmission rates (RSRR) for Medicare patients hospitalized for 3 common and costly conditions: acute myocardial infarction (AMI), heart failure (HF), and pneumonia.7 Performance on these hospital-based quality measures varies widely and varies by hospital volume, ownership status, teaching status, and nurse staffing levels.8-13 However, even accounting for these characteristics leaves much of the variation in outcomes unexplained. We hypothesized that the presence of hospitalists within a hospital would be associated with higher performance on 30-day mortality and 30-day readmission measures for AMI, HF, and pneumonia. We further hypothesized that, among hospitals using hospitalists, there would be a positive correlation between increasing percentage of patients admitted by hospitalists and performance on outcome measures. To test these hypotheses, we conducted a national survey of hospitalist leaders, linking survey responses to data on publicly reported outcome measures for AMI, HF, and pneumonia.

MATERIALS AND METHODS

Study Sites

Of the 4289 hospitals in operation in 2008, 1945 had 25 or more AMI discharges. We identified hospitals using American Hospital Association (AHA) data, calling hospitals up to 6 times each until we reached our target sample size of 600. Using this methodology, we contacted 1558 hospitals of a possible 1920 with AHA data; of the 1558 called, 598 provided survey results.

Survey Data

Our survey was adapted from the survey developed by Vasilevskis et al.6 The entire survey can be found in the Appendix (see Supporting Information in the online version of this article). Our key questions were: 1) Does your hospital have at least 1 hospitalist program or group? 2) Approximately what percentage of all medical patients in your hospital are admitted by hospitalists? The latter question was intended as an approximation of the intensity of hospitalist use, and has been used in prior studies.6,14 A more direct measure was not feasible given the complexity of obtaining admission data for such a large and diverse set of hospitals. Respondents were also asked about hospitalist care of AMI, HF, and pneumonia patients. Given the low likelihood of precise estimation of hospitalist participation in care for specific conditions, the response choices were divided into percentage quartiles: 0-25, 26-50, 51-75, and 76-100. Finally, participants were asked a number of questions regarding hospitalist organizational and clinical characteristics.
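
As a concrete rendering of the response coding described above, a minimal sketch; the bucketing function is ours, not part of the survey instrument:

```python
def condition_care_bucket(pct: float) -> str:
    """Map a reported percentage of condition-specific patients cared for
    by hospitalists to the survey's four response categories."""
    if not 0 <= pct <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if pct <= 25:
        return "0-25"
    if pct <= 50:
        return "26-50"
    if pct <= 75:
        return "51-75"
    return "76-100"

print(condition_care_bucket(60))  # -> "51-75"
```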

Survey Process

We obtained data regarding presence or absence of hospitalists and characteristics of the hospitalist services via phone‐ and fax‐administered survey (see Supporting Information, Appendix, in the online version of this article). Telephone and faxed surveys were administered between February 2010 and January 2011. Hospital telephone numbers were obtained from the 2008 AHA survey database and from a review of each hospital's website. Up to 6 attempts were made to obtain a completed survey from nonrespondents unless participation was specifically refused. Potential respondents were contacted in the following order: hospital medicine department leaders, hospital medicine clinical managers, vice president for medical affairs, chief medical officers, and other hospital executives with knowledge of the hospital medicine services. All respondents agreed with a question asking whether they had direct working knowledge of their hospital medicine services; contacts who said they did not have working knowledge of their hospital medicine services were asked to refer our surveyor to the appropriate person at their site. Absence of a hospitalist program was confirmed by contacting the Medical Staff Office.

Hospital Organizational and Patient‐Mix Characteristics

Hospital‐level organizational characteristics (eg, bed size, teaching status) and patient‐mix characteristics (eg, Medicare and Medicaid inpatient days) were obtained from the 2008 AHA survey database.

Outcome Performance Measures

The 30-day risk-standardized mortality and readmission rates (RSMR and RSRR) for 2008 for AMI, HF, and pneumonia were calculated for all admissions for people age 65 and over with traditional fee-for-service Medicare. Beneficiaries had to be enrolled for 12 months prior to their hospitalization for any of the 3 conditions, and had to have complete claims data available for that 12-month period.7 These 6 outcome measures were constructed using hierarchical generalized linear models.15-20 Using the RSMR for AMI as an example, for each hospital the measure is estimated by dividing the predicted number of deaths within 30 days of admission for AMI by the expected number of deaths within 30 days of admission for AMI. This ratio is then multiplied by the national unadjusted 30-day mortality rate for AMI, which is obtained using data on deaths from the Medicare beneficiary denominator file. Each measure is adjusted for patient characteristics such as age, gender, and comorbidities. All 6 measures are endorsed by the National Quality Forum (NQF) and are reported publicly by CMS on the Hospital Compare web site.
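
The measure construction described above reduces to a simple calculation once the hierarchical model has produced the predicted and expected event counts. A minimal sketch with invented numbers (the model fitting itself is omitted):

```python
def risk_standardized_rate(predicted: float, expected: float,
                           national_rate: float) -> float:
    """Risk-standardized rate as described above: the ratio of predicted
    to expected events (both from the hierarchical model), scaled by the
    national unadjusted event rate."""
    return (predicted / expected) * national_rate

# Hypothetical hospital: the model predicts 30 AMI deaths given its case
# mix and its own hospital effect, but expects 25 at an average hospital;
# with a national unadjusted 30-day AMI mortality of 16%, the RSMR is:
print(risk_standardized_rate(predicted=30, expected=25, national_rate=0.16))
# -> 0.192, ie, a 19.2% RSMR (worse than the national average)
```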

Statistical Analysis

Comparison of hospital‐ and patient‐level characteristics between hospitals with and without hospitalists was performed using chi‐square tests and Student t tests.

The primary outcome variables are the RSMRs and RSRRs for AMI, HF, and pneumonia. Multivariable linear regression models were used to assess the relationship between hospitals with at least 1 hospitalist group and each dependent variable. Models were adjusted for variables previously reported to be associated with quality of care. Hospital‐level characteristics included core‐based statistical area, teaching status, number of beds, region, safety‐net status, nursing staff ratio (number of registered nurse FTEs/number of hospital FTEs), and presence or absence of cardiac catheterization and coronary bypass capability. Patient‐level characteristics included Medicare and Medicaid inpatient days as a percentage of total inpatient days and percentage of admissions by race (black vs non‐black). The presence of hospitalists was correlated with each of the hospital and patient‐level characteristics. Further analyses of the subset of hospitals that use hospitalists included construction of multivariable linear regression models to assess the relationship between the percentage of patients admitted by hospitalists and the dependent variables. Models were adjusted for the same patient‐ and hospital‐level characteristics.
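
A minimal sketch of the modeling approach described above, using statsmodels; this is an illustration rather than the authors' actual code, and it assumes a hospital-level data frame with one row per hospital and hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per hospital, with the outcome (eg, the heart failure RSRR
# as a proportion), the exposure (any hospitalist group, coded 0/1), and
# the adjustment covariates named in the text. All names are illustrative.
df = pd.read_csv("hospital_survey_linked.csv")  # hypothetical file

model = smf.ols(
    "hf_rsrr ~ has_hospitalists + C(cbsa) + C(teaching_status) + beds"
    " + C(region) + safety_net + rn_staffing_ratio + cardiac_capability"
    " + pct_medicare_days + pct_medicaid_days + pct_black_admissions",
    data=df,
).fit()

# The coefficient on has_hospitalists corresponds to the adjusted beta
# estimates reported in Table 4 (eg, -0.006 for the heart failure RSRR).
print(model.params["has_hospitalists"])
print(model.conf_int().loc["has_hospitalists"])
```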

The institutional review boards at Yale University and University of California, San Francisco approved the study. All analyses were performed using Statistical Analysis Software (SAS) version 9.1 (SAS Institute, Inc, Cary, NC).

RESULTS

Characteristics of Participating Hospitals

Telephone, fax, and e‐mail surveys were attempted with 1558 hospitals; we received 598 completed surveys for a response rate of 40%. There was no difference between responders and nonresponders on any of the 6 outcome variables, the number of Medicare or Medicaid inpatient days, and the percentage of admissions by race. Responders and nonresponders were also similar in size, ownership, safety‐net and teaching status, nursing staff ratio, presence of cardiac catheterization and coronary bypass capability, and core‐based statistical area. They differed only on region of the country, where hospitals in the northwest Central and Pacific regions of the country had larger overall proportions of respondents. All hospitals provided information about the presence or absence of hospitalist programs. The majority of respondents were hospitalist clinical or administrative managers (n = 220) followed by hospitalist leaders (n = 106), other executives (n = 58), vice presidents for medical affairs (n = 39), and chief medical officers (n = 15). Each respondent indicated a working knowledge of their site's hospitalist utilization and practice characteristics. Absence of hospitalist utilization was confirmed by contact with the Medical Staff Office.

Comparisons of Sites With Hospitalists and Those Without Hospitalists

Hospitals with and without hospitalists differed by a number of organizational characteristics (Table 1). Sites with hospitalists were more likely to be larger, nonprofit teaching hospitals, located in metropolitan regions, and have cardiac surgical services. There was no difference in the hospitals' safety‐net status or RN staffing ratio. Hospitals with hospitalists admitted lower percentages of black patients.

Table 1. Hospital Characteristics

| Characteristic | Hospitalist Program (N = 429), N (%) | No Hospitalist Program (N = 169), N (%) | P Value |
| --- | --- | --- | --- |
| Core-based statistical area | | | <0.0001 |
| Division | 94 (21.9%) | 53 (31.4%) | |
| Metro | 275 (64.1%) | 72 (42.6%) | |
| Micro | 52 (12.1%) | 38 (22.5%) | |
| Rural | 8 (1.9%) | 6 (3.6%) | |
| Owner | | | 0.0003 |
| Public | 47 (11.0%) | 20 (11.8%) | |
| Nonprofit | 333 (77.6%) | 108 (63.9%) | |
| Private | 49 (11.4%) | 41 (24.3%) | |
| Teaching status | | | <0.0001 |
| COTH | 54 (12.6%) | 7 (4.1%) | |
| Teaching | 110 (25.6%) | 26 (15.4%) | |
| Other | 265 (61.8%) | 136 (80.5%) | |
| Cardiac type | | | 0.0003 |
| CABG | 286 (66.7%) | 86 (50.9%) | |
| CATH | 79 (18.4%) | 36 (21.3%) | |
| Other | 64 (14.9%) | 47 (27.8%) | |
| Region | | | 0.007 |
| New England | 35 (8.2%) | 3 (1.8%) | |
| Middle Atlantic | 60 (14.0%) | 29 (17.2%) | |
| South Atlantic | 78 (18.2%) | 23 (13.6%) | |
| NE Central | 60 (14.0%) | 35 (20.7%) | |
| SE Central | 31 (7.2%) | 10 (5.9%) | |
| NW Central | 38 (8.9%) | 23 (13.6%) | |
| SW Central | 41 (9.6%) | 21 (12.4%) | |
| Mountain | 22 (5.1%) | 3 (1.8%) | |
| Pacific | 64 (14.9%) | 22 (13.0%) | |
| Safety-net | | | 0.53 |
| Yes | 72 (16.8%) | 32 (18.9%) | |
| No | 357 (83.2%) | 137 (81.1%) | |
| | Mean (SD) | Mean (SD) | P Value |
| RN staffing ratio (n = 455) | 27.3 (17.0) | 26.1 (7.6) | 0.28 |
| Total beds | 315.0 (216.6) | 214.8 (136.0) | <0.0001 |
| % Medicare inpatient days | 47.2 (42) | 49.7 (41) | 0.19 |
| % Medicaid inpatient days | 18.5 (28) | 21.4 (46) | 0.16 |
| % Black | 7.6 (9.6) | 10.6 (17.4) | 0.03 |

Abbreviations: CABG, coronary artery bypass grafting; CATH, cardiac catheterization; COTH, Council of Teaching Hospitals; RN, registered nurse; SD, standard deviation.

Characteristics of Hospitalist Programs and Responsibilities

Of the 429 sites reporting use of hospitalists, the median percentage of patients admitted by hospitalists was 60%, with an interquartile range (IQR) of 35% to 80%. The median number of full‐time equivalent hospitalists per hospital was 8 with an IQR of 5 to 14. The IQR reflects the middle 50% of the distribution of responses, and is not affected by outliers or extreme values. Additional characteristics of hospitalist programs can be found in Table 2. The estimated percentage of patients with AMI, HF, and pneumonia cared for by hospitalists varied considerably, with fewer patients with AMI and more patients with pneumonia under hospitalist care. Overall, a majority of hospitalist groups provided the following services: care of critical care patients, emergency department admission screening, observation unit coverage, coverage for cardiac arrests and rapid response teams, quality improvement or utilization review activities, development of hospital practice guidelines, and participation in implementation of major hospital system projects (such as implementation of an electronic health record system).

Table 2. Hospitalist Program and Responsibility Characteristics

| Characteristic | N (%) |
| --- | --- |
| Date program established | |
| 1987-1994 | 9 (2.2%) |
| 1995-2002 | 130 (32.1%) |
| 2003-2011 | 266 (65.7%) |
| Missing date | 24 |
| No. of hospitalist FTEs, median (IQR) | 8 (5, 14) |
| Percent of medical patients admitted by hospitalists, median (IQR) | 60% (35, 80) |
| No. of hospitalist groups | |
| 1 | 333 (77.6%) |
| 2 | 54 (12.6%) |
| 3 | 36 (8.4%) |
| Don't know | 6 (1.4%) |
| Employment of hospitalists (not mutually exclusive) | |
| Hospital system | 98 (22.8%) |
| Hospital | 185 (43.1%) |
| Local physician practice group | 62 (14.5%) |
| Hospitalist physician practice group (local) | 83 (19.3%) |
| Hospitalist physician practice group (national/regional) | 36 (8.4%) |
| Other/unknown | 36 (8.4%) |
| Any 24-hr in-house coverage by hospitalists | |
| Yes | 329 (76.7%) |
| No | 98 (22.8%) |
| Don't know | 1 (0.2%) |
| Unknown | 1 (0.2%) |
| No. of hospitalist international medical graduates, median (IQR) | 3 (1, 6) |
| No. of hospitalists <1 yr out of residency, median (IQR) | 1 (0, 2) |
| Percent of patients with AMI cared for by hospitalists | |
| 0%-25% | 148 (34.5%) |
| 26%-50% | 67 (15.6%) |
| 51%-75% | 50 (11.7%) |
| 76%-100% | 54 (12.6%) |
| Don't know | 110 (25.6%) |
| Percent of patients with heart failure cared for by hospitalists | |
| 0%-25% | 79 (18.4%) |
| 26%-50% | 78 (18.2%) |
| 51%-75% | 75 (17.5%) |
| 76%-100% | 84 (19.6%) |
| Don't know | 113 (26.3%) |
| Percent of patients with pneumonia cared for by hospitalists | |
| 0%-25% | 47 (11.0%) |
| 26%-50% | 61 (14.3%) |
| 51%-75% | 74 (17.3%) |
| 76%-100% | 141 (32.9%) |
| Don't know | 105 (24.5%) |

Hospitalist provision of services, N (%):

| Service | Hospitalists Provide Service | Hospitalists Do Not Provide Service | Don't Know |
| --- | --- | --- | --- |
| Care of critical care patients | 346 (80.7%) | 80 (18.7%) | 3 (0.7%) |
| Emergency department admission screening | 281 (65.5%) | 143 (33.3%) | 5 (1.2%) |
| Observation unit coverage | 359 (83.7%) | 64 (14.9%) | 6 (1.4%) |
| Emergency department coverage | 145 (33.8%) | 280 (65.3%) | 4 (0.9%) |
| Coverage for cardiac arrests | 283 (66.0%) | 135 (31.5%) | 11 (2.6%) |
| Rapid response team coverage | 240 (55.9%) | 168 (39.2%) | 21 (4.9%) |
| Quality improvement or utilization review | 376 (87.7%) | 37 (8.6%) | 16 (3.7%) |
| Hospital practice guideline development | 339 (79.0%) | 55 (12.8%) | 35 (8.2%) |
| Implementation of major hospital system projects | 309 (72.0%) | 96 (22.4%) | 24 (5.6%) |

Abbreviations: AMI, acute myocardial infarction; FTEs, full-time equivalents; IQR, interquartile range.

Relationship Between Hospitalist Utilization and Outcomes

Tables 3 and 4 show comparisons between hospitals with and without hospitalists on each of the 6 outcome measures. In the bivariate analysis (Table 3), there was no statistically significant difference between groups on any outcome measure except the risk-standardized readmission rate for heart failure: sites with hospitalists had a lower RSRR for HF than sites without hospitalists (24.7% vs 25.4%, P < 0.0001). Results were similar in the multivariable models (Table 4), in which the beta estimate for the presence of hospitalists was not significantly different from zero for any measure except the RSRR for HF. For the subset of hospitals that used hospitalists, there was no statistically significant change in any of the 6 outcome measures with increasing percentage of patients admitted by hospitalists. Table 5 shows that, for each RSMR and RSRR, the slope did not consistently increase or decrease with incrementally higher percentages of patients admitted by hospitalists, and the confidence intervals of all estimates crossed zero.

Table 3. Bivariate Analysis of Hospitalist Utilization and Outcomes

| Outcome Measure | Hospitalist Program (N = 429), Mean % (SD) | No Hospitalist Program (N = 169), Mean % (SD) | P Value |
| --- | --- | --- | --- |
| MI RSMR | 16.0 (1.6) | 16.1 (1.5) | 0.56 |
| MI RSRR | 19.9 (0.88) | 20.0 (0.86) | 0.16 |
| HF RSMR | 11.3 (1.4) | 11.3 (1.4) | 0.77 |
| HF RSRR | 24.7 (1.6) | 25.4 (1.8) | <0.0001 |
| Pneumonia RSMR | 11.7 (1.7) | 12.0 (1.7) | 0.08 |
| Pneumonia RSRR | 18.2 (1.2) | 18.3 (1.1) | 0.28 |

Abbreviations: HF, heart failure; MI, myocardial infarction; RSMR, 30-day risk-standardized mortality rate; RSRR, 30-day risk-standardized readmission rate; SD, standard deviation.

Table 4. Multivariable Analysis of Hospitalist Utilization and Outcomes

| Outcome Measure | Adjusted Beta Estimate for Hospitalist Presence (95% CI) |
| --- | --- |
| MI RSMR | 0.001 (-0.002, 0.004) |
| MI RSRR | -0.001 (-0.002, 0.001) |
| HF RSMR | 0.0004 (-0.002, 0.003) |
| HF RSRR | -0.006 (-0.009, -0.003) |
| Pneumonia RSMR | -0.002 (-0.005, 0.001) |
| Pneumonia RSRR | 0.00001 (-0.002, 0.002) |

Abbreviations: CI, confidence interval; HF, heart failure; MI, myocardial infarction; RSMR, 30-day risk-standardized mortality rate; RSRR, 30-day risk-standardized readmission rate.

Table 5. Percent of Patients Admitted by Hospitalists and Outcomes

| Outcome Measure, Percent Admitted | Adjusted Beta Estimate (95% CI) |
| --- | --- |
| MI RSMR | |
| 0%-30% | -0.003 (-0.007, 0.002) |
| 32%-48% | 0.001 (-0.005, 0.006) |
| 50%-66% | Ref |
| 70%-80% | 0.004 (-0.001, 0.009) |
| 85% | -0.004 (-0.009, 0.001) |
| MI RSRR | |
| 0%-30% | 0.001 (-0.002, 0.004) |
| 32%-48% | 0.001 (-0.004, 0.004) |
| 50%-66% | Ref |
| 70%-80% | 0.001 (-0.002, 0.004) |
| 85% | 0.001 (-0.002, 0.004) |
| HF RSMR | |
| 0%-30% | -0.001 (-0.005, 0.003) |
| 32%-48% | -0.002 (-0.007, 0.003) |
| 50%-66% | Ref |
| 70%-80% | -0.002 (-0.006, 0.002) |
| 85% | 0.001 (-0.004, 0.005) |
| HF RSRR | |
| 0%-30% | 0.002 (-0.004, 0.007) |
| 32%-48% | 0.0003 (-0.005, 0.006) |
| 50%-66% | Ref |
| 70%-80% | -0.001 (-0.005, 0.004) |
| 85% | -0.002 (-0.007, 0.003) |
| Pneumonia RSMR | |
| 0%-30% | 0.001 (-0.004, 0.006) |
| 32%-48% | 0.00001 (-0.006, 0.006) |
| 50%-66% | Ref |
| 70%-80% | 0.001 (-0.004, 0.006) |
| 85% | -0.001 (-0.006, 0.005) |
| Pneumonia RSRR | |
| 0%-30% | -0.0002 (-0.004, 0.003) |
| 32%-48% | 0.004 (-0.0003, 0.008) |
| 50%-66% | Ref |
| 70%-80% | 0.001 (-0.003, 0.004) |
| 85% | 0.002 (-0.002, 0.006) |

Abbreviations: CI, confidence interval; HF, heart failure; MI, myocardial infarction; Ref, reference range; RSMR, 30-day risk-standardized mortality rate; RSRR, 30-day risk-standardized readmission rate.

DISCUSSION

In this national survey of hospitals, we did not find a significant association between the use of hospitalists and hospitals' performance on 30-day mortality or readmission measures for AMI, HF, or pneumonia. While the 30-day risk-standardized readmission rate for heart failure was statistically lower among hospitals that use hospitalists, the effect size was small. The survey response rate of 40% is comparable to that of other surveys of physicians and other healthcare personnel; moreover, there were no significant differences between responders and nonresponders, so the potential for response bias, while present, is small.

Contrary to the findings of a recent study,21 we did not find a higher readmission rate for any of the 3 conditions in hospitals with hospitalist programs. One advantage of our study is the use of more robust risk-adjustment methods: we used NQF-endorsed risk-standardized measures of readmission, which capture readmissions to any hospital for common, high-priority conditions in which the impact of care coordination and discontinuity of care is paramount. The models use administrative claims data but have been validated against medical record data. Another advantage is that our study focused on a time period when hospital readmissions were a standard quality benchmark and an increasing priority for hospitals, hospitalists, and community-based care delivery systems. While our study cannot discern whether patients had primary care physicians or the reason for admission to a hospitalist's care, our data suggest that hospitalists continue to care for a large percentage of hospitalized patients. Moreover, increasing the proportion of patients admitted by hospitalists did not affect the risk for readmission, providing some reassurance against a direct association between use of hospitalist systems and higher risk for readmission, or at least no evidence of one.

While hospitals with hospitalists clearly did not have better mortality or readmission rates, an alternate viewpoint is that, despite concerns that hospitalists negatively impact care continuity, our data also do not demonstrate an association between use of hospitalist services and higher readmission rates. It is possible that hospitals with hospitalists have greater ability to invest in hospital-based systems of care,22 an association that may incorporate any hospitalist effect, but our results were robust to adjustment for hospital factors (such as profit status and size).

It is also possible that secular trends in hospitals or hospitalist systems affected our results. A handful of single-site studies carried out soon after the hospitalist model's earliest descriptions found a reduction in mortality and readmission rates with the implementation of a hospitalist program.23-25 Alternatively, the effect of hospitalists may have been diluted, as often occurs when an innovation spreads from early-adopter sites to routine practice. Consistent with other multicenter studies from recent eras,21,26 our findings do not demonstrate an association between hospitalists and improved outcomes. Unlike other multicenter studies, we had access to disease-specific risk-adjustment methodologies, which may partially account for referral biases related to patient-specific measures of acute or chronic illness severity.

Changes in the hospitalist effect over time have a number of explanations, some of which are relevant to our study. Recent evidence suggests that complex organizational characteristics, such as organizational values and goals, may contribute to performance on 30-day mortality for AMI more than specific processes and protocols27; the intense focus on AMI as a quality improvement target is emblematic of a number of national initiatives that may have affected our results. Interestingly, hospitalist systems have changed over time as well. Early in the hospitalist movement, hospitalist systems were implemented largely at the behest of hospitals trying to reduce costs. In recent years, hospitalist systems are at least as frequently implemented because outpatient-based physicians or surgeons request hospitalists; hospitalists have been focused on care of "uncovered" patients since the model's earliest description. In addition, some hospitals invest in hospitalist programs based on the perceived ability of hospitalists to improve quality and achieve better patient outcomes in an era in which payment is increasingly linked to quality-of-care metrics.

Our study has several limitations, six of which are noted here. First, while the hospitalist model has been widely embraced in the adult medicine field, in the absence of board certification, there is no gold standard definition of a hospitalist. It is therefore possible that some respondents may have represented groups that were identified incorrectly as hospitalists. Second, the data for the primary independent variable of interest was based upon self‐report and, therefore, subject to recall bias and potential misclassification of results. Respondents were not aware of our hypothesis, so the bias should not have been in one particular direction. Third, the data for the outcome variables are from 2008. They may, therefore, not reflect organizational enhancements related to use of hospitalists that are in process, and take years to yield downstream improvements on performance metrics. In addition, of the 429 hospitals that have hospitalist programs, 46 programs were initiated after 2008. While national performance on the 6 outcome variables has been relatively static over time,7 any significant change in hospital performance on these metrics since 2008 could suggest an overestimation or underestimation of the effect of hospitalist programs on patient outcomes. Fourth, we were not able to adjust for additional hospital or health system level characteristics that may be associated with hospitalist use or patient outcomes. Fifth, our regression models had significant collinearity, in that the presence of hospitalists was correlated with each of the covariates. However, this finding would indicate that our estimates may be overly conservative and could have contributed to our nonsignificant findings. Finally, outcomes for 2 of the 3 clinical conditions measured are ones for which hospitalists may less frequently provide care: acute myocardial infarction and heart failure. Outcome measures more relevant for hospitalists may be all‐condition, all‐cause, 30‐day mortality and readmission.

This work adds to the growing body of literature examining the impact of hospitalists on quality of care. To our knowledge, it is the first study to assess the association between hospitalist use and performance on outcome metrics at a national level. While our findings suggest that use of hospitalists alone may not lead to improved performance on outcome measures, a parallel body of research is emerging implicating broader system and organizational factors as key to high performance on outcome measures. It is likely that multiple factors contribute to performance on outcome measures, including type and mix of hospital personnel, patient care processes and workflow, and system level attributes. Comparative effectiveness and implementation research that assess the contextual factors and interventions that lead to successful system improvement and better performance is increasingly needed. It is unlikely that a single factor, such as hospitalist use, will significantly impact 30‐day mortality or readmission and, therefore, multifactorial interventions are likely required. In addition, hospitalist use is a complex intervention as the structure, processes, training, experience, role in the hospital system, and other factors (including quality of hospitalists or the hospitalist program) vary across programs. Rather than focusing on the volume of care delivered by hospitalists, hospitals will likely need to support hospital medicine programs that have the time and expertise to devote to improving the quality and value of care delivered across the hospital system. This study highlights that interventions leading to improvement on core outcome measures are more complex than simply having a hospital medicine program.

Acknowledgements

The authors acknowledge Judy Maselli, MPH, Division of General Internal Medicine, Department of Medicine, University of California, San Francisco, for her assistance with statistical analyses and preparation of tables.

Disclosures: Work on this project was supported by the Robert Wood Johnson Clinical Scholars Program (K.G.); California Healthcare Foundation grant 15763 (A.D.A.); and a grant from the National Heart, Lung, and Blood Institute (NHLBI), study 1U01HL105270‐02 (H.M.K.). Dr Krumholz is the chair of the Cardiac Scientific Advisory Board for United Health and has a research grant with Medtronic through Yale University; Dr Auerbach has a grant through the National Heart, Lung, and Blood Institute (NHLBI). The authors have no other disclosures to report.

References
1. Rifkin WD, Burger A, Holmboe ES, Sturdevant B. Comparison of hospitalists and nonhospitalists regarding core measures of pneumonia care. Am J Manag Care. 2007;13:129-132.
2. Rifkin WD, Conner D, Silver A, Eichorn A. Comparison of processes and outcomes of pneumonia care between hospitalists and community-based primary care physicians. Mayo Clin Proc. 2002;77(10):1053-1058.
3. Lindenauer PK, Chehabbedine R, Pekow P, Fitzgerald J, Benjamin EM. Quality of care for patients hospitalized with heart failure: assessing the impact of hospitalists. Arch Intern Med. 2002;162(11):1251-1256.
4. Vasilevskis EE, Meltzer D, Schnipper J, et al. Quality of care for decompensated heart failure: comparable performance between academic hospitalists and non-hospitalists. J Gen Intern Med. 2008;23(9):1399-1406.
5. Roytman MM, Thomas SM, Jiang CS. Comparison of practice patterns of hospitalists and community physicians in the care of patients with congestive heart failure. J Hosp Med. 2008;3(1):35-41.
6. Vasilevskis EE, Knebel RJ, Dudley RA, Wachter RM, Auerbach AD. Cross-sectional analysis of hospitalist prevalence and quality of care in California. J Hosp Med. 2010;5(4):200-207.
7. Hospital Compare. Department of Health and Human Services. Available at: http://www.hospitalcompare.hhs.gov. Accessed September 3, 2011.
8. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80(3):569-593.
9. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Can Med Assoc J. 2002;166(11):1399-1406.
10. Fine JM, Fine MJ, Galusha D, Patrillo M, Meehan TP. Patient and hospital characteristics associated with recommended processes of care for elderly patients hospitalized with pneumonia: results from the Medicare Quality Indicator System Pneumonia Module. Arch Intern Med. 2002;162(7):827-833.
11. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—The Hospital Quality Alliance Program. N Engl J Med. 2005;353(3):265-274.
12. Keeler EB, Rubenstein LV, Khan KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268(13):1709-1714.
13. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346(22):1715-1722.
14. Pham HH, Devers KJ, Kuo S, Berenson R. Health care market trends and the evolution of hospitalist use and roles. J Gen Intern Med. 2005;20:101-107.
15. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113:1683-1692.
16. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circulation. 2011;4:243-252.
17. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
18. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113:1693-1701.
19. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6:142-150.
21. Kuo YF, Goodwin JS. Association of hospitalist care with medical utilization after discharge: evidence of cost shift from a cohort study. Ann Intern Med. 2011;155:152-159.
22. Vasilevskis EE, Knebel RJ, Wachter RM, Auerbach AD. California hospital leaders' views of hospitalists: meeting needs of the present and future. J Hosp Med. 2009;4:528-534.
23. Meltzer D, Manning WG, Morrison J, et al. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137:866-874.
24. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137:859-865.
25. Palacio C, Alexandraki I, House J, Mooradian A. A comparative study of unscheduled hospital readmissions in a resident-staffed teaching service and a hospitalist-based service. South Med J. 2009;102:145-149.
26. Lindenauer P, Rothberg M, Pekow P, Kenwood C, Benjamin E, Auerbach A. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589-2600.
27. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? Ann Intern Med. 2011;154:384-390.
Issue
Journal of Hospital Medicine 7(6)
Page Number
482-488

The past several years have seen a dramatic increase in the percentage of patients cared for by hospitalists, yet an emerging body of literature examining the association between care given by hospitalists and performance on a number of process measures has shown mixed results. Hospitalists do not appear to provide higher quality of care for pneumonia,1, 2 while results in heart failure are mixed.35 Each of these studies was conducted at a single site, and examined patient‐level effects. More recently, Vasilevskis et al6 assessed the association between the intensity of hospitalist use (measured as the percentage of patients admitted by hospitalists) and performance on process measures. In a cohort of 208 California hospitals, they found a significant improvement in performance on process measures in patients with acute myocardial infarction, heart failure, and pneumonia with increasing percentages of patients admitted by hospitalists.6

To date, no study has examined the association between the use of hospitalists and the publicly reported 30‐day mortality and readmission measures. Specifically, the Centers for Medicare and Medicaid Services (CMS) have developed and now publicly report risk‐standardized 30‐day mortality (RSMR) and readmission rates (RSRR) for Medicare patients hospitalized for 3 common and costly conditionsacute myocardial infarction (AMI), heart failure (HF), and pneumonia.7 Performance on these hospital‐based quality measures varies widely, and vary by hospital volume, ownership status, teaching status, and nurse staffing levels.813 However, even accounting for these characteristics leaves much of the variation in outcomes unexplained. We hypothesized that the presence of hospitalists within a hospital would be associated with higher performance on 30‐day mortality and 30‐day readmission measures for AMI, HF, and pneumonia. We further hypothesized that for hospitals using hospitalists, there would be a positive correlation between increasing percentage of patients admitted by hospitalists and performance on outcome measures. To test these hypotheses, we conducted a national survey of hospitalist leaders, linking data from survey responses to data on publicly reported outcome measures for AMI, HF, and pneumonia.

MATERIALS AND METHODS

Study Sites

Of the 4289 hospitals in operation in 2008, 1945 had 25 or more AMI discharges. We identified hospitals using American Hospital Association (AHA) data, calling hospitals up to 6 times each until we reached our target sample size of 600. Using this methodology, we contacted 1558 hospitals of a possible 1920 with AHA data; of the 1558 called, 598 provided survey results.

Survey Data

Our survey was adapted from the survey developed by Vasilevskis et al.6 The entire survey can be found in the Appendix (see Supporting Information in the online version of this article). Our key questions were: 1) Does your hospital have at least 1 hospitalist program or group? 2) Approximately what percentage of all medical patients in your hospital are admitted by hospitalists? The latter question was intended as an approximation of the intensity of hospitalist use, and has been used in prior studies.6, 14 A more direct measure was not feasible given the complexity of obtaining admission data for such a large and diverse set of hospitals. Respondents were also asked about hospitalist care of AMI, HF, and pneumonia patients. Given the low likelihood of precise estimation of hospitalist participation in care for specific conditions, the response choices were divided into percentage quartiles: 025, 2650, 5175, and 76100. Finally, participants were asked a number of questions regarding hospitalist organizational and clinical characteristics.

Survey Process

We obtained data regarding presence or absence of hospitalists and characteristics of the hospitalist services via phone- and fax-administered survey (see Supporting Information, Appendix, in the online version of this article). Telephone and faxed surveys were administered between February 2010 and January 2011. Hospital telephone numbers were obtained from the 2008 AHA survey database and from a review of each hospital's website. Up to 6 attempts were made to obtain a completed survey from nonrespondents unless participation was specifically refused. Potential respondents were contacted in the following order: hospital medicine department leaders, hospital medicine clinical managers, vice presidents for medical affairs, chief medical officers, and other hospital executives with knowledge of the hospital medicine services. All respondents agreed with a question asking whether they had direct working knowledge of their hospital medicine services; contacts who said they did not have working knowledge of their hospital medicine services were asked to refer our surveyor to the appropriate person at their site. Absence of a hospitalist program was confirmed by contacting the Medical Staff Office.

Hospital Organizational and Patient‐Mix Characteristics

Hospital‐level organizational characteristics (eg, bed size, teaching status) and patient‐mix characteristics (eg, Medicare and Medicaid inpatient days) were obtained from the 2008 AHA survey database.

Outcome Performance Measures

The 30-day risk-standardized mortality and readmission rates (RSMR and RSRR) for 2008 for AMI, HF, and pneumonia were calculated for all admissions of people aged 65 and over with traditional fee-for-service Medicare. Beneficiaries had to be enrolled for 12 months prior to their hospitalization for any of the 3 conditions and had to have complete claims data available for that 12-month period.7 These 6 outcome measures were constructed using hierarchical generalized linear models.15-20 Using the RSMR for AMI as an example, for each hospital, the measure is estimated by dividing the predicted number of deaths within 30 days of admission for AMI by the expected number of deaths within 30 days of admission for AMI. This ratio is then multiplied by the national unadjusted 30-day mortality rate for AMI, which is obtained using data on deaths from the Medicare beneficiary denominator file. Each measure is adjusted for patient characteristics such as age, gender, and comorbidities. All 6 measures are endorsed by the National Quality Forum (NQF) and are reported publicly by CMS on the Hospital Compare web site.
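
As a concrete illustration of this construction, the sketch below computes an RSMR from invented inputs; the actual measures estimate the predicted and expected counts with the hierarchical models cited above.

```python
# Hypothetical illustration of the RSMR construction described above.
# All numbers are invented; the real measures estimate these quantities
# with hierarchical generalized linear models fit to Medicare claims.

predicted_deaths = 18.0  # model-predicted 30-day deaths for this hospital
expected_deaths = 20.0   # deaths expected given the hospital's case mix
national_rate = 0.16     # national unadjusted 30-day AMI mortality rate

rsmr = (predicted_deaths / expected_deaths) * national_rate
print(f"AMI RSMR = {rsmr:.1%}")  # (18/20) * 16% = 14.4%
```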

Statistical Analysis

Comparison of hospital‐ and patient‐level characteristics between hospitals with and without hospitalists was performed using chi‐square tests and Student t tests.
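
A minimal sketch of these two tests, with invented data standing in for the AHA survey variables:

```python
import numpy as np
from scipy import stats

# Chi-square test for a categorical characteristic (hypothetical counts
# patterned on the teaching-status rows of Table 1).
counts = np.array([[54, 110, 265],   # hospitals with hospitalists
                   [7, 26, 136]])    # hospitals without hospitalists
chi2, p_cat, dof, _ = stats.chi2_contingency(counts)

# Student t test for a continuous characteristic (hypothetical bed counts).
rng = np.random.default_rng(0)
beds_with = rng.normal(315, 217, 429)
beds_without = rng.normal(215, 136, 169)
t, p_cont = stats.ttest_ind(beds_with, beds_without)
print(f"chi-square p = {p_cat:.4f}, t-test p = {p_cont:.4f}")
```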

The primary outcome variables are the RSMRs and RSRRs for AMI, HF, and pneumonia. Multivariable linear regression models were used to assess the relationship between hospitals with at least 1 hospitalist group and each dependent variable. Models were adjusted for variables previously reported to be associated with quality of care. Hospital‐level characteristics included core‐based statistical area, teaching status, number of beds, region, safety‐net status, nursing staff ratio (number of registered nurse FTEs/number of hospital FTEs), and presence or absence of cardiac catheterization and coronary bypass capability. Patient‐level characteristics included Medicare and Medicaid inpatient days as a percentage of total inpatient days and percentage of admissions by race (black vs non‐black). The presence of hospitalists was correlated with each of the hospital and patient‐level characteristics. Further analyses of the subset of hospitals that use hospitalists included construction of multivariable linear regression models to assess the relationship between the percentage of patients admitted by hospitalists and the dependent variables. Models were adjusted for the same patient‐ and hospital‐level characteristics.
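
As an illustration, a model of this general form could be fit as below; the data are synthetic and the variable names are hypothetical stand-ins for a subset of the adjusters named above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 598
# Synthetic hospital-level data (a subset of adjusters, for brevity).
df = pd.DataFrame({
    "hf_rsrr": rng.normal(0.25, 0.016, n),    # outcome: HF RSRR
    "hospitalist": rng.integers(0, 2, n),     # 1 = has a hospitalist group
    "teaching": rng.choice(["COTH", "teaching", "other"], n),
    "beds": rng.normal(290, 200, n),
    "safety_net": rng.integers(0, 2, n),
    "rn_ratio": rng.normal(27, 15, n),
    "pct_medicare": rng.uniform(30, 70, n),
})
fit = smf.ols(
    "hf_rsrr ~ hospitalist + C(teaching) + beds + safety_net"
    " + rn_ratio + pct_medicare",
    data=df,
).fit()
# Adjusted beta estimate and 95% CI for the hospitalist term.
print(fit.params["hospitalist"], fit.conf_int().loc["hospitalist"].values)
```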

The institutional review boards at Yale University and University of California, San Francisco approved the study. All analyses were performed using Statistical Analysis Software (SAS) version 9.1 (SAS Institute, Inc, Cary, NC).

RESULTS

Characteristics of Participating Hospitals

Telephone, fax, and e-mail surveys were attempted with 1558 hospitals; we received 598 completed surveys, for a response rate of 40%. There was no difference between responders and nonresponders on any of the 6 outcome variables, the number of Medicare or Medicaid inpatient days, or the percentage of admissions by race. Responders and nonresponders were also similar in size, ownership, safety-net and teaching status, nursing staff ratio, presence of cardiac catheterization and coronary bypass capability, and core-based statistical area. They differed only by region: the northwest Central and Pacific regions had larger proportions of respondents. All hospitals provided information about the presence or absence of hospitalist programs. The majority of respondents were hospitalist clinical or administrative managers (n = 220), followed by hospitalist leaders (n = 106), other executives (n = 58), vice presidents for medical affairs (n = 39), and chief medical officers (n = 15). Each respondent indicated a working knowledge of their site's hospitalist utilization and practice characteristics. Absence of hospitalist utilization was confirmed by contact with the Medical Staff Office.

Comparisons of Sites With Hospitalists and Those Without Hospitalists

Hospitals with and without hospitalists differed by a number of organizational characteristics (Table 1). Sites with hospitalists were more likely to be larger, nonprofit teaching hospitals, located in metropolitan regions, and have cardiac surgical services. There was no difference in the hospitals' safety‐net status or RN staffing ratio. Hospitals with hospitalists admitted lower percentages of black patients.

Table 1. Hospital Characteristics

Abbreviations: CABG, coronary artery bypass grafting; CATH, cardiac catheterization; COTH, Council of Teaching Hospitals; RN, registered nurse; SD, standard deviation.

Characteristic | Hospitalist Program (N = 429), N (%) | No Hospitalist Program (N = 169), N (%) | P Value
Core-based statistical area |  |  | <0.0001
  Division | 94 (21.9%) | 53 (31.4%) |
  Metro | 275 (64.1%) | 72 (42.6%) |
  Micro | 52 (12.1%) | 38 (22.5%) |
  Rural | 8 (1.9%) | 6 (3.6%) |
Owner |  |  | 0.0003
  Public | 47 (11.0%) | 20 (11.8%) |
  Nonprofit | 333 (77.6%) | 108 (63.9%) |
  Private | 49 (11.4%) | 41 (24.3%) |
Teaching status |  |  | <0.0001
  COTH | 54 (12.6%) | 7 (4.1%) |
  Teaching | 110 (25.6%) | 26 (15.4%) |
  Other | 265 (61.8%) | 136 (80.5%) |
Cardiac type |  |  | 0.0003
  CABG | 286 (66.7%) | 86 (50.9%) |
  CATH | 79 (18.4%) | 36 (21.3%) |
  Other | 64 (14.9%) | 47 (27.8%) |
Region |  |  | 0.007
  New England | 35 (8.2%) | 3 (1.8%) |
  Middle Atlantic | 60 (14.0%) | 29 (17.2%) |
  South Atlantic | 78 (18.2%) | 23 (13.6%) |
  NE Central | 60 (14.0%) | 35 (20.7%) |
  SE Central | 31 (7.2%) | 10 (5.9%) |
  NW Central | 38 (8.9%) | 23 (13.6%) |
  SW Central | 41 (9.6%) | 21 (12.4%) |
  Mountain | 22 (5.1%) | 3 (1.8%) |
  Pacific | 64 (14.9%) | 22 (13.0%) |
Safety-net |  |  | 0.53
  Yes | 72 (16.8%) | 32 (18.9%) |
  No | 357 (83.2%) | 137 (81.1%) |

Characteristic | Hospitalist Program, Mean (SD) | No Hospitalist Program, Mean (SD) | P Value
RN staffing ratio (n = 455) | 27.3 (17.0) | 26.1 (7.6) | 0.28
Total beds | 315.0 (216.6) | 214.8 (136.0) | <0.0001
% Medicare inpatient days | 47.2 (42) | 49.7 (41) | 0.19
% Medicaid inpatient days | 18.5 (28) | 21.4 (46) | 0.16
% Black | 7.6 (9.6) | 10.6 (17.4) | 0.03

Characteristics of Hospitalist Programs and Responsibilities

Of the 429 sites reporting use of hospitalists, the median percentage of patients admitted by hospitalists was 60%, with an interquartile range (IQR) of 35% to 80%. The median number of full‐time equivalent hospitalists per hospital was 8 with an IQR of 5 to 14. The IQR reflects the middle 50% of the distribution of responses, and is not affected by outliers or extreme values. Additional characteristics of hospitalist programs can be found in Table 2. The estimated percentage of patients with AMI, HF, and pneumonia cared for by hospitalists varied considerably, with fewer patients with AMI and more patients with pneumonia under hospitalist care. Overall, a majority of hospitalist groups provided the following services: care of critical care patients, emergency department admission screening, observation unit coverage, coverage for cardiac arrests and rapid response teams, quality improvement or utilization review activities, development of hospital practice guidelines, and participation in implementation of major hospital system projects (such as implementation of an electronic health record system).
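
For instance, the reported median and IQR are simply the 50th, 25th, and 75th percentiles of the responses, as in this sketch with invented values:

```python
import numpy as np

# Invented survey responses for "percent of medical patients admitted by
# hospitalists"; the published values were median 60% (IQR 35-80).
pct_admitted = np.array([10, 25, 35, 40, 55, 60, 65, 75, 80, 90, 95])
q1, median, q3 = np.percentile(pct_admitted, [25, 50, 75])
print(f"median = {median}%, IQR = ({q1}%, {q3}%)")
```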

Table 2. Hospitalist Program and Responsibility Characteristics

Abbreviations: AMI, acute myocardial infarction; FTEs, full-time equivalents; IQR, interquartile range.

Characteristic | N (%)
Date program established
  1987-1994 | 9 (2.2%)
  1995-2002 | 130 (32.1%)
  2003-2011 | 266 (65.7%)
  Missing date | 24
No. of hospitalist FTEs, median (IQR) | 8 (5, 14)
Percent of medical patients admitted by hospitalists, median (IQR) | 60% (35, 80)
No. of hospitalist groups
  1 | 333 (77.6%)
  2 | 54 (12.6%)
  3 | 36 (8.4%)
  Don't know | 6 (1.4%)
Employment of hospitalists (not mutually exclusive)
  Hospital system | 98 (22.8%)
  Hospital | 185 (43.1%)
  Local physician practice group | 62 (14.5%)
  Hospitalist physician practice group (local) | 83 (19.3%)
  Hospitalist physician practice group (national/regional) | 36 (8.4%)
  Other/unknown | 36 (8.4%)
Any 24-hr in-house coverage by hospitalists
  Yes | 329 (76.7%)
  No | 98 (22.8%)
  Don't know | 1 (0.2%)
  Unknown | 1 (0.2%)
No. of hospitalist international medical graduates, median (IQR) | 3 (1, 6)
No. of hospitalists <1 yr out of residency, median (IQR) | 1 (0, 2)
Percent of patients with AMI cared for by hospitalists
  0%-25% | 148 (34.5%)
  26%-50% | 67 (15.6%)
  51%-75% | 50 (11.7%)
  76%-100% | 54 (12.6%)
  Don't know | 110 (25.6%)
Percent of patients with heart failure cared for by hospitalists
  0%-25% | 79 (18.4%)
  26%-50% | 78 (18.2%)
  51%-75% | 75 (17.5%)
  76%-100% | 84 (19.6%)
  Don't know | 113 (26.3%)
Percent of patients with pneumonia cared for by hospitalists
  0%-25% | 47 (11.0%)
  26%-50% | 61 (14.3%)
  51%-75% | 74 (17.3%)
  76%-100% | 141 (32.9%)
  Don't know | 105 (24.5%)
Hospitalist provision of services
  Care of critical care patients
    Hospitalists provide service | 346 (80.7%)
    Hospitalists do not provide service | 80 (18.7%)
    Don't know | 3 (0.7%)
  Emergency department admission screening
    Hospitalists provide service | 281 (65.5%)
    Hospitalists do not provide service | 143 (33.3%)
    Don't know | 5 (1.2%)
  Observation unit coverage
    Hospitalists provide service | 359 (83.7%)
    Hospitalists do not provide service | 64 (14.9%)
    Don't know | 6 (1.4%)
  Emergency department coverage
    Hospitalists provide service | 145 (33.8%)
    Hospitalists do not provide service | 280 (65.3%)
    Don't know | 4 (0.9%)
  Coverage for cardiac arrests
    Hospitalists provide service | 283 (66.0%)
    Hospitalists do not provide service | 135 (31.5%)
    Don't know | 11 (2.6%)
  Rapid response team coverage
    Hospitalists provide service | 240 (55.9%)
    Hospitalists do not provide service | 168 (39.2%)
    Don't know | 21 (4.9%)
  Quality improvement or utilization review
    Hospitalists provide service | 376 (87.7%)
    Hospitalists do not provide service | 37 (8.6%)
    Don't know | 16 (3.7%)
  Hospital practice guideline development
    Hospitalists provide service | 339 (79.0%)
    Hospitalists do not provide service | 55 (12.8%)
    Don't know | 35 (8.2%)
  Implementation of major hospital system projects
    Hospitalists provide service | 309 (72.0%)
    Hospitalists do not provide service | 96 (22.4%)
    Don't know | 24 (5.6%)

Relationship Between Hospitalist Utilization and Outcomes

Tables 3 and 4 show the comparisons between hospitals with and without hospitalists on each of the 6 outcome measures. In the bivariate analysis (Table 3), there was no statistically significant difference between groups on any of the outcome measures, with the exception of the risk-standardized readmission rate for heart failure: sites with hospitalists had a lower RSRR for HF than sites without hospitalists (24.7% vs 25.4%, P < 0.0001). These results were similar in the multivariable models (Table 4), in which the beta estimate (slope) did not differ significantly between hospitals utilizing hospitalists and those that did not on all measures except the RSRR for HF. For the subset of hospitals that used hospitalists, there was no statistically significant change in any of the 6 outcome measures with increasing percentage of patients admitted by hospitalists. Table 5 demonstrates that for each RSMR and RSRR, the slope did not consistently increase or decrease with incrementally higher percentages of patients admitted by hospitalists, and the confidence intervals for all estimates crossed zero.
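
The significance rule applied to Tables 4 and 5, namely that a 95% confidence interval crossing zero indicates a nonsignificant estimate, amounts to a simple check (values taken from Table 4):

```python
def excludes_zero(lower: float, upper: float) -> bool:
    """True when a 95% CI excludes zero, i.e., the estimate is significant."""
    return lower > 0 or upper < 0

print(excludes_zero(-0.009, -0.003))  # HF RSRR: True, the only significant term
print(excludes_zero(-0.002, 0.004))   # MI RSMR: False, the CI crosses zero
```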

Table 3. Bivariate Analysis of Hospitalist Utilization and Outcomes

Abbreviations: HF, heart failure; MI, myocardial infarction; RSMR, 30-day risk-standardized mortality rates; RSRR, 30-day risk-standardized readmission rates; SD, standard deviation.

Outcome Measure | Hospitalist Program (N = 429), Mean % (SD) | No Hospitalist Program (N = 169), Mean % (SD) | P Value
MI RSMR | 16.0 (1.6) | 16.1 (1.5) | 0.56
MI RSRR | 19.9 (0.88) | 20.0 (0.86) | 0.16
HF RSMR | 11.3 (1.4) | 11.3 (1.4) | 0.77
HF RSRR | 24.7 (1.6) | 25.4 (1.8) | <0.0001
Pneumonia RSMR | 11.7 (1.7) | 12.0 (1.7) | 0.08
Pneumonia RSRR | 18.2 (1.2) | 18.3 (1.1) | 0.28
Table 4. Multivariable Analysis of Hospitalist Utilization and Outcomes

Abbreviations: CI, confidence interval; HF, heart failure; MI, myocardial infarction; RSMR, 30-day risk-standardized mortality rates; RSRR, 30-day risk-standardized readmission rates.

Outcome Measure | Adjusted Beta Estimate for Hospitalist Presence (95% CI)
MI RSMR | 0.001 (-0.002, 0.004)
MI RSRR | -0.001 (-0.002, 0.001)
HF RSMR | 0.0004 (-0.002, 0.003)
HF RSRR | -0.006 (-0.009, -0.003)
Pneumonia RSMR | -0.002 (-0.005, 0.001)
Pneumonia RSRR | 0.00001 (-0.002, 0.002)
Table 5. Percent of Patients Admitted by Hospitalists and Outcomes

Abbreviations: CI, confidence interval; HF, heart failure; MI, myocardial infarction; Ref, reference range; RSMR, 30-day risk-standardized mortality rates; RSRR, 30-day risk-standardized readmission rates.

Outcome Measure and Percent Admitted | Adjusted Beta Estimate (95% CI)
MI RSMR
  0%-30% | -0.003 (-0.007, 0.002)
  32%-48% | 0.001 (-0.005, 0.006)
  50%-66% | Ref
  70%-80% | 0.004 (-0.001, 0.009)
  ≥85% | -0.004 (-0.009, 0.001)
MI RSRR
  0%-30% | 0.001 (-0.002, 0.004)
  32%-48% | 0.001 (-0.004, 0.004)
  50%-66% | Ref
  70%-80% | 0.001 (-0.002, 0.004)
  ≥85% | 0.001 (-0.002, 0.004)
HF RSMR
  0%-30% | -0.001 (-0.005, 0.003)
  32%-48% | -0.002 (-0.007, 0.003)
  50%-66% | Ref
  70%-80% | -0.002 (-0.006, 0.002)
  ≥85% | 0.001 (-0.004, 0.005)
HF RSRR
  0%-30% | 0.002 (-0.004, 0.007)
  32%-48% | 0.0003 (-0.005, 0.006)
  50%-66% | Ref
  70%-80% | -0.001 (-0.005, 0.004)
  ≥85% | -0.002 (-0.007, 0.003)
Pneumonia RSMR
  0%-30% | 0.001 (-0.004, 0.006)
  32%-48% | 0.00001 (-0.006, 0.006)
  50%-66% | Ref
  70%-80% | 0.001 (-0.004, 0.006)
  ≥85% | -0.001 (-0.006, 0.005)
Pneumonia RSRR
  0%-30% | -0.0002 (-0.004, 0.003)
  32%-48% | 0.004 (-0.0003, 0.008)
  50%-66% | Ref
  70%-80% | 0.001 (-0.003, 0.004)
  ≥85% | 0.002 (-0.002, 0.006)

DISCUSSION

In this national survey of hospitals, we did not find a significant association between the use of hospitalists and hospitals' performance on 30-day mortality or readmission measures for AMI, HF, or pneumonia. While hospitals that use hospitalists had a statistically lower 30-day risk-standardized readmission rate for heart failure, the effect size was small. The survey response rate of 40% is comparable to other surveys of physicians and other healthcare personnel; however, there were no significant differences between responders and nonresponders, so the potential for response bias, while present, is small.

Contrary to the findings of a recent study,21 we did not find a higher readmission rate for any of the 3 conditions in hospitals with hospitalist programs. One advantage of our study is the use of more robust risk-adjustment methods: we used NQF-endorsed risk-standardized measures of readmission, which capture readmissions to any hospital for common, high-priority conditions in which the impact of care coordination and discontinuity of care is paramount. The models use administrative claims data but have been validated against medical record data. Another advantage is that our study focused on a time period when hospital readmissions were a standard quality benchmark and an increasing priority for hospitals, hospitalists, and community-based care delivery systems. While our study cannot discern whether patients had primary care physicians or the reason for admission to a hospitalist's care, our data do suggest that hospitalists continue to care for a large percentage of hospitalized patients. Moreover, increasing the proportion of patients admitted by hospitalists did not affect the risk for readmission, offering some reassurance that use of hospitalist systems is not directly associated with a higher risk for readmission.

Although hospitals with hospitalists did not have better mortality or readmission rates, an alternate viewpoint is that, despite concerns that hospitalists negatively affect continuity of care, our data do not demonstrate an association between use of hospitalist services and higher readmission rates. It is possible that hospitals with hospitalists have more ability to invest in hospital-based systems of care,22 an association that may incorporate any hospitalist effect; however, our results were robust even after testing whether adjustment for hospital factors (such as profit status and size) affected them.

It is also possible that secular trends in hospitals or hospitalist systems affected our results. A handful of single-site studies carried out soon after the hospitalist model's earliest descriptions found a reduction in mortality and readmission rates with the implementation of a hospitalist program.23-25 Alternatively, the effect of hospitalists may have been diluted, as often occurs when an innovation spreads from early adopter sites to routine practice. Consistent with other multicenter studies from recent eras,21,26 our findings do not demonstrate an association between hospitalists and improved outcomes. Unlike other multicenter studies, we had access to disease-specific risk-adjustment methodologies, which may partially account for referral biases related to patient-specific measures of acute or chronic illness severity.

Changes in the hospitalist effect over time have a number of explanations, some of which are relevant to our study. Recent evidence suggests that complex organizational characteristics, such as organizational values and goals, may contribute more to performance on 30-day mortality for AMI than specific processes and protocols27; the intense focus on AMI as a quality improvement target is emblematic of a number of national initiatives that may have affected our results. Interestingly, hospitalist systems have changed over time as well. Early in the hospitalist movement, hospitalist systems were implemented largely at the behest of hospitals trying to reduce costs. In recent years, however, hospitalist systems are at least as frequently implemented because outpatient-based physicians or surgeons request hospitalists; hospitalists have also been focused on care of "uncovered" patients since the model's earliest description. In addition, some hospitals invest in hospitalist programs based on the perceived ability of hospitalists to improve quality and achieve better patient outcomes in an era in which payment is increasingly linked to quality-of-care metrics.

Our study has several limitations. First, while the hospitalist model has been widely embraced in adult medicine, in the absence of board certification there is no gold standard definition of a hospitalist. It is therefore possible that some respondents represented groups that were incorrectly identified as hospitalists. Second, the data for the primary independent variable of interest were based upon self-report and are therefore subject to recall bias and potential misclassification. Respondents were not aware of our hypothesis, however, so any bias should not have operated in one particular direction. Third, the data for the outcome variables are from 2008. They may therefore not reflect organizational enhancements related to use of hospitalists that are in process and take years to yield downstream improvements on performance metrics. In addition, of the 429 hospitals with hospitalist programs, 46 initiated their programs after 2008. While national performance on the 6 outcome variables has been relatively static over time,7 any significant change in hospital performance on these metrics since 2008 could mean that we overestimated or underestimated the effect of hospitalist programs on patient outcomes. Fourth, we were not able to adjust for additional hospital- or health-system-level characteristics that may be associated with hospitalist use or patient outcomes. Fifth, our regression models had significant collinearity, in that the presence of hospitalists was correlated with each of the covariates; this would tend to make our estimates conservative and could have contributed to our nonsignificant findings. Finally, 2 of the 3 clinical conditions measured, acute myocardial infarction and heart failure, are ones for which hospitalists may less frequently provide care. Outcome measures more relevant for hospitalists may be all-condition, all-cause, 30-day mortality and readmission.

This work adds to the growing body of literature examining the impact of hospitalists on quality of care. To our knowledge, it is the first study to assess the association between hospitalist use and performance on outcome metrics at a national level. While our findings suggest that use of hospitalists alone may not lead to improved performance on outcome measures, a parallel body of research is emerging that implicates broader system and organizational factors as key to high performance. It is likely that multiple factors contribute to performance on outcome measures, including the type and mix of hospital personnel, patient care processes and workflow, and system-level attributes. Comparative effectiveness and implementation research that assesses the contextual factors and interventions leading to successful system improvement and better performance is increasingly needed. It is unlikely that a single factor, such as hospitalist use, will significantly affect 30-day mortality or readmission; multifactorial interventions are therefore likely required. In addition, hospitalist use is a complex intervention: the structure, processes, training, experience, role in the hospital system, and other factors (including the quality of the hospitalists or the hospitalist program) vary across programs. Rather than focusing on the volume of care delivered by hospitalists, hospitals will likely need to support hospital medicine programs that have the time and expertise to devote to improving the quality and value of care delivered across the hospital system. This study highlights that interventions leading to improvement on core outcome measures are more complex than simply having a hospital medicine program.

Acknowledgements

The authors acknowledge Judy Maselli, MPH, Division of General Internal Medicine, Department of Medicine, University of California, San Francisco, for her assistance with statistical analyses and preparation of tables.

Disclosures: Work on this project was supported by the Robert Wood Johnson Clinical Scholars Program (K.G.); California Healthcare Foundation grant 15763 (A.D.A.); and a grant from the National Heart, Lung, and Blood Institute (NHLBI), study 1U01HL105270‐02 (H.M.K.). Dr Krumholz is the chair of the Cardiac Scientific Advisory Board for United Health and has a research grant with Medtronic through Yale University; Dr Auerbach has a grant through the National Heart, Lung, and Blood Institute (NHLBI). The authors have no other disclosures to report.

References
  1. Rifkin WD, Burger A, Holmboe ES, Sturdevant B. Comparison of hospitalists and nonhospitalists regarding core measures of pneumonia care. Am J Manag Care. 2007;13:129-132.
  2. Rifkin WD, Conner D, Silver A, Eichorn A. Comparison of processes and outcomes of pneumonia care between hospitalists and community-based primary care physicians. Mayo Clin Proc. 2002;77(10):1053-1058.
  3. Lindenauer PK, Chehabbedine R, Pekow P, Fitzgerald J, Benjamin EM. Quality of care for patients hospitalized with heart failure: assessing the impact of hospitalists. Arch Intern Med. 2002;162(11):1251-1256.
  4. Vasilevskis EE, Meltzer D, Schnipper J, et al. Quality of care for decompensated heart failure: comparable performance between academic hospitalists and non-hospitalists. J Gen Intern Med. 2008;23(9):1399-1406.
  5. Roytman MM, Thomas SM, Jiang CS. Comparison of practice patterns of hospitalists and community physicians in the care of patients with congestive heart failure. J Hosp Med. 2008;3(1):35-41.
  6. Vasilevskis EE, Knebel RJ, Dudley RA, Wachter RM, Auerbach AD. Cross-sectional analysis of hospitalist prevalence and quality of care in California. J Hosp Med. 2010;5(4):200-207.
  7. Hospital Compare. Department of Health and Human Services. Available at: http://www.hospitalcompare.hhs.gov. Accessed September 3, 2011.
  8. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80(3):569-593.
  9. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Can Med Assoc J. 2002;166(11):1399-1406.
  10. Fine JM, Fine MJ, Galusha D, Patrillo M, Meehan TP. Patient and hospital characteristics associated with recommended processes of care for elderly patients hospitalized with pneumonia: results from the Medicare Quality Indicator System Pneumonia Module. Arch Intern Med. 2002;162(7):827-833.
  11. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance Program. N Engl J Med. 2005;353(3):265-274.
  12. Keeler EB, Rubenstein LV, Khan KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268(13):1709-1714.
  13. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346(22):1715-1722.
  14. Pham HH, Devers KJ, Kuo S, Berenson R. Health care market trends and the evolution of hospitalist use and roles. J Gen Intern Med. 2005;20:101-107.
  15. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113:1683-1692.
  16. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circulation. 2011;4:243-252.
  17. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  18. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113:1693-1701.
  19. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6:142-150.
  21. Kuo YF, Goodwin JS. Association of hospitalist care with medical utilization after discharge: evidence of cost shift from a cohort study. Ann Intern Med. 2011;155:152-159.
  22. Vasilevskis EE, Knebel RJ, Wachter RM, Auerbach AD. California hospital leaders' views of hospitalists: meeting needs of the present and future. J Hosp Med. 2009;4:528-534.
  23. Meltzer D, Manning WG, Morrison J, et al. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137:866-874.
  24. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137:859-865.
  25. Palacio C, Alexandraki I, House J, Mooradian A. A comparative study of unscheduled hospital readmissions in a resident-staffed teaching service and a hospitalist-based service. South Med J. 2009;102:145-149.
  26. Lindenauer P, Rothberg M, Pekow P, Kenwood C, Benjamin E, Auerbach A. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589-2600.
  27. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? Ann Intern Med. 2011;154:384-390.
  21. Kuo YF,Goodwin JS.Association of hospitalist care with medical utilization after discharge: evidence of cost shift from a cohort study.Ann Intern Med.2011;155:152159.
  22. Vasilevskis EE,Knebel RJ,Wachter RM,Auerbach AD.California hospital leaders' views of hospitalists: meeting needs of the present and future.J Hosp Med.2009;4:528534.
  23. Meltzer D,Manning WG,Morrison J, et al.Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists.Ann Intern Med.2002;137:866874.
  24. Auerbach AD,Wachter RM,Katz P,Showstack J,Baron RB,Goldman L.Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patients outcomes.Ann Intern Med.2002;137:859865.
  25. Palacio C,Alexandraki I,House J,Mooradian A.A comparative study of unscheduled hospital readmissions in a resident‐staffed teaching service and a hospitalist‐based service.South Med J.2009;102:145149.
  26. Lindenauer P,Rothberg M,Pekow P,Kenwood C,Benjamin E,Auerbach A.Outcomes of care by hospitalists, general internists, and family physicians.N Engl J Med.2007;357:25892600.
  27. Curry LA,Spatz E,Cherlin E, et al.What distinguishes top‐performing hospitals in acute myocardial infarction mortality rates?Ann Intern Med.2011;154:384390.
Issue
Journal of Hospital Medicine - 7(6)
Issue
Journal of Hospital Medicine - 7(6)
Page Number
482-488
Page Number
482-488
Publications
Publications
Article Type
Display Headline
Hospitalist utilization and hospital performance on 6 publicly reported patient outcomes
Display Headline
Hospitalist utilization and hospital performance on 6 publicly reported patient outcomes
Sections
Article Source

Copyright © 2012 Society of Hospital Medicine

Disallow All Ads
Correspondence Location
Office of Clinical Standards and Quality, Centers for Medicare and Medicaid Services, 7500 Security Blvd, S3‐02‐01, Baltimore, MD 21244
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Article PDF Media
Media Files

Public reporting and pay-for-performance programs in perioperative medicine

Article Type
Changed
Tue, 10/02/2018 - 11:49
Display Headline
Public reporting and pay-for-performance programs in perioperative medicine
Are they meeting their goals?

Hospital quality measures and rankings are now widely available to the public online, but is public reporting of this information an effective strategy for improving health care? Using a case study of a hospital that suffered negative publicity as a result of a quality report, this article explores the use of public reporting of performance data and pay-for-performance reimbursement strategies to foster quality improvement in the US health care system.

CASE STUDY: A SURGICAL PROGRAM GETS A BAD REPORT―IN THE HEADLINES

In September 2005, The Boston Globe ran a prominent story reporting that the UMass Memorial Medical Center in Worcester, Mass., was abruptly suspending its elective cardiac surgery program.1 The program’s suspension came after state public health officials presented UMass Memorial with a detailed analysis showing that the hospital’s mortality rate for coronary artery bypass graft surgery (CABG) patients was the highest in the state and almost double the average for Massachusetts hospitals.1

Key personnel from UMass Memorial described the events preceding and following the program’s suspension in a journal article published in 2008.2 In 2002, UMass Memorial hired a new chief of cardiothoracic surgery, who resigned in early 2005. A few months after that resignation, state public health officials alerted the hospital to the abovementioned CABG mortality data (from 2002 and 2003), which they said would soon be reported publicly. UMass Memorial then conducted an internal review of its data from the most recent years (2004 and 2005) and found that its risk-adjusted CABG mortality had actually worsened, at which point the hospital voluntarily suspended its cardiac surgery program.2

More news stories about UMass Memorial’s program and its problems followed. The hospital hired consultants and senior surgeons from around the state and New England to completely review its cardiac surgery program. They concluded that “many essential systems were not in place” and made 68 key recommendations, including a complete overhaul of the hospital’s quality-improvement structure. The prior cardiac surgeons departed.2

The cardiac surgery program resumed after a 6-week hiatus, with day-to-day supervision by two senior cardiac surgeons from a Boston teaching hospital. A nationally recognized cardiac surgeon was brought on as chief of cardiac surgery in January 2006. In the 18 months after the program resumed, risk-adjusted CABG mortality rates declined substantially, but patient volume failed to return to presuspension levels and the hospital reported $22 million in lost revenue in fiscal year 2006 as a result of the suspension.2

This case raises a number of questions that help to frame discussion of the benefits and risks of public reporting of hospital quality measures:

  • To what extent does public reporting accelerate quality improvement?
  • How typical was the subsequent mortality reduction reported by UMass Memorial—ie, can public reporting be expected to improve outcomes?
  • Was the effect on patient volume expected—ie, how much does public reporting affect market share?
  • Would a pay-for-performance reimbursement model have accelerated improvement?
  • Why do public reporting and pay-for-performance programs remain controversial?
  • Do patients have a right to know?

WHAT HAS FUELED THE MOVE TOWARD PUBLIC REPORTING?

Drivers of public reporting

Massachusetts is one of a number of states that publicly report outcomes from cardiac surgery and other procedures and processes of care. Three basic factors have helped drive the development of public reporting (and, in some cases, pay-for-performance) programs:

  • National policy imperatives designed to improve quality and safety and to reduce costs
  • Cultural factors in society, which include consumerism in health care and the desire for transparency
  • The growth of information technology and the World Wide Web, which has been a huge enabler of public reporting. Reporting existed before the Web era, but results published in a book that had to be ordered from a government printing office could never have reached so wide an audience.

The rationale for public reporting

In theory, how might public reporting and pay-for-performance programs improve quality? Several different mechanisms or factors are likely to be involved:

  • Feedback. The basic premise of the National Surgical Quality Improvement Program, to cite one example, is that peer comparison and performance feedback will stimulate quality improvement.
  • Reputation. Hospital personnel fear being embarrassed if data show that they are performing poorly compared with other hospitals. Likewise, in recent years we have seen hospitals with the best quality rankings publicly advertise their performance.
  • Market share. Here the premise is that patients will tend to select providers with higher quality rankings and shun those with lower rankings.
  • Financial incentives. Pay-for-performance programs link payment or reimbursement directly to the desired outcomes and thereby stimulate quality improvement without working through the abovementioned mechanisms.

Approaches to quality measurement

Public reporting of hospital performance requires selection of an approach to measuring quality of care. Generally speaking, measures of health care quality reflect one of three domains of care:

Structural (or environmental) aspects, such as staffing in the intensive care unit (ICU), surgical volume, or availability of emergency medical responders. An example of a structure-oriented reporting system is the Leapfrog Group’s online posting of hospital ratings based on surgical volumes for high-risk procedures, the degree of computerized order entry implementation, and the presence or absence of various patient safety practices.3

Processes of care, such as whether beta-blockers are prescribed for all patients after a myocardial infarction (MI), or whether thromboprophylaxis measures are ordered for surgical patients in keeping with guideline recommendations. Examples of process-oriented reporting systems include the US Department of Health and Human Services’ Hospital Compare Web site4 and the Commonwealth Fund’s WhyNotTheBest.org site.5

Outcomes of care, such as rates of mortality or complications, or patient satisfaction rates. An example of an outcomes-oriented reporting system is the annual report of institution-specific hospital-acquired infection rates put out by Pennsylvania6 and most other states.
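
To make the three domains concrete, the following minimal Python sketch tags example measures by domain; the example measures and the needs_risk_adjustment flag are illustrative assumptions, not drawn from any cited reporting program.

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    STRUCTURE = "structure"  # e.g., ICU staffing, surgical volume
    PROCESS = "process"      # e.g., beta-blocker after MI
    OUTCOME = "outcome"      # e.g., 30-day mortality

@dataclass
class QualityMeasure:
    name: str
    domain: Domain
    needs_risk_adjustment: bool  # outcome measures generally do

# Hypothetical examples of each domain
measures = [
    QualityMeasure("ICU intensivist staffing", Domain.STRUCTURE, False),
    QualityMeasure("Beta-blocker at discharge after MI", Domain.PROCESS, False),
    QualityMeasure("Risk-adjusted 30-day CABG mortality", Domain.OUTCOME, True),
]

for m in measures:
    print(f"{m.name}: {m.domain.value}")
```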

IS THERE EVIDENCE OF BENEFIT?

A consistent effect in spurring quality-improvement efforts

Nearly a dozen published studies have evaluated whether public reporting stimulates quality-improvement activities, and the results have shown fairly consistently that it does. A 2003 study by Hibbard et al is representative of the results.7 This survey-based investigation measured the number of quality-improvement activities in cardiac and obstetric care undertaken by 24 Wisconsin hospitals that were included in an existing public reporting system compared with the number undertaken by 98 other Wisconsin hospitals that received either a private report on their own quality performance (without the information being made public) or no quality report at all. The study found that the hospitals that participated in public reporting were engaged in significantly more quality-improvement activities in both of the clinical areas assessed than were the hospitals receiving private reporting or no reporting.

A mixed effect on patient outcomes

In contrast, the data on whether public reporting improves patient outcomes have so far been mixed. A 2008 systematic review of the literature identified 11 studies that addressed this issue: five studies found that public reporting had a positive effect on patient outcomes, while six studies demonstrated a negative effect or no effect.8 Unfortunately, the methodological quality of most studies was poor: most were before-and-after comparisons without controls.

One of the positive studies in this review examined the effects of New York State’s pioneering institution of provider-specific CABG mortality reports (provider profiling) in 1989.9 The analysis found that between 1987 and 1992 (during which time provider profiling was instituted), unadjusted 30-day mortality rates following bypass surgery declined to a significantly larger degree among New York Medicare patients (33% reduction) than among Medicare patients nationwide (19% reduction) (P < .001).

In contrast, a time-series study from Cleveland Health Quality Choice (CHQC)—an early and innovative public reporting program—exemplifies a case in which public reporting of hospital performance had no discernible effect.10 The study examined trends in 30-day mortality across a range of conditions over a 6-year period for 30 hospitals in the Cleveland area participating in a public reporting system. It found that the hospitals that started out in the worst-performing groups (based on baseline mortality rates) showed no significant change in mortality over time.

DOES PUBLIC REPORTING AFFECT PATIENT CHOICES?

How a high-profile bypass patient chooses a hospital

When former President Bill Clinton developed chest pain and shortness of breath in 2004, he was seen at a small community hospital in Westchester County, N.Y., and then transferred to New York-Presbyterian Hospital/Columbia University Medical Center for bypass surgery.11 Although one would think President Clinton would have chosen the best hospital for CABG in New York, Presbyterian/Columbia’s risk-adjusted mortality rate for CABG was actually about twice the average for New York hospitals and one of the worst in the state, according to the most recent “report card” for New York hospitals available at the time.12

Why did President Clinton choose the hospital he did? Chances are that he, like most other patients, did not base his decision on publicly reported data. His choice probably was heavily influenced by the normal referral patterns of the community hospital where he was first seen.

Surveys show low patient use of data on quality...

The question raised by President Clinton’s case has been formally studied. In 1996, Schneider and Epstein surveyed patients who had recently undergone CABG in Pennsylvania (where surgeon- and hospital-specific mortality rates for cardiac surgery are publicly available) and found that fewer than 1% of patients said that provider ratings had a moderate or major impact on their choice of provider.13 

The Kaiser Family Foundation regularly surveys the public about its knowledge and use of publicly available hospital comparison data. In the latest Kaiser survey, conducted in 2008,14 41% of respondents said they believe there are “big differences” in quality among their local hospitals, yet 59% said they would choose a hospital that is familiar to them rather than a higher-rated facility. These findings may be explained, in part, by a lack of awareness that data on hospital quality are available: only 7% of survey participants said they had seen and used information comparing the quality of hospitals to make health care decisions in the prior year, and only 6% said they had seen and used information comparing physicians.

...But a trend toward greater acceptance

Although consumers’ use of publicly reported quality data remains low, their recognition of the value of such data has grown over time. Kaiser has conducted similar public surveys dating back to 1996, and the period from 1996 to 2008 saw a substantial decrease (from 72% to 59%) in the percentage of Americans who would choose a hospital based on familiarity more than on quality ratings. Similarly, the percentage of Americans who would prefer a surgeon with high quality ratings over a surgeon who has treated friends or family more than doubled from 1996 (20%) to 2008 (47%).14

What effect on market share?

Studies on the effects that public reporting has on hospital market share have been limited.

Schneider and Epstein surveyed cardiologists in Pennsylvania in 1995 and found that 87% of them said the state’s public reporting of surgeon- and hospital-specific mortality rates for CABG had no influence or minimal influence on their referral recommendations.15

Similarly, a review of New York State’s public reporting system for CABG 15 years after its launch found that hospital performance was not associated with a subsequent change in market share, not even among those hospitals with the highest mortality rate in a given year.16 Interestingly, however, this review also showed that surgeons in the bottom performance quartile were four times as likely as other surgeons to leave practice in the year following their poor report, which is one of the most prominent outcomes associated with provider profiling reported to date.

PAY-FOR-PERFORMANCE PROGRAMS

Evidence on the impact of pay-for-performance programs in the hospital setting is even more limited than that for public reporting.

Some evidence has come from the CMS/Premier Hospital Quality Incentive Demonstration, a pay-for-performance collaboration between the Centers for Medicare and Medicaid Services (CMS) and Premier, Inc., a nationwide alliance of hospitals that promotes best practices.17 The demonstration calls for hospitals in the top two performance deciles to receive a Medicare payment bonus (2% for the top decile, 1% for the second) in each of five clinical focus areas: cardiac surgery, hip and knee surgery, pneumonia, heart failure, and acute MI. Performance ratings are based primarily on process measures, supplemented by a few clinical outcome measures. Results from the first 21 months of the demonstration showed a consistent improvement in the hospitals’ composite quality scores in each of the five clinical areas.17
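
As a rough sketch of the tiered incentive just described, the function below maps a hospital’s percentile rank on a composite quality score to a bonus rate. The cutoffs follow the description above, but the function and its inputs are a simplification for illustration, not CMS’s actual payment logic.

```python
def bonus_rate(percentile_rank: float) -> float:
    """Illustrative tiered bonus: 2% for the top decile,
    1% for the second decile, nothing otherwise."""
    if percentile_rank >= 0.90:
        return 0.02
    if percentile_rank >= 0.80:
        return 0.01
    return 0.0

# A hospital at the 85th percentile for pneumonia care earns a 1% bonus
print(bonus_rate(0.85))  # 0.01
```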

It is important to recognize, however, that this improvement occurred against the backdrop of broad national adoption of public reporting of hospital quality data, which makes it difficult to tease out how much of the improvement was truly attributable to pay-for-performance, especially in the absence of a control group.

To address this question, my colleagues and I evaluated adherence to quality measures over a 2-year period at 613 hospitals participating in a national public reporting initiative,18 including 207 hospitals that simultaneously took part in the CMS/Premier Hospital Quality Incentive Demonstration’s pay-for-performance program described above. We found that the hospitals participating in both public reporting and the pay-for-performance initiative achieved only modestly greater improvements in quality than did the hospitals engaged solely in public reporting; the difference amounted to only about a 1% improvement in process measures per year.
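
The comparison underlying such a study is essentially a difference-in-differences: the improvement among hospitals with both programs minus the improvement among hospitals with public reporting alone. A toy version with hypothetical adherence rates (not the study’s data):

```python
# Hypothetical mean adherence to a process measure, before and after
baseline = {"reporting_only": 0.82, "reporting_plus_p4p": 0.83}
followup = {"reporting_only": 0.88, "reporting_plus_p4p": 0.91}

change_control = followup["reporting_only"] - baseline["reporting_only"]
change_treated = followup["reporting_plus_p4p"] - baseline["reporting_plus_p4p"]

# Incremental effect attributable to pay for performance
did = change_treated - change_control
print(f"{did:.2%}")  # 2.00% here; the study found roughly 1% per year
```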

In another controlled study, Glickman et al compared quality improvement in the management of acute MI between 54 hospitals in a CMS pay-for-performance pilot project and 446 control hospitals without pay-for-performance incentives.19 They found that the pay-for-performance hospitals achieved a statistically significantly greater degree of improvement compared with control hospitals on two of six process-of-care measures (use of aspirin at discharge and smoking-cessation counseling) but not on the composite process-of-care measure. There was no significant difference between the groups in improvements in in-hospital mortality.

Why have the effects of pay-for-performance initiatives so far been so limited? It may be that the bonuses are too small and that public reporting is already effective at stimulating quality improvement, so that the incremental benefit of adding financial incentives is small. In the case of my group’s study,18 another possible factor was that the hospitals’ baseline performance on the quality measures assessed was already high—approaching or exceeding 90% on 5 of the 10 measures—thereby limiting our power to detect differences between the groups.

CONTROVERSIES AND CHALLENGES

Many issues continue to surround public reporting and pay-for-performance programs:

  • Are the measures used to evaluate health care systems suitable and evidence-based? Do they truly reflect the quality of care that providers are giving?
  • Do the programs encourage “teaching to the test” rather than stimulating real and comprehensive improvement? Do they make the system prone to misuse or overuse of measured services?
  • How much of the variation in hospital outcomes can be explained by the current process-of-care measures?
  • Should quality be measured by outcomes or processes? Outcomes matter more to patients, but they require risk adjustment to ensure valid comparisons, and risk adjustment can be difficult and expensive to conduct (a minimal sketch of risk adjustment appears after this list).
  • How much is chance a factor in apparent performance differences between hospitals?
  • How much is patient selection a factor? Might public reporting lead to “cherry-picking” of low-risk patients and thereby reduce access to care for other patients?
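
On the risk-adjustment point, a common approach is indirect standardization: compare a hospital’s observed deaths with the number expected given each patient’s predicted risk, then scale the ratio by the overall rate. The sketch below assumes per-patient risks from some case-mix model and an assumed national rate; all numbers are invented.

```python
# (died, predicted risk of death from a case-mix model) for one hospital
patients = [
    (1, 0.20), (0, 0.05), (0, 0.10), (1, 0.30), (0, 0.02),
]

observed = sum(died for died, _ in patients)   # 2 deaths
expected = sum(risk for _, risk in patients)   # 0.67 expected

oe_ratio = observed / expected                 # ~3x worse than expected
national_rate = 0.04                           # assumed overall rate

risk_standardized_rate = oe_ratio * national_rate
print(f"O/E = {oe_ratio:.2f}, risk-standardized rate = {risk_standardized_rate:.1%}")
```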

Unidirectional measures can lead to misuse, overuse

In 2003, the Infectious Diseases Society of America updated its guidelines on community-acquired pneumonia to recommend that patients receive antibiotics within 4 hours of hospital admission. This recommendation was widely adopted as an incentive-linked performance measure by CMS and other third-party payers. Kanwar et al studied the impact of this guidelines-based incentive in a pre/post study at one large teaching hospital.20 They found that while significantly more patients received antibiotics in a timely fashion after publication of the guidelines (2005) versus before the guidelines (2003), almost one-third of patients receiving antibiotics in 2005 had normal chest radiographs and thus were not appropriate candidates for therapy. Moreover, significantly fewer patients in 2005 had a final diagnosis of pneumonia at discharge, and there was no difference between the two periods in rates of mortality or ICU transfer. The researchers concluded that linking the quality indicator of early antibiotic use to financial incentives may lead to misdiagnosis of pneumonia and inappropriate antibiotic use.

Of course, antibiotic timing is not the only quality measure subject to overuse or misuse; other measures pose similar risks, including prophylaxis for deep vein thrombosis, glycemic control measures, and target immunization rates.

More-nuanced measures needed

We must also consider how well reported quality measures actually reflect our objectives. For example, an evaluation of 962 hospitals’ performance in managing acute MI found that the publicly reported core process measures for acute MI (beta-blocker and aspirin at admission and discharge, ACE inhibitor at discharge, smoking-cessation counseling, timely reperfusion) together explained only 6% of the variance among the hospitals in risk-adjusted 30-day mortality.21 This underscores how complicated the factors affecting mortality are, and how existing process measures have only begun to scratch the surface.
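
“Variance explained” here is simply the R-squared from regressing hospitals’ risk-adjusted mortality on their process performance. A simulated example (not the study’s data) shows how weak that relationship can look even when adherence genuinely lowers mortality:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hospitals = 500

# Composite process adherence, and a mortality rate that depends on it
# only weakly relative to everything else (the noise term)
process_score = rng.uniform(0.6, 1.0, size=n_hospitals)
mortality = 0.18 - 0.05 * process_score + rng.normal(0, 0.03, size=n_hospitals)

r = np.corrcoef(process_score, mortality)[0, 1]
print(f"R-squared = {r ** 2:.2f}")  # small, on the order of the 6% finding
```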

How much of a role does chance play?

Another issue is the role of chance and our limited power to detect real differences in outcomes, as illustrated by an analysis by Dimick et al of all discharges from a nationally representative sample of nearly 1,000 hospitals.22 The objective was to determine whether the seven operations for which mortality is advocated as a quality indicator by the Agency for Healthcare Research and Quality are performed often enough to reliably identify hospitals with increased mortality rates. The researchers found that only for one of the seven procedures—CABG—is there sufficient caseload over a 3-year period at the majority of US hospitals to accurately detect a mortality rate twice the national average.
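
The arithmetic behind this finding is a standard power calculation: how many cases must a hospital accrue before a true doubling of its mortality rate could be reliably distinguished from the national benchmark? A sketch using only the Python standard library, with illustrative rates (4% nationally, 8% at the hospital):

```python
from math import ceil, sqrt
from statistics import NormalDist

p0, p1 = 0.04, 0.08      # national rate vs. doubled hospital rate
alpha, power = 0.05, 0.80

z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
z_b = NormalDist().inv_cdf(power)

# One-sample comparison of a hospital's rate against a fixed benchmark
n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(ceil(n))  # about 235 cases with these inputs
```

With these inputs the answer is roughly 235 cases over the measurement period; outside of CABG, few hospitals accumulate that caseload for any single operation in 3 years, which is precisely the problem Dimick and colleagues identified.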

Although CMS is highly committed to public reporting, the comparative mortality data available on its Hospital Compare Web site are not very useful for driving consumer choice or motivating hospitals to improve. For example, of the nearly 4,500 US hospitals that reported data on 30-day mortality from MI, only 17 hospitals were considered to be better than the national average and only 7 were considered worse than the national average.4

CASE REVISITED: LESSONS FROM THE UMASS MEMORIAL EXPERIENCE

Returning to our case study, what can the UMass Memorial experience teach us, and how well does it reflect the literature about the usefulness of public reporting?

Did public reporting accelerate quality improvement efforts? Yes. Reporting led to the suspension of cardiac surgery and substantive reorganization, which is consistent with the literature.

Was the mortality reduction typical? No. An optimist’s view would be that the drastic actions spurred by the media coverage had strong effects. A skeptic might say that perhaps UMass Memorial did some “cherry-picking” of patients, or that they got better at coding procedures in a way that reflected more favorably on the hospital.

Were the declines in patient volumes predictable? No. So far, the data suggest that public reporting has its greatest effects on providers rather than on institutions. This may change, however, with the introduction of tiered copayments, whereby patients are asked to pay more if they get their care from lower-rated institutions.

Would financial incentives have accelerated improvement? It is too early to tell. The evidence for pay-for-performance programs is limited, and the benefits demonstrated so far have been modest. But in many ways the alternative is worse: our current system of financing and paying for hospital care offers no financial incentives to hospitals for investing in the personnel or systems required to achieve better outcomes—and instead rewards (through supplemental payments) adverse outcomes.

Did prospective patients have a right to know? Despite the limitations of public reporting, one of the most compelling arguments in its favor is that patients at UMass Memorial had the right to know about the program’s outcomes. This alone may ultimately justify the expense and efforts involved. Transparency and accountability are core values of open democratic societies, and US society relies on public reporting in many other realms: the National Highway Traffic Safety Administration publicizes crash test ratings, the Securities and Exchange Commission enforces public reporting by financial institutions, and the Federal Aviation Administration reports on airline safety, timeliness of flights, and lost baggage rates.

FUTURE DIRECTIONS

In the future, we can expect more measurement and reporting of health care factors that patients care most about, such as clinical outcomes and the patient experience. It is likely that public reporting and pay-for-performance programs will address a broader range of conditions and include a larger number of measures. CMS has outlined plans to increase the number of publicly reported measures to more than 70 by 2010 and more than 100 by 2011. My hope is that this expansion of data, along with improved data synthesis and presentation, will foster greater use of publicly reported data. Further, the continued evolution of the Web and social networking sites is very likely to enhance public awareness of hospital performance and change the ways in which patients use these data.

DISCUSSION

Question from the audience: I’m concerned about what seems to be a unilateral effort to improve quality. There are many components of health care delivery beyond those you’ve described, including the efforts of patients, insurers, employers, and the government. The reality is that patients don’t plan for illness, insurance companies often deny care, more and more employers are providing less coverage or no coverage, and Medicare is on the road to insolvency. Is the battle for quality winnable when all these other components of delivery are failing?

Dr. Lindenauer: You make good points. But from the standpoint of professionalism, I think we have a compelling duty to constantly strive to improve the quality of care in our hospitals and practices. I have presented strategies for potentially accelerating improvements that providers are trying to make anyway. Public reporting and financial incentives are likely to be with us for a while, and their use is likely to grow. But as you said, they address only part of the problem confronting American health care.

Question from the audience: For the savvy health care consumer, is there one particular Web site for hospital or provider comparisons that you would especially recommend? Do you actually recommend using such Web sites to patients before they undergo certain procedures?

Dr. Lindenauer: I think the Hospital Compare site from the Department of Health and Human Services is the key Web site. The California Hospital Assessment and Reporting Taskforce (CHART) has a good site, and the Commonwealth Fund’s WhyNotTheBest.org is an interesting newcomer. 

However, even the most ardent advocates for public reporting wouldn’t say the information available today is sufficient for making decisions. There’s still an important role for getting recommendations from other doctors who are familiar with local hospitals and providers.

I’m optimistic that the changes that are coming to these Web sites will provide a better user experience and make it harder to ignore the results of public reporting. Today we can say, “Hospital A is better at discharge instructions or smoking cessation counseling.” But we all can appreciate how weak those kinds of measures are because their implementation is subject to local interpretations. Once risk-adjusted outcomes and more-meaningful process measures are available, I’d be surprised if more patients weren’t willing to base their decisions on published comparisons.

References
  1. Kowalczyk L, Smith S. Hospital halts heart surgeries due to deaths: high rate cited at Worcester facility. The Boston Globe. September 22, 2005.
  2. Ettinger WH, Hylka SM, Phillips RA, Harrison LH Jr, Cyr JA, Sussman AJ. When things go wrong: the impact of being a statistical outlier in publicly reported coronary artery bypass graft surgery mortality data. Am J Med Qual 2008; 23:90–95.
  3. Leapfrog hospital quality ratings. The Leapfrog Group Web site. http://www.leapfroggroup.org/cp. Accessed June 10, 2009.
  4. Hospital Compare: a quality tool provided by Medicare. U.S. Department of Health & Human Services Web site. http://www.hospitalcompare.hhs.gov. Accessed June 10, 2009.
  5. Why Not the Best (Beta): A Health Care Quality Improvement Resource. The Commonwealth Fund. http://www.WhyNotTheBest.org. Accessed May 6, 2009.
  6. Hospital-acquired infections in Pennsylvania. Pennsylvania Health Care Cost Containment Council Web site. http://www.phc4.org. Accessed April 6, 2009.
  7. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22:84–94.
  8. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148:111–123.
  9. Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol 1998; 32:993–999.
  10. Baker DW, Einstadter D, Thomas C, et al. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Med Care 2003; 41:729–740.
  11. Graylock J. After chest pains, Clinton set to undergo bypass surgery. USA Today. September 3, 2004.
  12. Adult Cardiac Surgery in New York State, 1999–2001. Albany, NY: New York State Department of Health; April 2004. http://www.health.state.ny.us/nysdoh/heart/pdf/1999-2001_cabg.pdf. Accessed June 10, 2009.
  13. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998; 279:1638–1642.
  14. The Henry J. Kaiser Family Foundation. 2008 Update on Consumers’ Views of Patient Safety and Quality Information: Summary & Chartpack; October 2008. http://www.kff.org/kaiserpolls/upload/7819.pdf. Accessed June 10, 2009.
  15. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med 1996; 335:251–256.
  16. Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Aff (Millwood) 2006; 25:844–855.
  17. Remus D. Pay for performance: CMS/Premier Hospital Quality Incentive Demonstration Project—year 1 results, December 2005. PowerPoint presentation available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/results/index.jsp. Accessed June 10, 2009.
  18. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007; 356:486–496.
  19. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA 2007; 297:2373–2380.
  20. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest 2007; 131:1865–1869.
  21. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006; 296:72–78.
  22. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA 2004; 292:847–851.
Author and Disclosure Information

Peter Lindenauer, MD, MSc
Director, Center for Quality of Care Research, Baystate Medical Center, Springfield, MA; and Associate Professor of Medicine, Tufts University School of Medicine, Boston, MA

Correspondence: Peter K. Lindenauer, MD, MSc, Director, Center for Quality of Care Research, Baystate Medical Center, 759 Chestnut Street, Springfield, MA 01199; peter.lindenauer@bhs.org

Dr. Lindenauer has indicated that he has no financial relationships with commercial interests that have a direct bearing on the subject matter of this article.

This article was developed from an audio transcript of Dr. Lindenauer’s lecture at the 4th Annual Perioperative Medicine Summit. The transcript was edited by the Cleveland Clinic Journal of Medicine staff for clarity and conciseness, and was then reviewed, revised, and approved by Dr. Lindenauer.


FUTURE DIRECTIONS

In the future, we can expect more measurement and reporting of health care factors that patients care most about, such as clinical outcomes and the patient experience. It is likely that public reporting and pay-for-performance programs will address a broader range of conditions and comprise a larger number of measures. CMS has outlined plans to increase the number of publicly reported measures to more than 70 by 2010 and more than 100 by 2011. My hope is that this expansion of data, along with improved data synthesis and presentation, will foster greater use of publicly reported data. Further, the continued evolution of the Web and social networking sites is very likely to enhance public awareness of hospital performance and change the ways in which patients use these data.

 

 

DISCUSSION

Question from the audience: I’m concerned about what seems to be a unilateral effort to improve quality. There are many components of health care delivery beyond those you’ve described, including the efforts of patients, insurers, employers, and the government. The reality is that patients don’t plan for illness, insurance companies often deny care, more and more employers are providing less coverage or no coverage, and Medicare is on the road to insolvency. Is the battle for quality winnable when all these other components of delivery are failing?

Dr. Lindenauer: You make good points. But from the standpoint of professionalism, I think we have a compelling duty to constantly strive to improve the quality of care in our hospitals and practices. I have presented strategies for potentially accelerating improvements that providers are trying to make anyway. Public reporting and financial incentives are likely to be with us for a while, and their use is likely to grow. But as you said, they address only part of the problem confronting American health care.

Question from the audience: For the savvy health care consumer, is there one particular Web site for hospital or provider comparisons that you would especially recommend? Do you actually recommend using such Web sites to patients before they undergo certain procedures?

Dr. Lindenauer: I think the Hospital Compare site from the Department of Health and Human Services is the key Web site. The California Hospital Assessment and Reporting Taskforce (CHART) has a good site, and the Commonwealth Fund’s WhyNotTheBest.org is an interesting newcomer. 

However, even the most ardent advocates for public reporting wouldn’t say the information available today is sufficient for making decisions. There’s still an important role for getting recommendations from other doctors who are familiar with local hospitals and providers.

I’m optimistic that the changes that are coming to these Web sites will provide a better user experience and make it harder to ignore the results of public reporting. Today we can say, “Hospital A is better at discharge instructions or smoking cessation counseling.” But we all can appreciate how weak those kinds of measures are because their implementation is subject to local interpretations. Once risk-adjusted outcomes and more-meaningful process measures are available, I’d be surprised if more patients weren’t willing to base their decisions on published comparisons.

Hospital quality measures and rankings are now widely available to the public online, but is public reporting of this information an effective strategy for improving health care? Using a case study of a hospital that suffered negative publicity as a result of a quality report, this article explores the use of public reporting of performance data and pay-for-performance reimbursement strategies to foster quality improvement in the US health care system.

CASE STUDY: A SURGICAL PROGRAM GETS A BAD REPORT―IN THE HEADLINES

In September 2005, The Boston Globe ran a prominent story reporting that the UMass Memorial Medical Center in Worcester, Mass., was abruptly suspending its elective cardiac surgery program.1 The program’s suspension came after state public health officials presented UMass Memorial with a detailed analysis showing that the hospital’s mortality rate for coronary artery bypass graft surgery (CABG) patients was the highest in the state and almost double the average for Massachusetts hospitals.1

Key personnel from UMass Memorial described the events preceding and following the program’s suspension in a journal article published in 2008.2 In 2002, UMass Memorial hired a new chief of cardiothoracic surgery, who resigned in early 2005. A few months after that resignation, state public health officials alerted the hospital to the abovementioned CABG mortality data (from 2002 and 2003), which they said would soon be reported publicly. UMass Memorial then conducted an internal review of its data from the most recent years (2004 and 2005) and found that its risk-adjusted CABG mortality had actually worsened, at which point the hospital voluntarily suspended its cardiac surgery program.2

More news stories arose about UMass Memorial’s program and its problems. The hospital hired consultants and senior surgeons from around the state and New England to completely review its cardiac surgery program. They concluded that “many essential systems were not in place” and made 68 key recommendations, including a complete overhaul of the hospital’s quality-improvement structure. The prior cardiac surgeons departed.2

The cardiac surgery program resumed after a 6-week hiatus, with day-to-day supervision by two senior cardiac surgeons from a Boston teaching hospital. A nationally recognized cardiac surgeon was brought on as chief of cardiac surgery in January 2006. In the 18 months after the program resumed, risk-adjusted CABG mortality rates declined substantially, but patient volume failed to return to presuspension levels and the hospital reported $22 million in lost revenue in fiscal year 2006 as a result of the suspension.2

This case raises a number of questions that help to frame discussion of the benefits and risks of public reporting of hospital quality measures:

  • To what extent does public reporting accelerate quality improvement?
  • How typical was the subsequent mortality reduction reported by UMass Memorial—ie, can public reporting be expected to improve outcomes?
  • Was the effect on patient volume expected—ie, how much does public reporting affect market share?
  • Would a pay-for-performance reimbursement model have accelerated improvement?
  • Why do public reporting and pay-for-performance programs remain controversial?
  • Do patients have a right to know?

WHAT HAS FUELED THE MOVE TOWARD PUBLIC REPORTING?

Drivers of public reporting

Massachusetts is one of a number of states that publicly report outcomes from cardiac surgery and other procedures and processes of care. Three basic factors have helped drive the development of public reporting (and, in some cases, pay-for-performance) programs:

  • National policy imperatives designed to improve quality and safety and to reduce costs
  • Cultural factors in society, which include consumerism in health care and the desire for transparency
  • The growth of information technology and use of the World Wide Web, which has been a huge enabler of public reporting. Public reporting could be done prior to the Web era but would not have reached such a wide audience had the results been released in a book that had to be ordered from a government printing office.

The rationale for public reporting

In theory, how might public reporting and pay-for-performance programs improve quality? Several different mechanisms or factors are likely to be involved:

  • Feedback. The basic premise of the National Surgical Quality Improvement Program, to cite one example, is that peer comparison and performance feedback will stimulate quality improvement.
  • Reputation. Hospital personnel fear being embarrassed if data show that they are performing poorly compared with other hospitals. Likewise, in recent years we have seen hospitals with the best quality rankings publicly advertise their performance.
  • Market share. Here the premise is that patients will tend to select providers with higher quality rankings and shun those with lower rankings.
  • Financial incentives. Pay-for-performance programs link payment or reimbursement directly to the desired outcomes and thereby stimulate quality improvement without working through the abovementioned mechanisms.

Approaches to quality measurement

Public reporting of hospital performance requires selection of an approach to measuring quality of care. Generally speaking, measures of health care quality reflect one of three domains of care:

Structural (or environmental) aspects, such as staffing in the intensive care unit (ICU), surgical volume, or availability of emergency medical responders. An example of a structure-oriented reporting system is the Leapfrog Group’s online posting of hospital ratings based on surgical volumes for high-risk procedures, the degree of computerized order entry implementation, and the presence or absence of various patient safety practices.3

Processes of care, such as whether beta-blockers are prescribed for all patients after a myocardial infarction (MI), or whether thromboprophylaxis measures are ordered for surgical patients in keeping with guideline recommendations. Examples of process-oriented reporting systems include the US Department of Health and Human Services’ Hospital Compare Web site4 and the Commonwealth Fund’s WhyNotTheBest.org site.5

Outcomes of care, such as rates of mortality or complications, or patient satisfaction rates. An example of an outcomes-oriented reporting system is the annual report of institution-specific hospital-acquired infection rates put out by Pennsylvania6 and most other states.

IS THERE EVIDENCE OF BENEFIT?

A consistent effect in spurring quality-improvement efforts

Nearly a dozen published studies have evaluated whether public reporting stimulates quality-improvement activities, and the results have shown fairly consistently that it does. A 2003 study by Hibbard et al is representative of the results.7 This survey-based investigation measured the number of quality-improvement activities in cardiac and obstetric care undertaken by 24 Wisconsin hospitals that were included in an existing public reporting system compared with the number undertaken by 98 other Wisconsin hospitals that received either a private report on their own quality performance (without the information being made public) or no quality report at all. The study found that the hospitals that participated in public reporting were engaged in significantly more quality-improvement activities in both of the clinical areas assessed than were the hospitals receiving private reporting or no reporting.

A mixed effect on patient outcomes

In contrast, the data on whether public reporting improves patient outcomes have so far been mixed. A 2008 systematic review of the literature identified 11 studies that addressed this issue: five studies found that public reporting had a positive effect on patient outcomes, while six studies demonstrated a negative effect or no effect.8 Unfortunately, the methodological quality of most studies was poor: most were before-and-after comparisons without controls.

One of the positive studies in this review examined the effects of New York State’s pioneering institution of provider-specific CABG mortality reports (provider profiling) in 1989.9 The analysis found that between 1987 and 1992 (during which time provider profiling was instituted), unadjusted 30-day mortality rates following bypass surgery declined to a significantly larger degree among New York Medicare patients (33% reduction) than among Medicare patients nationwide (19% reduction) (P < .001).

In contrast, a time-series study from Cleveland Health Quality Choice (CHQC)—an early and innovative public reporting program—exemplifies a case in which public reporting of hospital performance had no discernible effect.10 The study examined trends in 30-day mortality across a range of conditions over a 6-year period for 30 hospitals in the Cleveland area participating in a public reporting system. It found that the hospitals that started out in the worst-performing groups (based on baseline mortality rates) showed no significant change in mortality over time.

DOES PUBLIC REPORTING AFFECT PATIENT CHOICES?

How a high-profile bypass patient chooses a hospital

When former President Bill Clinton developed chest pain and shortness of breath in 2004, he was seen at a small community hospital in Westchester County, N.Y., and then transferred to New York-Presbyterian Hospital/Columbia University Medical Center for bypass surgery.11 Although one would think President Clinton would have chosen the best hospital for CABG in New York, Presbyterian/Columbia’s risk-adjusted mortality rate for CABG was actually about twice the average for New York hospitals and one of the worst in the state, according to the most recent “report card” for New York hospitals available at the time.12

Why did President Clinton choose the hospital he did? Chances are that he, like most other patients, did not base his decision on publicly reported data. His choice probably was heavily influenced by the normal referral patterns of the community hospital where he was first seen.

Surveys show low patient use of data on quality...

The question raised by President Clinton’s case has been formally studied. In 1996, Schneider and Epstein surveyed patients who had recently undergone CABG in Pennsylvania (where surgeon- and hospital-specific mortality rates for cardiac surgery are publicly available) and found that fewer than 1% of patients said that provider ratings had a moderate or major impact on their choice of provider.13 

The Kaiser Family Foundation regularly surveys the public about its knowledge and use of publicly available hospital comparison data. In the latest Kaiser survey, conducted in 2008,14 41% of respondents said they believe there are “big differences” in quality among their local hospitals, yet 59% said they would choose a hospital that is familiar to them rather than a higher-rated facility. These findings may be explained, in part, by a lack of awareness that data on hospital quality are available: only 7% of survey participants said they had seen and used information comparing the quality of hospitals to make health care decisions in the prior year, and only 6% said they had seen and used information comparing physicians.

...But a trend toward greater acceptance

Although consumers’ use of publicly reported quality data remains low, their recognition of the value of such data has grown over time. Kaiser has conducted similar public surveys dating back to 1996, and the period from 1996 to 2008 saw a substantial decrease (from 72% to 59%) in the percentage of Americans who would choose a hospital based on familiarity more than on quality ratings. Similarly, the percentage of Americans who would prefer a surgeon with high quality ratings over a surgeon who has treated friends or family more than doubled from 1996 (20%) to 2008 (47%).14

What effect on market share?

Studies on the effects that public reporting has on hospital market share have been limited.

Schneider and Epstein surveyed cardiologists in Pennsylvania in 1995 and found that 87% of them said the state’s public reporting of surgeon- and hospital-specific mortality rates for CABG had no influence or minimal influence on their referral recommendations.15

Similarly, a review of New York State’s public reporting system for CABG 15 years after its launch found that hospital performance was not associated with a subsequent change in market share, not even among those hospitals with the highest mortality rate in a given year.16 Interestingly, however, this review also showed that surgeons in the bottom performance quartile were four times as likely as other surgeons to leave practice in the year following their poor report, which is one of the most prominent outcomes associated with provider profiling reported to date.

PAY-FOR-PERFORMANCE PROGRAMS

Evidence on the impact of pay-for-performance programs in the hospital setting is even more limited than that for public reporting.

Some evidence has come from the CMS/Premier Hospital Quality Incentive Demonstration, a pay-for-performance collaboration between the Centers for Medicare and Medicaid Services (CMS) and Premier, Inc., a nationwide alliance of hospitals that promotes best practices.17 In the demonstration, hospitals that rank in the top decile for performance receive a 2% Medicare payment bonus, and those in the top quintile but below the top decile receive 1%, across five clinical focus areas: cardiac surgery, hip and knee surgery, pneumonia, heart failure, and acute MI. Performance ratings are based primarily on process measures as well as a few clinical outcome measures. Results from the first 21 months of the demonstration showed a consistent improvement in the hospitals’ composite quality scores in each of the five clinical areas.17

It is important to recognize, however, that this improvement occurred against the backdrop of broad national adoption of public reporting of hospital quality data, which makes it difficult to tease out how much of the improvement was truly attributable to pay-for-performance, especially in the absence of a control group.

To address this question, my colleagues and I evaluated adherence to quality measures over a 2-year period at 613 hospitals participating in a national public reporting initiative,18 including 207 hospitals that simultaneously took part in the CMS/Premier Hospital Quality Incentive Demonstration’s pay-for-performance program described above. We found that the hospitals participating in both public reporting and the pay-for-performance initiative achieved only modestly greater improvements in quality than did the hospitals engaged solely in public reporting; the difference amounted to only about a 1% improvement in process measures per year.
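To make the nature of this comparison concrete, the sketch below (Python; all scores are hypothetical, and the simple difference-in-differences shown here is only a schematic of the comparison logic, not our study's actual multivariable analysis) computes the additional improvement among pay-for-performance hospitals over reporting-only hospitals:

    import numpy as np

    # Hypothetical composite process-of-care scores (fraction of eligible
    # patients receiving recommended care) at baseline and 2 years later.
    p4p_baseline = np.array([0.82, 0.78, 0.85, 0.80])  # reporting + pay-for-performance
    p4p_followup = np.array([0.88, 0.85, 0.90, 0.86])
    pr_baseline = np.array([0.81, 0.79, 0.84, 0.80])   # public reporting only
    pr_followup = np.array([0.86, 0.84, 0.88, 0.85])

    # Both groups improve; the difference-in-differences isolates the
    # additional improvement associated with the financial incentive.
    p4p_change = (p4p_followup - p4p_baseline).mean()
    pr_change = (pr_followup - pr_baseline).mean()
    print(f"P4P change: {p4p_change:.3f}; reporting-only change: {pr_change:.3f}")
    print(f"Difference-in-differences: {p4p_change - pr_change:.3f}")  # about 0.01, i.e., ~1 point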

In another controlled study, Glickman et al compared quality improvement in the management of acute MI between 54 hospitals in a CMS pay-for-performance pilot project and 446 control hospitals without pay-for-performance incentives.19 They found that the pay-for-performance hospitals achieved a statistically significantly greater degree of improvement compared with control hospitals on two of six process-of-care measures (use of aspirin at discharge and smoking-cessation counseling) but not on the composite process-of-care measure. There was no significant difference between the groups in improvements in in-hospital mortality.

Why have the effects of pay-for-performance initiatives so far been so limited? It may be that the bonuses are too small and that public reporting is already effective at stimulating quality improvement, so that the incremental benefit of adding financial incentives is small. In the case of my group’s study,18 another possible factor was that the hospitals’ baseline performance on the quality measures assessed was already high—approaching or exceeding 90% on 5 of the 10 measures—thereby limiting our power to detect differences between the groups.

CONTROVERSIES AND CHALLENGES

Many issues continue to surround public reporting and pay-for-performance programs:

  • Are the measures used to evaluate health care systems suitable and evidence-based? Do they truly reflect the quality of care that providers are giving?
  • Do the programs encourage “teaching to the test” rather than stimulating real and comprehensive improvement? Do they make the system prone to misuse or overuse of measured services?
  • How much of the variation in hospital outcomes can be explained by the current process-of-care measures?
  • Should quality be measured by outcomes or processes? Outcomes matter more to patients, but they require risk adjustment to ensure valid comparisons, and risk adjustment can be difficult and expensive to conduct.
  • How much is chance a factor in apparent performance differences between hospitals?
  • How much is patient selection a factor? Might public reporting lead to “cherry-picking” of low-risk patients and thereby reduce access to care for other patients?

Unidirectional measures can lead to misuse, overuse

In 2003, the Infectious Diseases Society of America updated its guidelines on community-acquired pneumonia to recommend that patients receive antibiotics within 4 hours of hospital admission. This recommendation was widely adopted as an incentive-linked performance measure by CMS and other third-party payers. Kanwar et al studied the impact of this guidelines-based incentive in a pre/post study at one large teaching hospital.20 They found that while significantly more patients received antibiotics in a timely fashion after publication of the guidelines (2005) versus before the guidelines (2003), almost one-third of patients receiving antibiotics in 2005 had normal chest radiographs and thus were not appropriate candidates for therapy. Moreover, significantly fewer patients in 2005 had a final diagnosis of pneumonia at discharge, and there was no difference between the two periods in rates of mortality or ICU transfer. The researchers concluded that linking the quality indicator of early antibiotic use to financial incentives may lead to misdiagnosis of pneumonia and inappropriate antibiotic use.

Of course, antibiotic timing is not the only quality measure subject to overuse or misuse; other measures pose similar risks, including prophylaxis for deep vein thrombosis, glycemic control measures, and target immunization rates.

More-nuanced measures needed

We must also consider how well reported quality measures actually reflect our objectives. For example, an evaluation of 962 hospitals’ performance in managing acute MI found that the publicly reported core process measures for acute MI (beta-blocker and aspirin at admission and discharge, ACE inhibitor at discharge, smoking-cessation counseling, timely reperfusion) together explained only 6% of the variance among the hospitals in risk-adjusted 30-day mortality.21 This underscores how complicated the factors affecting mortality are, and how existing process measures have only begun to scratch the surface.
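"Variance explained" here is simply the R² statistic from a hospital-level regression of risk-adjusted mortality on the process measures. A minimal sketch of that calculation follows (Python, simulated data; the coefficients and noise level are assumptions chosen only to reproduce a low R², not estimates from the study):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_hospitals = 962

    # Simulated composite process scores (5 measures per hospital) and
    # risk-adjusted 30-day mortality. Mortality depends only weakly on
    # the process scores; most variation comes from unmeasured factors.
    process = rng.uniform(0.6, 1.0, size=(n_hospitals, 5))
    mortality = 0.20 - 0.03 * process.sum(axis=1) + rng.normal(0, 0.03, n_hospitals)

    r2 = LinearRegression().fit(process, mortality).score(process, mortality)
    print(f"R^2 = {r2:.2f}")  # about 0.06 under these assumptions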

How much of a role does chance play?

Another issue is the role of chance and our limited power to detect real differences in outcomes, as illustrated by an analysis by Dimick et al of all discharges from a nationally representative sample of nearly 1,000 hospitals.22 The objective was to determine whether the seven operations for which mortality is advocated as a quality indicator by the Agency for Healthcare Research and Quality are performed often enough to reliably identify hospitals with increased mortality rates. The researchers found that only for one of the seven procedures—CABG—is there sufficient caseload over a 3-year period at the majority of US hospitals to accurately detect a mortality rate twice the national average.
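The arithmetic behind that finding is a standard sample-size calculation: how many cases must a hospital perform before a true doubling of mortality becomes statistically detectable? A minimal sketch follows (Python; the 3.5% baseline CABG mortality rate is an illustrative assumption, and the formula is the usual normal approximation with two-sided alpha of 0.05 and 80% power):

    from math import sqrt

    def cases_needed(p0, p1, z_alpha=1.96, z_power=0.84):
        """Normal-approximation sample size to detect a true mortality
        rate p1 when the national average is p0 (two-sided alpha = 0.05,
        power = 0.80)."""
        num = (z_alpha * sqrt(p0 * (1 - p0)) + z_power * sqrt(p1 * (1 - p1))) ** 2
        return num / (p1 - p0) ** 2

    # Detecting a CABG mortality rate double an assumed 3.5% national average:
    n = cases_needed(0.035, 0.070)
    print(f"about {n:.0f} cases needed")                              # roughly 270 cases
    print(f"about {n / 3:.0f} cases per year over a 3-year window")   # roughly 90 per year

Most of the procedures on the list are performed far less often than roughly 90 cases per year at a typical hospital, which is precisely the problem Dimick et al identified.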

Although CMS is highly committed to public reporting, the comparative mortality data available on its Hospital Compare Web site are not very useful for driving consumer choice or motivating hospitals to improve. For example, of the nearly 4,500 US hospitals that reported data on 30-day mortality from MI, only 17 hospitals were considered to be better than the national average and only 7 were considered worse than the national average.4

CASE REVISITED: LESSONS FROM THE UMASS MEMORIAL EXPERIENCE

Returning to our case study, what can the UMass Memorial experience teach us, and how well does it reflect the literature about the usefulness of public reporting?

Did public reporting accelerate quality improvement efforts? Yes. Reporting led to the suspension of cardiac surgery and substantive reorganization, which is consistent with the literature.

Was the mortality reduction typical? No. An optimist’s view would be that the drastic actions spurred by the media coverage had strong effects. A skeptic might say that perhaps UMass Memorial did some “cherry-picking” of patients, or that they got better at coding procedures in a way that reflected more favorably on the hospital.

Were the declines in patient volumes predictable? No. So far, the data suggest that public reporting has its greatest effects on providers rather than on institutions. This may change, however, with the introduction of tiered copayments, whereby patients are asked to pay more if they get their care from lower-rated institutions.

Would financial incentives have accelerated improvement? It is too early to tell. The evidence for pay-for-performance programs is limited, and the benefits demonstrated so far have been modest. But in many ways the alternative is worse: our current system of financing and paying for hospital care offers no financial incentives to hospitals for investing in the personnel or systems required to achieve better outcomes—and instead rewards (through supplemental payments) adverse outcomes.

Did prospective patients have a right to know? Despite the limitations of public reporting, one of the most compelling arguments in its favor is that patients at UMass Memorial had the right to know about the program’s outcomes. This alone may ultimately justify the expense and efforts involved. Transparency and accountability are core values of open democratic societies, and US society relies on public reporting in many other realms: the National Highway Traffic Safety Administration publicizes crash test ratings, the Securities and Exchange Commission enforces public reporting by financial institutions, and the Federal Aviation Administration reports on airline safety, timeliness of flights, and lost baggage rates.

FUTURE DIRECTIONS

In the future, we can expect more measurement and reporting of health care factors that patients care most about, such as clinical outcomes and the patient experience. It is likely that public reporting and pay-for-performance programs will address a broader range of conditions and comprise a larger number of measures. CMS has outlined plans to increase the number of publicly reported measures to more than 70 by 2010 and more than 100 by 2011. My hope is that this expansion of data, along with improved data synthesis and presentation, will foster greater use of publicly reported data. Further, the continued evolution of the Web and social networking sites is very likely to enhance public awareness of hospital performance and change the ways in which patients use these data.

DISCUSSION

Question from the audience: I’m concerned about what seems to be a unilateral effort to improve quality. There are many components of health care delivery beyond those you’ve described, including the efforts of patients, insurers, employers, and the government. The reality is that patients don’t plan for illness, insurance companies often deny care, more and more employers are providing less coverage or no coverage, and Medicare is on the road to insolvency. Is the battle for quality winnable when all these other components of delivery are failing?

Dr. Lindenauer: You make good points. But from the standpoint of professionalism, I think we have a compelling duty to constantly strive to improve the quality of care in our hospitals and practices. I have presented strategies for potentially accelerating improvements that providers are trying to make anyway. Public reporting and financial incentives are likely to be with us for a while, and their use is likely to grow. But as you said, they address only part of the problem confronting American health care.

Question from the audience: For the savvy health care consumer, is there one particular Web site for hospital or provider comparisons that you would especially recommend? Do you actually recommend using such Web sites to patients before they undergo certain procedures?

Dr. Lindenauer: I think the Hospital Compare site from the Department of Health and Human Services is the key Web site. The California Hospital Assessment and Reporting Taskforce (CHART) has a good site, and the Commonwealth Fund’s WhyNotTheBest.org is an interesting newcomer. 

However, even the most ardent advocates for public reporting wouldn’t say the information available today is sufficient for making decisions. There’s still an important role for getting recommendations from other doctors who are familiar with local hospitals and providers.

I’m optimistic that the changes that are coming to these Web sites will provide a better user experience and make it harder to ignore the results of public reporting. Today we can say, “Hospital A is better at discharge instructions or smoking cessation counseling.” But we all can appreciate how weak those kinds of measures are because their implementation is subject to local interpretations. Once risk-adjusted outcomes and more-meaningful process measures are available, I’d be surprised if more patients weren’t willing to base their decisions on published comparisons.

References
  1. Kowalczyk L, Smith S. Hospital halts heart surgeries due to deaths: high rate cited at Worcester facility. The Boston Globe. September 22, 2005.
  2. Ettinger WH, Hylka SM, Phillips RA, Harrison LH Jr, Cyr JA, Sussman AJ. When things go wrong: the impact of being a statistical outlier in publicly reported coronary artery bypass graft surgery mortality data. Am J Med Qual 2008; 23:90–95.
  3. Leapfrog hospital quality ratings. The Leapfrog Group Web site. http://www.leapfroggroup.org/cp. Accessed June 10, 2009.
  4. Hospital Compare: a quality tool provided by Medicare. U.S. Department of Health & Human Services Web site. http://www.hospitalcompare.hhs.gov. Accessed June 10, 2009.
  5. Why Not the Best (Beta): A Health Care Quality Improvement Resource. The Commonwealth Fund. http://www.WhyNotTheBest.org. Accessed May 6, 2009.
  6. Hospital-acquired infections in Pennsylvania. Pennsylvania Health Care Cost Containment Council Web site. http://www.phc4.org. Accessed April 6, 2009.
  7. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22:84–94.
  8. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148:111–123.
  9. Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol 1998; 32:993–999.
  10. Baker DW, Einstadter D, Thomas C, et al. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Med Care 2003; 41:729–740.
  11. Graylock J. After chest pains, Clinton set to undergo bypass surgery. USA Today. September 3, 2004.
  12. Adult Cardiac Surgery in New York State, 1999–2001. Albany, NY: New York State Department of Health; April 2004. http://www.health.state.ny.us/nysdoh/heart/pdf/1999-2001_cabg.pdf. Accessed June 10, 2009.
  13. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998; 279:1638–1642.
  14. The Henry J. Kaiser Family Foundation. 2008 Update on Consumers’ Views of Patient Safety and Quality Information: Summary & Chartpack; October 2008. http://www.kff.org/kaiserpolls/upload/7819.pdf. Accessed June 10, 2009.
  15. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med 1996; 335:251–256.
  16. Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Aff (Millwood) 2006; 25:844–855.
  17. Remus D. Pay for performance: CMS/Premier Hospital Quality Incentive Demonstration Project—year 1 results, December 2005. PowerPoint presentation available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/results/index.jsp. Accessed June 10, 2009.
  18. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007; 356:486–496.
  19. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA 2007; 297:2373–2380.
  20. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest 2007; 131:1865–1869.
  21. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006; 296:72–78.
  22. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA 2004; 292:847–851.
Display Headline
Public reporting and pay-for-performance programs in perioperative medicine
Citation Override
Cleveland Clinic Journal of Medicine 2009 November;76(suppl 4):S3-S8

KEY POINTS

  • Public reporting programs have expanded in recent years, driven by national policy imperatives to improve safety, increased demands for transparency, patient “consumerism,” and the growth of information technology.
  • Hospital-based pay-for-performance programs have had only a minor impact on quality so far, possibly because financial incentives have been small and much of the programs’ potential benefit may be preempted by existing public reporting efforts.
  • These programs have considerable potential to accelerate improvement in quality but are limited by a need for more-nuanced process measures and better risk-adjustment methods.
  • These programs may lead to unintended consequences such as misuse or overuse of measured services, “cherry-picking” of low-risk patients, or misclassification of providers.
  • Continued growth of the Internet and social-networking sites will likely enhance and change the way patients use and share information about the quality of health care.

Who do you want taking care of your parent?

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Who do you want taking care of your parent?

Specialist or generalist? The question of which physicians are best suited to treat patients with a single condition or in a particular care setting has been the subject of study and debate for decades.1–3 Investigators have asked whether cardiologists provide better care for patients with acute myocardial infarction1 or whether intensivists achieve superior outcomes in critical care settings.2 One implication of these studies is that a hospital or health plan armed with this knowledge would be capable of improving outcomes by directing a greater proportion of patients to the superior physician group. In fact, much of the literature reporting on the effect of hospitalists is simply a new variation on this old theme.4–8 Of course, to realize any potential gains, there must be an adequate number of specialists or the ability to increase the supply quickly. Neither option tends to be especially realistic. Further, these studies have a tendency to create false dilemmas because consultation and comanagement are more common than single‐handed care.

Because studies comparing the outcomes of physician groups are generally not randomized trials, minimizing the threat of selection bias (ie, patient prognosis influencing treatment assignment) is of paramount importance. For example, one can imagine how patients with a particularly poor prognosis in the setting of acute myocardial infarction (perhaps related to age or the presence of multiple comorbidities) might be preferentially directed toward a general medicine service, especially when remunerative cardiac intervention is unlikely. In such instances, comparing simple mortality rates would erroneously lead to the conclusion that patients cared for by cardiologists had better outcomes.

Multivariable modeling techniques like logistic and linear regression and, more recently, propensity‐based methods are the standard approaches used to adjust for differences in patient characteristics stemming from nonrandom assignment. When propensity methods are used, a multivariable model is created to predict the likelihood, or propensity, of a patient receiving treatment. Because it is not necessary to be parsimonious in the development of propensity models, they can include many factors and interaction terms that might be left out of a standard multivariable logistic regression. Then, the outcomes of patients with a similar treatment propensity who did receive the intervention can be compared to the outcomes of those who did not. Some have gone so far as to use the term pseudorandomized trial to describe this approach because it is often capable of balancing covariates between the treated and nontreated patients. However, as sophisticated as this form of modeling may be, these techniques at best are only capable of reducing bias related to measured confounders. Residual bias from confounders that go unmeasured remains a threat, and such confounding is particularly common when relying on administrative data sources.
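As a concrete illustration of the propensity approach described above, here is a minimal sketch (Python, simulated data; the covariates, coefficients, and the quintile-stratification strategy are illustrative choices, not the method of any particular study):

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000

    # Simulated selection bias: older, sicker patients are less likely
    # to receive the treatment of interest.
    age = rng.normal(70, 10, n)
    comorbidity = rng.poisson(2, n)
    p_treat = 1 / (1 + np.exp(0.05 * (age - 70) + 0.3 * (comorbidity - 2)))
    treated = rng.random(n) < p_treat
    # True mortality depends on patient risk only (treatment effect = 0).
    p_death = 1 / (1 + np.exp(-(0.04 * (age - 70) + 0.4 * (comorbidity - 2) - 2)))
    died = rng.random(n) < p_death

    # Step 1: model each patient's propensity to be treated.
    X = np.column_stack([age, comorbidity])
    propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Step 2: compare outcomes within propensity quintiles, where treated
    # and untreated patients have similar measured risk.
    df = pd.DataFrame({"treated": treated, "died": died,
                       "stratum": pd.qcut(propensity, 5, labels=False)})
    print(df.groupby("treated")["died"].mean())                         # crude: treated look safer
    print(df.groupby(["stratum", "treated"])["died"].mean().unstack())  # stratified: gap shrinks

The crude comparison favors the treated group simply because healthier patients were more likely to be treated; within propensity strata, where measured risk is comparable, the apparent difference largely disappears. An unmeasured confounder, by definition absent from the propensity model, would survive this adjustment untouched.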

In this issue of the Journal of Hospital Medicine, Gillum and Johnston9 apply a version of instrumental variable analysis, a technique borrowed from econometrics, to address the issue of unmeasured confounding head‐on. The approach, called group‐treatment analysis, is based on the relatively simple notion that if neurologist care is superior to that provided by generalists, all other things being equal, hospitals that admit a large proportion of their patients to neurologists should have better outcomes than those admitting a smaller proportion. This approach has theoretical advantages over propensity adjustment because it does not attempt to control for differences between treated and untreated patients at the individual hospital level, where, presumably, the problem of selection bias is more potent. Although their standard multivariable models suggested that patients admitted to a neurologist were 40% less likely to die while hospitalized than patients admitted to generalists, Gillum and Johnston found that after adjusting for the institutional rate of neurologist admission, any apparent benefit had disappeared. Similar results were observed in their analyses of length of stay and cost.
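A schematic rendering of the group-treatment idea (Python, simulated data; this is my sketch of the logic, not the authors' code): rather than each patient's own admitting service, the exposure in the model is the hospital-level rate of neurologist admission, which should be unrelated to any individual patient's unmeasured prognosis.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_hosp, per_hosp = 50, 200
    hospital = np.repeat(np.arange(n_hosp), per_hosp)
    base_rate = rng.uniform(0.1, 0.9, n_hosp)   # hospital-level use of neurologists
    severity = rng.normal(0, 1, hospital.size)

    # Selection bias: within a hospital, sicker patients go to generalists.
    logit = np.log(base_rate[hospital] / (1 - base_rate[hospital])) - 0.8 * severity
    to_neuro = rng.random(hospital.size) < 1 / (1 + np.exp(-logit))
    # True model: mortality depends on severity only (no treatment effect).
    died = rng.random(hospital.size) < 1 / (1 + np.exp(-(0.8 * severity - 2.2)))

    df = pd.DataFrame({"hospital": hospital, "to_neuro": to_neuro.astype(int),
                       "died": died.astype(int)})
    # Naive comparison: neurologist patients look safer (they are healthier).
    print(df.groupby("to_neuro")["died"].mean())

    # Group-treatment analysis: the exposure is the hospital's overall
    # neurologist admission rate, not the individual's admitting service.
    df["hosp_rate"] = df.groupby("hospital")["to_neuro"].transform("mean")
    fit = sm.Logit(df["died"], sm.add_constant(df[["hosp_rate"]])).fit(disp=0)
    print(fit.params)  # hosp_rate coefficient near 0: no mortality benefit

In this simulation the naive comparison shows an apparent survival advantage for neurologist care that is entirely an artifact of selection, while the hospital-level exposure correctly shows none.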

In some ways, the findings of this study are more startling for the questions they raise about residual bias in observational studies that use conventional multivariable methods than for the finding that generalist care was as safe as neurologist care. They add to a growing body of evidence suggesting that stronger methods are required to deal with residual bias in observational studies.10

Although the results largely speak for themselves and should be reassuring given that most patients with ischemic stroke in the United States are and will continue to be cared for by generalists, a number of important questions remain unanswered. First, the focus of this study was on short‐term outcomes. Because functional status and quality of life probably matter as much to stroke patients as in‐hospital mortality, and certainly more than length of stay or cost, we can only hope that it is safe to extrapolate from the authors' mortality findings. Second, this study relied on data from the late 1990s, before the widespread availability of hospitalists. How generalizable the findings would be in today's environment is uncertain. On a more practical level, the authors were unable to assess the impact of formal or informal consultation by a neurologist. If this played a significant role (a reasonable assumption, I think), it would have blurred any distinction between the 2 physician groups. For this reason one cannot draw any conclusions about a more pragmatic question: the necessity or benefit of neurologist consultation in patients with ischemic stroke.

Looking ahead, researchers hoping to improve the outcomes of patients with acute ischemic stroke should focus on developing novel models of collaboration between hospitalists and neurologists, rather than simply trying to prove that stroke patients fare better under a neurologist working alone than under a hospitalist working without neurologist input. We should also recognize that protocols, checklists, and investments in information technology may provide clinical decision support that improves care more than consulting a specialist or having one assume care of the patient.

References
  1. Ayanian JZ, Guadagnoli E, McNeil BJ, Cleary PD. Treatment and outcomes of acute myocardial infarction among patients of cardiologists and generalist physicians. Arch Intern Med 1997; 157:2570–2576.
  2. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA 2002; 288:2151–2162.
  3. Smetana GW, Landon BE, Bindman AB, et al. A comparison of outcomes resulting from generalist vs specialist care for a single discrete medical condition: a systematic review and methodologic critique. Arch Intern Med 2007; 167:10–20.
  4. Auerbach AD, Wachter RM, Katz P, et al. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med 2002; 137:859–865.
  5. Halasyamani LK, Valenstein PN, Friedlander MP, et al. A comparison of two hospitalist models with traditional care in a community teaching hospital. Am J Med 2005; 118:536–543.
  6. Kaboli PJ, Barnett MJ, Rosenthal GE. Associations with reduced length of stay and costs on an academic hospitalist service. Am J Manag Care 2004; 10:561–568.
  7. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med 2007; 357:2589–2600.
  8. Wachter RM, Katz P, Showstack J, et al. Reorganizing an academic medical service: impact on cost, quality, patient satisfaction, and education. JAMA 1998; 279:1560–1565.
  9. Gillum L, Johnston SC. Influence of physician specialty on outcomes after acute ischemic stroke. J Hosp Med 2008; 3:184–192.
  10. Stukel TA, Fisher ES, Wennberg DE, Alter DA, Gottlieb DJ, Vermeulen MJ. Analysis of observational studies in the presence of treatment selection bias: effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA 2007; 297:278–285.
Issue
Journal of Hospital Medicine - 3(3)


Specialist or generalist? The question of which physicians are best suited to treat patients with a single condition or in a particular care setting has been the subject of study and debate for decades.1-3 Investigators have asked whether cardiologists provide better care for patients with acute myocardial infarction1 or whether intensivists achieve superior outcomes in critical care settings.2 One implication of these studies is that a hospital or health plan armed with this knowledge could improve outcomes by directing a greater proportion of patients to the superior physician group. In fact, much of the literature reporting on the effect of hospitalists is simply a new variation on this old theme.4-8 Of course, to realize any potential gains there must be an adequate supply of specialists or the ability to increase that supply quickly, and neither tends to be especially realistic. Further, these studies have a tendency to create false dilemmas, because consultation and comanagement are more common than single-handed care.

Because studies comparing the outcomes of physician groups are generally not randomized trials, minimizing the threat of selection bias (ie, patient prognosis influencing treatment assignment) is of paramount importance. For example, one can imagine how patients with a particularly poor prognosis in the setting of acute myocardial infarction (perhaps related to age or the presence of multiple comorbidities) might be preferentially directed toward a general medicine service, especially when remunerative cardiac intervention is unlikely. In such instances, comparing simple mortality rates would erroneously lead to the conclusion that patients cared for by cardiologists had better outcomes.

Multivariable modeling techniques, such as logistic and linear regression and, more recently, propensity-based methods, are the standard approaches used to adjust for differences in patient characteristics stemming from nonrandom assignment. When propensity methods are used, a multivariable model is created to predict the likelihood, or propensity, of a patient receiving treatment. Because it is not necessary to be parsimonious in the development of propensity models, they can include many factors and interaction terms that might be left out of a standard multivariable logistic regression. The outcomes of patients with a similar treatment propensity who received the intervention can then be compared with the outcomes of those who did not. Some have gone so far as to use the term pseudorandomized trial to describe this approach because it is often capable of balancing covariates between the treated and nontreated patients. However, as sophisticated as this form of modeling may be, these techniques at best reduce bias related to measured confounders. Residual bias from confounders that go unmeasured remains a threat, one that is particularly common when relying on administrative data sources.
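
The propensity approach just described can be illustrated in a few lines of Python. This is a minimal sketch, not an implementation from any of the studies discussed here: the data frame df, the column names (neurologist for the treatment, died for the outcome, plus a handful of measured covariates), and the choice of 5 propensity strata are all hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical measured confounders; real propensity models can be far
    # richer, including interaction terms, as noted above.
    COVARIATES = ["age", "male", "n_comorbidities"]

    def propensity_stratified_effect(df: pd.DataFrame, n_strata: int = 5) -> float:
        """Estimate a treatment effect by stratifying on the propensity score."""
        # Step 1: model each patient's propensity to receive the treatment.
        model = LogisticRegression(max_iter=1000)
        model.fit(df[COVARIATES], df["neurologist"])
        df = df.assign(ps=model.predict_proba(df[COVARIATES])[:, 1])

        # Step 2: within strata of similar propensity, compare the outcomes
        # of treated and untreated patients, then average across strata.
        df["stratum"] = pd.qcut(df["ps"], q=n_strata, labels=False, duplicates="drop")
        diffs = []
        for _, stratum in df.groupby("stratum"):
            treated = stratum[stratum["neurologist"] == 1]
            control = stratum[stratum["neurologist"] == 0]
            if len(treated) and len(control):
                diffs.append(treated["died"].mean() - control["died"].mean())

        # The caveat from the text applies: this balances only *measured*
        # covariates; unmeasured confounders remain a threat.
        return float(np.mean(diffs))

Matching or weighting on the propensity score are common alternatives to the stratification shown here; the caveat about unmeasured confounders applies equally to all three.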

In this issue of the Journal of Hospital Medicine, Gillum and Johnston9 apply a version of instrumental variable analysis, a technique borrowed from econometrics, to address the issue of unmeasured confounding head‐on. The approach, called group‐treatment analysis, is based on the relatively simple notion that if neurologist care is superior to that provided by generalists, all other things being equal, hospitals that admit a large proportion of their patients to neurologists should have better outcomes than those admitting a smaller proportion. This approach has theoretical advantages over propensity adjustment because it does not attempt to control for differences between treated and untreated patients at the individual hospital level, where, presumably, the problem of selection bias is more potent. Although their standard multivariable models suggested that patients admitted to a neurologist were 40% less likely to die while hospitalized than patients admitted to generalists, Gillum and Johnston found that after adjusting for the institutional rate of neurologist admission, any apparent benefit had disappeared. Similar results were observed in their analyses of length of stay and cost.
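
The group-treatment logic can likewise be sketched in a few lines. The specification below is mine, not the authors': it reuses the hypothetical data frame from the previous sketch, adds a hospital identifier, and contrasts a naive individual-level model with one keyed to each hospital's rate of neurologist admission, which serves as the instrument.

    import statsmodels.formula.api as smf

    # Stage 1: each hospital's propensity to admit stroke patients to a
    # neurologist (the instrument), attached to every patient record.
    df["hosp_neuro_rate"] = df.groupby("hospital")["neurologist"].transform("mean")

    # Naive model: individual treatment assignment, vulnerable to
    # within-hospital selection bias.
    naive = smf.logit("died ~ neurologist + age + male", data=df).fit()

    # Group-treatment model: the hospital-level rate replaces individual
    # assignment, sidestepping selection at the bedside. If neurologist
    # care truly helps, its coefficient should be negative.
    grouped = smf.logit("died ~ hosp_neuro_rate + age + male", data=df).fit()

    print(naive.params["neurologist"], grouped.params["hosp_neuro_rate"])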

In some ways, the findings of this study are more startling for the questions they raise about residual bias in observational studies that rely on conventional multivariable methods than for the finding that generalist care was as safe as neurologist care. The results add to a growing body of evidence suggesting that stronger methods are required to deal with residual bias in observational studies.10

Although the results largely speak for themselves and should be reassuring, given that most patients with ischemic stroke in the United States are, and will continue to be, cared for by generalists, a number of important questions remain unanswered. First, the focus of this study was on short-term outcomes. Because functional status and quality of life probably matter as much as or more to stroke patients than in-hospital mortality, and certainly more than length of stay or cost, we can only hope that it is safe to extrapolate from the authors' mortality findings. Second, this study relied on data from the late 1990s, before the widespread availability of hospitalists; how generalizable the findings would be in today's environment is uncertain. On a more practical level, the authors were unable to assess the impact of formal or informal consultation by a neurologist. If consultation played a significant role (a reasonable assumption, I think), it would have blurred any distinction between the 2 physician groups. For this reason, one cannot draw any conclusions about a more pragmatic question: the necessity or benefit of neurologist consultation in patients with ischemic stroke.

Looking ahead, researchers hoping to improve the outcomes of patients with acute ischemic stroke should focus on developing novel models of collaboration between hospitalists and neurologists rather than on proving whether a neurologist alone or a hospitalist without neurologist input should care for a patient suffering a stroke. We should also recognize that protocols, checklists, and investments in information technology may provide clinical decision support that improves care more than either consulting a specialist or having one assume care of the patient.

References
  1. Ayanian JZ, Guadagnoli E, McNeil BJ, Cleary PD. Treatment and outcomes of acute myocardial infarction among patients of cardiologists and generalist physicians. Arch Intern Med. 1997;157:2570-2576.
  2. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288:2151-2162.
  3. Smetana GW, Landon BE, Bindman AB, et al. A comparison of outcomes resulting from generalist vs specialist care for a single discrete medical condition: a systematic review and methodologic critique. Arch Intern Med. 2007;167:10-20.
  4. Auerbach AD, Wachter RM, Katz P, et al. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137:859-865.
  5. Halasyamani LK, Valenstein PN, Friedlander MP, et al. A comparison of two hospitalist models with traditional care in a community teaching hospital. Am J Med. 2005;118:536-543.
  6. Kaboli PJ, Barnett MJ, Rosenthal GE. Associations with reduced length of stay and costs on an academic hospitalist service. Am J Manag Care. 2004;10:561-568.
  7. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589-2600.
  8. Wachter RM, Katz P, Showstack J, et al. Reorganizing an academic medical service: impact on cost, quality, patient satisfaction, and education. JAMA. 1998;279:1560-1565.
  9. Gillum L, Johnston SC. Influence of physician specialty on outcomes after acute ischemic stroke. J Hosp Med. 2008;3:184-192.
  10. Stukel TA, Fisher ES, Wennberg DE, Alter DA, Gottlieb DJ, Vermeulen MJ. Analysis of observational studies in the presence of treatment selection bias: effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA. 2007;297:278-285.
Issue
Journal of Hospital Medicine - 3(3)
Page Number
179-180
Display Headline
Who do you want taking care of your parent?
Article Source
Copyright © 2008 Society of Hospital Medicine
Correspondence Location
Center for Quality and Safety Research, Baystate Medical Center, 759 Chestnut Street, Springfield, MA 01199