Public reporting and pay-for-performance programs in perioperative medicine
Are they meeting their goals?

Hospital quality measures and rankings are now widely available to the public online, but is public reporting of this information an effective strategy for improving health care? Using a case study of a hospital that suffered negative publicity as a result of a quality report, this article explores the use of public reporting of performance data and pay-for-performance reimbursement strategies to foster quality improvement in the US health care system.

CASE STUDY: A SURGICAL PROGRAM GETS A BAD REPORT―IN THE HEADLINES

In September 2005, The Boston Globe ran a prominent story reporting that the UMass Memorial Medical Center in Worcester, Mass., was abruptly suspending its elective cardiac surgery program.1 The program’s suspension came after state public health officials presented UMass Memorial with a detailed analysis showing that the hospital’s mortality rate for coronary artery bypass graft surgery (CABG) patients was the highest in the state and almost double the average for Massachusetts hospitals.1

Key personnel from UMass Memorial described the events preceding and following the program’s suspension in a journal article published in 2008.2 In 2002, UMass Memorial hired a new chief of cardiothoracic surgery, who resigned in early 2005. A few months after that resignation, state public health officials alerted the hospital to the abovementioned CABG mortality data (from 2002 and 2003), which they said would soon be reported publicly. UMass Memorial then conducted an internal review of its data from the most recent years (2004 and 2005) and found that its risk-adjusted CABG mortality had actually worsened, at which point the hospital voluntarily suspended its cardiac surgery program.2

More news stories arose about UMass Memorial’s program and its problems. The hospital hired consultants and senior surgeons from around the state and New England to completely review its cardiac surgery program. They concluded that “many essential systems were not in place” and made 68 key recommendations, including a complete overhaul of the hospital’s quality-improvement structure. The prior cardiac surgeons departed.2

The cardiac surgery program resumed after a 6-week hiatus, with day-to-day supervision by two senior cardiac surgeons from a Boston teaching hospital. A nationally recognized cardiac surgeon was brought on as chief of cardiac surgery in January 2006. In the 18 months after the program resumed, risk-adjusted CABG mortality rates declined substantially, but patient volume failed to return to presuspension levels and the hospital reported $22 million in lost revenue in fiscal year 2006 as a result of the suspension.2

This case raises a number of questions that help to frame discussion of the benefits and risks of public reporting of hospital quality measures:

  • To what extent does public reporting accelerate quality improvement?
  • How typical was the subsequent mortality reduction reported by UMass Memorial—ie, can public reporting be expected to improve outcomes?
  • Was the effect on patient volume expected—ie, how much does public reporting affect market share?
  • Would a pay-for-performance reimbursement model have accelerated improvement?
  • Why do public reporting and pay-for-performance programs remain controversial?
  • Do patients have a right to know?

WHAT HAS FUELED THE MOVE TOWARD PUBLIC REPORTING?

Drivers of public reporting

Massachusetts is one of a number of states that publicly report outcomes from cardiac surgery and other procedures and processes of care. Three basic factors have helped drive the development of public reporting (and, in some cases, pay-for-performance) programs:

  • National policy imperatives designed to improve quality and safety and to reduce costs
  • Cultural factors in society, which include consumerism in health care and the desire for transparency
  • The growth of information technology and the World Wide Web, which have been huge enablers of public reporting. Public reporting existed before the Web era, but results released in a book that had to be ordered from a government printing office could never have reached so wide an audience.

The rationale for public reporting

In theory, how might public reporting and pay-for-performance programs improve quality? Several different mechanisms or factors are likely to be involved:

  • Feedback. The basic premise of the National Surgical Quality Improvement Program, to cite one example, is that peer comparison and performance feedback will stimulate quality improvement.
  • Reputation. Hospital personnel fear being embarrassed if data show that they are performing poorly compared with other hospitals. Likewise, in recent years we have seen hospitals with the best quality rankings publicly advertise their performance.
  • Market share. Here the premise is that patients will tend to select providers with higher quality rankings and shun those with lower rankings.
  • Financial incentives. Pay-for-performance programs link payment or reimbursement directly to the desired outcomes and thereby stimulate quality improvement without working through the abovementioned mechanisms.

Approaches to quality measurement

Public reporting of hospital performance requires selection of an approach to measuring quality of care. Generally speaking, measures of health care quality reflect one of three domains of care:

Structural (or environmental) aspects, such as staffing in the intensive care unit (ICU), surgical volume, or availability of emergency medical responders. An example of a structure-oriented reporting system is the Leapfrog Group’s online posting of hospital ratings based on surgical volumes for high-risk procedures, the degree of computerized order entry implementation, and the presence or absence of various patient safety practices.3

Processes of care, such as whether beta-blockers are prescribed for all patients after a myocardial infarction (MI), or whether thromboprophylaxis measures are ordered for surgical patients in keeping with guideline recommendations. Examples of process-oriented reporting systems include the US Department of Health and Human Services’ Hospital Compare Web site4 and the Commonwealth Fund’s WhyNotTheBest.org site.5

Outcomes of care, such as mortality or complication rates, or patient satisfaction scores. An example of an outcomes-oriented reporting system is the annual report of institution-specific hospital-acquired infection rates put out by Pennsylvania6 and most other states.

 

 

IS THERE EVIDENCE OF BENEFIT?

A consistent effect in spurring quality-improvement efforts

Nearly a dozen published studies have evaluated whether public reporting stimulates quality-improvement activities, and the results have shown fairly consistently that it does. A 2003 study by Hibbard et al is representative of the results.7 This survey-based investigation measured the number of quality-improvement activities in cardiac and obstetric care undertaken by 24 Wisconsin hospitals that were included in an existing public reporting system compared with the number undertaken by 98 other Wisconsin hospitals that received either a private report on their own quality performance (without the information being made public) or no quality report at all. The study found that the hospitals that participated in public reporting were engaged in significantly more quality-improvement activities in both of the clinical areas assessed than were the hospitals receiving private reporting or no reporting.

A mixed effect on patient outcomes

In contrast, the data on whether public reporting improves patient outcomes have so far been mixed. A 2008 systematic review of the literature identified 11 studies that addressed this issue: five studies found that public reporting had a positive effect on patient outcomes, while six studies demonstrated a negative effect or no effect.8 Unfortunately, the methodological quality of most studies was poor: most were before-and-after comparisons without controls.

One of the positive studies in this review examined the effects of New York State’s pioneering institution of provider-specific CABG mortality reports (provider profiling) in 1989.9 The analysis found that between 1987 and 1992 (during which time provider profiling was instituted), unadjusted 30-day mortality rates following bypass surgery declined to a significantly larger degree among New York Medicare patients (33% reduction) than among Medicare patients nationwide (19% reduction) (P < .001).
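
The comparison at the heart of this analysis is a pair of relative reductions in unadjusted 30-day mortality. The short Python sketch below illustrates the arithmetic only; the before-and-after rates are invented values chosen to reproduce relative reductions of roughly 33% and 19%, not the study's actual figures.

```python
def relative_reduction(rate_before, rate_after):
    """Fractional decline in a mortality rate between two periods."""
    return (rate_before - rate_after) / rate_before

# Hypothetical unadjusted 30-day CABG mortality rates, chosen only to
# reproduce the reported relative reductions; not the study's actual data.
ny_1987, ny_1992 = 0.045, 0.030      # New York Medicare patients
us_1987, us_1992 = 0.042, 0.034      # Medicare patients nationwide

print(f"New York:   {relative_reduction(ny_1987, ny_1992):.0%} relative reduction")
print(f"Nationwide: {relative_reduction(us_1987, us_1992):.0%} relative reduction")
```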

In contrast, a time-series study from Cleveland Health Quality Choice (CHQC)—an early and innovative public reporting program—exemplifies a case in which public reporting of hospital performance had no discernible effect.10 The study examined trends in 30-day mortality across a range of conditions over a 6-year period for 30 hospitals in the Cleveland area participating in a public reporting system. It found that the hospitals that started out in the worst-performing groups (based on baseline mortality rates) showed no significant change in mortality over time.

DOES PUBLIC REPORTING AFFECT PATIENT CHOICES?

How a high-profile bypass patient chooses a hospital

When former President Bill Clinton developed chest pain and shortness of breath in 2004, he was seen at a small community hospital in Westchester County, N.Y., and then transferred to New York-Presbyterian Hospital/Columbia University Medical Center for bypass surgery.11 Although one would think President Clinton would have chosen the best hospital for CABG in New York, Presbyterian/Columbia’s risk-adjusted mortality rate for CABG was actually about twice the average for New York hospitals and one of the worst in the state, according to the most recent “report card” for New York hospitals available at the time.12

Why did President Clinton choose the hospital he did? Chances are that he, like most other patients, did not base his decision on publicly reported data. His choice probably was heavily influenced by the normal referral patterns of the community hospital where he was first seen.

Surveys show low patient use of data on quality...

The question raised by President Clinton’s case has been formally studied. In 1996, Schneider and Epstein surveyed patients who had recently undergone CABG in Pennsylvania (where surgeon- and hospital-specific mortality rates for cardiac surgery are publicly available) and found that fewer than 1% of patients said that provider ratings had a moderate or major impact on their choice of provider.13 

The Kaiser Family Foundation regularly surveys the public about its knowledge and use of publicly available hospital comparison data. In the latest Kaiser survey, conducted in 2008,14 41% of respondents said they believe there are “big differences” in quality among their local hospitals, yet 59% said they would choose a hospital that is familiar to them rather than a higher-rated facility. These findings may be explained, in part, by a lack of awareness that data on hospital quality are available: only 7% of survey participants said they had seen and used information comparing the quality of hospitals to make health care decisions in the prior year, and only 6% said they had seen and used information comparing physicians.

...But a trend toward greater acceptance

Although consumers’ use of publicly reported quality data remains low, their recognition of the value of such data has grown over time. Kaiser has conducted similar public surveys dating back to 1996, and the period from 1996 to 2008 saw a substantial decrease (from 72% to 59%) in the percentage of Americans who would choose a hospital based on familiarity more than on quality ratings. Similarly, the percentage of Americans who would prefer a surgeon with high quality ratings over a surgeon who has treated friends or family more than doubled from 1996 (20%) to 2008 (47%).14

What effect on market share?

Studies on the effects that public reporting has on hospital market share have been limited.

Schneider and Epstein surveyed cardiologists in Pennsylvania in 1995 and found that 87% of them said the state’s public reporting of surgeon- and hospital-specific mortality rates for CABG had no influence or minimal influence on their referral recommendations.15

Similarly, a review of New York State’s public reporting system for CABG 15 years after its launch found that hospital performance was not associated with a subsequent change in market share, not even among those hospitals with the highest mortality rate in a given year.16 Interestingly, however, this review also showed that surgeons in the bottom performance quartile were four times as likely as other surgeons to leave practice in the year following their poor report, which is one of the most prominent outcomes associated with provider profiling reported to date.

PAY-FOR-PERFORMANCE PROGRAMS

Evidence on the impact of pay-for-performance programs in the hospital setting is even more limited than that for public reporting.

Some evidence has come from the CMS/Premier Hospital Quality Incentive Demonstration, a pay-for-performance collaboration between the Centers for Medicare and Medicaid Services (CMS) and Premier, Inc., a nationwide alliance of hospitals that promotes best practices.17 The demonstration calls for hospitals that rank in the top quintile or decile for performance to receive a 1% or 2% Medicare payment bonus for five clinical focus areas: cardiac surgery, hip and knee surgery, pneumonia, heart failure, and acute MI. Performance ratings are based primarily on process measures as well as a few clinical outcome measures. Results from the first 21 months of the demonstration showed a consistent improvement in the hospitals’ composite quality scores in each of the five clinical areas.17
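
The payment rule can be made concrete with a small sketch. The snippet below assumes an opportunity-model composite score (measures passed divided by measures eligible) and a tiering rule in which the top decile of hospitals earns a 2% bonus and the rest of the top quintile earns 1%; both the score construction and the cutoffs are simplified assumptions for illustration, not the demonstration's exact methodology.

```python
import numpy as np

def composite_score(passed, eligible):
    """Opportunity-model composite: care opportunities met / opportunities eligible."""
    return sum(passed) / sum(eligible)

def bonus_tier(scores):
    """Assign bonus percentages by rank within one clinical area.

    Simplified assumption: top decile of hospitals gets a 2% Medicare bonus,
    the rest of the top quintile gets 1%, and everyone else gets nothing.
    """
    scores = np.asarray(scores, dtype=float)
    percentile = (scores.argsort().argsort() + 1) / len(scores)
    return np.where(percentile > 0.9, 2.0, np.where(percentile > 0.8, 1.0, 0.0))

# One hospital's composite for acute MI: 184 of 200 eligible opportunities met
print(composite_score([92, 92], [100, 100]))            # 0.92

# Ten hypothetical hospitals ranked on their composite scores
scores = [0.97, 0.95, 0.93, 0.91, 0.89, 0.87, 0.85, 0.83, 0.81, 0.79]
print(bonus_tier(scores))   # best hospital gets 2%, next gets 1%, rest get 0
```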

It is important to recognize, however, that this improvement occurred against the backdrop of broad national adoption of public reporting of hospital quality data, which makes it difficult to tease out how much of the improvement was truly attributable to pay-for-performance, especially in the absence of a control group.

To address this question, my colleagues and I evaluated adherence to quality measures over a 2-year period at 613 hospitals participating in a national public reporting initiative,18 including 207 hospitals that simultaneously took part in the CMS/Premier Hospital Quality Incentive Demonstration’s pay-for-performance program described above. We found that the hospitals participating in both public reporting and the pay-for-performance initiative achieved only modestly greater improvements in quality than did the hospitals engaged solely in public reporting; the difference amounted to only about a 1% improvement in process measures per year.

In another controlled study, Glickman et al compared quality improvement in the management of acute MI between 54 hospitals in a CMS pay-for-performance pilot project and 446 control hospitals without pay-for-performance incentives.19 They found that the pay-for-performance hospitals achieved a statistically significantly greater degree of improvement compared with control hospitals on two of six process-of-care measures (use of aspirin at discharge and smoking-cessation counseling) but not on the composite process-of-care measure. There was no significant difference between the groups in improvements in in-hospital mortality.

Why have the effects of pay-for-performance initiatives so far been so limited? It may be that the bonuses are too small and that public reporting is already effective at stimulating quality improvement, so that the incremental benefit of adding financial incentives is small. In the case of my group’s study,18 another possible factor was that the hospitals’ baseline performance on the quality measures assessed was already high—approaching or exceeding 90% on 5 of the 10 measures—thereby limiting our power to detect differences between the groups.

 

 

CONTROVERSIES AND CHALLENGES

Many issues continue to surround public reporting and pay-for-performance programs:

  • Are the measures used to evaluate health care systems suitable and evidence-based? Do they truly reflect the quality of care that providers are giving?
  • Do the programs encourage “teaching to the test” rather than stimulating real and comprehensive improvement? Do they make the system prone to misuse or overuse of measured services?
  • How much of the variation in hospital outcomes can be explained by the current process-of-care measures?
  • Should quality be measured by outcomes or processes? Outcomes matter more to patients, but they require risk adjustment to ensure valid comparisons, and risk adjustment can be difficult and expensive to conduct (a brief sketch of risk-adjusted comparison follows this list).
  • How much is chance a factor in apparent performance differences between hospitals?
  • How much is patient selection a factor? Might public reporting lead to “cherry-picking” of low-risk patients and thereby reduce access to care for other patients?
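
To make the risk-adjustment point concrete, here is a minimal sketch of the usual approach: fit a patient-level risk model, compute each hospital's expected deaths given its case mix, and compare observed with expected deaths (an O/E ratio) rather than comparing crude rates. The model, variables, and simulated data are illustrative assumptions only, not any reporting program's actual methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated patients (illustrative only): age and an urgent-surgery flag drive risk.
n = 5000
X = np.column_stack([rng.normal(size=n), rng.integers(0, 2, size=n)])
true_logit = -4.0 + 0.8 * X[:, 0] + 1.2 * X[:, 1]
died = rng.random(n) < 1 / (1 + np.exp(-true_logit))
hospital = rng.integers(0, 2, size=n)                # two hypothetical hospitals

# Risk model fit on all patients; each hospital is then judged on observed
# deaths relative to the deaths expected for its particular case mix.
risk_model = LogisticRegression().fit(X, died)
expected = risk_model.predict_proba(X)[:, 1]

for h in (0, 1):
    mask = hospital == h
    oe_ratio = died[mask].sum() / expected[mask].sum()
    print(f"Hospital {h}: crude mortality {died[mask].mean():.1%}, O/E ratio {oe_ratio:.2f}")
```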

Unidirectional measures can lead to misuse, overuse

In 2003, the Infectious Diseases Society of America updated its guidelines on community-acquired pneumonia to recommend that patients receive antibiotics within 4 hours of hospital admission. This recommendation was widely adopted as an incentive-linked performance measure by CMS and other third-party payers. Kanwar et al studied the impact of this guidelines-based incentive in a pre/post study at one large teaching hospital.20 They found that while significantly more patients received antibiotics in a timely fashion after publication of the guidelines (2005) versus before the guidelines (2003), almost one-third of patients receiving antibiotics in 2005 had normal chest radiographs and thus were not appropriate candidates for therapy. Moreover, significantly fewer patients in 2005 had a final diagnosis of pneumonia at discharge, and there was no difference between the two periods in rates of mortality or ICU transfer. The researchers concluded that linking the quality indicator of early antibiotic use to financial incentives may lead to misdiagnosis of pneumonia and inappropriate antibiotic use.

Of course, antibiotic timing is not the only quality measure subject to overuse or misuse; other measures pose similar risks, including prophylaxis for deep vein thrombosis, glycemic control measures, and target immunization rates.

More-nuanced measures needed

We must also consider how well reported quality measures actually reflect our objectives. For example, an evaluation of 962 hospitals’ performance in managing acute MI found that the publicly reported core process measures for acute MI (beta-blocker and aspirin at admission and discharge, ACE inhibitor at discharge, smoking-cessation counseling, timely reperfusion) together explained only 6% of the variance among the hospitals in risk-adjusted 30-day mortality.21 This underscores how complicated the factors affecting mortality are, and how existing process measures have only begun to scratch the surface.
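
In other words, the 6% figure is a variance-explained (R²) statistic at the hospital level: regress risk-adjusted mortality on composite process performance and ask what share of the between-hospital variation the process measures account for. The sketch below uses simulated hospital-level data with parameters chosen so the result lands in the same low single-digit range; it is not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated hospital-level data (illustrative only): a weak link between
# composite process adherence and risk-adjusted 30-day mortality.
n_hospitals = 962
process_score = rng.uniform(0.6, 1.0, n_hospitals)
mortality = 0.17 - 0.05 * process_score + rng.normal(0, 0.023, n_hospitals)

# R^2 from a simple linear fit: share of mortality variance explained by process.
slope, intercept = np.polyfit(process_score, mortality, 1)
residuals = mortality - (slope * process_score + intercept)
r_squared = 1 - residuals.var() / mortality.var()
print(f"Share of between-hospital mortality variance explained: {r_squared:.1%}")
```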

How much of a role does chance play?

Another issue is the role of chance and our limited power to detect real differences in outcomes, as illustrated by an analysis by Dimick et al of all discharges from a nationally representative sample of nearly 1,000 hospitals.22 The objective was to determine whether the seven operations for which mortality is advocated as a quality indicator by the Agency for Healthcare Research and Quality are performed often enough to reliably identify hospitals with increased mortality rates. The researchers found that only for one of the seven procedures—CABG—is there sufficient caseload over a 3-year period at the majority of US hospitals to accurately detect a mortality rate twice the national average.
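
The underlying issue is statistical power: given a procedure's baseline mortality rate and a hospital's caseload over 3 years, can a doubling of that rate be detected reliably? The sketch below uses the standard normal-approximation sample-size formula for a one-sample test of a proportion with conventional significance and power; the baseline rates are rough illustrative assumptions, not the paper's figures.

```python
from math import sqrt
from scipy.stats import norm

def cases_needed(p0, p1, alpha=0.05, power=0.80):
    """Caseload needed to detect hospital mortality p1 against a known
    national rate p0 (one-sample, two-sided normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    numerator = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return (numerator / (p1 - p0)) ** 2

# Rough illustrative baseline mortality rates (not the paper's figures).
procedures = {"CABG": 0.035, "esophagectomy": 0.08, "pancreatic resection": 0.06}
for name, p0 in procedures.items():
    n = cases_needed(p0, 2 * p0)     # detect a mortality rate twice the average
    print(f"{name}: ~{n:.0f} cases needed over the reporting period")

# Many hospitals perform several hundred CABGs over 3 years, clearing the CABG
# threshold, whereas volumes for the rarer operations typically fall far short.
```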

Although CMS is highly committed to public reporting, the comparative mortality data available on its Hospital Compare Web site are not very useful for driving consumer choice or motivating hospitals to improve. For example, of the nearly 4,500 US hospitals that reported data on 30-day mortality from MI, only 17 hospitals were considered to be better than the national average and only 7 were considered worse than the national average.4

CASE REVISITED: LESSONS FROM THE UMASS MEMORIAL EXPERIENCE

Returning to our case study, what can the UMass Memorial experience teach us, and how well does it reflect the literature about the usefulness of public reporting?

Did public reporting accelerate quality improvement efforts? Yes. Reporting led to the suspension of cardiac surgery and substantive reorganization, which is consistent with the literature.

Was the mortality reduction typical? No. An optimist’s view would be that the drastic actions spurred by the media coverage had strong effects. A skeptic might counter that UMass Memorial did some “cherry-picking” of patients, or that it simply got better at coding procedures in a way that reflected more favorably on the hospital.

Were the declines in patient volumes predictable? No. So far, the data suggest that public reporting has its greatest effects on providers rather than on institutions. This may change, however, with the introduction of tiered copayments, whereby patients are asked to pay more if they get their care from lower-rated institutions.

Would financial incentives have accelerated improvement? It is too early to tell. The evidence for pay-for-performance programs is limited, and the benefits demonstrated so far have been modest. But in many ways the alternative is worse: our current system of financing and paying for hospital care offers no financial incentives to hospitals for investing in the personnel or systems required to achieve better outcomes—and instead rewards (through supplemental payments) adverse outcomes.

Did prospective patients have a right to know? Despite the limitations of public reporting, one of the most compelling arguments in its favor is that patients at UMass Memorial had the right to know about the program’s outcomes. This alone may ultimately justify the expense and efforts involved. Transparency and accountability are core values of open democratic societies, and US society relies on public reporting in many other realms: the National Highway Traffic Safety Administration publicizes crash test ratings, the Securities and Exchange Commission enforces public reporting by financial institutions, and the Federal Aviation Administration reports on airline safety, timeliness of flights, and lost baggage rates.

FUTURE DIRECTIONS

In the future, we can expect more measurement and reporting of health care factors that patients care most about, such as clinical outcomes and the patient experience. It is likely that public reporting and pay-for-performance programs will address a broader range of conditions and comprise a larger number of measures. CMS has outlined plans to increase the number of publicly reported measures to more than 70 by 2010 and more than 100 by 2011. My hope is that this expansion of data, along with improved data synthesis and presentation, will foster greater use of publicly reported data. Further, the continued evolution of the Web and social networking sites is very likely to enhance public awareness of hospital performance and change the ways in which patients use these data.

 

 

DISCUSSION

Question from the audience: I’m concerned about what seems to be a unilateral effort to improve quality. There are many components of health care delivery beyond those you’ve described, including the efforts of patients, insurers, employers, and the government. The reality is that patients don’t plan for illness, insurance companies often deny care, more and more employers are providing less coverage or no coverage, and Medicare is on the road to insolvency. Is the battle for quality winnable when all these other components of delivery are failing?

Dr. Lindenauer: You make good points. But from the standpoint of professionalism, I think we have a compelling duty to constantly strive to improve the quality of care in our hospitals and practices. I have presented strategies for potentially accelerating improvements that providers are trying to make anyway. Public reporting and financial incentives are likely to be with us for a while, and their use is likely to grow. But as you said, they address only part of the problem confronting American health care.

Question from the audience: For the savvy health care consumer, is there one particular Web site for hospital or provider comparisons that you would especially recommend? Do you actually recommend using such Web sites to patients before they undergo certain procedures?

Dr. Lindenauer: I think the Hospital Compare site from the Department of Health and Human Services is the key Web site. The California Hospital Assessment and Reporting Taskforce (CHART) has a good site, and the Commonwealth Fund’s WhyNotTheBest.org is an interesting newcomer. 

However, even the most ardent advocates for public reporting wouldn’t say the information available today is sufficient for making decisions. There’s still an important role for getting recommendations from other doctors who are familiar with local hospitals and providers.

I’m optimistic that the changes that are coming to these Web sites will provide a better user experience and make it harder to ignore the results of public reporting. Today we can say, “Hospital A is better at discharge instructions or smoking cessation counseling.” But we all can appreciate how weak those kinds of measures are because their implementation is subject to local interpretations. Once risk-adjusted outcomes and more-meaningful process measures are available, I’d be surprised if more patients weren’t willing to base their decisions on published comparisons.

References
  1. Kowalczyk L, Smith S. Hospital halts heart surgeries due to deaths: high rate cited at Worcester facility. The Boston Globe. September 22, 2005.
  2. Ettinger WH, Hylka SM, Phillips RA, Harrison LH Jr, Cyr JA, Sussman AJ. When things go wrong: the impact of being a statistical outlier in publicly reported coronary artery bypass graft surgery mortality data. Am J Med Qual 2008; 23:90–95.
  3. Leapfrog hospital quality ratings. The Leapfrog Group Web site. http://www.leapfroggroup.org/cp. Accessed June 10, 2009.
  4. Hospital Compare: a quality tool provided by Medicare. U.S. Department of Health & Human Services Web site. http://www.hospitalcompare.hhs.gov.  Accessed June 10, 2009.
  5. Why Not the Best (Beta): A Health Care Quality Improvement Resource. The Commonwealth Fund. http://www.WhyNotTheBest.org. Accessed May 6, 2009.
  6. Hospital-acquired infections in Pennsylvania. Pennsylvania Health Care Cost Containment Council Web site. http://www.phc4.org.  Accessed April 6, 2009.
  7. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22:84–94.
  8. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148:111–123.
  9. Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol 1998; 32:993–999.
  10. Baker DW, Einstadter D, Thomas C, et al. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Med Care 2003; 41:729–740.
  11. Graylock J. After chest pains, Clinton set to undergo bypass surgery. USA Today. September 3, 2004.
  12. Adult Cardiac Surgery in New York State, 1999–2001. Albany, NY: New York State Department of Health; April 2004. http://www.health.state.ny.us/nysdoh/heart/pdf/1999-2001_cabg.pdf. Accessed June 10, 2009.
  13. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998; 279:1638–1642.
  14. The Henry J. Kaiser Family Foundation. 2008 Update on Consumers’ Views of Patient Safety and Quality Information: Summary & Chartpack; October 2008. http://www.kff.org/kaiserpolls/upload/7819.pdf. Accessed June 10, 2009.
  15. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med 1996; 335:251–256.
  16. Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Aff (Millwood) 2006; 25:844–855.
  17. Remus D. Pay for performance: CMS/Premier Hospital Quality Incentive Demonstration Project—year 1 results, December 2005. PowerPoint presentation available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/results/index.jsp. Accessed June 10, 2009.
  18. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007; 356:486–496.
  19. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA 2007; 297:2373–2380.
  20. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest 2007; 131:1865–1869.
  21. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006; 296:72–78.
  22. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA 2004; 292:847–851.
Author and Disclosure Information

Peter Lindenauer, MD, MSc
Director, Center for Quality of Care Research, Baystate Medical Center, Springfield, MA; and Associate Professor of Medicine, Tufts University School of Medicine, Boston, MA

Correspondence: Peter K. Lindenauer, MD, MSc, Director, Center for Quality of Care Research, Baystate Medical Center, 759 Chestnut Street, Springfield, MA 01199; peter.lindenauer@bhs.org

Dr. Lindenauer has indicated that he has no financial relationships with commercial interests that have a direct bearing on the subject matter of this article.

This article was developed from an audio transcript of Dr. Lindenauer’s lecture at the 4th Annual Perioperative Medicine Summit. The transcript was edited by the Cleveland Clinic Journal of Medicine staff for clarity and conciseness, and was then reviewed, revised, and approved by Dr. Lindenauer.


This case raises a number of questions that help to frame discussion of the benefits and risks of public reporting of hospital quality measures:

  • To what extent does public reporting accelerate quality improvement?
  • How typical was the subsequent mortality reduction reported by UMass Memorial—ie, can public reporting be expected to improve outcomes?
  • Was the effect on patient volume expected—ie, how much does public reporting affect market share?
  • Would a pay-for-performance reimbursement model have accelerated improvement?
  • Why do public reporting and pay-for-performance programs remain controversial?
  • Do patients have a right to know?

WHAT HAS FUELED THE MOVE TOWARD PUBLIC REPORTING?

Drivers of public reporting

Massachusetts is one of a number of states that publicly report outcomes from cardiac surgery and other procedures and processes of care. Three basic factors have helped drive the development of public reporting (and, in some cases, pay-for-performance) programs:

  • National policy imperatives designed to improve quality and safety and to reduce costs
  • Cultural factors in society, which include consumerism in health care and the desire for transparency
  • The growth of information technology and use of the World Wide Web, which has been a huge enabler of public reporting. Performance data could be published before the Web era, but results released in a printed volume that had to be ordered from a government printing office never reached such a wide audience.

The rationale for public reporting

In theory, how might public reporting and pay-for-performance programs improve quality? Several different mechanisms or factors are likely to be involved:

  • Feedback. The basic premise of the National Surgical Quality Improvement Program, to cite one example, is that peer comparison and performance feedback will stimulate quality improvement.
  • Reputation. Hospital personnel fear being embarrassed if data show that they are performing poorly compared with other hospitals. Likewise, in recent years we have seen hospitals with the best quality rankings publicly advertise their performance.
  • Market share. Here the premise is that patients will tend to select providers with higher quality rankings and shun those with lower rankings.
  • Financial incentives. Pay-for-performance programs link payment or reimbursement directly to the desired outcomes and thereby stimulate quality improvement without working through the abovementioned mechanisms.

Approaches to quality measurement

Public reporting of hospital performance requires selection of an approach to measuring quality of care. Generally speaking, measures of health care quality reflect one of three domains of care:

Structural (or environmental) aspects, such as staffing in the intensive care unit (ICU), surgical volume, or availability of emergency medical responders. An example of a structure-oriented reporting system is the Leapfrog Group’s online posting of hospital ratings based on surgical volumes for high-risk procedures, the degree of computerized order entry implementation, and the presence or absence of various patient safety practices.3

Processes of care, such as whether beta-blockers are prescribed for all patients after a myocardial infarction (MI), or whether thromboprophylaxis measures are ordered for surgical patients in keeping with guideline recommendations. Examples of process-oriented reporting systems include the US Department of Health and Human Services’ Hospital Compare Web site4 and the Commonwealth Fund’s WhyNotTheBest.org site.5

Outcomes of care, such as rates of mortality or complications, or patient satisfaction rates. An example of an outcomes-oriented reporting system is the annual report of institution-specific hospital-acquired infection rates put out by Pennsylvania6 and most other states.

 

 

IS THERE EVIDENCE OF BENEFIT?

A consistent effect in spurring quality-improvement efforts

Nearly a dozen published studies have evaluated whether public reporting stimulates quality-improvement activities, and the results have shown fairly consistently that it does. A 2003 study by Hibbard et al is representative of the results.7 This survey-based investigation measured the number of quality-improvement activities in cardiac and obstetric care undertaken by 24 Wisconsin hospitals that were included in an existing public reporting system compared with the number undertaken by 98 other Wisconsin hospitals that received either a private report on their own quality performance (without the information being made public) or no quality report at all. The study found that the hospitals that participated in public reporting were engaged in significantly more quality-improvement activities in both of the clinical areas assessed than were the hospitals receiving private reporting or no reporting.

A mixed effect on patient outcomes

In contrast, the data on whether public reporting improves patient outcomes have so far been mixed. A 2008 systematic review of the literature identified 11 studies that addressed this issue: five studies found that public reporting had a positive effect on patient outcomes, while six studies demonstrated a negative effect or no effect.8 Unfortunately, the methodological quality of most studies was poor: most were before-and-after comparisons without controls.

One of the positive studies in this review examined the effects of New York State’s pioneering institution of provider-specific CABG mortality reports (provider profiling) in 1989.9 The analysis found that between 1987 and 1992 (during which time provider profiling was instituted), unadjusted 30-day mortality rates following bypass surgery declined to a significantly larger degree among New York Medicare patients (33% reduction) than among Medicare patients nationwide (19% reduction) (P < .001).

In contrast, a time-series study from Cleveland Health Quality Choice (CHQC)—an early and innovative public reporting program—exemplifies a case in which public reporting of hospital performance had no discernible effect.10 The study examined trends in 30-day mortality across a range of conditions over a 6-year period for 30 hospitals in the Cleveland area participating in a public reporting system. It found that the hospitals that started out in the worst-performing groups (based on baseline mortality rates) showed no significant change in mortality over time.

DOES PUBLIC REPORTING AFFECT PATIENT CHOICES?

How a high-profile bypass patient chooses a hospital

When former President Bill Clinton developed chest pain and shortness of breath in 2004, he was seen at a small community hospital in Westchester County, N.Y., and then transferred to New York-Presbyterian Hospital/Columbia University Medical Center for bypass surgery.11 Although one would think President Clinton would have chosen the best hospital for CABG in New York, Presbyterian/Columbia’s risk-adjusted mortality rate for CABG was actually about twice the average for New York hospitals and one of the worst in the state, according to the most recent “report card” for New York hospitals available at the time.12

Why did President Clinton choose the hospital he did? Chances are that he, like most other patients, did not base his decision on publicly reported data. His choice probably was heavily influenced by the normal referral patterns of the community hospital where he was first seen.

Surveys show low patient use of data on quality...

The question raised by President Clinton’s case has been formally studied. In 1996, Schneider and Epstein surveyed patients who had recently undergone CABG in Pennsylvania (where surgeon- and hospital-specific mortality rates for cardiac surgery are publicly available) and found that fewer than 1% of patients said that provider ratings had a moderate or major impact on their choice of provider.13 

The Kaiser Family Foundation regularly surveys the public about its knowledge and use of publicly available hospital comparison data. In the latest Kaiser survey, conducted in 2008,14 41% of respondents said they believe there are “big differences” in quality among their local hospitals, yet 59% said they would choose a hospital that is familiar to them rather than a higher-rated facility. These findings may be explained, in part, by a lack of awareness that data on hospital quality are available: only 7% of survey participants said they had seen and used information comparing the quality of hospitals to make health care decisions in the prior year, and only 6% said they had seen and used information comparing physicians.

...But a trend toward greater acceptance

Although consumers’ use of publicly reported quality data remains low, their recognition of the value of such data has grown over time. Kaiser has conducted similar public surveys dating back to 1996, and the period from 1996 to 2008 saw a substantial decrease (from 72% to 59%) in the percentage of Americans who would choose a hospital based on familiarity more than on quality ratings. Similarly, the percentage of Americans who would prefer a surgeon with high quality ratings over a surgeon who has treated friends or family more than doubled from 1996 (20%) to 2008 (47%).14

What effect on market share?

Studies on the effects that public reporting has on hospital market share have been limited.

Schneider and Epstein surveyed cardiologists in Pennsylvania in 1995 and found that 87% of them said the state’s public reporting of surgeon- and hospital-specific mortality rates for CABG had no influence or minimal influence on their referral recommendations.15

Similarly, a review of New York State’s public reporting system for CABG 15 years after its launch found that hospital performance was not associated with a subsequent change in market share, not even among those hospitals with the highest mortality rate in a given year.16 Interestingly, however, this review also showed that surgeons in the bottom performance quartile were four times as likely as other surgeons to leave practice in the year following their poor report, which is one of the most prominent outcomes associated with provider profiling reported to date.

PAY-FOR-PERFORMANCE PROGRAMS

Evidence on the impact of pay-for-performance programs in the hospital setting is even more limited than that for public reporting.

Some evidence has come from the CMS/Premier Hospital Quality Incentive Demonstration, a pay-for-performance collaboration between the Centers for Medicare and Medicaid Services (CMS) and Premier, Inc., a nationwide alliance of hospitals that promotes best practices.17 Under the demonstration, hospitals that rank in the top decile or top quintile for performance receive a 2% or 1% Medicare payment bonus, respectively, in each of five clinical focus areas: cardiac surgery, hip and knee surgery, pneumonia, heart failure, and acute MI. Performance ratings are based primarily on process measures as well as a few clinical outcome measures. Results from the first 21 months of the demonstration showed a consistent improvement in the hospitals’ composite quality scores in each of the five clinical areas.17
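
To make that tiering concrete, the sketch below shows how such a rule could be applied to a set of composite quality scores. It assumes the reading given above (the top decile of hospitals earns a 2% bonus and the remainder of the top quintile earns 1%); the hospital names, scores, and function names are hypothetical and are not drawn from the demonstration itself.

```python
# Hypothetical sketch of a decile/quintile bonus rule like the one described for
# the CMS/Premier demonstration: top decile of composite scores -> 2% bonus,
# remainder of the top quintile -> 1% bonus. Names and scores are illustrative.

def assign_bonuses(composite_scores):
    """Map hospital composite quality scores to Medicare payment bonus rates."""
    ranked = sorted(composite_scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    top_decile = max(1, n // 10)   # best 10% of hospitals
    top_quintile = max(1, n // 5)  # best 20% of hospitals
    bonuses = {}
    for rank, (hospital, _score) in enumerate(ranked):
        if rank < top_decile:
            bonuses[hospital] = 0.02   # 2% payment bonus
        elif rank < top_quintile:
            bonuses[hospital] = 0.01   # 1% payment bonus
        else:
            bonuses[hospital] = 0.0    # no bonus
    return bonuses

if __name__ == "__main__":
    scores = {f"Hospital {i}": 70 + i for i in range(20)}  # made-up composite scores
    print(assign_bonuses(scores))
```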

It is important to recognize, however, that this improvement occurred against the backdrop of broad national adoption of public reporting of hospital quality data, which makes it difficult to tease out how much of the improvement was truly attributable to pay-for-performance, especially in the absence of a control group.

To address this question, my colleagues and I evaluated adherence to quality measures over a 2-year period at 613 hospitals participating in a national public reporting initiative,18 including 207 hospitals that simultaneously took part in the CMS/Premier Hospital Quality Incentive Demonstration’s pay-for-performance program described above. We found that the hospitals participating in both public reporting and the pay-for-performance initiative achieved only modestly greater improvements in quality than did the hospitals engaged solely in public reporting; the difference amounted to only about a 1% improvement in process measures per year.

In another controlled study, Glickman et al compared quality improvement in the management of acute MI between 54 hospitals in a CMS pay-for-performance pilot project and 446 control hospitals without pay-for-performance incentives.19 They found that the pay-for-performance hospitals achieved a statistically significantly greater degree of improvement compared with control hospitals on two of six process-of-care measures (use of aspirin at discharge and smoking-cessation counseling) but not on the composite process-of-care measure. There was no significant difference between the groups in improvements in in-hospital mortality.

Why have the effects of pay-for-performance initiatives so far been so limited? It may be that the bonuses are too small and that public reporting is already effective at stimulating quality improvement, so that the incremental benefit of adding financial incentives is small. In the case of my group’s study,18 another possible factor was that the hospitals’ baseline performance on the quality measures assessed was already high—approaching or exceeding 90% on 5 of the 10 measures—thereby limiting our power to detect differences between the groups.

 

 

CONTROVERSIES AND CHALLENGES

Many issues continue to surround public reporting and pay-for-performance programs:

  • Are the measures used to evaluate health care systems suitable and evidence-based? Do they truly reflect the quality of care that providers are giving?
  • Do the programs encourage “teaching to the test” rather than stimulating real and comprehensive improvement? Do they make the system prone to misuse or overuse of measured services?
  • How much of the variation in hospital outcomes can be explained by the current process-of-care measures?
  • Should quality be measured by outcomes or processes? Outcomes matter more to patients, but they require risk adjustment to ensure valid comparisons, and risk adjustment can be difficult and expensive to conduct (a brief sketch of one common risk-adjustment approach follows this list).
  • How much is chance a factor in apparent performance differences between hospitals?
  • How much is patient selection a factor? Might public reporting lead to “cherry-picking” of low-risk patients and thereby reduce access to care for other patients?
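
To illustrate the risk adjustment mentioned in the list above, here is a minimal sketch of one common approach: an observed-to-expected (O/E) mortality comparison in which each patient’s expected risk of death comes from a predictive model and a hospital’s risk-adjusted rate is its O/E ratio multiplied by the overall average rate. The logistic-style risk model, its coefficients, and the patient data are hypothetical; this is only a sketch of the general technique, not any state’s actual methodology.

```python
import math

# Hypothetical patient-level risk model, for illustration only: the intercept and
# coefficients are invented, not those of any published CABG risk model.
def predicted_mortality_risk(age, ejection_fraction, emergency):
    logit = -6.0 + 0.05 * age - 0.03 * ejection_fraction + (1.2 if emergency else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))

def risk_adjusted_rate(patients, overall_rate):
    """Observed/expected mortality ratio scaled by the overall (e.g., statewide) rate."""
    observed = sum(p["died"] for p in patients)
    expected = sum(
        predicted_mortality_risk(p["age"], p["ef"], p["emergency"]) for p in patients
    )
    return (observed / expected) * overall_rate

if __name__ == "__main__":
    # Made-up patients at one hospital; "ef" is ejection fraction in percent.
    hospital_patients = [
        {"age": 71, "ef": 40, "emergency": True,  "died": 1},
        {"age": 63, "ef": 55, "emergency": False, "died": 0},
        {"age": 80, "ef": 30, "emergency": True,  "died": 0},
    ]
    print(f"Risk-adjusted mortality: {risk_adjusted_rate(hospital_patients, 0.02):.2%}")
```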

Unidirectional measures can lead to misuse, overuse

In 2003, the Infectious Diseases Society of America updated its guidelines on community-acquired pneumonia to recommend that patients receive antibiotics within 4 hours of hospital admission. This recommendation was widely adopted as an incentive-linked performance measure by CMS and other third-party payers. Kanwar et al studied the impact of this guidelines-based incentive in a pre/post study at one large teaching hospital.20 They found that while significantly more patients received antibiotics in a timely fashion after publication of the guidelines (2005) versus before the guidelines (2003), almost one-third of patients receiving antibiotics in 2005 had normal chest radiographs and thus were not appropriate candidates for therapy. Moreover, significantly fewer patients in 2005 had a final diagnosis of pneumonia at discharge, and there was no difference between the two periods in rates of mortality or ICU transfer. The researchers concluded that linking the quality indicator of early antibiotic use to financial incentives may lead to misdiagnosis of pneumonia and inappropriate antibiotic use.

Of course, antibiotic timing is not the only quality measure subject to overuse or misuse; other measures pose similar risks, including prophylaxis for deep vein thrombosis, glycemic control measures, and target immunization rates.

More-nuanced measures needed

We must also consider how well reported quality measures actually reflect our objectives. For example, an evaluation of 962 hospitals’ performance in managing acute MI found that the publicly reported core process measures for acute MI (beta-blocker and aspirin at admission and discharge, ACE inhibitor at discharge, smoking-cessation counseling, timely reperfusion) together explained only 6% of the variance among the hospitals in risk-adjusted 30-day mortality.21 This underscores how complicated the factors affecting mortality are, and how existing process measures have only begun to scratch the surface.

How much of a role does chance play?

Another issue is the role of chance and our limited power to detect real differences in outcomes, as illustrated by an analysis by Dimick et al of all discharges from a nationally representative sample of nearly 1,000 hospitals.22 The objective was to determine whether the seven operations for which mortality is advocated as a quality indicator by the Agency for Healthcare Research and Quality are performed often enough to reliably identify hospitals with increased mortality rates. The researchers found that only for one of the seven procedures—CABG—is there sufficient caseload over a 3-year period at the majority of US hospitals to accurately detect a mortality rate twice the national average.
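
The reasoning behind that finding can be illustrated with a simple simulation: given a hospital’s 3-year caseload and a baseline mortality rate, how often would a hospital whose true mortality is double the baseline actually be flagged as statistically worse? The caseloads and rates below are made-up inputs and the test is an ordinary one-sided binomial comparison, so this is a sketch of the underlying power problem, not a reproduction of the Dimick analysis.

```python
import random
from math import comb

def detection_power(caseload, baseline_rate, true_rate, alpha=0.05, trials=2000):
    """Estimate how often a hospital whose true mortality is true_rate would be
    flagged as significantly worse than baseline_rate, given its caseload."""

    def upper_tail(k, n, p):
        # P(X >= k) for X ~ Binomial(n, p), by summing the probability mass.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Smallest death count that would be statistically significant (one-sided)
    # if the hospital truly performed at the baseline rate.
    critical = next(k for k in range(caseload + 1)
                    if upper_tail(k, caseload, baseline_rate) < alpha)

    flagged = 0
    for _ in range(trials):
        deaths = sum(random.random() < true_rate for _ in range(caseload))
        if deaths >= critical:
            flagged += 1
    return flagged / trials

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical 3-year caseloads and baseline mortality rates, for illustration.
    for procedure, cases, rate in [("CABG", 450, 0.04), ("Esophagectomy", 15, 0.08)]:
        power = detection_power(cases, rate, 2 * rate)
        print(f"{procedure}: chance of detecting doubled mortality = {power:.0%}")
```

With inputs like these, a high-volume procedure such as CABG is flagged most of the time, whereas a low-volume procedure is flagged only rarely, which is the intuition behind the finding that most procedures lack sufficient caseload for reliable hospital comparisons.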

Although CMS is highly committed to public reporting, the comparative mortality data available on its Hospital Compare Web site are not very useful for driving consumer choice or motivating hospitals to improve. For example, of the nearly 4,500 US hospitals that reported data on 30-day mortality from MI, only 17 hospitals were considered to be better than the national average and only 7 were considered worse than the national average.4

CASE REVISITED: LESSONS FROM THE UMASS MEMORIAL EXPERIENCE

Returning to our case study, what can the UMass Memorial experience teach us, and how well does it reflect the literature about the usefulness of public reporting?

Did public reporting accelerate quality improvement efforts? Yes. Reporting led to the suspension of cardiac surgery and substantive reorganization, which is consistent with the literature.

Was the mortality reduction typical? No. An optimist’s view would be that the drastic actions spurred by the media coverage had strong effects. A skeptic might counter that UMass Memorial did some “cherry-picking” of patients, or that it simply got better at coding procedures in a way that reflected more favorably on the hospital.

Were the declines in patient volumes predictable? No. So far, the data suggest that public reporting has its greatest effects on individual providers rather than on institutions. This may change, however, with the introduction of tiered copayments, whereby patients are asked to pay more if they receive their care from lower-rated institutions.

Would financial incentives have accelerated improvement? It is too early to tell. The evidence for pay-for-performance programs is limited, and the benefits demonstrated so far have been modest. But in many ways the alternative is worse: our current system of financing and paying for hospital care offers no financial incentives to hospitals for investing in the personnel or systems required to achieve better outcomes—and instead rewards (through supplemental payments) adverse outcomes.

Did prospective patients have a right to know? Despite the limitations of public reporting, one of the most compelling arguments in its favor is that patients at UMass Memorial had the right to know about the program’s outcomes. This alone may ultimately justify the expense and efforts involved. Transparency and accountability are core values of open democratic societies, and US society relies on public reporting in many other realms: the National Highway Traffic Safety Administration publicizes crash test ratings, the Securities and Exchange Commission enforces public reporting by financial institutions, and the Federal Aviation Administration reports on airline safety, timeliness of flights, and lost baggage rates.

FUTURE DIRECTIONS

In the future, we can expect more measurement and reporting of the health care factors that patients care most about, such as clinical outcomes and the patient experience. It is likely that public reporting and pay-for-performance programs will address a broader range of conditions and include a larger number of measures. CMS has outlined plans to increase the number of publicly reported measures to more than 70 by 2010 and more than 100 by 2011. My hope is that this expansion of data, along with improved data synthesis and presentation, will foster greater use of publicly reported data. Further, the continued evolution of the Web and social networking sites is very likely to enhance public awareness of hospital performance and change the ways in which patients use these data.

 

 

DISCUSSION

Question from the audience: I’m concerned about what seems to be a unilateral effort to improve quality. There are many components of health care delivery beyond those you’ve described, including the efforts of patients, insurers, employers, and the government. The reality is that patients don’t plan for illness, insurance companies often deny care, more and more employers are providing less coverage or no coverage, and Medicare is on the road to insolvency. Is the battle for quality winnable when all these other components of delivery are failing?

Dr. Lindenauer: You make good points. But from the standpoint of professionalism, I think we have a compelling duty to constantly strive to improve the quality of care in our hospitals and practices. I have presented strategies for potentially accelerating improvements that providers are trying to make anyway. Public reporting and financial incentives are likely to be with us for a while, and their use is likely to grow. But as you said, they address only part of the problem confronting American health care.

Question from the audience: For the savvy health care consumer, is there one particular Web site for hospital or provider comparisons that you would especially recommend? Do you actually recommend using such Web sites to patients before they undergo certain procedures?

Dr. Lindenauer: I think the Hospital Compare site from the Department of Health and Human Services is the key Web site. The California Hospital Assessment and Reporting Taskforce (CHART) has a good site, and the Commonwealth Fund’s WhyNotTheBest.org is an interesting newcomer. 

However, even the most ardent advocates for public reporting wouldn’t say the information available today is sufficient for making decisions. There’s still an important role for getting recommendations from other doctors who are familiar with local hospitals and providers.

I’m optimistic that the changes that are coming to these Web sites will provide a better user experience and make it harder to ignore the results of public reporting. Today we can say, “Hospital A is better at discharge instructions or smoking cessation counseling.” But we all can appreciate how weak those kinds of measures are because their implementation is subject to local interpretations. Once risk-adjusted outcomes and more-meaningful process measures are available, I’d be surprised if more patients weren’t willing to base their decisions on published comparisons.

References
  1. Kowalczyk L, Smith S. Hospital halts heart surgeries due to deaths: high rate cited at Worcester facility. The Boston Globe. September 22, 2005.
  2. Ettinger WH, Hylka SM, Phillips RA, Harrison LH Jr, Cyr JA, Sussman AJ. When things go wrong: the impact of being a statistical outlier in publicly reported coronary artery bypass graft surgery mortality data. Am J Med Qual 2008; 23:90–95.
  3. Leapfrog hospital quality ratings. The Leapfrog Group Web site. http://www.leapfroggroup.org/cp. Accessed June 10, 2009.
  4. Hospital Compare: a quality tool provided by Medicare. U.S. Department of Health & Human Services Web site. http://www.hospitalcompare.hhs.gov.  Accessed June 10, 2009.
  5. Why Not the Best (Beta): A Health Care Quality Improvement Resource. The Commonwealth Fund. http://www.WhyNotTheBest.org. Accessed May 6, 2009.
  6. Hospital-acquired infections in Pennsylvania. Pennsylvania Health Care Cost Containment Council Web site. http://www.phc4.org.  Accessed April 6, 2009.
  7. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22:84–94.
  8. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148:111–123.
  9. Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol 1998; 32:993–999.
  10. Baker DW, Einstadter D, Thomas C, et al. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Med Care 2003; 41:729–740.
  11. Graylock J. After chest pains, Clinton set to undergo bypass surgery. USA Today. September 3, 2004.
  12. Adult Cardiac Surgery in New York State, 1999–2001. Albany, NY: New York State Department of Health; April 2004. http://www.health.state.ny.us/nysdoh/heart/pdf/1999-2001_cabg.pdf. Accessed June 10, 2009.
  13. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998; 279:1638–1642.
  14. The Henry J. Kaiser Family Foundation. 2008 Update on Consumers’ Views of Patient Safety and Quality Information: Summary & Chartpack; October 2008. http://www.kff.org/kaiserpolls/upload/7819.pdf. Accessed June 10, 2009.
  15. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med 1996; 335:251–256.
  16. Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Aff (Millwood) 2006; 25:844–855.
  17. Remus D. Pay for performance: CMS/Premier Hospital Quality Incentive Demonstration Project—year 1 results, December 2005. PowerPoint presentation available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/results/index.jsp. Accessed June 10, 2009.
  18. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007; 356:486–496.
  19. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA 2007; 297:2373–2380.
  20. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest 2007; 131:1865–1869.
  21. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006; 296:72–78.
  22. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA 2004; 292:847–851.

KEY POINTS

  • Public reporting programs have expanded in recent years, driven by national policy imperatives to improve safety, increased demands for transparency, patient “consumerism,” and the growth of information technology.
  • Hospital-based pay-for-performance programs have had only a minor impact on quality so far, possibly because financial incentives have been small and much of the programs’ potential benefit may be preempted by existing public reporting efforts.
  • These programs have considerable potential to accelerate improvement in quality but are limited by a need for more-nuanced process measures and better risk-adjustment methods.
  • These programs may lead to unintended consequences such as misuse or overuse of measured services, “cherry-picking” of low-risk patients, or misclassification of providers.
  • Continued growth of the Internet and social-networking sites will likely enhance and change the way patients use and share information about the quality of health care.