Hospital quality measurement—Perplexing for professionals, let alone for patients

Why measure hospital quality? One popular premise is that measurement and transparency will inform consumer decision making and drive volume to high-quality programs, providing incentives for improvement and raising the bar nationally. In this issue of the Journal of Hospital Medicine, Halasyamani and Davis report that there is relatively poor correlation between the Hospital Compare scores of the Centers for Medicare and Medicaid Services (CMS) and U.S. News & World Report's Best Hospitals rankings.1 The authors note that this is not necessarily surprising, as the methodologies of these rating systems are quite different, although their purposes are functionally similar.

Clearly, these 2 popular quality evaluation systems reflect different underlying constructs (which may or may not actually describe quality). And therein lies a central dilemma for health care professionals and academics: we haven't agreed among ourselves on reliable and meaningful quality metrics; so how can we, or even should we, expect the public to use available data to make health care decisions?

The 2 constructs in this particular comparison are certainly divergent in design. For the Hospital Compare ratings, the CMS used detailed process-of-care measures, expensively abstracted from the medical record, for just 3 medical conditions: acute myocardial infarction, congestive heart failure, and community-acquired pneumonia. The U.S. News Best Hospitals rankings used reputation (based on a survey of physicians), severity-adjusted mortality rate, staffing ratio, and key technologies offered by hospitals. Halasyamani and Davis conclude that consumers may be left to wonder how to reconcile these discordant rating systems. At the same time, they acknowledge that it is not yet clear whether public reporting will affect consumers' health care choices. Available evidence suggests that when making choices about health care, patients are much more likely to consult family and friends than an Internet site that posts quality information.2 There is as yet no conclusive evidence that quality data drive consumer decision making. Furthermore, patients with acute myocardial infarction rarely have the opportunity to choose a hospital, even when they have access to the data.

The assessment of hospital quality is not only a challenge for patients; it remains perplexing for those of us immersed in health care. The scope of quality measures is both broad and incomplete. At the level of the microsystem and the individual clinical syndrome, we have a plethora of process measures that are evidence based (such as the CMS Hospital Compare measures) but appear to move meaningful outcomes only slightly, if at all. The evidence linking the pneumonia measures, for instance, to significant outcomes such as lower mortality or (rarely studied) better functional outcomes is extremely limited or nonexistent.3, 4

At the other end of the continuum are sweeping metrics such as risk-adjusted in-hospital mortality. Such a measure may be important, yet it has 2 significant limitations. First, mortality rates in acute care are generally so low that mortality is not a useful outcome of interest for most clinical conditions; its utility is really limited to well-studied procedures such as cardiac surgery. Second, a reduction in the mortality rate is extraordinarily difficult to link meaningfully to specific process interventions with the information and tools available. For high-volume, complex medical conditions, such as pneumonia, nonsurgically managed cardiac disease, and oncology, we cannot yet reliably use the in-hospital mortality rate as a descriptor of quality of care because the populations are so diverse and the statistical tools so crude. Public reporting of these data is even more complex because it often lags behind current data by years and may be significantly affected by sample size.
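To make the sample-size problem concrete, here is a minimal sketch (ours, not drawn from any cited study; the 3% mortality rate and the admission counts are hypothetical) of how wide the statistical uncertainty around a single hospital's observed mortality rate can be, using a normal-approximation 95% confidence interval for a binomial proportion:

    import math

    def mortality_ci(rate, n, z=1.96):
        """Approximate 95% confidence interval for an observed mortality
        rate based on n admissions (normal approximation, floored at 0)."""
        half_width = z * math.sqrt(rate * (1 - rate) / n)
        return max(rate - half_width, 0.0), rate + half_width

    for n in (100, 400, 1600):
        lo, hi = mortality_ci(0.03, n)
        print(f"n={n:5d}: observed 3.0% is consistent with {lo:.1%} to {hi:.1%}")

With 100 admissions, an observed 3% mortality rate is statistically compatible with anything from roughly 0% to more than 6%; only at volumes in the thousands does the interval narrow enough to distinguish hospitals, which is why sample size so easily confounds public mortality comparisons.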

Even when we settle on a few well-defined process metrics, we have problems with complete and accurate reporting of data. In Halasyamani and Davis's study, only 2.9% of hospitals reported all 14 Hospital Compare core performance measures used in their analysis.1 Evidence suggests that poor performance is a strong disincentive to voluntary public reporting of quality measures.5 And because there is no evidence that this type of transparency initiative will drive volume to higher-quality programs, publicly reporting quality measures may not give hospitals a strong enough incentive to allocate resources to improving the quality of care they deliver in these specific areas.

The CMS has introduced financial incentives to encourage hospitals to report performance measures, regardless of the level of performance reported; providing financial rewards to top-performing hospitals, or to hospitals that demonstrate improvement, may have a greater impact. The results of early studies suggested that pay-for-performance did improve the quality of health care.6 Lindenauer et al. recently published the results of a large study comparing adherence to quality measures in hospitals that voluntarily reported measures with adherence in hospitals participating in a pay-for-performance demonstration project funded by the CMS. Hospitals engaged in both public reporting and pay-for-performance achieved modestly greater improvements in quality than those engaged in public reporting alone.7 Notably, this demonstration project generally produced modest financial rewards for hospitals that improved their performance.8 The optimal model for rewarding performance remains to be determined.7, 9, 10
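The reward structure of the demonstration, as described in reference 8, is simple enough to state precisely. A minimal sketch (the percentile ranks below are hypothetical, and we assume hospitals are ranked on a composite quality score):

    def bonus_rate(percentile_rank):
        """Incremental payment rate under the decile rule reference 8
        describes: 2% for the top decile, 1% for the second decile."""
        if percentile_rank >= 90:
            return 0.02
        if percentile_rank >= 80:
            return 0.01
        return 0.0

    for rank in (95, 85, 50):
        print(f"percentile {rank}: incremental payment = {bonus_rate(rank):.0%}")

A hospital just below the 80th percentile receives nothing at all, which helps explain why the rewards were modest for all but the very best performers.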

There are a number of potentially harmful unintended consequences of poorly designed quality measures and associated transparency and incentive programs. The most obvious is opportunity cost. As the incentives become more tangible and meaningful, hospital quality leaders will be expected to step up efforts to improve performance in the specific process-of-care measures for which they are rewarded. Without caution, however, hospital quality leaders may develop a narrow focus in deciding where to apply their limited resources and may become distracted from other areas in dire need of improvement. Their boards of directors might appropriately argue that it is their fiduciary responsibility to focus on improving those aspects of quality that the payer community has highlighted as most important. If the metrics are excellent and the underlying constructs are in fact the right ones to advance quality in American acute care, this is a direction to be applauded. If the metrics are flawed and limited, which is the case today, then the risk is that resources will be wasted and diverted from more important priorities.

Even worse, an overly narrow focus may have unintended adverse clinical consequences. Recently, Wachter discussed several real-world examples of unintended consequences of quality improvement efforts, including giving patients multiple doses of pneumococcal vaccines and inappropriately giving antibiotics to patients whose symptoms might indicate community-acquired pneumonia.11 As hospitals attempt to improve their report cards, a significant risk exists that patients will receive excessive or unnecessary care in an attempt to meet specified timeliness goals.

The most important issue that has still not been completely addressed is whether improvements in process-of-care measures will actually improve patient outcomes. In a recent issue of this journal, Seymann concluded that there is strong evidence for influenza vaccination and the use of appropriate antibiotics for community-acquired pneumonia12 but that other pneumonia quality measures are of less obvious clinical benefit. Controversy continues over whether the optimal window for initiating antibiotic treatment of community-acquired pneumonia is 4 hours, as the measure currently specifies, or 8 hours. Patients hospitalized with pneumonia may be motivated to quit smoking, but the CMS requirement for smoking-cessation advice/counseling can be satisfied with a simple pamphlet or a video rather than with interventions that involve counseling by specifically trained professionals and the use of pharmacotherapy, which are more likely to succeed. Although smoking cessation is an admirable goal, whether it occurs will not affect the quality of care a patient with pneumonia receives during the index admission. In fact, it would be more important to counsel all patients about the hazards of smoking in an attempt to prevent pneumonia and acute myocardial infarction, as well as a host of other smoking-related illnesses.

In another example, Fonarow and colleagues examined the association between heart failure clinical outcomes and performance measures in a large observational cohort.13 The study found that current heart failure performance measures, aside from prescribing an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker at discharge, had little relationship to mortality in the first 60-90 days following discharge. On the other hand, the team found that being discharged on a beta blocker was associated with a significant reduction in mortality; however, beta blocker use is not part of the current CMS core measures. In addition, many patients hospitalized for heart failure may benefit from implantable cardioverter-defibrillator therapy and/or cardiac resynchronization therapy,14 yet referral to a cardiologist to evaluate patients who may be suitable for these therapies is not a CMS core measure.

A similar, more comprehensive study recently evaluated whether performance on CMS quality measures for acute myocardial infarction, heart failure, and pneumonia correlated with condition‐specific inpatient, 30‐day, and 1‐year risk‐adjusted mortality rates.15 The study found that the best hospitals, those performing at the 75th percentile on quality measures, did have lower mortality rates than did hospitals performing at the 25th percentile, but the absolute risk reduction was small. Specifically, the absolute risk reduction for 30‐day mortality was 0.6%, 0.1%, and 0.1% for acute myocardial infarction, heart failure, and pneumonia, respectively. In attempting to explain their findings, the authors noted that current quality measures include only a subset of activities involved in the care of hospitalized patients. In addition, mortality rates are likely influenced by factors not included in current quality measures, such as the use of electronic health records, staffing levels, and other activities of quality oversight committees.
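To put those absolute risk reductions in perspective, the reciprocal of an absolute risk reduction approximates the number of patients who would need to be treated at a 75th-percentile rather than a 25th-percentile hospital to avert one 30-day death. A back-of-the-envelope calculation using the reported figures (ours, not the study authors'):

    for condition, arr in (("acute myocardial infarction", 0.006),
                           ("heart failure", 0.001),
                           ("pneumonia", 0.001)):
        print(f"{condition}: ARR = {arr:.1%}, roughly {1 / arr:,.0f} patients per death averted")

Roughly 167 acute myocardial infarction patients, and fully 1,000 heart failure or pneumonia patients, would need top-quartile care to avert a single death: a real difference, but a small one.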

The era of measurement and accountability for providing high-quality health care is upon us. Public reporting may lead to improvement in quality measures, but it is incumbent on the academic and provider communities as well as the payer community to ensure that the metrics are meaningful, reliable, and reproducible and, equally important, that they make a difference in essential clinical outcomes such as mortality, return to function, and avoidance of adverse events.10 Emerging evidence suggests the measures may need to be linked to meaningful financial incentives for providers in order to accelerate change. Incentives directed at patients appear to be ineffective, clumsy, and slow to produce results.16

The time is right to revisit the quality measures currently used for transparency and incentives. We need a tighter, more reliable set of metrics that actually correlate with meaningful outcomes. Some evidence‐based measures appear to be missing from the current leading lists and some remain inadequately defined with regard to compliance. As a system, the measurement program contains poorly understood risks of unintended consequences. Above all else, local and national quality leaders need to be mindful that improving patient outcomes must be the central goal in our efforts to improve performance on process‐of‐care measures.

References
  1. Halasyamani LK, Davis MM. Conflicting measures of hospital quality: ratings from "Hospital Compare" versus "Best Hospitals." J Hosp Med. 2007;2:128-134.
  2. Kaiser Family Foundation and Agency for Healthcare Research and Quality. National Survey on Consumers' Experiences with Patient Safety and Quality Information. Washington, DC: Kaiser Family Foundation; 2004.
  3. Meehan TP, Fine MJ, Krumholz HM, et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA. 1997;278:2080-2084.
  4. Dedier J, Singer DE, Chang Y, Moore M, Atlas SJ. Process of care, illness severity, and outcomes in the management of community-acquired pneumonia at academic hospitals. Arch Intern Med. 2001;161:2099-2104.
  5. McCormick D, Himmelstein DU, Woolhandler S, Wolfe SM, Bor DH. Relationship between low quality-of-care scores and HMOs' subsequent public disclosure of quality-of-care scores. JAMA. 2002;288:1484-1490.
  6. Petersen LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Ann Intern Med. 2006;145:265-272.
  7. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486-496.
  8. The CMS demonstration project methodology provides a 2% incremental payment for the top 10% of hospitals and 1% for the second decile. See CMS press release, available at: http://www.cms.hhs.gov/apps/media/. Accessed January 26, 2007.
  9. Rowe JW. Pay for performance and accountability: related themes in improving health care. Ann Intern Med. 2006;145:695-699.
  10. Institute of Medicine Committee on Redesigning Health Insurance Performance Measures, Payment, and Performance Improvement Programs. Rewarding Provider Performance: Aligning Incentives in Medicare (Pathways to Quality Health Care Series). Washington, DC: National Academies Press; 2007.
  11. Wachter RM. Expected and unanticipated consequences of the quality and information technology revolutions. JAMA. 2006;295:2780-2783.
  12. Seymann GB. Community-acquired pneumonia: defining quality care. J Hosp Med. 2006;1:344-353.
  13. Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61-70.
  14. Hunt SA, Abraham WT, Chin MH, et al. ACC/AHA 2005 guideline update for the diagnosis and management of chronic heart failure in the adult: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2005;112:e154-e235.
  15. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296:2694-2702.
  16. Employee Benefit Research Institute. 2nd Annual EBRI/Commonwealth Fund Consumerism in Health Care Survey, 2006: early experience with high-deductible and consumer-driven health plans. December 2006. Available at: http://www.ebri.org/pdf/briefspdf/EBRI_IB_12‐20061.pdf. Accessed February 23, 2007.
Journal of Hospital Medicine. 2007;2(3):119-122.
Copyright © 2007 Society of Hospital Medicine.
Correspondence: Associate Division Chief for Hospital Medicine, Division of General Internal Medicine, 675 North St. Clair, Suite 18-200, Chicago, IL 60611; Fax: (312) 695-2857.