
Differences in 30-Day Readmission Rates in Older Adults With Dementia


Study 1 Overview (Park et al)

Objective: To compare rates of adverse events and 30-day readmission among patients with dementia who undergo percutaneous coronary intervention (PCI) with those without dementia.

Design: This cohort study used a national database of hospital readmissions developed by the Agency for Healthcare Research and Quality.

Setting and participants: Data from State Inpatient Databases were used to derive this national readmissions database, which represents 80% of hospitals from the 28 states that contribute data. The study included all individuals aged 18 years and older who underwent a PCI procedure in 2017 or 2018. International Classification of Diseases, Tenth Revision (ICD-10) codes were used to identify PCI procedures, including drug-eluting stent placement, bare-metal stent placement, and balloon angioplasty, performed in patients who presented with myocardial infarction, unstable angina, or stable ischemic heart disease. Patients were stratified into those with or without dementia, also defined using ICD-10 codes. A total of 755,406 index hospitalizations were included; 2.3% of the patients had dementia.
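
To make the cohort definition concrete, the following is a minimal Python sketch of claims-based cohort selection of the kind described above. The code prefixes and column names are illustrative placeholders, not the study's actual ICD-10 code lists or data schema.

```python
# Minimal sketch of claims-based cohort selection; code prefixes and column
# names are hypothetical placeholders, not the study's actual code lists.
import pandas as pd

PCI_PREFIXES = ("0270", "0271", "0272", "0273")    # hypothetical PCI procedure prefixes
DEMENTIA_PREFIXES = ("F01", "F02", "F03", "G30")   # common dementia diagnosis families

def has_code(codes, prefixes):
    """True if any ICD-10 code in the record starts with one of the prefixes."""
    return any(code.startswith(prefixes) for code in codes)

def build_cohort(admissions: pd.DataFrame) -> pd.DataFrame:
    """Keep adult PCI hospitalizations and flag dementia from diagnosis codes."""
    pci = admissions[
        (admissions["age"] >= 18)
        & admissions["procedure_codes"].apply(lambda c: has_code(c, PCI_PREFIXES))
    ].copy()
    pci["dementia"] = pci["diagnosis_codes"].apply(lambda c: has_code(c, DEMENTIA_PREFIXES))
    return pci
```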

Main outcome measures: The primary study outcome was 30-day all-cause readmission, with the cause classified as cardiovascular or noncardiovascular. Secondary outcome measures were delirium, in-hospital mortality, cardiac arrest, blood transfusion, acute kidney injury, fall in hospital, length of hospital stay, and other adverse outcomes. Discharge location was also examined. Other covariates included in the analysis were age, sex, comorbidities, hospital characteristics, primary payer, and median income. For the analysis, a propensity score matching algorithm was applied to match patients with and without dementia. Kaplan-Meier curves were used to examine 30-day readmission rates, a Cox proportional hazards model was used to calculate hazard ratios (HR) comparing those with and without dementia, and logistic regression models were used to calculate odds ratios (OR) for the secondary outcomes.
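
The analytic sequence described above (propensity score estimation, 1:1 matching, then a Cox model for 30-day readmission) can be sketched as follows. This is a simplified illustration assuming scikit-learn and lifelines, with matching performed with replacement and without a caliper; it is not the authors' implementation, and the column names are assumptions.

```python
# Simplified sketch of the matching-plus-survival pipeline described above;
# matching is 1:1 nearest-neighbor on the propensity score, with replacement
# and no caliper, which is simpler than a typical published algorithm.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

def match_and_fit(df: pd.DataFrame, covariates: list[str]) -> CoxPHFitter:
    # 1. Propensity score: modeled probability of dementia given covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["dementia"])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    # 2. Match each dementia patient to the nearest non-dementia patient.
    treated = df[df["dementia"] == 1]
    control = df[df["dementia"] == 0]
    _, idx = NearestNeighbors(n_neighbors=1).fit(control[["ps"]]).kneighbors(treated[["ps"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])

    # 3. Cox model for time to readmission, censored at 30 days;
    #    'days_to_readmit' and 'readmitted' are assumed column names.
    cph = CoxPHFitter()
    cph.fit(matched[["days_to_readmit", "readmitted", "dementia"]],
            duration_col="days_to_readmit", event_col="readmitted")
    return cph  # cph.hazard_ratios_["dementia"] gives the HR for dementia
```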

Main results: The average age of those with dementia was 78.8 years vs 64.9 years in those without dementia. Women made up 42.8% of those with dementia and 31.3% of those without dementia. Those with dementia also had higher rates of comorbidities, such as heart failure, renal failure, and depression. After propensity score matching, 17,309 patients with dementia and 17,187 patients without dementia were included, and covariates were balanced between the 2 groups. For the primary outcome, patients with dementia were more likely to be readmitted at 30 days (HR, 1.11; 95% CI, 1.05-1.18; P < .01) compared to those without dementia. Among the secondary outcomes, delirium was significantly more likely to occur in those with dementia (OR, 4.37; 95% CI, 3.69-5.16; P < .01). Patients with dementia were also more likely to die in hospital (OR, 1.15; 95% CI, 1.01-1.30; P = .03), have cardiac arrest (OR, 1.19; 95% CI, 1.01-1.39; P = .04), receive a blood transfusion (OR, 1.17; 95% CI, 1.00-1.36; P = .05), experience acute kidney injury (OR, 1.30; 95% CI, 1.21-1.39; P < .01), and fall in hospital (OR, 2.51; 95% CI, 2.06-3.07; P < .01). Hospital length of stay was longer for those with dementia, with a mean difference of 1.43 days. Regarding discharge location, patients with dementia were more likely to be sent to a skilled nursing facility (30.1% vs 12.2%) and less likely to be discharged home.

Conclusion: Patients with dementia are more likely than those without dementia to experience adverse events after PCI, including delirium, in-hospital mortality, acute kidney injury, and falls, and are more likely to be readmitted to the hospital within 30 days.


Study 2 Overview (Gilmore-Bykovskyi et al)

Objective: To examine the association between race and 30-day readmissions in Black and non-Hispanic White Medicare beneficiaries with dementia.

Design: This was a retrospective cohort study that used 100% Medicare fee-for-service claims data for all hospitalizations between January 1, 2014, and November 30, 2014, among enrollees with a dementia diagnosis. Claims data were linked to patient-, hospital stay–, and hospital-level factors. Patients with dementia were identified using a validated algorithm that requires an inpatient, skilled nursing facility, home health, or Part B institutional or noninstitutional claim with a qualifying diagnostic code during a 3-year period. Persons enrolled in a health maintenance organization plan were excluded.
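
As a rough illustration of how such a claims-based flag operates, the sketch below marks beneficiaries with at least 1 qualifying claim in a 3-year lookback window. The column names are assumptions, and the validated algorithm's actual code lists and claim-type rules are not reproduced here.

```python
# Rough sketch of a claims-based dementia flag with a 3-year lookback, in the
# spirit of the validated algorithm described above; column names are assumed.
import pandas as pd

QUALIFYING_CLAIM_TYPES = {"inpatient", "snf", "home_health", "part_b"}

def flag_dementia(claims: pd.DataFrame, as_of: pd.Timestamp, years: int = 3) -> pd.Series:
    """Return IDs of beneficiaries with >=1 qualifying dementia claim in the window."""
    window_start = as_of - pd.DateOffset(years=years)
    hits = claims[
        claims["claim_type"].isin(QUALIFYING_CLAIM_TYPES)
        & claims["has_dementia_dx"]                      # assumed boolean column
        & claims["claim_date"].between(window_start, as_of)
    ]
    return hits["beneficiary_id"].drop_duplicates()
```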

Main outcome measures: The primary outcome examined in this study was 30-day all-cause readmission. Self-reported race and ethnic identity were baseline covariates. Persons who self-reported Black or non-Hispanic White race were included in the study; other categories of race and ethnicity were excluded because of prior evidence suggesting low accuracy of these categories in Medicare claims data. Other covariates included neighborhood disadvantage, measured using the Area Deprivation Index (ADI), and rurality; hospital-level and hospital stay–level characteristics, such as for-profit status and number of annual discharges; and individual demographic characteristics and comorbidities. The ADI is constructed from measures of poverty, education, housing, and employment and is expressed as a percentile ranking of level of disadvantage. Unadjusted and adjusted analyses of 30-day hospital readmission were conducted. Models with increasing levels of adjustment were constructed to examine the contributions of the identified covariates to the estimated association between 30-day readmission and race.
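
The sequential-adjustment strategy can be illustrated with a short sketch using the statsmodels formula interface. The covariate names and blocks below are stand-ins for the study's actual adjustment sets, and `stays` is an assumed DataFrame with one row per index hospital stay.

```python
# Illustrative nested logistic regression models with increasing adjustment;
# all variable names are stand-ins for the study's actual covariate blocks.
import numpy as np
import statsmodels.formula.api as smf

def run_nested_models(stays):
    formula = "readmit_30d ~ black"                   # model 1: race only (unadjusted)
    blocks = [
        " + adi_percentile + rural",                  # model 2: + neighborhood factors
        " + for_profit + annual_discharges",          # model 3: + hospital/stay factors
        " + age + female + n_comorbidities",          # model 4: + individual factors
    ]
    for extra in [""] + blocks:
        formula += extra
        fit = smf.logit(formula, data=stays).fit(disp=0)
        or_black = np.exp(fit.params["black"])
        lo, hi = np.exp(fit.conf_int().loc["black"])
        # Tracking how the OR for Black race moves across models shows how much
        # of the unadjusted disparity each covariate block accounts for.
        print(f"OR = {or_black:.2f} (95% CI {lo:.2f}-{hi:.2f}) for: {formula}")
```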

Main results: A total of 1,523,142 index hospital stays among 945,481 beneficiaries were included; 215,815 episodes were among Black beneficiaries and 1,307,327 episodes were among non-Hispanic White beneficiaries. Mean age was 81.5 years, and approximately 61% of beneficiaries were female. Black beneficiaries were younger but had higher rates of dual Medicare/Medicaid eligibility and disability; they were also more likely to reside in disadvantaged neighborhoods. Black beneficiaries had a 30-day readmission rate of 24.1% compared with 18.5% in non-Hispanic White beneficiaries (unadjusted OR, 1.37; 95% CI, 1.35-1.39). The difference in outcomes was attenuated but persisted after adjusting for geographic factors, social factors, hospital characteristics, hospital stay factors, demographics, and comorbidities, suggesting that unmeasured factors underlying racial disparities contributed to the remaining difference. The effects of certain variables, such as neighborhood, differed by race; for example, the protective effect of living in a less disadvantaged neighborhood was observed among White beneficiaries but not Black beneficiaries.
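
As a quick arithmetic check, converting the reported readmission rates to odds approximately reproduces the published unadjusted odds ratio; the small discrepancy reflects rounding of the reported rates.

```python
# Back-of-the-envelope check of the unadjusted OR from the reported rates.
p_black, p_white = 0.241, 0.185
odds_black = p_black / (1 - p_black)      # ~0.318
odds_white = p_white / (1 - p_white)      # ~0.227
print(round(odds_black / odds_white, 2))  # ~1.40 vs the reported 1.37
```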

Conclusion: Racial and geographic disparities in 30-day readmission rates were observed among Medicare beneficiaries with dementia. Protective effects associated with neighborhood advantage may confer different levels of benefit for people of different races.


Commentary

Adults living with dementia are at higher risk of adverse outcomes across care settings. In the first study, by Park et al, adults with dementia who underwent a cardiac procedure (PCI) were more likely than those without dementia to experience adverse events, including higher rates of 30-day readmission, delirium, cardiac arrest, and falls. These findings are consistent with studies that found a similar association among patients undergoing other cardiac procedures, such as transcatheter aortic valve replacement.1 Because dementia is a strong predisposing factor for delirium, it is not surprising that delirium occurred more often across different procedures and hospitalization episodes.2 Because of these hazards for inpatients with dementia, hospitals have developed risk-reduction programs, such as those that promote recognition of dementia and management strategies that reduce the risk of delirium.3 Delirium prevention may also affect other adverse outcomes, such as falls, discharge to institutional care, and readmissions.

Racial disparities in care outcomes have been documented across settings, including hospital4 and hospice care settings.5 In study 2, by Gilmore-Bykovskyi et al, the finding of higher rates of hospital readmission among Black patients compared with non-Hispanic White patients was not surprising. The central finding of this study is that even after accounting for hospital-level, hospital stay–level, individual (demographics, comorbidities), and neighborhood (disadvantage) factors, the observed disparity diminished but persisted, indicating that while these factors contributed to the disparity, other unmeasured factors also contributed. Another key finding is that the examined factors may affect subgroups differently, suggesting that the underlying drivers, and thus potential solutions to reduce disparities in care outcomes, could differ among subgroups.

Applications for Clinical Practice and System Implementation

These 2 studies add to the literature on factors that can affect 30-day hospital readmission rates in patients with dementia. These data could allow for more robust discussions of what to anticipate when adults with dementia undergo specific procedures, and also further build the case that improvements in care, such as delirium prevention programs, could offer benefits. The observation about racial and ethnic disparities in care outcomes among patients with dementia highlights the continued need to better understand the drivers of these disparities so that hospital systems and policy makers can consider and test possible solutions. Future studies should further disentangle the relationships among the various levels of factors and observed disparities in outcomes, especially for this vulnerable population of adults living with dementia.

Practice Points

  • Clinicians should be aware of the additional risks for poor outcomes that dementia confers.
  • Awareness of this increased risk will inform discussions of risks and benefits for older adults considered for procedures.

–William W. Hung, MD, MPH

References

1. Park DY, Sana MK, Shoura S, et al. Readmission and in-hospital outcomes after transcatheter aortic valve replacement in patients with dementia. Cardiovasc Revasc Med. 2023;46:70-77. doi:10.1016/j.carrev.2022.08.016

2. McNicoll L, Pisani MA, Zhang Y, et al. Delirium in the intensive care unit: occurrence and clinical course in older patients. J Am Geriatr Soc. 2003;51(5):591-598. doi:10.1034/j.1600-0579.2003.00201.x

3. Weldingh NM, Mellingsæter MR, Hegna BW, et al. Impact of a dementia-friendly program on detection and management of patients with cognitive impairment and delirium in acute-care hospital units: a controlled clinical trial design. BMC Geriatr. 2022;22(1):266. doi:10.1186/s12877-022-02949-0

4. Hermosura AH, Noonan CJ, Fyfe-Johnson AL, et al. Hospital disparities between native Hawaiian and other pacific islanders and non-Hispanic whites with Alzheimer’s disease and related dementias. J Aging Health. 2020;32(10):1579-1590. doi:10.1177/0898264320945177

5. Zhang Y, Shao H, Zhang M, Li J. Healthcare utilization and mortality after hospice live discharge among Medicare patients with and without Alzheimer’s disease and related dementias. J Gen Intern Med. 2023 Jan 17. doi:10.1007/s11606-023-08031-8



Patient Safety in Transitions of Care: Addressing Discharge Communication Gaps and the Potential of the Teach-Back Method


Study 1 Overview (Trivedi et al)

Objective: This observational quality improvement study aimed to evaluate discharge communication practices in internal medicine services at 2 urban academic teaching hospitals, focusing on patient education and counseling in 6 key discharge communication domains.

Design: Observations were conducted over a 13-month period from September 2018 through October 2019, following the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines.

Setting and participants: The study involved a total of 33 English- and Spanish-speaking patients purposefully selected from the “discharge before noon” list at 2 urban tertiary-care teaching hospitals. A total of 155 observation hours were accumulated, with an average observation time of 4.7 hours per patient on the day of discharge.

Main outcome measures: The study assessed 6 discharge communication domains: (1) the name and function of medication changes, (2) the purpose of postdischarge appointments, (3) disease self-management, (4) red flags or warning signs for complications, (5) teach-back techniques to confirm patient understanding, and (6) staff solicitation of patient questions or concerns.

Main results: The study found several gaps in discharge communication practices. Among the 29 patients with medication changes, 28% were not informed about the name and basic function of the changes, and 59% did not receive counseling on the purpose of the medication change. Regarding postdischarge appointments, 48% of patients were not told the purpose of these appointments. Moreover, 54% of patients did not receive counseling on self-management of their primary discharge diagnosis or other diagnoses, and 73% were not informed about symptom expectations or the expected course of their illness after leaving the hospital. Most patients (82%) were not counseled on red-flag signs and symptoms that should prompt immediate return to care.

Teach-back techniques, which are critical for ensuring patient understanding, were used in only 3% of cases, and 85% of patients were not asked by health care providers whether there might be barriers to following the care plan. Fewer than half (42%) of the patients were asked if they had any questions, and most questions asked were logistical, often deferred to another team member or met with uncertainty. Of note, only 2 of the 33 patients received extensive information covering 5 or 6 of the 6 discharge communication domains.

Responsibility for communicating the various aspects of discharge education varied, with most domains addressed in an ad hoc manner and no clear pattern of ownership. Two exceptions were observed: nurses were more likely to provide information about new or changed medications and follow-up appointments, and the only observed instance of teach-back was conducted by an attending physician.

Conclusion: The study highlights a significant need for improved discharge techniques to enhance patient safety and quality of care upon leaving the hospital. Interventions should focus on increasing transparency in patient education and understanding, clarifying assumptions of roles among the interprofessional team, and implementing effective communication strategies and system redesigns that foster patient-centered discharge education. Also, the study revealed that some patients received more robust discharge education than others, indicating systemic inequality in the patient experience. Further studies are needed to explore the development and assessment of such interventions to ensure optimal patient outcomes and equal care following hospital discharge.


Study 2 Overview (Marks et al)

Objective: This study aimed to investigate the impact of a nurse-led discharge medication education program, Teaching Important Medication Effects (TIME), on patients’ new medication knowledge at discharge and 48 to 72 hours post discharge. The specific objectives were to identify patients’ priority learning needs, evaluate the influence of TIME on patients’ new medication knowledge before and after discharge, and assess the effect of TIME on patients’ experience and satisfaction with medication education.

Design: The study employed a longitudinal pretest/post-test, 2-group design involving 107 randomly selected medical-surgical patients from an academic hospital. Participants were interviewed before discharge, after receiving medication instructions, and again within 72 hours after discharge. Bivariate analyses were performed to assess differences in demographic and outcome variables between groups.

Setting and participants: Conducted on a 24-bed medical-surgical unit at a large Magnet® hospital over 18 months (2018-2019), the study included patients with at least 1 new medication who were aged 18 years or older, able to read and speak English or Spanish, admitted from home with a minimum of 1 overnight stay, and planning to return home after discharge. Excluded were cognitively impaired patients, those assigned to a resource pool nurse without TIME training, and those with a research team member assigned. Participants were randomly selected from a computerized list of patients scheduled for discharge.

Main outcome measures: Primary outcome measures included patients’ new medication knowledge before and after discharge and patients’ experience and satisfaction with medication education.

Main results: The usual-care (n = 52) and TIME (n = 55) groups had similar baseline demographic characteristics. Almost all patients in both groups were aware of their new medication and its purpose at discharge. However, differences were observed in knowledge of medication side effects, with 72.5% of the usual-care group knowing side effects compared to 94.3% of the TIME group (P = .003). Additionally, 81.5% of the usual-care group understood the medication purpose compared to 100% of the TIME group (P = .02). During the 48- to 72-hour postdischarge calls, responses from both groups were consistent with their discharge responses regarding knowledge of the new medication, its name, and its purpose. As at discharge, differences in side effect knowledge were observed, with 75.8% of the usual-care group correctly identifying at least 1 medication side effect compared to 93.9% of the TIME group (P = .04). TIME was associated with higher satisfaction with medication education compared to usual care (97% vs 46.9%, P < .001).
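
To illustrate the kind of bivariate test behind these group comparisons, the sketch below applies a two-sided two-proportion z-test to counts reconstructed approximately from the reported percentages and group sizes; the resulting P value is illustrative, not an exact reproduction of the authors' analysis.

```python
# Two-proportion z-test on counts reconstructed approximately from the
# reported percentages and group sizes; the P value is illustrative only.
from statsmodels.stats.proportion import proportions_ztest

count = [37, 50]   # patients who knew side effects at discharge (usual care, TIME)
nobs = [51, 53]    # assumed evaluable patients per group (37/51 ~ 72.5%, 50/53 ~ 94.3%)
stat, p = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, P = {p:.3f}")  # on the order of the reported P = .003
```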

Conclusion: The nurse-led discharge medication education program TIME effectively enhanced patients’ new medication knowledge at discharge and 48 to 72 hours after discharge. The program also significantly improved patients’ experience and satisfaction with medication education. These findings indicate that TIME is a valuable tool for augmenting patient education and medication adherence in a hospital setting. By incorporating the teach-back method, TIME offers a structured approach to educating patients about their medications at hospital discharge, leading to improved care transitions.


Commentary

Suboptimal communication between patients, caregivers, and providers upon hospital discharge is a major contributor to patients’ inadequate understanding of postdischarge care plans. This inadequate understanding leads to preventable harms, such as medication errors, adverse events, emergency room visits, and costly hospital readmissions.1 The issue is further exacerbated by a lack of clarity about health care team members’ respective roles in providing information that optimizes care transitions during the discharge communication process. Moreover, low health literacy, which is particularly prevalent among seniors, those from disadvantaged backgrounds, and those with lower educational attainment or chronic illnesses, creates additional barriers to effective discharge communication. A potential solution to this problem is the adoption of effective teaching strategies, specifically the teach-back method. This method employs techniques that ensure patients’ understanding and recall of new information regardless of health literacy, and places accountability on clinicians rather than patients. By closing communication gaps between clinicians and patients, the teach-back method can reduce hospital readmissions, hospital-acquired conditions, and mortality rates, while improving patient satisfaction with health care instructions and the overall hospital experience.2

Study 1, by Trivedi et al, and study 2, by Marks et al, aimed to identify and address problems related to poor communication between patients and health care team members at hospital discharge. Specifically, study 1 examined routine discharge communication practices to determine communication gaps, while study 2 evaluated a nurse-led teach-back intervention program designed to improve patients’ medication knowledge and satisfaction. These distinct objectives and designs reflected the unique ways each study approached the challenges associated with care transitions at the time of hospital discharge.

Study 1 used direct observation of patient-practitioner interactions to evaluate routine discharge communication practices in internal medicine services at 2 urban academic teaching hospitals. In the 33 patients observed, significant gaps in discharge communication practices were identified in the domains of medication changes, postdischarge appointments, disease self-management, and red flags or warning signs. Unsurprisingly, most of these domains were communicated in an ad hoc manner by members of the health care team, with no clear pattern of responsibility for patient discharge education, and teach-back was seldom used. These findings underscore the need for improved discharge techniques, effective communication strategies, and clarification of roles among the interprofessional team to enhance the safety, quality of care, and overall patient experience during hospital discharge.

Study 2 aimed to augment the hospital discharge communication process by implementing a nurse-led discharge medication education program (TIME), which targeted patients’ priority learning needs, new medication knowledge, and satisfaction with medication education. In the 107 patients assessed, this teach-back method enhanced patients’ new medication knowledge at discharge and 48 to 72 hours after discharge, as well as improved patients’ experience and satisfaction with medication education. These results suggest that a teach-back method such as the TIME program could be a solution to care transition problems identified in the Trivedi et al study by providing a structured approach to patient education and enhancing communication practices during the hospital discharge process. Thus, by implementing the TIME program, hospitals may improve patient outcomes, safety, and overall quality of care upon leaving the hospital.

Applications for Clinical Practice and System Implementation

Care transition at the time of hospital discharge is a particularly pivotal period in the care of vulnerable individuals. There is growing literature, including studies discussed in this review, to indicate that by focusing on improving patient-practitioner communication during the discharge process and using strategies such as the teach-back method, health care professionals can better prepare patients for self-management in the post-acute period and help them make informed decisions about their care. This emphasis on care-transition communication strategies may lead to a reduction in medication errors, adverse events, and hospital readmissions, ultimately improving patient outcomes and satisfaction. Barriers to system implementation of such strategies may include competing demands and responsibilities of busy practitioners as well as the inherent complexities associated with hospital discharge. Creative solutions, such as the utilization of telehealth and early transition-of-care visits, represent some potential approaches to counter these barriers.

While both studies illustrated barriers to and facilitators of hospital discharge communication, each had limitations that affect the generalizability of its findings to real-world clinical practice. Limitations of study 1 included a small sample size, a purposive sampling method, and a focus on planned discharges in a teaching hospital, which may introduce selection bias. The study’s findings may not be generalizable to unplanned discharges, patients who do not speak English or Spanish, or nonteaching hospitals. Additionally, the data were collected before the COVID-19 pandemic, which could have further impacted discharge education practices. The study also revealed that some patients received more robust discharge education than others, indicating systemic inequality in the patient experience; further research is required to address this discrepancy. Limitations of study 2 included a relatively small and homogeneous sample, with most participants being younger, non-Hispanic White, English-speaking, and well-educated. This lack of diversity may limit the generalizability of the findings. Furthermore, the study did not evaluate patients’ knowledge of medication dosage and focused only on new medications. Future studies should examine the effect of teach-back on a broader range of self-management topics in preparation for discharge, while also including a more diverse population to account for factors related to social determinants of health. Taken together, further research is needed to address these limitations and ensure more generalizable results that can more broadly improve discharge education and care transitions that bridge acute and post-acute care.

Practice Points

  • There is a significant need for improved discharge strategies to enhance patient safety and quality of care upon leaving the hospital.
  • Teach-back method may offer a structured approach to educating patients about their medications at hospital discharge and improve care transitions.

–Yuka Shichijo, MD, and Fred Ko, MD, Mount Sinai Beth Israel Hospital, New York, NY

References

1. Snow V, Beck D, Budnitz T, Miller DC, Potter J, Wears RL, Weiss KB, Williams MV; American College of Physicians; Society of General Internal Medicine; Society of Hospital Medicine; American Geriatrics Society; American College of Emergency Physicians; Society of Academic Emergency Medicine. Transitions of care consensus policy statement American College of Physicians-Society of General Internal Medicine-Society of Hospital Medicine-American Geriatrics Society-American College of Emergency Physicians-Society of Academic Emergency Medicine. J Gen Intern Med. 2009;24(8):971-976. doi:10.1007/s11606-009-0969-x

2. Yen PH, Leasure AR. Use and effectiveness of the teach-back method in patient education and health outcomes. Fed Pract. 2019;36(6):284-289.

Article PDF
Issue
Journal of Clinical Outcomes Management - 30(3)
Publications
Topics
Page Number
58-61
Sections
Article PDF
Article PDF

Study 1 Overview (Trivedi et al)

Objective: This observational quality improvement study aimed to evaluate the discharge communication practices in internal medicine services at 2 urban academic teaching hospitals, specifically focusing on patient education and counseling in 6 key discharge communication domains.

Design: Observations were conducted over a 13-month period from September 2018 through October 2019, following the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines.

Setting and participants: The study involved a total of 33 English- and Spanish-speaking patients purposefully selected from the “discharge before noon” list at 2 urban tertiary-care teaching hospitals. A total of 155 observation hours were accumulated, with an average observation time of 4.7 hours per patient on the day of discharge.

Main outcome measures: The study assessed 6 discharge communication domains: (1) the name and function of medication changes, (2) the purpose of postdischarge appointments, (3) disease self-management, (4) red flags or warning signs for complications, (5) teach-back techniques to confirm patient understanding, and (6) staff solicitation of patient questions or concerns.

Main results: The study found several gaps in discharge communication practices. Among the 29 patients with medication changes, 28% were not informed about the name and basic function of the changes, while 59% did not receive counseling on the purpose for the medication change. In terms of postdischarge appointments, 48% of patients were not told the purpose of these appointments. Moreover, 54% of patients did not receive counseling on self-management of their primary discharge diagnosis or other diagnoses, and 73% were not informed about symptom expectations or the expected course of their illness after leaving the hospital. Most patients (82%) were not counseled on red-flag signs and symptoms that should prompt immediate return to care.

Teach-back techniques, which are critical for ensuring patient understanding, were used in only 3% of cases, and 85% of patients were not asked by health care providers if there might be barriers to following the care plan. Less than half (42%) of the patients were asked if they had any questions, with most questions being logistical and often deferred to another team member or met with uncertainty. Of note, among the 33 patients, only 2 patients received extensive information that covered 5 or 6 out of 6 discharge communication domains.

The study found variable roles in who communicated what aspects of discharge education, with most domains being communicated in an ad hoc manner and no clear pattern of responsibility. However, 2 exceptions were observed: nurses were more likely to provide information about new or changed medications and follow-up appointments, and the only example of teach-back was conducted by an attending physician.

Conclusion: The study highlights a significant need for improved discharge techniques to enhance patient safety and quality of care upon leaving the hospital. Interventions should focus on increasing transparency in patient education and understanding, clarifying assumptions of roles among the interprofessional team, and implementing effective communication strategies and system redesigns that foster patient-centered discharge education. Also, the study revealed that some patients received more robust discharge education than others, indicating systemic inequality in the patient experience. Further studies are needed to explore the development and assessment of such interventions to ensure optimal patient outcomes and equal care following hospital discharge.

 

 

Study 2 Overview (Marks et al)

Objective: This study aimed to investigate the impact of a nurse-led discharge medication education program, Teaching Important Medication Effects (TIME), on patients’ new medication knowledge at discharge and 48 to 72 hours post discharge. The specific objectives were to identify patients’ priority learning needs, evaluate the influence of TIME on patients’ new medication knowledge before and after discharge, and assess the effect of TIME on patients’ experience and satisfaction with medication education.

Design: The study employed a longitudinal pretest/post-test, 2-group design involving 107 randomly selected medical-surgical patients from an academic hospital. Participants were interviewed before and within 72 hours after discharge following administration of medication instructions. Bivariate analyses were performed to assess demographic and outcome variable differences between groups.

Setting and participants: Conducted on a 24-bed medical-surgical unit at a large Magnet® hospital over 18 months (2018-2019), the study included patients with at least 1 new medication, aged 18 years or older, able to read and speak English or Spanish, admitted from home with a minimum 1 overnight stay, and planning to return home post discharge. Excluded were cognitively impaired patients, those assigned to a resource pool nurse without TIME training, and those having a research team member assigned. Participants were randomly selected from a computerized list of patients scheduled for discharge.

Main outcome measures: Primary outcome measures included patients’ new medication knowledge before and after discharge and patients’ experience and satisfaction with medication education.

Main results: The usual-care (n = 52) and TIME (n = 55) groups had similar baseline demographic characteristics. Almost all patients in both groups were aware of their new medication and its purpose at discharge. However, differences were observed in knowledge of side effects: 72.5% of the usual-care group knew the medication’s side effects compared with 94.3% of the TIME group (P = .003). Additionally, 81.5% of the usual-care group understood the medication’s purpose compared with 100% of the TIME group (P = .02). During the 48- to 72-hour postdischarge calls, responses from the 2 groups were consistent regarding knowledge of having a new medication, its name, and its purpose. As at discharge, a difference was observed in side effect knowledge: 75.8% of the usual-care group correctly identified at least 1 medication side effect compared with 93.9% of the TIME group (P = .04). TIME was also associated with higher satisfaction with medication education than usual care (97% vs 46.9%, P < .001).
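
The bivariate comparisons reported above are consistent with a standard 2-proportion z-test. The sketch below is a minimal illustration only: the integer counts passed to the function are hypothetical reconstructions from the reported percentages (eg, 31/33 ≈ 93.9% and 25/33 ≈ 75.8%), not raw data from the study.

```python
from math import erf, sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z statistic, P value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail area
    return z, p_two_sided

# Hypothetical counts matching the reported postdischarge side effect
# knowledge (93.9% TIME vs 75.8% usual care); these are not study data.
z, p = two_proportion_z(31, 33, 25, 33)
print(f"z = {z:.2f}, P = {p:.2f}")   # -> z = 2.06, P = 0.04
```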

Conclusion: The nurse-led discharge medication education program TIME effectively enhanced patients’ new medication knowledge at discharge and 48 to 72 hours after discharge. The program also significantly improved patients’ experience and satisfaction with medication education. These findings indicate that TIME is a valuable tool for augmenting patient education and medication adherence in a hospital setting. By incorporating the teach-back method, TIME offers a structured approach to educating patients about their medications at hospital discharge, leading to improved care transitions.

Commentary

Suboptimal communication among patients, caregivers, and providers upon hospital discharge is a major contributor to patients’ inadequate understanding of postdischarge care plans. This inadequate understanding leads to preventable harms, such as medication errors, adverse events, emergency room visits, and costly hospital readmissions.1 The issue is further exacerbated by a lack of clarity about health care team members’ respective roles in providing information that optimizes care transitions during the discharge communication process. Moreover, low health literacy, which is particularly prevalent among seniors, those from disadvantaged backgrounds, and those with lower educational attainment or chronic illnesses, creates additional barriers to effective discharge communication. A potential solution to this problem is the adoption of effective teaching strategies, specifically the teach-back method. This method employs techniques that ensure patients’ understanding and recall of new information regardless of health literacy, and it places accountability on clinicians rather than patients. By closing communication gaps between clinicians and patients, the teach-back method can reduce hospital readmissions, hospital-acquired conditions, and mortality rates, while improving patient satisfaction with health care instructions and the overall hospital experience.2

Study 1, by Trivedi et al, and study 2, by Marks et al, aimed to identify and address problems related to poor communication between patients and health care team members at hospital discharge. Specifically, study 1 examined routine discharge communication practices to determine communication gaps, while study 2 evaluated a nurse-led teach-back intervention program designed to improve patients’ medication knowledge and satisfaction. These distinct objectives and designs reflected the unique ways each study approached the challenges associated with care transitions at the time of hospital discharge.

Study 1 used direct observation of patient-practitioner interactions to evaluate routine discharge communication practices in internal medicine services at 2 urban academic teaching hospitals. In the 33 patients observed, significant gaps in discharge communication practices were identified in the domains of medication changes, postdischarge appointments, disease self-management, and red flags or warning signs. Unsurprisingly, most of these domains were communicated in an ad hoc manner by members of the health care team without a clear pattern of responsibility in reference to patient discharge education, and teach-back was seldom used. These findings underscore the need for improved discharge techniques, effective communication strategies, and clarification of roles among the interprofessional team to enhance the safety, quality of care, and overall patient experience during hospital discharge.

Study 2 aimed to augment the hospital discharge communication process by implementing a nurse-led discharge medication education program (TIME), which targeted patients’ priority learning needs, new medication knowledge, and satisfaction with medication education. In the 107 patients assessed, this teach-back method enhanced patients’ new medication knowledge at discharge and 48 to 72 hours after discharge, as well as improved patients’ experience and satisfaction with medication education. These results suggest that a teach-back method such as the TIME program could be a solution to care transition problems identified in the Trivedi et al study by providing a structured approach to patient education and enhancing communication practices during the hospital discharge process. Thus, by implementing the TIME program, hospitals may improve patient outcomes, safety, and overall quality of care upon leaving the hospital.

Applications for Clinical Practice and System Implementation

Care transition at the time of hospital discharge is a particularly pivotal period in the care of vulnerable individuals. A growing body of literature, including the studies discussed in this review, indicates that by focusing on improving patient-practitioner communication during the discharge process and using strategies such as the teach-back method, health care professionals can better prepare patients for self-management in the post-acute period and help them make informed decisions about their care. This emphasis on care-transition communication may reduce medication errors, adverse events, and hospital readmissions, ultimately improving patient outcomes and satisfaction. Barriers to system-wide implementation of such strategies include the competing demands on busy practitioners as well as the inherent complexities of hospital discharge. Creative solutions, such as telehealth and early transition-of-care visits, are potential approaches to counter these barriers.

While both studies illustrated barriers and facilitators of hospital discharge communication, each had limitations that limit the generalizability of its findings to real-world clinical practice. Limitations of study 1 included a small sample size, a purposive sampling method, and a focus on planned discharges at a teaching hospital, all of which may introduce selection bias. The findings may not be generalizable to unplanned discharges, patients who do not speak English or Spanish, or nonteaching hospitals. Additionally, the data were collected before the COVID-19 pandemic, and discharge education practices may have changed since then. The study also revealed that some patients received more robust discharge education than others, indicating systemic inequality in the patient experience; further research is required to address this discrepancy. Limitations of study 2 included a relatively small and homogeneous sample, with most participants being younger, non-Hispanic White, English-speaking, and well educated. This lack of diversity may limit the generalizability of the findings. Furthermore, the study did not evaluate patients’ knowledge of medication dosage and focused only on new medications. Future studies should examine the effect of teach-back on a broader range of self-management topics in preparation for discharge, while also including more diverse populations to account for factors related to social determinants of health. Taken together, further research is needed to address these limitations and ensure more generalizable results that can more broadly improve discharge education and care transitions bridging acute and post-acute care.

Practice Points

  • There is a significant need for improved discharge strategies to enhance patient safety and quality of care upon leaving the hospital.
  • The teach-back method may offer a structured approach to educating patients about their medications at hospital discharge and may improve care transitions.

–Yuka Shichijo, MD, and Fred Ko, MD, Mount Sinai Beth Israel Hospital, New York, NY

References

1. Snow V, Beck D, Budnitz T, Miller DC, Potter J, Wears RL, Weiss KB, Williams MV; American College of Physicians; Society of General Internal Medicine; Society of Hospital Medicine; American Geriatrics Society; American College of Emergency Physicians; Society of Academic Emergency Medicine. Transitions of care consensus policy statement: American College of Physicians-Society of General Internal Medicine-Society of Hospital Medicine-American Geriatrics Society-American College of Emergency Physicians-Society of Academic Emergency Medicine. J Gen Intern Med. 2009;24(8):971-976. doi:10.1007/s11606-009-0969-x

2. Yen PH, Leasure AR. Use and effectiveness of the teach-back method in patient education and health outcomes. Fed Pract. 2019;36(6):284-289.


The Shifting Landscape of Thrombolytic Therapy for Acute Ischemic Stroke


Study 1 Overview (Menon et al)

Objective: To determine whether a 0.25 mg/kg dose of intravenous tenecteplase is noninferior to intravenous alteplase 0.9 mg/kg for patients with acute ischemic stroke eligible for thrombolytic therapy.

Design: Multicenter, parallel-group, open-label randomized controlled trial.

Setting and participants: The trial was conducted at 22 primary and comprehensive stroke centers across Canada. A primary stroke center was defined as a hospital capable of offering intravenous thrombolysis to patients with acute ischemic stroke, while a comprehensive stroke center could additionally offer thrombectomy services. The participating centers also contributed to Canadian quality improvement registries (either Quality Improvement and Clinical Research [QuICR] or Optimizing Patient Treatment in Major Ischemic Stroke with EVT [OPTIMISE]) that track patient outcomes. Patients were eligible for inclusion if they were aged 18 years or older, had a diagnosis of acute ischemic stroke, presented within 4.5 hours of symptom onset, and were eligible for thrombolysis according to Canadian guidelines.

Patients were randomized in a 1:1 fashion to either intravenous tenecteplase (0.25 mg/kg single dose, maximum of 25 mg) or intravenous alteplase (0.9 mg/kg total dose to a maximum of 90 mg, delivered as a bolus followed by a continuous infusion). A total of 1600 patients were enrolled, with 816 randomly assigned to the tenecteplase arm and 784 to the alteplase arm; 1577 patients were included in the intention-to-treat (ITT) analysis (n = 806 tenecteplase; n = 771 alteplase). The median age of enrollees was 74 years, and 52.1% of the ITT population were men.

Main outcome measures: In the ITT population, the primary outcome measure was a modified Rankin scale (mRS) score of 0 or 1 at 90 to 120 days post treatment. Safety outcomes included symptomatic intracerebral hemorrhage, orolingual angioedema, extracranial bleeding requiring blood transfusion (all within 24 hours of thrombolytic administration), and all-cause mortality at 90 days. Noninferiority of intravenous tenecteplase was declared if the lower bound of the 95% CI for the between-group difference in the proportion of patients meeting the primary outcome exceeded –5%.

Main results: The primary outcome of an mRS score of 0 or 1 at 90 to 120 days post treatment occurred in 296 (36.9%) of the 802 patients assigned to tenecteplase and 266 (34.8%) of the 765 patients assigned to alteplase (unadjusted risk difference, 2.1%; 95% CI, –2.6 to 6.9). The prespecified noninferiority threshold was met. There were no significant differences between the groups in rates of intracerebral hemorrhage at 24 hours or 90-day all-cause mortality.
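
As a quick arithmetic check, the unadjusted risk difference and its 95% CI can be recomputed from the counts reported above and compared against the –5% noninferiority margin. This is a minimal sketch using a Wald normal approximation; the trial’s own analysis may have differed in detail.

```python
from math import sqrt

# AcT ITT counts for mRS 0-1 at 90-120 days, as reported above
x_tnk, n_tnk = 296, 802   # tenecteplase
x_alt, n_alt = 266, 765   # alteplase
MARGIN = -0.05            # prespecified noninferiority margin

p_tnk, p_alt = x_tnk / n_tnk, x_alt / n_alt
diff = p_tnk - p_alt
se = sqrt(p_tnk * (1 - p_tnk) / n_tnk + p_alt * (1 - p_alt) / n_alt)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"risk difference {diff:+.1%} (95% CI, {lower:+.1%} to {upper:+.1%})")
print("noninferior" if lower > MARGIN else "noninferiority not shown")
# -> risk difference +2.1% (95% CI, -2.6% to +6.9%); noninferior
```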

Conclusion: Intravenous tenecteplase is a reasonable alternative to alteplase for patients eligible for thrombolytic therapy.

Study 2 Overview (Wang et al)

Objective: To determine whether tenecteplase (dose 0.25 mg/kg) is noninferior to alteplase in patients with acute ischemic stroke who are within 4.5 hours of symptom onset and eligible for thrombolytic therapy but either refused or were ineligible for endovascular thrombectomy.

Design: Multicenter, prospective, open-label, randomized, controlled noninferiority trial.

Setting and participants: This trial was conducted at 53 centers across China and included patients 18 years of age or older who were within 4.5 hours of symptom onset, were eligible for thrombolysis, had an mRS score ≤ 1 at enrollment, and had a National Institutes of Health Stroke Scale score between 5 and 25. Eligible participants were randomized 1:1 to either tenecteplase 0.25 mg/kg (maximum dose 25 mg) or alteplase 0.9 mg/kg (maximum dose 90 mg, administered as a bolus followed by infusion). During the enrollment period (June 12, 2021, to May 29, 2022), a total of 1430 participants were enrolled; of those, 716 were randomly assigned to tenecteplase and 714 to alteplase. Six patients assigned to tenecteplase and 7 assigned to alteplase did not receive the study drug. At 90 days, 5 patients in the tenecteplase group and 11 in the alteplase group were lost to follow-up.

Main outcome measures: The primary efficacy outcome was an mRS score of 0 or 1 at 90 days. The primary safety outcome was intracranial hemorrhage within 36 hours. Additional safety outcomes included parenchymal hematoma type 2, as defined by the European Cooperative Acute Stroke Study III; any intracranial or significant hemorrhage, as defined by the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries criteria; and death from all causes at 90 days. Noninferiority of tenecteplase was declared if the lower boundary of the 1-sided 97.5% CI for the relative risk (RR) of the primary outcome did not cross 0.937.

Main results: In the modified ITT population, the primary outcome occurred in 439 patients (62%) in the tenecteplase group and 405 patients (58%) in the alteplase group (RR, 1.07; 95% CI, 0.98-1.16), meeting the prespecified margin for noninferiority. Intracranial hemorrhage within 36 hours occurred in 15 patients (2%) in the tenecteplase group and 13 (2%) in the alteplase group (RR, 1.18; 95% CI, 0.56-2.50). Death at 90 days occurred in 46 patients (7%) in the tenecteplase group and 35 (5%) in the alteplase group (RR, 1.31; 95% CI, 0.86-2.01).
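
The relative-risk criterion can be checked the same way. Note that the denominators below (710 and 707) are assumptions back-calculated from the enrollment figures reported above (716 and 714 randomized, minus 6 and 7 who did not receive the drug); the published modified ITT population may have excluded a few additional patients, so the estimates here are approximate.

```python
from math import exp, log, sqrt

# TRACE-2 primary outcome counts; denominators are assumed (see note above)
x_tnk, n_tnk = 439, 710
x_alt, n_alt = 405, 707
NI_BOUNDARY = 0.937   # prespecified noninferiority boundary for the RR

rr = (x_tnk / n_tnk) / (x_alt / n_alt)
se_log_rr = sqrt(1 / x_tnk - 1 / n_tnk + 1 / x_alt - 1 / n_alt)  # SE of log(RR)
lower = exp(log(rr) - 1.96 * se_log_rr)  # lower limit of the 1-sided 97.5% CI

print(f"RR = {rr:.2f}, lower 97.5% limit = {lower:.2f}")
print("noninferior" if lower > NI_BOUNDARY else "noninferiority not shown")
# -> RR ~ 1.08, lower limit ~ 0.99 > 0.937, so noninferiority holds
```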

Conclusion: Tenecteplase was noninferior to alteplase in patients with acute ischemic stroke who met criteria for thrombolysis and either refused or were ineligible for endovascular thrombectomy.

Commentary

Alteplase has been FDA-approved for managing acute ischemic stroke since 1996 and has demonstrated positive effects on functional outcomes. Drawbacks of alteplase therapy, however, include bleeding risk as well as the cumbersome administration of a bolus dose followed by a 60-minute infusion. In recent years, the question of whether tenecteplase could replace alteplase as the preferred thrombolytic for acute ischemic stroke has garnered much attention. Several features of tenecteplase make it an attractive option, including greater fibrin specificity, a longer half-life, and ease of administration as a single, rapid bolus dose. Phase 2 trials comparing tenecteplase 0.25 mg/kg with alteplase suggested the potential for early neurological improvement as well as improved outcomes at 90 days. While the role of tenecteplase in acute myocardial infarction is well established owing to its ease of use and favorable adverse-effect profile,1 there is much less evidence from phase 3 randomized controlled trials to establish its role in acute ischemic stroke.2

Menon et al attempted to close this gap in the literature by conducting a randomized controlled clinical trial (AcT) comparing tenecteplase to alteplase in a Canadian patient population. The trial’s patient population mirrored that of real-world global registry data in terms of age, sex, and baseline stroke severity. In addition, the 4.5-hour eligibility window from symptom onset and the inclusion and exclusion criteria for therapy are consistent with those used in other countries, making the findings broadly generalizable. The study had some limitations, however, including the impact of COVID-19 on recruitment as well as constraints in research infrastructure and staffing, which may have limited enrollment at primary stroke centers. Nonetheless, the authors concluded that their results provide evidence that tenecteplase is comparable to alteplase, with similar functional and safety outcomes.

TRACE-2 focused on an Asian patient population and provided follow-up to the dose-ranging TRACE-1 phase 2 trial, which showed that tenecteplase 0.25 mg/kg had a safety profile similar to that of alteplase 0.9 mg/kg in Chinese patients presenting with acute ischemic stroke. TRACE-2 sought to establish noninferiority of tenecteplase and excluded patients who were ineligible for or refused thrombectomy. Interestingly, as the authors point out, the tenecteplase arm had numerically greater mortality as well as intracranial hemorrhage, but these differences were not statistically significant between the treatment groups at 90 days. The TRACE-2 results parallel those of AcT, and although the 2 trials differed in the ethnicity of their populations, the authors cite this as evidence that the results are consistent and support the role of tenecteplase in the management of acute ischemic stroke. Limitations of this trial include potential bias from its open-label design, as well as the exclusion of patients with more severe strokes eligible for thrombectomy, which may limit generalizability to patients with more disabling strokes who could have a higher risk of intracranial hemorrhage.

Application for Clinical Practice and System Implementation

Across the United States, many organizations have adopted off-label use of tenecteplase for managing fibrinolytic-eligible patients with acute ischemic stroke. In most cases, the impetus for change is the ease of dosing and administration of tenecteplase compared with alteplase, while the inclusion and exclusion criteria and overall management remain the same. Timely administration of therapy in stroke is critical. Time constraints in stroke workflows, the weight-based calculation of alteplase doses, and alteplase’s bolus-plus-infusion administration method may all contribute to medication errors when treating patients with acute stroke. The rapid, single-dose administration of tenecteplase removes many barriers that hospitals face when patients must be treated and then transferred to another site for further care. Without the need to “drip and ship,” completing administration before transport may allow timely patient transfer and eliminate the need to monitor an infusion en route. For some organizations, there may be potential for drug cost savings as well as improved metrics, such as door-to-needle time, but the overall effects of switching from alteplase to tenecteplase remain to be seen. Currently, tenecteplase is included in stroke guidelines as a “reasonable choice,” though with a low level of evidence.3 Nonetheless, these 2 studies support the role of tenecteplase in acute ischemic stroke treatment and may provide a foundation for further studies to establish its place in this population.
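
To make the workflow difference concrete, the sketch below contrasts the 2 weight-based dose calculations used in these trials. The function names are illustrative, and the 10% bolus fraction for alteplase reflects common practice; it is an assumption here, as neither study specifies the split in the text above.

```python
def tenecteplase_dose(weight_kg: float) -> float:
    """Single rapid IV bolus: 0.25 mg/kg, capped at 25 mg."""
    return min(0.25 * weight_kg, 25.0)

def alteplase_dose(weight_kg: float, bolus_fraction: float = 0.10) -> dict:
    """Total 0.9 mg/kg capped at 90 mg, given as a bolus plus a 60-minute
    infusion; the 10% bolus split is an assumed convention, not trial data."""
    total = min(0.9 * weight_kg, 90.0)
    bolus = bolus_fraction * total
    return {"bolus_mg": round(bolus, 1),
            "infusion_mg": round(total - bolus, 1),
            "infusion_min": 60}

# Example: an 80-kg patient
print(tenecteplase_dose(80))  # 20.0 (one bolus, done)
print(alteplase_dose(80))     # {'bolus_mg': 7.2, 'infusion_mg': 64.8, 'infusion_min': 60}
```

The single capped bolus is the feature that eliminates infusion monitoring during interfacility transfer.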

Practice Points

  • Tenecteplase may be considered an alternative to alteplase for patients with acute ischemic stroke who meet eligibility criteria for thrombolytics; this recommendation is included in the most recent stroke guidelines, although tenecteplase has not been demonstrated to be superior to alteplase.
  • The ease of administration of tenecteplase as a single intravenous bolus dose is a benefit compared with alteplase; its use remains off label, however, and further studies are needed to establish whether tenecteplase is superior in terms of functional and safety outcomes.

Carol Heunisch, PharmD, BCPS, BCCP
Pharmacy Department, NorthShore–Edward-Elmhurst Health, Evanston, IL

References

1. Assessment of the Safety and Efficacy of a New Thrombolytic (ASSENT-2) Investigators; Van De Werf F, Adgey J, et al. Single-bolus tenecteplase compared with front-loaded alteplase in acute myocardial infarction: the ASSENT-2 double-blind randomised trial. Lancet. 1999;354(9180):716-722. doi:10.1016/s0140-6736(99)07403-6

2. Burgos AM, Saver JL. Evidence that tenecteplase is noninferior to alteplase for acute ischemic stroke: meta-analysis of 5 randomized trials. Stroke. 2019;50(8):2156-2162. doi:10.1161/STROKEAHA.119.025080

3. Powers WJ, Rabinstein AA, Ackerson T, et al. Guidelines for the early management of patients with acute ischemic stroke: 2019 update to the 2018 Guidelines for the Early Management of Acute Ischemic Stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2019;50(12):e344-e418. doi:10.1161/STR.0000000000000211

Article PDF
Issue
Journal of Clinical Outcomes Management - 30(2)
Publications
Topics
Page Number
30-32
Sections
Article PDF
Article PDF

Study 1 Overview (Menon et al)

Objective: To determine whether a 0.25 mg/kg dose of intravenous tenecteplase is noninferior to intravenous alteplase 0.9 mg/kg for patients with acute ischemic stroke eligible for thrombolytic therapy.

Design: Multicenter, parallel-group, open-label randomized controlled trial.

Setting and participants: The trial was conducted at 22 primary and comprehensive stroke centers across Canada. A primary stroke center was defined as a hospital capable of offering intravenous thrombolysis to patients with acute ischemic stroke, while a comprehensive stroke center was able to offer thrombectomy services in addition. The involved centers also participated in Canadian quality improvement registries (either Quality Improvement and Clinical Research [QuiCR] or Optimizing Patient Treatment in Major Ischemic Stroke with EVT [OPTIMISE]) that track patient outcomes. Patients were eligible for inclusion if they were aged 18 years or older, had a diagnosis of acute ischemic stroke, presented within 4.5 hours of symptom onset, and were eligible for thrombolysis according to Canadian guidelines.

Patients were randomized in a 1:1 fashion to either intravenous tenecteplase (0.25 mg/kg single dose, maximum of 25 mg) or intravenous alteplase (0.9 mg/kg total dose to a maximum of 90 mg, delivered as a bolus followed by a continuous infusion). A total of 1600 patients were enrolled, with 816 randomly assigned to the tenecteplase arm and 784 to the alteplase arm; 1577 patients were included in the intention-to-treat (ITT) analysis (n = 806 tenecteplase; n = 771 alteplase). The median age of enrollees was 74 years, and 52.1% of the ITT population were men.

Main outcome measures: In the ITT population, the primary outcome measure was a modified Rankin score (mRS) of 0 or 1 at 90 to 120 days post treatment. Safety outcomes included symptomatic intracerebral hemorrhage, orolingual angioedema, extracranial bleeding that required blood transfusion (all within 24 hours of thrombolytic administration), and all-cause mortality at 90 days. The noninferiority threshold for intravenous tenecteplase was set as the lower 95% CI of the difference between the tenecteplase and alteplase groups in the proportion of patients who met the primary outcome exceeding –5%.

Main results: The primary outcome of mRS of either 0 or 1 at 90 to 120 days of treatment occurred in 296 (36.9%) of the 802 patients assigned to tenecteplase and 266 (34.8%) of the 765 patients assigned to alteplase (unadjusted risk difference, 2.1%; 95% CI, –2.6 to 6.9). The prespecified noninferiority threshold was met. There were no significant differences between the groups in rates of intracerebral hemorrhage at 24 hours or 90-day all-cause mortality.

Conclusion: Intravenous tenecteplase is a reasonable alternative to alteplase for patients eligible for thrombolytic therapy.

Study 2 Overview (Wang et al)

Objective: To determine whether tenecteplase (dose 0.25 mg/kg) is noninferior to alteplase in patients with acute ischemic stroke who are within 4.5 hours of symptom onset and eligible for thrombolytic therapy but either refused or were ineligible for endovascular thrombectomy.

Design: Multicenter, prospective, open-label, randomized, controlled noninferiority trial.

Setting and participants: This trial was conducted at 53 centers across China and included patients 18 years of age or older who were within 4.5 hours of symptom onset and were thrombolytic eligible, had a mRS ≤ 1 at enrollment, and had a National Institutes of Health Stroke Scale score between 5 and 25. Eligible participants were randomized 1:1 to either tenecteplase 0.25 mg/kg (maximum dose 25 mg) or alteplase 0.9 mg/kg (maximum dose 90 mg, administered as a bolus followed by infusion). During the enrollment period (June 12, 2021, to May 29, 2022), a total of 1430 participants were enrolled, and, of those, 716 were randomly assigned to tenecteplase and 714 to alteplase. Six patients assigned to tenecteplase and 7 assigned to alteplase did not receive drugs. At 90 days, 5 in the tenecteplase group and 11 in the alteplase group were lost to follow up.

Main outcome measures: The primary efficacy outcome was a mRS of 0 or 1 at 90 days. The primary safety outcome was intracranial hemorrhage within 36 hours. Safety outcomes included parenchymal hematoma 2, as defined by the European Cooperative Acute Stroke Study III; any intracranial or significant hemorrhage, as defined by the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries criteria; and death from all causes at 90 days. Noninferiority for tenecteplase would be declared if the lower 97.5% 1-sided CI for the relative risk (RR) for the primary outcome did not cross 0.937.

Main results: In the modified ITT population, the primary outcome occurred in 439 (62%) of the tenecteplase group and 405 (68%) of the alteplase group (RR, 1.07; 95% CI, 0.98-1.16). This met the prespecified margin for noninferiority. Intracranial hemorrhage within 36 hours was experienced by 15 (2%) patients in the tenecteplase group and 13 (2%) in the alteplase group (RR, 1.18; 95% CI, 0.56-2.50). Death at 90 days occurred in 46 (7%) patients in the tenecteplase group and 35 (5%) in the alteplase group (RR, 1.31; 95% CI, 0.86-2.01).

Conclusion: Tenecteplase was noninferior to alteplase in patients with acute ischemic stroke who met criteria for thrombolysis and either refused or were ineligible for endovascular thrombectomy.

 

 

Commentary

Alteplase has been FDA-approved for managing acute ischemic stroke since 1996 and has demonstrated positive effects on functional outcomes. Drawbacks of alteplase therapy, however, include bleeding risk as well as cumbersome administration of a bolus dose followed by a 60-minute infusion. In recent years, the question of whether or not tenecteplase could replace alteplase as the preferred thrombolytic for acute ischemic stroke has garnered much attention. Several features of tenecteplase make it an attractive option, including increased fibrin specificity, a longer half-life, and ease of administration as a single, rapid bolus dose. In phase 2 trials that compared tenecteplase 0.25 mg/kg with alteplase, findings suggested the potential for early neurological improvement as well as improved outcomes at 90 days. While the role of tenecteplase in acute myocardial infarction has been well established due to ease of use and a favorable adverse-effect profile,1 there is much less evidence from phase 3 randomized controlled clinical trials to secure the role of tenecteplase in acute ischemic stroke.2

Menon et al attempted to close this gap in the literature by conducting a randomized controlled clinical trial (AcT) comparing tenecteplase to alteplase in a Canadian patient population. The trial's patient population mirrors that of real-world data from global registries in terms of age, sex, and baseline stroke severity. In addition, the eligibility window of 4.5 hours from symptom onset as well as the inclusion and exclusion criteria for therapy are common to those utilized in other countries, making the findings generalizable. There were some limitations to the study, however, including the impact of COVID-19 on recruitment efforts as well as limitations of research infrastructure and staffing, which may have limited enrollment efforts at primary stroke centers. Nonetheless, the authors concluded that their results provide evidence that tenecteplase is comparable to alteplase, with similar functional and safety outcomes.

TRACE-2 focused on an Asian patient population and provided follow up to the dose-ranging TRACE-1 phase 2 trial. TRACE-1 showed that tenecteplase 0.25 mg/kg had a similar safety profile to alteplase 0.9 mg/kg in Chinese patients presenting with acute ischemic stroke. TRACE-2 sought to establish noninferiority of tenecteplase and excluded patients who were ineligible for or refused thrombectomy. Interestingly, the tenecteplase arm, as the authors point out, had numerically greater mortality as well as intracranial hemorrhage, but these differences were not statistically significant between the treatment groups at 90 days. The TRACE-2 results parallel those of AcT, and although there were differences in ethnicity between the 2 trials, the authors cite this as evidence that the results are consistent and provide evidence for the role of tenecteplase in the management of acute ischemic stroke. Limitations of this trial include potential bias from its open-label design, as well as exclusion of patients with more severe strokes eligible for thrombectomy, which may limit generalizability to patients with more disabling strokes who could have a higher risk of intracranial hemorrhage.

Application for Clinical Practice and System Implementation

Across the country, many organizations have adopted the off-label use of tenecteplase for managing fibrinolytic-eligible acute ischemic stroke patients. In most cases, the impetus for change is the ease of dosing and administration of tenecteplase compared to alteplase, while the inclusion and exclusion criteria and overall management remain the same. Timely administration of therapy in stroke is critical. This, along with other time constraints in stroke workflows, the weight-based calculation of alteplase doses, and alteplase’s administration method may lead to medication errors when using this agent to treat patients with acute stroke. The rapid, single-dose administration of tenecteplase removes many barriers that hospitals face when patients may need to be treated and then transferred to another site for further care. Without the worry to “drip and ship,” the completion of administration may allow for timely patient transfer and eliminate the need for monitoring of an infusion during transfer. For some organizations, there may be a potential for drug cost-savings as well as improved metrics, such as door-to-needle time, but the overall effects of switching from alteplase to tenecteplase remain to be seen. Currently, tenecteplase is included in stroke guidelines as a “reasonable choice,” though with a low level of evidence.3 However, these 2 studies support the role of tenecteplase in acute ischemic stroke treatment and may provide a foundation for further studies to establish the role of tenecteplase in the acute ischemic stroke population.

Practice Points

  • Tenecteplase may be considered as an alternative to alteplase for acute ischemic stroke for patients who meet eligibility criteria for thrombolytics; this recommendation is included in the most recent stroke guidelines, although tenecteplase has not been demonstrated to be superior to alteplase.
  • The ease of administration of tenecteplase as a single intravenous bolus dose represents a benefit compared to alteplase; it is an off-label use, however, and further studies are needed to establish the superiority of tenecteplase in terms of functional and safety outcomes.

Carol Heunisch, PharmD, BCPS, BCCP
Pharmacy Department, NorthShore–Edward-Elmhurst Health, Evanston, IL

Study 1 Overview (Menon et al)

Objective: To determine whether a 0.25 mg/kg dose of intravenous tenecteplase is noninferior to intravenous alteplase 0.9 mg/kg for patients with acute ischemic stroke eligible for thrombolytic therapy.

Design: Multicenter, parallel-group, open-label randomized controlled trial.

Setting and participants: The trial was conducted at 22 primary and comprehensive stroke centers across Canada. A primary stroke center was defined as a hospital capable of offering intravenous thrombolysis to patients with acute ischemic stroke, while a comprehensive stroke center was able to offer thrombectomy services in addition. The involved centers also participated in Canadian quality improvement registries (either Quality Improvement and Clinical Research [QuiCR] or Optimizing Patient Treatment in Major Ischemic Stroke with EVT [OPTIMISE]) that track patient outcomes. Patients were eligible for inclusion if they were aged 18 years or older, had a diagnosis of acute ischemic stroke, presented within 4.5 hours of symptom onset, and were eligible for thrombolysis according to Canadian guidelines.

Patients were randomized in a 1:1 fashion to either intravenous tenecteplase (0.25 mg/kg single dose, maximum of 25 mg) or intravenous alteplase (0.9 mg/kg total dose to a maximum of 90 mg, delivered as a bolus followed by a continuous infusion). A total of 1600 patients were enrolled, with 816 randomly assigned to the tenecteplase arm and 784 to the alteplase arm; 1577 patients were included in the intention-to-treat (ITT) analysis (n = 806 tenecteplase; n = 771 alteplase). The median age of enrollees was 74 years, and 52.1% of the ITT population were men.

Main outcome measures: In the ITT population, the primary outcome measure was a modified Rankin score (mRS) of 0 or 1 at 90 to 120 days post treatment. Safety outcomes included symptomatic intracerebral hemorrhage, orolingual angioedema, extracranial bleeding that required blood transfusion (all within 24 hours of thrombolytic administration), and all-cause mortality at 90 days. The noninferiority threshold for intravenous tenecteplase was set as the lower 95% CI of the difference between the tenecteplase and alteplase groups in the proportion of patients who met the primary outcome exceeding –5%.

Main results: The primary outcome of mRS of either 0 or 1 at 90 to 120 days of treatment occurred in 296 (36.9%) of the 802 patients assigned to tenecteplase and 266 (34.8%) of the 765 patients assigned to alteplase (unadjusted risk difference, 2.1%; 95% CI, –2.6 to 6.9). The prespecified noninferiority threshold was met. There were no significant differences between the groups in rates of intracerebral hemorrhage at 24 hours or 90-day all-cause mortality.

Conclusion: Intravenous tenecteplase is a reasonable alternative to alteplase for patients eligible for thrombolytic therapy.

Study 2 Overview (Wang et al)

Objective: To determine whether tenecteplase (dose 0.25 mg/kg) is noninferior to alteplase in patients with acute ischemic stroke who are within 4.5 hours of symptom onset and eligible for thrombolytic therapy but either refused or were ineligible for endovascular thrombectomy.

Design: Multicenter, prospective, open-label, randomized, controlled noninferiority trial.

Setting and participants: This trial was conducted at 53 centers across China and included patients 18 years of age or older who were within 4.5 hours of symptom onset and were thrombolytic eligible, had a mRS ≤ 1 at enrollment, and had a National Institutes of Health Stroke Scale score between 5 and 25. Eligible participants were randomized 1:1 to either tenecteplase 0.25 mg/kg (maximum dose 25 mg) or alteplase 0.9 mg/kg (maximum dose 90 mg, administered as a bolus followed by infusion). During the enrollment period (June 12, 2021, to May 29, 2022), a total of 1430 participants were enrolled, and, of those, 716 were randomly assigned to tenecteplase and 714 to alteplase. Six patients assigned to tenecteplase and 7 assigned to alteplase did not receive drugs. At 90 days, 5 in the tenecteplase group and 11 in the alteplase group were lost to follow up.

Main outcome measures: The primary efficacy outcome was a mRS of 0 or 1 at 90 days. The primary safety outcome was intracranial hemorrhage within 36 hours. Safety outcomes included parenchymal hematoma 2, as defined by the European Cooperative Acute Stroke Study III; any intracranial or significant hemorrhage, as defined by the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries criteria; and death from all causes at 90 days. Noninferiority for tenecteplase would be declared if the lower 97.5% 1-sided CI for the relative risk (RR) for the primary outcome did not cross 0.937.

Main results: In the modified ITT population, the primary outcome occurred in 439 (62%) of the tenecteplase group and 405 (68%) of the alteplase group (RR, 1.07; 95% CI, 0.98-1.16). This met the prespecified margin for noninferiority. Intracranial hemorrhage within 36 hours was experienced by 15 (2%) patients in the tenecteplase group and 13 (2%) in the alteplase group (RR, 1.18; 95% CI, 0.56-2.50). Death at 90 days occurred in 46 (7%) patients in the tenecteplase group and 35 (5%) in the alteplase group (RR, 1.31; 95% CI, 0.86-2.01).

Conclusion: Tenecteplase was noninferior to alteplase in patients with acute ischemic stroke who met criteria for thrombolysis and either refused or were ineligible for endovascular thrombectomy.

 

 

Commentary

Alteplase has been FDA-approved for managing acute ischemic stroke since 1996 and has demonstrated positive effects on functional outcomes. Drawbacks of alteplase therapy, however, include bleeding risk as well as cumbersome administration of a bolus dose followed by a 60-minute infusion. In recent years, the question of whether or not tenecteplase could replace alteplase as the preferred thrombolytic for acute ischemic stroke has garnered much attention. Several features of tenecteplase make it an attractive option, including increased fibrin specificity, a longer half-life, and ease of administration as a single, rapid bolus dose. In phase 2 trials that compared tenecteplase 0.25 mg/kg with alteplase, findings suggested the potential for early neurological improvement as well as improved outcomes at 90 days. While the role of tenecteplase in acute myocardial infarction has been well established due to ease of use and a favorable adverse-effect profile,1 there is much less evidence from phase 3 randomized controlled clinical trials to secure the role of tenecteplase in acute ischemic stroke.2

Menon et al attempted to close this gap in the literature by conducting a randomized controlled clinical trial (AcT) comparing tenecteplase to alteplase in a Canadian patient population. The trial's patient population mirrors that of real-world data from global registries in terms of age, sex, and baseline stroke severity. In addition, the eligibility window of 4.5 hours from symptom onset as well as the inclusion and exclusion criteria for therapy are common to those utilized in other countries, making the findings generalizable. There were some limitations to the study, however, including the impact of COVID-19 on recruitment efforts as well as limitations of research infrastructure and staffing, which may have limited enrollment efforts at primary stroke centers. Nonetheless, the authors concluded that their results provide evidence that tenecteplase is comparable to alteplase, with similar functional and safety outcomes.

TRACE-2 focused on an Asian patient population and provided follow up to the dose-ranging TRACE-1 phase 2 trial. TRACE-1 showed that tenecteplase 0.25 mg/kg had a similar safety profile to alteplase 0.9 mg/kg in Chinese patients presenting with acute ischemic stroke. TRACE-2 sought to establish noninferiority of tenecteplase and excluded patients who were ineligible for or refused thrombectomy. Interestingly, the tenecteplase arm, as the authors point out, had numerically greater mortality as well as intracranial hemorrhage, but these differences were not statistically significant between the treatment groups at 90 days. The TRACE-2 results parallel those of AcT, and although there were differences in ethnicity between the 2 trials, the authors cite this as evidence that the results are consistent and provide evidence for the role of tenecteplase in the management of acute ischemic stroke. Limitations of this trial include potential bias from its open-label design, as well as exclusion of patients with more severe strokes eligible for thrombectomy, which may limit generalizability to patients with more disabling strokes who could have a higher risk of intracranial hemorrhage.

Application for Clinical Practice and System Implementation

Across the country, many organizations have adopted the off-label use of tenecteplase for managing fibrinolytic-eligible patients with acute ischemic stroke. In most cases, the impetus for change is the ease of dosing and administration of tenecteplase compared to alteplase, while the inclusion and exclusion criteria and overall management remain the same. Timely administration of therapy in stroke is critical, and the time pressures of stroke workflows, the weight-based calculation of alteplase doses, and alteplase’s bolus-plus-infusion administration method can all contribute to medication errors when treating patients with acute stroke. The rapid, single-dose administration of tenecteplase removes many barriers that hospitals face when patients must be treated and then transferred to another site for further care. Because administration is complete before transfer, tenecteplase eliminates the need to “drip and ship” and to monitor an infusion en route. For some organizations, there may be potential for drug cost savings as well as improved metrics, such as door-to-needle time, but the overall effects of switching from alteplase to tenecteplase remain to be seen. Currently, tenecteplase is included in stroke guidelines as a “reasonable choice,” though with a low level of evidence.3 These 2 studies, however, support the role of tenecteplase in acute ischemic stroke treatment and may provide a foundation for further studies to establish its place in therapy.

Practice Points

  • Tenecteplase may be considered as an alternative to alteplase in patients with acute ischemic stroke who meet eligibility criteria for thrombolytics; this recommendation is included in the most recent stroke guidelines, although tenecteplase has not been shown to be superior to alteplase.
  • The ease of administration of tenecteplase as a single intravenous bolus dose represents an advantage over alteplase; this use remains off-label, however, and further studies are needed to establish whether tenecteplase offers superior functional and safety outcomes.

Carol Heunisch, PharmD, BCPS, BCCP
Pharmacy Department, NorthShore–Edward-Elmhurst Health, Evanston, IL

References

1. Assessment of the Safety and Efficacy of a New Thrombolytic (ASSENT-2) Investigators; Van de Werf F, Adgey J, et al. Single-bolus tenecteplase compared with front-loaded alteplase in acute myocardial infarction: the ASSENT-2 double-blind randomised trial. Lancet. 1999;354(9180):716-722. doi:10.1016/s0140-6736(99)07403-6

2. Burgos AM, Saver JL. Evidence that tenecteplase is noninferior to alteplase for acute ischaemic stroke: meta-analysis of 5 randomized trials. Stroke. 2019;50(8):2156-2162. doi:10.1161/STROKEAHA.119.025080

3. Powers WJ, Rabinstein AA, Ackerson T, et al. Guidelines for the early management of patients with acute ischemic stroke: 2019 update to the 2018 Guidelines for the Early Management of Acute Ischemic Stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2019;50(12):e344-e418. doi:10.1161/STR.0000000000000211


The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction

Article Type
Changed
Wed, 12/28/2022 - 12:33
Display Headline
The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction

Study 1 Overview (STICHES Investigators)

Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction.

Design: Multicenter, randomized, prospective study with extended follow-up (median duration, 9.8 years).

Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.

Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.

Main results: The primary outcome of death from any cause occurred in 359 patients (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 524 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).

Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.

Study 2 Overview (REVIVED BCIS Trial Group)

Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.

Design: Multicenter, randomized, prospective study.

Setting and participants: A total of 700 patients with an LVEF of 35% or less, severe coronary artery disease amenable to PCI, and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).

Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.

Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.

Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization for heart failure.

 

 

Commentary

Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.

In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amenable to CABG and an LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show a survival benefit, but the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group compared to OMT alone when follow-up of the same study population was extended to 9.8 years (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4
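
The HRs reported in these trials are estimated with Cox proportional hazards models. As a purely illustrative sketch of that approach, the snippet below fits a Cox model to synthetic data using the lifelines package; the random group assignment, exponential event-time model, assumed “true” HR of 0.84, and 10-year administrative censoring are all assumptions made for the example, not features of the trial’s actual analysis.

    # pip install lifelines
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 1212
    cabg = rng.integers(0, 2, size=n)  # 1 = CABG + OMT, 0 = OMT alone

    # Exponential event times; dividing treated times by 0.84 scales the
    # hazard down by a factor of 0.84 (the simulated "true" HR)
    time_to_event = rng.exponential(scale=12.0, size=n)
    time_to_event = np.where(cabg == 1, time_to_event / 0.84, time_to_event)

    follow_up = 10.0  # administrative censoring at 10 years
    df = pd.DataFrame({
        "cabg": cabg,
        "duration": np.minimum(time_to_event, follow_up),
        "event": (time_to_event <= follow_up).astype(int),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event")
    # exp(coef) for "cabg" should approximate the simulated HR of 0.84
    print(cph.summary.loc["cabg", ["exp(coef)",
                                   "exp(coef) lower 95%",
                                   "exp(coef) upper 95%"]])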

Since the STICH trial was designed, there have been significant improvements in the devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 41 months, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; HR, 0.99; 95% CI, 0.78-1.27; P = .96). Moreover, follow-up echocardiograms read by the core laboratory showed no difference in LVEF improvement between the groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished by 24 months.

The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease, and severely reduced ejection fraction that historically has been excluded from large-scale randomized controlled studies evaluating PCI plus OMT against OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index using the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported, and the core lab–adjudicated anatomical revascularization rate may be lower. Although viability testing, primarily with cardiac magnetic resonance imaging, was performed in most patients, the correlation between the revascularized territories and the viable segments has yet to be reported. Moreover, procedural details such as the use of intravascular ultrasound and physiological testing, both known to improve clinical outcomes, need to be reported.8,9

Second, although ischemic cardiomyopathy is highly prevalent, the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Participants were largely stable patients with less complex coronary anatomy, as evidenced by the median interval of 80 days from angiography to randomization. Considering the degree of left ventricular dysfunction in the trial population, only 14% of patients had left main disease and half had only 2-vessel disease. The severity of the left main disease also needs to be clarified, as patients whose disease the operator judged to be critical were likely not enrolled. Furthermore, the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, making it more likely that patients with more severe and complex disease were not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.

 

 

Third, although the primary outcome was similar between the groups, the rate of the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group than in the OMT group (5.2% vs 9.3%), as 40% of MI cases in the PCI group were periprocedural. The association between periprocedural MI and long-term outcomes has been modest compared to that of spontaneous MI. Moreover, with longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow-up despite negative results at the time of the original publication.

Fourth, the REVIVED trial randomized significantly fewer patients than the STICH trial, and the authors reported fewer primary-outcome events than the number estimated to be needed to power the assessment of the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make an indirect comparison of PCI (in REVIVED) with CABG (in STICH) in this patient population infeasible.
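
To make the power consideration concrete, the sketch below computes the per-arm sample size needed to detect a difference between two event proportions using the statsmodels package. The 30% comparator rate, two-sided 5% alpha, and 80% power are illustrative assumptions of ours; REVIVED’s actual calculation was event-driven rather than a simple two-proportion comparison.

    # pip install statsmodels
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Illustrative only: observed 37.2% event rate vs an assumed 30% comparator
    effect = proportion_effectsize(0.372, 0.30)  # Cohen's h
    n_per_arm = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80,
        ratio=1.0, alternative="two-sided",
    )
    print(round(n_per_arm))  # about 337 patients per arm under these assumptions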

Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% had severe angina at baseline. This is important to consider when interpreting the patient-reported health status results, as previous studies have shown that patients with worse angina at baseline derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.

Applications for Clinical Practice and System Implementation

In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone may be considered as an initial strategy, in place of the addition of PCI, after a careful discussion of risks and benefits. Further details about revascularization and extended follow-up data from the REVIVED trial are needed.

Practice Points

  • Patients with ischemic cardiomyopathy with reduced ejection fraction have been an understudied population in previous studies.
  • Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.

Taishi Hirai, MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO

References

1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES

2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356

3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001

4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA

6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606

7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA

8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361

9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: the ULTIMATE trial. J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013

10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370

11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558


Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane

Article Type
Changed
Wed, 12/28/2022 - 12:32
Display Headline
Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane

Study 1 Overview (Chang et al)

Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.

Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.

Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia, either with intravenous propofol or inhalational sevoflurane, for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the previous 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or an incomplete medical record.

Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.

Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.

Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.

Study 2 Overview (Mei et al)

Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.

Design: Randomized clinical trial of propofol and sevoflurane groups.

Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.

Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulates 4 criteria: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The average of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were assessed by the presence of CAM-defined delirium on any postoperative day.
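
Because the CAM diagnostic rule is a fixed boolean combination of the 4 features, it can be stated compactly. The following is a minimal sketch; the function and argument names are ours, not the study’s.

    def cam_positive(acute_onset_fluctuating: bool, inattention: bool,
                     disorganized_thinking: bool, altered_consciousness: bool) -> bool:
        """CAM rule: features 1 and 2 required, plus feature 3 or 4."""
        return (acute_onset_fluctuating and inattention
                and (disorganized_thinking or altered_consciousness))

    # Fluctuating inattention with disorganized thinking meets the rule
    assert cam_positive(True, True, True, False)
    # Without acute onset and fluctuating course, delirium is not diagnosed
    assert not cam_positive(False, True, True, True)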

Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants in each arm would be needed to detect a statistical difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (mean [SD], 0.5 [0.8] vs 0.3 [0.5] days; P = .049, Student’s t-test).
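
The reported incidence comparison can be approximately reproduced with an uncorrected chi-square test on counts back-calculated from the percentages (35 of 106 propofol and 24 of 103 sevoflurane patients with POD); the counts are our reconstruction, not figures taken from the paper.

    # pip install scipy
    from scipy.stats import chi2_contingency

    # Rows: propofol, sevoflurane; columns: POD, no POD (back-calculated)
    table = [[35, 71],
             [24, 79]]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # P ≈ .119, matching the reported value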

Conclusion: This underpowered study showed a 9.7–percentage-point difference in the incidence of POD between older adults who received propofol (33.0%) and those who received sevoflurane (23.3%) after THR/TKR. Further studies with larger sample sizes are needed to compare general anesthetics and their role in POD.

 

 

Commentary

Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.

In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, differentially affected the incidence of POD in older patients undergoing spine surgery. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD than propofol. However, neither anesthetic was associated with surgical outcomes such as postoperative 30-day complications or length of postoperative hospital stay. While these findings add new knowledge to this field of research, several limitations should be kept in mind when interpreting the results. For instance, the sample size was relatively small, and all cases were drawn from a single center and analyzed retrospectively. In addition, although a standardized nursing screening tool was used for delirium detection, hypoactive or less symptomatic delirium may have been missed, which would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
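
To see roughly where an OR of this size comes from, a crude (unadjusted) OR can be computed from the 2×2 table implied by the reported rates. In the sketch below, the counts (22 of 140 sevoflurane and 7 of 141 propofol patients with POD) are back-calculated from the published percentages and cohort total, so they are approximations; the published OR of 4.12 is larger because it is adjusted for covariates in the multivariable model.

    from math import exp, log, sqrt

    # Back-calculated (assumed) counts: POD vs no POD by anesthetic group
    pod_sevo, no_pod_sevo = 22, 118
    pod_prop, no_pod_prop = 7, 134

    odds_ratio = (pod_sevo / no_pod_sevo) / (pod_prop / no_pod_prop)
    se = sqrt(1/pod_sevo + 1/no_pod_sevo + 1/pod_prop + 1/no_pod_prop)
    lo, hi = (exp(log(odds_ratio) + z * se) for z in (-1.96, 1.96))
    print(f"crude OR {odds_ratio:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # ~3.57 (1.47-8.66)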

In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.

 

 

Applications for Clinical Practice and System Implementation

The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.

The factors mediating the differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences in target receptors, such as GABA-A (propofol, etomidate) vs NMDA (ketamine), could be a defining feature in differential POD incidence. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalational vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spine vs TKR/THR), the patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may affect delirium outcomes. These factors should therefore be considered in the design of future clinical trials investigating the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD so that perioperative care can be optimized and risks mitigated in this vulnerable population. Parallel investigations into how anesthetics differentially affect transient or longer-term cognitive impairment after surgery (ie, postoperative cognitive dysfunction) in older adults are also urgently needed to improve cognitive health in this population.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x

2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z

Effectiveness of Colonoscopy for Colorectal Cancer Screening in Reducing Cancer-Related Mortality: Interpreting the Results From Two Ongoing Randomized Trials 

Article Type
Changed
Wed, 11/23/2022 - 14:24
Display Headline
Effectiveness of Colonoscopy for Colorectal Cancer Screening in Reducing Cancer-Related Mortality: Interpreting the Results From Two Ongoing Randomized Trials 

Study 1 Overview (Bretthauer et al) 

Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death. 

Design: Randomized trial conducted in 4 European countries.

Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.

Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.

Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.

Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial). The remaining participants were either excluded or their data could not be included because of missing follow-up data in the usual-care group. Men (50.1%) and women (49.9%) were equally represented. The median age at entry was 59 years, and the median follow-up was 10 years. Characteristics were otherwise balanced. Good bowel preparation was reported in 91% of participants, and cecal intubation was achieved in 96.8%. The percentage of invited participants who underwent screening was 42% overall, although screening rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of the screening group). Adenomas were detected in 30.7% of participants; 15 patients had polypectomy-related major bleeding. There were no perforations.

The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.20% in the usual-care group (risk ratio, 0.82; 95% CI, 0.70-0.93). The reported number needed to invite to prevent 1 case of colon cancer within 10 years was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to estimate the effect of screening had all participants assigned to the screening group undergone screening. In this analysis, the risk of colorectal cancer at 10 years decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.55-0.83).
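
As a quick check on these effect measures, the sketch below recomputes the risk ratio, absolute risk reduction, and number needed to invite from the reported 10-year risks. It is an illustrative calculation, not part of the trial's analysis.

```python
# Recompute effect measures from the reported 10-year risks (illustrative).
risk_invited = 0.0098  # 0.98% risk, invited-to-screen group
risk_usual = 0.0120    # 1.20% risk, usual-care group

risk_ratio = risk_invited / risk_usual  # ~0.82
arr = risk_usual - risk_invited         # absolute risk reduction, 0.22%
nni = 1 / arr                           # number needed to invite, ~455

print(f"RR = {risk_ratio:.2f}, ARR = {arr:.2%}, NNI = {nni:.0f}")
```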

Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.

Study 2 Overview (Forsberg et al) 

Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.

Design: Randomized controlled trial in Sweden utilizing a population registry. 

Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry maintained by the Swedish Tax Agency.

Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.

Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.

Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of participants in the colonoscopy group and 0.20% (121) of participants in the FIT group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations and 15 major bleeding events in the colonoscopy group. More right-sided adenomas were detected in the colonoscopy group.
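
The reported relative risk and confidence interval can be reproduced from the raw counts with the standard log-risk-ratio method; the sketch below is illustrative and assumes only the group sizes and case counts given above.

```python
import math

# Reported case counts and group sizes (colonoscopy vs FIT).
cases_colo, n_colo = 49, 31_140
cases_fit, n_fit = 121, 60_300

rr = (cases_colo / n_colo) / (cases_fit / n_fit)  # ~0.78
# Standard error of log(RR) for two independent proportions
se = math.sqrt(1 / cases_colo - 1 / n_colo + 1 / cases_fit - 1 / n_fit)
lo = math.exp(math.log(rr) - 1.96 * se)  # ~0.56
hi = math.exp(math.log(rr) + 1.96 * se)  # ~1.09

print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```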

Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT groups. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.

Commentary 

The first colonoscopy screening recommendations in the United States were established in the mid-1990s, and over the subsequent 2 decades colonoscopy became the main recommended modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allow for removal of potentially precancerous lesions. However, the data supporting colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer, and colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3

There has been a lack of randomized data validating the efficacy of colonoscopy screening in reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses this important need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was an 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and raise the question of whether previous studies overestimated the effectiveness of colonoscopy in reducing colorectal cancer–related deaths. There are, however, several limitations to the Bretthauer et al study.

Perhaps the most important limitation is that only 42% of participants in the invited-to-screen cohort actually underwent screening colonoscopy, which raises the question of whether the modest intention-to-screen effect reflects low participation rather than low efficacy of the procedure itself. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%, findings more in line with prior published studies of the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that significant reductions in colorectal cancer and colorectal cancer–related death will emerge with longer follow-up.
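
For intuition on how low participation dilutes an intention-to-screen estimate, the back-of-envelope calculation below divides the observed absolute risk reduction by the participation rate. This is a naive sketch, not the trial's formal per-protocol adjustment: it assumes non-participants receive no benefit and are otherwise comparable to participants, which is why it yields a more extreme risk ratio (~0.56) than the trial's model-based estimate (0.69).

```python
# Naive compliance adjustment (illustrative; not the trial's formal method).
# Assumes non-participants gain no benefit and are comparable to participants.
risk_invited = 0.0098  # 10-year risk, invited-to-screen group
risk_usual = 0.0120    # 10-year risk, usual-care group
participation = 0.42   # fraction of invitees who were actually screened

arr_itt = risk_usual - risk_invited     # intention-to-screen ARR, 0.22%
arr_screened = arr_itt / participation  # implied ARR among the screened, ~0.52%
rr_screened = (risk_usual - arr_screened) / risk_usual
print(f"Naive per-protocol risk ratio: {rr_screened:.2f}")  # ~0.56
```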

While the results of the Bretthauer et al trial are important, randomized trials directly comparing the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial (SCREESCO), also ongoing, seeks to address this vitally important gap by comparing once-only colonoscopy, 2 rounds of FIT, and no screening. The currently reported data are preliminary but show a similarly low rate of colonoscopy screening among those invited (35%), a limitation shared with the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this study, which reported a very low adenoma detection rate.

While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. Should the results presented by Bretthauer et al reflect the real-world performance of screening, colonoscopy may not prove more effective than simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help answer this question, although concerns about that study, including its very low participation rate, could lead it to greatly underestimate the effectiveness of screening. Additional analyses and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and retains a category A recommendation from the United States Preventive Services Task Force.4

Applications for Clinical Practice and System Implementation

Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (category B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help shed light on the magnitude of benefit of these modalities.

Practice Points

  • Current guidelines continue to strongly recommend screening for colon cancer in those aged 45 to 75 years.
  • The optimal modality for screening and the impact of screening on cancer-related mortality require longer-term follow-up from these ongoing studies.

–Daniel Isaac, DO, MS 

References

1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.

2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417

3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969

4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
196-198
Sections
Article PDF
Article PDF

Study 1 Overview (Bretthauer et al) 

Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death. 

Design: Randomized trial conducted in 4 European countries.

Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.

Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.

Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.

Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial). The remaining participants were either excluded or data could not be included due to lack of follow-up data from the usual-care group. Men (50.1%) and women (49.9%) were equally represented. The median age at entry was 59 years. The median follow-up was 10 years. Characteristics were otherwise balanced. Good bowel preparation was reported in 91% of all participants. Cecal intubation was achieved in 96.8% of all participants. The percentage of patients who underwent screening was 42% for the group, but screening rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of screening group). Adenomas were detected in 30.7% of participants; 15 patients had polypectomy-related major bleeding. There were no perforations.

The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.2% in the usual-care group (risk ratio, 0.82; 95% CI, 0.7-0.93). The reported number needed to invite to prevent 1 case of colon cancer in a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.9; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to account for the estimated effect of screening if all participants assigned to the screening group underwent screening. In this analysis, the risk of colorectal cancer at 10 years was decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.66-0.83).

Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.

 

 

Study 2 Overview (Forsberg et al) 

Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.

Design: Randomized controlled trial in Sweden utilizing a population registry. 

Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry from the Swedish Tax Agency.

Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.

Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.

Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.2% (121) of people in the FIT test group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations noted in the colonoscopy group and 15 major bleeding events. More right-sided adenomas were detected in the colonoscopy group.

Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT group. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.

 

 

Commentary 

The first colonoscopy screening recommendations were established in the mid 1990s in the United States, and over the subsequent 2 decades colonoscopy has been the recommended method and main modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allow for removal of potential precancerous lesions. However, data to support colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer. Additionally, colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3

There has been a lack of randomized clinical data to validate the efficacy of colonoscopy screening for reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses an important need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was a noted 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and certainly raise the question as to whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related deaths. There are several limitations to the Bretthauer et al study, however.

Perhaps the most important limitation is the fact that only 42% of participants in the invited-to-screen cohort underwent screening colonoscopy. Therefore, this raises the question of whether the efficacy noted is simply due to a lack of participation in the screening protocol. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%. These findings are more in line with prior published studies regarding the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that the risk of colorectal cancer and colorectal cancer–related death can be reduced on subsequent follow-up.

 

 

While the results of the Bretthauer et al trial are important, randomized trials that directly compare the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial, also an ongoing study, seeks to address this vitally important gap in our current data. The SCREESCO trial is a study that compares the efficacy of colonoscopy with FIT every 2 years or no screening. The currently reported data are preliminary but show a similarly low rate of colonoscopy screening in those invited to do so (35%). This is a similar limitation to that noted in the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this study, which had a very low reported adenoma detection rate.

While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, there remain many unanswered questions. Should the results presented by Bretthauer et al represent the current real-world scenario, then colonoscopy screening may not be viewed as an effective screening tool compared to simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question. However, there are concerns with this study, including a very low participation rate, which could greatly underestimate the effectiveness of screening. Additional analysis and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and remains a category A recommendation by the United States Preventive Services Task Force.4 

Applications for Clinical Practice and System Implementation

Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (category B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help shed light on the magnitude of benefit of these modalities.

Practice Points

  • Current guidelines continue to strongly recommend screening for colon cancer in those aged 45 to 75 years.
  • The optimal modality for screening and the impact of screening on cancer-related mortality requires longer- term follow-up from these ongoing studies.

–Daniel Isaac, DO, MS 

Study 1 Overview (Bretthauer et al) 

Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death. 

Design: Randomized trial conducted in 4 European countries.

Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.

Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.

Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.

Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial). The remaining participants were either excluded or data could not be included due to lack of follow-up data from the usual-care group. Men (50.1%) and women (49.9%) were equally represented. The median age at entry was 59 years. The median follow-up was 10 years. Characteristics were otherwise balanced. Good bowel preparation was reported in 91% of all participants. Cecal intubation was achieved in 96.8% of all participants. The percentage of patients who underwent screening was 42% for the group, but screening rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of screening group). Adenomas were detected in 30.7% of participants; 15 patients had polypectomy-related major bleeding. There were no perforations.

The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.2% in the usual-care group (risk ratio, 0.82; 95% CI, 0.7-0.93). The reported number needed to invite to prevent 1 case of colon cancer in a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.9; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to account for the estimated effect of screening if all participants assigned to the screening group underwent screening. In this analysis, the risk of colorectal cancer at 10 years was decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.66-0.83).

Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.

 

 

Study 2 Overview (Forsberg et al) 

Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.

Design: Randomized controlled trial in Sweden utilizing a population registry. 

Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry from the Swedish Tax Agency.

Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.

Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.

Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.2% (121) of people in the FIT test group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations noted in the colonoscopy group and 15 major bleeding events. More right-sided adenomas were detected in the colonoscopy group.

Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT group. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.

 

 

Commentary 

The first colonoscopy screening recommendations were established in the mid 1990s in the United States, and over the subsequent 2 decades colonoscopy has been the recommended method and main modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allow for removal of potential precancerous lesions. However, data to support colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer. Additionally, colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3

There has been a lack of randomized clinical data to validate the efficacy of colonoscopy screening for reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses an important need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was a noted 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and certainly raise the question as to whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related deaths. There are several limitations to the Bretthauer et al study, however.

Perhaps the most important limitation is the fact that only 42% of participants in the invited-to-screen cohort underwent screening colonoscopy. Therefore, this raises the question of whether the efficacy noted is simply due to a lack of participation in the screening protocol. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%. These findings are more in line with prior published studies regarding the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that the risk of colorectal cancer and colorectal cancer–related death can be reduced on subsequent follow-up.

 

 

While the results of the Bretthauer et al trial are important, randomized trials that directly compare the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial, also an ongoing study, seeks to address this vitally important gap in our current data. The SCREESCO trial is a study that compares the efficacy of colonoscopy with FIT every 2 years or no screening. The currently reported data are preliminary but show a similarly low rate of colonoscopy screening in those invited to do so (35%). This is a similar limitation to that noted in the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this study, which had a very low reported adenoma detection rate.

While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. Should the results presented by Bretthauer et al reflect the real-world performance of screening, colonoscopy may not prove more effective than simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question, although its very low participation rate could likewise lead to substantial underestimation of the effectiveness of screening. Additional analyses and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and retains a category A recommendation from the United States Preventive Services Task Force.4

Applications for Clinical Practice and System Implementation

Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (category B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the studies presented here is needed to clarify the magnitude of benefit of each modality.

Practice Points

  • Current guidelines continue to strongly recommend screening for colorectal cancer in those aged 45 to 75 years.
  • The optimal screening modality and the impact of screening on cancer-related mortality require longer-term follow-up from these ongoing studies.

–Daniel Isaac, DO, MS 

References

1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.

2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417

3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969

4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening


Deprescribing in Older Adults in Community and Nursing Home Settings


Study 1 Overview (Bayliss et al)

Objective: To examine the effect of a deprescribing educational intervention on medication use in older adults with cognitive impairment.

Design: This was a pragmatic, cluster randomized trial conducted in 8 primary care clinics that are part of a nonprofit health care system.

Setting and participants: The primary care clinic populations ranged from 170 to 1125 patients per clinic. The primary care clinics were randomly assigned to intervention or control using a uniform distribution in blocks by clinic size. Eligibility criteria for participants at those practices included age 65 years or older; health plan enrollment at least 1 year prior to intervention; diagnosis of Alzheimer disease and related dementia (ADRD) or mild cognitive impairment (MCI) by International Statistical Classification of Diseases and Related Health Problems, Tenth Revision code or from problem list; 1 or more chronic conditions from those in the Chronic Conditions Warehouse; and 5 or more long-term medications. Those who scheduled a visit at their primary care clinic in advance were eligible for the intervention. Primary care clinicians in intervention clinics were eligible to receive the clinician portion of the intervention. A total of 1433 participants were enrolled in the intervention group, and 1579 participants were enrolled in the control group.

Intervention: The intervention included 2 components: a patient and family component with materials mailed in advance of their primary care visits and a clinician component comprising monthly educational materials on deprescribing and notification in the electronic health record about visits with patient participants. The patient and family component consisted of a brochure titled “Managing Medication” and a questionnaire on attitudes toward deprescribing intended to educate patients and family about deprescribing. Clinicians at intervention clinics received an educational presentation at a monthly clinician meeting as well as tip sheets and a poster on deprescribing topics, and they also were notified of upcoming appointments with patients who received the patient component of the intervention. For the control group, patients and family did not receive any materials, and clinicians did not receive intervention materials or notification of participants enrolled in the trial. Usual care in both intervention and control groups included medication reconciliation and electronic health record alerts for potentially high-risk medications.

Main outcome measures: The primary outcomes of the study were the number of long-term medications per individual and the proportion of patients prescribed 1 or more potentially inappropriate medications. Outcome measurements were extracted from the electronic clinical data, and outcomes were assessed at 6 months, which involved comparing counts of medications at baseline with medications at 6 months. Long-term medications were defined as medications that are prescribed for 28 days or more based on pharmacy dispensing data. Potentially inappropriate medications (PIMs) were defined using the Beers list of medications to avoid in those with cognitive impairment and opioid medications. Analyses were conducted as intention to treat.
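
For readers interested in how such outcomes are operationalized, the sketch below illustrates the two outcome definitions on toy data. The record format, drug names, and the 3-drug PIM subset are hypothetical; the study used pharmacy dispensing data with the full Beers list plus opioids.

```python
# Hypothetical sketch of the outcome definitions: the records and PIM list
# are invented for illustration; the study used pharmacy dispensing data
# and the full Beers list (plus opioids), not this toy subset.

LONG_TERM_DAYS = 28
PIM_EXAMPLES = {"diphenhydramine", "lorazepam", "oxycodone"}  # toy subset

dispensings = [  # (drug name, days supplied) for one patient
    ("lisinopril", 90),
    ("diphenhydramine", 30),
    ("amoxicillin", 7),
]

long_term = [drug for drug, days in dispensings if days >= LONG_TERM_DAYS]
has_pim = any(drug in PIM_EXAMPLES for drug in long_term)

print(len(long_term))  # 2 long-term medications
print(has_pim)         # True: diphenhydramine is on the toy PIM list
```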

Main results: In the intervention group and control group, 56.2% and 54.4% of participants were women, and the mean age was 80.1 years (SD, 7.2) and 79.9 years (SD, 7.5), respectively. At baseline, the mean number of long-term medications was 7.0 (SD, 2.1) in the intervention group and 7.0 (SD, 2.2) in the control group. The proportion of patients taking any PIMs was 30.5% in the intervention group and 29.6% in the control group. At 6 months, the mean number of long-term medications was 6.4 in the intervention group and 6.5 in the control group, with an adjusted difference of –0.1 (95% CI, –0.2 to 0.04; P = .14); the proportion of patients with any PIMs was 17.8% in the intervention group and 20.9% in the control group, with an adjusted difference of –3.2% (95% CI, –6.2% to 0.4%; P = .08). Preplanned analyses examining subgroup differences for those with a higher number of medications (7+ vs 5 or 6 medications) did not find different effects of the intervention.

Conclusion: This educational intervention on deprescribing did not result in reductions in the number of medications or the use of PIMs in patients with cognitive impairment.

Study 2 Overview (Gedde et al)

Objective: To examine the effect of a deprescribing intervention (COSMOS) on medication use for nursing home residents.

Design: This was a randomized clinical trial.

Setting and participants: This trial was conducted in 67 units in 33 nursing homes in Norway. Participants were nursing home residents recruited from August 2014 to March 2015. Inclusion criteria included adults aged 65 years and older with at least 2 years of residency in nursing homes. Exclusion criteria included diagnosis of schizophrenia and a life expectancy of 6 months or less. Participants were followed for 4 months; participants were considered lost to follow-up if they died or moved from the nursing home unit. The analyses were per protocol and did not include those lost to follow-up or those who did not undergo a medication review in the intervention group. A total of 217 and 211 residents were included in the intervention and control groups, respectively.

Intervention: The intervention contained 5 components: communication and advance care planning, systematic pain management, medication reviews with collegial mentoring, organization of activities adjusted to needs and preferences, and safety. For medication review, the nursing home physician reviewed medications together with a nurse and study physicians who provided mentoring. The medication review involved a structured process that used assessment tools for behavioral and psychological symptoms of dementia (BPSD), activities of daily living (ADL), pain, cognitive status, well-being and quality of life, and clinical metrics of blood pressure, pulse, and body mass index. The study utilized the STOPP/START criteria1 for medication use in addition to a list of medications with anticholinergic properties for the medication review. In addition, drug interactions were documented through a drug interaction database, and the team incorporated patient wishes and concerns in the medication reviews. The nursing home physician made final decisions on medications. For the control group, nursing home residents received usual care without this intervention.

Main outcome measures: The primary outcome of the study was the mean change in the number of prescribed psychotropic medications, both regularly scheduled and total (the latter also including on-demand drugs), received at 4 months compared to baseline. Psychotropic medications included antipsychotics, anxiolytics, hypnotics or sedatives, antidepressants, and antidementia drugs. Secondary outcomes included mean changes in BPSD using the Neuropsychiatric Inventory-Nursing Home version (NPI-NH) and the Cornell Scale for Depression in Dementia (CSDD), and in ADL using the Physical Self-Maintenance Scale (PSMS).
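
The psychotropic classes listed above correspond to conventional Anatomical Therapeutic Chemical (ATC) groups. The paper does not publish its exact coding scheme, so the mapping sketched below is an assumption based on standard ATC classes rather than the authors' protocol.

```python
# Assumed mapping of the study's psychotropic classes to standard ATC
# prefixes; treat this as a conventional approximation, not the authors'
# published method.

PSYCHOTROPIC_ATC_PREFIXES = {
    "N05A": "antipsychotics",
    "N05B": "anxiolytics",
    "N05C": "hypnotics and sedatives",
    "N06A": "antidepressants",
    "N06D": "antidementia drugs",
}

def is_psychotropic(atc_code: str) -> bool:
    """True if an ATC code falls in one of the psychotropic classes."""
    return any(atc_code.startswith(p) for p in PSYCHOTROPIC_ATC_PREFIXES)

print(is_psychotropic("N05AH04"))  # True  (quetiapine, an antipsychotic)
print(is_psychotropic("C07AB02"))  # False (metoprolol, a beta-blocker)
```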

Main results: In both the intervention and control groups, 76% of participants were women, and mean age was 86.3 years (SD, 7.95) in the intervention group and 86.6 years (SD, 7.21) in the control group. At baseline, the mean number of total medications was 10.9 (SD, 4.6) in the intervention group and 10.9 (SD, 4.7) in the control group, and the mean number of psychotropic medications was 2.2 (SD, 1.6) and 2.2 (SD, 1.7) in the intervention and control groups, respectively. At 4 months, the mean change from baseline of total psychotropic medications was –0.34 in the intervention group and 0.01 in the control group (P < .001), and the mean change of regularly scheduled psychotropic medications was –0.21 in the intervention group and 0.02 in the control group (P < .001). Measures of BPSD and depression did not differ between intervention and control groups, and ADL showed a small improvement in the intervention group.

Conclusion: This intervention reduced the use of psychotropic medications in nursing home residents without worsening BPSD or depression and may have yielded improvements in ADL.

Commentary

Polypharmacy is common among older adults, many of whom have multiple chronic conditions and take multiple medications to manage them. Polypharmacy increases the risk of drug interactions and adverse effects from medications; older adults who are frail and/or have cognitive impairment are especially at risk. Reducing medication use, especially medications likely to cause adverse effects such as those with anticholinergic properties, has the potential to yield beneficial effects while reducing the burden of taking medications. A large randomized trial found that a pharmacist-led educational intervention can be effective in reducing PIM use in community-dwelling older adults,2 and that targeting patient motivation and capacity to deprescribe could be effective.3 The study by Bayliss and colleagues (Study 1), however, fell short of the effects seen in the earlier D-PRESCRIBE trial. One reason may be that the clinician portion of the intervention was less intensive than that used in the earlier trial; specifically, in the present study, clinicians were not provided with, or expected to utilize, tools for structured medication review or deprescribing. Although the intervention primed the patient and family for discussions around deprescribing through a brochure and questionnaire, the clinician portion remained relatively unstructured. Another effective intervention that went beyond clinician education used electronic decision support to provide a more structured deprescribing process.4

The findings from the Gedde et al study (Study 2) are comparable to those of prior studies in the nursing home population,5 where residents are likely to take a large number of medications, including psychotropic medications, and are more likely to be frail. However, Gedde and colleagues employed a bundled intervention6 that included components beyond medication review, so it is unclear whether the improvement in ADL can be attributed to the deprescribing of medications alone. Gedde et al's finding that deprescribing can reduce the use of psychotropic medications without leading to differences in behavioral and psychological symptoms or depression is an important contribution to our knowledge about polypharmacy and deprescribing in older patients. Nursing home residents, their families, and clinicians can thus expect that the deprescribing of psychotropic medications does not lead to worsening symptoms. Of note, the clinician portion of the intervention in the Gedde et al study was quite structured, and this structure may have contributed to the observed effects.

Applications for Clinical Practice and System Implementation

Both studies add to the literature on deprescribing and may offer options for researchers and clinicians who are considering potential components of an effective deprescribing intervention. Patient activation for deprescribing via the methods used in these 2 studies may help to prime patients for conversations about deprescribing; however, as shown by the Bayliss et al study, a more structured approach to clinical encounters, such as the use of tools in the electronic health record, may be needed to reduce the use of medications deemed unnecessary or potentially harmful. Further studies should examine the effect of deprescribing on medication use and, perhaps even more importantly, how deprescribing impacts patient outcomes, both risks and benefits.

Practice Points

  • A more structured approach to clinical encounters (eg, the use of tools in the electronic health record) may be needed when deprescribing unnecessary or potentially harmful medications in older patients in community settings.
  • In the nursing home setting, a structured deprescribing intervention can reduce the use of psychotropic medications without leading to differences in behavioral and psychological symptoms or depression.

–William W. Hung, MD, MPH

References

1. O’Mahony D, O’Sullivan D, Byrne S, et al. STOPP/START criteria for potentially inappropriate prescribing in older people: version 2. Age Ageing. 2015;44(2):213-218. doi:10.1093/ageing/afu145

2. Martin P, Tamblyn R, Benedetti A, et al. Effect of a pharmacist-led educational intervention on inappropriate medication prescriptions in older adults: the D-PRESCRIBE randomized clinical trial. JAMA. 2018;320(18):1889-1898. doi:10.1001/jama.2018.16131

3. Martin P, Tannenbaum C. A realist evaluation of patients’ decisions to deprescribe in the EMPOWER trial. BMJ Open. 2017;7(4):e015959. doi:10.1136/bmjopen-2017-015959

4. Rieckert A, Reeves D, Altiner A, et al. Use of an electronic decision support tool to reduce polypharmacy in elderly people with chronic diseases: cluster randomised controlled trial. BMJ. 2020;369:m1822. doi:10.1136/bmj.m1822

5. Fournier A, Anrys P, Beuscart JB, et al. Use and deprescribing of potentially inappropriate medications in frail nursing home residents. Drugs Aging. 2020;37(12):917-924. doi:10.1007/s40266-020-00805-7

6. Husebø BS, Ballard C, Aarsland D, et al. The effect of a multicomponent intervention on quality of life in residents of nursing homes: a randomized controlled trial (COSMOS). J Am Med Dir Assoc. 2019;20(3):330-339. doi:10.1016/j.jamda.2018.11.006


Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients


Study 1 Overview (Oberhaus et al)

Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.

Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.

Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering the 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, the 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. Each assessor scored their respective assessment independently, blinded to the results of the other.

Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.
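
For readers less familiar with the agreement statistic, the sketch below computes a plain two-rater Cohen κ on binary delirium calls. It omits the repeated-measures machinery the authors applied, and the ratings are invented for illustration.

```python
# Plain two-rater Cohen kappa on binary (delirious / not) calls.
# Simplified: the study used kappa with a repeated-measures correction;
# the ratings below are invented for illustration.

def cohen_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal positive rate.
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

cam_3d = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]  # hypothetical 3D-CAM calls
cam    = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]  # hypothetical long-form CAM calls
print(round(cohen_kappa(cam_3d, cam), 2))  # 0.78 on this toy data
```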

Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).

Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.

Study 2 Overview (Shenkin et al)

Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.

Design: Prospective randomized diagnostic test accuracy study.

Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.

Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either the 4AT or CAM instrument for delirium. The accuracy of the 4AT was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity against the reference standard and was summarized using the area under the receiver operating characteristic curve. The diagnostic accuracy of the 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher's exact test. The overall performance of the 4AT and CAM was summarized using Youden's Index and the diagnostic odds ratio, both derived from sensitivity and specificity.

Main results: All 843 individuals enrolled in the study were randomized, and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean age was 81.4 years (SD, 6.4), and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).
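
The reported sensitivities and specificities can be plugged directly into the summary statistics named above; the sketch below does so as a rough check. The authors' exact values may differ slightly because they worked from raw counts, and the CAM's diagnostic odds ratio is undefined at a specificity of 100%.

```python
# Recompute summary accuracy statistics from the reported sensitivity
# and specificity (a rough check; the paper works from raw counts).

def youden_j(sens: float, spec: float) -> float:
    """Youden's Index: sensitivity + specificity - 1."""
    return sens + spec - 1.0

def diagnostic_odds_ratio(sens: float, spec: float) -> float:
    """DOR = [sens / (1 - sens)] * [spec / (1 - spec)]."""
    return (sens / (1 - sens)) * (spec / (1 - spec))

print(round(youden_j(0.76, 0.94), 2))               # 4AT: 0.7
print(round(youden_j(0.40, 1.00), 2))               # CAM: 0.4
print(round(diagnostic_odds_ratio(0.76, 0.94), 1))  # 4AT: ~49.6
# The CAM's DOR cannot be computed here: a specificity of 100% gives a
# zero denominator, so a continuity correction from raw counts is needed.
```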

Conclusion: The 4AT is a pragmatic screening test for delirium in the acute medical setting that does not require special training to administer. Use of this instrument may help to improve delirium detection as a part of routine clinical care in hospitalized older adults.

Commentary

Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Given the high patient load in the hospital setting, validated brief delirium screening instruments that can be reliably administered by nonphysician staff may therefore enhance delirium detection in vulnerable patients and consequently improve their outcomes.

In Study 1, Oberhaus et al approach the challenge of underdiagnosed delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were individually reliable (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium than the long-form CAM, consistent with its purpose as a screening tool with high sensitivity. It is important to emphasize that the 3D-CAM not only takes less time to administer but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisites of a brief screening test that can be rapidly administered by nonclinicians and, if positive, followed by a more extensive confirmatory evaluation performed by a clinician. Limitations of this study include the lack of a reference standard structured interview conducted by a physician-rater to determine the true diagnostic accuracy of both the 3D-CAM and CAM, and the use of convenience sampling at a single center, which reduces the generalizability of the findings.

In a similar vein, Shenkin et al in Study 2 evaluate the utility of the 4AT instrument for diagnosing delirium in older medical inpatients by testing its diagnostic accuracy against a reference standard (ie, DSM-IV–based evaluation by physicians) as well as comparing it to the CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer than the CAM. The study showed that the abbreviated 4AT, compared to the CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of the 3D-CAM in the postoperative setting, the 4AT possesses the key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the use of a reference standard validated by expert consensus, which allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.

Application for Clinical Practice and System Implementation

The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.

Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to administer and interpret than the CAM.2 For instance, multistage, multiday training for the CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer the CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief delirium screening of older patients by the caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.

The feasibility of using abbreviated delirium screening instruments in the hospital raises a system implementation question: if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles? If so, integrating hospital volunteers into the clinical team could greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of the 3D-CAM and 4AT when administered by nonclinicians before this approach to delirium screening is adopted more broadly.

Practice Points

  • Abbreviated delirium screening tools such as 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in surgical and hospitalized older patients, respectively.
  • Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai

References

1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24

2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865

3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8

4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. J Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x

Geriatric-Centered Interdisciplinary Care Pathway Reduces Delirium in Hospitalized Older Adults With Traumatic Injury


Study 1 Overview (Park et al)

Objective: To examine whether implementation of a geriatric trauma clinical pathway is associated with reduced rates of delirium in older adults with traumatic injury.

Design: Retrospective case-control study of electronic health records.

Setting and participants: Eligible patients were persons aged 65 years or older who were admitted to the trauma service and did not undergo an operation. A Geriatric Trauma Care Pathway was developed by a multidisciplinary Stanford Quality Pathways team and formally launched on November 1, 2018. The clinical pathway was designed to incorporate geriatric best practices, which included order sets (eg, age-appropriate nonpharmacological interventions and pharmacological dosages), guidelines (eg, Institute for Healthcare Improvement Age-Friendly Health Systems 4M framework), automated consultations (comprehensive geriatric assessment), and escalation pathways executed by a multidisciplinary team (eg, pain, bowel, and sleep regulation). The clinical pathway began with admission to the emergency department (ED) (ie, automatic triggering of the geriatric trauma care admission order set) and continued with daily multidisciplinary team meetings during acute hospitalization and a transitional care team consultation for postdischarge follow-up or a home visit.
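As a rough illustration of how such an automatic trigger might be expressed as an electronic health record rule, consider the sketch below. The field names, order-set contents, and eligibility logic are hypothetical and do not represent the Stanford team's actual build.

```python
# Illustrative only: a hypothetical eligibility rule that auto-triggers a
# geriatric trauma admission order set. Not an actual EHR implementation.

from dataclasses import dataclass

@dataclass
class Admission:
    age: int
    service: str          # admitting service, eg, "trauma"
    planned_operation: bool

GERIATRIC_TRAUMA_ORDER_SET = [
    "comprehensive geriatric assessment consult",
    "age-adjusted analgesic dosing",
    "nonpharmacologic delirium precautions",
    "bowel regimen",
    "sleep hygiene protocol",
]

def triggered_orders(adm: Admission) -> list[str]:
    """Return the geriatric order set if pathway criteria are met."""
    eligible = (adm.age >= 65 and adm.service == "trauma"
                and not adm.planned_operation)
    return GERIATRIC_TRAUMA_ORDER_SET if eligible else []

print(triggered_orders(Admission(age=82, service="trauma", planned_operation=False)))
```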

Main outcome measures: The primary outcome was delirium as determined by a positive Confusion Assessment Method (CAM) score or a diagnosis of delirium by the clinical team. The secondary outcome was hospital length of stay (LOS). Process measures for pathway compliance (eg, achieving adequate pain control, early mobilization, advance care planning) were assessed. Outcome measures were compared between patients who underwent the Geriatric Trauma Care Pathway intervention (postimplementation group) vs patients who were treated prior to pathway implementation (baseline pre-implementation group).

Main results: Of the 859 eligible patients, 712 were included in the analysis (442 [62.1%] in the baseline pre-implementation group and 270 [37.9%] in the postimplementation group); mean (SD) age was 81.4 (9.1) years, and 394 (55.3%) were women. The injury mechanism was similar between groups, with falls being the most common cause of injury (247 [55.9%] in the baseline group vs 162 [60.0%] in the postimplementation group; P = .43). Injuries as measured by Injury Severity Score (ISS) were minor or moderate in both groups (261 [59.0%] in baseline group vs 168 [62.2%] in postimplementation group; P = .87). The adjusted odds of delirium were lower in the postimplementation group than in the baseline pre-implementation group (OR, 0.54; 95% CI, 0.37-0.80; P < .001). Measures of advance care planning in the postimplementation group improved, including more frequent goals-of-care documentation (53.7% in postimplementation group vs 16.7% in baseline group; P < .001) and a shortened time to first goals-of-care discussion upon presenting to the ED (36 hours in postimplementation group vs 50 hours in baseline group; P = .03).

Conclusion: Implementation of a multidisciplinary geriatric trauma clinical pathway for older adults with traumatic injury at a single level I trauma center was associated with reduced rates of delirium.

Study 2 Overview (Bryant et al)

Objective: To determine whether an interdisciplinary care pathway for frail trauma patients can reduce in-hospital mortality, complications, and 30-day readmissions.

Design: Retrospective cohort study of frail patients.

Setting and participants: Eligible patients were persons aged 65 years or older who were admitted to the trauma service and survived more than 24 hours; admitted to and discharged from the trauma unit; and determined to be pre-frail or frail by a geriatrician’s assessment. A Frailty Identification and Care Pathway designed to reduce delirium and complications in frail older trauma patients was developed by a multidisciplinary team and implemented in 2016. The standardized evidence-based interdisciplinary care pathway included utilization of order sets and interventions for delirium prevention, early ambulation, bowel and pain regimens, nutrition and physical therapy consults, medication management, care-goal setting, and geriatric assessments.

Main outcome measures: The main outcomes were delirium as determined by a positive CAM score, major complications as defined by the Trauma Quality Improvement Program, in-hospital mortality, and 30-day hospital readmission. Outcome measures were compared between patients who underwent the Frailty Identification and Care Pathway intervention (postintervention group) vs patients who were treated prior to pathway implementation (pre-intervention group).

Main results: A total of 269 frail patients were included in the analysis (125 in the pre-intervention group vs 144 in the postintervention group). Patient demographic and admission characteristics were similar between the 2 groups: mean age was 83.5 (7.1) years, 60.6% were women, and median ISS was 10 (interquartile range [IQR], 9-14). The injury mechanism was similar between groups, with falls accounting for 92.8% and 86.1% of injuries in the pre-intervention and postintervention groups, respectively (P = .07). In univariate analysis, the Frailty Identification and Care Pathway intervention was associated with a significant reduction in delirium (12.5% vs 21.6%, P = .04) and 30-day hospital readmission (2.7% vs 9.6%, P = .01) compared to the pre-intervention group. However, rates of major complications (28.5% vs 28.0%, P = .93) and in-hospital mortality (4.2% vs 7.2%, P = .28) were similar between the pre-intervention and postintervention groups. In multivariate logistic regression models adjusted for patient characteristics (age, sex, race, ISS), patients in the postintervention group had lower rates of delirium (OR, 0.44; 95% CI, 0.22-0.88; P = .02) and 30-day hospital readmission (OR, 0.25; 95% CI, 0.07-0.84; P = .02) compared to those in the pre-intervention group.
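For readers who wish to see the form of such a model, the sketch below fits a multivariable logistic regression (adjusting for age, sex, and ISS; race is omitted for brevity) on synthetic data and reports adjusted odds ratios. It assumes the pandas and statsmodels libraries; the data and coefficients are fabricated for illustration and are unrelated to the study's results.

```python
# Illustrative only: multivariable logistic regression on synthetic data,
# mirroring the form of the adjusted model described above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "post_pathway": rng.integers(0, 2, n),   # 1 = admitted after rollout
    "age": rng.normal(83, 7, n),
    "female": rng.integers(0, 2, n),
    "iss": rng.integers(4, 25, n),           # Injury Severity Score
})
# Synthetic outcome: delirium odds are lowered in the post-pathway group
log_odds = (-2.0 - 0.8 * df["post_pathway"]
            + 0.03 * (df["age"] - 83) + 0.05 * (df["iss"] - 10))
df["delirium"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = smf.logit("delirium ~ post_pathway + age + female + iss",
                  data=df).fit(disp=0)
print(np.exp(model.params))       # adjusted odds ratios
print(np.exp(model.conf_int()))   # 95% CIs on the odds ratio scale
```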

Conclusion: Implementation of an interdisciplinary care protocol for frail geriatric trauma patients significantly decreased their risks for in-hospital delirium and 30-day hospital readmission.

Commentary

Traumatic injuries in older adults are associated with higher morbidity and mortality compared to younger patients, with falls and motor vehicle accidents accounting for a majority of these injuries. Astoundingly, up to one-third of this vulnerable population presenting to hospitals with an ISS greater than 15 may die during hospitalization.1 As a result, a large number of studies and clinical trials have focused on interventions designed to reduce fall risk, and hence the adverse consequences of traumatic injuries that may arise after falls.2 However, this emphasis on falls prevention has overshadowed the need to develop effective geriatric-centered clinical interventions that improve outcomes in older adults who present to hospitals with traumatic injuries. Furthermore, frailty—a geriatric syndrome indicative of an increased state of vulnerability and predictive of adverse outcomes such as delirium—is highly prevalent in older patients with traumatic injury.3 Thus, there is an urgent need for novel, hospital-based strategies for traumatic injury that thoughtfully redesign the care framework to include evidence-based interventions for geriatric syndromes such as delirium and frailty.

The study reported by Park et al (Study 1) represents the latest effort to evaluate inpatient management strategies designed to improve outcomes in hospitalized older adults who have sustained traumatic injury. Through the implementation of a novel multidisciplinary Geriatric Trauma Care Pathway that incorporates geriatric best practices, this intervention was found to be associated with 46% lower odds of in-hospital delirium (OR, 0.54). Because the pathway included all age-eligible patients across all strata of traumatic injury, rather than preselecting those at highest risk for poor clinical outcomes, its benefits extend to those with minor or moderate injury severity. Furthermore, the improvement in delirium (ie, the primary outcome) is particularly meaningful given that delirium is one of the most common hospital-associated complications and increases hospital LOS, discharge to an institution, and mortality in older adults. Finally, the reduced time to a first goals-of-care discussion and the increased frequency of goals-of-care documentation observed after the intervention should not be overlooked. The improvements in these 2 process measures are particularly important given that advance care planning, an intervention that helps align patients' values, goals, and treatments, is completed at substantially lower rates among older adults in the acute hospital setting.4

Similarly, in an earlier published study, Bryant and colleagues (Study 2) showed that a geriatric-focused interdisciplinary trauma care pathway is associated with delirium risk reduction in hospitalized older trauma patients. Much like in Study 1, the Frailty Identification and Care Pathway utilized in Study 2 is an evidence-based interdisciplinary care pathway that includes the use of geriatric assessments, order sets, and geriatric best practices. Its exclusive inclusion of pre-frail and frail older patients (ie, those at higher risk for poor outcomes) with moderate injury severity (median ISS of 10 [IQR, 9-14]) suggests that this type of care pathway benefits hospitalized older trauma patients who are particularly vulnerable to adverse complications such as delirium. Moreover, the successful use of the FRAIL questionnaire, a validated frailty screening tool, by surgical residents in the ED to initiate this care pathway demonstrates the feasibility of expediting frailty screening in older trauma patients.
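For context, the FRAIL questionnaire comprises 5 self-report items (Fatigue, Resistance, Ambulation, Illnesses, Loss of weight), with scores of 3 to 5 conventionally classified as frail and 1 to 2 as pre-frail. The sketch below is a minimal scorer under that commonly cited convention; it is illustrative only and should be checked against the instrument's official scoring before any clinical use.

```python
# Illustrative only: a minimal scorer for the 5-item FRAIL questionnaire,
# using the commonly cited cutoffs (0 robust, 1-2 pre-frail, 3-5 frail).

def frail_category(fatigue: bool, resistance: bool, ambulation: bool,
                   illnesses: bool, weight_loss: bool) -> str:
    score = sum([fatigue, resistance, ambulation, illnesses, weight_loss])
    if score >= 3:
        return f"frail (score {score})"
    return f"pre-frail (score {score})" if score >= 1 else "robust (score 0)"

print(frail_category(fatigue=True, resistance=True, ambulation=False,
                     illnesses=False, weight_loss=True))  # pre-frail (score 3 would be frail)
```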


Application for Clinical Practice and System Implementation

Findings from the 2 studies discussed in this review indicate that implementation of interdisciplinary clinical care pathways predicated on evidence-based geriatric principles and best practices is a promising approach to reduce delirium in hospitalized older trauma patients. These studies have helped lay the groundwork by outlining the roadmap (eg, processes and infrastructure) needed to create such clinical pathways. The key elements include: (1) integration of a multidisciplinary committee (eg, representation from trauma, emergency, and geriatric medicine, nursing, physical and occupational therapy, pharmacy, and social work) in pathway design and implementation; (2) adaptation of evidence-based geriatric best practices (eg, the Institute for Healthcare Improvement Age-Friendly Health Systems 4M framework [medication, mentation, mobility, what matters]) to prioritize interventions and to design a pathway that incorporates these features; (3) incorporation of comprehensive geriatric assessment by interdisciplinary providers; (4) utilization of validated clinical instruments to assess physical and cognitive function, frailty, delirium, and social determinants of health; (5) modification of electronic health record systems to encompass order sets that incorporate evidence-based nonpharmacological and pharmacological interventions to manage symptoms (eg, delirium, pain, bowel movement, sleep, immobility, polypharmacy) essential to quality geriatric care; and (6) integration of patient and caregiver preferences via goals-of-care discussions and corresponding documentation and communication of these goals.
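As a simple illustration of element (2), the sketch below encodes a 4M-aligned order set as structured data with a trivial compliance check. The domain groupings and item names are hypothetical examples, not any institution's actual order set.

```python
# Illustrative only: a hypothetical 4M-aligned geriatric order set encoded
# as structured data, with a check for outstanding items by domain.

AGE_FRIENDLY_4M_ORDER_SET = {
    "medication": ["avoid high-risk sedatives", "age-adjusted analgesic dosing"],
    "mentation": ["CAM screening every shift", "nonpharmacologic delirium precautions"],
    "mobility": ["physical therapy consult", "early ambulation order"],
    "what matters": ["goals-of-care discussion within 48 hours of ED arrival"],
}

def missing_items(completed: set[str]) -> dict[str, list[str]]:
    """Return outstanding order-set items grouped by 4M domain."""
    return {
        domain: [item for item in items if item not in completed]
        for domain, items in AGE_FRIENDLY_4M_ORDER_SET.items()
        if any(item not in completed for item in items)
    }

done = {"physical therapy consult", "CAM screening every shift"}
print(missing_items(done))
```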

Additionally, these 2 studies impart some strategies that may facilitate the implementation of interdisciplinary clinical care pathways in trauma care. Examples of such facilitators include: (1) collaboration with champions within each specialty to reinforce education and buy-in; (2) creation of automatically triggered order sets upon patient presentation to the ED that unite the distinct features of the clinical pathway; (3) adaptation and reorganization of existing hospital infrastructure and resources to meet the needs of clinical pathway implementation (eg, utilizing information technology resources to develop electronic health record order sets; using the quality department to develop clinical pathway guidelines and electronic outcome dashboards); and (4) development of individualized patient and caregiver education materials based on care needs (eg, principles of delirium prevention and preservation of mobility during hospitalization) to prepare and engage these stakeholders in patient care and recovery.

Practice Points

  • A geriatric interdisciplinary care model can be effectively applied to the management of acute trauma in older patients.
  • Interdisciplinary clinical pathways should incorporate geriatric best practices and guidelines and age-appropriate order sets to prioritize and integrate care.

—Fred Ko, MD, MS

References

1. Hashmi A, Ibrahim-Zada I, Rhee P, et al. Predictors of mortality in geriatric trauma patients: a systematic review and meta-analysis. J Trauma Acute Care Surg. 2014;76(3):894-901. doi:10.1097/TA.0b013e3182ab0763

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:10.1002/14651858.CD012221.pub2

3. Joseph B, Pandit V, Zangbar B, et al. Superiority of frailty over age in predicting outcomes among geriatric trauma patients: a prospective analysis. JAMA Surg. 2014;149(8):766-772. doi:10.1001/jamasurg.2014.296

4. Hopkins SA, Bentley A, Phillips V, Barclay S. Advance care plans and hospitalized frail older adults: a systematic review. BMJ Support Palliat Care. 2020;10(2):164-174. doi:10.1136/bmjspcare-2019-002093

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(4)
Publications
Topics
Page Number
136-139
Sections
Article PDF
Article PDF

Study 1 Overview (Park et al)

Objective: To examine whether implementation of a geriatric trauma clinical pathway is associated with reduced rates of delirium in older adults with traumatic injury.

Design: Retrospective case-control study of electronic health records.

Setting and participants: Eligible patients were persons aged 65 years or older who were admitted to the trauma service and did not undergo an operation. A Geriatric Trauma Care Pathway was developed by a multidisciplinary Stanford Quality Pathways team and formally launched on November 1, 2018. The clinical pathway was designed to incorporate geriatric best practices, which included order sets (eg, age-appropriate nonpharmacological interventions and pharmacological dosages), guidelines (eg, Institute for Healthcare Improvement Age-Friendly Health systems 4M framework), automated consultations (comprehensive geriatric assessment), and escalation pathways executed by a multidisciplinary team (eg, pain, bowel, and sleep regulation). The clinical pathway began with admission to the emergency department (ED) (ie, automatic trigger of geriatric trauma care admission order set), daily multidisciplinary team meetings during acute hospitalization, and a transitional care team consultation for postdischarge follow-up or home visit.

Main outcome measures: The primary outcome was delirium as determined by a positive Confusion Assessment Method (CAM) score or a diagnosis of delirium by the clinical team. The secondary outcome was hospital length of stay (LOS). Process measures for pathway compliance (eg, achieving adequate pain control, early mobilization, advance care planning) were assessed. Outcome measures were compared between patients who underwent the Geriatric Trauma Care Pathway intervention (postimplementation group) vs patients who were treated prior to pathway implementation (baseline pre-implementation group).

Main results: Of the 859 eligible patients, 712 were included in the analysis (442 [62.1%] in the baseline pre-implementation group and 270 [37.9%] in the postimplementation group); mean (SD) age was 81.4 (9.1) years, and 394 (55.3%) were women. The injury mechanism was similar between groups, with falls being the most common cause of injury (247 [55.9%] in the baseline group vs 162 [60.0%] in the postimplementation group; P = .43). Injuries as measured by Injury Severity Score (ISS) were minor or moderate in both groups (261 [59.0%] in baseline group vs 168 [62.2%] in postimplementation group; P = .87). The adjusted odds ratio (OR) for delirium in the postimplementation group was lower compared to the baseline pre-implementation group (OR, 0.54; 95% CI, 0.37-0.80; P < .001). Measures of advance care planning in the postimplementation group improved, including more frequent goals-of-care documentation (53.7% in postimplementation group vs 16.7% in baseline group; P < .001) and a shortened time to first goals-of-care discussion upon presenting to the ED (36 hours in postimplementation group vs 50 hours in baseline group; P = .03).

Conclusion: Implementation of a multidisciplinary geriatric trauma clinical pathway for older adults with traumatic injury at a single level I trauma center was associated with reduced rates of delirium.

Study 2 Overview (Bryant et al)

Objective: To determine whether an interdisciplinary care pathway for frail trauma patients can improve in-hospital mortality, complications, and 30-day readmissions.

Design: Retrospective cohort study of frail patients.

Setting and participants: Eligible patients were persons aged 65 years or older who were admitted to the trauma service and survived more than 24 hours; admitted to and discharged from the trauma unit; and determined to be pre-frail or frail by a geriatrician’s assessment. A Frailty Identification and Care Pathway designed to reduce delirium and complications in frail older trauma patients was developed by a multidisciplinary team and implemented in 2016. The standardized evidence-based interdisciplinary care pathway included utilization of order sets and interventions for delirium prevention, early ambulation, bowel and pain regimens, nutrition and physical therapy consults, medication management, care-goal setting, and geriatric assessments.

Main outcome measures: The main outcomes were delirium as determined by a positive CAM score, major complications as defined by the Trauma Quality Improvement Project, in-hospital mortality, and 30-day hospital readmission. Outcome measures were compared between patients who underwent Frailty Identification and Care Pathway intervention (postintervention group) vs patients who were treated prior to pathway implementation (pre-intervention group).

Main results: A total of 269 frail patients were included in the analysis (125 in pre-intervention group vs 144 in postintervention group). Patient demographic and admission characteristics were similar between the 2 groups: mean age was 83.5 (7.1) years, 60.6% were women, and median ISS was 10 (interquartile range [IQR], 9-14). The injury mechanism was similar between groups, with falls accounting for 92.8% and 86.1% of injuries in the pre-intervention and postintervention groups, respectively (P = .07). In univariate analysis, the Frailty Identification and Care Pathway intervention was associated with a significant reduction in delirium (12.5% vs 21.6%, P = .04) and 30-day hospital readmission (2.7% vs 9.6%, P = .01) compared to patients in the pre-intervention group. However, rates of major complications (28.5% vs 28.0%, P = 0.93) and in-hospital mortality (4.2% vs 7.2%, P = .28) were similar between the pre-intervention and postintervention groups. In multivariate logistic regression models adjusted for patient characteristics (age, sex, race, ISS), patients in the postintervention group had lower delirium (OR, 0.44; 95% CI, 0.22-0.88; P = .02) and 30-day hospital readmission (OR, 0.25; 95% CI, 0.07-0.84; P = .02) rates compared to those in the pre-intervention group.

Conclusion: Implementation of an interdisciplinary care protocol for frail geriatric trauma patients significantly decreased their risks for in-hospital delirium and 30-day hospital readmission.

 

 

Commentary

Traumatic injuries in older adults are associated with higher morbidity and mortality compared to younger patients, with falls and motor vehicle accidents accounting for a majority of these injuries. Astoundingly, up to one-third of this vulnerable population presenting to hospitals with an ISS greater than 15 may die during hospitalization.1 As a result, a large number of studies and clinical trials have focused on interventions that are designed to reduce fall risks, and hence reduce adverse consequences of traumatic injuries that may arise after falls.2 However, this emphasis on falls prevention has overshadowed a need to develop effective geriatric-centered clinical interventions that aim to improve outcomes in older adults who present to hospitals with traumatic injuries. Furthermore, frailty—a geriatric syndrome indicative of an increased state of vulnerability and predictive of adverse outcomes such as delirium—is highly prevalent in older patients with traumatic injury.3 Thus, there is an urgent need to develop novel, hospital-based, traumatic injury–targeting strategies that incorporate a thoughtful redesign of the care framework that includes evidence-based interventions for geriatric syndromes such as delirium and frailty.

The study reported by Park et al (Study 1) represents the latest effort to evaluate inpatient management strategies designed to improve outcomes in hospitalized older adults who have sustained traumatic injury. Through the implementation of a novel multidisciplinary Geriatric Trauma Care Pathway that incorporates geriatric best practices, this intervention was found to be associated with a 46% lower risk of in-hospital delirium. Because of the inclusion of all age-eligible patients across all strata of traumatic injuries, rather than preselecting for those at the highest risk for poor clinical outcomes, the benefits of this intervention extend to those with minor or moderate injury severity. Furthermore, the improvement in delirium (ie, the primary outcome) is particularly meaningful given that delirium is one of the most common hospital-associated complications that increase hospital LOS, discharge to an institution, and mortality in older adults. Finally, the study’s observed reduced time to a first goals-of-care discussion and increased frequency of goals-of-care documentation after intervention should not be overlooked. The improvements in these 2 process measures are highly significant given that advanced care planning, an intervention that helps to align patients’ values, goals, and treatments, is completed at substantially lower rates in older adults in the acute hospital setting.4

Similarly, in an earlier published study, Bryant and colleagues (Study 2) also show that a geriatric-focused interdisciplinary trauma care pathway is associated with delirium risk reduction in hospitalized older trauma patients. Much like Study 1, the Frailty Identification and Care Pathway utilized in Study 2 is an evidence-based interdisciplinary care pathway that includes the use of geriatric assessments, order sets, and geriatric best practices. Moreover, its exclusive inclusion of pre-frail and frail older patients (ie, those at higher risk for poor outcomes) with moderate injury severity (median ISS of 10 [IQR, 9-14]) suggests that this type of care pathway benefits hospitalized older trauma patients, who are particularly vulnerable to adverse complications such as delirium. Moreover, the successful utilization of the FRAIL questionnaire, a validated frailty screening tool, by surgical residents in the ED to initiate this care pathway demonstrates the feasibility of its use in expediting frailty screening in older patients in trauma care.

 

 

Application for Clinical Practice and System Implementation

Findings from the 2 studies discussed in this review indicate that implementation of interdisciplinary clinical care pathways predicated on evidence-based geriatric principles and best practices is a promising approach to reduce delirium in hospitalized older trauma patients. These studies have helped to lay the groundwork in outlining the roadmaps (eg, processes and infrastructures) needed to create such clinical pathways. These key elements include: (1) integration of a multidisciplinary committee (eg, representation from trauma, emergency, and geriatric medicine, nursing, physical and occupational therapy, pharmacy, social work) in pathway design and implementation; (2) adaption of evidence-based geriatric best practices (eg, the Institute for Healthcare Improvement Age-Friendly Health System 4M framework [medication, mentation, mobility, what matters]) to prioritize interventions and to design a pathway that incorporates these features; (3) incorporation of comprehensive geriatric assessment by interdisciplinary providers; (4) utilization of validated clinical instruments to assess physical and cognitive functions, frailty, delirium, and social determinants of health; (5) modification of electronic health record systems to encompass order sets that incorporate evidence-based, nonpharmacological and pharmacological interventions to manage symptoms (eg, delirium, pain, bowel movement, sleep, immobility, polypharmacy) essential to quality geriatric care; and (6) integration of patient and caregiver preferences via goals-of-care discussions and corresponding documentation and communication of these goals.

Additionally, these 2 studies imparted some strategies that may facilitate the implementation of interdisciplinary clinical care pathways in trauma care. Examples of such facilitators include: (1) collaboration with champions within each specialty to reinforce education and buy-in; (2) creation of automatically triggered order sets upon patient presentation to the ED that unites distinct features of clinical pathways; (3) adaption and reorganization of existing hospital infrastructures and resources to meet the needs of clinical pathways implementation (eg, utilizing information technology resources to develop electronic health record order sets; using quality department to develop clinical pathway guidelines and electronic outcome dashboards); and (4) development of individualized patient and caregiver education materials based on care needs (eg, principles of delirium prevention and preservation of mobility during hospitalization) to prepare and engage these stakeholders in patient care and recovery.

Practice Points

  • A geriatric interdisciplinary care model can be effectively applied to the management of acute trauma in older patients.
  • Interdisciplinary clinical pathways should incorporate geriatric best practices and guidelines and age-appropriate order sets to prioritize and integrate care.

—Fred Ko, MD, MS


References

1. Hashmi A, Ibrahim-Zada I, Rhee P, et al. Predictors of mortality in geriatric trauma patients: a systematic review and meta-analysis. J Trauma Acute Care Surg. 2014;76(3):894-901. doi:10.1097/TA.0b013e3182ab0763

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:10.1002/14651858.CD012221.pub2

3. Joseph B, Pandit V, Zangbar B, et al. Superiority of frailty over age in predicting outcomes among geriatric trauma patients: a prospective analysis. JAMA Surg. 2014;149(8):766-772. doi:10.1001/jamasurg.2014.296

4. Hopkins SA, Bentley A, Phillips V, Barclay S. Advance care plans and hospitalized frail older adults: a systematic review. BMJ Support Palliat Care. 2020;10(2):164-174. doi:10.1136/bmjspcare-2019-002093


Trastuzumab Deruxtecan in HER2-Positive Breast Cancer

Article Type
Changed
Wed, 08/03/2022 - 13:41
Display Headline
Trastuzumab Deruxtecan in HER2-Positive Breast Cancer

Study 1 Overview (Cortés et al)

Objective: To compare the efficacy and safety of trastuzumab deruxtecan with those of trastuzumab emtansine in patients with HER2-positive metastatic breast cancer previously treated with trastuzumab and taxane.

Design: Phase 3, multicenter, open-label randomized trial conducted at 169 centers in 15 countries.

Setting and participants: Eligible patients had to have unresectable or metastatic HER2-positive breast cancer that had progressed during or after treatment with trastuzumab and a taxane or had disease that progressed within 6 months after neoadjuvant or adjuvant treatment involving trastuzumab or taxane. Patients with stable or previously treated brain metastases were eligible. Patients were not eligible for the study if they had symptomatic brain metastases, prior exposure to trastuzumab emtansine, or a history of interstitial lung disease.

Intervention: Patients were randomized in a 1:1 fashion to receive either trastuzumab deruxtecan 5.4 mg/kg every 3 weeks or trastuzumab emtansine 3.6 mg/kg every 3 weeks. Patients were stratified according to hormone-receptor status, prior treatment with pertuzumab, and the presence or absence of visceral disease.
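The weight-based dosing in both arms is straightforward arithmetic; as a small illustrative sketch (not clinical dosing software), the per-cycle dose works out as follows:

    # Per-cycle dose = dose level (mg/kg) x body weight (kg), given every 3 weeks.
    def cycle_dose_mg(mg_per_kg: float, weight_kg: float) -> float:
        return mg_per_kg * weight_kg

    for drug, level in [("trastuzumab deruxtecan", 5.4), ("trastuzumab emtansine", 3.6)]:
        print(f"{drug}: {cycle_dose_mg(level, 70.0):.0f} mg per 21-day cycle for a 70-kg patient")
    # Prints 378 mg and 252 mg, respectively.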

Main outcome measures: The primary endpoint of the study was progression-free survival as determined by an independent central review. Secondary endpoints included overall survival, overall response, and safety.

Main results: A total of 524 patients were enrolled in the study, with 261 patients randomized to trastuzumab deruxtecan and 263 patients randomized to trastuzumab emtansine. The demographic and baseline characteristics were similar between the 2 cohorts, and approximately 60% of patients in both groups had received prior pertuzumab therapy. Stable brain metastases were present in around 20% of patients in each group, and 70% of patients in each group had visceral disease. The median duration of follow-up was 16.2 months with trastuzumab deruxtecan and 15.3 months with trastuzumab emtansine.

The median progression-free survival was not reached in the trastuzumab deruxtecan group and was 6.8 months (95% CI, 5.6-8.2) in the trastuzumab emtansine group. At 12 months, the percentage of patients alive without disease progression was significantly higher in the trastuzumab deruxtecan group than in the trastuzumab emtansine group. The hazard ratio for disease progression or death from any cause was 0.28 (95% CI, 0.22-0.37; P < .001). Subgroup analyses showed a progression-free survival benefit with trastuzumab deruxtecan across all subgroups.
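For readers less familiar with how these endpoints are derived, the sketch below fits Kaplan-Meier curves and a Cox proportional hazards model to synthetic data generated so that the true hazard ratio is about 0.28. It is purely illustrative: it does not use or reproduce trial data, and it assumes the lifelines package is installed.

    # Kaplan-Meier and Cox PH on synthetic data, to illustrate how median PFS,
    # 12-month PFS, and the hazard ratio are estimated (not trial data).
    import numpy as np
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    rng = np.random.default_rng(0)
    n, cutoff = 260, 16.0  # patients per arm; administrative censoring (months)
    t_dxd = rng.exponential(scale=36.0, size=n)  # hazards 1/36 vs 1/10 -> true HR ~0.28
    t_dm1 = rng.exponential(scale=10.0, size=n)

    df = pd.DataFrame({
        "time": np.concatenate([np.minimum(t_dxd, cutoff), np.minimum(t_dm1, cutoff)]),
        "event": np.concatenate([t_dxd <= cutoff, t_dm1 <= cutoff]).astype(int),
        "t_dxd": [1] * n + [0] * n,
    })

    kmf = KaplanMeierFitter()
    kmf.fit(df.loc[df.t_dxd == 1, "time"], df.loc[df.t_dxd == 1, "event"])
    print(kmf.median_survival_time_)                     # inf, ie, "not reached"
    print(kmf.survival_function_at_times(12.0).iloc[0])  # ~0.7 progression-free at 12 months

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.hazard_ratios_["t_dxd"])                   # ~0.28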

At the time of this analysis, the percentage of patients who were alive at 12 months was 94% with trastuzumab deruxtecan and 85.9% with trastuzumab emtansine. The response rates were significantly higher with trastuzumab deruxtecan compared with trastuzumab emtansine (79.7% vs 34.2%). A complete response was seen in 16% of patients in the trastuzumab deruxtecan arm, compared with 8.7% of patients in the trastuzumab emtansine group. The disease control rate (complete response, partial response, or stable disease) was higher in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group (96.6% vs 76.8%).

Serious adverse events were reported in 19% of patients in the trastuzumab deruxtecan group and 18% of patients in the trastuzumab emtansine group. Discontinuation due to adverse events was more frequent with trastuzumab deruxtecan, with 13.6% of patients discontinuing the drug. Grade 3 or higher adverse events were seen in 52% of patients treated with trastuzumab deruxtecan and 48% of patients treated with trastuzumab emtansine. The most commonly reported adverse events with trastuzumab deruxtecan were nausea/vomiting and fatigue, both of which occurred more frequently than with trastuzumab emtansine. No drug-related grade 5 adverse events were reported.

In the trastuzumab deruxtecan group, 10.5% of patients developed interstitial lung disease or pneumonitis: 7 patients had grade 1 events, 18 had grade 2 events, and 2 had grade 3 events. No grade 4 or 5 events were noted in either treatment group. The median time to onset of interstitial lung disease or pneumonitis with trastuzumab deruxtecan was 168 days (range, 33-507). Discontinuation of therapy due to interstitial lung disease or pneumonitis occurred in 8% of patients receiving trastuzumab deruxtecan and 1% of patients receiving trastuzumab emtansine.

Conclusion: Trastuzumab deruxtecan significantly decreases the risk of disease progression or death compared to trastuzumab emtansine in patients with HER2-positive metastatic breast cancer who have progressed on prior trastuzumab and taxane-based therapy.

 

 

Study 2 Overview (Modi et al)

Objective: To assess the efficacy of trastuzumab deruxtecan in patients with unresectable or metastatic breast cancer with low levels of HER2 expression.

Design: This was a randomized, 2-group, open-label, phase 3 trial.

Setting and participants: The trial was designed with a planned enrollment of 480 patients with hormone receptor–positive disease and 60 patients with hormone receptor–negative disease. Patients were randomized in a 2:1 ratio. Randomization was stratified according to HER2 status (immunohistochemical [IHC] 1+ vs IHC 2+/in situ hybridization [ISH] negative), number of prior lines of therapy, and hormone-receptor status. IHC scores for HER2 expression were determined through central testing. Specimens that had HER2 IHC scores of 2+ were reflexed to ISH. Specimens were considered HER2-low-expressing if they had an IHC score of 1+ or if they had an IHC score of 2+ and were ISH negative.
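The HER2-low definition used for enrollment reduces to a simple decision rule (IHC 1+; or IHC 2+ reflexed to ISH and found negative). A minimal sketch of that rule, with category labels chosen for illustration:

    # Decision rule for HER2 categorization as described above (illustrative).
    def her2_category(ihc_score: str, ish_positive=None) -> str:
        # ihc_score is one of "0", "1+", "2+", "3+"; ISH is consulted only for 2+.
        if ihc_score == "3+":
            return "HER2-positive"
        if ihc_score == "2+":
            if ish_positive is None:
                raise ValueError("IHC 2+ specimens are reflexed to ISH")
            return "HER2-positive" if ish_positive else "HER2-low"
        if ihc_score == "1+":
            return "HER2-low"
        return "HER2-negative (IHC 0)"

    print(her2_category("2+", ish_positive=False))  # HER2-low -> trial-eligible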

Eligible patients had to have received chemotherapy for metastatic disease or had disease recurrence during or within 6 months after completing adjuvant chemotherapy. Patients with hormone receptor–positive disease must have had at least 1 line of endocrine therapy. Patients were eligible if they had stable brain metastases. Patients with interstitial lung disease were excluded.

Intervention: Patients were randomized to receive trastuzumab deruxtecan 5.4 mg/kg every 3 weeks or physician’s choice of chemotherapy (capecitabine, eribulin, gemcitabine, paclitaxel, or nab-paclitaxel).

Main outcome measures: The primary endpoint was progression-free survival in patients with hormone receptor–positive disease. Secondary endpoints were progression-free survival among all patients, overall survival in hormone receptor–positive patients, and overall survival in all patients. Additional secondary endpoints included objective response rates, duration of response, and efficacy in hormone receptor–negative patients.

Main results: A total of 373 patients were assigned to the trastuzumab deruxtecan group and 184 patients were assigned to the physician’s choice chemotherapy group; 88% of patients in each cohort were hormone receptor–positive. In the physician’s choice chemotherapy group, 51% received eribulin, 20% received capecitabine, 10% received nab-paclitaxel, 10% received gemcitabine, and 8% received paclitaxel. The demographic and baseline characteristics were similar between both cohorts. The median duration of follow-up was 18.4 months.

The median progression-free survival in the hormone receptor–positive cohort was 10.1 months in the trastuzumab deruxtecan group and 5.4 months in the physician’s choice chemotherapy group (HR, 0.51; 95% CI, 0.40-0.64). Subgroup analyses revealed a benefit across all subgroups. The median progression-free survival was identical among patients with a HER2 IHC score of 1+ and those with a score of 2+/ISH-negative. In patients who had received a prior CDK4/6 inhibitor, the median progression-free survival was also 10 months in the trastuzumab deruxtecan group, while in those who were CDK4/6 inhibitor–naïve it was 11.7 months. The median progression-free survival in all patients was 9.9 months in the trastuzumab deruxtecan group and 5.1 months in the physician’s choice chemotherapy group (HR, 0.50; 95% CI, 0.42-0.58).
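As a rough internal consistency check, if progression-free survival is approximated as exponential (an assumption for back-of-the-envelope purposes, not how the trial analysis was done), the ratio of median PFS times should be about the inverse of the hazard ratio:

    # Exponential model: median = ln(2)/hazard, so the median ratio ~ 1/HR.
    chemo_median_months, hr = 5.4, 0.51
    print(chemo_median_months / hr)  # ~10.6 months, close to the reported 10.1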

The median overall survival in the hormone receptor–positive cohort was 23.9 months in the trastuzumab deruxtecan group compared with 17.5 months in the physician’s choice chemotherapy group (HR, 0.64; 95% CI, 0.48-0.86; P = .003). The median overall survival in the entire population was 23.4 months in the trastuzumab deruxtecan group vs 16.8 months in the physician’s choice chemotherapy group. In the hormone receptor–negative cohort, the median overall survival was 18.2 months in the trastuzumab deruxtecan group and 8.3 months in the physician’s choice chemotherapy group. Complete responses were seen in 3.6% of patients in the trastuzumab deruxtecan group and 0.6% of patients in the physician’s choice chemotherapy group. The median duration of response was 10.7 months in the trastuzumab deruxtecan group and 6.8 months in the physician’s choice chemotherapy group.

The incidence of serious adverse events was 27% in the trastuzumab deruxtecan group and 25% in the physician’s choice chemotherapy group. Grade 3 or higher events occurred in 52% of the trastuzumab deruxtecan group and 67% of the physician’s choice chemotherapy group. Discontinuation due to adverse events occurred in 16% of the trastuzumab deruxtecan group and 18% of the physician’s choice chemotherapy group; 14 patients in the trastuzumab deruxtecan group and 5 patients in the physician’s choice chemotherapy group had an adverse event that was associated with death, including 2 deaths due to pneumonitis in the trastuzumab deruxtecan group. Drug-related interstitial lung disease or pneumonitis occurred in 45 patients who received trastuzumab deruxtecan; the majority of these events were grade 1 or 2, but 3 patients had grade 5 events.

Conclusion: Treatment with trastuzumab deruxtecan led to a significant improvement in progression-free survival compared to physician’s choice chemotherapy in patients with HER2-low metastatic breast cancer.

 

 

Commentary

Trastuzumab deruxtecan is an antibody drug conjugate that consists of a humanized anti-HER2 monoclonal antibody linked to a topoisomerase 1 inhibitor. This antibody drug conjugate is unique compared with prior antibody drug conjugates such as trastuzumab emtansine in that it has a high drug-to-antibody ratio (~8). Furthermore, there appears to be a unique bystander effect resulting in off-target cytotoxicity to neighboring tumor cells, enhancing the efficacy of this novel therapy. Prior studies of trastuzumab deruxtecan have shown durable activity in heavily pretreated patients with metastatic HER2-positive breast cancer.1

HER2-positive breast cancer represents approximately 20% of breast cancer cases in women.2 Historically, HER2 positivity has been defined by strong HER2 expression on IHC staining (ie, a score of 3+) or HER2 amplification on ISH, and HER2-negative disease has encompassed IHC scores of 0, 1+, or 2+ with negative ISH. Patients with low HER2 expression (IHC score of 1+, or 2+ with negative ISH) represent approximately 60% of patients with HER2-negative metastatic breast cancer.3 These patients have limited targeted treatment options after progressing on primary therapy. Prior data have shown that patients with low HER2 expression represent a heterogeneous population; thus, the historic binary categorization of HER2 status as positive or negative may not adequately identify the patients who could derive clinical benefit from HER2-directed therapies. Until now, however, no data had shown improved outcomes with anti-HER2 therapies in patients with low HER2 expression.
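Taken together, and assuming the roughly 60% HER2-low proportion reported in the metastatic setting is broadly representative, these prevalence figures imply that HER2-low disease accounts for close to half of all breast cancer. A quick arithmetic check using the figures above:

    # Approximate share of all breast cancers that are HER2-low, from the text's figures.
    her2_positive = 0.20
    her2_low_among_negative = 0.60
    her2_low_overall = (1 - her2_positive) * her2_low_among_negative
    print(f"~{her2_low_overall:.0%} of all breast cancers are HER2-low")  # ~48%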

The current studies add to the rapidly growing body of literature outlining the efficacy of the novel antibody drug conjugate trastuzumab deruxtecan. The implications of the data presented in these 2 studies are immediately practice changing.

In the DESTINY-Breast03 trial, Cortés and colleagues show that trastuzumab deruxtecan significantly prolongs progression-free survival compared with trastuzumab emtansine in patients with HER2-positive metastatic breast cancer who have progressed on first-line trastuzumab and taxane-based therapy. With a hazard ratio of 0.28 for disease progression or death, the efficacy of trastuzumab deruxtecan highlighted in this trial clearly makes it the standard of care in the second-line setting for patients with metastatic HER2-positive breast cancer. Overall survival data were immature at the time of this analysis, and continued follow-up to validate the results noted here is warranted.

The DESTINY-Breast04 trial by Modi et al profoundly expands the population of patients who may benefit from trastuzumab deruxtecan. This study defines a population of patients with HER2-low metastatic breast cancer who will now be eligible for HER2-directed therapy. The data show that trastuzumab deruxtecan leads to a significant and clinically meaningful improvement in both progression-free survival and overall survival compared with chemotherapy in patients with metastatic breast cancer with low HER2 expression. This benefit was seen both in the hormone receptor–positive cohort and in the overall population, including patients with previously treated triple-negative disease. Notably, the study does not define a threshold of HER2 expression by IHC that predicts benefit: patients with an IHC score of 1+ and those with a score of 2+/ISH-negative benefited to a similar extent. Interestingly, in the DAISY trial, antitumor activity was noted with trastuzumab deruxtecan even in patients without any detectable HER2 expression on IHC.4 Given the inconsistency and potential false negatives of IHC, along with heterogeneous HER2 expression, a reliable quantitative assay of HER2 expression is needed to identify more accurately which patients with low levels of HER2 expression will benefit from this novel antibody drug conjugate.

Finally, trastuzumab deruxtecan has been associated with interstitial lung disease and pneumonitis, which occurred in approximately 10% of patients who received trastuzumab deruxtecan in the DESTINY-Breast03 trial and about 12% of patients in the DESTINY-Breast04 trial. Most of these events were grade 1 or grade 2. Nevertheless, clinicians must be aware of this risk and monitor patients closely for the development of pneumonitis or interstitial lung disease.

 

 

Application for Clinical Practice and System Implementation

The results of the current studies show longer progression-free survival with trastuzumab deruxtecan both in HER2-low metastatic breast cancer and in HER2-positive metastatic breast cancer following taxane and trastuzumab-based therapy. These results are clearly practice changing and represent a new standard of care in these patient populations. It is incumbent upon treating oncologists to work with their pathology colleagues to assess HER2 IHC thoroughly in order to identify all patients who may benefit from trastuzumab deruxtecan in the metastatic setting. The continued advancement of anti-HER2 therapy will undoubtedly have a significant impact on patient outcomes going forward.

Practice Points

  • With a hazard ratio of 0.28 for disease progression or death, the efficacy of trastuzumab deruxtecan highlighted in the DESTINY-Breast03 trial clearly makes this the standard of care in the second-line setting for patients with metastatic HER2-positive breast cancer.
  • In the DESTINY-Breast04 trial, a significant and clinically meaningful improvement in both progression-free survival and overall survival compared with chemotherapy was seen in patients with HER2-low metastatic breast cancer, both in the hormone receptor–positive cohort and in the overall population, including patients with previously treated triple-negative disease.

—Daniel Isaac, DO, MS

References

1. Modi S, Saura C, Yamashita T, et al. Trastuzumab deruxtecan in previously treated HER2-positive breast cancer. N Engl J Med. 2020;382(7):610-621. doi:10.1056/NEJMoa1914510

2. National Cancer Institute. Cancer stat facts. female breast cancer. Accessed July 25, 2022. https://seer.cancer.gov/statfacts/html/breast.html

3. Schettini F, Chic N, Braso-Maristany F, et al. Clinical, pathological and PAM50 gene expression features of HER2-low breast cancer. NPJ Breast Cancer. 2021;7(1):1. doi:10.1038/s41523-020-00208-2

4. Diéras V, Deluche E, Lusque A, et al. Trastuzumab deruxtecan for advanced breast cancer patients, regardless of HER2 status: a phase II study with biomarkers analysis (DAISY). Presented at: 2021 San Antonio Breast Cancer Symposium; December 7-10, 2021; San Antonio, TX. Abstract.

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(4)
Publications
Topics
Page Number
132-136
Sections
Article PDF
Article PDF

Study 1 Overview (Cortés et al)

Objective: To compare the efficacy and safety of trastuzumab deruxtecan with those of trastuzumab emtansine in patients with HER2-positive metastatic breast cancer previously treated with trastuzumab and taxane.

Design: Phase 3, multicenter, open-label randomized trial conducted at 169 centers and 15 countries.

Setting and participants: Eligible patients had to have unresectable or metastatic HER2-positive breast cancer that had progressed during or after treatment with trastuzumab and a taxane or had disease that progressed within 6 months after neoadjuvant or adjuvant treatment involving trastuzumab or taxane. Patients with stable or previously treated brain metastases were eligible. Patients were not eligible for the study if they had symptomatic brain metastases, prior exposure to trastuzumab emtansine, or a history of interstitial lung disease.

Intervention: Patients were randomized in a 1-to-1 fashion to receive either trastuzumab deruxtecan 5.4 mg/kg every 3 weeks or trastuzumab emtansine 3.6 mg/kg every 3 weeks. Patients were stratified according to hormone-receptor status, prior treatment with epratuzumab, and the presence or absence of visceral disease.

Main outcome measures: The primary endpoint of the study was progression-free survival as determined by an independent central review. Secondary endpoints included overall survival, overall response, and safety.

Main results: A total of 524 patients were enrolled in the study, with 261 patients randomized to trastuzumab deruxtecan and 263 patients randomized to trastuzumab emtansine. The demographic and baseline characteristics were similar between the 2 cohorts, and 60% of patients in both groups received prior epratuzumab therapy. Stable brain metastases were present in around 20% of patients in each group, and 70% of patients in each group had visceral disease. The median duration of follow-up was 16.2 months with trastuzumab deruxtecan and 15.3 months with trastuzumab emtansine.

The median progression-free survival was not reached in the trastuzumab deruxtecan group and was 6.8 months in the trastuzumab emtansine group (95% CI, 5.6-8.2). At 12 months the percentage of patients alive without disease progression was significantly larger in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group. The hazard ratio for disease progression or death from any cause was 0.28 (95% CI, 0.22-0.37; P < .001). Subgroup analyses showed a benefit in progression-free survival with trastuzumab deruxtecan across all subgroups.

At the time of this analysis, the percentage of patients who were alive at 12 months was 94% with trastuzumab deruxtecan and 85.9% with trastuzumab emtansine. The response rates were significantly higher with trastuzumab deruxtecan compared with trastuzumab emtansine (79.7% vs 34.2%). A complete response was seen in 16% of patients in the trastuzumab deruxtecan arm, compared with 8.7% of patients in the trastuzumab emtansine group. The disease control rate (complete response, partial response, or stable disease) was higher in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group (96.6% vs 76.8%).

Serious adverse events were reported in 19% of patients in the trastuzumab deruxtecan group and 18% of patients in the trastuzumab emtansine group. Discontinuation due to adverse events was higher in the trastuzumab deruxtecan group, with 13.6% of patients discontinuing trastuzumab deruxtecan. Grade 3 or higher adverse events were seen in 52% of patients treated with trastuzumab deruxtecan and 48% of patients treated with trastuzumab emtansine. The most commonly reported adverse event with trastuzumab deruxtecan was nausea/vomiting and fatigue. These adverse events were seen more in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group. No drug-related grade 5 adverse events were reported.

In the trastuzumab deruxtecan group, 10.5% of patients receiving trastuzumab deruxtecan developed interstitial lung disease or pneumonitis. Seven patients had grade 1 events, 18 patients had grade 2 events, and 2 patients had grade 3 events. No grade 4 or 5 events were noted in either treatment group. The median time to onset of interstitial lung disease or pneumonitis in those receiving trastuzumab deruxtecan was 168 days (range, 33-507). Discontinuation of therapy due to interstitial lung disease or pneumonitis occurred in 8% of patients receiving trastuzumab deruxtecan and 1% of patients receiving trastuzumab emtansine.

Conclusion: Trastuzumab deruxtecan significantly decreases the risk of disease progression or death compared to trastuzumab emtansine in patients with HER2-positive metastatic breast cancer who have progressed on prior trastuzumab and taxane-based therapy.

 

 

Study 2 Overview (Modi et al)

Objective: To assess the efficacy of trastuzumab deruxtecan in patients with unresectable or metastatic breast cancer with low levels of HER2 expression.

Design: This was a randomized, 2-group, open-label, phase 3 trial.

Setting and participants: The trial was designed with a planned enrollment of 480 patients with hormone receptor–positive disease and 60 patients with hormone receptor–negative disease. Patients were randomized in a 2:1 ratio. Randomization was stratified according to HER2 status (immunohistochemical [IHC] 1+ vs IHC 2+/in situ hybridization [ISH] negative), number of prior lines of therapy, and hormone-receptor status. IHC scores for HER2 expression were determined through central testing. Specimens that had HER2 IHC scores of 2+ were reflexed to ISH. Specimens were considered HER2-low-expressing if they had an IHC score of 1+ or if they had an IHC score of 2+ and were ISH negative.

Eligible patients had to have received chemotherapy for metastatic disease or had disease recurrence during or within 6 months after completing adjuvant chemotherapy. Patients with hormone receptor–positive disease must have had at least 1 line of endocrine therapy. Patients were eligible if they had stable brain metastases. Patients with interstitial lung disease were excluded.

Intervention: Patients were randomized to receive trastuzumab deruxtecan 5.4 mg/kg every 3 weeks or physician’s choice of chemotherapy (capecitabine, eribulin, gemcitabine, paclitaxel, or nab-paclitaxel).

Main outcome measures: The primary endpoint was progression-free survival in patients with hormone receptor–positive disease. Secondary endpoints were progression-free survival among all patients, overall survival in hormone receptor–positive patients, and overall survival in all patients. Additional secondary endpoints included objective response rates, duration of response, and efficacy in hormone receptor–negative patients.

Main results: A total of 373 patients were assigned to the trastuzumab deruxtecan group and 184 patients were assigned to the physician’s choice chemotherapy group; 88% of patients in each cohort were hormone receptor–positive. In the physician’s choice chemotherapy group, 51% received eribulin, 20% received capecitabine, 10% received nab-paclitaxel, 10% received gemcitabine, and 8% received paclitaxel. The demographic and baseline characteristics were similar between both cohorts. The median duration of follow-up was 18.4 months.

The median progression-free survival in the hormone receptor–positive cohort was 10.1 months in the trastuzumab deruxtecan group and 5.4 months in the physician’s choice chemotherapy group (HR, 0.51; 95% CI, 0.4-0.64). Subgroup analyses revealed a benefit across all subgroups. The median progression-free survival among patients with a HER2 IHC score of 1+ and those with a HER2 IHC score of 2+/negative ISH were identical. In patients who received a prior CDK 4/6 inhibitor, the median progression-free survival was also 10 months in the trastuzumab deruxtecan group. In those who were CDK 4/6- naïve, the progression-free survival was 11.7 months. The progression-free survival in all patients was 9.9 months in the trastuzumab deruxtecan group and 5.1 months in the physician’s choice chemotherapy group (HR, 0.46; 95% CI, 0.24-0.89).

The median overall survival in the hormone receptor–positive cohort was 23.9 months in the trastuzumab deruxtecan group compared with 17.5 months in the physician’s choice chemotherapy group (HR, 0.64; 95% CI, 0.48-0.86; P = .003). The median overall survival in the entire population was 23.4 months in the trastuzumab deruxtecan group vs 16.8 months in the physician’s choice chemotherapy group. In the hormone receptor–negative cohort, the median overall survival was 18.2 months in the trastuzumab deruxtecan group and 8.3 months in the physician’s choice chemotherapy group. Complete responses were seen in 3.6% in the trastuzumab deruxtecan group and 0.6% and the physician’s choice chemotherapy group. The median duration of response was 10.7 months in the trastuzumab deruxtecan group and 6.8 months in the physician’s choice chemotherapy group.

Incidence of serious adverse events was 27% in the trastuzumab deruxtecan group and 25% in the physician’s choice chemotherapy group. Grade 3 or higher events occurred in 52% of the trastuzumab deruxtecan group and 67% of the physician’s choice chemotherapy group. Discontinuation due to adverse events occurred in 16% in the trastuzumab deruxtecan group and 18% in the physician’s choice chemotherapy group; 14 patients in the trastuzumab deruxtecan group and 5 patients in the physician’s choice chemotherapy group had an adverse event that was associated with death. Death due to pneumonitis in the trastuzumab deruxtecan group occurred in 2 patients. Drug-related interstitial lung disease or pneumonitis occurred in 45 patients who received trastuzumab deruxtecan. The majority of these events were grade 1 and grade 2. However, 3 patients had grade 5 interstitial lung disease or pneumonitis.

Conclusion: Treatment with trastuzumab deruxtecan led to a significant improvement in progression-free survival compared to physician’s choice chemotherapy in patients with HER2-low metastatic breast cancer.

 

 

Commentary

Trastuzumab deruxtecan is an antibody drug conjugate that consists of a humanized anti-HER2 monoclonal antibody linked to a topoisomerase 1 inhibitor. This antibody drug conjugate is unique compared with prior antibody drug conjugates such as trastuzumab emtansine in that it has a high drug-to-antibody ratio (~8). Furthermore, there appears to be a unique bystander effect resulting in off-target cytotoxicity to neighboring tumor cells, enhancing the efficacy of this novel therapy. Prior studies of trastuzumab deruxtecan have shown durable activity in heavily pretreated patients with metastatic HER2-positive breast cancer.1

HER2-positive breast cancer represents approximately 20% of breast cancer cases in women.2 Historically, HER2 positivity has been defined by strong HER2 expression with IHC staining (ie, score 3+) or HER2 amplification through ISH. Conversely, HER2-negative disease has historically been defined as those with IHC scores of 0 or 1+. This group represents approximately 60% of HER2-negative metastatic breast cancer patients.3 These patients have limited targeted treatment options after progressing on primary therapy. Prior data has shown that patients with low HER2 expression represent a heterogeneous population and thus, the historic categorization of HER2 status as positive or negative may in fact not adequately characterize the proportion of patients who may derive clinical benefit from HER2-directed therapies. Nevertheless, there have been no data to date that have shown improved outcomes in low HER2 expressers with anti-HER2 therapies.

The current studies add to the rapidly growing body of literature outlining the efficacy of the novel antibody drug conjugate trastuzumab deruxtecan. The implications of the data presented in these 2 studies are immediately practice changing.

In the DESTINY-Breast03 trial, Cortéz and colleagues show that trastuzumab deruxtecan therapy significantly prolongs progression-free survival compared with trastuzumab emtansine in patients with HER2-positive metastatic breast cancer who have progressed on first-line trastuzumab and taxane-based therapy. With a hazard ratio of 0.28 for disease progression or death, the efficacy of trastuzumab deruxtecan highlighted in this trial clearly makes this the standard of care in the second-line setting for patients with metastatic HER2-positive breast cancer. The overall survival in this trial was immature at the time of this analysis, and thus continued follow-up to validate the results noted here are warranted.

The DESTINY-Breast04 trial by Modi et al expands the cohort of patients who benefit from trastuzumab deruxtecan profoundly. This study defines a population of patients with HER2-low metastatic breast cancer who will now be eligible for HER2-directed therapies. These data show that therapy with trastuzumab deruxtecan leads to a significant and clinically meaningful improvement in both progression-free survival and overall survival compared with chemotherapy in patients with metastatic breast cancer with low expression of HER2. This benefit was seen in both the estrogen receptor–positive cohort as well as the entire population, including pre-treated triple-negative disease. Furthermore, this study does not define a threshold of HER2 expression by IHC that predicts benefit with trastuzumab deruxtecan. Patients with an IHC score of 1+ as well as those with a score of 2+/ISH negative both benefit to a similar extent from trastuzumab deruxtecan. Interestingly, in the DAISY trial, antitumor activity was noted with trastuzumab deruxtecan even in those without any detectable HER2 expression on IHC.4 Given the inconsistency and potential false negatives of IHC along with heterogeneous HER2 expression, further work is needed to better identify patients with low levels of HER2 expression who may benefit from this novel antibody drug conjugate. Thus, a reliable test to quantitatively assess the level of HER2 expression is needed in order to determine more accurately which patients will benefit from trastuzumab deruxtecan.

Last, trastuzumab deruxtecan has been associated with interstitial lung disease and pneumonitis. Interstitial lung disease and pneumonitis occurred in approximately 10% of patients who received trastuzumab deruxtecan in the DESTINY-Breast03 trial and about 12% of patients in the DESTINY-Breast04 trial. Most of these events were grade 1 and grade 2. Nevertheless, clinicians must be aware of this risk and monitor patients frequently for the development of pneumonitis or interstitial lung disease.

 

 

Application for Clinical Practice and System Implementation

The results of the current studies show a longer progression-free survival with trastuzumab deruxtecan in both HER2-low expressing metastatic breast cancer and HER2-positive metastatic breast cancer following taxane and trastuzumab-based therapy. These results are clearly practice changing and represent a new standard of care in these patient populations. It is incumbent upon treating oncologists to work with our pathology colleagues to assess HER2 IHC thoroughly in order to identify all potential patients who may benefit from trastuzumab deruxtecan in the metastatic setting. The continued advancement of anti-HER2 therapy will undoubtedly have a significant impact on patient outcomes going forward.

Practice Points

  • With a hazard ratio of 0.28 for disease progression or death, the efficacy of trastuzumab deruxtecan highlighted in the DESTINY-Breast03 trial clearly makes this the standard of care in the second-line setting for patients with metastatic HER2-positive breast cancer.
  • In the DESTINY-Breast04 trial, a significant and clinically meaningful improvement in both progression-free survival and overall survival compared with chemotherapy was seen in patients with metastatic breast cancer with low expression of HER2, including both the estrogen receptor–positive cohort as well as the entire population, including those with pre-treated triple-negative disease.

­—Daniel Isaac, DO, MS

Study 1 Overview (Cortés et al)

Objective: To compare the efficacy and safety of trastuzumab deruxtecan with those of trastuzumab emtansine in patients with HER2-positive metastatic breast cancer previously treated with trastuzumab and taxane.

Design: Phase 3, multicenter, open-label randomized trial conducted at 169 centers and 15 countries.

Setting and participants: Eligible patients had to have unresectable or metastatic HER2-positive breast cancer that had progressed during or after treatment with trastuzumab and a taxane or had disease that progressed within 6 months after neoadjuvant or adjuvant treatment involving trastuzumab or taxane. Patients with stable or previously treated brain metastases were eligible. Patients were not eligible for the study if they had symptomatic brain metastases, prior exposure to trastuzumab emtansine, or a history of interstitial lung disease.

Intervention: Patients were randomized in a 1-to-1 fashion to receive either trastuzumab deruxtecan 5.4 mg/kg every 3 weeks or trastuzumab emtansine 3.6 mg/kg every 3 weeks. Patients were stratified according to hormone-receptor status, prior treatment with epratuzumab, and the presence or absence of visceral disease.

Main outcome measures: The primary endpoint of the study was progression-free survival as determined by an independent central review. Secondary endpoints included overall survival, overall response, and safety.

Main results: A total of 524 patients were enrolled in the study, with 261 patients randomized to trastuzumab deruxtecan and 263 patients randomized to trastuzumab emtansine. The demographic and baseline characteristics were similar between the 2 cohorts, and 60% of patients in both groups received prior epratuzumab therapy. Stable brain metastases were present in around 20% of patients in each group, and 70% of patients in each group had visceral disease. The median duration of follow-up was 16.2 months with trastuzumab deruxtecan and 15.3 months with trastuzumab emtansine.

The median progression-free survival was not reached in the trastuzumab deruxtecan group and was 6.8 months in the trastuzumab emtansine group (95% CI, 5.6-8.2). At 12 months the percentage of patients alive without disease progression was significantly larger in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group. The hazard ratio for disease progression or death from any cause was 0.28 (95% CI, 0.22-0.37; P < .001). Subgroup analyses showed a benefit in progression-free survival with trastuzumab deruxtecan across all subgroups.

At the time of this analysis, the percentage of patients who were alive at 12 months was 94% with trastuzumab deruxtecan and 85.9% with trastuzumab emtansine. The response rates were significantly higher with trastuzumab deruxtecan compared with trastuzumab emtansine (79.7% vs 34.2%). A complete response was seen in 16% of patients in the trastuzumab deruxtecan arm, compared with 8.7% of patients in the trastuzumab emtansine group. The disease control rate (complete response, partial response, or stable disease) was higher in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group (96.6% vs 76.8%).

Serious adverse events were reported in 19% of patients in the trastuzumab deruxtecan group and 18% of patients in the trastuzumab emtansine group. Discontinuation due to adverse events was higher in the trastuzumab deruxtecan group, with 13.6% of patients discontinuing trastuzumab deruxtecan. Grade 3 or higher adverse events were seen in 52% of patients treated with trastuzumab deruxtecan and 48% of patients treated with trastuzumab emtansine. The most commonly reported adverse event with trastuzumab deruxtecan was nausea/vomiting and fatigue. These adverse events were seen more in the trastuzumab deruxtecan group compared with the trastuzumab emtansine group. No drug-related grade 5 adverse events were reported.

In the trastuzumab deruxtecan group, 10.5% of patients receiving trastuzumab deruxtecan developed interstitial lung disease or pneumonitis. Seven patients had grade 1 events, 18 patients had grade 2 events, and 2 patients had grade 3 events. No grade 4 or 5 events were noted in either treatment group. The median time to onset of interstitial lung disease or pneumonitis in those receiving trastuzumab deruxtecan was 168 days (range, 33-507). Discontinuation of therapy due to interstitial lung disease or pneumonitis occurred in 8% of patients receiving trastuzumab deruxtecan and 1% of patients receiving trastuzumab emtansine.

Conclusion: Trastuzumab deruxtecan significantly decreases the risk of disease progression or death compared to trastuzumab emtansine in patients with HER2-positive metastatic breast cancer who have progressed on prior trastuzumab and taxane-based therapy.

 

 

Study 2 Overview (Modi et al)

Objective: To assess the efficacy of trastuzumab deruxtecan in patients with unresectable or metastatic breast cancer with low levels of HER2 expression.

Design: This was a randomized, 2-group, open-label, phase 3 trial.

Setting and participants: The trial was designed with a planned enrollment of 480 patients with hormone receptor–positive disease and 60 patients with hormone receptor–negative disease. Patients were randomized in a 2:1 ratio. Randomization was stratified according to HER2 status (immunohistochemical [IHC] 1+ vs IHC 2+/in situ hybridization [ISH] negative), number of prior lines of therapy, and hormone-receptor status. IHC scores for HER2 expression were determined through central testing. Specimens that had HER2 IHC scores of 2+ were reflexed to ISH. Specimens were considered HER2-low-expressing if they had an IHC score of 1+ or if they had an IHC score of 2+ and were ISH negative.

Eligible patients had to have received chemotherapy for metastatic disease or had disease recurrence during or within 6 months after completing adjuvant chemotherapy. Patients with hormone receptor–positive disease must have had at least 1 line of endocrine therapy. Patients were eligible if they had stable brain metastases. Patients with interstitial lung disease were excluded.

Intervention: Patients were randomized to receive trastuzumab deruxtecan 5.4 mg/kg every 3 weeks or physician’s choice of chemotherapy (capecitabine, eribulin, gemcitabine, paclitaxel, or nab-paclitaxel).

Main outcome measures: The primary endpoint was progression-free survival in patients with hormone receptor–positive disease. Secondary endpoints were progression-free survival among all patients, overall survival in hormone receptor–positive patients, and overall survival in all patients. Additional secondary endpoints included objective response rates, duration of response, and efficacy in hormone receptor–negative patients.

Main results: A total of 373 patients were assigned to the trastuzumab deruxtecan group and 184 patients were assigned to the physician’s choice chemotherapy group; 88% of patients in each cohort were hormone receptor–positive. In the physician’s choice chemotherapy group, 51% received eribulin, 20% received capecitabine, 10% received nab-paclitaxel, 10% received gemcitabine, and 8% received paclitaxel. The demographic and baseline characteristics were similar between both cohorts. The median duration of follow-up was 18.4 months.

The median progression-free survival in the hormone receptor–positive cohort was 10.1 months in the trastuzumab deruxtecan group and 5.4 months in the physician’s choice chemotherapy group (HR, 0.51; 95% CI, 0.4-0.64). Subgroup analyses revealed a benefit across all subgroups. Median progression-free survival was identical among patients with a HER2 IHC score of 1+ and those with a HER2 IHC score of 2+/ISH-negative. In patients who had received a prior CDK 4/6 inhibitor, the median progression-free survival in the trastuzumab deruxtecan group was 10 months; in those who were CDK 4/6 inhibitor–naïve, it was 11.7 months. The median progression-free survival in all patients was 9.9 months in the trastuzumab deruxtecan group and 5.1 months in the physician’s choice chemotherapy group (HR, 0.46; 95% CI, 0.24-0.89).
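As a point of interpretation (a standard reading of a proportional-hazards ratio, not an additional analysis reported by the trial; T-DXd is used here only as shorthand for trastuzumab deruxtecan), the hazard ratio compares the instantaneous event rates in the two arms:

```latex
% Under the proportional-hazards assumption, the hazard ratio (HR) is the
% ratio of the instantaneous rates of progression or death in the two arms:
\[
\mathrm{HR} = \frac{h_{\text{T-DXd}}(t)}{h_{\text{chemo}}(t)}
\]
% An HR of 0.51 in the hormone receptor-positive cohort thus corresponds to
% a modeled 49% relative reduction (1 - 0.51) in the rate of progression or
% death at any given time t with trastuzumab deruxtecan.
```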

The median overall survival in the hormone receptor–positive cohort was 23.9 months in the trastuzumab deruxtecan group compared with 17.5 months in the physician’s choice chemotherapy group (HR, 0.64; 95% CI, 0.48-0.86; P = .003). The median overall survival in the entire population was 23.4 months in the trastuzumab deruxtecan group vs 16.8 months in the physician’s choice chemotherapy group. In the hormone receptor–negative cohort, the median overall survival was 18.2 months in the trastuzumab deruxtecan group and 8.3 months in the physician’s choice chemotherapy group. Complete responses were seen in 3.6% of the trastuzumab deruxtecan group and 0.6% of the physician’s choice chemotherapy group. The median duration of response was 10.7 months in the trastuzumab deruxtecan group and 6.8 months in the physician’s choice chemotherapy group.

The incidence of serious adverse events was 27% in the trastuzumab deruxtecan group and 25% in the physician’s choice chemotherapy group. Grade 3 or higher events occurred in 52% of the trastuzumab deruxtecan group and 67% of the physician’s choice chemotherapy group. Discontinuation due to adverse events occurred in 16% of the trastuzumab deruxtecan group and 18% of the physician’s choice chemotherapy group; 14 patients in the trastuzumab deruxtecan group and 5 patients in the physician’s choice chemotherapy group had an adverse event associated with death, including 2 deaths due to pneumonitis in the trastuzumab deruxtecan group. Drug-related interstitial lung disease or pneumonitis occurred in 45 patients who received trastuzumab deruxtecan; the majority of these events were grade 1 or grade 2, but 3 patients had grade 5 interstitial lung disease or pneumonitis.

Conclusion: Treatment with trastuzumab deruxtecan led to a significant improvement in progression-free survival compared to physician’s choice chemotherapy in patients with HER2-low metastatic breast cancer.

 

 

Commentary

Trastuzumab deruxtecan is an antibody drug conjugate that consists of a humanized anti-HER2 monoclonal antibody linked to a topoisomerase 1 inhibitor. This antibody drug conjugate is unique compared with prior antibody drug conjugates such as trastuzumab emtansine in that it has a high drug-to-antibody ratio (~8). Furthermore, there appears to be a unique bystander effect resulting in off-target cytotoxicity to neighboring tumor cells, enhancing the efficacy of this novel therapy. Prior studies of trastuzumab deruxtecan have shown durable activity in heavily pretreated patients with metastatic HER2-positive breast cancer.1

HER2-positive breast cancer represents approximately 20% of breast cancer cases in women.2 Historically, HER2 positivity has been defined by strong HER2 expression on IHC staining (ie, a score of 3+) or HER2 amplification on ISH, while HER2-negative disease has been defined by IHC scores of 0 or 1+. Patients with low HER2 expression (IHC 1+, or IHC 2+ with negative ISH) represent approximately 60% of HER2-negative metastatic breast cancer patients.3 These patients have limited targeted treatment options after progressing on primary therapy. Prior data have shown that patients with low HER2 expression represent a heterogeneous population; thus, the historic categorization of HER2 status as simply positive or negative may not adequately identify the patients who could derive clinical benefit from HER2-directed therapies. Nevertheless, prior to these studies, no data had shown improved outcomes with anti-HER2 therapies in patients with low HER2 expression.

The current studies add to the rapidly growing body of literature outlining the efficacy of the novel antibody drug conjugate trastuzumab deruxtecan. The implications of the data presented in these 2 studies are immediately practice changing.

In the DESTINY-Breast03 trial, Cortés and colleagues show that trastuzumab deruxtecan significantly prolongs progression-free survival compared with trastuzumab emtansine in patients with HER2-positive metastatic breast cancer who have progressed on first-line trastuzumab and taxane-based therapy. With a hazard ratio of 0.28 for disease progression or death, the efficacy of trastuzumab deruxtecan highlighted in this trial clearly makes it the standard of care in the second-line setting for patients with metastatic HER2-positive breast cancer. The overall survival data in this trial were immature at the time of this analysis, and thus continued follow-up to validate these results is warranted.

The DESTINY-Breast04 trial by Modi et al profoundly expands the cohort of patients who benefit from trastuzumab deruxtecan. This study defines a population of patients with HER2-low metastatic breast cancer who will now be eligible for HER2-directed therapy. These data show that trastuzumab deruxtecan leads to a significant and clinically meaningful improvement in both progression-free survival and overall survival compared with chemotherapy in patients with metastatic breast cancer with low expression of HER2. This benefit was seen in both the estrogen receptor–positive cohort and the entire population, including those with pre-treated triple-negative disease. Furthermore, this study does not define a threshold of HER2 expression by IHC that predicts benefit: patients with an IHC score of 1+ and those with a score of 2+/ISH-negative benefited to a similar extent. Interestingly, in the DAISY trial, antitumor activity was noted with trastuzumab deruxtecan even in patients without any detectable HER2 expression on IHC.4 Given the inconsistency and potential false negatives of IHC, along with heterogeneous HER2 expression, a reliable test to quantitatively assess the level of HER2 expression is needed to more accurately identify which patients will benefit from this novel antibody drug conjugate.

Lastly, trastuzumab deruxtecan has been associated with interstitial lung disease and pneumonitis, which occurred in approximately 10% of patients who received trastuzumab deruxtecan in the DESTINY-Breast03 trial and about 12% of patients in the DESTINY-Breast04 trial. Most of these events were grade 1 or grade 2. Nevertheless, clinicians must be aware of this risk and monitor patients closely for the development of pneumonitis or interstitial lung disease.

 

 

Application for Clinical Practice and System Implementation

The results of the current studies show longer progression-free survival with trastuzumab deruxtecan both in HER2-low-expressing metastatic breast cancer and in HER2-positive metastatic breast cancer following taxane- and trastuzumab-based therapy. These results are clearly practice changing and represent a new standard of care in these patient populations. It is incumbent upon treating oncologists to work with their pathology colleagues to assess HER2 IHC thoroughly in order to identify all potential patients who may benefit from trastuzumab deruxtecan in the metastatic setting. The continued advancement of anti-HER2 therapy will undoubtedly have a significant impact on patient outcomes going forward.

Practice Points

  • With a hazard ratio of 0.28 for disease progression or death, the efficacy of trastuzumab deruxtecan highlighted in the DESTINY-Breast03 trial clearly makes this the standard of care in the second-line setting for patients with metastatic HER2-positive breast cancer.
  • In the DESTINY-Breast04 trial, a significant and clinically meaningful improvement in both progression-free survival and overall survival compared with chemotherapy was seen in patients with metastatic breast cancer with low expression of HER2, both in the estrogen receptor–positive cohort and in the entire population, including those with pre-treated triple-negative disease.

—Daniel Isaac, DO, MS

References

1. Modi S, Saura C, Yamashita T, et al. Trastuzumab deruxtecan in previously treated HER2-positive breast cancer. N Engl J Med. 2020;382(7):610-621. doi:10.1056/NEJMoa1914510

2. National Cancer Institute. Cancer stat facts: female breast cancer. Accessed July 25, 2022. https://seer.cancer.gov/statfacts/html/breast.html

3. Schettini F, Chic N, Brasó-Maristany F, et al. Clinical, pathological and PAM50 gene expression features of HER2-low breast cancer. NPJ Breast Cancer. 2021;7(1):1. doi:10.1038/s41523-020-00208-2

4. Diéras V, Deluche E, Lusque A, et al. Trastuzumab deruxtecan for advanced breast cancer patients, regardless of HER2 status: a phase II study with biomarkers analysis. In: Proceedings of the 2021 San Antonio Breast Cancer Symposium; December 7-10, 2021; San Antonio, TX. American Association for Cancer Research; 2021. Abstract.

