Immunotherapy May Be Overused in Dying Patients With Cancer

Updated Thu, 08/08/2024 - 15:50

Chemotherapy has fallen out of favor for treating cancer toward the end of life. The toxicity is too high, and the benefit, if any, is often too low.

Immunotherapy, however, has been taking its place. Checkpoint inhibitors are increasingly being initiated to treat metastatic cancer in patients approaching the end of life and have become the leading driver of end-of-life cancer spending.

This means “there are patients who are getting immunotherapy who shouldn’t,” said surgical oncologist Sajid Khan, MD, of Yale University, New Haven, Connecticut, senior investigator on a recent study that highlighted the growing use of these agents in patients’ last month of life.

What’s driving this trend, and how can oncologists avoid overtreatment with immunotherapy at the end of life?

The N-of-1 Patient

With immunotherapy at the end of life, “each of us has had our N-of-1” where a patient bounces back with a remarkable and durable response, said Don Dizon, MD, a gynecologic oncologist at Brown University, Providence, Rhode Island.

He recalled a patient with sarcoma who did not respond to chemotherapy. But after Dr. Dizon started her on immunotherapy, everything turned around. She has now been in remission for 8 years and counting.

The possibility of an unexpected or remarkable responder is seductive. And the improved safety of immunotherapy over chemotherapy adds to the allure.

Meanwhile, patients are often desperate. It’s rare for someone to be ready to stop treatment, Dr. Dizon said. Everybody “hopes that they’re going to be the exceptional responder.”

At the end of the day, the question often becomes: “Why not try immunotherapy? What’s there to lose?”

This thinking may be prompting broader use of immunotherapy in late-stage disease, even in instances with no Food and Drug Administration indication and virtually no supportive data, such as for metastatic ovarian cancer, Dr. Dizon said.

Back to Earth

The problem with the hopeful approach is that end-of-life turnarounds with immunotherapy are rare, and there’s no way at the moment to predict who will have one, said Laura Petrillo, MD, a palliative care physician at Massachusetts General Hospital, Boston.

Even though immunotherapy generally comes with fewer adverse events than chemotherapy, catastrophic side effects are still possible.

Dr. Petrillo recalled a 95-year-old woman with metastatic cancer who was largely asymptomatic.

She had a qualifying mutation for a checkpoint inhibitor, so her oncologist started her on one. The patient never bounced back from the severe colitis the agent caused, and she died of complications in the hospital.

Although such reactions with immunotherapy are uncommon, less serious problems caused by the agents can still have a major impact on a person’s quality of life. Low-grade diarrhea, for instance, may not sound too bad, but in a patient’s daily life, it can translate to six or more episodes a day.

Even with no side effects, prescribing immunotherapy can mean that patients with limited time left spend a good portion of it at an infusion clinic instead of at home. These patients are also less likely to be referred to hospice and more likely to be admitted to and die in the hospital.

And with treatments that can cost $20,000 per dose, financial toxicity becomes a big concern.

In short, some of the reasons why chemotherapy is not recommended at the end of life also apply to immunotherapy, Dr. Petrillo said.

Prescribing Decisions

Recent research highlights the growing use of immunotherapy at the end of life.

Dr. Khan’s retrospective study found, for instance, that the percentage of patients starting immunotherapy in the last 30 days of life increased by about fourfold to fivefold over the study period for the three cancers analyzed — stage IV melanoma, lung, and kidney cancers.

Among patients who died within 30 days, the percentage receiving immunotherapy increased over the study period — from 0.8% to 4.3% for melanoma, from 0.9% to 3.2% for non–small cell lung cancer (NSCLC), and from 0.5% to 2.6% for renal cell carcinoma — supporting the conclusion that immunotherapy prescriptions in the last month of life are on the rise.

Prescribing immunotherapy in patients who ultimately died within 1 month occurred more frequently at low-volume, nonacademic centers than at academic or high-volume centers, and outcomes varied by practice setting.

Patients had better survival outcomes overall when receiving immunotherapy at academic or high-volume centers — a finding Dr. Khan said is worth investigating further. Possible explanations include better management of severe immune-related side effects at larger centers and more caution when prescribing immunotherapy to “borderline” candidates, such as those with several comorbidities.

Importantly, given the retrospective design, Dr. Khan and colleagues already knew which patients prescribed immunotherapy died within 30 days of initiating treatment.

More specifically, 5192 of 71,204 patients who received immunotherapy (7.3%) died within a month of initiating therapy, while 66,012 (92.7%) lived beyond that point.

The study, however, did not assess how the remaining 92.7% who lived beyond 30 days fared on immunotherapy and the differences between those who lived less than 30 days and those who survived longer.

Knowing the outcome of patients at the outset of the analysis still leaves open the question of when immunotherapy can extend life and when it can’t for the patient in front of you.

To avoid overtreating at the end of life, it’s important to have “the same standard that you have for giving chemotherapy. You have to treat it with the same respect,” said Moshe Chasky, MD, a community medical oncologist with Alliance Cancer Specialists in Philadelphia, Pennsylvania. “You can’t just be throwing” immunotherapy around “at the end of life.”

While there are no clear predictors of risk and benefit, there are some factors to help guide decisions.

As with chemotherapy, Dr. Petrillo said performance status is key. Dr. Petrillo and colleagues found that median overall survival with immune checkpoint inhibitors for advanced non–small cell lung cancer was 14.3 months in patients with an Eastern Cooperative Oncology Group performance score of 0-1 but only 4.5 months with scores of ≥ 2.

Dr. Khan also found that immunotherapy survival is, unsurprisingly, worse in patients with high metastatic burdens and more comorbidities.

“You should still consider immunotherapy for metastatic melanoma, non–small cell lung cancer, and renal cell carcinoma,” Dr. Khan said. The message here is to “think twice before using” it, especially in comorbid patients with widespread metastases.

“Just because something can be done doesn’t always mean it should be done,” he said.

At Yale, where Dr. Khan works, immunotherapy decisions are considered by a multidisciplinary tumor board. At Mass General, immunotherapy has generally moved to the frontline setting, and the hospital no longer prescribes checkpoint inhibitors to hospitalized patients because the cost is too high relative to the potential benefit, Dr. Petrillo explained.

Still, with all the uncertainties about risk and benefit, counseling patients is a challenge. Dr. Dizon called it “the epitome of shared decision-making.”

Dr. Petrillo noted that it’s critical not to counsel patients based solely on the anecdotal patients who do surprisingly well.

“It’s hard to mention that and not have that be what somebody anchors on,” she said. But that speaks to “how desperate people can feel, how hopeful they can be.”

Dr. Khan, Dr. Petrillo, and Dr. Chasky all reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.




Can Addressing Depression Reduce Chemo Toxicity in Older Adults?

Updated Tue, 08/13/2024 - 09:44


TOPLINE:

Elevated depression symptoms are linked to an increased risk for severe chemotherapy toxicity in older adults with cancer. This risk is mitigated by geriatric assessment (GA)-driven interventions.

METHODOLOGY:

  • Researchers conducted a secondary analysis of a randomized controlled trial to evaluate whether greater reductions in grade 3 chemotherapy-related toxicities occurred with geriatric assessment-driven interventions vs standard care.
  • A total of 605 patients aged 65 years and older with any stage of solid malignancy were included, with 402 randomized to the intervention arm and 203 to the standard-of-care arm.
  • Mental health was assessed using the Mental Health Inventory 13, and chemotherapy toxicity was graded by the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.0.
  • Patients in the intervention arm received recommendations from a multidisciplinary team based on their baseline GA, while those in the standard-of-care arm received only the baseline assessment results.
  • The study was conducted at City of Hope National Medical Center in Duarte, California, and patients were followed throughout treatment or for up to 6 months from starting chemotherapy.

TAKEAWAY:

  • According to the authors, patients with depression had increased chemotherapy toxicity in the standard-of-care arm (70.7% vs 54.3%; P = .02) but not in the GA-driven intervention arm (54.3% vs 48.5%; P = .27).
  • The association between depression and chemotherapy toxicity was also seen after adjustment for the Cancer and Aging Research Group toxicity score (odds ratio [OR], 1.98; 95% CI, 1.07-3.65) and for demographic, disease, and treatment factors (OR, 2.00; 95% CI, 1.03-3.85).
  • No significant association was found between anxiety and chemotherapy toxicity in either the standard-of-care arm (univariate OR, 1.07; 95% CI, 0.61-1.88) or the GA-driven intervention arm (univariate OR, 1.15; 95% CI, 0.78-1.71).
  • The authors stated that depression was associated with increased odds of hematologic-only toxicities (OR, 2.50; 95% CI, 1.13-5.56) in the standard-of-care arm.
  • An analysis of a small subgroup found associations between elevated anxiety symptoms and increased risk for hematologic and nonhematologic chemotherapy toxicities.

IN PRACTICE:

“The current study showed that elevated depression symptoms are associated with increased risk of severe chemotherapy toxicities in older adults with cancer. This risk was mitigated in those in the GA intervention arm, which suggests that addressing elevated depression symptoms may lower the risk of toxicities,” the authors wrote. “Overall, elevated anxiety symptoms were not associated with risk for severe chemotherapy toxicity.”

SOURCE:

Reena V. Jayani, MD, MSCI, of Vanderbilt University Medical Center in Nashville, Tennessee, is the first and corresponding author of this paper. The study was published online on August 4, 2024, in Cancer.

LIMITATIONS:

The thresholds for depression and anxiety used in the Mental Health Inventory 13 were based on an English-speaking population, which may not be fully applicable to Chinese- and Spanish-speaking patients included in the study. Depression and anxiety were not evaluated by a mental health professional or with a structured interview to assess formal diagnostic criteria. Psychiatric medication used at the time of baseline GA was not included in the analysis. The study is a secondary analysis of a randomized controlled trial, and it is not known which components of the interventions affected mental health.

DISCLOSURES:

This research project was supported by the UniHealth Foundation, the City of Hope Center for Cancer and Aging, and the National Institutes of Health. One coauthor disclosed receiving institutional research funding from AstraZeneca and Brooklyn ImmunoTherapeutics and consulting for multiple pharmaceutical companies, including AbbVie, Adagene, and Bayer HealthCare Pharmaceuticals. William Dale, MD, PhD, of City of Hope National Medical Center, served as senior author and a principal investigator. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

TOPLINE:

Elevated depression symptoms are linked to an increased risk for severe chemotherapy toxicity in older adults with cancer. This risk is mitigated by geriatric assessment (GA)-driven interventions.

METHODOLOGY:

  • Researchers conducted a secondary analysis of a randomized controlled trial to evaluate whether greater reductions in grade 3 chemotherapy-related toxicities occurred with geriatric assessment-driven interventions vs standard care.
  • A total of 605 patients aged 65 years and older with any stage of solid malignancy were included, with 402 randomized to the intervention arm and 203 to the standard-of-care arm.
  • Mental health was assessed using the Mental Health Inventory 13, and chemotherapy toxicity was graded by the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.0.
  • Patients in the intervention arm received recommendations from a multidisciplinary team based on their baseline GA, while those in the standard-of-care arm received only the baseline assessment results.
  • The study was conducted at City of Hope National Medical Center in Duarte, California, and patients were followed throughout treatment or for up to 6 months from starting chemotherapy.

TAKEAWAY:

  • According to the authors, patients with depression had increased chemotherapy toxicity in the standard-of-care arm (70.7% vs 54.3%; P = .02) but not in the GA-driven intervention arm (54.3% vs 48.5%; P = .27).
  • The association between depression and chemotherapy toxicity was also seen after adjustment for the Cancer and Aging Research Group toxicity score (odds ratio, [OR], 1.98; 95% CI, 1.07-3.65) and for demographic, disease, and treatment factors (OR, 2.00; 95% CI, 1.03-3.85).
  • No significant association was found between anxiety and chemotherapy toxicity in either the standard-of-care arm (univariate OR, 1.07; 95% CI, 0.61-1.88) or the GA-driven intervention arm (univariate OR, 1.15; 95% CI, 0.78-1.71).
  • The authors stated that depression was associated with increased odds of hematologic-only toxicities (OR, 2.50; 95% CI, 1.13-5.56) in the standard-of-care arm.
TOPLINE:

Elevated depression symptoms are linked to an increased risk for severe chemotherapy toxicity in older adults with cancer. This risk is mitigated by geriatric assessment (GA)-driven interventions.

METHODOLOGY:

  • Researchers conducted a secondary analysis of a randomized controlled trial to evaluate whether greater reductions in grade 3 chemotherapy-related toxicities occurred with geriatric assessment-driven interventions vs standard care.
  • A total of 605 patients aged 65 years and older with any stage of solid malignancy were included, with 402 randomized to the intervention arm and 203 to the standard-of-care arm.
  • Mental health was assessed using the Mental Health Inventory 13, and chemotherapy toxicity was graded by the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.0.
  • Patients in the intervention arm received recommendations from a multidisciplinary team based on their baseline GA, while those in the standard-of-care arm received only the baseline assessment results.
  • The study was conducted at City of Hope National Medical Center in Duarte, California, and patients were followed throughout treatment or for up to 6 months from starting chemotherapy.

TAKEAWAY:

  • According to the authors, patients with depression had increased chemotherapy toxicity in the standard-of-care arm (70.7% vs 54.3%; P = .02) but not in the GA-driven intervention arm (54.3% vs 48.5%; P = .27).
  • The association between depression and chemotherapy toxicity was also seen after adjustment for the Cancer and Aging Research Group toxicity score (odds ratio [OR], 1.98; 95% CI, 1.07-3.65) and for demographic, disease, and treatment factors (OR, 2.00; 95% CI, 1.03-3.85).
  • No significant association was found between anxiety and chemotherapy toxicity in either the standard-of-care arm (univariate OR, 1.07; 95% CI, 0.61-1.88) or the GA-driven intervention arm (univariate OR, 1.15; 95% CI, 0.78-1.71).
  • The authors stated that depression was associated with increased odds of hematologic-only toxicities (OR, 2.50; 95% CI, 1.13-5.56) in the standard-of-care arm.
  • An analysis of a small subgroup found associations between elevated anxiety symptoms and increased risk for hematologic and nonhematologic chemotherapy toxicities.
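The reported toxicity rates and odds ratios are internally consistent, which can be checked with basic arithmetic. The sketch below (illustrative only, not from the paper) recomputes the unadjusted odds ratio implied by the standard-of-care arm's toxicity rates: 70.7% among patients with depression vs 54.3% without.

```python
# Illustrative sketch: odds ratio implied by two event proportions.
# Group sizes are not needed for the OR point estimate itself.

def odds(p: float) -> float:
    """Convert an event proportion to odds."""
    return p / (1.0 - p)

def odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Unadjusted odds ratio comparing two event proportions."""
    return odds(p_exposed) / odds(p_unexposed)

# Toxicity with vs without depression in the standard-of-care arm
or_depression = odds_ratio(0.707, 0.543)
print(round(or_depression, 2))  # ~2.03
```

The result is close to the reported adjusted OR of 2.00, suggesting the covariate adjustment changed the estimate little.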

IN PRACTICE:

“The current study showed that elevated depression symptoms are associated with increased risk of severe chemotherapy toxicities in older adults with cancer. This risk was mitigated in those in the GA intervention arm, which suggests that addressing elevated depression symptoms may lower the risk of toxicities,” the authors wrote. “Overall, elevated anxiety symptoms were not associated with risk for severe chemotherapy toxicity.”

SOURCE:

Reena V. Jayani, MD, MSCI, of Vanderbilt University Medical Center in Nashville, Tennessee, was the first and corresponding author for this paper. This study was published online August 4, 2024, in Cancer.

LIMITATIONS:

The thresholds for depression and anxiety used in the Mental Health Inventory 13 were based on an English-speaking population, which may not be fully applicable to Chinese- and Spanish-speaking patients included in the study. Depression and anxiety were not evaluated by a mental health professional or with a structured interview to assess formal diagnostic criteria. Psychiatric medication used at the time of baseline GA was not included in the analysis. The study is a secondary analysis of a randomized controlled trial, and it is not known which components of the interventions affected mental health.

DISCLOSURES:

This research project was supported by the UniHealth Foundation, the City of Hope Center for Cancer and Aging, and the National Institutes of Health. One coauthor disclosed receiving institutional research funding from AstraZeneca and Brooklyn ImmunoTherapeutics and consulting for multiple pharmaceutical companies, including AbbVie, Adagene, and Bayer HealthCare Pharmaceuticals. William Dale, MD, PhD, of City of Hope National Medical Center, served as senior author and a principal investigator. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


It’s in the Juice: Cranberries for UTI Prevention

Article Type
Changed
Thu, 08/08/2024 - 11:02

 

TOPLINE:

A systematic review and network meta-analysis found cranberry juice can help prevent urinary tract infections (UTIs).

METHODOLOGY:

  • With the increasing prevalence of antimicrobial resistance and over 50% of women reporting at least one episode of UTI each year, identifying evidence supporting possible nondrug interventions is necessary, according to the study researchers from Bond University, the University of Helsinki, and the University of Oxford.
  • The primary study outcome was number of UTIs in each treatment or placebo group; the secondary outcomes were UTI symptoms such as increased bladder sensation, urgency, frequency, dysuria, and consumption of antimicrobial drugs.
  • Studies analyzed included people of any age and gender at risk for UTI.
  • Researchers included 3091 participants from 18 randomized controlled trials and two nonrandomized controlled trials.

TAKEAWAY:

  • Studies used one of the following interventions: cranberry nonliquid products (tablet, capsule, or fruit), cranberry liquid, liquid other than cranberry, or no treatment.
  • A total of 18 studies showed a 27% lower rate of UTIs with the consumption of cranberry juice than with placebo liquid (moderate certainty evidence) and a 54% lower rate of UTIs with the consumption of cranberry juice than with no treatment (very low certainty evidence).
  • Based on a meta-analysis of six studies, antibiotic use was 49% lower with the consumption of cranberry juice than with placebo liquid and 59% lower than with no treatment.
  • Cranberry compounds also were associated with a decrease in prevalence of UTI symptoms.

IN PRACTICE:

“The evidence supports the use of cranberry juice for the prevention of UTIs. While increased liquids benefit the rate of UTIs and reduce antibiotic use, and cranberry compounds benefit symptoms of infection, the combination of these, in cranberry juice, provides clear and significant clinical outcomes for the reduction in UTIs and antibiotic use and should be considered for the management of UTIs,” the authors wrote.

SOURCE:

The study was led by Christian Moro, PhD, faculty of health sciences and medicine at Bond University in Gold Coast, Australia, and was published online in European Urology Focus on July 18, 2024.

LIMITATIONS:

The authors noted that some planned analyses, such as the impact on antibiotic use, were limited by the small number of available studies. Some studies on cranberry tablets also provided education alongside the intervention, which could have affected UTI recurrence rates. Nearly all of the 20 studies analyzed included mostly women; thus, comparisons between genders were not possible.

DISCLOSURES:

Dr. Moro reported no disclosures.
 

A version of this article appeared on Medscape.com.


Fruits and Vegetables May Promote Kidney and Cardiovascular Health in Hypertensive Patients

Article Type
Changed
Mon, 08/05/2024 - 12:14

Progression of chronic kidney disease (CKD) and cardiovascular disease risk in hypertensive adults was significantly slower among those who consumed more fruits and vegetables or oral sodium bicarbonate, compared with controls who received usual care.

A primary focus on pharmacologic strategies has failed to reduce hypertension-related CKD and cardiovascular disease mortality, Nimrit Goraya, MD, of Texas A&M Health Sciences Center College of Medicine, Temple, and colleagues wrote. High-acid diets (those with greater amounts of animal-sourced foods) have been associated with increased incidence and progression of CKD and with increased risk of cardiovascular disease.

Diets high in fruits and vegetables are associated with reduced CKD and cardiovascular disease but are not routinely used as part of hypertension treatment. The researchers hypothesized that dietary acid reduction could slow kidney disease progression and reduce cardiovascular disease risk.

In a study published in The American Journal of Medicine, the researchers randomized 153 adults aged 18-70 years with hypertension and CKD to fruits and vegetables, oral sodium bicarbonate (NaHCO3), or usual care; 51 to each group. The fruit and vegetable group received 2-4 cups daily of base-producing food items including apples, apricots, oranges, peaches, pears, raisins, strawberries, carrots, cauliflower, eggplant, lettuce, potatoes, spinach, tomatoes, and zucchini. Participants were not instructed how to incorporate these foods into their diets. The sodium bicarbonate group received an average of four to five NaHCO3 tablets daily (650 mg), divided into two doses.

The mean age of the participants was 48.8 years, 51% were female, and 47% were African American. The primary outcome was CKD progression and cardiovascular disease risk over 5 years. All participants met criteria at baseline for macroalbuminuria (a urine albumin to creatinine ratio of at least 200 mg/g) and were considered at increased risk for CKD progression.

Over the 5-year follow-up, CKD progression was significantly slower in the groups receiving fruits and vegetables and oral sodium bicarbonate, compared with usual care, based on trajectories showing a lower decline of estimated glomerular filtration rates (mean declines of 1.08 and 1.17 for fruits/vegetables and NaHCO3, respectively, vs 19.4 for usual care, P < .001 for both).
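The gap between arms can be made concrete by projecting kidney function forward. The sketch below is a minimal illustration that assumes the reported mean declines are total eGFR loss over the 5-year follow-up (in mL/min/1.73 m²) and assumes an illustrative baseline eGFR of 75; neither assumption comes from the paper itself.

```python
# Hedged sketch: projected eGFR after follow-up under each arm's reported
# mean decline. Baseline eGFR and the units interpretation are assumptions.

def projected_egfr(baseline_egfr: float, total_decline: float) -> float:
    """eGFR remaining after subtracting the total decline over follow-up."""
    return baseline_egfr - total_decline

declines = {
    "fruits/vegetables": 1.08,
    "NaHCO3": 1.17,
    "usual care": 19.4,
}
for arm, decline in declines.items():
    print(f"{arm}: {projected_egfr(75.0, decline):.1f}")
```

Under those assumptions, the usual-care arm would end the study with markedly lower kidney function than either dietary acid-reduction arm.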

However, systolic blood pressure and subsequent cardiovascular disease risk indicators were lower only in the fruit and vegetable group, compared with both the NaHCO3 and usual-care groups over the long term. “Specifically, with fruits and vegetables, systolic blood pressure, plasma LDL and Lp(a) cholesterol, and body mass index decreased from baseline, consistent with better cardiovascular disease protection,” the researchers wrote. The protection against cardiovascular disease in the fruits and vegetables group occurred with lower doses of antihypertensive and statin medications and was not affected by baseline differences in medication doses.

The findings were limited by several factors, including the lack of data on compliance with the NaHCO3 supplements, although urine net acid excretion in this group suggested increased alkali intake similar to that provided by fruits and vegetables, the researchers noted. Other limitations included the focus only on individuals with very high albuminuria.

More basic science studies are needed to explore how the potential vascular injury suggested by albuminuria affects CKD progression and cardiovascular disease, and clinical studies are needed to assess the impact of dietary acid reduction on patients with lower levels of albuminuria than in the current study, the researchers said.

However, the results suggest that consuming fruits and vegetables, rather than NaHCO3, is the preferred strategy for dietary acid reduction for patients with primary hypertension and CKD, they concluded. The findings also support routine measurement of urine albumin-to-creatinine ratios in hypertensive patients to identify CKD and assess risk for progression and subsequent cardiovascular disease.

The study was supported by the Larry and Jane Woirhaye Memorial Endowment in Renal Research at the Texas Tech University Health Sciences Center, the University Medical Center (both in Lubbock, Texas), the Endowment, Academic Operations Division of Baylor Scott & White Health, and the Episcopal Health Foundation. The researchers had no financial conflicts to disclose.

FROM THE AMERICAN JOURNAL OF MEDICINE

Wearables May Confirm Sleep Disruption Impact on Chronic Disease

Article Type
Changed
Fri, 08/02/2024 - 15:26

Rapid eye movement (REM) sleep, deep sleep, and sleep irregularity were significantly associated with increased risk for a range of chronic diseases, based on a new study of more than 6000 individuals.

“Most of what we think we know about sleep patterns in adults comes from either self-report surveys, which are widely used but have all sorts of problems with over- and under-estimating sleep duration and quality, or single-night sleep studies,” corresponding author Evan L. Brittain, MD, of Vanderbilt University, Nashville, Tennessee, said in an interview. 

The single-night study yields the highest quality data but is limited by extrapolating a single night’s sleep to represent habitual sleep patterns, which is often not the case, he said. In the current study, published in Nature Medicine, “we had a unique opportunity to understand sleep using a large cohort of individuals using wearable devices that measure sleep duration, quality, and variability. The All of Us Research Program is the first to link wearables data to the electronic health record at scale and allowed us to study long-term, real-world sleep behavior,” Dr. Brittain said.

The timing of the study is important because the American Heart Association now recognizes sleep as a key component of heart health, and public awareness of the value of sleep is increasing, he added. 

The researchers reviewed objectively measured, longitudinal sleep data from 6785 adults who used commercial wearable devices (Fitbit) linked to electronic health record data in the All of Us Research Program. The median age of the participants was 50.2 years, 71% were women, and 84% self-identified as White individuals. The median period of sleep monitoring was 4.5 years.

REM sleep and deep sleep were inversely associated with the odds of incident heart rhythm and heart rate abnormalities. Each percent increase in REM sleep was associated with a reduced incidence of atrial fibrillation (odds ratio [OR], 0.86), atrial flutter (OR, 0.78), and sinoatrial node dysfunction/bradycardia (OR, 0.72). A higher percentage of deep sleep was associated with reduced odds of atrial fibrillation (OR, 0.87), major depressive disorder (OR, 0.93), and anxiety disorder (OR, 0.94). 
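Per-unit odds ratios like these come from logistic models, where the OR for a larger difference compounds multiplicatively. The sketch below is a hedged illustration of that interpretation; the 5-point example is ours, not a result reported in the paper.

```python
# Hedged sketch: in a logistic model, the OR per 1-unit increase compounds
# as OR**k for a k-unit increase. The k=5 example below is illustrative.

def compounded_or(per_unit_or: float, units: float) -> float:
    """Odds ratio for a `units`-sized difference in the predictor."""
    return per_unit_or ** units

# Reported OR of 0.86 per 1% increase in REM sleep (atrial fibrillation)
print(round(compounded_or(0.86, 5), 3))  # ~0.47 for a 5-point increase
```

Read this way, a 5-percentage-point higher REM fraction would correspond to roughly half the odds of incident atrial fibrillation, under the model's linearity-in-log-odds assumption.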

Increased irregular sleep was significantly associated with increased odds of incident obesity (OR, 1.49), hyperlipidemia (OR, 1.39), and hypertension (OR, 1.56), as well as major depressive disorder (OR, 1.75), anxiety disorder (OR, 1.55), and bipolar disorder (OR, 2.27). 

The researchers also identified J-shaped associations between average daily sleep duration and hypertension (P for nonlinearity = .003), as well as major depressive disorder and generalized anxiety disorder (both P < .001). 

The study was limited by several factors including the relatively young, White, and female study population. However, the results illustrate how sleep stages, duration, and regularity are associated with chronic disease development, and may inform evidence-based recommendations on healthy sleeping habits, the researchers wrote.
 

Findings Support Need for Sleep Consistency 

“The biggest surprise for me was the impact of sleep variability on health,” Dr. Brittain told this news organization. “The more your sleep duration varies, the higher your risk of numerous chronic diseases across the entire spectrum of organ systems. Sleep duration and quality were also important but that was less surprising,” he said. 

The clinical implications of the findings are that sleep duration, quality, and variability are all important, said Dr. Brittain. “To me, the easiest finding to translate into the clinic is the importance of reducing the variability of sleep duration as much as possible,” he said. For patients, that means explaining that they need to go to sleep and wake up at roughly the same time night to night, he said. 

“Commercial wearable devices are not perfect compared with research-grade devices, but our study showed that they nonetheless collect clinically relevant information,” Dr. Brittain added. “For patients who own a device, I have adopted the practice of reviewing my patients’ sleep and activity data, which gives objective insight into behavior that is not always accurate through routine questioning,” he said.

As for other limitations, “Our cohort was limited to individuals who already owned a Fitbit; not surprisingly, these individuals differ from a random sample of the community in important ways, both demographic and behavioral, and our findings need to be validated in a more diverse population,” said Dr. Brittain. 

Looking ahead, “we are interested in using commercial devices as a tool for sleep interventions to test the impact of improving sleep hygiene on chronic disease incidence, severity, and progression,” he said.
 

Device Data Will Evolve to Inform Patient Care

“With the increasing use of commercial wearable devices, it is crucial to identify and understand the data they can collect,” said Arianne K. Baldomero, MD, a pulmonologist and assistant professor of medicine at the University of Minnesota, Minneapolis, in an interview. “This study specifically analyzed sleep data from Fitbit devices among participants in the All of Us Research Program to assess sleep patterns and their association with chronic disease risk,” said Dr. Baldomero, who was not involved in the study. 

The significant relationships between sleep patterns and risk for chronic diseases were not surprising, said Dr. Baldomero. The association of shorter sleep duration and greater sleep irregularity with obesity and sleep apnea validates previous large-scale population surveys, she said. Findings from the current study also echo the literature linking sleep duration with hypertension, major depressive disorder, and generalized anxiety, she added.

“This study reinforces the importance of adequate sleep, typically around 7 hours per night, and suggests that insufficient or poor-quality sleep may be associated with chronic diseases,” Dr. Baldomero told this news organization. “Pulmonologists should remain vigilant about sleep-related issues, and consider further investigation and referrals to sleep specialty clinics for patients suspected of having sleep disturbances,” she said.

“What remains unclear is whether abnormal sleep patterns are a cause or an effect of chronic diseases,” Dr. Baldomero noted. “Additionally, it is essential to ensure that these devices accurately capture sleep patterns and continue to validate their data against gold standard measures of sleep disturbances,” she said.

The study was based on work partially funded by an unrestricted gift from Google, and the study itself was supported by the National Institutes of Health. Dr. Brittain disclosed receiving research funds unrelated to this work from United Therapeutics. Dr. Baldomero had no financial conflicts to disclose.

A version of this article first appeared on Medscape.com.


Ancient Viruses in Our DNA Hold Clues to Cancer Treatment

Article Type
Changed
Mon, 08/12/2024 - 13:15

An ancient virus that infected our ancestors tens of millions of years ago may be helping to fuel cancer today, according to a fascinating new study in Science Advances. Targeting these viral remnants still lingering in our DNA could lead to more effective cancer treatment with fewer side effects, the researchers said.

The study “gives a better understanding of how gene regulation can be impacted by these ancient retroviral sequences,” said Dixie Mager, PhD, scientist emeritus at the Terry Fox Laboratory at the British Columbia Cancer Research Institute, Vancouver, British Columbia, Canada. (Mager was not involved in the study.)

Long thought to be “junk” DNA with no biologic function, “endogenous retroviruses,” which have mutated over time and lost their ability to create the virus, are now known to regulate genes — allowing some genes to turn on and off. Research in recent years suggests they may play a role in diseases like cancer.

But scientists weren’t exactly sure what that role was, said senior study author Edward Chuong, PhD, a genome biologist at the University of Colorado Boulder.

Most studies have looked at whether endogenous retroviruses code for proteins that influence cancer. But these ancient viral strands usually don’t code for proteins at all.

Dr. Chuong took a different approach. Inspired by scientists who’ve studied how viral remnants regulate positive processes (immunity, brain development, or placenta development), he and his team explored whether some might regulate genes that, once activated, help cancer thrive.

Borrowing from epigenomic analysis data (data on molecules that alter gene expression) for 21 cancers mapped by the Cancer Genome Atlas, the researchers identified 19 virus-derived DNA sequences that bind to regulatory proteins more in cancer cells than in healthy cells. All of these could potentially act as gene regulators that promote cancer.

The researchers homed in on one sequence, called LTR10, because it showed especially high activity in several cancers, including lung and colorectal cancer. This DNA segment comes from a virus that entered our ancestors’ genome 30 million years ago, and it’s activated in a third of colorectal cancers.

Using the gene editing technology clustered regularly interspaced short palindromic repeats (CRISPR), Dr. Chuong’s team silenced LTR10 in colorectal cancer cells, altering the gene sequence so it couldn’t bind to regulatory proteins. Doing so dampened the activity of nearby cancer-promoting genes.

“They still behaved like cancer cells,” Dr. Chuong said. But “it made the cancer cells more susceptible to radiation. That would imply that the presence of that viral ‘switch’ actually helped those cancer cells survive radiation therapy.”

Previously, two studies had found that viral regulators play a role in promoting two types of cancer: leukemia and prostate cancer. The new study shows these two cases weren’t flukes. All 21 cancers they looked at had at least one of those 19 viral elements, presumably working as cancer enhancers.

The study also identified what activates LTR10 to make it promote cancer. The culprit is a regulator protein called mitogen-activated protein (MAP) kinase, which is overactivated in about 40% of all human cancers.

Some cancer drugs — MAP kinase inhibitors — already target MAP kinase, and they’re often the first ones prescribed when a patient is diagnosed with cancer, Dr. Chuong said. As with many cancer treatments, doctors don’t know why they work, just that they do.

“By understanding the mechanisms in the cell, we might be able to make them work better or further optimize their treatment,” he said.

“MAP kinase inhibitors are really like a sledgehammer to the cell,” Dr. Chuong said — meaning they affect many cellular processes, not just those related to cancer.

“If we’re able to say that these viral switches are what’s important, then that could potentially help us develop a more targeted therapy that uses something like CRISPR to silence these viral elements,” he said. Or it could help providers choose, from among the dozens available, the MAP kinase inhibitor best suited to an individual patient while avoiding side effects.

Still, whether the findings translate to real cancer patients remains to be seen. “It’s very, very hard to go the final step of showing in a patient that these actually make a difference in the cancer,” Dr. Mager said.

More lab research, human trials, and at least a few years will be needed before this discovery could help treat cancer. “Directly targeting these elements as a therapy would be at least 5 years out,” Dr. Chuong said, “partly because that application would rely on CRISPR epigenome editing technology that is still being developed for clinical use.”
 

A version of this article first appeared on Medscape.com.


FROM SCIENCE ADVANCES


New Study Says Your Sedentary Lifestyle Is Killing You

Article Type
Changed
Thu, 08/01/2024 - 11:13

 

TOPLINE:

A less favorable balance between physical activity (PA) and sitting time (ST) is associated with a higher risk for all-cause mortality.

METHODOLOGY:

  • Researchers evaluated the association of PA and ST with the risk for mortality in 5836 middle-aged and older Australian adults (mean age, 56.4 years; 45% men) from the Australian Diabetes, Obesity and Lifestyle Study.
  • The Physical Activity and Sitting Time Balance Index (PASTBI) was calculated by dividing the total duration of daily PA by the duration of daily ST.
  • Participants were categorized into quartiles on the basis of their PASTBI score, ranging from low PA/high ST to high PA/low ST.
  • The primary outcome was all-cause mortality.
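
The PASTBI described above is a simple ratio of two daily durations. As a rough illustration (the function name and example values are ours, not the study's), the calculation looks like:

```python
# Hypothetical sketch of the PASTBI calculation; names and example
# values are illustrative and not taken from the study itself.

def pastbi(pa_minutes_per_day: float, st_minutes_per_day: float) -> float:
    """Physical Activity and Sitting Time Balance Index:
    total daily physical activity divided by total daily sitting time."""
    if st_minutes_per_day <= 0:
        raise ValueError("sitting time must be positive")
    return pa_minutes_per_day / st_minutes_per_day

# Example: 30 minutes of activity against 8 hours (480 minutes) of sitting
score = pastbi(30, 480)
print(score)  # 0.0625 -> a low PA/high ST profile
```

Participants were then ranked into quartiles on this score, from low PA/high ST to high PA/low ST.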

TAKEAWAY:

  • During a median follow-up time of 14.3 years, 885 (15%) all-cause deaths were reported.
  • The risk for all-cause mortality was 47% higher in participants with lower engagement in PA and higher ST (low PASTBI) than in those with higher engagement in PA and lower ST (high PASTBI; adjusted hazard ratio, 1.47; 95% confidence interval, 1.21-1.79).

IN PRACTICE:

“The utility of the PASTBI in identifying relationships with mortality risk further highlights the importance of achieving a healthier balance in the dual health behaviors of PA [physical activity] and ST [sitting time],” the authors wrote.

SOURCE:

The study was led by Roslin Botlero, MBBS, MPH, PhD, of the School of Public Health and Preventive Medicine at Monash University in Melbourne, Australia. It was published online in the American Journal of Preventive Medicine.

LIMITATIONS:

The study relied on self-reported data for PA and ST, which may have introduced recall or reporting bias. The generalizability of the findings is restricted to a specific set of self-reported questionnaires. Even after adjustment for several potential confounders, other unmeasured or unknown confounders may have influenced the association between PASTBI and all-cause mortality.
 

DISCLOSURES:

The Australian Diabetes, Obesity and Lifestyle Study was sponsored by the National Health and Medical Research Council, the Australian Government Department of Health and Aged Care, and others. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Statins: So Misunderstood

Article Type
Changed
Wed, 07/31/2024 - 16:39

Recently, a patient of mine was hospitalized with chest pain. She was diagnosed with an acute coronary syndrome and started on a statin in addition to a beta-blocker, aspirin, and clopidogrel. After discharge, she had symptoms of dizziness and recurrent chest pain and her first thought was to stop the statin because she believed that her symptoms were statin-related side effects. I will cover a few areas where I think that there are some misunderstandings about statins.

Statins Are Not Bad For the Liver

When lovastatin first became available for prescription in the 1980s, frequent monitoring of transaminases was recommended. Patients and healthcare professionals became accustomed to frequent liver tests to monitor for statin toxicity, and to this day, some healthcare professionals still obtain liver function tests for this purpose.

But is there a reason to do this? Pfeffer and colleagues reported on the results of over 112,000 people enrolled in the West of Scotland Coronary Protection trial and found that the percentage of patients with any abnormal liver function test was similar (> 3 times the upper limit of normal for ALT) for patients taking pravastatin (1.4%) and for patients taking placebo (1.4%).1 A panel of liver experts concurred that statin-associated transaminase elevations were not indicative of liver damage or dysfunction.2 Furthermore, they noted that chronic liver disease and compensated cirrhosis were not contraindications to statin use.

In a small study, use of low-dose atorvastatin in patients with nonalcoholic steatohepatitis improved transaminase values in 75% of patients and liver steatosis and nonalcoholic fatty liver disease activity scores were significantly improved on biopsy in most of the patients.3 The US Food and Drug Administration (FDA) removed the recommendation for routine regular monitoring of liver function for patients on statins in 2012.4

Statins Do Not Cause Muscle Pain in Most Patients

Most muscle pain occurring in patients on statins is not due to the statin, although patient concerns about muscle pain are common. In a meta-analysis of 19 large statin trials, 27.1% of participants treated with a statin reported at least one episode of muscle pain or weakness during a median of 4.3 years, compared with 26.6% of participants treated with placebo.5 Muscle pain for any reason is common, and patients on statins may stop therapy because of the symptoms.

Cohen and colleagues performed a survey of past and current statin users, asking about muscle symptoms.6 Muscle-related side effects were reported by 60% of former statin users and 25% of current users.

Herrett and colleagues performed an extensive series of n-of-1 trials involving 200 patients who had stopped or were considering stopping statins because of muscle symptoms.7 Participants received either 2-month blocks of atorvastatin 20 mg or 2-month blocks of placebo, six times. They rated their muscle symptoms on a visual analogue scale at the end of each block. There was no difference in muscle symptom scores between the statin and placebo periods.

Wood and colleagues took it a step further when they planned an n-of-1 trial that included statin, placebo, and no treatment.8 Each participant received four bottles of atorvastatin 20 mg, four bottles of placebo, and four empty bottles. Each month they used treatment from the bottles based on a random sequence and reported daily symptom scores. The mean symptom intensity score was 8.0 during no-tablet months, 15.4 during placebo months (P < .001, compared with no-tablet months), and 16.3 during statin months (P < .001, compared with no-tablet months; P = .39, compared with placebo).

Statins Are Likely Helpful In the Very Elderly

Should we be using statins for primary prevention in our very old patients? For many years the answer was generally “no,” on the basis of a lack of evidence: patients in their 80s often were not included in clinical trials, and the much-used American Heart Association risk calculator stops at age 79. Given the prevalence of coronary artery disease in patients as they reach their 80s, wouldn’t primary prevention really be secondary prevention? In a recent study, Xu and colleagues compared outcomes in patients treated with statins for primary prevention with outcomes in a group who were not.9 In patients aged 75-84, there was a risk reduction for major cardiovascular events of 1.2% over 5 years, and for those 85 and older the risk reduction was 4.4%. Importantly, there were no significantly increased risks for myopathies or liver dysfunction in either age group.
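
To put those absolute risk reductions in clinical terms, a back-of-the-envelope conversion to number needed to treat (NNT) can help; the arithmetic below is illustrative and uses only the percentages reported in the study:

```python
# Illustrative arithmetic only: converting the reported absolute risk
# reductions (ARR) into numbers needed to treat (NNT = 1 / ARR).

def nnt(absolute_risk_reduction: float) -> float:
    """Number needed to treat to prevent one event over the study period."""
    return 1.0 / absolute_risk_reduction

print(round(nnt(0.012)))  # ages 75-84: 1.2% ARR over 5 years -> treat ~83 patients
print(round(nnt(0.044)))  # ages 85+:   4.4% ARR              -> treat ~23 patients
```

In other words, the older the patient group, the fewer patients needed treatment to prevent one major cardiovascular event.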

Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at imnews@mdedge.com.

References

1. Pfeffer MA et al. Circulation. 2002;105(20):2341-6.

2. Cohen DE et al. Am J Cardiol. 2006;97(8A):77C-81C.

3. Hyogo H et al. Metabolism. 2008;57(12):1711-8.

4. FDA Drug Safety Communication: Important safety label changes to cholesterol-lowering statin drugs. 2012 Feb 28.

5. Cholesterol Treatment Trialists’ Collaboration. Lancet. 2022;400(10355):832-45.

6. Cohen JD et al. J Clin Lipidol. 2012;6(3):208-15.

7. Herrett E et al. BMJ. 2021 Feb 24;372:n1355.

8. Wood FA et al. N Engl J Med. 2020;383(22):2182-4.

9. Xu W et al. Ann Intern Med. 2024;177(6):701-10.


Breakthrough Blood Test for Colorectal Cancer Gets Green Light

Article Type
Changed
Wed, 08/07/2024 - 14:59

 

A breakthrough in medical testing now allows for colorectal cancer screening with just a simple blood test, promising a more accessible and less invasive way to catch the disease early. 

The FDA approved the test, called Shield, on July 29. Shield can accurately detect tumors in the colon or rectum about 87% of the time when the cancer is in treatable early stages. The approval was announced by the test’s maker, Guardant Health, and comes just months after promising clinical trial results were published in The New England Journal of Medicine.

Colorectal cancer is among the most common cancers diagnosed in the United States each year and one of the leading causes of cancer deaths. The condition is treatable in early stages, but about 1 in 3 people don’t stay up to date on regular screenings, which should begin at age 45.

The simplicity of a blood test could make it more likely for people to be screened for and, ultimately, survive the disease. Other primary screening options include feces-based tests or colonoscopy. The 5-year survival rate for colorectal cancer is 64%.

While highly accurate at detecting DNA shed by tumors during treatable stages of colorectal cancer, the Shield test was not as effective at detecting precancerous areas of tissue, which are typically removed after being detected.

In its news release, Guardant Health officials said they anticipate the test to be covered under Medicare. The out-of-pocket cost for people whose insurance does not cover the test has not yet been announced. The test is expected to be available by next week, The New York Times reported.

If someone’s Shield test comes back positive, the person would then get more tests to confirm the result. Shield was shown in trials to have a 10% false positive rate.
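
The need for confirmatory testing follows from how screening statistics combine with disease prevalence. A minimal Bayes sketch, using the sensitivity (87%) and false positive rate (10%) reported above, and an illustrative 0.5% prevalence that is our assumption rather than a study figure:

```python
# Hedged sketch: positive predictive value (PPV) of a screening test.
# Sensitivity (0.87) and false positive rate (0.10) come from the article;
# the 0.5% prevalence is an illustrative assumption, not a study figure.

def ppv(sensitivity: float, false_positive_rate: float, prevalence: float) -> float:
    """Probability that a positive result reflects true disease."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.87, 0.10, 0.005), 3))  # 0.042
```

At that assumed prevalence, only about 4% of positive results would reflect actual cancer, which is why a positive Shield test leads to colonoscopy rather than a diagnosis.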

“I was in for a routine physical, and my doctor asked when I had my last colonoscopy,” said John Gormly, a 77-year-old business executive in Newport Beach, California, according to a Guardant Health news release. “I said it’s been a long time, so he offered to give me the Shield blood test. A few days later, the result came back positive, so he referred me for a colonoscopy. It turned out I had stage II colon cancer. The tumor was removed, and I recovered very quickly. Thank God I had taken that blood test.”
 

A version of this article appeared on WebMD.com.

Publications
Topics
Sections

 

A breakthrough in medical testing now allows for colorectal cancer screening with just a simple blood test, promising a more accessible and less invasive way to catch the disease early. 

The FDA on July 29 approved the test, called Shield, which can accurately detect tumors in the colon or rectum about 87% of the time when the cancer is in treatable early stages. The approval was announced July 29 by the test’s maker, Guardant Health, and comes just months after promising clinical trial results were published in The New England Journal of Medicine.

Colorectal cancer is among the most common types of cancer diagnosed in the United States each year, along with being one of the leading causes of cancer deaths. The condition is treatable in early stages, but about 1 in 3 people don’t stay up to date on regular screenings, which should begin at age 45.

The simplicity of a blood test could make it more likely for people to be screened for and, ultimately, survive the disease. Other primary screening options include feces-based tests or colonoscopy. The 5-year survival rate for colorectal cancer is 64%.

While highly accurate at detecting DNA shed by tumors during treatable stages of colorectal cancer, the Shield test was not as effective at detecting precancerous areas of tissue, which are typically removed after being detected.

In its news release, Guardant Health officials said they anticipate the test to be covered under Medicare. The out-of-pocket cost for people whose insurance does not cover the test has not yet been announced. The test is expected to be available by next week, The New York Times reported.

If someone’s Shield test comes back positive, the person would then get more tests to confirm the result. Shield was shown in trials to have a 10% false positive rate.

“I was in for a routine physical, and my doctor asked when I had my last colonoscopy,” said John Gormly, a 77-year-old business executive in Newport Beach, California, according to a Guardant Health news release. “I said it’s been a long time, so he offered to give me the Shield blood test. A few days later, the result came back positive, so he referred me for a colonoscopy. It turned out I had stage II colon cancer. The tumor was removed, and I recovered very quickly. Thank God I had taken that blood test.”
 

A version of this article appeared on WebMD.com.

 

A breakthrough in medical testing now allows for colorectal cancer screening with just a simple blood test, promising a more accessible and less invasive way to catch the disease early. 

The FDA on July 29 approved the test, called Shield, which can accurately detect tumors in the colon or rectum about 87% of the time when the cancer is in treatable early stages. The approval was announced July 29 by the test’s maker, Guardant Health, and comes just months after promising clinical trial results were published in The New England Journal of Medicine.

Colorectal cancer is among the most common types of cancer diagnosed in the United States each year, along with being one of the leading causes of cancer deaths. The condition is treatable in early stages, but about 1 in 3 people don’t stay up to date on regular screenings, which should begin at age 45.

The simplicity of a blood test could make it more likely for people to be screened for and, ultimately, survive the disease. Other primary screening options include feces-based tests or colonoscopy. The 5-year survival rate for colorectal cancer is 64%.

While highly accurate at detecting DNA shed by tumors during treatable stages of colorectal cancer, the Shield test was not as effective at detecting precancerous areas of tissue, which are typically removed after being detected.

In its news release, Guardant Health said it anticipates that the test will be covered under Medicare. The out-of-pocket cost for people whose insurance does not cover the test has not yet been announced. The test is expected to be available by next week, The New York Times reported.

If someone’s Shield test comes back positive, follow-up testing, typically a colonoscopy, is needed to confirm the result. In trials, Shield had a 10% false positive rate.
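Why a positive screen still needs confirmation can be made concrete with a quick Bayes'-rule sketch. The sensitivity (~87%) and false positive rate (10%) below come from the article; the 0.5% prevalence figure is a hypothetical assumption for illustration, not a number from the trial.

```python
# Illustrative only: positive predictive value (PPV) of a screening test.
# Sensitivity and false positive rate are the article's reported figures;
# the prevalence used below is a hypothetical assumption.

def ppv(sensitivity: float, false_positive_rate: float, prevalence: float) -> float:
    """Fraction of positive results that reflect true disease (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume, hypothetically, that 0.5% of the screened population has colorectal cancer.
print(round(ppv(0.87, 0.10, 0.005), 3))  # ≈ 0.042
```

Under that assumption, only about 4% of positive screens would reflect actual cancer, which is why a confirmatory colonoscopy follows every positive result.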

“I was in for a routine physical, and my doctor asked when I had my last colonoscopy,” said John Gormly, a 77-year-old business executive in Newport Beach, California, according to a Guardant Health news release. “I said it’s been a long time, so he offered to give me the Shield blood test. A few days later, the result came back positive, so he referred me for a colonoscopy. It turned out I had stage II colon cancer. The tumor was removed, and I recovered very quickly. Thank God I had taken that blood test.”
 

A version of this article appeared on WebMD.com.


New Models Predict Time From Mild Cognitive Impairment to Dementia

Article Type
Changed
Tue, 07/30/2024 - 10:23

Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.

The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.

The findings were published online in Neurology.
 

Easy-to-Use Prototype

On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.

Earlier published models predicting cognitive decline either are limited to patients with MCI or haven’t been developed for easy clinical use, they added.

For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.

Researchers used linear mixed modeling to predict MMSE scores over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aβ) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.

At follow-up, investigators found that the yearly decline in MMSE scores accelerated in both patients with MCI and those with mild dementia. In MCI, the average MMSE score declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.

In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.

The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI, a baseline MMSE of 28, and a CSF Aβ 1-42 level of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).

However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the same patient would not reach an MMSE of 20 for 8.6 years.

For a hypothetical patient with mild dementia, a baseline MMSE of 20, and a CSF Aβ 1-42 level of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline is slowed by 30% with drug treatment.
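The arithmetic behind these hypothetical treatment effects is a simple scaling: if decline is slowed by 30%, the time to reach a given MMSE threshold stretches by a factor of 1/(1 − 0.30). A minimal sketch, using the article's point estimates and assuming (as a simplification) that the slowing applies uniformly over the whole trajectory:

```python
# Sketch of the time-to-threshold scaling used in the article's hypotheticals.
# Assumes a uniform 30% slowing of decline; real model trajectories are nonlinear.

def time_with_slowed_decline(baseline_years: float, reduction: float) -> float:
    """Time to reach the same MMSE threshold when decline is slowed by `reduction`."""
    return baseline_years / (1.0 - reduction)

print(round(time_with_slowed_decline(6.0, 0.30), 1))  # MCI patient: 8.6 years
print(round(time_with_slowed_decline(2.3, 0.30), 1))  # mild dementia patient: 3.3 years
```

This reproduces the 8.6- and 3.3-year figures from the article's two hypothetical patients.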

External validation of the prediction models using data from the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal cohort of participants who were cognitively unimpaired or had MCI or dementia, showed comparable performance between the model-building approaches.

Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.

Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.

“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.

Compared with other studies predicting MMSE decline using different statistical techniques, the new models showed similar or better predictive performance while requiring the same or less information, the investigators noted.

The study used the MMSE as its measure of cognition, but MMSE scores can vary within individuals even among cognitively normal patients, and those with cognitive decline may score lower when tested later in the day. Another limitation is that the models were built for use in memory clinics, so generalizability to the general population may be limited.

The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.

A version of this article first appeared on Medscape.com.


Article Source

FROM NEUROLOGY
