PCP Visits Save Lives of Older Patients After Cancer Surgery

TOPLINE:

Primary care visits within 90 days after cancer surgery are linked to a lower mortality rate in older adults. Patients with a primary care visit had a 90-day mortality rate of 0.3% compared with 3.3% for those without. 

METHODOLOGY:

  • A total of 2566 patients aged 65 years or older who underwent inpatient cancer surgery between January 1, 2017, and December 31, 2019, were included in a retrospective cohort study.
  • Patients were categorized on the basis of having a primary care practitioner (PCP) and whether they had a primary care visit within 90 postoperative days.
  • The primary outcome was 90-day postoperative mortality, analyzed using inverse propensity weighted Kaplan-Meier curves.

TAKEAWAY: 

  • Patients with a primary care visit within 90 postoperative days had a significantly lower 90-day mortality rate (0.3%) than those without a visit (3.3%; P = .001).
  • Older adults without a PCP had a higher 90-day postoperative mortality rate (3.6%) than those with a PCP (2.0%; P = .01).
  • Patients who had a primary care visit were more likely to be older, have a higher comorbidity score, and have higher rates of emergency department visits and readmissions.

IN PRACTICE:

“Identifying older patients with cancer who do not have a PCP in the preoperative setting is, therefore, a potential intervention point; such patients could be referred to establish primary care or prioritized for assessment in a preoperative optimization clinic,” wrote the study authors. 

SOURCE:

The study was led by Hadiza S. Kazaure, MD, of Duke University Medical Center in Durham, North Carolina. It was published online in JAMA Surgery.

LIMITATIONS:

The study was retrospective and performed at a single institution, which may limit the generalizability of the results. Coding errors were possible, and details on potential confounders such as frailty and severity of comorbidities are lacking. Mortality was low overall, limiting further adjusted and cancer-specific analyses. Data linkage between the electronic health record and Medicare and Medicaid databases was not possible, limiting analysis of data from patients with external PCPs.

DISCLOSURES:

Dr. Kazaure disclosed receiving grants from the National Cancer Institute. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Can Addressing Depression Reduce Chemo Toxicity in Older Adults?

TOPLINE:

Elevated depression symptoms are linked to an increased risk for severe chemotherapy toxicity in older adults with cancer. This risk is mitigated by geriatric assessment (GA)-driven interventions.

METHODOLOGY:

  • Researchers conducted a secondary analysis of a randomized controlled trial to evaluate whether greater reductions in grade 3 chemotherapy-related toxicities occurred with geriatric assessment-driven interventions vs standard care.
  • A total of 605 patients aged 65 years and older with any stage of solid malignancy were included, with 402 randomized to the intervention arm and 203 to the standard-of-care arm.
  • Mental health was assessed using the Mental Health Inventory 13, and chemotherapy toxicity was graded by the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.0.
  • Patients in the intervention arm received recommendations from a multidisciplinary team based on their baseline GA, while those in the standard-of-care arm received only the baseline assessment results.
  • The study was conducted at City of Hope National Medical Center in Duarte, California, and patients were followed throughout treatment or for up to 6 months from starting chemotherapy.

TAKEAWAY:

  • According to the authors, patients with depression had increased chemotherapy toxicity in the standard-of-care arm (70.7% vs 54.3%; P = .02) but not in the GA-driven intervention arm (54.3% vs 48.5%; P = .27).
  • The association between depression and chemotherapy toxicity was also seen after adjustment for the Cancer and Aging Research Group toxicity score (odds ratio [OR], 1.98; 95% CI, 1.07-3.65) and for demographic, disease, and treatment factors (OR, 2.00; 95% CI, 1.03-3.85).
  • No significant association was found between anxiety and chemotherapy toxicity in either the standard-of-care arm (univariate OR, 1.07; 95% CI, 0.61-1.88) or the GA-driven intervention arm (univariate OR, 1.15; 95% CI, 0.78-1.71).
  • The authors stated that depression was associated with increased odds of hematologic-only toxicities (OR, 2.50; 95% CI, 1.13-5.56) in the standard-of-care arm.
  • An analysis of a small subgroup found associations between elevated anxiety symptoms and increased risk for hematologic and nonhematologic chemotherapy toxicities.

IN PRACTICE:

“The current study showed that elevated depression symptoms are associated with increased risk of severe chemotherapy toxicities in older adults with cancer. This risk was mitigated in those in the GA intervention arm, which suggests that addressing elevated depression symptoms may lower the risk of toxicities,” the authors wrote. “Overall, elevated anxiety symptoms were not associated with risk for severe chemotherapy toxicity.”

SOURCE:

Reena V. Jayani, MD, MSCI, of Vanderbilt University Medical Center in Nashville, Tennessee, was the first and corresponding author of the paper. The study was published online August 4, 2024, in Cancer.

LIMITATIONS:

The thresholds for depression and anxiety used in the Mental Health Inventory 13 were based on an English-speaking population, which may not be fully applicable to Chinese- and Spanish-speaking patients included in the study. Depression and anxiety were not evaluated by a mental health professional or with a structured interview to assess formal diagnostic criteria. Psychiatric medication used at the time of baseline GA was not included in the analysis. The study is a secondary analysis of a randomized controlled trial, and it is not known which components of the interventions affected mental health.

DISCLOSURES:

This research project was supported by the UniHealth Foundation, the City of Hope Center for Cancer and Aging, and the National Institutes of Health. One coauthor disclosed receiving institutional research funding from AstraZeneca and Brooklyn ImmunoTherapeutics and consulting for multiple pharmaceutical companies, including AbbVie, Adagene, and Bayer HealthCare Pharmaceuticals. William Dale, MD, PhD, of City of Hope National Medical Center, served as senior author and a principal investigator. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Management, Evaluation of Chronic Itch in Older Adults

Immunoglobulin E (IgE) and eosinophils appeared to be reliable biomarkers of type 2 inflammation in chronic pruritus of unknown origin (CPUO) and predictors of a positive response to immunomodulatory therapies, Shawn G. Kwatra, MD, said at the ElderDerm conference on dermatology in older patients hosted by the GW School of Medicine & Health Sciences.

“We found a few years ago that eosinophils seem to differentiate this group, and now we’re finding that IgE and CBC [complete blood count] differential can help you get a little better sense of who has an immune-driven itch vs something more neuropathic,” said Dr. Kwatra, professor and chair of dermatology at the University of Maryland, Baltimore, who founded and directed the Johns Hopkins Itch Center before coming to the University of Maryland in 2023. Not all patients with immune-driven itch will have these biomarkers, “but it’s a helpful tool,” he said.

CPUO is the term that is increasingly being used, he said, to describe intense, chronic pruritus without primary skin lesions or rashes and without any known systemic cause. It becomes more common as people get older and is sometimes debilitating. The initial evaluation should be kept “simple and straightforward,” he advised, with heightened concern for underlying malignancy in those who present with an itch of less than 12 months’ duration.

Biologics, JAK Inhibitors: Case Reports, Ongoing Research 

Research conducted by Dr. Kwatra and Jaya Manjunath, a fourth-year medical student at The George Washington University, Washington, documented higher levels of Th2-associated cytokines and inflammatory markers in patients with CPUO who had elevated IgE or eosinophil levels, or both, than in patients with itch who had low IgE and eosinophil levels. The patients with higher levels also had a greater response to off-label treatment with immunomodulatory therapy.

“Multiple Th2-related inflammatory markers, like IL [interleukin]-5 and eotaxin-3, were reduced after dupilumab” in patients who responded to the therapy, said Ms. Manjunath, who co-presented the meeting session on chronic itch with Dr. Kwatra. Other changes in the plasma cytokine profile included a reduction in the serum level of thymus and activation-regulated chemokine, which is a biomarker for atopic dermatitis. The research is under review for publication.

Meanwhile, a phase 3 trial (LIBERTY-CPUO-CHIC) of dupilumab for CPUO is currently underway, Dr. Kwatra noted. Investigators are randomizing patients with severe pruritus (Worst Itch Numeric Rating Scale [WI-NRS] ≥ 7) to dupilumab or placebo for 12 or 24 weeks.

In one of several cases shared by Dr. Kwatra and Ms. Manjunath, a 71-year-old Black woman with a 6-month history of generalized itch (WI-NRS = 10) and a history of type 2 diabetes, hypertension, and chronic kidney disease was found to have elevated eosinophil levels and a negative malignancy workup. Previous therapies included antihistamines and topical steroids. She was started on a 600-mg loading dose of subcutaneous dupilumab followed by 300 mg every 14 days. At the 2-month follow-up, her WI-NRS score was 0.

Because “dupilumab is off label right now for this form of itch, oftentimes our first line is methotrexate,” Dr. Kwatra said. Patients “can have a good response with this therapeutic.”

He also described the case of a 72-year-old Black woman with total body itch for 2 years (WI-NRS = 10) and a history of seasonal allergies, thyroid disease, and hypertension. Previous therapies included prednisone, antihistamines, topical steroids, and gabapentin. The patient was found to have high IgE (447 kU/L) and eosinophil levels (4.9%), was started on methotrexate, and had an itch score of 0 at the 8-month follow-up.

JAK inhibitors may also have a role in the management of CPUO. A phase 2 nonrandomized controlled trial of abrocitinib for adults with prurigo nodularis (PN) or CPUO, recently published in JAMA Dermatology, showed itch scores decreased by 53.7% in the CPUO group (and 78.3% in the PN group) after 12 weeks of treatment with oral abrocitinib 200 mg daily. Patients had significant improvements in quality of life and no serious adverse events, said Dr. Kwatra, the lead author of the paper.

One of these patients was a 73-year-old White man who had experienced total body itch for 1.5 years (predominantly affecting his upper extremities; WI-NRS = 10) and a history of ascending aortic aneurysm, hypertension, and hyperlipidemia. Previous failed therapies included dupilumab (> 6 months), topical steroids, tacrolimus, and antihistamines. Labs showed elevated IgE (456 kU/L) and eosinophil levels (11.7%). After 12 weeks of treatment with abrocitinib, the WI-NRS decreased to 2.

PD-1 Inhibitors as a Trigger

Chronic pruritus caused by the anticancer PD-1 inhibitors is becoming more common as the utilization of these immune checkpoint inhibitors increases, Dr. Kwatra noted. “You don’t see much in the skin, but [these patients have] very high IgE and eosinophils,” he said. “We’ve been seeing more reports recently of utilizing agents that target type 2 inflammation off label for PD-1 inhibitor–related skin manifestations.”

One such patient with PD-1 inhibitor–induced pruritus was a 65-year-old White man with metastatic melanoma who reported a 6-month history of itching that began 3 weeks after the start of treatment with the PD-1 inhibitor pembrolizumab. His WI-NRS score was 10 despite treatment with topical steroids and antihistamines. He had a history of psoriasis. Labs showed elevated IgE (1350 kU/L) and eosinophil levels (4.5%). At a 4-month follow-up after treatment with off-label dupilumab (a 600-mg subcutaneous loading dose followed by 300 mg every 14 days), his WI-NRS score was 0.

In a paper recently published in JAAD International, Dr. Kwatra, Ms. Manjunath, and coinvestigators reported on a series of 15 patients who developed chronic pruritus following an immune stimulus exposure, including immunotherapy and vaccination (2024 Apr 7;16:97-102. doi: 10.1016/j.jdin.2024.03.022). Most immunotherapy-treated patients experienced pruritus during treatment or after 21-60 days of receiving treatment, and the patients with vaccine-stimulated pruritus (after Tdap and messenger RNA COVID-19 vaccination) developed pruritus within a week of vaccination.

In addition to the elevated levels of IgE and eosinophils, plasma cytokine analysis showed elevated levels of IL-5, thymic stromal lymphopoietin, and other Th2-related cytokines and inflammatory markers in patients with immune-stimulated pruritus compared with healthy controls, Ms. Manjunath said at the meeting.

When a Malignancy Workup Becomes Important

The initial part of any diagnostic workup for CPUO should include CBC with differential, liver function tests, renal function tests, and thyroid function testing, said Dr. Kwatra, referring to a diagnostic algorithm he developed, which was published as part of a CME review in the Journal of the American Academy of Dermatology in 2022.

Then, as indicated by risk factors in the history and physical, one could order other tests such as HIV serology, hepatitis B/C serologies, bullous pemphigoid testing, chest x-rays, evaluation for gammopathies, stool examination for ova and parasites, or heavy metal testing. “Do you do everything at once? We like to keep it straightforward,” Dr. Kwatra said. “Depending on the patient’s risk factors, you could order more or less.”

A malignancy workup should be strongly considered in patients whose itch duration is less than 12 months — and especially if the duration is less than 3 months — with an emphasis on cancers more frequently associated with itch: hematologic and hepatobiliary cancers. This is “when concern should be heightened ... when there should be a lower threshold for workup,” he said.

The 12-month recommendation stems from a Danish cohort study published in 2014 that demonstrated a twofold increased incidence of cancer in the first 3 months after a diagnosis of pruritus. The 1-year absolute cancer risk was 1.63%.

Other risk factors for underlying malignancy or malignancy development in patients with CPUO include age older than 60 years, male sex, liver disease, and current or prior smoking, according to another study, noted Dr. Kwatra.
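
To make this tiered approach concrete, here is a minimal sketch in Python. The patient fields, test-name strings, and the choice to let any single risk factor trigger the malignancy workup are illustrative assumptions for this sketch, not a validated clinical rule or a tool published by Dr. Kwatra’s group.

```python
from dataclasses import dataclass, field
from typing import List

# First-line labs suggested for every CPUO patient
INITIAL_WORKUP = [
    "CBC with differential",
    "liver function tests",
    "renal function tests",
    "thyroid function tests",
]

@dataclass
class CpuoPatient:
    itch_duration_months: float
    age: int
    male: bool = False
    liver_disease: bool = False
    current_or_prior_smoker: bool = False
    # Tests directed by risk factors found on history and physical,
    # e.g. "HIV serology", "hepatitis B/C serologies", "chest x-ray"
    risk_directed_tests: List[str] = field(default_factory=list)

def suggested_workup(patient: CpuoPatient) -> List[str]:
    """Tiered CPUO workup: initial labs for everyone, risk-directed
    add-ons, and a malignancy workup when concern is heightened."""
    tests = INITIAL_WORKUP + patient.risk_directed_tests
    # Itch duration under 12 months (especially under 3) heightens
    # malignancy concern; the other risk factors lower the threshold.
    risk_factors = (
        patient.age > 60
        or patient.male
        or patient.liver_disease
        or patient.current_or_prior_smoker
    )
    if patient.itch_duration_months < 12 or risk_factors:
        tests.append("malignancy workup (emphasis: hematologic and hepatobiliary cancers)")
    return tests

# Example: a 72-year-old with 5 months of unexplained generalized itch
print(suggested_workup(CpuoPatient(itch_duration_months=5, age=72)))
```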

Dr. Kwatra disclosed that he is an advisory board member/consultant for Pfizer, Regeneron, Sanofi, and other companies and an investigator for Galderma, Incyte, Pfizer, and Sanofi. Ms. Manjunath served as the codirector of the ElderDerm conference.

A version of this article first appeared on Medscape.com.


Cognitive Breakdown: The New Memory Condition Primary Care Needs to Know

Patients experiencing memory problems often come to neurologist David Jones, MD, for second opinions. They repeat questions and sometimes misplace items. Their primary care clinician has suggested they may have Alzheimer’s disease or something else.

In many cases, Dr. Jones, a neurologist with Mayo Clinic in Rochester, Minnesota, performs a series of investigations and finds the patient instead has a different type of neurodegenerative syndrome, one that progresses slowly, is limited chiefly to loss of memory, and, on testing, affects only the limbic system.

The news of this diagnosis can be reassuring to patients.

“Memory problems are not always Alzheimer’s disease,” Dr. Jones said. “It’s important to broaden the differential diagnosis and seek diagnostic clarity and precision for patients who experience problems with brain functioning later in life.”

Dr. Jones and colleagues recently published clinical criteria for what they call limbic-predominant amnestic neurodegenerative syndrome (LANS).

Various underlying etiologies are known to cause degeneration of the limbic system, the most frequent being a buildup of deposits of TAR DNA-binding protein 43 (TDP-43) referred to as limbic-predominant, age-related TDP-43 encephalopathy neuropathological change (LATE-NC). LATE-NC first involves the amygdala, followed by the hippocampus and then the middle frontal gyrus, and is found in about 40% of autopsied brains of people over the age of 85 years.

By contrast, amnestic syndromes originating from neocortical degeneration are largely caused by neuropathological changes from Alzheimer’s disease and often present with non-memory features.

Criteria for LANS

The criteria are broken down into core, standard, and advanced features.

Core clinical features:

The patient must present with a slow, amnestic-predominant neurodegenerative syndrome — an insidious onset with gradual progression over 2 or more years — without another condition that better accounts for the clinical deficits.

Standard supportive features:

1. Older age at evaluation.

  • Most patients are at least 75 years of age. Older age increases the likelihood that the amnestic syndrome is caused by degeneration of the limbic system.

2. Mild clinical syndrome.

  • A diagnosis of mild cognitive impairment or mild amnestic dementia (ie, a score of ≤ 4 on the Clinical Dementia Rating Sum of Boxes [CDR-SB]) at the first visit.

3. Hippocampal atrophy out of proportion to syndrome severity.

  • Hippocampal volume is smaller than expected on MRI relative to the degree of impairment on the CDR-SB.

4. Mildly impaired semantic memory.

Advanced supportive features:

1. Limbic hypometabolism and absence of a neocortical degenerative pattern on fludeoxyglucose-18 PET imaging.

2. Low likelihood of significant neocortical tau pathology.


Dr. Jones and colleagues also classified a degree of certainty for LANS to use when making a diagnosis. Those with the highest likelihood meet all core, standard, and advanced features.

Patients with a high likelihood of LANS meet the core features plus either at least three standard features and one advanced feature or at least two standard features and two advanced features. Those with a moderate likelihood meet the core features plus either at least three standard features or at least two standard features and one advanced feature. Those with a low likelihood meet the core features and two or fewer standard features.
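
A minimal sketch of these counting rules in Python follows; the function name and return strings are illustrative, written from the rules as summarized above rather than taken from the published criteria:

```python
def lans_likelihood(core_met: bool, n_standard: int, n_advanced: int) -> str:
    """Map LANS feature counts to a diagnostic certainty level.

    core_met   -- core clinical features present (required)
    n_standard -- standard supportive features met (0-4)
    n_advanced -- advanced supportive features met (0-2)
    """
    if not core_met:
        return "criteria not met"
    if n_standard == 4 and n_advanced == 2:
        return "highest likelihood"  # all core, standard, and advanced features
    if (n_standard >= 3 and n_advanced >= 1) or (n_standard >= 2 and n_advanced >= 2):
        return "high likelihood"
    if n_standard >= 3 or (n_standard >= 2 and n_advanced >= 1):
        return "moderate likelihood"
    return "low likelihood"  # core features plus two or fewer standard features

# Example: core features plus three standard features and one advanced feature
print(lans_likelihood(core_met=True, n_standard=3, n_advanced=1))  # high likelihood
```

Because the checks run from most to least stringent, each case lands in the single highest certainty level whose thresholds it meets.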

To develop these criteria, the group screened 218 autopsied patients participating in databases for the Mayo Clinic Study of Aging and the multicenter Alzheimer’s Disease Neuroimaging Initiative. They conducted neuropathological assessments, reviewed MRI and PET scans of the brains, and studied fluid biomarkers from samples of cerebrospinal fluid.

In LANS, the neocortex exhibits normal function, Dr. Jones said. High-level language functions, visual spatial functions, and executive function are preserved, and the disease stays mild for many years. LANS is highly associated with LATE, for which no biomarkers are yet available.

The National Institute on Aging in May 2023 held a workshop on LATE, and a consensus group was formed to publish criteria to help with the diagnosis. Many LANS criteria likely will be in that publication as well, Dr. Jones said.

Several steps lay ahead to improve the definition of LANS, the authors wrote, including conducting prospective studies and developing clinical tools that are sensitive and specific to its cognitive features. The development of in vivo diagnostic markers of TDP-43 pathology is needed to embed LANS into a disease state driven by LATE-NC, according to Dr. Jones’ group. Because LANS is newly defined, clinical trials are needed to determine the best treatments.

Heterogeneous Dementia

“We are increasingly recognizing that the syndrome of dementia in older adults is heterogeneous,” said Sudha Seshadri, MD, DM, a behavioral neurologist and founding director of the Glenn Biggs Institute for Alzheimer’s and Neurodegenerative Diseases at the University of Texas Health Science Center at San Antonio.

LANS “is something that needs to be diagnosed early but also needs to be worked up in a nuanced manner, with assessment of the pattern of cognitive deficits, the pattern of brain shrinkage on MRI, and also how the disease progresses over, say, a year,” said Dr. Seshadri. “We need to have both some primary care physicians and geriatricians who are comfortable doing this kind of nuanced advising and others who may refer patients to behavioral neurologists, geriatricians, or psychiatrists who have that kind of expertise.”

About 10% of people presenting to dementia clinics potentially could fit the LANS definition, Dr. Seshadri said. Dr. Seshadri was not a coauthor of the classification article but sees patients in the clinic who fit this description.

“It may be that as we start more freely giving the diagnosis of a possible LANS, the proportion of people will go up,” Dr. Seshadri said.

Primary care physicians can use a variety of assessments to help diagnose dementias, she said. These include the Montreal Cognitive Assessment (MoCA), which takes about 10 minutes to administer, and an MRI to determine the level of hippocampal atrophy. Blood tests for p-tau 217 and other plasma tests can stratify risk and guide referrals to a neurologist. Clinicians also should look for reversible causes of memory complaints, such as vitamin B12 deficiency, folate deficiency, or low thyroid hormone levels.

“There aren’t enough behavioral neurologists around to work up every single person who has memory problems,” Dr. Seshadri said. “We really need to partner on educating and learning from our primary care partners as to what challenges they face, advocating for them to be able to address that, and then sharing what we know, because what we know is an evolving thing.”

Other tools primary care clinicians can use in the initial evaluation of dementia include the General Practitioner Assessment of Cognition and the Mini-Cog, as part of annual Medicare wellness visits or in response to patient or caregiver concerns about memory, said Allison Kaplan, MD, a family physician at Desert Grove Family Medical in Gilbert, Arizona, who coauthored a point-of-care guide for the American Academy of Family Physicians. Each of these tests takes just 3-4 minutes to administer.

If a patient has a positive result on the Mini-Cog or similar test, they should return for further dementia evaluation using the MoCA, Mini-Mental State Examination, or Saint Louis University Mental Status examination, she said. Physicians also can order brain imaging and lab work, as Dr. Seshadri noted. Dementias often accompany some type of cardiovascular disease, which should be managed.

Even if a patient or family member doesn’t express concern about memory, physicians can look for certain signs during medical visits.

“Patients will keep asking the same question, or you notice they’re having difficulty taking care of themselves, especially independent activities of daily living, which could clue you in to a dementia diagnosis,” she said.

Dr. Jones, Dr. Seshadri, and Dr. Kaplan disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


FROM BRAIN COMMUNICATIONS

An Effective Nondrug Approach to Improve Sleep in Dementia, Phase 3 Data Show

A multicomponent nonpharmaceutical intervention improves sleep in people with dementia living at home, early results of a new phase 3 randomized controlled trial (RCT) show.

The benefits of the intervention — called DREAMS-START — were sustained at 8 months and extended to caregivers, the study found.

“We’re pleased with our results. We think that we were able to deliver it successfully and to a high rate of fidelity,” said study investigator Penny Rapaport, PhD, Division of Psychiatry, University College London, England.

The findings were presented at the Alzheimer’s Association International Conference (AAIC) 2024.
 

Sustained, Long-Term Effect

Sleep disturbances are very common in dementia. About 26% of people with all types of dementia will experience sleep disturbances, and that rate is higher in certain dementia subtypes, such as dementia with Lewy bodies, said Dr. Rapaport.

Such disturbances are distressing for people living with dementia as well as for those supporting them, she added. They’re “often the thing that will lead to people transitioning and moving into a care home.”

Dr. Rapaport noted that there has been no full RCT evidence that nonpharmacologic interventions or light-based treatments are effective in improving these sleep disturbances.

Medications such as antipsychotics and benzodiazepines aren’t recommended as first-line treatment in people with dementia “because often these can be harmful,” she said.

The study recruited 377 dyads of people living with dementia (mean age, 79.4 years) and their caregivers from 12 National Health Service sites across England. “We were able to recruit an ethnically diverse sample from a broad socioeconomic background,” said Dr. Rapaport.

Researchers allocated the dyads to the intervention or to a treatment as usual group.

About 92% of participants were included in the intention-to-treat analysis at 8 months, which was the primary time point.

The intervention consists of six 1-hour interactive sessions that are “personalized and tailored to individual goals and needs,” said Dr. Rapaport. It was delivered by supervised, trained graduates, not clinicians.

The sessions focused on components of sleep hygiene (healthy habits, behaviors, and environments); activity and exercise; a tailored sleep routine; strategies to manage distress; natural and artificial light; and relaxation. A whole session was devoted to supporting sleep of caregivers.

The trial included masked outcome assessments, “so the people collecting the data were blinded to the intervention group,” said Dr. Rapaport.

The primary outcome was the Sleep Disorders Inventory (SDI) score. The SDI is a questionnaire about frequency and severity of sleep-disturbed behaviors completed by caregivers; a higher score indicates a worse outcome. The study adjusted for baseline SDI score and study site.
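For readers curious how such an adjusted comparison looks in code, here is a minimal sketch using the statsmodels formula API. It is not the trial’s analysis code; the long-format layout and every column name (sdi, arm, baseline_sdi, site, dyad_id, month) are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per dyad per follow-up (4 and 8 months).
df = pd.read_csv("dreams_start_followup.csv")  # invented file name

# Treatment effect on the SDI, adjusted for baseline SDI and site, with a
# random intercept per dyad linking the repeated assessments.
model = smf.mixedlm(
    "sdi ~ arm * C(month) + baseline_sdi + C(site)",
    data=df,
    groups=df["dyad_id"],
)
result = model.fit()
print(result.summary())  # arm terms approximate the adjusted mean differences
```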

The adjusted mean difference between groups on the SDI was −4.7 points (95% confidence interval [CI], −7.65 to −1.74; P = .002) at 8 months.

The minimal clinically important difference on the SDI is a 4-point change, noted Dr. Rapaport.

The adjusted mean difference on the SDI at 4 months (a secondary outcome) was −4.4 points (95% CI, −7.3 to −1.5; P = .003).

Referring to illustrative graphs, Dr. Rapaport said that SDI scores decreased at both 4 and 8 months. “You can see statistically, there’s a significant difference between groups at both time points,” she said.

“We saw a sustained effect, so not just immediately after the intervention, but afterwards at 8 months.”

As for other secondary outcomes, the study found a significant reduction in neuropsychiatric symptoms among people with dementia at 8 months in the intervention arm relative to the control arm.

In addition, sleep and anxiety significantly improved among caregivers after 8 months. This shows “a picture of things getting better for the person with dementia, and the person who’s caring for them,” said Dr. Rapaport.

She noted the good adherence rate, with almost 83% of people in the intervention arm completing four or more sessions.

Fidelity to the intervention (ie, the extent to which it is implemented as intended) was also high, “so we feel it was delivered well,” said Dr. Rapaport.

Researchers also carried out a health economics analysis and looked at strategies for implementation of the program, but Dr. Rapaport did not discuss those results.

Encouraging Findings

Commenting for this news organization, Alex Bahar-Fuchs, PhD, Faculty of Health, School of Psychology, Deakin University, Victoria, Australia, who co-chaired the session featuring the research, said the findings of this “well-powered” RCT are “encouraging,” both for the primary outcome of sleep quality and for some of the secondary outcomes for the care-partner.

“The study adds to the growing evidence behind several nonpharmacological treatment approaches for cognitive and neuropsychiatric symptoms of people with dementia,” he said. 

The results “offer some hope for the treatment of a common disturbance in people with dementia which is associated with poorer outcomes and increased caregiver burden,” he added. 

An important area for further work would be to incorporate more objective measures of sleep quality, said Dr. Bahar-Fuchs.

Because the primary outcome was measured using a self-report questionnaire (the SDI) completed by care-partners, and because the intervention arm could not be blinded, “it remains possible that some detection bias may have affected the study findings,” said Dr. Bahar-Fuchs.

He said he would like to see the research extended to include an active control condition “to be able to better ascertain treatment mechanisms.”

The study was supported by the National Institute for Health and Care Research. Dr. Rapaport and Dr. Bahar-Fuchs reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.


FROM AAIC 2024

Self-Rated Health Predicts Hospitalization and Death

Adults who self-rated their health as poor in middle age were at least three times more likely to die or be hospitalized when older than those who self-rated their health as excellent, based on data from nearly 15,000 individuals.

Previous research has shown that self-rated health is an independent predictor of hospitalization or death, but the effects of individual subject-specific risks on these outcomes have not been examined, wrote Scott Z. Mu, MD, of the Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, and colleagues.

In a study published in the Journal of General Internal Medicine, the researchers reviewed data from 14,937 members of the Atherosclerosis Risk in Communities (ARIC) cohort, a community-based prospective study of middle-aged men and women that began with their enrollment from 1987 to 1989. The primary outcome was the association between baseline self-rated health and subsequent recurrent hospitalizations and deaths over a median follow-up period of 27.7 years.

At baseline, 34% of the participants rated their health as excellent, 47% good, 16% fair, and 3% poor. After the median follow-up, 39%, 51%, 67%, and 83% of individuals who rated their health as excellent, good, fair, and poor, respectively, had died.

The researchers used a recurrent events survival model that adjusted for clinical and demographic factors and also allowed for dependency between the rates of hospitalization and hazards of death.
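The paper’s joint model, which ties recurrent hospitalizations to the hazard of death, has no off-the-shelf implementation in common Python libraries; as a simplified stand-in, the sketch below fits the death outcome alone with a standard Cox model in lifelines. Every column name is hypothetical, and the covariate list is abbreviated.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table; all column names are invented for illustration.
df = pd.read_csv("aric_followup.csv")

# Indicator-code self-rated health with "excellent" as the reference category,
# matching how the reported hazards of death (1.30, 2.15, 3.40) are expressed.
dummies = pd.get_dummies(df["self_rated_health"]).drop(columns=["excellent"])
model_df = pd.concat(
    [df[["time_to_death", "died", "age", "current_smoker", "diabetes"]], dummies],
    axis=1,
)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time_to_death", event_col="died")
cph.print_summary()  # exp(coef) for good/fair/poor are adjusted hazard ratios
```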

After controlling for demographics and medical history, a lower self-rating of health was associated with increased rates of hospitalization and death. Compared with individuals with baseline reports of excellent health, hospitalization rates were 1.22, 2.01, and 3.13 times higher for those with baseline reports of good, fair, or poor health, respectively. Similarly, hazards of death were 1.30, 2.15, and 3.40 times higher for those with baseline reports of good, fair, or poor health, respectively, than for individuals with baseline reports of excellent health.

Overall, individuals who reported poor health at baseline were significantly more likely than those who reported excellent health to be older (57.0 years vs 53.0 years), obese (44% vs 18%), and current smokers (39% vs 21%). Those who reported poor health at baseline also were significantly more likely than those who reported excellent health to have a history of cancer (9.5% vs 4.4%), emphysema/COPD (18% vs 2.3%), coronary heart disease (21% vs 1.6%), myocardial infarction (19% vs 1.3%), heart failure (25% vs 1.2%), hypertension (67% vs 19%), or diabetes (39% vs 4.6%).

Potential explanations for the independent association between poor self-rated health and poor outcomes include the ability of self-rated health to capture health information not accounted for by traditional risk factors, the researchers wrote in their discussion. “Another explanation is that self-rated health reflects subconscious bodily sensations that provide a direct sense of health unavailable to external observation,” they said. Alternatively, self-rated health may reinforce beneficial behaviors in those with higher self-rated health and harmful behaviors in those with lower self-rated health, they said.

The findings were limited by several factors, including the measurement of self-rated health and the validity of hospitalization as a proxy for morbidity, the researchers noted. Other limitations include the use of models instead of repeated self-rated health measures and a lack of data on interventions to directly or indirectly improve self-rated health.

However, the study shows the potential value of self-rated health in routine clinical care to predict future hospitalizations, the researchers said. “Clinicians can use this simple and convenient measure for individual patients to provide more accurate and personalized risk assessments,” they wrote.

Looking ahead, the current study findings also support the need for more research into the routine assessment not only of self-rated health but also targeted interventions to improve self-rated health and its determinants, the researchers concluded. The ARIC study has been supported by the National Heart, Lung, and Blood Institute, National Institutes of Health. Dr. Mu disclosed support from the National Heart, Lung, and Blood Institute.


Too Much Coffee Linked to Accelerated Cognitive Decline

PHILADELPHIA – Drinking more than three cups of coffee a day is linked to more rapid cognitive decline over time, results from a large study suggest.

Investigators examined the impact of different amounts of coffee and tea on fluid intelligence — a measure of cognitive functions including abstract reasoning, pattern recognition, and logical thinking.

“It’s the old adage that too much of anything isn’t good. It’s all about balance, so moderate coffee consumption is okay but too much is probably not recommended,” said study investigator Kelsey R. Sewell, PhD, AdventHealth Research Institute, Orlando, Florida.

The findings of the study were presented at the 2024 Alzheimer’s Association International Conference (AAIC).
 

One of the World’s Most Widely Consumed Beverages

Coffee is one of the most widely consumed beverages around the world. The beans contain a range of bioactive compounds, including caffeine, chlorogenic acid, and small amounts of vitamins and minerals.

Consistent evidence from observational and epidemiologic studies indicates that intake of both coffee and tea has beneficial effects on the risk for stroke, heart failure, cancers, diabetes, and Parkinson’s disease.

Several studies also suggest that coffee may reduce the risk for Alzheimer’s disease, said Dr. Sewell. However, there are limited longitudinal data on associations between coffee and tea intake and cognitive decline, particularly in distinct cognitive domains.

Dr. Sewell’s group previously published a study of cognitively unimpaired older adults that found greater coffee consumption was associated with slower cognitive decline and slower accumulation of brain beta-amyloid.

Their current study extends some of the prior findings and investigates the relationship between both coffee and tea intake and cognitive decline over time in a larger sample of older adults.

This new study included 8451 mostly female (60%) and White (97%) cognitively unimpaired adults older than 60 (mean age, 67.8 years) in the UK Biobank, a large-scale research resource containing in-depth, deidentified genetic and health information from half a million UK participants. Study subjects had a mean body mass index (BMI) of 26, and about 26% were apolipoprotein epsilon 4 (APOE e4) gene carriers.

Researchers divided coffee and tea consumption into three categories: high, moderate, and none.

For daily coffee consumption, 18% reported drinking four or more cups (high consumption), 58% reported drinking one to three cups (moderate consumption), and 25% reported that they never drink coffee. For daily tea consumption, 47% reported drinking four or more cups (high consumption), 38% reported drinking one to three cups (moderate consumption), and 15% reported that they never drink tea.

The study assessed cognitive function at baseline and at least two additional patient visits. 

Researchers used linear mixed models to assess the relationships between coffee and tea intake and cognitive outcomes. The models adjusted for age, sex, Townsend deprivation index (reflecting socioeconomic status), ethnicity, APOE e4 status, and BMI.
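A minimal sketch of such a model appears below, again using statsmodels on hypothetical column names (fluid_iq, years, coffee_group, and the covariates); the consumption-by-time interaction is what carries the rate-of-decline comparison. This illustrates the modeling approach under stated assumptions, not the investigators’ actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per assessment.
df = pd.read_csv("ukb_cognition_long.csv")  # invented file name

# Random intercept per participant; the coffee_group:years interaction terms
# compare rates of fluid-intelligence change against the high-consumption group.
model = smf.mixedlm(
    "fluid_iq ~ C(coffee_group, Treatment('high')) * years"
    " + age + sex + townsend + ethnicity + apoe4 + bmi",
    data=df,
    groups=df["participant_id"],
)
print(model.fit().summary())
```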
 

Steeper Decline 

Compared with high coffee consumption (four or more cups daily), people who never consumed coffee (beta, 0.06; standard error [SE], 0.02; P = .005) and those with moderate consumption (beta, 0.07; SE, 0.02; P < .001) had slower decline in fluid intelligence after an average of 8.83 years of follow-up.

“We can see that those with high coffee consumption showed the steepest decline in fluid intelligence across the follow up, compared to those with moderate coffee consumption and those never consuming coffee,” said Dr. Sewell, referring to illustrative graphs.

At the same time, “our data suggest that across this time period, moderate coffee consumption can serve as some kind of protective factor against cognitive decline,” she added.

For tea, there was a somewhat different pattern. People who never drank tea had a greater decline in fluid intelligence, compared with those who had moderate consumption (beta, 0.06; SE, 0.02; P = .009) or high consumption (beta, 0.06; SE, 0.02; P = .003).

Because this is an observational study, “we still need randomized controlled trials to better understand the neuroprotective mechanism of coffee and tea compounds,” said Dr. Sewell.

Responding later to a query from a meeting delegate about how moderate coffee drinking could be protective, Dr. Sewell said there are probably “different levels of mechanisms,” including at the molecular level (possibly involving amyloid toxicity) and the behavioral level (possibly involving sleep patterns).

Dr. Sewell said that she hopes this line of investigation will lead to new avenues of research in preventive strategies for Alzheimer’s disease. 

“We hope that coffee and tea intake could contribute to the development of a safe and inexpensive strategy for delaying the onset and reducing the incidence for Alzheimer’s disease.”

A limitation of the study is possible recall bias, because coffee and tea consumption were self-reported. However, this may not be much of an issue because coffee and tea consumption “is usually quite a habitual behavior,” said Dr. Sewell.

The study also had no data on midlife coffee or tea consumption and did not compare the effect of different preparation methods or types of coffee and tea — for example, green tea versus black tea. 

When asked if the study controlled for smoking, Dr. Sewell said it didn’t but added that it would be interesting to explore its impact on cognition.

Dr. Sewell reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.


Statins: So Misunderstood


Recently, a patient of mine was hospitalized with chest pain. She was diagnosed with an acute coronary syndrome and started on a statin in addition to a beta-blocker, aspirin, and clopidogrel. After discharge, she had symptoms of dizziness and recurrent chest pain, and her first thought was to stop the statin because she believed her symptoms were statin-related side effects. I will cover a few areas where I think there are some misunderstandings about statins.

Statins Are Not Bad For the Liver

When lovastatin first became available for prescription in the 1980s, frequent monitoring of transaminases was recommended. Patients and healthcare professionals became accustomed to frequent liver tests to monitor for statin toxicity, and to this day, some healthcare professionals still obtain liver function tests for this purpose.

But is there a reason to do this? Pfeffer and colleagues reported on the results of over 112,000 people enrolled in the West of Scotland Coronary Prevention trial and found that the percentage of patients with any abnormal liver function test (ALT > 3 times the upper limit of normal) was similar for patients taking pravastatin (1.4%) and for patients taking placebo (1.4%).1 A panel of liver experts concurred that statin-associated transaminase elevations were not indicative of liver damage or dysfunction.2 Furthermore, they noted that chronic liver disease and compensated cirrhosis were not contraindications to statin use.


In a small study, use of low-dose atorvastatin in patients with nonalcoholic steatohepatitis improved transaminase values in 75% of patients, and liver steatosis and nonalcoholic fatty liver disease activity scores were significantly improved on biopsy in most of the patients.3 The US Food and Drug Administration (FDA) removed the recommendation for routine monitoring of liver function for patients on statins in 2012.4

Statins Do Not Cause Muscle Pain in Most Patients

Most muscle pain occurring in patients on statins is not due to the statin, although patient concerns about muscle pain are common. In a meta-analysis of 19 large statin trials, 27.1% of participants treated with a statin reported at least one episode of muscle pain or weakness during a median of 4.3 years, compared with 26.6% of participants treated with placebo.5 Muscle pain for any reason is common, and patients on statins may stop therapy because of the symptoms.

Cohen and colleagues performed a survey of past and current statin users, asking about muscle symptoms.6 Muscle-related side effects were reported by 60% of former statin users and 25% of current users.

Herrett and colleagues performed an extensive series of n-of-1 trials involving 200 patients who had stopped or were considering stopping statins because of muscle symptoms.7 Participants received either 2-month blocks of atorvastatin 20 mg or 2-month blocks of placebo, six times. They rated their muscle symptoms on a visual analogue scale at the end of each block. There was no difference in muscle symptom scores between the statin and placebo periods.

Wood and colleagues took it a step further with an n-of-1 trial that included statin, placebo, and no treatment.8 Each participant received four bottles of atorvastatin 20 mg, four bottles of placebo, and four empty bottles. Each month, they used treatment from the bottles based on a random sequence and reported daily symptom scores. The mean symptom intensity score was 8.0 during no-tablet months, 15.4 during placebo months (P < .001, compared with no-tablet months), and 16.3 during statin months (P < .001, compared with no-tablet months; P = .39, compared with placebo).
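
A quick calculation using only the three means quoted above shows why this result is often described as a nocebo effect. The arithmetic below is illustrative and is not taken from the published analysis.

    # Mean daily symptom intensity scores quoted above (Wood et al.).
    no_tablet, placebo, statin = 8.0, 15.4, 16.3

    excess_on_placebo = placebo - no_tablet  # 7.4 points above baseline
    excess_on_statin = statin - no_tablet    # 8.3 points above baseline

    # Share of the symptom excess on statin that also occurred on placebo:
    nocebo_ratio = excess_on_placebo / excess_on_statin
    print(f"{nocebo_ratio:.2f}")  # ~0.89: roughly nine tenths of the symptom
                                  # burden appeared with placebo tablets too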
 

 

 

Statins Are Likely Helpful In the Very Elderly

Should we be using statins for primary prevention in our very old patients? For many years the answer was generally “no,” on the basis of a lack of evidence: patients in their 80s often were not included in clinical trials, and the much-used American Heart Association risk calculator stops at age 79. Given the prevalence of coronary artery disease in patients as they reach their 80s, wouldn’t primary prevention really be secondary prevention? In a recent study, Xu and colleagues compared outcomes for patients who were treated with statins for primary prevention with a group who were not.9 In patients aged 75-84, there was an absolute risk reduction for major cardiovascular events of 1.2% over 5 years; for those aged 85 and older, the risk reduction was 4.4%. Importantly, there were no significantly increased risks for myopathy or liver dysfunction in either age group.
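
Translating those absolute risk reductions into numbers needed to treat (NNT = 1/absolute risk reduction) makes the age gradient concrete; the short calculation below uses only the two percentages quoted above.

    # Absolute risk reductions for major cardiovascular events over 5 years,
    # as quoted above from Xu and colleagues.9
    arr_75_84 = 0.012    # 1.2% in patients aged 75-84
    arr_85_plus = 0.044  # 4.4% in patients aged 85 and older

    # Number needed to treat for 5 years to prevent one major event:
    print(round(1 / arr_75_84))    # ~83 patients aged 75-84
    print(round(1 / arr_85_plus))  # ~23 patients aged 85 and older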

Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at imnews@mdedge.com.

References

1. Pfeffer MA et al. Circulation. 2002;105(20):2341-6.

2. Cohen DE et al. Am J Cardiol. 2006;97(8A):77C-81C.

3. Hyogo H et al. Metabolism. 2008;57(12):1711-8.

4. FDA Drug Safety Communication: Important safety label changes to cholesterol-lowering statin drugs. 2012 Feb 28.

5. Cholesterol Treatment Trialists’ Collaboration. Lancet. 2022;400(10355):832-45.

6. Cohen JD et al. J Clin Lipidol. 2012;6(3):208-15.

7. Herrett E et al. BMJ. 2021;372:n135.

8. Wood FA et al. N Engl J Med. 2020;383(22):2182-4.

9. Xu W et al. Ann Intern Med. 2024;177(6):701-10.


Alzheimer’s Blood Test in Primary Care Could Slash Diagnostic, Treatment Wait Times


As disease-modifying treatments for Alzheimer’s disease (AD) become available, equipping primary care physicians with a highly accurate blood test could significantly reduce diagnostic wait times. Currently, the patient diagnostic journey is often prolonged owing to the limited number of AD specialists, causing concern among healthcare providers and patients alike. Now, a new study suggests that use of high-performing blood tests in primary care could identify potential patients with AD much earlier, possibly reducing wait times for specialist care and receipt of treatment.

“We need to triage in primary care and send preferentially the ones that actually could be eligible for treatment, and not those who are just worried because their grandmother reported that she has Alzheimer’s,” lead researcher Soeren Mattke, MD, DSc, told this news organization.

“By combining a brief cognitive test with an accurate blood test of Alzheimer’s pathology in primary care, we can reduce unnecessary referrals, and shorten appointment wait times,” said Dr. Mattke, director of the Brain Health Observatory at the University of Southern California in Los Angeles.

The findings were presented at the Alzheimer’s Association International Conference (AAIC) 2024.
 

Projected Wait Times of 100 Months by 2033

The investigators used a Markov model to estimate wait times for patients eligible for AD treatment, taking into account constrained capacity for specialist visits.

The model included the projected US population of people aged 55 years or older from 2023 to 2032. It assumed that individuals would undergo a brief cognitive assessment in primary care and, if results suggested early-stage cognitive impairment, be referred to an AD specialist under three scenarios: no blood test, a blood test to rule out AD pathology, and a blood test to confirm AD pathology.

According to the model, without an accurate blood test for AD pathology, projected wait times to see a specialist are about 12 months in 2024 and will increase to more than 100 months in 2033, largely owing to a lack of specialist appointments.

In contrast, with the availability of an accurate blood test to rule out AD, average wait times would be just 3 months in 2024 and increase to only about 13 months in 2033, because far fewer patients would need to see a specialist.

Availability of a blood test to rule in AD pathology in primary care would have a limited effect on wait times, the model suggests, because 50% of patients (a proportion based on expert assumptions) would still undergo confirmatory testing.
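
The study’s Markov model and its inputs are not published in the abstract, but the mechanism it captures (referral demand outrunning fixed specialist capacity, with triage thinning the queue) can be illustrated with a deliberately simplified backlog calculation. Every number in the sketch below is a made-up placeholder, not a study parameter.

    # Toy backlog model: when annual referrals exceed annual specialist
    # capacity, the queue and the expected wait keep growing.
    # All parameters are hypothetical placeholders.
    def wait_in_months(annual_referrals, annual_capacity, years=10):
        backlog = 0.0
        waits = []
        for _ in range(years):
            backlog = max(0.0, backlog + annual_referrals - annual_capacity)
            waits.append(12 * backlog / annual_capacity)  # months to clear queue
        return waits

    capacity = 100_000  # specialist appointments per year (placeholder)

    # No blood test: every positive brief cognitive test is referred.
    no_test = wait_in_months(annual_referrals=130_000, annual_capacity=capacity)

    # Rule-out blood test: most patients without AD pathology are filtered
    # out in primary care, so referrals fall below capacity.
    rule_out = wait_in_months(annual_referrals=95_000, annual_capacity=capacity)

    print(no_test[-1], rule_out[-1])

The study’s projections differ in magnitude because they rest on real demographic and capacity estimates, but the qualitative behavior is the same: once demand persistently exceeds capacity, waits compound year over year.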
 

Prioritizing Resources 

“Millions of people have mild memory complaints, and if they all start coming to neurologists, it could completely flood the system and create long wait times for everybody,” Dr. Mattke told this news organization.

The problem, he said, is that brief cognitive tests performed in primary care are not particularly specific for mild cognitive impairment.

“They work pretty well for manifest advanced dementia but for mild cognitive impairment, which is a very subtle, symptomatic disease, they are only about 75% accurate. One quarter are false-positives. That’s a lot of people,” Dr. Mattke said.

He also noted that although earlier blood tests were about 75% accurate, they are now about 90% accurate, “so we are getting to a level where we can pretty much say with confidence that this is likely Alzheimer’s.”
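
To see how those two accuracy figures interact, consider a simplified triage calculation. The cohort size, and the reading of “accuracy” as both sensitivity and specificity, are assumptions made here for illustration only.

    # Hypothetical triage arithmetic; these are not study data.
    flagged = 1000      # patients with a positive brief cognitive test
    ppv_brief = 0.75    # per the quote: about one quarter are false positives
    true_impairment = flagged * ppv_brief        # 750 true positives
    false_positives = flagged - true_impairment  # 250 false positives

    sens = spec = 0.90  # assumed for a blood test described as ~90% accurate
    referred_true = true_impairment * sens         # 675 appropriately referred
    referred_false = false_positives * (1 - spec)  # 25 referred without pathology

    referrals = referred_true + referred_false
    print(int(referrals))             # 700 referrals instead of 1000
    print(referred_true / referrals)  # ~0.96 of referrals now appropriate

Under these assumptions, the blood test trims referral volume by nearly a third while raising the proportion of appropriate referrals from 75% to roughly 96%, which is the mechanism behind the shorter modeled wait times.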

Commenting on this research for this news organization, Heather Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, said it is clear that blood tests, “once confirmed, could have a significant impact on the wait times” for dementia assessment. 

“After an initial blood test, we might be able to rule out or rule in individuals who should go to a specialist for further follow-up and testing. This allows us to really ensure that we’re prioritizing resources accordingly,” said Dr. Snyder, who was not involved in the study. 

This project was supported by a research contract from C2N Diagnostics LLC to USC. Dr. Mattke serves on the board of directors of Senscio Systems Inc. and the scientific advisory board of ALZPath and Boston Millennia Partners and has received consulting fees from Biogen, C2N, Eisai, Eli Lilly, Novartis, and Roche/Genentech. Dr. Snyder has no relevant disclosures.

A version of this article first appeared on Medscape.com.


New Models Predict Time From Mild Cognitive Impairment to Dementia


Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.

The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.

The findings were published online in Neurology.
 

Easy-to-Use Prototype

On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.

Earlier published models predicting cognitive decline either are limited to patients with MCI or haven’t been developed for easy clinical use, they added.

For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.

Researchers used linear mixed modeling to predict MMSE over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aβ) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.

At follow-up, investigators found that the yearly decline in MMSE scores increased in patients with both MCI and mild dementia. In MCI, the average MMSE score declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.

In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.

The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI, a baseline MMSE of 28, and CSF Aβ 1-42 of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).

However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the patient would not reach the stage of moderate dementia for 8.6 years.

For a hypothetical patient with mild dementia, a baseline MMSE of 20, and CSF Aβ 1-42 of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline is reduced by 30% with drug treatment.
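
The treated and untreated times quoted above are related by simple arithmetic: if decline is roughly linear, slowing the rate by 30% stretches the time to any fixed threshold by a factor of 1/0.7, or about 1.43. The helper below reproduces the quoted figures under that linearity assumption; it is an illustration, not the study’s model.

    # Years for MMSE to fall from baseline to a threshold, assuming a
    # constant annual decline that treatment slows by a given fraction.
    # The linearity assumption is ours, for illustration.
    def years_to_threshold(baseline, threshold, annual_decline, slowing=0.0):
        return (baseline - threshold) / (annual_decline * (1.0 - slowing))

    # The quoted examples imply average declines of (28 - 20) / 6.0 and
    # (20 - 15) / 2.3 MMSE points per year, respectively.
    print(years_to_threshold(28, 20, 8 / 6.0))                # 6.0 years
    print(years_to_threshold(28, 20, 8 / 6.0, slowing=0.30))  # ~8.6 years
    print(years_to_threshold(20, 15, 5 / 2.3))                # 2.3 years
    print(years_to_threshold(20, 15, 5 / 2.3, slowing=0.30))  # ~3.3 years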

External validation of the prediction models, using data from the Alzheimer’s Disease Neuroimaging Initiative (a longitudinal cohort of participants who were cognitively unimpaired or had MCI or dementia), showed comparable performance between the model-building approaches.

Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.

Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.

“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.

Compared with other studies predicting the MMSE decline using different statistical techniques, these new models showed similar or even better predictive performance while requiring less or similar information, the investigators noted.

The study used MMSE as a measure of cognition, but there may be intraindividual variation in these measures among cognitively normal patients, and those with cognitive decline may score lower if measurements are taken later in the day. Another study limitation was that the models were built for use in memory clinics, so generalizability to the general population could be limited.

The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.

A version of this article first appeared on Medscape.com.
