Acute myeloid leukemia genomic classification and prognosis

Acute myeloid leukemia (AML) consists of at least 11 disease classes that represent distinct paths in the evolution of AML and have prognostic implications, based on an analysis of somatic driver mutations in 1,540 patients.

In total, 5,234 driver mutations were identified in 76 genes or regions, with 96% of patients having at least one mutation and 86% having two or more mutations. However, nearly one-half of the cohort did not fall into one of the molecular groups defined by the World Health Organization in 2008.

“The characterization of many new leukemia genes, multiple driver mutations per patient, and complex co-mutation patterns prompted us to reevaluate genomic classification of AML from the beginning,” wrote Elli Papaemmanuil, Ph.D., a molecular geneticist at Memorial Sloan Kettering Cancer Center, New York, and the Cancer Genome Project, Wellcome Trust Sanger Institute, and her colleagues (N Engl J Med. 2016 Jun 9;374:2209-21).

The team developed a Bayesian statistical model to define 11 mutually exclusive subtypes based on patterns of co-mutation. The schema classified 1,236 of 1,540 patients (80%) unambiguously into a single subgroup; 56 (4%) met criteria for two or more groups, and 166 (11%) remained unclassified, possibly because of mutations in genes not sequenced in the study.

NPM1-mutated AML was the largest class (27% of the cohort), followed by the chromatin-spliceosome group (18% of the cohort), which included mutations in genes regulating RNA splicing (SRSF2, SF3B1, U2AF1, and ZRSR2), chromatin (ASXL1, STAG2, BCOR, MLL-PTD, EZH2, and PHF6), or transcription (RUNX1). Another subgroup consisted of mutations in TP53, complex karyotype alterations, cytogenetically visible copy-number alterations (aneuploidies), or a combination. While broader than previous classifications, such as “monosomal karyotype AML” and “complex karyotype AML,” this group emerged from correlated chromosomal abnormalities and was mutually exclusive of other class-defining lesions. In general, patients in this group were older and had fewer RAS pathway mutations.
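
To picture how class-defining lesions yield mutually exclusive groups, the sketch below is a purely illustrative, rule-based stand-in; the study itself used a Bayesian statistical model and a fuller schema, and the gene lists here are abridged from this article while the dictionary, function name, and handling of overlaps are hypothetical simplifications.

```python
# Illustrative sketch only: a simplified stand-in for the kind of mutually
# exclusive class assignment described above, not the authors' Bayesian model.
CLASS_DEFINING_LESIONS = {
    "NPM1": {"NPM1"},
    "chromatin-spliceosome": {"SRSF2", "SF3B1", "U2AF1", "ZRSR2", "ASXL1",
                              "STAG2", "BCOR", "MLL-PTD", "EZH2", "PHF6", "RUNX1"},
    "TP53-aneuploidy": {"TP53", "complex_karyotype", "aneuploidy"},
}

def assign_classes(patient_lesions):
    """Return every genomic class whose defining lesions overlap the patient's."""
    hits = [name for name, lesions in CLASS_DEFINING_LESIONS.items()
            if lesions & set(patient_lesions)]
    return hits or ["unclassified"]  # no hit: drivers fall outside these sets

print(assign_classes({"NPM1", "DNMT3A"}))  # ['NPM1'] -> single, unambiguous class
print(assign_classes({"SRSF2", "RUNX1"}))  # ['chromatin-spliceosome']
print(assign_classes({"TET2"}))            # ['unclassified']
```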

The groups differed considerably in clinical presentation and overall survival, according to the report. The TP53-aneuploidy subgroup had poor outcomes, as previously described. Patients in the chromatin-spliceosome group had lower rates of response to induction chemotherapy, higher relapse rates, and poorer long-term outcomes than the other groups. Most of these patients (84%) would be classified as intermediate risk under current guidelines, but their characteristics more closely resembled those of subgroups with adverse outcomes.

Overall survival correlated with the number of driver mutations, and the deleterious effects of mutations were often additive. In some cases, complex gene-gene interactions accounted for variation in outcomes, suggesting that the clinical effect of some driver mutations may depend on co-occurring mutations in a wider genomic context.
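
To make “additive” concrete, here is a worked example with hypothetical hazard ratios (not estimates from the study): in a proportional-hazards view, effects that add on the log-hazard scale multiply on the hazard-ratio scale, and a gene-gene interaction appears as a departure from that product.

```python
import math

# Hypothetical per-mutation hazard ratios (illustrative only, not study estimates).
hr_mut_a = 1.4
hr_mut_b = 1.6

# "Additive" deleterious effects: log-hazards add, so hazard ratios multiply.
hr_combined = math.exp(math.log(hr_mut_a) + math.log(hr_mut_b))  # == 1.4 * 1.6
print(f"Combined HR under additive (log-scale) effects: {hr_combined:.2f}")  # 2.24

# A gene-gene interaction shows up as a deviation from that product.
interaction_factor = 0.8  # hypothetical: the pair is less harmful than predicted
print(f"Combined HR with interaction: {hr_mut_a * hr_mut_b * interaction_factor:.2f}")
```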

Distinct paths in the development of AML

The study by Papaemmanuil and her colleagues offers practice-changing insights that redefine molecular classification of AML. The mutational analysis of more than 1,500 AML patients provides a deeper understanding of the specific paths from normal blood cell to leukemia.

Specific concurrent mutations were linked to clinical outcomes. For example, co-mutations in NPM1, FLT3-ITD, and DNMT3A are associated with a poor clinical outcome, but NPM1 and DNMT3A mutations without FLT3-ITD are associated with better outcomes. In addition, mutations in NPM1 and DNMT3A in the presence of NRAS G12/13 are associated with a more favorable outcome. The evolution of DNMT3A-NPM1 mutated clones along separate paths appears to affect disease outcome and may be relevant to clinical trials in AML subgroups.

Previous, smaller studies had suggested that somatic mutations in splicing factors and chromatin modifiers were specific for secondary AML arising from myelodysplastic syndromes (MDS). Papaemmanuil and her colleagues provide extensive data to support that hypothesis: patients with chromatin-spliceosome mutations, previously considered intermediate-risk AML, fall into the same molecular subgroup as patients with secondary AML arising from MDS.

These data may inform the design of mechanism-based clinical trials based on the presence of specific mutations and co-mutations.

Dr. Aaron Viny is a medical oncologist at Memorial Sloan Kettering Cancer Center, New York. Dr. Ross Levine is Director of the Memorial Sloan Kettering Center for Hematologic Malignancies. These remarks were part of an editorial accompanying a report in The New England Journal of Medicine (2016 Jun 9; 374:2282-4). Dr. Levine reports personal fees from Foundation Medicine outside the submitted work.

Vitals

Key clinical point: Mutational analysis of 1,540 patients with acute myeloid leukemia (AML) identified 11 distinct classes with prognostic implications.

Major finding: In total, 5,234 driver mutations were identified involving 76 genes or regions; 96% of patients had at least one driver mutation, and 86% had two or more.

Data sources: Samples came from three prospective multicenter clinical trials of the German-Austrian AML Study Group: AML-HD98A, AML-HD98B, and AMLSG-07-04.

Disclosures: Dr. Papaemmanuil and most coauthors reported having no disclosures. Two coauthors reported financial ties to industry sources.

No OS benefit with tasquinimod in mCRPC

The oral immunotherapy tasquinimod improved radiographic progression-free survival (rPFS) in men with metastatic castration-resistant prostate cancer (mCRPC), but the drug failed to improve overall survival (OS), according to results from a large, multinational phase III trial.

Median rPFS was 7.0 months (95% CI, 5.8-8.2 months) for the tasquinimod group and 4.4 months (95% CI, 3.5-5.5 months) for placebo (HR, 0.64; 95% CI, 0.54-0.75; P less than .001). However, median OS was similar for the two groups: 21.3 months (19.5-23.0) for tasquinimod and 24.0 months (21.4-26.9) for placebo (HR, 1.10; 95% CI, 0.94-1.28; P = .25). At a median follow-up of 30 months, 96% of patients had discontinued treatment, most commonly because of progression (radiographic and symptomatic) and adverse events (J Clin Oncol. 2016 June 13. doi: 10.1200/JCO.2016.66.9697).

The 36% reduction in the risk of progression with tasquinimod versus placebo confirmed the phase II results, but the significant rPFS benefit did not translate into improved OS. The authors note that one of several possible explanations for the lack of an OS benefit is the availability of effective salvage therapies, many of which were not available during the phase II study.
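
For readers unfamiliar with the conversion, the 36% figure follows directly from the reported hazard ratio; the short sketch below reproduces that arithmetic and shows why the OS result is read as null (the values are taken from the article).

```python
# Arithmetic behind the figures quoted above (values from the article).
rpfs_hr = 0.64
print(f"Relative reduction in risk of progression: {(1 - rpfs_hr) * 100:.0f}%")  # 36%

# The OS hazard ratio's 95% CI straddles 1.0, so neither benefit nor harm is shown.
os_hr, os_ci_low, os_ci_high = 1.10, 0.94, 1.28
print("OS CI includes 1.0:", os_ci_low < 1.0 < os_ci_high)  # True
```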

“The current availability of such agents (e.g., abiraterone and enzalutamide) may have had an impact on the course of disease because patients in the placebo group gained access before those in the tasquinimod group on account of their earlier withdrawal from study treatment. Indeed, posttreatment use of abiraterone and enzalutamide was more common among patients in the placebo group,” wrote Dr. Cora Sternberg, chair of the department of medical oncology at San Camillo Forlanini Hospital, Italy, and colleagues.

The randomized, double-blind, placebo-controlled phase III study enrolled 1,245 patients from 241 sites in 37 countries. Patients with prostate adenocarcinoma with evidence of bone metastasis who had not received cytotoxic chemotherapy for 2 years were randomly assigned 2:1 to receive tasquinimod (n = 832) or placebo (n = 413).

Radiographic and PSA-based secondary outcomes favored tasquinimod over placebo. By contrast, symptomatically assessed outcomes, such as time to symptomatic progression, time to opiate use, and deterioration in quality of life, favored placebo. A greater proportion of the tasquinimod group discontinued treatment because of adverse events (17.7% vs. 10.2%), mainly decreased appetite, fatigue, asthenia, or nausea.

Tasquinimod acts on the tumor microenvironment to counteract tumor growth. Preclinical evidence suggests it inhibits myeloid-derived suppressor cells and M2-polarized tumor-associated macrophages. Identification of immunologic biomarkers may help guide patient selection and the design of rational combination strategies, according to the authors. Because of the lack of an OS benefit, further clinical development of tasquinimod in this patient population was not pursued.

Dr. Sternberg reported having financial ties to Pfizer, Novartis, Janssen Pharmaceuticals, Sanofi, GlaxoSmithKline, Bristol-Myers Squibb, Bayer HealthCare Pharmaceuticals, Astellas Pharma, Eisai, Exelixis, Medivation, Active Biotech, and Genentech. Several of her coauthors reported ties to industry sources.

Vitals

Key clinical point: Tasquinimod improved progression-free survival (PFS) but not overall survival (OS) in men with metastatic castration-resistant prostate cancer (mCRPC).

Major finding: Median radiographic PFS was 7.0 months (95% CI, 5.8-8.2 months) for the tasquinimod group and 4.4 months (95% CI, 3.5-5.5 months) for placebo (HR, 0.64; 95% CI, 0.54-0.75; P less than .001). Median OS was similar for the two groups: 21.3 and 24.0 months, respectively (HR, 1.10; 95% CI, 0.94-1.28; P = .25).

Data sources: A randomized, double-blind, placebo-controlled phase III study conducted at 241 sites in 37 countries, comprising 832 patients who received tasquinimod and 413 who received placebo.

Disclosures: Dr. Sternberg reported having financial ties to Pfizer, Novartis, Janssen Pharmaceuticals, Sanofi, GlaxoSmithKline, Bristol-Myers Squibb, Bayer HealthCare Pharmaceuticals, Astellas Pharma, Eisai, Exelixis, Medivation, Active Biotech, and Genentech. Several of her coauthors reported ties to industry sources.

Baseline PSA at midlife predicts lethal prostate cancer

A single, baseline prostate-specific antigen (PSA) level measured at midlife predicted risk of lethal prostate cancer over a 30-year follow-up, according to a nested, case-control study among men who participated in the Physicians’ Health Study.

PSA levels at the 90th percentile and above, compared with levels at the median and lower, were associated with increased risk of lethal prostate cancer (PCa) across all age groups: for men aged 40-49 years, the odds ratio was 8.7 (95% confidence interval, 1.0-78.2), for 50-54 years, 12.6 (1.4-110.4), and for 55-59 years, 6.9 (2.5-19.1). PSA levels above the median were associated with increased risk of all PCa: odds ratios were 7.3 (95% CI, 2.4-21.8) for 40-49 years, 7.6 (3.4-17.2) for 50-54 years, and 10.1 (5.2-19.6) for 55-59 years.
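
As a refresher on how such odds ratios and their confidence intervals are derived, the sketch below works through a two-by-two table using Woolf's log-odds method; the counts are hypothetical, not the study's data, and the study's actual analysis accounted for its nested case-control design.

```python
import math

# Hypothetical 2x2 table (counts are illustrative, NOT from the study):
#                         lethal PCa   no lethal PCa
# PSA > 90th percentile        a=9            b=30
# PSA <= median                c=6           d=180
a, b, c, d = 9, 30, 6, 180

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)              # Woolf's method
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.1f} (95% CI, {lower:.1f}-{upper:.1f})")
# Small cell counts are why reported intervals such as 1.0-78.2 are so wide.
```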

“These data identify subgroups of men, on the basis of their PSA levels at a given age, with widely divergent lifetime risk of PCa death, who therefore could benefit from screening intervals tailored to their actual magnitude of risk,” wrote Dr. Mark Preston of Brigham and Women’s Hospital, Boston, and colleagues (J Clin Oncol. 2016 Jun 13. doi: 10.1200/JCO.2016.66.7527).

The investigators noted that one of seven men with PSA greater than 2.1 ng/mL at 55-59 years and one of 12 men with PSA greater than 2.1 ng/mL at 50-54 years died of PCa within 30 years.

“These findings do not necessarily imply that prostate biopsy or definitive treatment is immediately required in younger men with higher PSA levels at baseline, because this could lead to overdiagnosis, but only that they undergo more intensive PSA screening to enable earlier identification of cancer and potential cure while still possible,” the investigators wrote.

As a subset of the Physicians’ Health Study, a randomized, placebo-controlled trial of aspirin and beta-carotene, 14,916 men aged 40-84 years provided a blood sample during 1982-1984. Total PSA was determined from stored specimens, and self-reported incident PCa cases from 1982 to 2012 were confirmed through medical records.

In answer to the question of whether a low PSA level at 40-49 years might safely exempt men from further screening, results showed that for PSA levels below the 25th percentile, cumulative incidence of lethal PCa at 30 years was 0.37% (0.05-1.70) for men 40-44 years and 0.97% (0.30-2.49) for men 45-49 years. Because a small risk remains even with an exceptionally low first measure, another PSA test during the lifetime of men 40-49 is prudent, according to the researchers. At age 60 years, men with PSA below the median are unlikely to develop lethal PCa, based on the analysis.

tor@frontlinemedcom.com

Vitals

Key clinical point: Prostate-specific antigen levels at midlife predicted subsequent lethal prostate cancer in men who participated in the Physicians’ Health Study and underwent opportunistic screening.

Major finding: PSA levels at the 90th percentile and above, compared with levels at the median and lower, were associated with increased risk of lethal PCa across all age groups: For men 40-49 years, the OR was 8.7 (95% CI, 1.0-78.2), for 50-54 years, 12.6 (1.4-110.4), and for 55-59 years, 6.9 (2.5-19.1).

Data sources: In the Physicians’ Health Study, 14,916 men aged 40-84 years provided a blood sample used for total PSA determination, and self-reported incident PCa cases from 1982 to 2012 were confirmed through medical records.

Disclosures: Dr. Preston reported having no disclosures. Several of his coauthors reported ties to industry sources.

In hemophilia A, emicizumab cuts bleeding; plasma-derived factor VIII less likely to trigger antibodies

Patients with severe hemophilia A treated with plasma-derived factor VIII, as compared with recombinant factor VIII, had a lower risk of developing neutralizing antibodies, based on a recent report from the randomized SIPPET trial published in the New England Journal of Medicine.

In a second study, also published in the journal, prophylactic treatment with the bispecific antibody emicizumab decreased bleeding in patients with or without neutralizing antibodies.

Of 125 patients with severe hemophilia A who were treated with plasma-derived factor VIII with von Willebrand factor, 29 (23.2%) developed inhibitors, compared with 47 of 126 patients (37.3%) treated with recombinant factor VIII. In the plasma-derived group, 20 (16.0%) had high-titer inhibitors compared with 30 (23.8%) in the recombinant group. Cox regression models showed an 87% higher rate of inhibitor development with recombinant factor VIII (hazard ratio, 1.87; 95% confidence interval, 1.17-2.96). For development of high-titer inhibitors, the HR was 1.69 (95% CI, 0.96-2.98).

Previous observational studies have suggested greater immunogenicity of recombinant vs. plasma-derived factor VIII, but the results remained inconclusive. Meta-analyses have been hindered by differences in design, enrollment criteria, definition of hemophilia A, sample size, method of inhibitor detection, and intervals of follow-up testing, according to Dr. Flora Peyvandi of the IRCCS Maggiore Hospital, University of Milan, and colleagues. “Our trial was specifically designed to compare the immunogenicity of factor VIII products. As a result of randomization, the main risk factors for inhibitor development were evenly distributed between the two factor VIII classes,” they wrote, adding, “The finding that native factor VIII products from human plasma are less immunogenic than those engineered by recombinant DNA technology in animal cell lines has the potential to affect treatment strategies and open new investigations to better understand the mechanisms of the immunogenicity of various factor VIII preparations” (N Engl J Med. 2016 May 25. doi: 10.1056/NEJMoa1516437).

The Survey of Inhibitors in Plasma-Product Exposed Toddlers (SIPPET) trial randomly assigned 251 patients with severe hemophilia A to infusions of plasma-derived or recombinant factor VIII. The investigators found no association between the risk of inhibitor development and race, intensity of treatment, or age at first treatment.

The second study, a 12-week dose-escalation trial of emicizumab, included 18 patients with severe hemophilia A, aged 12-59 years. Patients were assigned to one of three cohorts and received a once-weekly subcutaneous dose of 0.3, 1, or 3 mg per kilogram of body weight. During prophylactic emicizumab treatment, median annualized bleeding rates decreased from 32.5 (range, 8.1-77.1) to 4.4 (0.0-59.5) in cohort 1, from 18.3 (range, 10.1-38.6) to 0.0 (range, 0.0-4.3) in cohort 2, and from 15.2 (range, 0.0-32.5) to 0.0 (range, 0.0-4.2) in cohort 3. The annualized bleeding rate decreased regardless of factor VIII inhibitor status. Of 11 patients with factor VIII inhibitors, 8 (73%) had no bleeding episodes; 5 of 7 patients without factor VIII inhibitors (71%) had no bleeding episodes.
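
For context on the annualized bleeding rate (ABR) figures above, a bleeding-event count is scaled to a per-year rate over each patient's observation period; the helper below is a simplified sketch with made-up numbers, not the trial's exact definition or data.

```python
from statistics import median

def annualized_bleeding_rate(bleeds, observation_days):
    """Scale a bleeding-event count to a per-year rate (simplified definition)."""
    return bleeds / observation_days * 365.25

# Hypothetical per-patient observations (illustrative only, not trial data).
cohort = [(3, 84), (0, 84), (1, 80), (0, 84)]  # (bleeds, days on prophylaxis)
abrs = [annualized_bleeding_rate(b, d) for b, d in cohort]
print(f"Median ABR for this toy cohort: {median(abrs):.1f}")
```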

“This study showed that once-weekly subcutaneous administration of emicizumab as prophylaxis is safe and has the potential to reduce or prevent bleeding episodes in patients who have severe hemophilia A with or without factor VIII inhibitors,” wrote Dr. Midori Shima of Nara Medical University, Kashihara, Japan, and colleagues (N Engl J Med. 2016 May 25. doi: 10.1056/NEJMoa1511769).

Emicizumab had an acceptable safety profile. In total, 43 adverse events were reported in 15 of 18 patients. All were mild, except two moderate events – an upper respiratory tract infection and a headache. Events thought to be treatment related included malaise, injection-site erythema, injection-site rash, diarrhea, increased C-reactive protein level, and increased blood creatine kinase level.

Dr. Peyvandi and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Grifols, Kedrion, and LFB. Dr. Shima and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Chugai Pharmaceuticals, which supported the study.

Innovations in hemophilia therapy

The studies by Peyvandi et al. and Shima et al. bring exciting news to patients with hemophilia. Peyvandi et al. report the influence of factor VIII product source on the cumulative incidence of inhibitors in children with severe hemophilia. Exclusive use of recombinant factor VIII through the highest-risk period increased the cumulative incidence of all inhibitors, compared with plasma-derived factor VIII. Factor VIII protein glycosylation and von Willebrand factor association play a role in endocytosis, clearance, and antigen presentation and are altered by recombinant technology, which lends plausibility to the finding that product source affects inhibitor generation. The results may lead to valuable mechanistic insight into factor VIII immunogenicity.

In the second study, Shima et al. evaluate once-weekly administration of emicizumab in patients with severe hemophilia. The bifunctional antibody acts as a conformational mimic of factor VIII, forming a thrombin-generating complex with factors IX and X. Emicizumab would likely not induce or be inhibited by factor VIII–neutralizing antibodies, and its subcutaneous bioavailability permits a once-weekly regimen instead of intravenous infusion. The study reports impressive short-term decreases in annualized bleeding rates in both inhibitor-positive and inhibitor-negative patients. An important next step for emicizumab will be testing whether it can provide the more substantial and sustained thrombin generation required to treat acute hemorrhage.

If proven feasible as a therapeutic agent, emicizumab may play a role in mediating factor VIII immunogenicity. Early sustained treatment with emicizumab in children could ultimately affect the epidemiology of neutralizing antibodies.

The clinical and scientific ramifications of these two studies will take time to be fully realized but offer exciting potential as therapeutic options where new approaches to bypassing the factor VIII requirement have so far fallen short (N Engl J Med. 2016 May 25. doi: 10.1056/NEJMe1603419).

Dr. Donna DiMichele is the deputy director of the Division of Blood Diseases and Resources at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health. These remarks were part of an editorial accompanying two reports in the New England Journal of Medicine. Dr. DiMichele reports personal fees from Chugai Pharmaceuticals outside the submitted work. She was an early founding member of the SIPPET study Steering Committee but resigned when she came to NIH in 2010.

Author and Disclosure Information

Publications
Topics
Author and Disclosure Information

Author and Disclosure Information

Body

The studies by Peyvandi et al. and Shima et al. bring exciting news to patients with hemophilia. Peyvandi et al. report the influence of factor VIII product source on the cumulative incidence of inhibitors in children with severe hemophilia. Exclusive use of recombinant factor VIII through the highest-risk period increased the cumulative incidence of all inhibitors, compared with plasma-derived factor VIII. Factor VIII protein glycosylation and von Willebrand factor association play a role in endocytosis, clearance, and antigen presentation and are altered by recombinant technology, which lends plausibility to the implication of product source affecting inhibitor generation. The results may lead to valuable mechanistic insight into factor VIII immunogenicity.

In the second study, Shima et al. evaluate once-weekly administration of emicizumab in patients with severe hemophilia. The bifunctional antibody functions as a conformation replica of factor VIII, forming a thrombin-generating complex with factors IX and X. Emicizumab would likely not induce or be inhibited by factor VIII–neutralizing antibodies, and its subcutaneous bioavailability permits a once-weekly treatment regimen instead of intravenous infusion. The study reports impressive short-term decreases in annualized bleeding rates in both inhibitor-positive and inhibitor-negative patients. The next steps in testing more substantial and sustained thrombin generation required for the treatment of acute hemorrhage will be important for emicizumab.

If proven feasible as a therapeutic agent, emicizumab may play a role in mediating factor VIII immunogenicity. Early sustained treatment with emicizumab in children could ultimately affect the epidemiology of neutralizing antibodies.

The clinical and scientific ramifications of these two studies will take time to be fully realized but offer exciting potential as therapeutic options where new approaches to bypassing the factor VIII requirement have so far fallen short (N Engl J Med. 2016 May 25. doi: 10.1056/NEJMe1603419).

Dr. Donna DiMichele is the deputy director of the Division of Blood Diseases and Resources at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health. These remarks were part of an editorial accompanying two reports in the New England Journal of Medicine. Dr. DiMichele reports personal fees from Chugai Pharmaceuticals outside the submitted work. She was an early founding member of the SIPPET study Steering Committee but resigned when she came to NIH in 2010.

Body

The studies by Peyvandi et al. and Shima et al. bring exciting news to patients with hemophilia. Peyvandi et al. report the influence of factor VIII product source on the cumulative incidence of inhibitors in children with severe hemophilia. Exclusive use of recombinant factor VIII through the highest-risk period increased the cumulative incidence of all inhibitors, compared with plasma-derived factor VIII. Factor VIII protein glycosylation and von Willebrand factor association play a role in endocytosis, clearance, and antigen presentation and are altered by recombinant technology, which lends plausibility to the implication of product source affecting inhibitor generation. The results may lead to valuable mechanistic insight into factor VIII immunogenicity.

In the second study, Shima et al. evaluate once-weekly administration of emicizumab in patients with severe hemophilia. The bifunctional antibody functions as a conformation replica of factor VIII, forming a thrombin-generating complex with factors IX and X. Emicizumab would likely not induce or be inhibited by factor VIII–neutralizing antibodies, and its subcutaneous bioavailability permits a once-weekly treatment regimen instead of intravenous infusion. The study reports impressive short-term decreases in annualized bleeding rates in both inhibitor-positive and inhibitor-negative patients. The next steps in testing more substantial and sustained thrombin generation required for the treatment of acute hemorrhage will be important for emicizumab.

If proven feasible as a therapeutic agent, emicizumab may play a role in mediating factor VIII immunogenicity. Early sustained treatment with emicizumab in children could ultimately affect the epidemiology of neutralizing antibodies.

The clinical and scientific ramifications of these two studies will take time to be fully realized but offer exciting potential as therapeutic options where new approaches to bypassing the factor VIII requirement have so far fallen short (N Engl J Med. 2016 May 25. doi: 10.1056/NEJMe1603419).

Dr. Donna DiMichele is the deputy director of the Division of Blood Diseases and Resources at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health. These remarks were part of an editorial accompanying two reports in the New England Journal of Medicine. Dr. DiMichele reports personal fees from Chugai Pharmaceuticals outside the submitted work. She was an early founding member of the SIPPET study Steering Committee but resigned when she came to NIH in 2010.

Title
Innovations in hemophilia therapy
Innovations in hemophilia therapy

Patients with severe hemophilia A treated with plasma-derived factor VIII, as compared with recombinant factor VIII, had a lower risk of developing neutralizing antibodies, based on a recent report from the randomized SIPPET trial published in the New England Journal of Medicine.

In a second study, also published in the journal, prophylactic treatment with the bispecific antibody emicizumab decreased bleeding in patients with or without neutralizing antibodies.

Crystal/Wikimedia Commons/Creative Commons Attribution 2.0

Of 125 patients with severe hemophilia A who were treated with plasma-derived factor VIII with von Willebrand factor, 29 (23.2%) developed inhibitors, compared with 47 of 126 patients (37.3%) treated with recombinant factor VIII. In the plasma-derived group, 20 (16.0%) had high-titer inhibitors compared with 30 (23.8%) in the recombinant group. Cox regression models showed an 87% higher rate of inhibitor development with recombinant factor VIII (hazard ratio, 1.87; 95% confidence interval, 1.17-2.96). For development of high-titer inhibitors, the HR was 1.69 (95% CI, 0.96-2.98).

Previous observational studies have suggested greater immunogenicity of recombinant vs. plasm-derived factor VIII, but the results remained inconclusive. Meta-analyses of studies have been hindered by differences in design, enrollment criteria, definition of hemophilia A, sample size, method of inhibitor detection, and intervals of follow-up testing, according to Dr. Flora Peyvandi of the IRCCS Maggiore Hospital, University of Milan and colleagues. “Our trial was specifically designed to compare the immunogenicity of factor VIII products. As a result of randomization, the main risk factors for inhibitor development were evenly distributed between the two factor VIII classes,” they wrote, adding, “The finding that native factor VIII products from human plasma are less immunogenic than those engineered by recombinant DNA technology in animal cell lines has the potential to affect treatment strategies and open new investigations to better understand the mechanisms of the immunogenicity of various factor VIII preparations.” (N Engl J Med. 2016 May 25. doi: 10.1200/JCO.2015.64.0730).

The Survey of Inhibitors in Plasma-Product Exposed Toddlers (SIPPET) clinical trial randomized 251 patients with severe hemophilia A who received infusions of plasma-derived or recombinant factor VIII. The investigators found no association between risk of inhibitor development and race, intensity of treatment, or age at first treatment.

The second study, a 12-week dose-escalation trial of emicizumab, included 18 patients with severe hemophilia A, aged 12-59 years. Patients were assigned to one of three cohorts and received a once-weekly subcutaneous dose of 0.3, 1, or 3 mg per kilogram of body weight. During prophylactic emicizumab treatment, median annualized bleeding rates decreased from 32.5 (range, 8.1-77.1) to 4.4 (0.0-59.5) in cohort 1, from 18.3 (range, 10.1-38.6) to 0.0 (range, 0.0-4.3) in cohort 2, and from 15.2 (range, 0.0-32.5) to 0.0 (range, 0.0-4.2) in cohort 3. The annualized bleeding rate decreased regardless of factor VIII inhibitor status. Of 11 patients with factor VIII inhibitors, 8 (73%) had no bleeding episodes; 5 of 7 patients without factor VIII inhibitors (71%) had no bleeding episodes.

“This study showed that once-weekly subcutaneous administration of emicizumab as prophylaxis is safe and has the potential to reduce or prevent bleeding episodes in patients who have severe hemophilia A with or without factor VIII inhibitors,” wrote Dr. Midori Shima of Nara Medical University, Kashihara, Japan, and colleagues (N. Engl J Med. 2016 May 25. doi: 10.1056/NEJMoa1511769).

Emicizumab had an acceptable safety profile. In total, 43 adverse events were reported in 15 of 18 patients. All were mild, except two moderate events – an upper respiratory tract infection and a headache. Events thought to be treatment related included malaise, injection-site erythema, injection-site rash, diarrhea, increased C-reactive protein level, and increased blood creatine kinase level.

Dr. Peyvandi and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Grifols, Kedrion, and LFB. Dr. Shima and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Chugai Pharmaceuticals, which supported the study.

Patients with severe hemophilia A treated with plasma-derived factor VIII, as compared with recombinant factor VIII, had a lower risk of developing neutralizing antibodies, based on a recent report from the randomized SIPPET trial published in the New England Journal of Medicine.

In a second study, also published in the journal, prophylactic treatment with the bispecific antibody emicizumab decreased bleeding in patients with or without neutralizing antibodies.

Crystal/Wikimedia Commons/Creative Commons Attribution 2.0

Of 125 patients with severe hemophilia A who were treated with plasma-derived factor VIII with von Willebrand factor, 29 (23.2%) developed inhibitors, compared with 47 of 126 patients (37.3%) treated with recombinant factor VIII. In the plasma-derived group, 20 (16.0%) had high-titer inhibitors compared with 30 (23.8%) in the recombinant group. Cox regression models showed an 87% higher rate of inhibitor development with recombinant factor VIII (hazard ratio, 1.87; 95% confidence interval, 1.17-2.96). For development of high-titer inhibitors, the HR was 1.69 (95% CI, 0.96-2.98).

Previous observational studies have suggested greater immunogenicity of recombinant vs. plasm-derived factor VIII, but the results remained inconclusive. Meta-analyses of studies have been hindered by differences in design, enrollment criteria, definition of hemophilia A, sample size, method of inhibitor detection, and intervals of follow-up testing, according to Dr. Flora Peyvandi of the IRCCS Maggiore Hospital, University of Milan and colleagues. “Our trial was specifically designed to compare the immunogenicity of factor VIII products. As a result of randomization, the main risk factors for inhibitor development were evenly distributed between the two factor VIII classes,” they wrote, adding, “The finding that native factor VIII products from human plasma are less immunogenic than those engineered by recombinant DNA technology in animal cell lines has the potential to affect treatment strategies and open new investigations to better understand the mechanisms of the immunogenicity of various factor VIII preparations.” (N Engl J Med. 2016 May 25. doi: 10.1200/JCO.2015.64.0730).

The Survey of Inhibitors in Plasma-Product Exposed Toddlers (SIPPET) clinical trial randomized 251 patients with severe hemophilia A who received infusions of plasma-derived or recombinant factor VIII. The investigators found no association between risk of inhibitor development and race, intensity of treatment, or age at first treatment.

The second study, a 12-week dose-escalation trial of emicizumab, included 18 patients with severe hemophilia A, aged 12-59 years. Patients were assigned to one of three cohorts and received a once-weekly subcutaneous dose of 0.3, 1, or 3 mg per kilogram of body weight. During prophylactic emicizumab treatment, median annualized bleeding rates decreased from 32.5 (range, 8.1-77.1) to 4.4 (range, 0.0-59.5) in cohort 1, from 18.3 (range, 10.1-38.6) to 0.0 (range, 0.0-4.3) in cohort 2, and from 15.2 (range, 0.0-32.5) to 0.0 (range, 0.0-4.2) in cohort 3. The annualized bleeding rate decreased regardless of factor VIII inhibitor status. Of 11 patients with factor VIII inhibitors, 8 (73%) had no bleeding episodes; 5 of 7 patients without factor VIII inhibitors (71%) had no bleeding episodes.
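
Annualized bleeding rates of this kind are conventionally derived by scaling the number of bleeds observed during follow-up to a full year; the sketch below assumes that standard definition and uses hypothetical inputs rather than patient-level trial data.

    def annualized_bleeding_rate(n_bleeds: int, observation_days: float) -> float:
        """Scale a bleed count observed over a follow-up window to a per-year rate."""
        return n_bleeds * 365.25 / observation_days

    # Hypothetical example: 3 bleeding episodes over the 12-week (84-day) treatment period.
    print(f"{annualized_bleeding_rate(3, 84):.1f} bleeds per year")  # ~13.0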

“This study showed that once-weekly subcutaneous administration of emicizumab as prophylaxis is safe and has the potential to reduce or prevent bleeding episodes in patients who have severe hemophilia A with or without factor VIII inhibitors,” wrote Dr. Midori Shima of Nara Medical University, Kashihara, Japan, and colleagues (N Engl J Med. 2016 May 25. doi: 10.1056/NEJMoa1511769).

Emicizumab had an acceptable safety profile. In total, 43 adverse events were reported in 15 of 18 patients. All were mild, except two moderate events – an upper respiratory tract infection and a headache. Events thought to be treatment related included malaise, injection-site erythema, injection-site rash, diarrhea, increased C-reactive protein level, and increased blood creatine kinase level.

Dr. Peyvandi and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Grifols, Kedrion, and LFB. Dr. Shima and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Chugai Pharmaceuticals, which supported the study.

Display Headline
In hemophilia A, emicizumab cuts bleeding; plasma-derived factor VIII less likely to trigger antibodies
Article Source

FROM THE NEW ENGLAND JOURNAL OF MEDICINE

Vitals

Key clinical points: Plasma-derived factor VIII carries a lower risk of neutralizing antibody development than recombinant factor VIII; once-weekly emicizumab decreased bleeding in hemophilia A patients with or without neutralizing antibodies.

Major findings: Inhibitors developed in 23% of patients treated with plasma-derived factor VIII (16% high titer), compared with 37% treated with recombinant factor VIII (24% high titer); during prophylactic treatment with emicizumab, bleeding rates were markedly reduced, and more than 70% of patients had no bleeding episodes.

Data sources: The Survey of Inhibitors in Plasma-Product Exposed Toddlers (SIPPET) randomized trial evaluated 251 patients with severe hemophilia A who received plasma-derived or recombinant factor VIII; emicizumab was evaluated in a 12-week, open-label, nonrandomized dose-escalation study of 18 patients.

Disclosures: Dr. Peyvandi and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Grifols, Kedrion, and LFB. Dr. Shima and coauthors disclosed consulting, advising, research funding and other relationships with several industry sources, including Chugai Pharmaceuticals, which supported the study.

Mutation pattern in non–small-cell lung cancer influenced by increased BMI, smoking

Article Type
Changed
Fri, 01/04/2019 - 13:15
Display Headline
Mutation pattern in non–small-cell lung cancer influenced by increased BMI, smoking

The prevalence of mutations in oncogenic driver genes correlates with smoking dose and body mass index, according to a prospective epidemiology study of environmental factors and mutation frequencies in non–small-cell lung cancer that was published online May 9.

In the Japan Molecular Epidemiology for Lung Cancer study, Dr. Tomoya Kawaguchi and colleagues found that increased mutation frequencies in TP53, KRAS, and NFE2L2 correlated with smoking dose (P < .001 for all), whereas decreased mutation frequencies were observed in EGFR (P < .001) and CTNNB1 (P = .030). The number of KRAS mutations in smokers increased in proportion to body-mass index (BMI) increases (P = .026).
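
As a rough illustration of how a dose-dependent mutation frequency can be examined, the snippet below runs a chi-square test of association across smoking-dose categories on made-up counts; it stands in for, and does not reproduce, the study's trend analysis.

    # Illustrative only: association between smoking-dose category and mutation status
    # on made-up counts (not data from the JME study).
    from scipy.stats import chi2_contingency

    # Rows: never, light, moderate, heavy smokers; columns: [mutated, wild type]
    counts = [
        [5, 195],
        [12, 188],
        [22, 178],
        [35, 165],
    ]
    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")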

Simultaneous mutations in EGFR and CTNNB1 suggested possible biological relevance; 88% of CTNNB1 mutations (15/17) occurred with EGFR mutations. TP53 and NFE2L2 mutations were more frequent in advanced-stage disease, wrote Dr. Kawaguchi of the department of respiratory medicine at Osaka (Japan) City University and colleagues (J Clin Oncol. 2016 May 9. doi: 10.1200/JCO.2015.64.2322).

Although smoking is the most studied cause of lung cancer, about one-quarter of lung cancers worldwide occur in never-smokers. “It remains elusive which environmental factors contribute to the EGFR mutations that are frequently observed in never-smokers,” the investigators wrote. “In this study, the prevalence of EGFR mutations was higher in those who had more [environmental tobacco smoke], although this difference did not reach the level of statistical significance in the sample size. More detailed methods to detect the mutations (e.g., digital polymerase chain reaction) might yield more precise information.”

Levels of sex hormones were not significant factors in mutation frequencies, but the investigators found that estrogen receptor was more highly expressed in never-smokers than in smokers, and the presence of estrogen receptor was associated with EGFR mutations in younger patients.

The investigators studied environmental influences on lung cancer by collecting information by questionnaire and by detecting mutations in 72 candidate genes from 876 patients with stage I to IIIB non–small-cell lung cancer (441 ever- and 435 never-smokers). In total, 622 patients had at least one mutation, and 860 mutations were observed. Dr. Kawaguchi and colleagues also examined patterns of estrogen-receptor expression by immunohistochemical staining and evidence of human papillomavirus (HPV) infection by a polymerase chain reaction–based microarray system.

Contrary to retrospective analyses that had pointed to a link between HPV and NSCLC, this prospective study showed little evidence for HPV in early NSCLC.

Dr. Kawaguchi reported having financial ties to Chugai Pharmaceutical and Eli Lilly. Several coauthors reported ties to industry sources.


Article Source

FROM THE JOURNAL OF CLINICAL ONCOLOGY

Vitals

Key clinical point: A prospective epidemiology study found that smoking dose and body-mass index correlated with mutation patterns in non–small-cell lung cancer (NSCLC).

Major finding: The prevalence of TP53, KRAS, and NFE2L2 mutations increased proportionally with smoking dose (P < .001 for all), whereas mutation prevalence in EGFR (P < .001) and CTNNB1 (P = .030) decreased; KRAS mutations were observed more frequently in proportion to increasing BMI in ever-smokers.

Data source: The Japan Molecular Epidemiology for Lung Cancer Study examined mutations in 876 patients with stage I to IIIB NSCLC.

Disclosures: Dr. Kawaguchi reported having financial ties to Chugai Pharmaceutical and Eli Lilly. Several coauthors reported ties to industry sources.

Tamoxifen benefits premenopausal breast cancer patients

Article Type
Changed
Thu, 12/15/2022 - 17:57
Display Headline
Tamoxifen benefits premenopausal breast cancer patients

In premenopausal women with estrogen receptor–positive primary breast cancer, 2 years of adjuvant tamoxifen resulted in the long-term reduction of breast cancer–related mortality, compared with patients who received no systemic treatment, a phase III randomized trial showed.

Tamoxifen is the endocrine therapy of choice for most premenopausal patients with ER-positive disease, wrote Dr. Maria Ekholm and her colleagues. “The long-term effect reported in this study is particularly important for young patients with a potentially long life expectancy who are at risk for late relapse, as is commonly seen in [estrogen receptor]-positive breast cancer.”

In the phase III trial, the investigators randomized 564 (362 ER-positive) premenopausal women with stage II breast cancer: 276 received tamoxifen and 288 received no adjuvant treatment, reported Dr. Ekholm, an oncologist at Lund (Sweden) University, and her colleagues (J Clin Oncol. 2016 May 9. doi: 10.1200/JCO.2015.65.6272).

Among the group with ER-positive tumors, patients younger than 40 years had the greatest mortality reduction (less than 40 years: hazard ratio, 0.37; 95% confidence interval, 0.17-0.82; greater than 40 years: HR, 0.87; 95% CI, 0.61-1.22; interaction P = .044). Of the 314 deaths, 262 were breast cancer related. Also, tamoxifen had a greater effect in the patient subgroup with grade 3 tumors, compared with subgroups with grade 1 or 2 tumors, Dr. Ekholm and her colleagues found.
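
Expressed as relative reductions, the published hazard ratios translate roughly as follows; this is simple arithmetic on the reported point estimates, not a re-analysis of the trial.

    # Convert the reported hazard ratios into approximate relative reductions in the
    # rate of breast cancer-related death (point estimates only; see text for CIs).
    for subgroup, hr in [("younger than 40 years", 0.37), ("older than 40 years", 0.87)]:
        print(f"{subgroup}: HR {hr} -> about {1 - hr:.0%} lower hazard")
    # younger than 40 years: HR 0.37 -> about 63% lower hazard
    # older than 40 years: HR 0.87 -> about 13% lower hazard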

ER-positive patients had a high fatality rate during the first few years of follow-up, and tamoxifen had no effect during this time. Tamoxifen’s beneficial effect on cumulative mortality and cumulative breast cancer–related mortality was highest during years 5-15 of follow-up, with relative mortality reductions of nearly 50%, compared with the control group.

“The positive effect of tamoxifen was weaker for the last follow-up period (greater than 15 years), including fewer events and hence lower power, but the [hazard ratios] and estimates of [cumulative mortality] and [cumulative breast cancer–related mortality] indicate a possible carryover effect beyond 15 years,” the investigators wrote.

Dr. Ekholm reported financial ties to Amgen. Two coauthors also reported ties to industry sources.


Article Source

FROM THE JOURNAL OF CLINICAL ONCOLOGY

Vitals

Key clinical point: Premenopausal women with ER-positive breast cancer had significantly longer survival with 2 years of adjuvant tamoxifen, compared with those who had no adjuvant treatment.

Major finding: At a median follow-up of 26 years, adjuvant tamoxifen was associated with decreased breast cancer–related mortality in patients with ER-positive tumors (hazard ratio, 0.73; 95% confidence interval, 0.53-0.99; P = .046).

Data source: The randomized, phase III trial included 564 (362 ER-positive) premenopausal women with stage II breast cancer; 276 received tamoxifen and 288 received no adjuvant treatment.

Disclosures: Dr. Ekholm reported financial ties to Amgen. Two coauthors also reported ties to industry sources.

Cirrhosis 30-day readmissions down 40% with quality improvement initiative

Article Type
Changed
Fri, 01/18/2019 - 15:50
Display Headline
Cirrhosis 30-day readmissions down 40% with quality improvement initiative

Using checklists and electronic decision support in an inpatient liver unit, quality improvement (QI) care protocols reduced 30-day readmissions of patients with cirrhosis by 40%, due mostly to a drop in readmissions for hepatic encephalopathy (HE), according to a report published in the May issue of Clinical Gastroenterology and Hepatology.

For patients initially admitted for overt HE, the 30-day readmission rate was 26.0% (27 of 104), compared with 48.9% (66 of 135) before implementation of QI. The proportion of total readmissions due to HE after QI was 9.6% (14 of 146), compared with 40.7% (79 of 194) before QI. In addition, length of stay for HE patients was significantly reduced (–1.34 days; 95% confidence interval, –2.38 to –0.32; P = .01). There were no significant changes in 90-day mortality.
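
The relative change in HE readmissions implied by these counts can be checked directly; the snippet below uses the counts as quoted and is arithmetic only.

    # 30-day readmission rates for patients admitted with overt HE, before and after QI.
    before_qi = 66 / 135   # 48.9%
    after_qi = 27 / 104    # 26.0%
    print(f"before {before_qi:.1%}, after {after_qi:.1%}")
    print(f"relative reduction: {(before_qi - after_qi) / before_qi:.0%}")  # ~47%

    # Share of all 30-day readmissions attributable to HE, before and after QI.
    print(f"before {79 / 194:.1%}, after {14 / 146:.1%}")  # 40.7% vs. 9.6%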

“Our study advances the current literature on QI for patients with cirrhosis by presenting an inexpensive, easy to implement, and generalizable approach,” wrote Dr. Elliot Tapper of Beth Israel Deaconess Medical Center, Boston, and his colleagues. Previous studies have addressed readmission interventions among patients with cirrhosis, but the protocols required costly infrastructure, expertise, and institutional commitments. The current study supports the value of standard checklists and education, according to the investigators, “showing that outcomes improve further when checklist items are hard-wired into the ordering system.” (Clin Gastroenterol Hepatol. 2016 Apr 7. doi: 10.1016/j.cgh.2015.08.041).

The QI initiative encompassed several aspects of care. All HE patients were designated to receive rifaximin, and their lactulose dosing was adjusted to mental status using the Richmond Agitation and Sedation Scale. For patients with spontaneous bacterial peritonitis (SBP), timely administration of the correct dose of antibiotics and albumin was promoted, as were prophylactic measures for all patients, such as variceal hemorrhage prophylaxis and subcutaneous heparin for the prevention of venous thrombosis.

The three-part program entailed a run-in phase for preliminary checklist troubleshooting; a hand-held checklist phase, including the HE protocol, SBP treatment, and prophylactic measures; and a final electronic phase in which checklist items were incorporated into the hospital’s electronic provider order entry system using mandatory preset doses and linked medications.
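
One way to picture the final phase is as an order set in which each checklist item carries a preset dose or linked order. The sketch below is a generic illustration; its item names, doses, and triggers are assumptions, not the hospital's actual order-entry configuration.

    # Generic sketch of a checklist "hard-wired" into order entry; items and doses
    # are illustrative assumptions, not the study protocol's actual configuration.
    HE_ORDER_SET = [
        {"item": "rifaximin", "preset": "550 mg orally twice daily", "mandatory": True},
        {"item": "lactulose", "preset": "titrate to mental status (RASS)", "mandatory": True},
        {"item": "SBP prophylaxis review", "preset": "per protocol", "mandatory": True},
        {"item": "subcutaneous heparin (VTE prophylaxis)", "preset": "unless contraindicated", "mandatory": False},
    ]

    def checklist_for(diagnosis: str) -> list:
        """Return the order-entry checklist triggered by an admitting diagnosis."""
        if diagnosis == "hepatic encephalopathy":
            return [f"{entry['item']}: {entry['preset']}" for entry in HE_ORDER_SET]
        return []

    for line in checklist_for("hepatic encephalopathy"):
        print(line)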

Individual protocol items were demonstrated to affect the readmission rate. Rifaximin use for HE patients rose from 78.1% to 96.3%, and use of rifaximin was associated with lower adjusted odds of 30-day readmission (OR, 0.39; 95% CI, 0.16-0.87; P = .02). The dose/frequency of lactulose for HE patients increased, and patients who had 6 or more cups of lactulose on the day of their readmission had significantly lower adjusted length of stay (–2.36 days; 95% CI, –3.40 to –1.31; P less than .0001). Patients taking SBP prophylaxis had lower readmission rates (OR, 0.51; 95% CI, 0.31-0.83; P = .007).

The prospective study from 2011 to 2013 evaluated patients with cirrhosis who were admitted to the liver unit of Beth Israel Deaconess Medical Center, Boston. Patients were diagnosed with cirrhosis caused by hepatitis C (44.9%), alcoholic liver disease (34%), hepatitis B (5.4%), and biliary cirrhosis (1.8%). In total, 824 unique patients were admitted 1,720 times; 485 (58.9%) were admitted once, 268 (32.5%) were admitted 2-4 times, and 71 (8.6%) were admitted 5 or more times. The median length of stay for all patients was 4.0 days (interquartile range, 2.0-8.0).

Dr. Tapper and his coauthors reported having no disclosures.

Vitals

Key clinical point: Care protocols implemented by electronic decision support reduced 30-day readmissions of patients with cirrhosis by 40% in an inpatient liver unit.

Major finding: The drop was likely driven by fewer readmissions for hepatic encephalopathy (HE): the 30-day HE readmission rate was 26.0% (27 of 104), compared with 48.9% (66 of 135) before implementation of quality improvement.

Data sources: The prospective study evaluated 824 patients who were admitted 1,720 times to the liver unit of Beth Israel Deaconess Medical Center, Boston.

Disclosures: Dr. Tapper and his coauthors reported having no disclosures.

Additional D1 biopsy increased diagnostic yield for celiac disease

Article Type
Changed
Sat, 12/08/2018 - 02:34
Display Headline
Additional D1 biopsy increased diagnostic yield for celiac disease

Among a large cohort of patients referred for endoscopy for suspected celiac disease as well as for other upper gastrointestinal symptoms, a single additional D1 biopsy specimen from any site significantly increased the diagnostic yield for celiac disease, according to researchers.

Of 1,378 patients who had D2 and D1 biopsy specimens taken, 268 were newly diagnosed with celiac disease, and 26 had villous atrophy confined to D1, defined as ultrashort celiac disease (USCD). Compared with a standard D2 biopsy, an additional D1 biopsy increased the diagnostic yield by 9.7% (P less than .0001). Among the 26 diagnosed with USCD, 7 had normal D2 biopsy specimens, and 4 others had negative tests for endomysial antibodies (EMAs), totaling 11 patients for whom celiac disease would have been missed in the absence of a D1 biopsy.
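
The added yield and the number of diagnoses that hinged on the extra specimen follow from the counts above; the check below is arithmetic only.

    # Counts quoted above: arithmetic check of the added diagnostic yield.
    total_new_diagnoses = 268   # all newly diagnosed celiac disease
    d1_confined = 26            # villous atrophy confined to D1 (ultrashort celiac disease)
    print(f"added yield from the D1 specimen: {d1_confined / total_new_diagnoses:.1%}")  # ~9.7%

    # Diagnoses that would have been missed without a D1 biopsy:
    missed_without_d1 = 7 + 4   # normal D2 histology plus EMA-negative USCD patients
    print(f"diagnoses dependent on the D1 specimen: {missed_without_d1}")  # 11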

 

“The addition of a D1 biopsy specimen to diagnose celiac disease may reduce the known delay in diagnosis that many patients with celiac disease experience. This may allow earlier institution of a gluten-free diet, potentially prevent nutritional deficiencies, and reduce the symptomatic burden of celiac disease,” wrote Dr. Peter Mooney of Royal Hallamshire Hospital, Sheffield, England, and his colleagues (Gastroenterology. 2016 Apr 7. doi: 10.1053/j.gastro.2016.01.029).

The prospective study recruited 1,378 consecutive patients referred to a single teaching hospital for endoscopy from 2008 to 2014. In total, 268 were newly diagnosed with celiac disease, and 26 were diagnosed with USCD.

To investigate the optimal site for targeted D1 sampling, 171 patients underwent quadrantic D1 biopsy, 61 of whom were diagnosed with celiac disease. Biopsy specimens from any topographical area resulted in high sensitivity, a fact that increases the feasibility of a D1 biopsy policy, since no specific target area is required, according to the researchers. Nonceliac abnormalities such as peptic duodenitis or gastric heterotopia have been suggested to impede interpretation of D1 biopsies, but these were rare in the study and did not interfere with the analysis.

USCD may be an early form of conventional celiac disease, an idea supported by the findings. Compared with patients diagnosed with conventional celiac disease, patients diagnosed with USCD were younger and had a much lower rate of diarrhea, which by decision-tree analysis was the single factor discriminating between the two groups. Compared with healthy controls, individuals with conventional celiac disease, but not USCD, were more likely to present with anemia, diarrhea, a family history of celiac disease, lethargy, and osteoporosis. Patients with USCD and conventional disease had similar rates of IgA tissue transglutaminase antibodies (tTG), but USCD patients had lower titers (P less than .001). The USCD group also had fewer ferritin and folate deficiencies.

The researchers suggested that clinical phenotypic differences may be due to minimal loss of absorptive capacity associated with a short segment of villous atrophy. Given the younger average age at diagnosis of USCD and lower tTG titers, USCD may represent an early stage of celiac disease, resulting in fewer nutritional deficiencies observed because of a shorter lead time to diagnosis.

Although USCD patients had a milder clinical phenotype, which has raised concerns that a strict gluten-free diet may be unnecessary, follow-up data demonstrated that a gluten-free diet produced improvement in symptoms and a significant decrease in the tTG titer. These results may indicate that the immune cascade was switched off, according to the researchers, and that early diagnosis may present a unique opportunity to prevent further micronutrient deficiency.

Dr. Mooney and his coauthors reported having no relevant financial disclosures.

Article Source

FROM GASTROENTEROLOGY

Vitals

Key clinical point: When added to a standard D2 biopsy, a single D1 biopsy from any site significantly increased the diagnostic yield for celiac disease.

Major finding: In total, 26 of 268 patients diagnosed with celiac disease had villous atrophy confined to D1 (ultrashort celiac disease); an additional D1 biopsy increased the diagnostic yield by 9.7% (P less than .0001), compared with a standard D2 biopsy.

Data source: A prospective study of 1,378 consecutive patients referred to a single teaching hospital for endoscopy from 2008 to 2014, 268 of whom were newly diagnosed with celiac disease and 26 with USCD.

Disclosures: Dr. Mooney and his coauthors reported having no relevant financial disclosures.

Racial disparities in colon cancer survival mainly driven by tumor stage at presentation

Results applicable to older black, white patients only
Article Type
Changed
Wed, 05/26/2021 - 13:54
Display Headline
Racial disparities in colon cancer survival mainly driven by tumor stage at presentation

Although black patients with colon cancer received significantly less treatment than white patients, particularly for late-stage disease, much of the overall survival disparity between black and white patients was explained by tumor presentation at diagnosis rather than treatment differences, according to an analysis of SEER data.

Among demographically matched black and white patients, the 5-year survival difference was 8.3% (P less than .0001). Presentation match reduced the difference to 5.0% (P less than .0001), which accounted for 39.8% of the overall disparity. Additional matching by treatment reduced the difference only slightly to 4.9% (P less than .0001), which accounted for 1.2% of the overall disparity. Black patients had lower rates for most treatments, including surgery, than presentation-matched white patients (88.5% vs. 91.4%), and these differences were most pronounced at advanced stages. For example, significant differences between black and white patients in the use of chemotherapy were observed for stage III (53.1% vs. 64.2%; P less than .0001) and stage IV (56.1% vs. 63.3%; P = .001).
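
The decomposition percentages reported above follow from the matched 5-year survival differences; the snippet below is arithmetic on the published figures, not a re-analysis of the SEER-Medicare data.

    # 5-year survival differences (percentage points) after successive matching steps.
    demographic_gap = 8.3    # demographics only
    presentation_gap = 5.0   # demographics + tumor presentation
    treatment_gap = 4.9      # demographics + presentation + treatment

    share_presentation = (demographic_gap - presentation_gap) / demographic_gap
    share_treatment = (presentation_gap - treatment_gap) / demographic_gap
    print(f"explained by presentation: {share_presentation:.1%}")  # ~39.8%
    print(f"explained by treatment:    {share_treatment:.1%}")     # ~1.2%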

“Our results indicate that tumor presentation, including tumor stage, is indeed one of the most important factors contributing to the racial disparity in colon cancer survival. We observed that, after controlling for demographic factors, black patients in comparison with white patients had a significantly higher proportion of stage IV and lower proportions of stages I and II disease. Adequately matching on tumor presentation variables (e.g., stage, grade, size, and comorbidity) significantly reduced survival disparities,” wrote Dr. Yinzhi Lai of the Department of Medical Oncology at Sidney Kimmel Cancer Center, Philadelphia, and colleagues (Gastroenterology. 2016 Apr 4. doi: 10.1053/j.gastro.2016.01.030).

Treatment differences in advanced-stage patients, compared with early-stage patients, explained a higher proportion of the demographic-matched survival disparity. For example, in stage II patients, treatment match resulted in modest reductions in 2-, 3-, and 5-year survival rate disparities (2.7%-2.8%, 4.1%-3.6%, and 4.6%-4.0%, respectively); by contrast, in stage III patients, treatment match resulted in more substantial reductions in 2-, 3-, and 5-year survival rate disparities (4.5%-2.2%, 3.1%-2.0%, and 4.3%-2.8%, respectively). A similar effect was observed in patients with stage IV disease. The results suggest that, “to control survival disparity, more efforts may need to be tailored to minimize treatment disparities (especially chemotherapy use) in patients with advanced-stage disease,” the investigators wrote.

The retrospective data analysis used patient information from 68,141 patients (6,190 black, 61,951 white) aged 66 years and older with colon cancer identified from the National Cancer Institute SEER-Medicare database. Using a novel minimum distance matching strategy, investigators drew from the pool of white patients to match three distinct comparison cohorts to the same 6,190 black patients. Close matches between black and white patients bypassed the need for model-based analysis.
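
In spirit, the matching step pairs each black patient with the closest available white patient on a set of covariates. The sketch below is a generic greedy nearest-neighbor matcher on synthetic data, offered as an illustration of that idea rather than the authors' exact algorithm or the SEER-Medicare records.

    # Generic 1:1 greedy nearest-neighbor matching on standardized covariates;
    # synthetic data, illustrative only.
    import numpy as np
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    black = rng.normal(size=(5, 3))    # rows = patients, columns = covariates (e.g., age, sex, year)
    white = rng.normal(size=(50, 3))   # candidate pool to draw matches from

    # Standardize covariates so no single variable dominates the distance.
    pool = np.vstack([black, white])
    mu, sd = pool.mean(axis=0), pool.std(axis=0)
    distances = cdist((black - mu) / sd, (white - mu) / sd)

    matches, used = {}, set()
    for i in range(len(black)):
        order = np.argsort(distances[i])                       # white patients by closeness
        j = next(int(k) for k in order if int(k) not in used)  # nearest unused candidate
        matches[i] = j
        used.add(j)
    print(matches)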

The primary matching analysis was limited by the inability to control for substantial differences in socioeconomic status, marital status, and urban/rural residence. A subcohort analysis of 2,000 matched black and white patients showed that when socioeconomic status was added to the demographic match, survival differences were reduced, indicating the important role of socioeconomic status on racial survival disparities.

Significantly better survival was observed in all patients who were diagnosed in 2004 or later, the year the Food and Drug Administration approved oxaliplatin and bevacizumab for colorectal cancer. Separating the cohorts into those who were diagnosed before and after 2004 revealed that the racial survival disparity was lower in the more recent group, indicating a favorable impact of oxaliplatin and/or bevacizumab in reducing the survival disparity.

References

Body

Prior studies have documented racial disparities in the incidence and outcomes of colon cancer in the United States. Black men and women have a higher overall incidence and more advanced stage of disease at diagnosis than white men and women, while being less likely to receive guideline-concordant treatment.

Dr. Jennifer Lund

To extend this work, the authors evaluated treatment disparities between black and white colon cancer patients aged 66 years and older and examined the impact of a variety of patient characteristics on racial disparities in overall survival using a novel, sequential matching algorithm that minimized the overall distance between black and white patients based on demographic, tumor-specific, and treatment-related variables. The authors found that differences in overall survival were mainly driven by tumor presentation; however, advanced-stage black colon cancer patients received less guideline-concordant treatment than white patients. While this minimum-distance algorithm provided close black-white matches on prespecified factors, it could not accommodate other factors (for example, socioeconomic, marital, and urban/rural status); therefore, methodologic improvements to this method and comparisons to other commonly used approaches (that is, propensity score matching and weighting) are warranted.

Finally, these results apply to older black and white colon cancer patients with Medicare fee-for-service coverage only. Additional research using similar methods in older Medicare Advantage populations or younger adults may uncover unique drivers of overall survival disparities by race, which may require tailored interventions.

Jennifer L. Lund, Ph.D., is an assistant professor, department of epidemiology, University of North Carolina at Chapel Hill. She receives research support from the UNC Oncology Clinical Translational Research Training Program (K12 CA120780), as well as through a Research Starter Award from the PhRMA Foundation to the UNC Department of Epidemiology.

Author and Disclosure Information

Publications
Topics
Sections
Author and Disclosure Information

Author and Disclosure Information

Body

Prior studies have documented racial disparities in the incidence and outcomes of colon cancer in the United States. Black men and women have a higher overall incidence and more advanced stage of disease at diagnosis than white men and women, while being less likely to receive guideline-concordant treatment.

Dr. Jennifer Lund

To extend this work, the authors evaluated treatment disparities between black and white colon cancer patients aged 66 years and older and examined the impact of a variety of patient characteristics on racial disparities in overall survival using a novel, sequential matching algorithm that minimized the overall distance between black and white patients based on demographic-, tumor specific–, and treatment-related variables. The authors found that differences in overall survival were mainly driven by tumor presentation; however, advanced-stage black colon cancer patients received less guideline concordant-treatment than white patients. While this minimum-distance algorithm provided close black-white matches on prespecified factors, it could not accommodate other factors (for example, socioeconomic, marital, and urban/rural status); therefore, methodologic improvements to this method and comparisons to other commonly used approaches (that is, propensity score matching and weighting) are warranted.

Finally, these results apply to older black and white colon cancer patients with Medicare fee-for-service coverage only. Additional research using similar methods in older Medicare Advantage populations or younger adults may uncover unique drivers of overall survival disparities by race, which may require tailored interventions.

Jennifer L. Lund, Ph.D., is an assistant professor, department of epidemiology, University of North Carolina at Chapel Hill. She receives research support from the UNC Oncology Clinical Translational Research Training Program (K12 CA120780), as well as through a Research Starter Award from the PhRMA Foundation to the UNC Department of Epidemiology.

Title
Results applicable to older black, white patients only

Although black patients with colon cancer received significantly less treatment than white patients, particularly for late-stage disease, much of the overall survival disparity between black and white patients was explained by tumor presentation at diagnosis rather than by treatment differences, according to an analysis of SEER-Medicare data.

Among demographically matched black and white patients, the 5-year survival difference was 8.3% (P less than .0001). Matching on tumor presentation reduced the difference to 5.0% (P less than .0001), accounting for 39.8% of the overall disparity. Additional matching on treatment reduced the difference only slightly, to 4.9% (P less than .0001), accounting for a further 1.2% of the overall disparity. Black patients had lower rates of most treatments than presentation-matched white patients, including surgery (88.5% vs. 91.4%), and these differences were most pronounced at advanced stages. For example, significant differences between black and white patients in the use of chemotherapy were observed for stage III (53.1% vs. 64.2%; P less than .0001) and stage IV disease (56.1% vs. 63.3%; P = .001).
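
The "accounted for" percentages follow directly from the survival differences just quoted: each matching step's contribution is the reduction in the 5-year survival gap divided by the demographic-matched gap of 8.3%. The short Python sketch below is an editorial illustration of that arithmetic (the variable names are ours, not the authors'); it reproduces the 39.8% and 1.2% figures.

# Editorial sketch: how the "share of disparity explained" percentages
# follow from the reported 5-year survival differences.
demo_gap = 8.3          # demographic match only
presentation_gap = 5.0  # plus matching on tumor presentation
treatment_gap = 4.9     # plus matching on treatment

explained_by_presentation = (demo_gap - presentation_gap) / demo_gap
explained_by_treatment = (presentation_gap - treatment_gap) / demo_gap

print(f"presentation: {explained_by_presentation:.1%}")  # ~39.8%
print(f"treatment:    {explained_by_treatment:.1%}")     # ~1.2%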

“Our results indicate that tumor presentation, including tumor stage, is indeed one of the most important factors contributing to the racial disparity in colon cancer survival. We observed that, after controlling for demographic factors, black patients in comparison with white patients had a significantly higher proportion of stage IV and lower proportions of stages I and II disease. Adequately matching on tumor presentation variables (e.g., stage, grade, size, and comorbidity) significantly reduced survival disparities,” wrote Dr. Yinzhi Lai of the Department of Medical Oncology at Sidney Kimmel Cancer Center, Philadelphia, and colleagues (Gastroenterology. 2016 Apr 4. doi: 10.1053/j.gastro.2016.01.030).

Treatment differences explained a higher proportion of the demographic-matched survival disparity in advanced-stage patients than in early-stage patients. For example, in stage II patients, matching on treatment changed the 2-, 3-, and 5-year survival-rate disparities only modestly (from 2.7% to 2.8%, from 4.1% to 3.6%, and from 4.6% to 4.0%, respectively); by contrast, in stage III patients, treatment matching produced more substantial reductions (from 4.5% to 2.2%, from 3.1% to 2.0%, and from 4.3% to 2.8%, respectively). A similar effect was observed in patients with stage IV disease. The results suggest that, “to control survival disparity, more efforts may need to be tailored to minimize treatment disparities (especially chemotherapy use) in patients with advanced-stage disease,” the investigators wrote.
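
Survival rates at fixed horizons such as these are typically read off Kaplan-Meier curves for each matched cohort. The sketch below is a minimal, self-contained illustration of that calculation on toy data; the cohorts, follow-up times, and event indicators are simulated for illustration and are not the study's data or analysis code.

import numpy as np

def km_survival(times, events, horizon):
    """Kaplan-Meier (product-limit) survival estimate at `horizon` months.
    times  -- follow-up time in months for each patient
    events -- 1 if the patient died at `times`, 0 if censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv = 1.0
    for t in np.unique(times[events == 1]):
        if t > horizon:
            break
        at_risk = np.sum(times >= t)                   # still under observation at t
        deaths = np.sum((times == t) & (events == 1))  # deaths occurring at t
        surv *= 1.0 - deaths / at_risk
    return surv

# Toy matched cohorts (simulated, purely illustrative)
rng = np.random.default_rng(0)
black_t, black_e = rng.exponential(70, 500), rng.integers(0, 2, 500)
white_t, white_e = rng.exponential(85, 500), rng.integers(0, 2, 500)
for months in (24, 36, 60):  # 2-, 3-, and 5-year horizons
    gap = km_survival(white_t, white_e, months) - km_survival(black_t, black_e, months)
    print(f"{months}-month survival-rate disparity: {gap:.1%}")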

The retrospective analysis used data from 68,141 patients (6,190 black, 61,951 white) aged 66 years and older with colon cancer identified from the National Cancer Institute SEER-Medicare database. Using a novel minimum-distance matching strategy, the investigators drew from the pool of white patients to build three distinct comparison cohorts, each matched to the same 6,190 black patients. Because the matches between black and white patients were close, model-based adjustment was not needed.
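
For readers unfamiliar with minimum-distance matching, the sketch below shows one generic way such a match can be constructed: compute pairwise covariate distances and find the one-to-one assignment that minimizes their total. The covariates, cohort sizes, and use of scipy's optimal-assignment solver are editorial assumptions for illustration only; the study's own sequential algorithm is described in the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def minimum_distance_match(black_X, white_X):
    """Pair each black patient with a distinct white patient so the summed
    covariate distance across all pairs is minimized (a generic optimal-matching
    illustration, not the paper's sequential algorithm)."""
    cost = cdist(black_X, white_X)                 # pairwise Euclidean distances
    black_idx, white_idx = linear_sum_assignment(cost)
    return white_idx                               # matched white patient for each black patient

# Hypothetical standardized covariates: age, year of diagnosis, comorbidity score
rng = np.random.default_rng(1)
black_X = rng.normal(size=(100, 3))                # toy "black" cohort
white_X = rng.normal(size=(1000, 3))               # toy "white" matching pool
matches = minimum_distance_match(black_X, white_X)
print("white-patient index matched to the first five black patients:", matches[:5])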

The primary matching analysis was limited by the inability to control for substantial black-white differences in socioeconomic status, marital status, and urban/rural residence. A subcohort analysis of 2,000 matched black and white patients showed that, when socioeconomic status was added to the demographic match, survival differences were reduced, underscoring the important role of socioeconomic status in racial survival disparities.

Significantly better survival was observed in all patients diagnosed in 2004 or later, the year the Food and Drug Administration approved oxaliplatin and bevacizumab for first-line treatment of metastatic colorectal cancer. Splitting the cohorts into those diagnosed before and after 2004 showed that the racial survival disparity was smaller in the more recent period, suggesting that the introduction of oxaliplatin and/or bevacizumab helped reduce the disparity.

Display Headline
Racial disparities in colon cancer survival mainly driven by tumor stage at presentation

Article Source

FROM GASTROENTEROLOGY

Vitals

Key clinical point: Tumor stage at diagnosis had a greater effect on survival disparities between black and white patients with colon cancer than treatment differences.

Major finding: Among demographically matched black and white patients, the 5-year survival difference was 8.3% (P less than .0001); matching by presentation reduced the difference to 5.0% (P less than .0001), and additional matching by treatment reduced the difference only slightly to 4.9% (P less than .0001).

Data sources: In total, 68,141 patients (6,190 black, 61,951 white) aged 66 years and older with colon cancer were identified from the National Cancer Institute SEER-Medicare database. Three white comparison cohorts were assembled and matched to the same 6,190 black patients.

Disclosures: Dr. Lai and coauthors reported having no disclosures.

Surveillance finds pancreatic ductal adenocarcinoma at resectable stage

Progress in earlier detection of pancreatic cancer
Article Type
Changed
Wed, 05/26/2021 - 13:54
Display Headline
Surveillance finds pancreatic ductal adenocarcinoma at resectable stage

Surveillance of CDKN2A mutation carriers detected most cases of pancreatic ductal adenocarcinoma (PDAC) at a resectable stage, whereas the benefit of surveillance was lower for individuals with familial pancreatic cancer.

Among 178 CDKN2A mutation carriers, PDAC was detected in 13 (7.3%), 9 of whom underwent surgical resection. Compared with previously reported resection rates of 15%-20% for symptomatic PDAC, this roughly 70% resection rate represents a substantial increase. The 5-year survival rate of 24% for screen-detected PDAC was also higher than the 4%-7% reported for symptomatic sporadic PDAC. Among individuals with familial pancreatic cancer (FPC), 13 of 214 (6.1%) underwent surgery, but because a larger share of the lesions detected in this group were precursor lesions, only four high-risk lesions (1.9% of screened FPC patients) were removed.

 

Whether surveillance improved prognosis for FPC families was difficult to determine, according to the investigators. The yield of PDAC was low at 0.9%, as was the yield of relevant precursor lesions (grade 3 PanIN and high-grade IPMN) at 1.9%.

“However, if surgical removal of multifocal grade 2 PanIN and multifocal BD-IPMNs is regarded as beneficial, the diagnostic yield increases to 3.7% (eight of 214 patients), and surveillance of FPC might also be considered effective,” wrote Dr. Hans Vasen, professor in the department of gastroenterology and hepatology at the Leiden University Medical Center, the Netherlands, and colleagues. “The value of surveillance of FPC is still not clear, and the main effect seems to be prevention of PDAC by removal of” precursor lesions, they added (J Clin Oncol. 2016 Apr 25. doi: 10.1200/JCO.2015.64.0730).
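
As a quick check on the quoted yields in the FPC group, the arithmetic below reproduces the percentages from the stated denominator of 214; the count of two screen-detected PDACs is inferred from the reported 0.9%, while the four high-grade lesions and eight of 214 patients are stated above.

# Back-of-the-envelope yield arithmetic for the 214 FPC relatives under surveillance.
n_fpc = 214
pdac_cases = 2            # inferred from the reported 0.9% PDAC yield
high_grade_lesions = 4    # grade 3 PanIN / high-grade IPMN, as stated above
broader_definition = 8    # also counting multifocal grade 2 PanIN and BD-IPMNs

print(f"PDAC yield:               {pdac_cases / n_fpc:.1%}")          # ~0.9%
print(f"High-grade lesion yield:  {high_grade_lesions / n_fpc:.1%}")  # ~1.9%
print(f"Broader diagnostic yield: {broader_definition / n_fpc:.1%}")  # ~3.7%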

The retrospective evaluation of an ongoing prospective follow-up study included 411 high-risk individuals: 178 with CDKN2A mutations, 214 with familial pancreatic cancer, and 19 with BRCA1/2 or PALB2 mutations. The study was conducted at three expert centers in Marburg, Germany; Leiden, the Netherlands; and Madrid.

In the BRCA1/2 and PALB2 mutation cohort, one individual (3.8%) with a BRCA2 mutation developed PDAC and underwent surgery; 17 months after the surgery this patient died of liver metastasis. Two others underwent surgery for cystic lesions and are in good health at 10 and 21 months after surgery.

In the cohort of CDKN2A mutation carriers, the mean age at the start of surveillance was 56 years (range, 37-75) and the mean follow-up was 53 months (range, 0-169); in total, 866 MRIs and 106 endoscopic ultrasounds were performed. In the FPC group, the mean age was 48 years (range, 27-81) and the mean follow-up was 2.8 years (range, 0-10.8); 618 MRIs and 402 endoscopic ultrasounds were performed. Among BRCA1/2 and PALB2 mutation carriers, the mean age was 52.6 years (range, 25-70) and the mean follow-up was 32.7 months (range, 1-119).

Body

Given the difficulty of detecting precursor lesions and of distinguishing incipient neoplasia from lower-grade or nonneoplastic cystic lesions, the authors of the accompanying study achieved impressive results in improving cancer outcomes among high-risk individuals.

Several strategies for earlier cancer detection can be gleaned from the study. Improved outcomes may depend on having expert centers run the surveillance. The detection rate of 2%-7%, depending on the cohort studied and the surveillance protocol, may have room for improvement through better risk stratification and protocols refined for cost effectiveness. The age at the start of surveillance may be one place to start: the mean age at pancreatic ductal adenocarcinoma detection was 53-68 years, depending on the center, and it may be possible to shift the starting age upward to improve yield.

The type of mutation conferring susceptibility may aid in risk stratification. For example, CDKN2A mutation carriers had a higher cancer rate (16%) than BRCA/PALB2 mutation carriers (5%). Other factors that could shift risk upward include diabetes, family history, and smoking history, and a composite risk assessment could help identify the highest-risk patients. Lastly, future studies are needed to determine which surveillance protocols are best; to make valid comparisons, several surveillance protocols must be tested.
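
The composite risk assessment mentioned above could, in principle, take the form of a simple additive score over the listed factors. The toy example below is an editorial illustration only; the factors come from the commentary, but the weights, caps, and function itself are invented assumptions, not a validated model.

# Toy additive risk score over the factors the commentary lists.
# Weights and caps are illustrative assumptions, not a validated model.
def composite_risk_score(mutation, diabetes, affected_relatives, smoker):
    score = {"CDKN2A": 3, "BRCA2": 1, "PALB2": 1}.get(mutation, 0)  # mutation type
    score += 2 if diabetes else 0                                   # diabetes
    score += min(affected_relatives, 3)                             # family history (capped)
    score += 1 if smoker else 0                                     # smoking history
    return score

# Example: a CDKN2A carrier with two affected relatives who smokes
print(composite_risk_score("CDKN2A", diabetes=False,
                           affected_relatives=2, smoker=True))  # -> 6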

These results matter not only for high-risk individuals but for the general population as well. The data support the premise that early detection improves outcomes and highlight the need to develop better biomarkers and tests for early detection of PDAC.

 

Dr. Teresa A. Brentnall is professor in the department of medicine, division of gastroenterology, University of Washington, Seattle. These remarks were part of an accompanying editorial (J Clin Oncol. 2016 Apr 25. doi: 10.1200/JCO.2015.64.0730).

Display Headline
Surveillance finds pancreatic ductal adenocarcinoma at resectable stage

Article Source

FROM THE JOURNAL OF CLINICAL ONCOLOGY

Vitals

Key clinical point: Surveillance of high-risk individuals was relatively successful in detecting pancreatic ductal adenocarcinoma (PDAC) at a resectable stage.

Major finding: The detection rate in CDKN2A mutation carriers was 7.3%, and the resection rate for screen-detected PDAC was 75%, compared with previous reports of 15%-20% for symptomatic PDAC; the PDAC detection rate in individuals with familial pancreatic cancer was much lower, at 0.9%.

Data source: Evaluation of an ongoing prospective follow-up study at three European centers included 411 individuals: 178 with CDKN2A mutations, 214 with familial pancreatic cancer, and 19 with BRCA1/2 or PALB2 mutations.

Disclosures: Dr. Vasen and most coauthors reported having no disclosures. Five coauthors reported financial ties to industry sources.