Big Data: The Paradigm Shift Needed to Revolutionize Musculoskeletal Clinical Research

David L. Helfet, MD, MBChB, Beate P. Hanson, MD, MPH, and Diarmuid De Faoite, MBS, BBS

One year ago, we wrote an editorial in The American Journal of Orthopedics on missing data.1 This year, data is once again the focus of our editorial, but from a different perspective. Rather than focus on the problems of incomplete data, we want to talk about the possibilities of collecting all data through advanced technology, a phenomenon better known as “Big Data.”

New Technology

The factors driving Big Data’s ascendancy are the digitalization of useful data, increased means of gathering digitalized data, and cheaper analytic power.2 Computer behemoth IBM claims that 90% of the data in the world today was created in the last 2 years alone.3 Big Data is not just an industry buzzword; it is already an industry in itself. Revenue from Big Data reached $18 billion in 2013 and is predicted to rise to $50 billion in the next 5 years.4 While it is easy to see how Internet companies like Amazon can both collect and use all of the data they receive from customers (to suggest their next purchase, for example), it might be less easy to see how Big Data concepts can be applied to clinical research.

Health Care

Electronic health records are propelling the development of pools of information in health care. Almost half of all hospitals in the United States participate in health information exchanges (HIEs).5 When these data pools are integrated, the information collected can be used in a powerful way. For example, the health maintenance organization Kaiser Permanente uses a new computer system that drives data exchange between medical facilities. Patient benefits include improved cardiovascular disease outcomes, and an estimated $1 billion has been saved through reduced office visits and laboratory tests.5

Contemporary Studies

Let’s quickly consider how we currently conduct clinical studies. Because we do not usually collect data from the entire population, contemporary clinical studies offer only a snapshot of a subsection of patients. The results from this sample are then usually extrapolated to the general population. This approach made sense when collecting data from everyone posed insurmountable technological and logistical problems. So instead of trying to collect data from everyone in the population of interest, we select a sample of patients and expend our energy on controlling for the limitations of the sampling methods we currently employ, which, while suboptimal, are the best available to us.
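To make the sampling problem concrete, here is a toy simulation in Python; the population size, true complication rate, and sample size are invented for illustration and are not drawn from any study cited here.

import random

# Illustrative only: an invented "population" of 100,000 patients with a
# true complication rate of 12%, from which one conventional study sample is drawn.
random.seed(1)
population = [random.random() < 0.12 for _ in range(100_000)]
true_rate = sum(population) / len(population)

# A typical single-center study might enroll a few hundred patients.
sample = random.sample(population, 300)
sample_rate = sum(sample) / len(sample)

print(f"Complication rate in the full population: {true_rate:.3f}")
print(f"Complication rate in one 300-patient sample: {sample_rate:.3f}")

Repeating the draw gives a different sample estimate each time; much of conventional study design and statistics exists to quantify and control exactly this variability.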

What are the consequences of all this? Those of us in clinical research spend much of our energy guarding against methodological pitfalls: confounding, selection bias, missing data, measurement error, and so on. We can also see how imprecise our current methods are by how often a scientific manuscript ends with a call for larger-scale research. Indeed, a scientific paper that does not list the study’s limitations is often regarded with suspicion, a telling indictment of the problems we expect to encounter in clinical research.

So what has historically been the best available solution to these challenges? A meta-analysis of randomized controlled trials sits atop the evidence pyramid as the highest level of evidence. However, even meta-analyses can be problematic. One group of researchers found that, in 2005 and 2008 respectively, 18% and 30% of orthopedic meta-analyses had major to extensive methodological flaws.6 Indeed, implicit in the use of a meta-analysis is the criticism that our current studies, with their limited sample sizes, do not tell the whole story.
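For readers who want to see what the pooling step of a meta-analysis actually does, the minimal fixed-effect (inverse-variance) sketch below may help; the three trial effect sizes and standard errors are invented for demonstration and do not come from the cited orthopedic literature.

import math

# Fixed-effect (inverse-variance) pooling of three invented trial results.
trials = [
    {"effect": 0.30, "se": 0.15},
    {"effect": 0.10, "se": 0.10},
    {"effect": 0.25, "se": 0.20},
]

# Each trial is weighted by the inverse of its variance, so larger,
# more precise trials contribute more to the pooled estimate.
weights = [1 / t["se"] ** 2 for t in trials]
pooled_effect = sum(w * t["effect"] for w, t in zip(weights, trials)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.2f} (standard error {pooled_se:.2f})")

The pooled number is only as trustworthy as the trials and the methodology behind it, which is exactly why the flaws described above matter.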

Paradigm Shift

We are in the middle of a paradigm shift in the way we can collect and analyze data. Our focus until now has been on identifying causal relationships in our studies. New technology that allows large-scale data collection and analysis means that we can now collect ALL patient data, in other words N = all. When you can collect all of the data, why something is happening (causality) becomes less important than what is happening (correlation).7 Studies will therefore begin to focus on effectiveness in the real world as opposed to measurements taken under the ideal (or nearly ideal) conditions of efficacy.

All of this will have implications, the greatest of which is the change in mindset it will demand of us. How we conduct our studies, and what they focus on, will both change and expand. For example, the Mini-Sentinel project uses preexisting electronic health care data from multiple sources to monitor the safety of medical products regulated by the US Food and Drug Administration (FDA). This FDA-sponsored initiative, which began only in 2008, had already collected data on 178 million individuals by July 2014.8


Since we cannot ignore Big Data, we must do what we can to ensure that its potential is harnessed to reduce costs and improve patient outcomes. Given the potential of electronic clinical data, it is also necessary to strike a note of caution. We have to keep uppermost in mind that new technologies like Big Data can unsettle a lot of people. A central tenet of clinical research is that patient data belong to the patient. Robust and transparent processes need to be developed to ensure that patients do not feel compromised in any way by their data being used in such new and far-reaching ways. The need to rethink and implement safeguards is already being addressed. For example, the university-associated Regenstrief Institute does not pass along even deidentified data to its Big Data industry partner.9


However, we also need to be cognizant of the fact that society is changing in how people use and regard their own information. Patient-reported data are already being shared among patients online, for both common and rare diseases. These data are also richer and can go beyond the usual recorded outcomes to give a bigger picture, eg, why patients are not adhering to treatment regimens.10

In summary, it is our earnest belief that if the health care industry can embrace the concept of Big Data and utilize it properly, our patients and medical practices will be all the better for it.

References

1. Helfet DL, Hanson BP, De Faoite D. Publish or perish; but what, when, and how? Am J Orthop. 2013;42(9):399-400.
2. Nash DB. Harnessing the power of big data in healthcare. Am Health Drug Benefits. 2014;7(2):69-70.
3. What is big data? IBM website. http://www-01.ibm.com/software/data/bigdata/what-is-big-data.html. Accessed July 22, 2014.
4. Upbin B. Visualizing the big data industrial complex. Forbes website. http://www.forbes.com/sites/bruceupbin/2013/08/30/visualizing-thebig-data-industrial-complex-infographic/. Published August 30, 2013. Accessed July 22, 2014.
5. Kayyali B, Knott D, Van Kuiken S. The big-data revolution in US health care: accelerating value and innovation. McKinsey & Company website. http://www.mckinsey.com/insights/health_systems_and_services/the_big-data_revolution_in_us_health_care. Published January 2013. Accessed July 22, 2014.
6. Dijkman BG, Abouali JA, Kooistra BW, et al. Twenty years of meta-analyses in orthopaedic surgery: has quality kept up with quantity? J Bone Joint Surg Am. 2010;92(1):48-57.
7. Cukier K, Mayer-Schonberger V. Big Data: A Revolution That Will Transform How We Live, Work, and Think. New York, NY: Eamon Dolan/Houghton Mifflin Harcourt; 2013.
8. Mini-Sentinel distributed data “at a glance.” Mini-Sentinel website. http://www.mini-sentinel.org/about_us/MSDD_At-a-Glance.aspx. Accessed July 22, 2014.
9. Jain SH, Rosenblatt M, Duke J. Is big data the new frontier for academic-industry collaboration? JAMA. 2014;311(21):2171-2172.
10. Okun S, McGraw D, Stang P, et al. Making the case for continuous learning from routinely collected data. Institute of Medicine website. http://www.iom.edu/Global/Perspectives/2013/~/media/Files/Perspectives-Files/2013/Discussion-Papers/VSRT-MakingtheCase.pdf. Published April 15, 2013. Accessed July 22, 2014.



Publish or Perish; But What, When, and How?


If we were to try to identify a Zeitgeist (spirit of the time) in society, one possible answer would be data. In the field of clinical research this could mean data that is collected, not collected, public, hidden from view, published, not published—the list of issues connected to data is almost endless.

In this editorial, we would like to examine clinical research data from 3 different perspectives. What happens when there is no data available? Or when only incomplete data can be accessed? Or when all of the data is in the public realm but is uncritically taken at face value?

There is currently a groundswell of opinion that the subject of transparency of clinical trial data needs to be tackled. This campaign is particularly strong in the United Kingdom, where the British Medical Journal and advocacy groups like www.alltrials.net have gained prominence. Ben Goldacre, author of the recent book Bad Pharma, goes so far as to say, “The problem of missing trials is one of the greatest ethical and practical problems facing medicine today.”1

Here in the United States we also have issues with data. One study from 2009 found that the results of only 44% of trials conducted in the United States and Canada were published in the medical literature.2 However, that study covered general medicine; how are we faring in orthopedics? A study from 2011 identified orthopedic trauma trials registered on www.clinicaltrials.gov and followed them up to see whether they were published within a reasonable timeframe.3 The result? Only 43.2% of the orthopedic trauma trials studied resulted in a publication, a figure that almost exactly mirrors the findings from the general medicine study.

Data that is not released obviously skews the evidence available to us as clinicians and researchers. More insidious still is incomplete data as it gives a false picture to anyone reading the original study or to a researcher who wants to include the study in a meta-analysis. We are all aware of the difficulty of having complete patient follow-up because, ironically, we as surgeons have enabled our patients to walk away from the study. How should we best deal with these gaps in our knowledge? Some statistical techniques have been developed to deal with just this problem.

One set of researchers looked at how missing data were dealt with in intention-to-treat analyses in orthopedic randomized clinical trials.4 They took 1 published trial of displaced midshaft clavicular fractures and recalculated its results using a different way of handling patients lost to follow-up. These researchers applied the Last Observation Carried Forward technique and compared it with the original method, which was exclusion from the analysis. This change in approach altered the statistical significance of the nonunion and overall complication results. However, the use of these various methods for handling missing data in intention-to-treat analyses is itself the subject of some controversy in orthopedic clinical research.5
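For readers unfamiliar with the technique, the sketch below, written in Python with pandas, contrasts complete-case exclusion with Last Observation Carried Forward on a handful of invented follow-up scores; the patients, visit schedule, and outcome scale are assumptions for illustration only and are not data from the cited clavicle trial.

import pandas as pd

# Invented functional scores for 5 patients at 3 follow-up visits (illustration only).
# None marks visits where the patient was lost to follow-up.
scores = pd.DataFrame(
    {
        "week_6":  [70.0, 65.0, 80.0, 55.0, 60.0],
        "week_12": [75.0, None, 85.0, 60.0, None],
        "week_24": [80.0, None, 90.0, None, None],
    },
    index=["pt1", "pt2", "pt3", "pt4", "pt5"],
)

# Complete-case analysis: exclude any patient with a missing visit.
complete_cases = scores.dropna()

# Last Observation Carried Forward: replace each missing visit with the
# patient's most recent observed value.
locf = scores.ffill(axis=1)

# The two approaches can give different answers to the same question.
print("Week 24 mean, complete cases:", complete_cases["week_24"].mean())
print("Week 24 mean, LOCF:", locf["week_24"].mean())

Even in this tiny example the two approaches give different week-24 means, because carrying the last observation forward keeps patients who dropped out after poorer early scores in the analysis rather than silently excluding them.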

There is more than merely anecdotal evidence that uncritical acceptance of research findings can harm patients. We are all familiar with the recent metal-on-metal hip implant controversy, in which promising early results were not borne out by later experience. One study, which found a combined clinical and radiographic failure rate of 28% among large-diameter metal-on-metal articulations in total hip arthroplasty, notes that “adequate preclinical trials may have identified some of the shortcomings of this class of implants before the marketing and widespread use of these implants ensued.”6

Is such a volte-face in the published evidence a rare occurrence? Perhaps not. A well-known review from 2005 examined 49 highly cited studies and found that 45 claimed the intervention was effective.7 Subsequent investigations contradicted the findings of 7 of these positive studies (16%), and a further 7 (16%) had reported effects stronger than those of any of the follow-up studies, which were larger or better controlled. The evidence for almost one-third of the positive-result studies (14 of 45, or roughly 31%) was therefore changed, either wholly or partly. Keep in mind that this figure does not take into account the 11 positive-result studies that were not replicated at all.

In all of this, we have to accept that things are rarely black and white. When is the best time to release information? For example, the conclusion for a closed-fracture treatment subgroup in the Study to Prospectively Evaluate Reamed Intramedullary Nails in Tibial Fractures (SPRINT) changed only after 800 patients had been enrolled; a smaller trial would have led to an incorrect conclusion for this subgroup.8 As you can see, deciding when to release data is a delicate matter, influenced by many factors, not least time and costs. Many contemporary clinical researchers also operate under pressure to publish.9 And all of us are aware of the kudos that accrue from being the first-listed author on a manuscript!


Unfortunately, knowing how to identify good, bad, and premature information, and how to filter out what is relevant from today’s flood of medical publications, is likely to remain an intractable problem for all of us involved in conducting or assessing clinical research for the foreseeable future. This is why the critical appraisal techniques of evidence-based medicine are invaluable.

Starr,10 writing about the advances in fracture repair achieved by the AO (Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation), says, “Fortunately, the surgical pioneers who described early use of these techniques were harsh critics of their own work. The need for better methods and implants was evident.” From its founding, the AO inculcated a culture in which data, positive or negative, were shared.

Perhaps the ‘Golden Age of Orthopedic Surgery’ has already passed. But even with all of the advances in today’s operating room, we should continue to strive to improve what it is we do, even if it is only incrementally. As this editorial has illustrated, complacency about clinical research data presents a challenge to better patient care. We need to continue to be inquisitive and questioning in our quest to be better!

Dr. Helfet is Associate Editor of Trauma of this journal; Professor, Department of Orthopedic Surgery, Cornell University Medical College; and Director of the Orthopaedic Trauma Service, at the Hospital for Special Surgery and New York–Presbyterian Hospital, New York, New York. Dr. Hanson is Director and Mr. De Faoite is Education Manager, AO (Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation) Clinical Investigation and Documentation (AOCID), Dübendorf, Switzerland.

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.

Am J Orthop. 2013;42(9):399-400. Copyright Frontline Medical Communications Inc. 2013. All rights reserved.

References


1. Davies E. The shifting debate on trial data transparency. BMJ. 2013;347:f4485.
2. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med. 2009;6(9):e1000144.
3. Gandhi R, Jan M, Smith HN, Mahomed NN, Bhandari M. Comparison of published orthopaedic trauma trials following registration in Clinicaltrials.gov. BMC Musculoskelet Disord. 2011;12:278.
4. Herman A, Botser IB, Tenenbaum S, Chechick A. Intention-to-treat analysis and accounting for missing data in orthopaedic randomized clinical trials. J Bone Joint Surg Am. 2009;91(9):2137-2143.
5. Scharfstein DO, Hogan J, Herman A. On the prevention and analysis of missing data in randomized clinical trials: the state of the art. J Bone Joint Surg Am. 2012;94(suppl 1):80-84.
6. Steele GD, Fehring TK, Odum SM, Dennos AC, Nadaud MC. Early failure of articular surface replacement XL total hip arthroplasty. J Arthroplasty. 2011;26(6 suppl):14-18.
7. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294(2):218-228.
8. Slobogean GP, Sprague S, Bhandari M. The tactics of large randomized trials. J Bone Joint Surg Am. 2012;94(suppl 1):19-23.
9. Duvivier R, Crocker-Buqué T, Stull MJ. Young doctors and the pressure of publication. Lancet. 2013;381(9876):e10.
10. Starr AJ. Fracture repair: successful advances, persistent problems, and the psychological burden of trauma. J Bone Joint Surg Am. 2008;90(suppl 1):132-137.


