
Improving standards of medical and public health research
G A Colditz

Channing Laboratory, Department of Medicine, Brigham and Women's Hospital, and Harvard Medical School, Boston, MA, USA

Correspondence to: Professor G A Colditz, Harvard Medical School, Channing Laboratory, 181 Longwood Avenue, Boston, MA 02115-5889, USA; graham.colditz@channing.harvard.edu


The need for application and dissemination of best practices

In his essay, Feinstein reiterates many of the themes he pursued over the past 20 years as he called for greater scientific rigour in the evaluation of clinical medicine. His call to apply the standard rigour of clinical epidemiological research to the evolving field of molecular markers in clinical medicine echoes his prior writing in this area,1 and similar calls to improve standards for the evaluation of diagnostic tests.2 It follows in the tradition of content analysis of published studies as a means of improving the quality of research. Studies that evaluate the methods used in the conduct and reporting of research have contributed over the years to improved standards for clinical research in surgery and medicine.3,4 For example, DerSimonian et al surveyed all 67 clinical trials published in the New England Journal of Medicine, the Lancet, and the British Medical Journal from July to December 1979, and in the Journal of the American Medical Association from July 1979 to June 1980, to determine how often 11 important aspects of design and analysis were reported.4 Based on their review they concluded that reporting could be substantially improved. Subsequently, guidelines for the reporting of randomised trials have been adopted5,6 and the quality of reporting has improved.7 This approach of documenting current practice in medical research across a wide range of topic areas has been taken by Feinstein and his colleagues, as well as by numerous other groups working to improve the quality of medical research and its application.

Is the cup half full or half empty? While Feinstein sets forth a series of methodological challenges, it is important to note that much is already being done to address several of the issues he raises. For example, the problem that many diagnostic tests lack a definitive or “gold” standard has been addressed by methodologists, who have set forth strategies for combining test results when the reference standard is imperfect.8 Importantly, the evaluation of a diagnostic test is primarily an evaluation of the value of the information it contributes to clinical decision making.9 In fact, the very emphasis on statistical methods that Feinstein criticises10 forms the basis for many of the evolving approaches to the evaluation and application of diagnostic tests. The Radiology Diagnostic Oncology Group has been in place in the United States for over a decade with the goal of providing timely and generalisable clinical evaluations of imaging technologies.11

Also of note is Feinstein's call to reconsider the allocation of resources so as to avoid research that is “often unnecessary”. This is consistent with the refrain of many that the value of the information a test will provide should be weighed before the test is ordered; if no new action will be taken on the basis of the possible results, then the test is not indicated. The clear handbook on evidence based clinical medicine that grew out of the informative series in JAMA explicitly sets forth the steps in evaluating research studies and applying the results to individual patients.12 It also addresses such issues as the measurement of agreement between observers and the grading of treatment guidelines or recommendations. Thus, while Feinstein criticised the “use of reductive calculation of sensitivity and specificity, and other mathematical indices that directly connect manifestation, marker, or test result with a specific disease”, the field has evolved to consider also the steps in applying results from group level data (such as the results of clinical trials) to individuals.13,14 These approaches have yet to be effectively disseminated and implemented.15 Furthermore, there is an urgent need for research on the relative effectiveness and efficiency of different strategies to speed dissemination and implementation.
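The step from group level indices to the individual patient can be made concrete with a minimal sketch (an illustration using the standard definitions, not drawn from Feinstein's essay or the handbook). Sensitivity and specificity are estimated in groups of patients, yet what matters at the bedside is the probability of disease given a test result, which also depends on the pretest prevalence:

\[
\text{sensitivity} = \Pr(T^{+} \mid D^{+}), \qquad
\text{specificity} = \Pr(T^{-} \mid D^{-})
\]

\[
\Pr(D^{+} \mid T^{+}) \;=\;
\frac{\text{sensitivity} \times p}
     {\text{sensitivity} \times p + (1 - \text{specificity})(1 - p)}
\]

where \(p\) is the pretest prevalence. With sensitivity and specificity both 0.90, a positive result in a patient from a population with 1% prevalence yields a post-test probability of disease of only about 8%; the same result at 30% prevalence yields about 79%. The group level indices are unchanged, but the inference for the individual is not.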

A theme of Feinstein's writing has been the limited quality of scientific studies and the menace that this can cause, whether in clinical settings, through bias and failure to consider error in evaluations, or in epidemiological studies, where causal inference may be incorrectly applied.16 His critiques have not always been correct. His erroneous conclusion that alcohol is not causally related to breast cancer has been clearly rebutted by the more than 40 studies published on this relation,17 by consistent results from numerous prospective studies,18 and by a series of mechanistic studies.19

Despite these concerns, rigour in the evaluation of medical interventions remains imperative. The priority given to this is exemplified by reports from the US Institute of Medicine of the National Academy of Sciences,20,21 and, more recently, by the establishment in the United Kingdom of the National Coordinating Centre for Health Technology Assessment, which coordinates the health technology assessment programme for the Department of Health. In summary, much exciting work is ongoing to address the concerns identified by Feinstein. The field requires continuing methods development, and the application and dissemination of best practices. The challenge is before us and we must endeavour to move all aspects of this research agenda forward.


REFERENCES