Future challenges for research on diagnostic tests: genetic tests and disease prevention
S S Coughlin

Epidemiology and Health Services Research Branch, Division of Cancer Prevention and Control, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, Atlanta, GA, USA

Correspondence to: Dr S S Coughlin, Epidemiology and Health Services Research Branch, Division of Cancer Prevention and Control, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, 4770 Buford Hwy, NE (K-55), Atlanta, GA 30341, USA; sic9@cdc.gov


Evidence based assessments for new diagnostic strategies

In his paper about possible ways to improve research on diagnostic testing,1 Dr Feinstein observed that “the methodological problems are particularly noteworthy in the new era of molecular biology and genetic testing.” Methodological issues surrounding research on genetic testing have been addressed in several recent articles and reports from expert advisory panels.2–5 Although some of these reports have focused on technological and quality assurance challenges in large scale genetic testing and screening, many of their recommendations and conclusions are applicable to genetic testing conducted in a clinical setting or as part of research.

The National Institutes of Health-Department of Energy Working Group on Ethical, Legal, and Social Implications of Human Genome Research, Task Force on Genetic Testing, noted that the clinical use of a genetic test must be based on evidence that the gene being examined is associated with the disease in question, that the test results will be useful to the people being tested, and that the test itself has analytical and clinical validity.4 Whereas clinical validity refers to the accuracy with which a test predicts the presence or absence of a clinical condition or predisposition, analytical validity is an indicator of how well a test performs in the laboratory.5 For DNA based tests, analytical validity requires establishing the probability that a test will be positive when a particular sequence (analyte) is present (analytical sensitivity) and the probability that the test will be negative when the sequence is absent (analytical specificity).4 Analytical validation of a new genetic test includes comparing it with the most definitive or “gold standard” method, performing replicate determinations to ensure that a single observation is not spurious, and “blind” testing of coded positive samples (from patients with the disease in whom the genetic alteration is known to be present) and negative samples (from controls).4 In most instances, the gold standard entails gene sequencing to detect mutations.
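To make the analytical measures concrete, the sketch below (in Python, with entirely hypothetical counts from a blinded validation panel) shows how analytical sensitivity and specificity would be estimated from coded positive and negative samples. It illustrates the definitions above and is not a description of any particular laboratory protocol.

```python
# Sketch only: estimating analytical validity from a blinded validation panel.
# All counts are hypothetical.

def analytical_validity(true_pos, false_neg, true_neg, false_pos):
    """Return (analytical sensitivity, analytical specificity).

    true_pos, false_neg: results on coded samples known to contain the sequence.
    true_neg, false_pos: results on coded samples known to lack the sequence.
    """
    sensitivity = true_pos / (true_pos + false_neg)   # P(test+ | analyte present)
    specificity = true_neg / (true_neg + false_pos)   # P(test- | analyte absent)
    return sensitivity, specificity

# Hypothetical panel: 98 of 100 known positives detected; 199 of 200 known
# negatives correctly reported as negative.
sens, spec = analytical_validity(true_pos=98, false_neg=2, true_neg=199, false_pos=1)
print(f"analytical sensitivity = {sens:.3f}")   # 0.980
print(f"analytical specificity = {spec:.3f}")   # 0.995
```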

As noted by the Task Force, clinical validation involves establishing several measures of clinical performance: the probability that the test will be positive in people with the disease (clinical sensitivity), the probability that the test will be negative in people without the disease (clinical specificity), the probability that people with positive test results will develop the disease (positive predictive value), and the probability that people with negative results will not develop the disease (negative predictive value), topics also considered by Dr Feinstein. For many predictive genetic tests, knowledge of the test's clinical validity may be incomplete for years after the test is developed, which requires that the potential harms of the test be considered more carefully.5
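These four measures follow directly from a 2x2 cross classification of test result against eventual disease status. A minimal sketch, again with hypothetical figures:

```python
# Sketch only: the four clinical performance measures from a hypothetical
# 2x2 table of test result against eventual disease status.

def clinical_validity(a, b, c, d):
    """a = test+/disease+, b = test+/disease-,
    c = test-/disease+, d = test-/disease-."""
    return {
        "clinical sensitivity":      a / (a + c),  # P(test+ | disease)
        "clinical specificity":      d / (b + d),  # P(test- | no disease)
        "positive predictive value": a / (a + b),  # P(disease | test+)
        "negative predictive value": d / (c + d),  # P(no disease | test-)
    }

# Hypothetical cohort of 1000: 90 true positives, 30 false positives,
# 10 false negatives, 870 true negatives.
for measure, value in clinical_validity(a=90, b=30, c=10, d=870).items():
    print(f"{measure}: {value:.3f}")
```

Unlike sensitivity and specificity, the two predictive values depend on the prevalence of disease in the population tested, which is one reason the apparent clinical validity of a predictive genetic test can differ markedly between high risk families and the general population.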

The heterogeneity of genetic diseases and their penetrance (the probability that disease will appear when a disease related genotype is present) affect clinical validity. The same genetic disease might result from the presence of any of several different variants (alleles) of the same gene (allelic diversity) or of different genes (locus heterogeneity).4 With current technology, not all disease related alleles can be identified, and this failure to detect all disease related mutations reduces a test's clinical sensitivity.4 When penetrance is incomplete, the positive predictive value is reduced.4
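As a numerical illustration of the penetrance point, the sketch below (with hypothetical carrier prevalence, test characteristics, and penetrance) applies Bayes' theorem: even a nearly perfect assay for a rare genotype yields only a modest positive predictive value, and incomplete penetrance lowers it further in direct proportion.

```python
# Sketch only: how incomplete penetrance lowers the positive predictive value.
# The prevalence, test characteristics, and penetrance are hypothetical.

def ppv_for_disease(carrier_prevalence, test_sens, test_spec, penetrance):
    """P(disease | positive test), assuming disease arises only in carriers
    and does so with probability `penetrance` (a deliberate simplification)."""
    p_pos = (test_sens * carrier_prevalence
             + (1 - test_spec) * (1 - carrier_prevalence))        # P(test+)
    p_carrier_given_pos = test_sens * carrier_prevalence / p_pos  # Bayes' theorem
    return penetrance * p_carrier_given_pos

# A rare genotype (1% of those tested) and a nearly perfect assay:
print(ppv_for_disease(0.01, 0.99, 0.99, penetrance=1.0))  # ~0.50
print(ppv_for_disease(0.01, 0.99, 0.99, penetrance=0.6))  # ~0.30
```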

Dr Feinstein noted the challenges of evaluating diagnostic marker tests when the disease cannot always be determined to be present or absent, and test results are not always positive or negative. Even with DNA based testing, test results can be indeterminate and genetic variants are sometimes detected that are of unknown clinical significance. For example, studies of BRCA1 and BRCA2 gene mutations have sometimes found genetic variants that are of uncertain significance with respect to risk of breast and ovarian cancer.6 Information about the limitations of genetic testing should be conveyed to research participants as part of informed consent and genetic counselling.

Dr Feinstein noted that some tests “are used not to diagnose a particular condition but to identify the patient's location in the spectrum of phenomena associated with the disease.” Genetic tests may have similar applications in the future. Persons identified on the basis of phenotype or a pronounced family history of disease may undergo genetic testing in research protocols to determine whether they carry certain mutations. For example, persons with hereditary non-polyposis colorectal cancer (HNPCC) may undergo genetic testing to determine whether they carry hMLH1 or hMSH2 gene mutations.7 Genetic studies have also examined whether disease risks vary by the location of genetic mutations. For example, risk of breast and ovarian cancer has been reported to vary according to the specific location of BRCA2 gene mutations.8

In addition to studies of test validity, scientifically rigorous evaluations of the effectiveness of genetic testing and screening are needed.9,10 Evidence that screening for a genetic trait or mutation, followed by early intervention or treatment, results in improved prognosis and favourable health outcomes should ideally come from randomised trials. Genetic testing and screening sometimes allow a genetic condition or predisposition to be identified before the onset of clinically recognised, irreversible disease. Newborn screening for phenylketonuria (PKU), for example, detects a condition for which an effective preventive intervention (a phenylalanine restricted diet) is available.

Genetic screening for adult onset disorders is not recommended at present, outside of high risk families and research protocols. However, studies are examining the genetic basis of common, adult onset disorders that cause substantial morbidity and mortality. Examples include mutations of the hMSH2 and hMLH1 genes, which are associated with increased susceptibility to colorectal cancer, and the association between factor V Leiden and thromboembolic disorders.7

In the future, it may be feasible to prevent diseases in adults by identifying and modifying environmental risk factors among genetically susceptible persons.10,11 Genetic testing might allow for the identification of persons truly at increased risk for an illness and for targeted medical interventions. For example, genetic testing may allow for the identification of subgroups of patients who are more likely to benefit from preventive strategies such as the use of cholesterol lowering drugs. Pharmacogenetics may allow for tailored drug therapy and disease prevention based upon genetic variation in effectiveness and side effects.12,13

Rigorous research methods for assessing the validity and effectiveness of genetic tests are needed to obtain evidence based assessments of their clinical and public health utility. Both conventional methods and future refinements in research methods may have a role.
