Pragmatic randomised controlled trials in parenting research: the issue of intention to treat
Karen Whittaker, Chris Sutton, Chris Burton

Department of Nursing, University of Central Lancashire, Preston, UK

Correspondence to: Karen Whittaker, Department of Nursing, University of Central Lancashire, Preston PR1 2HE, UK; kwhittaker1{at}uclan.ac.uk

Abstract

Study objective: To evaluate trials of parenting programmes with regard to their use of intention to treat (ITT).

Design: Individual trials included in two relevant Cochrane systematic reviews were scrutinised by two independent reviewers. Data on country of origin, target audience, trial type, treatment violations, use of ITT, and the management of missing data were extracted.

Main results: Thirty trial reports were reviewed. Three reported the use of an ITT approach to data analysis. Nineteen reported losing subjects to follow up, although the implications of this were rarely considered. Insufficient detail in reports made it difficult to identify study drop outs, the nature of treatment violations, and participants who failed to provide outcome assessments. In two trials, study drop outs were treated as additional control groups, violating the basic principle of ITT.

Conclusions: It is recommended that future trial reports adhere to CONSORT guidelines. In particular, ITT should be used for the main analyses, with strategies for managing treatment violations and handling missing data reported a priori. Those conducting trials need to acknowledge that the social nature of these programmes can result in erratic parent attendance and participation, which increases the likelihood of missing data. The use of approaches that limit the proportion of missing data is therefore recommended.

  • CONSORT
  • intention to treat
  • parenting
  • randomised controlled trial

Early investment in the lives of children is recognised by the World Health Organisation, the United Nations Population Fund, The World Bank, and Family Health International as a global policy imperative to reduce rates of premature death in the poorest parts of the world. Moreover, interventions that combine support for the physical and psychological needs of children, by providing guidance aimed at strengthening parent-child relations, will have additional benefits for the survival of those within disadvantaged communities.1 One means of doing this has been through the provision of programmes addressing parenting practices.

The presence of parenting as an area of interest in UK government policy corresponds to simultaneous attention on crime and antisocial behaviour.2 A connection between parenting practices, early life experiences, and later behaviour has been repeatedly drawn and has led to the development in the UK of new modes of service delivery such as Sure Start and, more recently, Children’s Centres.3 These programmes, like their forerunner the United States Head Start programme,4 have been developed for young families living within socially disadvantaged communities. The Sure Start programme has radically influenced the way in which community health services for children are now organised within the UK. Moreover, the message from UK national policy is that parents need supporting5 given that they are the most important influence in the lives of growing children3: a political will perhaps fuelled by the simultaneous desire to reduce crime and improve training and employment chances as the basis for a stable national economy.6 Regardless of motive, this political climate has opened the way for the development of numerous parenting support programmes within the UK,7 supporting the growth of what might be described as a “parenting support industry” that creates parenting programmes as commercial products. As a consequence, the National Institute for Health and Clinical Excellence (NICE) has recently published its own appraisal of the effectiveness of these programmes for UK health and social care services,8 and in the same month the UK Home Office Respect Task Force launched plans to expand parenting provision nationally.9 This reinforces the need for parenting interventions to be supported by credible evidence10 and thus subject to criteria endorsed by the evidence based healthcare movement.11,12

Traditionally, the strongest evidence of effectiveness comes from rigorous evaluation methods, specifically the randomised controlled trial (RCT). However, it is the quality of the application of the method, rather than the method itself, that should be the primary concern.13 The purpose of this paper is to explore the application of RCTs in parenting research and the quality of their reporting. To date, the RCT has been one of the main methods used in the evaluation of parenting interventions, possibly to address the scepticism that exists about the justification for using public money to fund parenting support. Critics of the use of RCTs in evaluating socially complex interventions highlight issues regarding homogeneity of the intervention received.14–16 In parenting trials, factors affecting homogeneity may include the interaction between parents and practitioners and parents dipping in and out of programmes without necessarily withdrawing. This reinforces the need for good research design and careful, thoughtful analysis and reporting. The Medical Research Council framework for the evaluation of complex interventions17 recognises these issues to some degree. It notes that traditional principles of evaluation may be applied to a complex intervention, and appreciates the importance of early theoretical work in identifying its active ingredients and their interrelationships. However, it fails to provide clear guidance on the analytical issues relating to intention to treat (ITT).

PRAGMATIC RANDOMISED CONTROLLED TRIALS

There are essentially two types of RCT: explanatory (sometimes now termed fastidious) and pragmatic. An explanatory trial provides evidence of the efficacy of the experimental intervention under ideal conditions, preferably when compared with a placebo. A pragmatic trial, by contrast, studies clinical effectiveness in relation to current best practice.18 Pragmatic and explanatory attitudes are, however, likely to coexist in some trials19 (for example, the use of both sham acupuncture and “no treatment” control groups in acupuncture trials).

A key feature of pragmatism in RCTs is the management of missing data, or data obtained from participants who did not progress through a trial as planned (often termed treatment or protocol violators). Although allocating participants in a random manner is vital to ensure an unbiased estimate of clinical effectiveness, there is the potential for participants not to receive the planned or minimum level of intervention as detailed in the trial protocol. The CONSORT (consolidated standards of reporting trials) statement has in recent years set a benchmark for the quality of clinical trial reporting. The first CONSORT statement was published in the Journal of the American Medical Association in 1996,20 with a revision published in 2001.21 This revision proposes 22 items requiring consideration in a trial report, and recommends the use of a flowchart to show patterns of subject recruitment, withdrawal, and completion. Critically, the CONSORT guidelines state that reports of RCTs should specify how an ITT analysis was implemented. Despite CONSORT, reviews suggest that this is rarely done.22,23

The first reference to ITT within health literature is attributed to Austin Bradford Hill24 (page 258) and may have appeared in editions of his book dating back to the 1950s.25 He stated “unless the losses are very few and therefore unimportant, we may inevitably have to keep such patients in the comparison and thus measure the intention to treat in a given way rather than the actual treatment. The question of the introduction of bias through exclusions for any reason (including lost sight of) must, therefore, always be carefully studied, not only at the end of a trial but throughout its progress. This continuous care is essential in order that we may immediately consider the nature of the exclusions and whether they must be retained for enquiry for follow-up, measurement etc. It will be too late to decide that at the end of the trial”.

While there is no standard definition of ITT, the American Statistical Association (ASA) gave what is probably the most widely accepted version: “an intention to treat analysis is one which includes all randomised patients in the groups to which they were randomly assigned, regardless of the compliance with the entry criteria, regardless of the treatment they actually received, and regardless of subsequent withdrawal from treatment or deviation from the protocol”.26 It is claimed that using an ITT approach in the analysis reduces the possibility of overestimating any clinical effectiveness23,27 and is therefore most suitable for pragmatic RCTs.
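The distinction between analysing by allocated group and analysing by treatment received can be made concrete with a small worked example. The sketch below uses entirely hypothetical data; the column names (group_assigned, attended_sessions, outcome) are illustrative assumptions, not taken from any of the reviewed trials. It simply contrasts an ITT summary with a per-protocol summary that discards programme-arm parents who never attended.

```python
# Minimal sketch contrasting ITT with a per-protocol analysis on hypothetical
# parenting trial data; all values and column names are invented.
import pandas as pd

df = pd.DataFrame({
    "group_assigned": ["programme"] * 5 + ["control"] * 5,   # arm at randomisation
    "attended_sessions": [8, 0, 3, 8, 6, 0, 0, 0, 0, 0],      # actual exposure received
    "outcome": [12, 20, 18, 10, 14, 22, 19, 21, 18, 20],       # e.g. child behaviour score
})

# ITT: analyse every randomised participant in the arm they were assigned to,
# regardless of how many sessions they actually attended.
itt_means = df.groupby("group_assigned")["outcome"].mean()

# Per-protocol: drop programme-arm parents who never attended, which breaks
# the comparability created by randomisation and risks overestimating benefit.
per_protocol = df[(df["group_assigned"] == "control") | (df["attended_sessions"] > 0)]
pp_means = per_protocol.groupby("group_assigned")["outcome"].mean()

print("ITT group means:\n", itt_means)
print("Per-protocol group means:\n", pp_means)
```

Because non-attenders are rarely a random subset of those randomised, the per-protocol comparison no longer enjoys the protection of randomisation.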

On initial consideration of the ASA definition, it might seem that applying an ITT analysis is straightforward, provided complete data are available for all participants whether or not they withdraw or comply. In practice this is rarely the case, and any missing data have the potential to introduce bias.28,29 Standard methods for analysing data rely on the assumption that data are missing for reasons unrelated to the values they would have taken. However, this assumption, necessary for the validity of the analyses, cannot be formally tested without the actual data. While some reasons for missing data (for example, incorrectly enrolling ineligible participants in the trial) are less likely than others (for example, withdrawal because of side effects or lack of efficacy) to cause substantive bias in estimating effectiveness, it will still be difficult to provide a convincing argument that the missing data can be ignored. For example, in a study of a parenting programme, failure to complete outcome assessment, for whatever reason, would need to be unrelated to the programme’s outcomes for those families if the analyst were to validly ignore the missing data.
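A brief simulation can illustrate why outcome-related missingness matters. The figures below are invented purely for illustration: the true difference between arms is zero, but parents with poorer scores in the programme arm are assumed to be more likely to miss the final assessment, so a complete-case comparison suggests a spurious effect.

```python
# Illustrative simulation (not from the reviewed trials) of bias from
# missingness that depends on the outcome itself.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # 0 = control, 1 = programme
outcome = rng.normal(50, 10, n)                # same distribution in both arms: true effect is zero

# Assumed mechanism: programme-arm parents with high (poor) scores often skip follow up.
p_missing = np.where((group == 1) & (outcome > 55), 0.6, 0.05)
missing = rng.random(n) < p_missing
observed = ~missing

cc_diff = outcome[observed & (group == 1)].mean() - outcome[observed & (group == 0)].mean()
full_diff = outcome[group == 1].mean() - outcome[group == 0].mean()

print(f"True difference (all data):        {full_diff:+.2f}")
print(f"Complete-case difference (biased): {cc_diff:+.2f}")
```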

Strategies have been proposed for the management of missing data, many of which entail considering one or more plausible potential reasons for data being missing. Based on these alternative reasons, values should be imputed (filled in) and the sensitivity of the conclusions assessed. While last observation carried forward (LOCF) is the most commonly applied imputation method in trials with repeated outcome assessment, many authors have been critical of its widespread application.30–33 It leads to bias in the estimation of clinical effectiveness, but the magnitude and direction of the bias can only be surmised. Moreover, substituting a fixed value, such as the group mean, will artificially reduce the variability while inflating the apparent sample size, thus reducing the standard error. This will increase the type I error rate above that stated, making it more likely that clinical effectiveness is incorrectly concluded. The recommendation now made in the literature (for example, Verbeke et al30) is that LOCF should be used with caution.
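As an illustration of the mechanics involved, the sketch below applies LOCF and group-mean substitution to a small, invented set of repeated assessments (the column names and scores are assumptions); note how mean substitution shrinks the standard deviation of the final scores, which in turn understates the standard error in any subsequent comparison.

```python
# Sketch of two single-imputation strategies discussed in the text, applied
# to hypothetical repeated-measures data with missing final assessments.
import numpy as np
import pandas as pd

scores = pd.DataFrame({
    "baseline": [30, 28, 35, 40, 33],
    "week_4":   [26, 27, 30, np.nan, 31],
    "week_8":   [24, np.nan, np.nan, np.nan, 29],   # final outcome, partly missing
})

# LOCF: fill each missing value with that participant's last observed score.
locf = scores.ffill(axis=1)

# Group-mean substitution: replace missing final scores with the observed mean.
mean_imputed = scores["week_8"].fillna(scores["week_8"].mean())

print(f"Observed week 8 SD:      {scores['week_8'].std():.2f}")
print(f"Mean-imputed week 8 SD:  {mean_imputed.std():.2f}  (smaller: variability understated)")
print(locf)
```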

EXPLORATORY REVIEW

This paper will report an exploratory review of the current position regarding ITT in parenting research. A representative pool of RCTs in parenting was sought so that data collection and analysis techniques could be scrutinised. A decision was taken to use systematic reviews in parenting identified from the Cochrane database of systematic reviews as, by definition, these reviews should contain the most rigorously conducted trials in this area.34 Two suitable systematic reviews were identified focusing on the effectiveness of parent-training programmes,35 and the relative merits of individual compared with group parenting programmes.36 Both reviews are current, and were updated in 2003 and 2001 respectively. Individual trial reports identified in the systematic reviews were obtained for analysis and data extracted independently by two reviewers (KW and CS). Extracted data related to the country of origin, target audience, type of trial, reported treatment violations, explanations regarding the reported use of ITT and claims regarding missing data. A third reviewer (CB) was used to support decision making where the appraisal of papers was complex.

RESULTS

A pool of 30 papers37–66 was available for review (table 1). Issues that necessitated the involvement of the third reviewer were identified in four of the papers. This was principally to clarify the issues relating to unconventional study designs.

Table 1 Summary of papers reviewed

Most of the trials were conducted in North American countries (n = 23). Most were reported in the 1990s (n = 22), with a minority in the 1970s and 1980s (n = 5); only three papers had been published since 2000. Twenty-four of the papers were published in psychology oriented journals.

RANDOMISATION

None of the papers, with the exception of Patterson et al,53 provided a flowchart to illustrate the numbers of subjects randomised or lost during the course of the study. In total, 25 of the papers reported RCTs, four of which used cluster randomisation, and four papers reported an experimental design with no random allocation of subjects. As the non-randomised studies differ in their philosophy, and the overarching principle of ITT, namely maintaining a comparison of randomised groups, does not apply to them, these studies are not considered further.37,49,56,57 One further paper54 was difficult to classify as it reported the results of two separate RCTs (one of which was also separately reported).55 For the purpose of this review, these papers are considered separately, as in the original systematic review.

Even among those papers identified as RCTs, establishing group allocation proved to be a challenge as some papers described an unconventional study design. For example, Truss et al62 had a group of 56 subjects who did not attend the experimental intervention sessions as originally randomised; these subjects were subsequently analysed as a separate control group. A second example was Gross et al,43 where again two control groups were used, as the early drop outs (n = 7) from the intervention group were analysed separately despite the fact that they had originally been randomised into the treatment group. Additionally, extrapolating from the details of parent drop outs, at least 435 parents must have been randomised in the Cunningham et al paper,40 yet only 150 entered the intervention phase.

INTENTION TO TREAT

Only two papers claimed to use ITT. The first61 reported on a Canadian study and was published in 1998 in a mental health/psychology journal. The second53 reported on a UK study and was published in a medical journal. In both studies, subjects were followed up according to their original allocation, but where data were missing because of failure to respond, subjects were excluded from the relevant analyses (see table 2).

Table 2 Reporting of ITT principles

In a further paper the main principles of ITT were applied,44 but the authors neither made an explicit claim nor used the term ITT. Instead they state: “because this was designed as an effectiveness trial, the analyses presented below used all of the data available after randomization, regardless of whether the parents attended any group sessions” (page 813). In effect they tried to include all subjects originally recruited to the study. This was an American study published in a psychology journal.

In seven of the RCTs (six individually randomised and one cluster randomised) it was implied or stated that no subjects dropped out of the study and that no outcome data were missing (see table 2). In the remaining 19 randomised trials there was some reporting of lost subjects, although the detail of this reporting was highly variable and the implications were rarely discussed (see table 2). Furthermore, inconsistency in the tabular presentation of data, together with the absence of sample sizes, introduced additional challenges when interpreting study results. Five of the studies reporting missing data indicated that small payments had been made to participating parents,47,50,51,61,62 yet despite this, treatment violation had still occurred. Some studies40,44 analysed the demographic features of drop outs and participants or completed an attrition analysis.60 In another paper38 baseline and demographic data were compared between those who dropped out and those who completed the study. In two trials43,62 the most basic principle of ITT was violated when drop outs from the intervention group were analysed as another control group.

A differentiation between treatment violators and non-responders was not always made.38,51 Equally, when treatment violation had been identified, either as a result of programme drop out or partial non-attendance at sessions, it was not always clear whether outcome data had still been provided. It was therefore not always possible to ascertain the numbers of participants who dropped out of the study, who dropped out of the programme, who failed to comply fully with the treatment regimen, or who did none of the above but simply failed to complete particular outcome assessments.

DISCUSSION

The case for improving the quality of community health research has been established.67 The CONSORT guidelines provide a useful quality assurance benchmark for the reporting of RCTs in health care, and some journals (for example, BMJ) require that submitted papers are cross referenced to the guidelines at first submission. While the guidelines refer to a wide range of relevant issues, this paper has focused on ITT, which reinforces the pragmatic stance required in the evaluation of effectiveness. The ITT approach to managing both participant drop out and treatment non-concordance is an important feature of the pragmatic trial. In particular, CONSORT suggests the use of a diagrammatic description of the flow of participants through the trial and a clear statement of how many participants were included in each analysis.

None of the papers in our review made reference to the CONSORT statement. Only one53 attempted to present participant progress through the study using a flowchart. This may be partly because the studies predated the CONSORT guidelines, although none of the papers predates the emergence of ITT. Additionally, only two of the papers38,53 were published in journals currently on the CONSORT list of endorsing publications.68 It may be that health professionals are seeking to use evidence from allied disciplines (psychology and social work), which may not be applying the same criteria for maintaining scientific rigour when reporting RCTs. This results in evidence regarding parenting interventions that is difficult to evaluate and hence to use effectively for healthcare decision making. This might improve given recent endorsement of the CONSORT statement by bodies such as the American Psychological Association and the Evidence-based Behavioral Medicine Committee. Indeed, there is an emerging discussion and promotion of CONSORT standards within the psychological literature.69–73

The adoption of ITT as a strategy for clinical trials can be viewed in part as a testament to a shift towards pragmatism in research, and is strongly recommended in the CONSORT statement.21 While there is no single directive as to how to perform an ITT analysis, it is consistently recommended that outcome data should be analysed by the group to which subjects were randomised, regardless of the actual treatment received or its intensity.28 The lack of application of, and adherence to, ITT is disappointing, especially in view of the number of studies reporting treatment violation (n = 15). Moreover, we found trials that overtly violated this most basic principle, forming additional control groups from those who dropped out of the study after randomisation.43,62

Hollis and Campbell23 provide more detailed recommendations for the conduct of ITT. These include the provision of an a priori statement of inclusion criteria (if any), which, if violated, would lead to the exclusion of participants from an ITT analysis. In addition, they recommend minimising the period between randomisation and starting treatment to limit the potential for participants dropping out before the start of their treatment. If there is a substantial time delay before starting treatment, or an important task for participants to complete between randomisation and starting treatment, such as providing informed consent, then there is the potential for substantial drop out. For example, we saw that Cunningham et al40 randomised families in blocks to one of three trial arms before gaining their consent, resulting in a very low percentage of those randomised to the active intervention actually starting treatment. Fergusson et al74 concur with the International Conference on Harmonisation (ICH) Guidance on Statistical Principles for Clinical Trials75 in suggesting that participants who have not taken at least one dose of the trial drug can be excluded provided there is reasonable assurance that their exclusion will not introduce bias. They also agree that ineligible participants may be excluded as long as the assessment of eligibility is done in a fair manner and, crucially, is not affected by events that occur after randomisation. The consensus therefore seems to be that trialists should:

  • apply the highest possible standards of design to limit problems that might lead to the consideration of post-randomisation exclusions;

  • exclude subjects from the primary analysis only if there is a supporting and convincing argument that such action will not lead to more than minimal bias.

Both Lachin29 and Shih76 suggest that ITT is as much about good design as analysis, and the ICH Guidance on General Considerations for Clinical Trials77 states that the protocol “should specify procedures for the follow-up of patients who stop treatment prematurely” (page 9). Montori and Guyatt78 suggest using a protocol that ensures maximum adherence to the trial conditions (for example, using a run-in period where possible and excluding any participants who do not comply with treatment), and make recommendations for dealing with “loss to follow up”.

What this paper adds

This paper highlights how ITT has been poorly adopted in parenting trials and how standards of reporting vary across disciplines. This has limited the credibility of evidence behind parenting support and shows the need for future research to adhere to CONSORT guidance during the conduct and reporting of new work.

Policy implications

  • Local service providers, such as Sure Start, charged with developing family support services for local communities should be aware that the current evidence that underpins national guidelines for parenting programmes is methodologically weak and poorly reported.

  • Parenting education and support should continue to be recognised as an international policy priority. However, if it is to be translated into relevant action, national policies need to be informed by stronger evidence of what works in parenting support.

  • Improvements to the design and delivery of future parenting programmes will be dependent on the funding available for rigorous research. Those commissioning services should also support the need for investment in well designed and clearly reported trials of programme effectiveness.

  • Journal editors have a responsibility to support improvements in clinical trial reporting by requiring researchers to show an adherence to CONSORT guidance as a prerequisite for publication.

In seven RCTs it was stated or implied that no outcome data were missing; this is an unlikely scenario in most pragmatic trials. Indeed, Schulz et al79 found that trials that did not report any exclusions or missing data were methodologically weaker in other respects than those with some reported missing data, strongly suggesting that, in at least some cases, participants were excluded but this was not reported. Among the remaining reports of RCTs, several authors did not clarify how many participants did not provide data, with the only indication of the number of responses analysed being the reported residual degrees of freedom. While the CONSORT statement21,28 indicates that the number of participants included in each analysis, and whether the analysis was by “intention to treat”, should be clearly stated, others (for example, Hollis and Campbell,23 Shih,76 and Montori and Guyatt78) go further than this. In particular, Shih76 recommends: reporting reasons for drop outs and proportions for each treatment group; conducting sensitivity analyses to encompass different scenarios of assumptions for the pattern of missing responses, and discussing consistencies or inconsistencies between them; paying attention to minimising the potential for missing data when designing the trial, including every effort to collect post-drop out data on the primary outcome variable(s); and considering defining drop out as a further primary outcome variable.
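A minimal sketch of the kind of sensitivity analysis Shih describes is given below, using invented counts for a hypothetical binary outcome; the complete-case estimate is recalculated under extreme assumptions about the missing participants, and conclusions that hold across all scenarios can be regarded as robust to the missing data.

```python
# Rough best-case/worst-case sensitivity analysis for a binary outcome
# (e.g. clinically meaningful improvement); all counts are invented purely
# to illustrate the calculation, not drawn from any reviewed trial.
def risk_difference(improved_a, total_a, improved_b, total_b):
    return improved_a / total_a - improved_b / total_b

# Per arm: number improved, number assessed, number with missing outcomes.
prog = {"improved": 40, "assessed": 70, "missing": 30}
ctrl = {"improved": 30, "assessed": 80, "missing": 20}

scenarios = {
    "complete case": (prog["improved"], prog["assessed"],
                      ctrl["improved"], ctrl["assessed"]),
    # Worst case for the programme: all its missing parents failed to improve,
    # while all missing controls improved.
    "worst case": (prog["improved"], prog["assessed"] + prog["missing"],
                   ctrl["improved"] + ctrl["missing"], ctrl["assessed"] + ctrl["missing"]),
    # Best case for the programme: the reverse assumption.
    "best case": (prog["improved"] + prog["missing"], prog["assessed"] + prog["missing"],
                  ctrl["improved"], ctrl["assessed"] + ctrl["missing"]),
}

for name, args in scenarios.items():
    print(f"{name:>13}: risk difference = {risk_difference(*args):+.2f}")
```

If the estimated effect changes sign between the best-case and worst-case scenarios, the trial's conclusions depend heavily on untestable assumptions about the missing responses.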

Four papers38,40,44,60 followed Shih’s recommendations to some extent, including some attempt to investigate the robustness of their conclusions to the missing data. In the remainder the reporting was more rudimentary, showing the potential for substantial bias that could have been considerably reduced by the application of ITT. The four that did compared the baseline characteristics of those who provided outcome data and those who did not. However, the randomisation procedure adopted by Cunningham et al40 led to an undesirable level of drop out; despite their subsequent investigations of comparability between those enrolling and not enrolling on the programme, the potential for substantial bias remained. None of the other three studies38,44,60 found a difference in the characteristics, so none performed any further sensitivity analyses. Cunningham et al did, however, obtain information on the acceptability of the programme and the characteristics of those likely to complete such a programme (at least within the setting of a pragmatic RCT). Such information is vital for those organising parenting programmes as part of broader strategies aimed at improving local child population health. In the real world of parenting programme delivery, courses can last up to 8–10 weeks and are invariably dependent on the availability of key resources such as community venues and creche places. The socially bound nature of programmes thus makes it reasonable to expect a variety of patterns of attendance and participation from parents. This makes the conduct of trials inherently difficult and adherence to ITT all the more relevant.

It is recognised that the use of exemplar reviews necessarily limits the extent to which our findings can be generalised to parenting research as a whole. However, many of the concerns we have identified are evident within both systematic reviews explored in this paper. It is hoped that this paper will contribute to an academic debate on advances in trial methodology and their implications for future trial design and reporting. In particular we recommend that trials of parenting interventions:

  • adhere to the CONSORT guidelines for the reporting of trials;

  • adopt an intention to treat approach to data analysis, in which accepted strategies for managing treatment violations and handling missing data are established and reported;

  • limit the proportion of missing data, for example by minimising the amount of time between randomisation and the delivery of trial interventions, and actively managing data collection on all participants irrespective of treatment allocation or adherence; and

  • include appropriate sensitivity analyses to explore the robustness of the trial conclusions to the effects of missing data.

CONCLUSION

It is of concern that recommendations for parental training programmes are still based on some trial reports that violate the main principle of ITT and others that provide insufficient detail regarding the handling of treatment violators and missing data. There is, however, some evidence that reporting standards of relevant trials are improving. Consequently, providing evidence based recommendations for parental training programmes is, at best, difficult, as the robustness of the evidence is unclear. The CONSORT guidelines, which include the use of ITT, should be adopted as a minimum standard. Researchers should design trials that reflect the socially bound nature of parenting programmes and thereby the everyday challenges faced by practitioners delivering services. As such, strategies that limit the amount of missing data should be developed and reported a priori, as should methods for handling treatment violations and missing data in the analyses.

Acknowledgments

Anna Hart, Professor Sarah Cowley, and journal reviewers are thanked for their constructive comments on earlier manuscript drafts.

REFERENCES

Footnotes

  • Funding: none.

  • Competing interests: KW contributed to the preparation of a submission to the National Institute for Health and Clinical Excellence appraisal of parenting research for the Community Practitioners and Health Visitors Association.
