Abstract
Introduction: When electronic patient records are used for non-randomised research, a range of different study designs, eligibility criteria and adjustment methods may be used. This study aimed to compare the apparent bias and precision of effect estimates resulting from different potential design and analysis methods.
Methods: Comparisons were based on the association between thiazolidinedione (TZD) therapy and heart failure in 91 872 participants with diabetes. Nested within the same dataset, we applied all feasible combinations (N=162) of: five study designs; five sets of eligibility criteria for sample selection; and eight methods of adjustment. Apparent bias was evaluated by comparison with the relative risk (RR) of 1.72 (1.21 to 2.42) from a meta-analysis of randomised controlled trials (RCTs). Precision was evaluated from standard errors (SEs).
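The design space described above (five designs, five eligibility sets, eight adjustment methods) can be sketched with a simple enumeration. The labels below are placeholders, since the abstract gives only the counts, not the individual items; note the full cross-product yields 200 combinations, of which the study found 162 to be feasible.

```python
from itertools import product

# Placeholder labels: the abstract reports counts only (5 designs,
# 5 eligibility sets, 8 adjustment methods), not the items themselves.
designs = [f"design_{i}" for i in range(1, 6)]
eligibility = [f"eligibility_{i}" for i in range(1, 6)]
adjustments = [f"adjustment_{i}" for i in range(1, 9)]

# Full cross-product of design choices.
combinations = list(product(designs, eligibility, adjustments))
print(len(combinations))  # 200 possible; 162 were feasible in the study
```

Which 38 combinations were infeasible is not stated in the abstract, so the sketch only illustrates the size of the full design space.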
Results: The multiple regression adjusted HR from the full sample cohort study was 1.34 (1.15 to 1.56). Adjusted effect estimates from the case-only study designs had low precision and were higher than the reference value, ranging up to an OR of 8.22 (4.92 to 13.71) for the case-crossover design. After applying restrictive eligibility criteria (including new-user and RCT-like criteria), precision was lower and adjusted effect estimates were generally lower than the reference value. Application of new-user, propensity score and confounder-based exclusions gave the lowest HR of 0.43 (0.10 to 1.76). Choice of adjustment method had a relatively small impact on the magnitude and precision of the effect estimate.
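The precision comparisons above can be made concrete by recovering the approximate SE of the log effect estimate from each reported 95% confidence interval. This is a standard back-calculation, not the authors' stated method, and it assumes the interval is symmetric on the log scale:

```python
import math

def se_from_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """Approximate SE of the log effect estimate from a 95% CI,
    assuming symmetry on the log scale: SE = (ln(U) - ln(L)) / (2z)."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Full-sample cohort HR 1.34 (1.15 to 1.56): relatively precise.
print(round(se_from_ci(1.15, 1.56), 3))  # ~0.078

# Case-crossover OR 8.22 (4.92 to 13.71): much wider on the log scale.
print(round(se_from_ci(4.92, 13.71), 3))  # ~0.261
```

The larger SE for the case-crossover estimate reflects the low precision of the case-only designs reported above.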
Conclusion: Our results suggest that restricting eligibility criteria, or implementing case-only designs, may not always reduce bias, and may reduce precision, compared with a cohort study using the full sample.