I heard this study reported on the Today programme on Radio 4. What effective dissemination! I applaud the inclusion of SES as a covariate in this study, and I agree that the correlational effects reported are of interest. Nevertheless, I believe that one important aspect of the interpretation of the study is omitted from the discussion: the small effect sizes.
In Table 2, standardized regression coefficients relating to 140 different analyses are presented. Ignoring the sign, these range from 0.00 to 0.16. Squaring a standardized regression coefficient gives a value of R2, which is an estimate of the amount of variance explained by the regression model. Therefore the largest amount of variance explained by any of these models is the square of 0.16, which equals 0.026, or 2.6%. Cohen (1988) defined small, medium and large effect sizes for values of R2: a small effect size as R2 = 0.02, a medium effect size as R2 = 0.13, and a large effect size as R2 = 0.26. Therefore, all of the effects reported here are small.
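The arithmetic above can be sketched as a few lines of Python. The beta value of 0.16 comes from the text; the classification thresholds are Cohen's (1988) R2 benchmarks as quoted there.

```python
# Convert a standardized regression coefficient (beta) to R^2 and
# classify it against Cohen's (1988) benchmarks for R^2.
COHEN_R2 = [(0.26, "large"), (0.13, "medium"), (0.02, "small")]

def classify(beta):
    """Return (R^2, effect-size label) for a standardized coefficient."""
    r2 = beta ** 2
    for threshold, label in COHEN_R2:
        if r2 >= threshold:
            return r2, label
    return r2, "below small"

# The largest coefficient reported in Table 2:
r2, label = classify(0.16)
print(f"R^2 = {r2:.4f} ({label})")  # R^2 = 0.0256 (small)
```

Even the largest reported coefficient only just clears Cohen's threshold for a small effect.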
The reported p values add credibility to these data; however, one must bear in mind the large sample size employed (n = 5352). As sample size increases, so does study power, leading to increased sensitivity to detect, and label ‘significant’, even the smallest of effects. It is therefore not surprising that this ‘superpowered’ study has detected many small, and smaller, effects. This is acceptable in itself, but in my opinion some comment on their ‘real world’ significance should have been made. Additionally, the lack of consideration of likely Type I error inflation due to multiple hypothesis testing is a conspicuous omission.
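To illustrate the scale of that Type I error inflation: under the idealized assumption that the 140 tests in Table 2 are independent and each run at alpha = 0.05, the familywise error rate and the Bonferroni-corrected per-test threshold work out as follows.

```python
# Type I error inflation across 140 hypothesis tests, assuming
# (as an idealization) independent tests at alpha = 0.05.
alpha, m = 0.05, 140

# Probability of at least one false positive among m true nulls:
fwer = 1 - (1 - alpha) ** m

# Bonferroni-corrected per-test significance threshold:
bonferroni_alpha = alpha / m

print(f"Familywise error rate: {fwer:.3f}")
print(f"Bonferroni per-test alpha: {bonferroni_alpha:.5f}")
```

Under these assumptions a false positive somewhere in the table is a near certainty, and a Bonferroni correction would require p < 0.0004 or so per test; real tests on the same sample are correlated, so this overstates the inflation somewhat, but the qualitative point stands.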