Methods Commentary: Risk of Bias in Randomized Trials
The following commentary has been contributed by the CLARITY Group at McMaster University.
G. Guyatt, J. Busse
Risk of Bias a Better Term Than Alternatives
Authors in general, and authors of systematic reviews in particular, when addressing studies comparing alternative interventions, often refer to the “quality” of the studies. They may also refer to the “methodological quality”, the “validity” or the “internal validity” (distinguished from “external validity” which is synonymous with generalizability or applicability) of the studies.
Each of these terms may refer to risk of bias: the likelihood that, because of flaws in the design and execution of a study, its results represent a systematic deviation from the truth (i.e., an overestimate or underestimate of the true treatment effect). Risk of bias should be distinguished from random error, which has no direction and is best captured by the confidence interval around the best estimate of effect.
Each of these terms, however, may not refer (or not exclusively refer) to risk of bias. The GRADE group1, for instance, uses quality to refer to confidence in the estimates of treatment effect2. In this use of the term, quality refers not only to risk of bias, but also to issues such as precision, consistency, and directness of evidence. Some authors use the term “validity” to mean not only risk of bias, but also considerations of generalizability or applicability, and even precision. The term “risk of bias” is therefore preferable: it avoids the ambiguity of alternative terms.
Assessing Risk of Bias in Randomized Trials
Several dozen systems for assessing bias in randomized trials are available. The Cochrane Collaboration has brought some order to the resulting confusion by developing a “risk of bias” instrument that many would consider a gold standard. The Cochrane risk of bias tool identifies six possible sources of bias in randomized (or quasi-randomized) trials: sequence generation, concealment of allocation, blinding, loss to follow-up, selective outcome reporting, and other problems (see model form, risk of bias assessment in randomized trials).
No system is perfect, and we have some reservations about the Cochrane risk of bias approach. These reservations are reflected in the model risk of bias form that we have created. First, systematic reviewers of randomized trials may choose, for several reasons, to omit the sequence generation criterion. Many (if not most) authors of trials that actually meet this criterion do not mention how they generated the randomization sequence. Furthermore, the serious breaches of sequence generation (allocation by date of birth, time of presentation, chart number, etc.) are addressed in the second criterion, allocation concealment.
Second, the Cochrane instructions indicate that with respect to concealment of randomization, sequentially numbered, opaque, sealed envelopes should be considered low risk of bias. We disagree. Although it has proved impossible to provide definitive documentation, anecdotal evidence suggests that envelope systems are vulnerable to abuse, even if numbered, opaque, and sealed. We believe that central randomization is the only secure way of ensuring concealment.
Third, Cochrane authors have chosen to include selective reporting bias (primary studies reporting some outcomes and not others on the basis of the results) as an issue of risk of bias in individual studies. Making this judgment on the basis of information from an individual study is challenging. An alternative approach is to consider selective outcome reporting, along with publication bias, as a reporting issue best judged by looking across the available studies. If review authors find no clear evidence of selective reporting bias, they may choose to omit this item from their formal assessment of risk of bias.
Fourth, for each category the Cochrane instrument uses a classification system of “yes”, “unclear” and “no”. This is problematic because information that permits a definitive “yes” or “no” for an item is often unavailable; information that makes it very likely (or very unlikely) that a study meets a particular criterion, however, often is. We have demonstrated, for instance, that a designation of “probably yes” or “probably no” makes it possible to accurately classify blinding status in studies that did not report it transparently enough for definitive “yes” or “no” judgments4. Thus, for each category, we suggest the response options “yes”, “probably yes”, “probably no” and “no”.
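As a purely illustrative sketch (the names below are ours, not part of the Cochrane tool or the model form), the four-level response options and the domains discussed above might be encoded as follows:

```python
# Illustrative sketch only: encoding the four-level response options
# suggested in this commentary. Names are hypothetical.
from enum import Enum


class Response(Enum):
    YES = "yes"
    PROBABLY_YES = "probably yes"
    PROBABLY_NO = "probably no"
    NO = "no"


# Domains discussed in the commentary (sequence generation may be omitted,
# and selective outcome reporting may be judged across studies instead).
DOMAINS = [
    "allocation concealment",
    "blinding",
    "loss to follow-up",
    "selective outcome reporting",
    "other problems",
]


def criterion_met(response: Response) -> bool:
    """Treat 'yes' and 'probably yes' as indicating the criterion is met."""
    return response in (Response.YES, Response.PROBABLY_YES)
```

The point of the two “probably” levels is simply that raters are never forced into “unclear” when the report, while not definitive, makes one judgment much more likely than the other.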
We have demonstrated the reproducibility of these response options for rating of blinding4. The judgments are, however, likely to be reproducible only if detailed criteria for decisions are available. We have developed such criteria for blinding which are presented in an appendix below.
Rating of Risk of Bias Should Be Outcome-Specific
Traditionally, systematic review authors have provided a single rating of risk of bias for a particular study. This tradition is, unfortunately, both persistent and misguided: risk of bias can differ between outcomes. Consider, for instance, a surgical trial whose outcomes include both quality of life and disease-specific mortality. Patients completing self-administered quality of life questionnaires will be unblinded, but adjudicators of disease-specific mortality may be blinded. Similarly, one might anticipate substantially greater loss to follow-up for quality of life than for mortality. It may thus turn out that the quality of life outcome is associated with a high risk of bias and the disease-specific mortality outcome with a low risk of bias. A single rating of risk of bias for a trial is appropriate only if the risk of bias is identical for each of the relevant outcomes.
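The outcome-specific approach described above can be sketched as a small data structure in which each outcome carries its own domain ratings. The structure, names, and ratings below are hypothetical, chosen to mirror the surgical-trial example, not a prescribed format:

```python
# Hypothetical sketch: risk-of-bias ratings stored per outcome rather than
# per trial, mirroring the surgical-trial example in the text.
trial_ratings = {
    "quality of life": {
        "blinding": "no",             # patients self-report while unblinded
        "loss to follow-up": "probably no",
    },
    "disease-specific mortality": {
        "blinding": "probably yes",   # outcome adjudicators may be blinded
        "loss to follow-up": "yes",
    },
}


def outcome_low_risk(ratings: dict) -> bool:
    """An outcome is at low risk of bias only if every rated domain is met."""
    return all(r in ("yes", "probably yes") for r in ratings.values())
```

Under this sketch, the mortality outcome would be rated low risk of bias while the quality of life outcome would not, which is exactly the situation a single per-trial rating cannot express.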
Note: If you are using DistillerSR you can access the Risk of Bias form, prepared by the authors of this paper, for use in your project. There is no charge for using this form.
1. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-926.
2. Guyatt GH, Oxman AD, Kunz R, et al. What is “quality of evidence” and why is it important to clinicians? BMJ 2008;336:995-998.
3. Higgins JP, Altman D. Assessing the risk of bias in included studies. In: Higgins J, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions 5.0.1. Chichester, UK: John Wiley & Sons, 2008.
4. Akl E, Sun X, Busse J, et al. Specific instructions for estimating unclearly reported blinding status in randomized clinical trials were reliable and valid. Journal of Clinical Epidemiology, submitted, 2011.
Copyright The Clarity Group and Evidence Partners 2011