Risk of Bias Tools for Systematic Reviews
A systematic review is regarded as the highest form of evidence because of its rigorous methodology, which may include preliminary steps such as scoping and, in some cases, a meta-analysis (if you’re wondering, “Does a systematic review need a meta-analysis?”, the answer is not always).
It involves exhaustively searching all relevant, available literature, including unpublished studies, to answer a focused research question, and it requires careful planning and the use of protocols and predefined criteria. Still, a systematic review is only as trustworthy as the studies it includes, which is why assessing the risk of bias using risk of bias tools is a necessary part of the process.
Assessment of risk of bias is critical because it supports the transparency of evidence synthesis results and findings. The process is usually performed for each included study in a systematic review and involves identifying systematic flaws or limitations in the design, conduct, or analysis of each study that may distort its reported findings.
What Is Bias?
Bias refers to factors that can distort the results of the systematic review. It’s introduced when there are systematic flaws or limitations in the design, conduct, or analysis of the research.
When any type of systematic review is affected by bias, it can produce false findings, making the project a waste of time and resources and a missed opportunity for effective intervention.
Types Of Biases In Systematic Reviews
Bias manifests itself in several ways, from the selection of the studies to be considered in the systematic review, to the classification of their outcomes. Here are some common biases that could come up in research:
- Selection bias refers to problems with the comparability of the participants or populations in a study—that is, systematic differences in their baseline characteristics when groups are compared.
- Performance bias is introduced when factors other than the intervention or exposure of interest influence the study’s effect estimate. It refers to differences between groups in the care provided or in participants’ behavior that arise from knowledge of which intervention was assigned.
- Detection bias arises from problems with the measurement or classification of exposures or outcomes. It occurs when unblinded outcome assessors, aware of treatment allocation, assess outcomes differently between groups.
- Reporting bias occurs when the availability of results depends on their nature or direction, leaving information missing from the published record. A common example is when published trials selectively report only a subset of their measured outcomes.
What Are Risk Of Bias Tools?
To ensure that systematic reviews yield balanced, accurate, and valid results, it’s necessary to assess the risk of bias in each study included in the review. This step, typically carried out by a methodological expert or experienced researchers, helps establish transparency in the synthesis of evidence and findings.
Risk of bias is usually assessed with dedicated tools—structured guidelines and protocols for evaluating a study’s design and conduct to determine whether factors that could introduce bias are present.
Examples of risk of bias tools include:
- AMSTAR 2 – A Measurement Tool to Assess Systematic Reviews
- GRADE – Grading of Recommendations Assessment, Development, and Evaluation
- SAQAT – Semi-Automated Quality Assessment Tool
- NOS – Newcastle-Ottawa Scale
- RoB 1.0 – Cochrane Risk of Bias tool for randomized trials
- RoB 2.0 – Revised tool for assessing risk of bias in randomized trials
- ROBIS – Risk Of Bias in Systematic Reviews
- RoBANS – Risk of Bias Assessment Tool for Non-randomized Studies
- PROBAST – Prediction model risk of bias assessment tool
- JBI Critical Appraisal Tools – developed by the Joanna Briggs Institute
While systematic reviews are the gold standard of evidence synthesis, their findings are highly dependent on the validity of the studies they include. Researchers must therefore ensure that these studies are relevant and free from bias, which can be done using quality assessment measures such as risk of bias tools.
Another way to guarantee the validity of a systematic review is to back up the methodology with automation through literature review software such as DistillerSR. This helps produce more accurate results in a simpler, more seamless, and faster way.
Learn More About DistillerSR