
Systematic Reviews & Evidence Synthesis

About Step 6: Quality Assessment

Step 6: Assess Quality of Included Studies

In Step 6, you will evaluate all included articles for bias by: 

  1. Using a designated assessment tool (based on your research aims) to grade each article
  2. Creating a graphical summary of the risk of bias judgements you've made on all articles included in your review.

How a librarian can help with this step

ZSR can support you with Step 6 by helping you: 

  • Understand what the quality assessment / risk of bias stage of the review is all about
  • Choose the right quality assessment / risk of bias tool for your project
  • Apply best practices for reporting the quality assessment / risk of bias results in your review

Contact ZSR for SR support!

Assessing studies for QUALITY and bias

Systematic review project teams must assess each article selected for inclusion (in the full text screening phase) for QUALITY and BIAS. This assessment typically considers the relevance, reliability, validity, and applicability of each study (discussed below).

The primary item to consider before embarking on the risk of bias assessment is which of the many available tools to use. That decision is based solely on the STUDY DESIGN(S) of the articles you've chosen to include in your review.

For instance, if your SR only includes Randomized Controlled Trials (RCTs), you will need to choose a risk of bias tool specifically designed to assess RCTs (like the Cochrane Risk of Bias Tool).


For an in-depth review of quality assessment & risk of bias, check out the training video from Cochrane Mental Health.

Selecting your assessment tool

Choosing your Quality / Risk of Bias Assessment tool depends on the design of the studies you included in your SR. Tools (or questionnaires) are formulated to ask specific questions about the methodology of the study. 

If you are including studies that have multiple types of designs, you may use several tools from one organization that offers a range of assessment tools for many study designs. Check out the ZSR Study Designs in Clinical Medicine page for more detailed information on this topic.  

Check out the options available based on study design. NOTE: This is not an exhaustive list; the tools listed are among the most popular available.

EXAMPLE: What will you appraise if you're including RCTs in your study?

A popular way to graphically represent Risk of Bias judgements for the articles included in a systematic review is to create a "traffic light" plot.

Here is an example of how judgements for each of the five domains in the Cochrane Risk of Bias 2.0 Tool are represented using color-coded symbols.

Each of the five domains is explained below, with examples.

Traffic light plots can be generated with the robvis visualization tool once judgements are made and saved in a usable format (.csv / .xlsx / .xls) for upload.
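
For teams who prefer to build the upload file with a script, here is a minimal sketch (in Python, for illustration only) of writing RoB 2 judgements to a .csv. The column layout (Study, D1–D5, Overall) mirrors the RoB 2 example data used by robvis, and the study names and judgements below are hypothetical; check robvis's current template requirements before uploading.

```python
# Minimal sketch: write RoB 2 judgements to a .csv that can be uploaded
# to the robvis traffic light plot tool. Column names and all study
# data below are illustrative assumptions, not real results.
import csv

judgements = [
    {"Study": "Smith 2020", "D1": "Low", "D2": "Some concerns",
     "D3": "Low", "D4": "Low", "D5": "Low", "Overall": "Some concerns"},
    {"Study": "Jones 2021", "D1": "High", "D2": "Low",
     "D3": "Some concerns", "D4": "Low", "D5": "Low", "Overall": "High"},
    {"Study": "Lee 2022", "D1": "Low", "D2": "Low",
     "D3": "Low", "D4": "Low", "D5": "Low", "Overall": "Low"},
]

with open("rob2_judgements.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(judgements[0].keys()))
    writer.writeheader()          # Study, D1, D2, D3, D4, D5, Overall
    writer.writerows(judgements)  # one row per included study
```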

Domain 1: Bias arising from the randomization process

In Randomized Controlled Trials, subjects are randomized to an intervention group or a control group. In this domain, SR project teams need to appraise the randomization process used by the authors of each RCT included in the review. Examples of acceptable randomization methods include:

  • computer-generated random numbers
  • reference to a random number table
  • shuffling cards or envelopes
  • coin tossing
  • throwing dice
  • drawing lots

If a study's randomization was based on birth or admission dates, patient record numbers, or clinician or participant decisions, for instance, these methods are neither BLIND nor truly RANDOM and therefore contribute to the risk of bias in the study's methodology.
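
To illustrate what "computer-generated random numbers" means in practice, here is a minimal sketch of a 1:1 random allocation using hypothetical participant IDs. Real trials typically use dedicated randomization software with allocation concealment; this sketch is only meant to contrast a truly random sequence with the date- or record-number-based methods described above.

```python
# Minimal sketch of computer-generated 1:1 random allocation.
# Participant IDs and the seed are hypothetical, for illustration only.
import random

participants = [f"P{i:03d}" for i in range(1, 21)]   # 20 hypothetical participants

rng = random.Random(2024)                            # seeded so the sequence is reproducible
arms = ["intervention", "control"] * (len(participants) // 2)
rng.shuffle(arms)                                    # random permutation -> balanced allocation

allocation = dict(zip(participants, arms))
for pid, arm in sorted(allocation.items()):
    print(pid, arm)
```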

Domain 2: Bias due to deviations from intended interventions

This domain involves assessing how the assigned interventions were actually delivered to participants over the course of the trial.

  • Were participants and / or people delivering the intervention aware of participants' assigned intervention during the trial?
  • Was there an imbalance between intervention / control groups (demographics, health history, etc.)?
  • Did the researchers have an interest in the effect of adhering to the intervention? In other words, was the study funded by a corporation, manufacturer, or other organization that would benefit from favorable results? 

If the answer is YES (or even Probably Yes) to any of these, there may be a risk of bias embedded in the methodology of this study. In simple terms, this domain asks, "Did participants actually receive the treatment they were supposed to receive, and was anyone's knowledge about the treatment (or lack thereof) likely to have influenced the results?"

Domain 3: Bias due to missing outcome data

Bias due to missing outcome data would be evident if:

  • Data were not available for all, or nearly all, randomized participants
  • The reason for missing data is not reported in the study*
  • Missing data are more common in the intervention group than in the control group (or vice versa)
  • Researchers didn't account for missing data

*Or if the reason is included, but it's not justified. For example, if participants drop out of the study because the treatment isn't working, and authors only analyze the data from participants who stayed, the results will look artificially good for that treatment.
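
A toy worked example, with entirely made-up numbers, shows how large that distortion can be when dropouts related to treatment failure are simply ignored:

```python
# Toy illustration (hypothetical numbers) of how analyzing only the
# participants who stayed can make a treatment look better than it is.
# Suppose 100 participants are randomized to a treatment: 40 drop out
# because it isn't working, and 45 of the 60 who stay improve.
randomized = 100
dropouts_not_improving = 40
completers = randomized - dropouts_not_improving     # 60
completers_improved = 45

# Complete-case analysis: only participants who stayed are counted.
complete_case_rate = completers_improved / completers      # 0.75

# Conservative "all randomized" analysis: dropouts counted as not improved.
all_randomized_rate = completers_improved / randomized     # 0.45

print(f"Complete-case improvement rate: {complete_case_rate:.0%}")   # 75%
print(f"All-randomized improvement rate: {all_randomized_rate:.0%}") # 45%
```

In this hypothetical, the treatment looks 75% effective among completers but only 45% effective among everyone who was randomized.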

Domain 4: Bias in measurement of the outcome

When judging whether the outcome measurement methodology introduces bias, consider these questions:

  • Was the outcome measured fairly and consistently across all groups? 
    • In a pain study, for instance, are participants asked leading questions that may influence responses? Are the intervention and control groups asked the same questions in the same manner?
  • Were the people measuring the outcome blinded to which participants got the intervention and control? 
    • Does the person assessing the outcome know which treatment a participant received? If so, their knowledge (conscious or unconscious) could influence how they measure or interpret the results.
  • Was the method of measuring the outcome appropriate? 
    • If the outcome is subjective (like pain), the risk of bias from unblinded assessors is much higher.
    • Were the tools or methods used to measure the outcome validated and reliable?

Remember, if the way a study collected its results (or outcomes) systematically favored one treatment over another, it could have led to an inaccurate conclusion.

Domain 5: Bias in selection of the reported result

In plain language, this domain asks, "Did the researchers choose to present only the 'good' results while ignoring other less favorable ones?"

The study's protocol is an important factor in this domain. Was it publicly registered before the study began? Registration makes it possible to check whether the researchers stuck to their original intentions. If not, this should raise some concerns about the intentions of the study and the researchers.

If a study appears to have selectively reported its results, it creates a risk that the conclusions are biased because only a partial or favorable view of the evidence is being presented.

Covidence templates for quality assessment / risk of bias

Covidence offers a Cochrane Risk of Bias template as well as customizable options. If you decide to customize the template, however, you cannot switch back to using the Cochrane Risk of Bias template. 

More information about quality assessment using Covidence, including how to customize the quality assessment template, can be found on the Covidence YouTube Channel.