An Introduction to Systematic Reviews
What is a systematic review?
Systematic reviews involve taking a robust, systematic approach to collecting all the relevant literature about a specific subject and synthesizing it for use in answering a well-defined research question or questions.
Systematic reviews are used throughout evidence-based healthcare, including in the creation of medical practice guidelines, to inform social and health-related policy decisions, and to assess the current state of knowledge in specific areas. Of course, there are many other uses for the systematic review process, but these areas are where the majority of work is being done today.
One of the first examples of what we now know as a “systematic review” was conducted in 1753 by James Lind — he published a report that looked at all the unbiased evidence on scurvy. However, systematic reviews truly came into prominence in the 1970s and 1980s after Archie Cochrane’s influential Effectiveness and Efficiency text was published, urging the practice of evidence-based medicine. This led to the inception of the eponymous Cochrane Collaboration, a collection of healthcare researchers around the world who share the commitment to using high-quality evidence to make healthcare decisions, and the belief that evidence should be easily accessible, quality-assured, and cumulative. Eventually, it became clear that evidence synthesis was valuable in more than just healthcare, resulting in the creation of the Campbell Collaboration, an international scientific research organization that promotes evidence-based research beyond healthcare.
Why are systematic reviews valuable?
Simply speaking, better research benefits the world. As a top method for gathering and reporting on data, systematic reviews are essential to the global population in terms of quality scientific research and innovation.
Policymakers and guideline developers must have the full picture on any given topic before making recommendations or informing new policies and guidelines. They rely on systematic reviews as one of the main tools to inform their decisions which impact the general population, such as when setting regulatory standards.
Systematic Review vs Primary Study
A primary study is research that collects data from an original source rather than from research that has already been done. Examples of primary research include surveys, interviews, cohort studies, randomized clinical trials (RCTs), case studies, lab notebooks, and dissertations to name a few.
Unlike primary studies, systematic reviews gather the entire catalog of data on a specific topic, which makes them statistically more powerful. In some cases, systematic reviews can be done before the primary research. This enables research teams to “tune” their primary studies to only answer the questions not answered by the systematic review. This approach is a good idea because it tends to create safer studies with fewer patient interventions. Systematic reviews are also often less costly and time-consuming than primary studies, which makes them vital to keep up with the high demand for evidence-based research today.
Systematic Review or Literature Review?
In certain industries or academic streams, “literature review” is synonymous with “systematic review” (for example, in the preparation of clinical evaluation reports). In others, the defining aspect of a systematic review is that it is systematic and follows a specific set of guidelines, whereas a general literature review may not adhere to the same standards.
However, the name and definition depends entirely on specific terminology and context. Depending on the geographic location, industry, or even the organization, you could call it a “systematic review,” “literature review,” or even “systematic literature review.” Read about the types of Systematic Reviews.
What is the systematic review process?
The Systematic Review Lifecycle can be broken down into five steps:
- Search: The research question is the most crucial component of the systematic review. A well-defined research question will inform the search strategy, screening criteria, data extraction protocol, and the final report. Using a framework like PICO is a great way to define the research question and begin your search.
- Screen: This is often the most time-consuming part of the process. Depending on the research team’s methods, they might screen titles and abstracts first, then move on to full-text screening. At each screening level, researchers decide whether a reference is relevant (and should be included) or not relevant (and should be excluded).
- Full-Text Retrieval: In this stage, researchers will gather the full-text documents identified in their title and abstract screening.
- Data Extraction/Appraisal: In this step, researchers capture repeating data sets in a reportable format. This is also the step where the work is double-checked using a quality assessment tool.
- Reporting: Researchers will produce reference lists, outcomes and summary tables, PRISMA reports, and, where appropriate, meta-analyses and forest plots. They will also submit the work for peer review.
These are the five main steps involved in completing a systematic review. Still, you should also know about a sixth step:
- Update: This involves monitoring for new content on the topic and adjusting your research question if needed per the new information available.
This is what makes a systematic review more like a cycle rather than a linear process. We refer to this process as maintaining a “living” or “evergreen” review process. It’s essential for research groups who are working towards regulatory compliance.
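The PICO framework mentioned in the Search step breaks a research question into Population, Intervention, Comparison, and Outcome. As a minimal sketch (the question and all values below are invented for illustration), the elements can be captured as a simple data structure and combined into a draft search string:

```python
# Hypothetical PICO elements for the question:
# "In adults with hypertension, does drug A reduce stroke risk
#  compared with drug B?"
pico = {
    "Population":   "adults with hypertension",
    "Intervention": "drug A",
    "Comparison":   "drug B",
    "Outcome":      "reduction in stroke risk",
}

# Join the elements into a rough boolean search string,
# which would then be adapted to each database's syntax.
search_terms = " AND ".join(f'("{value}")' for value in pico.values())
print(search_terms)
```

In practice, each element would be expanded with synonyms and controlled vocabulary (such as MeSH terms) before running the search.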
Systematic Review Best Practices
As early as 20 years ago, systematic reviews were done with pen and paper. Researchers were physically printing off references, screening, highlighting, and keeping references in separate stacks or filing cabinets. As you can imagine, this process was extraordinarily time-consuming and resource-intensive.
Fifteen years ago, systematic reviews moved into the digital sphere and were completed using spreadsheets. Researchers would copy and paste data into cells. This saved a lot of paper, but as the demand for evidence grew, the spreadsheet method began to lack the efficiency and accuracy needed by modern researchers.
Today, systematic review software is the status quo for completing fast and accurate systematic reviews; more on that later.
Regardless of the purpose of your review, there are several universally accepted systematic review best practices. These best practices improve efficiency, transparency, and reproducibility in systematic reviews, which are goals that all reviews strive to achieve.
- Dual independent screening is the gold standard best practice for systematic reviews. Humans make errors, so it’s critical to have processes in place that will catch the errors before they become catastrophic to your research.
- Establish an experienced team to carry out the systematic review tasks. If there are certain elements of the review that your organization is not experienced with, it would be advisable to hire an information specialist or other subject matter experts to help.
- Search all relevant databases and alternative information sources for grey literature. An incomplete search is one of the foremost reasons that a systematic literature review could be invalidated.
- Determine inclusion and exclusion criteria and develop a conflict resolution protocol beforehand. Stay away from ad hoc processes.
- Capture the reason for exclusion. Researchers can easily reference the list during the peer review or audit stage. It could also help expedite the search and screening process in different but similar reviews that look at some of the same references.
- Systematic reviews hinge on the idea that evidence should be transparent. A brief look at the PRISMA checklist shows that every step of the review, from developing the search strategy to reporting the results, should be documented and as transparent as possible.
Systematic Review Challenges
The research community faces numerous challenges in their work every day. Chief among those challenges is the fact that standards are always changing, science is always innovating, and best practices are ever-shifting, all while resources like time and funding are becoming more scarce. Keeping up with the demand for evidence-based research is challenging. But there are a few specific factors that make systematic reviews particularly challenging in today’s age:
- Often, researchers are dealing with massive datasets that take considerable time and effort to screen
- Research teams are often strapped for resources; time and funding can be scarce, so all resources must be optimized for efficiency and results
- Many tools are not designed to support effective collaboration
- New, more stringent regulatory requirements put more pressure on organizations to show transparency and reproducible results
Literature Review for EU MDR
EU MDR is the European Union’s Medical Device Regulation. It impacts all new and legacy (existing) medical devices sold in the European Union. The regulation was published in 2017, and since then, medical device manufacturers have been working towards ensuring their products are MDR-compliant.
Under EU MDR, medical devices are reviewed by notified bodies that are designated by the European Commission. These notified bodies perform the conformity assessments and audits that confirm whether medical devices meet the current standards. They are for-profit organizations hired by medical device manufacturers to perform “unannounced audits,” so it’s critical for manufacturers always to be prepared.
So what constitutes compliance? For medical device manufacturers, the literature review is just one (albeit important) part of their conformity assessment. Notified bodies look for a few things when assessing a CER literature review. Some of the biggest challenges medical device manufacturers face with their literature reviews include:
- Incomplete audit trail: Notified bodies want to see transparency. They want to know the path researchers take to reach their conclusions. An audit trail is essential for showing work.
- Ad hoc processes: A proper literature review must follow a systematic and reproducible process.
- Incomplete search coverage: Missing studies is a critical error in a CER literature review, and missing relevant information could invalidate your work.
- Data integrity: CER literature reviews are used to inform important healthcare decisions, so accuracy is essential.
- Efficiency: When dealing with massive amounts of data to make informed decisions for evidence-based healthcare, it’s important to work as efficiently as possible.
Although these are common challenges for organizations seeking EU MDR compliance, there are several ways to mitigate them and make it easier to comply with the new standards, so your device doesn’t fail its audit. Medical device manufacturers must establish “State of the Art” in their CER literature review: a section of the report that “describes what is currently and generally considered standard of care, or best practice, for the medical condition or treatment for which the device is used.” The State of the Art section impacts the entirety of the review; if you fail to establish it, your review could be considered incomplete.
How does systematic literature review software work?
Systematic review software has become more common in the research community in the past 5-10 years. The demand for a faster and better way to perform systematic reviews has driven the need for better tools for researchers to use.
But how does it work? Where does DistillerSR fit into your workflow?
- Upload your references: You can drag and drop references from any reference management software and upload references along with their full-text documents. You can search and import references directly from PubMed, as well as access full-text documents from Article Galaxy, Copyright Clearance Center, and PubMed Central. You can then use DistillerSR’s duplicate detection engine to find and quarantine duplicate references.
- Build forms: DistillerSR enables users to build custom screening and data extraction forms which are used to quickly and efficiently screen, extract data from, and assess the quality of references. The forms are 100% customizable and can be reused across reviews.
- Configure your workflow and assign reviewers: This is what truly sets DistillerSR apart: it enables users to customize their own workflow and process based on their review requirements. Users can set up any number of forms, create custom inclusion/exclusion protocols, configure keyword highlighting, and use boolean filters to branch the studies further.
- Monitor and tune your review: You can see real-time results from your reviewers, including reference progress, reviewer conflicts, and included/excluded reference counts. You can also track time spent on the review and generate reviewer participation reports. DistillerSR automatically calculates kappa interrater reliability scores.
- Export your results: Create reference tables, PRISMA reports, and included/excluded study lists with the click of a button. The reports can also be fully customized for your needs.
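The kappa interrater reliability score mentioned above is conventionally Cohen’s kappa, which compares the observed agreement between two screeners against the agreement expected by chance. Here is a minimal sketch of the calculation (the include/exclude decision lists are invented for illustration, and real tools may use weighted or multi-rater variants):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of references where the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance both raters pick the same label independently.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions from two independent reviewers.
a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

A kappa near 1 indicates strong agreement between screeners; values near 0 suggest the agreement is no better than chance, which is a signal to revisit the screening criteria.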
Did you notice that the Systematic Review Lifecycle lines up almost identically with this workflow? Coincidence? We think not! When choosing systematic literature review software, it’s important to think about your unique challenges and workflow. However, be warned! In this case, relying on a feature matrix can be misleading, so it’s best to do extensive research about what review software best meets your needs before buying.
How systematic review software can save costs and time
If you consider the “old way” of doing a systematic review, with pen and paper or with clunky spreadsheet programs, it makes sense to use specialized software instead. After all, the research is of critical importance, doesn’t it make sense to do everything you can to make sure it’s done correctly?
With systematic review software, researchers can work more efficiently, reduce errors, and, therefore, spend less time fixing mistakes. This efficiency results in lower costs and faster access to better evidence-based research.
Other significant benefits include:
- Real-time collaboration: Specialized software like DistillerSR enables users to work on their review from any browser, which means you can work on projects regardless of your location.
- Audit trail: The software also has a clear audit trail with timestamping and user stamping, which promotes transparency, ensures the team follows proper protocol, and helps pinpoint errors and changes to the project.
- Simple reporting: Access to reports including Kappa, PRISMA, User Metrics, and Reference Progress with the simple click of a button.
- Automation: Other features such as deduplication and reference prioritization are automations that save even more time.
- Simplified data extraction: Custom data extraction forms enable users to manage complex datasets, including capturing related, repeating blocks of information (also known as hierarchical data).
How artificial intelligence can help the systematic review process
One of the most exciting new developments in the world of systematic reviews is the adoption of artificial intelligence (AI) to automate some of the more logistic-heavy tasks involved in conducting a systematic review.
We talk a lot about the “demand for evidence-based research” reaching new heights, and it’s true: the demand is so high that many research teams can’t keep up with their workload. It’s a different world we live in today, and we need better tools to handle the onslaught of new information that continually overwhelms researchers.
The answer lies in artificial intelligence.
Today’s systematic review is still mostly done by human reviewers, but with the adoption of AI into workflows, researchers can save even more time and effort on their reviews.
So how does it work?
AI in systematic reviews is powered by a subfield of AI known as natural language processing (NLP). This technology uses algorithms to break sentences and words down into their base components. By assigning mathematical representations to text, NLP can take large amounts of unstructured text (e.g., articles, blog posts, books) and turn it into data that a computer can process.
Natural language processing is the subfield of AI behind many everyday tools, such as spam filters, spelling and grammar checkers, and search engines.
NLP also powers the real heavy hitter when it comes to automating systematic review tasks: the classifier. A classifier is a statistical model that uses NLP to categorize information and process it accordingly.
In a systematic review, a classifier can answer a specific question about the text you are reviewing. “Is this an RCT?” and “Is the study about humans?” are just a couple of the questions a classifier can answer.
As you can imagine, this type of technology holds enormous possibilities for systematic reviewers, who spend most of their time deciding whether to include or exclude references based on the presence or absence of specific data.
However, before getting too excited, it’s important to remember to take a pragmatic approach to AI in systematic reviews. While the technology is fascinating and holds many opportunities to save time and automate many time-consuming SR tasks, we need to remember that the AI of today is not perfect.
We also can’t expect it to be perfect. The current gold standard for accuracy in a systematic review is dual independent screening, so we should expect AI to at least match the accuracy of human reviewers rather than expecting it to be perfect.
It’s important to use AI in a way that doesn’t require it to be perfect: put safeguards in place that would prevent catastrophic problems if the AI makes a mistake, and double-check and test the AI frequently to make sure it’s working the way it should. Consider it “training” your robot to do the tasks you need it to.
In research communities where accuracy is the difference between vital resources getting allocated to specific populations, and potential harm done to humans, we must take a pragmatic, cautious approach to automating tasks. By taking smaller steps towards integrating AI into systematic review workflows, we can build confidence in the tools, so they are used in a way that both saves time and reduces errors.