DistillerAI FAQs

Why don’t I see the DistillerAI tab?

So, you’re ready to use DistillerAI but don’t see the tab on your Dashboard? There are a couple of possible reasons for this:

  1. You’ve already screened all of the references with the required number of reviewers, so there’s nothing for the robot to do.
  2. You have not screened enough references for DistillerAI to properly train itself. It will wait until you have screened some more before showing you the tab.

When it’s available for use in your project, the DistillerAI tab will appear on your Dashboard to the right of the My Tasks and Project Progress tabs.

What training set should I use?

You want to train DistillerAI using the most accurate data possible. If you train it on data with errors, it will learn to replicate those errors. So, here are a few tips to keep in mind:

Minimum Training Set Size

We have found that 10 or more included references and 40 or more excluded references are usually enough for your robot to learn the difference between the two.

Maximum Training Set Size

We’ve observed that learning diminishes with training sets in excess of 300 references. Adjust your training set percentage so that it yields about 300 references.

Please note that if you use a training set larger than 1,000 references, the training process may fail (the progress bar will simply disappear; we know, and we’re working on that). If that happens, just reduce your training set size and try again.
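
If you want to work the percentage out ahead of time, the arithmetic is simple. Below is a minimal Python sketch (not part of DistillerSR; `total_screened` is a hypothetical count you would read off your own project) that suggests a training set percentage targeting roughly 300 references while respecting the 1,000-reference ceiling:

```python
def training_set_percentage(total_screened, target=300, ceiling=1000):
    """Suggest a training set percentage that yields roughly `target`
    references without exceeding `ceiling` (values from the guidance above)."""
    if total_screened <= target:
        return 100  # small projects: train on everything you have
    size = min(target, ceiling)
    return round(100 * size / total_screened)

# Example: 3,000 screened references -> a 10% training set gives ~300.
print(training_set_percentage(3000))  # 10
```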

Use multiple screening levels to train

Human screeners almost always include more references than they should at level 1. This is a best practice to avoid accidental false excludes. However, if you train DistillerAI on level 1 screening results, you will be training your robot on falsely included references, and it will learn to make the same mistakes as your human screeners.

To avoid this issue, start your training set at level 1 and go to the highest level at which you have screened references. For example, if you include levels 1 to 3, DistillerAI will look at all references excluded at or before level 3, but only at references included at level 3 itself. This should dramatically reduce the number of false inclusions in your training set.
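
To make the selection rule concrete, here is an illustrative Python sketch (the reference records and field names are invented for the example, not DistillerSR internals) of which screened references would qualify for a training set spanning levels 1 to 3:

```python
# Hypothetical reference records: a decision, plus the level where it was made.
references = [
    {"id": 1, "decision": "excluded", "level": 1},
    {"id": 2, "decision": "included", "level": 1},  # skipped: included below the top level
    {"id": 3, "decision": "excluded", "level": 2},
    {"id": 4, "decision": "included", "level": 3},
]

top_level = 3

# Excludes count if they happened at or before the top level;
# includes count only if the reference survived to the top level itself.
training_excludes = [r for r in references
                     if r["decision"] == "excluded" and r["level"] <= top_level]
training_includes = [r for r in references
                     if r["decision"] == "included" and r["level"] == top_level]

print([r["id"] for r in training_excludes])  # [1, 3]
print([r["id"] for r in training_includes])  # [4]
```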

What can I do with DistillerAI?

There are a number of ways to use DistillerAI. Here’s a quick rundown of each option and when you might use it.

High Confidence Review

By default, DistillerAI uses two different classifiers (AIs) to assess references. When both classifiers agree on the inclusion/exclusion of a reference, that is a high confidence decision.

This behaves exactly like having two people screen a reference: both must agree in order to include or exclude it. If the two AIs do not agree, no decision is made on the reference in question; it is left unreviewed so that human reviewers can make the final decision.
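
The agreement rule itself is easy to picture. As an illustration only (the two votes below are stand-ins for the classifiers’ outputs, not a real DistillerSR API), a high confidence decision works like this:

```python
def high_confidence_decision(vote_a, vote_b):
    """Combine two classifier votes ("include" or "exclude").
    Only unanimous votes produce a decision; disagreements are
    left for human reviewers."""
    if vote_a == vote_b:
        return vote_a  # high confidence: both AIs agree
    return None        # no decision; the reference stays unreviewed

print(high_confidence_decision("include", "include"))  # include
print(high_confidence_decision("include", "exclude"))  # None
```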

We recommend using High Confidence Review to quickly assess roughly how many included references you have in a new search result. After running it, you can always delete DistillerAI’s screening decisions if need be (see below).

You may also want to use High Confidence Review as your second or third screener.

To perform a High Confidence Review, go to the DistillerAI tab on your dashboard and click Meet the DistillerAI. The High Confidence Review option will be selected by default.

DistillerAI will tell you which question and answer it will be clicking on your screening form in the event of an inclusion or exclusion decision. Note that it can only answer one question on a screening form in the current version of the tool.

To run the review, simply click Run DistillerAI, and check your results!

AI Preview and Rank

Even if you’re not comfortable handing all of the screening over to the robots, imagine how much more efficient you would be if your references were ordered by their likelihood of inclusion.

AI Preview and Rank does just that by:

  • Scoring references on a scale of 0 to 1 by likelihood of inclusion (1 is a strong include, 0 is a strong exclude). This lets you order references by their likelihood of being included while you are screening, bubbling your most promising references to the top of the pile (see the sketch after this list).
  • Showing the human screening decision, the AI decision and the score.
  • Attaching these scores to the references, allowing them to be sorted.
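
As a concrete illustration of the ordering (the reference IDs and scores below are made up), once each reference carries a 0-to-1 score, ranking is just a sort:

```python
# Hypothetical (reference id, inclusion score) pairs attached by Preview and Rank.
scored = [("ref-101", 0.12), ("ref-102", 0.93), ("ref-103", 0.55)]

# "Score High to Low": the most promising references bubble to the top.
for ref_id, score in sorted(scored, key=lambda pair: pair[1], reverse=True):
    print(ref_id, score)
# ref-102 0.93
# ref-103 0.55
# ref-101 0.12
```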

To run the Preview, select your training levels as defined above. You can also select which tags (or fields) in your references you would like DistillerAI to read when looking for similarities and differences between included and excluded references.

Pick the largest training set size you can without going over 1,000 references (see the training set discussion above), set References to Preview to 100%, and then click Run Preview and Rank.

If you are happy with the scores allocated to the references, click the Tag References button to attach the scores to their references. Now, reviewers can pick “Score High to Low” or “Score Low to High” from the Order By drop-down on their list of references to screen.

AI Test

To check how well DistillerAI is likely to judge inclusion and exclusion, you can use the DistillerAI Test function. This is available by going to the DistillerAI Toolkit, the last item under your References menu. Set a training set size using the approaches discussed above and run the test.

The test uses the High Confidence Review approach discussed above. The references you have reviewed are divided into a training set (to train your robot) and a test set (to compare your robot’s answers with your human-screened results) based on the percentages you define.

DistillerAI picks references at random from your training allotment and trains itself. It then reviews the remaining references in the set and compares its results with the human-reviewed results.

Check the last line of the test results to see how it fared. It displays the total references where the AI’s decision agreed with the humans’ (right), the total where the AI and humans disagreed (wrong), and the number of references where the two AIs disagreed with each other (not sure).
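
To make the three totals concrete, here is an illustrative tally over a handful of made-up test-set decisions (with None standing in for the case where the two AIs disagreed and no decision was made):

```python
# Hypothetical paired decisions on the test set: (AI decision, human decision).
pairs = [
    ("include", "include"),  # right
    ("exclude", "include"),  # wrong
    (None, "exclude"),       # not sure: the two AIs disagreed
    ("exclude", "exclude"),  # right
]

right = sum(1 for ai, human in pairs if ai is not None and ai == human)
wrong = sum(1 for ai, human in pairs if ai is not None and ai != human)
not_sure = sum(1 for ai, _ in pairs if ai is None)

print(right, wrong, not_sure)  # 2 1 1
```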

Note that the Test function does not write anything to your dataset and will not impact your review in any way.

AI Audit

By reviewing the references that the humans have screened, DistillerAI can get a very accurate picture of what an included reference looks like. It can then review every reference that was excluded to check that none were excluded by mistake.

This tool uses the training set you provide (see best practices for this above) and then looks at all of your excluded references to see if any of them closely resemble your included references.
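
Conceptually, the audit amounts to re-scoring your excludes and flagging the outliers. Here is a sketch with invented scores and an invented cut-off (the actual audit’s internals are not exposed):

```python
# Hypothetical excluded references with the inclusion scores a trained model gives them.
excluded = [("ref-201", 0.05), ("ref-202", 0.81), ("ref-203", 0.22)]

SUSPICION_THRESHOLD = 0.7  # illustrative value, not a product setting

# Anything excluded by humans that still scores like an include deserves a second look.
flagged = [(ref_id, score) for ref_id, score in excluded if score >= SUSPICION_THRESHOLD]
print(flagged)  # [('ref-202', 0.81)]
```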

Like AI Test, this tool does not write anything to your dataset so using it will have no impact on your project.

AI Review

AI Review lets you use DistillerAI for fully automated screening. The tool lets you pick which screening form to complete and which question on the form to answer in the event of an Include, Exclude or Can’t Tell result.

We recommend using AI Review as a second, or even third, screener to help check your human-screened results. It can also be used to include only high confidence references (rather than performing both includes and excludes), since this poses no risk of accidental exclusion.

You may also want to use it after running Preview and Rank several times, to screen the remaining references that are unlikely to be included anyway.

Unlike High Confidence Review, the tool lets you pick the threshold scores for inclusion and exclusion, with anything between those values falling into the “can’t tell” pile. It also allows you to restrict DistillerAI to only include or only exclude references.
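
As a sketch of that threshold logic (the 0.8 and 0.2 values below are illustrative, not product defaults; you choose the real thresholds in the AI Review form):

```python
def ai_review_decision(score, include_at=0.8, exclude_at=0.2):
    """Map a 0-to-1 inclusion score to a screening decision.
    Scores between the two thresholds land in the "can't tell" pile."""
    if score >= include_at:
        return "include"
    if score <= exclude_at:
        return "exclude"
    return "can't tell"

print(ai_review_decision(0.91))  # include
print(ai_review_decision(0.50))  # can't tell
print(ai_review_decision(0.07))  # exclude
```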

How can I see what screening decisions DistillerAI made?

DistillerAI uses DistillerSR the same way a human does, and you can review its work the same way too. DistillerAI’s work will appear as “DistillerAutomatedReviewer” in your project reports.

Running a conflict check in Datarama or a Kappa report will allow you to compare DistillerAI’s answers with those of your human screeners.

You can use Datarama to see DistillerAI’s screening decisions by following these simple steps:

  1. On the top menu, choose Datarama and open the Data Criteria tab.
  2. Select “DistillerAutomatedReviewer” from the list of users.
  3. Go to the Report Settings tab and select the level(s) where you used DistillerAI in the Data to Display box.
  4. Click Run Report. DistillerAI’s work will appear as forms submitted by “DistillerAutomatedReviewer”.

After viewing DistillerAI’s work, you can choose to remove it by clicking the Delete Response Sets button at the bottom of the Datarama report (be sure that you are only displaying data submitted by “DistillerAutomatedReviewer”). This lets you use DistillerAI as often as you like without committing to the decisions it makes.

Can DistillerAI answer multiple screening questions on the same form?

Not in this version. At the moment, DistillerAI simply learns to differentiate between references in your included and excluded sets and, once trained, will put references into one of those two piles for you.

Will DistillerAI change my data?

No. DistillerAI behaves just like a reviewer. If you use it to screen references, its screening choices will be entered as if it completed your screening form. If you look at the responses in Datarama, the user name used by the AI is DistillerAutomatedReviewer. Of course, you can always delete submissions by any reviewer, including the AI, from Datarama.

How do I know if I can trust DistillerAI’s decisions?

With AI, use the mantra “Trust, but verify”. We don’t encourage using AI in a way where a mistake could cause you trouble. Just as we do not encourage solo screening by humans (dual screening is a best practice), using DistillerAI to screen on its own is probably not a great idea.

We suggest using DistillerAI as a second screener. This will allow you to run Kappa scores and conflict checks between DistillerAI and your human screener(s) and make consensus corrections exactly as you would with two human screeners.
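
If you ever want to sanity-check a Kappa score outside of DistillerSR, Cohen’s kappa is a standard calculation over any two lists of decisions. A minimal sketch with made-up data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same references:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - chance) / (1 - chance)

human = ["include", "exclude", "exclude", "include", "exclude"]
ai    = ["include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(human, ai), 2))  # 0.62
```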

Using DistillerAI to score your references so that you can order them by likelihood of inclusion is a low-risk use of the tool, assuming that you still plan to screen all of your references. Similarly, using DistillerAI to check for accidental exclusions does not introduce any risk to your process.

Lastly, you can run the AI Test (see above) before using DistillerAI to confirm that it is making accurate decisions and giving you the results you need.

Will DistillerAI’s capabilities be expanded?

Definitely. The DistillerSR team is working closely with the community to refine the platform and make it more powerful. DistillerAI will be evolving rapidly over the months and years ahead.