Machine learning algorithms for systematic review: Reducing workload in a preclinical review of animal studies and reducing human screening error

Research output: Contribution to journal › Review › peer-review

  • Alexandra Bannach-Brown
  • Piotr Przybyła, Manchester University
  • James Thomas, UCL
  • Andrew S.C. Rice, Imperial College London, London, UK
  • Sophia Ananiadou, Manchester University
  • Jing Liao, Edinburgh University
  • Malcolm Robert Macleod, Edinburgh University

Background: Here, we outline a method of applying existing machine learning (ML) approaches to aid citation screening in an ongoing broad and shallow systematic review of preclinical animal studies. The aim is to achieve a high-performing algorithm, comparable to human screening, that can reduce the human resources required for this step of a systematic review.

Methods: We applied ML approaches to a broad systematic review of animal models of depression at the citation screening stage. We tested two independently developed ML approaches that used different classification models and feature sets. We recorded the performance of the ML approaches on an unseen validation set of papers using sensitivity, specificity and accuracy. We aimed to achieve 95% sensitivity and to maximise specificity. The classification model providing the most accurate predictions was applied to the remaining unseen records in the dataset and will be used in the next stage of the preclinical biomedical sciences systematic review. We used a cross-validation technique to assign ML inclusion likelihood scores to the human-screened records, to identify potential errors made during the human screening process (error analysis).

Results: ML approaches reached 98.7% sensitivity based on learning from a training set of 5749 records, with an inclusion prevalence of 13.2%. The highest level of specificity reached was 86%. Performance was assessed on an independent validation dataset. Human errors in the training and validation sets were successfully identified using the assigned inclusion likelihood from the ML model to highlight discrepancies. Training the ML algorithm on the corrected dataset improved the specificity of the algorithm without compromising sensitivity. Error analysis correction led to a 3% improvement in sensitivity and specificity, which increases the precision and accuracy of the ML algorithm.
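The screening metrics named in the Methods (sensitivity, specificity, accuracy) can be illustrated with a short sketch. The confusion-matrix counts below are hypothetical, chosen only so that the resulting values roughly match the figures reported in the abstract (98.7% sensitivity, 86% specificity, ~13.2% inclusion prevalence); they are not the paper's actual counts.

```python
# Hypothetical sketch of the screening metrics used in the review.
# tp/fp/tn/fn counts are illustrative, not taken from the paper.
def screening_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # proportion of truly included papers found
    specificity = tn / (tn + fp)            # proportion of truly excluded papers rejected
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 760 of 5760 records relevant (~13.2% prevalence)
sens, spec, acc = screening_metrics(tp=750, fp=700, tn=4300, fn=10)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```

At low inclusion prevalence, accuracy alone is misleading (excluding everything would score ~87% here), which is why the review targets sensitivity first and then maximises specificity.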
Conclusions: This work has confirmed the performance and application of ML algorithms for screening in systematic reviews of preclinical animal studies. It has highlighted the novel use of ML algorithms to identify human error. This needs to be confirmed in other reviews with different inclusion prevalence levels, but represents a promising approach to integrating human decisions and automation in systematic review methodology.
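The error-analysis step described above — assigning out-of-fold inclusion likelihood scores to human-screened records and flagging discrepancies — can be sketched as follows. This is a minimal illustration with toy data; the paper does not specify the classifier or feature sets used, so the TF-IDF + logistic regression pipeline and the 0.9/0.1 flagging thresholds here are assumptions, not the authors' method.

```python
# Hypothetical sketch: out-of-fold inclusion likelihood scores via
# cross-validation, used to flag possible human screening errors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# Toy stand-in for screened records: (title/abstract text, human label 1=include)
records = [
    ("forced swim test in a rat model of depression", 1),
    ("chronic mild stress induces depressive behaviour in mice", 1),
    ("randomised controlled trial of antidepressants in humans", 0),
    ("cognitive behavioural therapy for depression in adults", 0),
] * 10  # repeat so 5-fold cross-validation has enough samples

texts = [text for text, _ in records]
labels = [label for _, label in records]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())

# Each record is scored by a model that never saw it during training,
# mirroring the cross-validation error-analysis step in the abstract.
scores = cross_val_predict(model, texts, labels, cv=5,
                           method="predict_proba")[:, 1]

# Flag strong disagreements between the ML likelihood and the human label
# for human re-screening (thresholds are illustrative).
flagged = [i for i, (s, y) in enumerate(zip(scores, labels))
           if (y == 0 and s > 0.9) or (y == 1 and s < 0.1)]
print(f"{len(flagged)} records flagged for re-screening")
```

The key point is that scores come from held-out folds: a record never contributes to the model that scores it, so a confident disagreement is evidence about the human label rather than memorisation.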

Original language: English
Article number: 23
Journal: Systematic Reviews
Volume: 8
Issue: 1
ISSN: 2046-4053
Publication status: Published - 15 Jan 2019

    Research areas

  • Analysis of human error, Automation tools, Citation screening, Machine learning, Systematic review

