Bat4RCT: A suite of benchmark data and baseline methods for text classification of randomized controlled trials

Bibliographic Details
Published in: PLoS ONE 2023-03, Vol. 18 (3), p. e0283342
Main Authors: Kim, Jenna, Kim, Jinmo, Lee, Aejin, Kim, Jinseok
Format: Article
Language: English
Description
Summary: Randomized controlled trials (RCTs) play a major role in aiding biomedical research and practices. To inform this research, the demand for highly accurate retrieval of scientific articles on RCT research has grown in recent decades. However, correctly identifying all published RCTs in a given domain is a non-trivial task, which has motivated computer scientists to develop methods for identifying papers involving RCTs. Although existing studies have provided invaluable insights into how RCT tags can be predicted for biomedical research articles, they used datasets from different sources in varying sizes and timeframes, so their models and findings cannot be compared across studies. In addition, as datasets and code are rarely shared, researchers who conduct RCT classification have to write code from scratch, reinventing the wheel. In this paper, we present Bat4RCT, a suite of data and an integrated method to serve as a strong baseline for RCT classification, which includes the use of BERT-based models in comparison with conventional machine learning techniques. To validate our approach, all models are applied to 500,000 paper records in MEDLINE. The BERT-based models showed consistently higher recall scores than conventional machine learning and CNN models while producing slightly better or similar precision scores. The best performance was achieved by the BioBERT model when trained on both title and abstract texts, with an F1 score of 90.85%. This infrastructure of dataset and code will provide a competitive baseline for the evaluation and comparison of new methods and the convenience of future benchmarking. To the best of our knowledge, our study is the first work to apply BERT-based language modeling techniques to RCT classification tasks and to share dataset and code in order to promote reproducibility and improvement in text classification in biomedicine research.
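The abstract compares models by precision, recall, and F1 for the binary RCT/non-RCT task. As a minimal sketch of how those metrics relate (not the authors' released code; the toy labels below are invented for illustration), F1 is the harmonic mean of precision and recall computed from confusion counts:

```python
# Hypothetical illustration of the evaluation metrics named in the abstract:
# precision, recall, and F1 for binary RCT classification (1 = RCT, 0 = non-RCT).

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for the positive (RCT) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy gold labels and predictions (invented for this sketch).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
# prints: precision=0.75 recall=0.75 F1=0.75
```

This framing explains the abstract's observation that BERT-based models can win overall on F1 by raising recall while holding precision roughly level, since F1 rewards balanced gains in both.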
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0283342