
LAGOS-AND: A large gold standard dataset for scholarly author name disambiguation

Bibliographic Details
Published in: Journal of the Association for Information Science and Technology, 2023-02, Vol. 74(2), pp. 168-185
Main Authors: Zhang, Li; Lu, Wei; Yang, Jinqing
Format: Article
Language: English
Description
Summary: In this article, we present a method to automatically build large labeled datasets for the author name ambiguity problem in the academic world by leveraging two authoritative academic resources, ORCID and DOI. Using this method, we built LAGOS-AND, two large gold-standard sub-datasets for author name disambiguation (AND): LAGOS-AND-BLOCK, created for clustering-based AND research, and LAGOS-AND-PAIRWISE, created for classification-based AND research. The LAGOS-AND datasets differ substantially from existing ones. The initial versions (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE), and both datasets show close similarity to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of last-name variation in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods, as well as MAG's author ID system, on our datasets; the evaluation yields several interesting findings. We hope the datasets and findings will bring new insights to future studies. The code and datasets are publicly available.
ISSN: 2330-1635
2330-1643
DOI: 10.1002/asi.24720
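
The summary describes obtaining ground-truth authorship labels by linking ORCID iDs to DOIs. The following minimal Python sketch is not the paper's actual pipeline; it only illustrates how such ORCID-to-DOI links might be collected via the public ORCID API. The endpoint, the JSON field names, and the demo iD (ORCID's well-known example profile) should be verified against the ORCID API documentation.

import itertools
import requests

ORCID_API = "https://pub.orcid.org/v3.0"  # public (read-only) ORCID API, assumed endpoint

def fetch_dois(orcid_id: str) -> list[str]:
    """Return the DOIs listed in an ORCID profile's works section."""
    resp = requests.get(
        f"{ORCID_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    dois = []
    # Works are grouped; each group holds one or more work summaries,
    # whose external IDs may include a DOI (field names are assumptions).
    for group in resp.json().get("group", []):
        for summary in group.get("work-summary", []):
            ext_ids = (summary.get("external-ids") or {}).get("external-id", [])
            for ext in ext_ids:
                if ext.get("external-id-type") == "doi":
                    dois.append(ext["external-id-value"].lower())
    return sorted(set(dois))

def same_author_pairs(orcid_id: str):
    """Yield (doi_a, doi_b) pairs that share an author via the ORCID link,
    i.e., positive instances for a pairwise AND classifier."""
    yield from itertools.combinations(fetch_dois(orcid_id), 2)

if __name__ == "__main__":
    # 0000-0002-1825-0097 is ORCID's public example iD, used here for demonstration.
    for pair in itertools.islice(same_author_pairs("0000-0002-1825-0097"), 5):
        print(pair)

In this framing, every pair of DOIs under one ORCID iD is a same-author (positive) instance; negative instances would be drawn from different iDs within the same name block. Construction details such as blocking and filtering follow the paper, not this sketch.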