Methods for accrediting publications to authors or countries: Consequences for evaluation studies

Bibliographic Details
Published in: Journal of the American Society for Information Science, 2000, Vol. 51 (2), p. 145-157
Main Authors: Egghe, Leo, Rousseau, Ronald, Van Hooydonk, Guido
Format: Article
Language:English
Summary: One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are then brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. In addition to this, a relative increase according to one method can go hand in hand with a relative decrease according to another counting method. Indeed, we present examples in which country (or author) c has a smaller relative score in the total counting system than in the fractional counting one, yet this smaller score has a higher importance than the larger one (fractional counting). Similar anomalies were constructed for total versus proportional counts and for total versus straight counts. Consequently, a ranking between countries, universities, research groups or authors, based on one particular accrediting method does not contain an absolute truth about their relative importance. Different counting methods should be used and compared. Differences are illustrated with a real-life example. Finally, it is shown that some of these anomalies can be avoided by using geometric instead of arithmetic averages.
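The counting methods named in the abstract can be sketched concretely. Below is a minimal illustration, not the authors' exact formalism, of three of them on hypothetical toy data: total counting (each coauthor receives full credit per paper), fractional counting (each paper's single credit is split equally among its coauthors), and straight counting (only the first-listed author is credited). The author labels and paper lists are invented for illustration; the point is that the ranking of two authors can reverse between methods.

```python
from collections import defaultdict

def total_counts(papers):
    # Total counting: every coauthor receives a full credit of 1 per paper.
    scores = defaultdict(float)
    for authors in papers:
        for a in authors:
            scores[a] += 1.0
    return dict(scores)

def fractional_counts(papers):
    # Fractional counting: each paper carries one credit, split equally,
    # so each of n coauthors receives 1/n.
    scores = defaultdict(float)
    for authors in papers:
        share = 1.0 / len(authors)
        for a in authors:
            scores[a] += share
    return dict(scores)

def straight_counts(papers):
    # Straight counting: only the first-listed author receives credit.
    scores = defaultdict(float)
    for authors in papers:
        scores[authors[0]] += 1.0
    return dict(scores)

# Hypothetical data: author A coauthors three 3-author papers,
# author D writes two solo papers.
papers = [["A", "B", "C"]] * 3 + [["D"]] * 2

total = total_counts(papers)
fractional = fractional_counts(papers)

# Under total counting A outranks D (3 credits vs 2); under
# fractional counting the ranking reverses (1 credit vs 2).
print(total["A"], total["D"])
print(fractional["A"], fractional["D"])
```

This reproduces, in miniature, the abstract's central observation: a ranking produced by one accrediting method carries no absolute truth, since another defensible method can invert it on the same data.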
ISSN: 0002-8231, 2330-1635, 1097-4571, 2330-1643
DOI: 10.1002/(SICI)1097-4571(2000)51:2<145::AID-ASI6>3.0.CO;2-9