Relative performance evaluation and peer-performance summarization errors
| Published in: | Review of Accounting Studies 2013-03, Vol. 18 (1), p. 34-65 |
|---|---|
| Main Authors: | , , |
| Format: | Article |
| Language: | English |
| Summary: | In tests of the relative performance evaluation (RPE) hypothesis, empiricists rarely aggregate peer performance in the same way as a firm’s board of directors. Framed as a standard errors-in-variables problem, a commonly held view is that such aggregation errors attenuate the regression coefficient on systematic firm performance towards zero, which creates a bias in favor of the strong-form RPE hypothesis. In contrast, we analytically demonstrate that aggregation differences generate more complicated summarization errors, which create a bias *against* finding support for strong-form RPE (potentially inducing a Type-II error). Using simulation methods, we demonstrate the sensitivity of empirical inferences to the bias by showing how an empiricist can conclude erroneously that boards, on average, do not apply RPE, simply by selecting more, fewer, or different peers than the board does. We also show that when the board does not apply RPE, empiricists will not find support for RPE (that is, precluding a Type-I error). |
| ISSN: | 1380-6653, 1573-7136 |
| DOI: | 10.1007/s11142-012-9212-9 |
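The abstract contrasts the paper's summarization-error result with the "commonly held view" of classical errors-in-variables attenuation. As background for that baseline claim, here is a minimal simulation sketch of classical attenuation bias; it is not the paper's model, and the variable names, coefficients, and noise scales are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True systematic performance and an outcome that loads on it with slope 2.0.
x_true = rng.normal(size=n)
y = 2.0 * x_true + rng.normal(scale=0.5, size=n)

# The empiricist observes the regressor with classical measurement error
# (e.g., a peer-performance index aggregated differently from the board's).
x_obs = x_true + rng.normal(scale=1.0, size=n)

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

slope_true = ols_slope(x_true, y)  # close to the true slope of 2.0
# Attenuated by var(x_true) / (var(x_true) + var(error)) = 1 / (1 + 1) = 0.5,
# so the estimated slope is pulled toward zero, to roughly 1.0.
slope_obs = ols_slope(x_obs, y)
```

Under classical measurement error the coefficient shrinks toward zero by a fixed reliability ratio; the paper's point is that aggregation differences do *not* follow this classical pattern and instead bias tests against strong-form RPE.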