
Revised "LEPS" Scores for Assessing Climate Model Simulations and Long-Range Forecasts

Bibliographic Details
Published in: Journal of Climate, 1996-01, Vol. 9 (1), p. 34-53
Main Authors: Potts, J. M., Folland, C. K., Jolliffe, I. T., Sexton, D.
Format: Article
Language:English
Description
Summary: The most commonly used measures for verifying forecasts or simulations of continuous variables are root-mean-squared error (rmse) and anomaly correlation. Some disadvantages of these measures are demonstrated. Existing assessment systems for categorical forecasts are discussed briefly. An alternative unbiased verification measure is developed, known as the linear error in probability space (LEPS) score. The LEPS score may be used to assess forecasts of both continuous and categorical variables and has some advantages over rmse and anomaly correlation. The properties of the version of LEPS discussed here are reviewed and compared with an earlier form of LEPS. A skill-score version of LEPS may be used to obtain an overall measure of the skill of a number of forecasts. This skill score is biased, but the bias is negligible if the number of effectively independent forecasts or simulations is large. Some examples are given in which the LEPS skill score is compared with rmse and anomaly correlation.
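Note: the abstract does not reproduce the scoring formula itself. As a rough illustration only, the sketch below assumes the commonly cited form of the revised score, S = 3(1 - |Pf - Pv| + Pf^2 - Pf + Pv^2 - Pv) - 1, where Pf and Pv are the cumulative probabilities of the forecast and observed values under a climatological distribution; the function and variable names here are hypothetical, not taken from the paper.

import numpy as np

def leps_score(forecast, observed, climatology):
    """Linear error in probability space (LEPS) for a single forecast.

    A minimal sketch assuming the commonly cited revised form
        S = 3(1 - |Pf - Pv| + Pf**2 - Pf + Pv**2 - Pv) - 1,
    with Pf and Pv the cumulative probabilities of the forecast and
    observed values under an empirical climatological distribution.
    """
    clim = np.sort(np.asarray(climatology))
    # Empirical climatological CDF: fraction of the sample not exceeding x.
    cdf = lambda x: np.searchsorted(clim, x, side="right") / clim.size
    pf, pv = cdf(forecast), cdf(observed)
    return 3.0 * (1.0 - abs(pf - pv) + pf**2 - pf + pv**2 - pv) - 1.0

# Example with a hypothetical climatological sample: a forecast close to the
# observation in probability space scores positively; a distant one scores
# negatively.
rng = np.random.default_rng(0)
clim = rng.normal(size=1000)
print(leps_score(0.1, 0.2, clim))    # small error in probability space
print(leps_score(-2.0, 2.0, clim))   # large error in probability space

Working in probability space means the penalty depends on how far apart the forecast and observation fall within the climatological distribution, rather than on their raw difference, which is what distinguishes this kind of score from rmse in the abstract's comparison.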
ISSN: 0894-8755
1520-0442
DOI: 10.1175/1520-0442(1996)009<0034:rsfacm>2.0.co;2