Evaluating expert knowledge: plant species responses to cattle grazing and fire
Published in: Journal of Range Management, 1998-05, Vol. 51 (3), p. 332-344
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Expert judgment, standardized in a meaningful format, can be used to identify research/survey needs and to characterize areas of (dis)agreement in species responses, associated traits, and factors affecting responses. Feasible methods are needed to facilitate the evaluation of expertise in a complex domain characterized by moderate to low learnability. Specific objectives for this study were (1) to evaluate agreement among experts on range plant species behavior and (2) to develop an agreement-based classification method for plant species responses. Declarative information at landscape scale was elicited from 7 role-suggested experts on expected responses to cattle grazing (none, moderate, or heavy) and fire (absent, applied in late summer or fall, or applied in late winter or spring) of 198 plant species from the Edwards Plateau (Texas). Experts were asked to rate trends on a 3-level ordinal scale (decrease, unaffected, increase). Kappa statistics (pair-wise and multi-rater versions) and log-linear models were used to evaluate agreement. A procedure based upon cumulative probability distributions of possible rating combinations was developed to classify plant species while accounting for agreement. A total of 4,584 opinions (cattle grazing: 2,959; fire: 1,625) was elicited and analyzed. Low to moderate agreement was observed. Average pair-wise kappa statistics ranged from 0.07 to 0.39; multi-rater kappa coefficients ranged from -0.17 to 0.53. Log-linear analyses were consistent with those estimates: agreement beyond chance or baseline association between ratings (P < 0.05) was observed in 62 out of 114 possible pair-wise cases. Non-homogeneous marginal distributions of opinion were an important source of disagreement. Experts performed beyond chance expectations in all scenarios, but agreement was better (and the pattern of agreement more consistent) when scenarios were most familiar to the experts (e.g., heavy grazing and winter/spring burning).
ISSN: 0022-409X, 2162-2728
DOI: 10.2307/4003420
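
The agreement measures named in the abstract (pair-wise kappa and a multi-rater kappa) can be illustrated with a short sketch. The snippet below is not the authors' code and uses hypothetical rating vectors; it computes Cohen's kappa for a pair of raters and Fleiss' kappa for several raters on the 3-level ordinal scale from the study (decrease, unaffected, increase).

```python
# Illustrative sketch only: chance-corrected agreement statistics of the kind
# described in the abstract, applied to hypothetical expert ratings.
from collections import Counter

CATEGORIES = ["decrease", "unaffected", "increase"]

def cohens_kappa(rater_a, rater_b, categories=CATEGORIES):
    """Pair-wise agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed proportion of exact agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement from the two marginal rating distributions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def fleiss_kappa(ratings, categories=CATEGORIES):
    """Multi-rater agreement; `ratings` is a list of items, each item being
    the list of category labels assigned by every rater."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    # n_ij: how many raters put item i in category j.
    counts = [[item.count(c) for c in categories] for item in ratings]
    # Per-item agreement P_i and overall category proportions p_j.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    p_bar = sum(p_i) / n_items
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

if __name__ == "__main__":
    # Two raters scoring six species under heavy grazing (hypothetical data).
    a = ["decrease", "decrease", "increase", "unaffected", "increase", "decrease"]
    b = ["decrease", "unaffected", "increase", "unaffected", "increase", "decrease"]
    print(f"Cohen's kappa (pair-wise): {cohens_kappa(a, b):.2f}")
    # A third rater on the same six species, for the multi-rater version.
    c = ["decrease", "decrease", "increase", "increase", "increase", "decrease"]
    items = [list(t) for t in zip(a, b, c)]
    print(f"Fleiss' kappa (multi-rater): {fleiss_kappa(items):.2f}")
```

Values near 0 indicate agreement no better than chance and values near 1 indicate near-perfect agreement, which is the scale on which the abstract's reported ranges (0.07 to 0.39 pair-wise, -0.17 to 0.53 multi-rater) are read.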