Discriminant validity testing in marketing: an analysis, causes for concern, and proposed remedies
Published in: Journal of the Academy of Marketing Science, 2016-01, Vol. 44(1), pp. 119–134
Format: Article
Language: English
Summary: The results of this research suggest a new mandate for discriminant validity testing in marketing. Specifically, the authors demonstrate that the AVE-SV comparison (Fornell and Larcker 1981) and the HTMT ratio (Henseler et al. 2015) with a 0.85 cutoff provide the best assessment of discriminant validity and should be the standard for publication in marketing. These conclusions are based on a thorough assessment of the literature and the results of a Monte Carlo simulation. First, based on a content analysis of articles published in seven leading marketing journals from 1996 to 2012, the authors demonstrate that three tests—the constrained phi (Jöreskog 1971), AVE-SV (Fornell and Larcker 1981), and overlapping confidence intervals (Anderson and Gerbing 1988)—are by far the most common. Further review reveals that (1) more than 20% of survey-based and over 80% of non-survey-based marketing studies fail to document tests for discriminant validity, (2) there is wide variance across journals and research streams in terms of whether discriminant validity tests are performed, (3) conclusions have already been drawn about the relative stringency of the three most common methods, and (4) the method that is generally perceived to be most generous is being consistently misapplied in a way that erodes its stringency. Second, a Monte Carlo simulation is conducted to assess the relative rigor of the three most common tests, as well as an emerging technique (HTMT). Results reveal that (1) on average, the four discriminant validity testing methods detect violations approximately 50% of the time, (2) the constrained phi and overlapping confidence interval approaches perform very poorly in detecting violations whereas the AVE-SV test and HTMT (with a ratio cutoff of 0.85) methods perform well, and (3) the HTMT.85 method offers the best balance between a high detection rate and a low arbitrary-violation (i.e., false-positive) rate.
ISSN: 0092-0703; 1552-7824
DOI: 10.1007/s11747-015-0455-4
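As a minimal illustration of the two criteria the summary recommends (this is a sketch, not the authors' simulation code): the AVE-SV test requires each construct's average variance extracted to exceed the squared inter-construct correlation, and the HTMT ratio compares mean between-construct item correlations to mean within-construct item correlations, flagging a violation above 0.85. Function names and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def htmt(X, idx_a, idx_b):
    """Heterotrait-monotrait ratio of correlations (Henseler et al. 2015).

    X: (n_observations, n_items) matrix of item scores.
    idx_a, idx_b: column indices of the items measuring constructs A and B.
    """
    R = np.corrcoef(X, rowvar=False)
    # Mean heterotrait-heteromethod correlation: items of A vs. items of B.
    hetero = R[np.ix_(idx_a, idx_b)].mean()

    def mean_monotrait(idx):
        # Mean off-diagonal correlation among the items of one construct.
        block = R[np.ix_(idx, idx)]
        return block[np.triu_indices(len(idx), k=1)].mean()

    return hetero / np.sqrt(mean_monotrait(idx_a) * mean_monotrait(idx_b))

def passes_ave_sv(loadings_a, loadings_b, phi_ab):
    """Fornell-Larcker (1981) check: each construct's average variance
    extracted (mean squared standardized loading) must exceed the shared
    variance, i.e. the squared inter-construct correlation phi_ab**2."""
    ave_a = np.mean(np.square(loadings_a))
    ave_b = np.mean(np.square(loadings_b))
    return min(ave_a, ave_b) > phi_ab ** 2

if __name__ == "__main__":
    # Synthetic two-factor data: loadings 0.8, inter-construct phi = 0.5.
    rng = np.random.default_rng(7)
    n = 1000
    f_a = rng.standard_normal(n)
    f_b = 0.5 * f_a + np.sqrt(0.75) * rng.standard_normal(n)
    items = [0.8 * f + 0.6 * rng.standard_normal(n)
             for f in (f_a, f_a, f_a, f_b, f_b, f_b)]
    X = np.column_stack(items)
    ratio = htmt(X, [0, 1, 2], [3, 4, 5])  # expected near 0.5, under the 0.85 cutoff
    ok = passes_ave_sv([0.8] * 3, [0.8] * 3, 0.5)  # AVE 0.64 > 0.25
```

Note that computing AVE in practice requires standardized loadings from a fitted measurement model (CFA); they are passed in directly here to keep the sketch self-contained.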