Project quality rating by experts and practitioners: experience with Preffi 2.0 as a quality assessment instrument
Preffi 2.0 is an evidence-based Dutch quality assessment instrument for health promotion interventions. It is mainly intended for both planning and assessing one's own projects but can also be used to assess other people's projects (external use). This article reports a study on the reliability of Preffi as an external quality assessment instrument. Preffi is used to assess quality at three levels: (i) specific criteria, (ii) clusters of criteria and (iii) entire projects. The study compared Preffi-based assessments of 20 projects by three practitioners with their intuitive assessments of the same projects and with assessments by three experts, which were to be used as external criteria. On the whole, the main hypothesis was not confirmed: the experts' assessments proved less reliable and accurate than the practitioners' intuitive and Preffi-based assessments, and differed too much from each other to be used as external criteria. The article suggests some improvements to Preffi to further increase its reliability.
Published in: | Health education research, 2006-04, Vol. 21 (2), p. 219-229 |
---|---|
Main Authors: | Molleman, Gerard R. M.; Peters, Louk W. H.; Hosman, Clemens M. H.; Kok, Gerjo J.; Oosterveld, Paul |
Format: | Article |
Language: | English |
container_end_page | 229 |
container_issue | 2 |
container_start_page | 219 |
container_title | Health education research |
container_volume | 21 |
creator | Molleman, Gerard R. M.; Peters, Louk W. H.; Hosman, Clemens M. H.; Kok, Gerjo J.; Oosterveld, Paul |
description | Preffi 2.0 is an evidence-based Dutch quality assessment instrument for health promotion interventions. It is mainly intended for both planning and assessing one's own projects but can also be used to assess other people's projects (external use). This article reports a study on the reliability of Preffi as an external quality assessment instrument. Preffi is used to assess quality at three levels: (i) specific criteria, (ii) clusters of criteria and (iii) entire projects. The study compared Preffi-based assessments of 20 projects by three practitioners with their intuitive assessments of the same projects and with assessments by three experts, which were to be used as external criteria. The intuitive assessments only related to the cluster and project levels. Our main hypothesis was that intuitive assessments by practitioners would be less reliable and accurate than their Preffi-based assessments and the experts' assessments. On the whole, we failed to confirm this hypothesis: the experts' assessments proved less reliable and accurate than the practitioners' intuitive and Preffi-based assessments and differed too much from each other to be used as external criteria. The Preffi-based assessments by the practitioners had an acceptable generalizability coefficient (G) and accuracy (standard error of measurement). At the level of the entire project, two assessors are needed to produce sufficiently reliable and accurate assessments, whereas three are needed for assessment at cluster level. The study also showed that different assessors use different perspectives and base their assessment on a variety of aspects. This was regarded as inevitable and even useful by the assessors themselves. Discussions between assessors are important to achieve consensus. The article suggests some improvements to Preffi to further increase its reliability. |
doi_str_mv | 10.1093/her/cyh058 |
format | article |
eissn | 1465-3648 |
publisher | Oxford University Press (England) |
pmid | 16221733 |
coden | HRTPE2 |
ericid | EJ938045 |
jstor_id | 45110232 |
fulltext | fulltext |
identifier | ISSN: 0268-1153 |
ispartof | Health education research, 2006-04, Vol.21 (2), p.219-229 |
issn | 0268-1153; 1465-3648 |
language | eng |
recordid | cdi_proquest_miscellaneous_764288217 |
source | Applied Social Sciences Index & Abstracts (ASSIA); JSTOR Archival Journals and Primary Sources Collection; Oxford Journals Online; ERIC |
subjects | Academic Achievement; Criteria; Educational Assessment; Educational Quality; Error of Measurement; Evaluation; Evidence; Expertise; Experts; External assessors; Followup Studies; Generalizability Theory; Health Promotion; Health Promotion - standards; Health technology assessment; Intervention; Netherlands; ORIGINAL ARTICLES; Program Effectiveness; Program Evaluation - methods; Quality assessment; Quality Control; Reliability; Resistance (Psychology); Selfassessment |
title | Project quality rating by experts and practitioners: experience with Preffi 2.0 as a quality assessment instrument |