
A Multi-Method Approach to Evaluating Human-System Interactions during Operational Testing

"The quality of human-system interactions is a key determinant of mission success for military systems. However, operational testers rarely approach the evaluation of human-system interactions with the same rigor that they approach the evaluation of physical system requirements, such as miss di...

Full description

Saved in:
Bibliographic Details
Published in: Policy File, 2017
Main Authors: Thomas, Dean; Wojton, Heather; Biebe, Chad; Porter, Daniel
Format: Report
Language: English
Publisher: Institute for Defense Analyses
Date: 2017-11-01
Subjects: Institute for Defense Analyses
Online Access: Request full text
description "The quality of human-system interactions is a key determinant of mission success for military systems. However, operational testers rarely approach the evaluation of human-system interactions with the same rigor that they approach the evaluation of physical system requirements, such as miss distance or interoperability. Often, testers evaluate human-system interactions solely using survey instruments (e.g., NASA-Task Load Index (NASA-TLX)), excluding other methods entirely. In this paper, we argue that a multi-method approach that leverages methodological triangulation provides greater insights into human-system interactions observed during operational testing. Specifically, we present data from an operational test in which a multi-method approach was used. Ten attack helicopter pilots identified and responded to threats under four conditions: high vs. low threat density and presence vs. absence of a threat detection technology. Testers recorded two primary measures of pilot workload: time to detect first threat and the NASA-TLX. Pilots took significantly longer to detect threats under low threat density than high threat density when the threat detection technology was absent. However, there was no difference in time to detect threats when the threat detection technology was present. The NASA-TLX data showed a similar pattern of results, suggesting that the observed effect is a result of pilot workload rather than the method used to measure workload .e., survey instrument vs. behavioral metric. Triangulating methods in this way provides a more rigorous and defensible test of the research question, and when combined with qualitative methods, provides useful information for identifying whether degradations in performance should be addressed through additional training or interface redesign. "
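
Likewise, the 2x2 within-subjects design the abstract describes (threat density crossed with detection technology, ten pilots) is the kind of layout typically analyzed with a repeated-measures ANOVA, where the reported pattern would show up as a density-by-technology interaction on detection time. The sketch below is a hypothetical illustration using fabricated data and the statsmodels AnovaRM class; it is not the authors' actual analysis or data.

```python
# Hypothetical sketch of a repeated-measures analysis for the 2x2
# within-subjects design described in the abstract: 10 pilots x
# (threat density: high/low) x (detection tech: present/absent),
# with time to detect the first threat as the response.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for pilot in range(1, 11):
    for density in ("high", "low"):
        for tech in ("present", "absent"):
            # Fabricated effect mirroring the reported pattern:
            # detection is slower under low density only when the
            # detection technology is absent.
            penalty = 8.0 if (density == "low" and tech == "absent") else 0.0
            rows.append({
                "pilot": pilot,
                "density": density,
                "tech": tech,
                "detect_time_s": 20.0 + penalty + rng.normal(0, 2.0),
            })
df = pd.DataFrame(rows)

# A significant density x tech interaction is the statistical
# signature of the pattern the abstract reports.
res = AnovaRM(df, depvar="detect_time_s", subject="pilot",
              within=["density", "tech"]).fit()
print(res)
```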