Are We Asking the Right Questions?: Designing for Community Stakeholders' Interactions with AI in Policing
Research into recidivism risk prediction in the criminal legal system has garnered significant attention from HCI, critical algorithm studies, and the emerging field of human-AI decision-making. This study focuses on algorithmic crime mapping, a prevalent yet underexplored form of algorithmic decision support (ADS) in this context. We conducted experiments and follow-up interviews with 60 participants, including community members, technical experts, and law enforcement agents (LEAs), to explore how lived experiences, technical knowledge, and domain expertise shape interactions with the ADS, impacting human-AI decision-making. Surprisingly, we found that domain experts (LEAs) often exhibited anchoring bias, readily accepting and engaging with the first crime map presented to them. Conversely, community members and technical experts were more inclined to engage with the tool, adjust controls, and generate different maps. Our findings highlight that all three stakeholders were able to provide critical feedback regarding AI design and use - community members questioned the core motivation of the tool, technical experts drew attention to the elastic nature of data science practice, and LEAs suggested redesign pathways such that the tool could complement their domain expertise.
Published in: | arXiv.org, 2024-03 |
---|---|
Main Authors: | Haque, MD Romael; Saxena, Devansh; Weathington, Katy; Chudzik, Joseph; Guha, Shion |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Community; Crime; Decision making; Redesign; Scandals |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Haque, MD Romael; Saxena, Devansh; Weathington, Katy; Chudzik, Joseph; Guha, Shion |
description | Research into recidivism risk prediction in the criminal legal system has garnered significant attention from HCI, critical algorithm studies, and the emerging field of human-AI decision-making. This study focuses on algorithmic crime mapping, a prevalent yet underexplored form of algorithmic decision support (ADS) in this context. We conducted experiments and follow-up interviews with 60 participants, including community members, technical experts, and law enforcement agents (LEAs), to explore how lived experiences, technical knowledge, and domain expertise shape interactions with the ADS, impacting human-AI decision-making. Surprisingly, we found that domain experts (LEAs) often exhibited anchoring bias, readily accepting and engaging with the first crime map presented to them. Conversely, community members and technical experts were more inclined to engage with the tool, adjust controls, and generate different maps. Our findings highlight that all three stakeholders were able to provide critical feedback regarding AI design and use - community members questioned the core motivation of the tool, technical experts drew attention to the elastic nature of data science practice, and LEAs suggested redesign pathways such that the tool could complement their domain expertise. |
doi_str_mv | 10.48550/arxiv.2402.05348 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2924079662 |
source | ProQuest - Publicly Available Content Database |
subjects | Algorithms; Community; Crime; Decision making; Redesign; Scandals |
title | Are We Asking the Right Questions?: Designing for Community Stakeholders' Interactions with AI in Policing |