The Fallacy of AI Functionality
Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on "ethical" or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.
Published in: | arXiv.org 2022-07 |
---|---|
Main Authors: | Raji, Inioluwa Deborah; Kumar, I. Elizabeth; Horowitz, Aaron; Selbst, Andrew D. |
Format: | Article |
Language: | English |
Subjects: | Failure analysis; Taxonomy |
Online Access: | Get full text |
container_title | arXiv.org |
creator | Raji, Inioluwa Deborah; Kumar, I. Elizabeth; Horowitz, Aaron; Selbst, Andrew D. |
description | Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on "ethical" or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm. |
doi_str_mv | 10.48550/arxiv.2206.09511 |
format | article |
publisher | Ithaca: Cornell University Library, arXiv.org |
rights | 2022. Published under the Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/) |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2679474762 |
source | Publicly Available Content (ProQuest) |
subjects | Failure analysis; Taxonomy |
title | The Fallacy of AI Functionality |