AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks
Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways through which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper we track the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy. We show that for each subfield, perceptions of PIT stem from the particular dangers faced by past integration of technical systems within a normative social order. We further interrogate how these histories dictate the response of each subfield to conceptual traps, as defined in the Science and Technology Studies literature. Finally, through a comparative analysis of these currently siloed fields, we present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.
Main Authors: | Andrus, McKane; Dean, Sarah; Gilbert, Thomas Krendl; Lambert, Nathan; Zick, Tom |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Artificial intelligence; Ethics; History; Law; Machine learning; Safety |
Published in: | 2020 IEEE International Symposium on Technology and Society (ISTAS), 2020, p. 72-79 |
Publisher: | IEEE |
Publication Date: | 2020-11-12 |
DOI: | 10.1109/ISTAS50296.2020.9462193 |
EISSN: | 2158-3412 |
EISBN: | 9781665415071; 166541507X |
Source: | IEEE Xplore All Conference Series |