Synthesizing explainable counterfactual policies for algorithmic recourse with program synthesis

Bibliographic Details
Published in: Machine learning, 2023-04, Vol. 112 (4), p. 1389-1409
Main Authors: De Toni, Giovanni; Lepri, Bruno; Passerini, Andrea
Format: Article
Language: English
Publisher: Springer US (New York)
Description:
Being able to provide counterfactual interventions—sequences of actions we would have had to take for a desirable outcome to happen—is essential to explain how to change an unfavourable decision by a black-box machine learning model (e.g., being denied a loan request). Existing solutions have mainly focused on generating feasible interventions without providing explanations of their rationale. Moreover, they need to solve a separate optimization problem for each user. In this paper, we take a different approach and learn a program that outputs a sequence of explainable counterfactual actions given a user description and a causal graph. We leverage program synthesis techniques, reinforcement learning coupled with Monte Carlo Tree Search for efficient exploration, and rule learning to extract explanations for each recommended action. An experimental evaluation on synthetic and real-world datasets shows how our approach, FARE (eFficient counterfActual REcourse), generates effective interventions by making orders of magnitude fewer queries to the black-box classifier with respect to existing solutions, with the additional benefit of complementing them with interpretable explanations.
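The description frames algorithmic recourse as finding a sequence of actions that flips a black-box decision while making as few classifier queries as possible. The sketch below is purely illustrative and is not the paper's FARE system: the toy loan classifier, the feature names, and the actions are invented for the example, and an exhaustive search over short action sequences stands in for the paper's MCTS-guided program synthesis.

```python
# Illustrative sketch only: a counterfactual intervention as a sequence of
# actions that flips a (toy) black-box decision, with query counting.
from itertools import product

def black_box(user):
    """Toy black-box classifier: approve iff a linear score clears 8.0."""
    score = (0.5 * user["income"]
             + 2.0 * user["years_employed"]
             - 1.5 * user["open_debts"])
    return score >= 8.0  # True = loan approved

# Invented atomic actions a user could take (name -> feature update).
ACTIONS = {
    "raise_income":  lambda u: {**u, "income": u["income"] + 4},
    "work_one_year": lambda u: {**u, "years_employed": u["years_employed"] + 1},
    "close_a_debt":  lambda u: {**u, "open_debts": max(0, u["open_debts"] - 1)},
}

def find_intervention(user, max_len=3):
    """Search action sequences of increasing length; return the first
    sequence that flips the decision, plus the number of black-box queries."""
    queries = 0
    for length in range(1, max_len + 1):
        for seq in product(ACTIONS, repeat=length):
            candidate = user
            for name in seq:           # apply the actions in order
                candidate = ACTIONS[name](candidate)
            queries += 1               # one classifier query per candidate
            if black_box(candidate):
                return list(seq), queries
    return None, queries
```

The exhaustive enumeration makes the query-efficiency point concrete: its cost grows exponentially with sequence length, which is exactly the budget a guided search (such as the MCTS-based exploration the abstract mentions) is meant to cut down.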
DOI: 10.1007/s10994-022-06293-7
ISSN: 0885-6125
EISSN: 1573-0565
Source: Springer Link
Subjects: Artificial Intelligence; Black boxes; Computer Science; Control; Decision making; Genetic algorithms; Integer programming; Intervention; Machine Learning; Mechatronics; Natural Language Processing (NLP); Optimization; Robotics; Simulation and Modeling; Special Issue on Learning and Reasoning 2022; Synthesis