
Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use

Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to people unless special efforts are made to avoid them. Insurance is no exception. In this paper, we provide a thorough analysis of algorithmic fairness in the case of insurance premiums. We ask what “fairness” might mean in this context and how the fairness of a premium system can be measured. For this, we apply the established fairness frameworks of the fair machine learning literature to the case of insurance premiums and show which of the existing fairness criteria can be applied to assess the fairness of insurance premiums. We argue that two of the often-discussed group fairness criteria, independence (also called statistical parity or demographic parity) and separation (also known as equalized odds), are not normatively appropriate for insurance premiums. Instead, we propose the sufficiency criterion (also known as well-calibration) as a morally defensible alternative that allows us to test for systematic biases in premiums towards certain groups based on the risk they bring to the pool. In addition, we clarify the connection between group fairness and different degrees of personalization. Our findings enable insurers to assess the fairness properties of their risk models, helping them avoid reputation damage resulting from potentially unfair and discriminatory premium systems.
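
The three group fairness criteria named in the abstract have standard formalizations in the fair machine learning literature. Stated in generic notation (not quoted from the article), with A a protected group attribute, R the predicted risk score or resulting premium, and Y the realized outcome such as incurred claims:

Independence (statistical or demographic parity): P(R | A = a) = P(R) for every group a; premiums are distributed identically across groups.
Separation (equalized odds): P(R | Y, A = a) = P(R | Y); premiums may depend on group membership only through the actual outcome.
Sufficiency (well-calibration): E[Y | R = r, A = a] = E[Y | R = r]; within any given risk-score or premium level, expected claims do not differ across groups.

Sufficiency is the criterion the authors defend, since it tests whether each group's premiums track the risk that group actually brings to the pool. As a rough illustration of how an insurer might probe this on its own portfolio data, the following minimal Python sketch (with assumed column names, not code from the article) compares mean observed claims per risk-score band across groups:

import pandas as pd

def sufficiency_check(df, score="risk_score", group="group", outcome="claims", bins=10):
    """Compare mean observed claims per risk-score band across groups.
    Under sufficiency, within each band the mean outcome should not depend
    on group membership; large within-band gaps flag a potential violation."""
    banded = df.assign(band=pd.qcut(df[score], q=bins, duplicates="drop"))
    return banded.groupby(["band", group], observed=True)[outcome].mean().unstack(group)

# Hypothetical usage: claims_df has columns risk_score, group, and claims.
# print(sufficiency_check(claims_df))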

Bibliographic Details
Published in: Philosophy & Technology, 2023-09, Vol. 36 (3), Article 45, p. 45
Main Authors: Baumann, Joachim; Loi, Michele
Format: Article
Language: English
Publisher: Springer Netherlands (Dordrecht)
ISSN: 2210-5433
EISSN: 2210-5441
DOI: 10.1007/s13347-023-00624-9
PMID: 37346393
Subjects: Algorithms; Bias; Calibration; Context; Criteria; Customization; Decision making; Discrimination; Education; Ethical standards; Ethics; Injustice; Insurance; Insurance premiums; Machine learning; Parity; Personal information; Philosophy; Philosophy of Technology; Prediction models; Research Article; Risk; Risk assessment; Social aspects; Variables
Online Access: Open access; full text freely available under a Creative Commons Attribution (CC BY 4.0) license