
Testing the equivalency of human "predators" and deep neural networks in the detection of cryptic moths

Researchers have shown growing interest in using deep neural networks (DNNs) to efficiently test the effects of perceptual processes on the evolution of color patterns and morphologies. Whether this is a valid approach remains unclear, as it is unknown whether the relative detectability of ecologically relevant stimuli to DNNs actually matches that of biological neural networks. To test this, we compare image classification performance by humans and six DNNs (AlexNet, VGG-16, VGG-19, ResNet-18, SqueezeNet, and GoogLeNet) trained to detect artificial moths on tree trunks. Moths varied in their degree of crypsis, conferred by different sizes and spatial configurations of transparent wing elements. Like humans, four of six DNN architectures found moths with larger transparent elements harder to detect. However, humans and only one DNN architecture (GoogLeNet) found moths with transparent elements touching one side of the moth's outline harder to detect than moths with untouched outlines. When moths were small, the camouflaging effect of transparent elements touching the moth's outline was reduced for DNNs but enhanced for humans. Prey size can thus interact with camouflage type in opposing directions in humans and DNNs, which warrants a deeper investigation of size interactions with a broader range of stimuli. Overall, our results suggest that human and DNN responses had some similarities, but not enough to justify the widespread use of DNNs for studies of camouflage.
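
The record does not include the study's training pipeline. As a rough illustration of the approach the abstract describes, the sketch below fine-tunes one of the six named architectures (ResNet-18, via torchvision) as a binary classifier for image patches with versus without an artificial moth. The directory layout, class names, and hyperparameters are illustrative assumptions, not details taken from the paper.

# Minimal illustrative sketch (not the authors' code): fine-tune a pretrained
# ResNet-18, one of the six architectures named above, to classify image
# patches as "moth" vs. "background". Paths and hyperparameters are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Assumed layout: data/train/moth/*.png and data/train/background/*.png
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # input size for ImageNet-pretrained models
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet normalization statistics
                         std=[0.229, 0.224, 0.225]),
])
train_set = ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap the classification head for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                 # illustrative epoch count
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Detection accuracy from such a classifier could then be compared against human detection performance for each size and transparency configuration, which is the kind of comparison the study reports.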

Bibliographic Details
Published in: Journal of evolutionary biology, 2024-11
Main Authors: Arias, Mónica; Behrendt, Lis; Dreßler, Lyn; Raka, Adelina; Perrier, Charles; Elias, Marianne; Gomez, Doris; Renoult, Julien P.; Tedore, Cynthia
Format: Article
Language: English
Subjects: Animal biology; Biodiversity; Computer Science; Ecology, environment; Invertebrate Zoology; Life Sciences; Neural and Evolutionary Computing; Populations and Evolution; Technology for Human Learning
DOI: 10.1093/jeb/voae146
ISSN: 1420-9101; 1010-061X
EISSN: 1420-9101
PMID: 39589804