
Demonstrating a new evaluation method on ReLU based Neural Networks for classification problems

Deep neural networks, which have proven to be effective methods for solving complex problems, can even be applied in decision systems controlling critical processes. In such applications, however, the outcomes of the neural network must be checked so that we have a clear understanding of how the network operates over given input intervals. The most straightforward, though often computationally expensive, approach to this check evaluates the network at discrete input points and estimates the expected outputs over the given interval. The present research aims to develop a novel approach that can identify, for specific input intervals, whether the operation and output of a neural network can be considered known or unknown. In the presented case study, we investigated the ReLU (Rectified Linear Unit) and Sigmoid activation functions, using the double moon and the Banknote Authentication classification problems for demonstration. Our method can be applied to identify input intervals where the given neural network cannot support critical decisions. The evaluation is based on a nonlinear system of equations and inequalities built on arbitrary continuous activation functions. To define the critical intervals of the input variables (i.e., intervals where the decision-making system should not be relied on), we identify the input variable combinations that result in a non-expected output value. This inverse logic is intended to identify intervals of the input variables where the response of the system and the correct decision are not identical. The presented demonstration examples supported our assumption that the number of neurons and the dimension of the decision space have a significant impact on the complexity of the evaluation process.
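The abstract describes an inverse check: rather than sampling a network at discrete points, one searches a whole input interval for any input that produces a non-expected output, and flags the interval as critical if such an input exists. The minimal Python sketch below illustrates this idea under assumptions of our own: the tiny 2-2-1 ReLU/Sigmoid network, the input box, and the expected label are hypothetical placeholders, and multi-start bounded optimization (scipy.optimize.minimize) stands in for the paper's nonlinear system of equations and inequalities; it is not the authors' implementation.

# Minimal sketch of the "inverse" interval check described above -- not the
# authors' method. Weights, input box and expected label are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Toy classifier: two inputs, two ReLU hidden units, one sigmoid output.
W1 = np.array([[1.5, -0.7],
               [0.3,  1.2]])
b1 = np.array([0.1, -0.2])
w2 = np.array([1.0, -1.3])
b2 = 0.05

def net(x):
    h = np.maximum(0.0, W1 @ x + b1)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid output in (0, 1)

def is_critical(box, expected_label, threshold=0.5, n_starts=20, seed=0):
    """Search the input box for a point whose prediction contradicts expected_label.

    Instead of sampling points and hoping they behave, actively look for a
    violating input; finding one marks the whole box as a critical interval.
    """
    sign = 1.0 if expected_label == 1 else -1.0
    margin = lambda x: sign * (net(x) - threshold)   # negative <=> wrong class

    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in box])
    hi = np.array([b[1] for b in box])
    for x0 in rng.uniform(lo, hi, size=(n_starts, len(box))):
        res = minimize(margin, x0, bounds=box)       # bounded local search
        if res.fun < 0:                              # non-expected output found
            return True, res.x
    return False, None

critical, witness = is_critical(box=[(-1.0, 1.0), (-1.0, 1.0)], expected_label=1)
print("critical interval, witness:" if critical else "no violation found among starts", witness)

If a witness input is found, the box is an interval on which the classifier should not be relied on for critical decisions; failing to find one with this heuristic search is only evidence, not the kind of guarantee the paper's equation-based evaluation aims for.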

Bibliographic Details
Published in: Expert systems with applications, 2024-09, Vol. 250, p. 123905, Article 123905
Main Authors: Tollner, Dávid; Ziyu, Wang; Zöldy, Máté; Török, Árpád
Format: Article
Language: English
Subjects: Investigating ReLU-based systems; Neural network evaluation; Non-classic expert system testing; Nonlinear optimization-based analysis
Citations: Items that this one cites
Online Access:Get full text
DOI: 10.1016/j.eswa.2024.123905
ISSN: 0957-4174
Source: Elsevier