A numerical verification method for multi-class feed-forward neural networks
The use of neural networks in embedded systems is becoming increasingly common, but these systems often operate in safety-critical environments, where a failure or incorrect output can have serious consequences. Therefore, it is essential to verify the expected operation of neural networks before deploying them in such settings. In this publication, we present a novel approach for verifying the correctness of these networks using a nonlinear equation system under the assumption of closed-form activation functions. Our method is able to accurately predict the output of the network for given specification intervals, providing a valuable tool for ensuring the reliability and safety of neural networks in embedded systems.
Published in: | Expert systems with applications 2024-08, Vol.247, p.123345, Article 123345 |
---|---|
Main Authors: | Grimm, Daniel; Tollner, Dávid; Kraus, David; Török, Árpád; Sax, Eric; Szalay, Zsolt |
Format: | Article |
Language: | English |
Subjects: | Explainable neural networks; Neural network verification; Nonlinear optimization |
creator | Grimm, Daniel; Tollner, Dávid; Kraus, David; Török, Árpád; Sax, Eric; Szalay, Zsolt |
description | The use of neural networks in embedded systems is becoming increasingly common, but these systems often operate in safety-critical environments, where a failure or incorrect output can have serious consequences. Therefore, it is essential to verify the expected operation of neural networks before deploying them in such settings. In this publication, we present a novel approach for verifying the correctness of these networks using a nonlinear equation system under the assumption of closed-form activation functions. Our method is able to accurately predict the output of the network for given specification intervals, providing a valuable tool for ensuring the reliability and safety of neural networks in embedded systems.

Highlights:
- A novel verification concept for neural networks is developed.
- Continuous activation function based NNs can be verified.
- The approach provides explainability and transparency for the verified neural network.
- Monotonicity or linearity are not necessary during the verification.
- No model simplification is required to evaluate the operation process. |
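To make the idea of "predicting the output of the network for given specification intervals" concrete, the following is a minimal, hypothetical sketch of interval propagation through a small feed-forward network with tanh activations. It is not the authors' method: unlike the paper's approach, this sketch does rely on the monotonicity of the activation function, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def linear_interval(lo, hi, W, b):
    # Bound W @ x + b elementwise for x in the box [lo, hi]
    # using standard interval arithmetic.
    W_pos = np.maximum(W, 0.0)  # positive weights: lo maps to lo, hi to hi
    W_neg = np.minimum(W, 0.0)  # negative weights: lo and hi swap roles
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def propagate(lo, hi, layers):
    # layers: list of (W, b) pairs; tanh is applied after every layer.
    # Because tanh is monotone, evaluating it at the interval endpoints
    # bounds the layer output (the paper does NOT require this property).
    for W, b in layers:
        lo, hi = linear_interval(lo, hi, W, b)
        lo, hi = np.tanh(lo), np.tanh(hi)
    return lo, hi
```

For example, propagating the specification box [-1, 1]^2 through a 2-3-2 network yields an output box that is guaranteed to contain the network's output for every concrete input in the box, though the bounds are generally looser than those of an exact equation-system formulation.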
doi_str_mv | 10.1016/j.eswa.2024.123345 |
format | article |
identifier | ISSN: 0957-4174 |
ispartof | Expert systems with applications, 2024-08, Vol.247, p.123345, Article 123345 |
issn | 0957-4174; 1873-6793 |
language | eng |
recordid | cdi_crossref_primary_10_1016_j_eswa_2024_123345 |
source | ScienceDirect Journals |
subjects | Explainable neural networks; Neural network verification; Nonlinear optimization |
title | A numerical verification method for multi-class feed-forward neural networks |