Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation

Most deep neural networks are considered to be black boxes, meaning their output is hard to interpret. In contrast, logical expressions are considered to be more comprehensible since they use symbols that are semantically close to natural language instead of distributed representations. However, for high-dimensional input data such as images, the individual symbols, i.e. pixels, are not easily interpretable. We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN), and whose complexity depends on the size of the convolutional filter and not on the dimensionality of the input. Our approach is based on rule extraction from binary neural networks with stochastic local search. We show how to extract rules that are not necessarily short, but characteristic of the input, and easy to visualize. Our experiments show that the proposed approach is able to model the functionality of the neural network while at the same time producing interpretable logical rules.
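The abstract's central notion, a first-order convolutional rule, is a logical condition over a small k×k window that may hold at any position of the input, so its complexity is tied to the filter size rather than to the image dimensionality. As a rough illustration of that idea only (not of the authors' extraction procedure, which relies on binary neural networks and stochastic local search), the following minimal sketch evaluates a hypothetical 3×3 rule on a binary image; the function name rule_fires, the pattern/mask encoding, and the toy data are assumptions made for this example.

```python
import numpy as np

def rule_fires(image, pattern, mask):
    """Return True if the convolutional rule matches anywhere in a binary image.

    Illustrative sketch only: the rule is a conjunction of pixel literals over a
    k x k window; every position where `mask` is True must equal the
    corresponding value in `pattern`. The window is tested at every offset, so
    the rule's size is governed by the filter (k), not by the image dimensions.
    """
    k = pattern.shape[0]
    h, w = image.shape
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = image[i:i + k, j:j + k]
            if np.all(patch[mask] == pattern[mask]):
                return True  # existentially quantified over window positions
    return False

# Toy usage: a 3x3 rule asking for a vertical stroke anywhere in a 28x28 image.
pattern = np.array([[0, 1, 0],
                    [0, 1, 0],
                    [0, 1, 0]])
mask = np.ones((3, 3), dtype=bool)   # all nine literals are constrained

image = np.zeros((28, 28), dtype=int)
image[10:13, 14] = 1                 # draw a short vertical stroke
print(rule_fires(image, pattern, mask))  # -> True
```

Testing the window at every offset plays the role of quantifying over positions, which is what keeps the rule's size independent of the input resolution, as the abstract describes.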

Bibliographic Details
Published in: arXiv.org, 2020-12
Main Authors: Burkhardt, Sophie; Brugger, Jannis; Wagner, Nicolas; Ahmadi, Zahra; Kersting, Kristian; Kramer, Stefan
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Artificial neural networks; Neural networks; Symbols