Adversarial Machine Learning in the Physical Domain
| Published in: | Johns Hopkins APL Technical Digest, 2021-01, Vol. 35 (4), p. 426 |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Summary: | With deep neural networks (DNNs) being used increasingly in many applications, it is critical to improve our understanding of their failure modes and potential mitigations. A Johns Hopkins University Applied Physics Laboratory (APL) team successfully inserted a backdoor (train-time attack) into a common object detection model. In conjunction with this research, they developed a principled methodology to evaluate patch attacks (test-time attacks) and the factors impacting their success. Their approach enabled the creation of a novel optimization framework for the first-ever design of semitransparent patches that can overcome scale limitations while retaining desirable factors with regard to deployment and detectability. |
| ISSN: | 0270-5214; 1930-0530 |