Machine learning-resistant pseudo-random number generator
Published in: Electronics Letters, 2019-05, Vol. 55 (9), pp. 515-517
Main Authors: ,
Format: Article
Language: English
Summary: A conventional pseudo-random number generator (PRNG) is vulnerable to machine learning (ML) attacks because its random numbers are produced by a deterministic algorithm. A physical unclonable function (PUF) is a hardware security primitive that can also be cracked by ML attacks. The key security difference between a regular PRNG and a PUF, however, is that training on the output data alone is sufficient to break a regular PRNG, whereas breaking a PUF requires access to its challenge-response pairs. To design an ML-resistant PRNG, this Letter feeds the output of one regular PRNG into a PUF to produce encrypted data, which is then added to the output of a second regular PRNG to form the output of the ML-resistant PRNG. Because the PUF's input challenge is concealed, an adversary cannot model the PUF with ML techniques. The results show that the training accuracy for a single output bit of the ML-resistant PRNG is only about 52.6%, even when 200,000 samples are used for training; in contrast, 50,000 samples suffice to break a regular PRNG with ML attacks.
ISSN: 0013-5194, 1350-911X
DOI: 10.1049/el.2019.0485
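
As a rough illustration of the construction summarised above, the sketch below simulates the two-PRNG-plus-PUF pipeline: one PRNG generates concealed challenges, a software model of an arbiter PUF maps each challenge to a response bit, and that bit is XORed with a bit from a second PRNG. This is a minimal sketch under stated assumptions, not the authors' implementation: the Letter does not specify the PUF architecture (an arbiter-PUF delay model is assumed here), "added" is interpreted as modulo-2 addition, and the names `ArbiterPUFModel`, `ml_resistant_prng_stream`, and `N_STAGES` are hypothetical.

```python
import secrets
import random

# Hypothetical parameter, not taken from the Letter.
N_STAGES = 64  # challenge width of the simulated PUF


class ArbiterPUFModel:
    """Toy software stand-in for a hardware arbiter PUF.

    A real PUF derives its response from uncontrollable manufacturing
    variation; fixed random delay weights merely mimic that behaviour here.
    """

    def __init__(self, n_stages, seed=None):
        rng = random.Random(seed)
        # One delay-difference weight per stage plus a bias term.
        self.weights = [rng.gauss(0.0, 1.0) for _ in range(n_stages + 1)]

    def response(self, challenge_bits):
        # Standard additive-delay model: parity-transform the challenge,
        # then take the sign of the weighted sum as the 1-bit response.
        phi = []
        prod = 1
        for c in reversed(challenge_bits):
            prod *= (1 - 2 * c)
            phi.append(prod)
        phi.reverse()
        total = self.weights[-1] + sum(w * p for w, p in zip(self.weights, phi))
        return 1 if total > 0 else 0


def ml_resistant_prng_stream(n_bits, seed_a=1, seed_b=2):
    """Sketch of the construction described in the abstract:
    PRNG A feeds hidden challenges to the PUF, and each PUF response bit
    is XORed ("added" mod 2) with an output bit of PRNG B.
    """
    prng_a = random.Random(seed_a)   # challenge generator (kept secret)
    prng_b = random.Random(seed_b)   # second conventional PRNG
    puf = ArbiterPUFModel(N_STAGES, seed=secrets.randbits(32))
    out = []
    for _ in range(n_bits):
        challenge = [prng_a.getrandbits(1) for _ in range(N_STAGES)]
        out.append(puf.response(challenge) ^ prng_b.getrandbits(1))
    return out


if __name__ == "__main__":
    bits = ml_resistant_prng_stream(32)
    print("".join(str(b) for b in bits))
```

In hardware, the PUF mapping would come from device-specific manufacturing variation rather than a seeded software model, which is why an adversary observing only the combined output stream cannot train a model of the PUF: the challenges it receives are never exposed.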