Error tolerance in classical and neural network predictors
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Summary: This paper studies the influence of transmission and network errors on the encoded residue stream produced by a number of predictor-based data compression schemes. Classical linear predictors such as FIR and lattice filters, as well as a variety of feedforward and recurrent neural networks, are studied. The residue streams produced by these predictors are subjected to two types of commonly occurring transmission noise, namely Gaussian and burst. The noisy signal is decoded at the receiver and the magnitude of error, in terms of MSE and MAE, is compared. Hardware failures in the input receptor and multiplier are also simulated, and the performance of the various predictors is compared. Overall, it is found that even small, low-complexity neural networks are capable of displaying better error tolerance than the classical predictors.
DOI: 10.1109/TENCON.2000.893530
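The summary describes a predict-encode-transmit-decode pipeline: a predictor turns the input signal into a residue stream, the residues pass through a noisy channel (Gaussian or burst), and the receiver reconstructs the signal so error can be measured as MSE and MAE. The sketch below is a minimal illustration of that setup, not code from the paper; the first-order FIR predictor coefficient (a = 0.95), the noise parameters, and the test signal are all arbitrary assumptions.

```python
# Hypothetical sketch: first-order FIR predictive coding, a noisy channel
# acting on the residue stream, and MSE/MAE measured after decoding.
import numpy as np

rng = np.random.default_rng(0)

def fir_encode(x, a=0.95):
    """Residue r[n] = x[n] - a*x[n-1] (simple first-order linear predictor)."""
    r = np.empty_like(x)
    r[0] = x[0]
    r[1:] = x[1:] - a * x[:-1]
    return r

def fir_decode(r, a=0.95):
    """Reconstruct x[n] = r[n] + a*x[n-1] from the (possibly noisy) residues."""
    x = np.empty_like(r)
    x[0] = r[0]
    for n in range(1, len(r)):
        x[n] = r[n] + a * x[n - 1]
    return x

def add_gaussian(r, sigma=0.05):
    """Additive white Gaussian channel noise on every residue sample."""
    return r + rng.normal(0.0, sigma, size=r.shape)

def add_burst(r, burst_len=20, amplitude=0.5):
    """Corrupt one contiguous block of residues (crude burst-noise model)."""
    out = r.copy()
    start = rng.integers(0, len(r) - burst_len)
    out[start:start + burst_len] += amplitude * rng.standard_normal(burst_len)
    return out

# Test signal: a slightly noisy sine wave standing in for correlated data.
x = np.sin(2 * np.pi * 0.01 * np.arange(1000)) + 0.01 * rng.standard_normal(1000)
residues = fir_encode(x)

for name, channel in [("gaussian", add_gaussian), ("burst", add_burst)]:
    x_hat = fir_decode(channel(residues))
    mse = np.mean((x - x_hat) ** 2)
    mae = np.mean(np.abs(x - x_hat))
    print(f"{name:8s}  MSE={mse:.5f}  MAE={mae:.5f}")
```

Because the decoder is recursive, a corrupted residue sample propagates into all later reconstructed samples, which is why the error tolerance of the predictor structure itself matters in this kind of comparison.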