Latency-Distortion Tradeoffs in Communicating Classification Results over Noisy Channels
Published in: IEEE Transactions on Communications, 2024-10, pp. 1-1
Main Authors: , ,
Format: Article
Language: English
Summary: In this work, the problem of communicating the decisions of a classifier over a noisy channel is considered. With machine-learning-based models being used in a variety of time-sensitive applications, transmitting these decisions reliably and in a timely manner is of significant importance. To this end, we study the scenario where a probability vector (representing the decisions of a classifier) at the transmitter needs to be transmitted over a noisy channel. Assuming that the distortion between the original probability vector and the one reconstructed at the receiver is measured via an f-divergence, we study the trade-off between transmission latency and distortion. We completely analyze this trade-off using uniform, lattice, and sparse lattice-based quantization techniques to encode the probability vector, first characterizing the bit budget each technique requires to meet a given allowed source distortion. These bounds are then combined with results from the finite-blocklength literature to provide a framework for analyzing how both quantization distortion and the distortion due to decoding error probability (i.e., channel effects) affect the incurred transmission latency. Our results show an interesting interplay between the source distortion (i.e., the f-divergence distortion of the probability vector) and the subsequent channel encoding/decoding parameters. We observe that the source distortion can be optimized for each quantization technique to attain a minimum latency. Our results also indicate that sparse lattice-based quantization is the most effective at minimizing latency under low end-to-end distortion requirements across different parameters, and works best for sparse, high-dimensional probability vectors (i.e., a large number of classes). To corroborate our framework, we apply the quantization techniques to predictions made on real datasets and send them through a simulated channel.
We use the metric of "relative accuracy" to measure how often the class assigned the highest probability by the classifier at the transmitter is correctly identified after transmission. Our results indicate that the lattice-based techniques require significantly smaller blocklengths than uniform quantization (and hence incur smaller latencies) while still providing performance comparable to uniform quantization.
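To make the source-coding side of the summary concrete, the following is a minimal sketch (not the paper's actual scheme) of uniformly quantizing a probability vector under a per-entry bit budget and measuring the resulting source distortion with the KL divergence, one member of the f-divergence family. The function names and the 4-bit budget are illustrative assumptions.

```python
import numpy as np

def uniform_quantize_simplex(p, bits_per_entry):
    """Uniformly quantize each entry of a probability vector with a fixed
    per-entry bit budget, then renormalize back onto the simplex.
    Illustrative only; the paper also considers lattice-based schemes."""
    levels = 2 ** bits_per_entry - 1
    q = np.round(p * levels) / levels
    return q / q.sum()  # reconstruction must remain a probability vector

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q), an instance of an f-divergence."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.05, 0.05])       # hypothetical classifier output
q = uniform_quantize_simplex(p, bits_per_entry=4)
d = kl_divergence(p, q)                    # source distortion before the channel
```

In the paper's framework, a distortion requirement on `d` determines the bit budget, which in turn (via finite-blocklength results) determines the blocklength and hence the transmission latency.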
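The "relative accuracy" metric described above can be sketched as follows; the function name and the toy probability vectors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def relative_accuracy(tx_probs, rx_probs):
    """Fraction of samples whose argmax class at the transmitter is still
    the argmax class of the reconstructed vector at the receiver."""
    tx_top = np.argmax(tx_probs, axis=1)
    rx_top = np.argmax(rx_probs, axis=1)
    return float(np.mean(tx_top == rx_top))

# Hypothetical classifier outputs at the transmitter...
tx = np.array([[0.70, 0.20, 0.10],
               [0.10, 0.60, 0.30],
               [0.30, 0.40, 0.30]])
# ...and reconstructions after simulated quantization and channel noise;
# the top class of the last sample has flipped.
rx = np.array([[0.65, 0.25, 0.10],
               [0.15, 0.55, 0.30],
               [0.40, 0.35, 0.25]])

acc = relative_accuracy(tx, rx)  # 2 of 3 top classes preserved
```

Note that the metric only checks whether the top class survives end-to-end, so mild distortion of the non-maximal entries does not reduce it.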
ISSN: 0090-6778, 1558-0857
DOI: 10.1109/TCOMM.2024.3484939