For-All Sparse Recovery in Near-Optimal Time
Published in: ACM Transactions on Algorithms, Vol. 13, No. 3 (July 2017), pp. 1–26
Main Authors: Anna C. Gilbert, Yi Li, Ely Porat, Martin J. Strauss
Format: Article
Language: English
Summary: An approximate sparse recovery system in the ℓ_1 norm consists of parameters k, ϵ, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖_1 ≤ (1 + ϵ)‖x − x_k‖_1, where x_k denotes the best k-term approximation of x. We consider the "for all" model, in which a single matrix Φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm, by Porat and Strauss [2012], uses O(ϵ^{−3} k log(N/k)) measurements and runs in time O(k^{1−α} N^α) for any constant α > 0.
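To make the guarantee concrete, here is a minimal Python sketch (not the paper's algorithm; the helper names are ours) that computes the best k-term approximation x_k and checks whether a candidate x̂ satisfies ‖x̂ − x‖_1 ≤ (1 + ϵ)‖x − x_k‖_1.

```python
# A minimal sketch (not the paper's algorithm): checking the l1 "for all"
# guarantee ||xhat - x||_1 <= (1 + eps) * ||x - x_k||_1 for a candidate xhat.
import numpy as np

def best_k_term(x, k):
    """Best k-term approximation x_k: keep the k largest-magnitude entries."""
    xk = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
    xk[idx] = x[idx]
    return xk

def satisfies_guarantee(x, xhat, k, eps):
    """True iff xhat is a valid (1 + eps)-approximation of x in l1 norm."""
    tail = np.linalg.norm(x - best_k_term(x, k), 1)   # ||x - x_k||_1
    return np.linalg.norm(xhat - x, 1) <= (1 + eps) * tail

# Example: a nearly 2-sparse signal; keeping its two large entries succeeds.
x = np.array([5.0, 0.01, -3.0, 0.02, 0.0])
print(satisfies_guarantee(x, best_k_term(x, 2), k=2, eps=0.1))  # True
```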
In this article, we improve the number of measurements to O(ϵ^{−2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ϵ)), with a modest restriction that k ≤ N^{1−α} and ϵ ≤ (log k / log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(log N, 1/ϵ)). With no restrictions on ϵ, we have an approximate recovery system with m = O((k/ϵ) log(N/k) ((log N / log k)^γ + 1/ϵ)) measurements.
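Ignoring constant factors, a quick back-of-the-envelope comparison shows what the ϵ^{−3} → ϵ^{−2} improvement buys; the parameter values below are illustrative, not taken from the paper.

```python
# A back-of-the-envelope comparison of the two measurement bounds, ignoring
# constant factors: the earlier O(eps^-3 k log(N/k)) versus the improved
# O(eps^-2 k log(N/k)). The 1/eps saving is substantial for small eps.
import math

N, k, eps = 10**6, 100, 0.1
log_term = math.log2(N / k)      # log(N/k); base only shifts constants
old = k * log_term / eps**3      # Porat and Strauss [2012] bound
new = k * log_term / eps**2      # this paper's bound
print(f"old ~ {old:.2e}, new ~ {new:.2e}, ratio = {old / new:.0f}x")  # 10x
```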
The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or, for a deterministic algorithm, two unbalanced expanders). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which is then encoded into a longer message m′_i (in contrast to Porat and Strauss [2012], in which the encoding is simply the identity). Portions of the message m′_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions.
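The following toy sketch is our own illustration of the flavor of this encoding, not the paper's actual construction: the index i is turned into a message m_i, split into portions (one per hashing repetition), and each portion is padded with copies of its neighbors' chunks under a simple stand-in for the regular expander's edge structure.

```python
# A toy illustration (hypothetical, not the paper's construction) of the
# encoding idea: index i gets a message m_i, expanded into a longer message
# m'_i whose portions correspond to hashing repetitions; a d-regular graph
# on the portions supplies "linkage" information between them.
def encode_index(i, num_portions=4, bits=16, degree=2):
    m_i = format(i, f"0{bits}b")                 # m_i: bits of the index i
    chunk = bits // num_portions
    parts = [m_i[j*chunk:(j+1)*chunk] for j in range(num_portions)]
    # Stand-in for a regular expander: portion p links to p+1, ..., p+degree
    # (mod num_portions). Each portion of m'_i carries its own chunk plus
    # copies of its neighbors' chunks, so portions can cross-check each other.
    m_prime = []
    for p in range(num_portions):
        neighbors = [(p + t) % num_portions for t in range(1, degree + 1)]
        m_prime.append(parts[p] + "".join(parts[q] for q in neighbors))
    return m_prime

print(encode_index(37))  # four portions, each with redundancy from neighbors
```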
The decoding or recovery algorithm consists of recovering the portions of the longer messages m′_i and then decoding to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
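Continuing the toy sketch above (it reuses the hypothetical encode_index), a matching decoder can majority-vote each chunk across the copies held by linked portions, so a single corrupted portion is outvoted. This only gestures at list-recovery-style decoding; it is not the paper's algorithm.

```python
# A matching toy decoder (again hypothetical): each recovered portion of m'_i
# carries its own chunk plus copies of neighboring chunks, so a corrupted
# chunk can be outvoted by the copies held elsewhere, echoing how corruptions
# are detected and corrected. Assumes encode_index from the sketch above.
from collections import Counter

def decode_portions(m_prime, num_portions=4, bits=16, degree=2):
    chunk = bits // num_portions
    votes = [[] for _ in range(num_portions)]  # candidate values per chunk
    for p, portion in enumerate(m_prime):
        votes[p].append(portion[:chunk])       # the portion's own chunk
        for t in range(1, degree + 1):         # copies it holds of others
            q = (p + t) % num_portions
            votes[q].append(portion[t*chunk:(t+1)*chunk])
    # Majority vote per chunk; a tie would signal an uncorrectable corruption.
    m_i = "".join(Counter(v).most_common(1)[0][0] for v in votes)
    return int(m_i, 2)

m_prime = encode_index(37)                # from the encoding sketch above
m_prime[2] = "1111" + m_prime[2][4:]      # corrupt one portion's own chunk
print(decode_portions(m_prime))           # still recovers 37
```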
ISSN: 1549-6325, 1549-6333
DOI: 10.1145/3039872