On risk concentration for convex combinations of linear estimators
Published in: Problems of Information Transmission, 2016-10, Vol. 52 (4), pp. 344-358
Main Author:
Format: Article
Language: English
Subjects:
Summary: We consider the estimation problem for an unknown vector β ∈ R^p in a linear model Y = Xβ + σξ, where ξ ∈ R^n is standard discrete white Gaussian noise and X is a known n × p matrix with n ≥ p. It is assumed that p is large and X is an ill-conditioned matrix. To estimate β in this situation, we use a family of spectral regularizations of the maximum likelihood method β_α(Y) = H_α(X^T X) β°(Y), α ∈ R^+, where β°(Y) is the maximum likelihood estimate for β and {H_α(·): R^+ → [0, 1], α ∈ R^+} is a given ordered family of functions indexed by a regularization parameter α. The final estimate for β is constructed as a convex combination (in α) of the estimates β_α(Y) with weights chosen based on the observations Y. We present inequalities for large deviations of the norm of the prediction error of this method.
ISSN: 0032-9460, 1608-3253
DOI: 10.1134/S0032946016040037
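
As an illustration only (not taken from the paper), the following Python sketch shows how the construction described in the summary could look: the MLE is filtered through a spectral family H_α(X^T X), and the resulting estimates are aggregated by a convex combination over α. The Tikhonov filter H_α(λ) = λ/(λ + α) and the uniform weights are placeholder assumptions; the paper's data-driven weight choice is not reproduced here.

```python
import numpy as np

def spectral_estimates(X, Y, alphas):
    """Regularized estimates beta_alpha(Y) = H_alpha(X^T X) beta_mle(Y),
    sketched with the Tikhonov filter H_alpha(lambda) = lambda / (lambda + alpha)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) V^T
    coords = (U.T @ Y) / s                              # MLE coordinates in the basis V
    estimates = []
    for alpha in alphas:
        h = s**2 / (s**2 + alpha)                       # filter on the eigenvalues of X^T X
        estimates.append(Vt.T @ (h * coords))           # beta_alpha(Y)
    return np.array(estimates)

def convex_combination(estimates, weights):
    """Aggregate the estimates with nonnegative weights summing to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ estimates

# Toy usage with an ill-conditioned design; uniform weights are placeholders,
# whereas the paper chooses the weights from the observations Y.
rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p)) @ np.diag(np.logspace(0, -4, p))
beta = rng.standard_normal(p)
Y = X @ beta + 0.1 * rng.standard_normal(n)
alphas = np.logspace(-6, 1, 8)
ests = spectral_estimates(X, Y, alphas)
beta_hat = convex_combination(ests, np.ones(len(alphas)))
print(np.linalg.norm(X @ (beta_hat - beta)))            # norm of the prediction error
```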