
A PTAS for \(\ell_0\)-Low Rank Approximation: Solving Dense CSPs over Reals


Bibliographic Details
Published in: arXiv.org, 2023-11
Main Authors: Cohen-Addad, Vincent, Fan, Chenglin, Ghoshal, Suprovat, Lee, Euiwoong, de Mesmay, Arnaud, Newman, Alantha, Wang, Tony Chang
Format: Article
Language: English
Description
Summary: We consider the Low Rank Approximation problem, where the input consists of a matrix \(A \in \mathbb{R}^{n_R \times n_C}\) and an integer \(k\), and the goal is to find a matrix \(B\) of rank at most \(k\) that minimizes \(\| A - B \|_0\), the number of entries where \(A\) and \(B\) differ. For any constant \(k\) and \(\varepsilon > 0\), we present a polynomial-time \((1 + \varepsilon)\)-approximation algorithm for this problem, which significantly improves on the previous best \(poly(k)\)-approximation. Our algorithm is obtained by viewing the problem as a Constraint Satisfaction Problem (CSP) in which each row and each column becomes a variable that can take a value from \(\mathbb{R}^k\). In this view, there is a constraint between every row and every column, which results in a {\em dense} CSP, a well-studied topic in approximation algorithms. While most previous algorithms focus on finite-size (or constant-size) domains and involve an exhaustive enumeration over the entire domain, we present a new framework that bypasses such an enumeration in \(\mathbb{R}^k\). We also use tools from the rich literature on Low Rank Approximation with other objectives (e.g., \(\ell_p\) with \(p \in (0, \infty)\)) or domains (e.g., finite fields/generalized Boolean). We believe that our techniques might be useful for studying other real-valued CSPs and matrix optimization problems. On the hardness side, when \(k\) is part of the input, we prove that Low Rank Approximation is NP-hard to approximate within a factor of \(\Omega(\log n)\). This is the first superconstant NP-hardness of approximation for any \(p \in [0, \infty]\) that does not rely on stronger conjectures (e.g., the Small Set Expansion Hypothesis).
ISSN:2331-8422
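
To make the objective and the CSP reading in the summary concrete, here is a minimal Python sketch; it is our own illustration under assumed names (`l0_error`, `U`, `V`), not code from the paper. A candidate rank-\(k\) matrix \(B = UV\) assigns a vector in \(\mathbb{R}^k\) to each row and to each column, and \(\| A - B \|_0\) counts the row–column pairs whose constraint \(A_{ij} = \langle U_i, V_j \rangle\) is violated.

```python
import numpy as np

def l0_error(A, U, V):
    """l0 cost of approximating A by the rank-k matrix B = U @ V.

    CSP reading (following the summary above): row i of U is the value
    in R^k assigned to row-variable i, column j of V is the value
    assigned to column-variable j, and the (i, j) constraint is
    satisfied exactly when A[i, j] == U[i] @ V[:, j].  The objective
    ||A - B||_0 counts the violated constraints.
    """
    B = U @ V
    return int(np.count_nonzero(A != B))

# Hypothetical toy instance: a rank-2 matrix with one corrupted entry,
# so the optimal l0 error is at most 1.
rng = np.random.default_rng(0)
n_R, n_C, k = 5, 4, 2
U = rng.standard_normal((n_R, k))
V = rng.standard_normal((k, n_C))
A = U @ V
A[0, 0] += 1.0
print(l0_error(A, U, V))  # -> 1
```

The sketch only evaluates the cost of a given factorization; the paper's contribution is approximating the minimum of this count over all rank-\(k\) matrices \(B\), which the sketch does not attempt.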