Hardness of Minimizing and Learning DNF Expressions

Bibliographic Details
Main Authors: Khot, S., Saket, R.
Format: Conference Proceeding
Language:English
Description
Summary:We study the problem of finding the minimum size DNF formula for a function f : {0,1}^d → {0,1} given its truth table. We show that unless NP ⊆ DTIME(n^{poly(log n)}), there is no polynomial-time algorithm that approximates this problem to within factor d^{1-ε}, where ε > 0 is an arbitrarily small constant. Our result essentially matches the known O(d) approximation for the problem. We also study weak learnability of small size DNF formulas. We show that assuming NP ⊄ RP, for an arbitrarily small constant ε > 0 and any fixed positive integer t, a two-term DNF cannot be PAC-learnt in polynomial time by a t-term DNF to within 1/2 + ε accuracy. Under the same complexity assumption, we show that for arbitrarily small constants μ, ε > 0 and any fixed positive integer t, an AND function (i.e., a single-term DNF) cannot be PAC-learnt in polynomial time under adversarial μ-noise by a t-CNF to within 1/2 + ε accuracy.
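To make the minimization problem in the summary concrete, the following is a minimal brute-force sketch in Python (not from the paper): it enumerates all implicants of f and searches for the smallest set of them that covers every 1-point of the truth table, which is exactly the minimum number of DNF terms. The function name min_dnf_terms and its interface are illustrative assumptions, and the search is exponential in d, so it is only feasible for tiny truth tables.

from itertools import product, combinations

def min_dnf_terms(f, d):
    # Brute-force the minimum number of terms in a DNF for f : {0,1}^d -> {0,1}.
    # Exponential in d; only meant to make the problem statement concrete.
    points = list(product([0, 1], repeat=d))
    onset = [x for x in points if f(x)]

    # A term fixes each variable to 0 (negated literal), 1 (positive literal),
    # or leaves it out (None).
    terms = list(product([0, 1, None], repeat=d))

    def covers(term, x):
        return all(t is None or t == b for t, b in zip(term, x))

    # Implicants: terms that never cover a point where f is 0.
    implicants = [t for t in terms
                  if all(f(x) for x in points if covers(t, x))]

    if not onset:          # f is identically 0: the empty DNF suffices
        return 0
    # Minimum DNF size = smallest set of implicants covering every 1-point.
    for k in range(1, len(onset) + 1):
        for subset in combinations(implicants, k):
            if all(any(covers(t, x) for t in subset) for x in onset):
                return k

# Example: XOR on two variables needs two terms, (x1 AND NOT x2) OR (NOT x1 AND x2).
print(min_dnf_terms(lambda x: x[0] ^ x[1], 2))   # prints 2

The paper's hardness result says that, given the truth table as input, no polynomial-time algorithm can even approximate this minimum to within d^{1-ε}, so an exhaustive search like the sketch above is essentially unavoidable in the worst case (up to the known O(d) approximation).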
ISSN:0272-5428
DOI:10.1109/FOCS.2008.37