Hypergraph removal lemmas via robust sharp threshold theorems
| Published in: | Discrete Analysis 2020-08 |
|---|---|
| Format: | Article |
| Language: | English |
Summary: Hypergraph removal lemmas via robust sharp threshold theorems, Discrete Analysis 2020:10, 46 pp.

A central result in additive and extremal combinatorics is the triangle removal lemma, which roughly speaking states that a graph with few triangles can be approximated by a graph with no triangles. This result implies, amongst other things, Roth's theorem on arithmetic progressions, and it has been the starting point for many generalizations. One of these, the hypergraph removal lemma, states that for a fixed $k$ and a fixed $k$-uniform hypergraph $H$, any $k$-uniform hypergraph that contains few copies of $H$ can be approximated by a hypergraph that contains no copies of $H$. This result (in the case where $H$ is a simplex) implies Szemerédi's theorem. The proof of the hypergraph removal lemma uses a hypergraph generalization of Szemerédi's regularity lemma, and a consequence is that the bounds it gives are extremely weak, which means that the lemma is useful only when $k$ is fixed and the size of the hypergraph tends to infinity.

This interesting paper concerns hypergraph removal lemmas where $k$ can tend to infinity with the number of vertices $n$ -- indeed, it can even be linear in $n$. It would not be realistic to expect to prove removal lemmas for _all_ such hypergraphs, and indeed quite strong conditions are imposed: the number of edges of $H$ must be bounded, as must the maximum size of the intersection of any two edges. But these conditions still allow for a wide class of hypergraphs that occur naturally.

For example, if we take $H$ to be the $k$-uniform hypergraph that consists of two disjoint edges, then a $k$-uniform hypergraph that does not contain $H$ is precisely the same as an intersecting family of $k$-sets, so a consequence of the removal lemma proved in this paper is that an _almost_ intersecting family of $k$-sets can be approximated by an intersecting family, though this particular case is an already known theorem of Friedgut and Regev ([published in this journal](https://discreteanalysisjournal.com/article/3103-kneser-graphs-are-like-swiss-cheese)).

If the hypergraph regularity lemma is too weak to prove results for large $k$, then what can be used instead? The answer is in the title of the paper. Early on in the development of the theory of random graphs, it was noticed that many graph properties, such as being connected, "appear suddenly", in the sense that if the edge probability is just below a certain threshold, then with high probability the graph does not have the property, whereas just above the threshold the graph has the property with high probability.
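To make the two-disjoint-edges case concrete, here is a minimal Python sketch (an illustration of the statement only, not of the paper's methods; all function names are invented for this example). It counts the copies of $H$ in a set family, i.e. pairs of disjoint members, and then greedily deletes sets until none remain, at which point the family is by definition intersecting:

```python
from collections import Counter
from itertools import combinations

def disjoint_pairs(family):
    """Pairs of members with empty intersection, i.e. copies of
    H = two disjoint edges inside the family."""
    return [(a, b) for a, b in combinations(family, 2) if not (a & b)]

def make_intersecting(family):
    """Greedily delete sets until no two members are disjoint,
    so the surviving family is intersecting."""
    family = [frozenset(s) for s in family]
    while True:
        bad = disjoint_pairs(family)
        if not bad:
            return family
        # Delete the set involved in the most disjoint pairs.
        counts = Counter(s for pair in bad for s in pair)
        worst, _ = counts.most_common(1)[0]
        family.remove(worst)

# A 2-uniform example: a "star" of edges through 0, plus one stray edge.
fam = [{0, 1}, {0, 2}, {0, 3}, {3, 4}]
fixed = make_intersecting(fam)  # deletes only the stray edge {3, 4}
```

The greedy deletion is just a naive way to exhibit the qualitative phenomenon on a small example; the point of the paper is the quantitative statement that a family with *few* disjoint pairs is close to an intersecting one, with bounds that survive even when $k$ grows with $n$.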
| ISSN: | 2397-3129 |
|---|---|
| DOI: | 10.19086/da.14165 |