
August 12-16, 2011
Workshop on Approximability of CSPs

Talk Titles and Abstracts

Per Austrin (Toronto)
On the usefulness of predicates

Motivated by the pervasiveness of strong inapproximability results for MaxCSPs, we introduce a relaxed version of the MaxCSP problem. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other constraints, and then only needs to satisfy as many of the new constraints as possible. For instance, given a Max 3-Lin instance, the algorithm would be allowed to change all the constraints to 3-Sat constraints.

We consider the following (very weak) notion of such an algorithm being good: given that the original MaxCSP instance is almost satisfiable, the algorithm succeeds if it beats a random assignment on the new instance with the constraints replaced. We say that a MaxCSP is useful if there is a good algorithm for it.

Under the Unique Games Conjecture, we can give a complete and simple characterization of useful MaxCSPs defined by a predicate: such a MaxCSP is useful if and only if there is no pairwise independent distribution supported on its satisfying assignments.
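The condition in this characterization can be checked mechanically for a given predicate. As an illustration (the code and helper names below are ours, not from the talk), the uniform distribution over the satisfying assignments of the 3-Lin constraint x1 + x2 + x3 = 0 (mod 2) is pairwise independent:

```python
from itertools import product

# Satisfying assignments of the 3-Lin constraint x1 + x2 + x3 = 0 (mod 2).
support = [x for x in product([0, 1], repeat=3) if sum(x) % 2 == 0]

def pairwise_independent(support):
    """Check that the uniform distribution on `support` is pairwise
    independent: every pair of coordinates must be uniform on {0,1}^2."""
    n = len(support[0])
    for i in range(n):
        for j in range(i + 1, n):
            counts = {}
            for x in support:
                counts[(x[i], x[j])] = counts.get((x[i], x[j]), 0) + 1
            # uniform pair marginal: each of the 4 outcomes equally often
            if set(counts.values()) != {len(support) // 4}:
                return False
    return True

print(pairwise_independent(support))  # → True
```

The same check applied to, say, the three satisfying assignments of a binary OR returns False, since that pair marginal is not uniform.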

(Joint work with Johan Håstad)


Boaz Barak (Microsoft)
Making the long code shorter, with applications to the unique games conjecture


Andrei Bulatov (Simon Fraser University)
Approximate counting

While the complexity of exact counting of solutions of a CSP is well understood by now, approximating the number of solutions is mostly open. In this talk we explain the connections of approximate counting to other areas such as sampling, partition functions, and graph polynomials. We introduce the relevant complexity classes and give a short survey of recent results in this area.

Andrei Krokhin (Durham University)

Robust algorithms for CSPs

A polynomial-time algorithm deciding satisfiability of CSP instances treats all unsatisfiable instances the same, while an approximation algorithm for CSP may not distinguish satisfiable and almost satisfiable instances. A natural combination of these notions was suggested by Zwick in 1998: call an algorithm for CSP robust if, for each epsilon and each (1-epsilon)-satisfiable instance, the algorithm returns a (1-g(epsilon))-satisfying assignment, where g(epsilon) tends to 0 as epsilon tends to 0. Informally, such an algorithm correctly identifies satisfiable instances and also finds almost satisfying assignments for almost satisfiable instances.

A classical result by Håstad rules out robust algorithms for systems of linear equations with three variables per equation. On the other hand, Zwick gave robust algorithms for 2-Sat and Horn-k-Sat. In 2010, Guruswami and Zhou conjectured that a CSP (with a fixed finite set of allowed constraint types over a fixed finite domain) admits a robust algorithm if and only if it has bounded width (i.e., satisfiability can be decided by a local propagation algorithm). In this talk, we will present some initial results towards proving this conjecture, based on joint work with Victor Dalmau.
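In symbols (notation ours), an algorithm A for a CSP is robust if there is a function g with g(epsilon) -> 0 as epsilon -> 0 such that, on every instance I with m constraints,

```latex
\mathrm{Opt}(I) \;\ge\; (1-\epsilon)\, m
\quad\Longrightarrow\quad
\mathrm{val}\bigl(A(I)\bigr) \;\ge\; \bigl(1 - g(\epsilon)\bigr)\, m .
```

Taking epsilon = 0 recovers exact decision of satisfiability (assuming, as is standard, that g(0) = 0).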


Gabor Kun (Institute for Advanced Study)
Linear programming robustly decides width-1 CSPs

We say that an algorithm robustly decides a CSP if it distinguishes >(1-epsilon)-satisfiable instances from <(1-r(epsilon))-satisfiable instances, where r(epsilon)->0 as epsilon->0. We prove that the canonical linear program robustly decides a CSP iff it has width 1. We also prove that this is exactly the class of CSPs testable with constantly many queries (in the so-called bounded-degree model).
Joint work with Ryan O'Donnell, Suguru Tamaki, Yuichi Yoshida and Yuan Zhou.


Konstantin Makarychev (IBM T.J. Watson Research Center)

How to Play Unique Games Against a Semi-Random Adversary

Joint work with Alexandra Kolla (Microsoft Research) and Yury Makarychev (TTIC)

We study the average-case complexity of the Unique Games problem. We propose a semi-random model, in which a unique game instance is generated in several steps. First an adversary selects a completely satisfiable instance of Unique Games, then she chooses an epsilon-fraction of all edges, and finally replaces ("corrupts") the constraints corresponding to these edges with new constraints. If all steps are adversarial, the adversary can obtain any (1-epsilon)-satisfiable instance, so the problem is then as hard as in the worst case. We show, however, that we can find a solution satisfying a (1-delta)-fraction of all constraints in polynomial time if at least one step is random (we require that the average degree of the graph is at least log k). Our result holds only for epsilon less than some absolute constant. We prove that if epsilon > 1/2, then the problem is hard, that is, no polynomial-time algorithm can distinguish between the following two cases: (i) the instance is a (1-epsilon)-satisfiable semi-random instance and (ii) the instance is at most delta-satisfiable (for every delta > 0); the result assumes the 2-to-2 Unique Games conjecture.
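The generative model above can be sketched in code. The following toy (all names are ours, and for concreteness the corruption step is the one made random) plants a completely satisfiable instance and then corrupts an epsilon-fraction of the edge constraints:

```python
import random

def planted_unique_game(n, k, m):
    """Step 1: a completely satisfiable instance. Plant a labeling and put
    on each of m edges a permutation constraint consistent with it."""
    labeling = [random.randrange(k) for _ in range(n)]
    edges, constraints = [], []
    for _ in range(m):
        u, v = random.sample(range(n), 2)
        perm = list(range(k))
        random.shuffle(perm)
        a, b = labeling[u], labeling[v]
        j = perm.index(b)
        perm[j], perm[a] = perm[a], b      # transpose so perm[a] == b
        edges.append((u, v))
        constraints.append(perm)
    return edges, constraints, labeling

def corrupt(constraints, k, eps):
    """Steps 2-3, randomized in this toy: replace an eps-fraction of the
    constraints with fresh random permutations."""
    for i in random.sample(range(len(constraints)),
                           int(eps * len(constraints))):
        perm = list(range(k))
        random.shuffle(perm)
        constraints[i] = perm

def satisfied_fraction(edges, constraints, labeling):
    ok = sum(1 for (u, v), p in zip(edges, constraints)
             if p[labeling[u]] == labeling[v])
    return ok / len(edges)
```

Before corruption the planted labeling satisfies every constraint; after corrupting an eps-fraction, it still satisfies at least a (1 - eps)-fraction.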

Finally, we study semi-random instances of Unique Games that are at most (1-epsilon) satisfiable. We present an algorithm that distinguishes between the case when the instance is a semi-random instance and the case when the instance is an arbitrary (1-delta)-satisfiable instance (if epsilon > c delta, for some absolute constant c).

Yury Makarychev (Toyota Technological Institute at Chicago)

The Grothendieck Constant is Strictly Smaller than Krivine's Bound

I will talk about Grothendieck's Inequality. The inequality was proved by Alexandre Grothendieck in 1953, and since then it has found numerous applications in Analysis, Quantum Mechanics and Computer Science. From the point of view of combinatorial optimization, the inequality states that the integrality gap of a certain semidefinite program is at most some absolute constant. The optimal value of this constant is called the Grothendieck constant K_G. The Grothendieck constant lies between 1.67 and 1.7822..., but its exact value is still unknown. It was conjectured that K_G is equal to 1.7822..., Krivine's upper bound for K_G. In this talk, we will disprove this conjecture and show that K_G is strictly less than Krivine's bound.
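For reference, the inequality says there is a universal constant K_G such that for every real matrix (a_ij) and every dimension d,

```latex
\max_{\substack{u_i, v_j \in \mathbb{R}^d \\ \|u_i\| = \|v_j\| = 1}}
\sum_{i,j} a_{ij}\, \langle u_i, v_j \rangle
\;\le\; K_G \cdot
\max_{\varepsilon_i, \delta_j \in \{-1,+1\}}
\sum_{i,j} a_{ij}\, \varepsilon_i \delta_j .
```

The left-hand side is the value of the semidefinite relaxation and the right-hand side is the integral optimum, so K_G is precisely the integrality gap referred to above.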

Joint work with Mark Braverman (University of Toronto), Konstantin Makarychev (IBM Research), Assaf Naor (Courant Institute, NYU).


Dana Moshkovitz (MIT)
The Sliding Scale Conjecture From Intersecting Curves

The Sliding Scale Conjecture was posed by Bellare, Goldwasser, Lund and Russell in 1993 and has remained open since. It says that there are PCPs with a constant number of queries, a polynomial-size alphabet and polynomially small error. We show that the conjecture can be proved assuming a certain geometric conjecture about curves over finite fields.
The geometric conjecture states that there are small families of low degree curves that behave, both in their distribution over points and in the intersections between pairs of curves from the family, similarly to the family of all low degree curves.

Elchanan Mossel (University of California, Berkeley)

Tests of deviation from dictatorship

I will give an overview of what we know (and don't know) about ("long code") tests separating dictatorships from functions far from dictatorships. Such tests play a role in proving hardness of approximation for constraint satisfaction problems.


Rishi Saket (Princeton University)

Some optimal geometric inapproximability results without assuming UGC

The Unique Games Conjecture (UGC) asserts that a certain two variable constraint satisfaction problem with bijective constraints is NP-hard to approximate. While the conjecture remains an important open question, it has yielded several optimal inapproximability results - many of which depend critically on the assumption of bijective constraints. In this talk we shall observe that some geometric inapproximability results - previously only known based on the UGC - can be shown without appealing to the conjecture.

Specifically, I shall present optimal NP-hardness of approximation results obtained in recent work [GRSW 11] for two problems: the L_p Subspace Approximation Problem and the L_p Quadratic Grothendieck Maximization Problem for constant p > 2. These results are based on the so-called "smooth" Label Cover instances, which have "locally bijective" constraints. Such instances have also been used in earlier work [KS 08] to prove optimal hardness of learning intersections of two halfspaces. The focus of the talk will be to illustrate how the assumption of the UGC can be avoided in proving the mentioned results.


Ali Sinop (Carnegie Mellon University)
Lasserre Hierarchy, Higher Eigenvalues, and Approximation Schemes for Quadratic Integer Programming with PSD Objectives

We present an approximation scheme for optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints.
This framework includes well known graph problems such as Minimum graph bisection, Edge expansion, Uniform sparsest cut, and Small Set expansion, as well as the Unique Games problem.
These problems are notorious for the existence of huge gaps between the known algorithmic results and NP-hardness results. Our algorithm is based on rounding semidefinite programs from the Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix.

For all the above graph problems, we give an algorithm running in time n^O(r/\epsilon^2) with approximation ratio 1 + \epsilon/min(1, \lambda_r), where \lambda_r is the r-th smallest eigenvalue of the normalized graph Laplacian L. In the case of graph bisection and small set expansion, the number of vertices in the cut is within lower-order terms of the stipulated bound. Our results imply a (1 + O(\epsilon))-factor approximation in time n^O(r'/\epsilon^2), where r' is the number of eigenvalues of L smaller than 1 - \epsilon. This perhaps gives some indication as to why even showing mere APX-hardness for these problems has been elusive, since the reduction must produce graphs with a slowly growing spectrum (and classes like planar graphs, which are known to have such a spectral property, often admit good algorithms owing to their nice structure).

For Unique Games, we give a factor (1 + (2+\epsilon)/\lambda_r) approximation for minimizing the number of unsatisfied constraints in n^O(r/\epsilon^2) time. This improves an earlier bound for solving Unique Games on expanders, and also shows that Lasserre SDPs are powerful enough to solve well-known integrality gap instances for the basic SDP.
We also give an algorithm for independent sets in graphs that performs well when the Laplacian does not have too many eigenvalues bigger than 1 + o(1).
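The spectral quantity \lambda_r in the approximation ratio above is directly computable. A minimal sketch (numpy assumed; the graph and the choices of r and epsilon are illustrative, not from the talk):

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for an adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt

# 4-cycle: its normalized Laplacian has eigenvalues 0, 1, 1, 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
lam = np.sort(np.linalg.eigvalsh(normalized_laplacian(A)))  # ascending

eps, r = 0.1, 2                 # illustrative choices (r >= 2: lam[0] = 0)
lam_r = lam[r - 1]              # r-th smallest eigenvalue, as in the abstract
ratio = 1 + eps / min(1.0, lam_r)
```

For this graph lam_r = 1, so the ratio evaluates to 1.1.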


David Steurer (Microsoft)
Subexponential Algorithms for Unique Games and Related Problems

We give a subexponential time approximation algorithm for the Unique Games problem: given a Unique Games instance with optimal value 1 - epsilon^6 and alphabet size k, our algorithm finds in time exp(k n^epsilon) a solution of value 1 - epsilon.
We also obtain subexponential algorithms with similar approximation guarantees for Small-Set Expansion and Multi Cut. For Max Cut, Sparsest Cut and Vertex Cover, our techniques lead to subexponential algorithms with improved approximation guarantees on subclasses of instances.
Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for Unique Games. While our result stops short of refuting the UGC, it does suggest that Unique Games is significantly easier than NP-hard problems such as Max 3-SAT and Label Cover, which are believed not to have subexponential algorithms achieving a non-trivial approximation ratio.
The main component in our algorithms is a new kind of graph decomposition that may have other applications: we show that by changing an epsilon fraction of its edges, any regular graph on n vertices can be broken into disjoint parts such that the stochastic adjacency matrix of each part has at most n^epsilon eigenvalues larger than 1 - epsilon^6.


Aravindan Vijayaraghavan (Princeton University)
On the Densest k-Subgraph problem

The densest k-subgraph (DkS) problem, where given a graph and an integer k the goal is to find the subgraph on k vertices having the maximum number of edges, is one of the notorious problems in approximation algorithms. While the problem has frustrated the development of good approximation algorithms, inapproximability results have also been hard to come by (it is NP-hard, and does not have a PTAS unless NP has subexponential time algorithms). Adding to the interest in studying this problem are a number of recent results which have exploited the conjectured hardness of densest k-subgraph and its variants.
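To pin down the objective, here is an exact brute-force solver; it is exponential in k, so it only fixes the problem statement and is not an algorithm for the regime the talk discusses (the instance encoding is ours):

```python
from itertools import combinations

def densest_k_subgraph(n, edges, k):
    """Exhaustively search all k-vertex subsets of {0, ..., n-1};
    return (number of induced edges, vertex set) of a densest one."""
    edge_set = set(map(frozenset, edges))
    best = max(combinations(range(n), k),
               key=lambda S: sum(1 for e in combinations(S, 2)
                                 if frozenset(e) in edge_set))
    count = sum(1 for e in combinations(best, 2) if frozenset(e) in edge_set)
    return count, set(best)
```

On a triangle with a pendant vertex, for example, the densest 3-subgraph is the triangle itself, with 3 induced edges.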

This talk will cover some recent developments from the last couple of years on the use of LP and SDP hierarchies to obtain better approximation algorithms for the problem. I will primarily focus on an algorithm which gives an O(n^{1/4 + \epsilon})-factor approximation by considering O(1/\epsilon) levels of an LP hierarchy. This worst-case algorithm is, in fact, inspired by studying a planted version of the problem. Moreover, these instances also seem to present a barrier to further progress.

Yi Wu (IBM Almaden Research Center)

On the hardness of pricing loss leaders

Consider the problem of pricing n items under unlimited supply with m single-minded buyers, each of whom is interested in at most k of the items. The goal is to price each item with profit margins p_1, p_2, ..., p_n so as to maximize the overall profit. There is an O(k)-approximation algorithm by [BB06] when the price of each item must be above its margin cost, i.e., each p_i > 0. We investigate the above problem when the seller is allowed to price some of the items below their margin cost. It was shown in [BB06, BBCH08] that by pricing some of the items below cost, the seller could possibly increase the maximum profit by a factor of Omega(log n). These items, sold at low prices to stimulate other profitable sales, are usually called "loss leaders". It is unclear what kind of approximation guarantees are achievable when some of the items can be priced below cost; understanding this question was posed as an open problem in [BB06].
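The benefit of below-cost pricing shows up already on a toy instance of our own (not from the talk): two items; one buyer wants item 0 alone at value 10, another wants the bundle {0, 1} at value 1. With nonnegative margins the best profit is 10, but pricing item 1 at -9 sells to both buyers for a profit of 11:

```python
from itertools import product

# Single-minded buyers: (wanted bundle, valuation). A buyer purchases her
# bundle iff its total price is at most her valuation, paying that total.
buyers = [({0}, 10.0), ({0, 1}, 1.0)]

def profit(prices):
    return sum(sum(prices[i] for i in bundle)
               for bundle, value in buyers
               if sum(prices[i] for i in bundle) <= value)

grid = [float(x) for x in range(-10, 11)]   # integer prices in [-10, 10]

best_nonneg = max(profit(p) for p in product(grid, repeat=2)
                  if all(x >= 0 for x in p))
best_any = max(profit(p) for p in product(grid, repeat=2))

print(best_nonneg, best_any)  # → 10.0 11.0
```

The optimum with a loss leader is attained at prices (10, -9): the first buyer pays 10 and the second pays a net of 1.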

In this work, we give a strong negative result for the problem of pricing loss leaders. We prove that it is NP-hard to get a constant approximation for the problem even for k = 3, i.e., when each buyer is interested in at most 3 items. When k = 2, we show a similar hardness result assuming the Unique Games Conjecture.

This talk is based on joint work with Preyas Popat.

Yuan Zhou (Carnegie Mellon University)
The densest k-subgraph (DkS) problem

The densest k-subgraph (DkS) problem (i.e., find a size-k subgraph with the maximum number of edges) is one of the notorious problems in approximation algorithms. There is a significant gap between known upper and lower bounds for DkS: the current best algorithm gives an ~O(n^{1/4}) approximation, while even showing a small constant factor hardness requires significantly stronger assumptions than P != NP. In addition to interest in designing better algorithms, a number of recent results have exploited the conjectured hardness of densest k-subgraph and its variants. Thus, understanding the approximability of DkS is an important challenge.

In this talk, we give evidence for the hardness of approximating DkS within polynomial factors. Specifically, we expose the limitations of strong semidefinite programs from SDP hierarchies in solving densest k-subgraph. Our results include:

* A lower bound of Omega(n^{1/4}/log^3 n) on the integrality gap for Omega(log n/log log n) rounds of the Sherali-Adams relaxation for DkS. This also holds for the relaxation obtained from Sherali-Adams with an added SDP constraint. Our gap instances are in fact Erdős–Rényi random graphs.

* For every epsilon > 0, a lower bound of n^{2/53-eps} on the integrality gap of n^{Omega(eps)} rounds of the Lasserre SDP relaxation for DkS, and an n^{Omega_eps(1)} gap for n^{1-eps} rounds. Our construction proceeds via a reduction from random instances of a certain Max-CSP over large domains.

In the absence of inapproximability results for DkS, our results show that even the most powerful SDPs are unable to beat a factor of n^{Omega(1)}, and in fact even improving the best known n^{1/4} factor is a barrier for current techniques.

