On a (Somewhat Old) Dynamic Steiner Tree Problem

The dynamic Steiner tree problem has been around for 24 years now, but it has not received a satisfying answer from the efficiency point of view: one would like a fast algorithm that maintains a constant-factor approximate solution while allowing updates to the set of terminal vertices. On the one hand, there have been some studies of the online version of this problem, where the focus is on minimizing the number of changes to the tree that are necessary to maintain a good approximation. This line of research was started in the pioneering paper by Imase and Waxman [1] and was later continued in [2, 3, 4]. However, the online problem ignores the question of efficiently finding these changes to the tree. On the other hand, the problem of maintaining a Steiner tree is also one of the important open problems in the networking community [5], where it is often referred to as the dynamic multicast tree problem. The motivating problem is to maintain a cheap communication network between a set of dynamically changing users. While it has been studied for many years, the research has resulted only in several heuristic approaches [6, 7, 8, 9], none of which has been formally proved to be efficient.

In our paper "The Power of Dynamic Distance Oracles: Efficient Dynamic Algorithms for the Steiner Tree" we have shown the first sublinear time algorithm that maintains a constant approximate Steiner tree under terminal additions and deletions. We show that we can maintain a (6+epsilon)-approximate Steiner tree of a general graph in tilde{O}(sqrt{n} log D) time per terminal addition or removal. Here, D denotes the stretch of the metric induced by the graph. For planar graphs we achieve the same running time and an approximation ratio of (2+epsilon). Observe that, due to the sparsity of real-world networks, only algorithms with such sublinear complexity can be of practical importance and could potentially be used to reduce the communication cost of dynamic multicast trees. Our paper aims to be a theoretical proof of concept that, from the algorithmic complexity perspective, such algorithms are indeed possible. However, turning these theoretical ideas into useful algorithms will require much more research. In particular, a natural question is whether the approximation factor of our algorithms can be considerably improved while keeping the running time sublinear.

Let me note that this might be hard, as the approximation factor of (6 + epsilon) for Steiner tree in general graphs comes from combining three ingredients: 3-approximate distance oracles [10], the 2-approximation of the Steiner tree by the MST in the metric closure, and a (1 + epsilon)-approximate online MST algorithm. In other words, we hit two challenging bounds: in order to improve our approximation factors, one would need either to improve the approximation ratio of the oracles, which is believed to be optimal, or to devise a framework not based on computing the MST. The second challenge would require constructing simple and fast (e.g., near-linear time) approximation algorithms for Steiner tree that beat the MST approximation ratio of 2. Constructing such algorithms is yet another challenging open problem.
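For the record, the way the (6 + epsilon) factor composes out of these three ingredients is simply multiplicative - a back-of-the-envelope check, with epsilon rescaled at the end:

\[ 3 \cdot 2 \cdot (1+\epsilon) \;=\; 6 + 6\epsilon \;=\; 6 + \epsilon'. \]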

If you would like to learn more about how it works, you can still attend our talk during the coming STOC in Portland on Monday at 9:00, or have a look at the full version of the paper on arXiv.

[1] M. Imase and B. M. Waxman. Dynamic Steiner tree problem. SIAM J. Discrete Math., 4(3):369–384, 1991.
[2] N. Megow, M. Skutella, J. Verschae, and A. Wiese. The power of recourse for online MST and TSP. In ICALP, pages 689–700. 2012.
[3] A. Gu, A. Gupta, and A. Kumar. The power of deferral: maintaining a constant-competitive Steiner tree online. In STOC, pages 525–534, 2013.
[4] A. Gupta and A. Kumar. Online Steiner tree with deletions. In SODA, pages 455–467. SIAM, 2014.
[5] X. Cheng, Y. Li, D.-Z. Du, and H. Ngo. Steiner trees in industry. In D.-Z. Du and P. Pardalos, editors, Handbook of Combinatorial Optimization, pages 193–216. Springer US, 2005.
[6] F. Bauer and A. Varma. ARIES: A rearrangeable inexpensive edge-based on-line Steiner algorithm. IEEE Journal on Selected Areas in Communications, 15:382–397, 1997.
[7] E. Aharoni and R. Cohen. Restricted dynamic Steiner trees for scalable multicast in datagram networks. In INFOCOM, volume 2, pages 876–883, 1997.
[8] S.-P. Hong, H. Lee, and B. H. Park. An efficient multicast routing algorithm for delay-sensitive applications with dynamic membership. In INFOCOM, volume 3, pages 1433–1440, 1998.
[9] S. Raghavan, G. Manimaran, and C. Siva Ram Murthy. A rearrangeable algorithm for the construction of delay-constrained dynamic multicast trees. IEEE/ACM Transactions on Networking, 7:514–529, 1999.
[10] M. Thorup and U. Zwick. Approximate distance oracles. J. ACM, 52(1):1–24, 2005.

Graph Isomorphism on graphs of bounded treewidth

The Graph Isomorphism problem is interesting for many reasons, one being the fact that its complexity status is still unclear - it is probably not NP-complete, but it is not known to be in P either. In the past decades, researchers have identified a large number of special graph classes for which GI is polynomial-time solvable, including planar graphs, graphs of bounded maximum degree, and graphs excluding a fixed minor, among others.

A closer look at the known algorithms for graphs of bounded degree or graphs excluding a fixed minor reveals that the dependency of the running time bound on the "forbidden pattern" is quite bad: n^{f(Delta)} in the first case and n^{f(H)} in the second, for some function f. Seeing such dependencies, a natural question arises: do there exist fixed-parameter algorithms, parameterized by the "forbidden pattern"? In other words, does the exponent of the polynomial in n need to depend on the pattern, or can we obtain an algorithm with running time f(H) cdot n^{O(1)}?

This question remains widely open for all aforementioned cases of bounded degree or excluded minor. In our recent FOCS 2014 paper, we have answered the question affirmatively for the special case of bounded treewidth graphs, proving that isomorphism of two n-vertex graphs of treewidth k can be tested in time 2^{O(k^5 log k)} n^5.

As a starting point, observe that testing isomorphism of two graphs together with their given tree decompositions (that is, checking whether the whole structures consisting of a graph and its decomposition are isomorphic) can be done in time polynomial in the size of the graphs and exponential in the width of the decompositions. This is a relatively standard, but tedious, exercise in designing dynamic programming algorithms on tree decompositions: if you have seen a few of these, you can probably figure out the details; if you haven't, this is probably one of the worst examples to start with, so just assume it can be done. Alternatively, you may use the recent results of Otachi and Schweitzer, which say that one needs only the set of bags of the decomposition, even without the structure of the decomposition tree itself.

With this observation, the task of comparing two bounded-treewidth graphs reduces to the following: given a graph G of treewidth k, we would like to compute, in an isomorphic-invariant way, a tree decomposition of G of near-optimal width. Here, isomorphic-invariant means that we do not want to make any decisions depending on the representation of the graph in memory (like "take an arbitrary vertex"), and the output decomposition should be the same (isomorphic) for isomorphic input graphs. Given such an isomorphic-invariant treewidth approximation algorithm, we can compute the decompositions of both input graphs and compare the resulting pairs (graph, its tree decomposition), as described in the previous paragraph. Note that for this approach we do not need to output exactly one tree decomposition; we can output, say, n^2 of them, and compare every pair of decompositions for the two input graphs; we only need that the set of output decompositions is isomorphic-invariant.

To cope with this new task, let us look at the known treewidth approximation algorithms; in particular, the arguably simplest one - the Robertson-Seymour algorithm. This algorithm provides a constant approximation in 2^{O(k)} cdot n^{O(1)} time, where k is the treewidth of the input n-vertex graph G. It is a recursive procedure that decomposes a part G[A] of the input graph, given a requirement that a set S subseteq A should be contained in the top bag of the output decomposition. Think of S as an interface to the rest of the graph; we will always have N(V(G) setminus A) subseteq S, but for technical reasons the set S could be larger. During the course of the algorithm we maintain an invariant |S| = O(k).

In the leaves of the recursion we have A = S, and we output a single bag A. If S is small, say, |S| < 100k, then we can add an arbitrary vertex to S and recurse. The interesting things happen when S grows beyond the threshold 100k. As the graph has treewidth k but |S| geq 100k, in an optimal decomposition of G the set S is spread across many bags. A simple argument shows that there exists a bag X, |X| leq k+1, such that every component of G[A setminus X] contains at most |S|/2 vertices of S. The algorithm takes S cup X as the root bag (note that |S|+|X| = O(k)) and recurses on every connected component of G[A setminus (S cup X)]; formally, for every connected component C of G[A setminus (S cup X)], we recurse on A := N[C] with S := N(C). Since every component of G[A setminus X] contains at most |S|/2 vertices of S, in every recursive call the new set S has size at most |S|/2 + |X| < |S|.

In the above algorithm there are two steps that are very deeply not isomorphic-invariant: "add an arbitrary vertex to S" in the case of small S, and "take any such X" in the case of large S. We start fixing the algorithm in the second, more interesting case. Before we start, let us think how we can find such a set X - the argument above only asserted its existence. One way to do it is to iterate through all disjoint sets P,Q subseteq S of size exactly k+2 each, and check whether the minimum separator between P and Q in G[A] has size at most k+1 (such a separator may contain vertices of P or Q; these are deletable as well). If such a separator is found, it is a good candidate for the set X: it does not necessarily have the property that every connected component of G[A setminus X] contains at most half of the vertices of S, but the claim that the size of the set S decreases in the recursive calls is still true. Of course, in this approach we make two choices that are not isomorphic-invariant: the choice of P and Q, and the choice of the minimum separator X. Luckily, an (almost) isomorphic-invariant way to make the second choice is already well understood: by submodularity of cuts, there exists a well-defined notion of the minimum cut closest to P, and of the one closest to Q. But how about the choice of P and Q? The surprising answer is: it works just fine if we throw into the constructed bag the separators for every choice of P and Q. That is, we start with the tentative bag B = S, we iterate over all such (ordered) pairs (P,Q), and we throw into the constructed bag B the minimum cut between P and Q that is closest to P, as long as it has size at most k+1. We recurse on all connected components of G[A setminus B] as before; the surprising fact is that the sizes of the sets S in the recursive calls do not grow. To prove this fact, we analyse what happens to the components of G[A setminus B] when the P-Q cuts are added one by one; a single step of this induction turns out to be an elementary application of submodularity. Furthermore, note that if the size of S is bounded in terms of k, so is the size of the bag B: |B| leq (k+1)|S|^{2k+4}.
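For intuition, here is a toy Python sketch of the basic (non-invariant) recursion just described: it brute-forces the balanced separator X over all small vertex subsets, uses a small demo threshold in place of 100k, and omits both the 2^{O(k)}-time machinery and all the isomorphism-invariant modifications discussed above.

    from itertools import combinations

    def components(G, verts):
        """Connected components of G restricted to the vertex set verts."""
        verts, comps, seen = set(verts), [], set()
        for s in verts:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend((G[v] & verts) - comp)
            seen |= comp
            comps.append(comp)
        return comps

    def decompose(G, A, S, k, bags):
        """Toy Robertson-Seymour-style recursion: collect the bags of a decomposition
        of G[A] whose top bag contains S (vertex labels assumed comparable)."""
        if A == S:                           # leaf of the recursion
            bags.append(frozenset(A))
            return
        if len(S) < 4 * (k + 1):             # demo threshold instead of 100k
            v = min(A - S)                   # the non-invariant "arbitrary vertex" step
            decompose(G, A, S | {v}, k, bags)
            return
        # large S: brute-force a bag X of size <= k+1 such that every component
        # of G[A \ X] contains at most |S|/2 vertices of S
        for r in range(k + 2):
            for X in combinations(sorted(A), r):
                X = set(X)
                if all(len(C & S) <= len(S) / 2 for C in components(G, A - X)):
                    bag = S | X
                    bags.append(frozenset(bag))
                    for C in components(G, A - bag):
                        N = ({u for v in C for u in G[v]} & A) - C    # N(C)
                        decompose(G, C | N, N, k, bags)
                    return
        raise ValueError("no balanced separator found: treewidth of G[A] exceeds k")

    # toy usage: a path on 20 vertices (treewidth 1)
    path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 20} for i in range(20)}
    bags = []
    decompose(path, set(path), set(), 1, bags)
    print(bags)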

Having made isomorphic-invariant the "more interesting" case of the Robertson-Seymour approximation algorithm, let us briefly discuss the case of the small set S: we need an isomorphic-invariant way to grow this set. Here we need some preprocessing: by known tricks, we can assume that (a) the input graph does not contain any clique separators, and (b) if uv notin E(G), then the minimum cut between u and v has size at most k. Furthermore, observe that in our algorithm, contrary to the original Robertson-Seymour algorithm, we always maintain the invariant that S = N(V(G) setminus A) and G[A setminus S] is connected. Thus, S is never a clique in G, and, if |S| < 100k, we can throw into the bag the minimum cut between u and v that is closest to u, for every ordered pair (u,v) with u,v in S and uv notin E(G). In this way, the lack of clique separators ensures that we always make progress in the algorithm, "eating" at least one vertex of A setminus S. The bound on the size of the minimum cut between two nonadjacent vertices ensures that the bag created in this way has size O(k^3), and so do the sets S in the recursive calls. Consequently, our algorithm computes in an isomorphic-invariant way a tree decomposition with adhesions of size O(k^3) and bags of size 2^{O(k log k)}. Well, there is one small catch - neither of the aforementioned steps is suitable to start the algorithm, i.e., to provide the initial call of the recursion; the Robertson-Seymour approximation starts simply with A = V(G) and S = emptyset. Here, we can start with A = V(G) and S = {u,v} for every pair {u,v} of nonadjacent vertices, as in this call the step with the small set S works fine. But this produces not a single isomorphic-invariant tree decomposition, but a family of O(n^2) decompositions - which is still fine for the isomorphism test.

A cautious reader would also notice that the described algorithm results in a double-exponential dependency on the parameter k in the running time, contrary to the claim of the 2^{O(k^5 log k)} term. Getting down to this dependency requires a bit more technical work; we refer to the full version of our paper for details.

Online bipartite matching in offline time

In 1973 Hopcroft and Karp gave a very nice O(m sqrt{n}) time algorithm computing a maximum matching in unweighted bipartite graphs. This algorithm turned out to be a milestone that is hard to beat. The bipartite maximum matching problem has been studied in many different flavors, such as online, approximate, dynamic, randomized, and combinations of the above. The HK algorithm has been improved for sufficiently dense and sufficiently sparse graphs; nevertheless, for the general case, despite wide interest, little improvement over HK has been obtained. This is somewhat intriguing, as no reasonable example certifies that the bound on the running time of HK is in fact tight. The HK algorithm is of an offline nature and relies heavily on receiving the whole graph before processing it. On the quest to make some progress on this subject, we aimed for the dynamic setting: say the vertices need to be matched as they show up in the graph, some re-matching is allowed, and the maximum matching needs to be maintained at all times. Can one do better than simply finding any augmenting path each time a new vertex appears?

Let us narrow down the dynamic scenario we consider: say only the vertices of one side of the graph come in online, and at the time they show up they reveal all their adjacent edges. This scenario is easily motivated as follows.
There is a set of servers and a set of jobs to be executed on these servers. Every job comes with a list of servers capable of performing it. Of course it is reasonable to serve as many jobs as possible, hence a maximum job-server matching is highly desired. With this motivation in mind, an online scenario seems the most natural: jobs come in online, one at a time, and request to be matched to one of their eligible servers. We want to match a job whenever it is possible, so we allow some busy servers to be reallocated.
Now the question is: what is a good way of reallocating the servers, so that the dynamic algorithm can benefit from it? We adopt the following measure: each reallocation involves some reallocation cost, and we want to minimize the total cost of reallocating the servers.

We associate with each server u an attribute rank_t(u), which states how many times the server has been reallocated so far. The parameter t stands for time: we see the entire process of adding jobs as a sequence of turns, and in each turn a new job appears together with its list of eligible servers. The attribute rank_t(u) is thus the number of times u has been reallocated up to turn t. These attributes guide us when we search for augmenting paths: we try to avoid servers with high ranks. This should distribute the necessary reallocations more or less evenly among the servers.

In order to follow this approach, a certain greedy strategy comes to mind: when trying to match a new job, choose the augmenting path that minimizes the maximum rank of a server along this path. This strategy has one substantial disadvantage: if every augmenting path that can match the new job contains a vertex of some high rank r, then we are allowed to rematch all servers of rank at most r, which is clearly an overhead. We hence define another attribute, tier_t(v), for every job v: it is the rank of the lowest-ranked path from v to any unmatched server. When we try to match job v, we search for an alternating path along which the tiers of the jobs do not increase; we call such a path a tiered path. In other words, a tiered path minimizes the maximal rank on each of its suffixes. This seems natural: why re-enter vertices of high rank when we can avoid them?

It turns out that this simple strategy does quite well: the ranks (and hence also the tiers) never grow above 2 sqrt{n}. That means that each server gets reallocated at most O(sqrt{n}) times, which is already interesting. Moreover, if we knew how to efficiently choose the right augmenting paths, the total work would be O(n sqrt{n}). We have an algorithm that finds all the augmenting paths in total time O(m sqrt{n}), which matches the offline time of HK.

So let us first take a look at how the bound on the maximum rank is obtained. First of all, we direct the edges of the graph according to the current matching: the matched edges point from servers to jobs, while the unmatched edges point the other way around. Now directed paths are alternating paths, so in each turn we need to find a directed path from the new job to a free server and reverse the edges along this path. We first show that servers of high rank are far from the unoccupied servers: the directed distance from any server w to an unmatched server in turn t is at least rank_t(w).
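To make this directed-graph view concrete, here is a minimal Python sketch that stores the matching in exactly this edge-oriented form and augments along an arbitrary directed path when a new job arrives (plain BFS; the rank- and tier-guided choice among augmenting paths, which is the actual contribution, is omitted):

    from collections import deque

    class OnlineMatcher:
        """Toy online bipartite matcher: jobs arrive one by one with their server lists.
        Matched edges point server -> job, unmatched edges point job -> server."""

        def __init__(self):
            self.match_of_server = {}   # server -> job
            self.match_of_job = {}      # job -> server
            self.servers_of_job = {}    # job -> list of eligible servers
            self.rank = {}              # server -> number of reallocations so far

        def add_job(self, job, servers):
            self.servers_of_job[job] = list(servers)
            for s in servers:
                self.rank.setdefault(s, 0)
            path = self._find_augmenting_path(job)
            if path is not None:
                self._apply(path)
            return path is not None

        def _find_augmenting_path(self, job):
            """BFS over the directed graph; returns job, s1, j1, s2, ..., free server."""
            parent = {job: None}
            queue = deque([job])
            while queue:
                j = queue.popleft()
                for s in self.servers_of_job[j]:
                    if s in parent:
                        continue
                    parent[s] = j
                    if s not in self.match_of_server:     # free server: path found
                        path, v = [], s
                        while v is not None:
                            path.append(v)
                            v = parent[v]
                        return list(reversed(path))
                    j2 = self.match_of_server[s]          # follow the matched edge server -> job
                    parent[j2] = s
                    queue.append(j2)
            return None

        def _apply(self, path):
            """Reverse the edges along the path and bump ranks of re-matched servers."""
            for i in range(0, len(path) - 1, 2):
                j, s = path[i], path[i + 1]
                if s in self.match_of_server:             # server was occupied: reallocation
                    self.rank[s] += 1
                self.match_of_server[s] = j
                self.match_of_job[j] = s

In the actual algorithm, the search is guided by the ranks and the tier lower bounds described below, rather than taking the first augmenting path found.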

We now look at any set S_t of vertex-disjoint directed paths covering all free servers in turn t, before the augmenting path is applied. Note that there are no outgoing edges from free servers, so the paths end there. The rank of a directed path present in the graph in turn t is the maximum rank of a server on it. Let mu_{t-1} denote the augmenting path applied in turn t-1. We analyze the augmentation process backwards: in turn t-1, before applying mu_{t-1}, there exists a set of vertex-disjoint paths S_{t-1} covering the free servers, such that:

  • every path pi in S_t has its counterpart Phi(pi) in S_{t-1}, where Phi is an injection
  • Phi(pi) has rank at least as high as pi, unless the rank of pi is at most one plus the rank of mu_{t-1}: in that case the rank of Phi(pi) may be one less than the rank of pi
  • there is a path in S_{t-1} that is not in the image of Phi and has rank at least the rank of mu_{t-1}

This observation can be restated as follows. For every path in S_t, let us draw a vertical bar of height equal to its rank. Let us now sort the bars in descending order and put them next to each other, as shown in Figure 1 on the left. These bars are the ranks of the paths in turn t. When we take a step back to turn t-1, we get another set of bars corresponding to the paths from turn t-1, where one additional bar pops up. Moreover, some bars may have decreased by one, but all the bars that decreased are dominated (with respect to height) by the newly added bar. This is shown in Figure 1 on the right. The process ends when we reach turn 0, where there are n bars of height zero.


Now we move to bounding the maximum rank. The maximum rank, say R, appears on some path in some turn t'. We take a look at the set S_{t'} consisting of this single path; there is only one bar, of height R. In the previous turn, either there is still a bar of height R, or there are two bars of height R-1. Every time some bars decrease, another bar appears that dominates the decreased ones. Somewhere on the way back to turn 0 there is a set of bars whose heights sum to a value quadratic in R. The bars, however, correspond to vertex-disjoint paths, and the heights of the bars are lower bounds on the lengths of these paths. Hence the graph has Omega(R^2) vertices, and R = O(sqrt{n}).
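In symbols, the final step of the argument reads: at the turn where the sum of the bar heights is quadratic in R, the bars correspond to vertex-disjoint paths whose lengths are at least their heights, so

\[ \Omega(R^{2}) \;\le\; \sum_{\pi} \mathrm{height}(\pi) \;\le\; \sum_{\pi} |\pi| \;\le\; n, \qquad\text{and hence}\qquad R = O(\sqrt{n}). \]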


The question that remains is whether we are able to efficiently find these paths. The main problem here is that we need augmenting paths along which the tiers of the jobs do not increase. This is not good news: the tiers are difficult to maintain upon the reversal of the edges on the entire augmenting path. The idea is to maintain them in a lazy manner. For each job v, instead of its actual tier, the algorithm maintains an attribute tier_{LB}(v); the subscript LB stands for lower bound, as we maintain the invariant that tier_{LB}(v) leq tier(v). When a new vertex v shows up in some turn, tier_{LB}(v) is set to 0. The algorithm repeatedly tries to find (in the directed graph) a tiered (according to the maintained lower bounds on the tiers) directed path from v to a free server. It executes a search routine from v, traversing only the vertices with ranks and tier_{LB}'s bounded by tier_{LB}(v). Once a job v' with tier_{LB}(v') < tier_{LB}(v) is discovered, the upper bound on the ranks and tier_{LB}'s of the vertices visited next is recursively set to tier_{LB}(v').

This goes on until one of two things happens. It might be that a free server is discovered; then we have found an augmenting path along which the tier_{LB}'s of the vertices are their actual tiers (the path we found is a witness for that). We reverse the edges along the augmenting path and increase the ranks of the reallocated servers. It might also happen that the search fails. This means that the algorithm has detected a group of vertices whose tier_{LB}'s are lower than their actual tiers. The algorithm then rightfully increases the tier_{LB}'s associated with these vertices and continues the search from the point where it failed; the difference is that now it can search further, as it is allowed to visit vertices with higher tier_{LB}'s than before. The general idea is that whenever a vertex is visited by the algorithm, either its tier_{LB} or its rank increases. Unfortunately, upon every such visit the algorithm may touch all the edges adjacent to the visited vertex. These edges, however, are touched only O(sqrt{n}) times each in total, so the total time amounts to O(m sqrt{n}).

How pineapples help finding Steiner trees?

The Bratislava Declaration of Young Researchers is something I was involved in recently. Its preparation was inspired by the Slovak Presidency of the EU, and it was presented at today's informal Council of Ministers responsible for competitiveness (research). I hope this will have some follow-up, as the current trend in funding research in the EU is, in my opinion (and not only mine, as this declaration shows), going in the wrong direction.
Recently, together with Marcin Pilipczuk, Michał Pilipczuk and Erik Jan van Leeuwen, we were able to prove that the Steiner Tree problem has a polynomial kernel on unweighted planar graphs. So far this was one of the few problems where such a kernel seemed possible, but existing tools (e.g., the theory of bidimensionality) were unable to deliver it. Essentially, we were able to prove the following theorem.

Theorem 1. Let (G,T) be a planar Steiner tree instance, and let k_{OPT} be the cost of an optimum tree connecting the terminals T in the unweighted graph G. One can in polynomial time find a set F subseteq E(G) of edges, of size polynomial in k_{OPT}, that contains an optimal Steiner tree connecting T in G.

Figure 1. The process of cutting open the graph G along the tree T_{apx}.

Let us briefly discuss the idea of the proof of this theorem. The most non-trivial part of it is the pineapple decomposition. In order to give you a glimpse of this decomposition, we first reduce the problem to the simpler case where all terminals lie on one designated face. Such a planar graph with one designated face will be called a brick, and this designated face will be called the perimeter of the brick. Without loss of generality we assume that the perimeter is the outer (infinite) face of the plane drawing of the brick. The first step of our reduction is to find a 2-approximate Steiner tree T_{apx} in G. Next, we cut the plane open along the tree T_{apx} (see Figure 1) to obtain the graph hat{G}. Now all terminals lie on one face of hat{G}, whereas the optimal Steiner tree in G is cut into smaller trees in hat{G}, each spanning some subset of terminals. As we do not know how exactly the optimal tree will be cut, we prove that a single polynomial kernel exists for all possible subsets of terminals on the perimeter, i.e., the kernel contains some optimal Steiner tree for every possible subset of terminals on the perimeter. This is stated in the following theorem.

Theorem 2. Let B be a brick. Then one can find in polynomial time a subgraph H of B such that

  • partial B subseteq H,
  • |E(H)| is polynomial in |partial B|,
  • for every set T subseteq V(partial B), H contains some optimal Steiner tree in B that connects T.

The idea behind the proof of Theorem 2 is to apply it recursively on subbricks (subgraphs enclosed by a simple cycle) of the given brick B. The main challenge is to devise an appropriate way to decompose B into subbricks, so that their "measure" decreases. Here we use the perimeter of a brick as a potential that measures the progress of the algorithm.

Figure 2. An optimal Steiner tree T and how it partitions the brick B into smaller bricks B_1,ldots,B_r.

Intuitively, we would like to do the following. Let T be a tree in B that connects a subset of the vertices on the perimeter of B. Then T splits B into a number of smaller bricks B_1,ldots,B_r, formed by the finite faces of partial B cup T (see Figure 2). We recurse on the bricks B_i, obtaining graphs H_i subseteq B_i, and return H := bigcup_{i=1}^r H_i. We can prove that this decomposition yields a polynomial bound on |H| if (i) all bricks B_i have multiplicatively smaller perimeter than B, and (ii) the sum of the perimeters of the subbricks is linear in the perimeter of B.
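To see why conditions (i) and (ii) give a polynomial bound, here is a rough calculation (a sketch only, assuming (i) yields a fixed constant factor c < 1 and (ii) a fixed constant alpha >= 1): the assumptions

\[ |\partial B_i| \le c\,|\partial B| \qquad\text{and}\qquad \sum_i |\partial B_i| \le \alpha\,|\partial B| \]

imply that the recursion depth is at most \log_{1/c}|\partial B| and that the total perimeter of the bricks at recursion depth d is at most \alpha^{d}\,|\partial B|, so the total perimeter over the whole recursion tree is at most

\[ \sum_{d=0}^{\log_{1/c}|\partial B|} \alpha^{d}\,|\partial B| \;=\; |\partial B|^{\,O(1)}, \]

i.e., polynomial in |\partial B|, which is the heart of the polynomial bound on |H|.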

In this approach, there are two clear issues that need to be solved. The first issue is that we need an algorithm to decide whether there is a tree T for which the induced set of subbricks satisfies conditions (i) and (ii). We design a dynamic programming algorithm that either correctly decides that no such tree exists, or finds a set of subbricks of B that satisfies conditions (i) and (ii). In the latter case, we can recurse on each of those subbricks.

Figure 3. An optimal Steiner tree that connects a set of vertices on the perimeter of B and that consists of two small trees T_{1},T_{2} connected by a long path P; note that both bricks neighbouring P may have perimeter very close to |partial B|.

The second issue is that there might be no tree T for which the induced set of subbricks satisfies conditions (i) and (ii). In this case, optimal Steiner trees, which are natural candidates for such partitioning trees T, behave in a specific way. For example, consider the tree of Figure 3, which consists of two small trees T_1, T_2 that lie on opposite sides of the brick B and are connected through a shortest path P (of length slightly less than |partial B|/2). Then both faces of partial B cup T that neighbour P may have perimeter almost equal to |partial B|, thus blocking our default decomposition approach.

Figure 4. A cycle C that (in particular) hides the small trees T_1,T_2 in the ring between C and partial B, and a subsequent decomposition of B into smaller bricks.

To address this second issue, we propose a completely different decomposition - the pineapple decomposition. Intuitively, we find a cycle C of length linear in |partial B| that lies close to partial B, such that all vertices of degree three or more of any optimal Steiner tree are hidden in the ring between C and partial B (see Figure 4). We then decompose the ring between partial B and C into a number of smaller bricks. We recursively apply Theorem 2 to these bricks, and return the result of these recursive calls together with a set of shortest paths inside C between every pair of vertices on C. The main technical difficulty is to prove that such a cycle C exists. If you would like to learn more about how it works, you can still attend our talk during the coming FOCS in Philadelphia on Sunday at 11:05, or have a look at the full version of the paper on arXiv. In addition to the above result, the paper contains similar results for the planar Steiner forest problem and planar edge multiway cut, as well as some generalizations of these results to weighted graphs.

FPT algorithms and syphilis

A long, long time ago, actually in 1943, the US Army was recruiting a lot of soldiers. Each of the recruits had to undergo a medical examination, and in particular they were tested for syphilis. However, performing a single test was quite expensive, so they came up with the following idea. Pick blood samples from a group of soldiers, mix them into one big sample, and perform the test on it. If the test is positive, there is at least one infected soldier in the group. But if it is negative, we know that all the soldiers in the group are healthy, and we have just saved a lot of tests. It then becomes an interesting problem to devise a method which uses this observation to find the exact group of infected recruits among all the candidates, with some nice bound on the number of tests. Without any additional assumptions there is not much you can do (exercise: do you see why?), but in this case we expect that the number of infected candidates is quite small. Then in fact you can save a lot, and this story gave rise to the whole area called group testing.

Group testing is a rich field and includes a variety of models. Let us focus on the following one. We are given a universe U and we want to find a hidden subset S of U. We can acquire information only by asking queries to an intersection oracle: for a given subset A subseteq U, Intersects(A) answers true if and only if A has a nonempty intersection with S. Moreover, we can decide which set to query based on the previous answers. The goal is to use few queries. There are many algorithms for this problem, but I am going to describe my favourite one, called the bisecting algorithm. It dates back to the early seventies and is due to Hwang, one of the fathers of combinatorial group testing. As you may expect from the name, it is a generalization of binary search. A simple observation is that once Intersects(A) answers false, we can safely discard A; otherwise we know that A contains at least one element of S. So assume that in the algorithm we use a CanDiscard oracle, implemented simply as CanDiscard(A) = not Intersects(A). The algorithm works as follows.


So, basically, we maintain a partition of the universe (initially equal to U) with the property that every set in the partition contains at least one element of S. We examine the sets of the partition one by one. Each set is divided into two halves, and we query whether we can discard one of them. At some point we query a singleton, and then we either discard it or find an element of S.

How many queries do we perform? Let |U|=n and |S|=k. Call a query positive if the answer is yes, and negative otherwise. For a negative query CanDiscard(A) we know that there is some x in A cap S; assign this query to x. Note that for every x in S there are O(log n) queried sets assigned to it, because if we consider the queries in the order of asking them, every such set is roughly twice smaller than the previous one. So there are O(k log n) negative queries. Every set A from a positive query is a half of a set from a (different!) negative query, so the total number of positive queries is bounded by the total number of negative ones. Hence we have O(k log n) queries in total. A slightly more careful analysis (see Lemma 2.1 here) gives O(k log(n/k)). Cool, isn't it? Especially given that we hopefully do not expect a large fraction of infected soldiers...
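Here is a minimal Python sketch of the bisecting algorithm described above, with the oracle simulated by a hidden set (an illustration, not the exact pseudocode from Hwang's paper):

    def bisect_search(universe, intersects):
        """Find the hidden set S given an Intersects oracle:
        intersects(A) is True iff A has a nonempty intersection with S."""
        can_discard = lambda A: not intersects(A)
        universe = list(universe)
        found = []
        # invariant: every block on the stack contains at least one element of S
        stack = [universe] if intersects(universe) else []
        while stack:
            block = stack.pop()
            if len(block) == 1:
                found.append(block[0])          # a singleton that cannot be discarded
                continue
            mid = len(block) // 2
            half, rest = block[:mid], block[mid:]
            if can_discard(half):
                stack.append(rest)              # rest must still intersect S
            else:
                stack.append(half)              # half intersects S
                if not can_discard(rest):
                    stack.append(rest)          # and so does rest
        return found

    # toy usage: three "infected" elements hidden among 100
    hidden = {3, 17, 42}
    print(sorted(bisect_search(range(100), lambda A: any(x in hidden for x in A))))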

Great, but is there a connection between all of that and the FPT algorithms from the title? Yes, there is. Consider for example the k-PATH problem: given a graph and a number k, we want to find a path of length k. The corresponding decision problem is NP-complete, as a generalization of Hamiltonian Path. However, it turns out that when k is small, you can solve it pretty fast. Perhaps you know the famous Color Coding algorithm of Alon, Yuster and Zwick, which solves it in time O((2e)^k n^{O(1)}). However, one can do better: Björklund, Husfeldt, Kaski and Koivisto presented an O(1.66^k n^{O(1)})-time Monte Carlo algorithm! The only catch is that it solves only the decision problem. Indeed, it uses the Schwartz-Zippel lemma, and when it answers YES, there is no way to trace back the path from the computation.

Now let the universe U be the edge set of our graph. We want to find one of the (possibly many) k-subsets of U corresponding to k-edge paths in our graph, and the algorithm of Björklund et al. is an inclusion oracle, which tells us whether a given set of edges contains one of these subsets. So this is not exactly the same problem as before, but it sounds pretty similar... Indeed, again we can implement the CanDiscard oracle, i.e., CanDiscard(A) = not Includes(U setminus A). So it seems we can use the bisecting algorithm to find a k-path with only O(k log(n/k)) queries to the decision algorithm! Correct?

Well, not exactly. The problem is that the oracle is a Monte Carlo algorithm; more precisely, it reports false negatives with probability at most, say, 1/4. Together with Andreas Björklund and Petteri Kaski we showed that a surprisingly simple patch to the bisecting algorithm works pretty nicely in the randomized setting. The patch is as follows. Once the bisecting algorithm finishes in the randomized setting, we have a superset of a solution. Then, as long as we have more than k candidate elements, we pick them one by one, in a FIFO manner, and check whether we can discard this single element. We show that the expected number of queries is O(k log n). (Actually, we conjecture this is optimal, i.e., that you have to lose a bit compared to the slightly better O(k log(n/k)) bound of the deterministic setting.) As a consequence, we get a pretty fast implementation of finding k-paths. For example, a (unique) 14-vertex path is found in a 1000-vertex graph well below one minute on a standard PC. Not bad for an NP-complete problem, I would say. Let me add that the Schwartz-Zippel approach is used in a number of FPT algorithms, and in many cases the corresponding search problem can be cast in the inclusion oracle model mentioned above. Examples include k-packing, Steiner cycle, rural postman, graph motif and more.
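A minimal sketch of the FIFO re-testing patch described above, assuming a decision oracle includes(B) that may return a false "no" but never a false "yes" (names are illustrative, not from the paper):

    from collections import deque

    def prune_candidates(candidates, k, includes):
        """candidates: the superset of a solution returned by the bisecting algorithm;
        includes(B): Monte Carlo test whether B still contains a solution.
        Re-test single elements in FIFO order until at most k candidates remain."""
        queue = deque(candidates)
        while len(queue) > k:
            x = queue.popleft()
            if includes(list(queue)):   # the others still contain a solution: discard x
                continue
            queue.append(x)             # (possibly false) "no": keep x and retry later
        return list(queue)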

If you want to learn more, see the paper or slides from my recent ESA talk!

Efficiency of Random Priority

Matchings are one of the most basic primitives used in the area of mechanism design. Whenever we need to assign items to agents, we view it as a matching problem. Of course, plenty of variations differing in constraints and objectives can be considered, but let us look at the simplest variant possible. We need to assign n students to n dorms. Students have preferences over dorms. We, the administration, would like to find an allocation that meets the students' preferences as well as possible. We cannot use money in deploying the mechanism. This problem has been considered many times in the literature, but in a setting where the preferences of a student a in A are given by an ordering preceq_a over the set I of dorms. In this setting it was shown that there exists only one truthful, non-bossy and neutral mechanism. This unique mechanism is called Serial Dictatorship and works as follows: first, the agents are sorted in a fixed order, then the first agent chooses his favorite item, the next agent chooses his favorite item among the remaining items, and so on. Moreover, its randomized variant called Random Serial Dictatorship (RSD), which scans the agents in a random order, possesses further nice properties: it is symmetric and ex post efficient, i.e., it never outputs a Pareto-dominated outcome. This mechanism is more often called Random Priority, hence the title of this entry.
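A minimal sketch of Random Serial Dictatorship as described above (ordinal preferences; the data layout and names are illustrative):

    import random

    def random_serial_dictatorship(agents, items, pref):
        """Agents are scanned in a uniformly random order; each picks the most
        preferred item that is still available. pref[a] lists a's items, best first."""
        remaining = set(items)
        order = list(agents)
        random.shuffle(order)               # the random priority order
        assignment = {}
        for a in order:
            for item in pref[a]:
                if item in remaining:
                    assignment[a] = item
                    remaining.remove(item)
                    break
        return assignment

    # toy usage: three students, three dorms
    prefs = {"ann": ["d1", "d2", "d3"], "bob": ["d1", "d3", "d2"], "eve": ["d2", "d1", "d3"]}
    print(random_serial_dictatorship(prefs.keys(), ["d1", "d2", "d3"], prefs))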

However, one can imagine a setting where the preferences of an agent are not given by an ordering of the items, but rather we are given values $w(a,i) \in [0,1]$ telling how much agent $a$ values item $i$. In this model we can quantify the social welfare of an assignment that a mechanism finds: the welfare of an assignment $\left\{(a_{j},i_{\phi(j)})\right\}_{j=1}^{n}$ is just the sum $\sum_{j=1}^{n} w(a_{j},i_{\phi(j)})$. Together with Piotr Sankowski and Qiang Zhang we decided to investigate the problem of comparing the social welfare obtained by RSD with the optimum offline welfare, by which we mean the maximum weight matching in the underlying graph. We assume that when RSD asks an agent to make his choice, he picks an item he values the most; we also assume that an agent resolves ties at random. At first sight, it might seem that obtaining any meaningful bound on the approximation factor of RSD is impossible. Just look at the example in the Figure.


For items $i_{2},i_{3},\ldots,i_{n}$ each agent has value 0; that is why these edges are not in the Figure. The optimal social welfare is obviously 1. However, the first agent approached by RSD will always pick item $i_{1}$, and therefore the expected social welfare of RSD is $\frac{1}{n}+\frac{n-1}{n}\varepsilon\approx\frac{1}{n}$. To overcome this problem, we asked two questions. First, what if the valuations of the agents are always either 0 or 1? Here the hard example does not apply anymore, provided that ties are resolved randomly. Second, what if the optimal social welfare is large compared to n? One can see that in the extreme case $OPT=n$, RSD finds welfare of value at least n/2 (an inclusion-maximal matching), so it is not bad either.

0/1 preferences

In the case where the preferences are 0 and 1, we can look only at the 1-edges of the underlying graph. Then we are given an unweighted graph and we just want to maximize the size of the constructed matching. We stress that the random tie-breaking rule is crucial here. That is, we require that when RSD asks an agent who has value 0 for all remaining items, this agent picks one of the remaining items at random. Without this assumption the hard example from the Figure still works --- suppose each of the agents $a_{2},a_{3},\ldots,a_{n}$ values all items 0, but each of them always chooses $i_{1}$.

In this model, RSD is a $\frac{1}{3}$-approximation, and below we show why. Instead of the clunky "agent has value 1 for an item", we shall say shortly "agent 1-values an item". Consider step $t+1$ of the algorithm. Some agents and items have been removed from the instance, so let $OPT^{t}\subset OPT$ be what remains of the optimal solution after $t$ steps of RSD. Also, let $RSD^{t}$ be the partial matching constructed by RSD after $t$ steps, and let $\nu(RSD^{t})$ be its welfare. It can happen that at time $t$ an agent does not 1-value any of the remaining items, even though he may have 1-valued some of the items initially. Thus let $z^{t}$ be the number of agents who 0-value all remaining items. Let $a$ be the agent who is chosen by RSD in step $t+1$ to pick an item. With probability $1-\frac{z^{t}}{n-t}$ agent $a$ 1-values at least one item. If so, then the welfare of RSD increases by 1, i.e., $\nu(RSD^{t+1})=\nu(RSD^{t})+1$. Hence
\[ \mathbb{E}\left[\nu(RSD^{t+1})\right]=\mathbb{E}\left[\nu(RSD^{t})\right]+1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}. \]
What is the expected number of edges we remove from $OPT^{t}$? That is, what is the expected loss $\mathbb{E}\left[\nu(OPT^{t})-\nu(OPT^{t+1})\right]$? Again, with probability $1-\frac{z^{t}}{n-t}$ agent $a$ picks an item $i$ he 1-values. Both agent $a$ and item $i$ may belong to $OPT^{t}$, in which case $OPT^{t}$ loses two edges; otherwise $OPT^{t}$ loses at most one edge. Now suppose that agent $a$ 0-values all remaining items, which happens with probability $\frac{z^{t}}{n-t}$. Such an agent picks an item at random from all remaining items, so with probability $\frac{\nu(OPT^{t})}{n-t}$ he picks an item that belongs to $OPT^{t}$; if so, $OPT^{t}$ loses one edge, and otherwise $OPT^{t}$ does not lose anything. We have $\nu(OPT^{t})+z^{t}\leq n-t$, so $\frac{\nu(OPT^{t})}{n-t}\leq 1-\frac{z^{t}}{n-t}$, and hence the expected decrease is
\begin{align*}
\mathbb{E}\left[\nu(OPT^{t})-\nu(OPT^{t+1})\right] &\leq \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\frac{\nu(OPT^{t})}{n-t}\right] \\
&\leq \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\left(1-\frac{z^{t}}{n-t}\right)\right] \\
&\leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right).
\end{align*}
Therefore,
\[ \mathbb{E}\left[\nu(OPT^{t})-\nu(OPT^{t+1})\right] \leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right) = 3\cdot\mathbb{E}\left[\nu(RSD^{t+1})-\nu(RSD^{t})\right], \]
and by summing over $t$ from 0 to $n-1$ we conclude that
\[ \nu(OPT)=\mathbb{E}\left[\nu(OPT^{0})-\nu(OPT^{n})\right] \leq 3\cdot\mathbb{E}\left[\nu(RSD^{n})-\nu(RSD^{0})\right] = 3\cdot\mathbb{E}\left[\nu(RSD)\right]. \]

Connection to online bipartite matching

As we can see, the main problem for RSD in the analysis are the agents who have value 0 for all remaining items. If we could assume that such an agent reveals the fact that he 0-values the remaining items, then we could discard this agent, and the above analysis would give a $\frac{1}{2}$-approximation. But in this model we could actually implement a mechanism based on the RANKING algorithm of Karp, Vazirani and Vazirani, and this would give an approximation factor of $1-\frac{1}{e}\approx 0.63$. In fact, it is also possible to use the algorithm of Mahdian and Yan (2013), and this gives a factor of approximately 0.69.

Big OPT

Now let us go back to the model with arbitrary real valuations from the interval $[0,1]$. Here we want to obtain an approximation factor that depends on $\nu(OPT)$: for $\nu(OPT)=1$ the ratio should be around $\frac{1}{n}$, while for $\nu(OPT)\approx n$ it should be constant. This suggests that we should aim at an approximation factor of $\Theta\left(\frac{\nu(OPT)}{n}\right)$. Let us first show an upper bound that cannot be beaten by any mechanism. Take any integer $k$ and copy the instance from the Figure $k$ times. Let agents 0-value items from different chunks, so that the social welfare of any mechanism is the sum of its welfares on all the chunks. On any single chunk no mechanism can get an outcome better than $\frac{1}{n}+\varepsilon$, hence no mechanism can do better than $k\cdot\left(\frac{1}{n}+\varepsilon\right)\approx\frac{k^{2}}{n\cdot k}$ on the whole instance. Here $\nu(OPT)=k$, and the number of agents is $n\cdot k$ this time. Therefore, the asymptotic upper bound on RSD's welfare is $\frac{\nu(OPT)^{2}}{n}$, where $n$ is the number of agents. What is quite interesting, we can prove that RSD is only a $\frac{1}{e}$ fraction away from this upper bound. That is, we show that the expected outcome of RSD is at least $\frac{1}{e}\cdot\frac{\nu(OPT)^{2}}{n}$. This, however, we shall not prove here. If you are interested in where this shapely constant of $\frac{1}{e}$ comes from, please have a look at our arXiv report.

SAGT'14

It turns out that this natural problem was at the same time investigated independently by Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen and Jie Zhang. They obtained similar results for general [0,1] valuations. Both their paper and ours have been accepted to SAGT'14, where we will give a joint presentation of the results. So if you are going to the symposium, please give us the pleasure of attending our talk.

Is it easier for Europeans to get to FOCS than to STOC?

I have recently realized that I have published 9 papers at FOCS and only 1 paper at STOC. I started wondering why this happened. I certainly have been submitting more papers to FOCS than to STOC - the ratio is currently 14 to 6. However, this alone does not explain the numbers of accepted papers: my acceptance rate for FOCS is way higher than for STOC - 71% vs 17%. These numbers started looking this extreme after two of my papers were accepted to this year's FOCS:

  • Network Sparsification for Steiner Problems on Planar and Bounded-Genus Graphs, by Marcin Pilipczuk, Michał Pilipczuk, Piotr Sankowski and Erik Jan van Leeuwen
  • Online bipartite matching in offline time, by Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski and Anna Zych

All this is due to the fact that I work in a "European way", i.e., I do most of my scientific work during winter and spring, and my summers are much less work-intensive. For example, when I stayed in Italy, the department was even closed during August, so even if one wanted to come to work, it was impossible. Poland is not that extreme, but much less happens during the summer, e.g., we do not have regular seminars.

Altogether, even if I have some idea during the summer, I almost never manage to write it down well enough before the STOC deadline. Even if I do submit something to STOC, it usually gets rejected due to bad presentation. This considerably decreases my success rate at STOC. I wonder whether this is more universal and indeed there are more papers authored by Europeans at FOCS than at STOC, i.e., whether FOCS is easier for Europeans.

Our t-shirts

The shortest path problem is one of the central problems in algorithmic research. The main setup is to find the distances from a given start vertex to all other nodes in the graph. Probably the most important results here are the Dijkstra and Bellman-Ford algorithms, both from around 1960. Dijkstra considered directed graphs with non-negative edge weights; his algorithm can be implemented to run in O(m + n log n) time using Fibonacci heaps. On the other hand, Bellman-Ford solved the directed problem with negative weights in O(nm) time. You might note that for non-negative weights the shortest path problem in undirected graphs can be solved via a reduction to the directed case, so Dijkstra is applicable here as well; there even exists a faster, linear-time algorithm for undirected graphs with integral weights, due to Thorup [1]. Similarly, for the case of directed graphs with negative weights there has been some progress. First, in the 80s two scaling algorithms were given [2,3]; in these algorithms one assumes that edge weights are integral with absolute value bounded by W, and the running times are roughly O(m sqrt{n} log W). Second, in 2005 two algebraic algorithms working in O(W n^{omega}) time (up to polylogarithmic factors) were presented [4,5], where omega is the matrix multiplication exponent. There is even more work in between these results that I did not mention, and many more papers studying all-pairs shortest paths or special cases like planar graphs. However, we have not mentioned the undirected shortest path problem on graphs with negative weight edges yet... Is this problem actually solvable in polynomial time? Yes, it is: this was shown by Edmonds in '67 via a reduction to matchings [6]. These notes are probably lost, or at least hard to access, but the reduction is given in Chapter 12.7 of [7]. Let us recall it. We will essentially show that in order to find the distance from a fixed source s to a fixed sink t, one needs to solve the minimum weight perfect matching problem once.
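As an aside, the reduction mentioned above for non-negative weights is simply to replace every undirected edge by two opposite arcs; a minimal sketch (standard Dijkstra, nothing specific to the results discussed here):

    import heapq

    def undirected_dijkstra(n, edges, source):
        """Single-source shortest paths in an undirected graph with non-negative weights.
        edges: list of (u, v, w); every undirected edge {u, v} acts as the two arcs
        (u, v) and (v, u), which is exactly the reduction to the directed case."""
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            adj[u].append((v, w))
            adj[v].append((u, w))
        dist = [float("inf")] * n
        dist[source] = 0
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

With negative edges this reduction breaks down immediately: a single undirected edge of negative weight becomes a directed cycle of negative total weight, which is exactly why the matching-based approach below is needed.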

Let G=(V,E) be an undirected graph, let w be the edge weight function, and let E^- be the set of edges with negative weights. We define a split graph of G, together with a weight function on it, that models paths in G by almost perfect matchings.

An undirected graph G and its split graph are shown in the figure below.


In the figure, zigzag edges weigh -1, dashed edges weigh 1, and the remaining edges weigh 0. Vertices corresponding to negative edges of G are white squares. The far right shows a matching of weight -2, which corresponds to a shortest path between a and c. Note how a length-two path in G, say a,b,c with w(ab) >= 0 > w(bc), corresponds to a matching in the split graph of the same total weight.

An important property is that we can assume that the negative edges form a forest, and hence |E^-| < |V|; otherwise the set of negative edges would contain a cycle, which would necessarily be a negative cycle.

Let us consider minimum weight perfect matchings in the split graph; we can then observe the following.

Lemma 1.
Let u,v in V, let M be a minimum weight perfect matching in the split graph, and let M_{u,v} be a minimum weight almost perfect matching in the split graph that leaves the vertices corresponding to u and v unmatched. If G does not contain negative weight cycles, then the shortest path weight from u to v in G is equal to the difference of the weights of M_{u,v} and M.

Hence, in order to solve the same problem that Bellman and Ford did, i.e., to compute the distances from a given source to all other nodes, we need to run a matching algorithm O(n) times. This does not seem right. Even using the fast implementation of Edmonds' weighted matching algorithm [8], a single matching computation needs O(n(m + n log n)) time, whereas the Bellman-Ford algorithm works in O(nm) time altogether. Something seems to have been lost here, or at least overlooked through the years. Searching through the literature, one can find partial answers that shed some light on the structure of such shortest paths. Sebö characterized the structure of single-source shortest paths in undirected graphs, first for graphs with ±1 edge weights [9] and then, by reduction, for general weights [10]. Equation (4.2) of [9] (for ±1 weights, plus its version obtained by reduction for arbitrary weights) characterizes the shortest paths from a fixed source in terms of how they enter and leave the "level sets" determined by the distance function. However, this is only a partial answer, as it does not show what the shortest path "tree" looks like, nor does it give an efficient way to compute it. We write "tree" because one might observe that the shortest paths do not necessarily form a tree - as shown in the figure below.


So what do shortest paths look like? Is there some notion of a shortest path tree here? If so, is this shortest path tree of O(n) size? In our recent paper with Hal Gabow, available on arXiv, we give answers to these questions. For undirected shortest paths we get a somewhat simple definition of a generalized shortest-path tree - see Section 6.1 of our paper. (It seems to us that such a definition may have been overlooked due to the reliance on reductions.) The generalized shortest-path tree is a combination of the standard shortest-path tree and the matching blossom tree. This is not so astonishing if you recall the above reduction to matchings in general graphs. Examining the blossom structure of the resulting graph enables us to define our generalized shortest-path tree that, like the standard shortest-path tree for directed graphs, specifies a shortest path to every vertex from a chosen source. We give a complete derivation of the existence of this shortest-path structure, as well as algebraic and combinatorial algorithms to construct it; the resulting time bounds are all within logarithmic factors of the best-known bounds for constructing the directed shortest-path tree with negative weights, which essentially settles the problem.

[1] M. Thorup, Undirected single-source shortest paths with positive integer weights in linear time, JACM, 46(3):362--394, 1999.

[2] H. N. Gabow and R. E. Tarjan, Faster scaling algorithms for network problems, SIAM Journal on Computing, 18(5):1013--1036, 1989.

[3] A. V. Goldberg, Scaling algorithms for the shortest paths problem, SODA '93.

[4] R. Yuster and U. Zwick, Answering distance queries in directed graphs using fast matrix multiplication, FOCS'05.

[5] P. Sankowski,  Shortest Paths in Matrix Multiplication Time, ESA'05.

[6] J. Edmonds, An introduction to matching. Mimeographed notes, Engineering Summer Conference, U. Michigan, Ann Arbor, MI, 1967.

[7] R. K. Ahuja, T. L. Magnanti and J.B. Orlin, Network Flows: Theory, Algorithms, and Applications, Prentice Hall, 1993.

[8] H. N. Gabow, Data Structures for Weighted Matching and Nearest Common Ancestors with Linking, SODA'90.

[9] A. Sebö, Undirected distances and the postman-structure of graphs, J. Combin. Theory Ser. B, 49(1):10--39, 1990.

[10] A. Sebö, Potentials in Undirected Graphs and Planar Multiflows, SIAM J. Comput., 26(2):582--603, 1997.
We have recently created two algorithmic t-shirts for our group. One can order them here: http://corner.cupsell.pl/, or let us know if you are interested in having one - we will be ordering a bulk of them soon. The first graphic relates to the Wonder Twins, whereas the second one is more obvious.

Algorithmic Powers Activate

 

Superalgorithm

Local search for k-Set Packing

Back in 2012, when I was a post-doc in Lugano, Switzerland, I was working with Fabrizio Grandoni and Monaldo Mastrolilli on a pricing problem we called the Hypermatching Assignment Problem. This problem is at the same time a generalization of the Generalized Assignment Problem and of the k-Set Packing problem, which brought my attention to the approximation status of the latter - the subject of this post.

k-Set Packing is a basic combinatorial optimization problem where, given a universe U and a family F of its subsets, we are to find a maximum-size subfamily of pairwise disjoint sets from F. A special case of 3-Set Packing is the 3-Dimensional Matching problem. A good argument showing that these two problems are indeed classic is the fact that both belong to Karp's list of 21 NP-complete problems.

A local search routine tries to locally modify the current solution in order to obtain a better one. For k-Set Packing, by p-local search we denote an algorithm which starts with an empty solution A and repeatedly tries to improve it by adding a collection A_0 of at most p new sets such that the number of sets that have to be removed from A in order to keep the solution feasible (i.e., pairwise disjoint) is strictly smaller than the number of sets added. Given a local search maximum, our hope is to prove that an optimum solution is not much better.
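A minimal sketch of the p-local search routine just described, with the improvement step implemented by brute force over all collections of at most p candidate sets (an illustration that is exponential in p, not the algorithms from the papers discussed below):

    from itertools import combinations

    def p_local_search(family, p):
        """family: list of sets over some universe. Returns a packing (list of pairwise
        disjoint sets) that cannot be improved by adding at most p sets and removing
        strictly fewer."""
        solution = []
        improved = True
        while improved:
            improved = False
            for r in range(1, p + 1):
                for addition in combinations(family, r):
                    picked = [s for s in addition if s not in solution]
                    if len(picked) < r:
                        continue
                    if any(a & b for a, b in combinations(picked, 2)):
                        continue                      # added sets must be pairwise disjoint
                    union = set().union(*picked)
                    conflicting = [s for s in solution if s & union]
                    if len(conflicting) < len(picked):
                        solution = [s for s in solution if not (s & union)] + picked
                        improved = True               # net gain: perform the swap
                        break
                if improved:
                    break
        return solution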


It is easy to see that a 1-local search maximum gives an inclusionwise maximal subfamily of F, which in turn implies an approximation ratio of k. It is not hard to show that a 2-local search maximum yields a (k+1)/2 approximation ratio. In 1989, Hurkens and Schrijver [SIDMA'89] showed that with growing values of p, a p-local search maximum gives a (k+eps_p)/2 approximation, where eps_p is a constant depending on p that converges to 0. Up to now this was the best known polynomial time approximation algorithm for both k-Set Packing and 3-Dimensional Matching.

Later, in [SODA'95], Halldorsson proved that when we consider logarithmic values of p, the approximation ratio is upper bounded by (k+2)/3; this was slightly improved in our Hypermatching Assignment Problem paper down to (k+1+eps)/3, showing that in quasipolynomial time one can break the 1.5 approximation ratio for 3-Dimensional Matching. A natural question is the following: can we implement the O(log n)-local search in polynomial time? This brings us to the area of fixed-parameter tractability, where we aim at algorithms running in time 2^{O(r)} poly(|F|) for local search with improving sets of size r. Unfortunately, we have shown that this parameterized local search problem is W[1]-hard, even in the more relaxed permissive variant, which means that an f(r) poly(|F|) time algorithm is unlikely to exist, no matter how fast-growing the function f is.

Therefore we have to modify our strategy. In our second take we try the following. First, we inspect the existing analyses of the approximation ratio upper bounds to see what exact set of swaps the current solution has to be impossible to improve with, in order for the proof to go through. Next, we want to show that for this particular set of swaps the local search algorithm can be implemented efficiently.

In the analysis of Halldorsson, the notion of a conflict graph is used. Imagine a bipartite graph with the sets of our current solution A on one side and the sets of F setminus A on the other, where a set of A and a set of F setminus A are connected by an edge if they share a common element.


Note that an edge indicates that it is impossible to have both sets in a solution at the same time. The main part of the analysis of Halldorsson is showing that if (k+2)/3 |A| < |OPT|, then there exists a subset X of F setminus A of size O(log n) such that |N(X)| < |X| in the conflict graph, that is, the number of sets that need to be removed after adding the sets of X is strictly smaller than the size of X itself. However, when inspected more closely, the set X has some structure: G[N[X]] is a subdivision of one of the following graphs:

This means that in our local search algorithm we do not have to look for all improving sets X of logarithmic size; it is enough to consider sets X such that G[N[X]] has constant pathwidth! In this way, by using the color coding technique of Alon, Yuster and Zwick [JACM'95], we obtain a polynomial time (k+2)/3 approximation algorithm. Unfortunately, this is not yet enough to break the 1.5 approximation ratio for 3-Dimensional Matching, as the existing upper bound of (k+1+eps)/3 in fact uses improving sets of unbounded pathwidth. This is the main technical hurdle, and overcoming it required proving lemmas relating the pathwidth, average degree and girth of a graph. Finally, we have obtained a (k+1+eps)/3 polynomial time approximation algorithm for k-Set Packing, implying a 1.34-approximation for 3-Dimensional Matching.

The described results were obtained by Maxim Sviridenko & Justin Ward, published at ICALP'13, and by myself, to appear in FOCS'13 and available on arxiv. Since the papers contain significant intersection of ideas, we decided to prepare a common journal version. Personally I am very happy about the described result, because it utilizes techniques from the parameterized complexity in a nontrivial way, to obtain an improvement in a over 20 years old approximation ratio of a basic problem.