# FPT algorithms and syphilis

A long, long time ago, actually in 1943, the US Army was recruiting a lot of soldiers. Each recruit had to undergo a medical examination, and in particular was tested for syphilis. However, performing a single test was quite expensive. Then somebody came up with the following idea. Pick blood samples from a group of soldiers, mix them into one big sample and perform the test on it. If the test is positive, there is at least one infected soldier in the group. But if it is negative, we know that all the soldiers in the group are healthy, and we have just saved a lot of tests. It then becomes an interesting problem to devise a method which uses this observation to find the exact group of infected recruits among all the candidates, with some nice bound on the number of tests. Without any additional assumptions there is not much you can do (exercise: do you see why?), but in this case we expect the number of infected candidates to be quite small. Then in fact you can save a lot, and this story gave rise to a whole area called group testing.

Group testing is a rich field and includes a variety of models. Let us focus on the following one. We are given a universe $U$ and we want to find a hidden subset $S$ of $U$. We can acquire information only by asking queries to the intersection oracle, i.e., for a given subset $A\subseteq U$, $\text{Intersects}(A)$ answers true if and only if $A$ has a nonempty intersection with $S$. Moreover, we can decide which set we query based on previous answers. The goal is to use few queries. There are many algorithms for this problem, but I am going to describe my favourite one, called the bisecting algorithm. It dates back to the early seventies and is due to Hwang, one of the fathers of combinatorial group testing. As you may expect from the name, it is a generalization of binary search. A simple observation is that once $\text{Intersects}(A)$ answers false, we can safely discard $A$, and otherwise we know that $A$ contains at least one element of $S$. So assume that in the algorithm we use a $\text{CanDiscard}$ oracle, implemented simply as $\text{CanDiscard}(A) = \text{not } \text{Intersects}(A)$. The algorithm works as in the animation below:

So, basically, we maintain a partition of the universe (initially equal to $U$) with the property that every set in the partition contains at least one element of $S$. We examine the sets of the partition one by one. Each set is divided into two halves and we query whether we can discard one of them. At some point we query a singleton and then we either discard it or find an element of $S$. I hope it is clear now, but let me paste a pseudocode as well. How many queries do we perform? Let $|U|=n$ and $|S|=k$. Call a query positive if the answer is Yes, and negative otherwise. For a negative query $\text{CanDiscard}(A)$ we know that there is some $x\in A\cap S$. Assign this query to $x$. Note that for every $x\in S$ there are $O(\log n)$ queried sets assigned to it, because if we consider the queries in the order of asking them, every set is roughly half the size of the previous one. So there are $O(k\log n)$ negative queries. Every set $A$ from a positive query is a half of a set from a (different!) negative query, so the total number of positive queries is bounded by the total number of negative ones. Hence we have $O(k\log n)$ queries in total. A slightly more careful analysis (see Lemma 2.1 here) gives $O(k\log \frac{n}{k})$. Cool, isn't it? Especially given that we hopefully do not expect a large fraction of infected soldiers...
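The bisecting algorithm can be sketched as follows (a toy Python transcription of the above, with names of my own choosing; `intersects` is the intersection oracle, and `CanDiscard` is simply its negation):

```python
from collections import deque

def bisect_search(universe, intersects):
    """Hwang-style bisecting algorithm (sketch): find the hidden set S inside
    `universe` using an intersection oracle. `intersects(A)` must return True
    iff A has a nonempty intersection with S."""
    found = []
    queue = deque()
    if intersects(universe):
        queue.append(list(universe))
    while queue:
        group = queue.popleft()          # invariant: group intersects S
        if len(group) == 1:
            found.extend(group)          # a singleton intersecting S is an element of S
            continue
        half = group[:len(group) // 2]
        rest = group[len(group) // 2:]
        # CanDiscard(half) == not intersects(half)
        if intersects(half):
            queue.append(half)
            if intersects(rest):         # rest may independently intersect S
                queue.append(rest)
        else:
            queue.append(rest)           # half discarded, so rest must intersect S
    return sorted(found)
```

With a reliable oracle this recovers $S$ exactly; the query count follows the accounting in the paragraph above.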

Great, but is there a connection between all of that and the FPT algorithms from the title? Yes, there is. Consider for example the k-PATH problem: given a graph and a number $k$, we want to find a path of length $k$. The corresponding decision problem is NP-complete, as a generalization of Hamiltonian Path. However, it turns out that when $k$ is small, you can solve it pretty fast. Perhaps you know the famous Color Coding algorithm of Alon, Yuster and Zwick, which solves it in $O((2e)^k n^{O(1)})$ time. However, one can do better: Björklund, Husfeldt, Kaski and Koivisto presented an $O(1.66^k n^{O(1)})$-time Monte Carlo algorithm! The only catch is that it solves only the decision problem. Indeed, it uses the Schwartz-Zippel lemma, and when it answers YES, there is no way to trace back the path from the computation.

Now, let the universe $U$ be the edge set of our graph. We want to find one of the (possibly many) $k$-subsets of $U$ corresponding to $k$-edge paths in our graph, and the Björklund et al. algorithm is an inclusion oracle, which tells you whether a given set of edges contains one of these subsets. So this is not exactly the same problem as before, but it sounds pretty similar... Indeed, again we can implement the $\text{CanDiscard}$ oracle, i.e., $\text{CanDiscard}(A) = \text{not } \text{Includes}(U\setminus A)$. So it seems we can use the bisecting algorithm to find a $k$-path with only $O(k\log \frac{n}{k})$ queries to the decision algorithm! Correct?

Well, not exactly. The problem is that the oracle is a Monte Carlo algorithm; more precisely, it reports false negatives with probability at most, say, 1/4. Together with Andreas Björklund and Petteri Kaski we showed that a surprisingly simple patch to the bisecting algorithm works pretty well in the randomized setting. The patch is as follows. Once the bisecting algorithm finishes in the randomized setting, we have a superset of a solution. Then, as long as we have more than $k$ candidate elements, we pick them one by one, in a FIFO manner, and check whether we can discard that single element. We show that the expected number of queries is $O(k\log n)$. (Actually, we conjecture this is optimal, i.e., you have to lose a bit compared to the slightly better $O(k\log \frac{n}{k})$ bound of the deterministic setting.) As a consequence, we get a pretty fast implementation of finding $k$-paths. For example, a (unique) 14-vertex path is found in a 1000-vertex graph well below one minute on a standard PC. Not bad for an NP-complete problem, I would say. Let me add that the Schwartz-Zippel approach is used in a number of FPT algorithms, and in many cases the corresponding search problem can be cast in the inclusion oracle model mentioned above. Examples include k-packing, Steiner cycle, rural postman, graph motif and more.
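The FIFO pruning patch can be sketched as follows (a toy transcription of my own, not code from the paper; `includes` stands for the inclusion oracle, which in the real setting may report false negatives, in which case the FIFO order simply retries the element later):

```python
from collections import deque

def prune_candidates(candidates, k, includes):
    """FIFO pruning phase (sketch). `candidates` is a superset of some
    k-element solution; `includes(A)` answers whether A still contains a
    full solution."""
    queue = deque(candidates)
    while len(queue) > k:
        x = queue.popleft()
        if includes(set(queue)):   # x is redundant: drop it for good
            continue
        queue.append(x)            # x seems needed (or the oracle erred): retry later
    return set(queue)
```

With a perfect oracle the loop strips exactly the redundant candidates and returns the solution itself.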

If you want to learn more, see the paper or slides from my recent ESA talk!

# Efficiency of Random Priority

Matchings are one of the most basic primitives used in the area of mechanism design. Whenever we need to assign items to agents, we view it as a matching problem. Of course, plenty of variations differing in constraints and objectives can be considered, but let us look at the simplest variant possible. We need to assign $n$ students to $n$ dorms. Students have preferences over dorms. We, the administration, would like to find an allocation that meets students' preferences as much as possible. We cannot use money in deploying the mechanism. This problem has been considered many times in the literature, but in a setting where the preferences of student $a\in A$ are given by an ordering $\preceq_{a}\subseteq I\times I$ over the set $I$ of dorms. In this setting it was shown that there exists only one truthful, non-bossy and neutral mechanism. This unique mechanism is called Serial Dictatorship and it works as follows. First, agents are sorted in a fixed order; then the first agent chooses his favorite item, the next agent chooses his favorite item among the remaining items, and so on. Moreover, its randomized variant, called Random Serial Dictatorship (RSD), which scans agents according to a random order, possesses further nice properties of being symmetric and ex post efficient, i.e., it never outputs Pareto dominated outcomes. This mechanism is more often called Random Priority, hence the title of this entry.
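As a toy illustration, the randomized variant can be sketched as follows (the dictionary encoding of preferences and all names are my own choices, not from any of the papers discussed):

```python
import random

def random_serial_dictatorship(preferences, rng=random):
    """Random Serial Dictatorship (sketch): scan agents in a random order;
    each agent takes his favourite item among those still available.
    `preferences[a]` lists agent a's items from best to worst."""
    agents = list(preferences)
    rng.shuffle(agents)                       # the random priority order
    remaining = {i for pref in preferences.values() for i in pref}
    assignment = {}
    for a in agents:
        for item in preferences[a]:           # best item still available
            if item in remaining:
                assignment[a] = item
                remaining.discard(item)
                break
    return assignment
```

Truthfulness is intuitive here: an agent's report only matters at the moment he is called, and at that moment picking his true favourite among the remaining items is optimal.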

However, one can imagine a setting where the preferences of an agent are not given by an ordering of items, but rather where we are given values $w(a,i)\in[0,1]$ telling how much agent $a$ values item $i$. In this model we can quantify the social welfare of an assignment that a mechanism finds. Here, the welfare of an assignment $\left\{(a_{j},i_{\phi(j)})\right\}_{j=1}^n$ is just the sum $\sum_{j=1}^n w(a_{j},i_{\phi(j)})$. Together with Piotr Sankowski and Qiang Zhang we decided to investigate the problem of comparing the social welfare obtained by RSD with the optimum offline welfare. By optimum offline welfare we mean the maximum weight matching in the underlying graph. We assume that when RSD asks an agent to make his choice, he picks the item he values the most. We also assume that an agent resolves ties randomly. At first sight, it might seem that obtaining any meaningful result on the approximation factor of RSD is not possible. Just look at the example in the Figure.

For items $i_{2},i_{3},\ldots,i_{n}$ each agent has value 0; that is why these edges are not in the Figure. The optimal social welfare is obviously 1. However, the first agent approached by RSD will always pick item $i_{1}$, and therefore the social welfare of RSD is $\frac{1}{n}+\frac{n-1}{n}\varepsilon\approx\frac{1}{n}$. To overcome this problem, we asked two questions. First, what if the valuations of agents are always either 0 or 1? Here the hard example does not apply anymore, assuming ties are resolved randomly. Second, what if the optimal social welfare is big compared to $n$? One can see that in the extreme case $OPT=n$, RSD finds welfare of value at least $n/2$ (an inclusion-maximal matching), so it is not bad either.

0/1 preferences

In case the preferences are 0 and 1, we can look only at the 1-edges in the underlying graph. Then we are given an unweighted graph and we just want to maximize the size of the constructed matching. We stress that the random tie-breaking rule is crucial here. That is, we require that when RSD asks an agent who has value 0 for all remaining items, this agent picks one of the items at random. Without this assumption we can see that the hard example from the figure still works: suppose each of the agents $a_{2},a_{3},\ldots,a_{n}$ values all items 0, but each of them always chooses $i_{1}$.

In this model, RSD is a $\frac{1}{3}$-approximation, and below we show why. Instead of the clunky "agent has value 1 for an item", we shall say shortly "agent 1-values an item". Consider step $t+1$ of the algorithm. Some agents and items have been removed from the instance, so let $OPT^{t}\subseteq OPT$ be what remains of the optimal solution after $t$ steps of RSD. Also, let $RSD^{t}$ be the partial matching constructed by RSD after $t$ steps, and $\nu\left(RSD^{t}\right)$ be its welfare. It can happen that at time $t$ an agent does not 1-value any of the remaining items, even though he may have 1-valued some of the items initially. Thus let $z^{t}$ be the number of agents who 0-value all remaining items. Let $a$ be the agent chosen by RSD at step $t+1$ to pick an item. With probability $1-\frac{z^{t}}{n-t}$ agent $a$ 1-values at least one item. If so, the welfare of RSD increases by 1, i.e., $\nu\left(RSD^{t+1}\right)=\nu\left(RSD^{t}\right)+1$. Hence

\[\mathbb{E}\left[\nu\left(RSD^{t+1}\right)\right]=\mathbb{E}\left[\nu\left(RSD^{t}\right)\right]+1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}.\]

What is the expected number of edges we remove from $OPT^{t}$? That is, what is the expected loss $\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right]$? Again, with probability $1-\frac{z^{t}}{n-t}$ agent $a$ picks an item $i$ he 1-values. Both agent $a$ and item $i$ may belong to $OPT^{t}$, in which case $OPT^{t}$ loses two edges; otherwise $OPT^{t}$ loses at most one edge. Now suppose that agent $a$ 0-values all remaining items, which happens with probability $\frac{z^{t}}{n-t}$. Such an agent picks an item at random from all remaining items, so with probability $\frac{\nu\left(OPT^{t}\right)}{n-t}$ he picks an item that belongs to $OPT^{t}$. If so, $OPT^{t}$ loses 1, and otherwise $OPT^{t}$ does not lose anything. We have $\nu\left(OPT^{t}\right)+z^{t}\leq n-t$, so $\frac{\nu\left(OPT^{t}\right)}{n-t}\leq 1-\frac{z^{t}}{n-t}$, and hence the expected decrease is:

\[\begin{eqnarray*}
\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right] &\leq& \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\frac{\nu\left(OPT^{t}\right)}{n-t}\right] \\
&\leq& \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\left(1-\frac{z^{t}}{n-t}\right)\right] \\
&\leq& 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right).
\end{eqnarray*}\]

Therefore,

\[\begin{align*}
\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right] &\leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right) \\
&= 3\cdot\mathbb{E}\left[\nu\left(RSD^{t+1}\right)-\nu\left(RSD^{t}\right)\right],
\end{align*}\]

and by summing for $t$ from $0$ to $n-1$ we conclude that

\[\begin{align*}
\nu\left(OPT\right) &= \mathbb{E}\left[\nu\left(OPT^{0}\right)-\nu\left(OPT^{n}\right)\right] \\
&\leq 3\cdot\mathbb{E}\left[\nu\left(RSD^{n}\right)-\nu\left(RSD^{0}\right)\right]=3\cdot\mathbb{E}\left[\nu\left(RSD\right)\right].
\end{align*}\]
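To make the 0/1 model concrete, here is a small sketch of one RSD run with random tie-breaking (my own illustration, not code from the paper). On the two-agent instance where both agents 1-value only the first item, every run matches exactly one of them, so the realized welfare is always 1:

```python
import random

def rsd_welfare_01(values, rng=random):
    """One run of RSD with 0/1 valuations and random tie-breaking (sketch).
    values[a][i] is agent a's value for item i; returns the run's welfare."""
    n = len(values)
    agents = list(range(n))
    rng.shuffle(agents)                       # random priority order
    remaining = list(range(n))
    welfare = 0
    for a in agents:
        liked = [i for i in remaining if values[a][i] == 1]
        # a 1-valuing agent picks a 1-valued item; otherwise pick at random
        pick = rng.choice(liked) if liked else rng.choice(remaining)
        welfare += values[a][pick]
        remaining.remove(pick)
    return welfare
```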

Connection to online bipartite matching

As we can see, the main problem for RSD in the analysis are the agents who have value 0 for all remaining items. If we could assume that such an agent reveals the fact that he 0-values the remaining items, then we could discard this agent, and the above analysis would give a $\frac{1}{2}$-approximation. But in this model we could actually implement a mechanism based on the RANKING algorithm of Karp, Vazirani and Vazirani, and this would give us an approximation of $\left(1-\frac{1}{e}\right)\approx 0.63$. In fact, it is also possible to use the algorithm of Mahdian and Yan and this would give a factor of $\approx 0.69$.
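For completeness, a sketch of RANKING itself (my own encoding: `neighbors[a]` lists the items agent a 1-values). It fixes one random ranking of the items up front and then serves agents greedily with respect to it:

```python
import random

def ranking_matching(neighbors, n_items, rng=random):
    """RANKING of Karp, Vazirani and Vazirani (sketch): pick a random
    permanent rank for every item; match each agent, in arrival order,
    to its unmatched neighbour of best (smallest) rank."""
    rank = list(range(n_items))
    rng.shuffle(rank)                         # rank[i] = priority of item i
    taken, matched = set(), {}
    for a, items in enumerate(neighbors):
        free = [i for i in items if i not in taken]
        if free:
            best = min(free, key=lambda i: rank[i])
            matched[a] = best
            taken.add(best)
    return matched
```

Note the single ranking is drawn once and reused for all agents; re-randomizing per agent would degrade the guarantee to that of plain random greedy.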

Big OPT

Now let us go back to the model with arbitrary real valuations from the interval $\left[0,1\right]$. Here we want to obtain an approximation factor that depends on $\nu\left(OPT\right)$: for $\nu\left(OPT\right)=1$ the ratio should be around $\frac{1}{n}$, while for $\nu\left(OPT\right)\approx n$ it should be constant. This suggests that we should aim at an approximation factor of $\Theta\left(\frac{\nu\left(OPT\right)}{n}\right)$. Let us first show an upper bound that cannot be achieved by any mechanism. Take any integer $k$, and copy the instance from the Figure $k$ times. Let agents 0-value items from different chunks, so that the social welfare of any mechanism is the sum of welfares over all chunks. On any chunk no mechanism can get an outcome better than $\frac{1}{n}+\varepsilon$, hence no mechanism can do better than $k\cdot\left(\frac{1}{n}+\varepsilon\right)\approx\frac{k^{2}}{n\cdot k}$ on the whole instance. Here $\nu\left(OPT\right)=k$, and the number of agents is $n\cdot k$ this time. Therefore, the asymptotic upper bound on RSD's welfare is $\frac{\nu\left(OPT\right)^{2}}{n}$, where $n$ is the number of agents. What is quite interesting, we can prove that RSD is only a $\frac{1}{e}$ fraction away from this upper bound. That is, we show that the expected outcome of RSD is at least $\frac{1}{e}\cdot\frac{\nu\left(OPT\right)^{2}}{n}$. However, this we shall not prove here. If you are interested in where this shapely constant of $\frac{1}{e}$ comes from, please have a look at our arXiv report.

SAGT'14

It turns out that this natural problem was independently investigated at the same time by Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen and Jie Zhang. They obtained similar results for the general $[0,1]$ valuations. Both their and our papers were accepted to SAGT'14, and we will have a joint presentation of the results. So if you are going to the symposium, please give us the pleasure of attending our talk.

# Is it easier for Europeans to get to FOCS than to STOC?

I have recently realized that I have published 9 papers at FOCS and only 1 paper at STOC. I started wondering why this happened. I certainly have been submitting more papers to FOCS than to STOC. The ratio is currently 14 to 6. However, this alone does not explain the numbers of accepted papers. My acceptance rate for FOCS is way higher than for STOC: 71% vs 17%. These numbers became this extreme after two of my papers were accepted to this year's FOCS:

• Network Sparsification for Steiner Problems on Planar and Bounded-Genus Graphs, by Marcin Pilipczuk, Michał Pilipczuk, Piotr Sankowski and Erik Jan van Leeuwen
• Online bipartite matching in offline time, by Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski and Anna Zych

All this is due to the fact that I work in a "European way", i.e., I do most of my scientific work during winter and spring. Summers are much less work-intensive for me. For example, when I stayed in Italy the department was even closed during August, so even if one wanted to come to work it was impossible. Poland is not that extreme, but much less happens during summer, e.g., we do not have regular seminars.

Altogether, even if I have some idea during summer, I almost never manage to write it down well enough by the STOC deadline. Even if I do submit something to STOC, it usually gets rejected due to bad presentation. This considerably decreases my success rate at STOC. I wonder whether this is more universal and indeed there are more papers authored by Europeans at FOCS than at STOC, i.e., whether FOCS is easier for Europeans.

# Our t-shirts

The shortest path problem is one of the central problems in algorithmic research. The main setup is to find the distances from a given start vertex to all other nodes in the graph. Probably the most important results here are the Dijkstra and Bellman-Ford algorithms, both from around 1960. Dijkstra considered directed graphs with non-negative edge weights. His algorithm can be implemented to run in $\small O(m+n\log n)$ time using Fibonacci heaps. On the other hand, Bellman-Ford solved the directed problem with negative weights in $\small O(nm)$ time. You might note that for non-negative weights the shortest path problem in undirected graphs can be solved via a reduction to the directed case, so Dijkstra is applicable here as well, although there exists a faster, linear time solution for undirected graphs with integral weights due to Thorup [1]. Similarly, for the case of directed graphs with negative weights there has been some progress. First, in the 80s two scaling algorithms working in $\small O(\sqrt{n}\,m\log (nW))$ [2] and $\small O(\sqrt{n}\,m\log W)$ [3] time were given. In these algorithms one assumes that edge weights are integral with absolute value bounded by $W$. Second, in 2005 two algebraic algorithms working in $\small O(Wn^{\omega})$ time were presented [4,5], where $\small \omega < 2.4$ is the matrix multiplication exponent. There is even more work in between these results that I did not mention, and many more papers studying all pairs shortest paths, or special cases like planar graphs. However, we have not yet mentioned the undirected shortest path problem on graphs with negative weight edges... Is this problem actually solvable in polynomial time? Yes, it is; this was shown by Edmonds in '67 via a reduction to matchings [6]. These notes are probably lost... maybe not lost but probably hard to access. Anyway, the reduction is given in Chapter 12.7 of [7].
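For reference, the $O(nm)$ baseline mentioned above is short enough to state in full (a standard textbook sketch, not tied to any of the cited papers):

```python
def bellman_ford(n, edges, s):
    """Bellman-Ford sketch: distances from s in a directed graph with
    (possibly negative) edge weights, in O(nm) time. `edges` is a list
    of (u, v, w) triples. Raises on a negative cycle reachable from s."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):                 # n-1 rounds of relaxation suffice
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # an extra round detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist
```

The undirected negative-weight case is exactly where this breaks down: replacing each undirected edge by two arcs turns any negative edge into a negative cycle.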
Let us recall it. We will essentially show that in order to find the distance from a fixed source s to a fixed sink t one needs to solve the minimum weight perfect matching problem once.

Let $G=(V,E)$ be an undirected graph, let $\small w:E\to \mathbb{R}$ be the edge weight function and let $\small E^{-}$ be the set of edges with negative weights. We will define a graph $\small \ddot{G}$ that models paths in $G$ by almost perfect matchings. We define the split graph $\small \ddot{G}=(\ddot{V},\ddot{E})$ with the weight function $\small \ddot{w}$ in the following way:

$\small \ddot{V} = \{v_1, v_2: v \in V\} \cup \{e_1,e_2: e \in E^{-}\},$

$\small \ddot{E} = \{v_1 v_2: v\in V\} \cup \{u_1v_2, u_2v_1, u_1v_1, u_2v_2 : uv\in E \setminus E^{-}\} \cup \{u_1 e_1, u_2 e_1, e_1 e_2, v_1 e_2, v_2 e_2 : e=uv \in E^{-}\},$

$\small \ddot{w}(u_i v_j) = \left\{ \begin{array}{rl} w(uv) & \textrm{if } uv \in E\setminus E^{-}, \\ w(e) & \textrm{if } u_i=e_1 \textrm{ and } v_j \neq e_2 \textrm{ and } e \in E^{-},\\ 0 & \textrm{otherwise.} \end{array} \right.$
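The construction is mechanical enough to transcribe directly. Below is a sketch of my own (the pair encoding `(v, 1)`, `(v, 2)` for vertex copies and `(e, 1)`, `(e, 2)` for negative-edge gadgets is an assumption, not notation from [7]):

```python
def split_graph(vertices, edges, w):
    """Build the split graph from the definition above (sketch).
    `edges` is a list of (u, v) pairs and w[(u, v)] their weights."""
    neg = [e for e in edges if w[e] < 0]
    V = [(v, i) for v in vertices for i in (1, 2)] + \
        [(e, i) for e in neg for i in (1, 2)]
    E, W = [], {}
    def add(x, y, weight):
        E.append((x, y)); W[(x, y)] = weight
    for v in vertices:                        # v_1 v_2 edges of weight 0
        add((v, 1), (v, 2), 0)
    for e in edges:
        u, v = e
        if w[e] >= 0:                         # four parallel copies, weight w(uv)
            for i in (1, 2):
                for j in (1, 2):
                    add((u, i), (v, j), w[e])
        else:                                 # negative edge routed through e_1, e_2
            for i in (1, 2):
                add((u, i), (e, 1), w[e])     # edges at e_1 (except e_1 e_2) carry w(e)
                add((v, i), (e, 2), 0)
            add((e, 1), (e, 2), 0)
    return V, E, W
```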

An undirected graph $G$ and its split graph $\small \ddot{G}$ are shown in the figure below.

In $\small \ddot{G}$ zigzag edges weigh -1, dashed edges weigh 1 and the remaining edges weigh 0. Vertices corresponding to negative edges of $G$ are white squares. The far right shows a matching $\small M(a_2c_1)$ of weight -2, which corresponds to a shortest path between $a$ and $c$. Note how a length-two path in $G$, say $a,b,c$ with $w(ab)\ge 0>w(bc)$ and $e=bc$, corresponds to a matching in $\small \ddot{G}$ such as $\small a_1b_1, b_2e_1, e_2c_1$, having the same total weight.

An important property is that we can assume $\small \ddot{n}=|\ddot{V}| \le 4n$. This follows since we can assume $\small |E^{-}| < n$, as otherwise the set of negative edges contains a cycle.

Let us consider minimum weight perfect matchings in $\small \ddot{G}$; then we can observe the following.

Lemma 1.
Let $\small u,v\in V$, let $M$ be the minimum weight perfect matching, and let $\small M(u_2v_1)$ be the minimum weight almost perfect matching in $\small \ddot{G}$ that matches neither $\small v_1$ nor $\small u_2$. If $G$ does not contain negative weight cycles, then $\small \ddot{w}(M)=0$ and the shortest path weight from $u$ to $v$ in $G$ is equal to $\small \ddot{w}(M(u_2v_1))$.

Hence, in order to solve the same problem as Bellman and Ford did, i.e., to compute the distances from a given source to all other nodes, we need to run some matching algorithm $O(n)$ times. This does not seem right. Even using the fast implementation of Edmonds' weighted matching algorithm [8], one needs $\small O(n^2(m+n\log n))$ time, whereas the Bellman-Ford algorithm works in just $O(nm)$ time. There seems to be something lost here, or at least overlooked through the years. Searching through the literature one can find partial answers that shed some light on the structure of such shortest paths. Sebö characterized the structure of single-source shortest paths in undirected graphs, first for graphs with ±1 edge weights [9] and then extending to general weights by reduction [10]. Equation (4.2) of [9] (for ±1 weights, plus its version obtained by reduction for arbitrary weights) characterizes the shortest paths from a fixed source in terms of how they enter and leave "level sets" determined by the distance function. However, this is just a partial answer, as it does not show what the shortest path "tree" looks like, nor does it give an efficient way to compute it. We write "tree" because one might observe that the shortest paths do not necessarily form a tree, as shown in the figure below.

So what do shortest paths look like? Is there some notion of a shortest path tree here? If yes, is this shortest path tree of $O(n)$ size? In our recent paper with Hal Gabow, available on arXiv, we give answers to these questions. For undirected shortest paths we get a somewhat simple definition of a generalized shortest-path tree; see Section 6.1 of our paper. (It seems to us that such a definition may have been overlooked due to reliance on reductions.) The generalized shortest-path tree is a combination of the standard shortest-path tree and the matching blossom tree. This is not so astonishing if you recall the above reduction to matchings in general graphs. Examining the blossom structure of the resulting graph enables us to define our generalized shortest-path tree that, like the standard shortest-path tree for directed graphs, specifies a shortest path to every vertex from a chosen source. We give a complete derivation of the existence of this shortest-path structure, as well as an algebraic algorithm to construct it in time $\small \tilde{O}(Wn^{\omega})$. We also construct the structure with combinatorial algorithms, in time $\small O(n(m+n\log n))$ or $\small O(\sqrt{n\,\alpha(m,n)\log n}\, m \log (nW))$. Hence, this settles the problem, as these bounds are all within logarithmic factors of the best known bounds for constructing the directed shortest-path tree with negative weights.

[1] M. Thorup, Undirected single-source shortest paths with positive integer weights in linear time, JACM, 46(3):362--394, 1999.

[2] H. N. Gabow and R. E. Tarjan, Faster scaling algorithms for network problems, SIAM Journal on Computing, 18(5):1013--1036, 1989.

[3] A. V. Goldberg, Scaling algorithms for the shortest paths problem, SODA '93.

[4] R. Yuster and U. Zwick, Answering distance queries in directed graphs using fast matrix multiplication, FOCS'05.

[5] P. Sankowski,  Shortest Paths in Matrix Multiplication Time, ESA'05.

[6] J. Edmonds, An introduction to matching. Mimeographed notes, Engineering Summer Conference, U. Michigan, Ann Arbor, MI, 1967.

[7] R. K. Ahuja, T. L. Magnanti and J.B. Orlin, Network Flows: Theory, Algorithms, and Applications, Prentice Hall, 1993.

[8] H. N. Gabow, Data Structures for Weighted Matching and Nearest Common Ancestors with Linking, SODA'90.

[9] A. Sebö, Undirected distances and the postman-structure of graphs, J. Combin. Theory Ser. B, 49(1):10--39, 1990.

[10] A. Sebö, Potentials in Undirected Graphs and Planar Multiflows, SIAM J. Comput., 26(2):582--603, 1997.
We have recently created two algorithmic t-shirts for our group. One can order them here: http://corner.cupsell.pl/, or let us know if you are interested in having one. We will be ordering a bulk of them soon. The first graphic relates to The Wonder Twins, whereas the second one is more obvious.

## 1st ACBD Workshop and IGAFIT meeting

### Gallery

This gallery contains 21 photos.

Back in 2012, when I was a post-doc in Lugano, Switzerland, I was working with Fabrizio Grandoni and Monaldo Mastrolili on a pricing problem we called the Hypermatching Assignment Problem. This problem is at the same time a generalization … Continue reading

# Local search for k-Set Packing

Back in 2012, when I was a post-doc in Lugano, Switzerland, I was working with Fabrizio Grandoni and Monaldo Mastrolili on a pricing problem we called the Hypermatching Assignment Problem. This problem is at the same time a generalization of the Generalized Assignment Problem and of the k-Set Packing problem, which brought my attention to the approximation status of the latter, which is the subject of this post.

k-Set Packing is a basic combinatorial optimization problem where, given a universe U and a family F of its subsets, we are to find a maximum size subfamily of pairwise disjoint sets from F. A special case of 3-Set Packing is the 3-Dimensional Matching problem. A good argument showing that these two problems are indeed classic is the fact that both belong to Karp's list of 21 NP-complete problems.

A local search routine tries to locally modify the current solution in order to get a better one. For k-Set Packing, by p-local search we denote an algorithm which starts with an empty solution A and tries to improve it by adding a family A_0 of at most p new sets, such that the number of sets we need to remove in order to maintain a feasible solution is strictly smaller than the number of sets we have added. Having a local search maximum, our hope is to prove that an optimum solution is not much better.

It is easy to see that a 1-local search maximum gives an inclusionwise maximal subfamily of F, which in turn implies an approximation ratio of k. It is not hard to show that a 2-local search maximum yields a (k+1)/2 approximation ratio. In 1989 Hurkens and Schrijver [SIDMA'89] showed that with growing values of p, a p-local search maximum gives a (k+eps_p)/2 approximation, where eps_p is a constant depending on p that converges to 0. Up to now this was the best known polynomial time approximation algorithm for both k-Set Packing and 3-Dimensional Matching.
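For concreteness, here is a brute-force sketch of p-local search (my own toy code: it enumerates all candidate swaps, so it is exponential in p and meant only to make the definition precise, not to be efficient):

```python
from itertools import combinations

def local_search_packing(sets, p):
    """p-local search sketch for Set Packing: start from the empty solution
    and repeatedly add up to p pairwise disjoint new sets whenever strictly
    fewer conflicting sets must be removed."""
    sets = [frozenset(s) for s in sets]

    def disjoint(family):
        seen = set()
        for s in family:
            if seen & s:
                return False
            seen |= s
        return True

    sol = []
    improved = True
    while improved:
        improved = False
        out = [s for s in sets if s not in sol]
        for size in range(1, p + 1):
            for extra in combinations(out, size):
                if not disjoint(extra):
                    continue
                added = set().union(*extra)
                keep = [s for s in sol if not (s & added)]
                if len(sol) - len(keep) < size:   # net gain: apply the swap
                    sol = keep + list(extra)
                    improved = True
                    break
            if improved:
                break
    return sol
```

Each improvement grows the solution by at least one set, so the loop terminates after at most |F| improvements.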

Later, in [SODA'95], Halldorsson proved that when we consider logarithmic values of p, the approximation ratio is upper bounded by (k+2)/3, which was slightly improved in our Hypermatching Assignment Problem paper down to (k+1+eps)/3, showing that in quasipolynomial time one can break the 1.5 approximation ratio for 3-Dimensional Matching. A natural question is the following. Can we implement the O(log n)-local search in polynomial time? This brings us to the area of fixed parameter tractability, where for local search radius r we aim at 2^O(r) poly(|F|) time algorithms. Unfortunately, we have shown that the parameterized local search is W[1]-hard, even in the more relaxed permissive variant, which means that an f(r) poly(|F|) time algorithm is unlikely to exist, no matter how fast growing the function f() is.

Therefore we have to modify our strategy. In our second take we try the following. First, we inspect the existing analyses of approximation ratios upper bounds to see what is the exact set of swaps, which our current solution needs to be impossible to improve with, in order for the proof to go through. Next, we want to show, that for this particular set of swaps we can implement the local search algorithm effectively.

In the analysis of Halldorsson, a notion of a conflict graph is used. Imagine a bipartite graph with the sets of our current solution A on one side and the sets of F \ A on the other, where a set of A and a set of F \ A are connected by an edge if they share a common element.

Note that an edge indicates that it is impossible to have both sets in a solution at the same time. The main part of Halldorsson's analysis shows that if (k+2)/3 |A| < |OPT|, then there exists a subset X of F \ A of size O(log n) such that |N(X)| < |X| in the conflict graph, that is, the number of sets that we need to remove after adding the sets of X is strictly smaller than the size of X itself. However, when inspected more closely, the set X has some structure, namely G[N[X]] is a subdivision of one of the following graphs:

This means that in our local search algorithm we do not have to look for all improving sets X of logarithmic size; it is enough to consider sets X such that G[N[X]] has constant pathwidth! In this way, by using the color coding technique of Alon, Yuster and Zwick [JACM'95], we obtain a polynomial time (k+2)/3 approximation algorithm. Unfortunately, this is not yet enough to break the 1.5 approximation ratio for 3-Dimensional Matching, as the existing upper bound of (k+1+eps)/3 in fact uses improving sets of unbounded pathwidth. This is the main technical hurdle, overcoming which required proving lemmas relating the pathwidth, average degree and girth of a graph. Finally, we have obtained a (k+1+eps)/3 polynomial time approximation algorithm for k-Set Packing, implying a 1.34-approximation for 3-Dimensional Matching.

The described results were obtained by Maxim Sviridenko and Justin Ward, published at ICALP'13, and by myself, to appear at FOCS'13 and available on arXiv. Since the papers contain a significant intersection of ideas, we decided to prepare a common journal version. Personally, I am very happy about the described result, because it utilizes techniques from parameterized complexity in a nontrivial way to obtain an improvement over a more than 20 year old approximation ratio for a basic problem.

# In the search for lost ark - The talk from FOCS

The talk on the shortest path problem we have "forgotten to solve" that I gave at FOCS 2013.

# WG 2014

Let me say that I really like the WG series. Not only because WG'08 was my first conference (and now WG'14 is the first one where I serve on the PC), but mainly because there is always a nice bunch of papers with cute combinatorics, and you always travel back home from WG with a full sack of cool open problems to think about.

Speaking of these, I went through my private "open problem list" and found a few problems that, in my opinion, may nicely suit WG. Hey, there are still a few months till the deadline, so why not solve some of the problems and go for a trip to this lovely castle in France?

The problems are from parameterized complexity, since I mostly work in this area. To the best of my knowledge, they are currently open. I have spent significant time on each of them at some point in the past.

Maybe a small disclaimer is in order: although I find these problems interesting and nice, the techniques needed to solve them may turn out to be boring, other PC members of WG 2014 may have different opinions on their importance, etc. So you should not in any way treat this as a promise that a solution will get into WG. I just want to inspire some research and see solutions to some nice problems I have thought about without success; that is the sole motivation for this post 🙂

Cutting short paths. In this paper the authors study (among others) the following problem: given a (directed or undirected) graph $G$ with source $s$ and sink $t$, and integers $k$ and $l$, cut at most $k$ edges of $G$ so that the length of a shortest path from $s$ to $t$ becomes larger than $l$. They show an FPT algorithm parameterized by both $k$ and $l$. The question is: does this problem admit a polynomial kernel with respect to this parameterization?

Constrained bipartite vertex cover. The minimum vertex cover problem in bipartite graphs is solvable in polynomial time. What about the following variant: given a bipartite graph with fixed bipartition $(A,B)$, and integers $a$ and $b$, find a vertex cover of the graph with at most $a$ vertices in $A$ and at most $b$ vertices in $B$. This is NP-hard. There is a simple reduction rule: a vertex of $A$ of degree larger than $b$ needs to be included in any solution (otherwise more than $b$ of its neighbours in $B$ would be needed), and symmetrically for a vertex of $B$ of degree larger than $a$. After this reduction is exhaustively applied, every vertex of $A$ has degree at most $b$ and every vertex of $B$ has degree at most $a$, so a solution may cover at most $2ab$ edges, and we have a kernel of this size. Can you do better with respect to the number of vertices in the kernel? The classical vertex cover problem in arbitrary graphs has a kernel with a linear number of vertices.
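The high-degree reduction rule can be sketched in a few lines. This is only a toy illustration (the function name, the edge representation as $(A\text{-vertex}, B\text{-vertex})$ pairs, and the return format are my own choices, not from any paper); note that the degree thresholds shrink as the budgets $a$ and $b$ decrease:

```python
from collections import defaultdict

def reduce_instance(edges, A, B, a, b):
    """Exhaustively apply the high-degree rule for Constrained
    Bipartite Vertex Cover. `edges` is a set of (u, v) pairs with
    u in A and v in B; a and b are the budgets for the two sides.
    Returns (remaining edges, forced vertices, remaining a, b)."""
    edges = set(edges)
    forced = set()
    while True:
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        # An A-vertex of degree > b must be taken: otherwise more
        # than b of its neighbours in B would have to be taken.
        u = next((x for x in A if deg[x] > b), None)
        if u is not None:
            forced.add(u)
            a -= 1
            edges = {e for e in edges if e[0] != u}
            continue
        # Symmetrically for a B-vertex of degree > a.
        v = next((y for y in B if deg[y] > a), None)
        if v is not None:
            forced.add(v)
            b -= 1
            edges = {e for e in edges if e[1] != v}
            continue
        return edges, forced, a, b
```

After the loop terminates, every remaining $A$-vertex has degree at most $b$ and every remaining $B$-vertex degree at most $a$, which is exactly what gives the $2ab$ bound on the number of edges in the kernel.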

Imbalance minimization. Given an undirected graph $G$, and an ordering $v_1,v_2,\ldots,v_n$ of its vertices, the imbalance of $v_i$ in this ordering equals

$$\Big|\; \big|\{\, v_j \in N(v_i) : j < i \,\}\big| - \big|\{\, v_j \in N(v_i) : j > i \,\}\big| \;\Big|,$$

that is, the absolute value of the difference between the number of neighbours of $v_i$ before and after $v_i$ in the ordering. The imbalance of the ordering is the sum of the imbalances of all vertices. Here the authors prove that the problem of finding an ordering of imbalance at most $k$, parameterized by $k$, is FPT. Does this problem admit a polynomial kernel?
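For concreteness, here is a short Python sketch computing the imbalance of an ordering (the function name and adjacency representation are mine, purely for illustration):

```python
def imbalance(adj, order):
    """Imbalance of an ordering of the vertices of an undirected graph.
    `adj` maps each vertex to the set of its neighbours; `order` is a
    list of all vertices. For each vertex we take the absolute
    difference between the numbers of its neighbours placed before
    and after it, and sum these over all vertices."""
    pos = {v: i for i, v in enumerate(order)}
    total = 0
    for v in order:
        before = sum(1 for u in adj[v] if pos[u] < pos[v])
        after = len(adj[v]) - before
        total += abs(before - after)
    return total
```

On a path on three vertices ordered along the path, only the two endpoints contribute (1 each), giving imbalance 2; putting the middle vertex first raises it to 4.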

Max-leaf outbranching, parameterized by treewidth. In a directed graph, an outbranching is a subgraph that is a rooted tree in which each arc is directed downwards. In the max-leaf outbranching problem we seek an outbranching of the given graph with the maximum number of leaves. We are interested in solving this problem when we are given a tree decomposition of $G$ of width $t$, that is, we study treewidth DPs. Here we have shown an $O^*(6^t)$-time randomized algorithm, but we could not get a matching lower bound (as we did for most other problems studied there). Is $6$ the optimal base of the exponent (assuming the Strong ETH, of course)? Or maybe you can do better?

# Our Approximation Algorithm Library is Gaining Momentum

As theoreticians we usually tend to be overly pessimistic about the practical applicability of our results. This is even more visible in the area of approximation algorithms. It seems to be the case that algorithms with theoretical guarantees on approximation ratios are easily beaten by meta-heuristics like Simulated Annealing. For example, the recent 1.39-approximation algorithm for the Steiner tree problem comes to mind. The algorithm, at first glance, seems too complicated to be implemented, and even if implemented, it seems it could not deliver decent results in practice. However, both of these statements are only partially true. First of all, if one has the right tools, then implementing this algorithm is not impossible - the code developed by Maciej Andrejczuk can be seen here: http://siekiera.mimuw.edu.pl:8082/#/c/59/. (It is still undergoing some reviews, but should shortly be merged into the master branch.) The code is based on the iterative rounding framework we have developed in our approximation algorithms library: http://siekiera.mimuw.edu.pl/~paal/. The algorithm, when enhanced with some speed-up heuristics, is able to solve instances of reasonable size. More improvements and tests are planned, though.

The other things we have implemented using the iterative rounding framework include a 2-approximation algorithm for the tree augmentation problem, as well as the +1-approximation algorithm for the degree bounded spanning tree problem by Singh and Lau. The second of these still requires some documentation, though. On the other hand, we have developed a nice framework for local search algorithms that can be used to solve TSP, capacitated and uncapacitated facility location, and k-median, as well as some greedy approximation algorithms. Finally, we are currently working on a framework for primal-dual algorithms, which could be used to implement several classical approximation algorithms. So stay tuned!