FPT algorithms and syphilis

A long, long time ago, well, actually in 1943, the US Army was recruiting a lot of soldiers. Each of the recruits had to undergo a medical examination, and in particular they were tested for syphilis. However, performing a single test was quite expensive. Then somebody came up with the following idea. Pick blood samples from a group of soldiers, mix them into one big sample and perform the test on it. If the test is positive, there is at least one infected soldier in the group. But if it is negative, we know that all of the soldiers in the group are healthy and we have just saved a lot of tests. It then becomes an interesting problem to devise a method which uses this observation to find the exact group of infected recruits among all the candidates, with some nice bound on the number of tests. Without any additional assumptions there is not much you can do (exercise: do you see why?), but in this case we expect that the number of infected candidates is quite small. Then in fact you can save a lot, and this story gave rise to the whole area called group testing.

Group testing is a rich field and includes a variety of models. Let us focus on the following one. We are given a universe $U$ and we want to find a hidden subset $D$ of $U$. We can acquire information only by asking queries to the intersection oracle, i.e., for a given subset $A \subseteq U$, the oracle answers true if and only if $A$ has a nonempty intersection with $D$. Moreover, we can decide which set we query based on the previous answers. The goal is to use few queries. There are many algorithms for this problem, but I'm going to describe my favourite one, called the bisecting algorithm. It dates back to the early seventies and is due to Hwang, one of the fathers of combinatorial group testing. As you may expect from the name, it is a generalization of binary search. A simple observation is that once the oracle answers false on a set $A$, we can safely discard $A$, and otherwise we know that $A$ contains at least one element of $D$. So assume that in the algorithm we use a discard oracle, implemented simply as the negation of the intersection oracle. The algorithm works as in the animation below (choose FitPage in the zoom box and keep clicking the arrow down):

So, basically, we maintain a partition of (a part of) the universe, initially just $\{U\}$, with the property that every set in the partition contains at least one element of $D$. We examine the sets of the partition one by one. Each set is divided into two halves and we query whether we can discard one of them. At some point we query a singleton and then we either discard it or find an element of $D$. I hope it is clear now, but let me paste a pseudocode as well.

bisection

How many queries do we perform? Let $n=|U|$ and $k=|D|$. Call a query positive if the answer is Yes, negative otherwise. For a negative query $A$ we know that there is some $x \in A \cap D$. Assign this query to $x$. Note that for every $x \in D$ there are $O(\log n)$ queried sets assigned to it, because if we consider the queries in the order of asking them, every set assigned to $x$ is roughly twice smaller than the previous one. So there are $O(k \log n)$ negative queries. Every set from a positive query is a half of a set from a (different!) negative query, so the total number of positive queries is bounded by the total number of negative ones. Hence we have $O(k \log n)$ queries in total. A slightly more careful analysis (see Lemma 2.1 here) gives roughly $k\log_2(n/k) + O(k)$. Cool, isn't it? Especially given that we hopefully do not expect a large fraction of infected soldiers...
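
To make this concrete, here is a rough Python sketch of a bisecting-style search. It is my own reconstruction, not the original pseudocode from the animation; `intersects` stands for the intersection oracle and is the only way the procedure touches the hidden set.

```python
def bisect_find(universe, intersects):
    """Find all hidden elements using only intersection queries;
    intersects(S) answers True iff S contains at least one hidden element."""
    found = set()
    todo = [list(universe)] if intersects(set(universe)) else []
    while todo:
        group = todo.pop()
        if len(group) == 1:
            found.add(group[0])            # a singleton intersecting the hidden set
            continue
        half1, half2 = group[:len(group) // 2], group[len(group) // 2:]
        if intersects(set(half1)):
            todo.append(half1)
            if intersects(set(half2)):     # half2 has to be tested separately
                todo.append(half2)
        else:
            todo.append(half2)             # half1 discarded, so half2 must intersect
    return found

# toy usage: 3 "infected" elements hidden among 100, counting the queries
hidden, queries = {3, 17, 42}, [0]
def oracle(S):
    queries[0] += 1
    return bool(S & hidden)
print(sorted(bisect_find(range(100), oracle)), "found with", queries[0], "queries")
```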

Great, but is there a connection of all of that with the FPT algorithms from the title? Yes, there is one. Consider for example the k-PATH problem: given a graph $G$ and a number $k$, we want to find a simple path of length $k$. The corresponding decision problem is NP-complete, as a generalization of Hamiltonian Path. However, it turns out that when $k$ is small, you can solve it pretty fast. Perhaps you know the famous Color Coding algorithm by Alon, Yuster and Zwick which solves it in $2^{O(k)}\, n^{O(1)}$ time. However, one can do better: Björklund, Husfeldt, Kaski and Koivisto presented a $1.66^k\, n^{O(1)}$-time Monte Carlo algorithm! The only catch is that it only solves the decision problem. Indeed, it uses the Schwartz-Zippel lemma, and when it answers YES, there is no way to trace back the path from the computation.

Now, let the universe $U$ be the edge set of our graph. We want to find one of (possibly many) $k$-subsets of $U$ corresponding to $k$-edge paths in our graph, and the Björklund et al. algorithm is an inclusion oracle, which tells you whether a given set of edges contains one of these subsets. So this is not exactly the same problem as before, but it sounds pretty similar... Indeed, again we can implement the discard oracle: we may discard a set $A$ from the current set of candidate edges $C$ whenever the inclusion oracle answers true on $C \setminus A$. So it seems we can use the bisecting algorithm to find a $k$-path with only $O(k \log |U|)$ queries to the decision algorithm! Correct?

Well, not exactly. The problem is that the oracle is a Monte Carlo algorithm; more precisely, it reports false negatives with probability at most, say, 1/4. Together with Andreas Björklund and Petteri Kaski we showed that a surprisingly simple patch to the bisecting algorithm works pretty nicely in the randomized setting. The patch is as follows. Once the bisecting algorithm finishes in the randomized setting, we have a superset of a solution. Then, as long as we have more candidate elements than the solution size, we pick them one by one, in a FIFO manner, and check whether we can discard this single element. We show that the expected number of queries is still $O(k \log n)$. (Actually, we conjecture that our bound is optimal, i.e., you have to lose a bit compared to the slightly better bound in the deterministic setting.) As a consequence, we get a pretty fast implementation of finding $k$-paths. For example, a (unique) 14-vertex path is found in a 1000-vertex graph well below one minute on a standard PC. Not bad for an NP-complete problem, I would say. Let me add that the Schwartz-Zippel approach is used in a number of FPT algorithms, and in many cases the corresponding search problem can be cast in the inclusion oracle model mentioned above. Examples include k-packing, Steiner cycle, rural postman, graph motif and more.
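
Here is a minimal sketch of the patch under the assumptions above (Yes answers of the oracle are always correct, No answers may be false negatives with constant probability); the interface and the names are mine, not the paper's.

```python
from collections import deque

def prune_candidates(candidates, contains_solution, k):
    """candidates: a superset of some k-element solution (e.g. what the
    bisecting phase returns when run with the noisy oracle).
    contains_solution(S) may give false negatives, but a True answer
    is assumed to be always correct."""
    cand = set(candidates)
    queue = deque(cand)
    while len(cand) > k:
        e = queue.popleft()
        if contains_solution(cand - {e}):
            cand.discard(e)       # safe: a Yes answer is never wrong
        else:
            queue.append(e)       # possibly a false negative, retry later
    return cand
```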

If you want to learn more, see the paper or slides from my recent ESA talk!

Efficiency of Random Priority

Matchings are one of the most basic primitives used in the area of mechanism design. Whenever we need to assign items to agents we view it as a matching problem. Of course, plenty of variations differing in constraints and objectives can be considered, but let us look at the simplest variant possible. We need to assign students to dorms. Students have preferences over dorms. We, the administration, would like to find an allocation that meets the students' preferences as well as possible. We cannot use money in deploying the mechanism. This problem has been considered many times in the literature, but in a setting where the preferences of a student are given by an ordering over the set of dorms. In this setting it was shown that there exists only one truthful, non-bossy and neutral mechanism. This unique mechanism is called Serial Dictatorship and it works as follows. First, the agents are sorted in a fixed order, then the first agent chooses his favourite item, the next agent chooses his favourite item among the remaining items, and so on. Moreover, its randomized variant, called Random Serial Dictatorship (RSD), which scans the agents in a random order, possesses further nice properties of being symmetric and ex post efficient, i.e., it never outputs Pareto dominated outcomes. This mechanism is more often called Random Priority, hence the title of this entry.

However, one can imagine a setting where the preferences of an agent are not given by an ordering of the items, but rather where we are given values $v_{ij}$ telling how much agent $i$ values item $j$. In this model we can quantify the social welfare of an assignment that a mechanism finds: the welfare of an assignment $\sigma$ is just the sum $\sum_{i} v_{i\sigma(i)}$. Together with Piotr Sankowski and Qiang Zhang we decided to investigate the problem of comparing the social welfare obtained by RSD with the optimum offline welfare. By the optimum offline welfare we mean the maximum weight matching in the underlying bipartite graph. We assume that when RSD asks an agent to make his choice, he picks the remaining item he values the most. We also assume that an agent resolves ties randomly. At first sight, it might seem that obtaining any meaningful result on the approximation factor of RSD is not possible. Just look at the example in the Figure.

hardness

For all items other than $a$ each agent has value 0; that's why these edges are not in the Figure. The optimal social welfare is obviously 1. However, the first agent approached by RSD will always pick item $a$, and therefore the expected social welfare of RSD is roughly $1/n$. To overcome this problem, we asked two questions. First, what if the valuations of agents are always either 0 or 1? Here the hard example does not apply anymore, provided that ties are resolved randomly. Second, what if the optimal social welfare is big compared to $n$? One can see that in an extreme case with $\nu(OPT)=n$ the RSD finds welfare linear in $n$ (think of an inclusion-wise maximal matching on the 1-edges), so it's not bad either.

0/1 preferences

In case the preferences are 0 and 1, we can look only at the 1-edges in the underlying graph. Then we are given an unweighted graph and we just want to maximize the size of the constructed matching. We stress that the random tie-breaking rule is crucial here. That is, we require that when RSD asks an agent who has value 0 for all remaining items, this agent picks one of the remaining items at random. Without this assumption the hard example from the figure still works --- suppose that, as in the Figure, only one agent 1-values item $a$ and all other values are 0, but every agent, when indifferent, always chooses item $a$.

In this model, RSD is a $\frac{1}{3}$-approximation, and below we show why. Instead of the clunky "agent has value 1 for an item", we shall say shortly "agent 1-values an item". Consider step $t$ of the algorithm. Some agents and items have already been removed from the instance, so let $OPT^{t}$ be what remains from the optimal solution after $t$ steps of RSD. Also, let $RSD^{t}$ be the partial matching constructed by RSD after $t$ steps, and $\nu(RSD^{t})$ be its welfare. It can happen that at time $t$ an agent does not 1-value any of the remaining items, even though he could have 1-valued some of the items initially. Thus let $z^{t}$ be the number of agents who 0-value all remaining items. Let $a$ be the agent who is at step $t$ chosen by RSD to pick an item. With probability $1-\frac{z^{t}}{n-t}$ agent $a$ 1-values at least one remaining item. If so, then the welfare of RSD increases by 1. Hence

\[\mathbb{E}\left[\nu\left(RSD^{t+1}\right)\right]=\mathbb{E}\left[\nu\left(RSD^{t}\right)\right]+1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}.\]

What is the expected number of edges we remove from $OPT^{t}$? That is, what is the expected loss $\nu(OPT^{t})-\nu(OPT^{t+1})$? Again, with probability $1-\frac{z^{t}}{n-t}$ agent $a$ picks an item he values 1. Both agent $a$ and the item he picks may belong to $OPT^{t}$, in which case $OPT^{t}$ loses two edges. Otherwise $OPT^{t}$ loses at most one edge. Now suppose that agent $a$ 0-values all remaining items, which happens with probability $\frac{z^{t}}{n-t}$. If $a$ is such an agent, then he picks an item at random from all remaining items, so with probability $\frac{\nu(OPT^{t})}{n-t}$ agent $a$ picks an item that belongs to $OPT^{t}$. If so, then $OPT^{t}$ loses 1, and otherwise $OPT^{t}$ does not lose anything. We have $\nu(OPT^{t})\leq (n-t)-z^{t}$, since the $z^{t}$ agents who 0-value all remaining items are not covered by $OPT^{t}$, so $\frac{\nu(OPT^{t})}{n-t}\leq 1-\frac{z^{t}}{n-t}$, and hence the expected decrease is:

\begin{align*}
\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right] &\leq \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\frac{\nu\left(OPT^{t}\right)}{n-t}\right] \\
&\leq \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\left(1-\frac{z^{t}}{n-t}\right)\right] \\
&\leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right).
\end{align*}

Therefore,

\begin{align*}
\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right] &\leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right) \\
&= 3\cdot\mathbb{E}\left[\nu\left(RSD^{t+1}\right)-\nu\left(RSD^{t}\right)\right],
\end{align*}

and by summing for $t$ from $0$ to $n-1$ we conclude that

\begin{align*}
\nu\left(OPT\right)&=\mathbb{E}\left[\nu\left(OPT^{0}\right)-\nu\left(OPT^{n}\right)\right]\\
&\leq 3\cdot\mathbb{E}\left[\nu\left(RSD^{n}\right)-\nu\left(RSD^{0}\right)\right]=3\cdot\mathbb{E}\left[\nu\left(RSD\right)\right].
\end{align*}
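
To get a feeling for this bound, here is a small, self-contained experiment (an illustration only, not part of the argument above): it runs RSD with a random priority order and random tie-breaking on random 0/1 valuations and compares its average welfare with the optimum, computed as a maximum matching on the 1-edges via Kuhn's augmenting-path algorithm. The instance generator and all parameters are arbitrary choices of mine.

```python
import random

def rsd_welfare(values):                      # values[i][j] in {0, 1}
    n = len(values)
    agents = list(range(n))
    random.shuffle(agents)                    # the random priority order
    free_items, welfare = set(range(n)), 0
    for a in agents:
        liked = [j for j in free_items if values[a][j] == 1]
        # take a 1-valued item if possible, otherwise a random leftover item
        choice = random.choice(liked) if liked else random.choice(list(free_items))
        welfare += values[a][choice]
        free_items.remove(choice)
    return welfare

def max_matching(values):                     # Kuhn's augmenting-path algorithm
    n = len(values)
    match_of_item = [-1] * n
    def try_agent(a, seen):
        for j in range(n):
            if values[a][j] == 1 and j not in seen:
                seen.add(j)
                if match_of_item[j] == -1 or try_agent(match_of_item[j], seen):
                    match_of_item[j] = a
                    return True
        return False
    return sum(try_agent(a, set()) for a in range(n))

if __name__ == "__main__":
    n = 30
    values = [[1 if random.random() < 0.1 else 0 for _ in range(n)] for _ in range(n)]
    opt = max_matching(values)
    avg = sum(rsd_welfare(values) for _ in range(2000)) / 2000.0
    print("OPT =", opt, " E[RSD] ~", round(avg, 2), " ratio ~", round(avg / max(opt, 1), 2))
```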

Connection to online bipartite matching

As we can see, in the analysis the main problem for RSD are the agents who have value 0 for all remaining items. If we could assume that such an agent would reveal the fact that he 0-values the remaining items, then we could discard this agent, and the above analysis would give a $\frac{1}{2}$-approximation. But in this model we could actually implement a mechanism based on the RANKING algorithm of Karp, Vazirani and Vazirani, and this would give us an approximation of $1-\frac{1}{e}$. In fact, it's also possible to use the algorithm of Mahdian and Yan (2013), and this would give a factor of $0.696$.
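
For completeness, here is a minimal sketch of the RANKING rule mentioned above, stated for 0/1 valuations; this is the classical Karp-Vazirani-Vazirani algorithm, not the exact mechanism one would deploy in our setting. Items play the role of the offline side that is ranked once; agents arrive in an arbitrary order.

```python
import random

def ranking_matching(values, arrival_order=None):
    """Classical RANKING for 0/1 valuations: items get a random rank once,
    and every arriving agent takes its unmatched 1-valued item of best rank."""
    n = len(values)
    rank = list(range(n))
    random.shuffle(rank)                      # the random ranks of the items
    matched_items, size = set(), 0
    for a in (arrival_order if arrival_order is not None else range(n)):
        liked = [j for j in range(n)
                 if values[a][j] == 1 and j not in matched_items]
        if liked:
            j = min(liked, key=lambda item: rank[item])
            matched_items.add(j)
            size += 1
    return size
```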

Big OPT

Now let us go back to the model with arbitrary real valuations from the interval $[0,1]$. Here we want to obtain an approximation factor that depends on $\nu(OPT)$ --- for $\nu(OPT)=1$ the ratio should be around $1/n$, while for $\nu(OPT)=n$ it should be constant. This suggests that we should aim at an approximation factor of about $\nu(OPT)/n$. Let us first show an upper bound that cannot be achieved by any mechanism. Take any integer $\ell$, and copy the instance from the Figure $\ell$ times. Let agents 0-value items from different chunks, so that the social welfare of any mechanism is the sum of its welfares over all chunks. On any chunk no mechanism can get an expected outcome better than roughly $1/n$, hence no mechanism can do better than roughly $\ell/n$ on the whole instance. Here $\nu(OPT)=\ell$, and the number of agents is $\ell n$ this time. Therefore, the asymptotic upper bound on RSD's welfare is $O\!\left(\nu(OPT)^{2}/N\right)$, where $N$ is the total number of agents. But what is quite interesting, we can prove that RSD is only a constant fraction away from this upper bound. That is, we show that the expected outcome of RSD is $\Omega\!\left(\nu(OPT)^{2}/N\right)$. However, this we shall not prove here. If you are interested in where exactly this constant comes from, then please have a look at our arXiv report.

SAGT'14

It turns out that this natural problem was at the same time also investigated independently by Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen and Jie Zhang. They obtained similar results for general valuations. Both their paper and ours have been accepted to SAGT'14, and we will give a joint presentation of the results. So if you are going to the symposium, please give us the pleasure of attending our talk.

Is it easier for Europeans to get to FOCS than to STOC?

I have recently realized that I have published 9 papers at FOCS and only 1 paper at STOC. I started wondering why this happened. I certainly have been submitting more papers to FOCS than to STOC - the ratio is currently 14 to 6. However, this alone does not explain the numbers of accepted papers. My acceptance rate at FOCS is way higher than at STOC - 71% vs 17%. These numbers started looking this extreme after two of my papers were accepted to this year's FOCS:

  • Network Sparsification for Steiner Problems on Planar and Bounded-Genus Graphs, by Marcin Pilipczuk, Michał Pilipczuk, Piotr Sankowski and Erik Jan van Leeuwen
  • Online bipartite matching in offline time, by Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski and Anna Zych

All this is due to the fact that I work in a "European way", i.e., I keep on doing most of my scientific work during winter and spring. Summers are much less work-intensive for me. For example, when I stayed in Italy the department was even closed during August, so even if one wanted to come to work it was impossible. Poland is not that extreme, but much less happens during the summer, e.g., we do not have regular seminars.

Altogether, even if I have some idea during the summer, I almost never manage to write it down well enough by the STOC deadline. Even if I do submit something to STOC, it usually gets rejected due to bad presentation. This considerably decreases my success rate at STOC. I wonder whether this is more universal and indeed there are more papers authored by Europeans at FOCS than at STOC, i.e., whether FOCS is easier for Europeans.

Our t-shirts

We have recently created two algorithmic t-shirts for our group. One can order them here: http://corner.cupsell.pl/, or let us know if you are interested in having one. We will be ordering a bulk of them soon. The first graphics relates to The Wonder Twins, whereas the second one is more obvious.

Algorithmic Powers Activate

Superalgorithm

The shortest path problem is one of the central problems in algorithmic research. The main setup is to find the distances from a given start vertex to all other nodes in the graph. Probably the most important results here are the Dijkstra and Bellman-Ford algorithms, both from around 1960. Dijkstra considered directed graphs with non-negative edge weights. His algorithm can be implemented to run in O(m + n log n) time using Fibonacci heaps. On the other hand, Bellman and Ford solved the directed problem with negative weights in O(nm) time. You might note that for non-negative weights the shortest path problem in undirected graphs can be solved via a reduction to the directed case, so Dijkstra is applicable here as well; moreover, there exists a faster, linear time solution for undirected graphs with integral weights, due to Thorup [1]. Similarly, for the case of directed graphs with negative weights there has been some progress. First, two scaling algorithms, running in $O(m\sqrt{n}\log(nW))$ [2] and $O(m\sqrt{n}\log W)$ [3] time, have been given; in these algorithms one assumes that edge weights are integral with absolute value bounded by W. Second, in 2005 two algebraic algorithms running in $\tilde{O}(Wn^{\omega})$ time were presented [4,5], where $\omega$ is the matrix multiplication exponent. There is even more work in between these results that I did not mention, and many more papers studying all-pairs shortest paths, or special cases like planar graphs. However, we have not mentioned the undirected shortest path problem on graphs with negative weight edges yet... Is this problem actually solvable in polynomial time? Yes, it is; this was shown by Edmonds in '67 via a reduction to matchings [6]. These notes are probably lost... maybe not lost, but probably hard to access. Anyway, the reduction is given in Chapter 12.7 of [7]. Let us recall it. We will essentially show that in order to find the distance from a fixed source s to a fixed sink t one needs to solve the minimum weight perfect matching problem once.
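
As a point of reference, here is a minimal sketch of the Bellman-Ford baseline mentioned above, for the directed case with negative weights (my own illustration, unrelated to the matching reduction that follows).

```python
def bellman_ford(n, edges, source):
    """edges: list of directed (u, v, w) triples; returns the distance list,
    or None if a negative cycle is reachable from the source."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:      # one more pass detects a reachable negative cycle
        if dist[u] + w < dist[v]:
            return None
    return dist
```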

Let $G=(V,E)$ be an undirected graph, let $w$ be the edge weight function and let $E^-$ be the set of edges with negative weights. We will define a graph that models paths in $G$ by almost perfect matchings: the split graph $\hat{G}$ with a weight function $\hat{w}$.

An undirected graph $G$ and its split graph $\hat{G}$ are shown in the figure below.

fig5

In the figure, zigzag edges weigh -1, dashed edges weigh 1 and the remaining edges weigh 0. Vertices corresponding to negative edges of $G$ are white squares. The far right shows a matching of weight -2, which corresponds to a shortest path between a and c. Note how a length-two path in $G$, say $a,b,c$ with $w(ab)\geq 0 > w(bc)$ and $e=bc$, corresponds to a matching in $\hat{G}$ having the same total weight.

An important property is that we can assume $|E^-| < n$, and hence $\hat{G}$ has size linear in the size of $G$. The bound on $|E^-|$ follows since otherwise the set of negative edges would contain a cycle, i.e., a negative weight cycle.

Let us consider minimum weight perfect matchings in $\hat{G}$; then we can observe the following.

Lemma 1.
Let $u,v \in V$, let $M$ be a minimum weight perfect matching in $\hat{G}$, and let $M_{uv}$ be a minimum weight almost perfect matching in $\hat{G}$ that does not match the vertices corresponding to $u$ and $v$. If $G$ does not contain negative weight cycles, then both matchings exist and the shortest path weight from $u$ to $v$ in $G$ is equal to $\hat{w}(M_{uv})-\hat{w}(M)$.

Hence, in order to solve the same problem as Bellman and Ford did, i.e., to compute the distances from a given source to all other nodes, we need to run a matching algorithm O(n) times. This does not seem right. Even using the fast implementation of Edmonds' weighted matching algorithm [8], which runs in $O(n(m+n\log n))$ time, one needs $O(n^{2}(m+n\log n))$ time in total, whereas the Bellman-Ford algorithm works in just O(nm) time. There seems to be something lost here, or at least overlooked through the years. Searching through the literature one can find partial answers that shed some light on the structure of such shortest paths. Sebö has characterized the structure of single-source shortest paths in undirected graphs, first for graphs with ±1 edge weights [9] and then extending to general weights by reduction [10]. Equation (4.2) of [9] (for ±1 weights, plus its version obtained by reduction for arbitrary weights) characterizes the shortest paths from a fixed source in terms of how they enter and leave "level sets" determined by the distance function. However, this is just a partial answer, as it does not show what the shortest path "tree" looks like, nor does it give an efficient way to compute it. We write "tree" because one might observe that the shortest paths do not necessarily form a tree -- as shown in the figure below.

triangle

So what do shortest paths look like? Is there some notion of a shortest path tree here? If yes, is this shortest path tree of O(n) size? In our recent paper with Hal Gabow, available on arXiv, we give answers to these questions. For undirected shortest paths we get a somewhat simple definition of a generalized shortest-path tree -- see Section 6.1 of our paper. (It seems to us that such a definition may have been overlooked due to the reliance on reductions.) The generalized shortest-path tree is a combination of the standard shortest-path tree and the matching blossom tree. This is not so astonishing if you recall the above reduction to matchings in general graphs. Examining the blossom structure of the resulting graph enables us to define our generalized shortest-path tree that, like the standard shortest-path tree for directed graphs, specifies a shortest path to every vertex from a chosen source. We give a complete derivation of the existence of this shortest-path structure, as well as an algebraic algorithm to construct it in $\tilde{O}(Wn^{\omega})$ time. We also construct the structure with combinatoric algorithms whose running times are within logarithmic factors of the best-known bounds for constructing the directed shortest-path tree with negative weights. Hence, this settles the problem.

[1] M. Thorup, Undirected single-source shortest paths with positive integer weights in linear time, JACM, 46(3):362--394, 1999.

[2] H. N. Gabow and R. E. Tarjan, Faster scaling algorithms for network problems, SIAM Journal on Computing, 18(5):1013--1036, 1989.

[3] A. V. Goldberg, Scaling algorithms for the shortest paths problem, SODA '93.

[4] R. Yuster and U. Zwick, Answering distance queries in directed graphs using fast matrix multiplication, FOCS'05.

[5] P. Sankowski,  Shortest Paths in Matrix Multiplication Time, ESA'05.

[6] J. Edmonds, An introduction to matching. Mimeographed notes, Engineering Summer Conference, U. Michigan, Ann Arbor, MI, 1967.

[7] R. K. Ahuja, T. L. Magnanti and J.B. Orlin, Network Flows: Theory, Algorithms, and Applications, Prentice Hall, 1993.

[8] H. N. Gabow, Data Structures for Weighted Matching and Nearest Common Ancestors with Linking, SODA'90.

[9] A. Sebö, Undirected distances and the postman-structure of graphs, J. Combin. Theory Ser. B, 49(1):10--39, 1990.

[10] A. Sebö, Potentials in Undirected Graphs and Planar Multiflows, SIAM J. Comput., 26(2):582--603, 1997.

Local search for k-Set Packing

Back in 2012, when I was a post-doc in Lugano, Switzerland, I was working with Fabrizio Grandoni and Monaldo Mastrolilli on a pricing problem we called the Hypermatching Assignment Problem. This problem is at the same time a generalization of the Generalized Assignment Problem and of the k-Set Packing problem, which brought my attention to the approximation status of the latter, which is the subject of this post.

k-Set Packing is a basic combinatorial optimization problem where, given a universe U and a family F of its subsets, each of size at most k, we are to find a maximum-size subfamily of pairwise disjoint sets of F. A special case of 3-Set Packing is the 3-Dimensional Matching problem. A good argument showing that those two problems are indeed classic is the fact that both belong to Karp's list of 21 NP-complete problems.

A local search routine tries to locally modify the current solution in order to get a better one. For k-Set Packing, by p-local search we denote an algorithm which starts with an empty solution A and repeatedly tries to improve it by adding a collection A_0 of at most p new sets, such that the number of sets we need to remove from A in order to maintain a feasible (i.e., disjoint) solution is strictly smaller than the number of sets we have added. Having reached a local search maximum, our hope is to prove that an optimum solution is not much better.

pseudocode-localsearch
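
Here is a rough Python sketch of the p-local search routine described above (my own reconstruction of the pseudocode; it brute-forces all collections of at most p new sets, so it runs in |F|^O(p) time).

```python
from itertools import combinations

def p_local_search(family, p):
    """Brute-force p-local search for set packing: repeatedly add up to p new
    pairwise disjoint sets if strictly fewer old sets have to be removed."""
    family = [frozenset(S) for S in family]
    solution = []
    improved = True
    while improved:
        improved = False
        for size in range(1, p + 1):
            candidates = [S for S in family if S not in solution]
            for added in combinations(candidates, size):
                if any(a & b for a, b in combinations(added, 2)):
                    continue                      # added sets must be disjoint
                union = frozenset().union(*added)
                conflicting = [S for S in solution if S & union]
                if len(conflicting) < len(added): # strictly improving swap
                    solution = [S for S in solution if not (S & union)] + list(added)
                    improved = True
                    break
            if improved:
                break
    return solution

# toy example: the optimum here packs two disjoint sets
family = [{1, 2, 3}, {3, 4, 5}, {1, 6, 7}, {4, 8, 9}, {2, 5, 8}]
print(p_local_search(family, 2))
```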

It is easy to see that a 1-local search maximum is an inclusionwise maximal subfamily of F, which in turn implies an approximation ratio of k. It is not hard to show that a 2-local search maximum yields a (k+1)/2 approximation ratio. In 1989 Hurkens and Schrijver [SIDMA'89] showed that with growing values of p, a p-local search maximum gives a (k+eps_p)/2 approximation, where eps_p is a constant depending on p that converges to 0. Up to now this was the best known polynomial time approximation algorithm for both k-Set Packing and 3-Dimensional Matching.

Later, in [SODA'95], Halldorsson proved that when we consider logarithmic values of p, the approximation ratio is upper bounded by (k+2)/3, which was slightly improved in our Hypermatching Assignment Problem paper down to (k+1+eps)/3, showing that in quasipolynomial time one can break the 1.5 approximation ratio for 3-Dimensional Matching. A natural question is the following. Can we implement the O(log n)-local search in polynomial time? This brings us to the area of fixed parameter tractability, where, taking the radius p of the local search as the parameter, we aim at 2^O(p) poly(|F|) time algorithms. Unfortunately, we have shown that the parameterized local search is W[1]-hard, even in the more relaxed permissive variant, which means that an f(p) poly(|F|) time algorithm is unlikely to exist, no matter how fast-growing the function f() is.

Therefore we have to modify our strategy. In our second take we try the following. First, we inspect the existing analyses of the upper bounds on the approximation ratio to determine the exact set of swaps with which our current solution must not be improvable in order for the proof to go through. Next, we want to show that for this particular set of swaps we can implement the local search algorithm efficiently.

In the analysis of Halldorsson, the notion of a conflict graph is used. Imagine a bipartite graph with the sets of our current solution A on one side and the sets of F \ A on the other, where a set of A and a set of F \ A are connected by an edge if they share a common element.

conflict

Note that an edge indicates that it is impossible to have both sets in a solution at the same time. The main part of the analysis of Halldorsson is showing that if (k+2)/3 * |A| < |OPT|, then there exists a subset X of F \ A of size O(log n) such that |N(X)| < |X| in the conflict graph, that is, the number of sets we need to remove after adding the sets of X is strictly smaller than the size of X itself. However, when inspected more closely, the set X has some structure, namely G[N[X]] is a subdivision of one of the following graphs:

subdivisions2

This means that in our local search algorithm we do not have to look for all improving sets X of logarithmic size; it is enough to consider sets X such that G[N[X]] has constant pathwidth! In this way, by using the color coding technique of Alon, Yuster and Zwick [JACM'95], we obtain a polynomial time (k+2)/3 approximation algorithm. Unfortunately, this is not yet enough to break the 1.5 approximation ratio for 3-Dimensional Matching, as the existing upper bound of (k+1+eps)/3 in fact uses improving sets of unbounded pathwidth. This is the main technical hurdle, and overcoming it required proving lemmas relating the pathwidth, average degree and girth of a graph. Finally, we have obtained a (k+1+eps)/3 polynomial time approximation algorithm for k-Set Packing, implying a 1.34-approximation for 3-Dimensional Matching.

The described results were obtained by Maxim Sviridenko and Justin Ward, published at ICALP'13, and by myself, to appear in FOCS'13 and available on arXiv. Since the papers contain a significant intersection of ideas, we decided to prepare a common journal version. Personally, I am very happy about the described result, because it utilizes techniques from parameterized complexity in a nontrivial way to obtain an improvement in an over-20-years-old approximation ratio of a basic problem.

WG 2014

The call for papers for WG 2014 is out.

Let me say that I really like the WG series. Not only because WG'08 was my first conference (and WG'14 is now the first one where I'm on the PC), but mainly because there is always a nice bunch of papers with cute combinatorics, and you always travel back home from WG with a full sack of cool open problems to think about.

Speaking of these, I went through my private "open problem list" and found a few problems that, in my opinion, may nicely suit WG. Hey, there are still a few months till the deadline, so why not solve some of the problems and go for a trip to this lovely castle in France?

The problems are from parameterized complexity, since I mostly work in this area. To the best of my knowledge, they are currently open. I spent some significant time on each of them somewhere in the past.

Maybe a small disclaimer is in order: although I find these problems interesting and nice, the techniques needed to solve them may turn out to be boring, other PC members of WG 2014 may have different opinions on their importance, etc. So you should not in any way treat this as a promise that a solution will get into WG. I just wanted to inspire some research, and to see solutions to some nice problems that I thought about without success, and that is the sole motivation for this post 🙂

Cutting short paths. In this paper the authors study (among others) the following problem: given a (directed or undirected) graph $G$ with a source $s$ and a sink $t$, and integers $k$ and $\ell$, cut at most $k$ edges of $G$ so that the shortest path from $s$ to $t$ has length larger than $\ell$. They show an FPT algorithm parameterized by both $k$ and $\ell$. The question is: does this problem admit a polynomial kernel with respect to this parameterization?

Constrained bipartite vertex cover. The minimum vertex cover problem in bipartite graphs is solvable in polynomial time. What about the following variant: given a bipartite graph with a fixed bipartition $(A,B)$, and integers $k_A$ and $k_B$, find a vertex cover of the graph with at most $k_A$ vertices in $A$ and at most $k_B$ vertices in $B$. This is NP-hard. There is a simple reduction rule: a vertex of $A$ of degree larger than $k_B$ needs to be included in the solution, and symmetrically the same holds for a vertex of $B$ of degree larger than $k_A$. After this reduction is exhaustively applied, note that a solution may cover at most $2k_Ak_B$ edges, so we have a kernel of this size. Can you do better with respect to the number of vertices in the kernel? The classical vertex cover problem in arbitrary graphs has a kernel with a linear number of vertices.
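
As a small illustration (hypothetical helper names, not from any paper), the reduction rule and the resulting edge bound can be sketched as follows.

```python
def reduce_instance(edges, k_a, k_b):
    """edges: set of (a, b) pairs of a bipartite graph with sides A and B.
    Applies the reduction rule exhaustively and returns the reduced edge set
    together with the remaining budgets (a negative budget means NO-instance)."""
    changed = True
    while changed:
        changed = False
        deg_a, deg_b = {}, {}
        for a, b in edges:
            deg_a[a] = deg_a.get(a, 0) + 1
            deg_b[b] = deg_b.get(b, 0) + 1
        forced_a = {a for a, d in deg_a.items() if d > k_b}  # must be in the cover
        forced_b = {b for b, d in deg_b.items() if d > k_a}
        if forced_a or forced_b:
            edges = {(a, b) for a, b in edges
                     if a not in forced_a and b not in forced_b}
            k_a, k_b = k_a - len(forced_a), k_b - len(forced_b)
            changed = True
    # now every A-vertex has degree <= k_b and every B-vertex degree <= k_a,
    # so any feasible cover touches at most 2 * k_a * k_b remaining edges
    return edges, k_a, k_b
```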

Imbalance minimization. Given an undirected graph $G$ and an ordering of its vertices, the imbalance of a vertex $v$ in this ordering equals

\[ \Big|\, |\{u \in N(v) : u \text{ is before } v\}| - |\{u \in N(v) : u \text{ is after } v\}| \,\Big|, \]

that is, the absolute value of the difference between the number of neighbours of $v$ before and after $v$ in the ordering. The imbalance of the ordering is the sum of the imbalances of all vertices. Here the authors prove that the problem of finding an ordering of imbalance at most $k$, parameterized by $k$, is FPT. Does this problem admit a polynomial kernel?
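
For concreteness, a tiny snippet computing the imbalance of an ordering (my own illustration of the definition above):

```python
def imbalance(adj, order):
    """adj: dict mapping each vertex to the set of its neighbours;
    order: a list containing every vertex exactly once."""
    position = {v: i for i, v in enumerate(order)}
    total = 0
    for v in order:
        before = sum(1 for u in adj[v] if position[u] < position[v])
        after = len(adj[v]) - before
        total += abs(before - after)
    return total

# a path a-b-c ordered as (a, b, c) has imbalance 1 + 0 + 1 = 2
print(imbalance({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}, ['a', 'b', 'c']))
```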

Max-leaf outbranching, parameterized by treewidth. In a directed graph, an outbranching is a spanning subgraph that is a rooted tree in which every arc is directed away from the root. In the max-leaf outbranching problem we look for an outbranching of the given graph with the maximum number of leaves. We are interested in solving this problem when we are given a tree decomposition of the graph of width $t$, that is, we study treewidth DPs. Here we have shown a randomized algorithm running in time $c^{t}\cdot n^{O(1)}$ for a constant $c$, but we could not get a matching lower bound (as we did for most of the other problems studied there). Is our base of the exponent optimal (of course, assuming the Strong ETH)? Or maybe you can do better?

Our Approximation Algorithm Library is Gaining Momentum

As theoreticians we tend to be overly pessimistic about the practical applicability of our results. This becomes even more visible in the area of approximation algorithms, where it seems that algorithms with theoretical guarantees on their approximation ratios are easily beaten by meta-heuristics like simulated annealing. Here, for example, the recent 1.39-approximation algorithm for the Steiner tree problem comes to mind. The algorithm, at first glance, seems too complicated to be implemented, and it seems that even if it were implemented it could not deliver decent results in practice. However, both of these statements are only partially true. First of all, if one has the right tools, then implementing this algorithm is not impossible - the code developed by Maciej Andrejczuk can be seen here: http://siekiera.mimuw.edu.pl:8082/#/c/59/. (It is still undergoing some reviews, but should shortly be merged into the master branch.) The code is based on the iterative rounding framework we have developed in our approximation algorithms library: http://siekiera.mimuw.edu.pl/~paal/. The algorithm, when enhanced with some speed-up heuristics, is able to solve reasonable-size instances, although more improvements and tests are planned.

The other things we have implemented using the iterative rounding framework include a 2-approximate solution to the tree augmentation problem, as well as the +1-approximation algorithm for the degree bounded spanning tree problem by Singh and Lau (the second one still requires some documentation). On the other hand, we have developed a nice framework for local search algorithms that can be used to solve TSP, capacitated and uncapacitated facility location, and k-median, and we also provide some greedy approximation algorithms. Finally, we are currently working on a framework for primal-dual algorithms, which could be used to implement several classical approximation algorithms. So stay tuned!