Efficiency of Random Priority

Matchings are one of the most basic primitives used in the area of mechanism design. Whenever we need to assign items to agents, we view it as a matching problem. Of course, plenty of variations differing in constraints and objectives can be considered, but let us look at the simplest variant possible. We need to assign n students to n dorms. Students have preferences over dorms. We, the administration, would like to find an allocation that meets students' preferences as much as possible. We cannot use money in deploying the mechanism. This problem has been considered many times in the literature, but in a setting where the preferences of student a\in A are given by an ordering \preceq_{a}\subseteq I\times I over the set I of dorms. In this setting it was shown that there exists only one truthful, non-bossy and neutral mechanism. This unique mechanism is called Serial Dictatorship and it works as follows: agents are sorted in a fixed order, the first agent chooses his favorite item, the next agent chooses his favorite item among the remaining items, and so on. Moreover, its randomized variant, called Random Serial Dictatorship (RSD), which scans agents according to a random order, possesses the additional nice properties of being symmetric and ex post efficient, i.e., it never outputs Pareto dominated outcomes. This mechanism is more often called Random Priority, and hence the title of this entry.
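To make the mechanics concrete, here is a minimal sketch of Random Serial Dictatorship in Python. The dictionary-based interface is my own illustration, not code from any of the papers mentioned:

```python
import random

def random_serial_dictatorship(preferences, seed=None):
    """Run RSD: scan agents in a uniformly random order; each agent
    picks his favorite item among those still available.

    preferences[a] is agent a's ranking of items, best first.
    Returns a dict mapping agent -> assigned item.
    """
    rng = random.Random(seed)
    agents = list(preferences)
    rng.shuffle(agents)                  # the random priority order
    available = set()
    for ranking in preferences.values():
        available.update(ranking)
    assignment = {}
    for a in agents:
        for item in preferences[a]:      # first = most preferred
            if item in available:
                assignment[a] = item
                available.remove(item)
                break
    return assignment
```

Truthfulness is easy to see from this loop: an agent's report only determines which of the items still on the table he receives, so ranking items honestly is always in his interest.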

However, one can imagine a setting where the preferences of an agent are not given by an ordering of items, but rather by values w(a,i)\in[0,1] telling how much agent a values item i. In this model we can quantify the social welfare of the assignment a mechanism finds. Here, the welfare of an assignment \left\{(a_{j},i_{\phi(j)})\right\}_{j=1}^n is just the sum \sum_{j=1}^n w(a_{j},i_{\phi(j)}). Together with Piotr Sankowski and Qiang Zhang we decided to investigate the problem of comparing the social welfare obtained by RSD with the optimum offline welfare. By optimum offline welfare we mean the maximum weight matching in the underlying graph. We assume that when RSD asks an agent to make his choice, he picks an item he values the most, and that he resolves ties randomly. At first sight, it might seem that obtaining any meaningful result on the approximation factor of RSD is impossible. Just look at the example in the Figure.


In the Figure, agent a_{1} values item i_{1} at 1, while every other agent values i_{1} at \varepsilon. For items i_{2},i_{3},...,i_{n} each agent has value 0; that is why these edges are not in the Figure. The optimal social welfare is obviously 1. However, the first agent approached by RSD will always pick item i_{1}, and therefore the expected social welfare of RSD is \frac{1}{n}+\frac{n-1}{n}\varepsilon\approx\frac{1}{n}. To overcome this problem, we asked two questions. First, what if valuations of agents are always either 0 or 1? Here the hard example no longer applies, provided that ties are resolved randomly. Second, what if the optimal social welfare is large compared to n? One can see that in the extreme case \nu\left(OPT\right)=n, RSD obtains welfare at least n/2 (it always constructs an inclusion-wise maximal matching), so it is not bad either.
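A quick Monte Carlo check of this example; the helper `rsd_welfare` and the matrix encoding of the instance are my own sketch, not notation from the paper:

```python
import random

def rsd_welfare(w, rng):
    """One run of RSD on valuation matrix w: agents arrive in random
    order; each takes a maximum-value remaining item, breaking ties
    uniformly at random (the tie-breaking rule assumed in the post)."""
    n = len(w)
    order = list(range(n))
    rng.shuffle(order)
    remaining = set(range(n))
    total = 0.0
    for a in order:
        best = max(w[a][i] for i in remaining)
        pick = rng.choice([i for i in remaining if w[a][i] == best])
        remaining.remove(pick)
        total += w[a][pick]
    return total

# the hard instance: agent 0 values item 0 at 1, everyone else values
# item 0 at eps, and all other values are 0
n, eps = 10, 1e-6
w = [[0.0] * n for _ in range(n)]
w[0][0] = 1.0
for a in range(1, n):
    w[a][0] = eps

rng = random.Random(0)
avg = sum(rsd_welfare(w, rng) for _ in range(20000)) / 20000
# avg should be close to 1/n + (n-1)/n * eps, i.e. about 0.1 for n = 10
```

The empirical average hovers around \frac{1}{n}, matching the calculation above: only the first arriving agent gets positive value, and he is a_{1} with probability \frac{1}{n}.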

0/1 preferences

In case the preferences are 0 and 1, we can look only at the 1-edges in the underlying graph. Then we are given an unweighted graph and we just want to maximize the size of the constructed matching. We stress that the random tie-breaking rule is crucial here. That is, we require that when RSD asks an agent who has value 0 for all remaining items, this agent picks one of the items at random. Without this assumption the hard example from the Figure still works --- suppose agents a_{2},a_{3},...,a_{n} value all items at 0, but each of them always chooses i_{1}.

In this model, RSD is a \frac{1}{3}-approximation, and below we show why. Instead of the clunky ``agent has value 1 for an item'', we shall simply say ``agent 1-values an item''. Consider step t+1 of the algorithm. Some agents and items have been removed from the instance, so let OPT^{t}\subseteq OPT be what remains of the optimal solution after t steps of RSD. Also, let RSD^{t} be the partial matching constructed by RSD after t steps, and \nu\left(RSD^{t}\right) its welfare. It can happen that at time t an agent does not 1-value any of the remaining items, even though he may have 1-valued some items initially. Thus let z^{t} be the number of agents who 0-value all remaining items. Let a be the agent chosen by RSD at step t+1 to pick an item. With probability 1-\frac{z^{t}}{n-t} agent a 1-values at least one remaining item. If so, then the welfare of RSD increases by 1, i.e., \nu\left(RSD^{t+1}\right)=\nu\left(RSD^{t}\right)+1. Hence

\begin{align*}\mathbb{E}\left[\nu\left(RSD^{t+1}\right)-\nu\left(RSD^{t}\right)\right]=1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}.\end{align*}


What is the expected number of edges we remove from OPT^{t}? That is, what is the expected loss \mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right]? Again, with probability 1-\frac{z^{t}}{n-t} agent a picks an item i he 1-values. Both agent a and item i may belong to OPT^{t}, in which case OPT^{t} loses two edges; otherwise OPT^{t} loses at most one edge. Now suppose that agent a 0-values all remaining items, which happens with probability \frac{z^{t}}{n-t}. Then a picks an item at random from all remaining items, so with probability \frac{\nu\left(OPT^{t}\right)}{n-t} agent a picks an item that belongs to OPT^{t}. If so, then OPT^{t} loses 1; otherwise OPT^{t} does not lose anything. We have \nu\left(OPT^{t}\right)+z^{t}\leq n-t, because every edge of OPT^{t} uses a distinct remaining agent who 1-values some remaining item, and such agents are disjoint from the z^{t} agents. Therefore \frac{\nu\left(OPT^{t}\right)}{n-t}\leq 1-\frac{z^{t}}{n-t}, and hence the expected decrease is:

\begin{align*}\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right] &\leq \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\frac{\nu\left(OPT^{t}\right)}{n-t}\right] \\ &\leq \mathbb{E}\left[2\cdot\left(1-\frac{z^{t}}{n-t}\right)+\frac{z^{t}}{n-t}\cdot\left(1-\frac{z^{t}}{n-t}\right)\right] \\ &\leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right).\end{align*}

Combined with the expected per-step gain of RSD, this gives
\begin{align*}\mathbb{E}\left[\nu\left(OPT^{t}\right)-\nu\left(OPT^{t+1}\right)\right] &\leq 3\cdot\left(1-\frac{\mathbb{E}\left[z^{t}\right]}{n-t}\right) \\ &=3\cdot\mathbb{E}\left[\nu\left(RSD^{t+1}\right)-\nu\left(RSD^{t}\right)\right],\end{align*}

and summing over t from 0 to n-1, both sides telescope and we conclude that

\begin{align*}\nu\left(OPT\right)&=\mathbb{E}\left[\nu\left(OPT^{0}\right)-\nu\left(OPT^{n}\right)\right]\\ &\leq3\cdot\mathbb{E}\left[\nu\left(RSD^{n}\right)-\nu\left(RSD^{0}\right)\right]=3\cdot \mathbb{E}\left[\nu\left(RSD\right)\right].\end{align*}
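The \frac{1}{3} bound can be sanity-checked empirically. The sketch below (helper names are mine) pits RSD with random tie-breaking against a maximum matching computed by Kuhn's augmenting-path algorithm:

```python
import random

def max_matching(adj, n_items):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.
    adj[a] lists the items that agent a 1-values."""
    match_item = [-1] * n_items            # item -> matched agent, or -1

    def try_augment(a, seen):
        for i in adj[a]:
            if i not in seen:
                seen.add(i)
                if match_item[i] == -1 or try_augment(match_item[i], seen):
                    match_item[i] = a
                    return True
        return False

    return sum(try_augment(a, set()) for a in range(len(adj)))

def rsd_01_welfare(adj, n_items, rng):
    """One RSD run on 0/1 valuations: random agent order; each agent
    takes a random remaining 1-valued item if one exists, otherwise a
    random remaining item (the crucial random tie-breaking rule)."""
    order = list(range(len(adj)))
    rng.shuffle(order)
    remaining = set(range(n_items))
    welfare = 0
    for a in order:
        liked = [i for i in adj[a] if i in remaining]
        if liked:
            remaining.remove(rng.choice(liked))
            welfare += 1
        elif remaining:
            remaining.remove(rng.choice(list(remaining)))
    return welfare

rng = random.Random(1)
n = 8
adj = [[i for i in range(n) if rng.random() < 0.3] for _ in range(n)]
opt = max_matching(adj, n)
avg = sum(rsd_01_welfare(adj, n, rng) for _ in range(5000)) / 5000
# the theorem guarantees E[welfare] >= opt / 3
```

On random instances like this one RSD typically lands well above the \frac{1}{3} threshold; the bound is a worst-case guarantee.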

Connection to online bipartite matching

As we can see, the main problem for RSD in the analysis is the agents who have value 0 for all remaining items. If we could assume that such an agent reveals the fact that he 0-values the remaining items, then we could discard this agent, and the above analysis would give a \frac{1}{2}-approximation. But in this model we could actually implement a mechanism based on the RANKING algorithm of Karp, Vazirani and Vazirani, and this would give an approximation of \left(1-\frac{1}{e}\right)\approx0.63. In fact, it is also possible to use the algorithm of Mahdian and Yan (2011), and this would give a factor of \approx0.69.
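For intuition, here is a sketch of how a RANKING-based mechanism could look in this 0/1 setting, under the assumption stated above that agents with no 1-valued items left step aside. This is an illustration of the algorithm's core idea, not the exact mechanism from any of the cited papers:

```python
import random

def ranking_matching(adj, n_items, seed=None):
    """Sketch of the RANKING algorithm of Karp, Vazirani and Vazirani:
    items receive a uniformly random priority order up front, and each
    arriving agent is matched to the unmatched 1-valued item of highest
    priority. Agents with no 1-valued item left are skipped, which is
    exactly the 'agent reveals he 0-values everything' assumption."""
    rng = random.Random(seed)
    rank = list(range(n_items))
    rng.shuffle(rank)                      # rank[i] = priority of item i
    matched = [False] * n_items
    matching = {}
    for a in range(len(adj)):              # agents arrive one by one
        candidates = [i for i in adj[a] if not matched[i]]
        if candidates:
            best = min(candidates, key=lambda i: rank[i])
            matched[best] = True
            matching[a] = best
    return matching

# a tiny instance: agent 0 1-values items 0 and 1, agent 1 only item 0,
# agent 2 only item 2
example = ranking_matching([[0, 1], [0], [2]], 3, seed=7)
```

Note that the randomness here lives on the item side (one random permutation fixed up front), unlike RSD, which randomizes over the agent order.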


Now let us go back to the model with arbitrary real valuations from the interval \left[0,1\right]. Here we want to obtain an approximation factor that depends on \nu\left(OPT\right) --- for \nu\left(OPT\right)=1 the ratio should be around \frac{1}{n}, while for \nu\left(OPT\right)\approx n it should be constant. This suggests that we should aim at an approximation factor of \Theta\left(\frac{\nu\left(OPT\right)}{n}\right). Let us first show an upper bound that cannot be beaten by any mechanism. Take any integer k, and copy the instance from the Figure k times. Let agents 0-value items from different chunks, so that the social welfare of any mechanism is the sum of its welfares over all chunks. On any chunk no mechanism can get an outcome better than \frac{1}{n}+\varepsilon, hence no mechanism can do better than k\cdot\left(\frac{1}{n}+\varepsilon\right)\approx\frac{k^{2}}{n\cdot k} on the whole instance. Here \nu\left(OPT\right)=k, and the number of agents is n\cdot k this time. Therefore, the asymptotic upper bound on the welfare of any mechanism, RSD included, is \frac{\nu\left(OPT\right)^{2}}{n}, where n is the number of agents. Quite interestingly, we can prove that RSD is only a \frac{1}{e} fraction away from this upper bound. That is, we show that the expected outcome of RSD is at least \frac{1}{e}\cdot\frac{\nu\left(OPT\right)^{2}}{n}. We shall not prove this here; if you are curious where this constant of \frac{1}{e} comes from, please have a look at our arXiv report.
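One can probe the \frac{1}{e}\cdot\frac{\nu\left(OPT\right)^{2}}{n} guarantee on exactly the copied hard instance described above. The simulation below is my own sketch (the RSD routine is repeated so the snippet stands alone); here n is the per-chunk size, so the total number of agents is N=n\cdot k:

```python
import math
import random

def rsd_welfare(w, rng):
    """One RSD run: random agent order; each agent takes a maximum-value
    remaining item, breaking ties uniformly at random."""
    order = list(range(len(w)))
    rng.shuffle(order)
    remaining = set(range(len(w[0])))
    total = 0.0
    for a in order:
        best = max(w[a][i] for i in remaining)
        pick = rng.choice([i for i in remaining if w[a][i] == best])
        remaining.remove(pick)
        total += w[a][pick]
    return total

# k disjoint copies of the n-agent hard instance: N = n*k agents and items,
# nu(OPT) = k, and the guarantee promises E[RSD] >= (1/e) * k**2 / N
n, k, eps = 5, 4, 1e-9
N = n * k
w = [[0.0] * N for _ in range(N)]
for c in range(k):                     # chunk c owns agents/items c*n .. c*n+n-1
    base = c * n
    w[base][base] = 1.0                # one agent per chunk 1-values "its" item
    for a in range(base + 1, base + n):
        w[a][base] = eps               # chunk-mates value that item at eps only

rng = random.Random(0)
avg = sum(rsd_welfare(w, rng) for _ in range(4000)) / 4000
lower = k * k / (math.e * N)           # the (1/e) * OPT^2 / N guarantee
```

The empirical average sits far below \nu\left(OPT\right)=k, as the hard example dictates, yet above the \frac{1}{e}\cdot\frac{\nu\left(OPT\right)^{2}}{N} floor.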


It turns out that this natural problem was independently investigated at the same time by Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen and Jie Zhang. They obtained similar results for general [0,1] valuations. Both their paper and ours have been accepted to SAGT'14, and we will give a joint presentation of the results. So if you are going to the symposium, please do us the pleasure of attending our talk.

Is it easier for Europeans to get to FOCS than to STOC?

I have recently realized that I have published 9 papers at FOCS and only 1 paper at STOC. I started wondering why this happened. I certainly have been submitting more papers to FOCS than to STOC: the ratio is currently 14 to 6. However, this alone does not explain the numbers of accepted papers. My acceptance rate for FOCS is way higher than for STOC: 71% vs. 17%. These numbers started looking this extreme after two of my papers were accepted to this year's FOCS:

  • Network Sparsification for Steiner Problems on Planar and Bounded-Genus Graphs, by Marcin Pilipczuk, Michał Pilipczuk, Piotr Sankowski and Erik Jan van Leeuwen
  • Online bipartite matching in offline time, by Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski and Anna Zych

All this is due to the fact that I work in a "European way", i.e., I do most of my scientific work during winter and spring. Summers are much less work-intensive for me. For example, when I stayed in Italy, the department was even closed during August, so even if one wanted to come to work, it was impossible. Poland is not that extreme, but far fewer things happen during the summer, e.g., we do not have regular seminars.

Altogether, even if I have some idea during the summer, I almost never manage to write it down well enough by the STOC deadline. Even if I do submit something to STOC, it usually gets rejected due to bad presentation. This considerably decreases my success rate at STOC. I wonder whether this is more universal and there are indeed more papers authored by Europeans at FOCS than at STOC, i.e., whether FOCS is easier for Europeans.