4th Highlights of Algorithms conference (HALG 2019)

Copenhagen, June 14-16, 2019

http://highlightsofalgorithms.org/

The Highlights of Algorithms conference is a forum for presenting the highlights of recent developments in algorithms and for discussing potential further advances in this area. The conference will provide a broad picture of the latest research in algorithms through a series of invited talks, as well as the possibility for all researchers and students to present their recent results through a series of short talks and poster presentations. Attending the Highlights of Algorithms conference will also be an opportunity for networking and meeting leading researchers in algorithms.

------------------------------

PROGRAM

The conference will begin on Friday, June 14, at 9:00 and end on Sunday, June 16, at 18:00. A detailed schedule and a list of all accepted short contributions can be found at: 2018.highlightsofalgorithms.org

------------------------------

REGISTRATION

Please register on our webpage:

http://

We have done our best to keep registration fees to a minimum:

Early registration (by April 29, 2019):

- academic rate (incl. postdocs): 160€
- student rate: 115€

Regular registration will be 50€ more expensive.

The organizers strongly recommend that you book your hotel as soon as possible.

------------------------------

CONFERENCE VENUE

The conference will take place at the H.C. Ørsted Institute of the University of Copenhagen. The address is: Universitetsparken 5, DK-2100 Copenhagen.

------------------------------

INVITED SPEAKERS

Survey speakers:

Monika Henzinger (University of Vienna)

Thomas Vidick (California Institute of Technology)

Laszlo Vegh (London School of Economics)

James Lee (University of Washington)

Timothy Chan (University of Illinois at Urbana-Champaign)

Sergei Vassilvitskii (Google, New York)

Invited talks:

Martin Grohe (RWTH Aachen University)

Josh Alman (MIT)

Nima Anari (Stanford University)

Michal Koucký (Charles University)

Naveen Garg (IIT Delhi)

Vera Traub (University of Bonn)

Rico Zenklusen (ETH Zurich)

Shayan Oveis Gharan (University of Washington)

Greg Bodwin (MIT)

Cliff Stein (Columbia University)

Sungjin Im (University of California at Merced)

C. Seshadhri (University of California, Santa Cruz)

Shay Moran (Technion)

Bundit Laekhanukit (Shanghai University of Finance and Economics)

Sebastien Bubeck (Microsoft Research, Redmond)

Sushant Sachdeva (University of Toronto)

Kunal Talwar (Google Brain)

Moses Charikar (Stanford University)

Shuichi Hirahara (University of Tokyo)

------------------------------

The deadline was approaching without mercy and there was, of course, still some polishing to be done on our SODA paper. But then we ran into an issue. To make things worse, this issue turned out to be a hard one: a fundamental, well-known open problem in computational geometry. The good thing is, I liked the problem so much that I decided to dedicate this post to it. This is the story of the Sum of Square Roots problem and how we bypassed (read: ignored) it without solving it.

Everything began in the haze of the 1970s. It is nebulous who stumbled first upon this enigma. Some say that Ron Graham discussed the problem in public lectures, others say that Joseph O'Rourke posed it as an open question in the American Mathematical Monthly, while still others suspect that the problem had already been hiding in older, different formulations. However, it is a historical fact that one shiny/cloudy day, three computer scientists finished polishing a manuscript that became a classic paper known as "Some NP-complete geometric problems". In this paper, Michael Garey, Ron Graham and David Johnson showed the NP-hardness of two important problems in geometric metrics: Steiner Tree and Traveling Salesman. For the Euclidean plane, they showed only NP-hardness, as they did not manage to show that these problems are contained in NP. Moreover, they emphasized that we cannot even rule out that the decision version of Euclidean Minimum Spanning Tree is outside of NP. What a seeming paradox, given that we can compute such a minimum tree in polynomial time! So, whom did they blame? The short answer: the Euclidean metric. Garey and his coauthors explain that all these problems share a hidden issue: they rely on comparing Euclidean lengths, that is, on comparing irrational numbers based on square roots. Whereas this task is trivial if we just want to compare two line segments (e.g., by comparing the radicands), the problems start when we want to compare two polygonal paths. Even assuming rational (or, after scaling, integer) coordinates, this translates into a question that is fundamental in computational geometry: given two lists of integers *a_{1}, …, a_{k}* and *b_{1}, …, b_{k}*, decide whether *√a_{1} + … + √a_{k} ≤ √b_{1} + … + √b_{k}*.

To emphasize the significance of this question, let me cite David Eppstein: "A major bottleneck in proving NP-completeness for geometric problems is a mismatch between the real-number and Turing machine models of computation: one is good for geometric algorithms but bad for reductions, and the other vice versa. Specifically, it is not known on Turing machines how to quickly compare a sum of distances (square roots of integers) with an integer or other similar sums, so even (decision versions of) easy problems such as the minimum spanning tree are not known to be in NP."

"Is this problem computable at all?", the curious reader might ask after realizing that the approach of iteratively increasing the precision until the first differing digit appears will never terminate if both numbers happen to be equal. Luckily, Allan Borodin et al. and Johannes Blömer showed that polynomial time is enough to decide whether two such sums have the same value. It is therefore astonishing that the problem of deciding the sign of their difference, a question that seems only slightly harder, has only recently been spotted by Eric Allender et al. on an island of the counting hierarchy CH (3rd level), located in the wide ocean of PSPACE. Thus, it is still far, far away from the needle head of P, where Gregorio Malajovich conjectured it to live. Until this is shown, Jeff Erickson proposes to study which problems are (non-boringly) Σ√-hard.

Oh, if we could only sufficiently bound the difference of the sums from below! Then we could bound from above the precision required to determine the correct sign of the difference. Just imagine that *B* is such a lower bound on the absolute value of any nonzero difference. Now, take a precision that allows us to compute an approximate difference that is only *B/2* away from the real one. Then it is easy to verify that the following procedure determines the correct sign: output the sign of the approximate difference if it lies outside of *[-B/2, B/2]*, otherwise output *0*. What is the running time of this procedure? It is proportional to the precision, which corresponds to the length of *B*, that is, to *-log B*. Hence, if *-log B* is bounded from above by a polynomial in the length of the input, then we are in P! So, is there any reasonable lower bound for the worst case? That is, **what is the smallest positive difference between any two sums of k square roots of integers of size at most n?** Welcome to Problem 33 of The Open Problems Project! Let us call this quantity *r(n,k)*.
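To make this concrete, here is a minimal Python sketch of this procedure (the function name and the interface, which takes the bound as *log₂ B*, are my own illustration, not code from any paper):

```python
from decimal import Decimal, getcontext

def sign_of_difference(a, b, log2_B):
    """Sign of sum(sqrt(a_i)) - sum(sqrt(b_i)) for integer lists a and b,
    assuming any nonzero difference exceeds B = 2**log2_B in absolute value.
    (Illustrative sketch; log2_B is typically a large negative integer.)"""
    # enough decimal digits that the accumulated error stays well below B/2
    getcontext().prec = max(28, -log2_B + len(a) + len(b) + 10)
    diff = sum(Decimal(x).sqrt() for x in a) - sum(Decimal(x).sqrt() for x in b)
    half_B = Decimal(2) ** (log2_B - 1)
    if diff > half_B:
        return 1
    if diff < -half_B:
        return -1
    return 0  # approximate difference within [-B/2, B/2]: true difference is 0
```

For example, `sign_of_difference([2, 3], [5], -40)` reports that *√2 + √3 > √5*, while `[4]` versus `[1, 1]` correctly comes out as equal.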

In the dramatic opening of this post, I mentioned our paper, where the Sum of Square Roots problem appeared. There, we design a PTAS for the Traveling Salesman problem (TSP) with hyperplane neighborhoods; that is, we look for a *(1+ε)*-approximation of a minimum-length tour that, instead of points, visits a given set of hyperplanes in ℝ^{d}, with *d* fixed. The basis of our algorithm is an integer grid of constant size (depending on *ε* and *d*). Our key observation is that there is a translation and scaling of the grid such that snapping an (adequately sparsified) optimum solution to the grid results in a *(1+ε)*-approximation. Using this insight, we let our algorithm enumerate all (reasonable) TSP tours lying in our integer grid. With a simple LP, we translate and scale each tour such that it visits all the hyperplanes of the input while minimizing its tour length (i.e., the scaling factor). In the end, we obtain a set of feasible TSP tours and output the shortest one. And here we were confronted with the Sum of Square Roots problem. Which of the tours is the shortest one? Was all our work ruined, all our blood, sweat and tears in vain, just because the very last line of our algorithm had unknown complexity? Our solution was simple: we just skipped the issue, and our paper got accepted. Why? Well, the issue wasn't really one, for the following three reasons:

- We could be lazy and assume the real RAM model of computation, with constant-time square-root operations (as is often done in computational geometry).
- We could be industrious and approximate the real tour lengths with a sufficiently small error, since we are looking for an approximation overall anyway.
- We could think and then realize that our candidate tours consist of only constantly many vertices, as our integer grid has constant size. Indeed, the best known lower bound on the difference *r(n,k)* implies the following nice take-away, which shall conclude this epilogue:

**We can compare two sums of square roots in polynomial time if the number of square roots is constant!**
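For a taste of how such an exact comparison can look, here is the smallest nontrivial case, *√a + √b* versus *√c*, resolved by squaring twice so that only integer arithmetic is needed (a hand-rolled sketch, not code from our paper):

```python
def compare_sqrt_sum(a, b, c):
    """Sign of (sqrt(a) + sqrt(b)) - sqrt(c) for non-negative integers,
    using exact integer arithmetic only. Squaring once reduces the question
    to comparing 2*sqrt(a*b) against t = c - a - b."""
    t = c - a - b
    if t < 0:                      # 2*sqrt(a*b) >= 0 > t, so the sum is larger
        return 1
    lhs, rhs = 4 * a * b, t * t    # both sides non-negative: square again
    return (lhs > rhs) - (lhs < rhs)
```

For larger (but still constant) *k*, repeated squaring blows up the expressions, yet the bound on *r(n,k)* guarantees that a fixed polynomial precision always suffices.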

-------------------------

Proof sketches, for the interested reader, of the lower and upper bounds on *-log r(n,k)*:

(*) *Ω(k log n)* [Cheng et al.]: The interval *[k, k√n]* contains so many distinct sums of *k* square roots (over square-free integers) that, by a pigeonhole argument, two of those sums must lie within distance *2^{-Ω(k log n)}* of each other.

(**) *O(2^{2k} log n)* [Burnikel et al.]: The difference is an algebraic integer and, consequently, the root of a polynomial with integer coefficients and degree at most *2^{2k}*. The product of all the roots of this polynomial (the conjugates of the difference) is a nonzero integer, and each conjugate has absolute value at most *2k√n*, so a nonzero difference must have absolute value at least *(2k√n)^{-(2^{2k}-1)} = 2^{-O(2^{2k} log n)}*.

------------------------------

Here's an old trick that we found useful for proving some tight complexity lower bounds. You are given *m* coins, each of weight either *a* or *b*, and a modern scale that can tell you the total weight of any chosen subset of coins. How many weighings do you need to identify which coin is which? Checking each coin individually uses *m* weighings, but can you do with fewer?

In any weighing, the chosen subset contains some unknown number of weight-*a* coins, between 0 and *m*, so the result takes one of *m* + 1 possible values, giving us at most log(*m* + 1) bits of information. In total we need *m* bits of information to identify every coin, so clearly we will need at least Ω(*m* / log *m*) weighings.
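The counting argument can be written down in two lines (a trivial sketch; the function name is mine):

```python
import math

def min_weighings(m):
    """Information-theoretic lower bound on the number of weighings:
    each weighing returns one of m + 1 values (at most log2(m + 1) bits),
    while identifying all m coins requires m bits in total."""
    return math.ceil(m / math.log2(m + 1))
```

For instance, with *m* = 7 coins each weighing yields at most 3 bits, so at least 3 weighings are needed.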

It turns out that this many weighings in fact suffice, and this generalizes to various other settings with less restricted weights. This is the basis for two of our recent results: tight complexity lower bounds for Integer Linear Programming with few constraints and for multicoloring (a.k.a. b-fold coloring), assuming the Exponential Time Hypothesis. The trick lets each constraint, which checks the value of some number between 0 and *m*, extract about log(*m*) bits of new information, in a way that is general enough to check *m* clauses of a 3-CNF-SAT instance using only *O*(*m* / log *m*) constraints.

One way to rephrase this coin-weighing problem is as follows: the unknown coin weights are described by a (column) vector *x* in {*a*,*b*}^{m}. A subset of coins is described by a characteristic (row) vector *v* in {0,1}^{m} and the resulting value of the weighing is the dot product *v* **·** *x*.

For a number of weighings *k*, the row vectors form a 0-1 matrix *M* with *k* rows and *m* columns, while the obtained values are simply the *k*-element vector *M x*. The condition that these values are enough to identify each entry of *x* is formalized as follows:

For all *m*, there is a 0-1 matrix *M* with k=*O*(*m* / log *m*) rows and *m* columns

such that for any *x*, *y* in {*a*,*b*}^{m}, x ≠ y implies *M x* ≠ *M y*.

This is called a *detecting matrix*. In other words, instead of checking x = y entry by entry, it suffices to compare the k entries of M x and M y. (An equivalent definition is to require that the kernel of *M* is trivial on {-1,0,1}^{m}, since we could instead compare (*x* - *y*)/(*a* - *b*) with zero). A generalization for *x*, *y* in {0, 1, …, *d* - 1}^{m} was first proved by (Lindström 1965) (in fact he proves the constant hidden behind *O* is 2 log_{2}(*d*) and cannot be improved, for any *d*), giving a deterministic, poly-time construction. (Bshouty 2009) gives a more direct and general construction using Fourier analysis. However, a random matrix can be shown to be detecting with fairly elementary methods.
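For small *m*, the detecting property can be verified by brute force through the kernel formulation (an illustrative sketch; for real constructions see the papers cited above):

```python
import itertools

def is_detecting(M):
    """Brute-force check that the 0-1 matrix M (a list of rows) is detecting:
    M x != M y for all distinct x, y in {a,b}^m, which is equivalent to
    M z != 0 for every nonzero z in {-1,0,1}^m."""
    m = len(M[0])
    for z in itertools.product((-1, 0, 1), repeat=m):
        if any(z) and all(sum(e * s for e, s in zip(row, z)) == 0 for row in M):
            return False  # found a nonzero kernel vector
    return True
```

The identity matrix is trivially detecting (that is just weighing coins one by one); the interesting matrices are those with far fewer rows than columns.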

In ILP we are given a matrix *A* with *n* columns and *m* rows, a vector *b* with *m* entries, and we ask for a vector *x* with *n* non-negative integer entries such that *A x* = *b*. So each of the *m* rows is a linear equality to be satisfied. What is the complexity for small *m*?

We start with an instance *A x* = *b* encoding a suitable 3-CNF-SAT instance of size *O*(*m*) in a straightforward way, using only {0,1,2,3} entries in *A* and *b*. Instead of checking each row individually, we use detecting matrices to check rows in bunches, reducing our problem to an equivalent instance *M A x* = *M b*. The matrix *M A* of the new instance has only *O*(*m* / log *m*) rows (at the cost of having larger entries in *M b*). With some tinkering, this simple reduction shows that, assuming ETH, ILP with *m* constraints cannot be solved in time 2^{o(m log m)}, even when the matrix has only 0-1 entries and *b* has poly(*m*) entries. This matches a recent (surprisingly simple and clever!) 2^{O(m log m)} algorithm by Eisenbrand and Weismantel (SODA 2018).
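The compression step itself is just two matrix products (a bare-bones sketch with hypothetical names; the soundness of the new instance relies on *M* being detecting for the range of values that *A x* can take):

```python
def compress_ilp(M, A, b):
    """Replace the m-row instance A x = b by the k-row instance (M A) x = M b,
    where M is a k x m detecting matrix: then M (A x) = M b forces A x = b."""
    rows_m, cols = len(A), len(A[0])
    MA = [[sum(M[i][r] * A[r][j] for r in range(rows_m)) for j in range(cols)]
          for i in range(len(M))]
    Mb = [sum(M[i][r] * b[r] for r in range(rows_m)) for i in range(len(M))]
    return MA, Mb
```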

One caveat when applying detecting matrices is that we do not know any bound on one side, *A x*, except that it is a non-negative integer vector. Fortunately, (Grebinsky & Kucherov 2000) showed that a random 0-1 matrix satisfies the following:

For all *d*,*m*, there is a 0-1 matrix *M* with k=*O*(*m* *log* *d* / log *m*) rows and *m* columns

such that for any *x*, *y* in ℕ^{m} with ‖*y*‖_{1} ≤ *d* *m*, x ≠ y implies *M x* ≠ *M y*.

Here no upper bound on *x* is needed, because a single all-ones row in *M* is enough to guarantee that the sums of entries agree: ‖*x*‖_{1} = ‖*y*‖_{1}. However, it remains an open problem to find a deterministic construction (we work around this by giving a somewhat weaker construction). See the full paper (STACS 2019) for details (with Dušan Knop and Michał Pilipczuk). The reduction for *multicoloring* is quite different and more problem-specific; see the full paper (ESA 2017) for definitions (with Marthe Bonamy, Łukasz Kowalik, Michał Pilipczuk and Arkadiusz Socała).

*Marcin Wrochna*

If you would like to present your results at HALG 2019, please submit the abstract of your talk or poster contribution via EasyChair: https://easychair.org/conferen

The abstract should include (when relevant) information where the results have been published/accepted (e.g., conference), and where they are publicly available (e.g., arXiv). All submissions will be reviewed by the program committee, giving priority to new work not formally published yet, and to papers published in 2018 or later.

Submission deadline: March 15, 2019.

Late submissions will be accepted subject to space constraints.

It turns out that this is a well-studied optimization problem with a simple strategy that guarantees you can (in expectation) earn at least half as much as a hypothetical prophet who knows all the bids in advance. This result is known as the prophet inequality. What is more surprising, this strategy would work even if you were a car retailer wanting to sell five cars. Moreover, you might want to impose some constraints, for example that no two cars are sold to buyers from the same city, or you might have multiple kinds of cars with different valuations, and you can still always guarantee an expected revenue comparable to that of the prophet.

This problem not only exploits beautiful mathematics but also has important applications in internet ad display. In fact, whenever you type a query into a web search engine, the ad system plays this kind of car-selling game with the ad suppliers, who offer different bids for their ads to be displayed to you.
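As a quick sanity check of the single-item guarantee, here is a Monte-Carlo sketch using the fixed threshold *T = E[max]/2* (a toy illustration with made-up uniform bid distributions, not code from the linked paper):

```python
import random

def prophet_vs_threshold(samplers, trials=20000, seed=7):
    """Compare the prophet's expected reward E[max] with the simple strategy:
    accept the first bid that reaches the threshold T = E[max] / 2."""
    rng = random.Random(seed)
    # estimate E[max] to set the threshold
    e_max = sum(max(s(rng) for s in samplers) for _ in range(trials)) / trials
    threshold = e_max / 2
    gain = prophet = 0.0
    for _ in range(trials):
        bids = [s(rng) for s in samplers]
        prophet += max(bids)
        gain += next((v for v in bids if v >= threshold), 0.0)
    return gain / trials, prophet / trials

# three buyers with uniform bids of different scales (toy distributions)
samplers = [lambda rng, hi=hi: rng.uniform(0.0, hi) for hi in (1.0, 2.0, 4.0)]
```

Running it shows the threshold strategy collecting well over half of the prophet's expected reward on this toy instance.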

Here is a link to our recent work with new developments in this theory: http://ieee-focs.org/FOCS-2018-Papers/pdfs/59f790.pdf

Michał Włodarczyk


4th Highlights of Algorithms conference (HALG 2019)

Copenhagen, June 14-16, 2019

http://highlightsofalgorithms.org/

The HALG 2019 conference seeks high-quality nominations for invited talks that will highlight recent advances in algorithmic research. Similarly to previous years, there are two categories of invited talks:

A. survey (60 minutes): a survey of an algorithmic topic that has seen exciting developments in the last couple of years.

B. paper (30 minutes): a significant algorithmic result appearing in a paper in 2018 or later.

To nominate, please email halg2019.nominations@gmail.com with the following information:

1. Basic details: speaker name + topic (for survey talk), or paper’s title, authors, conference/arXiv + preferred speaker (for paper talk).

2. Brief justification: focus on the benefits to the audience, e.g., quality of results, importance/relevance of topic, clarity of talk, speaker’s presentation skills.

All nominations will be reviewed by the Program Committee (PC) to select the speakers who will be invited to the conference.

Nominations deadline: December 9, 2018 (for full consideration).

Please keep in mind that the conference does not provide financial support for the speakers.

**Renata Czarniecka: - What made you choose the Institute of Informatics at the University of Warsaw (MIM UW) for your postdoc?**

Lucas Pastor: - I found out about MIM UW from a friend of mine, who is a researcher in my field. She told me that prof. Marcin Pilipczuk and his team at MIM UW are very renowned in the field and very nice people, so I decided to apply for the postdoc position. I sent an email with my postdoc research project and CV, and then I was asked for a Skype interview. After the interview, Marcin told me that I got the position. I found Marcin and his brother Michał Pilipczuk to be very kind, open-minded people, and very strong in mathematics in general.

**What advice would you give for future postdocs at MIM UW?**

- MIM UW is a great environment for a postdoc, with highly skilled and talented researchers. Do not hesitate to ask around who would be interested in working with you. You will find many research mates!

**What have you gained from your postdoc?**

- I’ve learned a lot of things thanks to my postdoc. Marcin and the people I worked with during my stay shared many different things with me that strengthened my knowledge in the area of the Erdős-Hajnal conjecture. The most important were the proof techniques around this conjecture. I was struck by the fact that each of those results is impressive and clever, and yet so little is known about this famous problem.

**What’s the potential impact of your work?**

- It is hard to tell. We tried very hard and made only small advances, but we all learned from this work. Furthermore, I had the opportunity to work with a French PhD student, Marc Heinrich, who made a research visit to MIM UW. It was a really extraordinary experience. Together we developed a parameterized algorithm for a graph optimization problem.

**What is your direct goal right now?**

- My direct goal right now is to give high-quality lectures, tutorials and practical sessions to my students, and to go deeper into my research topics.

**What are your plans for the future?**

- I would like to tackle some well-known, difficult and complex problems in graph theory with different people, including Marcin and the people I’ve worked with at MIM UW! I would like to try to solve some list-coloring problems. List-coloring problems are very hard, and every result, even the smallest, has required hard work. Related to this kind of problem, I would also like to attack coloring problems in structured graphs, more precisely, graphs not containing long induced paths.

**What makes MIM UW a great place to work?**

- MIM UW is a great place to work because of its high scientific level. Besides this, everybody I’ve met here was very humble and happy to work with a young researcher such as myself. I also appreciate the help provided by the administrative staff at MIM UW. All of them were extremely helpful, in particular Ms. Ewa Puchalska, who was super kind and supportive. She even helped me with things not directly related to MIM UW administration, such as renting a flat. A great thing about working at MIM UW was that there was no language barrier: everybody at the faculty speaks fluent English, especially the young people.

**How did you find Warsaw as a place to live in?**

- Warsaw is an interesting city. There are a lot of nice places to go out for a drink or a meal, and I was surprised by the good quality-to-price ratio in most of the restaurants I went to. Public transport in Warsaw is really well organized, so it was easy to travel around thanks to the trams, buses and metro lines. Furthermore, Warsaw is a very safe city, even at night. All of this makes it a great city to live in for a young foreigner.

**Lucas Pastor** *obtained his PhD in Grenoble under the supervision of Frédéric Maffray and Sylvain Gravier. His research interests focus mostly on graph theory, more precisely structural graph theory and optimization problems in graphs.*

*From December 2017 to August 2018 he held a one-year fellowship (postdoc position) at the Institute of Informatics, University of Warsaw, Poland. The position was supported by the ERC Starting Grant CUTACOMBS: “Cuts and decompositions: algorithms and combinatorial properties”, led by prof. Marcin Pilipczuk.*

**homepage:** https://fc.isima.fr/~lupasto/