HIGHLIGHTS OF ALGORITHMS

The London School of Economics and Political Science (virtual), May 31 - June 3, 2021

Programme

All times are British Summer Time (BST).

All talks will take place on Zoom; links will be made available to all registered participants.

Click on a talk for abstracts or other information.

Monday, May 31


11:00 — 12:00 Contributed talks session 1
12:00 — 12:30 Contributed talks Q&A
12:30 — 13:30 Welcome Meet & Greet (Gather)
13:30 — 14:30
Rachel Cummings — Differential Privacy in Practice: Policy Decisions and User Expectations

Differential privacy (DP) is a parameterized notion of database privacy that gives a mathematically rigorous worst-case bound on the maximum amount of information that can be learned about an individual's data from the output of a computation. The privacy community has developed a diverse toolkit of DP algorithms, which are now being used in practice by major tech companies and government agencies. The widespread deployment of these algorithms brings about new policy challenges, such as how to choose the value of the privacy parameters and how to explain the mathematically nuanced privacy guarantees to non-experts. This talk will first give an introduction to differential privacy, then survey recent behavioral studies measuring how users respond to varying differential privacy parameters and how effective non-technical DP descriptions are, and conclude with a discussion of open problems and future challenges in bringing the theory of differential privacy into practice.
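
As one standard illustration (not necessarily a mechanism covered in the talk): the Laplace mechanism answers a counting query with ε-differential privacy by adding noise of scale 1/ε. A minimal sketch, assuming a sensitivity-1 count:

```python
import math
import random

def laplace_noise(scale, rng):
    # The difference of two Exponential(mean=scale) draws is Laplace(0, scale).
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon, rng):
    # Laplace mechanism: true count plus Laplace(1/epsilon) noise gives
    # epsilon-differential privacy for a counting query (sensitivity 1).
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

def laplace_pdf(x, scale):
    # Density of the Laplace(0, scale) distribution.
    return math.exp(-abs(x) / scale) / (2 * scale)
```

The defining guarantee can be checked pointwise: for two neighboring databases whose true counts differ by 1, the output densities differ by a factor of at most e^ε.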

14:30 — 14:45 Break
14:45 — 15:50
Steve Hanneke — Proper Learning, Helly Number, and an Optimal SVM Bound

The classical PAC sample complexity bounds are stated for any Empirical Risk Minimizer (ERM) and contain an extra logarithmic factor log(1/ϵ) which is known to be necessary for ERM in general. It has been recently shown by Hanneke (2016) that the optimal sample complexity of PAC learning for any VC class C does not include this log factor and is achieved by a particular improper learning algorithm, which outputs a specific majority-vote of hypotheses in C. This leaves the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from C.

In this work we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm. We identify that these classes can be characterized by the dual Helly number, which is a combinatorial parameter that arises in discrete geometry and abstract convexity. In particular, under general conditions on C, we show that the dual Helly number is bounded if and only if there is a proper learner that obtains the optimal dependence on ϵ.

As further implications of our techniques we resolve a long-standing open problem posed by Vapnik and Chervonenkis (1974) on the performance of the Support Vector Machine by proving that the sample complexity of SVM in the realizable case is Θ((n/ϵ)+(1/ϵ)log(1/δ)), where n is the dimension. This gives the first optimal PAC bound for Halfspaces achieved by a proper learning algorithm, and moreover is computationally efficient.

Joint work with Olivier Bousquet, Shay Moran, and Nikita Zhivotovskiy, which appeared at COLT 2020.

Nathan Klein — A (Slightly) Improved Approximation Algorithm for Metric TSP

I will describe recent joint work with Anna Karlin and Shayan Oveis Gharan in which we show a randomized (3/2 - epsilon)-approximation algorithm for metric TSP, for some epsilon > 10^{-36}. This slightly improves over the classical 3/2-approximation algorithm due to Christofides [1976] and Serdyukov [1978]. I will briefly talk about the major ingredients of our analysis, which relies on foundational work in the geometry of polynomials and the structure of near-minimum cuts.

Joint work with Anna R. Karlin and Shayan Oveis Gharan

Elaine Shi — Optimal Oblivious RAM and Sorting Circuits for Short Keys

Oblivious RAM (ORAM), first proposed by Goldreich and Ostrovsky in 1987, is an algorithmic technique for provably hiding a program's access patterns. To date, ORAM has been applied in multiple fields, including cryptography, secure processors, secure cloud outsourcing, secure databases, and blockchains. In their original work, Goldreich and Ostrovsky proved a log n lower bound on the overhead of any ORAM scheme, where n is the size of the program's memory. A long-standing open problem asks: can we construct an optimal ORAM scheme with logarithmic overhead, matching this well-known lower bound?

We resolve this long-standing open question. Assuming the existence of one-way functions, we construct an optimal ORAM scheme called OptORAMa which achieves O(log n) overhead. To achieve this result, we unravel an intimate connection between optimal ORAM and sorting circuits for short keys.

Sorting circuit complexity is another long-standing question in the theoretical computer science literature. In a landmark result in 1983, Ajtai, Komlós and Szemerédi constructed a sorting circuit with O(n log n) comparator gates. While this is optimal in the comparison-based model (even for 1-bit keys), it remains unclear whether we can construct o(n log n) sorting circuits if we forego the comparison-based restriction: no such construction is known, and proving any super-linear unconditional lower bound for sorting circuits is beyond the reach of current techniques.

Inspired by the techniques behind OptORAMa, we show that using non-comparison-based techniques, one can construct a circuit of size O(n k) that can correctly sort k-bit keys. When k = o(log n), our result overcomes the n log n barrier of sorting circuits. We prove that assuming the Li-Li network coding conjecture, our circuit size is optimal for any choice of k. To the best of our knowledge, we are the first to use non-comparison-based techniques to get a non-trivial sorting circuit result. Our result also leads to other interesting by-products; for example, median can be computed using a linear-sized circuit (cf. the textbook linear-time median algorithm by Blum et al., which works only in a RAM model and cannot be easily converted to a circuit).
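
For contrast, in the RAM model (not the circuit model of the result above), non-comparison sorting of short keys is classical: counting sort handles k-bit keys in O(n + 2^k) time. A minimal sketch:

```python
def counting_sort_short_keys(keys, k):
    """Sort a list of k-bit integer keys without comparisons.

    RAM-model counting sort, O(n + 2^k) time. (The talk's contribution is
    in the circuit model, where achieving O(n k) size is the hard part;
    this only illustrates the classical non-comparison idea.)
    """
    buckets = [0] * (1 << k)        # one counter per possible k-bit key
    for key in keys:
        buckets[key] += 1
    out = []
    for value, count in enumerate(buckets):
        out.extend([value] * count)  # emit each key value in order
    return out
```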

Joint work with Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, and Enoch Peserico

Santosh Vempala — Solving Sparse Linear Systems Faster than Matrix Multiplication

Can linear systems be solved faster than matrix multiplication?

A celebrated line of work shows that the important special case of Laplacian linear systems can be solved in nearly linear time. In the general setting, however, the bit complexity of solving an n-by-n linear system Ax=b has been n^ω, where ω < 2.372864 is the matrix multiplication exponent. Even for sparse linear systems with poly(n) condition number, which arise throughout scientific computing, the famous Conjugate Gradient/Lanczos methods require Ω(n) bits for intermediate numbers; improving the resulting n^2 * nnz(A) complexity has been an open problem.

We present an algorithm that solves linear systems with sparse A asymptotically faster than matrix multiplication for any ω > 2. This speedup holds for any input matrix A with o(n^{ω-1}/log(κ(A))) non-zeros, where κ(A) is the condition number of A. For poly(n)-conditioned matrices with O(n) nonzeros, and the current value of ω, the bit complexity of our algorithm to solve to within any 1/poly(n) error is O(n^{2.331645}).

Our algorithm can be viewed as an efficient randomized implementation of the block Krylov method via recursive low-displacement-rank factorizations. It is inspired by the algebraic algorithm of [Eberly-Giesbrecht-Giorgi-Storjohann-Villard ISSAC 06, 07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to lower bound the smallest singular value and the smallest gap in singular values of semi-random matrices.
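
For background, the Conjugate Gradient method contrasted above can be sketched in a few lines. This is the textbook version for a small symmetric positive definite system; the bit-complexity issues the abstract refers to arise in finite precision, which this toy version ignores:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A (dense, list-of-lists).

    Textbook CG: in exact arithmetic it converges in at most n iterations;
    the abstract's point is that finite-precision CG needs many bits per number.
    """
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                     # residual b - A x  (x = 0 initially)
    p = r[:]                     # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)  # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:         # squared residual small enough
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```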

Joint work with Richard Peng

16:00 — 17:00
Women in Algorithms Event

Organizer: Monika Henzinger
Speakers: Keren Censor-Hillel, Leslie Goldberg, Monika Henzinger and Eva Tardos
Format: This event is open to female and non-binary members of the community. The speakers will briefly talk about lessons they have learnt in their careers as theoretical computer scientists; participants will then have the opportunity to ask questions and discuss.

Tuesday, June 1


11:00 — 12:00 Contributed talks session 2
12:00 — 12:30 Contributed talks Q&A
12:30 — 13:30 Break
13:30 — 14:30
Venkatesan Guruswami — Recent Progress on Binary Deletion-Correcting Codes

In the worst-case (bit)-deletion noise model, a subset of up to t arbitrarily chosen bits are deleted from a sequence of n codeword bits. Crucially, the locations of the deleted bits are not known to the receiver, who receives a subsequence of the transmitted bit-string. The goal is to design codes of low redundancy that allow recovery of the deleted bits and the original codeword. The study of deletion-correcting codes itself is quite old, dating back to optimal single-deletion codes in the 60s, and positive-rate codes to correct t = Ω(n) deletions in the 90s. However, many basic questions remained open and our understanding of deletion-correcting codes significantly lagged the vast literature concerning error-correcting codes to correct bit flips.

After a long hiatus, there has been much progress on deletion-correcting codes in the last 6-7 years, covering regimes when the number of deletions t is a small constant, a small constant fraction of n, and a large proportion of n, as well as the list-decoding model. The talk will survey some of this progress. The focus will be on binary codes for bit-deletions, but time permitting we will also mention results for larger alphabets as well as the closely related "insdel" model where symbols may be inserted and deleted.

14:30 — 14:45 Break
14:45 — 15:50
Mohsen Ghaffari — Distributed Derandomization and Network Decomposition

This talk provides a brief overview of a recent line of work that provides an efficient distributed derandomization method [Rozhoň and Ghaffari STOC 2020; Ghaffari, Harris, and Kuhn FOCS 2018; and Ghaffari, Kuhn, and Maus STOC 2017]. Concretely, the derandomization method shows that any (locally-checkable) graph problem that admits a polylogarithmic-time randomized distributed algorithm also admits a polylogarithmic-time deterministic distributed algorithm. This result resolved several central and decades-old open problems in distributed graph algorithms.

Joint work with David G. Harris, Fabian Kuhn, Yannic Maus and Vaclav Rozhon

Euiwoong Lee — The Karger-Stein Algorithm is Optimal for k-cut

In the $k$-cut problem, we are given an edge-weighted graph and want to find the least-weight set of edges whose deletion breaks the graph into $k$ connected components. Algorithms due to Karger-Stein and Thorup showed how to find such a minimum $k$-cut in time approximately $O(n^{2k-2})$. The best lower bounds come from conjectures about the solvability of the $k$-clique problem and a reduction from $k$-clique to $k$-cut, and show that solving $k$-cut is likely to require time $\Omega(n^k)$. Our recent results have given special-purpose algorithms that solve the problem in time $n^{1.98k + O(1)}$, and ones that have better performance for special classes of graphs (e.g., for small integer weights).

In this work, we resolve the problem for general graphs, by showing that for any fixed $k \geq 2$, the Karger-Stein algorithm outputs any fixed minimum $k$-cut with probability at least $\widehat{O}(n^{-k})$, where $\widehat{O}(\cdot)$ hides a $2^{O(\ln \ln n)^2}$ factor. This also gives an extremal bound of $\widehat{O}(n^k)$ on the number of minimum $k$-cuts in an $n$-vertex graph and an algorithm to compute a minimum $k$-cut in similar runtime. Both are tight up to $\widehat{O}(1)$ factors.

The first main ingredient in our result is a fine-grained analysis of how the graph shrinks---and how the average degree evolves---under the Karger-Stein process. The second ingredient is an extremal result bounding the number of cuts of size at most $(2-\delta) OPT/k$, using the Sunflower lemma.

Joint work with Anupam Gupta and Jason Li

Shachar Lovett — Recent Progress on the Sunflower Conjecture, and Connections to CS

The sunflower conjecture is an open problem in combinatorics, originally asked by Erdős and Rado in 1960, and without much progress until recently. I will describe recent work which gets closer to proving the conjecture. The techniques involved build upon surprising connections to computer science, in particular to the study of DNFs in computational complexity. I will describe these connections, and potential new applications in CS of these techniques.

Joint work with Ryan Alweiss, Kewen Wu and Jiapeng Zhang

John Wright — MIP*=RE

Quantum complexity theory gives us a powerful lens to study basic resources and properties that arise in quantum computing. One of the most beguiling of these is the mysterious phenomenon of quantum entanglement, in which measurements on two far-apart quantum systems can exhibit correlations that no classical system can reproduce (although entanglement cannot be used to communicate faster than light). To study this, researchers in 2004 introduced the complexity class MIP*, which connected entanglement to a classic notion in complexity theory known as multiprover interactive proofs. Since then, determining the power of MIP* has remained a major open problem in the field of quantum complexity theory.

In this talk, I will describe recent work giving a solution to this problem: MIP* = RE, the complexity class containing the halting problem and those languages which reduce to it. This shows that entanglement is a resource of almost unimaginable power, as it can be used to solve problems which are undecidable. The proof involves new techniques that allow a classical verifier to use entanglement to delegate increasingly complex computations to two quantum provers. I will also describe the deep and surprising connections that MIP* has to two other major open problems, Tsirelson's problem from entanglement theory and Connes' embedding problem from operator algebras, and show how this result leads to a resolution of both problems in the negative.

Joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and Henry Yuen.

16:00 — 17:00 Junior/senior meetup (Gather)

Wednesday, June 2


11:00 — 12:00 Contributed talks session 3
12:00 — 12:30 Contributed talks Q&A
12:30 — 13:30 Break
13:30 — 14:35
Shalev Ben-David — A New Minimax Theorem for Randomized Algorithms

We introduce a new type of minimax theorem for randomized algorithms, one that gives a hard distribution well-suited for the analysis of small-bias randomized algorithms. A version of our minimax theorem applies in multiple randomized models of computation: query complexity, communication complexity, randomized circuit models, approximate polynomial degree, approximate rank, and various quantum models. I will briefly discuss these results, as well as the role that proper scoring rules (from statistics) play in our work.

Joint work with Eric Blais.

Omri Ben-Eliezer — A Framework for Adversarially Robust Streaming Algorithms

We investigate the adversarial robustness of streaming algorithms; an algorithm is considered robust if its performance guarantees hold even if the stream is chosen by an adaptive adversary, whose future stream updates may depend on previous outputs of the algorithm. Surprisingly little was known until recently about the adversarial robustness of streaming algorithms. Deterministic algorithms are always robust, but in many cases they are inefficient. Randomized algorithms tend to be very efficient, but are they robust? For many of the most widely used algorithms, the answer is either negative or unknown. How can one devise, then, principled methods to obtain streaming algorithms that are both robust and efficient?

We address this question by developing a simple and efficient framework allowing one to transform existing algorithms for many important streaming problems, such as norm estimation, distinct elements, heavy hitters, and entropy estimation, into robust ones, while inducing only a small overhead in the space complexity. Since this work was first published in early 2020, a flurry of subsequent work on adversarially robust streaming has been carried out, and in the talk I will very quickly mention what is currently known about the area, as well as some intriguing open questions.

Joint work with Rajesh Jayaram, David Woodruff and Eylon Yogev.

Siyao Guo — Data Structures Meet Cryptography: 3SUM with Preprocessing

This paper shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next.

First, we apply Fiat-Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space S and (online) time T for a preprocessing version of the N-input 3SUM problem where S^3T = O(N^6). This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for S=N^{2−δ} and T = N^{1−δ} for any constant δ>0.

Secondly, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function F: [N] → [N] (accessed as an oracle) we show how to compile it into a function G^F: [N^2] → [N^2] which resists S-bit preprocessing attacks that run in query time T where ST=O(N^{2−ε}) (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that F itself can be more easily inverted, say with N^{2/3}-bit preprocessing in N^{2/3} time. We also show that much stronger lower bounds follow from the hardness of kSUM. Our results can be equivalently interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle.

Thirdly, we give non-adaptive lower bounds for 3SUM which match the best known lower bounds for static data structure problems. Moreover, we show that our lower bound generalizes to a range of geometric problems, such as three points on a line, polygon containment, and others.
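
For background, plain offline 3SUM (without preprocessing) asks whether three input numbers sum to zero, and has a standard O(N^2) hashing solution. A minimal sketch, separate from the preprocessing variant and data structures of the talk:

```python
from collections import Counter

def has_3sum(nums):
    """Return True if some three entries (at distinct indices) sum to zero.

    Classic O(N^2) hashing approach to offline 3SUM; the talk's results
    concern a harder preprocessing version, not this algorithm.
    """
    counts = Counter(nums)
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            target = -(nums[i] + nums[j])
            if target not in counts:
                continue
            # The completing value must occupy a third, distinct index:
            # if it coincides with nums[i] and/or nums[j], extra copies
            # of that value must exist in the input.
            needed = 1 + (target == nums[i]) + (target == nums[j])
            if counts[target] >= needed:
                return True
    return False
```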

Joint work with Alexander Golovnev, Thibaut Horel, Sunoo Park, and Vinod Vaikuntanathan

Saurabh Sawlani — Near-Optimal Fully Dynamic Densest Subgraph

Dense subgraph discovery is an important primitive for many real-world applications such as community detection, link spam detection, distance query indexing, and computational biology. We approach the densest subgraph problem by framing its dual as a graph orientation problem, which we solve using an augmenting path-like adjustment technique. We give the first fully dynamic algorithm which maintains a (1−$\epsilon$)-approximate densest subgraph in worst-case time poly(log n, 1/$\epsilon$) per update. Our result improves upon the previous best approximation factor of (1/4−$\epsilon$) for fully dynamic densest subgraph [Bhattacharya et al., STOC '15].

Joint work with Junxing Wang

14:35 — 14:45 Break
14:45 — 15:45 Nika Haghtalab — Foundations of ML for a Social and Strategic World
16:00 — 17:00 Business meeting

Thursday, June 3


13:30 — 14:35
Pawel Gawrychowski — Minimum Cut in O(m log^2 n) Time

In the (global) minimum cut problem, we are given an undirected edge-weighted graph on n nodes and m edges and seek a partition of the nodes into two nonempty parts that minimizes the total weight of the edges connecting the parts. In 1996, Karger designed a randomized algorithm that solves this problem in O(m log^3 n) time (correct with high probability). Since then, there has been essentially no progress on obtaining a faster algorithm for sparse graphs (but there has been some exciting progress on deterministic/faster algorithms for the unweighted case). I will provide an overview of some recent results that obtain faster algorithms for the general weighted case, in particular in O(m log^2 n) or O(m log n + n^{1+eps}) time.
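
For intuition about the randomized approach, Karger's basic contraction subroutine (the classical starting point of this line of work, not the faster algorithms surveyed in the talk) can be sketched for unweighted graphs:

```python
import random

def contract_min_cut(edges, n, trials, seed=0):
    """Estimate the min cut of a connected unweighted graph by random contraction.

    Karger's basic step: repeatedly contract a uniformly random surviving edge
    until two supernodes remain; the edges between them form a cut, and it is
    a minimum cut with probability at least 2/(n(n-1)) per trial, so repeating
    drives the failure probability down.
    """
    rng = random.Random(seed)
    best = len(edges)
    for _ in range(trials):
        label = list(range(n))          # current supernode label of each vertex

        def find(u):
            while label[u] != u:
                label[u] = label[label[u]]  # path compression
                u = label[u]
            return u

        alive = list(edges)             # edges between distinct supernodes
        components = n
        while components > 2:
            u, v = alive[rng.randrange(len(alive))]
            label[find(u)] = find(v)    # contract the chosen edge
            components -= 1
            alive = [e for e in alive if find(e[0]) != find(e[1])]
        best = min(best, len(alive))
    return best
```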

Zhiyi Huang — Online Correlated Selection and Its Applications in Online Matching Problems

This work initiates the study of Online Correlated Selection (OCS). Consider a set of ground elements and a sequence of pairs of elements which arrive one at a time. On the arrival of such a pair, the algorithm must select one of the two elements, each with probability half. For example, suppose that we select with a fresh random bit every time. Then, after an element appears in $k$ pairs, it will be selected at least once with probability $1-2^{-k}$. Can we do better?
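
The baseline claim for independent selection is easy to check empirically; a quick simulation (with k = 3, the element should remain unselected with probability 2^{-3} = 1/8):

```python
import random

def naive_selection_rate(k, trials, seed=0):
    """Fraction of trials in which an element appearing in k pairs is selected
    at least once, when each pair is resolved by a fresh fair coin.

    Independent selection misses the element in all k pairs with probability
    2^{-k}, matching the 1 - 2^{-k} bound stated in the abstract.
    """
    rng = random.Random(seed)
    selected = 0
    for _ in range(trials):
        # One fair coin per pair; "< 0.5" means our element wins that pair.
        if any(rng.random() < 0.5 for _ in range(k)):
            selected += 1
    return selected / trials
```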

Concretely, for any $0 \le \gamma \le 1$, a $\gamma$-semi-OCS is an algorithm which guarantees that an element is chosen at least once with probability $1-2^{-k}(1-\gamma)^{k-1}$, after appearing in $k$ pairs. For example, the above naïve independent selection algorithm is a $0$-semi-OCS; on the other hand, $\gamma = 1$ corresponds to perfect negative correlation in the sense that an element is selected in its second appearance if and only if it is not selected in its first. We show that the largest possible $\gamma$ for which a $\gamma$-semi-OCS exists is $\gamma=\frac{1}{2}$.

Further, we consider a more difficult notion of $\gamma$-OCS, which considers arbitrary subsets of appearances of an element; in other words, they need not be the first $k$ appearances. We introduce a $\gamma$-OCS for some $\gamma > 0.1$.

As applications of the OCS, we obtain the first online algorithms that break the long-standing $0.5$ barrier in the edge-weighted online matching problem (a.k.a., Display Ads) and AdWords.

Joint work with Matthew Fahrbach, Runzhou Tao, and Morteza Zadimoghaddam

Shay Moran — An Equivalence Between Private PAC Learning and Online Learning

Let H be a concept class. We prove that H can be PAC-learned by an (approximate) differentially-private algorithm if and only if it has a finite Littlestone dimension. This implies an equivalence between online learnability and private PAC learnability.

Joint work with Noga Alon, Mark Bun, Roi Livni, and Maryanthe Malliaris

Eva Rotenberg — Fully-Dynamic Planarity Testing in Polylogarithmic Time

Given a dynamic graph subject to insertions and deletions of edges, a natural question is whether the graph presently admits a planar embedding. We give a deterministic fully-dynamic algorithm for general graphs, running in amortized O(log^3 n) time per edge insertion or deletion, that maintains a bit indicating whether or not the graph is presently planar.

Joint work with Jacob Holm

14:35 — 14:45 Break
14:45 — 15:45
Yin Tat Lee — Convex Optimization and Dynamic Data Structures

In the last four years, there have been many breakthroughs in optimization, such as solving linear programs as fast as matrix multiplication, solving dense maxflow in nearly linear time, and fast optimization algorithms on low-treewidth graphs. All of these algorithms are based on a careful combination of optimization techniques and dynamic data structures. In this talk, we will explain the framework underlying all the recent breakthroughs.

16:00 — 17:00 Social event (details to be announced)

Accepted contributed presentations

Each contributed presentation will consist of a 3-4 minute talk. A Q&A session follows each contributed talks session, where participants will be able to discuss the talks with the speakers in Gather.

Monday 11:00 — 12:00

11:00 Mo Tiwari, Martin Jinye Zhang, James Mayclin, Sebastian Thrun, Chris Piech and Ilan Shomorony — BanditPAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits
Arturs Backurs, Avrim Blum and Neha Gupta — Active Local Learning
Meike Neuwohner — An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs
Susanne Albers, Arindam Khan and Leon Ladewig — Best Fit Bin Packing with Random Order Revisited
Arindam Khan and Madhusudhan Reddy — On guillotine separability of squares and rectangles
11:20 Arindam Khan, Arnab Maiti, Amatya Sharma and Andreas Wiese — On Guillotine Separable Packings for the Two-dimensional Geometric Knapsack Problem
Lars Rohwedder and Andreas Wiese — A (2+epsilon)-approximation algorithm for preemptive weighted flow time on a single machine
Kunal Agrawal, Michael A. Bender, Rathish Das, William Kuszmaul, Enoch Peserico and Michele Scquizzato — Tight Bounds for Parallel Paging and Green Paging
Karl Bringmann and André Nusser — Fine-Grained Lower Bounds for Hausdorff Distance Under Translation
Nithin Varma and Yuichi Yoshida — Average Sensitivity of Graph Algorithms
11:40 Franziska Eberle, Ruben Hoeksma, Nicole Megow, Lukas Nölke, Kevin Schewior and Bertrand Simon — Speed-Robust Scheduling - Rocks, Bricks, and Sand
Mohammad Akbarpour, Yeganeh Alimohammadi, Shengwu Li and Amin Saberi — The Value of Excess Supply in Spatial Matching Markets
Jesper Nederlof, Jakub Pawlewicz, Céline Swennenhuis and Karol Węgrzycki — A Faster Exponential Time Algorithm for Bin Packing With a Constant Number of Bins via Additive Combinatorics
Bartlomiej Dudek, Pawel Gawrychowski and Tatiana Starikovskaya — All Non-trivial Variants of 3-LDT Are Equivalent
Mobin Y. Jeloudar, Irene Lo, Tristan Pollner and Amin Saberi — Search Approximates Optimal Matching

Tuesday 11:00 — 12:00

11:00 Amit Levi, Ramesh Pallavoor, Sofya Raskhodnikova and Nithin Varma — Erasure-Resilient Sublinear-Time Graph Algorithms
Laurent Feuilloley, José Correa, Andrés Cristi, Alexandros Tsigonias-Dimitriadis and Tim Oosterwijk — The Secretary Problem with Independent Sampling
Marc Roth, Johannes Schmitt and Philip Wellnitz — Counting Small Induced Subgraphs Satisfying Monotone Properties
Diodato Ferraioli, Paolo Penna and Carmine Ventre — Two-way Greedy: Algorithms for Imperfect Rationality
Daniel Lokshtanov, Saket Saurabh and Vaishali Surianarayanan — A Parameterized Approximation Scheme for Min k-Cut
11:20 Spyros Angelopoulos — Online Search with a Hint
Ronen Gradwohl, Niklas Hahn, Martin Hoefer and Rann Smorodinsky — Algorithms for Persuasion with Limited Communication
Paul Duetting, Federico Fusco, Philip Lazos, Stefano Leonardi and Rebecca Reiffenhäuser — Efficient Two-Sided Markets with Limited Information
Bernhard Haeupler, David Wajc and Goran Zuzic — Universally-Optimal Distributed Algorithms for Known Topologies
Vincent Cohen-Addad, David Saulpic and Chris Schwiegelshohn — A New Coreset Framework for Clustering
11:40 Bernhard Haeupler, D. Ellis Hershkowitz and Goran Zuzic — Tree Embeddings for Hop-Constrained Network Design
Yeganeh Alimohammadi, Nima Anari, Kirankumar Shiragur and Thuy Duong Vuong — Fractionally Log-Concave and Sector-Stable Polynomials: Counting Planar Matchings and More
Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi and Rebecca Reiffenhäuser — Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint
Ainesh Bakshi and Pravesh Kothari — Outlier-Robust Clustering of Non-Spherical Mixtures
Richard Cole and Yixin Tao — On the Existence of Pareto Efficient and Envy Free Allocations

Wednesday 11:00 — 12:00

11:00 Graham Cormode, Zohar Karnin, Edo Liberty, Justin Thaler and Pavel Veselý — Relative Error Streaming Quantiles
Amir Abboud, Robert Krauthgamer and Ohad Trabelsi — Subcubic Algorithms for Gomory-Hu Tree in Unweighted Graphs
Waldo Gálvez, Fabrizio Grandoni, Arindam Khan, Diego Ramirez and Andreas Wiese — Improved Approximation Algorithms for 2-Dimensional Knapsack: Packing into Multiple L-Shapes, Spirals, and More
Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan and Shay Sapir — Near-Optimal Entrywise Sampling of Numerically Sparse Matrices
Waldo Gálvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan and Malin Rau — A Tight (3/2+ɛ) Approximation for Skewed Strip Packing
11:20 Monika Henzinger, Stefan Neumann, Harald Räcke and Stefan Schmid — Tight Bounds for Online Graph Partitioning
Roie Levin and David Wajc — Streaming Submodular Matching Meets the Primal-Dual Method
Peter Gartland, Daniel Lokshtanov, Marcin Pilipczuk, Michał Pilipczuk and Paweł Rzążewski — Finding Large Induced Sparse Subgraphs in Ct-Free Graphs in Quasipolynomial Time
Andreas S. Schulz and Claudio Telha Cornejo — Integer factorization and Riemann’s hypothesis: Why two-item joint replenishment is hard
Bernadette Charron-Bost and Patrick Lambein-Monette — Average consensus in spite of dynamic communications
11:40 Katarzyna Paluch — New Approximation Algorithms for Maximum Asymmetric Traveling Salesman and Shortest Superstring
Grant Schoenebeck and Fang-Yi Yu — Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach
Daniel Dadush, László Végh and Bento Natura — Revisiting Tardos's Framework for Linear Programming: Faster Exact Solutions using Approximate Solvers
Kush Bhatia, Peter Bartlett, Anca Dragan and Jacob Steinhardt — Agnostic learning with unknown utilities
Parinya Chalermsook, Syamantak Das, Yunbum Kook, Bundit Laekhanukit, Yang P. Liu, Richard Peng, Mark Sellke and Daniel Vaz — Vertex Sparsification for Edge Connectivity