All times are British Summer Time (BST).
All talks will take place on Zoom; links have been sent via email to all registered participants.
Click on a talk for abstracts or other information.
|11:00 —12:00||Contributed talks session 1|
|12:00 —12:30||Contributed talks Q&A (Gather)|
|12:30 —13:30||Welcome Meet & Greet (Gather)|
|13:30 —14:30|| |
Rachel Cummings —Differential Privacy in Practice: Policy Decisions and User Expectations
Differential privacy (DP) is a parameterized notion of database privacy that gives a mathematically rigorous worst-case bound on the maximum amount of information that can be learned about an individual's data from the output of a computation. The privacy community has developed a diverse toolkit of DP algorithms, which are now being used in practice by major tech companies and government agencies. The widespread deployment of these algorithms brings about new policy challenges, such as how to choose the value of the privacy parameters, and how to explain the mathematically-nuanced privacy guarantees to non-experts. This talk will first give an introduction to differential privacy, will then survey recent behavioral studies measuring the response of users to varying differential privacy parameters and the efficacy of non-technical DP descriptions, and will conclude with a discussion of open problems and future challenges in bringing the theory of differential privacy into practice.
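As a concrete illustration of the kind of mechanism the abstract refers to, here is a minimal sketch of the standard Laplace mechanism for a counting query. The dataset, query, and parameter choices are invented for illustration; real deployments involve many further considerations (sensitivity analysis, floating-point issues, privacy-budget composition).

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical data and query, purely for illustration.
rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 38, 47, 31]
noisy = private_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

Smaller ε means a larger noise scale 1/ε, hence stronger privacy but a less accurate answer; the behavioral studies surveyed in the talk probe how non-experts perceive exactly this trade-off.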
|14:45 —15:30|| |
Steve Hanneke —Proper Learning, Helly Number, and an Optimal SVM Bound
The classical PAC sample complexity bounds are stated for any Empirical Risk Minimizer (ERM) and contain an extra logarithmic factor log(1/ϵ) which is known to be necessary for ERM in general. It has been recently shown by Hanneke (2016) that the optimal sample complexity of PAC learning for any VC class C does not include this log factor and is achieved by a particular improper learning algorithm, which outputs a specific majority-vote of hypotheses in C. This leaves the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from C.
In this work we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm. We identify that these classes can be characterized by the dual Helly number, which is a combinatorial parameter that arises in discrete geometry and abstract convexity. In particular, under general conditions on C, we show that the dual Helly number is bounded if and only if there is a proper learner that obtains the optimal dependence on ϵ.
As further implications of our techniques we resolve a long-standing open problem posed by Vapnik and Chervonenkis (1974) on the performance of the Support Vector Machine by proving that the sample complexity of SVM in the realizable case is Θ((n/ϵ)+(1/ϵ)log(1/δ)), where n is the dimension. This gives the first optimal PAC bound for Halfspaces achieved by a proper learning algorithm, and moreover is computationally efficient.
Joint work with Olivier Bousquet, Shay Moran, and Nikita Zhivotovskiy, which appeared at COLT 2020.
Nathan Klein —A (Slightly) Improved Approximation Algorithm for Metric TSP
I will describe recent joint work with Anna Karlin and Shayan Oveis Gharan in which we show a randomized (3/2 − ε)-approximation algorithm for metric TSP, for some ε > 10^{-36}. This slightly improves over the classical 3/2-approximation algorithm due to Christofides and Serdyukov. I will briefly talk about the major ingredients of our analysis, which relies on foundational work in the geometry of polynomials and the structure of near minimum cuts.
Joint work with Anna R. Karlin and Shayan Oveis Gharan
Elaine Shi —Optimal Oblivious RAM and Sorting Circuits for Short Keys
Oblivious RAM (ORAM), first proposed by Goldreich and Ostrovsky in 1987, is an algorithmic technique for provably hiding a program's access patterns. To date, ORAM has been applied in multiple fields, including cryptography, secure processors, secure cloud outsourcing, secure databases, and blockchains. In the original work of Goldreich and Ostrovsky, they proved an Ω(log n) lower bound on the overhead of any ORAM scheme, where n is the space used by the program. A long-standing open problem is: can we construct an optimal ORAM scheme with logarithmic overhead, matching the well-known lower bound?
We resolve this long-standing open question. Assuming the existence of one-way functions, we construct an optimal ORAM scheme called OptORAMa which achieves O(log n) overhead. To achieve this result, we unravel an intimate connection between optimal ORAM and sorting circuits for short keys.
Sorting circuit complexity is another long-standing question in the theoretical computer science literature. In a landmark result in 1983, Ajtai, Komlós and Szemerédi constructed a sorting circuit with O(n log n) comparator gates. While this is optimal in the comparison-based model (even for 1-bit keys), it remains unclear whether we can construct o(n log n) sorting circuits if we forego the comparison-based restriction: no such construction is known, and proving any super-linear unconditional lower bound for sorting circuits is beyond the reach of current techniques.
Inspired by the techniques behind OptORAMa, we show that using non-comparison-based techniques, one can construct a circuit of size O(n k) that can correctly sort k-bit keys. When k = o(log n), our result overcomes the n log n barrier of sorting circuits. We prove that assuming the Li-Li network coding conjecture, our circuit size is optimal for any choice of k. To the best of our knowledge, we are the first to use non-comparison-based techniques to get a non-trivial sorting circuit result. Our result also leads to other interesting by-products, for example, median can be computed using a linear-sized circuit (c.f., the textbook linear-time median algorithm by Blum et al. works only in a RAM model, and cannot be easily converted to a circuit).
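The circuit construction itself is intricate, but the basic non-comparison-based idea for short keys can be illustrated in the RAM model by counting sort, which sorts k-bit keys without ever comparing two of them. This only illustrates the principle; it is not the talk's O(nk) circuit.

```python
def counting_sort_short_keys(keys, k):
    """Sort a list of k-bit integer keys with no comparisons:
    tally each value, then emit values in order. Runs in
    O(n + 2^k) time in the RAM model, so linear whenever
    k = O(log n)."""
    counts = [0] * (1 << k)
    for key in keys:
        counts[key] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out
```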
Joint work with Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, and Enoch Peserico
Santosh Vempala —Solving Sparse Linear Systems Faster than Matrix Multiplication
Can linear systems be solved faster than matrix multiplication?
A celebrated line of work shows that the important special case of Laplacian linear systems can be solved in nearly linear time. In the general setting, however, the bit complexity of solving an n-by-n linear system Ax = b has been n^ω, where ω < 2.372864 is the matrix multiplication exponent. Even for sparse linear systems with poly(n) condition number, which arise throughout scientific computing, the famous Conjugate Gradient/Lanczos methods require Ω(n) bits for intermediate numbers; improving the resulting n^2·nnz(A) complexity has been an open problem.
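For reference, the Conjugate Gradient iteration mentioned above looks as follows. This is the textbook method run in floating point on a tiny invented example, not the paper's algorithm; the abstract's point is precisely that keeping these intermediate quantities accurate requires many bits.

```python
def conjugate_gradient(A, b, iters=50, tol=1e-12):
    """Solve Ax = b for a symmetric positive definite matrix A
    (given as a list of rows) by the classical CG iteration.
    Each step costs one matrix-vector product, i.e. O(nnz(A))
    arithmetic operations."""
    n = len(b)
    x = [0.0] * n
    r = list(b)          # residual b - A x, with x = 0 initially
    p = list(r)          # current search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Tiny SPD example; the exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```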
We present an algorithm that solves linear systems with sparse A asymptotically faster than matrix multiplication for any ω > 2. This speedup holds for any input matrix A with o(n^{ω−1}/log(κ(A))) non-zeros, where κ(A) is the condition number of A. For poly(n)-conditioned matrices with O(n) nonzeros, and the current value of ω, the bit complexity of our algorithm to solve to within any 1/poly(n) error is O(n^{2.331645}).
Our algorithm can be viewed as an efficient randomized implementation of the block Krylov method via recursive low-displacement-rank factorizations. It is inspired by the algebraic algorithm of [Eberly-Giesbrecht-Giorgi-Storjohann-Villard ISSAC 06, 07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to lower bound the smallest singular value and the smallest gap in singular values of semi-random matrices.
Joint work with Richard Peng
|15:30 —15:50||Q&A session (Zoom breakout rooms)|
Women in Algorithms Event
Organizer: Monika Henzinger
|11:00 —12:00||Contributed talks session 2|
|12:00 —12:30||Contributed talks Q&A (Gather)|
|13:30 —14:30|| |
Venkatesan Guruswami —Recent Progress on Binary Deletion-Correcting Codes
In the worst-case (bit)-deletion noise model, a subset of up to t arbitrarily chosen bits are deleted from a sequence of n codeword bits. Crucially, the locations of the deleted bits are not known to the receiver who receives a subsequence of the transmitted bit-string. The goal is to design codes of low redundancy that allow recovery of the deleted bits and the original codeword. The study of deletion-correcting codes itself is quite old, dating back to optimal single-deletion codes in the 60s, and positive rate codes to correct t = Ω(n) deletions in the 90s. However, many basic questions remained open and our understanding of deletion-correcting codes significantly lagged the vast literature concerning error-correcting codes to correct bit flips.
After a long hiatus, there has been much progress on deletion-correcting codes in the last 6-7 years, covering regimes when the number of deletions t is a small constant, a small constant fraction of n, and a large proportion of n, as well as the list-decoding model. The talk will survey some of this progress. The focus will be on binary codes for bit-deletions, but time permitting we will also mention results for larger alphabets as well as the closely related "insdel" model where symbols may be inserted and deleted.
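The optimal single-deletion codes from the 60s mentioned above are the Varshamov-Tenengolts codes; here is a minimal sketch, with a brute-force decoder for illustration (the classical VT decoder runs in linear time).

```python
from itertools import product

def vt_checksum(x):
    # Varshamov-Tenengolts checksum: sum of i * x_i, 1-indexed.
    return sum(i * bit for i, bit in enumerate(x, start=1))

def vt_codewords(n, a=0):
    """All binary words of length n whose checksum is a mod (n+1).
    The code VT_a(n) corrects any single deletion, with redundancy
    roughly log2(n+1) bits."""
    return [x for x in product((0, 1), repeat=n)
            if vt_checksum(x) % (n + 1) == a]

def decode_one_deletion(y, n, a=0):
    """Recover the unique codeword from which one bit of y was
    deleted, by trying every possible insertion and keeping the
    candidates that satisfy the checksum."""
    candidates = set()
    for i in range(len(y) + 1):
        for b in (0, 1):
            x = y[:i] + (b,) + y[i:]
            if vt_checksum(x) % (n + 1) == a:
                candidates.add(x)
    assert len(candidates) == 1   # single-deletion correctability
    return candidates.pop()
```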
|14:45 —15:30|| |
Mohsen Ghaffari —Distributed Derandomization and Network Decomposition
This talk provides a brief overview of a recent line of work that provides an efficient distributed derandomization method [Rozhoň and Ghaffari STOC 2020; Ghaffari, Harris, and Kuhn FOCS 2018; and Ghaffari, Kuhn, and Maus STOC 2017]. Concretely, the derandomization method shows that any (locally-checkable) graph problem that admits a polylogarithmic-time randomized distributed algorithm also admits a polylogarithmic-time deterministic distributed algorithm. This result resolved several central and decades-old open problems in distributed graph algorithms.
Joint work with David G. Harris, Fabian Kuhn, Yannic Maus and Vaclav Rozhon
Euiwoong Lee —The Karger-Stein Algorithm is Optimal for k-cut
In the k-cut problem, we are given an edge-weighted graph and want to find the least-weight set of edges whose deletion breaks the graph into k connected components. It is easy to see that the elegant randomized contraction algorithm of Karger and Stein for global mincut (k = 2) can be naturally extended for general k with the running time of O(n^{2k−2}). Using tree packings, Thorup gave a deterministic algorithm that has the same running time.
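For readers unfamiliar with the contraction algorithm, here is a minimal sketch for the classical k = 2 case. The implementation choices (union-find, random edge permutation, the example graph) are mine; Karger-Stein improves on this by recursing on partially contracted graphs rather than restarting from scratch.

```python
import random

def karger_min_cut(edge_list, n, trials=200, rng=None):
    """Estimate the global min cut (the k = 2 case) by repeated
    random edge contraction, as in Karger's algorithm.

    Contracting edges in a uniformly random order (skipping edges
    whose endpoints are already merged) until two super-vertices
    remain succeeds with probability at least 1/C(n, 2) per trial;
    repetition amplifies this."""
    rng = rng or random.Random()
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        components = n
        order = edge_list[:]
        rng.shuffle(order)
        for u, v in order:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv   # contract the edge (u, v)
                components -= 1
        # Count edges crossing the two remaining super-vertices.
        best = min(best, sum(1 for u, v in edge_list
                             if find(u) != find(v)))
    return best

# Two triangles joined by a single bridge edge: min cut is 1.
bridge_graph = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
```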
In this work, we show that for any fixed k ≥ 2, the Karger-Stein algorithm outputs any fixed minimum k-cut with probability at least Ω(n^{−k}). This also gives an extremal bound of O(n^k) on the number of minimum k-cuts in an n-vertex graph and an algorithm to compute a minimum k-cut in similar runtime. Both are essentially tight. The first main ingredient in our result is a fine-grained analysis of how the graph shrinks (and how the average degree evolves) under the Karger-Stein process. The second ingredient is an extremal result bounding the number of cuts of size at most (2−δ)OPT/k, using the Sunflower lemma.
Joint work with Anupam Gupta and Jason Li
Shachar Lovett —Recent Progress on the Sunflower Conjecture, and Connections to CS
The sunflower conjecture is an open problem in combinatorics, originally posed by Erdős and Rado in 1960, with little progress until recently. I will describe recent work which gets closer to proving the conjecture. The techniques involved build upon surprising connections to computer science, in particular to the study of DNFs in computational complexity. I will describe these connections, and potential new applications of these techniques in CS.
Joint work with Ryan Alweiss, Kewen Wu and Jiapeng Zhang
John Wright —MIP*=RE
Quantum complexity theory gives us a powerful lens to study basic resources and properties that arise in quantum computing. One of the most beguiling of these is the mysterious phenomenon of quantum entanglement, in which measurements on two far-apart quantum systems can be correlated in ways that no classical theory can explain. To study this, researchers in 2004 introduced the complexity class MIP*, which connected entanglement to a classic notion in complexity theory known as multiprover interactive proofs. Since then, determining the power of MIP* has remained a major open problem in the field of quantum complexity theory.
In this talk, I will describe recent work giving a solution to this problem: MIP* = RE, the complexity class containing the halting problem and those languages which reduce to it. This shows that entanglement is a resource of almost unimaginable power, as it can be used to solve problems which are undecidable. The proof involves new techniques that allow a classical verifier to use entanglement to delegate increasingly complex computations to two quantum provers. I will also describe the deep and surprising connections that MIP* has to two other major open problems, Tsirelson's problem from entanglement theory and Connes' embedding problem from operator algebras, and show how this result leads to a resolution of both problems in the negative.
Joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and Henry Yuen.
|15:30 —15:50||Q&A session (Zoom breakout rooms)|
|16:00 —17:00||Junior/senior meetup (Gather)|
|11:00 —12:00||Contributed talks session 3|
|12:00 —12:30||Contributed talks Q&A (Gather)|
|13:30 —14:15|| |
Shalev Ben-David —A New Minimax Theorem for Randomized Algorithms
We introduce a new type of minimax theorem for randomized algorithms, one that gives a hard distribution well-suited for the analysis of small-bias randomized algorithms. A version of our minimax theorem applies in multiple randomized models of computation: query complexity, communication complexity, randomized circuit models, approximate polynomial degree, approximate rank, and various quantum models. I will briefly discuss these results, as well as the role that proper scoring rules (from statistics) play in our work.
Joint work with Eric Blais.
Omri Ben-Eliezer —A Framework for Adversarially Robust Streaming Algorithms
We investigate the adversarial robustness of streaming algorithms; an algorithm is considered robust if its performance guarantees hold even if the stream is chosen by an adaptive adversary, whose future stream updates may depend on previous outputs of the algorithm. Surprisingly little was known until recently about the adversarial robustness of streaming algorithms. Deterministic algorithms are always robust, but in many cases they are inefficient. Randomized algorithms tend to be very efficient, but are they robust? For many of the most widely used algorithms, the answer is either negative or unknown. How can one devise, then, principled methods to obtain streaming algorithms that are both robust and efficient?
We address this question by developing a simple and efficient framework allowing one to transform existing algorithms for many important streaming problems, such as norm estimation, distinct elements, heavy hitters, and entropy estimation, into robust ones, while inducing only a small overhead in the space complexity. Since this work was first published in early 2020, a flurry of subsequent work on adversarially robust streaming has been carried out, and in the talk I will briefly survey what is currently known about the area, as well as some intriguing open questions.
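One technique in this framework is sketch-switching: run several independent copies of a non-robust estimator, reveal a new output only when it must change by a (1 + ε) factor, and advance to a fresh copy each time. The sketch below is a simplified illustration with invented names; `ExactDistinct` is a stand-in for a genuine randomized sketch, and the real framework handles parameters and failure probabilities far more carefully.

```python
import math

class ExactDistinct:
    """Stand-in for a randomized distinct-elements sketch; a real
    instantiation would use much less memory but be correct only
    against oblivious streams."""
    def __init__(self):
        self.seen = set()
    def update(self, item):
        self.seen.add(item)
    def estimate(self):
        return len(self.seen)

class RobustMonotoneEstimator:
    """Sketch-switching for a monotonically growing quantity:
    keep several independent copies of the base estimator, expose
    one copy at a time, and move to a fresh copy whenever the
    released value must grow by a (1 + eps) factor. The adversary
    then never learns enough about any single copy's randomness
    to exploit it; about log_{1+eps}(max_value) switches suffice."""
    def __init__(self, make_sketch, eps, max_value):
        copies = max(1, math.ceil(math.log(max_value) / math.log(1 + eps))) + 1
        self.sketches = [make_sketch() for _ in range(copies)]
        self.eps = eps
        self.active = 0
        self.released = 0.0

    def update(self, item):
        for s in self.sketches:
            s.update(item)

    def estimate(self):
        est = self.sketches[self.active].estimate()
        if est > (1 + self.eps) * self.released:
            self.released = est            # publish a new value...
            if self.active + 1 < len(self.sketches):
                self.active += 1           # ...and retire this copy
        return self.released
```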
Joint work with Rajesh Jayaram, David Woodruff and Eylon Yogev.
Siyao Guo —Data Structures Meet Cryptography: 3SUM with Preprocessing
This paper shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next.
First, we apply Fiat-Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space S and (online) time T for a preprocessing version of the N-input 3SUM problem where S^3·T = O(N^6). This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for S = N^{2−δ} and T = N^{1−δ} for any constant δ > 0.
Secondly, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function F: [N] → [N] (accessed as an oracle), we show how to compile it into a function G^F: [N^2] → [N^2] which resists S-bit preprocessing attacks that run in query time T where ST = O(N^{2−ε}) (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that F itself can be more easily inverted, say with N^{2/3}-bit preprocessing in N^{2/3} time. We also show that much stronger lower bounds follow from the hardness of kSUM. Our results can be equivalently interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle.
Thirdly, we give non-adaptive lower bounds for 3SUM which match the best known lower bounds for static data structure problems. Moreover, we show that our lower bound generalizes to a range of geometric problems, such as three points on a line, polygon containment, and others.
Joint work with Alexander Golovnev, Thibaut Horel, Sunoo Park, and Vinod Vaikuntanathan
Saurabh Sawlani —Near-Optimal Fully Dynamic Densest Subgraph
Dense subgraph discovery is an important primitive for many real-world applications such as community detection, link spam detection, distance query indexing, and computational biology. We approach the densest subgraph problem by framing its dual as a graph orientation problem, which we solve using an augmenting path-like adjustment technique. We give the first fully dynamic algorithm which maintains a (1 − ε)-approximate densest subgraph in worst-case time poly(log n, 1/ε) per update. Our result improves upon the previous best approximation factor of (1/4 − ε) for fully dynamic densest subgraph [Bhattacharya et al., STOC '15].
Joint work with Junxing Wang
|14:15 —14:35||Q&A session (Zoom breakout rooms)|
|14:45 —15:45|| |
Nika Haghtalab —Foundations of ML for a Social and Strategic World
When machine learning systems interact directly with people, they affect the way people behave. In recent years, corporations and government organizations have been steadily increasing their use of machine learning techniques to automate business, social, and economic decisions. These consequential decisions take place in a more complex social and economic context, where the learners and objects of learning are often people or organizations that are impacted by the learning algorithm and, in return, can take actions that influence the learning process. A research area at the intersection of machine learning, algorithmic economics, and theoretical computer science has emerged in recent years to build a foundation for designing learning algorithms that perform well in the presence of everyday societal and economic forces, and for ensuring the integrity of the social and economic forces that are born out of the use of machine learning systems. In this talk, I will survey recent progress in this field, discuss new trends and important directions for the future, and highlight connections to well-established research areas including online learning, smoothed analysis, and auctions.
|16:00 —17:00||Business meeting (Zoom)|
|13:30 —14:15|| |
Pawel Gawrychowski —Minimum Cut in O(m log2 n) Time
In the (global) minimum cut problem, we are given an undirected edge-weighted graph on n nodes and m edges and seek a partition of the nodes into two nonempty parts that minimizes the total weight of edges connecting the parts. In 1996, Karger designed a randomized algorithm that solves this problem in O(m log^3 n) time (correct with high probability). Since then, there had been essentially no progress on obtaining a faster algorithm for sparse graphs (but there was some exciting progress on deterministic/faster algorithms for the unweighted case). I will provide an overview of some recent results that obtain faster algorithms for the general weighted case, in particular in O(m log^2 n) or O(m log n + n^{1+ε}) time.
Zhiyi Huang —Online Correlated Selection and Its Applications in Online Matching Problems
This work initiates the study of Online Correlated Selection (OCS). Consider a set of ground elements and a sequence of pairs of elements which arrive one at a time. On the arrival of such a pair, the algorithm must select one of the two elements, each with probability half. For example, suppose that we select with a fresh random bit every time. Then, after an element appears in k pairs, it will be selected at least once with probability 1 − 2^{−k}. Can we do better?
Concretely, for any 0 ≤ 𝛾 ≤ 1, a 𝛾-semi-OCS is an algorithm which guarantees that an element is chosen at least once with probability 1 − 2^{−k}(1 − 𝛾)^{k−1}, after appearing in k pairs. For example, the above naïve independent selection algorithm is a 0-semi-OCS; on the other hand, 𝛾 = 1 corresponds to perfect negative correlation in the sense that an element is selected in its second appearance if and only if it is not selected in its first. We show that the largest possible 𝛾 for which a 𝛾-semi-OCS exists is 𝛾 = 1/2.
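Numerically, the definition above compares to independent selection as follows; this is a small sketch of the stated bounds plus a Monte Carlo check of the naïve algorithm, with function names of my own choosing.

```python
import random

def naive_bound(k):
    # Independent fair coins: P(selected at least once in k pairs).
    return 1 - 2 ** (-k)

def semi_ocs_bound(k, gamma):
    # A gamma-semi-OCS guarantees selection with probability at
    # least 1 - 2^{-k} (1 - gamma)^{k-1} after k appearances.
    return 1 - 2 ** (-k) * (1 - gamma) ** (k - 1)

# Monte Carlo check that independent selection matches the 0 case.
rng = random.Random(0)
k, trials = 3, 20_000
hits = sum(any(rng.random() < 0.5 for _ in range(k)) for _ in range(trials))
```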
Further, we consider a more difficult notion of 𝛾-OCS, which considers arbitrary subsets of appearances of an element; in other words, they need not be the first k appearances. We introduce a 𝛾-OCS for some 𝛾 > 0.1.
As applications of the OCS, we obtain the first online algorithms that break the long-standing 0.5 barrier in the edge-weighted online matching problem (a.k.a., Display Ads) and AdWords.
Joint work with Matthew Fahrbach, Runzhou Tao, and Morteza Zadimoghaddam
Shay Moran —An Equivalence Between Private PAC Learning and Online Learning
Let H be a concept class. We prove that H can be PAC-learned by an (approximate) differentially-private algorithm if and only if it has a finite Littlestone dimension. This implies an equivalence between online learnability and private PAC learnability.
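The Littlestone dimension in the statement above is the depth of the deepest complete mistake tree a class can realize. For a finite class over a finite domain it can be computed by the standard recursion; the brute-force sketch below is exponential in general and purely illustrative.

```python
def littlestone_dim(H):
    """Littlestone dimension of a finite class H over a finite
    domain, with each hypothesis given as a tuple of 0/1 labels
    (h(x_1), ..., h(x_n))."""
    if not H:
        return -1          # convention for the empty class
    if len(H) == 1:
        return 0           # a single hypothesis forces no mistakes
    best = 0
    for x in range(len(H[0])):
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:      # point x extends the mistake tree
            best = max(best, 1 + min(littlestone_dim(H0),
                                     littlestone_dim(H1)))
    return best

# Thresholds on three points: 4 hypotheses, learnable with at
# most 2 mistakes (binary search over the threshold).
thresholds = [(1, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0)]
```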
Joint work with Noga Alon, Mark Bun, Roi Livni, and Maryanthe Malliaris
Eva Rotenberg —Fully-Dynamic Planarity Testing in Polylogarithmic Time
Given a dynamic graph subject to insertions and deletions of edges, a natural question is whether the graph presently admits a planar embedding. We give a deterministic fully-dynamic algorithm for general graphs, running in amortized O(log^3 n) time per edge insertion or deletion, that maintains a bit indicating whether or not the graph is presently planar.
Joint work with Jacob Holm
|14:15 —14:35||Q&A session (Zoom breakout rooms)|
|14:45 —15:45|| |
Yin Tat Lee —Convex Optimization and Dynamic Data Structures
In the last four years, there have been many breakthroughs in optimization, such as solving linear programs as fast as matrix multiplication, solving dense maxflow in nearly linear time, and fast optimization algorithms on low-treewidth graphs. All of these algorithms are based on a careful combination of optimization techniques and dynamic data structures. In this talk, we will explain the framework underlying all the recent breakthroughs.
|16:00 —17:00||Farewell social (Gather)|
Accepted contributed presentations
Each contributed presentation will consist of a 3½ minute talk. A Q&A session follows each contributed talk session, where participants will be able to discuss with the speaker in Gather.
Monday 11:00 — 12:00
|1||Mo Tiwari, Martin Jinye Zhang, James Mayclin, Sebastian Thrun, Chris Piech and Ilan Shomorony —BanditPAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits|
|2||Arturs Backurs, Avrim Blum and Neha Gupta —Active Local Learning|
|3||Meike Neuwohner —An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs|
|4||Susanne Albers, Arindam Khan and Leon Ladewig —Best Fit Bin Packing with Random Order Revisited|
|5||Arindam Khan and Madhusudhan Reddy —On guillotine separability of squares and rectangles|
|6||Arindam Khan, Arnab Maiti, Amatya Sharma and Andreas Wiese —On Guillotine Separable Packings for the Two-dimensional Geometric Knapsack Problem|
|7||Lars Rohwedder and Andreas Wiese —A (2+epsilon)-approximation algorithm for preemptive weighted flow time on a single machine|
|8||Kunal Agrawal, Michael A. Bender, Rathish Das, William Kuszmaul, Enoch Peserico and Michele Scquizzato —Tight Bounds for Parallel Paging and Green Paging|
|9||Karl Bringmann and André Nusser —Fine-Grained Lower Bounds for Hausdorff Distance Under Translation|
|10||Nithin Varma and Yuichi Yoshida —Average Sensitivity of Graph Algorithms|
|11||Franziska Eberle, Ruben Hoeksma, Nicole Megow, Lukas Nölke, Kevin Schewior and Bertrand Simon —Speed-Robust Scheduling - Rocks, Bricks, and Sand|
|12||Mohammad Akbarpour, Yeganeh Alimohammadi, Shengwu Li and Amin Saberi —The Value of Excess Supply in Spatial Matching Markets|
|13||Jesper Nederlof, Jakub Pawlewicz, Céline Swennenhuis and Karol Węgrzycki —A Faster Exponential Time Algorithm for Bin Packing With a Constant Number of Bins via Additive Combinatorics|
|14||Bartlomiej Dudek, Pawel Gawrychowski and Tatiana Starikovskaya —All Non-trivial Variants of 3-LDT Are Equivalent|
|15||Mobin Y.Jeloudar, Irene Lo, Tristan Pollner and Amin Saberi —Decentralized Matching in a Probabilistic Environment|
Tuesday 11:00 — 12:00
|1||Amit Levi, Ramesh Pallavoor, Sofya Raskhodnikova and Nithin Varma —Erasure-Resilient Sublinear-Time Graph Algorithms|
|2||Laurent Feuilloley, José Correa, Andrés Cristi, Alexandros Tsigonias-Dimitriadis and Tim Oosterwijk —The Secretary Problem with Independent Sampling|
|3||Marc Roth, Johannes Schmitt and Philip Wellnitz —Counting Small Induced Subgraphs Satisfying Monotone Properties|
|4||Diodato Ferraioli, Paolo Penna and Carmine Ventre —Two-way Greedy: Algorithms for Imperfect Rationality|
|5||Daniel Lokshtanov, Saket Saurabh and Vaishali Surianarayanan —A Parameterized Approximation Scheme for Min k-Cut|
|6||Spyros Angelopoulos —Online Search with a Hint|
|7||Ronen Gradwohl, Niklas Hahn, Martin Hoefer and Rann Smorodinsky —Algorithms for Persuasion with Limited Communication|
|8||Paul Duetting, Federico Fusco, Philip Lazos, Stefano Leonardi and Rebecca Reiffenhäuser —Efficient Two-Sided Markets with Limited Information|
|9||Bernhard Haeupler, David Wajc and Goran Zuzic —Universally-Optimal Distributed Algorithms for Known Topologies|
|10||Vincent Cohen-Addad, David Saulpic and Chris Schwiegelshohn —A New Coreset Framework for Clustering|
|11||Bernhard Haeupler, D. Ellis Hershkowitz and Goran Zuzic —Tree Embeddings for Hop-Constrained Network Design|
|12||Yeganeh Alimohammadi, Nima Anari, Kirankumar Shiragur and Thuy Duong Vuong —Fractionally Log-Concave and Sector-Stable Polynomials: Counting Planar Matchings and More|
|13||Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi and Rebecca Reiffenhäuser —Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint|
|14||Ainesh Bakshi and Pravesh Kothari —Outlier-Robust Clustering of Non-Spherical Mixtures|
|15||Richard Cole and Yixin Tao —On the Existence of Pareto Efficient and Envy Free Allocations|
Wednesday 11:00 — 12:00
|1||Graham Cormode, Zohar Karnin, Edo Liberty, Justin Thaler and Pavel Veselý —Relative Error Streaming Quantiles|
|2||Amir Abboud, Robert Krauthgamer and Ohad Trabelsi —Subcubic Algorithms for Gomory-Hu Tree in Unweighted Graphs|
|3||Waldo Gálvez, Fabrizio Grandoni, Arindam Khan, Diego Ramirez and Andreas Wiese —Improved Approximation Algorithms for 2-Dimensional Knapsack: Packing into Multiple L-Shapes, Spirals, and More|
|4||Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan and Shay Sapir —Near-Optimal Entrywise Sampling of Numerically Sparse Matrices|
|5||Waldo Gálvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan and Malin Rau —A Tight (3/2+ɛ) Approximation for Skewed Strip Packing|
|6||Monika Henzinger, Stefan Neumann, Harald Räcke and Stefan Schmid —Tight Bounds for Online Graph Partitioning|
|7||Roie Levin and David Wajc —Streaming Submodular Matching Meets the Primal-Dual Method|
|8||Peter Gartland, Daniel Lokshtanov, Marcin Pilipczuk, Michał Pilipczuk and Paweł Rzążewski —Finding Large Induced Sparse Subgraphs in Ct-Free Graphs in Quasipolynomial Time|
|9||Andreas S. Schulz and Claudio Telha Cornejo —Integer factorization and Riemann’s hypothesis: Why two-item joint replenishment is hard|
|10||Bernadette Charron-Bost and Patrick Lambein-Monette —Average consensus in spite of dynamic communications|
|11||Katarzyna Paluch —New Approximation Algorithms for Maximum Asymmetric Traveling Salesman and Shortest Superstring|
|12||Grant Schoenebeck and Fang-Yi Yu —Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach|
|13||Daniel Dadush, László Végh and Bento Natura —Revisiting Tardos's Framework for Linear Programming: Faster Exact Solutions using Approximate Solvers|
|14||Kush Bhatia, Peter Bartlett, Anca Dragan and Jacob Steinhardt —Agnostic learning with unknown utilities|
|15||Parinya Chalermsook, Syamantak Das, Yunbum Kook, Bundit Laekhanukit, Yang P. Liu, Richard Peng, Mark Sellke and Daniel Vaz —Vertex Sparsification for Edge Connectivity|