
Monday, 04 January 2021
Time Speaker Title Resources
17:30 to 18:20 Shie Mannor (Technion - Israel Institute of Technology, Israel) Risk and robustness in Reinforcement Learning

In this talk I will start by giving a broad overview of my research, focusing on the essential elements needed for scaling reinforcement learning to real-world problems. I will present a scheme called "extended intelligence" that concerns the design of systems that participate as responsible, aware and robust elements of more complex systems. I will then dive into the question of how to create control policies from existing historical data and how to sample trajectories so that future control policies have less uncertain returns. This question has been central to reinforcement learning for the last decade, if not longer, and involves methods from statistics, optimization, and control theory. We will focus on one of the possible remedies to uncertainty in sequential decision problems: using risk measures such as the conditional value-at-risk as the objective to be optimized, rather than the ubiquitous expected reward. We consider the complexity and efficiency of evaluating and optimizing risk measures. Our main theme is that considering risk is essential for obtaining resilience to model uncertainty and model mismatch. We then turn our attention to online approaches that adapt on the fly to the level of uncertainty of a given trajectory, thus achieving robustness without being overly conservative.
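For concreteness, here is the risk measure mentioned above in its empirical form (an illustrative Python sketch, not material from the talk): the conditional value-at-risk at level alpha is the average of the worst alpha-fraction of sampled returns.

import numpy as np

def cvar(returns, alpha=0.1):
    # Empirical conditional value-at-risk: the mean of the worst
    # alpha-fraction of the sampled returns (lower return = worse outcome).
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

rng = np.random.default_rng(0)
sampled_returns = rng.normal(loc=1.0, scale=2.0, size=10_000)
print(cvar(sampled_returns, alpha=0.05))   # substantially below the mean return of 1.0

Optimizing this quantity instead of the expectation penalizes policies whose return distribution has a heavy lower tail.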

18:30 to 19:20 Navin Goyal (Microsoft Research Bengaluru, India) Towards Analyzing Normalizing Flows

In supervised learning (i.e., learning functions when given examples of input-output pairs), it has been proved in the last 2-3 years that overparameterized neural networks with one hidden layer efficiently learn and generalize when trained using Stochastic Gradient Descent (SGD). In contrast, such results are not known for unsupervised learning, in particular for learning probability distributions. In this talk, I will first explain the supervised setting and sketch the result mentioned above. Then I will discuss normalizing flows, which are among the recently popular neural models for learning probability distributions. We will achieve some partial understanding: while a variant of normalizing flows can provably learn and generalize for univariate probability distributions, the situation for the multivariate case remains unclear.
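For reference, the change-of-variables identity on which normalizing flows are built: if $f_\theta$ is an invertible map and the base variable $Z$ has a simple density $p_Z$ (e.g., standard Gaussian), then the model density of $X$ with $Z = f_\theta(X)$ is $p_X(x) = p_Z(f_\theta(x))\,|\det \nabla f_\theta(x)|$, which reduces to $p_Z(f_\theta(x))\,|f_\theta'(x)|$ in the univariate case; training maximizes the resulting log-likelihood $\sum_i \log p_X(x_i)$ over the data.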

19:30 to 20:20 Mengdi Wang (Princeton University, New Jersey, US) Some theoretical results on model-based reinforcement learning
We discuss some recent results on model-based methods for reinforcement learning (RL) in both online and offline problems.  
 
  • For the online RL problem, we discuss several model-based RL methods that adaptively explore an unknown environment and learn to act with provable regret bounds. In particular, we focus on finite-horizon episodic RL where the unknown transition law belongs to a generic family of models. We propose a model-based ‘value-targeted regression’ RL algorithm based on the optimism principle: in each episode, a set of models that are ‘consistent’ with the data collected so far is constructed. The criterion of consistency is the total squared error that the model incurs in predicting the values, as determined by the last value estimate, along the observed transitions. The next value function is then chosen by solving the optimistic planning problem with the constructed set of models (see the sketch after this list). We derive a regret bound, for an arbitrary family of transition models, using the notion of the so-called Eluder dimension proposed by Russo & Van Roy (2014).
  • Next we discuss batch-data (offline) reinforcement learning, where the goal is to predict the value of a new policy using data generated by some behavior policy (which may be unknown). We show that the fitted Q-iteration method with linear function approximation is equivalent to a model-based plug-in estimator. We establish that this model-based estimator is minimax optimal and that its statistical limit is determined by a form of restricted chi-square divergence between the two policies.
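A schematic sketch of the value-targeted regression step from the first item above (Python; the names model_class, transitions, planner and the threshold beta are hypothetical illustrations, not the authors' implementation):

def consistent_models(model_class, transitions, V_last, beta):
    # Keep the models whose predicted expected next values fit the observed
    # value targets, i.e. the 'consistent' set described in the abstract.
    #   model_class : candidate transition models; each m has
    #                 m.expected_value(s, a, V) = E_{s' ~ m(.|s,a)}[V(s')]
    #   transitions : observed (s, a, s') triples from past episodes
    #   V_last      : the most recent value-function estimate (callable)
    #   beta        : confidence-set radius (squared-error threshold)
    kept = []
    for m in model_class:
        sq_err = sum((m.expected_value(s, a, V_last) - V_last(s_next)) ** 2
                     for (s, a, s_next) in transitions)
        if sq_err <= beta:
            kept.append(m)
    return kept

def optimistic_choice(models, planner):
    # Optimism: plan in every consistent model and act according to the one
    # that promises the largest value.  planner(m) returns (policy, value).
    return max((planner(m) for m in models), key=lambda pv: pv[1])
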
Tuesday, 05 January 2021
Time Speaker Title Resources
09:00 to 09:50 Joel Tropp (California Institute of Technology, California, US) Random products and quantum simulation

How does a product of random matrices behave? This question has a long history in dynamical systems and ergodic theory, where the focus is on asymptotic behavior as the number of factors increases. It has also been studied within high-dimensional statistics and free probability, where the focus is on the asymptotic behavior as the dimension increases. Recently, products of random matrices have received renewed attention as a model for the behavior of randomized optimization algorithms, where it is important to obtain results for a fixed number of factors with a fixed dimension.

This talk outlines an easy method for obtaining such nonasymptotic concentration inequalities for a product of random matrices. As an application, we will show how this approach allows us to design an optimal randomized algorithm for the simulation of a quantum system. No background in random matrix theory or quantum information will be required.
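A small numerical illustration of the nonasymptotic regime discussed here (written for this page, not taken from the talk): the spectral norm of a product of a fixed number of independent random perturbations of the identity, in fixed dimension, concentrates around a deterministic value.

import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 20, 50, 200          # dimension, number of factors, Monte Carlo trials

norms = []
for _ in range(trials):
    # Product of n independent random perturbations of the identity.
    P = np.eye(d)
    for _ in range(n):
        P = (np.eye(d) + rng.normal(scale=1.0 / np.sqrt(n * d), size=(d, d))) @ P
    norms.append(np.linalg.norm(P, 2))

print(f"spectral norm of the product: mean {np.mean(norms):.3f}, std {np.std(norms):.3f}")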

The results are drawn from two collaborations: http://arxiv.org/abs/2003.05437, with De Huang, Jonathan Niles-Weed, and Rachel Ward; and http://arxiv.org/abs/2008.11751, with Chi-Fang (Anthony) Chen, Hsin-Yuan (Robert) Huang, and Richard Kueng.

17:30 to 18:20 Rahul Roy (Indian Statistical Institute, Delhi, India) A non-planar random graph and its planar scaling limit
We study the two-dimensional version of a random graph introduced by Athreya et al. (2008). This random graph, also called the torchlight model, is a non-planar directed random graph. We show that the scaling limit of this graph is the Brownian web.

This is joint work with Azadeh Parvaneh. 

 
18:30 to 19:20 Anette Hosoi (Massachusetts Institute of Technology, Cambridge, US) Short Stories of Probability and Sports

In light of recent advances in data collection, sports possess a number of features that make them an ideal testing ground for new analyses and algorithms.  In this talk I will describe a few studies that lie at the intersection of sports and data.  The first centers on fantasy sports, which have experienced a surge in popularity in the past decade. One of the consequences of this recent rapid growth is increased scrutiny surrounding the legal aspects of the games, which typically hinge on the relative roles of skill and chance in the outcome of a competition. While there are many ethical and legal arguments that enter into the debate, the answer to the skill-versus-chance question is grounded in mathematics. In this talk I will analyze data from daily fantasy competitions and propose a new metric to quantify the relative roles of skill and chance in games and other activities. Time permitting, in the second part of this talk, I will briefly outline two additional studies that lie at the intersection of sports and probability. The first is a collaboration with Major League Baseball to determine the physics behind the recent increase in the rate of home runs; in particular I will enumerate different potential drivers for the observed increase and evaluate the evidence in the data (box scores, ball tracking, weather, etc.) in support of each theory.  The second explores one aspect of the impact of the pandemic on sports through data associated with opening NFL stadiums to fans.

19:30 to 20:20 Nike Sun (Massachusetts Institute of Technology, Cambridge, US) Phase transitions in random constraint satisfaction problems

I will survey recent progress on determining the asymptotic behavior of random constraint satisfaction problems, including phase transitions and some understanding of solution geometry. I will discuss (as time permits) two ideas that played an important role in these works: (1) combinatorial models for the solution geometry, and (2) contractivity of tree recursions as a tool for calculating expected partition functions on sparse random graphs. This lecture is based in part on joint works with Zsolt Bartha, Jian Ding, Allan Sly, and Yumeng Zhang.

Wednesday, 06 January 2021
Time Speaker Title Resources
17:30 to 18:20 Gabor Lugosi (Pompeu Fabra University, Barcelona, Spain) Root finding and broadcasting in random recursive trees

Uniform and preferential attachment trees are among the simplest examples of dynamically growing networks. The statistical problems we address in this talk regard discovering the past of the tree when a present-day snapshot is observed. We present a few results that show that, even in gigantic networks, a lot of information is preserved from the very early days. In particular, we discuss the problem of finding the root and the broadcasting problem.
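To make the root-finding problem concrete, here is a small sketch (Python with networkx, written for this page rather than taken from the talk) of a uniform random recursive tree together with one centrality-based root estimator of the kind studied in this line of work:

import random
import networkx as nx

def uniform_recursive_tree(n, seed=0):
    # Vertex t attaches to a uniformly chosen earlier vertex.
    rng = random.Random(seed)
    T = nx.Graph()
    T.add_node(0)
    for t in range(1, n):
        T.add_edge(t, rng.randrange(t))
    return T

def root_candidates(T, k=5):
    # Rank vertices by the size of the largest component left after removing
    # the vertex; central vertices (typically including the root) score low.
    def score(v):
        H = T.copy()
        H.remove_node(v)
        return max(len(c) for c in nx.connected_components(H))
    return sorted(T.nodes, key=score)[:k]

T = uniform_recursive_tree(500)
print(root_candidates(T))   # the true root (labelled 0 here) is usually among these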

18:30 to 19:20 Subhajit Goswami (TIFR, Mumbai, India) Adaptive Estimation via Optimal Decision Trees

Proposed by Donoho in 1997, Dyadic CART is a nonparametric regression method which computes a globally optimal dyadic decision tree and fits piecewise constant functions in two dimensions. In this talk we discuss Dyadic CART and a closely related estimator, namely the Optimal Regression Tree (ORT), in the context of estimating piecewise smooth functions in general dimensions in the fixed-design setup. More precisely, these optimal decision tree estimators fit piecewise polynomials of any given degree. Like Dyadic CART in two dimensions, we will argue that these estimators can also be computed in time polynomial in the sample size via dynamic programming. We will discuss oracle inequalities for the finite-sample risk of Dyadic CART and ORT which imply tight risk bounds for several function classes of interest. In this context we will consider two function spaces of recent interest where multivariate total variation denoising and univariate trend filtering are the state-of-the-art methods. We will argue that Dyadic CART enjoys certain advantages over these estimators while still maintaining all their known guarantees. Based on joint work with Sabyasachi Chatterjee.
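A minimal one-dimensional sketch of the dynamic program behind such estimators (illustrative Python; the estimators in the talk handle general dimensions and piecewise polynomials of any degree): every recursive dyadic interval either receives one constant piece or is split at its midpoint, and each dyadic interval is visited exactly once.

import numpy as np

def dyadic_cart_1d(y, lam):
    # Optimal recursive dyadic partition of y into constant pieces,
    # minimizing squared error + lam * (number of pieces).
    # Assumes len(y) is a power of two; returns the optimal cost.
    def cost(lo, hi):
        seg = y[lo:hi]
        sse = np.sum((seg - seg.mean()) ** 2)    # fit a single constant piece
        if hi - lo == 1:
            return sse + lam
        mid = (lo + hi) // 2                     # the only allowed split is dyadic
        return min(sse + lam, cost(lo, mid) + cost(mid, hi))
    return cost(0, len(y))

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(8), np.ones(8)]) + 0.1 * rng.normal(size=16)
print(dyadic_cart_1d(y, lam=0.5))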

19:30 to 20:20 Alexander Rakhlin (Massachusetts Institute of Technology, Cambridge, US) Is Memorization Compatible with Learning?
One of the key tenets taught in courses on Statistics and Machine Learning is that fitting the data too well (that is, data memorization or interpolation) inevitably leads to overfitting and poor prediction performance. Yet, over-parametrized neural networks appear to defy this "rule". We will provide theoretical evidence that challenges the common wisdom. In particular, we will consider the minimum-norm interpolant in a reproducing kernel Hilbert space and show its good generalization properties in certain high-dimensional regimes. Since gradient dynamics for wide, randomly initialized neural networks provably converge to a minimum-norm interpolant (with respect to a certain kernel), our results imply generalization and consistency for such neural networks. On the technical side, our work involves analysis of the spectra of random kernel matrices.
 
Joint work with Tengyuan Liang and Xiyu Zhai.
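The central object above, in its simplest computational form (a Python sketch; the Gaussian kernel and the dimensions are illustrative choices, not those of the paper):

import numpy as np

def min_norm_interpolant(X, y, kernel):
    # Minimum-RKHS-norm interpolant: f(x) = k(x, X) @ K(X, X)^{-1} y.
    K = kernel(X, X)
    alpha = np.linalg.solve(K, y)               # assumes K is invertible
    return lambda Xnew: kernel(Xnew, X) @ alpha

def gaussian_kernel(A, B, bandwidth=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
f = min_norm_interpolant(X, y, gaussian_kernel)
print(np.allclose(f(X), y))                     # True: the training data are fit exactly

Among all functions in the RKHS that fit the data exactly, this one has the smallest RKHS norm; the talk concerns when such an interpolant nevertheless generalizes.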
Thursday, 07 January 2021
Time Speaker Title Resources
18:30 to 19:20 Alexandre Proutiere (KTH Royal Institute of Technology, Stockholm, Sweden) Fastest Identification in Linear Systems

We report recent results on two classical inference problems in linear systems. (i) In the first problem, we wish to identify the dynamics of a canonical linear time-invariant system $x_{t+1} = A x_t + \eta_{t+1}$ from an observed trajectory. We provide a system-specific sample complexity lower bound satisfied by any $(\epsilon, \delta)$-PAC algorithm, i.e., one yielding an estimation error less than $\epsilon$ with probability at least $1-\delta$. We further establish that the Ordinary Least Squares estimator achieves this fundamental limit. (ii) In the second inference problem, we aim at identifying the best arm in bandit optimization with linear rewards. We derive instance-specific sample complexity lower bounds for any $\delta$-PAC algorithm, and devise a simple track-and-stop algorithm achieving this lower bound. In both inference problems, the analysis relies on novel concentration results for the spectrum of the covariates matrix. Joint work with Yassir Jedra.
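A minimal simulation of problem (i) and the Ordinary Least Squares estimator it discusses (Python; the particular matrix A, horizon and noise level are arbitrary choices made for illustration):

import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 500
A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.7, 0.2],
              [0.1, 0.0, 0.6]])                 # a stable system, chosen arbitrarily

# Simulate x_{t+1} = A x_t + eta_{t+1} with standard Gaussian noise.
X = np.zeros((T + 1, n))
for t in range(T):
    X[t + 1] = A @ X[t] + rng.normal(size=n)

# Ordinary Least Squares: regress x_{t+1} on x_t over the whole trajectory.
past, future = X[:-1], X[1:]
A_hat = np.linalg.lstsq(past, future, rcond=None)[0].T

print(np.linalg.norm(A_hat - A, 2))             # estimation error in spectral norm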

19:30 to 20:20 Santosh Vempala (Georgia Institute of Technology, Atlanta, US) Reducing Isotropy to KLS: An Almost Cubic Volume Algorithm

Computing the volume of a convex body is an ancient problem whose study has led to many interesting mathematical developments. In the most general setting, the convex body is given only via a membership oracle. In this talk, we present a faster algorithm for isotropic transformation of an arbitrary convex body in R^n, with complexity n^3 \psi^2, where \psi bounds the KLS constant for isotropic convex bodies. Together with the known bound \psi = O(n^{o(1)}) [Chen 2020] and the Cousins-Vempala n^3 volume algorithm for well-rounded convex bodies [2015], this gives an n^{3+o(1)} volume algorithm for general convex bodies, improving on the n^4 algorithm of Lovász-Vempala [2003]. A positive resolution of the KLS conjecture (\psi = O(1)) would imply an n^3 volume algorithm (up to log factors). No background on algorithms, KLS or ABC will be assumed for the talk. This is joint work with He Jia, Aditi Laddha and Yin Tat Lee.
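The linear-algebra step behind "isotropic transformation", given (approximate) samples from the body, sketched in Python; the algorithmic content of the talk is how to produce such samples with near-cubic total complexity, and the skewed box below is just an illustrative stand-in for a convex body:

import numpy as np

def isotropic_transform(samples):
    # Affine map putting the empirical distribution of the samples into
    # isotropic position (zero mean, identity covariance).
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    w, V = np.linalg.eigh(cov)                   # inverse square root of the covariance
    W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return lambda x: (x - mu) @ W.T

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(5000, 4)) * np.array([5.0, 1.0, 0.2, 3.0])   # a skewed box
T = isotropic_transform(pts)
print(np.cov(T(pts), rowvar=False).round(2))     # approximately the identity matrix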

22:00 to 22:50 Nima Anari (Stanford University, California, US) Fractionally Log-Concave and Sector-Stable Polynomials

We show fully polynomial time randomized approximation schemes for counting matchings of a given size, or more generally sampling/counting monomer-dimer systems in planar, not-necessarily-bipartite, graphs. While perfect matchings on planar graphs can be counted exactly in polynomial time, counting non-perfect matchings was shown to be #P-hard by Jerrum [Jer87], who also raised the question of whether efficient approximate counting is possible. We answer this affirmatively by showing that the multi-site Glauber dynamics on the set of monomers in a monomer-dimer system always mixes rapidly, and that this dynamics can be implemented efficiently on families of graphs where counting perfect matchings is tractable. In order to analyze mixing properties of the multi-site Glauber dynamics, we establish two new notions for generating polynomials of discrete set-valued distributions: sector-stability and fractional log-concavity. These notions generalize well-studied properties like real-stability and log-concavity, but unlike them robustly degrade under useful transformations applied to the distribution. We relate these notions to pairwise correlations in the underlying distribution and the notion of spectral independence introduced by Anari, Liu, and Oveis Gharan [ALO20], providing a new tool for establishing spectral independence based on zeros of polynomials. As a byproduct of our techniques, we show that polynomials whose roots avoid certain regions of the complex plane must satisfy a form of convexity, extending a classic result established between hyperbolic and log-concave polynomials by Gårding [Går59] to new classes of polynomials. As further applications of our results, we show how to sample efficiently from certain partition-constrained strongly Rayleigh distributions, and certain asymmetric determinantal point processes using natural Markov chains. Joint work with Yeganeh Alimohammadi, Kirankumar Shiragur, and Thuy-Duong Vuong.
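For orientation, here is the classical single-edge (heat-bath) Glauber dynamics for a monomer-dimer system, sketched in Python; the talk analyzes a multi-site generalization of this chain, and the graph, dimer activity and step count below are arbitrary illustrations:

import random

def glauber_step(edges, matching, lam, rng):
    # Heat-bath update of one uniformly random edge in a monomer-dimer
    # system with dimer activity lam.
    e = rng.choice(edges)
    u, v = e
    if e in matching:
        matching = matching - {e}
    covered = {x for (a, b) in matching for x in (a, b)}
    # The edge may be occupied only if neither endpoint is covered elsewhere.
    if u not in covered and v not in covered and rng.random() < lam / (1 + lam):
        matching = matching | {e}
    return matching

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]         # a path on 5 vertices
M = set()
for _ in range(5000):
    M = glauber_step(edges, M, 1.0, rng)
print(sorted(M))                                 # one (approximate) sample from the system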

Friday, 08 January 2021
Time Speaker Title Resources
17:30 to 18:20 Gareth Roberts (University of Warwick, Coventry, UK) The Monte Carlo Fusion problem

Suppose we can readily access samples from $\pi_c(x)$, $1 \le c \le C$, but we wish to obtain samples from $\pi(x) = \prod_{c=1}^C \pi_c(x)$. The so-called Fusion problem comes up within various areas of modern Bayesian statistical analysis, for example in the context of big data or privacy constraints, as well as in more traditional areas such as meta-analysis. Many approximate solutions to this problem have been proposed. This talk will present an exact solution based on rejection sampling in an extended state space, where the accept/reject decision is carried out by simulating the skeleton of a suitably constructed auxiliary collection of Brownian bridges. It will also present a much more scalable Sequential Monte Carlo method which allows the approach to be used for larger $C$ and in the context of greater discrepancies between the $\pi_c(x)$ densities. Some theoretical results underpinning the method and its computational efficiency will be presented. This is joint work with Hongsheng Dai (Essex, UK) and Murray Pollock (Newcastle, UK).
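A small worked example of the fusion target (Python; this only illustrates what $\pi = \prod_c \pi_c$ looks like for Gaussian factors, together with a naive importance-weighting approximation, and is not the exact Brownian-bridge construction of the talk):

import numpy as np

# Fusion target for Gaussian factors pi_c = N(mu_c, s2_c): the product density
# is again Gaussian, with summed precisions and a precision-weighted mean.
mu = np.array([0.0, 1.0, 2.5])
s2 = np.array([1.0, 0.5, 2.0])
prec = 1.0 / s2
s2_fused = 1.0 / prec.sum()
mu_fused = s2_fused * (prec * mu).sum()
print(mu_fused, s2_fused)

# A naive approximate fusion: importance-weight samples from pi_1 by pi_2 * pi_3.
rng = np.random.default_rng(0)
x = rng.normal(mu[0], np.sqrt(s2[0]), size=200_000)
logw = sum(-0.5 * (x - m) ** 2 / v for m, v in zip(mu[1:], s2[1:]))
w = np.exp(logw - logw.max())
w /= w.sum()
print((w * x).sum())                             # close to mu_fused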

18:30 to 19:20 Natesh Pillai (Harvard University, Cambridge, US) Accelerating MCMC for Computationally Intensive Models

We construct a new framework for accelerating Markov chain Monte Carlo in posterior sampling problems where standard methods are limited by the computational cost of the likelihood, or of numerical models embedded therein. Our approach introduces local approximations of these models into the Metropolis-Hastings kernel, borrowing ideas from deterministic approximation theory, optimization, and experimental design. Previous efforts at integrating approximate models into inference typically sacrifice either the sampler's exactness or its efficiency; our work seeks to address these limitations by exploiting useful convergence characteristics of local approximations. We prove the ergodicity of our approximate Markov chain, showing that it samples asymptotically from the exact posterior distribution of interest. We describe variants of the algorithm that employ different approximations, such as local polynomial approximations.
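To illustrate the general idea of letting a cheap approximation absorb most of the work inside a Metropolis-Hastings kernel, here is a delayed-acceptance sketch in Python; this is a different (and simpler) technique than the locally refined approximations of the talk, and all names below are illustrative:

import numpy as np

def delayed_acceptance_mh(log_post, cheap_log_post, x0, n_steps, step=0.5, seed=0):
    # Random-walk Metropolis with delayed acceptance: the cheap approximation
    # screens proposals, and the expensive log-posterior is evaluated only for
    # proposals that survive the first stage; the chain still targets the
    # exact posterior.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp_x, clp_x = log_post(x), cheap_log_post(x)
    chain, expensive_calls = [], 1
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=x.shape)
        clp_p = cheap_log_post(prop)
        # Stage 1: accept/reject using only the cheap surrogate.
        if np.log(rng.random()) < clp_p - clp_x:
            lp_p = log_post(prop)
            expensive_calls += 1
            # Stage 2: correct with the exact model so the target is unchanged.
            if np.log(rng.random()) < (lp_p - lp_x) - (clp_p - clp_x):
                x, lp_x, clp_x = prop, lp_p, clp_p
        chain.append(x.copy())
    return np.array(chain), expensive_calls

chain, calls = delayed_acceptance_mh(
    log_post=lambda x: -0.5 * np.sum(x ** 2),               # stand-in "expensive" model
    cheap_log_post=lambda x: -0.5 * np.sum(x ** 2) / 1.2,   # crude approximation of it
    x0=np.zeros(2), n_steps=2000)
print(chain.mean(axis=0), calls)

The expensive log-posterior is evaluated only for proposals that pass the cheap first stage, and the second-stage correction keeps the chain targeting the exact posterior.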

19:30 to 20:20 Jon Mattingly (Duke University, Durham, US) Evaluating Fairness in Redistricting
The use of a collection of “neutral”, computer-generated redistricting maps is quickly becoming the standard for identifying gerrymandering and understanding the ways it is achieved. Several teams have used these ideas in advising governors and legislatures as well as in several court proceedings. The Duke group has been involved in several cases that have led to the redrawing of all of the federal and state-level redistricting maps for the 2020 elections.
 
After laying out the problem and its relation to gerrymandering (and recent court cases!), I will discuss some sampling schemes developed for this problem. More specifically, I will talk about attempts to move beyond simple reversible MCMC using local Ising-like moves. I will discuss more global moves based on balanced partitions generated using spanning trees/forests, and a multiscale extension which allows for sampling large graphs; a sketch of one such tree-based move appears below. I will also talk about some recent non-reversible Markov chain methods which might be interesting in their own right, and I will touch on some methods which don't require equilibrium.
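A sketch of a spanning-tree-based recombination move of the kind alluded to above (Python with networkx; the move, the population attribute name "pop", and the balance tolerance are illustrative and not necessarily the exact proposal used by the Duke group):

import random
import networkx as nx

def recom_step(G, assignment, d1, d2, tol=0.05, rng=None):
    # One spanning-tree recombination move: merge districts d1 and d2, draw a
    # random spanning tree of the merged region, and cut one edge whose two
    # sides have (nearly) equal population.
    rng = rng or random.Random(0)
    nodes = [v for v, d in assignment.items() if d in (d1, d2)]
    T = nx.random_spanning_tree(G.subgraph(nodes), seed=rng.randint(0, 10**9))
    total = sum(G.nodes[v].get("pop", 1) for v in nodes)
    for (u, v) in rng.sample(list(T.edges), len(T.edges)):
        T.remove_edge(u, v)
        side = nx.node_connected_component(T, u)
        pop = sum(G.nodes[w].get("pop", 1) for w in side)
        if abs(pop - total / 2) <= tol * total:          # balanced cut found
            new = dict(assignment)
            for w in nodes:
                new[w] = d1 if w in side else d2
            return new
        T.add_edge(u, v)                                 # undo the cut and try another edge
    return assignment                                    # no balanced cut: keep the old map
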
 
I will also discuss attempts to visualize the results and understand the geopolitical structure of the problem and the different ways gerrymandering manifests.
 
Overall, I hope to show you that redistricting is an interesting problem that has already generated many interesting mathematical questions, with likely many more to come.