# Invited speakers

- Christophe ANDRIEU (University of Bristol)
- Francis BACH (INRIA, ENS)
- Ciprian CRAINICEANU (Johns Hopkins University)
- Laurens DE HAAN (Erasmus University Rotterdam)
- Persi DIACONIS (Stanford University)
- Susanne DITLEVSEN (University of Copenhagen)
- Susan HOLMES (Stanford University)
- Vladimir KOLTCHINSKII (Georgia Institute of Technology), Le Cam Lecture laureate
- Rik LOPUHAA (Delft University of Technology)
- Clémentine PRIEUR (Université Joseph Fourier – Grenoble)
- Adrian RAFTERY (University of Washington)
- Judith ROUSSEAU (ENSAE)
- Richard SAMWORTH (University of Cambridge)
- Yves TILLE (Université de Neuchâtel)
- Mark VAN DER LAAN (University of California at Berkeley)
- Jean-Philippe VERT (Mines ParisTech)

### Abstracts

**Christophe ANDRIEU**

**Title**: Establishing some order amongst exact approximations of MCMCs

**Abstract**: Exact approximations of Markov chain Monte Carlo (MCMC) algorithms are a general class of sampling algorithms particularly well suited to Bayesian inference in complex models, or to computations in statistical physics, where some of the quantities involved are intractable. One of the main ideas behind exact approximations is to replace intractable quantities required to run standard algorithms, such as the target probability density in a Metropolis-Hastings algorithm, with estimators. Perhaps surprisingly, suitable and implementable approximations turn out to lead to exact algorithms, in the sense that they are guaranteed to target the probability distribution of interest without introducing any approximation. In this talk we present a general framework which allows one to compare, or order, performance measures of two such approximate implementations. We focus in particular on the mean acceptance probability, the first-order autocorrelation coefficient, the so-called asymptotic variance and the right spectral gap. The key notion we identify as relevant to our purpose is the convex order between random variables, in our case between two approximations of the aforementioned quantities required to implement standard algorithms. An important point is that using the variance of such approximations as a means to compare performance is not sufficient, whereas the convex order turns out to be natural and powerful. Indeed, the literature concerned with the convex order is vast, and we detail some examples of applications: identifying extremal distributions within given classes of approximations, showing that averaging replicas improves performance in a monotonic fashion, and showing that stratification may improve performance (it is in fact the case in almost all situations for the standard implementation of the Approximate Bayesian Computation (ABC) MCMC method). We also point to other applications and future developments. This is joint work with Matti Vihola.
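The "exact approximation" idea above can be illustrated with a pseudo-marginal Metropolis-Hastings sketch. This is not the speaker's code: the standard-normal target, the random-walk proposal and the mean-one multiplicative noise are toy assumptions chosen only so the block is self-contained. The key mechanics match the abstract: the intractable density is replaced by a positive unbiased estimator, and the estimate at the current state is recycled rather than refreshed.

```python
import math
import random

def noisy_density(x, rng, n_reps=10):
    """Positive, unbiased estimator of an (assumed intractable) target
    density.  The true target here is the standard normal; the estimator
    multiplies it by an average of n_reps positive mean-one noise terms,
    so averaging more replicas reduces the noise."""
    true_value = math.exp(-0.5 * x * x)
    noise = sum(rng.uniform(0.5, 1.5) for _ in range(n_reps)) / n_reps
    return true_value * noise

def pseudo_marginal_mh(n_iter, rng, n_reps=10):
    """Random-walk Metropolis-Hastings run on density *estimates*.  The
    estimate at the current state is recycled, not refreshed; this is
    what keeps the algorithm exact for the true target."""
    x = 0.0
    est = noisy_density(x, rng, n_reps)
    chain = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, 1.0)            # symmetric proposal
        est_y = noisy_density(y, rng, n_reps)
        if rng.random() < est_y / est:         # accept on estimated ratio
            x, est = y, est_y
        chain.append(x)
    return chain

rng = random.Random(0)
chain = pseudo_marginal_mh(20000, rng)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Despite the noise, the chain targets N(0, 1) exactly; increasing `n_reps` averages replicas of the estimator, the monotonic improvement discussed in the abstract.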

**Francis BACH**

**Title**: Large-scale learning: beyond the stochastic gradient

**Abstract**: Many statistical learning problems can be formulated as convex optimization problems. An important practical difficulty is the large number of observations. In this setting, online algorithms that access the data only a few times are preferred. In this talk, I will develop a modern analysis of stochastic approximation algorithms, highlighting the natural adaptivity of these methods to the difficulty of the problem, as well as a new algorithm suited to multiple passes over the data that achieves linear convergence (this is joint work with Nicolas Le Roux, Eric Moulines and Mark Schmidt).
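The linearly convergent multi-pass algorithm referred to at the end is the stochastic average gradient (SAG) method. Below is a minimal one-dimensional least-squares sketch under our own toy assumptions (the data, the step size and the number of passes are illustrative, not the authors' settings):

```python
import random

def sag_least_squares(xs, ys, n_epochs=100, step=0.02, seed=0):
    """Stochastic Average Gradient (SAG) sketch for 1-D least squares.
    A memory keeps the last gradient seen for each observation; each
    update steps along the average of those memories, which restores
    linear convergence while touching one observation per iteration."""
    n = len(xs)
    rng = random.Random(seed)
    w = 0.0
    grad_mem = [0.0] * n          # last gradient computed for observation i
    grad_sum = 0.0                # running sum of the memories
    for _ in range(n_epochs * n):
        i = rng.randrange(n)
        g = 2.0 * xs[i] * (w * xs[i] - ys[i])   # d/dw of (w*x_i - y_i)^2
        grad_sum += g - grad_mem[i]
        grad_mem[i] = g
        w -= step * grad_sum / n
    return w

# exact least-squares solution: w* = sum(x*y) / sum(x^2) = 14.95 / 7.5
xs = [0.5, 1.0, 1.5, 2.0]
ys = [1.0, 2.1, 2.9, 4.0]
w_hat = sag_least_squares(xs, ys)
```

Unlike plain stochastic gradient descent, the finite-sum memory removes the noise floor, so `w_hat` converges to the exact minimizer rather than fluctuating around it.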

**Ciprian CRAINICEANU**

**Title**: Brain imaging and wearable computing: emerging problems with complex data structures

**Abstract**: The talk will provide an introduction to structural brain imaging and wearable computing. Brain imaging will center on high-resolution structural MRI (sMRI) and computed tomography (CT), with applications to multiple sclerosis and stroke. I will discuss problems related to the automatic segmentation of lesions and to quantifying the association between lesion location and size and health outcomes. Wearable computing will center on automatic movement recognition using tri-axial accelerometer data and on studying the association between activity intensity, aging and mental health disorders. I will explain how not everybody, but some people, move like you.

**Laurens DE HAAN**

**Title**: Statistics of heteroscedastic extremes

**Abstract**: We extend classical extreme value theory to non-identically distributed observations. When the distribution tails are proportional, much of extreme value statistics remains valid. The proportionality function for the tails can be estimated non-parametrically, along with the (common) extreme value index. Joint asymptotic normality of both estimators is shown; they are asymptotically independent. We develop tests for the proportionality function and for the validity of the model, and show through simulations the good performance of tests for tail homoscedasticity. The results are applied to stock market returns. A main tool is the weak convergence of a weighted sequential tail empirical process. This is joint work with John H.J. Einmahl and Chen Zhou.
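As background for the (common) extreme value index mentioned above: for heavy tails it is classically estimated by the Hill estimator. A small sketch on simulated Pareto data follows; the heteroscedastic extension and the proportionality-function estimator of the talk are not attempted here, and the sample size and choice of k are illustrative.

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of a positive extreme value index: the average of
    the log-spacings of the k largest observations over the (k+1)-th
    largest order statistic."""
    xs = sorted(sample, reverse=True)          # descending order statistics
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

# Pareto data with tail index alpha has extreme value index gamma = 1/alpha
rng = random.Random(42)
alpha = 2.0
sample = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(5000)]
gamma_hat = hill_estimator(sample, k=500)      # should be near 1/alpha = 0.5
```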

**Persi DIACONIS**

**Title**: Understanding and working with exponential random graphs

**Abstract**: Exponential models are a standard tool in statistics. Using such models with network data leads to strange new phenomena: phase transitions, instability and (sometimes) the ability to estimate n parameters from a sample of size one. Adapting the mathematical statistics to real-world estimation problems leads to novel tasks in non-convex optimization. All of this is joint work with Sourav Chatterjee.

**Susanne DITLEVSEN**

**Title**: Estimation in partially observed diffusion models, with applications to stochastic neuron models

**Abstract**: Parameter estimation in multi-dimensional diffusion models with only one coordinate observed is highly relevant in many biological applications, but a statistically difficult problem. In neuroscience, the membrane potential evolution in single neurons can be measured at high frequency, but biophysically realistic models have to include the unobserved dynamics of ion channels and/or synaptic input. These models are typically defined by multi-dimensional non-linear stochastic differential equations. The coordinates are coupled, i.e. the unobserved coordinates are non-autonomous; the model exhibits oscillations to mimic the spiking behavior, which means it is not of gradient type; and the measurement noise from intra-cellular recordings is typically negligible. Therefore the hidden Markov model framework is degenerate, and available methods break down. I will discuss estimation in this ill-posed situation.

**Susan HOLMES**

**Title**: Using the data, all the data

**Abstract**: The study of microbiome census data together with covarying tables, such as mass spectrometry and metagenomic data, poses statistical and computational challenges linked to the heterogeneity of the data structures and data sources. We will give some examples of solving these challenges using data transformations, visualizations and conjoint analyses. These techniques enable us to study resilience in the human microbiome following antibiotic intake, as well as the prediction of preterm birth from the dynamics of microbial communities. This talk contains joint work with Paul J. McMurdie, David Relman and Benjamin Callahan.

**Vladimir KOLTCHINSKII**

**Title**: Estimation problems for large low-rank matrices

**Abstract**: We will discuss the problem of estimating a large matrix based on its noisy linear measurements. The underlying assumption is that the target matrix has small rank, or can be well approximated by small-rank matrices. This problem has been extensively studied in recent years. Its important instances include matrix completion, where a random sample of entries of the target matrix is observed, and quantum state tomography, where the target matrix is the density matrix of a quantum system and has to be estimated from measurements of a finite number of observables. We will consider several approaches to such problems based on a penalized least squares method (and its modifications) with complexity penalties defined in terms of functionals that "promote" small-rank solutions (nuclear norm, von Neumann entropy). Oracle inequalities for the resulting estimators, with explicit dependence of the error terms on the rank and other parameters of the problem, will be discussed.
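A minimal illustration of how a nuclear-norm penalty "promotes" small rank: the proximal step of nuclear-norm penalized least squares soft-thresholds the singular values, zeroing the small ones. The symmetric 2x2 case below (where singular values are absolute eigenvalues) keeps the sketch stdlib-only; the matrix and penalty level are toy assumptions, not from the talk.

```python
import math

def nuclear_prox_sym2(a, b, c, lam):
    """Proximal operator of lam * (nuclear norm) for the symmetric 2x2
    matrix [[a, b], [b, c]]: shrink each eigenvalue toward zero by lam
    (for symmetric matrices the singular values are |eigenvalues|),
    then reassemble from the eigenvectors."""
    mean = (a + c) / 2.0
    rad = math.hypot((a - c) / 2.0, b)
    eigs = [mean + rad, mean - rad]
    theta = 0.5 * math.atan2(2.0 * b, a - c)   # rotation to the eigenbasis
    vecs = [(math.cos(theta), math.sin(theta)),
            (-math.sin(theta), math.cos(theta))]
    shrunk = [math.copysign(max(abs(e) - lam, 0.0), e) for e in eigs]
    out = [[0.0, 0.0], [0.0, 0.0]]
    for e, (vx, vy) in zip(shrunk, vecs):      # sum of rank-1 terms e*v*v^T
        out[0][0] += e * vx * vx
        out[0][1] += e * vx * vy
        out[1][0] += e * vy * vx
        out[1][1] += e * vy * vy
    return out

# a noisy rank-1 matrix: the small eigenvalue is shrunk to exactly zero,
# so the output is exactly rank 1 (zero determinant)
M_hat = nuclear_prox_sym2(2.1, 1.9, 2.0, lam=0.5)
det = M_hat[0][0] * M_hat[1][1] - M_hat[0][1] * M_hat[1][0]
```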

**Rik LOPUHAA**

**Title**: Robust estimation of multivariate location and scatter: the MCD estimators

**Abstract**: The topic of this presentation is the robust estimation of multivariate location and scatter parameters. The interest is in affine equivariant estimators with a high breakdown point and a bounded influence function. I will discuss the robustness properties and distributional behaviour of some proposals, and the effect of using a robust estimator in a re-weighting procedure to discard possible outliers. I will focus on the minimum covariance determinant (MCD) estimators, which have become one of the most popular robust alternatives to the ordinary sample mean and sample covariance matrix. Nowadays they are used to determine robust Mahalanobis distances in a re-weighting procedure, and as robust plug-ins in all sorts of multivariate statistical techniques that need a location and/or covariance estimate, such as principal component analysis, factor analysis, discriminant analysis and linear multivariate regression. To perform statistical inference in these situations, the exact asymptotic expansion of the MCD estimators and, more importantly, their limiting distribution and limiting variances are essential. I will present some recent results in this direction that have been obtained in a very general multivariate setting.
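To make the MCD idea concrete: in one dimension the MCD reduces to finding the size-h subset with the smallest variance, and that minimizing subset is always a contiguous block of the sorted data, which allows an exact toy implementation (the data below are made up for illustration; the multivariate case discussed in the talk requires iterative algorithms):

```python
def mcd_1d(xs, h):
    """Exact one-dimensional MCD: among all size-h subsets, the variance
    minimizer is a contiguous block of the sorted data, so scan the
    n - h + 1 windows and keep the one with the smallest variance."""
    xs = sorted(xs)
    best_var, best_loc = None, None
    for i in range(len(xs) - h + 1):
        window = xs[i:i + h]
        m = sum(window) / h
        v = sum((x - m) ** 2 for x in window) / h
        if best_var is None or v < best_var:
            best_var, best_loc = v, m
    return best_loc, best_var

# eight clean observations near 10 plus two gross outliers
data = [9.8, 10.1, 10.0, 9.9, 10.2, 50.0, 10.05, 9.95, 60.0, 10.1]
loc, scatter = mcd_1d(data, h=8)
```

The robust location stays near 10 even though the ordinary sample mean is pulled to about 19 by the two outliers.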

**Clémentine PRIEUR**

**Title**: Recent inference approaches for Sobol' sensitivity indices

**Abstract**: Many mathematical models use a large number of poorly known parameters as inputs. Quantifying the influence of each of these parameters is one of the aims of sensitivity analysis. Stochastic approaches in the case of independent inputs have been widely developed. We will focus on recent statistical inference results, deriving both asymptotic and finite-horizon properties. Some of the results will be illustrated on real test cases involving a high number of parameters.
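A standard Monte Carlo route to first-order Sobol' indices is the pick-freeze scheme, sketched below on a linear test model of our own choosing (the talk's inference results concern the statistical properties of such estimators, not this particular toy):

```python
import random

def sobol_first_order(f, d, i, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index
    of input i: correlate f(X) with f evaluated at a copy of X in which
    every coordinate except i is redrawn independently."""
    ys, ys_frozen = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        x_frozen = [rng.random() for _ in range(d)]
        x_frozen[i] = x[i]                  # keep ("freeze") coordinate i
        ys.append(f(x))
        ys_frozen.append(f(x_frozen))
    m = sum(ys) / n
    var = sum((y - m) ** 2 for y in ys) / n
    cov = sum(y * z for y, z in zip(ys, ys_frozen)) / n \
        - m * (sum(ys_frozen) / n)
    return cov / var                        # Var(E[Y|X_i]) / Var(Y)

# linear model Y = X0 + 2*X1 with i.i.d. uniform inputs:
# analytically S_i = a_i^2 / (a_0^2 + a_1^2), so S1 = 4/5 = 0.8
rng = random.Random(1)
s1 = sobol_first_order(lambda x: x[0] + 2.0 * x[1], d=2, i=1, n=20000, rng=rng)
```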

**Adrian RAFTERY**

**Title**: Probabilistic Population Projections for All Countries

**Abstract**: Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. I will describe a Bayesian statistical method for probabilistic population projections for all countries. These new methods have been used by the United Nations to produce their most recent population projections for all countries.

**Judith ROUSSEAU**

**Title**: On some recent advances on frequentist properties of Bayesian non- and semi-parametric approaches

**Abstract**: We shall first present a review of general results on asymptotic properties of Bayesian nonparametric approaches. Then we shall discuss some issues that arise in semi-parametric problems, or more generally in cases where the loss function is not the natural loss function (a notion which we shall make more precise in the talk). In particular, we will give a lower bound which we have obtained for posterior concentration rates and explain some of its implications. Finally, we will give some new results on more precise statements of the asymptotic properties of Bayesian semi-parametric approaches, through the existence of Bernstein-von Mises properties.

**Richard SAMWORTH**

**Title**: Log-concave density estimation: basic concepts and new results

**Abstract**: Log-concave density estimation is a central problem within the area of nonparametric inference under shape constraints. I will begin by describing the basic ideas and results and illustrating various applications. In the second half of the talk, I will describe very recent results on global rates of convergence. These come as quite a surprise, revealing in particular differences between this problem and that of estimating a density with two bounded derivatives (a problem to which it is often compared). This is joint work with Arlene K. H. Kim.

**Yves TILLE**

**Title**: New Developments in Spatial Sampling

**Abstract**: Spatial data are often autocorrelated. For the estimation of a mean or of a total, the selection of two neighboring units is inefficient, because the values of the variables of interest for these units are in general similar. The analysis of a simple autocorrelated model confirms the need to spread the sample. For spatial data, systematic sampling is considered a very efficient strategy. Nevertheless, when the units must be selected with unequal inclusion probabilities, or when the statistical units are irregularly distributed in space, systematic sampling cannot be implemented. Through several examples, we present a set of new methods that spread the units in space while satisfying given inclusion probabilities.
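For the fixed-order case, the classical systematic sampling with unequal inclusion probabilities that the abstract takes as its starting point can be sketched as follows (the spatially spread methods of the talk are more elaborate; the probabilities below are illustrative):

```python
import math
import random

def systematic_sample(probs, rng):
    """Systematic sampling with unequal inclusion probabilities: draw one
    uniform start u and walk the cumulated probabilities V_k; unit k
    enters the sample whenever the interval (V_{k-1} - u, V_k - u]
    crosses an integer.  Each unit is then selected with probability
    exactly probs[k]; probs must sum to an integer, the fixed sample
    size."""
    u = rng.random()
    sample = []
    cum = 0.0
    prev_floor = math.floor(-u)        # = -1 for u in (0, 1)
    for k, p in enumerate(probs):
        cum += p
        cur_floor = math.floor(cum - u)
        if cur_floor > prev_floor:
            sample.append(k)
        prev_floor = cur_floor
    return sample

# empirical check of the inclusion probabilities
rng = random.Random(7)
probs = [0.2, 0.8, 0.5, 0.5]           # sums to 2: samples of fixed size 2
counts = [0] * len(probs)
sizes_ok = True
for _ in range(2000):
    s = systematic_sample(probs, rng)
    sizes_ok = sizes_ok and (len(s) == 2)
    for k in s:
        counts[k] += 1
```

Every draw has exactly two units, and over repeated draws each unit appears with frequency close to its prescribed inclusion probability.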

**Mark VAN DER LAAN**

**Title**: Targeted Learning with Big Data

**Abstract**: Learning from data involves defining 1) the experiment that generated the data, 2) the (often low-dimensional) target parameter of the data-generating distribution that we want to learn, the so-called estimand, 3) the collection of possible data-generating distributions, the so-called statistical model, and 4) its possible parameterization in terms of underlying distributions, often involving non-testable assumptions, giving the so-called model. The statistical model represents our statistical knowledge and should be defined so that it contains the true data-generating distribution. The statistical estimation problem is then defined by the target parameter and the statistical model. Realistic estimation problems thus involve learning a target parameter in very large semiparametric models for often very high-dimensional data structures. Methods such as regularized maximum-likelihood estimation, though optimal for small semiparametric models, break down for such large semiparametric models, due to a wrong (non-targeted) bias-variance trade-off. In response to this we developed targeted maximum likelihood estimation and its natural generalization, targeted minimum loss based estimation (TMLE), as a template for the construction of semiparametric efficient estimators of pathwise differentiable target parameters. It involves defining an initial estimator of the relevant part of the data-generating distribution, allowing the integration of the state of the art in ensemble learning and fully utilizing the power of cross-validation, and a targeted bias-reduction step defined by a least favorable parametric submodel through the initial estimator, together with a loss function to estimate the amount of fluctuation. The estimator of the target parameter is then the plug-in estimator corresponding to this updated initial estimator.

Under appropriate conditions, TMLE results in semiparametric efficient (often robust w.r.t. various misspecifications) substitution estimators. We assigned the name Targeted Learning to the field concerned with data-adaptive estimation of target parameters while still providing statistical inference. In this talk we will review this template, and demonstrate some recent work involving applications of TMLE to nonparametrically estimate optimal individualized treatment rules, while providing statistical inference for their gain relative to a standard treatment, and to estimate causal effects of stochastic interventions on a network of individuals. We also highlight modifications of TMLE that naturally handle streaming data and/or very large data sets.

**Jean-Philippe VERT**

**Title**: Machine learning for personalized genomics

**Abstract**: The development of DNA sequencing technologies allows us to collect large amounts of molecular data about the genome of each individual, and opens the possibility of predicting drug response or evaluating the risk of various diseases from one's molecular identity. In this talk I will discuss some regularization-based approaches we have developed to estimate complex, high-dimensional predictive models from relatively few samples, in particular in cancer prognosis and toxicogenetics.