Royal Statistical Society

2010
5.00 p.m., 13 October
MARK GIROLAMI AND BEN CALDERHEAD (University College London and University of Glasgow) Riemann manifold Langevin and Hamiltonian Monte Carlo methods The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis-Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin algorithms. This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The methodology proposed exploits the Riemannian geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemann manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models and Bayesian estimation of dynamic systems described by non-linear differential equations. Substantial improvements in the time-normalized effective sample size are reported when compared with alternative sampling approaches. MATLAB code that is available from the authors allows replication of all the results reported.
Electronic version of the paper: [PDF].
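For intuition, here is a minimal sketch (ours, not the authors' MATLAB code) of the kind of proposal involved, with the metric frozen at a constant matrix $G$; in the paper the metric is position dependent (e.g. the expected Fisher information) and the proposal carries extra curvature terms, so the sketch below reduces to preconditioned MALA.

```python
import numpy as np

def preconditioned_mala_step(theta, log_post, grad_log_post, G, eps, rng):
    """One Metropolis adjusted Langevin step preconditioned by a fixed
    metric G (a simplification: the manifold methods of the paper let
    G depend on the current position)."""
    Ginv = np.linalg.inv(G)
    L = np.linalg.cholesky(Ginv)               # L @ L.T = G^{-1}

    def mean(x):                               # drift towards high density
        return x + 0.5 * eps**2 * Ginv @ grad_log_post(x)

    def log_q(x_to, x_from):                   # N(mean(x_from), eps^2 G^{-1})
        d = x_to - mean(x_from)
        return -0.5 * d @ (G @ d) / eps**2

    prop = mean(theta) + eps * L @ rng.standard_normal(theta.size)
    log_alpha = (log_post(prop) - log_post(theta)
                 + log_q(theta, prop) - log_q(prop, theta))
    if np.log(rng.uniform()) < log_alpha:
        return prop, True
    return theta, False
```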
2010
5.00 p.m., 12 May
MADELEINE CULE, RICHARD SAMWORTH AND MICHAEL STEWART (University of Cambridge, University of Cambridge and University of Sydney) Maximum likelihood estimation of a multidimensional log-concave density Let $X_1,\ldots,X_n$ be independent and identically distributed random vectors with a (Lebesgue) density $f$. We first prove that, with probability one, there exists a unique log-concave maximum likelihood estimator $\hat{f}_n$ of $f$. The use of this estimator is attractive because, unlike kernel density estimation, the method is fully automatic, with no smoothing parameters to choose. Although the existence proof is non-constructive, we are able to reformulate the issue of computing $\hat{f}_n$ in terms of a non-differentiable convex optimisation problem, and thus combine techniques of computational geometry with Shor's $r$-algorithm to produce a sequence that converges to $\hat{f}_n$. An R implementation of the algorithm is available in the package LogConcDEAD (Log-Concave Density Estimation in Arbitrary Dimensions). We demonstrate that the estimator has attractive theoretical properties both when the true density is log-concave and when this model is misspecified. For the moderate or large sample sizes in our simulations, $\hat{f}_n$ is shown to have smaller mean integrated squared error than kernel-based methods, even when we allow the use of a theoretical, optimal fixed bandwidth for the kernel estimator that would not be available in practice. We also present a real-data clustering example, which shows that our methodology can be used in conjunction with the expectation-maximisation (EM) algorithm to fit finite mixtures of log-concave densities.
Electronic version of the paper: [PDF].
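The convex reformulation can be stated compactly; the display below is our paraphrase of the setup described in the abstract (notation ours), so details should be checked against the paper. For heights $y=(y_1,\ldots,y_n)$ at the data points, let $\bar{h}_y$ be the least concave function on $C_n=\mathrm{conv}(X_1,\ldots,X_n)$ with $\bar{h}_y(X_i)\ge y_i$:

```latex
\hat{y} \;=\; \operatorname*{arg\,max}_{y \in \mathbb{R}^n} \sigma(y),
\qquad
\sigma(y) \;=\; \frac{1}{n}\sum_{i=1}^{n} y_i
  \;-\; \int_{C_n} \exp\{\bar{h}_y(x)\}\,\mathrm{d}x,
\qquad
\log \hat{f}_n \;=\; \bar{h}_{\hat{y}} \ \text{on } C_n .
```

The objective $\sigma$ is concave but not differentiable, which is why a subgradient scheme such as Shor's $r$-algorithm is needed.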
2010
5.00 p.m., 3 February
NICOLAI MEINSHAUSEN AND PETER BÜHLMANN (University of Oxford and ETH Zurich) Stability selection Estimation of structure, such as in variable selection, graphical modelling or cluster analysis, is notoriously difficult, especially for high dimensional data. We introduce stability selection, which is based on subsampling in combination with (high dimensional) selection algorithms. As such, the method is extremely general and has a very wide range of applicability. Stability selection provides finite sample control for some error rates of false discoveries and hence a transparent principle by which to choose a proper amount of regularisation for structure estimation. Variable selection and structure estimation improve markedly for a range of selection methods if stability selection is applied. We prove for the randomized lasso that stability selection will be variable selection consistent even if the necessary conditions for consistency of the original lasso method are violated. We demonstrate stability selection for variable selection and Gaussian graphical modelling, using real and simulated data.
Electronic version of the paper: [PDF].
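The core recipe is short enough to sketch in code. The function below (ours; the paper's randomised lasso additionally perturbs per-variable penalty weights, which is omitted here) computes lasso selection frequencies over random half-subsamples and keeps the variables whose frequency clears a threshold.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha, threshold=0.6, n_subsamples=100, seed=0):
    """Lasso selection frequencies over random half-subsamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        freq += coef != 0                  # count selections
    freq /= n_subsamples
    return np.flatnonzero(freq >= threshold), freq
```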
2009
5.00 p.m., 14 October 2009
CHRISTOPHE ANDRIEU, ARNAUD DOUCET AND ROMAN HOLENSTEIN (University of Bristol, University of British Columbia and University of British Columbia) Particle Markov chain Monte Carlo methods Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show how it is possible to build efficient high dimensional proposal distributions by using SMC methods. This allows us not only to improve over standard MCMC schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.
Electronic version of the paper: [PDF].
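As a flavour of the approach, here is a compact sketch (ours) in the spirit of one of the paper's algorithms, particle marginal Metropolis-Hastings: a bootstrap particle filter supplies a likelihood estimate that is plugged into an otherwise standard random-walk Metropolis-Hastings chain over the parameters. The `model` triple of functions (`init`, `propagate`, `loglik_obs`) is a hypothetical interface of ours.

```python
import numpy as np

def pf_loglik(y, theta, n_particles, init, propagate, loglik_obs, rng):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta)."""
    x = init(theta, n_particles, rng)          # initial particle cloud
    ll = 0.0
    for t in range(len(y)):
        x = propagate(x, theta, rng)           # simulate state transitions
        logw = loglik_obs(y[t], x, theta)      # observation log weights
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())             # running log-likelihood
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return ll

def pmmh(y, theta0, log_prior, model, n_iters, step, n_particles, rng):
    """Particle marginal Metropolis-Hastings with random-walk proposals."""
    theta = np.asarray(theta0, dtype=float)
    ll = pf_loglik(y, theta, n_particles, *model, rng)
    chain = []
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal(theta.size)
        ll_prop = pf_loglik(y, prop, n_particles, *model, rng)
        if np.log(rng.uniform()) < (ll_prop + log_prior(prop)
                                    - ll - log_prior(theta)):
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```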
2008
5.00 p.m., 17 December 2008
DAVID E. TYLER, FRANK CRITCHLEY, LUTZ DÜMBGEN AND HANNU OJA (Rutgers University, Open University, University of Berne and University of Tampere) Invariant coordinate selection We propose a general method for transforming multivariate data to affine invariant coordinates. By plotting the data with respect to these invariant coordinate systems, various data structures can be revealed. Under certain independent components models, the invariant coordinates correspond to the independent components. For mixtures of elliptical distributions, the invariant coordinates can be used to identify Fisher's linear discriminant subspace, even though the class identifications are unknown. Some illustrative examples are given.
Electronic version of the paper: [PDF].
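The construction reduces to a generalised eigendecomposition of one scatter matrix relative to another. A minimal sketch (ours), using the covariance matrix and a fourth-moment scatter as the pair; the method itself admits any two scatter functionals.

```python
import numpy as np
from scipy.linalg import eigh

def invariant_coordinates(X):
    """Invariant coordinates from two scatters: the covariance and a
    fourth-moment scatter (one common choice among many)."""
    Xc = X - X.mean(axis=0)
    n, p = Xc.shape
    S1 = Xc.T @ Xc / n
    # Fourth-moment scatter: reweight observations by their squared
    # Mahalanobis distance under S1 (normalised so S2 = S1 at normality).
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S1), Xc)
    S2 = (Xc * d2[:, None]).T @ Xc / (n * (p + 2))
    # Generalised eigenproblem S2 b = lambda S1 b, largest roots first.
    vals, vecs = eigh(S2, S1)
    order = np.argsort(vals)[::-1]
    B = vecs[:, order].T
    return Xc @ B.T, vals[order]               # coordinates, criterion values
```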
2008
5.00 p.m., 15 October 2008
HÅVARD RUE, SARA MARTINO AND NICOLAS CHOPIN (Norwegian University of Science and Technology, Trondheim, and CREST-LS and ENSAE, Paris) Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations For latent Gaussian models, an integrated nested Laplace approximation and its simplified version allow very accurate approximations to the posterior marginals to be computed directly. The main benefit of these approximations is computational: where MCMC algorithms need hours or days to run, our approximations provide more precise estimates in minutes or seconds.
Electronic version of the paper: [PDF].
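The sketch below is not the INLA algorithm itself, only the classical Laplace approximation that INLA nests and refines for latent Gaussian models: replace an integral of $\exp(g)$ by a Gaussian integral around the mode of $g$.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_integral(g, x0):
    """Laplace approximation to the log of the integral of exp(g(x)) dx,
    using the BFGS inverse-Hessian estimate at the mode."""
    res = minimize(lambda x: -g(x), x0, method='BFGS')
    x_star = res.x
    H_inv = res.hess_inv           # estimate of (-Hessian of g at mode)^{-1}
    k = x_star.size
    _, logdet = np.linalg.slogdet(H_inv)
    return g(x_star) + 0.5 * (k * np.log(2 * np.pi) + logdet)
```

INLA applies this idea in a nested fashion, once for the hyperparameter posterior and again for each latent marginal, with further corrections.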
2008
5.00 p.m., 23 April 2008
JIANQING FAN AND JINCHI LV (Princeton University and University of Southern California) Sure independence screening for ultra-high dimensional feature space This paper introduces the concept of sure screening for high-dimensional feature selection and proposes a new screening method, sure independence screening (SIS). It shows that a two-stage procedure that starts with a sure screening step can improve both the accuracy and the speed of model selection. SIS combined with well-developed model selection techniques provides a powerful tool for high dimensional variable selection.
Electronic version of the paper: [PDF].
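A minimal sketch (ours) of the screening step: rank features by absolute marginal correlation with the response and keep the top $d$ of them (the paper suggests $d$ of order $n/\log n$), before running a refined selector such as the lasso or SCAD on the survivors.

```python
import numpy as np

def sis(X, y, d=None):
    """Sure independence screening: indices of the d features with the
    largest absolute marginal correlation with y."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))                 # suggested order of d
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    omega = np.abs(Xs.T @ ys) / n              # marginal correlations
    return np.argsort(omega)[::-1][:d]
```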
2008
5.00 p.m., 6 February 2008
PETER McCULLAGH (UNIVERSITY OF CHICAGO) Sampling bias and logistic models This paper considers various forms of random-effects models for binary data. It concludes that parameter attenuation is a statistical illusion attributable to sampling bias and ambiguous notation. A new random-effects model makes it clear that the conditional distribution of $Y_i$ given $X_i=x$ is not the same as the marginal distribution of $Y$-values in stratum $x$. Implications for likelihood calculations and estimating equations are described.
Electronic version of the paper: [PDF].
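The attenuation at issue is easy to reproduce numerically. The simulation below (ours, illustrating the standard marginal-versus-conditional effect rather than the paper's model) fits an ordinary logistic regression to data generated with a normal random intercept; the recovered slope is roughly $\beta/\sqrt{1+0.346\sigma^2}$ rather than the conditional $\beta$.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, beta, sigma = 200_000, 1.0, 2.0
x = rng.standard_normal(n)
b = sigma * rng.standard_normal(n)             # unit-level random effect
p = 1.0 / (1.0 + np.exp(-(b + beta * x)))
y = (rng.uniform(size=n) < p).astype(int)

# Marginal (ordinary) logistic fit; large C makes it essentially unpenalised.
fit = LogisticRegression(C=1e6).fit(x[:, None], y)
print(fit.coef_[0, 0])   # about 0.65 here, well below the conditional beta = 1
```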
1996/1997 |
1997/1998 |
1998/1999 |
1999/2000 |
2000/2001 |
2001/2002 |
2002/2003 |
2003/2004 |
2006/2007 |
Prof. Mike Titterington (Chairman), Department of Statistics, Room 222, Mathematics Building, University of Glasgow, G12 8QW (Tel: +44 141 330 5022)
Dr Richard Samworth (Honorary Secretary), Statistical Laboratory, Wilberforce Road, Cambridge, CB3 0WB (Tel: +44 1223 337950)
2010 committee: Mike Titterington (Chairman), Richard Samworth (Secretary), Martin Owen (Executive Editor), V Didelez, P Fearnhead, N Friel, P Fryzlewicz, W Gilks, C Jones, N Meinshausen, T Scheike, S Sisson, A Skrondal, D Wilkinson, S Wood, P Farrington (Council representative), V Isham (Theme director for meetings and conferences), G Casella (JRSSB editor) and C Robert (JRSSB editor).