Title: Convergence time of some non-reversible MCMC methods
Authors: Florian Maire - Université de Montréal (Canada) [presenting]
Abstract: It is commonly accepted that non-reversible Markov chain Monte Carlo (MCMC) algorithms usually yield more accurate MCMC estimators than their reversible counterparts. Some recent results have established an ordering showing that a large class of non-reversible algorithms tend to have a smaller asymptotic variance than their reversible equivalents. In particular, the Guided Walk and some lifting techniques are shown to have a smaller asymptotic variance than the Metropolis algorithm. We show that, in addition to their variance reduction effect, some non-reversible MCMC algorithms also have the undesirable property of slowing down the convergence of the Markov chain towards its stationary distribution. This point, which has been overlooked in the literature, has obvious practical implications. We illustrate this small asymptotic variance/slow convergence phenomenon for different non-reversible versions of the Metropolis algorithm on several discrete state space examples. Our findings echo an important discussion related to the design of the refreshment rate in several non-reversible algorithms, including the bouncy particle sampler: Markov chains that are too irreversible see their rate of convergence slow down, and a trade-off is required. We present simple adjustments of some non-reversible MCMC algorithms to mitigate this risk.
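To make the lifting idea concrete, the following is a minimal sketch, not the authors' construction, of a lifted (non-reversible) Metropolis chain on a finite discrete state space: a direction variable is carried alongside the state, proposals always move in the current direction, and the direction flips on rejection. The `refresh` parameter is a hypothetical stand-in for the refreshment rate discussed above; setting it too low makes the chain "too irreversible", setting it to 1 recovers behaviour close to a reversible random-walk Metropolis.

```python
import random


def lifted_metropolis(pi, n_steps, refresh=0.1, seed=0):
    """Sketch of a lifted Metropolis chain targeting weights pi on {0,...,K-1}.

    pi      : unnormalized target weights (positive floats).
    refresh : hypothetical refreshment rate; with this probability the
              direction variable d is re-drawn uniformly at each step.
    """
    rng = random.Random(seed)
    K = len(pi)
    x, d = 0, 1                # state and direction (the "lifting" variable)
    samples = []
    for _ in range(n_steps):
        if rng.random() < refresh:
            d = rng.choice([-1, 1])    # occasional direction refreshment
        y = x + d                       # deterministic proposal in direction d
        if 0 <= y < K and rng.random() < min(1.0, pi[y] / pi[x]):
            x = y                       # accept: keep moving in direction d
        else:
            d = -d                      # reject (or hit boundary): flip d
        samples.append(x)
    return samples
```

Varying `refresh` in this toy sampler is one way to observe the trade-off the abstract describes: persistence in one direction reduces backtracking (and hence asymptotic variance of ergodic averages) but can delay the chain's approach to stationarity.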