Sampling from a high-dimensional distribution is a fundamental task in statistics, engineering, and the sciences. A canonical approach is the Langevin Algorithm, i.e., the Markov chain for the discretized Langevin Diffusion; it is the sampling analog of Gradient Descent. Despite being studied for several decades in multiple communities, tight mixing bounds for this algorithm remain unresolved even in the seemingly simple setting of log-concave distributions over a bounded domain. This paper completely characterizes the mixing time of the Langevin Algorithm to its stationary distribution in this setting (and others). This mixing result can be combined with any bound on the discretization bias in order to sample from the stationary distribution of the continuous Langevin Diffusion. In this way, we disentangle the study of the mixing and the bias of the Langevin Algorithm.
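The Langevin Algorithm discussed above can be sketched as follows; this is a minimal illustration of the standard update x' = Π_K(x − η∇f(x) + √(2η)ξ) with ξ ~ N(0, I), where the quadratic potential, step size, and unit-ball constraint set are illustrative choices, not ones prescribed by the paper.

```python
import numpy as np

def langevin_step(x, grad_f, eta, project, rng):
    """One step of the (projected) Langevin Algorithm:
    x' = project( x - eta * grad_f(x) + sqrt(2 * eta) * xi ),  xi ~ N(0, I)."""
    noise = rng.standard_normal(x.shape)
    return project(x - eta * grad_f(x) + np.sqrt(2 * eta) * noise)

# Illustrative target pi(x) ∝ exp(-f(x)) with f(x) = ||x||^2 / 2,
# constrained to the unit ball (a bounded convex domain).
grad_f = lambda x: x
project = lambda x: x / max(1.0, np.linalg.norm(x))  # Euclidean projection onto unit ball
rng = np.random.default_rng(0)

x = np.zeros(3)
for _ in range(10_000):
    x = langevin_step(x, grad_f, eta=1e-2, project=project, rng=rng)
```

With the gradient term removed, the iteration reduces to a projected random walk; with the noise removed, it reduces to projected Gradient Descent, which is the analogy the abstract draws.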
Our key insight is to introduce a technique from the differential privacy literature to the sampling literature. This technique, called Privacy Amplification by Iteration, uses as a potential a variant of the Rényi divergence that is made geometrically aware via Optimal Transport smoothing. This yields a short, simple proof of optimal mixing bounds and has several additional appealing properties. First, our approach removes all unnecessary assumptions required by other sampling analyses. Second, our approach unifies many settings: it extends unchanged if the Langevin Algorithm uses projections, stochastic mini-batch gradients, or strongly convex potentials (in which case our mixing time improves exponentially). Third, our approach exploits convexity only through the contractivity of a gradient step, reminiscent of how convexity is used in textbook proofs of Gradient Descent. In this way, we offer a new approach towards further unifying the analyses of optimization and sampling algorithms.
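The contractivity property mentioned above can be checked numerically: for a convex, L-smooth potential f and step size η ≤ 2/L, the gradient-step map φ(x) = x − η∇f(x) is 1-Lipschitz. The sketch below verifies this for an illustrative quadratic potential (the specific matrix and step size are assumptions for the demo, not choices made in the paper).

```python
import numpy as np

# Convex quadratic potential f(x) = x^T A x / 2 with A positive semidefinite.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B.T @ B                       # PSD Hessian, so f is convex
L = np.linalg.eigvalsh(A).max()   # smoothness constant of f
eta = 1.0 / L                     # any eta <= 2/L gives contractivity

# Gradient-step map phi(x) = x - eta * grad f(x); here grad f(x) = A x,
# so phi is linear with matrix I - eta * A, whose eigenvalues lie in [0, 1].
phi = lambda x: x - eta * (A @ x)

x, y = rng.standard_normal(4), rng.standard_normal(4)
contraction = np.linalg.norm(phi(x) - phi(y)) / np.linalg.norm(x - y)
```

Since φ(x) − φ(y) = (I − ηA)(x − y) and the operator norm of I − ηA is at most 1, the ratio `contraction` never exceeds 1; this is the only way convexity enters the mixing analysis described above.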