LSVI stands for Least Squares Variational Inference. It is a package for minimizing the KL divergence between an (unnormalized) target distribution and a distribution within a chosen exponential family. Such procedures are typically used in a Bayesian setting to approximate a posterior distribution by an easy-to-sample distribution, from which posterior-based quantities can then be estimated.
The first-order optimality condition of this minimization can be recast as a least-squares (linear regression) problem, which we iterate until convergence. All the expectations involved are approximated by Monte Carlo sampling.
When the variational family is the set of full-rank Gaussian distributions, each iteration requires the inversion of a large matrix: the sufficient statistics of a full-rank Gaussian in dimension d have dimension d(d+3)/2, so the size of the regression problem grows quadratically with d.
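To illustrate the idea, here is a minimal NumPy sketch of the fixed-point iteration for a one-dimensional Gaussian family (sufficient statistics x and x²): sample from the current approximation, regress the log-target on the sufficient statistics by least squares, and take the regression coefficients as the new natural parameters. This is a toy illustration under simplifying assumptions, not the package's API; all function names are hypothetical.

```python
import numpy as np

def log_target(x):
    # Unnormalized log-density of the target. Here N(1, 1), so the Gaussian
    # family contains the exact answer and the iteration converges at once
    # (illustrative choice; LSVI also applies to non-Gaussian targets).
    return -0.5 * (x - 1.0) ** 2

def lsvi_gaussian_1d(log_target, n_iter=20, n_samples=2000, seed=0):
    """Toy LSVI fixed-point iteration for a 1-D Gaussian family.

    Natural parameters lam = (lam1, lam2), q(x) ∝ exp(lam1*x + lam2*x**2),
    so mean = -lam1 / (2*lam2) and variance = -1 / (2*lam2).
    """
    rng = np.random.default_rng(seed)
    lam = np.array([0.0, -0.5])                      # start from N(0, 1)
    for _ in range(n_iter):
        var = -1.0 / (2.0 * lam[1])
        mu = lam[0] * var
        # Monte Carlo sample from the current approximation q
        x = rng.normal(mu, np.sqrt(var), size=n_samples)
        # Least-squares regression of log-target on (1, x, x^2)
        design = np.column_stack([np.ones_like(x), x, x ** 2])
        coef, *_ = np.linalg.lstsq(design, log_target(x), rcond=None)
        if coef[2] >= 0:                              # guard: stay inside the family
            break
        lam = coef[1:]                                # drop the intercept
    var = -1.0 / (2.0 * lam[1])
    return lam[0] * var, var                          # (mean, variance) of q

mu, var = lsvi_gaussian_1d(log_target)
```

Because the chosen log-target is exactly quadratic, the regression recovers the target's natural parameters in a single iteration; for genuinely non-Gaussian targets, the iterates converge to the best exponential-family fit under the regression criterion.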
See LSVI/experiments
for some examples, including the variational approximation of the posterior of a logistic
regression on a real dataset, variable selection in a linear regression using a
product-of-Bernoullis variational family, and a stochastic movement model with an intractable likelihood.
Yvann Le Fay, Nicolas Chopin, Simon Barthelmé. Least squares variational inference. NeurIPS 2025. arXiv:2502.18475