Consider a continuous time Markov process \(X_t\) on \(\mathbb{R}^D\) that is ergodic with respect to the probability distribution \(\pi(dx)\). A Langevin diffusion is a typical example. Call \(\mathcal{L}\) the generator of this process so that for a test function \(\varphi: \mathbb{R}^D \to \mathbb{R}\) we have

\[ \varphi(X_t) = \varphi(X_0) + \int_{s=0}^t \mathcal{L}\varphi(X_s) \, ds + \textrm{($M_t \equiv$ martingale)}. \tag{1}\]

Now, assume further that \(\mathop{\mathrm{\mathbb{E}}}_{\pi}[\varphi(X)] = 0\) and that a Central Limit Theorem holds,

\[ \frac{1}{\sqrt{T}} \int_{s=0}^T \varphi(X_s) \, ds \; \to \; \mathcal{N}(0, \sigma^2). \tag{2}\]

How can one estimate the asymptotic variance \(\sigma^2\)?

### Approach I: Integrated autocovariance

One can directly try to compute the second moment of Equation 2 and obtain that

\[ \sigma^2 \; = \; \lim_{T \to \infty} \; \frac{1}{T} \, \iint_{0 \leq s,t \leq T} \mathop{\mathrm{\mathbb{E}}}[\varphi(X_s) \varphi(X_t)] \, ds \, dt \]

Since \(\mathop{\mathrm{\mathbb{E}}}[\varphi(X_s) \varphi(X_t)]\) falls quickly to zero as \(|s-t| \to \infty\), defining the auto-covariance at lag \(r > 0\) as

\[ C(r) = \mathop{\mathrm{\mathbb{E}}}[\varphi(X_t) \varphi(X_{t+r})], \]

one obtains an expression of the asymptotic variance as the integrated autocovariance function,

\[ \sigma^2 \; = \; 2 \, \int_{r=0}^\infty C(r) \, dr. \tag{3}\]

In the MCMC literature, this relation is often expressed as

\[ \sigma^2 \; = \; \mathop{\mathrm{Var}}_{\pi}[\varphi] \, \times \, \textrm{(IACT)} \]

where the **integrated autocorrelation time** (IACT) is defined as

\[ \textrm{(IACT)} = 2 \, \int_{r=0}^\infty \rho(r) \, dr \]

where the autocorrelation at lag \(r \geq 0\) is defined as \(\rho(r) \equiv \mathop{\mathrm{Corr}}[\varphi(X_t), \varphi(X_{t+r})]\). The slower the autocorrelation function \(\rho(r)\) falls to zero as \(r \to \infty\), the larger the asymptotic variance \(\sigma^2\). Although Equation 3 is very intuitive, it can be difficult to estimate the autocorrelation function accurately, especially in its tail.
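To make this concrete, here is a minimal numerical sketch (an illustration, not part of the derivation): it simulates a long trajectory of the standard OU process \(dX = -X \, dt + \sqrt{2} \, dW\) with \(\varphi(x) = x\), for which \(C(r) = e^{-r}\) and hence \(\sigma^2 = 2\), and estimates Equation 3 by integrating the empirical autocovariance up to a cutoff lag.

```python
import numpy as np

# Estimate sigma^2 via the integrated autocovariance (Equation 3) for the
# OU process dX = -X dt + sqrt(2) dW with phi(x) = x.  At stationarity
# C(r) = exp(-r), so the theoretical answer is sigma^2 = 2.
rng = np.random.default_rng(0)
dt, n = 0.01, 1_000_000                 # time step and trajectory length
a = np.exp(-dt)                         # exact one-step autoregression factor
noise = np.sqrt(1 - a * a) * rng.standard_normal(n)
x = np.empty(n)
x[0] = rng.standard_normal()            # start at stationarity
for k in range(1, n):
    x[k] = a * x[k - 1] + noise[k]

lags = np.arange(0, 1001, 10)           # lags r = 0, 0.1, ..., 10
C = np.array([np.mean(x[: n - k] * x[k:]) for k in lags])
h = 10 * dt                             # lag spacing
sigma2 = 2 * (C.sum() - 0.5 * (C[0] + C[-1])) * h   # trapezoidal rule
print(sigma2)                           # theoretical value: 2
```

Both the cutoff lag and the Monte Carlo noise in the tail of \(C(r)\) bias such an estimate, which is precisely why this direct approach can be delicate.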

### Approach II: Poisson Equation

Under relatively general and mild conditions, since the expectation of \(\varphi\) under the invariant distribution \(\pi\) is zero and the Markov process is ergodic with respect to \(\pi\), there exists a function \(\Phi: \mathbb{R}^D \to \mathbb{R}\) such that

\[ \mathcal{L}\Phi \; = \; \varphi. \tag{4}\]

Equation 4 is called a Poisson Equation since \(\mathcal{L}\) is often a Laplacian-like operator (e.g., for diffusion-type processes). Equation 1, applied to the test function \(\Phi\), gives that

\[ \frac{1}{\sqrt{T}} \int_{s=0}^T \varphi(X_s) \, ds \; = \; -\frac{M_T}{\sqrt{T}} + {\left\{ \frac{\Phi(X_T) - \Phi(X_0)}{\sqrt{T}} \right\}} \]

where \(M_T\) is the martingale and the boundary term \([\Phi(X_T) - \Phi(X_0)]/\sqrt{T}\) typically vanishes as \(T \to \infty\) and can be neglected. To compute the asymptotic variance, it therefore suffices to estimate \(\mathop{\mathrm{\mathbb{E}}}(M_T^2)\), which by the martingale property equals \(\int_{t=0}^T \mathop{\mathrm{\mathbb{E}}}[(dM_t)^2]\). Since \(M_t = \Phi(X_t) - \Phi(X_0) - \int_{s=0}^t \mathcal{L}\Phi(X_s) \, ds\), a bit of algebra gives that

\[ \frac{1}{\varepsilon} \, \mathop{\mathrm{\mathbb{E}}} {\left[ (M_{t+\varepsilon} - M_t)^2 \right]} \approx 2 \mathop{\mathrm{\mathbb{E}}} {\left[ (\Gamma \Phi)(X_t) \right]} \]

where the so-called **carré du champ** \((\Gamma \Phi)\) is defined as

\[ 2 \, (\Gamma \Phi)(X_t) \; = \; {\left( \mathcal{L}(\Phi^2) - 2 \Phi \mathcal{L}\Phi \right)} (X_t) \; = \; \lim_{\varepsilon\to 0} \; \frac{1}{\varepsilon} \mathop{\mathrm{Var}}(\Phi(X_{t+\varepsilon}) \, | \, X_t). \]
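For concreteness, here is the standard computation (not spelled out above) for an overdamped Langevin diffusion \(dX = -\nabla U(X) \, dt + \sqrt{2} \, dW\) with generator \(\mathcal{L} = -\nabla U \cdot \nabla + \Delta\): the product rule gives \(\mathcal{L}(\Phi^2) = 2 \, \Phi \, \mathcal{L}\Phi + 2 \, \|\nabla \Phi\|^2\), so that in this case

\[ (\Gamma \Phi)(x) \; = \; \|\nabla \Phi(x)\|^2. \]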

This shows that the asymptotic variance satisfies

\[ \sigma^2 \; = \; \lim_{T \to \infty} \frac{2}{T} \int_{s=0}^T \Gamma \Phi(X_s) \, ds \; = \; 2 \, \int_{x \in \mathbb{R}^D} \Gamma \Phi(x) \, \pi(dx). \]

Finally, since \(\pi\) is invariant so that \(\int (\mathcal{L}\Phi^2)(x) \, \pi(dx) = 0\), this can equivalently be written as

\[ \sigma^2 \; = \; -2 \, \int_{x \in \mathbb{R}^D} \Phi(x) \, \mathcal{L}\Phi(x) \, \pi(dx) \; = \; 2 \, \mathcal{D}(\Phi) \tag{5}\]

where \(\mathcal{D}(\Phi)\) is the so-called Dirichlet form. In summary, we have just shown that the asymptotic variance of the additive functional \(T^{-1/2} \, \int_0^T \varphi(X_s) \, ds\) is given by two times the Dirichlet form \(\mathcal{D}(\Phi)\), where \(\Phi\) is the solution to the Poisson equation \(\mathcal{L}\Phi = \varphi\). Note that this implies that the generator \(\mathcal{L}\) is a negative operator in the sense that for a test function \(\Phi\) we have that

\[ \left< \Phi, \mathcal{L}\Phi \right>_{\pi} \; \leq \; 0 \]

where we have used the inner-product notation \(\left< f,g \right>_{\pi} = \int f(x) g(x) \pi(dx)\).
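As a sanity check, for an overdamped Langevin diffusion with generator \(\mathcal{L} = -\nabla U \cdot \nabla + \Delta\), ergodic with respect to \(\pi(dx) \propto e^{-U(x)} \, dx\), integration by parts gives

\[ \left< \Phi, \mathcal{L}\Phi \right>_{\pi} \; = \; \int \Phi \, {\left( -\nabla U \cdot \nabla \Phi + \Delta \Phi \right)} \, \pi(dx) \; = \; -\int \|\nabla \Phi(x)\|^2 \, \pi(dx) \; \leq \; 0, \]

so that in this case the Dirichlet form reads \(\mathcal{D}(\Phi) = \int \|\nabla \Phi(x)\|^2 \, \pi(dx)\).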

### Poisson equation: Integral representation

It is often useful to think of the generator \(\mathcal{L}\) as an infinite-dimensional analogue of a standard negative-definite symmetric matrix \(M \in \mathbb{R}^{n \times n}\). Since \(M^{-1} = -\int_{t=0}^{\infty} \exp(tM) \, dt\), as can be seen by diagonalizing \(M\), one can expect the following identity to hold,

\[ \mathcal{L}^{-1} \; = \; -\int_{t=0}^{\infty} e^{t \, \mathcal{L}} \, dt. \]

That is just another way of writing that the solution \(\Phi\) to the Poisson equation \(\mathcal{L}\Phi = \varphi\), together with the **centering condition** \(\mathop{\mathrm{\mathbb{E}}}_{\pi}[\Phi(X)]=0\) that picks one solution out of the many possible ones (which differ from each other by an additive constant), can be expressed as

\[ \Phi(x) \, = \, -\int_{t=0}^{\infty} \mathop{\mathrm{\mathbb{E}}}[\varphi(X_t)|X_0=x] \, dt. \tag{6}\]

Equation 6 is easily proved with Equation 1 by writing

\[ \Phi(x)-\Phi(X_T) = -\int_{t=0}^T \varphi(X_t) \, dt + \textrm{(martingale)} \]

and by taking expectations on both sides, conditionally on \(X_0 = x\), and noticing that \(\mathop{\mathrm{\mathbb{E}}}[\Phi(X_T)] \to 0\) as \(T \to \infty\) thanks to ergodicity and the assumed centering condition \(\mathop{\mathrm{\mathbb{E}}}_{\pi}[\Phi(X)]=0\). Note that this remark allows one to give another derivation of Equation 5, starting from the integrated autocovariance formulation of Equation 3. Indeed, note that

\[ \begin{align} \sigma^2 &= 2 \, \int_{r=0}^{\infty} C(r) \, dr\\ &= 2 \, \int_{x \in \mathbb{R}^D} \varphi(x) {\left\{ \int_{r=0}^{\infty} \mathop{\mathrm{\mathbb{E}}}[\varphi(X_r) | X_0=x] \, dr \right\}} \, \pi(dx)\\ &= -2 \, \int_{x \in \mathbb{R}^D} \varphi(x) \Phi(x) \, \pi(dx) = -2 \left< \Phi, \mathcal{L}\Phi \right>_{\pi}. \end{align} \]
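As a quick Monte Carlo sanity check of Equation 6 (a sketch, using the OU process of the next section with \(\varepsilon = 1\), for which \(\varphi(x) = x\) gives the centered solution \(\Phi(x) = -x\)): averaging \(\varphi(X_t)\) over many trajectories started from a fixed \(X_0 = x\) and integrating over \(t\) should recover \(-x\).

```python
import numpy as np

# Monte Carlo check of Equation 6 for the OU process dX = -X dt + sqrt(2) dW
# (the eps = 1 case of the example below), with phi(x) = x.  The centered
# solution of the Poisson equation is Phi(x) = -x, which we recover by
# integrating E[phi(X_t) | X_0 = x0] over t.
rng = np.random.default_rng(1)
x0, dt, t_max, n_paths = 1.5, 0.01, 10.0, 20_000
steps = int(t_max / dt)
a = np.exp(-dt)                              # exact one-step mean decay
x = np.full(n_paths, x0)
phi_mean = np.empty(steps + 1)
phi_mean[0] = x0
for k in range(1, steps + 1):
    x = a * x + np.sqrt(1 - a * a) * rng.standard_normal(n_paths)
    phi_mean[k] = x.mean()                   # estimates E[phi(X_t) | X_0 = x0]
Phi_x0 = -np.sum(phi_mean) * dt              # Equation 6, truncated at t_max
print(Phi_x0)                                # exact value: -x0 = -1.5
```

Truncating the time integral at a finite horizon is harmless here because \(\mathop{\mathrm{\mathbb{E}}}[X_t \,|\, X_0 = x] = x \, e^{-t}\) decays quickly.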

### Example: OU process

Consider an OU process that is ergodic with respect to the standard Gaussian density \(\pi(x) = e^{-x^2/2} / \sqrt{2\pi}\),

\[ dX \; = \; -\varepsilon^{-1}X \, dt + \sqrt{2 \, \varepsilon^{-1}} \, dW. \]

That’s a standard OU process accelerated by a factor \(\varepsilon^{-1} > 0\). Its generator reads

\[ \mathcal{L}\, = \, \varepsilon^{-1} [-x \, \partial_x + \partial_{xx}]. \]

The function \(\varphi(x)=x\) is such that \(\pi(\varphi)=0\) and a solution to the Poisson equation \(\mathcal{L}\Phi = \varphi\) is \(\Phi(x) = -\varepsilon\, x\). This shows that the asymptotic variance is

\[ \sigma^2 \; = \; 2 \, \int_{x \in \mathbb{R}} \varepsilon x^2 \, \pi(dx) \; = \; 2 \varepsilon. \]

As expected, accelerating the OU process by a factor \(\varepsilon^{-1}\) reduces the asymptotic variance by a factor \(\varepsilon\).
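One can check this prediction by simulation; the following sketch (with illustrative parameter choices) computes the empirical variance of \(T^{-1/2} \int_0^T X_s \, ds\) over many independent stationary runs and compares it to \(2\varepsilon\).

```python
import numpy as np

# Empirical check that sigma^2 = 2 * eps for the accelerated OU process
# dX = -X/eps dt + sqrt(2/eps) dW: compute the variance of
# T^{-1/2} * int_0^T X_s ds over many independent stationary runs.
rng = np.random.default_rng(2)
eps, dt, T, n_runs = 0.25, 0.005, 50.0, 4_000
steps = int(T / dt)
a = np.exp(-dt / eps)                   # exact one-step mean decay
x = rng.standard_normal(n_runs)         # stationary initial conditions
integral = np.zeros(n_runs)
for _ in range(steps):
    integral += x * dt                  # left Riemann sum of int_0^T X_s ds
    x = a * x + np.sqrt(1 - a * a) * rng.standard_normal(n_runs)
sigma2_hat = np.var(integral) / T       # variance of T^{-1/2} * integral
print(sigma2_hat)                       # theory predicts 2 * eps = 0.5
```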