STATS 413

Convergence of random variables

In this post, we prove a few technical results on convergence of random variables:

  1. If \((X_n)_{n=1}^\infty\pto X\), then \((X_n)_{n=1}^\infty\dto X\); i.e. convergence in probability implies convergence in distribution;
  2. If \((X_n)_{n=1}^\infty\dto x\) and \(x\) is a constant (\((X_n)_{n=1}^\infty\) converges in distribution to a constant), then \((X_n)_{n=1}^\infty\pto x\); i.e. convergence in distribution to a constant implies convergence in probability.

To keep things simple, we assume the random variables \((X_n)_{n=1}^\infty\), \(X\) and the constant \(x\) are scalars in the proofs; the results remain valid for (random) vectors.

Convergence in probability implies convergence in distribution. Recall the definition of convergence in probability: \((X_n)_{n=1}^\infty\pto X\) iff

\[\Pr(|X_n - X| > \eps) \to 0\text{ for any }\eps > 0.\]
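
As a quick numerical sanity check of this definition (a minimal sketch with an illustrative sequence, not part of the proofs below, and assuming NumPy is available), take \(X \sim N(0,1)\) and \(X_n = X + Z_n/n\) with \(Z_n \sim N(0,1)\) independent of \(X\):

```python
# Minimal Monte Carlo check of the definition (illustrative choices throughout):
# Pr(|X_n - X| > eps) = Pr(|Z_n| > n * eps), which shrinks toward 0 as n grows.
import numpy as np

rng = np.random.default_rng(0)
m = 200_000          # Monte Carlo sample size
eps = 0.1

X = rng.standard_normal(m)
for n in (1, 5, 25, 125):
    Xn = X + rng.standard_normal(m) / n
    print(n, np.mean(np.abs(Xn - X) > eps))  # estimate of Pr(|X_n - X| > eps)
# the estimates shrink toward 0, i.e. this X_n converges in probability to X
```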

Let \(F_n\) and \(F\) be the CDFs of \(X_n\) and \(X\) respectively, let \(t\) be a continuity point of \(F\) (i.e. \(F\) is continuous at \(t\)), and fix \(\eps > 0\). We have

\[\begin{aligned} F_n(t) &= \Pr(X_n \le t) \\ &= \Pr(X_n \le t, X \le t+\eps) + \Pr(X_n \le t, X > t+\eps) \\ &\le \Pr(X \le t+\eps) + \Pr(|X_n - X| > \eps) \\ &= F(t+\eps) + \Pr(|X_n - X| > \eps). \end{aligned}\]

Letting \(n\to\infty\) and recalling that \(\Pr(|X_n - X| > \eps) \to 0\), we obtain \(\limsup_{n\to\infty}F_n(t) \le F(t+\eps)\). Similarly, we have

\[\begin{aligned} F(t-\eps) &= \Pr(X \le t-\eps) \\ &= \Pr(X \le t-\eps, X_n \le t) + \Pr(X \le t-\eps, X_n > t) \\ &\le \Pr(X_n \le t) + \Pr(|X_n - X| > \eps) \\ &= F_n(t) + \Pr(|X_n - X| > \eps), \end{aligned}\]

which implies (again letting \(n\to\infty\)) \(F(t-\eps) \le \liminf_{n\to\infty}F_n(t)\). We combine the two inequalities to see that all accumulation points of \(F_1(t), F_2(t),\dots\) are sandwiched between \(F(t-\eps)\) and \(F(t+\eps)\):

\[F(t-\eps) \le \liminf_{n\to\infty}F_n(t) \le \limsup_{n\to\infty}F_n(t) \le F(t+\eps).\]

This is valid for any \(\eps > 0\). Since \(F\) is continuous at \(t\), both \(F(t-\eps)\) and \(F(t+\eps)\) tend to \(F(t)\) as \(\eps\to 0\), so \(\lim_{n\to\infty}F_n(t) = F(t)\) at every continuity point \(t\) of \(F\), which is the definition of convergence in distribution.
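
To see the result in action, here is a small sketch in the same illustrative setting as above (\(X_n = X + Z_n/n\) with \(X \sim N(0,1)\)), assuming NumPy and SciPy; every \(t\) is a continuity point of the standard normal CDF \(\Phi\), so the empirical estimate of \(F_n(t)\) should approach \(\Phi(t)\):

```python
# Illustrative check that F_n(t) -> F(t) when X_n -> X in probability:
# here F(t) = Phi(t), the standard normal CDF.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m = 200_000          # Monte Carlo sample size
t = 0.5              # an arbitrary continuity point of F (any t works here)

X = rng.standard_normal(m)
for n in (1, 5, 25, 125):
    Xn = X + rng.standard_normal(m) / n
    Fn_t = np.mean(Xn <= t)                  # empirical estimate of F_n(t)
    print(n, abs(Fn_t - norm.cdf(t)))        # shrinks, up to Monte Carlo error
```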

Convergence in distribution to a constant implies convergence in probability. Recall the definition of convergence in distribution: \((X_n)_{n=1}^\infty\dto X\) iff

\[F_n(t) \to F(t)\text{ at all continuity points of }F,\]

where \(F_n\) and \(F\) are the CDFs of \(X_n\) and \(X\) respectively. If the limit \(X\) is the constant \(x\) (i.e. \(X = x\) with probability one), then its CDF is

\[F(t) = \begin{cases}0 & t < x \\ 1 & t\ge x\end{cases}.\]

Fix \(\eps > 0\). Since the only discontinuity point of \(F\) is \(t = x\), both \(x-\eps\) and \(x+\eps\) are continuity points of \(F\), so \(F_n(x-\eps) \to F(x-\eps) = 0\) and \(F_n(x+\eps) \to F(x+\eps) = 1\). We have

\[\begin{aligned} \Pr(|X_n - x| > \eps) &= \Pr(X_n < x-\eps) + \Pr(X_n > x+\eps) \\ &\le F_n(x-\eps) + (1-F_n(x+\eps)) \\ &\to 0 + (1 - 1) = 0. \end{aligned}\]

Thus \(\Pr(|X_n - x| > \eps) \to 0\) for any \(\eps > 0\), which is the definition of convergence in probability.
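
A matching sketch for the constant-limit case (again with an illustrative choice of sequence, assuming NumPy): \(X_n \sim N(x, 1/n^2)\) converges in distribution to the constant \(x\), and the estimated \(\Pr(|X_n - x| > \eps)\) indeed shrinks toward 0, as the proof predicts.

```python
# Illustrative check of convergence in probability to a constant limit.
import numpy as np

rng = np.random.default_rng(0)
m = 200_000          # Monte Carlo sample size
x, eps = 2.0, 0.1

for n in (1, 5, 25, 125):
    Xn = x + rng.standard_normal(m) / n      # draws of X_n ~ N(x, 1/n^2)
    print(n, np.mean(np.abs(Xn - x) > eps))  # estimate of Pr(|X_n - x| > eps)
```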

Posted on October 20, 2021 from Ann Arbor, MI