STATS 413

Estimating the asymptotic variance of the OLS estimator

In this post, we show that the sandwich estimator of the asymptotic variance is consistent; i.e. \(\widehat{\textrm{Avar}}[\widehat{\beta}] \overset{p}{\to} \textrm{Avar}[\widehat{\beta}]\), where

\[\def\Avar{\textrm{Avar}} \def\eps{\epsilon} \def\Ex{\mathbb{E}} \def\pto{\overset{p}{\to}} \def\hAvar{\widehat{\Avar}} \def\hbeta{\widehat{\beta}} \def\heps{\widehat{\eps}} \def\hSigma{\widehat{\Sigma}} \hAvar[\hbeta] \triangleq \hSigma_x^{-1}\hSigma_g\hSigma_x^{-1},\quad\begin{aligned} \hSigma_x &\triangleq \frac1n\sum_{i=1}^nx_ix_i^T, \\ \hSigma_g &\triangleq \frac1n\sum_{i=1}^n\heps_i^2x_ix_i^T. \end{aligned}\]
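
To make the definitions concrete, here is a minimal NumPy sketch that computes \(\hSigma_x\), \(\hSigma_g\), and the sandwich estimator from a design matrix with rows \(x_i^T\); the function name `sandwich_avar` and the setup are illustrative, not part of the course materials.

```python
import numpy as np

def sandwich_avar(X, y):
    """Sandwich estimator of Avar[beta_hat]; X is (n, p), y is (n,)."""
    n = X.shape[0]
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS coefficients
    resid = y - X @ beta_hat                         # residuals eps_hat_i
    Sigma_x_hat = X.T @ X / n                        # (1/n) sum_i x_i x_i^T
    # (1/n) sum_i eps_hat_i^2 x_i x_i^T, via row-wise scaling of X
    Sigma_g_hat = (X * resid[:, None] ** 2).T @ X / n
    Sx_inv = np.linalg.inv(Sigma_x_hat)
    return Sx_inv @ Sigma_g_hat @ Sx_inv
```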

We shall show that the sandwich estimator is consistent in two steps:

  1. show that \(\hSigma_x\) and \(\hSigma_g\) are consistent estimators of \(\Sigma_x\) and \(\Sigma_g\), respectively;
  2. use the continuous mapping theorem (CMT) to conclude that the sandwich estimator is consistent.

The consistency of \(\hSigma_x\) is a straightforward consequence of the law of large numbers:

\[\hSigma_x = \frac1n\sum_{i=1}^nx_ix_i^T \pto \Ex\big[x_1x_1^T\big] = \Sigma_x.\]
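
A quick simulation (not from the original post; the population second moment below is an arbitrary choice) illustrates this convergence: the entrywise error of \(\hSigma_x\) shrinks as \(n\) grows.

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma_x = np.array([[1.0, 0.5],
                    [0.5, 2.0]])           # assumed population second moment
L = np.linalg.cholesky(Sigma_x)

for n in [100, 10_000, 1_000_000]:
    X = rng.standard_normal((n, 2)) @ L.T  # zero-mean rows with E[x x^T] = Sigma_x
    Sigma_x_hat = X.T @ X / n
    print(n, np.abs(Sigma_x_hat - Sigma_x).max())  # error shrinks roughly like 1/sqrt(n)
```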

The consistency of \(\hSigma_g\) is trickier. Recall \(\heps_i \triangleq y_i - x_i^T\hbeta\); since \(y_i = x_i^T\beta_* + \eps_i\), this gives \(\heps_i = \eps_i - x_i^T(\hbeta - \beta_*)\). Squaring and expanding,

\[\begin{aligned} \hSigma_g &= \frac1n\sum_{i=1}^n\heps_i^2x_ix_i^T \\ &= \underbrace{\frac1n\sum_{i=1}^n\eps_i^2x_ix_i^T}_{I} - \underbrace{\frac2n\sum_{i=1}^nx_i\eps_ix_i^T(\hbeta - \beta_*)x_i^T}_{II} + \underbrace{\frac1n\sum_{i=1}^nx_i(\hbeta - \beta_*)^Tx_ix_i^T(\hbeta - \beta_*)x_i^T}_{III}. \end{aligned}\]
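
This decomposition is an exact algebraic identity, so it can be checked numerically to machine precision. The sketch below does so on simulated heteroskedastic data; the data-generating process is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.standard_normal((n, p))
beta_star = np.array([1.0, -2.0, 0.5])                # illustrative true coefficients
eps = rng.standard_normal(n) * (1 + np.abs(X[:, 0]))  # heteroskedastic errors
y = X @ beta_star + eps

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat                  # eps_hat_i
s = X @ (beta_hat - beta_star)            # scalars x_i^T (beta_hat - beta_star)

Sigma_g_hat = (X * resid[:, None] ** 2).T @ X / n

# Terms I, II, III from the expansion of eps_hat_i^2 = (eps_i - s_i)^2
I   = (X * (eps ** 2)[:, None]).T @ X / n
II  = 2 * (X * (eps * s)[:, None]).T @ X / n
III = (X * (s ** 2)[:, None]).T @ X / n

print(np.abs(Sigma_g_hat - (I - II + III)).max())     # ~1e-15: exact identity
```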

The first term \(I\) converges in probability to \(\Sigma_g \triangleq \Ex\big[\eps_1^2x_1x_1^T\big]\) by the law of large numbers. All the entries of the second term \(II\) converge in probability to zero: its \(j,k\)-th entry satisfies

\[\begin{aligned} \frac2n\sum_{i=1}^nx_{i,j}\eps_ix_i^T(\hbeta - \beta_*)x_{i,k} &= \frac2n\sum_{i=1}^nx_{i,j}\eps_ix_{i,k}x_i^T(\hbeta - \beta_*) \\ &= \left(\frac2n\sum_{i=1}^nx_{i,j}\eps_ix_{i,k}x_i^T\right)(\hbeta - \beta_*) \\ &\pto 2\Ex\left[x_{1,j}\eps_1x_{1,k}x_1^T\right]\cdot 0 = 0, \end{aligned}\]

where the first factor converges by the law of large numbers, \(\hbeta - \beta_* \pto 0\) by the consistency of OLS, and the product converges to zero by Slutsky's theorem.

Similarly, all the entries of \(III\) converge in probability to zero (assuming \(x_1\) has finite fourth moments): its \(j,k\)-th entry satisfies

\[\begin{aligned} \frac1n\sum_{i=1}^nx_{i,j}(\hbeta - \beta_*)^Tx_ix_i^T(\hbeta - \beta_*)x_{i,k} &= \frac1n\sum_{i=1}^n(\hbeta - \beta_*)^Tx_ix_{i,j}x_{i,k}x_i^T(\hbeta - \beta_*) \\ &= (\hbeta - \beta_*)^T\left(\frac1n\sum_{i=1}^nx_ix_{i,j}x_{i,k}x_i^T\right)(\hbeta - \beta_*) \\ &\pto 0^T\,\Ex\left[x_1x_{1,j}x_{1,k}x_1^T\right]\,0 = 0. \end{aligned}\]
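
As a sanity check (again on an arbitrary simulated design, not from the original post), the entries of \(II\) and \(III\) do shrink toward zero as \(n\) grows, roughly at the rate at which \(\hbeta - \beta_*\) vanishes.

```python
import numpy as np

rng = np.random.default_rng(2)
beta_star = np.array([1.0, -2.0])

for n in [100, 10_000, 1_000_000]:
    X = rng.standard_normal((n, 2))
    eps = rng.standard_normal(n) * (1 + np.abs(X[:, 1]))  # heteroskedastic
    y = X @ beta_star + eps
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    s = X @ (beta_hat - beta_star)
    II  = 2 * (X * (eps * s)[:, None]).T @ X / n  # cross term
    III = (X * (s ** 2)[:, None]).T @ X / n       # quadratic term
    print(n, np.abs(II).max(), np.abs(III).max())
```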

We deduce \(\hSigma_g \pto \Sigma_g\). Finally, since matrix multiplication and inversion are continuous wherever \(\Sigma_x\) is invertible, the CMT lets us conclude that the sandwich estimator is consistent: \(\hSigma_x^{-1}\hSigma_g\hSigma_x^{-1} \pto \Sigma_x^{-1}\Sigma_g\Sigma_x^{-1} = \Avar[\hbeta]\).
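
To close the loop, a Monte Carlo sketch (all distributional choices below are illustrative) compares the sandwich estimate from a single sample against the empirical variance of \(\hbeta\) across many replications, scaled by \(n\); the two should roughly agree.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 2_000, 1_000
beta_star = np.array([1.0, -2.0])

def draw_sample():
    X = rng.standard_normal((n, 2))
    eps = rng.standard_normal(n) * (1 + np.abs(X[:, 0]))  # heteroskedastic
    return X, X @ beta_star + eps

# Monte Carlo: empirical covariance of beta_hat over replications, scaled by n
betas = np.empty((reps, 2))
for r in range(reps):
    X, y = draw_sample()
    betas[r] = np.linalg.lstsq(X, y, rcond=None)[0]
mc_avar = n * np.cov(betas.T)

# Sandwich estimate from a single fresh sample
X, y = draw_sample()
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
Sx_inv = np.linalg.inv(X.T @ X / n)
Sg = (X * resid[:, None] ** 2).T @ X / n
print(mc_avar)               # ~ Avar[beta_hat]
print(Sx_inv @ Sg @ Sx_inv)  # should be close to the Monte Carlo value
```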

Posted on November 08, 2021 from Ann Arbor, MI