When Does Heteroskedasticity Matter?

It is common for researchers to use the following strategy for dealing with heteroskedasticity:

  • Test for heteroskedasticity, perhaps using the White test.
  • After the inevitable rejection of the null of homoskedasticity, do a White correction of the standard errors (a code sketch of this routine follows the list).
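
As a concrete illustration, here is a minimal sketch of this test-then-correct routine in Python with statsmodels. The simulated data and all variable names are my own and only stand in for whatever model is actually being estimated.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

# Made-up data with heteroskedastic errors (the variance grows with |x|).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + (1.0 + 0.5 * np.abs(x)) * rng.normal(size=n)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Step 1: White test -- regress the squared residuals on the regressors,
# their squares, and cross-products; the null is homoskedasticity.
lm_stat, lm_pval, f_stat, f_pval = het_white(ols.resid, X)
print(f"White LM statistic = {lm_stat:.2f}, p-value = {lm_pval:.4f}")

# Step 2: after the (near-certain) rejection, refit with White
# (heteroskedasticity-consistent) standard errors.
if lm_pval < 0.05:
    robust = sm.OLS(y, X).fit(cov_type="HC1")
    print(robust.bse)
```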

Alternatively, they might do the following:

  • Assert that the error term might be heteroskedastic.
  • Do a White correction of the standard errors (again sketched below).
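
The second strategy skips the pre-test entirely: in statsmodels it amounts to asking for a heteroskedasticity-consistent covariance matrix when fitting. Again, the data below are made up purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Made-up data, as before.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + (1.0 + 0.5 * np.abs(x)) * rng.normal(size=n)

X = sm.add_constant(x)
usual = sm.OLS(y, X).fit()                  # conventional standard errors
robust = sm.OLS(y, X).fit(cov_type="HC0")   # HC0 is White's original estimator

print("conventional SEs:", usual.bse)
print("White (HC0) SEs: ", robust.bse)
```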

Davidson and MacKinnon (2004, pages 201-202) write:

Any form of heteroskedasticity affects the efficiency of the OLS parameter estimator, but only heteroskedasticity that is related to the squares and cross-products of the \(x_{ti}\) affects the validity of the usual OLS covariance matrix estimator.

We can go a little further than that: if the heteroskedasticity is only weakly related to the squares or cross-products of the regressors, then the inconsistency of the usual OLS covariance matrix estimator will be small.
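
To see where the squares-and-cross-products condition comes from, compare the two covariance matrices (the notation here is mine, with \(\Omega = \operatorname{diag}(\sigma_{1}^{2}, \dots, \sigma_{n}^{2})\) collecting the error variances). Under heteroskedasticity the true sampling variance of the OLS estimator is the sandwich \[\operatorname{Var}(\hat{\beta}) = (X'X)^{-1} X' \Omega X (X'X)^{-1},\] while the usual estimator targets \[\bar{\sigma}^{2} (X'X)^{-1}, \qquad \bar{\sigma}^{2} = \frac{1}{n} \sum_{t} \sigma_{t}^{2}.\] The two agree asymptotically exactly when \(\frac{1}{n} \sum_{t} \sigma_{t}^{2} x_{t} x_{t}'\) and \(\bar{\sigma}^{2} \cdot \frac{1}{n} \sum_{t} x_{t} x_{t}'\) have the same limit, that is, when \(\sigma_{t}^{2}\) is uncorrelated with the elements of \(x_{t} x_{t}'\), which are precisely the squares and cross-products of the regressors.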

As they point out, you never have to correct for heteroskedasticity when estimating the mean of a variable using the regression \[y_{t} = \alpha + \varepsilon_{t}.\]
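
To spell that out (assuming independent errors with variances \(\sigma_{t}^{2}\); the algebra here is mine), the OLS estimate of \(\alpha\) is just the sample mean \(\bar{y}\), whose true variance is \[\operatorname{Var}(\hat{\alpha}) = \frac{1}{n^{2}} \sum_{t} \sigma_{t}^{2} = \frac{\bar{\sigma}^{2}}{n},\] and the usual standard error \(s/\sqrt{n}\) estimates exactly this quantity, since \(E\!\left[s^{2}\right] = \bar{\sigma}^{2}\) whatever the pattern of the \(\sigma_{t}^{2}\). With a constant as the only regressor, \(x_{t} x_{t}' = 1\) for every \(t\), so the heteroskedasticity cannot be related to the squares and cross-products of the regressors.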

I’ve done some Monte Carlo simulations and was surprised by how little heteroskedasticity seemed to matter for inference. I may post them if I get the time to write things up in a coherent fashion.
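
In the meantime, here is a minimal sketch of the kind of experiment one could run (the design, sample size, and variance functions below are my own illustrative choices, not the simulations mentioned above): generate errors whose variance either does or does not depend on the regressor, and compare how often conventional and White-corrected t-tests reject a true null.

```python
import numpy as np
import statsmodels.api as sm

# Size (rejection rate under a true null) of conventional vs. White (HC0)
# t-tests on the slope, under two heteroskedasticity patterns:
#   related   -- error variance depends on the regressor x
#   unrelated -- error variance depends on an independent draw
rng = np.random.default_rng(42)
n, reps, beta = 100, 2000, 1.0

def rejection_rates(related):
    rej_usual = rej_hc0 = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        scale = 1.0 + 2.0 * np.abs(x if related else rng.normal(size=n))
        y = 1.0 + beta * x + scale * rng.normal(size=n)
        X = sm.add_constant(x)
        usual = sm.OLS(y, X).fit()
        hc0 = sm.OLS(y, X).fit(cov_type="HC0")
        # t-tests of the true null H0: slope = beta, 5% normal critical value
        rej_usual += abs((usual.params[1] - beta) / usual.bse[1]) > 1.96
        rej_hc0 += abs((hc0.params[1] - beta) / hc0.bse[1]) > 1.96
    return rej_usual / reps, rej_hc0 / reps

for related in (True, False):
    usual_rate, hc0_rate = rejection_rates(related)
    label = "related to x" if related else "unrelated to x"
    print(f"variance {label}: usual {usual_rate:.3f}, HC0 {hc0_rate:.3f}")
```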

Last Update: 2015-11-12