Suppose you are doing a likelihood-ratio (LR) test in R, and that the test statistic has an asymptotic \(\chi^{2}\) distribution under the null hypothesis.

You were probably taught to compare the test statistic against the appropriate critical value reported in a table in the appendix of a statistics or econometrics textbook. That becomes inconvenient once you are doing more than two or three such tests, and it is no fun (and error-prone) to repeat the table lookups nine months later when you return to the work for a revision. A better approach is to let R calculate p-values for you. Here are some examples.

With one degree of freedom, the critical \(\chi^{2}\) value for \(\alpha = 0.05\) is 3.84 (to two decimal places), and with two degrees of freedom, it is 5.99. We can confirm this:

```
pv1 <- 1 - pchisq(3.84, df = 1)
pv2 <- 1 - pchisq(5.99, df = 2)
```

The values of `pv1` and `pv2` are both approximately 0.05. There's no magic here. All the `pchisq` function does is evaluate the \(\chi^{2}\) cumulative distribution function at 3.84 and 5.99 for the given degrees of freedom; subtracting the result from 1 gives the probability of observing a \(\chi^{2}\)-distributed variable at least that large under the null hypothesis.
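Since we want the upper-tail probability anyway, R lets us skip the subtraction: `pchisq` (like the other `p*` distribution functions) accepts a `lower.tail` argument. A quick sketch of the same calculation:

```
# Ask pchisq for the upper tail directly, rather than computing 1 - CDF.
# This is also numerically safer when the test statistic is very large,
# since it avoids subtracting a number close to 1 from 1.
pv1 <- pchisq(3.84, df = 1, lower.tail = FALSE)
pv2 <- pchisq(5.99, df = 2, lower.tail = FALSE)
```

Both values again come out to approximately 0.05.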

Similarly, for the standard normal distribution:

```
pv3 <- 1 - pnorm(1.96)
pv4 <- 1 - pnorm(1.645)
```

The values of `pv3` and `pv4` are approximately 0.025 and 0.05, as expected. Recall that 1.96 and 1.645 are the \(\alpha=0.05\) critical values for two-sided and one-sided tests, respectively.

*Last updated: March 23, 2017*
