Wednesday, June 6, 2018

How p-values Work in Practice


This post is the third in a series of six posts in which I advocate against the use of p-values to report the results of statistical analysis. You can find a summary of my argument and links to all the posts in the first post of the series.

In the previous post of the series, I explained what sampling noise is. In this post, I am going to explain how p-values are used to report the statistical significance of results. I'll describe the practice in economics, but I believe most fields use p-values in a similar way, especially within the social sciences.

Researchers generally check for statistical significance in the following way: they divide their estimated effect x by its standard error se(x) (in general, se(x) is equal to y/(2*1.96), where y is the width of the 95% confidence interval, the estimate of sampling noise I used in my previous post), take the absolute value of the resulting ratio and compare it to the following thresholds (a short code sketch after the list illustrates the mapping):
  • If the ratio is above 2.57, the p-value is below 1%, we say that the effect is significant at 1% and some statistical software puts three stars in front of the estimate.
  • If the ratio is above 1.96, the p-value is below 5%, we say that the effect is significant at 5% and some statistical software puts two stars in front of the estimate.
  • If the ratio is above 1.645, the p-value is below 10%, we say that the effect is significant at 10% and some statistical software puts one star in front of the estimate.
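
To make the recipe concrete, here is a minimal sketch of the procedure in Python. The scipy call only shows where the thresholds come from (the two-sided critical values of the standard normal); the function name and the example values are mine, not those of any particular statistical package:

from scipy.stats import norm

def significance_stars(x, se_x):
    # Map the ratio |x| / se(x) to the usual significance stars.
    ratio = abs(x) / se_x
    if ratio > norm.ppf(1 - 0.01 / 2):   # 2.576: significant at 1%
        return "***"
    if ratio > norm.ppf(1 - 0.05 / 2):   # 1.960: significant at 5%
        return "**"
    if ratio > norm.ppf(1 - 0.10 / 2):   # 1.645: significant at 10%
        return "*"
    return ""                            # not significant at the 10% level

# With the estimate from my introductory post (x = 0.1, se(x) = 0.051):
print(significance_stars(0.1, 0.051))    # prints "**": significant at 5%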

The rationale behind this approach is based on Null Hypothesis Significance Testing (NHST). Before seeing the data, you commit to rejecting the null hypothesis of zero effect if and only if the probability of observing an estimate at least as extreme as yours, when the true effect is zero, is low enough. The procedure I describe above is a two-sided procedure, in which you do not commit ex ante to a privileged direction for the effect of your treatment: it can be either positive or negative, you want to be able to detect either, and you allocate the same share of the rejection probability to each tail.

The graph below illustrates this procedure with an example taken from my class. The true distribution of the effect across random samples is in black (and nicely approximated by a normal distribution, in blue); it is centered around the true value of the effect, marked in red (0.18). The problem of sampling noise is that you do not know where the true distribution is centered: you only have access to one point drawn from the black distribution. Thanks to the central limit theorem, you can estimate the width of the black distribution and approximate it by a normal, but you still do not know where to center it.

Here is where NHST comes in. With NHST, you assume that the true distribution is centered at zero (the green distribution in my graph), and you compute the threshold t for which the probability of falling above t or below -t is equal to, say, 0.05 when the distribution is centered at zero. This gives you the two green dashed lines, at roughly 0.1 and -0.1. The idea of NHST is that if the estimated parameter falls above t or below -t, it is very unlikely that it comes from the green distribution (over sampling replications, this happens only 5% of the time when the green distribution is the true one).
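
To see the 5% logic at work, here is a small simulation sketch in Python. The standard error of 0.051 and the true effect of 0.18 are the values implied by the graph described above (1.96*0.051 gives the thresholds of roughly +/-0.1); the number of replications and the seed are arbitrary:

import numpy as np
from scipy.stats import norm

se = 0.051          # standard error implied by the thresholds of roughly +/-0.1
true_effect = 0.18  # true value of the effect (the red line in the graph)
alpha = 0.05

# Rejection thresholds computed under the null of a zero effect (the green distribution)
t = norm.ppf(1 - alpha / 2) * se
print("reject when the estimate falls outside [", -t, ",", t, "]")

rng = np.random.default_rng(0)
n_reps = 100_000

# Under the null, the estimate falls outside [-t, t] only about 5% of the time...
null_draws = rng.normal(0.0, se, n_reps)
print("rejection rate under the null:", np.mean(np.abs(null_draws) > t))

# ...whereas estimates drawn around the true effect of 0.18 fall outside most of the time
true_draws = rng.normal(true_effect, se, n_reps)
print("rejection rate around the true effect:", np.mean(np.abs(true_draws) > t))

The first rejection rate is close to 0.05 by construction; the second is the power of the test against the true effect of 0.18.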


Based on the notation I used in the previous post of the series, with x the effect of the treatment and y the (95%) sampling noise, i.e. the width of the 95% confidence interval, you can compute the p-value associated with the two-sided test as follows: 2*(1-Phi(|x|/(y/(2*1.96)))), with Phi the cumulative distribution function of the standard normal. With the example I used in my introductory post, we have x=0.1 and y/2=0.09996, so that the standard error is equal to 0.051 and the p-value is 0.0498. The result is thus said to be statistically significant at 5%.
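
For the record, here is the same calculation as a short Python sketch (the numbers are the ones from the introductory post just quoted; norm.cdf plays the role of Phi):

from scipy.stats import norm

x = 0.1            # estimated effect
half_y = 0.09996   # half of the (95%) sampling noise, y/2

se = half_y / 1.96                        # standard error: se(x) = y / (2 * 1.96)
p_value = 2 * (1 - norm.cdf(abs(x) / se))

print(se)       # about 0.051
print(p_value)  # about 0.0498, just below the 5% threshold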
