Researchers have long relied on low p-values to identify statistically significant results in empirical research. In recent years, however, this reliance on p-values has drawn much criticism. Some have even argued that p-values should be banned altogether.
In the article "Why 'Statistical Significance' Is Often Insignificant", Noah Smith argues that the problem is not the p-value itself, but an academic journal system that publishes an overabundance of studies with weak and questionable results. Smith argues that we must remove the incentive for researchers to prove themselves by publishing questionable studies, improve the peer-review filter, and look not just at p-values, but at the size of effects and their explanatory power.
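To see why effect size matters alongside the p-value, here is a minimal sketch (not from the article; the numbers are illustrative) of a two-sided one-sample z-test. With a large enough sample, a practically negligible effect can produce a tiny p-value, while a large effect in a small sample can fail to reach significance:

```python
import math

def z_test_p_value(mean_diff, sd, n):
    """Two-sided p-value for a one-sample z-test of mean_diff against 0."""
    z = mean_diff / (sd / math.sqrt(n))
    # Two-sided p-value via the standard normal survival function:
    # p = 2 * P(Z > |z|) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# A negligible effect (standardized effect size d = 0.01) in a huge sample:
small_effect_p = z_test_p_value(mean_diff=0.01, sd=1.0, n=1_000_000)
print(small_effect_p)  # far below 0.05: "significant" but practically meaningless

# A large effect (d = 0.8) in a tiny sample:
large_effect_p = z_test_p_value(mean_diff=0.8, sd=1.0, n=5)
print(large_effect_p)  # above 0.05: "insignificant" yet practically large
```

The point is the one Smith makes: a low p-value measures how surprising the data would be under the null hypothesis, not how big or important the effect is.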
Read the full article on bloomberg.com.