This post is copied from my Blog for Students and is to some extent off-topic but relevant to anybody doing research.
Norman Matloff (2016) writes in his post:
Sadly, the concept of p-values and significance testing forms the very core of statistics. A number of us have been pointing out for decades that p-values are at best underinformative and often misleading…
Source: After 150 Years, the ASA Says No to p-values | Mad (Data) Scientist
Yesterday, the statement by the American Statistical Association was published online in the journal “The American Statistician”. Many statisticians have been aware of the problems of significance tests for a long time, but general practice, teaching, journal instructions and editors’ requirements have not changed. Let’s hope the statement will start real changes in everyday practice.
John W. Tukey (1991) had written quite boldly about the problem much earlier:
Statisticians classically asked the wrong question—and were willing to answer with a lie, one that was often a downright lie. They asked “Are the effects of A and B different?” and they were willing to answer “no.”
All we know about the world teaches us that the effects of A and B are always different—in some decimal place—for any A and B. Thus asking “Are the effects different?” is foolish.
What we should be answering first is “Can we tell the direction in which the effects of A differ from the effects of B?” In other words, can we be confident about the direction from A to B? Is it “up,” “down” or “uncertain”?
The third answer to this first question is that we are “uncertain about the direction”—it is not, and never should be, that we “accept the null hypothesis.”
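As an illustration of Tukey’s three-way answer, here is a minimal sketch in Python (my own addition, not from the original post or from Tukey; the samples a and b and the confidence-interval approach are assumptions for illustration). It reports “up”, “down” or “uncertain” from a 95% confidence interval for the difference of two means instead of a bare accept/reject decision.

```python
# Sketch: answer "up", "down" or "uncertain" about the direction of B - A,
# using a 95% Welch confidence interval for the difference of means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=10.0, scale=2.0, size=30)   # hypothetical measurements under A
b = rng.normal(loc=10.5, scale=2.0, size=30)   # hypothetical measurements under B

diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
# Welch-Satterthwaite degrees of freedom (unequal variances allowed)
df = se**4 / ((a.var(ddof=1) / len(a))**2 / (len(a) - 1)
              + (b.var(ddof=1) / len(b))**2 / (len(b) - 1))
t_crit = stats.t.ppf(0.975, df)                # two-sided 95% interval
lo, hi = diff - t_crit * se, diff + t_crit * se

if lo > 0:
    direction = "up (B larger than A)"
elif hi < 0:
    direction = "down (B smaller than A)"
else:
    direction = "uncertain"                    # not "accept the null hypothesis"

print(f"difference: {diff:.2f}, 95% CI: [{lo:.2f}, {hi:.2f}] -> {direction}")
```

The point of the sketch is only that the “uncertain” branch replaces the misleading “no difference” conclusion: the interval tells us how confidently we can name a direction, and how large the effect might plausibly be.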