Abstract
Analysis of scientific data involves many components, one of which is often statistical testing with the calculation of p-values. However, researchers too often pepper their papers with p-values in the absence of critical thinking about their results. In fact, statistical tests in their various forms address just one question: does an observed difference exceed that which might reasonably be expected solely as a result of sampling error and/or random allocation of experimental material? Such tests are best applied to the results of designed studies with reasonable control of experimental error and sampling error, as well as acquisition of a sufficient sample size. Nevertheless, attributing an observed difference to a specific treatment effect requires critical thinking on the part of the scientist. Observational studies involve data sets whose size is usually a matter of convenience and whose results reflect a number of potentially confounding factors. In this situation, statistical testing is not appropriate and p-values may be misleading; other more modern statistical tools should be used instead, including graphic analysis, computer-intensive methods, regression trees, and other procedures broadly classified as bioinformatics, data mining, and exploratory data analysis. In this review, the utility of p-values calculated from designed experiments and observational studies is discussed, leading to the formation of a decision tree to aid researchers and reviewers in understanding both the benefits and limitations of statistical testing.
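The single question the abstract says statistical tests address, namely whether an observed difference exceeds what sampling error and random allocation alone could produce, can be made concrete with a permutation test, one of the computer-intensive methods the abstract mentions. The sketch below is illustrative only; the paper contains no code, and the two groups of measurements are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: an outcome measured in two randomly allocated groups.
treatment = np.array([14.2, 15.1, 13.8, 16.0, 15.5, 14.9])
control = np.array([13.1, 12.8, 14.0, 13.5, 12.6, 13.9])

observed_diff = treatment.mean() - control.mean()

# Permutation test: repeatedly re-allocate the pooled observations at
# random and record how often the resulting group difference is at least
# as large as the one actually observed. That proportion is the p-value:
# the probability of a difference this extreme under random allocation
# alone, which is exactly the question the abstract describes.
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)
n_perm = 10_000

count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:n_treat].mean() - perm[n_treat:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

p_value = count / n_perm
print(f"observed difference: {observed_diff:.2f}")
print(f"permutation p-value: {p_value:.4f}")
```

Note that a small p-value here only rules out chance allocation as the sole explanation; as the abstract stresses, attributing the difference to the treatment itself still requires the design controls and critical thinking the test cannot supply.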
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 675-680 |
| Number of pages | 6 |
| Journal | Aviation Space and Environmental Medicine |
| Volume | 76 |
| Issue number | 7 I |
| State | Published - Jul 1 2005 |
Keywords
- Experimental design
- Inference
- Observational studies
- P-values
- Statistics
ASJC Scopus subject areas
- Public Health, Environmental and Occupational Health
- Pollution
- Medicine (all)