Statistical significance
Statistical significance (pronunciation: /stəˈtɪstɪkəl sɪɡˈnɪfɪkəns/) is a term used in statistics to indicate that an observed relationship between two or more variables is unlikely to have arisen by chance alone.
Etymology
The term "statistical significance" was first used in the early 20th century. The concept has its roots in the work of Ronald Fisher, a British statistician and geneticist, who introduced the concept as a part of his development of experimental design.
Definition
Statistical significance depends on a few key factors: the sample size, the effect size, and the statistical power of the test being used. A result is considered statistically significant if the observed p-value is less than the pre-specified significance level, often 0.05. The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true; a p-value below 0.05 therefore means that such a result would be expected less than 5% of the time if only chance were at work.
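As a minimal sketch of how this decision rule is applied in practice, the example below compares two sample means with an independent two-sample t-test using SciPy. The data and group names are made up for illustration and are not taken from this article.

```python
# Illustrative sketch: significance testing with a two-sample t-test.
# The measurements below are hypothetical and chosen only for demonstration.
from scipy import stats

alpha = 0.05  # pre-specified significance level

control = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7, 5.0, 4.9]      # hypothetical control group
treatment = [5.4, 5.6, 5.3, 5.7, 5.5, 5.6, 5.2, 5.8]     # hypothetical treatment group

# Two-sided test of the null hypothesis that both groups share the same mean
t_stat, p_value = stats.ttest_ind(control, treatment)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Result is statistically significant at the 5% level.")
else:
    print("Result is not statistically significant at the 5% level.")
```

A smaller p-value indicates stronger evidence against the null hypothesis, but it does not by itself measure the size or practical importance of the effect.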
Related terms
- Null hypothesis: The null hypothesis is a general statement or default position that there is no relationship between two measured phenomena.
- Alternative hypothesis: The alternative hypothesis is the hypothesis used in hypothesis testing that is contrary to the null hypothesis. It is usually taken to be that the observations are the result of a real effect.
- Type I error: A Type I error is the rejection of a true null hypothesis.
- Type II error: A Type II error is failing to reject a false null hypothesis.
- Confidence interval: A confidence interval gives an estimated range of values that is likely to include an unknown population parameter (see the sketch after this list).
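The following sketch shows one common way to compute a 95% confidence interval for a population mean, using the Student's t distribution. The sample data are invented for illustration, and SciPy is assumed to be available for the critical value.

```python
# Illustrative sketch: 95% confidence interval for a mean (made-up data).
import math
from scipy import stats

data = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7, 5.0, 4.9]  # hypothetical sample

n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample standard deviation
sem = sd / math.sqrt(n)                                       # standard error of the mean

# Critical value from the t distribution with n - 1 degrees of freedom
t_crit = stats.t.ppf(0.975, df=n - 1)

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```

If the study were repeated many times, about 95% of intervals constructed this way would contain the true population mean.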
External links
- Medical encyclopedia article on Statistical significance
- Wikipedia's article on Statistical significance