Statistics - Type I & II Errors
Type I and Type II errors denote the two erroneous outcomes of a statistical hypothesis test. A Type I error is the incorrect rejection of a valid (true) null hypothesis, whereas a Type II error is the incorrect retention of an invalid (false) null hypothesis.
Null Hypothesis
The null hypothesis is a statement that negates the claimed effect and is tested against evidence. Consider the following examples:
Example 1
Hypothesis - Water added to a toothpaste protects teeth against cavities.
Null Hypothesis - Water added to a toothpaste has no effect against cavities.
Example 2
Hypothesis - Fluoride added to a toothpaste protects teeth against cavities.
Null Hypothesis - Fluoride added to a toothpaste has no effect against cavities.
In each case, the null hypothesis is tested against experimental data to decide whether the claimed effect of water or fluoride on tooth cavities can be rejected.
Type I Error
Consider Example 1. Here the null hypothesis is true, i.e. water added to a toothpaste has no effect against cavities. If, using experimental data, we nevertheless detect an effect of the added water on cavities, then we are rejecting a true null hypothesis. This is a Type I error. It is also called a False Positive (a situation which indicates that a given condition is present when it actually is not). The Type I error rate, or significance level, is the probability of rejecting the null hypothesis given that it is true.
The Type I error rate is denoted by $\alpha$ and is also called the alpha level. A significance level of 0.05, or 5%, is generally considered acceptable, meaning that a 5% probability of incorrectly rejecting the null hypothesis is tolerated.
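To make this concrete, here is a minimal simulation sketch in Python. The two-sample t-test, the group sizes, and the normally distributed data are illustrative assumptions, not taken from the example above. When the null hypothesis is true, a test run at $\alpha = 0.05$ should reject it in roughly 5% of repeated experiments:

```python
# Sketch: simulate many experiments in which H0 is true (both groups come from
# the same distribution) and count how often a t-test at alpha = 0.05 rejects H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # H0 is true: both groups are drawn from the same distribution.
    control = rng.normal(loc=50, scale=10, size=30)
    treated = rng.normal(loc=50, scale=10, size=30)
    _, p_value = stats.ttest_ind(control, treated)
    if p_value < alpha:          # rejecting a true H0 is a Type I error
        false_positives += 1

print("Observed Type I error rate:", false_positives / n_experiments)
# The printed rate should be close to alpha = 0.05.
```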
Type II Error
Consider Example 2. Here the null hypothesis is false, i.e. fluoride added to a toothpaste does have an effect against cavities. If, using experimental data, we fail to detect the effect of the added fluoride on cavities, then we are accepting a false null hypothesis. This is a Type II error. It is also called a False Negative (a situation which indicates that a given condition is not present when it actually is).
Type II error is denoted by $\beta$ and is also called the beta level.
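A similar sketch estimates the Type II error rate when the null hypothesis is actually false. The effect size, sample size, and use of a two-sample t-test are assumptions chosen for illustration; the fraction of experiments that fail to reject the false null hypothesis approximates $\beta$, and $1 - \beta$ is commonly called the power of the test:

```python
# Sketch: simulate experiments in which H0 is false (the treated group really
# differs) and count how often the test fails to reject H0 -- a Type II error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_experiments = 10_000
misses = 0

for _ in range(n_experiments):
    control = rng.normal(loc=50, scale=10, size=30)   # no fluoride
    treated = rng.normal(loc=45, scale=10, size=30)   # fluoride lowers the cavity score (assumed effect)
    _, p_value = stats.ttest_ind(control, treated)
    if p_value >= alpha:         # failing to reject a false H0 is a Type II error
        misses += 1

beta = misses / n_experiments
print("Estimated Type II error rate (beta):", beta)
print("Estimated power (1 - beta):", 1 - beta)
```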
The goal of a statistical test is to determine whether a null hypothesis can be rejected. A test either rejects the null hypothesis or fails to reject it. The following table illustrates the relationship between the truth or falseness of the null hypothesis and the outcome of the test in terms of Type I and Type II errors.
| Judgment | Null hypothesis ($H_0$) is | Error Type | Inference |
|---|---|---|---|
| Reject | Valid | Type I Error (False Positive) | Incorrect |
| Reject | Invalid | True Positive | Correct |
| Unable to Reject | Valid | True Negative | Correct |
| Unable to Reject | Invalid | Type II Error (False Negative) | Incorrect |