 Volume 2 - Issue 3

## Statistical Type I and Type II Errors

T. Dhasaratharaman*

Statistician, Kauvery Hospitals, India

*Correspondence: Tel.: +91 90037 84310; email: dhasa.cst@kauveryhospital.com

In statistics, a Type I error is a false positive conclusion, while a Type II error is a false negative conclusion.

Making a statistical decision always involves uncertainties, so the risks of making these errors are unavoidable in hypothesis testing.

The probability of making a Type I error is the significance level, or alpha (α), while the probability of making a Type II error is beta (β). These risks can be minimized through careful planning in your study design.

### Type I error

Suppose we are testing two brands of paracetamol to evaluate whether Brand 1 is better at curing subjects suffering from fever than Brand 2. As both brands contain paracetamol, their effects are expected to be similar. Let us build a statistical hypothesis around this:

Null hypothesis (H0): Brand 1 is equal to Brand 2

Alternate Hypothesis (H1): Brand 1 is better than Brand 2

Let us evaluate the error that can occur.

Error 1: Suppose the analysis concludes that Brand 1 is better than Brand 2, so we reject H0. Since Brand 1 is in fact equal to Brand 2 (H0 is true), rejecting H0 is an error. This is called a Type I error. Statistically, it is defined as

Type I error = P(Reject H0 | H0 is true).

The probability of a Type I error is called the level of significance and is denoted by α.

Example: Statistical significance and Type I error

In your clinical study, you compare the symptoms of patients who received the new drug intervention or a control treatment. Using a t test, you obtain a p value of 0.035. This p value is lower than your alpha of 0.05, so you consider your results statistically significant and reject the null hypothesis.

However, the p value means that there is a 3.5% chance of observing results at least as extreme as yours if the null hypothesis is true. Therefore, there is still a risk of making a Type I error.
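The meaning of α can be checked by simulation: if H0 is true and we test at α = 0.05, about 5% of studies will falsely reject it. A minimal sketch with hypothetical numbers (group means, spreads, and sample sizes are assumptions, not values from any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_trials = 10_000

# Simulate many studies in which H0 is true: both groups are drawn
# from the SAME distribution, so every "significant" result is a
# false positive (a Type I error).
false_positives = 0
for _ in range(n_trials):
    group_a = rng.normal(loc=100.0, scale=15.0, size=30)
    group_b = rng.normal(loc=100.0, scale=15.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

type_i_rate = false_positives / n_trials
print(f"Observed Type I error rate: {type_i_rate:.3f}")  # close to 0.05
```

The observed false-positive rate converges to α as the number of simulated studies grows, which is exactly what "level of significance" promises.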

### Type II error

Suppose we are testing paracetamol against placebo to evaluate whether paracetamol is better at curing subjects suffering from fever than placebo. (Paracetamol is expected to be better than placebo.) Let us build a statistical hypothesis around this:

Null hypothesis (H0): paracetamol is equal to placebo

Alternate hypothesis (H1): paracetamol is better than placebo

Let us evaluate the error that can occur.

Error 2: Suppose the analysis concludes that paracetamol is equal to placebo, so we accept H0. Since paracetamol is in fact better than placebo (H1 is true), accepting H0 is an error. This is called a Type II error. Statistically, it is defined as

Type II error = P(Accept H0 | H1 is true).

The probability of a Type II error is denoted by β.

In the above case, if the analysis concludes that paracetamol is better than placebo, we reject H0, which is the correct decision. The probability of making this correct decision is called the "power" of the test.

Power = P(Reject H0 | H1 is true) = 1 − β.
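Power can also be estimated by simulation: generate studies in which H1 is true, and count how often H0 is correctly rejected. A sketch under assumed parameters (a true effect of 0.5 standard deviations and 50 subjects per group are illustrative choices, not values from the paracetamol example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 5_000

# Simulate studies in which H1 is true: the drug group genuinely
# improves on placebo. Rejecting H0 is the correct decision;
# failing to reject it is a Type II error.
rejections = 0
for _ in range(n_trials):
    placebo = rng.normal(loc=0.0, scale=1.0, size=50)
    drug = rng.normal(loc=0.5, scale=1.0, size=50)  # assumed true effect
    _, p_value = stats.ttest_ind(drug, placebo)
    if p_value < alpha:
        rejections += 1

power = rejections / n_trials  # estimate of 1 - beta
beta = 1.0 - power             # estimated Type II error rate
print(f"power ~ {power:.3f}, beta ~ {beta:.3f}")
```

Note that power and β always sum to one: anything that raises power (larger samples, larger true effects, higher α) lowers the Type II error rate, and vice versa.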

We can tabulate Type I and Type II errors as follows:

| Decision taken \ Actual fact | H0 is true   | H1 is true    |
|------------------------------|--------------|---------------|
| Reject H0                    | Type I error | No error      |
| Accept H0                    | No error     | Type II error |

Example: Statistical power and Type II error

When preparing your clinical study, you complete a power analysis and determine that with your sample size, you have an 80% chance of detecting an effect size of 20% or greater. An effect size of 20% means that the drug intervention reduces symptoms by 20% more than the control treatment.

However, a Type II error may occur if the true effect is smaller than this size. A smaller effect size is unlikely to be detected in your study due to inadequate statistical power.
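This kind of power analysis can be sketched with the `statsmodels` library. The numbers below are assumptions for illustration (a standardized effect size, Cohen's d, rather than the 20% symptom-reduction figure above):

```python
# Sample-size planning for a two-sample t test (illustrative values,
# not the actual study's parameters).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many subjects per group are needed to detect an assumed
# "medium" effect (Cohen's d = 0.5) with 80% power at alpha = 0.05?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"required n per group: {n_per_group:.1f}")

# Conversely, if the true effect is smaller (d = 0.3) but the study
# was sized for d = 0.5, the achieved power drops well below 0.80,
# raising the Type II error rate.
achieved = analysis.power(effect_size=0.3, nobs1=64, alpha=0.05)
print(f"power if true d = 0.3: {achieved:.2f}")
```

This illustrates the trade-off in the example: a study powered at 80% for one effect size has much less than 80% power, and hence a higher β, against any smaller true effect.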
