==Scientific Methods for Health Sciences - Analysis of Variance (ANOVA)==
''SMHS ANOVA; latest revision as of 16:29, 8 October 2014.''
===Overview===
[[EBook#Chapter_XI:_Analysis_of_Variance_.28ANOVA.29|Analysis of Variance (ANOVA)]] is a method that is commonly applied to analyze differences between group means. In ANOVA, we divide the observed variance into components attributed to different sources of variation. It is a widely used statistical technique that provides a test of whether or not the means of several groups are equal; ANOVA can be thought of as a generalized t-test for more than 2 groups. If there are only 2 groups, the ANOVA results coincide with the corresponding results of a 2-sample independent t-test. Here, we introduce one-way and two-way ANOVA and provide examples.

===Motivation===
In the previous two-sample inference, we applied a t-test to compare two independent group means. What if we want to compare more than 2 independent samples? In this case, we need to decompose the entire variation into components that allow us to analyze the variance of the whole dataset. Suppose 5 varieties of a particular crop are to be compared. A field was divided into 20 plots, with each variety planted in 4 plots. The measurements are shown in the table below:
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! A !! B !! C !! D !! E
|-
| 26.2 || 29.2 || 29.1 || 21.3 || 20.1
|-
| 24.3 || 28.1 || 30.8 || 22.4 || 19.3
|-
| 21.8 || 27.3 || 33.9 || 24.3 || 19.9
|-
| 28.1 || 31.2 || 32.8 || 21.8 || 22.1
|}

Grouped by variety, the same data are:

{| class="wikitable" style="text-align:center;" border="1"
|-
! A
| 26.2, 24.3, 21.8, 28.1
|-
! B
| 29.2, 28.1, 27.3, 31.2
|-
! C
| 29.1, 30.8, 33.9, 32.8
|-
! D
| 21.3, 22.4, 24.3, 21.8
|-
! E
| 20.1, 19.3, 19.9, 22.1
|}
</center>
Using ANOVA, the data are regarded as random samples from 5 populations. Suppose the population means are denoted as $\mu_{1},\mu_{2},\mu_{3},\mu_{4},$ and $\mu_{5}$, and the population standard deviations are denoted as $\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},$ and $\sigma_{5}$. One approach would be to apply $\binom{5}{2}=10$ separate t-tests and compare all independent pairs of groups. However, in this case, ANOVA is much easier and more powerful.
===Theory===

====One-way ANOVA====
One-way ANOVA expands our inference methods to study and compare $k$ independent samples. In this case, we will be decomposing the entire variation in the data into independent components.
*Notation: $y_{ij}$ is the $j^{th}$ measurement from group $i$; $k$ is the number of groups; $n_{i}$ is the number of observations in group $i$; $n$ is the total number of observations, $n=n_{1}+n_{2}+\cdots+n_{k}$. The group mean for group $i$ is $\bar y_{i.}=\frac{\sum_{j=1}^{n_{i}} y_{ij}} {n_{i}}$, and the grand mean is $\bar y =\bar y_{..}=\frac{\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}y_{ij}}{n}$.
*Difference between means (i.e., compare each group mean to the grand mean):
** (Total) The total sum of squares is $SST(total)=\sum_{i=1}^{k}\sum_{j=1}^{n_i}(y_{ij}-\bar y_{..})^{2}$, with degrees of freedom $df(total)=n-1$; the total variance is $SST(total)$ divided by $df(total)$.
** (Between/Treatment) The variation of each group mean around the grand mean: $SST(between)=\sum_{i=1}^{k} {n_{i} (\bar y_{i.}-\bar y_{..})^2}$, with degrees of freedom $df(between)=k-1$.
** (Within/Error) The sum of squares due to error (the combined variation within each group): $SSE(within)=\sum_{i=1}^{k} {\sum_{j=1}^{n_i} {(y_{ij}-\bar y_{i.})^2}}$, with degrees of freedom $df(within)=n-k$.
** The ANOVA variance decomposition yields
$$\sum_{i=1}^{k} {\sum_{j=1}^{n_{i}} {(y_{ij}- \bar y_{..})^2 }} = \sum_{i=1}^{k} {n_{i} (\bar y_{i.}-\bar y_{..})^2} + \sum_{i=1}^{k} {\sum_{j=1}^{n_i} {(y_{ij}-\bar y_{i.})^2}},$$
:: that is, $SST(total)=SST(between)+SSE(within)$ and $df(total)=df(between)+df(within).$
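The decomposition above can be verified numerically. Below is a minimal Python sketch (not part of the original SOCR page; the page's own code is in R) that computes the three sums of squares for the crop data from the Motivation section and checks that SST equals SST(between) plus SSE(within):

```python
# Verify the one-way ANOVA decomposition SST = SS(between) + SS(within).
def anova_decomposition(groups):
    """Return (sst, ss_between, ss_within) for a list of groups of measurements."""
    all_obs = [y for g in groups for y in g]
    n = len(all_obs)
    grand = sum(all_obs) / n                      # grand mean, y-bar..
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    sst = sum((y - grand) ** 2 for y in all_obs)  # total sum of squares
    return sst, ss_between, ss_within

# crop measurements for the 5 varieties (A-E) from the Motivation section
groups = [[26.2, 24.3, 21.8, 28.1],
          [29.2, 28.1, 27.3, 31.2],
          [29.1, 30.8, 33.9, 32.8],
          [21.3, 22.4, 24.3, 21.8],
          [20.1, 19.3, 19.9, 22.1]]
sst, ssb, ssw = anova_decomposition(groups)
assert abs(sst - (ssb + ssw)) < 1e-9  # the decomposition identity holds
```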
*Calculations:
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! Variance Source !! Degrees of Freedom (df) !! Sum of Squares (SS) !! Mean Sum of Squares (MS) !! F-Statistic !! P-value
|-
| Treatment Effect (Between Group) || $k-1$ || $SST(between)=\sum_{i=1}^{k}{n_i(\bar{y}_{i.}-\bar{y}_{..})^2}$ || $MST(between)={SST(between)\over df(between)}$ || $F_0 = {MST(between)\over MSE(within)}$ || $P(F_{(df(between), df(within))} > F_0)$
|-
| Error (Within Group) || $n-k$ || $SSE(within)=\sum_{i=1}^{k}{\sum_{j=1}^{n_i}{(y_{ij}-\bar{y}_{i.})^2}}$ || $MSE(within)={SSE(within)\over df(within)}$ || colspan="2" | F-Distribution Calculator
|-
| Total || $n-1$ || $SST(total)=\sum_{i=1}^{k}{\sum_{j=1}^{n_i}{(y_{ij} - \bar{y}_{..})^2}}$ || colspan="3" | ANOVA Activity
|}
</center>
* ANOVA hypotheses (general form): $H_{0}:\mu_{1}=\mu_{2}=\cdots=\mu_{k}$; $H_{a}:\mu_{i}\neq\mu_{j}$ for some $i\neq j$. The test statistic is $F_{0}=\frac{MST(between)}{MSE(within)}$. If $F_{0}$ is large, then there is a lot of between-group variation relative to the within-group variation; that is, the discrepancies between the group means are large compared to the variability within the groups (error). Therefore, a large $F_{0}$ provides strong evidence against $H_{0}$.
*Example: given the following data from a hands-on study:
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! colspan="4" | Groups
|-
! Index !! A !! B !! C
|-
| 1 || 0 || 1 || 4
|-
| 2 || 1 || 0 || 5
|-
| 3 || || 2 ||
|-
! $n_{i}$
| 2 || 3 || 2
|-
! $s_{i}$ (group sum)
| 1 || 3 || 9
|-
! $\bar y_{i.}$
| 0.5 || 1 || 4.5
|}
</center>

Using this data, we have the following ANOVA table:
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! Variance Source !! Degrees of Freedom (df) !! Sum of Squares (SS) !! Mean Sum of Squares (MS) !! F-Statistic !! P-value
|-
| Treatment Effect (Between Group) || $3-1$ || $SST(between)=19.86$ || $MST(between)={19.86\over 2}$ || $F_0 = {MST(between)\over MSE(within)}=13.24$ || $P(F_{(2, 4)} > 13.24)=0.017$
|-
| Error (Within Group) || $7-3$ || $SSE(within)=3$ || $MSE(within)={3\over 4}$ || colspan="2" |
|-
| Total || $7-1$ || $SST(total)=22.86$ || colspan="3" |
|}
</center>

Based on the ANOVA table above, we can reject the null hypothesis at $\alpha=0.05.$

*ANOVA conditions: the test is valid if (1) design conditions: all groups of observations represent random samples from their respective populations, and all observations within each group are independent of each other; (2) population conditions: the $k$ population distributions must be approximately normal (if the sample sizes are large, the normality condition is less crucial), and the standard deviations of all populations should be equal, which can be slightly relaxed when $0.5\leq\frac{\sigma_{i}}{\sigma_{j}}\leq 2$ for all $i$ and $j$, i.e., no population standard deviation is more than twice any other.
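The numbers in the hands-on example's ANOVA table can be reproduced directly. The following Python sketch (an illustration, not part of the original page) computes the between- and within-group sums of squares and the F statistic for groups A = {0, 1}, B = {1, 0, 2}, and C = {4, 5}:

```python
# Reproduce the worked one-way ANOVA example (groups A, B, C).
groups = {"A": [0, 1], "B": [1, 0, 2], "C": [4, 5]}
obs = [y for g in groups.values() for y in g]
n, k = len(obs), len(groups)                      # n = 7 observations, k = 3 groups
grand = sum(obs) / n                              # grand mean
ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
sse = sum((y - sum(g) / len(g)) ** 2 for g in groups.values() for y in g)
f0 = (ssb / (k - 1)) / (sse / (n - k))            # F = MST(between) / MSE(within)
print(round(ssb, 2), round(sse, 2), round(f0, 2))  # -> 19.86 3.0 13.24
```

These match the table entries: $SST(between)=19.86$, $SSE(within)=3$, and $F_0=13.24$.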
====Two-way ANOVA====
Two-way ANOVA decomposes the variance of a dataset into independent (orthogonal) components when we have two grouping factors.

Notation first. The two-way model is $y_{ijk}=\mu+\tau_{i}+\beta_{j}+\gamma_{ij}+\varepsilon_{ijk},$ for all $1\leq i\leq a, 1\leq j\leq b$ and $1\leq k\leq r.$ The measurement $y_{ijk}$ is the $k^{th}$ replicate at A-factor level $i$ and B-factor level $j$, where factor A has $a$ levels, factor B has $b$ levels, and $r$ is the number of replications per cell; $N$ is the total number of observations, $N=a\times b\times r$. Here $\mu$ is the overall mean response, $\tau_{i}$ is the effect due to the $i^{th}$ level of factor A, $\beta_{j}$ is the effect due to the $j^{th}$ level of factor B, and $\gamma_{ij}$ is the effect due to any interaction between the $i^{th}$ level of factor A and the $j^{th}$ level of factor B. The cell mean for A-factor level $i$ and B-factor level $j$ is $\bar{y}_{ij.}=\frac{\sum_{k=1}^{r} {y_{ijk}}} {r},$ the grand mean is $\bar {y} =\bar{y}_{...} = \frac{\sum_{k=1}^{r} {\sum_{i=1}^{a} {\sum_{j=1}^{b} {y_{ijk}}}}} {N}$, and we have:
$$SST(total)=SS(A)+SS(B)+SS(AB)+SSE,$$
where:
$$SST(total)=\sum_{i=1}^{a} {\sum_{j=1}^{b} {\sum_{k=1}^{r} {(y_{ijk}-\bar{y}_{...})^2}}},$$

$$SS(A)=r\times b \times \sum_{i=1}^{a} {(\bar{y}_{i..}-\bar{y}_{...})^2},$$

$$SS(B)=r\times a \times \sum_{j=1}^{b} {(\bar{y}_{.j.}-\bar{y}_{...})^2},$$

$$SS(AB) =r\times \sum_{i=1}^{a} {\sum_{j=1}^{b} {(\bar{y}_{ij.}-\bar{y}_{i..}-\bar{y}_{.j.}+\bar{y}_{...})^2}},$$

$$SSE= \sum_{i=1}^{a} {\sum_{j=1}^{b} {\sum_{k=1}^{r}{(y_{ijk}-\bar{y}_{ij.})^2}}}.$$
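The two-way decomposition can also be checked numerically. The Python sketch below (an illustration, not part of the original page; the cell values are made up) computes each sum of squares for a small balanced design with $a=2$, $b=3$, and $r=2$ replicates, and verifies that they add up to $SST(total)$:

```python
# Verify SST = SS(A) + SS(B) + SS(AB) + SSE for a balanced 2x3 design, r = 2.
# Cell (i, j) holds the r replicate measurements (hypothetical numbers).
y = {(1, 1): [93.0, 91.0], (1, 2): [136.0, 140.0], (1, 3): [198.0, 202.0],
     (2, 1): [88.0, 90.0], (2, 2): [148.0, 150.0], (2, 3): [279.0, 275.0]}
a, b, r = 2, 3, 2
grand = sum(sum(v) for v in y.values()) / (a * b * r)              # y-bar...
cell = {ij: sum(v) / r for ij, v in y.items()}                     # y-bar_ij.
mean_a = {i: sum(cell[(i, j)] for j in range(1, b + 1)) / b for i in range(1, a + 1)}
mean_b = {j: sum(cell[(i, j)] for i in range(1, a + 1)) / a for j in range(1, b + 1)}
ss_a = r * b * sum((m - grand) ** 2 for m in mean_a.values())
ss_b = r * a * sum((m - grand) ** 2 for m in mean_b.values())
ss_ab = r * sum((cell[(i, j)] - mean_a[i] - mean_b[j] + grand) ** 2
                for i in range(1, a + 1) for j in range(1, b + 1))
sse = sum((v - cell[ij]) ** 2 for ij, vals in y.items() for v in vals)
sst = sum((v - grand) ** 2 for vals in y.values() for v in vals)
assert abs(sst - (ss_a + ss_b + ss_ab + sse)) < 1e-6  # decomposition holds
```

Note that this additivity relies on the design being balanced (equal $r$ in every cell); for unbalanced designs the components are no longer orthogonal.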
*Hypotheses:
**Null hypotheses: (1) the population means of the first factor are equal, which is like the one-way ANOVA for the row factor; (2) the population means of the second factor are equal, which is like the one-way ANOVA for the column factor; (3) there is no interaction between the two factors, which is similar to performing a [[AP_Statistics_Curriculum_2007_Contingency_Indep|test for independence with contingency tables]].
**Factors: factor A and factor B are the independent variables in a two-way ANOVA.
**Treatment groups: formed by taking all possible combinations of the levels of the two factors. For example, if factor A has 3 levels and factor B has 5 levels, then there will be $3\times 5=15$ different treatment groups.
**Main effect: the effect of one independent variable (factor) considered one at a time; the interaction is ignored for this part.
**Interaction effect: the joint effect of the two factors beyond their separate main effects. Its degrees of freedom are the product of the degrees of freedom of the two factors.
*Calculations: It is assumed that main effect A has $a$ levels (so $df(A) = a-1$), main effect B has $b$ levels (so $df(B) = b-1$), $r$ is the sample size of each treatment cell, and $N = a\times b\times r$ is the total sample size. Notice that the overall degrees of freedom is once again one less than the total sample size.
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! Variance Source !! Degrees of Freedom (df) !! Sum of Squares (SS) !! Mean Sum of Squares (MS) !! F-Statistic !! P-value
|-
| Main Effect A || $df(A)=a-1$ || $SS(A)=r\times b\times\sum_{i=1}^{a}{(\bar{y}_{i..}-\bar{y}_{...})^2}$ || $MS(A)={SS(A)\over df(A)}$ || $F_0 = {MS(A)\over MSE}$ || $P(F_{(df(A), df(E))} > F_0)$
|-
| Main Effect B || $df(B)=b-1$ || $SS(B)=r\times a\times\sum_{j=1}^{b}{(\bar{y}_{.j.}-\bar{y}_{...})^2}$ || $MS(B)={SS(B)\over df(B)}$ || $F_0 = {MS(B)\over MSE}$ || $P(F_{(df(B), df(E))} > F_0)$
|-
| A vs. B Interaction || $df(AB)=(a-1)(b-1)$ || $SS(AB)=r\times \sum_{i=1}^{a}{\sum_{j=1}^{b}{(\bar{y}_{ij.}-\bar{y}_{i..}-\bar{y}_{.j.}+\bar{y}_{...})^2}}$ || $MS(AB)={SS(AB)\over df(AB)}$ || $F_0 = {MS(AB)\over MSE}$ || $P(F_{(df(AB), df(E))} > F_0)$
|-
| Error || $df(E)=N-a\times b$ || $SSE=\sum_{k=1}^r{\sum_{i=1}^{a}{\sum_{j=1}^{b}{(y_{ijk}-\bar{y}_{ij.})^2}}}$ || $MSE={SSE\over df(E)}$ || colspan="2" |
|-
| Total || $N-1$ || $SST=\sum_{k=1}^r{\sum_{i=1}^{a}{\sum_{j=1}^{b}{(y_{ijk}-\bar{y}_{...})^2}}}$ || colspan="3" |
|}
</center>
* Two-way ANOVA is valid if:
:: (1) the populations from which the samples were obtained are normally or approximately normally distributed;
:: (2) the samples are independent;
:: (3) the variances of the populations are equal;
:: (4) the groups have the same sample size.
* Studying the residuals: When the ANOVA assumptions hold, the residuals should be approximately normally distributed. Otherwise, there may be outliers in the data, one or more groups may come from non-normal distributions, or the linear model may simply be unable to capture the complexity of the data. Normality testing of the residuals gives an indication of whether the sampled populations are roughly normal, but it may not indicate the underlying cause of any non-normality. Normality tests include:
** The [[SOCR_EduMaterials_AnalysisActivities_KolmogorovSmirnoff|Kolmogorov-Smirnov test]] and the [http://socr.umich.edu/html/dist Anderson-Darling test], which are based on the estimate of the empirical CDF.
** The [[AP_Statistics_Curriculum_2007_MultiVar_ANOVA#Wilk.E2.80.99s_.5C.28.5CLambda.5C.29 | Shapiro-Wilk test]], a correlation-based test applicable for $n < 50$.
** Graphical methods, which can convey the patterns in the residuals; however, they may be subjective.
* Examples
** [[AP_Statistics_Curriculum_2007_ANOVA_2Way | Clinical example of knee pain study]]
** [[SOCR_Activity_ANOVA_SnailsSexualDimorphism| Snails Sexual Dimorphism activity]]
===Applications===
* This activity presents the Box-and-Whisker chart, which is often used in exploratory data analysis. It demonstrates the range, standard deviation, mean, and quartiles of the values and is especially useful for comparing statistical data. The article illustrates the implementation of the chart in SOCR with a comprehensive introduction and includes applications of the method in different areas.
* The SOCR Two-Way ANOVA Java Applet includes examples of two-way analysis of variance using SOCR tools. It illustrates the application of two-way ANOVA with examples implemented in SOCR and extends the discussion to software packages such as R and SAS.
* The SOCR Snails Sexual Dimorphism Activity shows an application of ANOVA. This activity recreates part of the design of a classification method for the Cocholotoma septemspirale snail. By observing multiple traits of the shells, the original researchers were able to decide on a series of dimorphisms (differences in form) between male and female snails. The article presents a comprehensive illustration of the example.
===Software===
* [http://socr.umich.edu/html/ana/ SOCR Analyses Java Applets]
* [[SOCR_EduMaterials_AnalysisActivities_ANOVA_1 | One-Way ANOVA Activity]] and [[SOCR_Activity_ANOVA_SnailsSexualDimorphism| SOCR 2-way ANOVA Activity]]
* R:
:: Generic R ANOVA
 # fit a model
 # one-way ANOVA with completely randomized design
 fit <- aov(y ~ A, data = mydata)
 # randomized block design (B as the blocking factor)
 fit <- aov(y ~ A + B, data = mydata)
 # two-way factorial design
 fit <- aov(y ~ A + B + A*B, data = mydata)
 # inspect the fitted model (type I ANOVA table)
 summary(fit)
 # type III SS and F test
 drop1(fit, ~., test="F")
+ | |||
+ | :: Example R ANOVA: Suppose an FDA study plans to test four alternative pain relief medications for migraine headache. For the experiment 36 volunteers agreed to participate in the study and they were randomly assigned to four groups corresponding to the four treatments. The subjects are instructed to take the pain killers during their next migraine headache episode and report their pain on a scale of 1 (no pain) to 10 (worst pain) one hour after taking the drug. | ||
+ | # the 4x9=36 reported pain scores | ||
+ | pain = c(4, 5, 4, 3, 2, 4, 3, 4, 4, 6, 8, 4, 5, 4, 6, 5, 8, 6, 6, 7, 6, 6, 7, 5, 6, 5, 5, 7, 8, 7, 5, 8, 6, 6, 4, 9) | ||
+ | drug = c(rep("A",9), rep("B",9), rep("C",9), rep("D",9)) # make the group labels | ||
+ | migraine = data.frame(pain,drug) # make a frame from the data | ||
+ | migraine # print the data | ||
+ | plot(pain ~ drug, data=migraine) # EDA: plot the box-and-whisker plots for the 4 groups separately | ||
+ | |||
+ | aov_res = aov(pain ~ drug, data=migraine) # run the ANOVA | ||
+ | summary(aov_res) # show summary results | ||
+ | |||
+ | :: The ANOVA F-test addresses the question whether or not there are significant differences in the $k$ population means. It does not directly provide information about how and which group means may differ. When rejecting $H_0$ in ANOVA, additional tests may be required to determine what groups may be different in their mean response. | ||
+ | |||
+ | # compute the pairwise comparisons between group means (how many of these are there in this case?) | ||
+ | # to correct for multiple testing, we can specify a specific "p.adjust.methods" correction method | ||
+ | # (e.g., “Bonferroni”, "holm", "hochberg", "hommel", "BH", "BY", "fdr", "none") | ||
+ | pairwise.t.test(pain, drug, p.adjust="bonferroni") | ||
+ | |||
+ | # alternatively, the Tukey's method (Honest Significance Test) may be used to create a set of | ||
+ | # confidence intervals on the differences between means with the specified family-wise error rate. | ||
+ | TukeyHSD(aov_res, conf.level =0.95) | ||
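For readers who want to see what the `aov()` call computes, here is the same one-way ANOVA done by hand in Python (an illustration, not part of the original page), using the 36 migraine pain scores listed above:

```python
# One-way ANOVA F statistic for the migraine data, computed from scratch.
pain = [4, 5, 4, 3, 2, 4, 3, 4, 4,    # drug A
        6, 8, 4, 5, 4, 6, 5, 8, 6,    # drug B
        6, 7, 6, 6, 7, 5, 6, 5, 5,    # drug C
        7, 8, 7, 5, 8, 6, 6, 4, 9]    # drug D
groups = [pain[0:9], pain[9:18], pain[18:27], pain[27:36]]
n, k = len(pain), len(groups)                     # n = 36, k = 4
grand = sum(pain) / n
ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
sse = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
f0 = (ssb / (k - 1)) / (sse / (n - k))            # F on (3, 32) degrees of freedom
print(round(f0, 2))  # -> 9.81
```

An F statistic of about 9.81 on (3, 32) degrees of freedom is far in the right tail of the F distribution, so `summary(aov_res)` reports a very small P-value and we reject the hypothesis of equal mean pain across the four drugs.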
===Problems===
* Tom was shopping for a ping pong table that could be taken apart quickly and easily. For some reason, the salesman happened to have a table of the assembly times (sec) for three brands of tables. Using ANOVA, do you think there is a difference in the average assembly time for the three brands of ping pong tables?
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! Assembly time (sec) !! Brand
|-
| 93.0 || 1
|-
| 67.0 || 1
|-
| 77.0 || 1
|-
| 92.0 || 1
|-
| 97.0 || 1
|-
| 62.0 || 1
|-
| 136.0 || 2
|-
| 120.0 || 2
|-
| 115.0 || 2
|-
| 104.0 || 2
|-
| 115.0 || 2
|-
| 121.0 || 2
|-
| 102.0 || 2
|-
| 130.0 || 2
|-
| 198.0 || 3
|-
| 217.0 || 3
|-
| 209.0 || 3
|-
| 221.0 || 3
|-
| 190.0 || 3
|}
</center>
: (a) We can say that there is no reason to reject the null hypothesis that the average assembly times are the same.
* Tom is curious to see if two-door vehicles drive faster on average than four-door vehicles. He parks behind a bush so as not to be seen, and records the car type and the speed reading. Here are the results (1 means two-door, and 2 means four-door):
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! Speed (MPH) !! Vehicle type
|-
| 45 || 2
|-
| 45 || 2
|-
| 40 || 2
|-
| 69 || 1
|-
| 72 || 1
|-
| 40 || 1
|-
| 75 || 2
|-
| 19 || 2
|-
| 62 || 1
|-
| 43 || 2
|-
| 75 || 1
|-
| 42 || 2
|-
| 58 || 1
|-
| 58 || 1
|-
| 47 || 2
|-
| 48 || 2
|-
| 49 || 2
|-
| 45 || 2
|-
| 54 || 2
|}
</center>
: At the 1% significance level, should we reject the null hypothesis that the average speed is the same for both types of vehicles?
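As noted in the Overview, with only $k=2$ groups the one-way ANOVA coincides with the pooled two-sample t-test: $F_0 = t^2$. The Python sketch below (an illustration for checking your work, not part of the original page) verifies this identity on the speed data:

```python
# For k = 2 groups, the one-way ANOVA F statistic equals the squared pooled t.
import math

two_door  = [69, 72, 40, 62, 75, 58, 58]                      # vehicle type 1
four_door = [45, 45, 40, 75, 19, 43, 42, 47, 48, 49, 45, 54]  # vehicle type 2
n1, n2 = len(two_door), len(four_door)
m1, m2 = sum(two_door) / n1, sum(four_door) / n2
grand = (sum(two_door) + sum(four_door)) / (n1 + n2)
ssb = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2         # between groups
sse = (sum((y - m1) ** 2 for y in two_door)
       + sum((y - m2) ** 2 for y in four_door))               # within groups
f0 = ssb / (sse / (n1 + n2 - 2))                              # F on (1, n1+n2-2) df

sp2 = sse / (n1 + n2 - 2)                                     # pooled variance
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))            # pooled t statistic
assert abs(f0 - t ** 2) < 1e-9                                # F = t^2
```

Comparing $F_0$ against the 1% critical value of the $F_{(1,\,n_1+n_2-2)}$ distribution answers the question.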
* Suppose that two factors, A and B, are thought to affect the top speed of a car. We will use a two-way ANOVA analysis. Are the population means of factor A equal?
<center>
{| class="wikitable" style="text-align:center;" border="1"
|-
! Top speed !! A !! B
|-
| 93.0 || 1 || 1
|-
| 136.0 || 1 || 2
|-
| 198.0 || 1 || 3
|-
| 88.0 || 2 || 1
|-
| 148.0 || 2 || 2
|-
| 279.0 || 2 || 3
|}
</center>
: (a) Yes, they are equal.
: (b) No, they are not equal.
* Use the data from the table above and apply the two-way ANOVA analysis. Are the population means of factor B equal?
: (a) Yes, they are equal.
: (b) No, they are not equal.
* Use the data from the table above and apply the two-way ANOVA analysis. Is there an interaction effect between the two factors?
: (a) Yes, there is an interaction effect.
: (b) No, there is no interaction effect.
===References===
* [http://wiki.stat.ucla.edu/socr/index.php/Probability_and_statistics_EBook#Chapter_XI:_Analysis_of_Variance_.28ANOVA.29 SOCR EBook: Analysis of Variance (ANOVA)]
* [http://en.wikipedia.org/wiki/Analysis_of_variance Analysis of variance (Wikipedia)]
Latest revision as of 16:29, 8 October 2014
Contents
Scientific Methods for Health Sciences - Analysis of Variance (ANOVA)
Overview
Analysis of Variance (ANOVA) is a method that is commonly applied to analyze differences between group means. In ANOVA, we divide the observed variance into components attributed to different sources of variation. It is a widely used statistical technique that provides a statistical test of whether or not the means of several groups are equal; ANOVA can be thought of as a generalized t-test for more than 2 groups. If there are only 2 groups, ANOVA results coincide with the corresponding results of a 2-sample independent t-test. Here, we introduce ANOVA, both one-way and two-way, and provide examples.
Motivation
In the previous two-sample inference, we applied a t-test to compare two independent group means. What if we want to compare more than 2 independent samples? In this case, we will need to decompose the entire variation into components that allow us to analyze the variance of the entire dataset. Suppose 5 varieties of a particular crop are tested for further study. A field was divided into 20 plots, with each variety planted in 4 plots. The measurements are shown in the table below:
A | B | C | D | E |
26.2 | 29.2 | 29.1 | 21.3 | 20.1 |
24.3 | 28.1 | 30.8 | 22.4 | 19.3 |
21.8 | 27.3 | 33.9 | 24.3 | 19.9 |
28.1 | 31.2 | 32.8 | 21.8 | 22.1 |
A | 26.2,24.3,21.8,28.1 |
B | 29.2,28.1,27.3,31.2 |
C | 29.1,30.8,33.9,32.8 |
D | 21.3,22.4,24.3,21.8 |
E | 20.1,19.3,19.9,22.1 |
Using ANOVA, the data are regarded as random samples from 5 populations. Suppose the population means are denoted as $\mu_{1},\mu_{2},\mu_{3},\mu_{4},$ and $\mu_{5}$, and the population standard deviations are denoted as $\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},$ and $\sigma_{5}$. One method would be to apply $\binom{5}{2}=10$ separate t-tests and compare all independent pairs of groups. However, in this case, ANOVA would be much easier and more powerful.
Theory
One-way ANOVA
One-way ANOVA expands our inference methods to study and compare k independent samples. In this case, we will be decomposing the entire variation in the data into independent components.
- Notation: $y_{ij}$ is the $j^{th}$ measurement from group $i$; $k$ is the number of groups; $n_{i}$ is the number of observations in group $i$; $n$ is the total number of observations and $n=n_{1}+n_{2}+⋯+n_{k}$. The group mean for group $i$ is $\bar y_{i}$=$\frac{\sum_{j=1}^{n_{i}} y_{ij}} {n_{i}}$, and the grand mean is $\bar y =\bar y_{..}=$ $\frac{\sum_{i=1}^{k}\sum_{j=1}^{n}_{i}y_{ij}}{n}$
- Difference between means (i.e., compare each group mean to the grand mean).
- (Total) The total variance is calculated as the total sum of squares (SST) divided by the total degrees of freedom (df(total)). $SST=\sum_{i=1}^{k}\sum_{j=1}^{n_i}(y_{ij}-\bar y_{..})^{2}$ and $df(total)=n-1$.
- (Between/Treatment) The difference between each group mean and the grand mean: $SST(between)$=$\sum_{i=1}^{k} {n_{i} (\bar y_{i.}-\bar y_{..})^2}$, degrees of freedom $df(between)=k-1$;
- (Within/Error) Sum square due to error (combination of the variations within each group): $SSE(Error)=\sum_{i=1}^{k} n_{i}(\bar y_{i.}- \bar y_{..})^2$, degrees of freedom $df(within)=n-k$.
- ANOVA variance decomposition yields
$$\sum_{i=1}^{k} {\sum_{j=1}^{n_{i}} {(y_{ij}- \bar y_{..})^2 }} = \sum_{i=1}^{k} {n_{i} (\bar y_{i.}-\bar y_{..})^2} + \sum_{i=1}^{k} {\sum_{j=1}^{n_i} {(y_{ij}-\bar y_{i.})^2}},$$
- that is $SST(total)$=$SST(between)$+$SSE(within)$ and $df(total)$=$df(between)$+$df(within).$
- Calculations:
Variance Source | Degrees of Freedom (df) | Sum of Squares (SS) | Mean Sum of Squares (MS) | F-Statistics | P-value |
Treatment Effect (Between Group) | k-1 | \(\sum_{i=1}^{k}{n_i(\bar{y}_{i,.}-\bar{y})^2}\) | \(MST(Between)={SST(Between)\over df(Between)}\) | \(F_o = {MST(Between)\over MSE(Within)}\) | \(P(F_{(df(Between), df(Within))} > F_o)\) |
Error (Within Group) | n-k | \(\sum_{i=1}^{k}{\sum_{j=1}^{n_i}{(y_{i,j}-\bar{y}_{i,.})^2}}\) | \(MSE(Within)={SSE(Within)\over df(Within)}\) | F-Distribution Calculator | |
Total | n-1 | \(\sum_{i=1}^{k}{\sum_{j=1}^{n_i}{(y_{i,j} - \bar{y})^2}}\) | ANOVA Activity |
- ANOVA hypotheses (general form): $H_{\sigma}:\mu_{1}=\mu_{2}=⋯=\mu_{k}$; $H_{a}:\mu_{i}≠\mu_{j}$ for some $i≠j$. The test statistics: $F_{0}=\frac{MST(between)}{MSE(within)}$ , if $F_{0}$ is large, then there is a lot between group variation, relative to the within group variation. Therefore, the discrepancies between the group means are large compared to the variability within the groups (error). That is large $F_{0}$ provides strong evidence against $H_{0}$.
- Examples: given the following data from a hands-on study.
Groups | |||
Index | A | B | C |
1 | 0 | 1 | 4 |
2 | 1 | 0 | 5 |
3 | 2 | ||
$n_{i}$ | 2 | 3 | 2 |
$s$ | 1 | 3 | 9 |
$\bar y_{l}$ | 0.5 | 1 | 4.5 |
Using this data, we have the following ANOVA table:
Variance Source | Degrees of Freedom (df) | Sum of Squares (SS) | Mean Sum of Squares (MS) | F-Statistics | P-value |
Treatment Effect (Between Group) | 3-1 | \(\sum_{i=1}^{k}{n_i(\bar{y}_{i,.}-\bar{y})^2}=19.86\) | \({SST(Between)\over df(Between)}={19.86\over 2}\) | \(F_o = {MST(Between)\over MSE(Within)}=13.24\) | \(P(F_{(df(Between), df(Within))} > F_o)=0.017\) |
Error (Within Group) | 7-3 | \(\sum_{i=1}^{k}{\sum_{j=1}^{n_i}{(y_{i,j}-\bar{y}_{i,.})^2}}=3\) | \({SSE(Within)\over df(Within)}={3\over 4}\) | F-Distribution Calculator | |
Total | 7-1 | \(\sum_{i=1}^{k}{\sum_{j=1}^{n_i}{(y_{i,j} - \bar{y})^2}}=22.86\) | Anova Activity |
Based on the ANOVA table above, we can reject the null hypothesis at $\alpha=0.05.$
- ANOVA conditions: valid if (1) design conditions: all groups of observations represent random samples from their population respectively. Plus, all the observations within each group are independent of each other; (2) population conditions: the k population distributions must be approximately normal. If sample size is large, the normality condition is less crucial. Plus, the standard deviations of all populations are equal, which can be slightly relaxed when $0.5≤\frac{\sigma_{i}}{\sigma_{j}}≤2,$ for all $i$ and $j$, none of the population variance is twice larger than any of the other ones.
Two-way ANOVA
Two-way ANOVA decomposes the variance of a dataset into independent (orthogonal) components when we have two grouping factors. Notations first: two-way model: $y_{ijk}=\mu+\tau_{i}+\beta_{j}+γ_{ij}+\varepsilon_{ijk},$ for all $1≤i≤a,1≤j≤b$ and $1≤k≤r.$ The measurement $y_{ijk}$ represents A-factor level $i$, and B-factor level $j$, observation-index $k$ -- the number of replications; $a_{i}$ is the number of A-factor observations at level $i,a=a_{1}+⋯+a_{I}$; $b_{j}$ is the number of B-factor observations at level $j$, $b=b_{1}+⋯+b_{J}$; $N$ is the total number of observations and $N=a\times b\times r$. Here $\mu$ is the overall mean response, $\tau_{i}$ is the effect due to the $i^{th}$ level of factor A, $\beta_{j}$ is the effect due to the $j^{th}$ level of factor B, and $\gamma_{ij}$ is the effect due to any interaction between the $i^{th}$ level of factor A and $j^{th}$ level of factor B. The mean for A-factor group mean at level $I$ and B-factor at level $j$ is $\bar{y}_{ij.}=\frac{\sum_{k=1}^{r} {y_{ijk}}} {r},$ the grand mean is $\bar {y} =\bar{y}_{...} = \frac{\sum_{k=1}^{r} {\sum_{i=1}^{a} {\sum_{j=1}^{b} {y_{ijk}}}}} {n}$, and we have we have: $$SST(total)=SS(A)+SS(B)+SS(AB)+SSE,$$ where: $$ SST(total)=\sum_{i=1}^{a} {\sum_{j=1}^{b} {\sum_{k=1}^{r} {(y_{ijk}-\bar{y}_{...})^2}}},$$
$$SS(A)=r\times b \times \sum_{i=1}^{a} {(\bar{y}_{i..}-\bar{y}_{...})^2},$$
$$SS(B)=r\times a \times \sum_{j=1}^{b} {(\bar{y}_{.j.}-\bar{y}_{...})^2},$$
$$SS(AB) =r\times \sum_{i=1}^{a} {\sum_{j=1}^{b} {(\bar{y}_{ij.}-\bar{y}_{i..}-\bar{y}_{.j.}-\bar{y}_{...})^2}},$$
$$SSE= \sum_{i=1}^{a} {\sum_{j=1}^{b} {\sum_{k=1}^{r}{(y_{ijk}-\bar{y}_{ij.})^2}}}.$$
- Hypotheses:
- Null hypotheses: (1) the population means of the first factor are equal, which is like the one-way ANOVA for the row factor; (2) the population means of the second factor are equal, which is like the one-way ANOVA for the column factor; (3) there is no interaction between the two factors, which is similar to performing a test for independence with contingency tables.
- Factors: factor A and factor B are independent variables in a two-way ANOVA.
- Treatment groups: formed by making all possible combinations of two factors. For example, if the factor A has 3 levels and factor B has 5 levels, then there will be $3\times 5=15$ different treatment groups.
- Main effect: involves the dependent variable one at a time. The interaction is ignored for this part.
- Interaction effect: the effect that one factor has on the other factor. The degree of freedom is the product of the two degrees of freedom of each factor.
- Calculations: It is assumed that main effect A has $a$ levels (and $df(A) = a-1$), main effect B has $b$ levels (and ($df(B) = b-1$), $r$ is the sample size of each treatment, and $N = a\times b\times r$ is the total sample size. Notice the overall degree of freedom is once again one less than the total sample size.
Variance Source | Degrees of Freedom (df) | Sum of Squares (SS) | Mean Sum of Squares (MS) | F-Statistics | P-value |
Main Effect A | df(A)=a-1 | \(SS(A)=r\times b\times\sum_{i=1}^{a}{(\bar{y}_{i,.,.}-\bar{y})^2}\) | \({SS(A)\over df(A)}\) | \(F_o = {MS(A)\over MSE}\) | \(P(F_{(df(A), df(E))} > F_o)\) |
Main Effect B | df(B)=b-1 | \(SS(B)=r\times a\times\sum_{j=1}^{b}{(\bar{y}_{., j,.}-\bar{y})^2}\) | \({SS(B)\over df(B)}\) | \(F_o = {MS(B)\over MSE}\) | \(P(F_{(df(B), df(E))} > F_o)\) |
A vs.B Interaction | df(AB)=(a-1)(b-1) | \(SS(AB)=r\times \sum_{i=1}^{a}{\sum_{j=1}^{b}{((\bar{y}_{i, j,.}-\bar{y}_{i, .,.})+(\bar{y}_{., j,.}-\bar{y}))^2}}\) | \({SS(AB)\over df(AB)}\) | \(F_o = {MS(AB)\over MSE}\) | \(P(F_{(df(AB), df(E))} > F_o)\) |
Error | \(N-a\times b\) | \(SSE=\sum_{k=1}^r{\sum_{i=1}^{a}{\sum_{j=1}^{b}{(\bar{y}_{i, j,k}-\bar{y}_{i, j,.})^2}}}\) | \({SSE\over df(Error)}\) | ||
Total | N-1 | \(SST=\sum_{k=1}^r{\sum_{i=1}^{a}{\sum_{j=1}^{b}{(\bar{y}_{i, j,k}-\bar{y}_{., .,.})^2}}}\) | ANOVA Activity |
- Two-way ANOVA is valid if:
- (1) the population from which the samples were obtained are normally or approximately normally distributed;
- (2) the samples are independent;
- (3) the variances of the populations are equal;
- (4) the groups have the same sample size.
- Studying the residuals: When the ANOVA assumptions hold, the residuals should be approximately normally distributed. Otherwise, there may be outliers in the data, one or more groups may come from non-normal distributions, or the linear mode is just not able to capture the data complexity. Normality testing of the residuals gives an indication of whether the sample populations are roughly normal, but may not indicate the underlying cause of this non-normality. Normality tests include:
- Kolmogorov-Smirnov test and Anderson-Darling test are based on the estimate of the empirical CDF.
- Shapiro-Wilk’s test is a correlation based test applicable for $n < 50$.
- Graphical methods can convey the patterns and residual however they may be subjective.
Applications
- This activity presents the Box and Whisker Chart, which is often used in exploratory data analyses. It demonstrates the range, standard deviation, mean and quartiles of the values and is especially useful in comparing statistical data. This article illustrated the implementation of the chart in SOCR with comprehensive introduction. It also included the application of this method in different areas.
- The SOCR Two-Way ANOVA Java Applet includes examples of two-way analysis of variance using SOCR tools. It illustrated the application of two-way ANOVA with examples applied in the SOCR. It also expanded the two-way ANOVA in softwares like R and SAS.
- Ther SOCR Snails Sexual Dimorphism Activity shows an application of ANOVA. This activity recreates part of the design of a classification method for the Cocholotoma septemspirale snail. By observing multiple traits of the shells, the original researchers were able to decide on a series of dimorphisms (difference in forms) between male and female snails. This article presents a comprehensive illustration of the example.
Software
- Generic R ANOVA
# fit a model # one-way ANOVA with completely randomized design fit <- aov(y ~ A, data = mydata) # randomized block design (B as the blocking factor) fit <- aov(y ~ A + B, data = mydata) # two-way factorial design fit <- aov(y ~ A + B + A*B, data = mydata) # to check out the model fitted with type I ANOVA table summary(fit) # type III SS and F test drop1(fit, ~., test=’F’)
- Example R ANOVA: Suppose an FDA study plans to test four alternative pain relief medications for migraine headache. For the experiment 36 volunteers agreed to participate in the study and they were randomly assigned to four groups corresponding to the four treatments. The subjects are instructed to take the pain killers during their next migraine headache episode and report their pain on a scale of 1 (no pain) to 10 (worst pain) one hour after taking the drug.
# the 4x9=36 reported pain scores
pain = c(4, 5, 4, 3, 2, 4, 3, 4, 4,
         6, 8, 4, 5, 4, 6, 5, 8, 6,
         6, 7, 6, 6, 7, 5, 6, 5, 5,
         7, 8, 7, 5, 8, 6, 6, 4, 9)
# make the group labels
drug = c(rep("A", 9), rep("B", 9), rep("C", 9), rep("D", 9))
# make a data frame from the data
migraine = data.frame(pain, drug)
# print the data
migraine
# EDA: plot the box-and-whisker plots for the 4 groups separately
plot(pain ~ drug, data = migraine)
aov_res = aov(pain ~ drug, data=migraine) # run the ANOVA summary(aov_res) # show summary results
- The ANOVA F-test addresses the question of whether or not there are significant differences among the $k$ population means. It does not directly indicate which group means differ, or how. When $H_0$ is rejected in ANOVA, additional tests may be required to determine which groups differ in their mean response.
# compute the pairwise comparisons between group means (how many of these are there in this case?)
# to correct for multiple testing, we can specify one of the "p.adjust.methods" corrections
# (e.g., "bonferroni", "holm", "hochberg", "hommel", "BH", "BY", "fdr", "none")
pairwise.t.test(pain, drug, p.adjust="bonferroni")
# alternatively, Tukey's method (Honest Significant Difference test) may be used to create a set of
# confidence intervals on the differences between means with the specified family-wise error rate
TukeyHSD(aov_res, conf.level = 0.95)
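The F-test also assumes roughly normal residuals and equal variances across groups. A minimal diagnostic sketch for the migraine example above, using the standard R functions bartlett.test() and shapiro.test() (the data are re-entered here so the snippet is self-contained):

```r
# refit the migraine model from the example above
pain = c(4, 5, 4, 3, 2, 4, 3, 4, 4, 6, 8, 4, 5, 4, 6, 5, 8, 6,
         6, 7, 6, 6, 7, 5, 6, 5, 5, 7, 8, 7, 5, 8, 6, 6, 4, 9)
drug = factor(c(rep("A", 9), rep("B", 9), rep("C", 9), rep("D", 9)))
aov_res = aov(pain ~ drug)

# homogeneity of variances across the 4 drug groups
bartlett.test(pain ~ drug)
# normality of the residuals
shapiro.test(residuals(aov_res))
# residual-vs-fitted and normal QQ plots for visual diagnostics
plot(aov_res, which = 1:2)
```

Large p-values in both tests would support the ANOVA assumptions; strong violations suggest a transformation or a non-parametric alternative such as kruskal.test().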
Problems
- Tom was shopping for a ping pong table that could be taken apart quickly and easily. For some reason, the salesman happened to have a table of assembly times (sec) for three brands of tables. Using ANOVA, do you think there is a difference in the average assembly time for the three brands of ping pong tables?
Assembly time (sec) | Brand |
---|---|
93.0 | 1 |
67.0 | 1 |
77.0 | 1 |
92.0 | 1 |
97.0 | 1 |
62.0 | 1 |
136.0 | 2 |
120.0 | 2 |
115.0 | 2 |
104.0 | 2 |
115.0 | 2 |
121.0 | 2 |
102.0 | 2 |
130.0 | 2 |
198.0 | 3 |
217.0 | 3 |
209.0 | 3 |
221.0 | 3 |
190.0 | 3 |
- (a) There is no reason to reject the null hypothesis that the average assembly times are the same.
- (b) We should reject the null hypothesis that the average assembly times are the same.
- Based on the data in the previous problem, what is the value of $R^2$?
- (a) 0.342
- (b) 0.143
- (c) 0.832
- (d) 0.943
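The two problems above can be worked through in R. A sketch: fit the one-way ANOVA to the assembly-time data, then compute $R^2$ as the between-group sum of squares divided by the total sum of squares:

```r
# assembly times for the three brands, from the table above
time  = c(93, 67, 77, 92, 97, 62,                  # brand 1
          136, 120, 115, 104, 115, 121, 102, 130,  # brand 2
          198, 217, 209, 221, 190)                 # brand 3
brand = factor(c(rep(1, 6), rep(2, 8), rep(3, 5)))

fit = aov(time ~ brand)
summary(fit)   # F-test of equal mean assembly times; the very large F leads us to reject H_0

# R^2 = SS(between) / SS(total), the proportion of variance explained by brand
ss = summary(fit)[[1]][["Sum Sq"]]
r2 = ss[1] / sum(ss)
r2             # approximately 0.943
```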
- Tom is curious to see whether two-door vehicles drive faster on average than four-door vehicles. He parks behind a bush so as not to be seen and records each car's type and speed reading. Here are the results (1 means two-door, 2 means four-door):
Speed (MPH) | Vehicle Type |
---|---|
45 | 2 |
45 | 2 |
40 | 2 |
69 | 1 |
72 | 1 |
40 | 1 |
75 | 2 |
19 | 2 |
62 | 1 |
43 | 2 |
75 | 1 |
42 | 2 |
58 | 1 |
58 | 1 |
47 | 2 |
48 | 2 |
49 | 2 |
45 | 2 |
54 | 2 |
- At the 1% significance level, should we reject the null hypothesis that the average speed is the same for both types of vehicles?
- (a) Yes, we should reject the null hypothesis.
- (b) No, we should not reject the null hypothesis.
- (c) There is not enough information.
- Based on the data above, what is the value of $R^2$?
- (a) 0.432
- (b) 0.983
- (c) 0.308
- (d) 0.231
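As above, the vehicle-speed problems can be checked with a short R sketch: a one-way ANOVA on vehicle type (equivalent to a pooled two-sample t-test for 2 groups), followed by the $R^2$ computation:

```r
# speeds and vehicle types (1 = two-door, 2 = four-door), from the table above
speed = c(45, 45, 40, 69, 72, 40, 75, 19, 62, 43,
          75, 42, 58, 58, 47, 48, 49, 45, 54)
type  = factor(c(2, 2, 2, 1, 1, 1, 2, 2, 1, 2,
                 1, 2, 1, 1, 2, 2, 2, 2, 2))

fit = aov(speed ~ type)
summary(fit)   # compare the reported p-value against alpha = 0.01

# R^2 = SS(between) / SS(total)
ss = summary(fit)[[1]][["Sum Sq"]]
r2 = ss[1] / sum(ss)
r2             # approximately 0.308
```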
- In a two-way ANOVA test, which of the following is not a typical null hypothesis?
- (a) The population means of the first factor are equal.
- (b) The population means of the first and second factor are equal.
- (c) The population means of the second factor are equal.
- (d) There is no interaction between the two factors.
- Suppose that two factors, A and B, are thought to affect the top speed of a car. We will use a two-way ANOVA. Are the population means of factor A equal?
Top Speed | A | B |
---|---|---|
93.0 | 1 | 1 |
136.0 | 1 | 2 |
198.0 | 1 | 3 |
88.0 | 2 | 1 |
148.0 | 2 | 2 |
279.0 | 2 | 3 |
- (a) Yes, they are equal.
- (b) No, they are not equal.
- Using the data above and the two-way ANOVA, are the population means of factor B equal?
- (a) Yes, they are equal.
- (b) No, they are not equal.
- Using the data from the table above and the two-way ANOVA, is there an interaction effect between the two factors?
- (a) Yes, there is an interaction effect.
- (b) No, there is no interaction effect.
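Note that with only one observation per A-by-B cell, the interaction cannot be formally tested (fitting A*B would leave zero residual degrees of freedom), so a sketch of this analysis fits the additive two-way model:

```r
# top speeds from the table above
speed = c(93, 136, 198, 88, 148, 279)
A = factor(c(1, 1, 1, 2, 2, 2))
B = factor(c(1, 2, 3, 1, 2, 3))

# additive two-way ANOVA; with one observation per cell,
# including the A:B interaction would leave no residual df
fit = aov(speed ~ A + B)
summary(fit)   # F-tests for the A and B main effects
```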
References
- SOCR Home page: http://www.socr.umich.edu