SMHS ANCOVA
Scientific Methods for Health Sciences - Analysis of Covariance (ANCOVA)
Overview
Analysis of Variance (ANOVA) is a common method for analyzing the differences between group means. Analysis of Covariance (ANCOVA) blends ANOVA and regression to evaluate whether the population means of a dependent variable (DV) are equal across levels of a categorical independent variable (IV) while statistically controlling for the effects of other continuous variables (CVs). Multivariate analysis of variance (MANOVA) is a statistical test procedure for comparing multivariate means of several groups, and is a generalized form of ANOVA. Analogously, multivariate analysis of covariance (MANCOVA) is an extension of ANCOVA designed for cases where there is more than one dependent variable and where control of concomitant continuous independent variables is required. In this section, we review ANOVA, ANCOVA, MANOVA and MANCOVA and illustrate their application with examples.
Motivation
We have talked about analysis of variance (ANOVA). ANCOVA is similar to ANOVA but additionally controls for continuous covariates. What if we have more than one dependent variable and study multivariate observations? What if we want to see whether interactions among the dependent variables, or changes in the independent variables, influence the dependent variables? Then we need the multivariate extensions of ANOVA and ANCOVA: MANOVA and MANCOVA, respectively. So the questions are: how do these methods work, and what kinds of conclusions can we draw from them?
Theory
ANOVA
Analysis of Variance (ANOVA) is a common method for analyzing the differences between group means. In ANOVA, we partition the observed variance into components attributed to different sources of variation.
- One-way ANOVA: we expand our inference methods to study and compare k independent samples. In this case, we will be decomposing the entire variation in the data into independent components.
- Notations first: $y_{ij}$ is the measurement from group $i$, observation index $j$; $k$ is the number of groups; $n_{i}$ is the number of observations in group $i$; $n$ is the total number of observations, $n=n_{1}+n_{2}+\cdots+n_{k}$. The group mean for group $i$ is $\bar y_{i.}=\frac{\sum_{j=1}^{n_{i}}y_{ij}}{n_{i}}$, and the grand mean is $\bar y = \bar y_{..}=\frac{\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}y_{ij}}{n}$.
- Difference between the means (compare each group mean to the grand mean): total variance $SST(total)=\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\left(y_{ij}-\bar y_{..}\right)^{2}$, degrees of freedom $df(total)=n-1$; difference between each group mean and the grand mean: $SST(between)=\sum_{i=1}^{k}n_{i}\left(\bar y_{i.}-\bar y_{..}\right)^{2}$, degrees of freedom $df(between)=k-1$; sum of squares due to error (the combined variation within groups): $SSE=\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\left(y_{ij}-\bar y_{i.}\right)^{2}$, degrees of freedom $df(within)=n-k$.
With the ANOVA decomposition, we have $\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\left(y_{ij}-\bar y_{..}\right)^{2}=\sum_{i=1}^{k}n_{i}\left(\bar y_{i.} - \bar y_{..}\right)^{2}+\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\left(y_{ij}-\bar y_{i.}\right)^{2}$, that is, $SST(total)=SST(between)+SSE(within)$ and $df(total)=df(between)+df(within)$.
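This decomposition can be verified numerically in R. Below is a minimal sketch with simulated data (three hypothetical groups of 10 observations each; all names are illustrative):

# Verify SST(total) = SST(between) + SSE(within) on simulated data
set.seed(1)
y <- c(rnorm(10, mean = 5), rnorm(10, mean = 6), rnorm(10, mean = 7))
g <- factor(rep(1:3, each = 10))        # k = 3 groups, n_i = 10, n = 30
grand.mean  <- mean(y)
SST.total   <- sum((y - grand.mean)^2)
SST.between <- sum(tapply(y, g, length) * (tapply(y, g, mean) - grand.mean)^2)
SSE.within  <- sum((y - ave(y, g))^2)   # ave() repeats each group mean
all.equal(SST.total, SST.between + SSE.within)  # TRUE
summary(aov(y ~ g))                     # df and SS match the ANOVA table below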
- Calculations:
ANOVA table:
| Variance source | Degrees of freedom (df) | Sum of squares (SS) | Mean sum of squares (MS) | F-statistic | P-value |
|---|---|---|---|---|---|
| Treatment effect (between group) | $k-1$ | $\sum_{i=1}^{k}n_{i}\left(\bar y_{i.}-\bar y_{..}\right)^{2}$ | $\frac{SST(between)}{df(between)}$ | $F_{0}=\frac{MST(between)}{MSE(within)}$ | $P\left(F_{df(between),df(within)}>F_{0}\right)$ |
| Error (within group) | $n-k$ | $\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\left(y_{ij}-\bar y_{i.}\right)^{2}$ | $\frac{SSE(within)}{df(within)}$ | | |
| Total | $n-1$ | $\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\left(y_{ij}-\bar y_{..}\right)^{2}$ | | | |
- ANOVA hypotheses (general form):
$H_{0}:\mu_{1}=\mu_{2}=\cdots=\mu_{k}$; $H_{a}:\mu_{i}\neq\mu_{j}$ for some $i\neq j$. The test statistic is $F_{0}=\frac{MST(between)}{MSE(within)}$; if $F_{0}$ is large, there is a lot of between-group variation relative to the within-group variation. That is, the discrepancies between the group means are large compared to the variability within the groups (error), so a large $F_{0}$ provides strong evidence against $H_{0}$.
- ANOVA conditions: valid if (1) design conditions: all groups of observations represent random samples from their respective populations, and all the observations within each group are independent of each other; (2) population conditions: the $k$ population distributions must be approximately normal (if the sample sizes are large, the normality condition is less crucial), and the standard deviations of all populations should be equal, a condition that can be slightly relaxed to $0.5\leq\frac{\sigma_{i}}{\sigma_{j}}\leq 2$ for all $i$ and $j$, i.e., no population standard deviation is more than twice any other.
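Continuing the simulated example above, these conditions can be checked with standard R tests (a sketch, not a definitive recipe):

fit <- aov(y ~ g)               # y and g from the sketch above
shapiro.test(residuals(fit))    # approximate normality of the residuals
bartlett.test(y ~ g)            # equality of the k group variances
tapply(y, g, sd)                # rule of thumb: largest/smallest SD <= 2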
Two-way ANOVA
We focus on decomposing the variance of a dataset into independent (orthogonal) components when we have two grouping factors.
- Notations first: the two-way model is $y_{ijk}=\mu+\tau_{i}+\beta_{j}+\gamma_{ij}+\varepsilon_{ijk}$, for all $1\leq i\leq a$, $1\leq j\leq b$ and $1\leq k\leq r$. $y_{ijk}$ is the measurement at A-factor level $i$, B-factor level $j$, replication index $k$; $r$ is the number of replications per cell; $a$ is the number of levels of factor A; $b$ is the number of levels of factor B; $N$ is the total number of observations, $N=r*a*b$. Here $\mu$ is the overall mean response, $\tau_{i}$ is the effect due to the $i^{th}$ level of factor A, $\beta_{j}$ is the effect due to the $j^{th}$ level of factor B, and $\gamma_{ij}$ is the effect due to any interaction between the $i^{th}$ level of factor A and the $j^{th}$ level of factor B. The cell mean at A-factor level $i$ and B-factor level $j$ is $\bar y_{ij.}=\frac{\sum_{k=1}^{r}y_{ijk}}{r}$, the grand mean is $\bar y=\bar y_{...}=\frac{\sum_{k=1}^{r}\sum_{i=1}^{a}\sum_{j=1}^{b} y_{ijk}}{N}$, and we have the decomposition $SST(total)=SS(A)+SS(B)+SS(AB)+SSE$.
- Hypotheses
- Null hypotheses: (1) the population means of the first factor are equal, which is like the one-way ANOVA for the row factor; (2) the population means of the second factor are equal, which is like the one-way ANOVA for the column factor; (3) there is no interaction between the two factors, which is similar to performing a test for independence with contingency tables.
- Factors: factor A and factor B are independent variables in two-way ANOVA.
- Treatment groups: formed by making all possible combinations of two factors. For example, if the factor A has 3 levels and factor B has 5 levels, then there will be $3*5=15$ different treatment groups.
- Main effect: involves each independent variable (factor) one at a time, ignoring the interaction for this part.
- Interaction effect: captures how the effect of one factor depends on the level of the other factor. Its degrees of freedom is the product of the degrees of freedom of the two factors.
- Calculations:
ANOVA table:
| Variance source | Degrees of freedom (df) | Sum of squares (SS) | Mean sum of squares (MS) | F-statistic | P-value |
|---|---|---|---|---|---|
| Main effect A | $a-1$ | $SS(A)=rb\sum_{i=1}^{a}(\bar y_{i..}-\bar y_{...})^{2}$ | $SS(A)/df(A)$ | $F_{0}=\frac{MS(A)}{MSE}$ | $P\left(F_{df(A),df(E)}>F_{0}\right)$ |
| Main effect B | $b-1$ | $SS(B)=ra\sum_{j=1}^{b}(\bar y_{.j.}-\bar y_{...})^{2}$ | $SS(B)/df(B)$ | $F_{0}=\frac{MS(B)}{MSE}$ | $P\left(F_{df(B),df(E)}>F_{0}\right)$ |
| A vs. B interaction | $(a-1)(b-1)$ | $SS(AB)=r\sum_{i=1}^{a}\sum_{j=1}^{b}(\bar y_{ij.}-\bar y_{i..}-\bar y_{.j.}+\bar y_{...})^{2}$ | $SS(AB)/df(AB)$ | $F_{0}=\frac{MS(AB)}{MSE}$ | $P\left(F_{df(AB),df(E)}>F_{0}\right)$ |
| Error | $N-ab$ | $SSE=\sum_{k=1}^{r}\sum_{i=1}^{a}\sum_{j=1}^{b}(y_{ijk}-\bar y_{ij.})^{2}$ | $SSE/df(E)$ | | |
| Total | $N-1$ | $SST=\sum_{k=1}^{r}\sum_{i=1}^{a}\sum_{j=1}^{b}(y_{ijk}-\bar y_{...})^{2}$ | | | |
- ANOVA conditions: valid if (1) the population from which the samples were obtained must be normally or approximately normally distributed; (2) the samples must be independent; (3) the variances of the populations must be equal; (4) the groups must have the same sample size.
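As an illustration, a balanced two-way design can be simulated and fit in R; the following minimal sketch assumes $a=2$ levels of A, $b=3$ levels of B, and $r=4$ replications per cell (all names illustrative):

# Balanced two-way factorial design: a = 2, b = 3, r = 4, N = r*a*b = 24
set.seed(2)
d <- expand.grid(A = factor(1:2), B = factor(1:3), rep = 1:4)
d$y <- rnorm(nrow(d), mean = 2*as.numeric(d$A) + as.numeric(d$B))
fit2 <- aov(y ~ A * B, data = d)  # expands to A + B + A:B
summary(fit2)                     # rows: A (df=1), B (df=2), A:B (df=2), Residuals (df=18)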
ANCOVA
Analysis of Covariance (ANCOVA) blends ANOVA and regression to evaluate whether the population means of a dependent variable (DV) are equal across levels of a categorical independent variable (IV) while statistically controlling for the effects of other continuous variables (CVs).
- Assumptions of ANCOVA: (1) normality of residuals; (2) homogeneity of error variances; (3) homogeneity of regression slopes (the regression lines should be parallel among groups); (4) linearity of regression; (5) independence of error terms.
- Increased statistical power: ANCOVA reduces the within-group error variance and increases statistical power. Use the F-test to evaluate differences between groups by dividing the explained variance between groups by the unexplained variance within the groups: $F=\frac{MS_{between}}{MS_{within}}$. If this value is greater than the critical value, then there is a significant difference between groups. The influence of the CVs is grouped into the denominator; when we control for the effect of the CVs on the DV, we remove it from the denominator, making $F$ larger and thereby increasing our power to find a significant effect if one exists.
- Adjusting preexisting differences in nonequivalent groups: correct for initial group differences that exist on the DV among several intact groups. In this case, CVs are used to adjust scores and make participants more similar than they would be without the CV, since the participants cannot be made equal through random assignment. However, a CV may be so intimately related to the IV that removing the variance on the DV associated with the CV would remove considerable variance on the DV, rendering the results meaningless.
- Conducting an ANCOVA: (1) Test multicollinearity: if a CV is highly related to another CV, it will not adjust the DV over and above the other CV; one or the other should be removed since they are statistically redundant. (2) Test the homogeneity of variance assumption: Levene's test of equality of error variances. (3) Test the homogeneity of regression slopes assumption: test whether the CV significantly interacts with the IV by running an ANCOVA model that includes both the IV and the CV*IV interaction term. If the interaction term is significant, we should not perform ANCOVA; instead, assess group differences on the DV at particular levels of the CV. (4) Run the ANCOVA analysis: if the interaction is not significant, rerun the ANCOVA without the interaction term. In this analysis, use the adjusted means and the adjusted $MS_{error}$. (5) Follow-up analyses: if there is a significant main effect, then there is a significant difference between the levels of one IV, ignoring all other factors. To find out exactly which levels differ significantly from the others, use the same follow-up tests as for ANOVA.
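A minimal R sketch of steps (2)-(5), assuming a hypothetical data frame mydata with outcome y (DV), factor group (IV), and continuous covariate x (CV); leveneTest() is provided by the car package:

library(car)                                # provides leveneTest()
# (2) homogeneity of variance: Levene's test of equality of error variances
leveneTest(y ~ group, data = mydata)
# (3) homogeneity of regression slopes: check the group:x interaction
summary(aov(y ~ group * x, data = mydata))  # group:x should be non-significant
# (4) if the interaction is not significant, rerun without it
fit <- aov(y ~ x + group, data = mydata)    # covariate entered before the factor
summary(fit)
# (5) follow-up comparisons among the levels of group (adjusted means can also
# be obtained with, e.g., the emmeans package)
TukeyHSD(fit, which = "group")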
MANOVA
Multivariate analysis of variance or multiple analysis of variance is a statistical test procedure for comparing multivariate means of several groups. MANOVA is a generalized form of ANOVA.
- Relationship with ANOVA:
- MANOVA is an extension of ANOVA, though, unlike ANOVA, it uses the variance-covariance between variables in testing the statistical significance of the mean differences. It is similar to ANOVA, but allows interval-scaled independent variables to be added as covariates. Several specific use-cases for MANOVA: (1) to compare groups formed by categorical independent variables on group differences in a set of dependent variables; (2) to use the lack of difference for a set of dependent variables as a criterion for reducing a set of independent variables to a smaller, more easily modeled number of variables; (3) to identify the independent variables which differentiate a set of dependent variables the most.
- Analogous to ANOVA, MANOVA is based on the product of the model variance matrix $\Sigma_{model}$ and the inverse of the error variance matrix $\Sigma_{res}^{-1}$, i.e., $A=\Sigma_{model}\Sigma_{res}^{-1}$. The hypothesis that $\Sigma_{model} = \Sigma_{residual}$ implies that the product $A \sim I$. Invariance considerations imply the MANOVA statistic should be a measure of the magnitude of the singular value decomposition of this matrix product, but there is no unique choice owing to the multi-dimensional nature of the alternative hypothesis.
- MANOVA calculations closely resemble the ANOVA calculations, except that they are in vector and matrix form. Assume that instead of a single dependent variable as in one-way ANOVA, there are three dependent variables, as in the neuroimaging example below. Under the null hypothesis, it is assumed that the scores on the three variables for each of the groups are sampled from a tri-variate normal distribution with mean vector $\mu =(\mu_{1}, \mu_{2}, \mu_{3})^{T}$ and variance-covariance matrix $\Sigma=\bigg ( \begin{smallmatrix} \sigma_{1}^{2} & \rho_{1,2}\sigma_{1}\sigma_{2} &\rho_{1,3}\sigma_{1}\sigma_{3} \\ \rho_{2,1}\sigma_{2}\sigma_{1} & \sigma_{2}^{2} & \rho_{2,3}\sigma_{2}\sigma_{3} \\ \rho_{3,1}\sigma_{3}\sigma_{1}&\rho_{3,2}\sigma_{3}\sigma_{2} &\sigma_{3}^{2}\end{smallmatrix}\bigg )$, where the covariance between variables 1 and 2 is expressed in terms of their correlation $(\rho_{1,2})$ and individual standard deviations $(\sigma_{1}$ and $\sigma_{2})$. Under the null hypothesis, the scores for all subjects in all groups are sampled from the same distribution.
Example: consider a $2*2$ factorial design with medication as one factor and type of therapy as the second factor. The data matrix includes the patient ID, the drug treatment (vitamin E or placebo), therapy (cognitive or physical), MMSE, CDR, and Imaging. It is better when the study design is balanced, with equal numbers of patients in all four conditions, as this avoids potential problems of sample-size-driven effects (e.g., in variance estimates). Recall that a univariate ANOVA (on any single outcome measure) would contain three types of effects: a main effect for therapy, a main effect for medication, and an interaction between therapy and medication. Similarly, MANOVA will contain the same three effects. Main effects: (1) Therapy: the univariate ANOVA main effect for therapy tells whether the physical vs. cognitive therapy groups have different means, irrespective of their medication. The MANOVA main effect for therapy tells whether the physical vs. cognitive therapy groups have different mean vectors, irrespective of their medication; the vectors in this case are the $(3*1)$ column vectors of means (MMSE, CDR and Imaging). (2) Medication: the univariate ANOVA main effect for medication tells whether the placebo group has a different mean from the vitamin-E group, irrespective of the therapy type. The MANOVA main effect for medication tells whether the placebo group has a different mean vector from the vitamin-E group, irrespective of therapy. Interaction effect: (3) the univariate ANOVA interaction tells whether the four means for a single variable differ from the values predicted from knowledge of the main effects of therapy and medication. The MANOVA interaction term tells whether the four mean vectors differ from the vectors predicted from knowledge of the main effects of therapy and medication.
- Variance partitioning: MANOVA has the same properties as an ANOVA. The only difference is that an ANOVA deals with a $(1*1)$ mean for each group (as the response is univariate), while a MANOVA deals with a $(k*1)$ mean vector for each group, $k$ being the number of dependent variables (3 in our example). The variance-covariance matrix for 1 variable is a $(1*1)$ matrix that has only one element, the variance of the variable, whereas the variance-covariance matrix for $k$ variables is a $(k*k)$ matrix with the variances on the diagonal and the covariances as the off-diagonal elements. The ANOVA partitions the $(1*1)$ covariance matrix into a part due to error and a part due to the researcher-specified hypotheses (the two main effects and the interaction term): $V_{total} = V_{therapy} + V_{medication} + V_{therapy*medication} + V_{error}$. Likewise, MANOVA partitions its $(k*k)$ covariance matrix into a part due to the research hypotheses and a part due to error. Thus, in our example, MANOVA will have a $(3*3)$ covariance matrix for total variability, a $(3*3)$ covariance matrix due to therapy, a $(3*3)$ covariance matrix due to medication, a $(3*3)$ covariance matrix due to the interaction of therapy with medication, and finally a $(3*3)$ covariance matrix for the error. The same equation, $V_{total} = V_{therapy} + V_{medication} + V_{therapy*medication} + V_{error}$, now holds in matrix form, with each $V$ standing for the appropriate $(3*3)$ matrix, as opposed to a $(1*1)$ value as in ANOVA. Here is how we interpret these matrices. The error matrix looks like:
| | MMSE | CDR | Imaging |
|---|---|---|---|
| MMSE | $V_{error1}$ | COV(error1, error2) | COV(error1, error3) |
| CDR | COV(error2, error1) | $V_{error2}$ | COV(error2, error3) |
| Imaging | COV(error3, error1) | COV(error3, error2) | $V_{error3}$ |
- Common statistics are summarized based on the roots (eigenvalues) $\lambda_{p}$ of the $A$ matrix: (1) Samuel Stanley Wilks' $\Lambda_{Wilks}=\prod_{1,\cdots,p} (1/(1+\lambda_{p}))=\det(I+A)^{-1}=\det(\Sigma_{res})/\det(\Sigma_{res}+\Sigma_{model})$, distributed as lambda $(\Lambda)$; (2) the Pillai-M.S. Bartlett trace, $\Lambda_{Pillai}=\sum_{1,\cdots,p}(\lambda_{p}/(1+\lambda_{p}))=tr(A(I+A)^{-1})$; (3) the Lawley-Hotelling trace, $\Lambda_{LH}=\sum_{1,\cdots,p}(\lambda_{p})=tr(A)$; (4) Roy's greatest root, $\Lambda_{Roy}=\max_{p}(\lambda_{p})=\left \| A \right \|_{\infty}$. In terms of statistical power, the four major MANOVA tests are commonly ordered Pillai's > Wilks' > Hotelling's > Roy's, with Pillai's trace also considered the most robust.
- Let the $A$ statistic be the ratio of the sums of squares for a hypothesis and the sums of squares for error. Let $H$ denote the hypothesis sums of squares and cross-products matrix, and let $E$ denote the error sums of squares and cross-products matrix. The multivariate $A$ statistic is the matrix $A = HE^{-1}$. Notice how mean squares (that is, covariance matrices) disappear from MANOVA just as they did for ANOVA. All hypothesis tests may be performed on the matrix $A$. Note also that although $HE^{-1}$ and $E^{-1}H$ are generally not equal, they share the same eigenvalues, so the test statistics below do not depend on the order of multiplication.
- All MANOVA tests are made on $A=E^{-1}H$. There are four different multivariate tests that are made on this matrix, and each of the four test statistics has its own associated $F$ ratio. In some cases the four tests give an exact $F$ ratio for testing the null hypothesis, and in other cases the $F$ ratio is only approximate. The reason for four different statistics and for the approximations is that the MANOVA calculations may be complicated in some cases (i.e., the sampling distribution of the $F$ statistic in some multivariate cases would be difficult to compute exactly). Suppose there are $k$ dependent variables in the MANOVA, and let $\lambda_{i}$ denote the $i^{th}$ eigenvalue of $A=E^{-1}H$.
- Wilks' $\Lambda$: $1-\Lambda$ is an index of the variance explained by the model, and $\eta^{2}$ is a measure of effect size analogous to $R^{2}$ in regression. Wilks' $\Lambda$ is the pooled ratio of error variance to effect variance plus error variance: $\Lambda=\frac{|E|}{|H+E|}=\prod_{i=1}^{k}\frac{1}{1+\lambda_{i}}$.
- Pillai's trace: Pillai's criterion is the pooled effect variance: $Pillai's\: trace=trace[H(H+E)^{-1}]=\sum_{i=1}^{k}\frac{\lambda_{i}}{1+\lambda_{i}}$.
- Hotelling's trace: the pooled ratio of effect variance to error variance: $trace(A)=trace[HE^{-1}]=\sum_{i=1}^{k}\lambda_{i}$.
- Roy's largest root: gives an upper bound for the $F$ statistic: $Roy's\: largest\: root=\max_{i}(\hat{\lambda}_{i})$.
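The four statistics can be computed directly in R via summary.manova(). The following is a minimal sketch with simulated data; all variable names (group, y1, y2, y3, fit.man) are illustrative.

# Simulated example: k = 3 dependent variables, 3 groups of 20 subjects each
set.seed(3)
group <- factor(rep(1:3, each = 20))
Y <- cbind(y1 = rnorm(60) + as.numeric(group),  # the group shifts the mean of y1
           y2 = rnorm(60),
           y3 = rnorm(60))
fit.man <- manova(Y ~ group)
summary(fit.man, test = "Pillai")            # Pillai's trace (the default)
summary(fit.man, test = "Wilks")             # Wilks' Lambda
summary(fit.man, test = "Hotelling-Lawley")  # Lawley-Hotelling trace
summary(fit.man, test = "Roy")               # Roy's greatest root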
MANCOVA
A multivariate analysis of covariance (MANCOVA) is an extension of ANCOVA designed for cases where there is more than one dependent variable and where control of concomitant continuous independent variables is required. Characterizing a covariate in the data allows a reduction in the magnitude of the error term, represented in the MANCOVA design as $MS_{error}$. MANCOVA characterizes differences in group means with regard to a linear combination of multiple dependent variables, while simultaneously controlling for covariates (a minimal R sketch appears at the end of this subsection).
- Assumptions: (1) normality: for each group, each dependent variable follows a normal distribution, and any linear combination of the dependent variables is normally distributed; (2) independence of each observation from all other observations; (3) homogeneity of variance: each dependent variable demonstrates similar levels of variance across each independent variable; (4) homogeneity of covariance: the intercorrelation matrix between the dependent variables is equal across all levels of the independent variable.
- A covariate represents a source of variance that has not been controlled in the experiment and is believed to affect the dependent variable. Methods like ANCOVA and MANCOVA aim to remove the effects of such uncontrolled variation in order to increase statistical power and to ensure an accurate measurement of the true relationship between independent and dependent variables.
- In some studies with covariates it happens that the F value actually becomes smaller (less significant) after including covariates in the design. This indicates that the covariates are not only correlated with the dependent variable, but also with the between-groups factors.
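A MANCOVA can be fit in R with the same manova() call, entering the covariate before the factor. A minimal sketch, assuming a hypothetical data frame mydata with two dependent variables y1 and y2, a grouping factor group, and a continuous covariate x:

fit.mancova <- manova(cbind(y1, y2) ~ x + group, data = mydata)
summary(fit.mancova, test = "Pillai")  # group effect, adjusted for the covariate x
summary.aov(fit.mancova)               # univariate follow-up ANCOVAs for y1 and y2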
Applications
- This article examined articles published in several prominent educational journals to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also cataloged whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are non-robust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. Recommendations are offered to rectify these shortcomings.
- This article investigated the performance of ANOVA, MANOVA, WLS, and GEE for repeated ordinal data with small sample sizes. Repeated ordinal outcomes are common in the behavioral and medical sciences. Due to the familiarity, simplicity and robustness of the ANOVA methodology, this approach has frequently been used for repeated ordinal data. Weighted least squares (WLS) and generalized estimating equations (GEE) are usually the procedures of choice for repeated ordinal data since, unlike ANOVA, they generally make no or few untenable assumptions. However, these methods are based on asymptotic results and their properties are not well understood for small samples. Moreover, few software packages have procedures for implementing these methods. For a design with two groups and four time points, the simulation results indicated that ANOVA with the Huynh-Feldt adjustment performed well; a particular structure of the covariance matrix, known as sphericity (or the H-F condition), is a sufficient condition for the usual F tests to be valid.
Software
R code:
fit <- aov(y ~ A, data = mydata)             # one-way ANOVA (completely randomized design)
fit <- aov(y ~ A + B, data = mydata)         # randomized block design, where B is the blocking factor
fit <- aov(y ~ A + B + A*B, data = mydata)   # two-way factorial design
fit <- aov(y ~ A + x, data = mydata)         # analysis of covariance, with x a continuous covariate
- For within-subject designs, the data frame has to be rearranged so that each measurement on a subject is a separate observation:
fit <- aov(y ~ A + Error(Subject/A), data = mydata)   # one within-subject factor A
fit <- aov(y ~ (W1*W2*B1*B2) + Error(Subject/(W1*W2)) + (B1*B2), data = mydata)   # two within factors (W1, W2), two between factors (B1, B2)
# 2*2 factorial MANOVA with 3 dependent variables:
Y <- cbind(y1, y2, y3)
fit <- manova(Y ~ A*B)
Problems
Use the data of the CPI (consumer price index) for food, housing, transportation and medical care from 1981 to 2007 to do a two-way analysis of variance in R. We take 'Month' as one factor and the item the CPI is measured on as the other factor, giving a $12*4$ factorial design (12 months, 4 items). The data are linked at Consumer Price Index.
In R:

CPI <- read.csv('/location/CPI_Food.csv', header=T)
attach(CPI)
summary(CPI)
Month <- factor(Month)
CPI_Item <- factor(CPI_Item)
fit <- aov(CPI_Value ~ Month + CPI_Item + CPI_Item*Month, data=CPI)
fit

Call:
   aov(formula = CPI_Value ~ Month + CPI_Item + CPI_Item * Month, data = CPI)
Terms:
                    Month  CPI_Item Month:CPI_Item Residuals
Sum of Squares     3282.6 1078702.9          706.8 2987673.8
Deg. of Freedom        11         3             33      1248
Residual standard error: 48.92821
Estimated effects may be unbalanced
summary(fit)
                 Df  Sum Sq Mean Sq F value  Pr(>F)
Month            11    3283     298   0.125       1
CPI_Item          3 1078703  359568 150.197  <2e-16 ***
Month:CPI_Item   33     707      21   0.009       1
Residuals      1248 2987674    2394
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
fit2 <- aov(CPI_Value ~ CPI_Item, data=CPI)
fit2

Call:
   aov(formula = CPI_Value ~ CPI_Item, data = CPI)
Terms:
                CPI_Item Residuals
Sum of Squares   1078703   2991663
Deg. of Freedom        3      1292
Residual standard error: 48.11994
Estimated effects may be unbalanced

summary(fit2)
              Df  Sum Sq Mean Sq F value  Pr(>F)
CPI_Item       3 1078703  359568   155.3  <2e-16 ***
Residuals   1292 2991663    2316
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Based on this dataset, CPI_Item is a highly significant factor of the CPI, while Month is not.
References
- SOCR Home page: http://www.socr.umich.edu