General Advance-Placement (AP) Statistics Curriculum - Comparing Two Variances
In the section on inference about the variance and the standard deviation, we already learned how to do inference on either of these two population parameters. Now we discuss the comparison of the variances (or standard deviations) using data randomly sampled from two different populations.
Background
Recall that the sample-variance (\(s^2\)) is an unbiased point estimate for the population variance \(\sigma^2\), and similarly, the sample-standard-deviation (\(s\)) is a point estimate for the population-standard-deviation \(\sigma\).
For a random sample from a Normally distributed population, the scaled sample-variance follows a Chi-square distribution: \[\chi_o^2 = {(n-1)s^2 \over \sigma^2} \sim \chi_{(df=n-1)}^2.\]
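This scaling can be illustrated with a quick simulation. The following is a minimal sketch (not part of the original SOCR activity) that draws repeated Normal samples and compares the empirical distribution of \((n-1)s^2/\sigma^2\) against the theoretical \(\chi^2_{n-1}\) distribution; the sample size, mean, variance, and number of replications are arbitrary illustration choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, sigma, reps = 10, 2.0, 20000   # arbitrary illustration parameters

# Draw `reps` Normal samples of size n and scale each unbiased sample variance.
samples = rng.normal(loc=5.0, scale=sigma, size=(reps, n))
scaled_vars = (n - 1) * samples.var(axis=1, ddof=1) / sigma**2

# Compare empirical moments with the chi-square(df = n-1) theory:
# mean should be close to n-1 = 9, variance close to 2(n-1) = 18.
print("empirical mean:", scaled_vars.mean(), " theoretical:", n - 1)
print("empirical var :", scaled_vars.var(ddof=1), " theoretical:", 2 * (n - 1))

# Kolmogorov-Smirnov check of the scaled variances against the chi-square(n-1) CDF.
print(stats.kstest(scaled_vars, cdf=stats.chi2(df=n - 1).cdf))
```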
Comparing Two Variances (\(\sigma_1^2 = \sigma_2^2\)?)
Suppose we study two populations which are approximately Normally distributed, and we take a random sample from each population, {\(X_1, X_2, X_3, \cdots, X_n\)} and {\(Y_1, Y_2, Y_3, \cdots, Y_k\)}. Recall that \({(n-1) s_1^2 \over \sigma_1^2}\) and \({(k-1) s_2^2 \over \sigma_2^2}\) have \(\chi^2_{(df=n - 1)}\) and \(\chi^2_{(df=k - 1)}\) distributions, respectively. We are interested in assessing \(H_o: \sigma_1^2 = \sigma_2^2\) vs. \(H_1: \sigma_1^2 \not= \sigma_2^2\), where \(s_1\) and \(s_2\) are the sample standard deviations and \(\sigma_1\) and \(\sigma_2\) are the population standard deviations of the two samples/populations, respectively.
Notice that the Chi-Square Distribution is not symmetric (it is positively skewed). You can visualize the Chi-Square distribution and compute all critical values either using the SOCR Chi-Square Distribution or using the SOCR Chi-Square Distribution Calculator.
The Fisher's F Distribution, and the corresponding F-test, is used to test whether the variances of two populations are equal. Depending on the alternative hypothesis, we can use either a two-tailed or a one-tailed test. The two-tailed version tests against the alternative that the variances are not equal (\(H_1: \sigma_1^2 \not= \sigma_2^2\)). The one-tailed version tests in only one direction (\(H_1: \sigma_1^2 < \sigma_2^2\) or \(H_1: \sigma_1^2 > \sigma_2^2\)). The choice is determined by the study design before any data are analyzed. For example, if a modification to an existing medical treatment is proposed, we may only be interested in knowing whether the new treatment is more consistent and less variable than the established medical intervention.
- Test Statistic\[F_o = {s_1^2 \over s_2^2}\]
The farther away this ratio is from 1, the stronger the evidence for unequal population variances.
- Inference: Suppose we test at significance level \(\alpha=0.05\). Then the null hypothesis that the two variances (standard deviations) are equal is rejected if the test statistic falls in the rejection region corresponding to the chosen alternative hypothesis:
\[H_1: \sigma_1^2 > \sigma_2^2\]: If \(F_o > F(\alpha,df_1=n_1-1,df_2=n_2-1)\)
\[H_1: \sigma_1^2 < \sigma_2^2\]: If \(F_o < F(1-\alpha,df_1=n_1-1,df_2=n_2-1)\)
\[H_1: \sigma_1^2 \not= \sigma_2^2\]: If either \(F_o < F(1-\alpha/2,df_1=n_1-1,df_2=n_2-1)\) or \(F_o > F(\alpha/2,df_1=n_1-1,df_2=n_2-1)\), where \(F(\alpha,df_1=n_1-1,df_2=n_2-1)\) is the critical value of the F distribution with degrees of freedom for the numerator and denominator, \(df_1=n_1-1,df_2=n_2-1\), respectively.
In the image below, the left and right critical regions are shown in white, with \(F(1-\alpha/2,df_1=n_1-1,df_2=n_2-1)\) and \(F(\alpha/2,df_1=n_1-1,df_2=n_2-1)\) representing the lower and upper critical values, respectively. In this example of \(F(df_1=12, df_2=15)\), the left and right critical values at \(\alpha/2=0.025\) are \(F(1-\alpha/2=0.975,df_1=12,df_2=15)=0.314744\) and \(F(\alpha/2=0.025,df_1=12,df_2=15)=2.96327\), respectively.
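These critical values can also be reproduced numerically. The sketch below uses SciPy (rather than the SOCR calculator referenced above) and is only one possible way to compute them; note that SciPy's ppf is the quantile (inverse CDF), so the lower critical value is the \(\alpha/2\) quantile and the upper critical value is the \(1-\alpha/2\) quantile.

```python
from scipy import stats

alpha = 0.05
df1, df2 = 12, 15

# Lower and upper critical values for the two-sided F-test at level alpha.
# ppf is the inverse CDF (quantile function) of the F distribution.
lower = stats.f.ppf(alpha / 2, dfn=df1, dfd=df2)       # ~0.314744
upper = stats.f.ppf(1 - alpha / 2, dfn=df1, dfd=df2)   # ~2.96327

print(f"lower critical value: {lower:.6f}")
print(f"upper critical value: {upper:.6f}")

# Reject H_o: sigma_1^2 = sigma_2^2 when F_o < lower or F_o > upper.
```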
Comparing Two Standard Deviations (\(\sigma_1 = \sigma_2\)?)
To make inference on whether the standard deviations of two populations are equal, we compute the sample variances and apply the F-test to the ratio of the sample variances, as described above (the standard deviations are equal exactly when the variances are equal).
Hands-on activities
- Formulate appropriate hypotheses and assess the significance of the evidence to reject the null hypothesis that the variances of the two populations, from which the following data were sampled, are equal. Assume the observations below represent random samples (of sizes 6 and 10) from two Normally distributed populations of liquid content (in fluid ounces) of beverage cans. Use \(\alpha=0.1\).
| Sample from Population 1 | 14.816 | 14.863 | 14.814 | 14.998 | 14.965 | 14.824 |
| Sample from Population 2 | 14.884 | 14.838 | 14.916 | 15.021 | 14.874 | 14.856 | 14.860 | 14.772 | 14.980 | 14.919 |
- Hypotheses: \(H_o: \sigma_1 = \sigma_2\) vs. \(H_1: \sigma_1 \not= \sigma_2\).
- Get the sample statistics from SOCR Charts (e.g., Index Plot):
| | Sample Mean | Sample SD | Sample Variance |
| Sample 1 | 14.88 | 0.081272382 | 0.0066052 |
| Sample 2 | 14.892 | 0.071269442 | 0.005079333 |
- Identify the degrees of freedom (\(df_1=6-1=5\) and \(df_2=10-1=9\)).
- Test Statistic\[F_o = {s_1^2 \over s_2^2}={0.0066052 \over 0.005079333}=1.300406878\]
- Significance Inference: P-value=\(P(F_{(df_1=5, df_2=9)} > F_o) = 0.328147\) (this is the right-tail probability; for the two-sided alternative the p-value is twice this value, about 0.656). This p-value does not indicate strong evidence in the data to reject the null hypothesis. That is, based on these (small) samples, the data do not provide sufficient evidence to discriminate between the variances of the two populations.
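The hand computation above can be checked numerically. The following is a minimal SciPy sketch (the variable names are ours and not part of the SOCR activity); the printed values should agree with the results above up to rounding.

```python
import numpy as np
from scipy import stats

# Liquid content (fluid ounces) samples from the two populations above.
sample1 = np.array([14.816, 14.863, 14.814, 14.998, 14.965, 14.824])
sample2 = np.array([14.884, 14.838, 14.916, 15.021, 14.874, 14.856,
                    14.860, 14.772, 14.980, 14.919])

# Unbiased sample variances (ddof=1) and the F test statistic.
var1 = sample1.var(ddof=1)            # ~0.0066052
var2 = sample2.var(ddof=1)            # ~0.0050793
F_o = var1 / var2                     # ~1.3004

df1, df2 = len(sample1) - 1, len(sample2) - 1   # 5 and 9

# Right-tail probability P(F > F_o), as reported above (~0.3281).
right_tail = stats.f.sf(F_o, dfn=df1, dfd=df2)
print(f"F_o = {F_o:.6f}, df1 = {df1}, df2 = {df2}")
print(f"P(F > F_o) = {right_tail:.6f}")

# Two-sided p-value at alpha = 0.1: double the smaller tail probability.
p_two_sided = 2 * min(right_tail, 1 - right_tail)
print(f"two-sided p-value = {p_two_sided:.6f}")
```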
More examples
- Use the hot-dogs dataset to formulate and test hypotheses about the difference between the population standard deviations of sodium for the poultry and the meat-based hot-dogs. Repeat this with the variances of calories for the beef and meat-based hot-dogs.
See also
Fligner-Killeen non-parametric test for variance homogeneity.
References
- SOCR Home page: http://www.socr.ucla.edu