AP Statistics Curriculum 2007 NonParam ANOVA


General Advanced Placement (AP) Statistics Curriculum - Medians of Several Independent Samples

In this section we extend the multi-sample inference discussed in the ANOVA section to the situation where the ANOVA assumptions are invalid. Hence, we use a non-parametric analysis to study differences in centrality between two or more populations.

Motivational Example

Suppose four groups of students are randomly assigned to be taught with four different techniques, and their achievement test scores are recorded. Are the distributions of test scores the same, or do they differ in location? The data is presented in the table below.

Teaching Method
Method 1   Method 2   Method 3   Method 4
   65         75         59         94
   87         69         78         89
   73         83         67         80
   79         81         62         88

The small sample sizes and the lack of distributional information about each sample illustrate why ANOVA may not be appropriate for analyzing these data.

The Kruskal-Wallis Test

The Kruskal-Wallis One-Way Analysis of Variance by Ranks is a non-parametric method for testing the equality of two or more population medians. Intuitively, it is analogous to a one-way ANOVA with the raw data (observed measurements) replaced by their ranks.

Since it is a non-parametric method, the Kruskal-Wallis Test does not assume normally distributed populations, unlike the analogous one-way ANOVA. However, the test does assume that the group distributions have identical shapes, differing at most in their centers (e.g., medians).
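As a quick illustration outside of SOCR, the following is a minimal Python sketch (assuming the SciPy library is available; it is not part of the SOCR tools) that applies the Kruskal-Wallis test to the teaching-methods data above. For these data the statistic and p-value should be roughly 8.96 and 0.03, respectively.

 import scipy.stats as stats

 # Teaching-methods data from the table above
 method1 = [65, 87, 73, 79]
 method2 = [75, 69, 83, 81]
 method3 = [59, 78, 67, 62]
 method4 = [94, 89, 80, 88]

 # Kruskal-Wallis H-test; SciPy corrects for ties automatically
 H, p = stats.kruskal(method1, method2, method3, method4)
 print(H, p)   # expected: H is roughly 8.96, p is roughly 0.03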

Calculations

Let N be the total number of observations, then \(N = \sum_{i=1}^k {n_i}\).

Let \(R(X_{ij})\) denote the rank assigned to \(X_{ij}\) and let \(R_i\) be the sum of the ranks assigned to the \(i^{th}\) sample:

\[R_i = \sum_{j=1}^{n_i} {R(X_{ij})}, \quad i = 1, 2, \cdots, k.\]

The SOCR program computes \(R_i\) for each sample. The test statistic is defined for the following formulation of hypotheses:

\(H_o\): All of the k population distribution functions are identical.
\(H_1\): At least one of the populations tends to yield larger observations than at least one of the other populations.

Suppose {\(X_{i,1}, X_{i,2}, \cdots, X_{i,n_i}\)} represents the values of the \(i^{th}\) sample, where \(1\leq i\leq k\).

Test statistic:

\[T = \frac{1}{S^2}\left( \sum_{i=1}^{k} \frac{R_i^2}{n_i} - \frac{N(N+1)^2}{4}\right),\]

where

\[S^2 = \frac{1}{N-1}\left( \sum_{i,j} {R(X_{ij})}^2 - \frac{N(N+1)^2}{4}\right).\]
  • Note: If there are no ties, then the test statistic is reduced to
\[T = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1).\]

However, the SOCR implementation allows for the possibility of ties, so it uses the non-simplified (exact) computation.
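To make the formulas above concrete, here is a short Python sketch (an illustration only, not the SOCR source code) that pools the observations, assigns midranks to any ties, forms the rank sums \(R_i\), and computes \(S^2\) and \(T\) as defined above.

 from scipy.stats import rankdata

 groups = [
     [65, 87, 73, 79],   # Method 1
     [75, 69, 83, 81],   # Method 2
     [59, 78, 67, 62],   # Method 3
     [94, 89, 80, 88],   # Method 4
 ]

 pooled = [x for g in groups for x in g]
 ranks = rankdata(pooled)          # midranks are assigned to tied values
 N = len(pooled)                   # total number of observations
 k = len(groups)                   # number of samples

 # Split the pooled ranks back into the k samples and sum them
 R, sizes, start = [], [], 0
 for g in groups:
     n_i = len(g)
     R.append(float(ranks[start:start + n_i].sum()))
     sizes.append(n_i)
     start += n_i

 S2 = (sum(r**2 for r in ranks) - N * (N + 1)**2 / 4) / (N - 1)
 T = (sum(R[i]**2 / sizes[i] for i in range(k)) - N * (N + 1)**2 / 4) / S2
 print(R)   # rank sums: [31.0, 35.0, 15.0, 55.0]
 print(T)   # about 8.956 for these data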

Multiple comparisons then have to be carried out. For each pair of groups \(i\) and \(j\), a difference in locations is declared significant if the following inequality holds; both sides are computed and printed in the Result Panel:

\[\left| \frac{R_i}{n_i} - \frac{R_j}{n_j} \right| > t_{1-\alpha/2} \left( \frac{S^2 (N-1-T)}{N-k} \right)^{1/2} \left( \frac{1}{n_i} + \frac{1}{n_j} \right)^{1/2},\]

where \(t_{1-\alpha/2}\) is the \((1-\alpha/2)\) quantile of Student's t distribution with \(N-k\) degrees of freedom.

The SOCR computation employs the exact method rather than the large-sample approximation (Conover 1980), since the exact method is easy and fast to implement and is somewhat more accurate.
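Continuing the sketch above, the pairwise comparison rule can be evaluated directly (again an illustration, not the SOCR implementation); for the teaching-methods data it should reproduce the Result Panel values listed in the next section.

 from scipy.stats import t

 alpha = 0.05
 # Common factor of the right-hand side of the comparison rule
 crit = t.ppf(1 - alpha / 2, N - k) * (S2 * (N - 1 - T) / (N - k))**0.5

 for i in range(k):
     for j in range(i + 1, k):
         lhs = abs(R[i] / sizes[i] - R[j] / sizes[j])
         rhs = crit * (1 / sizes[i] + 1 / sizes[j])**0.5
         # rhs is about 5.206 here, since all group sizes equal 4
         print(i + 1, j + 1, lhs, rhs)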

The Kruskal-Wallis Test Using SOCR Analyses

It is much quicker to use SOCR Analyses to compute the statistical significance of this test. This SOCR KruskalWallis Test Activity may also be helpful in understanding how to use this test in SOCR.

For the teaching-methods example above, we can easily compute the statistical significance of the differences between the group medians (centers):

[Figure: SOCR Analyses output for the Kruskal-Wallis test on the teaching-methods data (SOCR EBook Dinov KruskalWallis 030108 Fig1.jpg)]

After the multiple-testing correction, two group differences in medians are significant: Method 1 vs. Method 4 and Method 3 vs. Method 4 (see below):

Group Method1 vs. Group Method2: 1.0 < 5.2056
Group Method1 vs. Group Method3: 4.0 < 5.2056
Group Method1 vs. Group Method4: 6.0 > 5.2056
Group Method2 vs. Group Method3: 5.0 < 5.2056
Group Method2 vs. Group Method4: 5.0 < 5.2056
Group Method3 vs. Group Method4: 10.0 > 5.2056

Practice Examples

TBD

Notes

References

Conover, W. J. (1980). Practical Nonparametric Statistics, 2nd edition. John Wiley & Sons, New York.


"-----


Translate this page:

(default)
Uk flag.gif

Deutsch
De flag.gif

Español
Es flag.gif

Français
Fr flag.gif

Italiano
It flag.gif

Português
Pt flag.gif

日本語
Jp flag.gif

България
Bg flag.gif

الامارات العربية المتحدة
Ae flag.gif

Suomi
Fi flag.gif

इस भाषा में
In flag.gif

Norge
No flag.png

한국어
Kr flag.gif

中文
Cn flag.gif

繁体中文
Cn flag.gif

Русский
Ru flag.gif

Nederlands
Nl flag.gif

Ελληνικά
Gr flag.gif

Hrvatska
Hr flag.gif

Česká republika
Cz flag.gif

Danmark
Dk flag.gif

Polska
Pl flag.png

România
Ro flag.png

Sverige
Se flag.gif