



*Latest revision as of 10:14, 20 June 2023*

## Contents

- 1 SOCR Experiments Activities - General Confidence Interval Activity
- 2 Summary
- 3 Goals
- 4 Motivational example
- 5 Activity
- 6 Introduction of the SOCR Confidence Interval Applet
- 7 Confidence intervals for the population mean \(\mu\) with known population variance \(\sigma^2\)
- 8 An empirical investigation
- 9 Confidence intervals for the population mean of normal distribution when the population variance \(\sigma^2\) is unknown
- 10 Confidence interval for the population proportion *p*
- 11 Calculating sample sizes
- 12 Exact confidence interval for *p*
- 13 Confidence interval for the population variance \(\sigma^2\) of the normal distribution
- 14 SOCR investigation
- 15 Confidence intervals for the population parameters of a distribution based on the asymptotic properties of maximum likelihood estimates
- 16 Confidence intervals using bootstrapping
- 17 See also
- 18 References

## SOCR Experiments Activities - General Confidence Interval Activity

## Summary

There are two types of parameter estimates – *point-based* and *interval-based* estimates. Point estimates refer to unique quantitative estimates of various parameters, while interval estimates represent ranges of plausible values for the parameters of interest. There are different algorithmic approaches, prior assumptions and principles for computing data-driven parameter estimates. Both point and interval estimates depend on the distribution of the process of interest, the available computational resources and other criteria that may be desirable (Stewarty 1999) – e.g., bias and robustness of the estimates. Accurate, robust and efficient parameter estimation is critical for making inference about observable experiments, summarizing process characteristics and predicting experimental behavior.

This activity demonstrates the usage and functionality of the SOCR General Confidence Interval Applet. This applet is complementary to the SOCR Simple Confidence Interval Applet and its corresponding activity.

## Goals

The aims of this activity are to:

- demonstrate the theory behind the use of interval-based estimates of parameters,
- illustrate various confidence interval construction recipes,
- draw parallels between the construction algorithms and the intuitive meaning of confidence intervals, and
- present a new technology-enhanced approach for understanding and utilizing confidence intervals in various applications.

## Motivational example

A 2005 study proposing a new computational brain atlas for Alzheimer’s disease (Mega et al., 2005) investigated the mean volumetric characteristics and the spectra of shapes and sizes of different cortical and subcortical brain regions for Alzheimer’s patients, individuals with minor cognitive impairment and asymptomatic subjects. This study estimated a number of centrality and variability parameters for these three populations. Based on these point- and interval-estimates, the study analyzed a number of digital scans to derive criteria for imaging-based classification of subjects based on the intensities of their 3D brain scans. Their results enabled a number of subsequent inference studies that quantified the effects of subject demographics (e.g., education level, familial history, APOE allele, etc.), stage of the disease and the efficacy of new drug treatments targeting Alzheimer’s disease. The Figure to the right illustrates the *shape, center* and *distribution parameters* for the 3D geometric structure of the right hippocampus in the Alzheimer’s disease brain atlas. New imaging data can then be coregistered and compared relative to the amount of anatomical variability encoded in this atlas. This enables automated, efficient and quantitative inference on large number of brain volumes. Examples of point and interval estimates computed in this atlas framework include the mean-intensity and mean shape location, and the standard deviation of intensities and the mean deviation of shape.

## Activity

### Confidence intervals (CI) for the population mean \(\mu\) of normal population with known population variance \(\sigma^2\)

Let \(X_1, X_2, \cdots, X_n\) be a random sample from \(N(\mu, \sigma)\). We know that \(\bar X \sim N(\mu, \frac{\sigma}{\sqrt{n}})\). Therefore, \[P\left(-z_{\frac{\alpha}{2}} \le \frac{\bar X - \mu}{\frac{\sigma}{\sqrt{n}}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha,\]

- where \(-z_{\frac{\alpha}{2}}\) and \(z_{\frac{\alpha}{2}}\) are defined as shown in the figure below:

The area \(1-\alpha\) is called the *confidence level*. The usual choices for the confidence level, and the corresponding critical values \(z_{\frac{\alpha}{2}}\), are:

\(1-\alpha\) | \(z_{\frac{\alpha}{2}}\) |
---|---|
0.90 | 1.645 |
0.95 | 1.960 |
0.98 | 2.326 |
0.99 | 2.576 |

The expression above can be written as: \[P\left(\bar x -z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \le \mu \le \bar x + z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \right)=1-\alpha.\]

We say that we are \(1-\alpha\) confident that the mean \(\mu\) falls in the interval \(\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}\).

### Example 1

Suppose that the length of iron rods from a certain factory follows the normal distribution with known standard deviation \(\sigma=0.2\ m\) but unknown mean \(\mu\). Construct a *95% confidence interval* for the population mean \(\mu\) if a random sample of n=16 of these iron rods has sample mean \(\bar x=6 \ m\).

- We solve this problem by using our CI recipe:

\[6 \pm 1.96 \frac{0.2}{\sqrt{16}}\] \[6 \pm 0.098\] \[5.902 \le \mu \le 6.098.\]
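This recipe is easy to verify numerically; here is a minimal Python sketch of Example 1, using the tabulated critical value \(z_{0.025}=1.96\):

```python
import math

def z_ci(xbar, sigma, n, z=1.96):
    """Normal-theory CI for the mean with known sigma: xbar +/- z*sigma/sqrt(n)."""
    me = z * sigma / math.sqrt(n)   # margin of error
    return xbar - me, xbar + me

lo, hi = z_ci(xbar=6, sigma=0.2, n=16)
print(round(lo, 3), round(hi, 3))  # 5.902 6.098
```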

### Sample size determination for a given length of the confidence interval

Find the sample size *n* needed so that the confidence interval has margin of error (half-width) *E* with confidence level \(1-\alpha\).

#### Solution

In the expression \(\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}\), the half-width of the confidence interval, \(z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}\), is called the *margin of error*. We want this margin of error to be equal to *E*. Therefore,

\[ E=z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \Rightarrow n=\left(\frac{z_{\frac{\alpha}{2}} \sigma}{E}\right)^2.\]

### Example 2

Following our first example above, suppose that we want the entire width of the confidence interval to be equal to \(0.05 \ m\) (i.e., a margin of error \(E=0.025 \ m\)). Find the sample size *n* needed.
\[n=\left(\frac{1.96 \times 0.2}{0.025}\right)^2=245.9 \Rightarrow n \approx 246.\]
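A short Python check of Example 2 (the rounding up to the next integer guarantees the requested margin of error):

```python
import math

def n_for_margin(z, sigma, E):
    """Smallest n with z*sigma/sqrt(n) <= E, i.e. n = (z*sigma/E)^2 rounded up."""
    return math.ceil((z * sigma / E) ** 2)

print(n_for_margin(z=1.96, sigma=0.2, E=0.025))  # 246
```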

## Introduction of the SOCR Confidence Interval Applet

To access the SOCR applet on confidence intervals go to http://socr.ucla.edu/htmls/exp/Confidence_Interval_Experiment_General.html. To select the type and parameters of the specific confidence interval of interest click on the **Confidence Interval** button on the top -- this will open a new pop-up window as shown below:

A confidence interval of interest can be selected from the drop-down list under *CI Settings*. In this case, we selected *Mean - Population Variance Known*.

In the same pop-up window, under *SOCR Distributions*, the drop-down menu offers a list of all the available distributions of SOCR. These distributions are the same as the ones included in the SOCR Distributions applet.

Once the desired distribution is selected, its parameters can be chosen numerically or via the sliders. In this example we select:

- *normal distribution* with *mean 5* and *standard deviation 2*,
- sample size (number of observations selected from the distribution) of *20*,
- the confidence level (\(1-\alpha=0.95\)), and
- the number of intervals to be constructed is 50 (see screenshot below).
**Note**: Make sure to hit enter after you enter any of the parameters above.

To run the SOCR CI simulation, go back to the applet in the main browser window. We can run the experiment once, by clicking on the *Step* button, or many times by clicking on the *Run* button. The number of experiments can be controlled by the value of the *Number of Experiments* variable (10, 100, 1,000, 10,000, or continuously).

In the screenshot above we observe the following:

- The shape of the distribution that was selected (in this case Normal).
- The observations selected from the distribution for the construction of each of the 50 intervals shown in blue on the top-left graph panel.
- The confidence intervals shown as red line segments on the bottom-left panel.
- The green dots represent instances of confidence intervals that do not include the parameter (in this case population mean of 5).
- All the parameters and simulation results are summarized on the right panel of the applet.

### Practice

Run the same experiment using sample sizes of 20, 30, 40, 50 with the same confidence level (\(1-\alpha=0.95\)). What are your observations and conclusions?

## Confidence intervals for the population mean \(\mu\) with known population variance \(\sigma^2\)

From the central limit theorem we know that when the sample size is large (usually \(n \ge 30\)) the distribution of the sample mean \(\bar X\) approximately follows \(\bar X \sim N(\mu, \frac{\sigma}{\sqrt{n}})\). Therefore, the confidence interval for the population mean \(\mu\) is approximately given by the expression we previously discussed: \[P\left(\bar x -z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \le \mu \le \bar x + z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \right) \approx 1-\alpha.\]

The mean \(\mu\) falls in the interval \(\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}\).

Also, the sample size determination is given by the same formula: \[E=z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \Rightarrow n=\left(\frac{z_{\frac{\alpha}{2}} \sigma}{E}\right)^2.\]

### Example 3

A sample of size *n=50* is taken from the production of light bulbs at a certain factory. The sample mean of the lifetime of these 50 light bulbs is found to be \(\bar x = 1,570\) hours. Assume
that the population standard deviation is \(\sigma=120\) hours.

- Construct a *95% confidence interval* for \(\mu\).
- Construct a *99% confidence interval* for \(\mu\).
- What sample size is needed so that the length of the interval is 30 hours with *95% confidence*?
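These three parts can be checked with a short Python sketch, using the tabulated critical values \(z_{0.025}=1.96\) and \(z_{0.005}\approx 2.576\) (an interval of total length 30 hours means \(E=15\)):

```python
import math

xbar, sigma, n = 1570, 120, 50

def z_ci(xbar, sigma, n, z):
    """Large-sample CI for the mean: xbar +/- z*sigma/sqrt(n)."""
    me = z * sigma / math.sqrt(n)
    return xbar - me, xbar + me

print(z_ci(xbar, sigma, n, 1.96))   # 95% CI, roughly (1536.7, 1603.3)
print(z_ci(xbar, sigma, n, 2.576))  # 99% CI, roughly (1526.3, 1613.7)

# Sample size for total interval length 30 hours (margin of error E = 15):
print(math.ceil((1.96 * sigma / 15) ** 2))  # 246
```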

## An empirical investigation

Two dice are rolled and the sum X of the two numbers that occurred is recorded. The probability distribution of X is as follows:

X | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|---|---|---|
P(X) | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |

This distribution has mean \(\mu=7\) and standard deviation \(\sigma=2.42\). We take 100 samples of size n=50 each from this distribution and compute for each sample the sample mean \(\bar x\). Pretend now that we only know that \(\sigma=2.42\), and that \(\mu\) is unknown. We are going to use these 100 sample means to construct 100 confidence intervals, each one with *95% confidence level*, for the true population mean \(\mu\). Here are the results:

Sample | \(\bar x\) | 95% CI for \(\mu\): \(\bar x - 1.96 \frac{2.42}{\sqrt {50}} \le \mu \le \bar x + 1.96 \frac{2.42}{\sqrt {50}}\) | Is \(\mu=7\) included? |
---|---|---|---|
1 | 6.9 | \(6.23\leq \mu\leq 7.57\) | YES |
2 | 6.3 | \(5.63\leq\mu\leq 6.97\) | NO |
3 | 6.58 | \(5.91\leq\mu\leq 7.25\) | YES |
4 | 6.54 | \(5.87\leq\mu\leq 7.21\) | YES |
5 | 6.7 | \(6.03\leq\mu\leq 7.37\) | YES |
6 | 6.58 | \(5.91\leq\mu\leq 7.25\) | YES |
7 | 7.2 | \(6.53\leq\mu\leq 7.87\) | YES |
8 | 7.62 | \(6.95\leq\mu\leq 8.29\) | YES |
9 | 6.94 | \(6.27\leq\mu\leq 7.61\) | YES |
10 | 7.36 | \(6.69\leq\mu\leq 8.03\) | YES |
11 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
12 | 7.08 | \(6.41\leq\mu\leq 7.75\) | YES |
13 | 7.42 | \(6.75\leq\mu\leq 8.09\) | YES |
14 | 7.42 | \(6.75\leq\mu\leq 8.09\) | YES |
15 | 6.8 | \(6.13\leq\mu\leq 7.47\) | YES |
16 | 6.94 | \(6.27\leq\mu\leq 7.61\) | YES |
17 | 7.2 | \(6.53\leq\mu\leq 7.87\) | YES |
18 | 6.7 | \(6.03\leq\mu\leq 7.37\) | YES |
19 | 7.1 | \(6.43\leq\mu\leq 7.77\) | YES |
20 | 7.04 | \(6.37\leq\mu\leq 7.71\) | YES |
21 | 6.98 | \(6.31\leq\mu\leq 7.65\) | YES |
22 | 7.18 | \(6.51\leq\mu\leq 7.85\) | YES |
23 | 6.8 | \(6.13\leq\mu\leq 7.47\) | YES |
24 | 6.94 | \(6.27\leq\mu\leq 7.61\) | YES |
25 | 8.1 | \(7.43\leq\mu\leq 8.77\) | NO |
26 | 7 | \(6.33\leq\mu\leq 7.67\) | YES |
27 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
28 | 6.82 | \(6.15\leq\mu\leq 7.49\) | YES |
29 | 6.96 | \(6.29\leq\mu\leq 7.63\) | YES |
30 | 7.46 | \(6.79\leq\mu\leq 8.13\) | YES |
31 | 7.04 | \(6.37\leq\mu\leq 7.71\) | YES |
32 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
33 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
34 | 6.8 | \(6.13\leq\mu\leq 7.47\) | YES |
35 | 7.12 | \(6.45\leq\mu\leq 7.79\) | YES |
36 | 7.18 | \(6.51\leq\mu\leq 7.85\) | YES |
37 | 7.08 | \(6.41\leq\mu\leq 7.75\) | YES |
38 | 7.24 | \(6.57\leq\mu\leq 7.91\) | YES |
39 | 6.82 | \(6.15\leq\mu\leq 7.49\) | YES |
40 | 7.26 | \(6.59\leq\mu\leq 7.93\) | YES |
41 | 7.34 | \(6.67\leq\mu\leq 8.01\) | YES |
42 | 6.62 | \(5.95\leq\mu\leq 7.29\) | YES |
43 | 7.1 | \(6.43\leq\mu\leq 7.77\) | YES |
44 | 6.98 | \(6.31\leq\mu\leq 7.65\) | YES |
45 | 6.98 | \(6.31\leq\mu\leq 7.65\) | YES |
46 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
47 | 7.14 | \(6.47\leq\mu\leq 7.81\) | YES |
48 | 7.5 | \(6.83\leq\mu\leq 8.17\) | YES |
49 | 7.08 | \(6.41\leq\mu\leq 7.75\) | YES |
50 | 7.32 | \(6.65\leq\mu\leq 7.99\) | YES |
51 | 6.54 | \(5.87\leq\mu\leq 7.21\) | YES |
52 | 7.14 | \(6.47\leq\mu\leq 7.81\) | YES |
53 | 6.64 | \(5.97\leq\mu\leq 7.31\) | YES |
54 | 7.46 | \(6.79\leq\mu\leq 8.13\) | YES |
55 | 7.34 | \(6.67\leq\mu\leq 8.01\) | YES |
56 | 7.28 | \(6.61\leq\mu\leq 7.95\) | YES |
57 | 6.56 | \(5.89\leq\mu\leq 7.23\) | YES |
58 | 7.72 | \(7.05\leq\mu\leq 8.39\) | NO |
59 | 6.66 | \(5.99\leq\mu\leq 7.33\) | YES |
60 | 6.8 | \(6.13\leq\mu\leq 7.47\) | YES |
61 | 7.08 | \(6.41\leq\mu\leq 7.75\) | YES |
62 | 6.58 | \(5.91\leq\mu\leq 7.25\) | YES |
63 | 7.3 | \(6.63\leq\mu\leq 7.97\) | YES |
64 | 7.1 | \(6.43\leq\mu\leq 7.77\) | YES |
65 | 6.68 | \(6.01\leq\mu\leq 7.35\) | YES |
66 | 6.98 | \(6.31\leq\mu\leq 7.65\) | YES |
67 | 6.94 | \(6.27\leq\mu\leq 7.61\) | YES |
68 | 6.78 | \(6.11\leq\mu\leq 7.45\) | YES |
69 | 7.2 | \(6.53\leq\mu\leq 7.87\) | YES |
70 | 6.9 | \(6.23\leq\mu\leq 7.57\) | YES |
71 | 6.42 | \(5.75\leq\mu\leq 7.09\) | YES |
72 | 6.48 | \(5.81\leq\mu\leq 7.15\) | YES |
73 | 7.12 | \(6.45\leq\mu\leq 7.79\) | YES |
74 | 6.9 | \(6.23\leq\mu\leq 7.57\) | YES |
75 | 7.24 | \(6.57\leq\mu\leq 7.91\) | YES |
76 | 6.6 | \(5.93\leq\mu\leq 7.27\) | YES |
77 | 7.28 | \(6.61\leq\mu\leq 7.95\) | YES |
78 | 7.18 | \(6.51\leq\mu\leq 7.85\) | YES |
79 | 6.76 | \(6.09\leq\mu\leq 7.43\) | YES |
80 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
81 | 7 | \(6.33\leq\mu\leq 7.67\) | YES |
82 | 7.08 | \(6.41\leq\mu\leq 7.75\) | YES |
83 | 7.18 | \(6.51\leq\mu\leq 7.85\) | YES |
84 | 7.26 | \(6.59\leq\mu\leq 7.93\) | YES |
85 | 6.88 | \(6.21\leq\mu\leq 7.55\) | YES |
86 | 6.28 | \(5.61\leq\mu\leq 6.95\) | NO |
87 | 7.06 | \(6.39\leq\mu\leq 7.73\) | YES |
88 | 6.66 | \(5.99\leq\mu\leq 7.33\) | YES |
89 | 7.18 | \(6.51\leq\mu\leq 7.85\) | YES |
90 | 6.86 | \(6.19\leq\mu\leq 7.53\) | YES |
91 | 6.96 | \(6.29\leq\mu\leq 7.63\) | YES |
92 | 7.26 | \(6.59\leq\mu\leq 7.93\) | YES |
93 | 6.68 | \(6.01\leq\mu\leq 7.35\) | YES |
94 | 6.76 | \(6.09\leq\mu\leq 7.43\) | YES |
95 | 7.3 | \(6.63\leq\mu\leq 7.97\) | YES |
96 | 7.04 | \(6.37\leq\mu\leq 7.71\) | YES |
97 | 7.34 | \(6.67\leq\mu\leq 8.01\) | YES |
98 | 6.72 | \(6.05\leq\mu\leq 7.39\) | YES |
99 | 6.64 | \(5.97\leq\mu\leq 7.31\) | YES |
100 | 7.3 | \(6.63\leq\mu\leq 7.97\) | YES |

We observe that four confidence intervals among the 100 that we constructed fail to include the true population mean \(\mu=7\) (about 5%).
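This coverage experiment is easy to replicate; below is a minimal stdlib-Python sketch (not the SOCR applet itself) that draws 100 samples of size 50 from the two-dice distribution and counts how many of the 95% intervals contain \(\mu=7\):

```python
import math
import random

random.seed(1)

values = list(range(2, 13))
weights = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]   # P(X) in units of 1/36
mu, sigma, n = 7, 2.42, 50
me = 1.96 * sigma / math.sqrt(n)              # half-width, about 0.67

covered = 0
for _ in range(100):
    xbar = sum(random.choices(values, weights=weights, k=n)) / n
    if xbar - me <= mu <= xbar + me:
        covered += 1

print(covered, "of 100 intervals cover mu=7")  # typically around 95
```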

### Example 4

For this example, we will select the Exponential distribution with \(\lambda=5\) (mean of 1/5 = 0.2), sample size 60, confidence level 0.95, and number of intervals 50. These settings, along with the results of the simulations are shown below

## Confidence intervals for the population mean of normal distribution **when the population variance \(\sigma^2\) is unknown**

Let \(X_1, X_2, \cdots, X_n\) be a random sample from \(N(\mu, \sigma^2)\). It is known that \(\frac{\bar X - \mu}{\frac{s}{\sqrt{n}}} \sim t_{n-1}\). Therefore,

\[P\left(-t_{\frac{\alpha}{2}; n-1} \le \frac{\bar X - \mu}{\frac{s}{\sqrt{n}}} \le t_{\frac{\alpha}{2}; n-1} \right)=1-\alpha,\]

- where \(-t_{\frac{\alpha}{2};n-1}\) and \(t_{\frac{\alpha}{2};n-1}\) are defined as follows:

As before, the area \(1-\alpha\) is called the *confidence level*. The values of \(t_{\frac{\alpha}{2};n-1}\) can be found from:

- the interactive SOCR Student's T-distribution applet, or
- the SOCR T-table; some examples are shown below:

\(1-\alpha\) | n | \(t_{\frac{\alpha}{2};n-1}\) |
---|---|---|
0.90 | 13 | 1.782 |
0.95 | 21 | 2.086 |
0.98 | 31 | 2.457 |
0.99 | 61 | 2.660 |

- Note: The sample standard deviation is computed as follows:

\[s=\sqrt{\frac{\sum_{i=1}^{n} (x_i-\bar x)^2}{n-1}}\]

- or using the shortcut formula.

\[ s=\sqrt{\frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{(\sum_{i=1}^{n} x_i)^2}{n}\right]}\]

After some rearranging the expression above can be written as: \[P\left(\bar x -t_{\frac{\alpha}{2};n-1} \frac{s}{\sqrt{n}} \le \mu \le \bar x + t_{\frac{\alpha}{2};n-1} \frac{s}{\sqrt{n}} \right)=1-\alpha\]

We say that we are \(1-\alpha\) confident that \(\mu\) falls in the interval: \[\bar x \pm t_{\frac{\alpha}{2};n-1} \frac{s}{\sqrt{n}}.\]

### Example 5

The daily production of a chemical product last week in tons was: 785, 805, 790, 793, and 802.

- Construct a *95% confidence interval* for the population mean \(\mu\).
- What assumptions are necessary?
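A short Python check of this example, with the critical value \(t_{0.025;4}=2.776\) hardcoded from the t-table (the interval assumes the daily production is approximately normally distributed):

```python
import math

data = [785, 805, 790, 793, 802]
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample sd
t = 2.776                          # t_{0.025; 4} from the t-table
me = t * s / math.sqrt(n)          # margin of error
print(round(xbar - me, 2), round(xbar + me, 2))  # 784.65 805.35
```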

### SOCR investigation

For this case, we will select the normal distribution with mean 5 and standard deviation 2, sample size of 25, number of intervals 50, and confidence level 0.95. These settings and simulation results are shown below:

We observe that the length of the confidence interval differs for all the intervals because the margin of error is computed using the sample standard deviation.

## Confidence interval for the population proportion *p*

Let \(Y_1, Y_2, \cdots, Y_n\) be a random sample from the Bernoulli distribution with probability of success *p*. To construct a confidence interval for *p*, the following result, based on the normal approximation, is used: \[\frac{X-np}{\sqrt{np(1-p)}} \sim N(0,1),\] where \(X=\sum_{i=1}^n{Y_i}\) is the total number of successes in the *n* experiments.

Therefore, \[P\left(-z_{\frac{\alpha}{2}} \le \frac{X-np}{\sqrt{np(1-p)}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha,\]

- where \(-z_{\frac{\alpha}{2}}\) and \(z_{\frac{\alpha}{2}}\) are defined as above.

After rearranging we get: \[P\left(\frac{X}{n} - z_{\frac{\alpha}{2}} \sqrt{\frac{p(1-p)}{n}} \le p \le \frac{X}{n} + z_{\frac{\alpha}{2}} \sqrt{\frac{p(1-p)}{n}}\right)=1-\alpha.\]

The ratio \(\frac{x}{n}\) is the *point estimate* of the population proportion *p* and is denoted \(\hat p=\frac{x}{n}\). The problem with this interval is that the unknown *p* also appears at the end points of the interval. As an approximation, we can simply replace *p* with its estimate \(\hat p=\frac{x}{n}\). Finally, the confidence interval is given by:

\[P\left(\hat p - z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}} \le p \le \hat p + z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}\right)=1-\alpha.\]

We say that we are \(1-\alpha\) confident that *p* falls in \(\hat p \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}.\)

## Calculating sample sizes

The basic problem we will address now is how to determine the sample size needed so that the resulting confidence interval will have a fixed margin of error *E* with confidence level \(1-\alpha\).

In the expression \(\hat p \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}\), the width of the confidence interval is given by the *margin of error* \(z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}\). We can simply solve for *n*:
\[E=z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}} \Rightarrow n = \frac {z_{\frac{\alpha}{2}}^2\hat p (1-\hat p)}{E^2}.\]

However, the value of \(\hat p\) is not known because we have not observed our sample yet. If we use \(\hat p=0.5\), we will obtain the largest possible sample size. Of course, if we have an idea about its value (from another study, etc.) we can use it.

### Example 6

In a survey poll before the elections, candidate *A* received the support of 650 voters in a sample of 1,200 voters.

- Construct a 95% confidence interval for the population proportion *p* that supports candidate *A*.
- Find the sample size needed so that the margin of error will be \(\pm 0.01\) with confidence level 95%.
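A Python sketch of this example, using the Wald interval and the sample-size formula above with \(\hat p\) as the planning value:

```python
import math

x, n, z = 650, 1200, 1.96
p_hat = x / n
me = z * math.sqrt(p_hat * (1 - p_hat) / n)        # margin of error
print(round(p_hat - me, 4), round(p_hat + me, 4))  # about 0.5135 0.5699

# Sample size for margin of error E = 0.01, using p_hat as the planning value:
E = 0.01
n_needed = math.ceil(z ** 2 * p_hat * (1 - p_hat) / E ** 2)
print(n_needed)  # 9538
```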

### Another formula for the confidence interval for the population proportion *p*

Another way to solve for *p* is presented below:
\[P\left(-z_{\frac{\alpha}{2}} \le \frac{X-np}{\sqrt{np(1-p)}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha\]

\[P\left(-z_{\frac{\alpha}{2}} \le \frac{\frac{X}{n}-p}{\sqrt{\frac{p(1-p)}{n}}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha\]

\[P\left(\frac{|\hat p - p|}{\sqrt{\frac{p(1-p)}{n}}} \le z_{\frac{\alpha}{2}} \right) =1-\alpha\] \[P\left(\frac{(\hat p - p)^2}{\frac{p(1-p)}{n}} \le z_{\frac{\alpha}{2}}^2 \right) =1-\alpha\]

We obtain a quadratic inequality in terms of *p*:
\[(\hat p - p)^2 - z_{\frac{\alpha}{2}}^2 \frac{p(1-p)}{n} \le 0\]
\[(1+\frac{z_{\frac{\alpha}{2}}^2}{n})p^2 - (2\hat p + \frac{z_{\frac{\alpha}{2}}^2}{n})p + \hat p^2 \le 0\]

Solving for *p* (the interval endpoints are the roots of the corresponding quadratic equation) we get the following confidence interval:
\[\frac{\hat p +\frac{z_{\frac{\alpha}{2}}^2}{2n} \pm
z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}+\frac{z_{\frac{\alpha}{2}}^2}{4n^2}}}
{1+\frac{z_{\frac{\alpha}{2}}^2}{n}}.\] When *n* is large, this interval is approximately the same as the one before.
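To see that the two recipes agree for large samples, here is a small Python comparison (with hypothetical helper names `wald` and `score`), applied to the poll data from Example 6:

```python
import math

def wald(x, n, z=1.96):
    """Simple normal-approximation (Wald) interval for p."""
    p = x / n
    me = z * math.sqrt(p * (1 - p) / n)
    return p - me, p + me

def score(x, n, z=1.96):
    """Endpoints solving the quadratic in p derived above."""
    p = x / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - half) / denom, (center + half) / denom

print(wald(650, 1200))
print(score(650, 1200))  # nearly identical for n = 1200
```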

## Exact confidence interval for *p*

The first interval for proportions above (normal approximation) produces intervals that are too narrow when the sample size is small, so its coverage falls below \(1-\alpha\). The following **exact** (Clopper-Pearson) method improves the low coverage of the normal-approximation confidence interval. The exact confidence interval, however, is conservative: its coverage is higher than \(1-\alpha\).
\[\left[1+\frac{n-x+1}{xF_{1-\frac{\alpha}{2};2x,2(n-x+1)}}\right]^{-1} < p <
\left[1+\frac{n-x}{(x+1)F_{\frac{\alpha}{2};2(x+1),2(n-x)}}\right]^{-1},\]

- where *x* is the number of successes among *n* trials, and \(F_{a;b,c}\) is the critical value of the *F* distribution with numerator degrees of freedom *b* and denominator degrees of freedom *c*, cutting off an upper-tail area of *a*.

## Confidence interval for the population variance \(\sigma^2\) of the normal distribution

Again, let \(X_1, X_2, \cdots, X_n\) be a random sample from \(N(\mu, \sigma^2)\). It is known that \(\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}\). Therefore, \[ P\left(\chi^2_{\frac{\alpha}{2}; n-1} \le \frac{(n-1)S^2}{\sigma^2} \le \chi^2_{1-\frac{\alpha}{2}; n-1} \right)=1-\alpha,\]

- where \(\chi^2_{\frac{\alpha}{2};n-1}\) and \(\chi^2_{1-\frac{\alpha}{2};n-1}\) are defined as follows:

As with the T-distribution, the values of \(\chi^2_{\frac{\alpha}{2};n-1}\) and \(\chi^2_{1-\frac{\alpha}{2};n-1}\) may be found from:

- Interactive SOCR Chi-Square Distribution applet, or
- the SOCR Chi-Square Table. Some examples are included below:

\(1-\alpha\) | n | \(\chi^2_{\frac{\alpha}{2};n-1}\) | \(\chi^2_{1-\frac{\alpha}{2};n-1}\) |
---|---|---|---|
0.90 | 4 | 0.352 | 7.81 |
0.95 | 16 | 6.26 | 27.49 |
0.98 | 25 | 10.86 | 42.98 |
0.99 | 41 | 20.71 | 66.77 |

If we rearrange the inequality above we get: \[P\left(\frac{(n-1)s^2}{\chi_{1-\frac{\alpha}{2};n-1}^2} \le \sigma^2 \le \frac{(n-1)s^2}{\chi_{\frac{\alpha}{2};n-1}^2}\right)=1-\alpha.\]

We say that we are \(1-\alpha\) confident that the population variance \(\sigma^2\) falls in the interval: \[\left[\frac{(n-1)s^2}{\chi_{1-\frac{\alpha}{2};n-1}^2}, \frac{(n-1)s^2}{\chi_{\frac{\alpha}{2};n-1}^2}\right]\]

**Comment**: When the sample size *n* is large, the \(\chi^2_{n-1}\) distribution can be approximated by \(N(n-1, \sqrt{2(n-1)})\). Therefore, in such situations, the confidence interval for the variance can be approximated as follows:

\[\frac{s^2}{1+z_{\frac{\alpha}{2}}\sqrt{\frac{2}{n-1}}} \le \sigma^2 \le \frac{s^2}{1-z_{\frac{\alpha}{2}}\sqrt{\frac{2}{n-1}}}.\]

### Example 7

A precision instrument is guaranteed to read accurately to within 2 units. A sample of 4 instrument readings on the same object yielded the measurements 353, 351, 351, and 355. Find a 90% confidence interval for the population variance. Assume that these observations were selected from a population that follows the normal distribution.
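A short Python check of Example 7, with the \(\chi^2\) critical values for \(n-1=3\) degrees of freedom taken from the table above:

```python
data = [353, 351, 351, 355]
n = len(data)
xbar = sum(data) / n
ss = sum((x - xbar) ** 2 for x in data)   # (n-1) * s^2 = 11.0
chi2_low, chi2_high = 0.352, 7.81         # chi^2_{0.05;3} and chi^2_{0.95;3}
lo, hi = ss / chi2_high, ss / chi2_low
print(round(lo, 2), round(hi, 2))  # 1.41 31.25
```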

## SOCR investigation

Using the SOCR confidence intervals applet, we run the following simulation experiment: normal distribution with mean 5 and standard deviation 2, sample size 30, confidence intervals 50, and confidence level 0.95.

However, if the population is *not normal*, the coverage is poor and this can be seen with the following SOCR example. Consider the exponential distribution with \(\lambda=2\) (variance is \(\sigma^2=0.25\)). If we use the confidence interval based on the \(\chi^2\) distribution, as described above, we obtain the following results (first with sample size 30 and then sample size 300).

We observe that regardless of the sample size the 95% CI(\(\sigma^2\)) coverage is poor. In these situations, (sampling from non-normal populations) an asymptotically distribution-free confidence interval for the variance can be obtained using the following large sample theory result: \[\sqrt{n}(s^2-\sigma^2) \rightarrow N\left(0, \mu_4-\sigma^4\right),\]

- That is, \(\frac{\sqrt{n}(s^2-\sigma^2)}{ \sqrt{\mu_4-\sigma^4}} \rightarrow N(0,1),\)
- where, \(\mu_4=E(X-\mu)^4\) is the fourth central moment of the distribution. Of course, \(\mu_4\) is unknown and is estimated by the fourth sample moment \(m_4=\frac{1}{n}\sum_{i=1}^n(X_i-\bar X)^4\). The confidence interval for the population variance is computed as follows:

\[ s^2 - z_{\frac{\alpha}{2}} \frac{\sqrt{m_4-s^4}}{\sqrt{n}} \le \sigma^2 \le s^2 + z_{\frac{\alpha}{2}} \frac{\sqrt{m_4-s^4}}{\sqrt{n}}.\]
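This distribution-free interval is straightforward to compute from data; here is a stdlib-Python sketch on simulated exponential data (rate \(\lambda=2\), so the true variance is \(\sigma^2=0.25\); the simulated dataset is only for illustration):

```python
import math
import random

random.seed(42)
data = [random.expovariate(2.0) for _ in range(300)]

n = len(data)
xbar = sum(data) / n
s2 = sum((x - xbar) ** 2 for x in data) / n          # second central moment (about s^2 for large n)
m4 = sum((x - xbar) ** 4 for x in data) / n          # fourth central moment m_4
me = 1.96 * math.sqrt(m4 - s2 ** 2) / math.sqrt(n)   # margin of error
print(s2 - me, s2 + me)  # this interval should typically cover sigma^2 = 0.25
```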

Using the SOCR CI Applet, with exponential distribution (\(\lambda=2\)), sample size 300, number of intervals 50, and confidence level 0.95, we see that the coverage of *this interval* is approximately 95%.

The 95% CI(\(\sigma^2\)) coverage for the intervals constructed using the method of asymptotic distribution-free intervals is much closer to 95%.

## Confidence intervals for the population parameters of a distribution based on the asymptotic properties of maximum likelihood estimates

To construct confidence intervals for a parameter of some distribution, the following method can be used based on the large sample theory of maximum likelihood estimates. As the sample size *n* increases it can be shown that the maximum likelihood estimate \(\hat \theta\) of a parameter \(\theta\) follows approximately the normal distribution with mean \(\theta\) and variance equal to the lower bound of the Cramer-Rao inequality.
\[\hat \theta \sim N\left(\theta, \frac{1}{nI(\theta)}\right)\]

- where \(\frac{1}{nI(\theta)}\) is the lower (Cramer-Rao) bound for the variance of the estimator, and \(I(\theta)\) is Fisher's information.

Because \(I(\theta)\) (Fisher's information) is a function of the unknown parameter \(\theta\) we replace \(\theta\) with its maximum likelihood estimate \(\hat \theta\) to get \(I(\hat \theta)\).

The pivot \(Z=\frac{\hat \theta - \theta}{\sqrt{\frac{1}{nI(\hat \theta)}}}\) is approximately standard normal, so we can write
\(P\left(-z_{\frac{\alpha}{2}} \le Z \le z_{\frac{\alpha}{2}}\right) \approx 1-\alpha.\) Substituting the expression for *Z*, we get
\[P\left(-z_{\frac{\alpha}{2}} \le \frac{\hat \theta - \theta}{\sqrt{\frac{1}{nI(\hat \theta)}}} \le z_{\frac{\alpha}{2}}\right) \approx 1-\alpha.\]

Therefore, \[P\left(\hat \theta -z_{\frac{\alpha}{2}} \sqrt{\frac{1}{nI(\hat \theta)}} \le \theta \le \hat \theta + z_{\frac{\alpha}{2}} \sqrt{\frac{1}{nI(\hat \theta)}} \right) \approx 1-\alpha.\]

Thus, we are \(1-\alpha\) confident that \(\theta\) falls in the interval \[\hat \theta \pm z_{\frac{\alpha}{2}} \sqrt{\frac{1}{nI(\hat \theta)}}.\]

### Example 8

Use the result above to construct a confidence interval for the Poisson parameter \(\lambda\). Let \(X_1, X_2, \cdots, X_n\) be independent and identically distributed random variables from a Poisson distribution with parameter \(\lambda\).

We know that the maximum likelihood estimate of \(\lambda\) is \(\hat \lambda=\bar x\). We need to find the lower bound of the Cramer-Rao inequality: \[f(x)=\frac{\lambda^x e^{-\lambda}}{x!} \Rightarrow \ln f(x) = x\ln\lambda - \lambda -\ln x!\]

Let's find the first and second derivatives with respect to \(\lambda\): \[\frac{\partial \ln f(x)}{\partial \lambda}=\frac{x}{\lambda}-1,\] \[\frac{\partial^2 \ln f(x)}{\partial \lambda^2}=-\frac{x}{\lambda^2}.\]

Therefore, \(\frac{1}{-nE\left(\frac{\partial^2 \ln f(x)}{\partial \lambda^2}\right)}=\frac{1}{-nE\left(-\frac{X}{\lambda^2}\right)}= \frac{\lambda^2}{\lambda n}=\frac{\lambda}{n}\). When *n* is large, \(\hat \lambda\) approximately follows the normal distribution with mean \(\lambda\) and variance \(\frac{\lambda}{n}\), that is, \(\hat \lambda \sim N\left(\lambda, \frac{\lambda}{n}\right)\). Because \(\lambda\) is unknown, we replace it with its maximum likelihood estimate \(\hat \lambda = \bar X\):
\[\hat \lambda \sim N\left(\bar X, \frac{\bar X}{n}\right).\]

Therefore, the confidence interval for \(\lambda\) is: \[\bar X \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\bar X}{n}}.\]
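The asymptotic normality of \(\hat \lambda = \bar X\) can be checked with a quick simulation. This is only an illustrative sketch: the values of \(\lambda\), *n*, and the number of replications below are arbitrary choices.

```r
# Simulation sketch of the asymptotic normality of lambda-hat = x-bar
# (lambda, n, and the replication count are arbitrary illustrative choices)
set.seed(1)
lambda <- 5
n <- 200
mle <- replicate(5000, mean(rpois(n, lambda)))  # 5000 simulated MLEs
mean(mle)   # close to lambda = 5
sd(mle)     # close to sqrt(lambda/n) = sqrt(5/200), about 0.158
```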

### Application

The number of pine trees per acre in a certain forest follows the Poisson distribution with unknown parameter \(\lambda\). A random sample of *n=50* acres is selected and the number of pine trees in each acre is counted. Here are the results:

```
7 4 5 3 1 5 7 6 4 3 2 6 6 9 2 3 3 7 2 5 5 4 4 8 8 7 2 6 3 5 0
5 8 9 3 4 5 4 6 1 0 5 4 6 3 6 9 5 7 6
```

The sample mean is \(\bar x=4.76\). Therefore, a 95% confidence interval for the parameter \(\lambda\) is \[4.76 \pm 1.96 \sqrt{\frac{4.76}{50}}\]

- That is, \(4.76 \pm 0.60.\)

Therefore, \(4.16 \le \lambda \le 5.36\).
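The arithmetic above can be verified in R; the data vector below is copied from the table of counts.

```r
# 95% CI for the Poisson rate from the pine-tree data
x <- c(7,4,5,3,1,5,7,6,4,3,2,6,6,9,2,3,3,7,2,5,5,4,4,8,8,7,2,6,3,5,0,
       5,8,9,3,4,5,4,6,1,0,5,4,6,3,6,9,5,7,6)
n <- length(x)             # 50 acres
xbar <- mean(x)            # 4.76
me <- 1.96 * sqrt(xbar/n)  # margin of error, about 0.60
c(xbar - me, xbar + me)    # approximately (4.16, 5.36)
```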

### Exponential distribution

Verify that for the parameter \(\lambda\) of the exponential distribution the confidence interval obtained by this method is given as follows: \[\frac{1}{\bar x} \pm z_{\frac{\alpha}{2}} \sqrt{\frac{1}{n \bar x^2}}.\]
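A Monte Carlo sketch can check that this interval attains roughly its nominal coverage. The sample size, rate, and replication count below are arbitrary choices for illustration.

```r
# Monte Carlo coverage check for the exponential CI above
# (n, lambda, and the replication count are arbitrary choices)
set.seed(2)
lambda <- 0.5
n <- 100
cover <- replicate(5000, {
  x   <- rexp(n, lambda)
  est <- 1/mean(x)                       # MLE of lambda
  me  <- 1.96 * sqrt(1/(n * mean(x)^2))  # margin of error from the formula above
  (est - me <= lambda) && (lambda <= est + me)
})
mean(cover)  # empirical coverage, close to the nominal 0.95
```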

The following SOCR simulations refer to:

- Poisson distribution, \(\lambda=5\), sample size 40, number of intervals 50, confidence level 0.95.

- Exponential distribution, \(\lambda=0.5\), sample size 30, number of intervals 50, confidence level 0.95.

## Confidence intervals using bootstrapping

Above we showed that, asymptotically (for very large samples), the CLT asymptotic properties allow us to estimate confidence intervals for various statistics (e.g., mean, median, range) of most *nice distributions* (e.g., Weibull, Gamma, Lognormal). In practice, however, we usually have only small samples, and the data may not be distributed according to a fixed known distribution. In such situations, we commonly compute **bootstrap confidence interval (BCI)** estimates using the following protocol:

- Resample from the observed dataset, with replacement, to generate \(B\) new samples.
- For each of these sub-samples, calculate the sample statistic of interest (e.g., mean, median).
- Calculate an appropriate bootstrap confidence interval for each sample statistic using one of the many different bootstrap confidence interval (BCI) recipes.
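The three-step protocol above can be sketched directly in base R using the percentile method, without any packages; the Gamma data are simulated for illustration, mirroring the boot-package example below.

```r
# Percentile-method bootstrap CI in base R (a minimal sketch of the
# three-step protocol; the Gamma data are simulated for illustration)
set.seed(123)
x <- rgamma(383, 5, 3)   # the "observed" dataset
B <- 10000               # number of bootstrap resamples
boot_means <- replicate(B, mean(sample(x, replace = TRUE)))  # steps 1-2
ci <- quantile(boot_means, c(0.025, 0.975))                  # step 3: 95% percentile BCI
ci
```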

In practice, we calculate several BCIs and examine any discrepancies between them. Let's look at an R implementation example:

```
rm(list=ls())               # clean the environment
set.seed(123)               # for reproducibility

data0 <- rgamma(383, 5, 3)  # specify a distribution, or reference a specific dataset; here we use simulated data
mean(data0)                 # sample statistic (e.g., the mean)
hist(data0)                 # histogram of the data

library(boot)
meanB <- function(data, indices) {  # computes the sample statistic (e.g., mean) for each sub-sample
  d <- data[indices]                # allows boot() to select each resample
  return(mean(d))
}

results <- boot(data = data0, statistic = meanB, R = 10000)  # bootstrapping with 10000 replications
results; plot(results)      # explore the results of the multiple sample-statistic calculations
boot.ci(results, type = c("norm", "basic", "perc", "bca"))   # compute 95% confidence intervals
```

See the SOCR Resampling Webapp, which demonstrates bootstrap resampling applications.


## References

- Mega, M., Dinov, I., Thompson, P., Manese, M., Lindshield, C., Moussai, J., Tran, N., Olsen, K., Felix, J., Zoumalan, C., Woods, R., Toga, A., and Mazziotta, J. (2005). *Automated brain tissue assessment in the elderly and demented population: Construction and validation of a sub-volume probabilistic brain atlas*. NeuroImage, 26(4), 1009-1018.
- Stewart, C. (1999). *Robust Parameter Estimation in Computer Vision*. SIAM Review, 41(3), 513-537.
- Wolfram, S. (2002). *A New Kind of Science*. Wolfram Media Inc.
- Agresti, A., and Coull, B. A. (1998). *Approximate is Better than Exact for Interval Estimation of Binomial Proportions*. The American Statistician, 52(2), 119-126.
- Sauro, J., and Lewis, J. R. (2005). *Estimating Completion Rates From Small Samples Using Binomial Confidence Intervals: Comparisons and Recommendations*. Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting.
- Hogg, R. V., and Tanis, E. A. (1988). *Probability and Statistical Inference*, 3rd Edition. Macmillan.
- Ferguson, T. S. (1996). *A Course in Large Sample Theory*. Chapman & Hall.
- Rice, J. (2006). *Mathematical Statistics and Data Analysis*, 3rd Edition. Duxbury Press.
- Christou, N., and Dinov, I. D. (2011). *Confidence Interval Based Parameter Estimation—A New SOCR Applet and Activity*. PLoS ONE 6(5): e19178. doi:10.1371/journal.pone.0019178.
