Scientific Methods for Health Sciences - Parameter Estimation
Overview
Estimation is an important concept in statistics, and it is widely applied across many fields. It deals with estimating the values of population parameters based on sample data. The parameters describe an underlying physical setting, and their values affect the distribution of the measured data. Two major approaches are commonly used in estimation:
- The probabilistic approach assumes that the measured data is random with probability distribution dependent on the parameters.
- The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
The purpose of estimation is to find an estimator that can be interpreted, that is accurate, and that exhibits some form of optimality. Criteria such as minimum variance among unbiased estimators are usually applied to measure an estimator's optimality, although an optimal estimator does not always exist. Here we present the fundamentals of estimation theory and illustrate how to apply estimation in real studies.
Motivation
To obtain a desired estimator, we first determine a probability distribution, with parameters of interest, based on the data. After deciding on the probabilistic model, we find the theoretically achievable precision available to any estimator based on that model and then develop an estimator accordingly. There is a variety of methods and criteria for developing and choosing among estimators based on their performance:
- Maximum likelihood estimators
- Bayes estimators
- Method of moments estimators
- Minimum mean square error estimators
- Minimum variance unbiased estimators
- Best linear unbiased estimators, etc.
Experiments or simulations can also be run to test estimators' performance, as in the sketch below.
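As an illustration of testing estimators by simulation, the following minimal Python sketch (our own toy example, not from the original text) compares the method-of-moments estimator $2\bar{X}$ with the maximum likelihood estimator $\max(X_i)$ for the upper bound $\theta$ of a Uniform$(0,\theta)$ distribution, using bias and mean squared error as performance criteria:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 10.0, 20, 100_000       # true parameter, sample size, replicates

samples = rng.uniform(0, theta, size=(reps, n))
mom = 2 * samples.mean(axis=1)           # method-of-moments estimator: 2 * sample mean
mle = samples.max(axis=1)                # maximum likelihood estimator: sample maximum

for name, est in [("MOM", mom), ("MLE", mle)]:
    bias = est.mean() - theta
    mse = ((est - theta) ** 2).mean()
    print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```

Here the MLE is biased (it can only underestimate $\theta$) yet typically has the smaller mean squared error, which is why performance criteria beyond unbiasedness matter.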
Theory
An estimate of a population parameter may be expressed in two ways:
- Point estimate: a single value used as the estimate. For example, the sample mean is a point estimate of the population mean.
- Interval estimate: An interval estimate is defined by two numbers, between which a population parameter is said to lie.
Confidence Intervals (CIs)
CIs describe the uncertainty of a sampling method and consist of a confidence level, a statistic and a margin of error. The statistic and the margin of error define an interval estimate, which represents the precision of the method. A confidence interval is expressed as the sample statistic plus or minus the margin of error.
The interpretation of a confidence interval at the 95% confidence level is that, under repeated sampling, 95% of intervals constructed this way would contain the true population parameter.
- Confidence level: The probability part of a confidence interval. It describes the likelihood that a particular sampling method will produce a confidence interval that includes the true population parameter.
- Margin of error: the range of values above and below the sample statistic in a confidence interval; margin of error $=$ critical value $\times$ standard deviation of the statistic
- Critical value: The central limit theorem states that the sampling distribution of a statistic will be normal or nearly normal, and that the critical value can be expressed as a $t$ score or as a $z$ score provided that ANY of the following conditions apply:
- The population distribution is normal.
- The sampling distribution is symmetric, unimodal, without outliers, and the sample size is 15 or less.
- The sampling distribution is moderately skewed, unimodal, without outliers, and the sample size is between 16 and 40.
- The sample size is greater than 40, without outliers.
To find the critical value, follow these steps (a code sketch follows this list).
- Compute alpha $(\alpha): \alpha = 1 - \left(\frac{confidence\ level}{100}\right)$
- Find the critical probability $(p^*): p^* = 1 -\frac {\alpha} {2}$
- To express the critical value as a $z$ score, find the $z$ score having a cumulative probability equal to the critical probability $(p^*)$.
- To express the critical value as a $t$ score, follow these steps. Find the degrees of freedom (DF): when estimating a mean score or a proportion from a single sample, DF is equal to the sample size minus one. For other applications, the degrees of freedom may be calculated differently. We will describe those computations as they come up.
- The critical $t$ score $(t^*)$ is the $t$ score having degrees of freedom equal to DF and a cumulative probability equal to the critical probability $(p^*)$.
- Should you express the critical value as a $t$ score or as a $z$ score? As a practical matter, when the sample size is large (greater than 40), it doesn't make much difference. Both approaches yield similar results. Strictly speaking, when the population standard deviation is unknown or when the sample size is small, the $t$ score is preferred. Nevertheless, many introductory statistics texts use the $z$ score exclusively.
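A minimal Python sketch of these steps (assuming SciPy is available; the 95% level and sample size are illustrative choices, not from the original text):

```python
from scipy import stats

confidence_level = 95                         # in percent (illustrative)
n = 25                                        # sample size (illustrative)

alpha = 1 - confidence_level / 100            # alpha = 0.05
p_star = 1 - alpha / 2                        # critical probability p* = 0.975

z_star = stats.norm.ppf(p_star)               # z score with cumulative probability p*
t_star = stats.t.ppf(p_star, df=n - 1)        # t score with DF = n - 1

print(f"z* = {z_star:.3f}, t* = {t_star:.3f}")   # z* = 1.960, t* = 2.064
```

As the text notes, $t^*$ exceeds $z^*$ for small samples, and the two converge as the sample size grows.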
- Standard error: an estimate of the standard deviation of a statistic. When the values of population parameters are unknown, it is valuable to compute the standard error as an estimate of the standard deviation of a statistic. It is computed from known sample statistics. The table below shows how to compute the standard error for simple random samples, assuming that the population size is at least 10 times larger than the sample size; a code sketch of these formulas follows the table.
| Statistic | Standard error |
|---|---|
| Sample mean, $\bar{x}$ | $SE_{\bar{x}}=\frac{s}{\sqrt{n}}$ |
| Sample proportion, $p$ | $SE_{p}=\sqrt{\frac{p(1-p)}{n}}$ |
| Difference between means, $\bar{x}_{1}-\bar{x}_{2}$ | $SE_{\bar{x}_1 -\bar{x}_2} = \sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}$ |
| Difference between proportions, $p_{1}-p_{2}$ | $SE_{p_{1}-p_{2}} = \sqrt{\frac{p_1 (1-p_1)}{n_1}+\frac{p_{2}(1-p_{2})}{n_{2}}}$ |
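The same formulas as small Python helpers (a sketch; the function names are our own):

```python
import math

def se_mean(s, n):
    # Standard error of a sample mean: s / sqrt(n)
    return s / math.sqrt(n)

def se_proportion(p, n):
    # Standard error of a sample proportion: sqrt(p(1-p)/n)
    return math.sqrt(p * (1 - p) / n)

def se_diff_means(s1, n1, s2, n2):
    # Standard error of a difference between two sample means
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    # Standard error of a difference between two sample proportions
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

print(se_diff_means(100, 15, 90, 20))   # about 32.74
```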
- Degrees of freedom: the number of independent pieces of information on which the estimate is based
- In general, the degrees of freedom for an estimate equal the number of independent values minus the number of parameters estimated on the way to the estimate in question. For example, if we have sampled 20 data points, then our estimate of the variance (which first requires estimating the mean) has 20 − 1 = 19 degrees of freedom.
Characteristics of Estimators
- Bias: refers to whether an estimator tends to either overestimate or underestimate the parameter. We say an estimator is biased if the mean of the sampling distribution of the statistic is not equal to the parameter. For example, $\hat{\sigma}^{2}=\frac{\sum{(x-\bar{x})^{2}}}{N}$ is a biased estimator of the population variance, while the sample variance $s^{2}=\frac{\sum{(x-\bar{x})^{2}}}{N-1}$ is an unbiased estimate of the population variance (see the simulation after this list).
- Sampling variability: refers to how much the estimate varies from sample to sample. It is usually measured by the standard error: the smaller the standard error, the less the sampling variability. For example, the standard error of the mean is $\sigma_M=\frac{\sigma}{\sqrt{N}}$, so the larger the sample size $(N)$, the smaller the standard error of the mean, and hence the smaller the sampling variability.
- Unbiased estimates via conditioning (Lehmann–Scheffé): if $\delta(X_{1},X_{2},\dots,X_{n})$ is an unbiased estimate of $g(\theta)$ and $T$ is a complete sufficient statistic for the family of densities, then $\eta(X_{1},X_{2},\dots,X_{n})=E[\delta(X_{1},X_{2},\dots,X_{n})|T]$ is an unbiased estimate of $g(\theta)$ with variance no larger than that of $\delta$.
- (Uniformly) Minimum-variance unbiased estimator (UMVUE, or MVUE): an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter. It may not exist. Consider estimation of $g(\theta)$ based on data $X_{1},X_{2},\dots,X_{n}$, independent and identically distributed from some member of a family with density $p_\theta$, $\theta \in \Omega$. An unbiased estimator $\delta(X_{1},X_{2},\dots,X_{n})$ of $g(\theta)$ is UMVUE if, $\forall \theta \in \Omega$, $var(\delta(X_{1},X_{2},\dots,X_{n})) \leq var(\tilde{\delta}(X_{1},X_{2},\dots,X_{n}))$ for any other unbiased estimator $\tilde{\delta}$.
- $MSE(\delta)=var(\delta)+(bias(\delta))^{2}$. The MVUE minimizes MSE among unbiased estimators. In some cases, biased estimators have lower MSE because they have smaller variance than any unbiased estimator does.
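A quick Monte Carlo check of the bias claims above (a sketch; the normal population, variance, and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var, n, reps = 4.0, 10, 200_000

x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
biased = x.var(axis=1, ddof=0)      # divide by N: biased downward
unbiased = x.var(axis=1, ddof=1)    # divide by N - 1: unbiased

print(f"E[biased]   = {biased.mean():.3f}")    # about (N-1)/N * 4 = 3.6
print(f"E[unbiased] = {unbiased.mean():.3f}")  # about 4.0
```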
Applications
- This article presents the MOM and MLE methods of estimation. It illustrates the MOM method with detailed examples and attaches several exercises for students to practice. MOM, short for Method Of Moments, is one of the most commonly used methods for estimating population parameters from observed data. The idea is to use the sample data to calculate sample moments and then set these equal to their corresponding population counterparts. Steps: (1) determine the $k$ parameters of interest and the specific distribution for this process; (2) compute the first $k$ (or more) sample moments; (3) set the sample moments equal to the population moments and solve the resulting system of $k$ equations with $k$ unknowns. Let's look at a simple example as an application of the MOM method.
- Suppose we want to estimate the true probability of a head when flipping a (possibly unfair) coin. We flip the coin 10 times and observe the outcome {H,T,H,H,T,T,T,H,T,T}. With MOM: (1) the parameter of interest is $p=P(H)$, and each flip follows a Bernoulli distribution; (2) $np=E[Y]=4$, so $p=4/10=2/5$, where $Y$ is the number of heads in one experiment and follows a Binomial distribution; (3) the estimate of the true probability of flipping a head is therefore $2/5$. This is a simple MOM proportion example; a code sketch follows this list.
- This article presents a fundamental introduction to estimation theory and illustrates its basic concepts and applications. It offers specific examples and exercises for each concept and application and serves as a good starting introduction to estimation theory.
- This article proposes an algorithm, the bootstrap filter, for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise, and it may be applied to any state-transition or measurement model. The article presents a simulation example of the bearings-only tracking problem and includes schemes for improving the efficiency of the basic algorithm; a minimal illustrative sketch follows.
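The coin-flipping example above as a one-line MOM computation (a sketch):

```python
flips = list("HTHHTTTHTT")                # the observed outcomes
p_hat = flips.count("H") / len(flips)     # first sample moment = sample proportion
print(p_hat)                              # 0.4, i.e., 2/5
```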
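To make the bootstrap filter concrete, here is a minimal sketch for a one-dimensional random-walk state observed with Gaussian noise. The model, noise levels, and particle count are our own illustrative assumptions, not taken from the cited article:

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, T = 1000, 50
q, r = 1.0, 0.5                                 # state and measurement noise std devs

# Simulate a true random-walk trajectory and noisy measurements of it.
x_true = np.cumsum(rng.normal(0, q, T))
y = x_true + rng.normal(0, r, T)

particles = rng.normal(0, 1, n_particles)       # random samples representing the state density
estimates = []
for t in range(T):
    # Propagate: push each sample through the state-transition model.
    particles = particles + rng.normal(0, q, n_particles)
    # Weight: likelihood of the measurement under each sample (Gaussian kernel).
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    # Resample: draw an equally weighted sample set (the "bootstrap" step).
    particles = rng.choice(particles, size=n_particles, p=w)
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print(f"RMSE of filtered estimates: {rmse:.3f}")
```

Because the update uses only sampling, weighting, and resampling, nothing in the loop assumes linearity or Gaussian noise; those lines could be swapped for any state-transition or measurement model.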
Software
Problems
- Which of the following statements is true?
- a. When the margin of error is small, the confidence level is high.
- b. When the margin of error is small, the confidence level is low.
- c. A confidence interval is a type of point estimate.
- d. A population mean is an example of a point estimate.
- e. None of the above.
- Which of the following statements is true?
- a. The standard error is computed solely from sample attributes.
- b. The standard deviation is computed solely from sample attributes.
- c. The standard error is a measure of central tendency.
- d. All of the above.
- e. None of the above.
- 900 students were randomly selected for a national survey. Among survey participants, the mean grade-point average (GPA) was 2.7, and the standard deviation was 0.4. What is the margin of error, assuming a 95% confidence level?
- a. 0.013
- b. 0.025
- c. 0.500
- d. 1.960
- Suppose we want to estimate the average weight of an adult male in Dekalb County, Georgia. We draw a random sample of 1,000 men from a population of 1,000,000 men and weigh them. We find that the average man in our sample weighs 180 pounds, and the standard deviation of the sample is 30 pounds. What is the 95% confidence interval?
- a. $180 \pm 1.86$
- b. $180 \pm 3.0$
- c. $180 \pm 5.88$
- d. $180 \pm 30$
- Suppose that simple random samples of seniors are selected from two colleges: 15 students from school A and 20 students from school B. On a standardized test, the sample from school A has an average score of 1000 with a standard deviation of 100. The sample from school B has an average score of 950 with a standard deviation of 90. What is the 90% confidence interval for the difference in test scores at the two schools, assuming that test scores came from normal distributions in both schools? (Hint: Since the sample sizes are small, use a t score as the critical value.)
- a. $50 \pm 1.70$
- b. $50 \pm 28.49$
- c. $50 \pm 32.74$
- d. $50 \pm 55.66$
- You know the population mean for a certain test score. You select 10 people from the population to estimate the standard deviation. How many degrees of freedom does your estimation of the standard deviation have?
- a. 8
- b. 9
- c. 10
- d. 11
- In the population, a parameter has a value of 10. Based on the means and standard errors of their sampling distributions, which of these statistics estimates this parameter with the least sampling variability?
- a. Mean = 10, SE = 5
- b. Mean = 9, SE = 4
- c. Mean = 11, SE = 2
- d. Mean = 13, SE = 3
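For checking your work on the numeric problems above, a short sketch (assuming SciPy; the two-sample degrees of freedom use the Welch–Satterthwaite approximation, one common convention):

```python
from scipy import stats
import math

# GPA problem: n = 900, s = 0.4, 95% confidence level.
me_gpa = stats.norm.ppf(0.975) * 0.4 / math.sqrt(900)

# Weight problem: n = 1000, s = 30, 95% confidence level.
me_weight = stats.norm.ppf(0.975) * 30 / math.sqrt(1000)

# Two-school problem: 90% confidence level, t critical value.
v1, v2 = 100**2 / 15, 90**2 / 20                   # s_i^2 / n_i
se = math.sqrt(v1 + v2)                            # about 32.74
df = (v1 + v2) ** 2 / (v1**2 / 14 + v2**2 / 19)    # Welch-Satterthwaite, about 28.4
me_schools = stats.t.ppf(0.95, df=df) * se         # about 55.7

print(f"{me_gpa:.3f}  {me_weight:.2f}  {me_schools:.2f}")
```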