SMHS BiasPrecision


Scientific Methods for Health Sciences - Bias and Precision

Overview:

The figure below illustrates that bias should not be the only criterion for judging an estimator. Which is preferable: shots that fall, on average, near the target but with broad scatter, or shots that trade a small systematic offset for being close to the target most of the time? To better understand bias and precision, the trade-off between them, and their role in choosing a better model or test, we present a general introduction to both criteria and several ways to measure them. We also discuss how the two are related and how to choose a model based on its performance in both areas.

[Figure 1 (SMHS_BIAS_Precision_Fig_1_cg_07282014.png): bias and precision illustrated as shots scattered around a target]

Motivation:

Which is more important, an unbiased result or a more precise one? There is no easy answer. Consider maximum likelihood estimation (MLE), which often yields biased estimators, meaning that the long-run expected value of the estimator differs from the true value by some (usually small) amount. The bias can often be corrected; a familiar example is the unbiased estimator of the variance of a normal distribution, which uses n-1 rather than n in the denominator. "Unbiased" is often misunderstood as "superior." This is not always true; an unbiased estimator is superior only when it also has superior precision, and biased estimators often have smaller overall error than unbiased ones. Clearly, both criteria must be considered before one estimator is judged superior to another.

Theory

Bias

Bias is a systematic error that contributes to the difference between the mean of a large number of test results and an accepted reference value; it is the average difference between the estimator and the true value. A bias statement describes the bias and the methods used to provide corrected test results. If the bias is unknown but its direction or bounds can be estimated, that information should be included in the bias statement.

  • Internal validity is the ability of a test or experiment to show what is actually happening in the sample data. Bias can make it appear that there is an association when there is none (bias away from the null) or mask an association when one really exists (bias towards the null).
  • Types of bias:
(1) selection bias: who is selected into or retained in a study distorts your estimate of the truth;
(2) information bias: the quality of your information distorts your estimate of the truth;
(3) confounding bias: differences between cases and controls, or between the exposed and unexposed, distort your estimate of the truth.
  • The internal validity of a test may also be violated by pure chance: the luck of the draw gives you a study sample that is not representative of the larger population. If a test is not internally valid, it is not externally valid and cannot be generalized to anyone.
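
To make the effect of selection bias concrete, here is a minimal Python sketch (an illustrative NumPy simulation, not part of the SOCR materials; the population size, exposure prevalence, and disease risks are hypothetical) in which exposed controls are only half as likely to be recruited into a case-control study, pushing the observed odds ratio away from the null relative to a true odds ratio of about 2.

 import numpy as np

 rng = np.random.default_rng(0)

 # Hypothetical source population: 40% exposed; exposure roughly doubles
 # the odds of disease (true odds ratio is about 2 for this rare outcome).
 n = 200_000
 exposed = rng.random(n) < 0.40
 disease = rng.random(n) < np.where(exposed, 0.02, 0.01)

 def odds_ratio(is_case, is_exposed):
     a = np.sum(is_case & is_exposed)       # exposed cases
     b = np.sum(~is_case & is_exposed)      # exposed controls
     c = np.sum(is_case & ~is_exposed)      # unexposed cases
     d = np.sum(~is_case & ~is_exposed)     # unexposed controls
     return (a * d) / (b * c)

 cases = np.flatnonzero(disease)
 controls = np.flatnonzero(~disease)

 # Fair control selection: every non-case is equally likely to be included.
 ctrl_fair = rng.choice(controls, size=cases.size, replace=False)

 # Biased selection: exposed non-cases are only half as likely to be included.
 w = np.where(exposed[controls], 0.5, 1.0)
 ctrl_biased = rng.choice(controls, size=cases.size, replace=False, p=w / w.sum())

 for label, ctrl in [("fair controls  ", ctrl_fair), ("biased controls", ctrl_biased)]:
     idx = np.concatenate([cases, ctrl])
     is_case = np.concatenate([np.ones(cases.size, bool), np.zeros(ctrl.size, bool)])
     print(label, "observed odds ratio ~", round(float(odds_ratio(is_case, exposed[idx])), 2))

Under-sampling exposed controls inflates the observed odds ratio (bias away from the null); over-sampling them would attenuate it (bias towards the null).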

Precision

Precision is the closeness of agreement among test results obtained under prescribed conditions; it is the standard deviation of the estimator. A precision statement allows potential users of a test method to assess, in general terms, its usefulness in a proposed application. It is not intended to contain values that can be duplicated in every user's laboratory; rather, it offers guidelines on the type of variability that can be expected among test results when the method is used in one or more reasonably competent laboratories.

  • Two measures are used to express precision:
(1) repeatability: it addresses variability between independent test results gathered within a single laboratory (intra-laboratory testing) and tends to produce nominal variability;
(2) reproducibility: it addresses variability among single test results gathered from different laboratories (inter-laboratory testing) and tends to produce appreciable variability.
  • One measure of the overall error is the Mean Square Error (MSE), the average of the individual squared errors: $ MSE=(precision)^2 + (bias)^2 $. The overall error in the same units as the parameter being estimated is the Root Mean Square Error, $ RMSE=\sqrt{MSE} $. The overall error of a biased estimator is often smaller than that of an unbiased estimator, as illustrated in the figure in the Overview section, in which case the biased estimator is superior to the unbiased one.
  • Which is better?

Choosing between tests or models on the basis of bias and precision is really a trade-off between the two, and the choice should be driven by the objective of the analysis. An estimator that is both unbiased and precise is, of course, the best choice. Choosing between an unbiased but imprecise estimator and a biased but precise one requires more care. If the aim is an estimator that is, on average, closer to the true value, then unbiasedness should weigh more heavily than precision. If precision and small variation are the top priority, then the biased but precise estimator is preferable. This assumes, however, that the two estimators differ by comparable amounts on each criterion.
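
As a concrete illustration of the trade-off, the following Python sketch (an illustrative NumPy simulation; the normal population, true variance of 4, and sample size of 10 are arbitrary choices, not taken from this page) compares the unbiased sample variance (denominator n-1) with the biased maximum-likelihood version (denominator n), reporting the empirical bias, precision (standard deviation of the estimator), and RMSE of each.

 import numpy as np

 rng = np.random.default_rng(1)

 true_var = 4.0            # population variance (sigma^2); arbitrary choice
 n, reps = 10, 100_000     # small samples, many replications

 samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))

 # Unbiased estimator (divide by n-1) versus biased MLE (divide by n)
 estimators = {
     "unbiased (n-1)": samples.var(axis=1, ddof=1),
     "biased MLE (n)": samples.var(axis=1, ddof=0),
 }

 for name, est in estimators.items():
     bias = est.mean() - true_var                     # average error
     precision = est.std()                            # spread of the estimator
     rmse = np.sqrt(np.mean((est - true_var) ** 2))   # overall error
     print(f"{name:15s} bias={bias:+.3f}  precision={precision:.3f}  RMSE={rmse:.3f}")
     # Empirical check of the decomposition MSE = (precision)^2 + (bias)^2
     assert abs(rmse**2 - (precision**2 + bias**2)) < 1e-8

In a run of this sketch the biased estimator should show the smaller RMSE even though it loses on bias, illustrating the case described above in which a biased estimator is preferable.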

Applications

Application I: Beta Estimate Experiment

  • SOCR Activities
    • Description: using the properties of the beta distribution, the Beta Estimate Experiment illustrates the effects of bias and precision in parameter estimation. The beta distribution is a family of continuous probability distributions on the interval (0, 1) indexed by two shape parameters; in this experiment the second (right) parameter is fixed at 1, and the first (left) parameter $a$ is to be estimated. The experiment generates a random sample $ X_1,X_2,\ldots,X_n $ of size $n$ from the beta distribution. On each update, the distribution density is shown in blue and the sample density is shown in red in the graph. Below the graph, the following statistics are recorded: $ U=\frac{M}{1-M} $, where $ M=\frac{X_1+X_2+\cdots+X_n}{n} $ is the sample mean, and $ V=\frac{-n}{\ln(X_1)+\ln(X_2)+\cdots+\ln(X_n)} $.

In the second table, the empirical bias and mean square error of each estimator are recorded as the experiment continues to run. The statistics U and V are point estimators: functions of the sample data used to estimate the unknown population parameter $a$. The value obtained when such a function is applied to an actual data set is referred to as an estimate. The parameters $a$ and $n$ can be varied with scroll bars. A code sketch of this experiment appears at the end of this application.

  • Goal: to provide an accessible simulation for exploring the beta distribution and point estimators of its parameter. To estimate a parameter of interest, a point estimator of the parameter is calculated from a random sample drawn from the population; its variability is then calculated and associated with the parameter of interest.
  • Experiment: the article provides specific steps for running the Beta Estimate Experiment with the SOCR experiment tool
  • SOCR Experiments
  • The Beta Estimate Experiment illustrates bias and precision when sampling from a large population with a varying parameter. The following is an example of using this simulation:

Students want to know the probability of being randomly selected by the professor in the lecture hall. With the initial value of $a$, the experiment may represent an equal probability of selecting any student in the lecture hall, but with a large value of $a$ the experiment shows a bias in which students sitting in the first three rows have a higher chance of being selected.
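
Below is a minimal Python re-implementation of the core of the Beta Estimate Experiment (a sketch under the assumption that sampling is from Beta(a, 1), as described above; the values of a, n, and the number of runs are arbitrary, and the code is not the SOCR applet itself). It computes the two point estimators U and V together with their empirical bias and mean square error.

 import numpy as np

 rng = np.random.default_rng(2)

 a, n, runs = 3.0, 20, 50_000      # true left parameter, sample size, replications

 # Draw `runs` independent samples of size n from Beta(a, 1)
 x = rng.beta(a, 1.0, size=(runs, n))

 # Moment estimator: U = M / (1 - M), where M is the sample mean
 m = x.mean(axis=1)
 u = m / (1.0 - m)

 # Maximum-likelihood estimator: V = -n / (ln X_1 + ... + ln X_n)
 v = -n / np.log(x).sum(axis=1)

 for name, est in [("U (moments)", u), ("V (MLE)", v)]:
     bias = est.mean() - a
     mse = np.mean((est - a) ** 2)
     print(f"{name:12s} empirical bias = {bias:+.4f}, empirical MSE = {mse:.4f}")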

Application II: Uniform e-Estimate Experiment

The Uniform e-Estimate Experiment generates a random sample $ X_1,X_2,\ldots,X_n $ of size $ n $ from the uniform distribution on (0,1). The distribution density is shown in blue in the graph and, on each update, the sample density is shown in red. On each update, the following statistic is recorded: $ U=\min\{n : S_n=X_1+X_2+\cdots+X_n>1\} $, i.e., $ U=\arg\min_n (X_1+X_2+\cdots+X_n>1) $; note that all $ X_i>0 $, so such an $n$ exists with probability 1.

  • Goal: The purpose of the Uniform e-Estimate Experiment is to provide an interactive computer demonstration of a simple stochastic simulation for estimating the natural constant $e$. If $U$ is the minimum $n$ for which the sum $ S=X_1+X_2+\cdots+X_n>1 $, then the expected value of $U$, $E(U)$, equals the natural constant $ e \approx 2.71828\ldots $.
  • Application: Estimation of the constant $ e $ is important in many scientific and technological developments and studies. There are deterministic algorithms as well as stochastic methods for estimating the value of $ e $; many of these provide accuracy of up to 10 billion decimal places.

This experiment demonstrates an easy-to-understand protocol for stochastic estimation of $ e $. The algorithm may be significantly improved in terms of both speed of convergence and accuracy relative to the sample size $ (n) $; however, the emphasis in this experiment is on simplicity and on simulating a transcendental number in real time using basic tools (sampling from the uniform distribution).
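
As a rough Python analogue of this protocol (a sketch only, not the SOCR applet; the number of runs is arbitrary), each run counts how many Uniform(0,1) draws are needed for the running sum to exceed 1, and the average of these counts estimates e:

 import numpy as np

 rng = np.random.default_rng(3)

 def draws_until_sum_exceeds_one(rng):
     """Return U = the smallest n with X_1 + ... + X_n > 1, X_i ~ Uniform(0,1)."""
     total, count = 0.0, 0
     while total <= 1.0:
         total += rng.random()
         count += 1
     return count

 runs = 200_000                      # arbitrary number of replications
 u = np.array([draws_until_sum_exceeds_one(rng) for _ in range(runs)])

 print("estimate of e :", u.mean())                       # E(U) = e ~ 2.71828
 print("standard error:", u.std(ddof=1) / np.sqrt(runs))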

  • This article compared bias and precision statistics in regression analysis when measurement techniques are compared, and examined the inconsistencies that occur in reporting the results of this form of analysis for cardiac output measurement. It performed a MEDLINE search dating from 1986 and surveyed studies comparing techniques of cardiac output measurement using bias and precision statistics. The paper constructed an error-gram from the percentage error in the test and reference methods and used it to determine acceptable limits of agreement between methods. It concluded that when bias and precision statistics are used, cardiac output, bias, limits of agreement, and percentage error should all be presented, and argued that acceptance of a new technique should rely on limits of agreement of up to ±30% against current reference methods.
  • This article reported a randomized controlled trial investigating the effects of variations in the orientation and type of scale on bias and precision in cross-sectional and longitudinal analyses. It analyzed differences between scales by comparing variances (Levene's test) and means (variance-covariance analysis for repeated measures) and showed that scale characteristics influence the proportion of zero and low values (floor effect), but not mean scores. It concluded that the characteristics of a VAS (visual analogue scale) appear to be important in cross-sectional studies, particularly when symptoms of low or high intensity are being measured, and that researchers should try to reach a consensus on what type of VAS to use if studies are to be compared.

Software

Problems

Selection bias can occur in a case-control study when controls are _______ to be included in the study if they have been exposed.

(a) more likely
(b) less likely
(c) equally likely
(d) both A and B
(e) all of the above


Which of the following are ways to minimize selection bias in the design of a study?

(a) Utilize population lists that are as inclusive as possible
(b) obtain only convenient participant records
(c) use separate criteria for the selection of cases and controls
(d) all of the above


Parents of children who were born with birth defects may be more likely to remember any drugs or exposures that occurred during pregnancy than parents of children born without birth defects. This is an example of what type of bias?

(a) interviewer bias
(b) recall bias
(c) loss to follow up
(d) non-differential misclassification


The true odds ratio of a study was calculated from the table below to be 2.33.

TRUTH
              Case   Control   Total
Exposed         25        15      40
Unexposed       25        35      60
Total           50        50     100

According to the table below, in what direction is the bias in the observed results? (Hint: calculate the OR.)

Observed
              Case   Control   Total
Exposed         42        25      67
Unexposed        8        25      33
Total           50        50     100
(a) bias away from the null
(b) bias towards the null
(c) unbiased
(d) cannot be determined from the information above


Individuals who are exposed are more likely to be lost to follow-up and to have their outcomes go unobserved. This is an example of selection bias.

(a) True
(b) False


Prove that sample variance is an unbiased estimator of the population variance.


References



