
Scientific Methods for Health Sciences - Probability Distributions

IV. HS 850: Fundamentals

Distributions

1) Overview: Distributions are the fundamental basis of probability theory. There are two types of processes that we observe in nature: discrete and continuous. The type of distribution depends on the type of data; namely, a discrete distribution describes a discrete variable and a continuous distribution describes a continuous variable. This section introduces various kinds of discrete and continuous distributions and the relationships between them.

  • Discrete distribution: Bernoulli distribution, Binomial distribution, Multinomial distribution, Geometric distribution, Hypergeometric distribution, Negative binomial distribution, Negative multinomial distribution, Poisson distribution.
  • Continuous distribution: Normal distribution, Multivariate normal distribution.


2) Motivation:

We have talked about different types of data and the fundamentals of probability theory. In order to capture and estimate the patterns of data, we introduced the concept of a distribution. A probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment. It can be either univariate or multivariate. A univariate distribution gives the probability of a single random variable, while a multivariate distribution (a joint probability distribution) gives the probability of a random vector, which is a set of two or more random variables taking on various combinations of values. Consider the coin-tossing experiment: what would be the distribution of the outcome?


3) Theory

3.1) Random variables: a random variable is a function or a mapping from a sample space into the real numbers (most of the time). In other words, a random variable assigns real values to outcomes of experiments.


3.2) Probability density / mass and (cumulative) distribution functions: The probability density function (pdf) of a continuous random variable, or the probability mass function (pmf) of a discrete one, is the function defined by the probability of the subset of the sample space \(\{s\in S\}\subset S\): \(p(x)=P(\{s\in S \mid X(s)=x\})\), for all \(x\). The cumulative distribution function (cdf) \(F(x)\) of any random variable \(X\) with probability mass or density function \(p(x)\) is defined as the total probability of all \(\{s\in S\}\subset S\) where \(X(s)\le x\): \(F(x)=P(X\le x)\).
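For illustration, a short Python sketch (the fair six-sided die is an assumed example) showing that the cdf of a discrete random variable is the running total of its pmf:

 # pmf and cdf of a fair six-sided die: F(x) = P(X <= x) sums p(v) over v <= x.
 pmf = {x: 1/6 for x in range(1, 7)}                                # p(x) = P(X = x)
 cdf = {x: sum(pmf[v] for v in pmf if v <= x) for x in range(1, 7)} # F(x) = P(X <= x)
 print(cdf[3])                                                      # P(X <= 3) = 0.5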


3.3) Introduction to expectation and variance.

  • Expectation: The expected value (expectation, or mean) of a discrete random variable X is defined as \(E[X]=\sum_x x\,P(X=x)\); the expectation of a continuous random variable Y is defined as \(E[Y]=\int y\,P(y)\,dy\), where the integral is over the domain of Y and \(P(y)\) is the probability density function of Y. An important property of expectation is \(E[aX+bY]=aE[X]+bE[Y]\).
  • Variance: The variance of a discrete random variable X is defined as \(VAR[X]=\sum_x (x-E[X])^2 P(X=x)\); the variance of a continuous random variable Y is defined as \(VAR[Y]=\int (y-E[Y])^2 P(y)\,dy\), where the integral is over the domain of Y and \(P(y)\) is the probability density function of Y. \(VAR[aX]=a^2\,VAR[X]\) and \(VAR[X+Y]=VAR[X]+VAR[Y]+2\,COV(X,Y)\).
  • Covariance: \(COV(X,Y)=E[(X-E[X])(Y-E[Y])]\). These identities are checked numerically in the sketch below.
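A minimal Python simulation sketch of these identities (the normal variables, the seed, and the coefficients a and b below are arbitrary assumed choices):

 import numpy as np
 # Check E[aX+bY] = aE[X] + bE[Y] and the variance-sum rule by simulation.
 rng = np.random.default_rng(0)
 X = rng.normal(2.0, 1.0, size=100_000)            # arbitrary example RV
 Y = 0.5 * X + rng.normal(0.0, 1.0, size=100_000)  # correlated with X
 a, b = 3.0, -2.0
 # Linearity of expectation: the two printed values agree up to sampling noise.
 print(np.mean(a * X + b * Y), a * np.mean(X) + b * np.mean(Y))
 # VAR[X+Y] = VAR[X] + VAR[Y] + 2*COV(X,Y).
 print(np.var(X + Y), np.var(X) + np.var(Y) + 2 * np.cov(X, Y, ddof=0)[0, 1])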


3.4) Bernoulli distribution: A Bernoulli trial is an experiment whose dichotomous outcomes are random (e.g., 'head' vs. 'tail'). \(X(s)=\begin{cases}1, & s=\text{head}\\ 0, & s=\text{tail}\end{cases}\) If \(p=P(head)\), then \(E[X]=p\), \(VAR[X]=p(1-p)\).
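For illustration, a short Python sketch of the Bernoulli pmf and moments (the value p = 0.3 is an assumed example):

 from scipy.stats import bernoulli
 p = 0.3                          # assumed P(head)
 X = bernoulli(p)
 print(X.pmf(0), X.pmf(1))        # P(X=0) = 1-p, P(X=1) = p
 print(X.mean(), X.var())         # E[X] = p, VAR[X] = p(1-p)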


3.5) Binomial distribution: Suppose we conduct an experiment observing an n-trial Bernoulli process. If we are interested in the RV X = {number of heads in the n trials}, then X is called a Binomial RV and its distribution is called the Binomial distribution, \(X\sim B(n,p)\), where n is the sample size and p is the probability of a head on one trial. \(P(X=x)=\binom{n}{x}p^x(1-p)^{n-x}\), for \(x=0,1,\dots,n\), where \(\binom{n}{x}=\frac{n!}{x!(n-x)!}\) is the binomial coefficient.

\(E[X]=np\), \(VAR[X]=np(1-p)\)
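For illustration, a short Python sketch (n = 10 fair-coin tosses is an assumed example) comparing scipy's binomial pmf with the formula above:

 from math import comb
 from scipy.stats import binom
 n, p = 10, 0.5                                 # assumed: 10 fair-coin tosses
 print(binom.pmf(4, n, p))                      # P(X = 4) from scipy
 print(comb(n, 4) * p**4 * (1 - p)**(n - 4))    # same value from the formula
 print(binom.mean(n, p), binom.var(n, p))       # np = 5.0, np(1-p) = 2.5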


3.6) Multinomial distribution: an extension of the binomial distribution. The experiment consists of n repeated trials, each with k possible outcomes; on any given trial, the probability that a particular outcome will occur is constant; the trials are independent. \(p=P(X_1=r_1\cap\cdots\cap X_k=r_k\mid r_1+\cdots+r_k=n)=\binom{n}{r_1,\dots,r_k}p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}\), \(\forall\, r_1+\cdots+r_k=n\), where \(\binom{n}{r_1,\dots,r_k}=\frac{n!}{r_1!\cdots r_k!}\).
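For illustration, a short Python sketch of the multinomial pmf (the counts and probabilities below are assumed example values):

 from scipy.stats import multinomial
 # Assumed example: n = 6 independent trials, k = 3 outcome categories.
 n = 6
 probs = [0.5, 0.3, 0.2]           # p_1, p_2, p_3 (sum to 1)
 counts = [3, 2, 1]                # r_1 + r_2 + r_3 = n
 print(multinomial.pmf(counts, n=n, p=probs))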


3.7) Geometric distribution: the probability distribution of the number X of Bernoulli trials needed to get one success, supported on the set {1, 2, 3, …}. \(P(X=x)=(1-p)^{x-1}p\), for \(x=1,2,\dots\)

\(E[X]=1/p\), \(VAR[X]=\frac{1-p}{p^2}\)
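For illustration, a short Python sketch of the geometric pmf and moments (p = 0.25 is an assumed example value):

 from scipy.stats import geom
 p = 0.25                            # assumed success probability
 print(geom.pmf(3, p))               # P(X = 3) = (1-p)^2 * p = 0.140625
 print(geom.mean(p), geom.var(p))    # 1/p = 4.0, (1-p)/p^2 = 12.0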


3.8) Hypergeometric distribution: a discrete probability distribution that describes the number of successes in a sequence of n draws from a finite population without replacement. An experimental design for the Hypergeometric distribution is illustrated in this table: a shipment of N objects in which m are defective. The Hypergeometric distribution describes the probability that in a sample of n distinct objects drawn from the shipment exactly k objects are defective.

  Type           Drawn   Not Drawn   Total
  Defective      k       m-k         m
  Non-defective  n-k     N+k-n-m     N-m
  Total          n       N-n         N


\(P(X=k)=\frac{\binom{m}{k}\binom{N-m}{n-k}}{\binom{N}{n}}\), \(E[X]=\frac{nm}{N}\), \(VAR[X]=\frac{nm}{N}\cdot\frac{(1-m/N)(N-n)}{N-1}\).
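For illustration, a short Python sketch (the shipment sizes below are assumed example values); note that scipy.stats.hypergeom orders its arguments differently from the notation above:

 from scipy.stats import hypergeom
 N, m, n = 50, 5, 10      # assumed: shipment size, defectives, sample size
 # scipy's order is (k, population size, #successes in population, #draws).
 print(hypergeom.pmf(1, N, m, n))    # P(exactly 1 defective in the sample)
 print(hypergeom.mean(N, m, n))      # n*m/N = 1.0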


3.9) Negative binomial distribution: Suppose X = trial index (n) of the \(r^{th}\) success, i.e., the total number of trials (n) needed to get r successes. \(P(X=n)=\binom{n-1}{r-1}p^r(1-p)^{n-r}\), for \(n=r,r+1,r+2,\dots\), where n is the trial number of the \(r^{th}\) success.


\(E[X]=r/p\), \(VAR[X]=\frac{r(1-p)}{p^2}\)
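For illustration, a short Python sketch (r = 3 and p = 0.5 are assumed example values); scipy.stats.nbinom counts failures rather than trials, matching the alternate parameterization introduced next, so \(P(X=n)\) is obtained at \(k=n-r\):

 from math import comb
 from scipy.stats import nbinom
 r, p = 3, 0.5            # assumed: wait for r = 3 successes, P(success) = 0.5
 n = 5                    # trial on which the 3rd success occurs
 print(nbinom.pmf(n - r, r, p))                         # scipy: k = n - r failures
 print(comb(n - 1, r - 1) * p**r * (1 - p)**(n - r))    # same value: 0.1875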


Suppose Y = number of failures (k) before the \(r^{th}\) success. \(P(Y=k)=\binom{k+r-1}{k}p^r(1-p)^k\), for \(k=0,1,2,\dots\), where k is the number of failures before the \(r^{th}\) success. \(Y\sim NegBin(r,p)\): the probability of k failures and r successes in \(n=k+r\) Bernoulli(p) trials with success on the last trial.


\(E[Y]=\frac{r(1-p)}{p}\), \(VAR[Y]=\frac{r(1-p)}{p^2}\)


3.10) Negative multinomial distribution (NMD): a generalization of the two-parameter NB(r,p) to more than one outcome. Suppose we have \(m+1\) possible outcomes \(\{X_0,\dots,X_m\}\), where \(m\ge 1\), each with probability \(\{p_0,\dots,p_m\}\) respectively, where \(0<p_i<1\) and \(\sum_{i=0}^{m}p_i=1\). Suppose the experiment generates independent outcomes until \(\{X_0,\dots,X_m\}\) occur exactly \(\{k_0,\dots,k_m\}\) times.