==[[EBook | Probability and Statistics Ebook]] - Bayes Theorem==
  
===Introduction===
Bayes Theorem, or "Bayes Rule", can be stated succinctly by the equality
  
: <math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math>
  
 
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred, times the probability of event A occurring, divided by the probability that event B occurs."
  
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. Let <math>f(\star)</math> denote the density (or, in some cases, the likelihood) defined by the random process <math>\star</math>. If <math>X</math> and <math>Y</math> are random variables, we can say
  
: <math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math>
===Example===
Suppose a laboratory blood test is used as evidence for a disease. Assume P(positive Test | Disease) = 0.95, P(positive Test | no Disease) = 0.01, and P(Disease) = 0.005. Find P(Disease | positive Test).

Denote D = {the test person has the disease}, <math>D^c</math> = {the test person does not have the disease}, and T = {the test result is positive}. Then
<center><math>P(D | T) = {P(T | D) P(D) \over P(T)} = {P(T | D) P(D) \over P(T|D)P(D) + P(T|D^c)P(D^c)} = {0.95\times 0.005 \over {0.95\times 0.005 + 0.01\times 0.995}} = 0.3231293.</math></center>
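This arithmetic is easy to check numerically. Below is a minimal Python sketch (an illustration added here, not part of the original SOCR example; the variable names are ours) that applies the law of total probability and then Bayes Theorem to the numbers above.

<pre>
# Values from the example above
p_t_given_d  = 0.95   # P(T | D): positive test given disease
p_t_given_dc = 0.01   # P(T | D^c): positive test given no disease
p_d          = 0.005  # P(D): disease prevalence

# Law of total probability: P(T) = P(T|D)P(D) + P(T|D^c)P(D^c)
p_t = p_t_given_d * p_d + p_t_given_dc * (1 - p_d)

# Bayes Theorem: P(D|T) = P(T|D)P(D) / P(T)
p_d_given_t = p_t_given_d * p_d / p_t
print(round(p_d_given_t, 7))  # 0.3231293
</pre>

Note that even with a 95% sensitive test, the low prevalence of the disease keeps the posterior probability of disease at only about 32%.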
  
===Bayesian Statistics===
 
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.
  
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from some known density, <math>f(X)</math>, that takes a parameter, <math>\mu</math>, whose value is not known with certainty.
  
 
Using Bayes Theorem we may write
  
: <math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math>
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.

For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu)</math>, and, importantly, a density <math>f(\mu)</math> that describes our "prior" belief in <math>\mu</math>.

Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" that ensures the posterior density integrates to one:
  
: <math>f(\mathbf{x}) = \int_{\mu} f( \mathbf{x} \cap \mu) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math>
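To make the role of the normalizing constant concrete, here is a minimal Python sketch of a grid approximation. Everything in it is assumed for illustration: the data values, the N(<math>\mu</math>, 1) likelihood, and the N(0, 2<sup>2</sup>) prior are hypothetical choices, not part of the SOCR text. The snippet approximates <math>f(\mathbf{x})</math> by summing <math>f(\mathbf{x}|\mu)f(\mu)</math> over a grid of <math>\mu</math> values, then normalizes to obtain the posterior density.

<pre>
import numpy as np
from scipy.stats import norm

# Hypothetical setup: each observation x_i ~ N(mu, 1), with prior mu ~ N(0, 2^2)
x = np.array([1.2, 0.7, 1.9, 1.4])      # fixed, observed data (made-up values)
mu_grid = np.linspace(-5.0, 5.0, 2001)  # grid of candidate values for mu
d_mu = mu_grid[1] - mu_grid[0]          # grid spacing

# Likelihood f(x | mu) at each grid point: product of the N(mu, 1) densities
lik = np.array([norm.pdf(x, loc=m, scale=1.0).prod() for m in mu_grid])

# Prior density f(mu)
prior = norm.pdf(mu_grid, loc=0.0, scale=2.0)

# Normalizing constant f(x) = integral of f(x | mu) f(mu) d mu (Riemann sum)
f_x = np.sum(lik * prior) * d_mu

# Posterior density f(mu | x) = f(x | mu) f(mu) / f(x)
posterior = lik * prior / f_x

print(np.sum(posterior) * d_mu)       # ~1.0: the posterior integrates to one
print(mu_grid[np.argmax(posterior)])  # mode lies between prior mean 0 and sample mean 1.3
</pre>

For this conjugate normal-normal setup the posterior is also available in closed form, so the grid result can be checked analytically; the grid approach is shown only because it mirrors the integral above.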
  
==See also==
* [[EBook#Chapter_III:_Probability |Probability Chapter]]
  
==References==
  
<hr>
* SOCR Home page: http://www.socr.ucla.edu
  
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=AP_Statistics_Curriculum_2007_Bayesian_Prelim}}
