AP Statistics Curriculum 2007 Bayesian Prelim
Bayes Theorem
Bayes Theorem, or "Bayes Rule", can be stated succinctly by the equality
\(P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}\)
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. If \(X\) and \(Y\) are random variables and \(f(\cdot)\) is a density, then we can say
\(f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }\)
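The density version can be checked numerically. The sketch below uses an assumed normal-normal pair that is not part of this page: \(Y \sim N(0,1)\) and \(X|Y=y \sim N(y,1)\), for which the marginal is \(X \sim N(0,2)\) and the conditional \(Y|X=x \sim N(x/2, 1/2)\) is known in closed form.

```python
# A minimal numerical check of f(Y|X) = f(X|Y) f(Y) / f(X), under assumed
# distributions: Y ~ N(0,1) and X|Y=y ~ N(y,1), so that X ~ N(0,2) and
# Y|X=x ~ N(x/2, 1/2) exactly.
import numpy as np
from scipy.stats import norm

x0 = 1.2                        # an observed value of X
y = np.linspace(-5, 5, 2001)    # grid of Y values

numerator = norm.pdf(x0, loc=y, scale=1.0) * norm.pdf(y, loc=0.0, scale=1.0)
f_x0 = norm.pdf(x0, loc=0.0, scale=np.sqrt(2.0))     # marginal density f(x0)

f_y_given_x = numerator / f_x0                       # Bayes Theorem on the grid
exact = norm.pdf(y, loc=x0 / 2, scale=np.sqrt(0.5))  # closed-form conditional

print(np.max(np.abs(f_y_given_x - exact)))           # agrees up to rounding error
```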
What is commonly called Bayesian Statistics is a very special application of Bayes Theorem.
We will examine a number of examples in this chapter, but to illustrate generally, imagine that \(\mathbf{x}\) is a fixed collection of data realized under some known density, \(f(\cdot)\), that takes a parameter, \(\mu\), whose value is not known with certainty.
Using Bayes Theorem we may write
\(f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }\)
In this formulation, we solve for \(f(\mu|\mathbf{x})\), the "posterior" density of the population parameter, \(\mu\).
For this we use the likelihood function of our data given our parameter, \(f(\mathbf{x}|\mu)\), and, importantly, a density \(f(\mu)\) that describes our "prior" belief about \(\mu\).
Since \(\mathbf{x}\) is fixed, \(f(\mathbf{x})\) is a fixed number, a "normalizing constant" that ensures the posterior density integrates to one.
\(f(\mathbf{x}) = \int_{\mu} f(\mu, \mathbf{x}) \, d\mu = \int_{\mu} f(\mathbf{x} | \mu) \, f(\mu) \, d\mu\)
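To make the posterior calculation concrete, the sketch below evaluates \(f(\mu|\mathbf{x})\) on a grid under assumptions that are not part of this page: the data are Bernoulli(\(\mu\)) and the prior on \(\mu\) is Beta(2, 2). Because this pair is conjugate, the exact posterior Beta(2 + k, 2 + n - k) is available as a check.

```python
# A sketch of f(mu|x) = f(x|mu) f(mu) / f(x) computed on a grid, under assumed
# ingredients: Bernoulli(mu) data and a Beta(2, 2) prior on mu.
import numpy as np
from scipy.stats import beta

x = np.array([1, 0, 1, 1, 0, 1, 1, 1])    # hypothetical observed data
n, k = len(x), int(x.sum())

mu = np.linspace(0.001, 0.999, 999)       # grid over the parameter
prior = beta.pdf(mu, 2, 2)                # f(mu): the prior density
likelihood = mu**k * (1 - mu)**(n - k)    # f(x|mu): the Bernoulli likelihood

# The normalizing constant f(x) = integral of f(x|mu) f(mu) dmu,
# approximated by a Riemann sum over the grid.
f_x = np.sum(likelihood * prior) * (mu[1] - mu[0])

posterior = likelihood * prior / f_x       # f(mu|x) via Bayes Theorem
exact = beta.pdf(mu, 2 + k, 2 + n - k)     # conjugate closed-form posterior

print(np.max(np.abs(posterior - exact)))   # small numerical error
```

The grid approximation works because \(f(\mathbf{x})\) is just a number: approximating the integral simply rescales the unnormalized product \(f(\mathbf{x}|\mu) f(\mu)\) so that the resulting posterior density integrates to one.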