AP Statistics Curriculum 2007 Bayesian Other


Bayesian Inference for the Binomial Distribution

The parameter of interest in this section is the probability P of success in a sequence of trials, each of which results in either success or failure, with the trials independent of one another and having the same probability of success. Suppose that there are n trials and that you observe x successes, so that x comes from a binomial distribution with index n and parameter P:

\(x \sim B(n, P)\)

It follows that \(p(x \mid P) = {n \choose x} P^x (1 - P)^{n - x}\), (x = 0, 1, …, n).

Viewed as a function of P, the likelihood satisfies \(p(x \mid P) \propto P^x (1 - P)^{n - x}\).
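As a quick sanity check, this pmf can be evaluated numerically. The following is a minimal sketch; the values n = 10, x = 7, and P = 0.6 are illustrative assumptions, not taken from the text.

```python
# Evaluate the binomial pmf both by the formula above and via scipy.
from math import comb
from scipy.stats import binom

n, x, P = 10, 7, 0.6                      # illustrative values (assumptions)
manual = comb(n, x) * P**x * (1 - P)**(n - x)
print(manual)                             # ~0.21499
print(binom.pmf(x, n, P))                 # same value from scipy.stats
```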


If the prior density has the form \(p(P) \propto P^{\alpha - 1} (1 - P)^{\beta - 1}\), (P between 0 and 1),

then P follows the beta distribution \(P \sim \beta(\alpha, \beta)\).


Multiplying the prior by the likelihood, we obtain the posterior, which evidently has the form

\(p(P \mid x) \propto P^{\alpha + x - 1} (1 - P)^{\beta + n - x - 1}\)

The posterior distribution is therefore

\((P \mid x) \sim \beta(\alpha + x, \beta + n - x)\)
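To make the update concrete, here is a minimal sketch of the Beta-Binomial conjugate update using scipy; the prior Beta(2, 2) and the data (x = 7 successes in n = 10 trials) are illustrative assumptions, not values from the text.

```python
# Beta-Binomial conjugate update: posterior is Beta(alpha + x, beta + n - x).
from scipy.stats import beta

a, b = 2.0, 2.0                     # prior Beta(alpha, beta) -- assumption
n, x = 10, 7                        # observed successes out of n trials

posterior = beta(a + x, b + n - x)  # Beta(9, 5) for these values
print(posterior.mean())             # (a + x) / (a + b + n) = 9/14 ~ 0.643
print(posterior.interval(0.95))     # central 95% credible interval for P
```

Note that the posterior mean 9/14 sits between the prior mean 1/2 and the sample proportion 7/10, as conjugate updating predicts.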


Bayesian Inference for the Poisson Distribution

A discrete random variable x is said to have a Poisson distribution of mean \(\lambda\) if it has the density


\(p(x \mid \lambda) = \displaystyle\frac{\lambda^x e^{-\lambda}}{x!}\), (x = 0, 1, 2, …)
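Again as a quick numerical check of the density above; \(\lambda\) = 3 and x = 2 below are illustrative assumptions.

```python
# Evaluate the Poisson pmf both by the formula above and via scipy.
from math import exp, factorial
from scipy.stats import poisson

lam, x = 3.0, 2                           # illustrative values (assumptions)
manual = lam**x * exp(-lam) / factorial(x)
print(manual)                             # ~0.22404
print(poisson.pmf(x, lam))                # same value from scipy.stats
```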

Suppose that you have n observations x = (\(x_1, x_2, \ldots, x_n\)) from such a distribution, so that the likelihood is


L(\(\lambda\)|x) = \(\displaystyle\prod_{i=1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!} \propto \lambda^T e^{-n \lambda}\), where T = \(\displaystyle\sum_{i=1}^{n} x_i\)

In Bayesian inference, the conjugate prior for the parameter \(\lambda\) of the Poisson distribution is the Gamma distribution.

\(\lambda \sim\) Gamma(\(\alpha\) , \(\beta\) )


The Poisson parameter \(\lambda\) is distributed according to the Gamma density g, parameterized in terms of a shape parameter \(\alpha\) and an inverse scale (rate) parameter \(\beta\):


g(\(\lambda\)|\(\alpha\) , \(\beta\)) = \(\displaystyle\frac{\beta^\alpha}{\Gamma(\alpha)}\) \(\lambda^{\alpha - 1} e^{-\beta \lambda}\), for \(\lambda > 0\)


Then, given the same sample of n observed values \(x_i\) from the likelihood above and a Gamma(\(\alpha\), \(\beta\)) prior, the posterior distribution becomes


\(\lambda \sim\) Gamma(\(\alpha + \displaystyle\sum_{i=1}^{n} x_i\) , \(\beta\) + n)
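The following minimal sketch carries out this Gamma-Poisson update with scipy; the prior Gamma(3, 1) and the five hypothetical counts are illustrative assumptions. Note that scipy.stats.gamma is parameterized by shape and scale, so the rate \(\beta\) enters as scale = 1/\(\beta\).

```python
# Gamma-Poisson conjugate update: posterior is Gamma(alpha + T, beta + n).
import numpy as np
from scipy.stats import gamma

a, rate = 3.0, 1.0                     # prior Gamma(alpha, beta) -- assumption
xs = np.array([2, 4, 3, 5, 1])         # n = 5 hypothetical Poisson counts
T, n = xs.sum(), len(xs)               # T = 15, n = 5

posterior = gamma(a + T, scale=1.0 / (rate + n))   # Gamma(18, rate 6)
print(posterior.mean())                # (a + T) / (rate + n) = 18/6 = 3.0
```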

The posterior mean is E[\(\lambda\)|x] = \(\displaystyle\frac{\alpha + T}{\beta + n}\), which approaches the maximum likelihood estimate \(\hat{\lambda} = T/n\) in the limit as \(\alpha\) and \(\beta\) approach 0.
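A small numerical illustration of this limit, continuing the hypothetical sample above (T = 15, n = 5):

```python
# As alpha, beta -> 0, the posterior mean (alpha + T) / (beta + n)
# approaches the MLE T / n = 3.0 for the hypothetical sample above.
T, n = 15, 5
for eps in (1.0, 0.1, 0.001):
    a = rate = eps
    print(eps, (a + T) / (rate + n))   # 2.667, 2.961, 2.9996 -> 3.0
```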