AP Statistics Curriculum 2007 Bayesian Other


Probability and Statistics Ebook - Bayesian Inference for the Binomial and Poisson Distributions

The parameter of interest in this section is the probability P of success in a sequence of trials, each of which results in either success or failure, where the trials are independent of one another and share the same probability of success. Suppose there are n such trials and you observe x successes, so that x follows a binomial distribution of index n and parameter P: \[x \sim B(n,P)\]

We can show that \[p(x|P) = {n \choose x} P^x (1 - P)^{n - x}, \qquad x = 0, 1, \ldots, n.\]

As a function of P, p(x|P) is proportional to \(P^x (1 - P)^{n - x}\).
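To make the likelihood concrete, the following short Python sketch evaluates \(p(x|P)\) over a few candidate values of P (the values n = 10 and x = 7 are hypothetical, chosen only for illustration):

from math import comb

# Evaluate the binomial likelihood p(x|P) = C(n, x) * P^x * (1 - P)^(n - x)
# for a hypothetical data set of x = 7 successes in n = 10 trials.
n, x = 10, 7
for P in (0.3, 0.5, 0.7, 0.9):
    likelihood = comb(n, x) * P**x * (1 - P)**(n - x)
    print(f"P = {P:.1f}  ->  p(x|P) = {likelihood:.4f}")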

If the prior density has the form \[p(P) \propto P^{\alpha - 1} (1-P)^{\beta - 1}, \qquad 0 \le P \le 1,\]

then P follows a Beta distribution, \[P \sim \beta(\alpha,\beta)\].

From this we can obtain the posterior, which has the form \[p(P|x) \propto P^{\alpha + x - 1} (1-P)^{\beta + n - x - 1}\].

That is, the posterior distribution of P under the Binomial model is \[ (P|x) \sim \beta(\alpha+x,\beta+n-x)\].
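As a minimal sketch (not part of the SOCR materials), this conjugate update can be carried out directly in Python; the prior parameters and the data below are hypothetical:

# Beta-Binomial conjugate update: prior Beta(alpha, beta), data x successes in n trials.
alpha, beta = 2.0, 2.0     # hypothetical prior: P ~ Beta(2, 2)
n, x = 10, 7               # hypothetical data: 7 successes in 10 trials

post_alpha = alpha + x                  # posterior: P | x ~ Beta(alpha + x, beta + n - x)
post_beta = beta + n - x
post_mean = post_alpha / (post_alpha + post_beta)   # mean of Beta(a, b) is a / (a + b)

print(f"Posterior: Beta({post_alpha:.0f}, {post_beta:.0f}), mean = {post_mean:.3f}")

The posterior mean \((\alpha + x)/(\alpha + \beta + n)\) lies between the prior mean \(\alpha/(\alpha + \beta)\) and the sample proportion x/n.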

Bayesian Inference for the Poisson Distribution

A discrete random variable x is said to have a Poisson distribution with mean \(\lambda\) if it has the probability mass function \[P(x|\lambda) = {\lambda^x e^{-\lambda}\over x!}, \qquad x = 0, 1, 2, \ldots\]

Suppose that you have n observations \(x=(x_1, x_2, \cdots, x_n)\) from such a distribution, so that the likelihood is \[L(\lambda|x) \propto \lambda^T e^{-n \lambda},\] where \(T = \sum_{i=1}^{n}{x_i}\).
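The following Python sketch evaluates this likelihood for a hypothetical sample (the observations and the candidate values of \(\lambda\) are chosen only for illustration; the factor \(1/\prod_i{x_i!}\) dropped by the proportionality does not affect the Bayesian updating):

from math import exp, factorial, prod

xs = [3, 0, 2, 4, 1]        # hypothetical Poisson observations
n, T = len(xs), sum(xs)     # T = sum of the observations

for lam in (1.0, 2.0, 3.0):
    kernel = lam**T * exp(-n * lam)                       # likelihood kernel lambda^T e^(-n lambda)
    likelihood = kernel / prod(factorial(x) for x in xs)  # full likelihood including 1 / prod(x_i!)
    print(f"lambda = {lam:.1f}  kernel = {kernel:.4g}  likelihood = {likelihood:.4g}")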

In Bayesian inference, the conjugate prior for the parameter \(\lambda\) of the Poisson distribution is the Gamma distribution.

\[\lambda \sim \Gamma(\alpha, \beta)\].

The Poisson parameter \(\lambda\) is distributed according to the Gamma density g, parametrized by a shape parameter \(\alpha\) and an inverse scale (rate) parameter \(\beta\):

\[g(\lambda|\alpha, \beta) = \displaystyle\frac{\beta^\alpha}{\Gamma(\alpha)}\lambda^{\alpha - 1} e^{-\beta \lambda}, \qquad \lambda > 0.\]
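As a quick check, this density can be evaluated directly; the following Python sketch (with hypothetical parameter values) implements \(g(\lambda|\alpha, \beta)\) as written:

from math import exp, gamma as gamma_fn

def gamma_pdf(lam, alpha, beta):
    # Gamma density with shape alpha and rate (inverse scale) beta, for lam > 0.
    return beta**alpha / gamma_fn(alpha) * lam**(alpha - 1) * exp(-beta * lam)

print(gamma_pdf(2.0, alpha=3.0, beta=1.5))   # hypothetical values, for illustration only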

Then, given the same sample \(x=(x_1, x_2, \cdots, x_n)\) as in the likelihood above and a \(\Gamma(\alpha, \beta)\) prior, the posterior distribution becomes \[\lambda|x \sim \Gamma \left(\alpha + \displaystyle\sum_{i=1}^{n} x_i,\; \beta +n\right)\].
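A minimal Python sketch of this Gamma-Poisson update, using the same hypothetical observations as above and a hypothetical \(\Gamma(2, 1)\) prior:

alpha, beta = 2.0, 1.0      # hypothetical prior: lambda ~ Gamma(2, 1)
xs = [3, 0, 2, 4, 1]        # hypothetical Poisson observations
n, T = len(xs), sum(xs)

post_alpha = alpha + T      # posterior: lambda | x ~ Gamma(alpha + T, beta + n)
post_beta = beta + n
post_mean = post_alpha / post_beta   # mean of a Gamma(a, b) with rate b is a / b

print(f"Posterior: Gamma({post_alpha:.0f}, {post_beta:.0f}), mean = {post_mean:.3f}")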

The posterior mean \(E[\lambda]\) approaches the maximum likelihood estimate in the limit as \(\alpha\) and \(\beta\) approach 0.
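Concretely, since \(\lambda|x \sim \Gamma(\alpha + T, \beta + n)\) with \(T = \sum_{i=1}^{n}{x_i}\), the posterior mean is \[E[\lambda|x] = \frac{\alpha + T}{\beta + n},\] which tends to the maximum likelihood estimate \(\hat{\lambda} = T/n = \bar{x}\) as \(\alpha\) and \(\beta\) approach 0.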

See also

References

* SOCR Home page: http://www.socr.ucla.edu

"-----

