Scientific Methods for Health Sciences - Bayesian Inference
Overview
Bayes’ theorem goes beyond connecting conditional and compound probability theory. It provides a method of inference in which Bayes’ rule is used to update the probability estimate for a hypothesis as additional evidence is acquired. It is widely used in a variety of fields, including medicine, law, science, and engineering. In this section, we present a general introduction to Bayesian inference and focus on its applications.
Motivation
We are already familiar with the concepts of frequentist statistics, which draws conclusions and makes statistical inferences from sample data by emphasizing the frequency or proportion of the data. But what if we want to make inferences in a different way, updating our beliefs about the parameters after taking the observed data into account? This leads to the concept of the posterior distribution and the inferences we can make based on it. So, how does Bayesian inference work, and how does it differ from traditional frequentist statistics?
Theory
Review our earlier discussion of Probability Theory.
Conditional Probability
The conditional probability of event $A$ occurring given that event $B$ occurs is $ P(A│B)=\frac{P(A\cap B)}{P(B)} $. If $A$ and $B$ are independent then knowing $B$, or $B^c$, gives no information on the probability of $A$, i.e., $ P(A│B)=P(A) $.
Multiplication rule
For any two events, $A$ and $B$, $ P(A\cap B)=P(A│B)P(B) $. In general, for $n$ events $A_1, ..., A_n$: $ P(A_1 \cap A_2 \cap A_3 \cap … \cap A_n ) =$ $ P(A_1 )P(A_2│A_1)P(A_3│A_1\cap A_2) … P(A_n│A_1\cap A_2\cap A_3\cap … \cap A_{n-1})$.
Law of total probability
If $ \{A_1,…,A_n\} $ forms a partition of the sample space $S$, then for every event $B$, the probability that $B$ occurs can be decomposed as: $$P(B)=P(B│A_1 )P(A_1 )+P(B│A_2 )P(A_2 )+⋯ +P(B│A_n )P(A_n).$$
Inverting the order of conditioning
As event intersection is commutative, $ P(A \cap B) = P(A | B) \times P(B) = P(B | A) \times P(A) $.
Bayesian Rule
Combining the properties above, for any partition $ \{A_1,…,A_n\}$ of the sample space $S$ and any two events $A$ and $B$ (i.e., subsets of $S$) we have: $$ P(A | B) = {P(B | A) P(A) \over P(B)} = {P(B | A) P(A) \over P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \cdots + P(B|A_n)P(A_n)}. $$
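As a small numerical illustration in R (the prevalence, sensitivity and false-positive rate below are hypothetical, chosen only for this sketch): suppose a disease $A$ has prevalence 0.01 and a diagnostic test is positive (event $B$) with probability 0.95 for diseased subjects and 0.05 for non-diseased subjects. The partition here is simply $\{A, A^c\}$.

## Hypothetical diagnostic-test illustration of Bayes' rule with the partition {A, A^c}
p.A    <- 0.01                              # prior P(A): disease prevalence (assumed)
p.B.A  <- 0.95                              # P(B | A): test sensitivity (assumed)
p.B.Ac <- 0.05                              # P(B | A^c): false-positive rate (assumed)
p.B    <- p.B.A * p.A + p.B.Ac * (1 - p.A)  # law of total probability
p.A.B  <- p.B.A * p.A / p.B                 # Bayes' rule: P(A | B)
p.A.B                                       # approximately 0.161

Even with a positive test, the posterior probability of disease remains modest because the prior probability is small.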
Interpretation
To interpret some of the probability concepts above, we can think of $P(A)$ as the prior probability of event $A$, which is the initial degree of belief in $A$ (before seeing any data/evidence). $P(A|B)$ is referred to as the posterior probability of event $A$ given (the evidence) $B$, representing the updated degree of belief in $A$ after observing $B$.
Bayes’ rule tells us that the posterior odds equal the prior odds times the Bayes factor. When arbitrarily many events $A$ are of interest, not just two, the rule can be rephrased as posterior proportional to prior times likelihood, which can be expressed as $P(A|B) ∝ P(A)P(B|A)$, where the proportionality symbol (∝) means that the left-hand side is proportional to the right-hand side as $A$ varies for fixed $B$.
Given events $A_1,A_2$ and $B$, with Bayes’ rule we have that the conditional odds of $A_1:A_2$ given $B$ are equal to the marginal odds of $A_1$ to $A_2$ (i.e., $A_1:A_2$) multiplied by the Bayes factor or likelihood ratio $Λ$. That is, $O(A_1:A_2│B)=Λ(A_1: A_2│B) \times O(A_1:A_2)$, where $Λ(A_1: A_2│B)=\frac{P(B│A_1)}{P(B│A_2)}$. The odds $O(A_1: A_2 )=\frac{P(A_1 )}{P(A_2)}$, and conditional odds $O(A_1: A_2│B)=\frac{P(A_1│B)}{P(A_2│B)}$ are known as prior odds and posterior odds respectively.
For the special case where $A_1=A$ and $A_2=A^c≡A'$, this translates into comparing two competing hypotheses concerning the cause of $B$. Writing $O(A)=O(A:A^c)$ for the odds on $A$, which are by definition the odds for $A$ against $A^c$, Bayes’ rule can then be written as $O(A│B)=O(A)Λ(A│B)$; that is, the posterior odds on $A$ equal the prior odds on $A$ times the likelihood ratio for $A$ given $B$ (posterior odds equal prior odds times likelihood ratio).
Statistical inference
The goal of statistical inference is to learn about the general population by taking a subset from that population. The steps in classical statistical parametric inference are:
- collect a random sample from the population;
- determine a probability model for the data $y_1,...,y_n$ obtained (e.g., if we are collecting data on the heights of US women, each woman’s height can be thought of as a random variable with a normal distribution). This is called the sampling model;
- the sampling model will depend on a set of parameters, $θ$: $p(y_i |θ)$, for $i=1,...,n$;
- using the sampling model, construct the likelihood function $L(θ; y )$;
- determine the maximum likelihood estimate (MLE) of $θ$ by maximizing the likelihood function $L(θ;y)$ (see the sketch below).
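As a minimal sketch of the last two steps, with a simple Bernoulli sampling model and a small hypothetical data set (not the clinical data analyzed later), the log-likelihood can be maximized numerically and the MLE coincides with the sample proportion:

## Hypothetical sketch: MLE of theta for i.i.d. Bernoulli(theta) observations
y <- c(1, 0, 1, 1, 0, 1, 1, 1, 0, 1)                                  # hypothetical binary data
loglik <- function(theta) sum(dbinom(y, size = 1, prob = theta, log = TRUE))
mle <- optimize(loglik, interval = c(0, 1), maximum = TRUE)$maximum
mle                                                                   # numerically close to mean(y) = 0.7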
In the classical or frequentist approach to statistical inference, the parameters of a sampling model are assumed to be constant but unknown.
The Bayesian inference paradigm allows us to incorporate subjective prior beliefs regarding the parameter(s) $θ$ of the sampling model into our learning procedure. In Bayesian statistics, before we observe the data, we have some belief regarding the possible values, or range of values, of the set of parameters $θ$. This knowledge is expressed in the form of a prior distribution. Since we are assigning a distribution to $θ$, we are now assuming that $θ$ is a random variable (univariate if $θ$ is just a single parameter, multivariate if $θ$ consists of more than one parameter).

Since $θ$ is random, there is a sample space associated with $θ$. This parameter space is typically denoted by $Θ$. The parameter space $\{\theta \in \Theta \}$ is the set of all possible values of the parameter.

Once data $y_1,...,y_n$ have been collected, we specify a sampling model for the data $p(y_i│θ)$, $i=1,…,n$, which represents the probability of observing $y_i$ if we knew $θ$. In light of the data observed, using Bayes’ theorem we update our belief about $θ$ and derive the posterior distribution $p(θ│y_1,...,y_n)=p(θ|y)$, where $y=\{y_1,...,y_n\}$.
Advantages of Bayesian methods
- Interpretation: Having a distribution for the unknown parameter(s) $θ$ makes it easier to understand what a point estimate and a confidence interval mean. For example: suppose the parameter of interest is the population mean $θ$. We collect data and build a 95% confidence interval for $θ$, given by $\hat{y} \pm 1.96 \frac{s}{\sqrt{n}}$. After the sample is collected and the interval is created, it either contains $θ$ or it does not. So, 95% is not the probability that $θ$ falls in this particular interval; for a frequentist, it is a long-run coverage rate that describes how the procedure performs over repeated sampling. By contrast, Bayesian confidence intervals, called credible sets, convey the posterior probability that the parameter falls in the interval.
- Likelihood Principle: Bayesian inference obeys the likelihood principle. The likelihood principle states that if two sampling models yield proportional likelihoods for $θ$, then inference about $θ$ should be identical under the two models. Frequentist inference does not always obey the likelihood principle! For example: suppose we observe the clinical outcomes of an experimental treatment applied to 12 patients, and at the end of the trial we observe 3 patients cured and 9 still sick. We want to test the hypothesis that the probability of successful cure is better than 50/50, i.e., $H_o$∶ $θ=1/2$ vs. $H_a$∶ $θ>1/2$. Suppose that we set up two sampling models for these data:
- Model 1: $Y$, the number of successful treatments, modeled as a Binomial(n=12, θ);
- Model 2: $Y$, the number of unsuccessfully treated patients observed before terminating the clinical trial (in our example, the trial stops once 3 patients are cured), modeled as a Negative Binomial(r=3, 1-θ).
- The likelihoods of the two models are proportional in $θ$, but the two sampling models can lead to different p-values (see the R sketch after this list).
- Bayes’ theorem provides a rational method for learning. Suppose you collect data $y_1,...,y_n$ and compute your posterior distribution $p(θ|y_1,...,y_n)$. If you later collect an additional observation $y_{n+1}$, the posterior distribution $p(θ|y_1,...,y_n)$ can be used as the prior for the updated Bayesian analysis.
- Bayesian inference does not require large sample theory. Bayesian methods do not require asymptotics for valid inference. Small sample Bayesian inference proceeds in the same way as if one had a large sample.
- Finally, Bayesian inference often includes frequentist inference as a special case. One can often obtain frequentist answers by choosing a uniform prior for the parameters. In these cases, frequentist answers can be obtained from the posterior distribution, where the MLE is the posterior mode.
- The choice of the prior distribution is a delicate issue and the main criticism of Bayesian inference. Two people analyzing the same data but using different priors could obtain different answers. When carrying out a Bayesian analysis it is important to examine prior sensitivity, that is, to determine how sensitive the results are to the choice of the prior distribution.
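Returning to the likelihood-principle example above (Model 1 vs. Model 2), a quick R check confirms that the two likelihoods are proportional in $θ$; here dnbinom counts the number of unsuccessful treatments observed before the 3rd cure (a sketch only):

## Likelihoods of the two sampling models for the same data: 3 cures, 9 failures
theta <- seq(0.05, 0.95, by = 0.05)
lik.binom  <- dbinom(3, size = 12, prob = theta)    # Model 1: 3 cures out of a fixed n = 12
lik.negbin <- dnbinom(9, size = 3, prob = theta)    # Model 2: 9 failures before the 3rd cure
round(lik.binom / lik.negbin, 6)                    # constant ratio = choose(12,3)/choose(11,9) = 4

Since the ratio does not depend on $θ$, any inference that obeys the likelihood principle (such as the Bayesian posterior) is identical under the two models, whereas frequentist p-values can differ because they also depend on the sampling model.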
Bayes theorem for density functions
- Marginal density $p(y)=\int_{θ∈Θ}{p(y,θ)dθ}=\int_{θ∈Θ}{p(y|θ)p(θ)dθ}$, where $Θ$ is the parameter space. Substituting the expression for $p(y)$, Bayes’ theorem for density functions can be written as $p(θ|y)=\frac{p(y|θ)p(θ)}{p(y)} = \frac{p(y|θ)p(θ)}{\int_{θ∈Θ}{p(y|θ)p(θ)dθ}}$. Given that $p(θ|y)$ is a function of $θ$, the denominator is a constant, and thus the posterior density of $θ$ is often expressed up to a proportionality constant as $p(θ|y)∝p(y|θ)p(θ)$, where the sign "∝" means “proportional to”.
- The proportionality constant in the posterior density $p(θ|y)$ can be determined using the fact that the posterior density $p(θ|y)$ is a PDF. Suppose that we have observed $Y=y$ and we want to evaluate and compare two possible numerical values of $θ$, say $θ_a$ and $θ_b$. The posterior probability (density) of $θ_a$ relative to $θ_b$ is:
$$\frac{p(θ_a│y)}{p(θ_b│y)} = \frac{\frac{p(y|θ_a )p(θ_a)}{p(y)}}{\frac{p(y|θ_b )p(θ_b)}{p(y)}}= \frac{p(y|θ_a )p(θ_a )}{p(y|θ_b )p(θ_b )},$$
- where $\frac{p(θ_a )}{p(θ_b )}$ is the prior odds and $\frac{p(y|θ_a )}{p(y|θ_b )}$ is the likelihood ratio. In comparing the posterior probability of $θ_a$ and $θ_b$, it is not necessary to compute $p(y)$.
Prior Distributions
How do we choose a prior distribution $p(θ)$? There are various types of prior distributions:
- Non-informative priors: a prior distribution $p(θ)$ is non-informative if it has minimal impact on the posterior distribution $p(θ|y)$ of $θ$. In general, a non-informative prior is one which is dominated by the likelihood, that is, a prior that does not change very much over the region in which the likelihood is appreciable and does not assume large values outside that range. A prior with these properties is said to be locally uniform. Non-informative priors are also called reference priors, vague priors or flat priors.
- Examples:
- If $θ ∈ Θ = [0,1]$, then $p(θ) = Uniform(0,1)$ is a non-informative prior and $p(θ) = 1$ for $θ ∈ Θ = [0,1]$.
- If $θ ∈ Θ = R$, then if $p(θ) = Normal(μ_0,σ_0^2)$ with $σ_0^2 \longrightarrow \infty$, we get a non-informative prior. That is, we can pick $σ_0^2$ large enough so that we obtain a non-informative prior.
- Improper priors: a prior distribution $p(θ)$ on $θ ∈ Θ$ is said to be improper if $\int_{θ ∈ Θ}{p(θ)dθ}=∞$. Improper priors are often used in Bayesian inference because they yield non-informative priors. For example: if $θ ∈ Θ=R$ and $p(θ)∝1$, then $p(θ)$ is an improper prior since $\int_{θ ∈ Θ}{p(θ)dθ}=\int_{-\infty}^{+\infty}{dθ}$ diverges. An improper prior $p(θ)$ can result in an improper posterior distribution $p(θ|y)$. One cannot make inference with improper posterior distributions! However, an improper prior $p(θ)$ may still lead to a proper posterior distribution. For an improper flat prior, the posterior distribution $p(θ|y)$ always has a particularly simple form: if $p(θ) = c$ for some $c > 0$, then $p(θ│y )=\frac{p(y │θ)p(θ)}{p(y)} =\frac{c\,p(y │θ)}{\int_{θ∈Θ}{c\,p(y│θ)dθ}} = \frac{1}{K} p(y │θ)$, where $K=\int_{θ∈Θ}{p(y│θ)dθ}$. That is, the posterior distribution $p(θ|y)$ is just the renormalized likelihood.
- One-parameter models: the Binomial model is a natural starting point for Bayesian inference with binary variables. Consider data from a randomized controlled clinical trial evaluating a drug to reperfuse blocked blood vessels in patients with ischemic stroke. We indicate with $Y_i$ whether the $i^{th}$ patient treated with the drug had a favorable outcome (no disability at 90 days). Then the binary outcome variable is defined by:
$$ Y_i = \begin{cases} 1, & \text{if patient i has a favorable outcome} \\ 0, & \text{otherwise} \end{cases} .$$
Demonstration
Suppose that we have data from $n = 129$ patients. Since the size $N$ of the general population is much greater than the size $n$ of the sample, $Y_1,...,Y_n$ can be considered exchangeable random variables. Using De Finetti’s theorem, we can set up the following model for our data: a prior distribution $p(θ)$ for a parameter $θ$ representing the proportion of 1’s in the population $\left( \sum_{i=1}^N{\frac{Y_i}{N}}\right)$; and, given $θ$, the $Y_1,...,Y_n$ are i.i.d. with $p(y│θ)=θ^y (1-θ)^{1-y}$. The sampling model adopted leads to $p(y_1,...,y_n│θ)=p(y_1,...,y_{129}│θ)=\prod_{i=1}^{129}{θ^{y_i}(1-θ)^{1-y_i}}=$ $θ^{\sum_{i=1}^{129}{y_i}}(1-θ)^{129-\sum_{i=1}^{129}{y_i}}$. Now we need to choose a prior distribution for $θ$.
The parameter $θ$ is a random variable with parameter space $Θ=[0,1]$. A first choice would be to use a locally uniform prior, that is, a prior distribution $p(θ)$ that assigns the same probability to all intervals $(a,b) \subseteq [0,1]$ of the same length. This leads to $p(θ) \sim Uniform([0,1])$, i.e., $p(θ) = 1$ for $θ \in [0,1]$. Then, we obtain the following posterior distribution: $p(θ│y_1,...,y_{129})=\frac{p(y_1,...,y_{129}│θ)p(θ)}{p(y_1,...,y_{129})}=$ $p(y_1,...,y_{129}│θ)\times \frac{1}{p(y_1,...,y_{129})} ∝ p(y_1,...,y_{129}│θ)$ since $p(y_1,...,y_{129})$ does not depend on $θ$. Therefore, the posterior distribution $p(θ│y_1,...,y_{129})$ of $θ$ has the same shape as the likelihood $p(y_1,...,y_{129}│θ)$, but they are on a different scale.
If, of the $n=129$ patients receiving the drug, 118 had favorable outcomes and 11 did not, then we have $\sum_{i=1}^{129}{y_i}=118$, so $$p(y_1,...,y_{129}│θ)=θ^{\sum_{i=1}^{129}{y_i}}(1-θ)^{129-\sum_{i=1}^{129}{y_i}}=θ^{118} (1-θ)^{11}.$$
By definition, a random variable $Y$ with sample space $S=[0,1]$ is said to have a $Beta(a,b)$ distribution if its PDF is given by $p(y│a,b)=\frac{Γ(a+b)}{Γ(a)Γ(b)} y^{a-1} (1-y)^{b-1}$, with $a,b>0$. For such a random variable, we have the following properties: $E[Y]=\frac{a}{a+b}$, $Mode[Y]=\frac{a-1}{(a-1)+(b-1)}$ if $a>1$ and $b>1$, and $Var[Y]=\frac{ab}{(a+b+1) (a+b)^2} =\frac{E[Y](1-E[Y])}{a+b+1}$.
To determine the scale for the posterior distribution $p(θ│y_1,...,y_n)$ we use the fact that this is a conditional density and since it is a probability density function, it needs to integrate to 1, that is: $1=\int_{θ\in Θ}{p(θ│y_1,...,y_{129})dθ}=\int_0^1{\frac{θ^{118}(1-θ)^{11}}{p(y_1,...,y_{129})} dθ}$.
The following result from calculus is useful: $\int_0^1{θ^{a-1} (1-θ)^{b-1} dθ}=\frac{Γ(a)Γ(b)}{Γ(a+b)}$, where for any $x>0$ the gamma function $Γ(x)$ can be looked up in a table or computed using SOCR or the R command gamma(). Thus, we have $1=\int_0^1{\frac{θ^{118} (1-θ)^{11}}{p(y_1,...,y_{129})}dθ}=$ $\frac{1}{p(y_1,...,y_{129})} \int_0^1{θ^{118} (1-θ)^{11} dθ}=\frac{1}{p(y_1,...,y_{129})} \frac{Γ(119)Γ(12)}{Γ(131)}$. Hence, we have that $p(θ│y_1,...,y_{129})=\frac{Γ(131)}{Γ(119)Γ(12)} θ^{118} (1-θ)^{11}$. That is, $p(θ│y_1,...,y_{129})$ has a Beta distribution with parameters $a=119$ and $b=12$. Therefore $E[θ│y_1,...,y_{129}]=0.908$, $Mode[θ│y_1,...,y_{129}]=0.915$, and $Var[θ│y_1,...,y_{129}]≈6.3E-4$.
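These posterior summaries can be verified directly in R (a minimal sketch; the parameters $a=119$ and $b=12$ come from the derivation above):

## Posterior of theta given the data: Beta(a = 119, b = 12)
a <- 119; b <- 12
post.mean <- a / (a + b)                           # about 0.908
post.mode <- (a - 1) / (a + b - 2)                 # about 0.915
post.var  <- (a * b) / ((a + b + 1) * (a + b)^2)   # about 6.3e-04
c(mean = post.mean, mode = post.mode, var = post.var)
## The full posterior density can be evaluated and plotted with dbeta(theta, 119, 12)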
- Conclusion: In Bayesian inference, posterior information ≥ prior information ≥ 0, where the second "≥" becomes "=" only if the prior is non-informative.
Binomial model
Suppose the data $Y_1,...,Y_n$ represent exchangeable binary random variables (i.e., a sequence such that future samples behave like earlier samples, meaning formally that any order (of a finite number of samples) is equally likely). For our choice of prior and sampling model, we have obtained $p(θ│y_1,...,y_n) \sim Beta(1+\sum_{i=1}^n{y_i}, 1+n-\sum_{i=1}^n{y_i})$. Now, let’s suppose we want to compare the posterior odds of two values $θ_a$ and $θ_b$ of $θ$ in light of an instance of the data $y_1,...,y_n$.
The odds ratio is: $$ \frac{p(θ_a│y_1,...,y_n )}{p(θ_b│y_1,...,y_n)}=\frac{\frac{Γ(n+2)}{Γ(1+\sum_{i=1}^n{y_i}) Γ(1+n-\sum_{i=1}^n{y_i})} {θ_a}^{\sum_{i=1}^n{y_i}} (1-θ_a )^{n-\sum_{i=1}^n{y_i}}} {\frac{Γ(n+2)}{Γ(1+\sum_{i=1}^n{y_i}) Γ(1+n-\sum_{i=1}^n{y_i})} {θ_b}^{\sum_{i=1}^n{y_i}} (1-θ_b)^{n-\sum_{i=1}^n{y_i}}}.$$
$$ \frac{p(θ_a│y_1,...,y_n )}{p(θ_b│y_1,...,y_n)}=\frac{{θ_a}^{\sum_{i=1}^n{y_i}} (1-θ_a )^{n-\sum_{i=1}^n{y_i}}} { {θ_b}^{\sum_{i=1}^n{y_i}} (1-θ_b)^{n-\sum_{i=1}^n{y_i}}}= \left( \frac{θ_a}{θ_b}\right)^{\sum_{i=1}^n{y_i}} \left( \frac{1-θ_a}{1-θ_b}\right)^{n-\sum_{i=1}^n{y_i}}.$$
Hence, the posterior odds depend on the data only through $\sum_{i=1}^n{y_i}$. In general, this is also true for the posterior distribution itself, since $p(θ│y_1,...,y_n)=Beta(1+\sum_{i=1}^n{y_i} ,1+n-\sum_{i=1}^n{y_i})$. In other words, $\sum_{i=1}^n{y_i}$ is a sufficient statistic.
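A quick numerical check of the posterior-odds formula in R (a sketch using the stroke-trial counts, $n=129$ and $\sum y_i = 118$, and two arbitrarily chosen values $θ_a=0.9$ and $θ_b=0.8$):

## Posterior odds of theta_a vs theta_b under a uniform prior
n <- 129; s <- 118
theta.a <- 0.9; theta.b <- 0.8
odds.dbeta   <- dbeta(theta.a, 1 + s, 1 + n - s) / dbeta(theta.b, 1 + s, 1 + n - s)
odds.formula <- (theta.a / theta.b)^s * ((1 - theta.a) / (1 - theta.b))^(n - s)
c(odds.dbeta, odds.formula)      # the two computations agree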
Sufficient Statistics
If $Y_1,...,Y_n$ are random variables, a statistic $T(Y_1,...,Y_n)$ is a sufficient statistic if it captures all the information about $θ$ that is contained in the sample. If $Y_1,...,Y_n$ are binary random variables that are conditionally independently and identically distributed with $p(y|θ)$ a Bernoulli distribution with parameter $θ$, then $\sum_{i=1}^n{y_i}$ is a Binomial random variable and its density is $p(y│θ)={n \choose y} θ^y (1-θ)^{n-y}$, for $y\in \{0,1,…,n\}$.
What would be the posterior distribution of $θ$ if instead of working directly with the random variables $Y_1,...,Y_n$, we work with the sufficient statistic $Y=\sum_{i=1}^n{y_i}$?
Here, $θ\in Θ=[0,1]$ with prior distribution $p(θ)$. Let’s again use a uniform prior $p(θ)=1$ for $θ$; then $p(θ│y)=\frac{p(y│θ)p(θ)}{p(y)} =\frac{{n\choose y} θ^y (1-θ)^{n-y}\times 1}{p(y)} ∝ θ^y (1-θ)^{n-y}$, for $θ\in [0,1]$. We can again determine the proportionality constant by using the fact that $p(θ│y)$ is a density and thus it needs to integrate to 1. We can see that the part of $p(θ│y)$ that depends on $θ$ has the same functional form as the kernel of a Beta distribution: $$Beta(x;a,b)=\frac{Γ(a+b)}{Γ(a)Γ(b)} x^{a-1} (1-x)^{b-1},$$ where $x\in [0,1]$, and $a$ and $b$ are positive. Since the kernel of $p(θ│y)$ matches that of a $Beta(a,b)$ with $a=1+y$ and $b=1+n-y$, the constant needed in $p(θ│y)$ is $\frac{Γ(n+2)}{Γ(1+y)Γ(1+n-y)}$.
Summarizing: the prior distribution of $θ$ is $p(θ)=1$ for $θ\in [0,1]$, and $Y|θ \sim p(y│θ)=Binomial(n,θ)$, which implies that $p(θ│y)=Beta(1+y,1+n-y)$.
If $θ \sim p(θ)=Beta(a,b)$, $Y|θ \sim p(y│θ)=Binomial(n,θ)$, and we want to report a point estimate for $θ$ based on the data, we can then report its posterior mean. Since $p(θ│y)=Beta(\tilde{a}=a+y,\tilde{b}=b+n-y)$, using the formula for the mean of a $Beta(a,b)$ random variable, we have that the posterior mean is: $$E(θ│y)=\frac{\tilde{a}}{\tilde{a}+\tilde{b}}=$$ $$\frac{a+y}{(a+y)+(b+n-y)}=\frac{a+y}{a+b+n}=\frac{a+b}{a+b+n}\times \frac{a}{a+b}+\frac{n}{a+b+n}\frac{y}{n}=$$ $$\frac{a+b}{a+b+n}\times \text{ prior mean} +\frac{n}{n+a+b}\times \text{sample average}.$$
The posterior mean is a weighted average of the prior mean and the sample average. The weight of the prior mean, $\frac{a+b}{a+b+n}$, is the ratio of the prior sample size to the sum of the prior sample size and the sample size of the data. If $n≫a+b$ (the prior sample size), then the weight of the prior mean becomes rather small and the majority of the information about $θ$ comes from the data and not from the prior. In this case: $E[θ│y]\approx \frac{y}{n}=\hat{y}$. The mode of the posterior distribution $p(θ│y)=Beta(a+y,b+n-y)$ is given by $Mode(θ│y)=\frac{a+y-1}{a+b+n-2}$. Note also that if the sample size $n$ is very large, the posterior mode is approximately equal to $\frac{y}{n}$, the sample average.
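A quick numerical check of this weighted-average decomposition in R (a sketch using the stroke-trial data, $n=129$ and $y=118$, with an illustrative $Beta(2,2)$ prior chosen only for the example):

## Posterior mean as a weighted average of the prior mean and the sample average
a <- 2; b <- 2                          # illustrative prior Beta(2, 2)
n <- 129; y <- 118                      # data from the stroke trial example
post.mean   <- (a + y) / (a + b + n)
prior.w     <- (a + b) / (a + b + n)    # weight given to the prior mean
prior.mean  <- a / (a + b)
sample.mean <- y / n
c(direct = post.mean, weighted = prior.w * prior.mean + (1 - prior.w) * sample.mean)
## both entries agree, approximately 0.902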
The posterior distribution $p(θ|y)$ can also be summarized by providing intervals for $θ$ of the form $[l(y),u(y)]$ that have a high posterior probability of containing $θ$. For example, an interval with $P(θ\in [l(y),u(y)]|y)=0.95$ has 95% Bayesian coverage and is called a 95% credible interval for $θ$. The 95% credible interval for $θ$ can be computed by taking the 2.5th and 97.5th percentiles of $Beta(a+y,b+n-y)$. For the above example with $n=129$ and $y=118$, we have:
Prior parameters $(a,b)$ | 95% Credible Interval for θ |
---|---|
(1,1) | [0.845, 0.951] |
(2,1) | [0.855, 0.952] |
(3,1) | [0.856, 0.952] |
(1,2) | [0.845, 0.946] |
(2,2) | [0.847, 0.947] |
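Credible intervals of this form can be computed directly with the Beta quantile function qbeta (a short sketch; the loop mirrors the priors listed in the table):

## 95% credible intervals for theta under several Beta(a, b) priors, with n = 129 and y = 118
n <- 129; y <- 118
priors <- rbind(c(1, 1), c(2, 1), c(3, 1), c(1, 2), c(2, 2))
for (k in 1:nrow(priors)) {
  a <- priors[k, 1]; b <- priors[k, 2]
  ci <- qbeta(c(0.025, 0.975), a + y, b + n - y)
  cat("prior (", a, ",", b, "): [", round(ci[1], 3), ",", round(ci[2], 3), "]\n")
}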
Applications
- This article presents a generalization of Bayesian inference. Procedures of statistical inference are described which generalize Bayesian inference in specific ways. Probability is used in such a way that in general only bounds may be placed on the probabilities of given events, and probability systems of this kind are suggested both for sample information and for prior information. These systems are then combined using a specified rule. Illustrations are given for inferences about trinomial probabilities, and for inferences about a monotone sequence of binomial $p_i$. Finally, some comments are made on the general class of models which produce upper and lower probabilities, and on the specific models which underlie the suggested inference procedures.
- This article discusses Bayesian inference of phylogeny and its impact on evolutionary biology. As a discipline, phylogenetics is being transformed by a flood of molecular data. These data allow broad questions to be asked about the history of life, but also present difficult statistical and computational problems. Bayesian inference of phylogeny brings a new perspective to a number of outstanding issues in evolutionary biology, including the analysis of large phylogenetic trees and complex evolutionary models and the detection of the footprint of natural selection in DNA sequences.
Software
## 3D and 2D plots of the posterior probability that theta > some value w given the data in the
## Beta-Binomial model
## The function that makes the 3D plot is the persp function
## For help and documentation, type ?persp at the prompt in R
## The function persp takes as arguments: an x vector that gives the values on the x axis (this has
## to be in increasing order), a y vector with values on the y axis (also in increasing order),
## and a z matrix of dimension equal to the length of the x vector times the length of the y vector.
## The z matrix has at the (i,j) entry the value of the variable to be displayed on the third dimension
## corresponding to the i-th value of the x-vector and the j-th value of the y-vector, that is z[i,j] is
## the value of the z variable at x[i] and y[j].

a <- seq(0,10,by=0.1)
b <- seq(0,10,by=0.1)
w <- 0.8
n <- 129
y <- 118

## Plotting the posterior distribution
for(i in 1:length(a)){
  for(j in 1:length(b)){
    plot(seq(0,1,by=0.01), dbeta(seq(0,1,by=0.01), a[i]+y, b[j]+(n-y)), type="l", col="black", lty=2, lwd=1,
         xlim=c(0,1), xlab="Theta", ylab="P(theta | y)")
    title(main=paste("Posterior distribution a=", a[i], " b=", b[j], sep=""))
    abline(v=0.8, lty=3, col="red")
  }
}

post.prob.theta <- matrix(0, length(a), length(b))
for(i in 1:length(a)){
  for(j in 1:length(b)){
    post.a <- a[i]+y
    post.b <- b[j]+(n-y)
    post.prob.theta[i,j] <- 1-pbeta(w, post.a, post.b)
  }
}
persp(a, b, post.prob.theta, xlab="a", ylab="b", zlab="p(theta > w | y)")
# To rotate the figure, you can specify the values of the angles theta and phi by which to tilt the figure
persp(a, b, post.prob.theta, xlab="a", ylab="b", zlab="p(theta > 0.8 | y)", main="P(theta > 0.8 | y)",
      theta=30, phi=20)
Problems
Sample survey
Suppose we are going to sample 100 individuals from a country (of size much larger than 100) and ask each sampled person whether they support policy Z or not. Let $Y_i=1$ if person $i$ in the sample supports the policy and 0 otherwise.
- Assume that, conditional on $θ$, $Y_1,…,Y_{100}$ are i.i.d. binary random variables with expectation $θ$. Write down the joint distribution $P(Y_1=y_1,Y_2=y_2,…,Y_{100}=y_{100}│θ)$ in a compact form. Also write down the form of $P(\sum{Y_i}=y|θ)$.
- Suppose you believed that $θ\in \{0.0,0.1,…,0.9,1.0\}$. Given that the results of the survey were $\sum_{i=1}^{100}{Y_i} =57$, compute $P(\sum_{i=1}^{100}{Y_i} =57│θ)$ for each of these 11 values of $θ$ and plot these probabilities as a function of $θ$.
- Now suppose you originally had no prior information to believe one of these $θ$-values over another, and so $P(θ = 0.0) = P(θ = 0.1) = ... = P(θ=0.9)=P(θ= 1.0)$. Use Bayes’ rule to compute $p(θ| \sum_{i=1}^n{Y_i} = 57)$ for each $θ$-value. Make a plot of this posterior distribution as a function of $θ$.
- Next, suppose you allow $θ$ to be any value in the interval $[0,1]$. Using the uniform prior density for $θ$, so that $p(θ) = 1$, plot the posterior density $p(θ)\times P(\sum_{i=1}^n{Y_i} = 57|θ)$ as a function of $θ$.
- As discussed above, the posterior distribution of $θ$ is $Beta(1+57,\,1+100-57)$. Plot the posterior density as a function of $θ$. Discuss the relationships among all of the plots you have made for this exercise.
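A minimal R sketch for the computational parts of this exercise (plotting scaffolding only; interpreting and comparing the plots is left to the reader):

## Sample survey: n = 100, observed sum of Y_i = 57
n <- 100; y <- 57
theta.grid <- seq(0, 1, by = 0.1)                  # the 11 discrete theta values
lik <- dbinom(y, size = n, prob = theta.grid)      # P(sum Y_i = 57 | theta)
plot(theta.grid, lik, type = "h", xlab = "theta", ylab = "P(sum = 57 | theta)")
post.discrete <- lik / sum(lik)                    # Bayes' rule with a uniform prior on the 11 values
plot(theta.grid, post.discrete, type = "h", xlab = "theta", ylab = "p(theta | sum = 57)")
theta <- seq(0, 1, by = 0.001)                     # continuous theta with uniform prior p(theta) = 1
plot(theta, dbinom(y, n, theta), type = "l", xlab = "theta", ylab = "p(theta) * P(sum = 57 | theta)")
plot(theta, dbeta(theta, 1 + y, 1 + n - y), type = "l", xlab = "theta", ylab = "Beta(58, 44) posterior")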
References
- Wikipedia Bayesian Inference
- Christou N, Dinov ID (2011) Confidence Interval Based Parameter Estimation—A New SOCR Applet and Activity. PLoS ONE 6(5): e19178. doi:10.1371/journal.pone.0019178.
- SOCR Home page: http://www.socr.umich.edu