SMHS ParamInference
Scientific Methods for Health Sciences - Parametric Inference
IV. HS 850: Fundamentals
Parametric Inference
1) Overview: Statistics aims to retrieve the ‘causes’ (e.g., the parameters of a probability density function) from observations. In statistical inference, we collect information about the underlying population based on a sample drawn from it. Ideally, we would find a suitable model with unknown parameters, estimate those parameters from the data at hand, and then use the fitted model to make further inferences about the population. In this lecture, we introduce the concepts of random variables, parametric models, and inference based on a parametric model.
2) Motivation: Consider the well-known example of flipping a coin 10 times. Experience tells us that the number of heads in one experiment with 10 flips follows a Binomial distribution with p=P(head) in one flip. Here, we have chosen the model to be Binomial(n,p) with n=10. The next step is to determine the value of p. An obvious way to do this is to flip the coin many times (say 100), count the heads, and estimate p as the number of heads divided by 100, say 63/100=0.63. Based on this information, the number of heads in our 10-flip experiment follows a Binomial(10, 0.63) distribution. That is, we can infer that we will flip an average of 6.3 heads in 10 flips if we repeat the experiment enough times. So, what is a random variable? How do we build a parametric model from data? What kind of inference can we make based on the parametric model?
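The coin-flip estimation above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original lecture: it simulates flips with an assumed true p, then estimates p as the proportion of heads (the maximum-likelihood estimate for a Binomial proportion).

```python
import random

def estimate_p(n_flips, true_p, seed=0):
    """Estimate P(head) as (# heads)/(# flips) -- the MLE for a Binomial proportion."""
    rng = random.Random(seed)
    heads = sum(rng.random() < true_p for _ in range(n_flips))
    return heads / n_flips

# With many flips, the law of large numbers makes the estimate
# close to the (here assumed) true p = 0.63.
p_hat = estimate_p(n_flips=10_000, true_p=0.63)
print(p_hat)
```

With 100 flips, as in the text, the estimate is coarser (granularity 1/100 and larger sampling variability); increasing the number of flips tightens the estimate around the true p.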
3) Theory
- 3.1) Random variable: a variable whose value is subject to variations due to chance (i.e., randomness). It can take on a set of values, each with an associated probability for discrete variables or a probability density function for continuous variables. The value of a random variable represents the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain. The possible values of a random variable and their associated probabilities (known as a probability distribution) can be further described with mathematical functions.
There are two main types of random variables:
Discrete random variables take on a specified finite or countable list of values and are endowed with a probability mass function characteristic of their probability distribution.
Continuous random variables take on any numerical value in an interval or collection of intervals, via a probability density function characteristic of their probability distribution.
A random variable may also be a mixture of both types.
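The distinction between a probability mass function and a probability density function can be made concrete with a short sketch (illustrative only; the specific distributions chosen here, Binomial and Normal, are assumptions for the example):

```python
from math import comb, exp, pi, sqrt

def binom_pmf(k, n, p):
    """P(X = k) for a discrete Binomial(n, p) random variable (a mass function)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f(x) for a continuous Normal(mu, sigma^2) random variable (a density, not a probability)."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# A discrete variable assigns probability to individual values; the masses sum to 1.
total = sum(binom_pmf(k, 10, 0.5) for k in range(11))
# A continuous variable assigns density to points; probabilities come from integrating it.
print(round(total, 6), round(normal_pdf(0.0), 4))  # → 1.0 0.3989
```

Note that a density can exceed 1 at a point (e.g., a Normal with small σ); only its integral over an interval is a probability, whereas a mass function value is itself a probability.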
- 3.2) Parameters: a characteristic, or measurable factor, that helps define a particular system; it is an important element to consider in the evaluation or comprehension of an event. For example, μ is often used for the mean and σ for the standard deviation in statistics. The following table provides a list of commonly used parameters and symbols with descriptions:
| Parameter | Description | Parameter | Description |
|---|---|---|---|
| x̄ | Sample mean | α, β, γ | Generic Greek-letter parameters |
| μ | Population mean | θ | Theta (lower case); a generic parameter |
| σ | Population standard deviation | φ | Phi (lower case) |
| σ² | Population variance | ω | Omega (lower case) |
| s | Sample standard deviation | ∆ | Delta (upper case); increment |
| s² | Sample variance | ν | Nu |
| λ | Poisson mean (Lambda) | τ | Tau |
| χ | Chi (as in the χ² distribution) | η | Eta |
| ρ | Density (Rho) | δ | Delta (lower case) |
| ϕ | Standard normal density function (Phi) | Θ | Parameter space (capital Theta) |
| Γ | Gamma (the Gamma function) | Ω | Sample space (capital Omega) |
| ∂ | Partial derivative | Κ, κ | Kappa |
| S | Sample space | | |
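The sample/population distinction in the table (x̄ vs. μ, s² vs. σ²) can be illustrated with Python's standard library. The data values here are made up for the example; the key point is that the sample statistics use the n−1 denominator, since they estimate the unknown population parameters:

```python
import statistics

# Hypothetical sample of 6 measurements (illustrative data, not from the lecture).
data = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0]

x_bar = statistics.mean(data)      # sample mean x̄, an estimate of μ
s = statistics.stdev(data)         # sample standard deviation s (n-1 denominator), estimates σ
s2 = statistics.variance(data)     # sample variance s², estimates σ²

print(round(x_bar, 4), round(s2, 4))
```

If the entire population were observed, one would instead use `statistics.pstdev` and `statistics.pvariance`, which divide by n and correspond to σ and σ² directly.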
- SOCR Home page: http://www.socr.umich.edu