# SMHS Probability

## Scientific Methods for Health Sciences - Probability Theory

IV. HS 850: Fundamentals

1) Overview: Probability theory plays an important role in statistics and its applications in many other areas because it provides the theoretical groundwork for statistical inference. Probability theory is the analysis of random phenomena; its central objects are random variables, stochastic processes, and events. An individual coin toss can be considered a random event, yet if it is repeated many times the sequence of outcomes exhibits certain patterns, and probability theory helps us study and predict those patterns. Probability distributions are often divided into two separate classes, discrete and continuous, which we will study later in the Distributions section. In this section, we aim to study some fundamental concepts of probability theory as well as the probability rules we will apply in our subsequent studies.

2) Motivation: Consider an experiment that produces a number of outcomes. The set of all possible outcomes is called the sample space, and the power set of the sample space contains all the different collections of possible results of the experiment. Suppose we roll a fair die, which has 6 possible outcomes; the sample space is {1, 2, 3, 4, 5, 6}. An event is any collection of possible results. For example, the event of rolling an even number corresponds to the subset {2, 4, 6}, which is an element of the power set of the sample space in this experiment. What if we want to estimate the chance of rolling three 2's in a row, or the chance of rolling an odd number? Probability is a way of assigning every event a value between 0 and 1, which tells us the chance that the event occurs.
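The die example above can be sketched directly in Python: for equally likely outcomes, the probability of an event E is |E|/|S|, and independent rolls multiply (a minimal sketch, assuming a fair six-sided die).

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die
sample_space = {1, 2, 3, 4, 5, 6}

# Event: rolling an even number
even = {2, 4, 6}

# For equally likely outcomes, P(E) = |E| / |S|
p_even = Fraction(len(even), len(sample_space))
print(p_even)  # 1/2

# Chance of rolling three 2's in a row (independent rolls multiply)
p_three_twos = Fraction(1, 6) ** 3
print(p_three_twos)  # 1/216
```

Using `Fraction` keeps the probabilities exact rather than approximating them as floating-point numbers.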

3) Theory

3.1) Random Sampling: A simple random sample of n items is a sample in which every member of the population has an equal chance of being selected and the members of the sample are chosen independently. For example, consider a survey where 100 students are chosen from a total of 5,000 students to complete a questionnaire, with each student having the same chance of being chosen. This is a simple example of random sampling. An easy way to implement it is with a random number generator.
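A minimal sketch of drawing such a sample with Python's standard library (the roster of 5,000 numeric student IDs is an illustrative stand-in, following the survey example above):

```python
import random

# Hypothetical roster: 5,000 student IDs (illustrative numbers from the text)
students = list(range(1, 5001))

# random.sample draws without replacement; every student has the
# same chance of being selected, so this is a simple random sample.
random.seed(42)          # fixed seed for reproducibility
survey = random.sample(students, 100)

print(len(survey))       # 100
print(len(set(survey)))  # 100 -- no student is chosen twice
```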

3.2) Types of probabilities: Probability models have two components: sample space and probabilities.

• Sample space (S) for a random experiment is the set of all possible outcomes of the experiment.
• Event: a collection of outcomes.
• An event occurs if any one of the outcomes making up that event occurs.

• Probabilities for each event in the sample space.
• Probabilities may come from models – a mathematical or physical description of the sample space and the chance of each event. An example is a fair die tossing game.
• Probabilities may be derived from data – observed data determine the probability distribution. An example is tossing a coin 50 times and counting the heads.
• Subjective probabilities: combining data and psychological factors to construct a reasonable probability table. An example is the stock market.
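The model-based and data-derived cases above can be contrasted in a short simulation (a sketch; the fair coin and the 50 tosses follow the example in the text):

```python
import random

random.seed(0)

# Model-based probability: a fair coin gives P(heads) = 0.5 exactly.
p_model = 0.5

# Data-derived probability: toss the coin 50 times and use the
# observed proportion of heads as an estimate of P(heads).
tosses = [random.choice(["H", "T"]) for _ in range(50)]
p_data = tosses.count("H") / 50

print(p_model, p_data)
```

The data-derived estimate fluctuates around the model value and converges toward it as the number of tosses grows.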

3.3) Axioms of probability

• First axiom: the probability of an event is a non-negative real number.
• Second axiom: the probability that some elementary event in the entire sample space will occur is 1; that is, there are no elementary events outside the sample space, so P(S)=1.
• Third axiom: any countable sequence of pairwise disjoint events E_1, E_2, … satisfies P(E_1∪E_2∪…)=∑_i P(E_i).
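The three axioms can be checked directly on the fair-die model from the motivation section (a sketch using exact fractions):

```python
from fractions import Fraction

# Probability model for a fair die: each outcome has probability 1/6.
p = {k: Fraction(1, 6) for k in range(1, 7)}

# First axiom: every probability is a non-negative real number.
assert all(v >= 0 for v in p.values())

# Second axiom: the probabilities over the whole sample space sum to 1.
assert sum(p.values()) == 1

# Third axiom: for pairwise disjoint events, probabilities add.
odd, even = {1, 3, 5}, {2, 4, 6}
p_union = sum(p[k] for k in odd | even)
assert p_union == sum(p[k] for k in odd) + sum(p[k] for k in even)
print(p_union)  # 1
```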

3.4) Event manipulations:

• Complement: the complement of event A, denoted A^C or A', occurs if and only if A does not occur. Together, A and A^C make up the whole sample space.
• Union: A∪B contains all outcomes in A or in B (or both). P(A∪B)=P(A)+P(B)-P(A∩B).
• Intersection: A∩B contains all outcomes which are in both A and B.
• Mutually exclusive events are events that cannot occur at the same time.
• Conditional Probability: The conditional probability of event A occurring given that event B occurs is P(A│B)=(P(A∩B))/(P(B)). When A and B are independent, knowing that B occurs gives no information about the probability of A, and P(A│B)=P(A).
• Multiplication rule: P(A∩B)=P(A│B)P(B); in general: P(A_1∩A_2∩A_3∩…∩A_n )=P(A_1 )P(A_2│A_1 )P(A_3│A_1∩A_2 )…P(A_n│A_1∩A_2∩…∩A_(n-1) ).
• Law of total probability: P(B)=P(B│A_1 )P(A_1 )+P(B│A_2 )P(A_2 )+⋯+P(B│A_n )P(A_n) where {A_1,…,A_n} partition the sample space S.
• Inverting the order of conditioning: P(B│A)=(P(A∩B))/(P(A))=(P(A∩B))/(P(B)) (P(B))/(P(A))=P(A│B) P(B)/P(A). Hence: P(A∩B)=P(A│B)P(B)=P(B│A)P(A).
• Bayes' Rule: If {A_1,…,A_n} partition the sample space S, and A and B are any events that are subsets of S, then we have:

P(A│B)=(P(B│A)P(A))/(P(B))

=(P(B│A)P(A))/(P(B│A_1 )P(A_1 )+P(B│A_2 )P(A_2 )+⋯+P(B│A_n )P(A_n)).
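The conditional-probability, multiplication, total-probability, and Bayes formulas above can all be verified by enumeration on the fair-die example (a sketch; the partition {A_1, A_2} = {{1,2,3},{4,5,6}} and the event B = "roll is even" are illustrative choices, not from the text):

```python
from fractions import Fraction

S = set(range(1, 7))                        # fair-die sample space

def P(E):
    # For equally likely outcomes, P(E) = |E ∩ S| / |S|
    return Fraction(len(E & S), len(S))

def P_cond(E, F):
    # Conditional probability: P(E|F) = P(E ∩ F) / P(F)
    return P(E & F) / P(F)

B = {2, 4, 6}                               # event: roll is even
A1, A2 = {1, 2, 3}, {4, 5, 6}               # a partition of S

# Multiplication rule: P(A1 ∩ B) = P(B|A1) P(A1)
assert P(A1 & B) == P_cond(B, A1) * P(A1)

# Law of total probability: P(B) = P(B|A1)P(A1) + P(B|A2)P(A2)
p_B = P_cond(B, A1) * P(A1) + P_cond(B, A2) * P(A2)
assert p_B == P(B)

# Bayes' rule: P(A1|B) = P(B|A1) P(A1) / P(B)
p_A1_given_B = P_cond(B, A1) * P(A1) / p_B
print(p_A1_given_B)  # 1/3
```

Here only one of the three even outcomes {2, 4, 6} lies in A_1 = {1, 2, 3}, so P(A_1|B) = 1/3, matching the enumeration.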