==[[AP_Statistics_Curriculum_2007 | General Advance-Placement (AP) Statistics Curriculum]] - Multinomial Random Variables and Experiments==
 
The multinomial experiments (and multinomial distributions) directly extend their [[AP_Statistics_Curriculum_2007_Distrib_Binomial |binomial counterparts]].
===Multinomial experiments===
A multinomial experiment is an experiment that has the following properties:
* The experiment consists of '''n repeated trials'''.
* Each trial can result in one of ''k'' possible (discrete) outcomes.
* On any given trial, the probability that a particular outcome will occur is '''constant'''.
* The trials are '''independent'''; that is, the outcome of one trial does not affect the outcomes of the other trials.
  
====Examples of Multinomial experiments====
* Suppose we have an urn containing 9 marbles. Two are red, three are green, and four are blue (2+3+4=9). We randomly select 5 marbles from the urn, ''with replacement''. What is the probability (''P(A)'') of the event ''A={selecting 2 green marbles and 3 blue marbles}''?
* To solve this problem, we apply the multinomial formula. We know the following:
** The experiment consists of 5 trials, so n = 5.
** The 5 trials produce 0 red, 2 green marbles, and 3 blue marbles; so <math>r_1=r_{red} = 0</math>, <math>r_2=r_{green} = 2</math>, and <math>r_3=r_{blue} = 3</math>.
** For any particular trial, the probability of drawing a red, green, or blue marble is 2/9, 3/9, and 4/9, respectively. Hence, <math>p_1=p_{red} = 2/9</math>, <math>p_2=p_{green} = 1/3</math>, and <math>p_3=p_{blue} = 4/9</math>.
Plugging these values into the multinomial formula we get the probability of the event of interest to be:
: <math>P(A) = {5\choose r_1,r_2,r_3}p_1^{r_1}p_2^{r_2}p_3^{r_3}</math>. In this specific case, <math>P(A) = {5\choose 0,2,3}p_1^{0}p_2^{2}p_3^{3}</math>.
: <math>P(A) = {5! \over 0!\times 2! \times 3! }\times (2/9)^0 \times (1/3)^2\times (4/9)^3=0.0975461.</math>
Thus, if we draw 5 marbles with replacement from the urn, the probability of drawing no red, 2 green, and 3 blue marbles is ''0.0975461''.
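For readers who want to reproduce this computation outside the SOCR applets, below is a minimal Python sketch (the helper name <code>multinomial_pmf</code> and the ordering of the categories are illustrative choices, not part of SOCR):
<pre>
# Multinomial probability of observing counts r_1,...,r_k in n = r_1+...+r_k trials,
# given category probabilities p_1,...,p_k.
from math import factorial

def multinomial_pmf(counts, probs):
    coeff = factorial(sum(counts))
    for r in counts:
        coeff //= factorial(r)
    prob = 1.0
    for r, p in zip(counts, probs):
        prob *= p ** r
    return coeff * prob

# Urn example: 0 red, 2 green and 3 blue marbles in 5 draws with replacement
print(multinomial_pmf([0, 2, 3], [2/9, 3/9, 4/9]))   # ~0.0975461
</pre>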
* Let's again use the urn containing 9 marbles, where the numbers of red, green, and blue marbles are 2, 3, and 4, respectively. This time we select 5 marbles from the urn, but we are interested in the probability (''P(B)'') of the event ''B={selecting 2 green marbles}''. (Note that 2 < 5, so the event says nothing about the colors of the other 3 marbles.)
** To solve this problem, we classify the marbles into '''green''' and '''other'''. Thus, the multinomial experiment consists of 5 trials (n = 5), with <math>r_1=r_{green} = 2</math> and <math>r_2=r_{other} = 3</math>. In this case, the probabilities of drawing a '''green''' or '''other''' marble are 3/9 and 6/9, respectively. Notice that P('''other''') is the sum of the probabilities of all non-green colors (the complement of green). Hence,
: <math>P(B) = {5\choose 2, 3}p_1^{r_1}p_2^{r_2} = {5! \over 2! \times 3! }\times (3/9)^2 \times (6/9)^3=0.329218.</math>
This probability is equivalent to the binomial probability (success=green; failure=other color), ''B(n=5, p=1/3)''.
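As a quick, self-contained check of this equivalence (a sketch, not part of the SOCR materials), the pooled two-category multinomial probability and the Binomial(n=5, p=1/3) probability of exactly 2 successes can be compared directly:
<pre>
from math import comb, factorial

# Two-category multinomial (green vs. other): 2 greens and 3 others in 5 draws
p_multi = factorial(5) // (factorial(2) * factorial(3)) * (3/9)**2 * (6/9)**3
# Binomial(n=5, p=1/3): probability of exactly 2 successes (success = green)
p_binom = comb(5, 2) * (1/3)**2 * (2/3)**3

print(p_multi, p_binom)   # both ~0.329218
</pre>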
  
 
===Synergies between Binomial and Multinomial processes/probabilities/coefficients===
 
* The Binomial vs. Multinomial '''Coefficients''' (See this [http://www.ohrt.com/odds/binomial.php Binomial Calculator])
: <math>{n\choose i}=\frac{n!}{i!(n-i)!}</math>
  
 
: <math>{n\choose i_1,i_2,\cdots, i_k}= \frac{n!}{i_1! i_2! \cdots i_k!}</math>
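Both coefficients are easy to compute with the Python standard library; the sketch below (the helper name <code>multinomial_coefficient</code> is only illustrative) also shows that the multinomial coefficient with k=2 categories reduces to the binomial coefficient:
<pre>
from math import comb, factorial

def multinomial_coefficient(counts):
    # n! / (i_1! * i_2! * ... * i_k!), where n = i_1 + ... + i_k
    result = factorial(sum(counts))
    for i in counts:
        result //= factorial(i)
    return result

print(comb(5, 2))                          # binomial coefficient C(5,2) = 10
print(multinomial_coefficient([2, 3]))     # k=2 case gives the same value: 10
print(multinomial_coefficient([0, 2, 3]))  # 5!/(0! 2! 3!) = 10
</pre>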
  
 
* The Binomial vs. Multinomial '''Formulas'''
: <math>(a+b)^n = \sum_{i=0}^n{{n\choose i}a^i \times b^{n-i}}</math>
 
: <math>(a_1+a_2+\cdots +a_k)^n = \sum_{i_1+i_2+\cdots +i_k=n}{ {n\choose i_1,i_2,\cdots, i_k} a_1^{i_1} \times a_2^{i_2} \times \cdots \times a_k^{i_k}}</math>
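The multinomial theorem can be spot-checked numerically. The brute-force sketch below (illustrative only, and practical only for small n and k) enumerates all exponent vectors <math>(i_1, \cdots, i_k)</math> with <math>i_1+\cdots+i_k=n</math> and compares the resulting sum to the left-hand side:
<pre>
from itertools import product
from math import factorial, isclose

def multinomial_expansion(a, n):
    # Right-hand side: sum over all exponent vectors (i_1,...,i_k) with i_1+...+i_k = n
    total = 0.0
    for exps in product(range(n + 1), repeat=len(a)):
        if sum(exps) != n:
            continue
        coeff = factorial(n)
        for i in exps:
            coeff //= factorial(i)
        term = coeff
        for a_j, i_j in zip(a, exps):
            term *= a_j ** i_j
        total += term
    return total

a, n = [1.5, 2.0, 0.5], 4                                   # arbitrary a_1, a_2, a_3 and power n
print(isclose(multinomial_expansion(a, n), sum(a) ** n))    # True
</pre>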
  
* The Binomial vs. Multinomial '''Probabilities''' (See this [http://socr.ucla.edu/Applets.dir/Normal_T_Chi2_F_Tables.htm Binomial distribution calculator] and the [http://socr.ucla.edu/htmls/dist/Multinomial_Distribution.html SOCR Multinomial Distribution calculator])
: <math>P(X=r)={n\choose r}p^r(1-p)^{n-r}, \forall 0\leq r \leq n</math>

: <math>P(X_1=r_1 \cap X_2=r_2 \cap \cdots \cap X_k=r_k | r_1+r_2+\cdots+r_k=n)={n\choose r_1,r_2,\cdots, r_k}p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}</math>
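A small numerical sanity check of these formulas (a sketch with illustrative helper names) is to verify that the multinomial probabilities of all count vectors <math>(r_1, \cdots, r_k)</math> with <math>r_1+\cdots+r_k=n</math> add up to 1, and that the k=2 case reproduces the binomial probability:
<pre>
from itertools import product
from math import comb, factorial, isclose

def multinomial_prob(counts, probs):
    coeff = factorial(sum(counts))
    for r in counts:
        coeff //= factorial(r)
    prob = 1.0
    for r, p in zip(counts, probs):
        prob *= p ** r
    return coeff * prob

n, probs = 5, [0.2, 0.3, 0.5]
total = sum(multinomial_prob(r, probs)
            for r in product(range(n + 1), repeat=3) if sum(r) == n)
print(isclose(total, 1.0))                                    # True

# k = 2 categories reduce to the binomial probability
print(isclose(multinomial_prob([2, 3], [0.3, 0.7]),
              comb(5, 2) * 0.3**2 * 0.7**3))                  # True
</pre>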
  
===Expectation and variance===
If <math>X_i</math> denotes the number of trials resulting in outcome ''i'', then the expected number of times outcome ''i'' is observed over ''n'' trials is
:<math>E(X_i) = n p_i.</math>
  
Since each count <math>X_i</math> is marginally a binomially distributed random variable, the [[AP_Statistics_Curriculum_2007_Distrib_MeanVar#Properties_of_Variance | variance-covariance matrix]] of the counts <math>(X_1, \cdots, X_k)</math> is given by:
: Diagonal terms (variances): <math>VAR(X_i)=np_i(1-p_i)</math>, for each ''i'', and
: Off-diagonal terms (covariances): <math>COV(X_i,X_j)=-np_i p_j</math>, for <math>i\not= j</math>.
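These moment formulas can be verified empirically by simulation. Here is a minimal NumPy sketch (the probabilities, the seed, and the 100,000 replications are arbitrary illustrative choices):
<pre>
import numpy as np

n, p = 10, np.array([0.2, 0.3, 0.5])
rng = np.random.default_rng(seed=1)
samples = rng.multinomial(n, p, size=100_000)   # each row is one count vector (X_1, X_2, X_3)

print(samples.mean(axis=0))      # close to n*p         = [2.0, 3.0, 5.0]
print(samples.var(axis=0))       # close to n*p*(1-p)   = [1.6, 2.1, 2.5]
print(np.cov(samples.T)[0, 1])   # close to -n*p_1*p_2  = -0.6
</pre>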
===Example===
Suppose we study N independent trials with results falling in one of k possible categories labeled <math>1,2, \cdots, k</math>. Let <math>p_i</math> be the probability of a trial resulting in the <math>i^{th}</math> category, where <math>p_1+p_2+ \cdots +p_k = 1</math>. Let <math>N_i</math> be the number of trials resulting in the <math>i^{th}</math> category, where <math>N_1+N_2+ \cdots +N_k = N</math>.
 
For instance, suppose we have 9 people arriving at a meeting according to the following information:
: P(by Air) = 0.4,  P(by Bus) = 0.2, P(by Automobile) = 0.3,  P(by Train) = 0.1
* Compute the following probabilities
: P(3 by Air, 3 by Bus, 1 by Auto, 2 by Train) = ?
: P(2 by air) = ?
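One way to set up these computations (a sketch with an illustrative helper name; the second probability collapses the four arrival modes to ''Air'' vs. ''not Air'', i.e., Binomial(n=9, p=0.4)):
<pre>
from math import comb, factorial

def multinomial_prob(counts, probs):
    coeff = factorial(sum(counts))
    for r in counts:
        coeff //= factorial(r)
    prob = 1.0
    for r, p in zip(counts, probs):
        prob *= p ** r
    return coeff * prob

p_modes = [0.4, 0.2, 0.3, 0.1]                  # Air, Bus, Automobile, Train
print(multinomial_prob([3, 3, 1, 2], p_modes))  # P(3 by Air, 3 by Bus, 1 by Auto, 2 by Train)
print(comb(9, 2) * 0.4**2 * 0.6**7)             # P(exactly 2 of the 9 arrive by Air)
</pre>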
===SOCR Multinomial Examples===
Suppose we roll 10 loaded hexagonal (6-face) dice and we are interested in the probability of observing the event A={3 ones, 3 twos, 2 threes, and 2 fours}. Assume the dice are loaded toward the small outcomes according to the following probabilities of the 6 outcomes (''one'' is the most likely and ''six'' is the least likely outcome).
 
<center>
{| class="wikitable" style="text-align:center; width:75%" border="1"
|-
| ''x'' || 1 || 2 || 3 || 4 || 5 || 6
|-
| ''P(X=x)'' || 0.286 || 0.238 || 0.19 || 0.143 || 0.095 || 0.048
|}
</center>
  
: ''P(A)=?'' Note that the complete description of the event of interest is:
 
: A={3 ones, 3 twos, 2 threes, 2 fours, and 0 others (5's or 6's!)}
====Exact Solution====
Of course, we can compute this probability exactly:
* By-hand calculations:
: <math>P(A) = {10! \over 3!\times 3! \times 2! \times 2! \times 0! \times 0!} \times 0.286^3 \times 0.238^3\times 0.19^2 \times 0.143^2 \times 0.095^0 \times 0.048^0= </math>
:<math>=0.00586690138260962656816896.</math>
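: The by-hand arithmetic above can also be double-checked with a few lines of Python (a sketch using only the standard library):
<pre>
from math import factorial

counts = [3, 3, 2, 2, 0, 0]                        # ones, twos, threes, fours, fives, sixes
probs  = [0.286, 0.238, 0.19, 0.143, 0.095, 0.048]

coeff = factorial(10)
for r in counts:
    coeff //= factorial(r)

p_A = float(coeff)
for r, p in zip(counts, probs):
    p_A *= p ** r

print(p_A)   # ~0.0058669
</pre>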
  
* Using the [http://socr.ucla.edu/htmls/dist/Multinomial_Distribution.html SOCR Multinomial Distribution Calculator]: Enter the information given above in the [http://socr.ucla.edu/htmls/dist/Multinomial_Distribution.html SOCR Multinomial distribution applet] to get the probability density and cumulative distribution values for the given outcome ''{3,3,2,2,0}'', as shown on the image below. Note that since the event A does not contain the die outcomes of 5 or 6, we can reduce the case of 6 outcomes <math>\{1,2,3,4,5,6 \}</math> to a case of 5 outcomes <math>\{1,2,3,4,other \}</math> with corresponding probabilities <math>\{0.286, 0.238, 0.19, 0.143, 0.143 \}</math>, where we pool together the probabilities of the die outcomes of 5 and 6, i.e., P(other)=0.095+0.048=0.143.
  
<center>[[Image:SOCR_EBook_Dinov_Multinimial_102209_Fig2.png|500px]]</center>
====Approximate Solution====
We can also find a pretty close empirically-driven estimate using the [[SOCR_EduMaterials_Activities_DiceExperiment | SOCR Dice Experiment]].
  
For instance, running the [http://socr.ucla.edu/htmls/SOCR_Experiments.html SOCR Dice Experiment] 1,000 times with number of dice n=10, and the loading probabilities listed above, we get an output like the one shown below.
<center>[[Image:SOCR_EBook_Dinov_Multinimial_030508_Fig1.jpg|500px]]</center>
Now, we can actually count how many of these 1,000 trials generated the event ''A'' as an outcome. In one such experiment of 1,000 trials, there were 8 outcomes of the type {3 ones, 3 twos, 2 threes and 2 fours}. Therefore, the relative proportion of these outcomes to 1,000 gives us a fairly accurate estimate of the exact probability we computed above:
: <math>P(A) \approx {8 \over 1,000}=0.008</math>.
Note that this approximation is close to the exact answer above. By the [[AP_Statistics_Curriculum_2007_Limits_LLN | Law of Large Numbers (LLN)]], we know that this SOCR empirical approximation of the exact multinomial probability of interest will improve significantly as we increase the number of experiment runs from 1,000 to 10,000.
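The same Monte Carlo idea can be reproduced outside the applet. The NumPy sketch below (the seed and the 10,000 repetitions are illustrative choices, not SOCR settings) simulates repeated rolls of the 10 loaded dice and estimates P(A) by the relative frequency of the event:
<pre>
import numpy as np

probs = [0.286, 0.238, 0.19, 0.143, 0.095, 0.048]  # loaded-die probabilities for faces 1..6
target = np.array([3, 3, 2, 2, 0, 0])              # event A: 3 ones, 3 twos, 2 threes, 2 fours

rng = np.random.default_rng(seed=2023)
rolls = rng.multinomial(10, probs, size=10_000)    # each row: face counts for one roll of 10 dice
p_hat = (rolls == target).all(axis=1).mean()

print(p_hat)   # relative-frequency estimate of P(A), close to the exact 0.00587
</pre>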
===[[EBook_Problems_Distrib_Multinomial|Problems]]===
 
 
<hr>
 
  
===See also===
* [[AP_Statistics_Curriculum_2007_Distrib_Dists| Negative Binomial and other Distributions]]
* [[AP_Statistics_Curriculum_2007_Distrib_Dists#Negative_Multinomial_Distribution_.28NMD.29| Negative Multinomial Distribution]]
  
 
<hr>
 
* SOCR Home page: http://www.socr.ucla.edu
 
  
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=AP_Statistics_Curriculum_2007_Distrib_Multinomial}}


