
SOCR Educational Materials - Activities - SOCR Law of Large Numbers Activity

Overview

This is part I of a heterogeneous activity that demonstrates the theory and applications of the Law of Large Numbers (LLN). Part II and Part III of this activity contain more examples and diverse experiments.

Example

The average weight of 10 students from a class of 100 students is most likely closer to the real average weight of all 100 students than the average weight of 3 randomly chosen students from that same class. This is because a sample of 10 is larger than a sample of only 3 and better represents the entire class. At one extreme, a sample of 99 of the 100 students will produce a sample average almost exactly equal to the average for all 100 students. At the other extreme, sampling a single student will give a highly variable estimate of the overall class average weight.

Statement of the Law of Large Numbers

If an event of probability p is observed repeatedly during independent repetitions, the ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of repetitions becomes arbitrarily large.
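
In symbols, one standard formalization of this statement (the weak LLN, included here for reference) is: if \(S_n\) denotes the number of times the event occurs in \(n\) independent repetitions, then for every \(\epsilon > 0\)

\[ \lim_{n \rightarrow \infty} P\left( \left| \frac{S_n}{n} - p \right| \ge \epsilon \right) = 0 . \]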

The theory behind the LLN

Complete details about the weak and strong laws of large numbers may be found on the Wikipedia Law of large numbers page (http://en.wikipedia.org/wiki/Law_of_large_numbers).

Exercise 1

This exercise illustrates the statement and validity of the LLN in the situation of tossing (biased or fair) coins repeatedly. Let H and T denote Heads and Tails; the probabilities of observing a Head or a Tail at each trial are \(0<p<1\) and \(0<1-p<1\), respectively. The sample space of this experiment consists of sequences of H's and T's. For example, one outcome may be \(\{H, H, T, H, H, T, T, T, ....\}\). If we toss a coin n times, the size of the sample space is \(2^n\), as the coin tosses are independent. The Binomial distribution governs the probability of observing \(0\le k\le n\) Heads in \(n\) tosses, which is given by the Binomial density evaluated at \(k\).
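
For reference, the Binomial density mentioned above is

\[ P(k \text{ Heads in } n \text{ tosses}) = {n \choose k} p^k (1-p)^{n-k}, \qquad 0 \le k \le n . \]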

In this case we will be interested in two random variables associated with this process. The first variable will be the proportion of Heads, and the second will be the difference of the numbers of Heads and Tails. This will empirically demonstrate the LLN and its most common misconceptions (presented below). Point your browser to the SOCR Experiments and select the Coin Toss LLN Experiment from the drop-down list of experiments in the top-left panel. This applet consists of a control toolbar on top, followed by a graph panel in the middle and a results table at the bottom. Use the toolbar to flip coins one at a time, 10, 100, or 1,000 at a time, or continuously. The toolbar also allows you to stop or reset an experiment and to select the probability of Heads (p) using the slider. The graph panel in the middle will dynamically plot the values of the two variables of interest (the proportion of Heads and the difference of Heads and Tails). The outcome table at the bottom presents the summaries of all trials of this experiment. From this table, you can copy and paste the summary for further processing using other computational resources (e.g., SOCR Modeler or MS Excel).
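
The two variables that the applet tracks are also easy to reproduce offline. Below is a minimal Python sketch of the same experiment (our own illustration, not SOCR code; the function name and defaults are hypothetical) that flips a p-coin n times and records the running proportion of Heads and the running raw difference Heads - Tails:

  import random

  def coin_toss_lln(n=1000, p=0.5, seed=None):
      """Simulate n tosses of a p-coin; return the running proportions of Heads and Heads-Tails differences."""
      rng = random.Random(seed)
      heads = 0
      proportions, differences = [], []
      for k in range(1, n + 1):
          if rng.random() < p:               # a Head occurs with probability p
              heads += 1
          tails = k - heads
          proportions.append(heads / k)      # sample proportion of Heads after k tosses
          differences.append(heads - tails)  # raw difference of Heads and Tails after k tosses
      return proportions, differences

  props, diffs = coin_toss_lln(n=10000, p=0.5, seed=1)
  print(props[-1])   # close to p = 0.5 for large n, as the LLN predicts
  print(diffs[-1])   # typically far from 0; |Heads - Tails| tends to grow with n

Running this sketch for increasing n shows the proportion settling near p while the raw difference keeps wandering, which previews the misconceptions discussed at the end of this activity.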

  • Note: We report the normalized differences of the number of Heads minus the number of Tails in the graph and result table. Let \(H\) and \(T\) be the numbers of Heads and Tails up to the current trial (\(k\)), respectively. Then we define the normalized difference \(|H - T| = p + \frac{(1-p)H - pT}{\frac{2}{3} Max_k}\), where \(Max_k = \max_{1 \le i \le k}{|H-T|_i}\) is the largest absolute difference of Heads and Tails observed up to the \(k^{th}\) trial. Observe that the expectation of the normalized difference is \(E(|H-T|)=p\), since \(E((1-p)H-pT)=0\). This ensures that the normalized differences oscillate around the chosen \(p\) (the LLN limit of the proportion of Heads) and remain visible within the graph window.
[Image: SOCR_Activities_LLN_Dinov_022007_Fig1.jpg]
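
The normalization in the note above can be written as a small Python helper (our reading of the formula; the function name and the example inputs are ours, not part of the applet):

  def normalized_difference(heads_history, tails_history, p):
      """Compute the normalized |H - T| sequence described in the note above."""
      normalized = []
      max_abs_diff = 0.0
      for H, T in zip(heads_history, tails_history):
          max_abs_diff = max(max_abs_diff, abs(H - T))                      # Max_k: largest |H - T| seen so far
          scale = (2.0 / 3.0) * max_abs_diff if max_abs_diff > 0 else 1.0   # avoid dividing by zero early on
          normalized.append(p + ((1 - p) * H - p * T) / scale)
      return normalized

  # Running counts after each of 4 tosses (the sequence H, H, T, H gives these histories):
  print(normalized_difference([1, 2, 2, 3], [0, 0, 1, 1], p=0.5))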

Now, select n=100 and p=0.5. The figure below shows a snapshot of the applet. Remember that each time you run the applet the random samples will be different, so the figures and results will generally vary. Click on the Run or Step buttons to perform the experiment and observe how the proportion of Heads and the differences evolve over time. Choosing Continuous from the number-of-experiments drop-down list in the toolbar will run the experiment in a continuous mode (use the Stop button to terminate the experiment in this case). The statement of the LLN in this experiment is simply that as the number of experiments increases, the sample proportion of Heads (red curve) will approach the theoretical (user-preset) value of p (in this case p=0.5). Try to change the value of p and run the experiment interactively several times. Notice the behavior of the graphs of the two variables we study. Try to pose and answer questions like these:

  • If we set p=0.4, how large of a sample-size is needed to ensure that the sample-proportion stays within [0.4; 0.6]?
  • What is the behavior of the curve representing the differences of Heads and Tails (red curve)?
  • Is the convergence of the sample-proportion to the theoretical proportion (that we preset) dependent on p?
  • Remember that the more experiments you run, the closer the theoretical and sample proportions will be (by the LLN). Go into Continuous run mode and watch the convergence of the sample proportion to \(p\). Can you explain, in words, why we can't expect the second variable of interest (the difference of Heads and Tails) to converge? [Image: SOCR_Activities_LLN_Dinov_022007_Fig2.jpg]

Exercise 2

The second SOCR demonstration of the law of large numbers will be quite different and practically useful. Here we show how the LLN implies practical algorithms for estimation of transcendental numbers. The two most popular transcendental numbers are \(\pi\) and e.

Estimating e using SOCR simulation

The SOCR E-Estimate Experiment provides the complete details of this simulation. In a nutshell, we can estimate the value of the number e (the base of the natural logarithm) using random sampling from the Uniform distribution. Suppose \(X_1, X_2, X_3, ...\) are drawn from the Uniform distribution on (0, 1) and define \(U = \min \left \{ n : X_1+X_2+\cdots+X_n > 1 \right \}\), i.e., the smallest number of draws whose running sum exceeds 1 (note that all \(X_i \ge 0\)).

Now, the expected value \(E(U) = e \approx 2.7182\). Therefore, by the LLN, averaging the values \(\left \{ U_1, U_2, U_3, ..., U_k \right \}\), each computed from a fresh random sample \(X_1, X_2, ... \sim U(0,1)\) as described above, provides an increasingly accurate estimate (as \(k \rightarrow \infty\)) of the number e.
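
A minimal Python sketch of this estimator (our own illustration of the scheme described above; the function names and defaults are hypothetical, and this is not the SOCR applet's code):

  import random

  def draws_until_sum_exceeds_one(rng):
      """Draw U(0,1) variates until the running sum exceeds 1; return how many draws were needed (the variable U)."""
      total, n = 0.0, 0
      while total <= 1.0:
          total += rng.random()
          n += 1
      return n

  def estimate_e(k=100000, seed=None):
      """Average k independent copies of U; by the LLN this average converges to E(U) = e."""
      rng = random.Random(seed)
      return sum(draws_until_sum_exceeds_one(rng) for _ in range(k)) / k

  print(estimate_e(k=200000, seed=1))   # prints a value near 2.718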

The Uniform E-Estimate Experiment, part of SOCR Experiments, provides a hands-on demonstration of how the LLN facilitates stochastic simulation-based estimation of e.

[Image: SOCR_Activities_Uniform_E_EstimateExperiment_Dinov_121907_Fig1.jpg]

Estimating \(\pi\) using SOCR simulation

Similarly, one may approximate the transcendental number \(\pi\) using the SOCR Buffon's Needle Experiment. Here, the LLN again provides the foundation for an increasingly accurate approximation of \(\pi\), obtained by virtually dropping needles (many times) on a tiled surface and observing whether each needle crosses a tile grid-line. For a tile grid of size 1 (and a needle of the same length), the probability of a needle-line intersection is \({ 2 \over \pi} \approx 0.63662\). In practice, to estimate \(\pi\) from a number of needle drops (N), we take twice the reciprocal of the sample proportion of intersections, i.e., \(\hat{\pi} = 2N/(\text{number of intersections})\).
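
A stand-alone Python sketch of this estimator (a simplified simulation under the assumptions above of unit grid spacing and unit needle length; it is not the SOCR applet's implementation, and math.pi is used only to sample the needle's angle):

  import math
  import random

  def estimate_pi_buffon(drops=200000, seed=None):
      """Buffon's needle with line spacing 1 and needle length 1: P(cross) = 2/pi, so pi is about 2*drops/crossings."""
      rng = random.Random(seed)
      crossings = 0
      for _ in range(drops):
          center = rng.random() / 2.0             # distance from needle midpoint to the nearest line, uniform on [0, 1/2]
          angle = rng.random() * (math.pi / 2.0)  # acute angle between the needle and the lines, uniform on [0, pi/2]
          if center <= 0.5 * math.sin(angle):     # a needle of length 1 crosses the nearest line
              crossings += 1
      return 2.0 * drops / crossings

  print(estimate_pi_buffon(drops=500000, seed=1))   # prints a value near 3.14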

Other SOCR LLN Activities

  • Part II of this activity
  • Part III of this activity

Common Misconceptions regarding the LLN

  • Misconception 1: If we observe a streak of 10 consecutive Heads (when p=0.5, say), the chance of the \(11^{th}\) trial being a Head is > p! This is, of course, incorrect, as the coin tosses are independent trials (an example of a memoryless process).
  • Misconception 2: If we run a large number of coin tosses, the number of Heads and the number of Tails become more and more equal. This is incorrect, as the LLN only guarantees that the sample proportion of Heads will converge to the true population proportion (the p parameter that we selected). In fact, the difference |Heads - Tails| diverges (see the note after this list)!
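
To make the last point concrete, here is a standard fact about the fair-coin case (p = 0.5), added for illustration: after n tosses the expected gap between the counts behaves like

\[ E\,|\text{Heads} - \text{Tails}| \approx \sqrt{\frac{2n}{\pi}} , \]

so the typical difference grows on the order of \(\sqrt{n}\) even though the proportion Heads/n converges to 1/2. Dividing a quantity of order \(\sqrt{n}\) by \(n\) drives it to 0, which is exactly what the LLN guarantees for the proportion, while the raw difference itself diverges.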




