
General Advance-Placement (AP) Statistics Curriculum - Expectation (Mean) and Variance

Expectation (Mean)

Example

Suppose 10% of the human population carries the green-eye allele. If we choose 1,000 people at random and let the RV X be the number of green-eyed people in the sample, then the distribution of X is a binomial distribution with n = 1,000 and p = 0.1 (denoted \(X \sim B(1000, 0.1)\)). In a sample of 1,000 people, how many of them do we expect to have this allele? Clearly the count of individuals that carry the green-eye allele will vary between different samples of 1,000 subjects. How much dispersion between the samples can we expect, in terms of the number of individuals carrying this allele? These questions are answered by computing the mean and the variance (or standard deviation) of this process.

Definition

The Expected Value, Expectation or Mean, of a discrete random variable X is defined by \(E[X]=\sum_x{xP(X=x)}.\) The expectation of a continuous random variable Y is analogously defined by \(E[Y]=\int{yP(y)dy}\), where the integral is over the domain of Y and P(y) is the probability density function of Y.
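For instance, the mean of the green-eye count \(X \sim B(1000, 0.1)\) can be computed directly from this definition. The following minimal Python sketch (purely illustrative, not part of the SOCR tools) sums \(x P(X=x)\) over the binomial pmf and recovers the familiar shortcut \(E[X] = np = 100\).

import math

# Illustrative sketch: E[X] = sum_x x*P(X=x) for X ~ B(1000, 0.1).
# The binomial pmf is P(X=k) = C(n,k) * p^k * (1-p)^(n-k).
n, p = 1000, 0.1
mean = sum(k * math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(mean)   # ~100.0, matching the binomial shortcut E[X] = n*p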

Properties of Expectation

  • Expectation is a linear functional. That is, the expected value operator \(\operatorname{E}\) is linear in the sense that

\[\operatorname{E}(X + c)= \operatorname{E}(X) + c\,\]
\[\operatorname{E}(X + Y)= \operatorname{E}(X) + \operatorname{E}(Y)\,\]
\[\operatorname{E}(aX)= a \operatorname{E}(X)\,\]
\[\operatorname{E}(aX+bY)= a \operatorname{E}(X) + b\operatorname{E}(Y)\,\]
for any two random variables \(X\) and \(Y\) (which need to be defined on the same probability space) and any real numbers \(a\) and \(b\). This property follows directly from the definition of expectation. For instance,
\[E[aX+b]=\sum_x{(a\times x+b)P(X=x)} = \sum_x{(a\times x)P(X=x)} +\sum_x{bP(X=x)} = a\times \sum_x{xP(X=x)} + b\times \sum_x{P(X=x)} = a\times E[X] + b\times 1 = aE[X] + b.\]
More generally, \[E[g(X)] = \sum_x{g(x)\times P(X=x)},\] where g(x) is any real-valued function.
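The linearity property is easy to verify numerically. The Python sketch below (the pmf is made up solely for illustration) computes \(E[aX+b]\) directly from the definition and compares it with \(aE[X]+b\).

# Illustrative check of E[aX + b] = a*E[X] + b on a small, made-up discrete pmf.
pmf = {1: 0.6, 2: 0.3, 3: 0.1}        # P(X = x)
a, b = 4.0, -2.0

E_X   = sum(x * p for x, p in pmf.items())              # E[X] from the definition
E_aXb = sum((a * x + b) * p for x, p in pmf.items())    # E[g(X)] with g(x) = a*x + b
print(E_aXb, a * E_X + b)                               # both equal 4.0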

Variance

The Variance of a discrete random variable X is defined by \(VAR[X]=\sum_x{(x-E[X])^2P(X=x)}.\) The Variance of a continuous random variable Y is analogously defined by \(VAR[Y]=\int{(y-E[Y])^2P(y)dy}\), where the integral is over the domain of Y and P(y) is the probability density function of Y.
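Returning to the green-eye example, the same kind of direct computation (a Python sketch under the same illustrative assumptions as above) answers the dispersion question posed at the start: the variance of \(X \sim B(1000, 0.1)\) comes out to about 90, i.e., a standard deviation of roughly 9.5 subjects between samples.

import math

# Illustrative sketch: VAR[X] = sum_x (x - E[X])^2 * P(X=x) for X ~ B(1000, 0.1).
n, p = 1000, 0.1
pmf  = {k: math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
mean = sum(k * pk for k, pk in pmf.items())
var  = sum((k - mean) ** 2 * pk for k, pk in pmf.items())
print(mean, var)   # ~100 and ~90, the binomial shortcuts n*p and n*p*(1-p)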

Properties of Variance

  • The Variance is not quite a linear function. It has the following properties:

\[\operatorname{VAR}(X + c)= \operatorname{VAR}(X)\,\] (Data shifts do not affect the dispersion.)

\[\operatorname{VAR}(aX)= a^2 \operatorname{VAR}(X)\,\]

If X and Y are uncorrelated, \(\operatorname{VAR}(X + Y)= \operatorname{VAR}(X) + \operatorname{VAR}(Y)\,\)
If X and Y are dependent (correlated), \(\operatorname{VAR}(X + Y)= \operatorname{VAR}(X) + \operatorname{VAR}(Y) +2\times COV(X,Y)\,\)
An alternative formula for calculating the variance is \(VAR(X) = E(X^2) - (E(X))^2\) (see the sketch after this list).
  • The Covariance between two real-valued random variables X and Y, with corresponding expected values \(\scriptstyle E(X)\,=\,\mu\) and \(\scriptstyle E(Y)\,=\,\nu\) is defined as

\[\operatorname{COV}(X, Y) = \operatorname{E}((X - \mu) (Y - \nu))=E(X\times Y) -\mu\times\nu \,\]
\[COV(X,Y)=\begin{cases}\sum_{x,y}{(x-\mu)(y-\nu)P(X=x;Y=y)}, & X,Y = \texttt{discrete},\\ \int_{x,y}{(x-\mu)(y-\nu)P_{X,Y}(x,y)\,dx\,dy}, & X,Y = \texttt{continuous}.\end{cases}\]

  • In general, if we have {\(X_1, X_2, X_3, \cdots , X_n\)} correlated variables, then the variance of their sum is the sum of their covariances:

\[VAR\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{COV}(X_i, X_j).\]

  • If the correlation between two identically distributed variables (not necessarily independent) is defined by:

\[\rho_{X,Y}=Corr(X,Y)={Cov(X,Y) \over \sigma_X \sigma_Y} ={E[(X-\mu_X)(Y-\mu_Y)] \over \sigma_X\sigma_Y},\]

then the variance of their sum is:

\[VAR\left(X+Y\right) = Var(X)+Var(Y)+2\times\operatorname{COV}(X, Y) = Var(X)+Var(Y)+2\times\rho_{X, Y}\times\sigma_X\times\sigma_Y.\]

  • The SOCR Bivariate Normal Distribution Experiment demonstrates the synergy between the theoretical (model) and empirical (sample-driven estimates) in computing variances of sums of random variables (in the case of 2 variables). Change the correlation coefficient between the two variables, Corr(X,Y), and observe how this change affects the additivity property of the variance. Is Var(X+Y) ~ Var(X) + Var(Y), when \(Corr(X,Y)\not= 0\)?
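These variance and covariance properties can all be checked on a small example. The Python sketch below (the joint pmf is made up purely for illustration, with two dependent and identically distributed Bernoulli variables) verifies the shift/scale rules, the alternative variance formula, \(COV(X,Y)=E(XY)-\mu\nu\), and \(VAR(X+Y)=VAR(X)+VAR(Y)+2\rho_{X,Y}\sigma_X\sigma_Y\).

import math

# Made-up joint pmf P(X=x, Y=y) of two dependent, identically distributed variables.
joint = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.5}

def E(g):
    """Expectation of g(X, Y) under the joint pmf."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

mu, nu = E(lambda x, y: x), E(lambda x, y: y)
var_x  = E(lambda x, y: (x - mu) ** 2)
var_y  = E(lambda x, y: (y - nu) ** 2)

# Alternative variance formula: VAR(X) = E(X^2) - (E(X))^2.
print(var_x, E(lambda x, y: x ** 2) - mu ** 2)

# Shift and scale: VAR(X + c) = VAR(X), VAR(aX) = a^2 VAR(X).
a, c = 3.0, 7.0
print(E(lambda x, y: (x + c - (mu + c)) ** 2), var_x)
print(E(lambda x, y: (a * x - a * mu) ** 2), a ** 2 * var_x)

# Covariance, correlation, and the variance of the sum.
cov = E(lambda x, y: (x - mu) * (y - nu))
print(cov, E(lambda x, y: x * y) - mu * nu)          # COV = E(XY) - mu*nu
rho = cov / math.sqrt(var_x * var_y)
var_sum = E(lambda x, y: (x + y - mu - nu) ** 2)     # VAR(X+Y) computed directly
print(var_sum,
      var_x + var_y + 2 * cov,
      var_x + var_y + 2 * rho * math.sqrt(var_x) * math.sqrt(var_y))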

Standard Deviation

The Standard Deviation of a discrete random variable X is defined by \(SD[X]=\sqrt{\sum_x{(x-E[X])^2P(X=x)}} = \sqrt{VAR[X]}.\) The standard deviation of a continuous random variable Y is analogously defined by \(SD[Y]=\sqrt{\int{(y-E[Y])^2P(y)dy}} = \sqrt{VAR[Y]}\), where the integral is over the domain of Y and P(y) is the probability density function of Y.

Higher Moments

Raw Moments

The kth Raw Moment for a discrete random variable X is defined by \(E[X^k]=\sum_x{x^kP(X=x)}.\) The kth Raw Moment for a continuous random variable Y is analogously defined by \(E[Y^k]=\int{y^kP(y)dy},\) where the integral is over the domain of Y and P(y) is the probability density function of Y.

Centralized Moments

The kth Centralized Moment for a discrete random variable X is defined by \(E_c[X^k]=\sum_x{(x-\mu)^kP(X=x)},\) where \(\mu\) is the expected value of X. The kth Centralized Moment for a continuous random variable Y is analogously defined by \(E_c[Y^k]=\int{(y-\mu)^kP(y)dy},\) where \(\mu\) is the expected value of Y, the integral is over the domain of Y and P(y) is the probability density function of Y.

Standardized Moments

The kth Standardized Moment for a discrete random variable X is defined by

\[E_s[X^k]={\sum_x{(x-\mu)^kP(X=x)} \over {(\sum_{x} (x-\mu)^2P(X=x))^{k/2}}}.\]

The kth Standardized Moment for a continuous random variable Y is analogously defined by

\[E_s[Y^k]={\int{(y-\mu)^kP(y)dy} \over \sigma^k},\] where the integral is over the domain of Y, P(y) is the probability density function of Y, and \(\sigma\) is the standard deviation of Y.
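The raw, centralized and standardized moments are all computed with the same weighted-sum pattern. A short Python sketch (the pmf is again made up for illustration) shows that the 1st raw moment is the mean, the 2nd centralized moment is the variance, and the 3rd/4th standardized moments are the skewness and kurtosis discussed next.

# Illustrative moment calculations for a small, made-up discrete pmf.
pmf = {0: 0.2, 1: 0.5, 4: 0.3}

def raw(k):
    return sum(x ** k * p for x, p in pmf.items())          # k-th raw moment E[X^k]

mu = raw(1)                                                 # the mean is the 1st raw moment

def central(k):
    return sum((x - mu) ** k * p for x, p in pmf.items())   # k-th centralized moment

def standardized(k):
    return central(k) / central(2) ** (k / 2)               # k-th standardized moment

print(mu, central(2))                    # mean and variance
print(standardized(3), standardized(4))  # skewness and kurtosis of this distribution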

Notable Moments

In addition to the mean and variance, the Skewness and the Kurtosis are two notable higher moments (the 3rd and 4th standardized moments, respectively).

Sample Moments

Sample Moments are computed analogously to their theoretical counterparts, using a sample of observations {\(x_1, x_2, x_3, \cdots, x_N\)}. For example, the sample skewness and kurtosis (the 3rd and 4th sample standardized moments) are defined by \[Skewness(X) ={\sum_{i=1}^N{(x_i-\bar{x})^3} \over (N-1)s^3}, \] where \(\bar{x}\) and s are the sample mean and sample standard deviation, respectively.

\[Kurtosis(X) ={\sum_{i=1}^N{(x_i-\bar{x})^4} \over (N-1)s^4}, \] where \(\bar{x}\) and s are the sample mean and sample standard deviation, respectively.
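A direct transcription of these two sample formulas in Python is shown below; the data values are made up solely for illustration.

import math

# Sample skewness and kurtosis computed exactly as in the formulas above.
data = [2.1, 3.4, 1.8, 5.0, 2.9, 3.7, 4.4, 2.2]             # made-up observations
N    = len(data)
xbar = sum(data) / N
s    = math.sqrt(sum((x - xbar) ** 2 for x in data) / (N - 1))   # sample standard deviation

skewness = sum((x - xbar) ** 3 for x in data) / ((N - 1) * s ** 3)
kurtosis = sum((x - xbar) ** 4 for x in data) / ((N - 1) * s ** 4)
print(skewness, kurtosis)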

Why are the higher moments important?

Moments can completely describe the (nice) distributions!

  • There are distributions for which knowing all moments does not determine the distribution (e.g., Log-Normal).
    • An example of two different distributions with the same moments involves the Log-Normal distribution, with density \(f(x) = \frac{1}{x\sqrt{2\pi}} e^{-\frac{(\log x)^2}{2}}\), and a modified/perturbed density given by \(f_a(x) = f(x) (1 + a\times \sin (2\pi \times \log x))\). Both density functions yield the same moments: the \(n^{th}\) moment of each of these is \(e^{\frac{n^2}{2}}\) (see the numerical sketch after this list). See Rick Durrett, Probability: Theory and Examples, 3rd edition, pp. 106-107, and C. C. Heyde (1963) On a property of the lognormal distribution, J. Royal. Stat. Soc. B. 29, 392-393.
  • For distributions with finite range, the moments always uniquely determine the distributions.
  • For infinite-range distributions, the moments uniquely determine the distribution if the corresponding series below diverges (where \( \mu_{j} \) denotes the j-th moment):

\[\sum_{j=1}^{\infty}{\frac{1}{( \mu_{2j}) ^{\frac{1}{2j}}}},\]

for distributions supported on \(-\infty<x<\infty\), and

\[\sum_{j=1}^{\infty}{\frac{1}{( \mu_{j}) ^{\frac{1}{2j}}}},\]

for distributions supported on \(0<x<\infty\).
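The log-normal example above can be checked numerically. The Python sketch below (an illustration using a crude trapezoidal rule, not any SOCR tool) substitutes \(u=\log x\), so that the \(n\)-th moment becomes \(\int e^{nu}\varphi(u)\,(1+a\sin(2\pi u))\,du\) with \(\varphi\) the standard normal density, and shows that the perturbed density has the same first few moments \(e^{n^2/2}\) as the log-normal itself.

import math

def nth_moment(n, a, lo=-12.0, hi=12.0, steps=60_000):
    """n-th raw moment of f_a, via the substitution u = log(x) and the trapezoidal rule."""
    du, total = (hi - lo) / steps, 0.0
    for i in range(steps + 1):
        u   = lo + i * du
        phi = math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)   # standard normal density
        val = math.exp(n * u) * phi * (1 + a * math.sin(2 * math.pi * u))
        total += (0.5 if i in (0, steps) else 1.0) * val
    return total * du

for n in range(1, 4):
    print(n,
          round(nth_moment(n, a=0.0), 3),      # log-normal moment
          round(nth_moment(n, a=0.5), 3),      # perturbed-density moment
          round(math.exp(n * n / 2), 3))       # theoretical value e^{n^2/2}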

Examples

A Game of Chance

  • Suppose we are offered to play a game of chance under these conditions: it costs us $1.50 to play, and the awarded prizes are {$1, $2, $3}. Assume the probabilities of winning each prize are {0.6, 0.3, 0.1}, respectively. Should we play the game? What are our chances of winning/losing? Let X = the awarded prize. Then X = {1, 2, 3}.
x        | 1   | 2   | 3
P(X=x)   | 0.6 | 0.3 | 0.1
x*P(X=x) | 0.6 | 0.6 | 0.3

Then the mean of this game (i.e., expected return or expectation) is computed as the weighted (by the outcome probabilities) average of all the outcome prizes: \(E[X] = x_1P(X=x_1) + x_2P(X=x_2)+x_3P(X=x_3) = 1\times 0.6 + 2\times 0.3 + 3\times 0.1 = 1.5\). In other words, the expected return of this game is $1.50, which equals the entry fee, and hence the game is fair - neither the player nor the house has an advantage in this game (in the long run). Of course, each streak of n games will produce different outcomes and may give a small advantage to one side. However, in the long run, no one will make money.

The variance for this game is computed by \(VAR[X] = (x_1-1.5)^2P(X=x_1) + (x_2-1.5)^2P(X=x_2)+(x_3-1.5)^2P(X=x_3) = 0.25\times 0.6 + 0.25\times 0.3 + 2.25\times 0.1 = 0.45\). Thus, the standard deviation is \(SD[X] = \sqrt{VAR[X]}\approx 0.67\).
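These worked numbers can be reproduced with a few lines of Python (a sketch that simply mirrors the table above):

import math

# Prize distribution of the original game: $1, $2, $3 with probabilities 0.6, 0.3, 0.1.
prizes = {1: 0.6, 2: 0.3, 3: 0.1}

E_X   = sum(x * p for x, p in prizes.items())               # expected return
VAR_X = sum((x - E_X) ** 2 * p for x, p in prizes.items())  # variance
SD_X  = math.sqrt(VAR_X)                                    # standard deviation
print(E_X, VAR_X, round(SD_X, 2))   # 1.5, 0.45, 0.67 -- the expected return equals the $1.50 fee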

  • Suppose now we alter the rules for the game of chance and the new pay-off is as follows:
x        | 0   | 1.5  | 3
P(X=x)   | 0.6 | 0.3  | 0.1
x*P(X=x) | 0   | 0.45 | 0.3
  • What is the new expected return of the game? Remember, the old expectation was equal to the entrance fee of $1.50, and the game was fair!
  • The change in the pay-off of the game may be represented by this linear transformation \(Y = {3(X-1)\over 2}\). Therefore, by our rules for computing expectations of linear functions, \(E(Y)={3E(X)\over 2} - {3\over 2}={3\over 4}=0.75\), and the game became clearly biased. Note how easy it is to compute E[Y] using this formula. At the same time, we could have computed the expectation of Y from first principles (adding the values of the last row in the revised table above); see the sketch after this list.
  • You can play similar games under different conditions for the probability distribution of the prizes using the SOCR Binomial Coin or Die experiments.
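The sketch referenced above computes the new expected return both from first principles (the new pay-off table) and through the linear-transformation rule \(E[aX+b]=aE[X]+b\); the two agree, and both fall below the $1.50 entry fee.

# New pay-off: Y = 3*(X - 1)/2, i.e. prizes of $0, $1.50 and $3 with the same probabilities.
new_game = {0: 0.6, 1.5: 0.3, 3: 0.1}

E_Y_direct = sum(y * p for y, p in new_game.items())   # first principles: 0 + 0.45 + 0.3
E_X        = 1.5                                       # expectation of the original game
E_Y_linear = 3 * E_X / 2 - 3 / 2                       # linearity: E[3(X-1)/2] = 3E[X]/2 - 3/2
print(E_Y_direct, E_Y_linear)                          # both 0.75 < 1.50, so the game is biased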

Children Gender Expectation Example

Suppose we conduct an (unethical!) experiment involving young couples planning to have children. Suppose the couples are interested in the number of girls they will have, and each couple agrees to have children until one of the following two stopping criteria is met: (1) the couple has at least one child of each gender, or (2) the couple has 3 children. Let's denote the RV X = {number of girls}. The distribution of X is given by:

Observable Outcomes | {BBB} | {BG; GB; BBG} | {GGB} | {GGG}
x                   | 0     | 1             | 2     | 3
P(X=x)              | 1/8   | 5/8           | 1/8   | 1/8
x*P(X=x)            | 0     | 5/8           | 2/8   | 3/8

Therefore, the expected number of girls that each couple participating in this ("odd") experiment will have is given by \(E[X] = 0 + 5/8 + 2/8 + 3/8 = 1.25\). What is the interpretation of this expectation?

Can you calculate the variance and standard deviation for this random variable?
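One possible way to check your answer (a Python sketch using the pmf from the table above):

import math

# x = number of girls, with the probabilities from the table above.
pmf = {0: 1/8, 1: 5/8, 2: 1/8, 3: 1/8}

E_X   = sum(x * p for x, p in pmf.items())                  # 1.25 girls on average
VAR_X = sum((x - E_X) ** 2 * p for x, p in pmf.items())
print(E_X, VAR_X, math.sqrt(VAR_X))                         # mean, variance, standard deviation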

Problems


References


"-----

