22.3. Examples
This section is a workout in finding expectation and variance by conditioning. As before, if you are trying to find a probability, expectation, or variance, and you think, “If only I knew the value of this other random variable, I’d have the answer,” then that’s a sign that you should consider conditioning on that other random variable.
22.3.1. Mixture of Two Distributions
Let \(X\) have mean \(\mu_X\) and SD \(\sigma_X\). Let \(Y\) have mean \(\mu_Y\) and SD \(\sigma_Y\). Now let \(p\) be a number between 0 and 1, let \(q = 1 - p\), and define the random variable \(M\) as follows.

\[
M =
\begin{cases}
X & \text{with probability } p \\
Y & \text{with probability } q
\end{cases}
\]
The distribution of \(M\) is called a mixture of the distributions of \(X\) and \(Y\).
One way to express the definition of \(M\) compactly is to let \(I_H\) be the indicator of heads in one toss of a \(p\)-coin; then

\[
M = I_H X + (1 - I_H) Y
\]
To find the expectation of \(M\) we can use the expression above, but here we will condition on \(I_H\) because we can continue with that method to find \(Var(M)\).
The distribution table of the random variable \(E(M \mid I_H)\) is
| Value | \(\mu_X\) | \(\mu_Y\) |
| --- | --- | --- |
| Probability | \(p\) | \(q\) |
The distribution table of the random variable \(Var(M \mid I_H)\) is
| Value | \(\sigma_X^2\) | \(\sigma_Y^2\) |
| --- | --- | --- |
| Probability | \(p\) | \(q\) |
So

\[
E(M) = E\big(E(M \mid I_H)\big) = \mu_X p + \mu_Y q
\]
and, by the variance decomposition formula,

\[
Var(M) = E\big(Var(M \mid I_H)\big) + Var\big(E(M \mid I_H)\big)
= \sigma_X^2 p + \sigma_Y^2 q + \big(\mu_X^2 p + \mu_Y^2 q - (\mu_X p + \mu_Y q)^2\big)
\]
This is true no matter what the distributions of \(X\) and \(Y\) are.
Notice also that the answer for the variance can be written as

\[
Var(M) = (\sigma_X^2 + \mu_X^2) p + (\sigma_Y^2 + \mu_Y^2) q - (\mu_X p + \mu_Y q)^2
\]
That’s what you would have got had you first found \(E(M^2)\) by conditioning on \(I_H\).
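As a numerical check, here is a quick simulation sketch of the mixture result. The component distributions (exponential and uniform) and the value of \(p\) are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1_000_000, 0.3
q = 1 - p

# X ~ exponential with mean 1:  mu_X = 1, sigma_X^2 = 1
# Y ~ uniform on (0, 6):        mu_Y = 3, sigma_Y^2 = 3
x = rng.exponential(1.0, n)
y = rng.uniform(0, 6, n)

heads = rng.random(n) < p          # indicator I_H of heads in a p-coin toss
m = np.where(heads, x, y)          # the mixture M = I_H X + (1 - I_H) Y

mu_x, var_x = 1.0, 1.0
mu_y, var_y = 3.0, 3.0
mean_theory = mu_x * p + mu_y * q
var_theory = (var_x * p + var_y * q
              + (mu_x**2 * p + mu_y**2 * q - mean_theory**2))

print(m.mean(), mean_theory)   # both ≈ 2.4
print(m.var(), var_theory)     # both ≈ 3.24
```

With a million replications, the empirical mean and variance of \(M\) agree with the formulas to two decimal places.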
22.3.2. Variance of the Geometric Distribution
We have managed to come quite far into the course without deriving the variance of the geometric distribution. Let’s find it now by using the results about mixtures derived above.
Toss a coin that lands heads with probability \(p\) and stop when you see a head. The number of tosses \(X\) has the geometric \((p)\) distribution on \(\{ 1, 2, \ldots \}\). Let \(E(X) = \mu\) and \(Var(X) = \sigma^2\). We will use conditioning to confirm that \(E(X) = 1/p\) and also to find \(Var(X)\).
Now condition on the first toss. If the toss is a head, which happens with probability \(p\), then \(X = 1\). If the toss is a tail, the tosses start over, so

\[
X \stackrel{d}{=}
\begin{cases}
1 & \text{with probability } p \\
1 + X^* & \text{with probability } q
\end{cases}
\]

where \(X^*\) is an independent copy of \(X\) and \(\stackrel{d}{=}\) means equality in distribution. Thus the distribution of \(X\) is a mixture, and by the mean formula of the previous example,

\[
\mu = 1 \cdot p + (1 + \mu) q = 1 + \mu q
\]
So \(\mu(1 - q) = 1\), that is, \(\mu = 1/p\), as we have known for some time.
By the variance formula of the previous example, since the constant 1 has variance 0 and \(1 + X^*\) has mean \(1 + \mu\) and variance \(\sigma^2\),

\[
\sigma^2 = 0 \cdot p + \sigma^2 q + \big(1^2 \cdot p + (1 + \mu)^2 q - \mu^2\big)
\]

The subtracted term is \(\mu^2\) because \(1 \cdot p + (1 + \mu)q = E(X) = \mu\).
So, collecting the \(\sigma^2\) terms and substituting \(\mu = 1/p\),

\[
\sigma^2 p = p + \Big(1 + \frac{1}{p}\Big)^2 q - \frac{1}{p^2} = \frac{q}{p}
\]
and so \(Var(X) = \sigma^2 = q/p^2\).
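A simulation confirms the result; the value of \(p\) below is an arbitrary choice for illustration.

```python
import numpy as np

# Check E(X) = 1/p and Var(X) = q/p^2 for the geometric (p)
# distribution on {1, 2, ...}.
rng = np.random.default_rng(0)
p = 0.25
q = 1 - p

# NumPy's geometric counts the number of trials up to and
# including the first success, matching the support {1, 2, ...}.
x = rng.geometric(p, 1_000_000)

print(x.mean(), 1 / p)       # both ≈ 4
print(x.var(), q / p**2)     # both ≈ 12
```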
22.3.3. Normal with a Normal Mean
Let \(M\) be normal \((\mu, \sigma_M^2)\), and given \(M = m\), let \(X\) be normal \((m, \sigma_X^2)\).
Then

\[
E(X \mid M) = M \qquad \text{and} \qquad Var(X \mid M) = \sigma_X^2
\]
Notice that the conditional variance is a constant; it is the same no matter what the value of \(M\) turns out to be.
So \(E(X) = E(M) = \mu\) and

\[
Var(X) = E\big(Var(X \mid M)\big) + Var\big(E(X \mid M)\big) = \sigma_X^2 + \sigma_M^2
\]
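The two-stage description translates directly into a simulation: draw \(M\), then draw \(X\) centered at the observed value of \(M\). The parameter values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
mu, sigma_m, sigma_x = 10.0, 2.0, 3.0

m = rng.normal(mu, sigma_m, n)   # M ~ normal(mu, sigma_M^2)
x = rng.normal(m, sigma_x)       # given M = m, X ~ normal(m, sigma_X^2)

print(x.mean(), mu)                      # both ≈ 10
print(x.var(), sigma_x**2 + sigma_m**2)  # both ≈ 13 = 9 + 4
```

Passing the array `m` as the `loc` argument makes each draw of `x` conditionally centered at its own value of `m`, which is exactly the two-stage model.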
22.3.4. Random Sum
Let \(N\) be a random variable with values \(0, 1, 2, \ldots\), mean \(\mu_N\), and SD \(\sigma_N\). Let \(X_1, X_2, \ldots\) be i.i.d. with mean \(\mu_X\) and SD \(\sigma_X\), independent of \(N\).
Define the random sum \(S_N\) as

\[
S_N = X_1 + X_2 + \cdots + X_N
\]

with \(S_N = 0\) if \(N = 0\).
Then as we have seen before, \(E(S_N \mid N = n) = n\mu_X\) for all \(n\) (including \(n = 0\)) and so

\[
E(S_N \mid N) = N\mu_X
\]
Also, because the \(X_i\)'s are i.i.d. and independent of \(N\), \(Var(S_N \mid N = n) = n\sigma_X^2\), so

\[
Var(S_N \mid N) = N\sigma_X^2
\]
So

\[
E(S_N) = E\big(E(S_N \mid N)\big) = E(N\mu_X) = \mu_X \mu_N
\]
This is consistent with intuition: you expect to be adding \(\mu_N\) i.i.d. random variables, each with mean \(\mu_X\). For the variance, intuition needs some guidance, which is provided by our variance decomposition formula:

\[
Var(S_N) = E\big(Var(S_N \mid N)\big) + Var\big(E(S_N \mid N)\big) = \sigma_X^2 \mu_N + \mu_X^2 \sigma_N^2
\]
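A simulation sketch of the random sum: the choices of a Poisson \(N\) and exponential \(X_i\)'s are arbitrary, made only to give concrete values of \(\mu_N\), \(\sigma_N\), \(\mu_X\), and \(\sigma_X\) to plug into the mean and variance formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 200_000
lam = 4.0      # N ~ Poisson(lam), so mu_N = sigma_N^2 = lam
mu_x = 2.0     # X_i ~ exponential with mean 2, so sigma_X^2 = mu_x^2 = 4

n_vals = rng.poisson(lam, reps)
# For each replication, draw N i.i.d. X's and sum them;
# an empty draw (N = 0) sums to 0, as required.
s = np.array([rng.exponential(mu_x, k).sum() for k in n_vals])

mu_n, var_n = lam, lam
var_x = mu_x**2
print(s.mean(), mu_x * mu_n)                    # both ≈ 8
print(s.var(), var_x * mu_n + mu_x**2 * var_n)  # both ≈ 32
```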