So pretty much the variance and standard deviation carry the same information (the variance is just the square of the standard deviation), except that variance is used to make the formulas look nicer and
the working come out cleaner.
Yep.
Rui, one more thing: for the binomial distribution, the expected value is E(X) = np and the variance is Var(X) = np(1-p). Do we learn how to derive these formulas in school? I haven't really seen anything in the books, and if not, can you please show me how to derive them?
Thanks
These proofs are actually heavily involved. I had trouble doing them in first year; it wasn't until second year of university that I started finding them easy. Here's the proof for the expected value, at least, just for viewing pleasure.
(Note: this means you definitely won't need it at the high school level.)
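Before the algebra, here's a quick empirical sanity check of \(E(X) = np\). This is just a sketch: the values of \(n\), \(p\), and the trial count are arbitrary choices, not part of the proof.

```python
import random

random.seed(0)
n, p = 10, 0.3
trials = 100_000

# One Binomial(n, p) draw = number of successes in n Bernoulli(p) trials.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
mean = sum(samples) / trials

print(mean)  # should land very close to n*p = 3.0
```

The sample mean hovers around \(np\), which is exactly what the derivation below establishes.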
\begin{align*}
E(X) &= \sum_x x P(X=x)\\
&= \sum_{x=0}^n x \binom{n}{x}p^x (1-p)^{n-x}\\
&= \sum_{x=0}^n x \frac{n!}{x!(n-x)!}p^x (1-p)^{n-x}.
\end{align*}
Now, we first evaluate the sum at \(x=0\), pulling that term
outside of the sum.
\begin{align*}
E(X) &= 0 \binom{n}{0}p^0 (1-p)^{n-0} + \sum_{x=1}^n x\binom{n}{x}p^x (1-p)^{n-x}\\
&= \sum_{x=1}^n x\binom{n}{x}p^x (1-p)^{n-x}
\end{align*}
because that first term actually evaluates to 0. Now we note that the terms in the sum are indexed by \(x=1, 2, 3, \dots, n\). So every term in the sum now has \(x \neq 0\), and hence we can cancel any \(x\)'s in the numerator and denominator.
In addition to this, we need the trick \(N! = N(N-1)!\). (You can convince yourself that this formula is true by simply expanding both of the factorials.) Here, I first use \(x! = x(x-1)!\). (Note: this assumes \(x\) is a positive integer, not an arbitrary real number.)
\begin{align*}
E(X)& = \sum_{x=1}^n x\binom{n}{x}p^x (1-p)^{n-x}\\
&= \sum_{x=1}^n x \frac{n!}{x!(n-x)!} p^x (1-p)^{n-x}\\
&= \sum_{x=1}^n \frac{n!}{(x-1)!(n-x)!} p^x (1-p)^{n-x}
\end{align*}
Observe that we had to tear apart the binomial coefficient (i.e. convert it to its factorial notation) to cancel out the \(x\) in front. We now have to rebuild a new binomial coefficient using the remaining factorials. It turns out that we now need to manipulate it to obtain \( \binom{n-1}{x-1} \).
This requires that we use \(n! = n(n-1)!\) in the numerator. Once we do that, observe that the summation index is \(x\), not \(n\), so the factor of \(n\) is constant with respect to the sum and can be moved out in front.
\begin{align*}
E(X) &= \sum_{x=1}^n \frac{n(n-1)!}{(x-1)!(n-x)!}p^x (1-p)^{n-x}\\
&= n \sum_{x=1}^n \frac{(n-1)!}{(x-1)!(n-x)!} p^x (1-p)^{n-x}\\
&= n \sum_{x=1}^n \frac{(n-1)!}{(x-1)![(n-1)-(x-1)]!} p^x (1-p)^{n-x}\\
&= n \sum_{x=1}^n \binom{n-1}{x-1}p^x (1-p)^{n-x}
\end{align*}
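As an aside, the coefficient identity we just derived, \(x\binom{n}{x} = n\binom{n-1}{x-1}\), is easy to spot-check numerically. A small sketch using Python's `math.comb` (the choice \(n = 12\) is arbitrary):

```python
from math import comb

# Check x*C(n, x) == n*C(n-1, x-1) for every valid x at a fixed n.
n = 12
for x in range(1, n + 1):
    assert x * comb(n, x) == n * comb(n - 1, x - 1)
```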
Now I'll decompose \(p^x\) into \(p\times p^{x-1}\), which allows me to pull out a factor of \(p\) in front as well.
\begin{align*}
E(X) &= n \sum_{x=1}^n\binom{n-1}{x-1} p\, p^{x-1}(1-p)^{n-x}\\
&= np \sum_{x=1}^n \binom{n-1}{x-1} p^{x-1}(1-p)^{n-x}
\end{align*}
__________________________________________________________________
Observe that our end goal \(np\) has now shown up. It remains to prove that the ugly sum actually equals 1. To do this, I will now tear the sum apart by substituting in every value of \(x\) from \(1\) to \(n\). I'll focus only on the sum here.
\begin{align*}
&\quad \sum_{x=1}^n \binom{n-1}{x-1} p^{x-1}(1-p)^{n-x}\\
&= \binom{n-1}{1-1} p^{1-1} (1-p)^{n-1} + \binom{n-1}{2-1} p^{2-1}(1-p)^{n-2} + \binom{n-1}{3-1} p^{3-1}(1-p)^{n-3} + \cdots + \binom{n-1}{n-1} p^{n-1} (1-p)^{n-n}\\
&= \binom{n-1}{0} (1-p)^{n-1} + \binom{n-1}{1} p(1-p)^{(n-1)-1} + \binom{n-1}{2} p^2 (1-p)^{(n-1)-2} + \cdots + \binom{n-1}{n-1} p^{n-1}
\end{align*}
It may or may not be obvious here, but this now looks disturbingly like the statement of the binomial theorem!
Recall: Statement of the binomial theorem
\[ (x+y)^n = \binom{n}{0} x^n + \binom{n}{1} x^{n-1}y + \binom{n}{2} x^{n-2} y^2 + \cdots + \binom{n}{n}y^n. \]
Here, the power is \(n-1\) instead of \(n\), with \(p\) playing the role of \(x\) and \(1-p\) playing the role of \(y\).
So by the binomial theorem, that expression simplifies to
\[ \left[ p + (1-p)\right]^{n-1}. \]
Hopefully it is clear that this expression indeed evaluates to 1. Thus, substituting into what we had earlier,
\[ E(X) = np \times 1 = np \]
as required.
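The whole chain can also be verified exactly (no floating-point noise) by computing \(\sum_x x\,P(X=x)\) with rational arithmetic. A sketch over a few arbitrary \((n, p)\) pairs:

```python
from fractions import Fraction
from math import comb

# Exact check that sum_x x*P(X=x) equals n*p, using Fractions.
for n in (1, 5, 12):
    for p in (Fraction(1, 4), Fraction(2, 3)):
        mean = sum(x * comb(n, x) * p**x * (1 - p)**(n - x)
                   for x in range(n + 1))
        assert mean == n * p
```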
For the variance, one uses a similar strategy, but starts by computing \( E(X(X-1))\) instead. Since \( E(X(X-1)) = E(X^2 - X) = E(X^2) - E(X)\), this allows us to find the second moment \(E(X^2)\). And as usual, conclude with \(\operatorname{Var}(X) = E(X^2) - [E(X)]^2\).
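That variance route can be checked numerically in the same exact-arithmetic style. A sketch (with an arbitrary choice of \(n = 8\), \(p = 1/3\)) following the \(E(X(X-1))\) strategy:

```python
from fractions import Fraction
from math import comb

n, p = 8, Fraction(1, 3)
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

e_x = sum(x * q for x, q in enumerate(pmf))               # E(X) = np
e_x_xm1 = sum(x * (x - 1) * q for x, q in enumerate(pmf))  # E(X(X-1))
e_x2 = e_x_xm1 + e_x                                       # E(X^2)
var = e_x2 - e_x**2                                        # Var(X)

assert var == n * p * (1 - p)
```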