Best
Now, if we restrict ourselves to both linear and unbiased estimates, how do we
define the best estimate? The estimate with the minimum variance.
First note that it is very easy to create an estimate for $\beta_1$ that has very low variance, but is not unbiased. For example, define:

\[
\hat{\theta} = 5.
\]

Then, since $\hat{\theta}$ is a constant value,

\[
\text{Var}[\hat{\theta}] = 0.
\]

However, since

\[
\text{E}[\hat{\theta}] = 5,
\]

we say that $\hat{\theta}$ is a biased estimator unless $\beta_1 = 5$, which we would not
know ahead of time. For this reason, it is a terrible estimate (unless by chance
$\beta_1 = 5$) even though it has the smallest possible variance. This is part of the
reason we restrict ourselves to unbiased estimates. What good is an estimate
if it estimates the wrong quantity?
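To see this trade-off concretely, here is a small simulation sketch (our own, not from the derivation above) comparing the constant estimator $\hat{\theta} = 5$ to the least squares slope estimator, assuming a hypothetical true slope of $\beta_1 = 2$:

```r
# simulation sketch: constant estimator vs least squares slope
# assumed setup (for illustration only): beta_0 = 3, beta_1 = 2, sigma = 1
set.seed(42)
beta_0 = 3
beta_1 = 2
sigma  = 1
x = seq(0, 10, length.out = 25)

num_sims   = 10000
theta_hat  = rep(0, num_sims)  # the constant "estimator," always 5
beta_1_hat = rep(0, num_sims)  # least squares slope estimator

for (i in 1:num_sims) {
  y = beta_0 + beta_1 * x + rnorm(length(x), mean = 0, sd = sigma)
  theta_hat[i]  = 5
  beta_1_hat[i] = coef(lm(y ~ x))[2]
}

# theta_hat has zero variance, but is badly biased: E[theta_hat] = 5, not 2
mean(theta_hat)
var(theta_hat)
# beta_1_hat has nonzero variance, but is centered at the true slope
mean(beta_1_hat)
var(beta_1_hat)
```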
So now, the natural question is: what are the variances of $\hat{\beta}_0$ and $\hat{\beta}_1$? They are

\[
\text{Var}[\hat{\beta}_0] = \sigma^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{S_{xx}} \right)
\]

\[
\text{Var}[\hat{\beta}_1] = \frac{\sigma^2}{S_{xx}},
\]

where $S_{xx} = \sum_{i=1}^{n} (x_i - \bar{x})^2$.
These quantify the variability of the estimates due to random chance during
sampling. Are these "the best"? Are these variances as small as we can possibly get? You'll just have to take our word for it that they are, because showing that this is true is beyond the scope of this course.
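As a sanity check (a sketch of our own, not a proof), we can verify the slope formula by simulation, reusing the hypothetical setup from the previous block and comparing the empirical variance of $\hat{\beta}_1$ to $\sigma^2 / S_{xx}$:

```r
# simulation sketch: empirical variance of beta_1_hat vs sigma^2 / Sxx
set.seed(42)
beta_0 = 3
beta_1 = 2
sigma  = 1
x   = seq(0, 10, length.out = 25)
Sxx = sum((x - mean(x)) ^ 2)

num_sims   = 10000
beta_1_hat = rep(0, num_sims)
for (i in 1:num_sims) {
  y = beta_0 + beta_1 * x + rnorm(length(x), mean = 0, sd = sigma)
  beta_1_hat[i] = coef(lm(y ~ x))[2]
}

var(beta_1_hat)  # empirical variance across simulations
sigma ^ 2 / Sxx  # theoretical Var[beta_1_hat]; the two should be close
```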
8.2 Sampling Distributions
Now that we have "redefined" the estimates for $\hat{\beta}_0$ and $\hat{\beta}_1$ as random variables, we can discuss their sampling distribution, which is the distribution of a statistic when it is considered a random variable.

Since both $\hat{\beta}_0$ and $\hat{\beta}_1$ are linear combinations of the $Y_i$, and each $Y_i$ is normally distributed, both $\hat{\beta}_0$ and $\hat{\beta}_1$ also follow a normal distribution.
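Continuing the same hypothetical simulation, we can overlay the simulated $\hat{\beta}_1$ values with the normal density implied by the variance formula above; the close agreement illustrates the claim (a sketch, not a derivation):

```r
# simulation sketch: the sampling distribution of beta_1_hat looks normal
set.seed(42)
beta_0 = 3
beta_1 = 2
sigma  = 1
x   = seq(0, 10, length.out = 25)
Sxx = sum((x - mean(x)) ^ 2)

num_sims   = 10000
beta_1_hat = rep(0, num_sims)
for (i in 1:num_sims) {
  y = beta_0 + beta_1 * x + rnorm(length(x), mean = 0, sd = sigma)
  beta_1_hat[i] = coef(lm(y ~ x))[2]
}

# histogram of simulated slopes with the N(beta_1, sigma^2 / Sxx) density
hist(beta_1_hat, probability = TRUE, breaks = 30,
     xlab = expression(hat(beta)[1]), main = "")
curve(dnorm(x, mean = beta_1, sd = sqrt(sigma ^ 2 / Sxx)),
      add = TRUE, lwd = 2)
```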