List,
Alex's post got me thinking about a question I've had
for some time.
I'm not a math whiz, so someone please correct me if
I'm wrong.
My understanding of the difference in calculating the
population vs sample standard deviation is that
subtracting one from 'n' (n-1), in the case of sample
stddev, essentially builds a correction factor into the
equation to compensate for the fact that you are basing
your analysis on a small "sample" of a larger
"population" (measuring deviations from the sample mean
rather than the true population mean tends to understate
the spread, so the smaller denominator pushes the
estimate back up).
Basically, subtracting one from 'n' decreases the
denominator & increases the final stddev value.
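To make that concrete, here's a quick sketch in Python
(the function name and data are mine, purely for
illustration) computing both versions on the same numbers:

import math

def stddev(values, denominator):
    # root of the summed squared deviations from the mean,
    # divided by whatever denominator the caller supplies
    mean = sum(values) / len(values)
    ss = sum((x - mean) ** 2 for x in values)
    return math.sqrt(ss / denominator)

data = [10.0, 12.0, 9.0, 11.0, 13.0]
n = len(data)
print(stddev(data, n))      # population: sqrt(10/5) ~ 1.414
print(stddev(data, n - 1))  # sample:     sqrt(10/4) ~ 1.581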
As Alex said though, "Fortunately these two formulas
are approximately equal when n is large enough (like
>20)".
Based on Alex's observation, and if my understanding
of the reasoning behind subtracting one from 'n' is
correct, wouldn't it make more sense to subtract a
percentage of n from n? Ex. (n - (n * .1)) or simply
(n * .9). This would prevent sample stddev from
converging with the more "precise" population stddev
at larger 'n' values.
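Plugging my suggestion into that same ratio shows what I
mean: with a denominator of n * .9 the inflation over the
population figure is a constant sqrt(1/.9), about 5.4%,
no matter how large n gets.

import math

for n in (5, 20, 100, 1000):
    # ratio of the proposed stddev to the population stddev
    print(n, round(math.sqrt(n / (n * 0.9)), 4))

# always ~1.0541, independent of n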
I know all this amounts to splitting hairs, but I'd be
interested to hear what one of you "math guys" has to
say about this.
Thanks,
Lance