Least Squares
Least squares is a common way to measure errors in statistical analysis. The least squares formula is best known for favoring a model with many small errors over one with a few large errors. Because each error is squared before the errors are added up, least squares counts every error, even when some errors are in one direction and some are in the other; they do not cancel each other out.
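As an illustration (not part of the original text), the sum of squared errors can be computed directly; the function name sum_squared_errors below is a hypothetical example, not a standard library routine.

```python
def sum_squared_errors(predicted, actual):
    """Return the sum of squared differences between two series.

    Each error is squared before summing, so errors in opposite
    directions both increase the total (they never cancel), and a
    single large error outweighs many small ones.
    """
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))
```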
For example, assume that you want to evaluate a method (call it method A) to predict stock prices. For six days you predict a value, then compare the predicted value to the actual value. For five days in a row the prediction is perfect, so the error is 0. On the sixth day the prediction is off by $10. The sum of the squared errors is
= 0² + 0² + 0² + 0² + 0² + 10²
= 100
Then you try a second method, method B. This time your predictions are $2 above the actual value on each of the six days. The sum of the squared errors is
= 2² + 2² + 2² + 2² + 2² + 2²
= 24
A third method, method C, predicts values that are $2 above the actual value on some days and $2 below the actual value on other days. The sum of the squared errors is
= 2² + (-2)² + 2² + (-2)² + 2² + (-2)²
= 24
According to least squares, method B is better than method A because its sum of squared errors is lower. Method B and method C are equally good according to least squares, since squaring removes the sign of each error.
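A quick sketch of the three scenarios, encoding the per-day errors directly (the alternating sign pattern for method C is one possible arrangement consistent with the description above):

```python
# Per-day prediction errors for the three methods described above.
method_a = [0, 0, 0, 0, 0, 10]    # five perfect days, one $10 miss
method_b = [2, 2, 2, 2, 2, 2]     # $2 high every day
method_c = [2, -2, 2, -2, 2, -2]  # $2 high some days, $2 low on others

for name, errors in [("A", method_a), ("B", method_b), ("C", method_c)]:
    sse = sum(e ** 2 for e in errors)
    print(f"method {name}: sum of squared errors = {sse}")

# Output:
# method A: sum of squared errors = 100
# method B: sum of squared errors = 24
# method C: sum of squared errors = 24
```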
Least squares is most often associated with linear regression. However, it can also be used to measure errors in other calculations. The formulas for volatility and standard deviation are almost identical to the equations listed above.
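For reference, the population standard deviation of N values x₁ … x_N with mean x̄ has the same sum-of-squared-differences structure, with the mean playing the role of the prediction:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}$$

Volatility is commonly defined as the standard deviation of a series of returns, which is why these formulas look so similar.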
The alerts server uses least squares and related algorithms to optimize a variety of models. These are all internal calculations, and the user does not need to know the details.