Prior to this commit, the TSC and local APIC frequencies were calibrated
at boot time by measuring the clocks before and after a one-second
sleep.  This was simple and effective, but had the disadvantage of
*requiring a one-second sleep*.

Rather than making two clock measurements (before and after sleeping),
we now perform many measurements; and rather than simply subtracting
the starting count from the ending count, we calculate a best-fit
regression between the target clock and the reference clock (for which
the current best available timecounter is used).  While we do this, we
keep track of an estimate of the uncertainty in the regression slope
(i.e., the ratio of clock frequencies), and stop measuring once we
believe the uncertainty is less than 1 PPM.

In order to avoid the risk of aliasing resulting from the
data-gathering loop synchronizing with (a multiple of) the frequency of
the reference clock, we add some additional spinning depending upon the
iteration number.
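One way to picture this (a hypothetical sketch, not the committed code)
is a per-iteration spin whose length is a varying function of the loop
counter, so that successive samples land at shifting phase offsets
relative to the reference clock's period:

```c
#include <stdint.h>

/* Sink for the spin loop, volatile so the compiler cannot elide it. */
static volatile uint64_t spin_sink;

/*
 * Perform an amount of busy-work that varies with the iteration
 * number; 101 is an arbitrary value chosen here only because it is
 * coprime to small periods the loop might otherwise lock onto.
 */
static void
antialias_spin(int iter)
{
	int i;

	for (i = 0; i < (iter % 101); i++)
		spin_sink += (uint64_t)i;
}
```

A sampling loop would call `antialias_spin(i)` between measurements,
breaking any fixed phase relationship with the reference clock.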

For numerical stability and simplicity of implementation, we make use of
floating-point arithmetic for the statistical calculations.

This reduces the FreeBSD kernel boot time on x86 systems by between
1900 and 2000 ms.