Heron
Famously, Heron's method of computing the square root of some number x goes like this:
1. Guess the root; we'll denote the guess by g
2. Compute the quotient q = x/g
3. Update your next guess to be the arithmetic mean of q and g. So g := (x/g + g)/2
4. Goto 2
This can be viewed as a special case of the Newton–Raphson method, and indeed the results become very accurate very fast.
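Here is a minimal sketch of that loop in Python (the function name and the stopping rule are my own choices):

```python
def heron_sqrt(x, guess=1.0, tolerance=1e-12):
    """Approximate sqrt(x) by repeatedly averaging the guess with x/guess."""
    g = guess
    while abs(g * g - x) > tolerance:   # stop once g*g is close enough to x
        q = x / g                       # step 2: the quotient
        g = (q + g) / 2                 # step 3: arithmetic mean of q and g
    return g

print(heron_sqrt(2))                    # 1.4142135623730951
```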
Mckay
Somewhat less famous is the anecdote about Mckay's theorem.
In this paper by Laurence Sherzer, the author tells the story of an 8th grader (in 1973) named Robert Mckay, who suggested a way to find a number that lies between two fractions. Suppose the fractions are a/b and c/d; Mckay's suggestion is to use (a+c)/(b+d).
This is a mathematics education paper, so its focus is on how the class and the teacher approached the seemingly weird idea of adding numerators and denominators, and how they explained to themselves why it seems to 'magically' work.
The short version of the explanation is: this amounts to taking a weighted average of the two fractions, where the weights are the denominators. When the denominators are equal, this is exactly the arithmetic mean. When they differ by a lot - the result is much closer to the fraction with the larger denominator.
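Here is a tiny sanity check of that explanation, using Python's exact Fraction type (the example fractions 1/3 and 1/2 are an arbitrary choice of mine):

```python
from fractions import Fraction

a, b, c, d = 1, 3, 1, 2                                   # the fractions 1/3 and 1/2

mediant = Fraction(a + c, b + d)                          # Mckay: add tops, add bottoms
weighted = (b * Fraction(a, b) + d * Fraction(c, d)) / (b + d)  # denominator-weighted mean

print(mediant, weighted, Fraction(a, b) < mediant < Fraction(c, d))   # 2/5 2/5 True
```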
McHeron?
Since taking the arithmetic mean of the guess and the quotient in Heron's method is annoying at times, why not use Mckay's very simple method instead, dubbing the result the McHeron Algorithm?
Mathematically, it will give an increasingly better guess in each round, even though it may not be the arithmetic mean.
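For the record, the fact that Mckay's number always lands strictly between the two fractions is just the usual cross-multiplication argument (assuming positive denominators and a/b < c/d, so that ad < bc):

$$\frac{a+c}{b+d}-\frac{a}{b}=\frac{bc-ad}{b(b+d)}>0,
\qquad
\frac{c}{d}-\frac{a+c}{b+d}=\frac{bc-ad}{d(b+d)}>0.$$

So the new guess in each McHeron round really does land between the previous guess and the quotient.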
How would that look?
Suppose we want to compute the square root of 2:
1. Our first guess is 3/2
2. The quotient is 2/(3/2) which is 4/3 ('just invert and multiply' as they say)
3. Our next guess needs to be between 3/2 and 4/3, so we use Mckay's method to get 7/5
4. Our next guess is therefore 7/5
5. The next quotient is 2/(7/5) which is just 2*5/7 = 10/7
6. Using Mckay's theorem - our next guess will be (7+10)/(5+7) = 17/12
7. The next quotient is 2/(17/12) which is just 2*12/17 = 24/17
8. Using Mckay's - (17+24)/(12+17) = 41/29
And indeed 41/29 = 1.41379...
This approximates the square root of 2 (1.41421...) to 2 decimal places, which is not bad.
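Here is the same computation as a small Python sketch (the function name and the output formatting are mine):

```python
def mcheron_step(x, num, den):
    """One McHeron round on the guess num/den."""
    q_num, q_den = x * den, num          # the quotient x/(num/den): 'just invert and multiply'
    return num + q_num, den + q_den      # Mckay's step: add numerators, add denominators

num, den = 3, 2                          # first guess for the square root of 2
for _ in range(3):
    num, den = mcheron_step(2, num, den)
    print(f"{num}/{den} = {num / den}")
# 7/5 = 1.4
# 17/12 = 1.4166666666666667
# 41/29 = 1.4137931034482758
```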
Cool, isn't it?
But of course, life is never quite so simple
I specifically chose a nice example where the algorithm converges relatively fast.
Had I chosen 715 (whose square root is 26.7394...) as the initial number, it would have taken me around 100 iterations of the algorithm just to get to 26.76..., and I would be dealing with denominators with 145 digits by then.
Why would the algorithm perform so badly?
Well, remember how Mckay's method introduces a bias toward the fraction with the larger denominator? In the algorithm above, the quotient's denominator is the guess's numerator, which is roughly √x times larger than the guess's denominator (for 715, about 27 times larger). So the larger the input number, the greater the bias toward the quotient will be in the 'Mckay' step of the calculation.
Heron's algorithm has quadratic convergence (which is amazing). I don't think it would be difficult to show exactly how bad the convergence of the McHeron algorithm is (I suspect that asymptotically, to get decent results, you'll have to run it for as many iterations as the magnitude of the result, so O(2^(n/2)) iterations for an input with n bits, which is terrible), but I should go shopping for the weekend.
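If you want to poke at this yourself, here is a rough measuring sketch that reuses mcheron_step from above. The starting guess is an arbitrary choice of mine (the first integer above the root), so the exact counts won't necessarily match the numbers quoted above:

```python
from math import isqrt

def rounds_to_two_decimals(x, max_rounds=100_000):
    """Count McHeron rounds until the guess matches sqrt(x) to two decimal places."""
    target = x ** 0.5
    num, den = isqrt(x) + 1, 1                 # crude starting guess: first integer above the root
    for i in range(1, max_rounds + 1):
        num, den = mcheron_step(x, num, den)
        if abs(num / den - target) < 0.005:
            return i, len(str(den))            # rounds taken, digits in the final denominator
    return None

print(rounds_to_two_decimals(2))               # a handful of rounds, tiny denominator
print(rounds_to_two_decimals(715))             # the adversarial example: much slower, huge denominator
```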
Silver Lining
For small numbers (where the denominators stay comparable) this works nicely, for the same reason: Mckay's method then gives a decent approximation of the arithmetic mean.
That's all. I'd love to hear thoughts in the comments, or links if you've seen this exact method somewhere else. I haven't (and for a good reason: it yields terrible results in most cases).
Update: a Twitter Tweak
A user on Twitter suggested bringing the two denominators to roughly the same scale by multiplying one of the fractions (numerator and denominator) by the appropriate power of 10. Since the right power of 10 can be found by counting digits, and multiplying by it just means appending zeros, this is very easy to do in practice. The consequence is that the weights in Mckay's weighted mean now differ by at most a factor of 10, yielding a significantly better convergence rate.
Taking the adversarial example of 715 from before: getting to 2 decimal places of precision now requires only 17 iterations, compared to over 100 iterations with the previous version of McHeron. Very nice!
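For completeness, here is a sketch of one way to implement the tweaked step, reusing the num/den representation from above (it pads whichever denominator has fewer digits; the starting guess of 27/1 is again my own choice, so the round count may differ a bit from the 17 quoted above):

```python
def mcheron_step_rescaled(x, num, den):
    """McHeron round with the Twitter tweak: pad the shorter denominator with zeros
    so both denominators have the same number of digits before Mckay's step."""
    q_num, q_den = x * den, num                          # the quotient x/(num/den)
    shift = len(str(q_den)) - len(str(den))
    if shift > 0:                                        # quotient's denominator is longer: pad the guess
        num, den = num * 10 ** shift, den * 10 ** shift
    elif shift < 0:                                      # guess's denominator is longer: pad the quotient
        q_num, q_den = q_num * 10 ** -shift, q_den * 10 ** -shift
    return num + q_num, den + q_den                      # Mckay on the rescaled fractions

num, den, rounds = 27, 1, 0
while abs(num / den - 715 ** 0.5) >= 0.005:              # stop at two decimal places of precision
    num, den = mcheron_step_rescaled(715, num, den)
    rounds += 1
print(rounds, f"{num}/{den} = {num / den:.4f}")
```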