Talk:Nth root algorithm
Overly long runtime in Python
I've been playing around with finding (integer) nth roots for large n. Unfortunately, the following implementation of Newton's method (in Python) is ridiculously slow:
  def nthroot(y, n):
      x, xp = 1, -1
      while abs(x - xp) > 1:
          xp, x = x, x - x/n + y/(n * x**(n-1))
      while x**n > y:
          x -= 1
      return x
For example, nthroot(12345678901234567890123456789012345**100,100)
takes several minutes to finish.
Using binary search, the solution can be found in seconds.
Am I just doing something wrong, or is Newton's method inherently inefficient for integers and such large n? - Fredrik | talk 23:22, 24 May 2005 (UTC)
- Brief answer (let me know if you need more information): Are you sure that the standard datatype of Python can handle 125-digit integers? I think you need a special library for this. Besides, it may be necessary to start the iteration with a more sensible initial guess than x = 1 (generally, Newton's method may not converge unless the initial guess is close to the answer, but extracting N-th roots may be a special case). -- Jitse Niesen 19:39, 6 Jun 2005 (UTC)
- I thought a bit more about it now. Basically, you are right: Newton's method is inefficient for large n. There are two reasons for this. Firstly, the computation of x**(n-1) in exact arithmetic is expensive if x is a large integer, so every iteration takes a long time. Secondly, the method needs a lot of iterations. For instance,
nthroot(1234567890123456789012345**10, 10)
requires 4725 iterations (note that I replaced 100 in your example with 10), while a binary search requires about 100 iterations. However, the method improves if you give a good initial guess. After replacing the first line of the nthroot function with
x, xp = int(y**(1.0/n)), -1
the above example gets the correct answer in only 3 iterations. - The current article is rather bad: it only presents the algorithm, which is trivial if you know Newton's method. There is no analysis, no discussion of how to get the initial guess, and no references. I am not convinced that the algorithm is used that often; I thought the common approach is to use exponentials and logarithms. Bignum arithmetic is a separate issue. I hope you can find the time to improve the article. -- Jitse Niesen 23:37, 7 Jun 2005 (UTC)
- The catch with that solution is that
y**(1.0/n)
overflows for large y. Fortunately this can be avoided by using B**int(log(y, B)/n)
with some integer B instead. However, this still leaves Newton (as implemented above) significantly slower than bsearch for very large n (a test with n=300 is nearly instantaneous with bsearch, but takes several seconds with Newton). I'll do some more detailed benchmarking when I get time, hopefully soon. For reference, I'll provide my bsearch implementation here: - Fredrik | talk 00:24, 8 Jun 2005 (UTC)
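In Python 3 terms (where / is float division), Jitse's seeding idea combined with this overflow-safe B-based guess might be sketched as follows. The names initial_guess and nthroot_newton are mine, not from the thread, and I've added an upward-correction loop because the truncated guess can land just below the true root:

```python
from math import log

def initial_guess(y, n, B=2):
    # B**int(log(y, B)/n) approximates y**(1/n) without converting the
    # (possibly huge) integer y itself to a float; math.log accepts big ints.
    return B ** int(log(y, B) / n)

def nthroot_newton(y, n):
    # Fredrik's loop with Python 3 floor division (//) in place of Python 2's /
    x, xp = initial_guess(y, n), -1
    while abs(x - xp) > 1:
        xp, x = x, x - x // n + y // (n * x ** (n - 1))
    # correction passes: the iteration may stop one or two off the floor root
    while x ** n > y:
        x -= 1
    while (x + 1) ** n <= y:
        x += 1
    return x
```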
  # Returns a tuple of the (floor, ceil) values of the exact solution
  def nthroot_bsearch(x, n):
      guess = 1  # Initial guess must be less than the result
      step = 1
      while 1:
          w = (guess + step)**n
          if w == x:
              return (guess + step,) * 2
          elif w < x:
              step <<= 1
          elif step == 1:
              return guess, guess + 1
          else:
              guess += step >> 1
              step = 1
More timing results
                         Newton                  bsearch
                         steps   time            steps   time
    2-root   of 123^13       5   0.000124 s         45   0.000358 s
    3-root   of 123^13       4   0.000081 s         30   0.000198 s
    15-root  of 123^13       1   0.000053 s          6   0.000077 s
    37-root  of 123^13     185   0.003073 s          2   0.000072 s
    68-root  of 123^13       0   0.000040 s          1   0.000046 s
    111-root of 123^13       0   0.000029 s          0   0.000030 s
    150-root of 123^13       0   0.000028 s          0   0.000029 s
    2-root   of 123^15       5   0.000083 s         52   0.000356 s
    3-root   of 123^15       7   0.000203 s         34   0.000229 s
    15-root  of 123^15      97   0.000851 s          6   0.000076 s
    37-root  of 123^15     536   0.008276 s          2   0.000063 s
    68-root  of 123^15       0   0.000039 s          1   0.000046 s
    111-root of 123^15       0   0.000029 s          0   0.000030 s
    150-root of 123^15       0   0.000028 s          0   0.000029 s
    2-root   of 123^74       9   0.000191 s        256   0.002559 s
    3-root   of 123^74       8   0.000207 s        171   0.001480 s
    15-root  of 123^74      13   0.000239 s         34   0.001284 s
    37-root  of 123^74     679   0.015252 s         13   0.000179 s
    68-root  of 123^74    1471   0.057066 s          7   0.000138 s
    111-root of 123^74    4564   0.710314 s          4   0.000125 s
    150-root of 123^74    5358   1.115691 s          3   0.000112 s
    2-root   of 123^137      9   0.000343 s        475   0.006759 s
    3-root   of 123^137      7   0.000333 s        317   0.008288 s
    15-root  of 123^137     28   0.000767 s         63   0.000929 s
    37-root  of 123^137    517   0.015948 s         25   0.000565 s
    68-root  of 123^137   2814   0.278242 s         13   0.000247 s
    111-root of 123^137   4291   0.634543 s          8   0.000209 s
    150-root of 123^137   4360   0.722420 s          6   0.000179 s
    2-root   of 123^150      9   0.000388 s        520   0.007006 s
    3-root   of 123^150      8   0.000509 s        347   0.007097 s
    15-root  of 123^150     30   0.000885 s         69   0.001091 s
    37-root  of 123^150     29   0.000684 s         28   0.000420 s
    68-root  of 123^150    705   0.025949 s         15   0.000300 s
    111-root of 123^150   2709   0.244051 s          9   0.000241 s
    150-root of 123^150  13713  10.464777 s          6   0.000187 s
This clearly shows Newton being superior for small n, and bsearch superior for large n. Interesting, eh? Fredrik | talk 10:47, 25 August 2005 (UTC)
Efficiency depends on choosing a good initial guess
Today I needed to write a routine to take the integer nth root of a number (i.e. $\lfloor x^{1/n} \rfloor$). I think the problem Fredrik is seeing above is due to choosing a bad initial guess. If you set
$x_0 = 2^{\lceil \lg(x)/n \rceil}$
(where $\lg$ is the log base 2), then Newton is very efficient. And it has the advantage that you can use truncating integer arithmetic when calculating the iterates, since each $x_k$ is larger than the required answer. For example, taking the 100th root of 12345678901234567890123456789012345^100 with this initial guess (Fredrik's first comment) only requires 59 iterations. dbenbenn | talk 10:34, 19 March 2006 (UTC)
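dbenbenn's power-of-two seeding translates naturally into a pure-integer routine. The sketch below is my own illustration (introot is a hypothetical name), using the standard integer Newton step for f(x) = x**n - y plus a final correction pass:

```python
def introot(y, n):
    # Seed with 2**ceil(lg(y)/n), a power of two >= the true root, so the
    # iterates stay at or above the answer and truncating division is safe.
    x = 1 << -(-y.bit_length() // n)
    while True:
        # integer Newton step for f(x) = x**n - y
        xn = ((n - 1) * x + y // x ** (n - 1)) // n
        if xn >= x:          # stopped decreasing: we are at (or next to) the root
            break
        x = xn
    # guard against off-by-one at the boundary
    while x ** n > y:
        x -= 1
    while (x + 1) ** n <= y:
        x += 1
    return x
```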
Other possible way to do the iteration
As far as I can see, it could be substantially faster to perform Newton iteration on $f(x) = x^{-n} - y$ rather than $f(x) = x^n - y$, by using an iteration like:
- Make an initial guess $x_0 \approx y^{-1/n}$
- Set $x_{k+1} = x_k + \frac{x_k}{n}\left(1 - y\,x_k^n\right)$
- Repeat step 2 until the desired precision is reached (the iterates converge to $y^{-1/n}$, whose reciprocal is the desired root),
since that way you can get rid of the per-iteration division (which is much slower than multiplication on most modern computers). I'm a bit afraid that there could be issues with the convergence radius or the WP:NOR guideline, which is why I am not just inserting it into the article itself. 80.202.214.26 16:44, 25 May 2006 (UTC)
- I just redid the derivation from Newton's iteration with $f(x) = x^{-n} - y$, and it agrees with the above user's derivation. 132.228.195.207 (talk) 02:20, 29 April 2009 (UTC)
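As a floating-point sketch (function name mine), the anonymous editor's division-free iteration looks like this; the only divisions are the one-time 1/n and the final reciprocal, matching the derivation above:

```python
def nthroot_inverse(y, n, iters=20):
    # Newton on f(x) = x**(-n) - y converges to y**(-1/n); the loop body
    # uses only multiplication and addition.
    inv_n = 1.0 / n               # precomputed once, outside the loop
    x = y ** (-1.0 / n)           # crude float seed (assumes y > 0)
    for _ in range(iters):
        x *= 1.0 + inv_n * (1.0 - y * x ** n)
    return 1.0 / x                # one final division recovers y**(1/n)
```

In exact big-integer or fixed-point arithmetic the same trick removes the expensive long division from every step, which is where the claimed speedup would come from.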
Nth-root Extremely Simple Arithmetical Iterations
It is astonishing to realize that the arithmetical methods shown at:
Nth-Root simple arithmetical methods
do not appear in any text on root-solving from Babylonian times up to now, including, of course, modern scholarly journals. It is worrying, indeed, that "experts" on root-solving seem unaware of these trivial arithmetical high-order nth-root-solving methods.
Domingo Gomez Morin. Civil Engineer. Structural Engineer – Arithmonic (talk), 26 October 2006
What about Halley's method?
Halley's method is a well-documented root-finding algorithm that can be applied to finding nth roots. It has the benefit of converging cubically, as opposed to quadratically for Newton's method, which is already covered in the article. The only catch is that the initial seed has to be a good guess for it to converge rapidly. For $f(x) = x^n - a$ the Halley step is:
$x_{k+1} = x_k \cdot \frac{(n-1)\,x_k^n + (n+1)\,a}{(n+1)\,x_k^n + (n-1)\,a}$
Cnadolski (talk) 21:01, 16 September 2010 (UTC)
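For illustration, a float sketch of Halley's iteration for nth roots (naming and seed handling are mine; the step is the standard Halley formula for f(x) = x**n - a):

```python
def nthroot_halley(a, n, x0, iters=30):
    # Halley step: x <- x * ((n-1)*x**n + (n+1)*a) / ((n+1)*x**n + (n-1)*a)
    # Converges cubically once x is near a**(1/n); a poor seed converges slowly,
    # which is the catch Cnadolski mentions.
    x = float(x0)
    for _ in range(iters):
        xn = x ** n
        x *= ((n - 1) * xn + (n + 1) * a) / ((n + 1) * xn + (n - 1) * a)
    return x
```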
Complicate modulus method
The n-th root of any Integer can be found by hand, knowing that [P^n - (P-1)^n] is the complicate modulus of any power of an integer. This follows from the telescoping sum property: any power of an integer C can be written as a sum of gnomons:
$C^n = \sum_{x=1}^{C} \left[ x^n - (x-1)^n \right]$
"x" will be used as the index, since it helps to place the result on a Cartesian plane and to understand that this is a method to square the area below the derivative (not just the first one) of any function of the type $y = x^n$.
Since:
- $C^2 = \sum_{x=1}^{C} (2x - 1)$.
So to have the square root of 9, just follow this Recursive Difference:
1) write Tartaglia's line for (x-1)^2 (line n=2):
1 -2 1, then we remove the first value:
-2 1
we change the signs and re-insert the x values (with their powers); then we obtain our M2 modulus:
- $M_2 = 2x - 1$.
So we can easily tabulate (and calculate the square root) using our modulus M2 = 2x - 1, rising x:
    x   2x-1   Value = 9, Rest
    1    1     9 - 1 = 8
    2    3     8 - 3 = 5
    3    5     5 - 5 = 0
so yes, 9 has a perfect root, which is 3, with rest 0.
Example for n=3, cubic root by hand:
We have, for example, the number 28, and we would like to know whether it has an integer cubic root and which it is.
From Tartaglia, line n=3 for (x-1)^3:
1 -3 +3 -1, we remove the first:
-3 +3 -1, and we change the signs, so:
- $M_3 = 3x^2 - 3x + 1$.
And we can calculate:
    x   3x^2 - 3x + 1   Value = 28, Rest
    1          1        28 -  1 = 27
    2          7        27 -  7 = 20
    3         19        20 - 19 =  1   <-- rest <> 0, so no perfect cube
so: no, 28 has no integer cubic root.
For the n-th root, just do the same, using Tartaglia's line n.
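In Python terms, this recursive-difference procedure is just repeated subtraction of the gnomons x^n - (x-1)^n; a sketch (the name gnomon_root is mine):

```python
def gnomon_root(C, n):
    # Subtract the 'complicate modulus' M_n(x) = x**n - (x-1)**n for
    # x = 1, 2, 3, ... while the remainder still covers the next gnomon.
    # Returns (root, rest): rest == 0 means C is a perfect n-th power.
    rest, x = C, 0
    while rest >= (x + 1) ** n - x ** n:
        x += 1
        rest -= x ** n - (x - 1) ** n
    return x, rest
```

This reproduces the two tables above: gnomon_root(9, 2) gives (3, 0) and gnomon_root(28, 3) gives (3, 1).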
The following pictures give two examples of how the Two-Hand-Clock works for numbers from 0 to 11, in the case of square and cubic roots or powers.
— Preceding unsigned comment added by StefanoMaruelli (talk • contribs) 05:45, 16 June 2017 (UTC)
I call this the "Complicate Modulus" clock calculator (now the Two-Hand-Clock).
It is also possible to get more digits back by forcing the Sum operator to work with rationals:
- $C^n = \sum_{x = 1/K,\ \mathrm{step}\ 1/K}^{C} \left[ x^n - \left( x - \tfrac{1}{K} \right)^n \right]$.
Where, to have m digits of precision, choose $K = 10^m$.
Example of a Rational Complicate Modulus, for 1-digit precision (n = 2, K = 10):
- $M_2 = \frac{2x}{K} - \frac{1}{K^2}$.
Where the new index x starts from 1/10 and rises in steps of 1/10 (so 1/10, 2/10, 3/10, ...) up to C (which can be an integer or a 1-digit rational).
So the new Sum, which I've called the Step Sum, works with a rational index in this way:
- $C^2 = \sum_{x = 1/K,\ \mathrm{step}\ 1/K}^{C} \left( \frac{2x}{K} - \frac{1}{K^2} \right)$.
So, for example, if K = 10 and C = 2.1, we have:
- $2.1^2 = \sum_{x = 0.1,\ \mathrm{step}\ 0.1}^{2.1} \left( \frac{2x}{10} - \frac{1}{100} \right) = 4.41$,
which is nothing more than a scaled version of the 10-times-bigger integer sum:
    x     2x/K - 1/K^2   Sum
    0.1       0.01       0.01
    0.2       0.03       0.04
    0.3       0.05       0.09
    0.4       0.07       0.16
    0.5       0.09       0.25
    0.6       0.11       0.36
    0.7       0.13       0.49
    0.8       0.15       0.64
    0.9       0.17       0.81
    1.0       0.19       1.00
    1.1       0.21       1.21
    1.2       0.23       1.44
    1.3       0.25       1.69
    1.4       0.27       1.96
    1.5       0.29       2.25
    1.6       0.31       2.56
    1.7       0.33       2.89
    1.8       0.35       3.24
    1.9       0.37       3.61
    2.0       0.39       4.00
    2.1       0.41       4.41
etc...
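With exact rational arithmetic (Python's Fraction standing in here for a GMP-style rational type), the Step Sum might be sketched like this (the name stepsum_root is mine):

```python
from fractions import Fraction

def stepsum_root(C, n, m):
    # Step x by 1/K with K = 10**m, subtracting the rational gnomon
    # x**n - (x - 1/K)**n each time; stops at the largest multiple of
    # 1/K whose n-th power does not exceed C, i.e. the n-th root of C
    # truncated to m decimal digits.
    K = 10 ** m
    step = Fraction(1, K)
    x, rest = Fraction(0), Fraction(C)
    while rest >= (x + step) ** n - x ** n:
        x += step
        rest -= x ** n - (x - step) ** n
    return x
```

For C = 441/100 (i.e. 4.41), n = 2, m = 1 this returns 21/10, matching the table above.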
StefanoMaruelli (talk) 08:44, 16 June 2017 (UTC)StefanoMaruelli
Maclaurin Series
Nth root can be calculated using the Maclaurin series of exp and ln, like this:
$\sqrt[n]{x} = \exp\!\left( \frac{\ln x}{n} \right)$, with $\exp(y) = \sum_{k=0}^{\infty} \frac{y^k}{k!}$ and $\ln x = 2 \sum_{k=0}^{\infty} \frac{1}{2k+1} \left( \frac{x-1}{x+1} \right)^{2k+1}$.
Since the exponents in these series are always integers >= 0, a simple pow function can be used for the calculations.
Using Common Lisp as the programming language, for example:
  (defun powint (a b)
    (if (= b 0)
        1
        (* a (powint a (- b 1)))))
Of course it's not possible to compute infinitely many terms on a computer, but the value can be calculated until it is good enough.
Using a library that implements a rational type with a numerator and a denominator (like GMP's mpq_t), it is possible to calculate the Nth root using only integers.
Maybe not the best solution, but it could be something like this:
  (defun factorial (n)
    (if (= n 0)
        1
        (* n (factorial (- n 1)))))

  (defun powint (a b)
    (if (= b 0)
        1
        (* a (powint a (- b 1)))))

  (defun logS (x n)
    (if (= n 0)
        (* 2 x)
        (+ (* 2 (/ (powint x (+ (* 2 n) 1))
                   (+ (* 2 n) 1)))
           (logS x (- n 1)))))

  (defun loge (x n)
    (logS (/ (- x 1) (+ 1 x)) n))

  (defun expon (x n)
    (if (= n 0)
        1
        (+ (/ (powint x n) (factorial n))
           (expon x (- n 1)))))

  (defun pown (x a n)
    (expon (* a (loge x n)) n))
example output:
  [9]> (* (pown 2 1/3 20) 1.0)
  1.2599211