Polynomial Root-finding with the Jenkins-Traub Algorithm

The Jenkins-Traub Algorithm is a standard in the field of numerical computation of polynomial roots, fundamentally developed as a numerical algorithm specifically for the task of computing polynomial roots. In other words, (i) because it was designed from the outset for numerical purposes rather than being merely an adaptation of an analytic formula, it is extremely robust, effectively minimizing the effects of computer round-off error, while (ii) also being extremely efficient compared to more general methods not written specifically for the task of computing polynomial roots; in fact, the algorithm converges to polynomial roots at a rate better than quadratic. Furthermore, since being introduced over thirty years ago, the algorithm has had time to be rigorously tested and has successfully proven its quality; as a result, it has gained a wide distribution, as evidenced by its incorporation in commercial software products and the posting on the NETLIB website of source code for programs based on the algorithm.

An Introduction to the Field of Root-finding

Given a function defined in terms of a number of independent variables, the roots (also called zeros) of the equation are the values of the independent variable(s) for which the function equals [tex]0[/tex]:

[TEX]\displaystyle f(z_1, z_2, z_3, z_4, z_5, \ldots) = 0[/TEX]

Note that, in general, the z values are complex numbers, composed of real and imaginary components (indicated by the [TEX]i[/TEX]): [TEX]z = x + iy[/TEX].

Consider the following equation:

[TEX]\displaystyle 2z_{1} + 0.5z_{2}z_{3} + 3.14z_{4}^{2}+1.414\ln(z_{5})=23[/TEX]

The values of [TEX]z_{1}, z_{2}, z_{3}, z_{4}[/TEX] and [TEX]z_{5}[/TEX] which satisfy this equation may not be obtainable by analytical methods, so the equation can be rearranged into the following form:

[TEX]\displaystyle 2z_{1} + 0.5z_{2}z_{3} + 3.14z_{4}^{2}+1.414\ln(z_{5}) - 23 = 0[/TEX]

which is of the form

[TEX]\displaystyle f(z_1, z_2, z_3, z_4, z_5) = 0[/TEX]

Once in this form (the standard form), solving the original equation becomes a matter of finding the [TEX]z_{1}, z_{2}, z_{3}, z_{4}[/TEX] and [TEX]z_{5}[/TEX] values for which f equals [tex]0[/tex], and well-developed fields of mathematics and computer science provide a number of root-finding methods for solving such a problem. Note that since the theme of this article is polynomial root-finding, further examples will focus on single-variable equations, specifically, polynomials.

Analytical Root-finding Methods

Consider the following quadratic equation:

[TEX]\displaystyle f(z) = z^{2} + z - 2[/TEX]

This equation is a second-degree polynomial: the highest power applied to the independent variable is 2. Consequently, this equation has two roots; an n-degree polynomial equation has n roots.

Because the equation is a simple polynomial, a basic approach to finding its zeros might be to put the equation into the form [TEX](z - \alpha_{1})(z - \alpha_{2})[/TEX] and make a few educated guesses; a person might quickly determine that [TEX]\alpha_{1} = 1[/TEX] (a strictly real result) is one root of this equation. This root may then be divided out of the original polynomial

[TEX]\displaystyle \frac{(z^{2} + z - 2)}{(z-1)} = (z+2) [/TEX]

to yield the second root: [TEX]\alpha_{2} = -2[/TEX].

In this simple example, the two roots were found easily; the second root was found immediately after the first one was identified. However, in most real-world problems the roots of a quadratic are not found so easily. Furthermore, in problems where the original polynomial is of degree greater than two, when one root is found, the other roots do not follow immediately; more sophisticated methods are required.

Among the methods available are analytical ones, in other words, methods that yield explicit, algebraic results. For example, in the case of a quadratic equation, explicit expressions for the two roots are obtainable via the Quadratic Formula: once a quadratic equation is arranged into the standard form

[TEX]\displaystyle \mathbf{a}z^{2} + \mathbf{b}z + \mathbf{c} = 0[/TEX]

(a, b and c are known constants), the two roots are found via the Quadratic Formula:

[TEX]\displaystyle \alpha = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}[/TEX]

The quantity within the square root sign [TEX](b^2 - 4ac)[/TEX] is called the discriminant and determines whether the solution is complex or strictly real. If the discriminant is less than [tex]0[/tex], the numerator contains an imaginary component and the roots are, therefore, complex. If the discriminant is greater than or equal to [tex]0[/tex], the solutions are real. In fact, if the discriminant is [tex]0[/tex], the two roots are real and equal:

[TEX]\displaystyle \alpha_{1} = \frac{-b + \sqrt{0}}{2a} = \frac{-b}{2a}[/TEX]

[TEX]\displaystyle \alpha_{2} = \frac{-b - \sqrt{0}}{2a} = \frac{-b}{2a}[/TEX]
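As a minimal sketch of the formula above (the function name and the use of Python's cmath module are my own choices, not part of the article), a few lines handle both the complex and the repeated-root cases:

```python
# A sketch of the Quadratic Formula; cmath.sqrt makes a negative
# discriminant yield the complex root pair directly.
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*z**2 + b*z + c = 0 (a != 0)."""
    disc = b * b - 4 * a * c          # the discriminant
    sq = cmath.sqrt(disc)
    return (-b + sq) / (2 * a), (-b - sq) / (2 * a)

# The earlier example z^2 + z - 2 has roots 1 and -2:
r1, r2 = quadratic_roots(1, 1, -2)    # -> (1+0j), (-2+0j)
# A zero discriminant gives the repeated root -b/(2a):
r3, r4 = quadratic_roots(1, -2, 1)    # both equal to 1
```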

In addition to the Quadratic Formula for quadratic equations, analogous formulae exist for polynomials of degree three and four. However, for polynomials of degree five and higher, analytical solutions are not possible (except in special cases); only numerical solutions are possible.

Numerical Root-finding Methods

For the multitude of real-world problems that are not amenable to an analytical solution, numerical root-finding is a well-developed field offering a wide variety of tools from which to choose; many computer programs written for the computation of roots are based on algorithms that make them applicable to the computation of roots of functions in addition to polynomials, and virtually all of them employ an iterative approach that terminates once the desired degree of tolerance has been achieved. For example, the bisection method is a simple and reliable method for computing roots of a function when they are known ahead of time to be real only. The algorithm begins with the assumption that a zero lies somewhere on a user-supplied interval [a,b]. In other words, the function value f(a) is of the opposite sign of f(b): in going from a to b, f either goes from being positive to being negative, or it goes from being negative to being positive. A point, m, in the middle of the interval [a,b] is then chosen, and the function is evaluated at that point: f(m). If the sign of f(m) is the same as the sign of f(a), the desired zero is not in the interval [a,m]. In this case, the interval is cut in half and the new interval becomes [m,b]. On the other hand, if the sign of f(m) is not the same as the sign of f(a), the desired zero is in the interval [a,m]. In this case, the interval is cut in half and the new interval becomes [a,m]. This process is repeated until either the interval converges to a desired tolerance or a value of f is found that is within an acceptable tolerance, in which case the value of z at this point is the value returned by the program as the root.
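The bisection procedure just described can be sketched in a few lines. This is a generic illustration, not code from the article, and the tolerance handling shown is one of several reasonable choices:

```python
# A sketch of the bisection method: f must be continuous, and f(a)
# and f(b) must have opposite signs on the user-supplied interval.

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2          # point in the middle of [a, b]
        fm = f(m)
        if fm == 0:
            return m             # landed exactly on a zero
        if (fm > 0) == (fa > 0):
            a, fa = m, fm        # same sign as f(a): zero is in [m, b]
        else:
            b = m                # opposite sign: zero is in [a, m]
    return (a + b) / 2

root = bisect(lambda z: z * z + z - 2, 0, 5)  # converges to the root 1
```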

In addition to the bisection method, more elegant, and faster converging, methods are available: the False Position method, Brent's method, the Newton-Raphson method, the secant method, and others. The Newton-Raphson method uses the function and its derivative to quickly converge to a solution. This method is good for situations in which the derivatives are either known or can be calculated at a low computational cost. In other situations, the cost of computing derivatives may be too high. The secant method, by comparison, does not require the use of derivatives, but the calculation of each iterate requires the use of the two previous iterates, and it does not converge to a solution as quickly as the Newton-Raphson method. In some situations, this method, too, may be deemed impractical. In general, when some thought is given to a problem, a particular technique can be applied to it that is more appropriate than another. In fact, for some problems, one technique may fail to find a solution at all, while another technique will succeed. For other problems, several methods may, indeed, be able to solve the problem, and the numerical analyst may select the one that is simply more computationally efficient.
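Minimal sketches of these two iterations, applied to the earlier example f(z) = z^2 + z - 2, show the trade-off: Newton-Raphson needs the derivative, while the secant method needs only the two previous iterates. These are generic illustrations with names of my own choosing:

```python
# Generic sketches of the Newton-Raphson and secant iterations,
# applied to the running example f(z) = z**2 + z - 2 (roots 1 and -2).

def newton(f, fprime, z, iterations=50, tol=1e-12):
    """Newton-Raphson: uses the function and its derivative."""
    for _ in range(iterations):
        step = f(z) / fprime(z)
        z -= step
        if abs(step) < tol:
            break
    return z

def secant(f, z0, z1, iterations=50, tol=1e-12):
    """Secant method: each new iterate uses the two previous iterates."""
    for _ in range(iterations):
        f0, f1 = f(z0), f(z1)
        z0, z1 = z1, z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z1 - z0) < tol:
            break
    return z1

f = lambda z: z * z + z - 2
root_n = newton(f, lambda z: 2 * z + 1, 3.0)  # converges to the root 1
root_s = secant(f, 3.0, 2.5)                  # also converges to 1
```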

The Jenkins-Traub Algorithm

The field of numerical root-finding is so well-developed that the use of a good-quality numerical program is recommended, when numerical results are sought, even when an analytical solution to a problem is known. In other words, although an analytical solution to, say, the quartic equation exists, writing a computer program that simply implements the textbook formula is not recommended; computer round-off errors often render the results of such programs meaningless. The use of a robust numerical program, based upon sound theory and a good algorithm, and coded to thoroughly deal with computer round-off errors, is the recommended course of action. The Jenkins-Traub Algorithm is such an algorithm; it is a three-stage, extremely effective, globally convergent algorithm designed specifically for computing the roots of polynomials.

Stage One is the “No Shift” stage; the main purpose of this stage is to accentuate the smaller zeros. The search for a zero is started by taking an initial guess of [tex]0[/tex] for a fixed number, M, of iterations (M is typically assigned the value 5 on the basis of numerical experience(1)).

Stage Two is the “Fixed Shift” stage; the purpose of this stage is to separate zeros of equal or nearly equal magnitude. As a starting point in this stage, the following value is used:

[TEX]\displaystyle s = \beta e^{i\theta}[/TEX]

[TEX]\beta[/TEX] is a lower bound on the magnitudes of the probable zeros in the cluster. [TEX]\theta[/TEX] could be taken at random, since the cluster could be anywhere in the complex plane; however, in practice [TEX]\theta[/TEX] is typically initialized to 49°, putting s near the middle of the first quadrant of the complex plane. After a certain number of iterations, if s does not converge to a root, s is assigned a new value by increasing [TEX]\theta[/TEX] by 94°. Repeated attempts thus have the search for a root start with points in all four quadrants of the complex plane until the search returns to the first quadrant. Should the search, indeed, return to the first quadrant, successive cycles begin at points 16° away from the starting point of the preceding cycle.
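The shift-angle schedule just described (start at 49°, add 94° per retry) can be illustrated directly. This sketch shows the schedule only, not the Jenkins-Traub iteration itself, and the helper name is mine:

```python
# The Stage Two shift-angle schedule: theta starts at 49 degrees and
# increases by 94 degrees on each retry, so four retries visit all
# four quadrants and the fifth lands back in the first quadrant,
# 16 degrees past the previous starting point.
import cmath
import math

def shift_point(beta, k):
    """Trial shift s = beta * e^{i*theta} for the k-th retry."""
    theta = math.radians(49 + 94 * k)
    return beta * cmath.exp(1j * theta)

angles = [(49 + 94 * k) % 360 for k in range(5)]
# angles == [49, 143, 237, 331, 65]; note 65 - 49 == 16
```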

Stage Three is the “Variable Shift” stage, which is terminated when the computed value of the polynomial at a possible zero is less than or equal to a specified bound.

In addition to the three fundamental stages around which it was developed, the Jenkins-Traub Algorithm incorporates several other techniques to make it as effective as possible. One of those techniques is deflation of the polynomial by synthetic division whenever a root is found. Consider the following monic polynomial:

[TEX]\displaystyle z^n + \mathbf{a_{n-1}}z^{n-1}+ \mathbf{a_{n-2}}z^{n-2}+\mathbf{a_{n-3}}z^{n-3} + \ldots + \mathbf{a_{1}}z + \mathbf{a_0} = 0[/TEX]

The [TEX]\mathbf{a_i}[/TEX] are known constants and, in general, are complex. Now say the root [TEX]z_1[/TEX] ([TEX]z_1 = x + iy[/TEX]) has been found. Synthetic division can be employed to divide that root out of the original polynomial:

[TEX]\displaystyle \frac{z^n + \mathbf{a_{n-1}}z^{n-1}+ \mathbf{a_{n-2}}z^{n-2}+\mathbf{a_{n-3}}z^{n-3} + \ldots + \mathbf{a_0}}{z-z_1}[/TEX]

to yield

[TEX]\displaystyle \mathbf{b_{n-1}}z^{n-1}+ \mathbf{b_{n-2}}z^{n-2}+\mathbf{b_{n-3}}z^{n-3} + \ldots + \mathbf{b_0}[/TEX]

The [TEX]\mathbf{b_i}[/TEX] are new constants. The root-finding process is then repeated on this new, simpler, polynomial. As each root is found, the polynomial becomes successively simpler, and each successive iteration of the algorithm involves, in general, fewer computations.
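The deflation step is ordinary synthetic division, which can be sketched in a few lines. The helper below is a generic illustration (names and layout are mine), shown on a small monic cubic with known roots:

```python
# Deflation by synthetic division: once a root is found, dividing the
# monic polynomial by (z - root) leaves a polynomial of one lower
# degree, and the search continues on that simpler polynomial.

def deflate(coeffs, root):
    """Synthetic division of a monic polynomial (coefficients listed
    highest power first) by (z - root); returns (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + root * out[-1])
    return out[:-1], out[-1]

# z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3); deflating by the
# root 1 leaves z^2 - 5z + 6 with zero remainder.
quotient, remainder = deflate([1, -6, 11, -6], 1)
# quotient == [1, -5, 6], remainder == 0
```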

For polynomials whose coefficients are real only, when a complex root is found ([TEX]z_1 = x + iy[/TEX]), an additional benefit arises: that root's complex conjugate is also a root ([TEX]z_2 = x - iy[/TEX]). In other words, two roots are computed, and the polynomial can be deflated by two degrees, in a single iteration of the algorithm. Furthermore, this deflation involves real-only, rather than complex, operations; the product of the two corresponding linear factors is a real quadratic:

[TEX]\displaystyle (z-z_1)(z - z_2) = z^2 - 2xz + (x^2 + y^2)[/TEX]

In this case, synthetic division is employed on the polynomial as follows:

[TEX]\displaystyle \frac{z^n + \mathbf{a_{n-1}}z^{n-1}+ \mathbf{a_{n-2}}z^{n-2}+\mathbf{a_{n-3}}z^{n-3} + \ldots + \mathbf{a_0}}{z^2 - 2xz + (x^2 + y^2)}[/TEX]

In fact, for computing roots of polynomials which have only real coefficients, a modified version of the Jenkins-Traub Algorithm has been written which incorporates several features that take advantage of the characteristics of real-only polynomials to yield significant decreases in program execution time; “If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about 4 times as fast.”(4)
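Deflation by the real quadratic factor can likewise be sketched with a two-term synthetic-division recurrence; every operation below is real. The helper is a generic illustration with my own naming and remainder convention, not the RPOLY code:

```python
# Real-only deflation: divide a real-coefficient polynomial by the
# real quadratic z^2 - 2xz + (x^2 + y^2) produced by the conjugate
# root pair x +/- iy, using real arithmetic only.

def deflate_quadratic(coeffs, x, y):
    """Divide a monic real polynomial (coefficients highest power
    first) by z^2 + p*z + q with p = -2x, q = x*x + y*y.
    Returns (quotient, (linear_rem, const_rem))."""
    p, q = -2 * x, x * x + y * y
    b = []
    for k, a in enumerate(coeffs):
        term = a
        if k >= 1:
            term -= p * b[k - 1]
        if k >= 2:
            term -= q * b[k - 2]
        b.append(term)
    # b[:-2] is the quotient; the last two values encode the remainder.
    return b[:-2], (b[-2], b[-1] + p * b[-2])

# Roots 1 +/- 2i give the factor z^2 - 2z + 5; multiplied by (z - 3)
# this is z^3 - 5z^2 + 11z - 15. Deflation recovers (z - 3) exactly.
quotient, (r1, r0) = deflate_quadratic([1, -5, 11, -15], 1, 2)
# quotient == [1, -3], r1 == 0, r0 == 0
```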

The Jenkins-Traub Algorithm not only deflates the polynomial as roots are computed, it computes the roots roughly in order of increasing magnitude. This approach is taken because deflation of a polynomial can be unstable unless done by factors in order of increasing magnitude. By starting the search for roots with an initial guess of [tex]0[/tex], as is done in Stage One, roots are, indeed, computed roughly in order of increasing magnitude; the factors by which the polynomial is successively deflated are therefore roughly in order of increasing magnitude, and the deflation of the polynomial is made quite stable.

As a quick check of the effectiveness of the Jenkins-Traub Algorithm, consider a contrived numerical example:

[TEX]\displaystyle f(z) = (z - 1.001)(z - 0.998)(z - 1.00002)(z - 0.99999)[/TEX]

The roots of this polynomial (1.001, 0.998, 1.00002 and 0.99999) are very close together and test how effectively the algorithm discerns near-multiple roots.
In addition, the polynomial has real-only coefficients, so readers may confirm the results with a free, ready-to-use, online Javascript program (URL given below).

Expanded, the test polynomial becomes

[TEX]\displaystyle z^{4} - 3.99901z^{3} + 5.9970279898z^{2} - 3.9970259795802z + 0.9990079897802004[/TEX]

and the solutions computed by the Javascript root-finder follow:


The program, in fact, did an excellent job:

  1. no errors were returned,
  2. no erroneous imaginary components were found,
  3. four distinct roots were computed, and
  4. the computed roots are extremely close to the exact roots (round-off errors are unavoidable).
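The expanded coefficients quoted above can also be double-checked independently. The following sketch (my own, not the article's Javascript program) multiplies out the four factors in exact rational arithmetic so that no round-off enters the comparison:

```python
# Verify the expansion of (z - 1.001)(z - 0.998)(z - 1.00002)(z - 0.99999)
# exactly, using rational arithmetic.
from fractions import Fraction as F

roots = [F("1.001"), F("0.998"), F("1.00002"), F("0.99999")]

# Multiply the factors together; coefficients are kept highest power first.
coeffs = [F(1)]
for r in roots:
    coeffs = [a - r * b for a, b in zip(coeffs + [F(0)], [F(0)] + coeffs)]

expected = [F(1), F("-3.99901"), F("5.9970279898"),
            F("-3.9970259795802"), F("0.9990079897802004")]
# coeffs == expected confirms the expanded coefficients quoted above
```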


Since its introduction in 1969 (in Michael Jenkins' PhD thesis), the Jenkins-Traub Algorithm has been rigorously tested and has proven itself an excellent algorithm for the computation of the roots of polynomials. It is extremely robust, is globally convergent for any distribution of zeros, and converges at a better than quadratic rate. Testifying to its high quality is the fact that source code for programs (written in FORTRAN) based on the algorithm has been posted on the NETLIB website, a repository of high-quality numerical programs; the complex coefficient version is posted as CPOLY (https://www.netlib.org/toms/419) and a real coefficient only version has been posted as RPOLY (https://www.netlib.org/toms/493). Translations into C++ and Javascript are also available; for example, Polynomial Root-finder (https://www.akiti.ca/PolyRootRe.html) is a Javascript translation of RPOLY and has been successfully tested with data published in Jenkins' PhD thesis. In addition, the algorithm is incorporated in commercial software products (e.g. Mathematica). The proven effectiveness of the Jenkins-Traub Algorithm and its widespread distribution make it an extremely important tool in the toolbox of numerical analysts.


  1. “A First Course in Numerical Analysis: Second Edition”
    Anthony Ralston and Philip Rabinowitz
    McGraw-Hill Book Company
    New York
  2. “Numerical Methods and Software”
    David Kahaner, Cleve Moler, Stephen Nash
    Prentice-Hall, Inc.
    Englewood Cliffs, New Jersey
  3. “Numerical Recipes. The Art of Scientific Computing.”
    William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling
    Cambridge University Press
    New York
    (Third edition now available.)
  4. Wikipedia
    Jenkins-Traub Algorithm for Polynomial Zeros
  5. Jenkins, M.A. Three-stage variable-shift iterations for the solution of polynomial equations with a posteriori error bounds for the zeros. Diss., Rep. CS 138, Comput. Sci.
    Dep., Stanford U., Stanford, Cal., 1969.

This article was written by David Binner. You can visit his math website at https://akiti.ca. If you'd like to write for Math-Blog.com, please email us at [tex][email protected][/tex].