0 votes

I am trying to find a root of an equation using the Newton-Raphson method provided by SciPy (scipy.optimize.newton).

At the moment I don't have the fprime values that the documentation advises to use, and as far as I am aware this means that the Secant method is being used to find the roots.

Since the Newton-Raphson method converges faster than the secant method, my gut feeling is that I should numerically approximate fprime and provide it so that Newton's method is used.

Which one would generally lead to faster convergence / faster actual computing of my roots?

  1. Just using scipy.optimize.newton without providing fprime (i.e. the secant method), or
  2. Using numerical differentiation to compute fprime (e.g. with numpy.diff) and providing it to scipy.optimize.newton so that the Newton-Raphson method is used (both options are sketched below).
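For concreteness, here is a minimal sketch of the two options; the test function f, the starting point x0, and the forward-difference step size dx are made-up illustrations, not values from the question:

    # Hypothetical example: f, x0, and dx are made up for illustration.
    from scipy import optimize

    def f(x):
        return x**3 - 2.0 * x - 5.0

    x0 = 2.0

    # Option 1: no fprime, so scipy.optimize.newton falls back to the secant method.
    root_secant = optimize.newton(f, x0)

    # Option 2: pass a numerically approximated derivative (a forward difference here),
    # so the Newton-Raphson iteration is used instead.
    def fprime(x, dx=1e-6):
        return (f(x + dx) - f(x)) / dx

    root_newton = optimize.newton(f, x0, fprime=fprime)

    print(root_secant, root_newton)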

1 Answer

3 votes

The book Numerical Recipes in C, 2nd edition, in section "9.4 Newton-Raphson Method Using Derivative" on page 365, states:

The Newton-Raphson formula can also be applied using a numerical difference to approximate the true local derivative,

f'(x) ≈ (f(x + dx) - f(x)) / dx .

This is not, however, a recommended procedure for the following reasons: (i) You are doing two function evaluations per step, so at best the superlinear order of convergence will be only sqrt(2). (ii) If you take dx too small you will be wiped out by roundoff, while if you take it too large your order of convergence will be only linear, no better than using the initial evaluation f'(x_0) for all subsequent steps. Therefore, Newton-Raphson with numerical derivatives is (in one dimension) always dominated by the secant method of section 9.2.

(That was edited to fit the limitations of this site.) Choosing a more accurate scheme for the numerical derivative would require even more function evaluations per step and would thus lower the order of convergence further. Therefore you should go with your first option: call scipy.optimize.newton without fprime and let it use the secant method to find the root.
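To see the trade-off concretely, here is a minimal sketch under the same assumptions as above (made-up test function, illustrative step size dx); it simply counts how many times f is evaluated by each approach:

    # Minimal sketch with a made-up test function; the call counter is only
    # there to compare the total number of f evaluations of both approaches.
    from scipy import optimize

    calls = {"n": 0}

    def f(x):
        calls["n"] += 1
        return x**3 - 2.0 * x - 5.0

    def fprime(x, dx=1e-6):
        # Forward difference: two extra calls to f on top of the evaluation
        # Newton-Raphson already needs at each step.
        return (f(x + dx) - f(x)) / dx

    calls["n"] = 0
    optimize.newton(f, 2.0)                 # secant method (no fprime)
    secant_evals = calls["n"]

    calls["n"] = 0
    optimize.newton(f, 2.0, fprime=fprime)  # Newton-Raphson with numerical derivative
    newton_evals = calls["n"]

    print("secant:", secant_evals, "evaluations")
    print("Newton + forward difference:", newton_evals, "evaluations")

Even if the Newton variant needs fewer iterations, each iteration costs three evaluations of f instead of one, which is exactly the trade-off the quoted passage describes.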