0 votes

I wrote this code for a function that implements sqrt using a technique known as the Babylonian method. It approximates the square root of a number, n, by repeatedly performing a calculation using the following formula:

nextGuess = (lastGuess + (n / lastGuess)) / 2

When nextGuess and lastGuess are very close, nextGuess is the approximated square root. The initial guess can be any positive value (e.g., 1). This value will be the starting value for lastGuess. If the difference between nextGuess and lastGuess is less than a very small number, such as 0.0001, then nextGuess is the approximated square root of n. If not, nextGuess becomes lastGuess and the approximation process continues.

def babyl(n):
    lastGuess = 1.0
    while True:
        # average the guess with n/guess (the Babylonian step)
        nextGuess = (lastGuess + n / lastGuess) / 2.0
        if abs(lastGuess - nextGuess) < 0.0001:
            return nextGuess
        lastGuess = nextGuess

The output of the function is:

>>> babyl(9)
3.000000001396984
>>> babyl(16)
4.000000000000051
>>> babyl(81)
9.000000000007091
>>> 

As you can see, there are many digits after the decimal point.

I want to write a test program where the user enters a positive integer and the function returns its approximated square root.

So I coded:

n=input("Please sir, enter a positive integer number and you'll get the approximated sqrt:")
print babyl(n)

And the answer for that is very short:

>>> 
Please sir, enter a positive integer number and you'll get the approximated sqrt:16
4.0
>>> ================================ RESTART ================================
>>> 
Please sir, enter a positive integer number and you'll get the approximated sqrt:4
2.0
>>> ================================ RESTART ================================
>>> 
Please sir, enter a positive integer number and you'll get the approximated sqrt:9
3.0000000014
>>> 

Can someone tell me why the output differs between calling the function directly and running the test program?


2 Answers

2 votes

The console uses repr() to show the result; print uses str().

>>> import math; f = math.sqrt(10)
>>> str(f)
'3.16227766017'
>>> repr(f)
'3.1622776601683795'
>>> print f
3.16227766017

It's strange that you worry about the missing precision in the output. Your epsilon is 0.0001, several digits shorter than what is printed, which limits the result to fairly poor precision, at least for these small numbers. Why worry about the number of printed digits then?
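The effect of that tolerance is easy to see. Below is a minimal sketch of the same iteration with the epsilon made a parameter (the eps keyword is my addition; the original code hard-codes 0.0001): tightening it drives the result much closer to the true root.

```python
# Babylonian iteration with a configurable tolerance (eps is an
# illustrative addition; the original hard-codes 0.0001).
def babyl(n, eps=0.0001):
    lastGuess = 1.0
    while True:
        nextGuess = (lastGuess + n / lastGuess) / 2.0
        if abs(lastGuess - nextGuess) < eps:
            return nextGuess
        lastGuess = nextGuess

print(repr(babyl(9)))         # loose tolerance leaves ~1e-9 of error
print(repr(babyl(9, 1e-12)))  # tight tolerance converges to 3.0
```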

0 votes

print calls __str__() on the float object; just evaluating the expression at the Python prompt and letting the interpreter show you the result calls __repr__(). __str__() uses a little less precision precisely because of issues with floating-point accuracy: many fractional values can't be stored exactly, and this causes small inaccuracies in calculations involving them.
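A short sketch of the underlying storage issue (the outputs shown are standard CPython behaviour):

```python
# Many decimal fractions have no exact binary representation,
# so small errors accumulate in float arithmetic.
print(repr(0.1 + 0.2))   # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```

In Python 2, str() rounded a float to 12 significant digits while repr() showed enough digits to round-trip the value, which is exactly the console-vs-print discrepancy above. (In Python 3, str and repr of a float are identical, so this difference no longer exists.)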