I wrote this code for a function that implements the sqrt function using a technique known as the Babylonian method. It approximates the square root of a number, n, by repeatedly applying the following formula:
nextGuess = (lastGuess + (n / lastGuess)) / 2
When nextGuess and lastGuess are very close, nextGuess is the approximated square root. The initial guess can be any positive value (e.g., 1), which becomes the starting value for lastGuess. If the difference between nextGuess and lastGuess is less than a very small tolerance, such as 0.0001, then nextGuess is the approximated square root of n; otherwise, nextGuess becomes lastGuess and the approximation process repeats.
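To illustrate how quickly the iteration converges, here is a minimal standalone sketch of the formula for n = 9, starting from a guess of 1 (the variable names follow the description above):

```python
# Iterate nextGuess = (lastGuess + n / lastGuess) / 2 for n = 9,
# printing each guess until successive guesses differ by less than 0.0001.
n = 9
lastGuess = 1.0
while True:
    nextGuess = (lastGuess + n / lastGuess) / 2
    print(nextGuess)  # first guesses are 5.0 and 3.4, then values approaching 3.0
    if abs(lastGuess - nextGuess) < 0.0001:
        break
    lastGuess = nextGuess
```

After only a handful of iterations the guess is already within the tolerance of the true square root, 3.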
def babyl(n):
    # Start from an arbitrary positive initial guess.
    lastGuess = 1.0
    while True:
        # Babylonian update step.
        nextGuess = (lastGuess + n / lastGuess) / 2.0
        # Stop once successive guesses are within the tolerance.
        if abs(lastGuess - nextGuess) < 0.0001:
            return nextGuess
        lastGuess = nextGuess
The output of the function is:
>>> babyl(9)
3.000000001396984
>>> babyl(16)
4.000000000000051
>>> babyl(81)
9.000000000007091
>>>
As you can see, there are many digits after the decimal point.
I want to write a test program where the user enters a positive integer and the function returns its approximate square root. So I coded:
n=input("Please sir, enter a positive integer number and you'll get the approximated sqrt:")
print babyl(n)
And the output for that is much shorter:
>>>
Please sir, enter a positive integer number and you'll get the approximated sqrt:16
4.0
>>> ================================ RESTART ================================
>>>
Please sir, enter a positive integer number and you'll get the approximated sqrt:4
2.0
>>> ================================ RESTART ================================
>>>
Please sir, enter a positive integer number and you'll get the approximated sqrt:9
3.0000000014
>>>
Can someone tell me why the output differs between calling the function directly and running it through the test program?