
I have the following equations:

sqrt((x0 - x)^2 + (y0 - y)^2) - sqrt((x1 - x)^2 + (y1 - y)^2) = c1
sqrt((x3 - x)^2 + (y3 - y)^2) - sqrt((x4 - x)^2 + (y4 - y)^2) = c2

And I would like to find their intersection. I tried rearranging the equations into the form f(x) = 0 and using fsolve, and it worked for small numbers. However, I am working with huge numbers, and solving the system involves many intermediate calculations; in particular, they reach a square root of a subtraction. With huge numbers precision is lost, the left operand ends up smaller than the right one, and I get a math domain error from taking the square root of a negative number.

I am trying to solve this issue in different manners:

  1. Using higher-precision floats. I tried numpy.float128, but fsolve won't accept it.
  2. Searching for a library that can solve systems of non-linear equations, but no luck so far.
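On point 2: one candidate is mpmath, whose findroot can solve non-linear systems at arbitrary precision, which sidesteps the catastrophic cancellation under the square roots. A minimal sketch, where the anchor coordinates are made up for illustration and c1/c2 are constructed from a known point (3, 4) purely so the result can be checked:

```python
from mpmath import mp, mpf, sqrt, findroot

mp.dps = 50  # work with 50 significant decimal digits

# Hypothetical anchor points (x0, y0) ... (x4, y4); replace with real data.
x0, y0 = mpf(0), mpf(0)
x1, y1 = mpf(10), mpf(0)
x3, y3 = mpf(0), mpf(10)
x4, y4 = mpf(10), mpf(10)

# Build c1 and c2 from a known intersection point so the root can be verified.
xt, yt = mpf(3), mpf(4)
c1 = sqrt((x0 - xt)**2 + (y0 - yt)**2) - sqrt((x1 - xt)**2 + (y1 - yt)**2)
c2 = sqrt((x3 - xt)**2 + (y3 - yt)**2) - sqrt((x4 - xt)**2 + (y4 - yt)**2)

# The two hyperbola equations, rearranged to f(x, y) = 0.
f1 = lambda x, y: sqrt((x0 - x)**2 + (y0 - y)**2) - sqrt((x1 - x)**2 + (y1 - y)**2) - c1
f2 = lambda x, y: sqrt((x3 - x)**2 + (y3 - y)**2) - sqrt((x4 - x)**2 + (y4 - y)**2) - c2

# Multidimensional Newton iteration, starting near the expected solution.
sol = findroot([f1, f2], (mpf('3.5'), mpf('4.5')))
```

As with fsolve, the starting point matters; findroot raises an exception instead of silently returning a bad root when it fails to converge.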

Any help/guidance/tips will be appreciated!! Thanks!!

I will try your suggestion, thanks! EDIT: I forgot the sqrts in the equations :/ I will still try that - 9uzman7
Due to your edit, this system is no longer the intersection of two lines; it is now the intersection of two hyperbolas, so it is no longer a linear system. - Rory Daulton
Exactly, but I tried solving the equations for x to use fsolve and ran into the problems mentioned. So that is where I'm stuck :/ - 9uzman7

3 Answers

1
votes

Taking all the advice, I ended up using code like the following.

For the system:

0 = x + y - 8

0 = sqrt((-6 - x)^2 + (4 - y)^2) - sqrt((1 - x)^2 + y^2) - 5

from math import sqrt
import numpy as np
from scipy.optimize import fsolve


def f(x):
    y = np.zeros(2)
    y[0] = x[1] + x[0] - 8
    y[1] = sqrt((-6 - x[0]) ** 2 + (4 - x[1]) ** 2) - sqrt((1 - x[0]) ** 2 + x[1] ** 2) - 5
    return y


x0 = np.array([0, 0])
solution = fsolve(f, x0)
print("(x, y) = ({}, {})".format(solution[0], solution[1]))

Note: the line x0 = np.array([0, 0]) is the seed (initial guess) that fsolve uses to search for a solution. It is important to choose a seed close to the expected solution.

The example provided works :)

0
votes

You might find SymPy useful; it is a symbolic algebra library for Python.

From its home page:

SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
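As a concrete sketch of how SymPy could apply here (not from the original answer): its nsolve function uses mpmath under the hood and accepts a prec argument, so it can solve the non-linear system at higher precision than machine floats. Using the small example system from the first answer, whose solution is (4, 4):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# The system from the first answer, rearranged to f(x, y) = 0.
eq1 = x + y - 8
eq2 = sp.sqrt((-6 - x)**2 + (4 - y)**2) - sp.sqrt((1 - x)**2 + y**2) - 5

# Newton iteration at 30 significant digits, starting from a nearby guess.
sol = sp.nsolve((eq1, eq2), (x, y), (3, 5), prec=30)
```

Like fsolve, nsolve needs a starting guess reasonably close to the root.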

0
votes

Since you have non-linear equations, you need some kind of iterative solver. You can probably use something from scipy.optimize (https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html). However, since I have no experience with those scipy functions, I can only offer you a solution using the gradient-descent method of the TensorFlow library. You can find a short guide here: https://learningtensorflow.com/lesson7/ (check out the gradient descent chapter). Analogous to the method described there, you could do something like this:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# These arrays are pseudo code; fill in your values for x0, x1, y0, y1, ...
x_array = [x0,x1,x3,x4]
y_array = [y0,y1,y3,y4]
c_array = [c1,c2]

# Tensorflow model starts here
x=tf.placeholder("float")
y=tf.placeholder("float")
z=tf.placeholder("float")

# the array [0.0, 0.0] holds the initial guesses for the x and y that solve the system
# (the values must be floats so that gradients can be computed)
xy_array = tf.Variable([0.0, 0.0], name="xy_array")

x0 = tf.constant(x_array[0], name="x0")
x1 = tf.constant(x_array[1], name="x1")
x3 = tf.constant(x_array[2], name="x3")
x4 = tf.constant(x_array[3], name="x4")

y0 = tf.constant(y_array[0], name="y0")
y1 = tf.constant(y_array[1], name="y1")
y3 = tf.constant(y_array[2], name="y3")
y4 = tf.constant(y_array[3], name="y4")

c1 = tf.constant(c_array[0], name="c1")
c2 = tf.constant(c_array[1], name="c2")

# I took your first line and subtracted c1 from it (same for the second line with c2), introducing d_1 and d_2
d_1 = tf.sqrt(tf.square(x0 - xy_array[0]) + tf.square(y0 - xy_array[1])) - tf.sqrt(tf.square(x1 - xy_array[0]) + tf.square(y1 - xy_array[1])) - c1
d_2 = tf.sqrt(tf.square(x3 - xy_array[0]) + tf.square(y3 - xy_array[1])) - tf.sqrt(tf.square(x4 - xy_array[0]) + tf.square(y4 - xy_array[1])) - c2

# this z_model should actually be zero in the end, in that case there is an intersection
z_model = d_1 - d_2

error = tf.square(z-z_model)

# you can try different values for the "learning rate", here 0.01
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(error)

model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)

    # here you are creating a "training set" of size 1000, you can also make it bigger if you like
    for i in range(1000):
        x_value = np.random.rand()
        y_value = np.random.rand()

        d1_value = np.sqrt(np.square(x_array[0]-x_value)+np.square(y_array[0]-y_value)) - np.sqrt(np.square(x_array[1]-x_value)+np.square(y_array[1]-y_value)) - c_array[0]
        d2_value = np.sqrt(np.square(x_array[2]-x_value)+np.square(y_array[2]-y_value)) - np.sqrt(np.square(x_array[3]-x_value)+np.square(y_array[3]-y_value)) - c_array[1]

        z_value = d1_value - d2_value
        session.run(train_op, feed_dict={x: x_value, y: y_value, z: z_value})

    xy_value = session.run(xy_array)
    print("Estimated intersection: (x, y) = ({a:.3f}, {b:.3f})".format(a=xy_value[0], b=xy_value[1]))

But be aware: this code will probably run for a while, which is why I haven't tested it. Also, I am currently not sure what will happen if there is no intersection; you would probably get the coordinates of the point of closest approach between the two curves.

TensorFlow can be somewhat difficult if you haven't used it before, but it is worth learning, since you can also use it for any deep-learning application (the actual purpose of the library).
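For comparison, the scipy.optimize route mentioned at the start of this answer is considerably less code. A sketch using scipy.optimize.least_squares, where the anchor coordinates are hypothetical and c1/c2 are built from a known point (3, 4) purely so the result can be checked:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical anchor points, one (x, y) pair per row; replace with real data.
anchors = np.array([[0.0, 0.0],
                    [10.0, 0.0],
                    [0.0, 10.0],
                    [10.0, 10.0]])

# Build c1 and c2 from a known intersection point so the root can be verified.
target = np.array([3.0, 4.0])
dist = np.linalg.norm(anchors - target, axis=1)
c = np.array([dist[0] - dist[1], dist[2] - dist[3]])

def residuals(p):
    """Left-hand sides of the two hyperbola equations, minus c1 and c2."""
    r = np.linalg.norm(anchors - p, axis=1)
    return np.array([r[0] - r[1] - c[0], r[2] - r[3] - c[1]])

# Least-squares minimization of the residuals from an initial guess;
# at an exact intersection the final cost is (numerically) zero.
sol = least_squares(residuals, x0=np.array([5.0, 5.0]))
```

If the two hyperbolas do not intersect, this returns the point minimizing the sum of squared residuals rather than failing, which may or may not be what you want.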