0 votes

I want to pick up colored cubes with a robot arm that I detect with a camera and OpenCV, in Python. I managed to detect the different colored cubes and I calibrated the camera with the checkerboard process.

The setup:

[Image: setup, front view] [Image: setup, top view]

Cube detection: [Image: detected cubes]

The problem is that I don't understand the rest of the process: how to get the object's coordinates from the camera image and translate them into robot-arm coordinates for pick-up.

The following steps have been completed:

  1. Separating the colors with HSV thresholds and drawing bounding boxes, so I have the pixel x, y of each object.

  2. Calibrating the camera, resulting in the following camera matrix and distortion coefficients (a sketch of how these might be applied follows this list):

    Camera Matrix:
    [[1.42715609e+03 0.00000000e+00 9.13700651e+02]
     [0.00000000e+00 1.43275509e+03 5.58917609e+02]
     [0.00000000e+00 0.00000000e+00 1.00000000e+00]]

    Distortion:
    [[ 0.03924722 -0.30622971  0.00124042 -0.00303094  0.49458539]]

  3. Trying to figure out the next steps in the documentation of OpenCV
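
For reference, a minimal sketch of my current understanding (not yet verified) of how the calibration output from step 2 could be applied to a detected pixel with cv2.undistortPoints; the example pixel value below is just a placeholder:

    import cv2
    import numpy as np

    # Intrinsics from the calibration step above
    camera_matrix = np.array([[1.42715609e+03, 0.0, 9.13700651e+02],
                              [0.0, 1.43275509e+03, 5.58917609e+02],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.array([0.03924722, -0.30622971, 0.00124042, -0.00303094, 0.49458539])

    # Placeholder pixel centre (u, v) of a detected cube
    pixel = np.array([[[604.0, 119.0]]], dtype=np.float32)

    # Remove lens distortion; the result is a normalised (x, y) on the z = 1 camera plane
    normalized = cv2.undistortPoints(pixel, camera_matrix, dist_coeffs)
    print(normalized)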

The result

I read through the API documentation on this page: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

But with my skill level I can't seem to extract the practical steps to take in order to reach my goal.

My questions:

  1. How do I use the camera matrix and distortion coefficients to get coordinates of the object in the image frame?
  2. How do I translate the coordinates on the image frame to the robot end-effector coordinates?
  3. If I keep the camera in a fixed position, does this mean I only have to do the calibration once?

******* edit: *******

I went for a different approach. I managed to solve the rotation and translation between two coordinate systems through SVD. But I made an error in thinking I could use pixel coordinates to translate from the camera coordinate system to that of the robot. I think you need to translate the u, v values first.

How can I translate the pixel uv to world coordinates, so that I can use the code below to get translation and rotation to robot arm?
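
One idea I'm considering (a hedged sketch, not something I have working): since all cubes lie on the table plane, a homography from a few known pixel points to their matching table coordinates would map any detected (u, v) into plane coordinates, which could then replace the raw pixel values in Pc in the code below. The reference coordinates here are placeholders, and ideally the pixel would be undistorted first using the calibration above:

    import cv2
    import numpy as np

    # Four reference points: pixel (u, v) and the matching table coordinates (X, Y) in mm.
    # These values are placeholders and would have to be measured on the real table.
    pixel_pts = np.array([[100, 100], [1800, 100], [1800, 1000], [100, 1000]], dtype=np.float32)
    world_pts = np.array([[0, 0], [400, 0], [400, 250], [0, 250]], dtype=np.float32)

    # Homography from the image plane to the table plane (valid because all cubes lie on one plane)
    H_img2world, _ = cv2.findHomography(pixel_pts, world_pts)

    def pixel_to_world(u, v):
        """Map a pixel coordinate (u, v) to (X, Y) on the table plane."""
        p = H_img2world @ np.array([u, v, 1.0])
        return p[:2] / p[2]

    print(pixel_to_world(604, 119))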

Here's my code:

import numpy as np

#######################################################################
# Step 1: Input camera and world coordinates, and calculate centroids #
#######################################################################
print("")

# Camera and robot to world coordinates
Pc = np.matrix([[604,119,0],[473,351,0], [730,329,0]])
print("Camera points matrix: ")
print(Pc)
print("")

Pr = np.matrix([[177,-38,0],[264,-93,0], [258,4.7,0]])
print("Robot points matrix: ")
print(Pr)
print("")

# Calculate centroids
Cc = Pc.mean(axis=0)
Cr = Pr.mean(axis=0)

print("Centroid camera: ")
print(Cc)
print("")
print("Centroid robot: ")
print(Cr)
print("")

# Pc and Pr - centroids of Pc and Pr
Pc_minus_Cc = np.subtract(Pc, Cc)
print("Pc - centroidC: ")
print(Pc_minus_Cc)
print("")

Pr_minus_Cr = np.subtract(Pr, Cr)
print("Pr - centroidR: ")
print(Pr_minus_Cr)
print("")

############################################################################
# Step 2: Calculate H, perform SVD and get rotation and translation matrix #
############################################################################

# Get H: the 3x3 cross-covariance of the centred point sets (points are rows, so H = A^T * B)
print("(Pc - centroidC) transposed: ")
print(Pc_minus_Cc.T)
print("")
H = np.matmul(Pc_minus_Cc.T, Pr_minus_Cr)
print("H: ")
print(H)
print("")

# Perform SVD; note that numpy returns the third factor already transposed (V^T)
u, s, vh = np.linalg.svd(H)
print("SVD result: ")
print("u: ")
print("")
print(u)
print("")
print("s: ")
print(s)
print("")
print("v: ")
print(v)
print("")

# Calculate rotation matrix: R = V * U^T (Kabsch), with a sign fix so det(R) = +1
R = np.matmul(vh.T, u.T)
if np.linalg.det(R) < 0:
    vh[-1, :] *= -1
    R = np.matmul(vh.T, u.T)
print("Rotation matrix: ")
print(R)

# Calculate translation: t = Cr - R * Cc (column vectors)
t = Cr.T - np.matmul(R, Cc.T)
print("t: ")
print(t)
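
As a sanity check (an extra step, not part of the original script), R and t can be applied back to one of the camera points; a large residual here shows that raw pixel coordinates carry no scale information, which is the problem described in the edit above:

p_cam = Pc[0].T                        # first camera point as a 3x1 column vector
p_mapped = np.matmul(R, p_cam) + t     # transformed into robot coordinates
print("Camera point mapped into robot frame: ")
print(p_mapped.T)
print("Corresponding robot point: ")
print(Pr[0])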
The camera matrix relates to the camera itself (focal length, image size, etc.); I suggest this article. In other words, for question 3: if you have the same camera/resolution, the calibration (in this case the camera matrix) stays the same. For questions 1 and 2, you need depth, or known points from which you can derive the depth. You can take a look at this and this. – api55
Thanks! I'm reading through it now. – Mark de Jonge
Hi! I'm a bit further now, see my edit. Could you let me know if you can help me with the final step? – Mark de Jonge

1 Answer

0 votes

To 1) You already have code that draws boxes around the detected objects, so you already have the coordinates in your matrix. If not, you could do something like this:

    for c in contours:
        if cv2.contourArea(c) < self.min_area:
            continue
        # detected
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

and then x + w / 2 is your x in the matrix
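
For example, inside that loop the centre pixel of each detected cube would be (variable names are just illustrative):

        cx = x + w / 2    # centre column (u) of the bounding box
        cy = y + h / 2    # centre row (v) of the bounding box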

To 2) You cannot do this directly; you would need some orientation/reference point so that the arm knows where your matrix (its x and y distances) starts in the arm's world frame.
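
One possible way to get that mapping (a hedged sketch, not from the question): measure a few reference points both in pixels and in robot coordinates (the three correspondences from the question's own code are reused here as examples) and fit a 2D similarity transform with cv2.estimateAffinePartial2D:

    import cv2
    import numpy as np

    # Reference markers: pixel positions and the same points in robot coordinates (mm),
    # taken from the correspondences listed in the question's code.
    pixel_refs = np.array([[604, 119], [473, 351], [730, 329]], dtype=np.float32)
    robot_refs = np.array([[177, -38], [264, -93], [258, 4.7]], dtype=np.float32)

    # Rotation + uniform scale + translation from the image plane to the robot XY plane
    M, _ = cv2.estimateAffinePartial2D(pixel_refs, robot_refs)

    def pixel_to_robot(u, v):
        """Convert a detected cube centre (pixels) into robot XY coordinates."""
        return M @ np.array([u, v, 1.0])

    print(pixel_to_robot(604, 119))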

To 3) Calibration always depends on your lighting conditions, right? So as long as they don't change, your calibration should be fine. However, it turns out that calibration is needed from time to time, e.g. with USB cameras.
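
For example, the camera matrix and distortion coefficients can be stored once after calibration and simply reloaded in the pick-up script instead of recalibrating every run (file name and variable names below are arbitrary):

    import numpy as np

    # camera_matrix and dist_coeffs are the outputs of the earlier calibration step
    np.savez("camera_calibration.npz", camera_matrix=camera_matrix, dist_coeffs=dist_coeffs)

    # ...and later, reload them instead of running the checkerboard procedure again
    data = np.load("camera_calibration.npz")
    camera_matrix = data["camera_matrix"]
    dist_coeffs = data["dist_coeffs"]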