
I have several cameras running on different threads and I am trying to run detection on each of them. I define the model in the main thread and pass it to the threads.

It works with one camera, but as soon as the second camera thread starts, it throws the following error:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "D:\detect\testing\cam.py", line 77, in video_read
    (locs, preds) = detect_and_predict(frame, person_net, hat_net)
  File "D:\detect\testing\cam.py", line 115, in detect_and_predict
    person = cv2.cvtColor(person, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.3.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "D:\detect\testing\cam.py", line 77, in video_read
    (locs, preds) = detect_and_predict(frame, person_net, hat_net)
  File "D:\detect\testing\cam.py", line 115, in detect_and_predict
    person = cv2.cvtColor(person, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.3.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

I tried:

if frame is not None:
 #detection code
else:
 print('empty frame')

but it still throws the same error. Am I missing something?

code:

def detect_and_predict(frame, person_net, hat_net):
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0, (512, 512), (104.0, 177.0, 123.0))
    person_net.setInput(blob)
    detections = person_net.forward()
    persons = []
    locs = [] 
    preds = []

    for i in range(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.6:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            (startX, startY) = (max(0, startX), max(0, startY))
            (endX, endY) = (min(w - 1, endX), min(h - 1, endY))

            person = frame[startY:endY, startX:endX]
            person = cv2.cvtColor(person, cv2.COLOR_BGR2RGB)
            person = cv2.resize(person, (224, 224))
            
            person = img_to_array(person)
            person = preprocess_input(person)
            
            persons.append(person)
            locs.append((startX, startY, endX, endY))
    if len(persons) > 0:
        persons = np.array(persons, dtype="float32")
        preds = hat_net.predict(persons, batch_size=32)
    return (locs, preds)
Please post your code snippet to analyze; nothing can be done with just the traceback. – Ajay A

1 Answer


I solved it. Just use multiprocessing instead of threading.
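A minimal sketch of the one-process-per-camera layout, for anyone hitting the same wall. The camera indices, the result queue, and the commented capture loop are assumptions for illustration; `detect_and_predict`, `person_net`, and `hat_net` are the names from the question. Since each process gets its own interpreter, each one opens its own capture and loads its own model copies, so nothing is shared between cameras:

```python
import multiprocessing as mp

def camera_worker(cam_index, result_queue):
    # Each process would open its own cv2.VideoCapture(cam_index) and load
    # its own copies of person_net / hat_net. The capture/detect loop is
    # left as comments because the models and detect_and_predict live in
    # the question's cam.py:
    #
    #     cap = cv2.VideoCapture(cam_index)
    #     while True:
    #         ok, frame = cap.read()
    #         if not ok or frame is None:
    #             continue                  # skip empty frames
    #         locs, preds = detect_and_predict(frame, person_net, hat_net)
    #         result_queue.put((cam_index, locs, preds))
    #
    result_queue.put((cam_index, [], []))  # stand-in for a real detection

if __name__ == "__main__":
    results = mp.Queue()
    # Assumed camera indices 0 and 1; one process per camera.
    procs = [mp.Process(target=camera_worker, args=(i, results), daemon=True)
             for i in (0, 1)]
    for p in procs:
        p.start()
    for _ in procs:
        print(results.get())  # one (cam_index, locs, preds) tuple per camera
    for p in procs:
        p.join()
```

Note that loading the networks inside the worker (rather than passing them from the main process) also avoids pickling the models across the process boundary.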