2
votes

I have a system with a fixed wide-angle camera and a moving object. I captured 2064x40 px images at 10mm intervals while the object moved at constant velocity. I also captured 2048x40 px images without constant velocity. I would like to stitch these captured images.

First of all, I tried OpenCV's stitching method by referring to this link. However, I got error code 1 and learned that two consecutive images do not have enough overlap area to be stitched.

After that, I thought I could simply concatenate the images for the constant-velocity case. I used the code below with 13 px as the shifting parameter.

Code that I tried:

import numpy as np
import cv2
import os

from Stitching.Blending import UVSSBlendingConcate
from Stitching.DistortionCorrection import load_coefficients


def load_images_from_folder(folder):
    print("\nImages are reading from folder: " + folder)
    images = []
    for filename in os.listdir(folder):
        img = cv2.imread((folder + "/" + filename))
        if img is not None:
            images.append(img)
    return images


def unDistortImages(images):
    # undo lens distortion using the camera matrix and distortion
    # coefficients from a previous chessboard calibration
    mtx, dist = load_coefficients('calibration_chessboard.yml')
    for i in range(len(images)):
        images[i] = cv2.undistort(images[i], mtx, dist, None, None)
    return images


def LineTriggerConcate(dx, images, blending, IsFlip, IsUnDistorted):
    print("\nImage LineTrigger Concate Start")

    if IsUnDistorted:
        images = unDistortImages(images)

    # crop dx rows (starting at row 2) out of every frame
    cropped_images = []
    for i in range(len(images) - 1):
        strip = images[i][2:2 + dx, 0:2064]
        if IsFlip:
            strip = cv2.flip(strip, 0)
        cropped_images.append(strip)

    if not blending:
        return cv2.vconcat(cropped_images)

    # blend neighbouring strips pairwise, accumulating the result;
    # the overlap is dx // 2 rows (integer division, since it is a pixel count)
    blendingResult = cropped_images[0]
    for i in range(1, len(cropped_images)):
        blendingResult = UVSSBlendingConcate(blendingResult, cropped_images[i], dx // 2)

    print("\nImage LineTrigger Concate Finish")
    return blendingResult


def concateImages(image_list):
    image_h = cv2.vconcat(image_list)
    return image_h


def main():
    images_path = "10mm"
    image_list = load_images_from_folder(images_path)

    # LineTriggerConcate Parameters
    shiftParameter = 13
    IsBlending = False
    IsFlipped = True
    IsUnDistorted = False
    result = LineTriggerConcate(shiftParameter, image_list, IsBlending, IsFlipped, IsUnDistorted)

    cv2.imwrite(os.path.join(images_path, f"{shiftParameter}_Shift_{IsBlending}_Blending_Result.bmp"), result)
    print('Successfully saved to %s' % images_path)


if __name__ == '__main__':
    main()

Output image:

[Result for the 10mm dataset]

[A closer look at the problem]

In the result above, the transitions between strips are not smooth. I tried to fix them with blending and undistortion, but without success.

On the other hand, I assumed the object's velocity is constant, but unfortunately it isn't in the real case. When the object accelerates, some parts of the image are elongated or shortened.
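Since the velocity is not truly constant, a fixed shift cannot be right for every image pair; a common remedy is to estimate the vertical shift separately for each consecutive pair. A minimal sketch of this idea (my own assumption, not the asker's code) using 1-D phase correlation on the row profiles of two grayscale strips:

```python
import numpy as np

def estimate_row_shift(prev_strip, next_strip):
    """Estimate how many rows next_strip is shifted relative to prev_strip.

    Both inputs are 2-D grayscale arrays of the same height. The columns are
    collapsed to a 1-D row profile, then 1-D phase correlation gives the shift.
    """
    a = prev_strip.mean(axis=1)
    b = next_strip.mean(axis=1)
    a = a - a.mean()
    b = b - b.mean()
    fa, fb = np.fft.fft(a), np.fft.fft(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-9          # keep only the phase
    corr = np.real(np.fft.ifft(cross))
    shift = int(np.argmax(corr))
    if shift > len(a) // 2:                # wrap large shifts to negative values
        shift -= len(a)
    return shift
```

OpenCV's `cv2.phaseCorrelate` does the 2-D equivalent with sub-pixel precision, if you prefer to stay inside OpenCV.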

Could someone please advise a methodology or relevant research?
I am also sharing part of the 10mm interval dataset.

1
blending is an isolated problem you can solve by not blending, just concatenating slices (vconcat, I assume). That one repeating dark row of pixels seems to be caused by an attempt at blending. -- I'd work with the "flat" projection slices, then try feature matching and find an affine transformation (in fact just a translation). Or you could install some type of physical sensor that measures the position of the object somewhat precisely. - Christoph Rackwitz
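Following the comment's suggestion to work with the flat slices and a per-pair translation, the panorama can then be assembled from a variable number of rows per frame. This is a hypothetical helper (the name and the `shifts` list are my own; the shifts are assumed to come from whatever per-pair estimator you use):

```python
import numpy as np

def assemble_strips(frames, shifts, row0=2):
    """Stack a variable-height slice from each frame into one image.

    frames: list of H x W arrays; shifts[i] is the number of rows the object
    moved between frame i and frame i+1, so len(shifts) == len(frames) - 1.
    row0 mirrors the crop offset used in the question's code.
    """
    slices = []
    for frame, dx in zip(frames, shifts):
        slices.append(frame[row0:row0 + dx, :])  # take only the new rows
    return np.vstack(slices)
```

With a constant `shifts` list this reduces to the question's `vconcat` approach; with per-pair estimates it compensates for acceleration.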

1 Answer

0
votes

Your current solution seems pretty darn close to me. Have you tried using gradient descent to clean up the last little bit of alignment prior to combining? Basically, create some form of value function (it could be the sum of pixel-by-pixel color distances). You could apply this value function to a single line in the image, several lines, or a random scatter of points, and then apply single-pixel gradient descent to the whole secondary image.
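The single-pixel gradient descent described above can be sketched as a ±1-row hill climb on a sum-of-squared-differences value function. `ssd_cost` and `refine_shift` are hypothetical names of my own; the cost is normalized by overlap size so that small overlaps are not unfairly favored:

```python
import numpy as np

def ssd_cost(ref, moving, dy):
    """Mean squared difference over the overlap after shifting `moving` down by dy rows."""
    h = ref.shape[0]
    if dy >= 0:
        a, b = ref[dy:], moving[:h - dy]
    else:
        a, b = ref[:h + dy], moving[-dy:]
    return float(((a.astype(float) - b.astype(float)) ** 2).sum() / a.size)

def refine_shift(ref, moving, dy0=0, max_iter=50):
    """Single-pixel descent: step +/-1 row while the cost keeps decreasing."""
    dy = dy0
    cost = ssd_cost(ref, moving, dy)
    for _ in range(max_iter):
        candidates = {d: ssd_cost(ref, moving, d) for d in (dy - 1, dy + 1)}
        best = min(candidates, key=candidates.get)
        if candidates[best] >= cost:
            break                      # local minimum reached
        dy, cost = best, candidates[best]
    return dy
```

Because it only looks one pixel left and right, this refines an initial estimate (e.g. from phase correlation) rather than finding the shift from scratch; a bad starting point can leave it stuck in a local minimum.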