I find myself making a lot of convertTo() calls in my C++ OpenCV code. It's confusing: I often don't realize I need to convert an image's bit depth until I get an error message.
For example, I have a Mat representing an image that is 16U. When I call matchTemplate() on it, I get an assertion error saying it expects 8U or 32F. Why shouldn't template matching work at 16U? I hit similar issues when displaying images (although bit depth restrictions make more sense in that case). I find myself fiddling with convertTo() and scaling factors, trying to get images to show up properly with imshow(), and wish I could do this more elegantly (maybe I'm spoiled by MATLAB's imagesc function).
Am I missing something fundamental about what OpenCV expects regarding bit depth? How can I deal with the bit depth requirements of OpenCV's library functions in a cleaner way?