3
votes

My current project involves transcribing PDF texts into text files. I first tried putting the image files directly into an OCR program (tesseract), and it didn't do that well. The original images are basically scans of old newspapers and have some background noise, which I am sure tesseract has problems with. So I am trying to apply some image preprocessing before feeding them into tesseract. Are there any suggestions for an open source image preprocessing engine that fits this situation well? Instructions on how to use it would be even more appreciated!


3 Answers

5
votes

I've never heard of an "image preprocessing engine" for that purpose, but you can take a look at OpenCV (Open Source Computer Vision Library) and implement your own "pre-processing engine". OpenCV is a computer vision library that offers many features for image processing.

One interesting thing you might want to test as a preprocessing step is applying a threshold to the image to remove noise. Anyway, I've talked about this kind of thing in this thread.
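As a rough sketch of that idea, using OpenCV's Python bindings (the file names are just placeholders, and the blur/threshold settings would need tuning for your scans):

```python
import cv2

# Load the scanned page as a single-channel grayscale image
# ("page.png" is a placeholder for your own scan).
gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# Light blur to suppress speckle noise before thresholding.
gray = cv2.GaussianBlur(gray, (3, 3), 0)

# Otsu's method picks a global threshold automatically, giving a
# clean black-and-white image to feed to tesseract.
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("page_bw.png", binary)
```

You would then run tesseract on page_bw.png instead of the raw scan.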

4
votes

As @karlphillip mentioned, I highly doubt there's a readily available preprocessing engine for your purposes, since preprocessing techniques vary greatly with the desired result.

Some common approaches to cleaning up the text in noisy images include:

1. Adaptive thresholding (Sauvola or Niblack binarization).

2. Applying a median filter of a size slightly larger than the text to estimate a background image, then subtracting that background from the original image (to remove larger-scale noise such as creases, stains, handwritten notes, etc.).

OpenCV has implementations of these filters/binarization methods. If you have access to published literature, there's quite a bit of work on the binarization of noisy documents.
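A rough sketch of both ideas with OpenCV's Python bindings (the file names and the window sizes are only illustrative and would need tuning for your newspaper scans; Sauvola/Niblack variants live in the opencv-contrib "ximgproc" module if you want those specifically, while the standard build offers adaptiveThreshold):

```python
import cv2

# Placeholder input: a grayscale scan of a newspaper page.
gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# 1. Adaptive (local) thresholding: the threshold is computed per
#    neighbourhood, which copes with uneven backgrounds much better
#    than a single global threshold.
adaptive = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY, 31, 15)

# 2. Background subtraction: a median filter slightly larger than the
#    text estimates the page background (creases, stains, gradients).
background = cv2.medianBlur(gray, 31)
text_only = cv2.subtract(background, gray)   # bright text on black

# Re-invert and binarize so the result is dark text on white,
# which is what tesseract expects.
_, cleaned = cv2.threshold(cv2.bitwise_not(text_only), 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("page_adaptive.png", adaptive)
cv2.imwrite("page_cleaned.png", cleaned)
```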

0
votes

Check out ScanTailor. It has pretty impressive pre-processing functionality and it is open source.