Tesseract OCR Pitch Text

I have a question about some terminology when it comes to optical character recognition software, specifically the Tesseract OCR engine.
I am reading an article on the Tesseract OCR engine, and it frequently mentions something called "fixed pitch text", and I am not entirely sure what that means. I am assuming pitch is the space that a line of text occupies, but that does not really make sense in the context of the article I am reading.
This is a sentence from the article that uses this wording:
Blobs are organized into text lines, and the lines and regions are analyzed for fixed pitch or proportional text.
I have looked online for some definition of this, but most articles on OCR just use it with no explanation. I am guessing this means it is a pretty simple/common term.
If anybody cares, the article I am reading is called "An Overview of the Tesseract OCR Engine", written by Ray Smith.

When first trying to find text in an image, Tesseract identifies candidate regions by gathering connected outlines into blobs. Once the blobs have been organized into text lines, Tesseract checks whether each line is set in fixed pitch or proportional text. Fixed pitch (monospaced) text is text in which every character occupies the same horizontal width, as on a typewriter or in a font like Courier; in proportional text each character has its own width (an 'i' is narrower than an 'm'). The distinction matters for segmentation: a fixed pitch line can be chopped directly into equal-width character cells, while a proportional line needs a more careful analysis of word and character spacing.
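To make the distinction concrete, here is a small illustrative Python sketch (the idea only, not Tesseract's actual algorithm): given the character bounding boxes of one text line, the line is called fixed pitch when the horizontal advance between successive characters is nearly constant.

# Illustrative sketch, not Tesseract's real logic: decide whether a text
# line looks fixed pitch from its character bounding boxes.
def looks_fixed_pitch(char_boxes, tolerance=0.1):
    # char_boxes: (left, right) x-coordinates per character, in reading order.
    lefts = sorted(left for left, _ in char_boxes)
    advances = [b - a for a, b in zip(lefts, lefts[1:])]
    if not advances:
        return True  # a single character is trivially fixed pitch
    mean = sum(advances) / len(advances)
    # Fixed pitch: every advance stays within `tolerance` of the mean.
    return all(abs(a - mean) <= tolerance * mean for a in advances)

# Monospaced line: a character cell every 10 pixels -> fixed pitch.
print(looks_fixed_pitch([(0, 8), (10, 18), (20, 28), (30, 38)]))  # True
# Proportional line: narrow 'i', wide 'm' -> the advances vary.
print(looks_fixed_pitch([(0, 3), (5, 17), (19, 24), (26, 38)]))   # False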

Related

Can you train tesseract with images instead of text and a font?

In the Tesseract documentation, a method of training with sample text and a font is explained.
I used jTessBoxEditor, but it works pretty much like the Tesseract training tools.
I got somewhat acceptable results with this, but I guess the optimal solution would be to train Tesseract with the actual kind of images it will have to recognize anyway.
As I only need to recognize digits, I can cut out each of them by hand, perhaps several versions of each digit, and train Tesseract with those images, even setting the boxes by hand.
Is there a way to do this?
If you are trying to train Tesseract 4, you can use ocrd-train. You basically prepare images corresponding to each line of text together with their ground truth, and it does all the remaining work for you.
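A rough sketch of that workflow in Python, assuming the image-plus-.gt.txt pairing that the ocrd-train/tesstrain documentation describes; the directory name and the crops list below are hypothetical placeholders:

# Write (line image, transcription) pairs in the layout that
# ocrd-train/tesstrain expects: line_0000.tif next to line_0000.gt.txt.
from pathlib import Path
from PIL import Image  # pip install Pillow

gt_dir = Path("digits-ground-truth")  # hypothetical output directory
gt_dir.mkdir(exist_ok=True)

# Each entry: (path to a hand-cut digit image, its correct text).
crops = [("samples/zero_01.png", "0"), ("samples/seven_03.png", "7")]

for i, (img_path, text) in enumerate(crops):
    Image.open(img_path).save(gt_dir / f"line_{i:04d}.tif")
    (gt_dir / f"line_{i:04d}.gt.txt").write_text(text + "\n")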

What is a Blob in Tesseract OCR

I am learning Tesseract OCR and reading this article, which is based on this article. From the first article:
First step is Adaptive Thresholding, which converts the image into binary images. Next step is connected component analysis, which is used to extract character outlines. This method is very useful because it does the OCR of images with white text and a black background. Tesseract was probably first to provide this kind of processing. Then after, the outlines are converted into Blobs. Blobs are organized into text lines, and the lines and regions are analyzed for fixed pitch or proportional text.
Could anyone explain what a Blob is?
From https://tesseract-ocr.repairfaq.org/tess_glossary.html :
Blob
Isolated, small region of the scanned image. It's delineated by its outline. Tesseract 'juggles' the blobs to see if they can be split further into something that improves the confidence of recognition. Sometimes blobs are 'combined' if that gives a better result. See pithsync.cpp, for example.
Generally a blob (also called a Connected Component) is a connected piece (i.e. not broken) in a binary image. In other words, it's a solid element in a binary image.
Blob finders are a key step in any system that aims at extracting or measuring data from digital images.
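For a hands-on feel, here is a minimal OpenCV sketch that extracts blobs (connected components) from a binarized image; the input path is a placeholder:

import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
# Threshold so text pixels become white (255) on a black background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Each label is one blob; stats holds its bounding box and pixel count.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, num):  # label 0 is the background
    x, y, w, h, area = stats[i]
    print(f"blob {i}: box=({x},{y},{w},{h}) area={area}")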

How can a scanned page be divided into words like the reCaptcha project?

I would like to digitize a book in a similar way to the reCaptcha project. Is there already a system for inputting an image and then outputting little images cropped around words? Any ideas on how to do this?
You should look into the Tesseract OCR project, on which reCaptcha was probably based. It has the capability to output the coordinates of recognized words; you then crop the page to those coordinates and you are done.
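A minimal sketch of that approach with python-tesseract (pytesseract); the file names are placeholders:

import pytesseract
from PIL import Image

page = Image.open("scan.png")
# One entry per detected item; 'text' is empty for non-word levels.
data = pytesseract.image_to_data(page, output_type=pytesseract.Output.DICT)

for i, word in enumerate(data["text"]):
    if word.strip():  # keep only real words
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        page.crop((x, y, x + w, y + h)).save(f"word_{i:05d}.png")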
If you just want to split the image into multiple images, one word each, you could try to find the word bounding boxes and then use those coordinates for the splitting. This can be done by taking histograms/projections of the document in the horizontal direction and then, for each line, in the vertical direction. An example algorithm with some pictures describing the idea can be found in the paper "Document Page Decomposition by the Bounding-Box Projection Technique" (http://haralick.org/conferences/71281119.pdf). You could implement this in OpenCV.
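A rough NumPy/OpenCV sketch of the projection idea (the gap thresholds are guesses that would need tuning for real scans):

import cv2
import numpy as np

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
ink = binary // 255  # 1 where there is ink, 0 elsewhere

def runs(profile, min_gap):
    # Return (start, end) spans where profile > 0, merging spans whose
    # separating gap is narrower than min_gap (e.g. gaps between letters).
    spans, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i
        elif not v and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    merged = []
    for s, e in spans:
        if merged and s - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged

for top, bottom in runs(ink.sum(axis=1), min_gap=5):        # text lines
    line = ink[top:bottom]
    for left, right in runs(line.sum(axis=0), min_gap=8):   # words
        print("word box:", (left, top, right, bottom))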
Alternatively, you can use Tesseract as mentioned by beppe9000. Perhaps this helps: Getting the bounding box of the recognized words using python-tesseract.
But then you take on the whole complexity of a trained OCR engine even though you only want the bounding boxes.

Achieve better recognition results via training Tesseract

I have a question regarding achieving better recognition results with Tesseract. I am using Tesseract to recognize serial numbers. The serial numbers consist of a single font, the characters A-Z and 0-9, and occur in different sizes and lengths.
At the moment I am able to recognize about 40% of the serial number images correctly. The images are taken with a mobile phone camera, so the image quality isn't the best.
Particular problem characters are 8/B and 5/6. Since I am recognizing only serial numbers, I am not using any dictionary improvements, and every character is recognized independently.
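(For context, restricting Tesseract to a fixed character set like this is usually done with the tessedit_char_whitelist variable; a minimal pytesseract sketch, with a placeholder image path:)

import pytesseract
from PIL import Image

# Allow only the characters that can occur in a serial number.
config = "-c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
print(pytesseract.image_to_string(Image.open("serial.png"), config=config))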
My question is: does anyone already have experience with achieving better recognition results by training Tesseract? How many images would be needed to get good results?
For training Tesseract, should I use serial numbers that were printed and then photographed, or should I use the original digital serial numbers, without printing and photographing?
Maybe somebody already has experience in this area.
Regarding training Tesseract: I have already trained Tesseract with some images. For that I printed all characters in different sizes, photographed them, and labeled them correctly. Example training photo of the character 5
Is this a good or bad training example? Since I only want to recognize single characters without any dependency between them, I thought I wouldn't have to use words for training.
So far I have only trained with 3 of these images, for the characters B, 8, 6, and 5, which does not result in better recognition compared with the original English (eng) Tesseract data.
best regards,
Christoph
I am currently working on a Sikuli application that uses Tesseract to read text (strings and numbers) from screenshots. I found that the best way to achieve accuracy was to process the screenshot before performing OCR on it. However, most of the text I am reading is green text on a black background, which made preprocessing my preferred solution. I used Scalr's resize method on a BufferedImage to increase the size of the image:
BufferedImage bufImg = Scalr.resize(...)
which instantly yielded more accurate results with black text on a gray background. I then used BufferedImage's BufferedImage.TYPE_BYTE_GRAY and BufferedImage.TYPE_BYTE_BINARY options when creating new BufferedImages to convert the image to grayscale and black/white, respectively.
Following these steps brought Tesseract's accuracy from about 30% to around 85% when dealing with green text on a black background, and really close to 100% when dealing with normal black text on a white background (although sometimes letters within a word are mistaken for numbers, e.g. hel10).
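For anyone not on Java, the same three steps (upscale, grayscale, binarize) might look like this with Pillow; the scale factor and file names are just placeholders:

from PIL import Image

img = Image.open("screenshot.png")
img = img.resize((img.width * 3, img.height * 3), Image.LANCZOS)  # upscale
gray = img.convert("L")   # roughly BufferedImage.TYPE_BYTE_GRAY
bw = gray.convert("1")    # roughly BufferedImage.TYPE_BYTE_BINARY
bw.save("preprocessed.png")  # feed this to Tesseract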
I hope this helps!

How to make Tesseract give relevant results in the presence of noise?

I am using Tesseract 3.0.0 and I bumped into the following problem: when there is something too small for Tesseract to recognize, it seems to get merged with other fragments. As a result, nothing relevant is returned.
The image below shows 3 cases. Only the rectangle with the dashed line is passed to Tesseract. Above the rectangle is the result (V over T means a new line).
The last case is the problematic one. Is there some way to improve Tesseract in situations like this?
As far as I know, Tesseract does not have proper image segmentation yet (or Document Analysis, as it is called in commercial OCR applications). Typically, before OCR is done, the image is split into separate areas that contain text, pictures, barcodes, lines, and so on. Then you apply OCR only on the text areas and don't run into the problems you have just described.
Earlier versions of Tesseract did not have that functionality at all; Tesseract was supposed to be used as a line recognizer only, or as a so-called field-level recognizer, where you use it on small snippets of text cut from a bigger image.
I have not followed thoroughly what was introduced in 3.0; it is probably already partially there, but obviously it does not work as expected, as you have just found out.
There is another open-source project, OCRopus, that approached this problem exactly as I described: first Document Analysis (aka segmentation) and only then OCR. Their earlier versions actually used Tesseract for OCR after the analysis step finished. But later they introduced their own OCR (which is still not very good) and moved Tesseract plugin support down the priority list.
Here's what you actually can do to address your problem:
If your images have a very typical structure, you can try to do some dumb segmentation yourself and cut the text from the image before passing it to Tesseract (see the sketch after this answer). However, if you expect to have to support a wide variety of images, just forget it.
You can check OCRopus and see if its segmentation works for your images. If yes, then you can spend some time making OCRopus and Tesseract work together.
Well, if what you are doing is not just for fun and you value your time, I would recommend considering a commercial OCR engine like ABBYY. You will get much higher accuracy in both segmentation and OCR out of the box, and professional customer support, of course.
Disclaimer: I work for ABBYY
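To illustrate option 1 above, a minimal sketch of the "dumb segmentation" idea: crop a known text region yourself and hand only that snippet to Tesseract. The coordinates are made-up placeholders, and --psm 7 (treat the crop as a single text line) is the spelling in newer Tesseract versions (3.0.x used -psm):

import pytesseract
from PIL import Image

page = Image.open("form.png")
field = page.crop((120, 80, 420, 110))  # hypothetical fixed field position
print(pytesseract.image_to_string(field, config="--psm 7"))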