I have a PDF of 100+ handwritten pages that I need to convert to machine-readable text. So far I have tried Tesseract and a free online tool with no success; the output is gibberish.
tesseract myscan.png out -l eng
I've attached one example page. It contains text, mathematical symbols (e.g. the integral sign), and occasionally pictures.
Maybe I'm using Tesseract wrong? Could anyone try to get decent output from this?
I use http://www.techsupportalert.com/best-free-ocr-software.htm
Watch out for the installer trying to bundle extra software.
When it works, it gives you text snippets to copy and paste.
But don't rush to download this one; try yours again first.
The problem likely isn't the software; it's probably your input.
Scan at 600 dpi.
Try to increase the contrast and sharpen the image. The more clearly the letters stand out from the background, and the more distinct the spacing inside the loops, the better your chance of a successful OCR capture.
These adjustments are best made in your original scanning software. An 8 MP or better camera can also produce a usable scan.
Use GIMP to tweak the image after the scan.
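As a rough illustration of the contrast advice above, here is a minimal sketch in Python. It uses a plain list of grayscale values as stand-in pixel data; a real workflow would do the equivalent in GIMP or the scanner software, and the function name here is made up for illustration.

```python
# Minimal sketch of the contrast-stretch idea, using a plain list of
# grayscale values as stand-in pixel data.

def stretch_contrast(pixels):
    """Linearly rescale values so the darkest pixel maps to 0 and the
    brightest to 255, making ink stand out from the background."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return pixels[:]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# Faint handwriting (values 120-200) on a grayish page becomes full-range:
page = [180, 120, 200, 140, 190]
print(stretch_contrast(page))  # -> [191, 0, 255, 64, 223]
```

The same idea (with smarter curves) is what "increase the contrast" does in an image editor.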
I have been thinking about security concerns in regards to OCR programs such as Tesseract.
My theory is that malicious code printed out in plain text could be photographed and saved as an image file. (This leaves the hex and headers free from any change.)
Then, using OCR, the JPEG could be converted to greyscale and the characters read and executed, perhaps via an exploit within the OCR application.
Looking back at the way certain worms could self-execute in Windows via preview, perhaps something similar can be done with OCR.
I imagine it's one of the key security concerns for a company developing an OCR application so this may be very hard to provide a proof of concept.
If anyone would like to explore this concept, or perhaps explain why it is, or indeed is not, possible, I would appreciate it.
This is my first post so sorry if any forum rules have been missed.
I have tried to improve the results of open-source OCR software. I'm using Tesseract because I find it still produces better results than GOCR, but it has huge problems with bad-quality input. So I tried to preprocess the image with various tools I found on the internet:
unpaper
Fred's ImageMagick Scripts: TEXTCLEANER
manually using GIMP
But I was not able to get good results with this bad test document (really just for testing; I don't need the content of this file):
http://9gag.com/gag/aBrG8w2/employee-handbook
This online service works surprisingly well with this test document:
http://www.onlineocr.net/
I'm wondering if it is possible to get similar results with Tesseract using smart preprocessing. Are the open-source OCR engines really so bad compared to commercial ones? Even Google uses Tesseract to scan documents, so I was expecting more...
Tesseract's recognition precision is a little lower than that of the best commercial engine (ABBYY FineReader), but it is more flexible because it is open source.
This flexibility sometimes requires preprocessing, because Tesseract cannot handle every situation on its own.
It is actually used by Google because Google is its main sponsor!
The first thing you could do is enlarge the text so that characters are at least 20 pixels wide. Since Tesseract uses the main segments of character outlines as its features, it needs larger characters than some other algorithms do.
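A toy illustration of that enlarging step, assuming a grayscale image stored as a list of rows (a real pipeline would resize the actual scan with ImageMagick or similar):

```python
def upscale_nn(img, factor):
    """Nearest-neighbour upscaling of a grayscale image (list of rows).
    Enlarging the scan so glyphs reach roughly 20 px gives Tesseract's
    outline-segment features more to work with."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in img for _ in range(factor)]

tiny = [[0, 255],
        [255, 0]]
print(upscale_nn(tiny, 2))
# -> [[0, 0, 255, 255], [0, 0, 255, 255], [255, 255, 0, 0], [255, 255, 0, 0]]
```

Nearest-neighbour keeps edges crisp; a smoother interpolation (bicubic) is usually preferred for OCR, but the goal is the same: bigger glyphs.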
Another thing you could try, again with the test document you mentioned, is to binarize the image with an adaptive thresholding method (you can find some information about that at https://dsp.stackexchange.com/a/2504), because the illumination varies across the page. Tesseract binarizes the image internally, but this could be a case where that fails (it is similar to the example in "Improving the quality of the output" in the Tesseract documentation, where you can also find other useful information).
I'm working on getting the Lincoln font to work in Tesseract, and I'm getting abysmal results, even after going through the wildly complicated training process.
This is what the font looks like, so yeah, it's a bit tricky:
I've carefully made a training image, and then used that to make a box file. The training image is here (25MB!). The image is 300 DPI, and has representative characters nicely spaced out vertically and horizontally.
I made a box file for the training image, and it worked properly. I've verified that it's correct using a box file editor.
I took this box file/tif file, and used it to create training data. I did likewise with the 30 or so other sample images/fonts provided by Tesseract.
I created the unicharset file.
I created a font_properties file. There's no guidance on the site about when fraktur should be used. So I've tried it both this way (fraktur on for Lincoln):
eng.lincoln.box 0 0 0 0 1
And this way (fraktur off):
eng.lincoln.box 0 0 0 0 0
And finally, I've tried this with and without dictionary files. When I used dictionary files, they were the wordmap from my search engine, Sphinx, and they have about 15K common words and about 20K uncommon ones.
In all cases, when I try to OCR the first couple lines of this file (3MB), the quality is abysmal. Rather than getting:
United States Court of Appeals
for the Federal Circuit
I get:
OniteiJ %tates C0urt of QppeaIs
for the jfeI1eraICircuit
Why?
I think you'll need a lot more samples (letters) and better training images (clean background, grayscale, 300 DPI, etc.). And try to train with only one font (for instance, Lincoln) first. You can use jTessBoxEditor tool to generate your training images and edit the box files.
Once you master the training process, you can add other fonts to your training. You can test the success of the resultant language data by using it in performing OCR on the training image itself -- the recognition rates should be high.
The font name in font_properties should be the font name itself, not the box-file name:
lincoln 0 0 0 0 1
(The five flags are italic, bold, fixed_pitch, serif, and fraktur.)
I am not a Tesseract expert but I have evaluated nearly every OCR engine available and my comments are based on my experience over the years of analysing OCR errors.
Just wondering why your image has speckles in the background and not a pure white background. I don't know how Tesseract or the training tool works but the background could be causing some problems.
Just reading the sample page is difficult and requires a large amount of concentration. Characters such as F and I are very similar, as are U and N. Tesseract, like many OCR engines, uses many different techniques to recognise a character, and there is not a whole lot of difference between many of these characters in terms of the strokes and curves used in the font.
These characters, especially the uppercase ones, would confuse many matching algorithms simply because they are so different from standard Latin/Roman characters. This shows through in your results, i.e. every capital letter has an OCR error.
I'm trying to use Tesseract-OCR to recognise the text in images that contain only text, but the text is set in a handwritten font called Journal.
Example:
The result is not the best:
Maxima! size` W (35)
Is there any way to improve the result, or better yet, to get an exact result?
I am surprised Tesseract is doing so well. With a little bit of training you should be able to train the lower case 'l' to be recognised correctly.
The main problem you have is the top of the large T character. The horizontal line extends across 2 (possibly 3) other character cells and this would cause a problem for any OCR engine when it tries to segment the characters for recognition. Training may be able to help in this case.
The next problem is the . and : which are very light/thin and are possibly being removed with image pre-processing before the OCR even starts.
Overall the only chance to improve the results with Tesseract would be to investigate training. Here are some links which may help.
Alternative to Tesseract OCR Training?
Tesseract OCR Library learning font
Tesseract confuses two numbers
Like Andrew Cash mentioned, it will be very hard to OCR that T because it intersects several of the following characters.
For better results you may want to try a more accurate SDK. Have a look at ABBYY Cloud OCR SDK, a cloud-based OCR SDK recently launched by ABBYY. It's in beta, so for now it's totally free to use. I work for ABBYY and can provide you additional info on our products if necessary. I've sent the image you attached to our SDK and got this response:
Maximal size: lall (35)
I need an open OCR library which is able to scan complex printed math formulas (for example some formulas which were generated via LaTeX). I want to get some LaTeX-like output (or just some AST-like data).
Is there something like this already? Or are current OCR techniques only able to parse line-oriented text?
(Note that I also posted this question on Metaoptimize because some people there might have additional knowledge.)
The problem was also described by OpenAI as im2latex.
SESHAT is an open-source system written in C++ for recognizing handwritten mathematical expressions. SESHAT was developed as part of a PhD thesis at the PRHLT research center at Universitat Politècnica de València.
An online demo: http://cat.prhlt.upv.es/mer/
The source: https://github.com/falvaro/seshat
Seshat is an open-source system for recognizing handwritten mathematical expressions. Given a sample represented as a sequence of strokes, the parser is able to convert it to LaTeX or other formats like InkML or MathML.
According to the answers on Metaoptimize and the discussion on the Tesseract mailinglist, there doesn't seem to be an open/free solution yet which can do that.
The only solution which seems to be able to do it (but I cannot verify as it is Windows-only and non-free) is, like a few other people have mentioned, the InftyProject.
InftyReader is the only one I'm aware of. It is NOT free software (it seems the money goes to a non-profit org, IIRC).
http://www.sciaccess.net/en/InftyReader/
I don't know why PDF can't have metadata in LaTeX? As in: put the LaTeX equation in it! Is this so hard? (I dunno anything about PDF syntax, but I imagine it can be done).
LaTeX syntax is THE ONE TRIED AND TRUE STANDARD for mathematics notation. It seems amazingly stupid that the folks who produced MathML and other formats didn't take this into consideration. InftyReader generates MathML or LaTeX syntax.
If I want HTML (pure) I then use TTH to read the LaTeX syntax. Just works.
ABBYY FineReader (a great OCR program) claims you can train the software for Math, but this is immensely braindead (who has the time?)
And Unicode has lots of math symbols. That today's OCR readers can't grok them shows the sorry state of software and the brain deficit in this activity.
As to "one symbol at a time", TeX obviously has rules about where it places symbols. They can't write software that knows those rules?! TeX is even public domain! They could just use it in their commercial products.
Check out "Web Equation." It can convert handwritten equations to LaTeX, MathML, or SymbolTree. I'm not sure if the engine is open source.
Considering that current technologies read one symbol at a time (see http://detexify.kirelabs.org/classify.html), I doubt there is an OCR for full mathematical equations.
Infty works fairly well. My former company integrated it into an application that reads equations out loud for blind people and is getting good feedback from users.
http://www.inftyproject.org/en/download.html
Since the output from math OCR for complex formulas will likely have bugs -- even humans have trouble with it -- you will have to proofread the results, at least if they matter. The (human) proofreader will then have to correct the results, meaning you need a math formula editor. Given the effort required and the probably limited corpus of complex formulas, you might find it easier to assign the whole task to humans.
As a research problem, reading math via OCR is fun -- you need a formalism for 2-D grammars plus a symbol recognizer.
In addition to references already mentioned here, why not google for this? There is work that was done at Caltech, Rochester, U. Waterloo, and UC Berkeley. How much of it is ready to use out of the box? Dunno.
As of August 2019, there are a few options, depending on what you need:
For converting printed math equations/formulas to LaTeX, Mathpix is absolutely the best choice. It's free.
For converting handwritten math to LaTeX or printed math, MyScript is the best option, although its app costs a few dollars.
You know, there's an application in Win7 just for that: Math Input Panel. It even handles handwritten input (it's actually made for this). Give it a shot if you have Win7, it's free!
There is a great short video: http://www.youtube.com/watch?v=LAJm3J36tLQ
explaining how you can train FineReader to recognize math formulas. If you use FineReader already, it's better to stick with one tool. Of course, it is not freeware :(