AHK - OCR failed with camerb's library

I think camerb's library doesn't work very well; you can see the OCR result in the following picture:
http://i.stack.imgur.com/Kyhqk.jpg
The same result occurs if I try to OCR a number, especially a float: the comma is often not recognized and "0" is mistaken for "o" :(
Does anyone know a more accurate library? Thanks for the answers.
If you want to try camerb's library, you can download it here:
http://www.autohotkey.com/board/topic/69127-ocrahk-library-for-recognizing-text-in-images/

I have just tried the Capture2Text software; it works pretty well (on a 650x450 window), but if I try to OCR a smaller window (400x320), the matching is not very accurate.
Does anyone know if ABBYY FineReader works from the command prompt? The developers of this software claim it has an accuracy of 99.8%.
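One more engine worth testing from the command prompt is Tesseract; whether it copes with small captures better than Capture2Text is something you'd have to try. A minimal sketch in Python that just shells out to the tesseract binary (it assumes tesseract is installed and on PATH; the file names are only an illustration):

import subprocess
from pathlib import Path

def ocr_with_tesseract(image_path: str) -> str:
    # Run the tesseract CLI on an image and return the recognized text.
    # tesseract writes its result to <output base>.txt, so read that back.
    out_base = Path(image_path).with_suffix("")
    subprocess.run(["tesseract", image_path, str(out_base)], check=True)
    return Path(str(out_base) + ".txt").read_text(encoding="utf-8")

# Example: print(ocr_with_tesseract("capture.png"))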

Related

Undocumented OpenCV function?

May I know what the difference is between an "undocumented" OpenCV function and a documented one? I have searched online, but I haven't found an explanation clear enough to resolve my doubt. Thanks
The function is calcBluriness, which is used to determine the blurriness of a given image. Thanks
This question is rather a non-question, but here is an answer anyway for future viewers.
All of OpenCV is documented, and documentation is part of the development process; you can access the docs here.
As for calcBluriness, it is also documented; you can find that here.
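If what you actually need is just a rough blurriness number and calling calcBluriness is inconvenient from your setup, a common stand-in is the variance of the Laplacian. A small Python/OpenCV sketch of that heuristic (the function name and usage are my own, not OpenCV's calcBluriness API):

import cv2

def blur_score(image_path: str) -> float:
    # Rough sharpness score: a low variance of the Laplacian
    # usually indicates a blurrier image.
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(image, cv2.CV_64F).var()

# Example: compare two images; the smaller score is the blurrier one.
# print(blur_score("sharp.png"), blur_score("blurry.png"))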

Can I write a program directly in binary? How can I get the computer to execute it?

I know this may seem weird and like looking for trouble, but I think experiencing what the early programmers experienced is interesting. So how can I execute a program written only in binary? (Suppose that I know what I am doing and am not using assembly, of course.)
I just want to write a series of bits like 111010111010101010101 and execute it. How can I do that?
Use a hex editor. You'll need to find out the relevant executable format for your operating system, of course - assuming you want to use an operating system... I suppose you could always write your own bootloader and just run the code directly that way, if you want to get all hardcore.
I don't think you'll really be experiencing what programmers experienced back then though - for one thing, you won't be using punch cards, paper tape etc. For another, your context is completely different - you know what computers are like now, so it'll feel painfully primitive to you... whereas back then, it would have been bleeding edge and exciting just on those grounds.
Use a hex editor, write your bits and save it as an executable file (either just with the file extension .exe in Windows or with chmod a+x filename in Linux).
The problem is that you'd also have to write all the OS-specific stuff in binary, and you'd need a table that translates assembler mnemonics into their binary encodings.
Why not, if you want to experience low-level programming, give D.E. Knuth's assembler MMIX a try?
It really depends on the platform you are using. But that's sort of irrelevant based on your proposed purpose. The earliest programmers of modern computers as you think of them did not program in binary -- they programmed in assembly.
You will learn nothing trying to program in binary for a specific Operating System and specific CPU type using a hex editor.
If you want to find out how pre-assembly programmers worked (with plain binary data), look up Punch Cards.
Use a hex editor to create your file, be sure to use a format that the loader of your respective OS understands and then double click it.
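A different route, if you only want to see hand-written machine code execute and don't want to fight an executable format at all, is to map the raw bytes into executable memory from a host program. A minimal sketch in Python for Linux on x86-64 (the byte sequence and the mmap/ctypes plumbing are illustrative, not taken from the answers above):

import ctypes
import mmap

# mov eax, 42 ; ret  -- hand-assembled x86-64 machine code
code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

# Ask the OS for a page that is readable, writable and executable
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Turn the buffer's address into a callable that returns an int
func_type = ctypes.CFUNCTYPE(ctypes.c_int)
address = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = func_type(address)

print(func())  # prints 42

Some systems forbid mappings that are writable and executable at the same time; on those you would have to change the page protection after writing.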
Most assemblers (the MMIX assembler, for instance; see www.mmix.cs.hm.edu) don't care whether you write instructions or data.
So instead of writing
Main ADD $0,$0,3
SUB $1,$0,4
...
you can write
Main TETRA #21000003
TETRA #25010004
...
This way you can assemble your program by hand and then have the assembler transform it into a form the loader needs; then you execute it. Normally you use hex notation, not binary, because keeping track of so many digits is difficult. You can also use decimal, but the charts that tell you which instructions have which codes are typically in hex.
Good luck! I had to do things like this when I started programming computers. Everybody was glad to have an assembler or even a compiler then.
Martin
Or he is just writing some malicious code.
I've seen some funny methods that use an AVR as a keyboard emulator: open a simple text editor, type out the code stored in the AVR's EEPROM, pipe it to "debug" (on Windows systems), and run it. It's a good way to escape some restrictions too ;)
I imagine that by interacting directly with hardware you could write in binary. To flip the proper binary bits, you could use a magnetized needle on your disk drive. Or butterflies.

HTML Comments Extractor

I am well aware that parsing HTML with regex has many caveats and vociferous opponents. So rather than trying to re-invent the wheel, I'm looking for a tool that I can point at a web page and say "Get me the comments, b*tch".
Anyone able to advise?
I was reading some OWASP documentation or a security blog, and I'm almost certain I saw a tool performing this task. Google has been zero help unfortunately.
Cheers
If you want a Java solution try HTMLParser and look for RemarkNodes.
Mhhhhh... I think a Google search with the OS you use and some clever keywords will give you all you want. For UNIX-based systems, look at: parsing HTML with sed and Perl.
For Windows, I think you can find something with VBS (VBScript).
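If a short script is acceptable in place of a dedicated tool, the comment-extraction part is easy with a standard HTML parser. A quick sketch using only Python's standard library (the class and function names are just an illustration):

from html.parser import HTMLParser
from urllib.request import urlopen

class CommentExtractor(HTMLParser):
    # Collects every <!-- ... --> comment encountered while parsing.
    def __init__(self):
        super().__init__()
        self.comments = []

    def handle_comment(self, data):
        self.comments.append(data.strip())

def extract_comments(url: str) -> list:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = CommentExtractor()
    parser.feed(html)
    return parser.comments

# Example:
# for comment in extract_comments("http://example.com/"):
#     print(comment)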

What is the most mature JSON library for Erlang?

I wanted to use YAML, but there isn't a single mature YAML library for Erlang. I know there are a few JSON libraries, but I was wondering which is the most mature?
Have a look at the one from mochiweb: mochijson.erl
1> mochijson:decode("{\"Name\":\"Tom\",\"Age\":10}").
{struct,[{"Name","Tom"},{"Age",10}]}
I prefer Jiffy. It works with binaries and is really fast.
1> jiffy:decode(<<"{\"Name\":\"Tom\",\"Age\":10}">>).
{[{<<"Name">>,<<"Tom">>},{<<"Age">>,10}]}
Can encode as well:
2> jiffy:encode({[{<<"Name">>,<<"Tom">>},{<<"Age">>,10}]}).
<<"{\"Name\":\"Tom\",\"Age\":10}">>
Also check out jsx. "An erlang application for consuming, producing and manipulating json. Inspired by Yajl." I haven't tried it myself yet, but it looks promising.
As a side note: I found this library through Jesse, a JSON schema validator by Klarna.
I use the json library provided by yaws.
Edit: I actually switched over to Jiffy, see Konstantin's answer.
Trapexit offers a really cool search feature for Erlang projects.
Look up JSON there; you'll find about 13 results. Check the dates of the latest revisions, the user ratings, and the project activity status.
UPDATE: I've just found a similar question on StackOverflow. Apparently, they are quite happy with the erlang-json-eep-parser parser.
My favourite is mochijson2. The API is straightforward, it's fast enough for me (I never actually bothered to benchmark it, to be honest - I'm mostly encoding and decoding small packets), and I've been using it on a stable "production server" for about a year now. Just remember to install mochinum as well; mochijson2 uses it to encode large numbers, and if it's missing and you try to encode a large number, it will throw an exception.
See also: mochijson2 examples (stackoverflow)

Need good OCR for printed source code listing, any ideas?

At my work, I sometimes have to take some printed source code and manually type the source code into a text editor. Do not ask why.
Obviously typing it up takes a long time, plus extra time to debug typing errors (oops, missed a "$" sign there).
I decided to try some OCR solutions like:
Microsoft Document Imaging - has built in OCR
Result: Missed all the leading whitespace, missed all the underscores, interpreted many of the punctuation characters incorrectly.
Conclusion: Slower than manually typing in code.
Various online web OCR apps
Result: Similar or worse than Microsoft Document Imaging
Conclusion: Slower than manually typing in code.
I feel like source code would be very easy to OCR, given that the font is sans-serif and monospaced.
Have any of you found a good OCR solution that works well on source code?
Maybe I just need a better OCR solution (not necessarily source code specific)?
With OCR, there are currently three options:
ABBYY FineReader and OmniPage. Both are commercial products that are about on par when it comes to features and OCR results. I can't say much about OmniPage, but FineReader does come with support for reading source code (for example, it has a Java language library).
The best OSS OCR engine is Tesseract. It's much harder to use, and you'll probably need to train it for your language.
I rarely do OCR, but I've found that spending the $150 on the commercial software outweighs the wasted time by far.
Two new options exist today (years after the question was asked):
1.)
Windows 10 comes with an OCR engine from Microsoft.
It is in the namespace:
Windows.Media.Ocr.OcrEngine
https://msdn.microsoft.com/en-us/library/windows/apps/windows.media.ocr
There is also an example on Github:
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/OCR
You need VS2015 to compile this stuff; if you want to use an older version of Visual Studio, you must invoke it via traditional COM - in that case, read this article on CodeProject: http://www.codeproject.com/Articles/262151/Visual-Cplusplus-and-WinRT-Metro-Some-fundamentals
The OCR quality is very good. Nevertheless, if the text is too small you must enlarge the image first. You can download every language that exists via Windows Update - even for handwriting!
2.)
Another option is to use the OCR library from Office. It is a COM DLL. It is available in Office 2003, 2007 and Vista, but was removed in Office 2010.
http://www.codeproject.com/Articles/10130/OCR-with-Microsoft-Office
The disadvantage is that each Office installation comes with support for only a few languages. For example, a Spanish Office installs support for Spanish, English, Portuguese and French. But I noticed that it makes almost no difference whether you use Spanish or English as the OCR language to detect Spanish text.
If you convert the image to greyscale you get better results.
The recognition is OK, but it did not satisfy me. It makes approximately as many errors as Tesseract, although Tesseract needs much more image preprocessing to get those results.
Try http://www.free-ocr.com/. I have used it to recover source code from a screen grab when my IDE crashed in an editor session without warning. It obviously depends on the font you are using in the editor (I use Courier New 10pt in Delphi). I tried Google Docs, which will OCR an image when you upload it - while Google Docs is pretty good on scanned documents, it fails miserably on Pascal source for some reason.
An example of FreeOCR at work: the input image (a screen grab of Delphi source) gave this:
begin
FileIDToDelete := FolderToClean + 5earchRecord.Name ;
Inc (TotalFilesFound) ;
if (DeleteFile (PChar (FileIDToDelete))) then
begin
Log5tartupError (FormatEx (‘%s file %s deleted‘, [Annotation, Fi eIDToDelete])) ;
Inc (TotalFilesDeleted) ;
end
else
begin
Log5tartupError (FormatEx (‘Error deleting %s file %s‘, [Annotat'on, FileIDToDelete])) ;
Inc (TotalFilesDeleteErrors) ;
end ;
end ;
FindResult := 5ysUtils.FindNext (5earchRecord) ;
end ;
So restoring the indentation is the bulk of the work, then changing all the 5's to upper-case S. It also got confused by the vertical line at the 80-column mark. Luckily, most errors will be picked up by the compiler (with the exception of mistakes inside quoted strings).
It's a shame FreeOCR doesn't have a "source code" option, where white space is treated as significant.
A tip: If your source includes syntax highlighting, make sure you save the image as grayscale before uploading.
Printed text is usually easier for OCR than handwriting; however, it all depends on your source image. I generally find that capturing in PNG format, with reduced colors (grayscale is best) and some manual cleanup (removing any image noise from scanning, etc.), works best.
Most OCR engines are similar in performance and accuracy. Engines with the ability to train/correct would be best.
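To make the grayscale/cleanup advice concrete, here is a small preprocessing sketch in Python using Pillow; saving a cleaned-up grayscale PNG before feeding the OCR engine follows the suggestion above, but the function itself is only an illustration:

from PIL import Image, ImageOps

def preprocess_for_ocr(in_path: str, out_path: str) -> None:
    # Convert a scan to grayscale, stretch the contrast a bit,
    # and save it as PNG before handing it to an OCR engine.
    img = Image.open(in_path)
    img = ImageOps.grayscale(img)
    img = ImageOps.autocontrast(img)
    img.save(out_path, format="PNG")

# Example: preprocess_for_ocr("listing_scan.jpg", "listing_scan.png")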
OCRopus is also a good open source option. But like Tesseract, there's a rather steep learning curve to use and integrate it effectively.
In general, I found that FineReader gives very good results. Normally all products have a trial available; try as many as you can.
Now, program source code can be tricky:
leading whitespace: maybe a post-OCR pretty-printer pass can help
underscores and punctuation: maybe a good product can be trained for those
Google Drive's built-in OCR worked pretty well for me. Just convert scans to a PDF, upload to Google Drive, and choose "Open with... Google Docs". There are some weird things with color and text size, but it still includes semicolons and such.
The original screenshot:
The Google Docs OCR:
Plaintext version:
#include <stdio.h> int main(void) {
char word[51]; int contains = -1; int i = 0; int length = 0; scanf("%s", word); while (word[length] != "\0") i ++; while ((contains == 1 || contains == 2) && word[i] != "\0") {
if (word[i] == "t" || word[i] == "T") {
if (i <= length / 2) {
contains = 1; } else contains = 2;
return 0;