Google Vision Text recognition / OCR define template - ocr

I want to use Google Vision for text recognition, and I have images that follow the same template (like an ID card). Is there a way to provide the Vision API with a template of how the words are structured, so it doesn't split them up in a weird way?
For example, you have
First name Andy
Last name Anderson
I want to tell it that "First name" should be grouped together as one entry, instead of being split on spaces as it is by default.
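To show what I mean, here is a rough sketch of grouping the word-level output back into labelled entries after the fact (Python, google-cloud-vision client; the label list and the file name are just placeholders) - this is the kind of post-processing I'm hoping a template could replace:

from google.cloud import vision

# Labels we know from the card template (placeholder list for illustration).
LABELS = ["First name", "Last name"]

client = vision.ImageAnnotatorClient()
with open("id_card.jpg", "rb") as f:  # hypothetical sample image
    image = vision.Image(content=f.read())

# text_annotations[0] holds the full detected text; the rest are single words.
response = client.text_detection(image=image)
full_text = response.text_annotations[0].description if response.text_annotations else ""

# Scan each detected line for a known label and treat the remainder as the value.
entries = {}
for line in full_text.splitlines():
    for label in LABELS:
        if line.startswith(label):
            entries[label] = line[len(label):].strip()

print(entries)  # e.g. {'First name': 'Andy', 'Last name': 'Anderson'}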

Related

Is there a name for font families (such as fangchan-secret) that are used to prevent web scraping?

While scraping some data from the website of a housing agency in China (the agency is called Anjuke) for a small personal project, I realized that all of the numbers on the website are displayed visually as numbers but are read digitally as obscure Chinese characters.
Is there a name for this kind of a font or this kind of a technique more specific than "anti-scraping measures"?
Additional information about this specific case: to see this in action, you can click on any of the listings on the Anjuke website and then attempt to copy-and-paste the price (or any HTML element that has the "strongbox" class); you will see that instead of pasting the number, it pastes an obscure Chinese character (such as 驋, 齤, 麣, 龤, or 龒).
Looking at the CSS revealed that these numbers use a font called "fangchan-secret", and a bit of quick googling turned up a blog post in Chinese by zhyuzh3d. I read some Chinese, although not loads. The blog post appears to be a Chinese explanation of how fangchan-secret is a method to prevent web scraping, along with an explanation of how to get around this preventative measure.
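To illustrate the mechanism as I understand it: the page serves a custom web font whose character map assigns digit-shaped glyphs to those obscure codepoints, so the text you copy is not the text you see. A small sketch with Python's fontTools (assuming the font file has already been downloaded locally; the filename here is hypothetical) lists which codepoints the font defines:

from fontTools.ttLib import TTFont

# Assumes the font served by the site was saved locally first (name is hypothetical).
font = TTFont("fangchan-secret.ttf")

# The cmap maps Unicode codepoints (the obscure characters in the HTML)
# to glyph names; the glyphs themselves are drawn as ordinary digits.
for codepoint, glyph_name in sorted(font.getBestCmap().items()):
    print(f"U+{codepoint:04X} {chr(codepoint)} -> {glyph_name}")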

How does Google determine which piece of content is shown in the search result?

A Google search result usually contains a title and a piece of content from the indexed HTML. I can understand how the title is extracted, but does anyone know how Google determines which part of the content is shown?
Google uses several different algorithms to decide what to display in the search result snippet, so there's no way to define what will show 100% of the time. Google does appear to rely heavily on the "description" meta field, so what you put there is often a good indication of what will appear in the snippet, but once again, it's not a sure thing.

How to embed text from wikipedia?

I have pages on my site about some famous personalities, and I want to embed a short description of each from Wikipedia (similar to what Google shows on the side when you search for a subject that exists on Wikipedia), with the possibility to style the text too. Is there a way to do that dynamically?
You can actually use the Freebase API (in particular, the Topic API) to do something like this. Basically, you want to fetch the /common/topic/description attribute, like this:
https://www.googleapis.com/freebase/v1/topic/m/02mjmr?filter=/common/topic/description
(You can also use Freebase to get most of the other attributes that display in the Knowledge Graph).
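As a rough sketch of what that call could look like from code (Python with the requests library; the field names used when parsing follow the Topic API response format described above, so treat them as an assumption):

import requests

# Topic API request for /m/02mjmr, filtered down to the description attribute.
url = "https://www.googleapis.com/freebase/v1/topic/m/02mjmr"
resp = requests.get(url, params={"filter": "/common/topic/description"})
data = resp.json()

# The description is expected under property -> /common/topic/description -> values.
values = data.get("property", {}).get("/common/topic/description", {}).get("values", [])
if values:
    print(values[0].get("value"))  # short plain-text description you can embed and style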

Recognizing superscript characters using OCR

I've started a simple project that must take an image containing text with superscripts and then, using OCR (currently I'm using Tesseract), recognize the superscript characters as well as the normal ones.
For example, we have a chemical formula such as Cl², but when I use Tesseract to recognize it, it gives me Cl2 (all on one line).
So, what is the solution for this problem? Is there any other OCR API that has the ability to read superscripts?
Very good question; it touches on the more advanced features of any OCR system.
First of all, make sure you are not overlooking functionality that may already be there in an OCR system. Look at your test result not in plain TXT format but in some kind of rich-text-capable viewer. TXT viewers, such as Notepad on Windows, often do not support superscript/subscript characters, so even if the OCR gave you the correct characters, your viewer could have converted them for display. If you are accessing the text result programmatically, that is less of an issue, because you should get the proper subscript character value when accessing it directly; just note that viewers must support it for you to actually see it. If you have eliminated this possible post-processing conversion and made sure that no subscript is returned from the OCR, then it probably does not support it.
Just like in this text box: in your original question you tried to give us a superscript character example, but the text box did not accept it, even though you could copy/paste it from elsewhere.
Many OCR engines will see a subscript as just another normal character, if they can see it at all. The OCR you use needs the technical capability to actually produce superscripts/subscripts, and many do, but not surprisingly they tend to be commercial OCR systems.
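As a rough workaround with an engine like Tesseract, which flattens everything into plain characters, you could take its character bounding boxes and guess superscripts from geometry. This is only a sketch under that assumption (Python with pytesseract and Pillow; the thresholds are illustrative and it ignores multi-line layout):

from PIL import Image
import pytesseract

def guess_superscripts(image_path):
    img = Image.open(image_path)
    boxes = []
    # image_to_boxes returns one character per line: "char x1 y1 x2 y2 page",
    # with y measured from the bottom of the image.
    for line in pytesseract.image_to_boxes(img).splitlines():
        ch, x1, y1, x2, y2, _page = line.split()
        boxes.append((ch, int(y1), int(y2)))
    if not boxes:
        return []
    heights = sorted(y2 - y1 for _, y1, y2 in boxes)
    bottoms = sorted(y1 for _, y1, _ in boxes)
    med_h = heights[len(heights) // 2]
    med_bottom = bottoms[len(bottoms) // 2]
    tagged = []
    for ch, y1, y2 in boxes:
        small = (y2 - y1) < 0.75 * med_h         # noticeably smaller glyph
        raised = y1 > med_bottom + 0.25 * med_h  # bottom edge sits above the baseline
        tagged.append((ch, "superscript?" if small and raised else "normal"))
    return tagged

# Hypothetical sample image; for Cl² the "2" should be flagged as a superscript.
print(guess_superscripts("cl2_sample.png"))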
I made a small test case before answering. I generated an image with a few superscript/subscript examples for my testing (of course E=mc2 was the first example that came to mind :) ).
You can find my test image here:
www.ocr-it.com/documents/superscript_subscript_test_page.tif
I processed this image through the OCR-IT OCR Cloud 2.0 API using all default settings, but exported to a rich text format, MS Word .DOC.
You can find the OCR result here:
www.ocr-it.com/documents/superscript_subscript_test_page_result.doc
Also note: when you want to extract superscript/subscript characters, pay extra attention to your image quality, more than you would with typical text. Those characters are tiny, and you need sufficient detail and resolution to achieve decent OCR quality. Even images scanned at 300 dpi sometimes have issues with tiny characters due to too few pixels. If you are considering mobile and digital cameras, that becomes even more important.
DISCLOSURE: My specialty is implementing internal OCR solutions for companies of different sizes. My company is WiseTREND. Contact me directly if I can assist with anything further.

Multi-language Translation on Web Site

I want to know how Facebook did it: did they add two different HTML+CSS versions, or is it just CSS that changes the theme for a different language, or is there a special HTML attribute that changes the direction for the complete site? Following are two examples, one in English and the other in Arabic.
I also want to add another related question: do you think they translated using some API like the Google API, or did they hard-code the translation (hiring someone to do the translation)?
Example picture 1
Example picture 2
Depending on what technology you're using, this concept is known as string externalizing, string resourcing, string internationalization, localization etc. It is possible to do it all in CSS+Javascript, but that wouldn't be a very efficient way to go about doing things, especially if your site had a lot of strings and a lot of translations.
The HTML is different - just look at the HTML source if you're curious. The source is different because, in the code behind the website's front end, strings like "Login" are stored externally in a collection file that might look something like this:
## LANGUAGE = ENGLISH ##
LOGIN = "Login"
PASSWORD = "Password"
When you switch languages, the code behind the front end remains the same, but it uses a different external language file. For example, this might be the Spanish file for the same application:
## LANGUAGE = SPANISH ##
LOGIN = "Iniciar sesión"
PASSWORD = "contraseña"
The idea is that in order to support a new language, all that needs to be done is to have the original identifiers translated into a new language file. The translator doesn't have to be a programmer to translate the above snippet easily.
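To make that concrete, here is a minimal sketch of the idea (Python; the file names and the KEY = "value" format mirror the snippets above and are purely illustrative, not Facebook's actual system). The application code stays the same; only the language file it loads changes.

import re

def load_strings(path):
    # Parse lines like: LOGIN = "Iniciar sesión" into a dictionary.
    strings = {}
    pattern = re.compile(r'^(\w+)\s*=\s*"(.*)"\s*$')
    with open(path, encoding="utf-8") as f:
        for line in f:
            match = pattern.match(line.strip())
            if match:
                strings[match.group(1)] = match.group(2)
    return strings

# Hypothetical file names; switching languages is just switching the file.
strings = load_strings("strings_es.txt")  # or "strings_en.txt"
print(strings.get("LOGIN", "LOGIN"))      # -> "Iniciar sesión"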
The final comment is that Facebook has enough money to pay professional translators to provide them with very good translations in many world languages. A long time ago, they allowed users to submit translations as a starting point. It is generally a bad idea to use a free translation API to translate application strings, because most of the time those APIs will not get the grammar correct. Translation APIs are most effective at getting the "overall meaning" of words and phrases right, but they can be terribly inaccurate at picking the most correct translation for any one particular idiom.
The layout change Facebook did is with CSS: they have two separate stylesheets, one for right-to-left and the other for left-to-right. But if there is another language embedded inside English text (or vice versa), then they literally use the HTML direction (dir) attribute to display the message in the box with the right direction.