I'm playing a game where our Guild is constantly sharing teams as screenshots in Discord. I'm wondering if I can find something that will convert those team images into text, using a library of exact images for each of the troops.
Online OCR does a marginal job because of the backgrounds behind the text, so it's not a great solution. Also, it basically solves the wrong problem.
I've found mention of the OpenCV library (for Python?), but the GitHub projects I've seen are looking at a much broader problem.
Here's the algorithm I have in mind:
Locate the team in the image and resize to a standard size.
Locate the sub-images for each troop.
Search the library of template images (about 800 -- a big job to create!) for each troop and find a sufficiently good match.
Look up and return the text associated with each match.
Ideally, I would like to point the software to an image file and it would tell me the names of the troops in copyable text.
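From what I've read so far, the template-search step might look roughly like this with OpenCV's Python bindings. This is untested and only illustrative: the `templates` dictionary, the pre-cropped troop image, and the 0.9 threshold are just placeholders.

```python
import cv2

def identify_troop(troop_crop, templates, threshold=0.9):
    """Return the name of the best-matching template image, or None.

    templates: dict mapping troop name -> template image, pre-scaled to
    the same size/resolution as the cropped troop sub-image.
    """
    best_name, best_score = None, 0.0
    for name, template in templates.items():
        # Normalized cross-correlation: 1.0 means a perfect match.
        result = cv2.matchTemplate(troop_crop, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```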
This might be an interesting enough project for me to finally install and learn Python (but that's a whole other question). For now, let's see what already exists that might work.
I know this doesn't have a simple answer, but I want to learn how to scan an image -- and, by extension, video frame by frame -- and identify other images within it, with a given tolerance for error.
Are there any libraries for this? Any hints?
Thank you.
I think you're referring to Augmented Reality (AR; I don't know if there's a more technical term for it). There are a few libraries out there, most notably FLARToolKit, plus a few forks of it. These are mostly for recognizing markers or patterns within an image (or video).
I know #inspirit is doing some really cool stuff in this area and has been posting a lot on his blog, but as far as I know he hasn't released anything yet.
There's a lot of scholarly work on HTML content extraction, e.g., Gupta & Kaiser (2005) Extracting Content from Accessible Web Pages, and some signs of interest here, e.g., one, two, and three, but I'm not really clear about how well the practice of the latter reflects the ideas of the former. What is the best practice?
Pointers to good (in particular, open source) implementations and good scholarly surveys of implementations would be the kind of thing I'm looking for.
Postscript the first: To be precise, the kind of survey I'm after would be a paper (published, unpublished, whatever) that discusses both criteria from the scholarly literature, and a number of existing implementations, and analyses how unsuccessful the implementations are from the viewpoint of the criteria. And, really, a post to a mailing list would work for me too.
Postscript the second: To be clear, after Peter Rowell's answer, which I have accepted, we can see that this question leads to two subquestions: (i) the solved problem of cleaning up non-conformant HTML, for which Beautiful Soup is the most recommended solution, and (ii) the unsolved problem of separating cruft (mostly site-added boilerplate and promotional material) from meat (the content that the kind of people who think the page might be interesting in fact find relevant). To address the state of the art, new answers need to address the cruft-from-meat problem explicitly.
Extraction can mean different things to different people. It's one thing to be able to deal with all of the mangled HTML out there, and Beautiful Soup is a clear winner in this department. But BS won't tell you what is cruft and what is meat.
Things look different (and ugly) when considering content extraction from the point of view of a computational linguist. When analyzing a page I'm interested only in the specific content of the page, minus all of the navigation/advertising/etc. cruft. And you can't begin to do the interesting stuff -- co-occurrence analysis, phrase discovery, weighted attribute vector generation, etc. -- until you have gotten rid of the cruft.
The first paper referenced by the OP indicates that this was what they were trying to achieve -- analyze a site, determine the overall structure, then subtract that out and voila! you have just the meat -- but they found it was harder than they thought. They were approaching the problem from an improved-accessibility angle, whereas I was an early search engine guy, but we both came to the same conclusion:
Separating cruft from meat is hard. And (to read between the lines of your question) even once the cruft is removed, without carefully applied semantic markup it is extremely difficult to determine 'author intent' of the article. Getting the meat out of a site like citeseer (cleanly & predictably laid out with a very high Signal-to-Noise Ratio) is 2 or 3 orders of magnitude easier than dealing with random web content.
BTW, if you're dealing with longer documents you might be particularly interested in work done by Marti Hearst (now a prof at UC Berkeley). Her PhD thesis and other papers on doing subtopic discovery in large documents gave me a lot of insight into doing something similar in smaller documents (which, surprisingly, can be more difficult to deal with). But you can only do this after you get rid of the cruft.
For the few who might be interested, here's some backstory (probably Off Topic, but I'm in that kind of mood tonight):
In the 80's and 90's our customers were mostly government agencies whose eyes were bigger than their budgets and whose dreams made Disneyland look drab. They were collecting everything they could get their hands on and then went looking for a silver bullet technology that would somehow ( giant hand wave ) extract the 'meaning' of the document. Right. They found us because we were this weird little company doing "content similarity searching" in 1986. We gave them a couple of demos (real, not faked) which freaked them out.
One of the things we already knew (and it took a long time for them to believe us) was that every collection is different and needs its own special scanner to deal with those differences. For example, if all you're doing is munching straight newspaper stories, life is pretty easy. The headline mostly tells you something interesting, and the story is written in pyramid style -- the first paragraph or two has the meat of who/what/where/when, and the following paragraphs expand on that. Like I said, this is the easy stuff.
How about magazine articles? Oh God, don't get me started! The titles are almost always meaningless and the structure varies from one mag to the next, and even from one section of a mag to the next. Pick up a copy of Wired and a copy of Atlantic Monthly. Look at a major article and try to figure out a meaningful one-paragraph summary of what the article is about. Now try to describe how a program would accomplish the same thing. Does the same set of rules apply across all articles? Even articles from the same magazine? No, they don't.
Sorry to sound like a curmudgeon on this, but this problem is genuinely hard.
Strangely enough, a big reason for Google being as successful as it is (from a search engine perspective) is that they place a lot of weight on the words in and surrounding a link from another site. That link text represents a sort of mini-summary, done by a human, of the site/page it's linking to -- exactly what you want when you are searching. And it works across nearly all genre/layout styles of information. It's a positively brilliant insight and I wish I had had it myself. But it wouldn't have done my customers any good because there were no links from last night's Moscow TV listings to some random teletype message they had captured, or to some badly OCR'd version of an Egyptian newspaper.
/mini-rant-and-trip-down-memory-lane
One word: boilerpipe.
For the news domain, on a representative corpus, we're now at 98% / 99% extraction accuracy (avg/median).
Demo: http://boilerpipe-web.appspot.com/
Code: http://code.google.com/p/boilerpipe/
Presentation: http://videolectures.net/wsdm2010_kohlschutter_bdu/
Dataset and slides: http://www.l3s.de/~kohlschuetter/boilerplate/
PhD thesis: http://www.kohlschutter.com/pdf/Dissertation-Kohlschuetter.pdf
Also quite language independent (today, I've learned it works for Nepali, too).
Disclaimer: I am the author of this work.
Have you seen boilerpipe? Found it mentioned in a similar question.
I have come across http://www.keyvan.net/2010/08/php-readability/
Last year I ported Arc90's Readability to use in the Five Filters project. It's been over a year now and Readability has improved a lot -- thanks to Chris Dary and the rest of the team at Arc90.
As part of an update to the Full-Text RSS service I started porting a more recent version (1.6.2) to PHP and the code is now online.
For anyone not familiar, Readability was created for use as a browser addon (a bookmarklet). With one click it transforms web pages for easy reading and strips away clutter. Apple recently incorporated it into Safari Reader.
It's also very handy for content extraction, which is why I wanted to port it to PHP in the first place.
There are a few open source tools available that do similar article extraction tasks.
https://github.com/jiminoc/goose, which was open-sourced by Gravity.com
There is info on the wiki as well as the source you can view, and dozens of unit tests that show the text extracted from various articles.
I've worked with Peter Rowell down through the years on a wide variety of information retrieval projects, many of which involved very difficult text extraction from a diversity of markup sources.
Currently I'm focused on knowledge extraction from "firehose" sources such as Google, including their RSS pipes that vacuum up huge amounts of local, regional, national and international news articles. In many cases titles are rich and meaningful, but are only "hooks" used to draw traffic to a Web site where the actual article is a meaningless paragraph. This appears to be a sort of "spam in reverse" designed to boost traffic ratings.
To rank articles even with the simplest metric of article length, you have to be able to extract content from the markup. The exotic markup and scripting that dominates Web content these days breaks most open source parsing packages, such as Beautiful Soup, when applied to the large volumes characteristic of Google and similar sources. As a rule of thumb, I've found that 30% or more of mined articles break these packages. This has caused us to refocus on developing very low-level, intelligent, character-based parsers to separate the raw text from the markup and scripting.
The more fine-grained your parsing (i.e., partitioning of content), the more intelligent (and hand-made) your tools must be. To make things even more interesting, you have a moving target as web authoring continues to morph and change with the development of new scripting approaches, markup, and language extensions. This tends to favor service-based information delivery as opposed to "shrink wrapped" applications.
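For a sense of what the lowest level of this looks like, here is a toy sketch using only Python's standard-library HTML parser. It is nothing like the hand-built character scanners described above, and it will still choke on some of the pathological markup mentioned, but it shows the basic shape of peeling raw text away from markup and scripting; the file name is a placeholder.

```python
from html.parser import HTMLParser

class RawTextExtractor(HTMLParser):
    """Event-driven pass that keeps visible text and drops markup/scripting."""

    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.chunks = []
        self.skip_depth = 0          # >0 while inside a script/style block

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth > 0:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

extractor = RawTextExtractor()
with open("article.html", encoding="utf-8", errors="replace") as f:
    extractor.feed(f.read())
print("\n".join(extractor.chunks))
```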
Looking back over the years there appears to have been very few scholarly papers written about the low level mechanics (i.e. the "practice of the former" you refer to) of such extraction, probably because it's so domain and content specific.
Beautiful Soup is a robust HTML parser written in Python.
It gracefully handles HTML with bad markup and is also well-engineered as a Python library, supporting generators for iteration and search, dot-notation for child access (e.g., access `<foo><bar/></foo>` using `doc.foo.bar`), and seamless Unicode handling.
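A minimal example of the kind of access described above, using the current beautifulsoup4 package (older code imported `BeautifulSoup` directly; the markup here is just a made-up illustration):

```python
from bs4 import BeautifulSoup   # pip install beautifulsoup4

html = "<html><body><foo>Hello <bar>world</bar></foo></body></html>"
doc = BeautifulSoup(html, "html.parser")

print(doc.foo.bar)              # dot-notation child access: <bar>world</bar>
print(doc.foo.bar.string)       # the text inside: world
print(doc.get_text(" "))        # all text with the markup stripped
for tag in doc.find_all("bar"): # search/iteration
    print(tag.name, tag.get_text())
```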
If you are out to extract content from pages that make heavy use of JavaScript, Selenium Remote Control can do the job; it works for more than just testing. The main downside is that you'll end up using a lot more resources. The upside is that you'll get a much more accurate data feed from rich pages/apps.
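A minimal sketch with the newer WebDriver API (the successor to Selenium RC); the URL and the fixed sleep are placeholders, and you need a browser driver such as geckodriver installed:

```python
import time
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()    # requires geckodriver on PATH
try:
    driver.get("https://example.com/some-js-heavy-page")
    time.sleep(2)               # crude wait; real code would use WebDriverWait
    # page_source reflects the DOM after scripts have run, so an ordinary
    # HTML parser now sees the rendered content.
    soup = BeautifulSoup(driver.page_source, "html.parser")
    print(soup.get_text(" ", strip=True)[:500])
finally:
    driver.quit()
```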
The Semantic Web is an awesome idea, and a lot of really cool things have been done with the concept. But after all this time I am beginning to wonder if it is all just a pipe dream in the end. Will we ever truly succeed in making a fully semantic web? And if we are never going to be able to use the semantic web to provide our users a deeper experience, is it worth spending the time and extra effort to ensure that my team and I create fully semantic pages?
I know that semantic pages usually just turn out better (more from attention to detail than anything else, I would think), so I am not questioning semantic page design itself. What I am currently mulling over is whether to drop the review and revision process that turns a partially semantic page into a fully semantic one, in hopes of some return in the future.
On a practical level, some aspects of the semantic web are taking off:
1) Semantic markup helps search engines identify key content and improves keyword results.
2) Online identity is a growing concern, and semantic markup in links like rel='me' helps to disambiguate these things. Autodiscovery of social connections is definitely upcoming. (Twitter uses XFN markup for all of your information and your friends, for example.)
3) Google (and possibly others) are starting to pay attention to microformats like hCard and hCalendar to gather greater information about people and events going on. This feature is still on the "very new" list, but these microformats are useful examples of the semantic web.
It may take some time for it all to get there, but there are definite possible benefits. I wouldn't put a huge amount of effort into it these days, but it's definitely worth keeping in mind when you're developing a site.
Yahoo and Google have both announced support for RDFa annotations in your HTML content. Check out Yahoo SearchMonkey and Google Rich Snippets. If you care about SEO and driving traffic to your site, these are good ways to get better search engine coverage today.
Additionally, the Common Tag vocabulary is an RDFa vocabulary for annotating and organizing your content using semantic tags. Yahoo and Google will make use of these annotations, and existing publishing platforms such as Drupal 7 are investigating adopting the Common Tag format.
I would say no.
The reason I would say this is that the return for creating a fully semantic web page right now is practically zero. You will have to spend extra time and effort, and there is very little to show for it.
Effort is not like an investment that compounds, so doing the work now has no practical advantage. If the semantic web does start to show potential, you can always revisit it and tap into that potential later.
It should be friendly to search engines, but going further is not going to provide good ROI.
Furthermore, what are you selling? A lot of the purpose of being semantic, beyond being indexable, is easier third-party integration and data mining (creating those ontologies). Are these desirable traits for your data sets? If you are selling advertising, making it easier for others to pull in your content is probably not going to be helpful.
It's all about where you want to spend your time.
You shouldn't do anything without a requirement. Otherwise, how do you know if you've succeeded? Do you have a requirement for being semantic? How much? How do you measure success? How do you measure return on investment?
Don't do anything just because of fads, unless keeping up with fads is a requirement.
Let me ask you a question - would you live in a house or buy a car that wasn't built according to a spec?
"So is this 4x4 lumber, upheld with a steel T-Beam?"
"Nope...we managed to rig the foundation on on PVC Piping...pretty cool, huh."
I'm looking for algorithms, papers, or software to enhance faxes, images from cell phone cameras, and other similar source for readability and OCR.
I'm mainly interested in simple enhancements (e.g., things you could do using ImageMagick), but I'm also interested in more sophisticated techniques. I'm already talking to vendors, so for this question I'm mostly looking for algorithms or open source software.
To further clarify: I'm not looking for OCR software or algorithms; I'm looking for algorithms to clean up the image so it looks more readable to the human eye, and can possibly be used for OCR.
I had a similar problem when I was writing some software to do book scanning; floating around on the internet is a program called pagetools that does straightening of scanned-in pages using a fairly clever mathematical trick called the Radon transform.
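For reference, here is a rough reconstruction of that straightening trick using scikit-image (my own sketch of the idea, not pagetools' code; the angle range, step, and sign convention may need adjusting): project the page at a range of candidate angles and keep the angle whose projection profile is "spikiest", since properly aligned text lines collapse into sharp peaks.

```python
import numpy as np
from skimage.transform import radon, rotate

def deskew(gray, max_angle=10.0, step=0.5):
    """Estimate page skew from Radon projections and rotate to correct it.

    gray: 2-D array of a scanned page, dark text on a light background,
    pixel values in [0, 255].
    """
    img = 1.0 - gray / 255.0                        # make the text bright
    angles = np.arange(90.0 - max_angle, 90.0 + max_angle + step, step)
    sinogram = radon(img, theta=angles, circle=False)
    # The best angle maximizes the variance of the projection profile:
    # text lines stack into sharp peaks when the projection is aligned.
    best = angles[np.argmax(np.var(sinogram, axis=0))]
    skew = best - 90.0
    return rotate(gray, skew, mode="edge", preserve_range=True)
```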
I also wrote a small routine that would white out the blank space on the page; OCR algorithms tend to do a lot better when they don't have to contend with background noise. What I did was look for light-colored pixels that were more than a small radius away from dark-colored ones, and then boost them up to pure white.
It's been a few years, though, so I don't have the exact implementation details handy.
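From memory, the white-out step would look something like this with OpenCV (a reconstruction, not the original code; the radius and both thresholds are guesses that would need tuning per source):

```python
import cv2
import numpy as np

def whiten_background(gray, dark_thresh=128, light_thresh=200, radius=5):
    """Push light pixels that are far from any dark (ink) pixel to pure white."""
    ink = (gray < dark_thresh).astype(np.uint8)
    # Grow the ink mask so that anything within `radius` of ink is protected.
    size = 2 * radius + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    near_ink = cv2.dilate(ink, kernel)
    out = gray.copy()
    out[(near_ink == 0) & (gray > light_thresh)] = 255
    return out
```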
One simple image filter to look into is the "median filter", which is very straightforward, easy to implement yourself, and helps clean up scanned/photographed text: http://en.wikipedia.org/wiki/Median_filter
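For example, with OpenCV (file names are placeholders):

```python
import cv2

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
# A 3x3 median filter replaces each pixel with the median of its neighborhood,
# which removes salt-and-pepper speckle while keeping letter edges fairly crisp.
clean = cv2.medianBlur(img, 3)   # kernel size must be odd
cv2.imwrite("scan_clean.png", clean)
```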
As requested, link to Wikipedia: Optical character recognition
Microsoft Research: Optical character recognition papers
CiteSeerX : Papers on optical character recognition