Let's assume I have two pieces of text and need to check how similar they are to each other (that part is easy). Additionally, I have a target that represents an ordinal mark from 1 to 3 (1 is best, 3 is worst). How do I build a model that approximates the discrepancy between the two texts with respect to this target?
I have some idea (see the formula), but is there a better solution? Thank you.
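One possible baseline, assuming you have training pairs together with their 1-3 marks (all names below are illustrative, not taken from the question): turn each pair of texts into a few similarity features and fit an ordinary classifier against the mark (an ordinal regression would also fit).

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def pair_features(text_a, text_b, vectorizer):
    va = vectorizer.transform([text_a])
    vb = vectorizer.transform([text_b])
    cos = cosine_similarity(va, vb)[0, 0]                                # lexical similarity
    len_ratio = min(len(text_a), len(text_b)) / max(len(text_a), len(text_b), 1)
    return [cos, len_ratio]

def fit(texts_a, texts_b, marks):
    """texts_a/texts_b: paired texts; marks: the 1-3 target for each pair."""
    vectorizer = TfidfVectorizer().fit(list(texts_a) + list(texts_b))
    X = np.array([pair_features(a, b, vectorizer) for a, b in zip(texts_a, texts_b)])
    model = LogisticRegression().fit(X, marks)                           # plain 3-class classifier
    return vectorizer, model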
I posted a small request on Upwork asking for help with a topic that is currently outside my skill set.
The problem is fitting small rectangles into a big rectangle via an ANN.
The problem is that the first freelancer baffled me a little with a comment.
My thinking was that, because a solution is easy to verify and score, you can simply throw an ANN at this problem and, given enough time, it will perform better and better.
The freelancer requested labeled data before he could tackle the problem (that's the comment that confuses me).
I was thinking that unlabeled random input data would be enough to start with.
Am I thinking about this wrong?
Here is the link to the job post:
https://www.upwork.com/jobs/~01e040711c31ac0979
Edit: here is the original job description directly:
I want Python code for training an ANN and using it in a production environment.
The problem it needs to solve is a rectangle fitting problem.
Input:
1000 small rectangles (groupid, width, height, orientation (free, restricted, hor or ver), value) -- sRect
1 big rectangle (width, height) -- bRect
Layout (bool, bool, bool, xpos, ypos, orientation (hor or ver)) -- Layout
Output:
Layout
The bRect will be duplicated into 3 rectangles into which the sRects need to be fitted.
The worth of the solution is determined by the sum of the values of the sRects placed inside the bRects.
Furthermore, the value is decreased if the sRect is placed in the second or third bRect:
sum(sRect value) * 0.98^(nth bRect)
Not all sRects need to be placed.
The Layout is structured so that the three bools at the start represent which bRect the sRect is placed in. If an sRect is already placed in one of the bRects, then the solution Layout must stay the same for that sRect.
Restricted orientation means all of the sRects in the same group need to be oriented the same way. Hor means the sRect is not turned; ver means the sRect is turned by 90 degrees.
Other than that, the normal rules apply: all sRects need to be inside the bRect and there must be no overlap between sRects.
Looking forward to replies; I am available for further explanations.
Edit: example picture
Important: I don't want to optimise for maximum plate usage, because a smaller sRect can have a higher value than a bigger sRect.
(image: example fitting problem)
Without an expected output for each input you cannot use the most standard training methodology, supervised learning. If you only have a way to verify a solution (e.g. in a game of chess you can tell me whether I won, but you can't tell me how to win), then the most standard approach is reinforcement learning. That said, it is a much more complex problem, not something that, say, a newcomer to the field of ML will be capable of doing (while supervised learning is something one can do essentially by following basic tutorials online).
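For illustration, a rough sketch of the kind of "verify and score" code the post already makes possible, which is what a reinforcement-learning setup would use as its reward; the exponent reading and all names here are assumptions, not taken from the post.

def layout_worth(placed):
    """placed: list of (value, bRect_index) with bRect_index in {1, 2, 3}."""
    # One reading of sum(sRect value) * 0.98^(nth bRect): no decay in the first bRect.
    return sum(value * 0.98 ** (index - 1) for value, index in placed)

def rects_overlap(a, b):
    """a, b: (x, y, width, height); True if two placed sRects overlap (a rule violation)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah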
Is there a way to weight a layout according to node attributes in igraph? In other words, how can I get nodes that share the same characteristics (but do not have edges between them) to cluster more closely together?
While many layout functions can take edge weights into account, the nodes that I want to be closer to each other do not have edges between them. An example of such a situation is when the graph is bipartite. Using layouts such as fruchterman.reingold is not very informative, as vertices of the two different types are interspersed. However, I do not want it to be as extreme as the layout.bipartite option either, as that would be rather messy when there are lots of vertices. What I want is a layout somewhere between these two, with vertices of the same type on one side, clustered according to certain attributes, and with edges running between the two types.
Any idea or suggestion will be greatly appreciated. Thanks!
igraph layouts are simply matrices with 2 columns and N rows, so you can easily re-use one layout with another graph as long as the two graphs share the same number of nodes. You can make use of this here: create a graph where you connect the nodes that you want to be placed close to each other, calculate the layout using this graph, and then plot your original graph with the layout you have calculated.
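A minimal sketch of that trick with python-igraph (the same idea works in the R package); the node types, attribute groups and helper edges below are made up for illustration.

import igraph as ig

# Original bipartite graph: 6 nodes, edges only between the two types.
g = ig.Graph.Bipartite([0, 0, 0, 1, 1, 1], [(0, 3), (1, 4), (2, 5)])

# Helper graph: same number of nodes, but edges connect the nodes that should
# end up close together (same type / same attribute), even though the real
# graph has no edges between them.
helper = ig.Graph(n=g.vcount(), edges=[(0, 1), (1, 2), (3, 4), (4, 5)])

layout = helper.layout_fruchterman_reingold()  # compute coordinates on the helper graph
ig.plot(g, layout=layout)                      # draw the original graph with those coordinates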
I am developing an OCR system to read credit cards.
After scanning the image I get a list of words with their positions.
Any tips/suggestions on the best approach to detect which words correspond to each field of the credit card (number, date, name)?
For example:
position = 96.00 491.00
text = CARDHOLDER
Thanks in advance
Your first problem is that most OCRs are not optimised for small amounts of text that take up most of the "page" (or card image, in your case) in spatially separated chunks. They expect lines, or pages of text from a scanned book or a newspaper. So straight away they're not likely to do that well at analysing the image.
Because the font is fairly uniform they'll likely recognise the characters well, but the layout will confuse the page segmentation algorithm and so the text you get out might not be in the right order. For example, the "1234" of the card number and the smaller "1234" below it constitute a single column of text, likewise the second two sets of four numbers and the expiration date.
For specialized cases where you know the layout in advance you really want to develop your own page segmentation algorithm to break up the image into zones, e.g. card number, card holder name, start and expiration dates. This shouldn't be too hard because I think the locations of these components are standardised on credit cards. Assuming good preprocessing and binarization you could basically do a horizontal histogram and split the image at the troughs.
Then extract each zone as a separate image containing just one line of text and feed it to the OCR.
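A rough sketch of that zone-splitting step, assuming a binarized image where text pixels are dark; the file name and thresholds are illustrative.

import numpy as np
from PIL import Image

img = np.array(Image.open("card_binarized.png").convert("L"))
ink_per_row = (img < 128).sum(axis=1)          # horizontal histogram: ink pixels per row

rows_with_text = ink_per_row > 5               # troughs (rows with almost no ink) separate zones
zones, start = [], None
for y, has_text in enumerate(rows_with_text):
    if has_text and start is None:
        start = y                              # a zone begins
    elif not has_text and start is not None:
        zones.append((start, y))               # the zone ends at a trough
        start = None
if start is not None:
    zones.append((start, len(rows_with_text)))

# Crop each zone to its own single-line image and feed it to the OCR separately.
line_images = [img[y0:y1, :] for y0, y1 in zones]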
Alternatively (the quick and dirty approach):
Instruct the OCR that what you want to recognise consists of a single column (i.e. prevent it from trying to figure out the page layout itself). You can do this with Tesseract using the -psm (page segmentation mode) parameter set to, probably, 6 (but try and see what gives you the best results)
Make Tesseract output hOCR format, which you can set in the config file. The hOCR format includes the bounding boxes of the output lines relative to the whole image.
Write an algorithm that compares the bounding boxes in the hOCR output to where you know each card component should be (looking for some percentage of overlap; it won't match exactly, for obvious reasons).
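A rough sketch of those steps using the pytesseract wrapper (an assumption; any way of calling Tesseract works). image_to_data is used here instead of parsing hOCR only because it returns the same word bounding boxes directly; the zone coordinates are made-up placeholders to measure on your own images.

import pytesseract
from PIL import Image

ZONES = {                                       # (x1, y1, x2, y2) per card field, in pixels
    "number": (40, 180, 600, 240),
    "expiry": (260, 280, 430, 320),
    "holder": (40, 340, 430, 390),
}

def overlap_ratio(box, zone):
    """Fraction of the word box that falls inside the zone."""
    x1, y1, x2, y2 = box
    zx1, zy1, zx2, zy2 = zone
    w = max(0, min(x2, zx2) - max(x1, zx1))
    h = max(0, min(y2, zy2) - max(y1, zy1))
    return (w * h) / max((x2 - x1) * (y2 - y1), 1)

data = pytesseract.image_to_data(Image.open("card.png"),
                                 config="--psm 6",        # single uniform block of text
                                 output_type=pytesseract.Output.DICT)

for i, word in enumerate(data["text"]):
    if not word.strip():
        continue
    box = (data["left"][i], data["top"][i],
           data["left"][i] + data["width"][i], data["top"][i] + data["height"][i])
    for field, zone in ZONES.items():
        if overlap_ratio(box, zone) > 0.5:                # "some percentage of overlap"
            print(field, word)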
In addition to the good tips provided by Mikesname, you can greatly improve the recognition result, regardless of which OCR engine you use, by using image processing to convert the image to bitonal (pure black and white), as in the attached copy of your image.
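For example, a quick Otsu binarization with OpenCV (one common choice; Pillow or scikit-image work just as well):

import cv2

gray = cv2.imread("card.png", cv2.IMREAD_GRAYSCALE)
# Otsu picks the threshold automatically; the result is pure black and white.
_, bitonal = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("card_binarized.png", bitonal)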
I have various product items for which I need to decide whether they are the same. A quick example:
Microsoft RS400 mouse with middle button should match Microsoft Red Style 400 three buttoned mouse but not Microsoft Red Style 500 mouse
There isn't anything else useful that I can match on apart from the name, and just going by the ratio of matching words isn't good enough (the error rate is far too high).
I do know about the domain and so I can (for example) hand write the fact that a three buttoned mouse is probably the same as a mouse with a middle button. I also know the manufacturers (or can take a very good guess at them).
The only thought I have had so far is to use hand-written rules to reduce the size of the string and then check the matching words. I wondered whether anyone had ideas about a better way of doing this matching, with better accuracy and precision (or where to start looking), and whether anyone knew of any work that has been done in this area (papers, examples, etc.).
"I do know about the domain..."
How much exactly do you know about the domain? If you know everything about the domain, then you might be better off building an index of all your manufacturers' products (basically the description of each product from the manufacturer's webpage). Then, instead of trying to match your descriptions to each other, match them to your index of products.
Advantages to this approach:
presumably all words used in the description of the product have been used somewhere in the promotional literature
if, when building the index, you are able to weight some of the information (such as product codes), then you may have more success
Disadvantages:
may take a long time to create the index (especially if done by hand)
If you don't know everything about your domain, then you might consider down-ranking words that are very common (you can get lists of common words off the internet), and up-ranking numbers and words that aren't in a dictionary (you can get word lists off the internet; most Linux/Unix distributions also come with them for spell-checking purposes).
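A small sketch of that down-/up-ranking idea; the weight values and word-list arguments are assumptions for illustration.

def token_weight(token, common_words, dictionary_words):
    t = token.lower()
    if t.isdigit():
        return 3.0        # numbers (e.g. product codes) are highly discriminative
    if t not in dictionary_words:
        return 2.0        # likely a model name or code, up-rank it
    if t in common_words:
        return 0.2        # very common word, nearly useless for matching
    return 1.0

def weighted_overlap(a_tokens, b_tokens, common_words, dictionary_words):
    shared = set(a_tokens) & set(b_tokens)
    union = set(a_tokens) | set(b_tokens)
    weight = lambda toks: sum(token_weight(t, common_words, dictionary_words) for t in toks)
    return weight(shared) / max(weight(union), 1e-9)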
I don't know how much you know about search, but in the past I've found the book "Search Engines: Information Retrieval in Practice" by W. Bruce Croft, Donald Metzler, and Trevor Strohman to be useful. There are some sample chapters on the publisher's website which will tell you whether the book is for you or not: pearsonhighered.com
Hope that helps.
In addition to hand-written rules, you may try to use supervised learning with feature extraction.
Let the features be the words in the description, then treat the descriptions as feature vectors.
When training the algorithm, have it show you two vectors that look similar by the word ratio, and if they are the same item, let the algorithm increase the weights for those words.
For example, each pair of words may get a bigger weight than in the simple ratio approach you have used:
[3-button] [middle]
[wheel] [button]
[mouse] [mouse]
Your algorithm would give a similarity ratio of 1/3 here. When you mark this as "same item", the algorithm should add more weight to those pairs of words the next time it encounters them.
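A rough sketch of that update rule, assuming a simple additive scheme; the function names and learning rate are made up.

from collections import defaultdict
from itertools import product

pair_weight = defaultdict(float)   # learned bonus for a pair of words
LEARNING_RATE = 0.1                # assumed value

def similarity(a, b):
    ta, tb = a.lower().split(), b.lower().split()
    ratio = len(set(ta) & set(tb)) / max(len(set(ta) | set(tb)), 1)   # the plain word ratio
    bonus = sum(pair_weight[(x, y)] for x, y in product(ta, tb))      # learned word-pair weights
    return ratio + bonus

def mark_same_item(a, b):
    """Call this when a human confirms the two descriptions are the same product."""
    for x, y in product(a.lower().split(), b.lower().split()):
        pair_weight[(x, y)] += LEARNING_RATE   # e.g. ("middle", "three") gains weight

mark_same_item("mouse with middle button", "three buttoned mouse")
print(similarity("mouse with middle button", "three buttoned mouse"))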
Just tokenize (you should separate numbers from letters in that step as well, so not just a whitespace tokenizer), stem, and filter out stopwords and uninteresting words like "mouse". Perhaps you should also have a list of producer names, and shorten everything that is not a producer or a number to its first letter. (If you do that, you also have to split on capital letters in the tokenizer.)
Microsoft RS400 mouse with middle button -> Microsoft R S 400
Microsoft Red Style 400 three buttoned mouse -> Microsoft R S 400
Microsoft Red Style 500 mouse -> Microsoft R S 500
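A rough sketch of that normalisation, reproducing the example above; the producer and stopword lists are assumptions you would fill in from your own data.

import re

PRODUCERS = {"microsoft"}                                   # assumed known producer list
UNINTERESTING = {"with", "mouse", "three", "buttoned", "middle", "button"}

def normalise(name):
    tokens = []
    for tok in name.split():
        # split on capital letters and letter/digit boundaries: "RS400" -> "R", "S", "400"
        tokens += re.findall(r"[A-Z][a-z]*|[a-z]+|\d+", tok)
    out = []
    for tok in tokens:
        low = tok.lower()
        if low in UNINTERESTING:
            continue                                        # drop stopwords / uninteresting words
        if low in PRODUCERS or tok.isdigit():
            out.append(tok)                                 # keep producers and numbers in full
        else:
            out.append(tok[0].upper())                      # shorten everything else to its first letter
    return " ".join(out)

print(normalise("Microsoft RS400 mouse with middle button"))      # Microsoft R S 400
print(normalise("Microsoft Red Style 400 three buttoned mouse"))  # Microsoft R S 400
print(normalise("Microsoft Red Style 500 mouse"))                 # Microsoft R S 500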
If you want a better solution:
A VSM (vector space model), as used in plagiarism detection, would be nice. (Every word gets a weight according to its discriminative value, the weighted texts are projected into a multidimensional space, and then you just measure the angle between two texts.)
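A minimal VSM sketch using scikit-learn (an assumption; any TF-IDF implementation works): every word gets an IDF weight, and similarity is the cosine of the angle between two description vectors.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = [
    "Microsoft RS400 mouse with middle button",
    "Microsoft Red Style 400 three buttoned mouse",
    "Microsoft Red Style 500 mouse",
]

vectors = TfidfVectorizer().fit_transform(descriptions)   # IDF acts as the discriminative weight
print(cosine_similarity(vectors[0], vectors[1])[0, 0])    # similarity of the first pair
print(cosine_similarity(vectors[0], vectors[2])[0, 0])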
I would suggest something a lot more generally applicable. As I understand it, you want some NLP processing that will deal with things that you recognize as synonyms. I think that's a pretty simple implementation right there.
If I were you, I would make a keyword object that has a list of synonyms as a parameter, then write a script that scrapes whatever text you have for words that only appear occasionally (with some frequency cap above which a word is no longer considered a keyword), and give each keyword a list of the keywords that are its synonyms. If you were willing to go a step further, I would set weights on the synonym list indicating how similar they are.
With this kind of NLP problem the chance of reaching 100% accuracy is zero, but you could well get above 90%. I would suggest adding an element by which you can adjust the weights in an automated way. I have to be fairly vague here, but in my last job I was tasked with a similar problem and was able to get accuracy in the high 90s. My implementation was probably more complicated than what you need, and even a simple implementation should give you a pretty good return; however, if you aren't dealing with a fairly large data set (roughly hundreds of items or more) it's probably not worth scripting.
Quick example: in your case the difference can be distilled pretty accurately to just saying that "middle" and "three" are synonyms. You can get more complex if you need to, but that alone would match a lot.
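A very small sketch of that synonym folding, with a hand-built map; the entries are illustrative only.

SYNONYMS = {
    "three": "middle",        # "three buttoned" and "middle button" describe the same mouse
    "buttoned": "button",
}

def canonical_tokens(text):
    return {SYNONYMS.get(tok, tok) for tok in text.lower().split()}

a = canonical_tokens("Microsoft RS400 mouse with middle button")
b = canonical_tokens("Microsoft Red Style 400 three buttoned mouse")
print(len(a & b) / len(a | b))   # word overlap after folding synonyms together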
I have an HTML table that has about 30 columns of data, and I'm having a hard time framing it in such a way that it is visible without massive left/right scrolling.
One thing I was wondering is whether anyone has ever seen anything clever done with column headers. Some of them just can't be abbreviated enough: the column header is something like "Interview" while the value is numeric (lots of wasted space for the header alone). Granted, I could rename these columns to something like INT, but there are lots of similarly named columns, so it could become confusing.
Maybe some sort of auto-collapsing columns based on mouse movement? Not sure... I just need some creative suggestions on how to display this data!
Most likely the user will have a devil of a time comprehending 30 columns of data, regardless of scrolling.
I would recommend showing the most fundamental columns (things like name, description, identifying numbers -- core stuff, hopefully 10 or fewer), and then letting the user toggle whatever other columns they need on or off. A bit like Google Squared.
Use jQuery and CSS to accomplish this in a clean fashion. There may also be JavaScript UI libraries that do this for you (jQuery UI, YUI, others...).
Create images for the column names and rotate the text in the image by 90 degrees. You can then have a long name with an equally small column width.
Josh
I agree with the answer from ferocious; toggling columns is a good idea. Also, depending on the data, I would recommend only displaying a few columns, and when the user clicks on the row they are interested in, moving to a new page dedicated to the data in that record. This will work for some types of data and not for others.