Wikipedia API: get the word count of a page - mediawiki

I'm a bit stuck in all the options the Wikipedia API has.
My goal is to get the word count of a Wikipedia page.
I have the URL of the wiki page.
The search option does return this value:
http://en.wikipedia.org/w/api.php?format=xml&action=query&list=search&srsearch=camera&srlimit=1
will return:
<api>
<query-continue>
<search sroffset="1"/>
</query-continue>
<query>
<searchinfo totalhits="68658"/>
<search>
<p ns="0" title="Camera" snippet="A <span class='searchmatch'>camera</span> is an optical instrument that records image s that can be stored directly, transmitted to another location, or both. <b>...</b> " size="43246" wordcount="6348" timestamp="2014-04-29T15:48:07Z"/>
</search>
</query>
</api>
(Scroll a bit to the right and you'll find wordcount.)
But this query performs a search and shows the single top result. However, when I search on the Wikipedia page name from the URL, it doesn't always return that page as the first result.
So is there a way to get this wordcount for a Wikipedia page?

No other API provides this information, so the kludge with list=search is the only way. If you know the exact title you can get better results by appending &srwhat=nearmatch to the query (it will always return at most 1 result, though). See the docs and try the sandbox to learn more.
Note that word counts are not stored in the database, so the API has to go to Lucene/Elasticsearch for this information, which is not exactly fast. If you need this information en masse, you should download a dump instead.
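The approach above can be sketched in Python using only the standard library. The element and attribute names (`p`, `wordcount`) follow the sample response shown in the question; this is a minimal sketch, not a full API client:

```python
import urllib.parse
import xml.etree.ElementTree as ET

API = "https://en.wikipedia.org/w/api.php"

def build_wordcount_url(title):
    # srwhat=nearmatch makes list=search behave like an exact-title lookup
    params = {
        "format": "xml",
        "action": "query",
        "list": "search",
        "srsearch": title,
        "srwhat": "nearmatch",
        "srlimit": "1",
    }
    return API + "?" + urllib.parse.urlencode(params)

def parse_wordcount(xml_text):
    # Pull the wordcount attribute off the first <p> hit, if any
    root = ET.fromstring(xml_text)
    hit = root.find("./query/search/p")
    return int(hit.get("wordcount")) if hit is not None else None
```

Fetch `build_wordcount_url("Camera")` with any HTTP client and feed the body to `parse_wordcount` to get 6348 for the sample response above.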

Related

overpass-turbo.eu: find all cities on a map

Maybe someone can help me with an overpass-turbo.eu query.
I'd like to highlight all cities (their centers) of a country or region (or of the current map).
Is there a "simple" example on the web somewhere?
(Google has not been a good friend with this particular request yet, but I'm sure someone must have tried searching this way already.)
Many thanks for every idea.
Here is an example for finding all cities, towns, villages and hamlets in the country Andorra:
[out:json][timeout:25];
// fetch area “Andorra” to search in
{{geocodeArea:Andorra}}->.searchArea;
// gather results
(
node[place~"city|town|village|hamlet"](area.searchArea);
);
// print results
out body;
>;
out skel qt;
You can view the result at overpass-turbo.eu after clicking the run button.
Note: When running this query for larger countries you might need to increase the timeout value. Also, rendering the result in the browser might not be possible for performance reasons. In this case, use the export button and download the raw data instead.
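Outside the overpass-turbo.eu editor, the `{{geocodeArea:...}}` shortcut is not available; a rough equivalent in raw Overpass QL is an `area["name"=...]` filter. A minimal Python helper that builds such a query string (assuming the area's `name` tag matches the country name, which may not hold everywhere) might look like:

```python
def overpass_places_query(area_name, timeout=25):
    # Raw Overpass QL; area["name"=...] stands in for overpass-turbo's
    # {{geocodeArea:...}} shortcut, which only exists in the web editor.
    return (
        f'[out:json][timeout:{timeout}];'
        f'area["name"="{area_name}"]->.searchArea;'
        '(node[place~"city|town|village|hamlet"](area.searchArea););'
        'out body;>;out skel qt;'
    )
```

The resulting string can be POSTed to a public Overpass endpoint such as overpass-api.de/api/interpreter.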

Understanding openaddresses data format

I have downloaded US-West geolocation data (postal addresses) from openaddresses.io. Some of the addresses in the datasets are not complete, i.e., some of them don't have info like the zip code. Is there a way to retrieve it, or is the data incomplete?
I have tried searching other files hoping to find any related info, but the complete dataset doesn't contain anything related to it. The City of Mesa, AZ has multiple zip codes, so it is hard to assign one to an address. Is there any way to address this problem?
This is what the data looks like (City of Mesa, AZ):
LON,LAT,NUMBER,STREET,UNIT,CITY,DISTRICT,REGION,POSTCODE,ID,HASH
-111.8747353,33.456605,790,N DOBSON RD,,SRPMIC,,,,,dc0c53196298eb8d
-111.8886227,33.4295194,2630,W RIO SALADO PKWY,,MESA,,,,,c38b700309e1e9ce
-111.8867018,33.4290795,2401,E RIO SALADO PKWY,,TEMPE,,,,,9b912eb2b1300a27
-111.8832045,33.4232903,700,S EVERGREEN RD,,TEMPE,,,,,3435b99ab3f4f828
-111.8761202,33.4296416,2100,W RIO SALADO PKWY,,MESA,,,,,b74349c833f7ee18
-111.8775844,33.4347782,1102,N RIVERVIEW,,MESA,,,,,17d0cf1542c66083
Short answer: the data is incomplete.
The data in OpenAddresses.io is only as complete as the data source it pulls from. OpenAddresses is just an aggregation of publicly available datasets; there's no real consistency between the government agencies that make their data available. As a result, other sections of the OpenAddresses dataset might have city names or zip codes, but there's often something missing.
If you're looking to fill in the missing data, take a look at how projects like Pelias use multiple data sources to augment missing data.
Personally, I always end up going back to OpenStreetMap (OSM). One could argue that OpenAddresses is better quality because it comes from official sources and doesn't try to fill in data using approximations, but the large gaps of missing data make it far less useful, at least on its own.

Google Maps Api autocomplete: contains instead of starts with

We are using the Google Places API for showing some locations on a map. The thing is, we have some issues with autocomplete (I believe related to the browser language): if you want to look for places in Panama City, you can write "Panama" and it will return "Panama City" as a prediction result. But if your browser is in Spanish (our users speak Spanish), the user could write "Panama" and we would like to get "Ciudad de Panama" as a result, but we don't. The workaround in this case is to type "Ciudad de Pa" to get the result, but this is something we don't want to require.
Maybe the option is to get predictions with a "contains" match instead of "starts with". Is that possible? Any other ideas?
Thanks,
I think the Google Place Autocomplete API results are already based on a "contains" match, as you describe. I tried to reproduce what you want and used "Ciudad de Panama" as my input; I got many results that not only start with "Ciudad de Panama" but also results like "Panamá, Doctores, Ciudad de México, México".
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=Ciudad%20de%20Panama&types=address&language=es&key=your_api_key
So the Autocomplete API does not return all results that contain what you search for. I also think it gives priority to the position of the words in your input: as you can observe in the results of the request above, when I search for "Ciudad de Panamá", the first result matches the exact phrase while the rest do not. So if you use "Panama" as input, it will prioritize addresses that start with "Panama" itself over results that contain "Panama" as the second or third word (xxxxx xxxx Panama).
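For reference, the request above can be built programmatically. This is a minimal sketch; the endpoint and parameter names are taken from the URL in this answer, and the API key is a placeholder you would replace with your own:

```python
import urllib.parse

ENDPOINT = "https://maps.googleapis.com/maps/api/place/autocomplete/json"

def autocomplete_url(text, api_key, language="es"):
    # Parameter names follow the request shown above; language=es asks
    # for Spanish-localized predictions regardless of the browser locale.
    params = {
        "input": text,
        "types": "address",
        "language": language,
        "key": api_key,
    }
    return ENDPOINT + "?" + urllib.parse.urlencode(params)
```

Setting `language` explicitly on the server side at least removes the dependency on whatever the user's browser happens to send.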

How to realize a context search based on synonyms?

Let's say an internet user searches for "trouble with gmail".
How can I return entries with "problem|problems|issues|issue|trouble|troubles with gmail|googlemail|google mail"?
I don't want to manually add these links between different keywords, so the links between "issue <> problem <> trouble" and "gmail <> googlemail <> google mail" are completely unknown. They should be found in an automated process.
Approach to solve the problem
I provide a synonyms/thesaurus platform like thesaurus.com, synonym.com, etc., or use a synonyms database/API, and use this user-generated input for my queries on a third website.
But this won't cover all synonyms, like the "gmail" example.
Which other options do I have? Maybe something based on the given data and logged search phrases from the past?
You have to think of it ignoring the language.
When you show a baby the same thing using two words, they understand that those words are synonyms. They might not understand perfectly, but they will learn when this is repeated.
You type "problem with gmail".
Two choices:
Your search gives results: you click on one item.
The system identifies that this item was already clicked before when searching for "google mail bug". That's a match, and we will call it a "relative search".
Your search gives poor results:
We search our history for a matching search:
We propose: "did you mean trouble with yahoo mail? yes/no". You click no; that's a "no match". And we might propose other suggestions, like a list of known "relative searches" or a list of possibly related ones, playing with both full-text search over our history and Levenshtein distance.
When a term scores highly enough to be considered a "synonym", you can treat it as one. The algorithm might be wrong, but in fact it depends on what you really expect.
If I search "sending a message is difficult with google" and "gmail issue", nothing is a synonym, but the searches are relatively the same. This is more important to me than true synonyms.
And if you really want to get the synonyms, I would do it in a second phase, comparing words inside "relative searches", and would include a manual check.
I think Google's algorithm uses synonyms mainly to highlight search terms in page results, but not to do an actual search with the relative search terms, except in known situations, as the results for "gmail" and "google mail" are not the same.
But if you identify 10 relative searches for "gmail" which all contain "google mail", that will be a good starting point to guess they are synonyms.
This is a bit long for a comment.
What you are looking for is called a "thesaurus" or "synonyms" list in the world of text searching. Apparently, there is a proposal for such functionality in MySQL. It is not yet implemented. (Here is a related question on Stack Overflow, although the link in the question doesn't seem to work.)
The work-around would be to modify queries before sending them to the database. That is, parse the query into words, then look up all the synonyms for those words, and reconstruct the query. This works better for natural-language searches than for boolean searches (which require more careful reconstruction).
Pseudo-code for getting the final word list with synonyms would be something like:
select @finalwords := concat_ws(' ', group_concat(s.synonyms separator ' '))
from synonyms s
where find_in_set(s.baseword, @words) > 0;
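The work-around described above (parse the query into words, look up synonyms, reconstruct) can also be done application-side before the query ever reaches the database. A minimal sketch, assuming a hypothetical in-memory synonyms dictionary rather than a `synonyms` table:

```python
def expand_query(query, synonyms):
    # Split the query into words and append any known synonyms
    # after each word, producing an expanded term list.
    terms = []
    for word in query.lower().split():
        terms.append(word)
        terms.extend(synonyms.get(word, []))
    return " ".join(terms)
```

The expanded string can then be fed to a natural-language full-text search; boolean queries would need each word replaced by an OR-group of its synonyms instead.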
Seems to me that you have two problems on your hands:
Lemmatisation, which breaks words down into their lemma, sometimes called the headword or root word. This is more difficult than stemming, as it doesn't just chop suffixes off of words, but tries to find a true root, e.g. "are" => "be". This is something that is often done programmatically, although it appears to be a complex task. Here is an online example of text being lemmatised: http://lemmatise.ijs.si/Services
Searching for synonymous lemmas. This is a very complex problem. One approach to this that I have heard of is modifying the lemmatisation engine to return more than one lemma for a given set of words, i.e. "problems" => "problem" and "issue", thereby allowing a more flexible set of results. However, this means that the synonymous lemmas must be provided to the lemmatisation engine from elsewhere. I truly have no idea how you would build a list of synonyms programmatically.
So, you may consider a strategy whereby you lemmatise the text to be searched for, then pass each lemma out to your synonym finder (however that works) to get a final list of lemmas to perform your search with.
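To make the stemming-vs-lemmatisation distinction concrete, here is a toy lemmatiser: an exception table handles irregular forms like "are" => "be" (which no suffix-chopping stemmer can get right), and a couple of simple suffix rules handle regular plurals. This is purely illustrative; real engines use full morphological dictionaries:

```python
# Irregular forms that suffix rules cannot recover (illustrative, not complete)
IRREGULAR = {"are": "be", "is": "be", "was": "be", "were": "be"}

def lemmatize(word):
    w = word.lower()
    if w in IRREGULAR:
        return IRREGULAR[w]
    if w.endswith("ies"):          # cities -> city
        return w[:-3] + "y"
    if w.endswith("s") and not w.endswith("ss"):  # problems -> problem
        return w[:-1]
    return w
```

A synonym finder would then operate on the lemmas this returns, rather than on the raw surface forms.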
I think you have bitten off a very large problem for yourself.
If the system in question is a publicly accessible website, one 'out there' option is to ensure all content can be crawled by Google and then use a Google search on your own site, which should give you the synonym capability 'for free'. There would obviously be some vagaries in the results though and lag in getting match results for newly created content, depending upon how regularly the crawlers hit the site. Probably not suitable in your use case, but for some people, this may be sufficient.
Seeing your revised question, what about using a public API?
http://www.programmableweb.com/category/reference/apis?category=20066&keyword=synonym

Which is a better long term URL design?

I like Stack Overflow's URLs - specifically the forms:
/questions/{Id}/{Title}
/users/{Id}/{Name}
It's great because as the title of the question changes, the search engines will key in to the new URL but all the old URLs will still work.
Jeff mentioned in one of the podcasts - at the time the Flair feature was being announced - that he regretted some design decisions he had made when it came to these forms. Specifically, he was troubled by his pseudo-verbs, as in:
/users/edit/{Id}
/posts/{Id}/edit
It was a bit unclear which of these verb forms he ended up preferring.
Which pattern do you prefer (1 or 2) and why?
I prefer pattern 2 for the simple reason that the URL reads better. Compare:
"I want to access the USERS EDIT resource, for this ID" versus
"I want to access the POSTS resource, with this ID, and EDIT it"
If you forget the last part of each URL, then in the second URL you have a nice recovery plan.
Hi, get /users/edit... what? what do you want to edit? Error!
Hi, get /posts/id... oh you want the post with this ID hmm? Cool.
My 2 pennies!
My guess would be that he preferred #2.
If you put the string first, it means it always has to be there. Otherwise you get ugly-looking URLs like:
/users//4534905
No matter what, you need the ID of the user, so this:
/user/4534905/
ends up looking better. If you want fakie verbs, you can add them to the end:
/user/4534905/edit
Neither. Putting a meaningless numeric ID in the URL is hardly search-engine friendly. You are best off utilizing titles with spaces replaced by dashes, all in lowercase. So for me the correct form is:
/question/how-do-i-bake-an-apple-pie
/user/frank-krueger
I prefer the 2nd option as well,
but I still believe that the resulting URLs are ugly because there's no meaning whatsoever in there.
That's why I tend to split the URL creation into two parts:
/posts/42
/posts/42-goodbye-and-thanks-for-all-the-fish
Both URLs refer to the same document, and given the latter, only the ID is used in the internal query. Thus I can offer somewhat meaningful URLs and still refrain from bloating my queries.
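The "only the ID is used in the internal query" part can be implemented with a route pattern that captures the numeric ID and ignores any trailing slug. A minimal sketch (the route shape is assumed from the examples above, not from any particular framework):

```python
import re

# Matches /posts/<id> with an optional "-slug" tail that is ignored
ROUTE = re.compile(r"^/posts/(\d+)(?:-[a-z0-9-]*)?$")

def post_id(path):
    # Extract the numeric ID, or None if the path doesn't match the route
    m = ROUTE.match(path)
    return int(m.group(1)) if m else None
```

Because the slug never reaches the database lookup, renaming a post changes its canonical URL without breaking any previously shared link.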
I like number 2:
also:
/questions/foo == All questions called "foo"
/questions/{id}/foo == A question called "foo"
/users/aiden == All users called aiden
/users/{id}/aiden == A user called aiden
/users/aiden?a=edit or /users/aiden/edit == Edit the list of users called Aiden?
/users/{id}/edit or /users/{id}?a=edit is better
/rss/users/aiden == An RSS update of users called aiden
/rss/users/{id} == An RSS feed of a user's activity
/rss/users/{id}/aiden == An RSS feed of Aiden's profile changes
I don't mind GET arguments personally and think that /x/y/z should refer to a mutable resource and GET/POST/PUT should act upon it.
My 2p
/question/how-do-i-bake-an-apple-pie
/question/how-do-i-bake-an-apple-pie-2
/question/how-do-i-bake-an-apple-pie-...