Does anyone have experience using the MediaWiki API Sandbox to make REST calls against custom wikis? By custom wiki I mean something like http://bulbapedia.bulbagarden.net/wiki/.
I particularly want access to some of the Pokémon content found on Bulbapedia, but I'm not sure where to start, or if it's even possible to use REST on custom wikis.
My current solution is to just use standard Wikipedia pages, with calls like:
To Get All Pokemon:
https://en.wikipedia.org/w/api.php?action=parse&format=json&page=List_of_Pok%C3%A9mon
To Get Bulbasaur:
https://en.wikipedia.org/w/api.php?action=parse&format=json&page=Bulbasaur
I get some JSON that I can work with, but I would love to be able to explore the content of a Bulbapedia page AND have access to all of Ken Sugimori's artwork.
Yes, MediaWiki comes with an API bundled. Furthermore, since 1.27 it includes a rewritten ApiSandbox that I originally wrote as an extension. Since Bulbapedia is running 1.27.1, it has the sandbox too.
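For example, the same parse call you are already making against Wikipedia should work against Bulbapedia's own endpoint. The /w/ script path and the redirects flag below are assumptions based on a default MediaWiki setup; the wiki's Special:Version page lists the actual api.php entry point if this URL 404s:
https://bulbapedia.bulbagarden.net/w/api.php?action=parse&format=json&redirects=1&page=Bulbasaur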
I am doing an IoT-related project in LabVIEW, using an Arduino as the hardware.
I was able to switch an LED on the Arduino on/off by pressing ON/OFF on a website, using the DataSocket VI. Now what I want is to control the intensity of the LED from the website.
I have a range slider on my website, and its real-time value can be viewed in a textarea, div, or input element.
Is there any way I can get that real-time value, as it changes in the HTML DOM, into LabVIEW?
I know that the DataSocket VI returns the HTML source code, but not the live HTML DOM.
I don't want to use Web Publishing Services, as they don't work on my laptop.
This is the link I'm referring to for DataSocket:
Datasocket Labview
You could do something like creating a WebSocket, but I expect the easiest thing is to use a web service. You can create one in LabVIEW, add a setLEDIntensity method to it, and call it from your JS code. You can find a simple example here and in other documents in that community.
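For the browser side of that, a minimal sketch; the host, port, service name, and the setLEDIntensity method are all assumptions, so substitute whatever your LabVIEW web service actually exposes:

var slider = document.getElementById('intensity'); // id of your range slider (assumed)
slider.addEventListener('input', function () {
    // call the (hypothetical) LabVIEW web service method with the current value
    fetch('http://192.168.1.10:8080/ledservice/setLEDIntensity?value=' + slider.value)
        .catch(function (err) { console.error(err); });
});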
Use the WebSocket API for LabVIEW to send and receive data from the web. This is the best option for you.
https://decibel.ni.com/content/docs/DOC-40572
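If you go the WebSocket route, the browser side could look like this minimal sketch; the endpoint, port, and message format are assumptions that have to match what your LabVIEW WebSocket server expects:

var socket = new WebSocket('ws://192.168.1.10:6125'); // your LabVIEW host/port (assumed)
var slider = document.getElementById('intensity');    // id of your range slider (assumed)

slider.addEventListener('input', function () {
    if (socket.readyState === WebSocket.OPEN) {
        // send the current slider value as a small JSON message
        socket.send(JSON.stringify({ intensity: Number(slider.value) }));
    }
});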
I'm new to the JSON world, and I'm trying to find out how to view the JSON object of a webpage. Will every webpage have a JSON object, and if so, how do I find it in order to get the data and display it on my site? I vaguely remember something about using Firebug.
Thanks,
B
Will every webpage have a JSON object
No.
Many web sites will not use any JSON; many will be completely static (HTML and CSS only).
JSON typically only comes into play if there is a "Web API" (for programmatic access to content), and there are non-JSON ways to build APIs (the X in AJAX stands for XML).
To determine how to access a site programmatically, look at the site's developer documentation. If there isn't any documentation, then whatever AJAX calls a web debugger (like Firebug) shows may well be internal only and intended solely for the site's own implementation; other uses may well be unwelcome (you could be violating the site's IP).
Adding sensitive JSON to your final HTML page can become a vulnerability. JSON should be loaded like an ingredient added to the soup: via Ajax, for example, on an authenticated page. If it's not sensitive JSON, then for performance reasons you should load it once it is required... it really depends on your choice. I have built a library to handle these kinds of requests on the web; check it out: https://github.com/alexmano/jsMan
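As a concrete illustration of that Ajax pattern, a minimal sketch (the /api/items endpoint is made up; a real site's endpoint comes from its developer documentation):

// the page itself stays plain HTML; the JSON arrives separately via Ajax
fetch('/api/items') // made-up endpoint for illustration
    .then(function (res) { return res.json(); })
    .then(function (items) {
        console.log(items); // render the data into the page here
    })
    .catch(function (err) { console.error(err); });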
The Beta API provides two different endpoints for getting the content of a OneNote page from Live, one using the ContentUrl and one using the Id. Both work. It looks as if the GET should work the same way to replace a page by using a PUT instead (it's not documented as doing this; I'm guessing):
PUT https://www.onenote.com/api/beta/pages/{pageId}/content
but I get a 404 if I try this, using the same Id I've just successfully used for a GET. Can one replace an entire page?
Andrew
PUT operations aren't yet supported by the OneNote API. We do support PATCH operations on a page to update specific elements. More information can be found in the OneNote PATCH reference documentation and the OneNote PATCH blog post.
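A minimal sketch of such a PATCH from JavaScript; the page id and token are placeholders, and the change object below appends HTML to the page body (see the PATCH reference for the full set of targets and actions):

var pageId = 'YOUR-PAGE-ID';          // placeholder
var accessToken = 'YOUR-OAUTH-TOKEN'; // placeholder

fetch('https://www.onenote.com/api/beta/pages/' + pageId + '/content', {
    method: 'PATCH',
    headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'application/json'
    },
    // the body is an array of change objects; target 'body' with
    // action 'append' adds the HTML at the end of the page
    body: JSON.stringify([
        { target: 'body', action: 'append', content: '<p>Appended from the API</p>' }
    ])
});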
Feel free to add a request for the PUT operation at https://onenote.uservoice.com
I am trying to make a web scraper that, for this example, scrapes news articles from Reuters.com. I want to get the title and date. I know I will ultimately have to pull the source code from each address and then parse the HTML using something like JSoup.
My question is: how do I ensure I do this for each news article on Reuters.com? How do I know I have hit all the Reuters.com addresses? Are there any APIs that can help me with this?
What you are referring to is called web scraping plus web crawling. What you have to do is visit every link matching some criteria (crawling) and then scrape the content (scraping). I've never used them, but here are two Java frameworks for the job:
http://wiki.apache.org/nutch/NutchTutorial
https://code.google.com/p/crawler4j/
Of course you will have to use jsoup (or similar) for parsing the content after you've collected the URLs.
Update
Check out Sending cookies in request with crawler4j? for a better list of crawlers. Nutch is pretty good, but it's very complicated if the only thing you want is to crawl one site. crawler4j is very simple, but I don't know if it supports cookies (and if that matters to you, it's a deal breaker).
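For the parsing step itself, once a crawler hands you an article URL, the scrape is short. The question targets Java's jsoup; as a sketch of the same idea in JavaScript, here it is with cheerio, a jsoup-style HTML parser. The CSS selectors are guesses, so inspect Reuters' actual markup and adjust:

var cheerio = require('cheerio'); // npm install cheerio; Node 18+ for global fetch

fetch('https://www.reuters.com/some-article-url') // placeholder article URL
    .then(function (res) { return res.text(); })
    .then(function (html) {
        var $ = cheerio.load(html);
        // selector guesses: headline in an <h1>, date in a <time> element
        var title = $('h1').first().text().trim();
        var date = $('time').first().attr('datetime');
        console.log(title, date);
    });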
Try this website: http://scrape4me.com/
I was able to generate this URL for the headline: http://scrape4me.com/api?url=http%3A%2F%2Fwww.reuters.com%2F&head=head&elm=&item[][DIV.topStory]=0&ch=ch
I'm creating a wiki using MediaWiki for the first time. I would like to automatically include all backlinks to the current page in a template (like the "See also" section). I tried playing with the API, successfully, but I still haven't succeeded in including the useful section of the result in my template.
I have been querying Google and Stack Overflow for days (maybe in the wrong way), but I'm still stuck.
Can somebody help me?
As far as I know, there is no reasonable way to do that. Probably the closest you could get is to write JavaScript code that reacts to the presence of a specific HTML element on the page, makes the API request, and then updates the HTML to include the result.
It's not possible to execute any JavaScript from wikitext, or even to use the more uncommon HTML elements. As such, you won't be able to use the MediaWiki API like that from within a template.
There are multiple different options you have to achieve something like this though:
You could use the API by including custom JavaScript code on MediaWiki:Common.js. The code there is loaded automatically and can be used to enhance the wiki experience. This obviously requires JavaScript on the client, so it might not be the best option, but at least you could use the API directly. You would have to add something to figure out where to place the results correctly, though.
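A minimal sketch of that approach, assuming your template emits an empty placeholder list for the script to fill (the auto-see-also id below is made up):

mw.loader.using('mediawiki.util', function () {
    // ask the API for pages that link to the current page
    $.getJSON(mw.util.wikiScript('api'), {
        action: 'query',
        list: 'backlinks',
        bltitle: mw.config.get('wgPageName'),
        bllimit: 50,
        format: 'json'
    }).done(function (data) {
        var $target = $('#auto-see-also'); // placeholder emitted by your template (assumed)
        if (!$target.length || !data.query) { return; }
        data.query.backlinks.forEach(function (link) {
            $target.append(
                $('<li>').append(
                    $('<a>').attr('href', mw.util.getUrl(link.title)).text(link.title)
                )
            );
        });
    });
});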
A better option would be to use an extension that gives you this output. You can either try to find an extension that already provides this functionality, or write your own that uses the internal MediaWiki API (not the JS one) to access that content.
One extension I can personally recommend that does this (and many other things) is DynamicPageList (full disclosure: I'm somewhat affiliated with that project). It allows you to perform complex page selections.
For example, what you are trying to do is find all pages that link to your page. This can easily be done with DPL, like this:
{{ #dpl: linksto = {{FULLPAGENAME}} }}
I recently wrote a blog post showing how to call the API to get the job queue size and display it inside a wiki page. You can read about it at Display MediaWiki job queue size inside your wiki. This solution does require the External Data extension, however. The code looks like:
{{#get_web_data: url={{SERVER}}{{SCRIPTPATH}}/api.php?action=query&meta=siteinfo&siprop=statistics&format=json
| format=JSON
| data=jobs=jobs}}
{{#external_value:jobs}}
You could easily swap in a different API call to get other data. For the specific item you're looking for, #poke's answer above is probably better.