How to include the result of an API request in a template? - mediawiki

I'm creating a wiki using MediaWiki for the first time. I would like to automatically include all backlinks of the current page in a template (like a "See also" section). I tried to play with the API, successfully, but I still haven't succeeded in including the useful part of the result in my template.
I have been searching Google and Stack Overflow for days (maybe in the wrong way), but I'm still stuck.
Can somebody help me?

As far as I know, there is no reasonable way to do that. Probably the closest you could get is to write JavaScript code that reacts to the presence of a specific HTML element in the page, makes the API request, and then updates the HTML to include the result.

Wikitext cannot execute any JavaScript or use most uncommon HTML, so you won't be able to call the MediaWiki API like that from a template.
There are multiple options to achieve something like this, though:
You could use the API by including custom JavaScript code in MediaWiki:Common.js. The code there is included automatically and can be used to enhance the wiki experience. This obviously requires JavaScript on the client, so it might not be the best option, but at least you could use the API directly. You would also have to add something to figure out where to place the results correctly; see the sketch below.
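For illustration, a minimal sketch of what such Common.js code could look like (untested; the placeholder element ID is invented for this example, and your template would need to emit it):
// A minimal sketch for MediaWiki:Common.js. It assumes the template
// emits a placeholder <div id="backlinks-here"></div> (the ID is made
// up here) and fills it with the current page's backlinks.
mw.loader.using('mediawiki.util', function () {
    var target = document.getElementById('backlinks-here');
    if (!target) {
        return; // only run on pages that contain the placeholder
    }
    $.getJSON(mw.util.wikiScript('api'), {
        action: 'query',
        list: 'backlinks',
        bltitle: mw.config.get('wgPageName'),
        bllimit: 50,
        format: 'json'
    }).done(function (data) {
        var list = document.createElement('ul');
        data.query.backlinks.forEach(function (page) {
            var item = document.createElement('li');
            var link = document.createElement('a');
            link.href = mw.util.getUrl(page.title);
            link.textContent = page.title;
            item.appendChild(link);
            list.appendChild(item);
        });
        target.appendChild(list);
    });
});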
A better option would be to use an extension that gives you this output. You can either try to find an extension that already provides this functionality, or write your own that uses the internal MediaWiki API (not the JS one) to access that content.
One extension I can personally recommend that does this (and many other things) is DynamicPageList (full disclosure: I'm somewhat affiliated with that project). It allows you to perform complex page selections.
For example, what you are trying to do is find all pages that link to your page. DPL can do this easily:
{{ #dpl: linksto = {{FULLPAGENAME}} }}

I wrote a blog post recently showing how to call the API to get the job queue size and display it inside a wiki page. You can read about it at Display MediaWiki job queue size inside your wiki. This solution requires the External Data extension, however. The code looks like:
{{#get_web_data: url={{SERVER}}{{SCRIPTPATH}}/api.php?action=query&meta=siteinfo&siprop=statistics&format=json
| format=JSON
| data=jobs=jobs}}
{{#external_value:jobs}}
You could easily swap in a different API call to get other data. For the specific item you're looking for, #poke's answer above is probably better.

Related

Can I get a version of a Wikipedia page as of specified date?

I am trying to access old versions of wiki pages using a date instead of "oldid". Usually, to access a version of a wiki page, I have to use the revision ID like this: https://en.wikipedia.org/w/index.php?title=Main_Page&oldid=969106986. Is there a way to access the same page using the date, without knowing the ID, if I know for example that there is a version of the page published on "12:44, 23 July 2020"?
In addition to the "main" API (called the action API by MediaWiki developers), you can also use the REST API. It may or may not be enabled at other wikis, but it is available if you intend to query Wikipedia content.
The revision module of the action API (linked to in #amirouche's answer) allows you to get the wikitext of a page. That is the source format used by MediaWiki, and it isn't easy to produce HTML from it, even though HTML can be easier to analyze (especially if you are doing linguistic analysis, for instance).
If HTML would be better for your use case, you can use the REST API; see https://en.wikipedia.org/api/rest_v1/#/. For instance, if you're interested in English Wikipedia's Main Page as of July 2008, you can use https://en.wikipedia.org/api/rest_v1/page/html/Main_Page/223883415.
The number (223883415) is the revision ID, which you can get through the action API.
However, keep in mind that this re-parses the revision's wikitext into HTML. That means the result may not be exactly what was shown on the date the revision was saved. For instance, the wikitext can contain conditions on the current date (which is used to automatically update the main page). If you're interested in seeing that, you would need to use archive.org.
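If you want to consume that rendered HTML from a script, it is a plain HTTP GET; a quick sketch, using the REST URL from above:
// Sketch: fetch the rendered HTML of Main_Page at revision 223883415
// via the REST API endpoint shown in the answer above.
fetch('https://en.wikipedia.org/api/rest_v1/page/html/Main_Page/223883415')
    .then(function (response) { return response.text(); })
    .then(function (html) {
        console.log(html.length + ' characters of rendered HTML');
    });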
You can use the MediaWiki API to get revisions; refer to the documentation at https://www.mediawiki.org/wiki/API:Revisions.
You need to map revision IDs to dates. It should be straightforward :).
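For example, the action API can return the single newest revision at or before a given timestamp, which gives you the ID directly. A sketch using the date from the question (untested):
// Sketch: ask for the newest revision of Main_Page that is older than
// (or equal to) the given timestamp, then read its revision ID.
var url = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions'
    + '&titles=Main_Page&rvlimit=1&rvdir=older'
    + '&rvstart=2020-07-23T12:44:00Z&format=json&origin=*';
fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
        var page = Object.values(data.query.pages)[0];
        console.log('Revision ID:', page.revisions[0].revid);
    });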

"Reverse" JSON Status API

I've been wondering how to fetch the PlayStation server status. They display it on this page:
https://status.playstation.com/en-us/
But PlayStation is known to use APIs instead of PHP database fetches. After looking around in the source code of the site, I found that they have a separate file called /data.json.
https://status.playstation.com/en-us/data.json
The content of this file is the same as the index file (for some reason). They use placeholders like {{endDateTitle}} and {{message}}, but I can't find where they're defined, or whether they're filled in from a separate file or pulled from a database using PHP.
How can I "reverse" this site and see if there's an API I can use to display the status on my site?
Maybe I did not get the question right, but it seems pretty straightforward.
If using Firefox, open the Developer Tools, Network tab, and reload the page.
You can clearly see the requested URL
https://status.playstation.com/data/statuses/region/SCEA.json
It seems that an empty list as a status means "No problems" (since there are no problems right now, I cannot verify this assumption). That's all.
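Once you have that URL, consuming it is a short sketch like the one below. Note that cross-origin requests from your own site may be blocked, in which case you would proxy it server-side; also, the response structure is whatever Sony ships, so inspect it rather than trusting any field names:
// Sketch: read the discovered status endpoint. An empty status list
// appears to mean "no problems" (unverified assumption, see above).
fetch('https://status.playstation.com/data/statuses/region/SCEA.json')
    .then(function (response) { return response.json(); })
    .then(function (data) {
        console.log(data);
    });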
The double braces {{}} are used by various HTML templating languages, like Angular, so you'd have to go through the JS code to understand where they get filled in.

Using Wikipedia API on custom wikis like Bulbapedia

Does anyone have experience using the wiki API sandbox to make REST calls on custom wikis? By custom wiki I mean something like http://bulbapedia.bulbagarden.net/wiki/.
I particularly want to get access to some of the Pokemon content found on Bulbapedia, but not sure where to start or if it's even possible to use REST on custom wikis.
My current solution is to just use standard Wikipedia pages with calls like:
To Get All Pokemon:
https://en.wikipedia.org/w/api.php?action=parse&format=json&page=List_of_Pok%C3%A9mon
To Get Bulbasaur:
https://en.wikipedia.org/w/api.php?action=parse&format=json&page=Bulbasaur
I get some JSON that I can work with, but would love to be able to explore the content of a Bulbapedia page AND have access to all of Ken Sugimori's artwork.
Yes, MediaWiki comes with the API bundled. Furthermore, since 1.27 it includes a rewritten ApiSandbox that I originally wrote as an extension. Since Bulbapedia is running 1.27.1, it has the sandbox too.
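So you can point the same kind of call at Bulbapedia. A sketch (the /w/ script path is the usual MediaWiki layout and the exact page title is a guess; adjust both if the wiki differs, and check Special:Version if unsure):
// Sketch: the same action=parse call as in the question, aimed at
// Bulbapedia. origin=* requests anonymous CORS, if the wiki allows it.
fetch('https://bulbapedia.bulbagarden.net/w/api.php'
        + '?action=parse&format=json&page=Bulbasaur_(Pok%C3%A9mon)&origin=*')
    .then(function (response) { return response.json(); })
    .then(function (data) {
        console.log(data.parse.title);
    });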

How to get changes in the HTML DOM in LabVIEW?

I am doing an IoT-related project in LabVIEW, using an Arduino as the hardware.
I was able to switch an LED on the Arduino off and on by pressing OFF/ON on a website, using the DataSocket VI. Now what I want is to control the intensity of the LED from the website.
I have a range slider on my website, and its real-time value can be viewed in a textarea, div, or input element.
Is there any way I can get that real-time value, which is being changed in the HTML DOM, into LabVIEW?
I know that the DataSocket VI returns the HTML source code, but not the HTML DOM.
I don't want to use the Web Publishing Services, as they don't work on my laptop.
This is the link I'm referring to for DataSocket:
Datasocket Labview
You can do something like creating a WebSocket, but I expect the easiest thing is to use a web service. You can create one in LabVIEW, add a setLEDIntensity method to it, and call it from your JS code. You can find a simple example here and in other documents in that community.
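On the browser side, that call could look something like this sketch (the host, port, URL path, and slider ID are all placeholders; the actual path depends on how you name and deploy the LabVIEW web service):
// Sketch: send the slider's value to a hypothetical LabVIEW web
// service method named setLEDIntensity (the URL is a placeholder).
var slider = document.querySelector('#intensity');
slider.addEventListener('input', function () {
    fetch('http://localhost:8080/myservice/setLEDIntensity?value=' + slider.value);
});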
Use the WebSocket API for LabVIEW to send and receive data from the web. This is the best option for you.
https://decibel.ni.com/content/docs/DOC-40572
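The browser side of the WebSocket approach is only a few lines. A sketch (the address and slider ID are placeholders; the LabVIEW side comes from the library linked above):
// Sketch: push every slider change to a WebSocket server in LabVIEW.
var socket = new WebSocket('ws://localhost:6125'); // placeholder address
var slider = document.querySelector('#intensity');
slider.addEventListener('input', function () {
    if (socket.readyState === WebSocket.OPEN) {
        socket.send(slider.value);
    }
});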

Drupal 7 (VERY) Custom Preview

I have a drupal site that is being used strictly as a CMS that produces JSON feeds using services and services_views, which are consumed by a separate site. What I would like to do (and I have a working proof of concept of this) is allow for a "live preview" on the real site, by intercepting the node form preview / submit, encoding the node as JSON, and loading a special page on the live site that consumes that JSON and displays the page accordingly.
The problem with this JSONized node is that it's different from the JSON produced by my view (using services_views). My end goal is to produce JSON that is identical for both previewed and non-previewed objects, without having to maintain separate output methods (I could easily hand-customize the JSON, but then whenever my view for the public API changes I would have to make the same changes to the preview JSON; I'm trying to avoid this).
I'm looking for feedback on this approach. Is what I'm attempting even possible? The ideas I've been able to come up with so far are:
being able to (conditionally) drive my view with data from a non-database source
sneakily inserting data into the view object during one of the stages of execution? Kludgy but I'm not above that :)
saving a "clone" node (or revision?) of the node being previewed and letting the view use that to display the preview JSON?
Maybe this is the wrong approach altogether and there's something better? (Trying to intercept and format the services output in my module... maybe avoid services_views altogether?)
If anyone can offer some advice, insight or opinions on how to best proceed here, I'd be really grateful.
In a custom module, you could set up a page that grabs the JSON output from the view page:
// Pull the view's rendered JSON so the preview reuses the same output path.
$JSON = file_get_contents($url);
That way the preview stays bound to the view, even if the view changes.
First, I think what you are trying to achieve is not an easy task, so before anything else: good luck.
I think you could intercept the node submission data, then create a node programmatically, render that node, and then export the rendered node to JSON. Immediately after you get the JSON, delete the node, because the programmatically created node is only for preview.
This approach could be more CPU-demanding, but consider that previewing content exactly as it will finally look is difficult.
The feeds that your site reads could be filtered with some parameter to exclude programmatically created (preview) nodes, even though these nodes will only exist for a very short time.