I am quite new to wikis, but I would like to know if it is possible to grab the newest (top) recent change of each page and put it in some kind of feed or other usable format, i.e. a list of every page's most recent change. Thanks in advance.
You can use the MediaWiki API to retrieve this data. Try it out; if you run into trouble, come back and we might be able to help : )
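For example (a sketch against English Wikipedia; point it at your own wiki's api.php), the recentchanges list with rctoponly=1 returns only changes that are currently a page's latest revision:

https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&rctoponly=1&rclimit=50&format=json

If a ready-made feed is enough, Special:RecentChanges can also be requested with ?feed=rss or ?feed=atom appended.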
According to the docs, one can simply add
"$schema": "https://schemas.wp.org/trunk/theme.json",
to the beginning of the JSON file to load the schema.
VS Code gives me an error that the schema is missing.
And sure enough, going to https://schemas.wp.org/trunk/theme.json in a browser, it doesn't exist.
I've seen posts such as this one about the missing schema, which says to use https://schemas.wp.org/wp/5.9/theme.json instead, but I get the same result. Searching around, it seems like their security certificate expired at some point, but surely they resolved that many months ago.
I'd like to resolve this to get the benefits of the schema (autocomplete / IntelliSense) in the file, especially since the file is getting larger and this is a large project with multiple people working on it.
Help appreciated.
It seems like https://schemas.wp.org/trunk/theme.json is not reachable for you even though it is online (I can reach the URL). The URL forwards to https://raw.githubusercontent.com/WordPress/gutenberg/trunk/schemas/json/theme.json, which might be accessible for you. If so, you can add this direct URL as the value for $schema.
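For example, the top of the file would then start like this (the version value is just illustrative; keep whatever your theme.json already declares):

{
  "$schema": "https://raw.githubusercontent.com/WordPress/gutenberg/trunk/schemas/json/theme.json",
  "version": 2
}

VS Code should then pick up the schema and offer autocomplete without further configuration.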
I am trying to access old versions of wiki pages using a date instead of "oldid". Usually, to access a version of a wiki page, I have to use the revision ID like this: https://en.wikipedia.org/w/index.php?title=Main_Page&oldid=969106986. Is there a way to access the same page using the date, without knowing the ID, if I know for example that there is a version of the page published on "12:44, 23 July 2020"?
In addition to the "main" API (called the action API by MediaWiki developers), you can also use the REST API. It may or may not be enabled at other wikis, but it is available if you intend to query Wikipedia content.
The revisions module of the action API (linked to in #amirouche's answer) allows you to get the wikitext of a page. That is the source format used by MediaWiki, and it isn't easy to produce HTML from it; HTML can be easier to analyze (especially if you're doing linguistic analysis, for instance).
If HTML would be better for your use case, you can use the REST API; see https://en.wikipedia.org/api/rest_v1/#/. For instance, if you're interested in English Wikipedia's Main Page as of July 2008, you can use https://en.wikipedia.org/api/rest_v1/page/html/Main_Page/223883415.
The number (223883415) is the revision ID, which you can get through the action API.
However, keep in mind that this re-parses the revision's wikitext into HTML. That means the result isn't necessarily exactly what was shown as of the date the revision was saved. For instance, the wikitext can contain conditions on the current date (which is what automatically updates the Main Page). If you're interested in seeing that, you would need to use archive.org.
You can use the MediaWiki API to get revisions; refer to the documentation at https://www.mediawiki.org/wiki/API:Revisions.
You will need to map revision IDs to dates. It will be straightforward :).
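As a concrete sketch (adjust the title and timestamp to your case): the revisions module can enumerate backwards from a given date, so asking for a single revision starting at your target time returns the ID of the newest revision at or before that moment, which you can then plug into oldid=:

https://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Main_Page&rvlimit=1&rvdir=older&rvstart=2020-07-23T12:44:00Z&rvprop=ids|timestamp&format=json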
To make an example WordPress plugin that shows an image, or at least stores it in the media library, I go out to the NASA Open API and fetch the Astronomy Picture of the Day. I have not yet tested that code, but assuming it works, my problem is: then what? I guess I have to parse the returned data and somehow get an image file I can upload to the library. I'm used to grabbing pre-known text fields such as data.customerid, etc., but never an image. The API site is not much help. I have gleaned from around here a few references to "base64", but I don't know what that means. I can't find a straight-up example of what to do. Any thoughts?
You'll have to follow steps along these lines; you are pretty much correct on that:
fetch the response from the API (example: https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY)
parse the image URL out of the JSON response
use the media_sideload_image function in WordPress to save the image to the media library, so you can show it in your plugin or to your users.
I suggest you start building your plugin from scratch, and when you get stuck, post your code here so the community can help you better.
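To make those steps concrete, here is a rough, untested sketch as WordPress-side PHP (the function name is made up; the APOD response exposes the image location in its url/hdurl fields plus a media_type field, so no base64 handling is involved):

function myplugin_fetch_apod() {
    // 1. Fetch the APOD JSON (DEMO_KEY is heavily rate-limited; use your own key).
    $response = wp_remote_get( 'https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY' );
    if ( is_wp_error( $response ) ) {
        return $response;
    }

    // 2. Parse the JSON body and pull out the image URL.
    $data = json_decode( wp_remote_retrieve_body( $response ), true );
    if ( empty( $data['url'] ) || ( isset( $data['media_type'] ) && 'image' !== $data['media_type'] ) ) {
        return new WP_Error( 'apod_no_image', 'Today\'s APOD is not an image.' );
    }

    // media_sideload_image() lives in admin includes, so load them when outside wp-admin.
    require_once ABSPATH . 'wp-admin/includes/media.php';
    require_once ABSPATH . 'wp-admin/includes/file.php';
    require_once ABSPATH . 'wp-admin/includes/image.php';

    // 3. Download the image into the media library; 'id' makes it return the attachment ID.
    return media_sideload_image( $data['url'], 0, isset( $data['title'] ) ? $data['title'] : null, 'id' );
}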
I want to make a line chart that automatically updates with data from this page: http://www2.nve.no/h/hd/plotreal/Q/0027.00025.000/knekkpunkt.csv
The .csv is updated once per hour and contains a date and two values for the amount of water flowing in a river. How can I set up Highcharts, or something similar, to get data from this file and render it to a graph?
I don't have server access to nve.no, where the data is stored.
Appreciate any ideas!
As part of the ZingChart team, we've received several questions similar to this one. You're likely to run into cross-domain issues initially when using a resource that is hosted on another domain, especially since you do not have any access to that server.
There are ways around this, however. One method involves using JSONP and YQL (Yahoo! Query Language). Using YQL, you can pull from the URL that you've provided and have the data returned as JSONP.
Here is a JSbin demo that shows this off: http://jsbin.com/hidel/1/edit?html,output
To get started, I also recommend visiting the Working with Data article.
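If YQL isn't an option, another common way around the restriction is a small same-origin proxy on a server you do control (a basic hosting account is enough). A minimal sketch, assuming PHP with allow_url_fopen enabled and a made-up file name proxy.php:

<?php
// proxy.php -- hypothetical same-origin proxy for the hourly CSV.
// Fetching server-side sidesteps the browser's cross-domain restrictions.
header( 'Content-Type: text/plain; charset=utf-8' );
echo file_get_contents( 'http://www2.nve.no/h/hd/plotreal/Q/0027.00025.000/knekkpunkt.csv' );

Your page's JavaScript then requests proxy.php instead of the nve.no URL, and the chart library never hits the cross-domain wall.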
I'm creating a wiki using MediaWiki for the first time. I would like to automatically include all backlinks to the current page in a template (like a "See also" section). I tried playing with the API, successfully, but I still haven't succeeded in including the useful part of the result in my template.
I have been querying Google and Stack Overflow for days (maybe in the wrong way) but I'm still stuck.
Can somebody help me?
As far as I know, there is no reasonable way to do that. Probably the closest you could get is to write JavaScript code that reacts to the presence of a specific HTML element on the page, makes the API request, and then updates the HTML to include the result.
It's not possible in wikitext to execute any JavaScript or use most of the more uncommon HTML. As such, you won't be able to use the MediaWiki API like that.
There are multiple options to achieve something like this, though:
You could use the API by including custom JavaScript code in MediaWiki:Common.js. The code there is loaded automatically and can be used to enhance the wiki experience. This obviously requires JavaScript on the client, so it might not be the best option; but at least you could use the API directly. You would have to add something to figure out where to place the results correctly, though.
A better option would be to use an extension that gives you this output. You can either try to find an extension that already provides this functionality, or write your own that uses the internal MediaWiki API (not the JS one) to access that content.
One extension I can personally recommend that does this (and many other things) is DynamicPageList (full disclosure: I'm somewhat affiliated with that project). It allows you to perform complex page selections.
For example, what you are trying to do is find all pages that link to the current page. This can easily be done with DPL like this:
{{ #dpl: linksto = {{FULLPAGENAME}} }}
I wrote a blog post recently showing how to call the API to get the job queue size and display it inside a wiki page. You can read about it at Display MediaWiki job queue size inside your wiki. This solution does require the External Data extension, however. The code looks like:
{{#get_web_data: url={{SERVER}}{{SCRIPTPATH}}/api.php?action=query&meta=siteinfo&siprop=statistics&format=json
| format=JSON
| data=jobs=jobs}}
{{#external_value:jobs}}
You could easily swap in a different API call to get other data. For the specific item you're looking for, #poke's answer above is probably better.
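For instance (untested, and assuming External Data is installed as above), the same pattern with the API's backlinks list would look something like this, with #for_external_table looping over the returned titles:

{{#get_web_data: url={{SERVER}}{{SCRIPTPATH}}/api.php?action=query&list=backlinks&bltitle={{FULLPAGENAMEE}}&format=json
| format=JSON
| data=title=title}}
{{#for_external_table: [[{{{title}}}]] }}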