Can I get a version of a Wikipedia page as of a specified date? - mediawiki

I am trying to access old versions of wiki pages using a date instead of "oldid". Usually, to access a version of a wiki page, I have to use the revision id like this: https://en.wikipedia.org/w/index.php?title=Main_Page&oldid=969106986. Is there a way to access the same page using the date, without knowing the ID, if I know for example that a version of the page was published on "12:44, 23 July 2020"?

In addition to the "main" API (called the action API by MediaWiki developers), you can also use the REST API. It may or may not be enabled at other wikis, but it is enabled at Wikipedia, which covers you if you intend to query Wikipedia content.
The revision module of the action API (linked to in amirouche's answer) allows you to get the wikitext format of a page. That is the source format used by MediaWiki, and it isn't easy to get HTML from it, which can be easier to analyze (especially if you do linguistic analytics, for instance).
If HTML would be better for your use case, you can use the REST API, see https://en.wikipedia.org/api/rest_v1/#/. For instance, if you're interested in English Wikipedia's Main Page as of July 2008, you can use https://en.wikipedia.org/api/rest_v1/page/html/Main_Page/223883415.
The number (223883415) is the revision ID, which you can get through the action API.
However, keep in mind that this re-parses the revision's wikitext into HTML. That means it doesn't need to be exactly what was shown as of the date the revision was saved. For instance, the wikitext can contain conditions on the current date (which is used for automatically updating the main page). If you're interested in seeing that, you would need to use archive.org.

You can use the MediaWiki API to get revisions; refer to the documentation at https://www.mediawiki.org/wiki/API:Revisions.
You need to map revision ids to dates. It will be straightforward :).
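As a concrete sketch of that mapping (Python with the requests library; the parameters follow the API:Revisions documentation, but treat this as an untested illustration rather than a drop-in script), you can ask for the newest revision at or before a given timestamp and build the oldid URL from it:

import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Main Page",
    "rvlimit": 1,
    "rvdir": "older",                   # walk backwards in time...
    "rvstart": "2020-07-23T12:44:00Z",  # ...starting from this date
    "rvprop": "ids|timestamp",
    "format": "json",
    "formatversion": 2,
}

rev = requests.get(API, params=params).json()["query"]["pages"][0]["revisions"][0]
print(rev["revid"], rev["timestamp"])
print("https://en.wikipedia.org/w/index.php?title=Main_Page&oldid=%d" % rev["revid"])

The revid it prints is exactly the number you would otherwise read off the page history.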


Send Email with all new OneNote entries

I use Microsoft OneNote daily to take notes. I would like to write a script to send myself an email every night with all the new notes I took that day across notebooks so I can review them. This would usually be straightforward in e.g. a Word doc where I can timestamp all saves and take the latest file, diff it with the last file from the previous day and send the diff. Unfortunately OneNote complicates this for at least two reasons:
OneNote autosaves and as far as I can tell does not offer the ability to rename saves or add a timestamp to the filename
Notebooks and pages mean changes are across "documents" instead of a single file that can be diff'd.
So I am looking for a solution that considers the complications above. Thanks.
The basic approach via the Microsoft Graph API
./me/onenote/pages?$filter=lastModifiedDateTime ge yyyy-MM-ddThh:mm:ssZ&$expand=parentNotebook
will yield JSON data with:
title - Page title
links/oneNoteWebUrl - allows opening of the onenote page in web browser
links/oneNoteClientUrl - allows opening of the onenote page in onenote app
parentNotebook/displayName - Notebook name
self - needed to get page content.
For small page counts this may work, but it is likely to time out with a 504 error on a drive with many pages.
In that case a two-stage approach is required.
./me/onenote/sections?$filter=lastModifiedDateTime ge yyyy-MM-ddThh:mm:ssZ
will return a list of all the sections that have been modified since the defined lastModifiedDateTime.
Next, iterate through the returned JSON data and get the pages modified since lastModifiedDateTime via the returned pagesUrls, using the format
./me/onenote/sections/1-xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx/pages?$filter=lastModifiedDateTime ge yyyy-MM-ddThh:mm:ssZ&$expand=parentNotebook
yielding the same data as noted previously.
Once you have this data you can generate an email containing a list of the modified notebooks, page names and page links.
If you need the actual page data (content), then you need to call
./me/onenote/pages/1-1c13bcbae2fdd747a95b3e5386caddf1!1-xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx/content?includeIDs=true&includeInkML=true&preAuthenticated=true
which will give you text/html, ink and links to other resources from each page.
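Putting the two stages together, here is a hedged sketch in Python with the requests library (it assumes you already have a valid Microsoft Graph OAuth token; acquiring one, paging and error handling are left out):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token
SINCE = "2024-01-01T00:00:00Z"  # example lastModifiedDateTime

# Stage 1: sections modified since SINCE.
sections = requests.get(
    GRAPH + "/me/onenote/sections",
    headers=HEADERS,
    params={"$filter": "lastModifiedDateTime ge " + SINCE},
).json()["value"]

# Stage 2: pages modified since SINCE, per section, via the returned pagesUrl.
for section in sections:
    pages = requests.get(
        section["pagesUrl"],
        headers=HEADERS,
        params={"$filter": "lastModifiedDateTime ge " + SINCE,
                "$expand": "parentNotebook"},
    ).json()["value"]
    for page in pages:
        print(page["parentNotebook"]["displayName"],
              page["title"],
              page["links"]["oneNoteWebUrl"]["href"])

The printed notebook names, titles and links are the pieces you would assemble into the nightly email.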

"Reverse" JSON Status API

I've been wondering how to fetch the PlayStation server status. They display it on this page:
https://status.playstation.com/en-us/
But PlayStation is known to use APIs instead of PHP database fetches. After looking around in the source code of the site, I found that they have a separate file called /data.json.
https://status.playstation.com/en-us/data.json
The content of this file is the same as the index file (for some reason). They use placeholders like {{endDateTitle}} and {{message}}, but I can't find where they're defined, whether they're pulled in using a separate file or just pulled from a database using PHP.
How can I "reverse" this site and see if there's an API I can use to display the status on my site?
Maybe I did not get the question right, but it seems pretty straightforward.
If you're using Firefox, open the Developer Tools, select the Network tab and reload the page.
You can clearly see the requested URL:
https://status.playstation.com/data/statuses/region/SCEA.json
It seems that an empty list as a status means "No problems" (since there are no problems at the moment, I cannot verify this assumption). That's all.
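As a quick sketch (Python with requests; the top-level key is an assumption based on the observation above, so inspect the actual JSON before relying on it):

import requests

url = "https://status.playstation.com/data/statuses/region/SCEA.json"
data = requests.get(url).json()

# Assumed shape: an empty status list means "no problems".
statuses = data.get("status", [])
print("All services up" if not statuses else statuses)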
The double braces {{ }} are used by various HTML templating languages, like Angular, so you'd have to go through the JS code to understand where they get updated.

Using Wikipedia API on custom wikis like Bulbapedia

Does anyone have experience using the Wiki API Sandbox to make REST calls against custom wikis? By custom wiki I mean something like http://bulbapedia.bulbagarden.net/wiki/.
I particularly want to get access to some of the Pokémon content found on Bulbapedia, but I'm not sure where to start or if it's even possible to use REST on custom wikis.
My current solution is to just use a standard Wikipedia page with calls like:
To Get All Pokemon:
https://en.wikipedia.org/w/api.php?action=parse&format=json&page=List_of_Pok%C3%A9mon
To Get Bulbasaur:
https://en.wikipedia.org/w/api.php?action=parse&format=json&page=Bulbasaur
I get some JSON that I can work with, but I would love to be able to explore the content of a Bulbapedia page AND have access to all of Ken Sugimori's artwork.
Yes, MediaWiki comes with the API bundled. Furthermore, since 1.27 it includes a rewritten ApiSandbox that I originally wrote as an extension. So, as Bulbapedia is running 1.27.1, it has the sandbox too.
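So you can point the same kind of parse call at Bulbapedia's own api.php. A minimal sketch in Python with requests (the /w/api.php endpoint path is an assumption; check the wiki's Special:Version page if it differs):

import requests

resp = requests.get(
    "https://bulbapedia.bulbagarden.net/w/api.php",
    params={"action": "parse", "format": "json", "page": "Bulbasaur"},
)
print(resp.json()["parse"]["title"])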

PUT page to OneNote via live-sdk

The Beta API provides two different endpoints for getting the content of a OneNote page from Live, one using the ContentUrl and one the Id. Both work. By analogy with the GET, it looks as if it should be possible to replace a page by using a PUT instead (it's not documented as doing this; I'm guessing):
PUT https://www.onenote.com/api/beta/pages/{pageId}/content
but I get a 404 if I try this, using the same Id I've just successfully used for a GET. Can one replace an entire page?
Andrew
PUT operations aren't yet supported by the OneNote API. We do support PATCH operations on a page to update specific elements. More information can be found in the OneNote PATCH reference documentation and the OneNote PATCH blog post.
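For illustration, a hedged sketch of such a PATCH call in Python with requests (the page id and token are placeholders, and the command shape follows the PATCH reference documentation linked above):

import requests

page_id = "<page-id>"
url = "https://www.onenote.com/api/beta/pages/%s/content" % page_id
commands = [{
    "target": "body",                        # element to modify
    "action": "append",                      # append inside the target
    "content": "<p>Appended via PATCH</p>",  # HTML fragment to insert
}]
resp = requests.patch(url, json=commands,
                      headers={"Authorization": "Bearer <access-token>"})
print(resp.status_code)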
Feel free to add a request for the PUT operation at https://onenote.uservoice.com

How to include the result of an API request in a template?

I'm creating a wiki using MediaWiki for the first time. I would like to automatically include all backlinks of the current page in a template (like a "See also" section). I tried to play with the API, successfully, but I still haven't succeeded in including the useful section of the result in my template.
I have been querying Google and Stack Overflow for days (maybe in the wrong way) but I'm still stuck.
Can somebody help me?
As far as I know, there is no reasonable way to do that. Probably the closest you could get is to write JavaScript code that reacts to the presence of a specific HTML element in the page, makes the API request and then updates the HTML to include the result.
It’s not possible in wikitext to execute any JavaScript or even to use most uncommon HTML. As such, you won’t be able to use the MediaWiki API like that.
There are multiple different options you have to achieve something like this though:
You could use the API by including custom JavaScript code in MediaWiki:Common.js. The code there will be included automatically and can be used to enhance the wiki experience. This obviously requires JavaScript on the client, so it might not be the best option; but at least you could use the API directly. You would have to add something to figure out where to place the results correctly, though.
A better option would be to use an extension that gives you this output. You can either try to find an extension that already provides this functionality, or write your own that uses the internal MediaWiki API (not the JS one) to access that content.
One extension I can personally recommend that does this (and many other things) is DynamicPageList (full disclosure: I’m somewhat affiliated with that project). It allows you to perform complex page selections.
For example, what you are trying to do is find all pages which link to your page. This can easily be done with DPL like this:
{{ #dpl: linksto = {{FULLPAGENAME}} }}
I wrote a blog post recently showing how to call the API to get the job queue size and display it inside a wiki page. You can read about it at Display MediaWiki job queue size inside your wiki. This solution does require the External Data extension, however. The code looks like:
{{#get_web_data: url={{SERVER}}{{SCRIPTPATH}}/api.php?action=query&meta=siteinfo&siprop=statistics&format=json
| format=JSON
| data=jobs=jobs}}
{{#external_value:jobs}}
You could easily swap in a different API call to get other data. For the specific item you're looking for, poke's answer above is probably better.