Find all "This page does not exist" links in MediaWiki

I wrote some articles on my wiki but made links to pages I was going to create later. Right now, when you view such a page, there is a "page does not exist" link.
Is there a way to get a list of all of the "page does not exist" links on every page of the wiki?
I tried broken redirects, but all I get is:
Broken redirects
The following redirects link to non-existent pages:
There are no results for this report.

What you're looking for is called "wanted pages" in MediaWiki. See your Special:WantedPages page. One especially useful feature is that the pages are ordered by the number of incoming links, so the most "desired" pages are at the top.
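If you prefer to pull the same report programmatically, the MediaWiki API exposes it through list=querypage. A minimal sketch in Python (requests), assuming your wiki's api.php lives at the placeholder URL below:

```python
import requests

# Placeholder endpoint; replace with your wiki's actual api.php URL.
API_URL = "https://example.org/w/api.php"

params = {
    "action": "query",
    "list": "querypage",
    "qppage": "Wantedpages",   # the report behind Special:WantedPages
    "qplimit": "50",
    "format": "json",
}

data = requests.get(API_URL, params=params).json()

# Each result carries the missing title and how many pages link to it.
for row in data["query"]["querypage"]["results"]:
    print(f'{row["value"]:>4} links -> {row["title"]}')
```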

Related

How to enable a "Featured Article" on the main page in MediaWiki?

I want to display a random article and a random image once a day on the front page, like on Wikipedia.
I installed the "FeaturedFeeds" extension, checked that it was enabled, and created the "MediaWiki:Ffeed-*-page" page as described in the manual, replacing "*" with my feed name; my page is called "MediaWiki:Ffeed-MainFeed-page". However, I don't understand what to do next. The page is empty.
I understand that I need to create some kind of template and put it on the main page, but I don’t understand what exactly to write in the template.
I don't think there's an easy way to do it. Wikipedia's main page features are prepared manually - there is a set of hand-written featured blurbs, like Wikipedia:Today's featured article/November 1, 2022, and a template on the main page selects the one for today's date. See the links here for more information. FeaturedFeeds just generates an RSS feed from the contents of a given page / set of pages.

Wanted pages on my MediaWiki is not working. How do I fix this?

So I have been working on my wiki for a while now and I wanted to check whether there are any links I have missed creating. I clicked on the Wanted pages page, but it lists no links, which I found surprising. So I made 10 new pages, each with a few paragraphs of Lorem Ipsum text and a link to the same uncreated page. I did this with multiple links so that I would have multiple results. But I get none at all.
The only thing it displays is:
List of non-existing pages with the most links to them, excluding pages which only have redirects linking to them. For a list of non-existent pages that have redirects linking to them, see the list of broken redirects.
The following data is cached and may not be up to date. A maximum of 1,000 results are available in the cache.
There are no results for this report.
So that is a problem. What I would want is for the page to work and show every link that is missing: not just links that are missing in several places, but any missing red link at all.
How would I go about fixing this?
It seems that you need to run maintenance/updateSpecialPages.php: the report you are looking at is served from a cache, and that maintenance script is what refreshes it. Usually a cron job is set up to run it regularly.
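For reference, here is a small sketch of triggering that script from Python rather than straight from cron; the MediaWiki installation path is an assumption, and the optional --only flag should be checked against the documentation for your MediaWiki version:

```python
import subprocess

# Assumed installation path; point this at your own MediaWiki root.
MEDIAWIKI_ROOT = "/var/www/mediawiki"

# Rebuilds the cached special-page reports (Wanted pages among them).
# --only narrows the run to a single report; drop it to refresh them all.
subprocess.run(
    ["php", "maintenance/updateSpecialPages.php", "--only=Wantedpages"],
    cwd=MEDIAWIKI_ROOT,
    check=True,
)
```

In practice you would put the equivalent php command in a cron job so the report stays reasonably fresh.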

Chrome extension webscraper.io - how does pagination work when selecting "next"?

I am trying to scrape tables from a website using the Google Chrome extension webscraper.io. The extension's tutorial documents how to scrape a website with several pages, say "page 1", "page 2" and "page 3", where each page is linked directly from the main page.
On the website I am trying to scrape, however, there is only a "next" button to reach the following page. If I follow the steps in the tutorial and create a link for the "next" page, it only covers pages 1 and 2. Creating a "next" link for every page is not feasible because there are too many of them. How can I get the web scraper to include all pages? Is there a way to loop through pages with this extension?
I am aware of this possible duplicate: pagination Chrome web scraper. However, it was not well received and contains no useful answers.
Following the advanced documentation here, the problem is solved by making the "pagination" Link selector a parent of itself, i.e. listing it among its own parent selectors. The scraper then recursively works through every page and its "next" page (see the sitemap sketch after the quote). In their words,
To extract items from all of the pagination links including the ones that are not visible at the beginning you need to create another Link selector that selects the pagination links. Figure 2 shows how the link selector should be created in the sitemap. When the scraper opens a category link it will extract items that are available in the page. After that it will find the pagination links and also visit those. If the pagination link selector is made a child to itself it will recursively discover all pagination pages.
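To make that concrete, here is a rough sketch of how such a sitemap might look once exported, written out as a Python dict; the ids, CSS selectors and start URL are invented, and the field names reflect my reading of the export format, so compare it against a sitemap you export yourself. The important detail is that "pagination" appears in its own parentSelectors list:

```python
# Sketch of a webscraper.io sitemap (normally exported/imported as JSON).
# Every id, selector and URL below is a placeholder.
sitemap = {
    "_id": "example-sitemap",
    "startUrl": ["https://example.com/listing"],
    "selectors": [
        {
            "id": "pagination",
            "type": "SelectorLink",
            # Listing "pagination" as its own parent is what makes the
            # scraper follow the "next" link recursively through all pages.
            "parentSelectors": ["_root", "pagination"],
            "selector": "a.next",
            "multiple": True,
        },
        {
            "id": "row",
            "type": "SelectorText",
            # Child of both the first page and every paginated page.
            "parentSelectors": ["_root", "pagination"],
            "selector": "table tr td",
            "multiple": True,
        },
    ],
}
```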

Share post on Facebook, strange error

I need to implement a Facebook share button on a website I am currently developing. The theory is simple: it works with a plain link, e.g. https://www.facebook.com/share.php?u=https%3A%2F%2Freddit.com%2Fr%2Fall%2F. This works for me for random external pages and for my home page.
However, it does not work for any page of that website other than the home page (/).
Example:
Page http://www.youmatch.global/approach/
Share link https://www.facebook.com/share.php?u=http%3A%2F%2Fwww.youmatch.global%2Fapproach%2F
The sharing dialog states "Object not found". I have been trying for two days but have no clue what the problem might be.
Any ideas?
You may take a look at the Sharing Debugger to see what's wrong with your shared URL: https://developers.facebook.com/tools/debug/sharing/
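Before reaching for the debugger, a quick sanity check you can run yourself is to fetch the page the way a crawler would and see whether the Open Graph tags the share dialog relies on (og:url, og:title, and so on) are present in the served HTML. A rough sketch, using the URL from the question and a deliberately crude regex:

```python
import re
import requests

# Page from the question; substitute the URL you are trying to share.
url = "http://www.youmatch.global/approach/"

resp = requests.get(url, timeout=10)
print("HTTP status:", resp.status_code)

# Crude scan for <meta property="og:..." content="..."> tags; it assumes
# the property attribute comes before content, which is usually the case.
og_tags = re.findall(
    r'<meta[^>]+property=["\'](og:[^"\']+)["\'][^>]+content=["\']([^"\']*)["\']',
    resp.text,
    flags=re.IGNORECASE,
)
if og_tags:
    for prop, content in og_tags:
        print(f"{prop}: {content}")
else:
    print("No og: meta tags found - the share dialog may have nothing to work with.")
```

If the tags are missing, or the server answers the scraper with an error or a redirect, the Sharing Debugger linked above will typically report the same problem in more detail.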

How to get internal links from the latest revision of a Wikipedia page?

I'm trying to extract internal links from Wikipedia pages. This is the query I'm using:
/w/api.php?action=query&prop=links&format=xml&plnamespace=0&pllimit=max&titles=pageTitle
However, the result does not reflect what's on the wiki page. Take, for example, the Von Mises–Fisher distribution article: there are only about a dozen links visible on the page. However, when I make the query,
/w/api.php?action=query&prop=links&format=xml&plnamespace=0&pllimit=max&titles=Von_Mises%E2%80%93Fisher_distribution
I get back 187 links. I guess the API might have a database of all the links that have ever been added to the page, including all revisions. Is that the case? How can I get the links from only the latest revision?
The database has the correct list of the links in the current version of the article. All the links you get from the API are in fact in the article. However, most of them are hidden in the (twice-collapsed) navigation box at the bottom (scroll to the bottom, click "show" on the blue bar, then click "show" on the additional blue bars you now see).
Note that these links are on the page but not defined in its wikitext - they come from the {{ProbDistributions}} navigation template (and the templates that template in turn includes).
Sadly, there is no good way to list only the links that are directly/explicitly defined on a page, since template substitution happens before the actual parsing of the wiki syntax.
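If you only care about the links typed directly into the page's own wikitext, one workaround (not an official API feature, just a sketch) is to fetch the raw wikitext of the latest revision and pull the [[...]] targets out yourself. The regex below is deliberately simple: it will miss links generated by templates and may pick up files, categories and interwiki links:

```python
import re
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

# Fetch the raw wikitext of the latest revision only.
params = {
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",
    "titles": "Von Mises–Fisher distribution",
    "format": "json",
    "formatversion": "2",
}
data = requests.get(API_URL, params=params).json()
wikitext = data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

# Rough extraction of [[Target]] and [[Target|label]] links.
links = {
    match.group(1).split("|")[0].strip()
    for match in re.finditer(r"\[\[([^\]]+)\]\]", wikitext)
}
for link in sorted(links):
    print(link)
```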