MediaWiki - Automatic two-way links between page sections

I want my MediaWiki install to have two classes of pages. (In the users' eyes - the wiki won't have to know the difference.)
I want some pages to be on topics, and others on sources (name of book, video, etc.)
I want to have a topic page "FAA Licenses" like:
==Medical Certificates==
===3rd Class===
Required for student license, and before student solo flights. {{{link/reference/whatever generally around here to Jeppesen Book#pg27-28}}}
And a source page "Jeppesen Book" like:
==pg27-28==
{{{link to FAA Licenses#3rd Class}}}
These source pages will track the source's (book or video) content. I imagine a source page for a book to have page numbers, and for a video to have start and stop times, or section numbers. (The book or video itself won't be on the source pages.)
So, the source pages will really serve two purposes. First, it will be fairly easy to see which parts of the sources have had notes taken and put into the topic pages. (So non-linear note-taking of sources will be easy -- skipping from source to source on topics, rather than digesting an entire source at once.) Second, it will be easy from a topic page to see where to go back to for a more in-depth review.
There are two issues I'm writing about.
(1) I want the workflow to be the user edits the topic page, putting in links to source pages and sections. I want this one user-addition to automatically make the source page link back to this spot. I want the system to handle the two-way-linking, assuming the user won't be perfect.
(2) I want the user to be able to put links in the topic page to source pages and sections that might not exist yet. I'd need those links to show up as red, to indicate they need to be created. But, still, once created, I want the system to handle the two-way-linking, even if there were multiple red links to the same area. (I could see building up quite a few red links, then having an unorganized "purge" of them by creating the missing pages and sections, and don't want to have to search for all the links to the new areas.) Ideally, I'd love for these source pages to be auto-generated -- so pages and sections were made as links were made to them, and automatically deleted (or at least the backlinks removed) as links were removed to them.
I don't think MediaWiki's "What links here" functionality does the job. I want this to work on a per-section rather than per-page basis. And I don't want the user to have to add a "what links here" tag to each section -- I want it to be automatic.

The extension Semantic MediaWiki will allow you to get bidirectional linking in a semi-automatic fashion. https://www.semantic-mediawiki.org/wiki/Help:Link_Template shows a high-level example.
If you dig deeper into SMW and SemanticForms, you'll find that with SemanticForms you can get a user experience that is close to what you are asking for.
See e.g. http://smw.referata.com/wiki/Discourse_DB and http://www.discoursedb.org/wiki/Main_Page for an application of these principles.
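For a concrete feel of how that could look, here is a minimal sketch using a hypothetical SMW property called "cites source" (the property name and the exact query are my own illustration, not something SMW ships with; how well #section fragments survive inside a page-type property value depends on the SMW version, so you may end up using a plain text property for the section instead).
On the topic page "FAA Licenses", inside the ===3rd Class=== section:
Required for student license, and before student solo flights. [[cites source::Jeppesen Book#pg27-28|Jeppesen Book, pg. 27-28]]
On the source page "Jeppesen Book", under ==pg27-28==:
{{#ask: [[cites source::Jeppesen Book#pg27-28]] | format=ul }}
The #ask query lists every page carrying that annotation, so the backlink appears automatically once the topic page is saved; wrapping the annotation in a template (as the Link Template help page does) or in a SemanticForms form hides the property syntax from editors.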

I don't think there is an easy way to do that. You could write an extension that provides a parser function for your users to enter, save the source page + source section + target page + target section in a database when links are updated, and then use the ParserSectionCreate hook to display backlinks based on that data. Or you could create two types of templates and write a bot that keeps them in sync.
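To make that a bit more tangible, here is a very rough sketch of the parser-function half of such an extension; the 'srcref' magic-word registration, the ParserSectionCreate backlink rendering and the database table are all left out, and every name here is invented for illustration rather than taken from an existing extension.
$wgHooks['ParserFirstCallInit'][] = function ( Parser $parser ) {
	// Lets editors write {{#srcref:Jeppesen Book|pg27-28}} inside a topic section.
	$parser->setFunctionHook( 'srcref', 'TwoWayLinks::renderSrcRef' );
	return true;
};
class TwoWayLinks {
	public static function renderSrcRef( Parser $parser, $targetPage = '', $targetSection = '' ) {
		// Collect every reference made on this page; a LinksUpdate-time hook would
		// later write these rows (source page/section -> target page/section) into
		// a custom table so the target page's sections can show their backlinks.
		$out  = $parser->getOutput();
		$refs = $out->getExtensionData( 'twowaylinks' ) ?: [];
		$refs[] = [ 'page' => $targetPage, 'section' => $targetSection ];
		$out->setExtensionData( 'twowaylinks', $refs );
		// Render an ordinary wiki link (red if the target doesn't exist yet).
		return "[[$targetPage#$targetSection|$targetPage, $targetSection]]";
	}
}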

Related

Sitemap for site with few dynamic HTML files and many possible URLs

I'm nearing the end of my first web development project and I'm looking to build a sitemap for our website as part of Search Engine Optimisation. If I understand correctly, a sitemap, when done properly, is a file that shows a content tree (similar to paths in Windows Explorer) of all the public pages of my website.
For the purpose of my question you're going to need some background information on the site and how it works. The site is about bird migration: a user enters the site on a homepage that holds a search box, he or she can search for a species of bird, and if we have data on it the user can go to a separate page with information on this bird. From there the user can access statistical data about this species. The page will look something like below, filled with content that we get from a database.
The URL will look something like http://domain.com/searchbird.html?bird=Sedge%20Warbler&lang=1 for the informational page, and http://domain.com/statistics.html?bird=Sedge%20Warbler&lang=1 for the statistical page.
Every bird species uses the same base HTML file (searchbird.html) that is filled with data based on the ?bird= parameter. I have about four HTML files in my webroot (let's call them index.html, searchbird.html, statistics.html, and about.html).
So when I go to create a sitemap using some sort of sitemap generation tool, I get a sitemap that contains those 4 .html files, which is great! Yet I'm missing the 500 bird species that users are going to be able to find.
Is there a way for me to include every possible URL in the sitemap automatically, and how would I go about doing such a thing? I've used HTML, CSS and JavaScript in the past, but I'm only a beginner. If an executable tool exists for this that'd be great, but my Google searches haven't been successful yet.
You have to generate the list of URLs for your existing pages.
So dig into your data source (database or whatever you use), find all existing bird species, and generate the two URLs per species.
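For example, here is a minimal PHP sketch of that idea, assuming a MySQL table named birds with a name column (the table, credentials and file names are placeholders for whatever your data source actually is):
<?php
// generate-sitemap.php - re-run whenever the species list changes
$pdo = new PDO( 'mysql:host=localhost;dbname=birds_db', 'user', 'password' );
$xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
$xml .= "<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n";
foreach ( $pdo->query( 'SELECT name FROM birds' ) as $row ) {
    $bird = rawurlencode( $row['name'] );
    // Two URLs per species: the informational page and the statistics page.
    $xml .= "  <url><loc>http://domain.com/searchbird.html?bird=$bird&amp;lang=1</loc></url>\n";
    $xml .= "  <url><loc>http://domain.com/statistics.html?bird=$bird&amp;lang=1</loc></url>\n";
}
$xml .= "</urlset>\n";
file_put_contents( 'sitemap.xml', $xml );
You could run something like this from a cron job and point search engines at the resulting file (for example via a Sitemap: line in robots.txt).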
Directory for users/bots
It would probably be a good idea (for visitors as well as for bots) to output these links on your website, too. Visitors would have two ways to find a species (search for it or browse the directory), and as most bots don’t use search functions, they wouldn’t be able to find the links on your site otherwise (they would have to use your sitemap, which not all bots do, or they would have to hope to find the links from some other external website).
(If you do this, you could also use a sitemap generator service; but it's usually better to generate it yourself.)
URL design
By the way, you might want to consider changing your URL design to a more human-friendly one. Instead of
http://example.com/searchbird.html?bird=Sedge%20Warbler&lang=1
http://example.com/statistics.html?bird=Sedge%20Warbler&lang=1
you could use something like
http://example.com/en/birds/sedge-warbler
http://example.com/en/birds/sedge-warbler/statistics
where en is the language code for "English" (these are standardized, and users have a chance of understanding them, unlike lang=1), and where http://example.com/en/birds could lead to a page listing all species. For other languages, you would of course ideally translate "birds" and "statistics".
Changing the URL design is possible with URL rewriting.
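If the pages are (or can be) served through PHP, one rough sketch of the rewriting side is a small front controller that maps the pretty path back onto the existing parameters; the web server still has to be configured to route all requests to this file, and that configuration, like the file names below, is an assumption here:
<?php
// index.php - single entry point for pretty URLs like /en/birds/sedge-warbler
$path = parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH );
if ( preg_match( '#^/([a-z]{2})/birds/([a-z0-9-]+)(/statistics)?$#', $path, $m ) ) {
    // Map back to the parameters the existing pages already understand.
    $_GET['lang'] = $m[1];
    $_GET['bird'] = ucwords( str_replace( '-', ' ', $m[2] ) );
    require empty( $m[3] ) ? 'searchbird.php' : 'statistics.php';
} else {
    require 'home.php';   // homepage / 404 handling omitted for brevity
}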
You can use a sitemap generator, for example https://www.xml-sitemaps.com/. You only need to enter your index URL; the service will crawl all the links and generate the sitemap automatically.
If you use WordPress, you can use a plugin such as https://wordpress.org/plugins/google-sitemap-generator/.
Hope that helps.

Show different Main Pages based on host name in MediaWiki

I have two domains pointing to the same wiki sharing the same database.
I would like it so that with domainA.com the main page is MainPageA and with domainB.com it is MainPageB.
The only way to change the main page of MediaWiki that I know of is to edit MediaWiki:Mainpage, but that is stored in the MySQL database. Since both wikis share the same database, changing it changes the main page for both.
The reason that the databases are shared is because all articles apply to both wikis, just that the logo of the wiki etc. is different.
Is there some kind of PHP conditional variable I can set to set the Main Page?
You could do this in wikicode, by making your Main Page source look something like this:
{{#switch:{{SERVERNAME}}
|domainA.com={{:Main Page for domainA.com}}
|domainB.com={{:Main Page for domainB.com}}
|#default=<span class=error>Unrecognized domain {{SERVERNAME}}.</span>
}}
or even just:
{{:Main Page for {{SERVERNAME}}}}
For more information, see Help:Magic words at mediawiki.org. (Note that the first version also requires the ParserFunctions extension.)
Ps. There might be some issues with MediaWiki's parser caching that could cause the wrong Main Page to appear. If so, a quick and dirty workaround would be to install the MagicNoCache extension and add __NOCACHE__ to the Main Page.
Pps. A better solution for cache issues might be to make sure that the different sites have separate cache keys, by adding the following line to your LocalSettings.php:
$wgRenderHashAppend .= "!$wgServer";
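On the PHP side (the conditional you asked about), a common pattern is to branch on the requested host near the top of LocalSettings.php - sketched here with placeholder domains, names and paths - which also ensures the $wgServer-based cache key above really differs per site:
// In LocalSettings.php, shared by both domains
$host = isset( $_SERVER['SERVER_NAME'] ) ? $_SERVER['SERVER_NAME'] : 'domainA.com';
if ( $host === 'domainB.com' ) {
	$wgServer   = 'http://domainB.com';
	$wgSitename = 'Wiki B';              // placeholder
	$wgLogo     = '/images/logoB.png';   // placeholder
} else {
	$wgServer   = 'http://domainA.com';
	$wgSitename = 'Wiki A';              // placeholder
	$wgLogo     = '/images/logoA.png';   // placeholder
}
The main page itself is still chosen by the wikitext #switch above; this block only handles the per-domain settings such as the logo, server name and cache key.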

MediaWiki: configuring the entry page, adding a new page

We have a wiki installed in our organization and want to start using it.
I failed to find the answers to the following two basic questions:
How do I configure the entry page to show a list of all existing pages?
How do I create a new page? I only succeeded by typing the URL of a non-existing page; I guess there are nicer methods for this.
Thanks
Gidi
For how to show a list of all pages, look at the DynamicPageList extension. (There's a more advanced third-party version, but it's not needed for such a simple task.)
Creating a new page really is exactly as you said: Type a URL and save some edits. Most beginning editors will edit a link into a page, and then use that link to browse to the page, so that they don't accidentally forget the spelling and lose the page to the Ether. (Of course it would show up in the recently edited and other special pages.)
This is more of a webapps.stackexchange.com question though.

What is the "one-document-per-URL paradigm"?

What does the "one-document-per-URL paradigm" mean with reference to web development?
That if you go to a URI, you get a document, and you always get the same document.
The best way to explain it is to describe how to break it - which is usually achieved with frames or Ajax.
Frames give you a document containing a frameset. You click a link and the page loaded in one of the frames changes. You are viewing "About" instead of "Home", but the URL in the address bar is unchanged, so if you copy the link or bookmark it, you end up at "Home" instead of "About".
You get the same effect when Ajax is overused.
It usually means that under one URL, you should serve only one resource.
Example of right uses: Page with one news article, information about one specific product, etc.
The next step from there would be to allow the user to see the same resource in multiple ways. For example, by visiting example.com/some/url?xml the visitor can get information about the given resource in XML format. If your page is a list of resources, you could offer an ?rss form of that list, etc.
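A tiny PHP sketch of that idea (the ?xml parameter and the data are made up for illustration):
<?php
// product.php - one URL, one resource, more than one representation
$product = [ 'id' => 42, 'name' => 'Example product' ];   // placeholder data
if ( isset( $_GET['xml'] ) ) {
    header( 'Content-Type: application/xml; charset=utf-8' );
    echo '<product><id>' . $product['id'] . '</id><name>' . htmlspecialchars( $product['name'] ) . '</name></product>';
} else {
    header( 'Content-Type: text/html; charset=utf-8' );
    echo '<h1>' . htmlspecialchars( $product['name'] ) . '</h1>';
}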
In contrast to the good uses above, a bad use would be having different things appear under the same URL. For instance, when you have a page to search for some product, you should avoid using POST for searching, because then you would be violating this principle (the URL always leads to the initial search page, not to the result page).
I hope I provided some answer and did not confuse you. :)

How should I handle autolinking in wiki page content?

What I mean by autolinking is the process by which wiki links inlined in page content are generated into either a hyperlink to the page (if it does exist) or a create link (if the page doesn't exist).
With the parser I am using, this is a two step process - first, the page content is parsed and all of the links to wiki pages from the source markup are extracted. Then, I feed an array of the existing pages back to the parser, before the final HTML markup is generated.
What is the best way to handle this process? It seems as if I need to keep a cached list of every single page on the site, rather than having to extract the index of page titles each time. Or is it better to check each link separately to see if it exists? This might result in a lot of database lookups if the list wasn't cached. Would this still be viable for a larger wiki site with thousands of pages?
In my own wiki I check all the links (without caching), but my wiki is only used by a few people internally. You should benchmark stuff like this.
In my own wiki system the caching is pretty simple: when a page is updated, it checks its links to make sure they are valid and applies the correct formatting/location for those that aren't. The cached page is saved as an HTML page in my cache root.
Pages that are marked as 'not created' during the page update are inserted into a database table that holds the page title along with a CSV of the pages that link to it.
When someone creates that page, it initiates a scan through each linking page and re-caches those pages with the correct link and formatting.
If you weren't interested in highlighting non-created pages however you could just have a checker to see if the page is created when you attempt to access it - and if not redirect to the creation page. Then just link to pages as normal in other articles.
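A rough sketch of that bookkeeping, with an invented wanted_pages table and a hypothetical recachePage() helper standing in for whatever re-renders and stores the cached HTML:
<?php
// Called once a previously missing page has just been created.
function onPageCreated( PDO $pdo, $newTitle ) {
    // Look up the row recorded while other pages were being cached.
    $stmt = $pdo->prepare( 'SELECT linking_pages FROM wanted_pages WHERE title = ?' );
    $stmt->execute( [ $newTitle ] );
    $csv = $stmt->fetchColumn();
    if ( $csv === false ) {
        return;   // nothing ever linked to this page
    }
    // Re-cache every page that linked here so its "create" link becomes a normal one.
    foreach ( explode( ',', $csv ) as $linkingPage ) {
        recachePage( trim( $linkingPage ) );   // hypothetical re-render helper
    }
    $pdo->prepare( 'DELETE FROM wanted_pages WHERE title = ?' )->execute( [ $newTitle ] );
}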
I tried to do this once and it was a nightmare! My solution was a nasty loop in a SQL procedure, and I don't recommend it.
One thing that gave me trouble was deciding what link to use on a multi-word phrase. Say you had some text saying "I am using Stack Overflow" and your wiki had 3 pages called "stack", "overflow" and "stack overflow"....which part of your phrase gets linked to where? It will happen!
My idea would be to query the titles like SELECT title FROM articles and simply check whether each wikilink is in that array of strings. If it is, you link to the page; if not, you link to the create page.
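A sketch of that approach in PHP (the articles table, the /wiki/ URLs and the sample content are placeholders):
<?php
$pdo = new PDO( 'mysql:host=localhost;dbname=wiki', 'user', 'password' );
$pageContent = 'See [[Existing Page]] and [[Missing Page|a missing page]].';
// Load every title once and flip it into a set for O(1) lookups.
$titles = array_flip( $pdo->query( 'SELECT title FROM articles' )->fetchAll( PDO::FETCH_COLUMN ) );
// Turn [[Page]] / [[Page|Label]] into a view link or a create link.
$html = preg_replace_callback( '/\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/', function ( $m ) use ( $titles ) {
    $title = trim( $m[1] );
    $label = isset( $m[2] ) ? $m[2] : $title;
    $href  = isset( $titles[$title] )
        ? '/wiki/' . rawurlencode( $title )                  // page exists
        : '/wiki/create?title=' . rawurlencode( $title );    // "red" create link
    return '<a href="' . $href . '">' . htmlspecialchars( $label ) . '</a>';
}, $pageContent );
echo $html;
Even for thousands of pages the title list stays small enough to keep in memory or a cache and invalidate whenever pages are created or deleted.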
In a personal project I made with Sinatra, after I run the content through Markdown, I do a gsub to replace wiki words and other things (like [[Here is my link]] and whatnot) with proper links, checking for each whether the page exists and linking to the create or view page accordingly.
It's not the best, but I didn't build this app with caching or speed in mind; it's a low-resource, simple wiki.
If speed were more important, you could wrap the app in something to cache it. For example, Sinatra can be wrapped with Rack caching.
Based on my experience developing Juli, an offline personal wiki with autolinking, a static HTML generation approach may fix your issue.
As you suspect, it takes a long time to generate an autolinked wiki page. However, when generating static HTML, regenerating the autolinked pages only happens when a wiki page is newly added or deleted (in other words, it doesn't happen when a page is merely updated), and the regeneration can be done in the background, so it usually doesn't matter how long it takes. The user only ever sees the generated static HTML.