Trouble Getting a Locally Hosted Copy of the English Language Wiktionary to include the Translations Sections - mediawiki

I used MWDumper - http://www.mediawiki.org/wiki/Mwdumper - to import the XML dump of the English-language Wiktionary (specifically the file named enwiktionary-20120930-pages-meta-current.xml) into my local server.
I have found that under the Translations section on each English word's page, next to each language name where I should be able to see the foreign-language translation, I instead see Template:Tø, Template:T+, or Template:T-, and I am not sure why this is.
As an experiment, I also used WikiTaxi - http://www.yunqa.de/delphi/doku.php/products/wikitaxi/index - with the exact same XML dump and did not have this problem when viewing the pages in WikiTaxi.exe.
I have been searching through mediawiki.org looking for the answer, but have so far not been successful.

Okay, I found out that MWDumper did the right thing importing the XML dump. All the translations are there. I just had to click the Template:T+, Template:T- and Template:Tø links and create each missing template according to the instructions at http://www.mediawiki.org/wiki/Templates.
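If you want to see every template the import left missing (rather than discovering them page by page), a rough sketch like the one below against your local wiki's api.php should work. The endpoint URL is an assumption; adjust it to your installation.

    import requests

    # Assumed local API endpoint -- adjust to your own installation.
    API_URL = "http://localhost/wiki/api.php"

    def wanted_templates(limit=50):
        """List missing (red-linked) templates via the Wantedtemplates query page."""
        params = {
            "action": "query",
            "list": "querypage",
            "qppage": "Wantedtemplates",
            "qplimit": limit,
            "format": "json",
        }
        resp = requests.get(API_URL, params=params, timeout=30)
        resp.raise_for_status()
        return [r["title"] for r in resp.json()["query"]["querypage"]["results"]]

    if __name__ == "__main__":
        for title in wanted_templates():
            print(title)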

Related

Mediawiki dumpBackup parameters

I fail to understand some options in the dumpBackup.php maintenance script of Mediawiki.
What is the effect of --include-files? In my test wiki, dumpBackup.php --current --include-files and dumpBackup.php --current both contain the pages of the File: namespace and I see no difference.
What is the effect of --uploads? In my test wiki I see that the XML file contains a little more XML, but to me it looks like this is all information that is already there as part of the File: page. What is the use of this flag?
When I add both --include-files and --uploads I get the next surprise. I actually expected the combined effect of both options, but what I get is the file content of the uploaded files and the upload record. Why did I not get the file contents when I used --include-files alone?
When I use only --include-files and --uploads but no --current, I would have expected to get the content of the uploaded files and the upload record (and none of the other pages). However, I get the warning "no valid action specified" and no further output at all.
I am completely confused since I do not understand the logic behind all of this.
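One way to see concretely what each flag adds is to diff the outputs mechanically rather than by eye. The sketch below (file names are placeholders) counts the XML element types in two dumps, so extra elements such as upload records stand out immediately:

    from collections import Counter
    import sys
    import xml.etree.ElementTree as ET

    def element_counts(path):
        """Count XML element types in a MediaWiki dump, ignoring namespaces."""
        counts = Counter()
        for _, elem in ET.iterparse(path, events=("end",)):
            counts[elem.tag.rsplit("}", 1)[-1]] += 1
            elem.clear()  # keep memory usage flat on large dumps
        return counts

    if __name__ == "__main__":
        # e.g. python compare_dumps.py dump_current.xml dump_current_uploads.xml
        left, right = element_counts(sys.argv[1]), element_counts(sys.argv[2])
        for tag in sorted(set(left) | set(right)):
            if left[tag] != right[tag]:
                print(f"{tag}: {left[tag]} vs {right[tag]}")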

MediaWiki: an imported template returns many errors

I've installed MediaWiki and imported an example page from Wikipedia, but the template is not rendered properly: https://wordpress-251650-782015.cloudwaysapps.com/wiki/Cheeta
Any hint on what could be the cause?
You're most likely missing one or more of the templates/Lua modules that this template relies on. If you want to get all the required templates/modules, you can fetch them via https://en.wikipedia.org/wiki/Special:Export by entering the template name and ticking the box labelled "Include templates", and then import the generated file via http://wordpress-251650-782015.cloudwaysapps.com/wiki/Speciale:Importa. However, in most cases, unless you desperately want the exact look and feel, it's easier to write your own template, because Wikipedia templates get enormously complex.
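If you would rather script the export step, something along these lines should produce the same import-ready XML as the form. This is only a sketch: the parameter names mirror the form fields, and the page title used here is just an example.

    import requests

    EXPORT_URL = "https://en.wikipedia.org/wiki/Special:Export"

    def export_with_templates(page, out_path):
        """Download a page plus its transcluded templates as import-ready XML."""
        params = {
            "pages": page,        # one title per line if exporting several pages
            "templates": "1",     # same as ticking "Include templates"
            "curonly": "1",       # latest revision only
        }
        resp = requests.get(EXPORT_URL, params=params, timeout=60)
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(resp.content)

    if __name__ == "__main__":
        export_with_templates("Cheetah", "cheetah_with_templates.xml")

The resulting file can then be loaded through Special:Import on your own wiki (or with the importDump.php maintenance script).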

DNN database search and replace tool

I have a DNN (9.3.x) website with CKEditor, 2sxc, etc. installed.
Now old URLs need to be changed into new URLs because the domain name has changed. Does anyone know a tool for searching and replacing URLs in a DNN database?
I tried the "DNN Search and Replace Tool" by Evotiva, but it only goes through the native DNN database tables, leaving 2sxc and other plugin/module tables untouched.
Besides that, the 2sxc database tables contain data in JSON format that also includes old URLs.
I'm pretty sure that the Evotiva tool can be configured to search and replace in ANY table in the DNN database.
"Easy configuration of the search targets (table/column pairs. Just point and click to add/remove items. The 'Available Targets' can be sorted, filtered, and by default all 'textual' columns of 250 characters or more are included as possible targets."
It's still a text search.
As a comment, you should be trying to use relative URLs and let DNN handle the domain-name part.
I believe the Engage F3 module will search Text/HTML modules for replacement strings, but it's open-source, so you could potentially extend it to inspect additional tables.
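If you first want to know which tables and columns actually still contain the old URL before configuring any replace tool, a read-only audit sketch like the one below can help. It assumes a SQL Server backend and the pyodbc package; the connection string and old domain are placeholders.

    import pyodbc

    # Placeholders -- adjust to your own server, database and old domain.
    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=localhost;DATABASE=DNN;Trusted_Connection=yes")
    OLD_URL = "www.old-domain.example"

    # Find every textual column, then count rows still containing the old URL.
    COLUMNS_SQL = """
        SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE DATA_TYPE IN ('varchar', 'nvarchar', 'text', 'ntext')
    """

    with pyodbc.connect(CONN_STR) as conn:
        columns = conn.cursor().execute(COLUMNS_SQL).fetchall()
        for schema, table, column in columns:
            count_sql = (f"SELECT COUNT(*) FROM [{schema}].[{table}] "
                         f"WHERE [{column}] LIKE ?")
            try:
                hits = conn.cursor().execute(count_sql, f"%{OLD_URL}%").fetchone()[0]
            except pyodbc.Error:
                continue  # skip columns that cannot be searched with LIKE
            if hits:
                print(f"{schema}.{table}.{column}: {hits} row(s)")

This only reports candidates (including 2sxc's JSON columns); the actual replacement is still safest through a tool, or through hand-written UPDATE statements run against a backed-up database.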

How to download the wikipedia articles that are listed in PetScan tool?

I shortlisted a set of Wikipedia articles using the PetScan tool: https://petscan.wmflabs.org/
I used the "Diseases & disorders" category from Wikipedia with a depth value of 2. Approximately 10,000 articles were listed in the results.
My question is: how do I download those articles to my computer? I am new to these things, so I need help.
I think I figured it out with the help of the comment from Tgr above. I navigated to the output options and found several formats for exporting the list. They are as follows:
HTML
JSON
CSV
Pagepile
Simply choose the option and get the required output.
Just navigate to the Output tab, choose the format you want to export to, and then click Do it. After that, the data will be downloaded to your PC.
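Once you have the list of titles from whichever export format you chose, the articles themselves still have to be fetched. A minimal sketch, assuming the titles are saved one per line in a file called titles.txt and using the standard MediaWiki API to pull each article's current wikitext:

    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"

    def fetch_wikitext(title):
        """Fetch the current wikitext of one article via the MediaWiki API."""
        params = {
            "action": "query",
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "titles": title,
            "format": "json",
            "formatversion": "2",
        }
        resp = requests.get(API_URL, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()["query"]["pages"][0]
        return page["revisions"][0]["slots"]["main"]["content"]

    if __name__ == "__main__":
        # titles.txt: one article title per line, taken from the PetScan export
        with open("titles.txt", encoding="utf-8") as f:
            titles = [line.strip() for line in f if line.strip()]
        for title in titles:
            with open(title.replace("/", "_") + ".wiki", "w", encoding="utf-8") as out:
                out.write(fetch_wikitext(title))

For roughly 10,000 articles, be considerate of the API: add a short delay between requests or batch several titles per query.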

How to search a word in a html file without any java coding?

I'm doing a project in Java that creates a user manual for a piece of software (HTML files that are linked together, like the Windows "Help and Support Center"). Once the user manual is created, I have only HTML files remaining. Now I want to find the HTML files that contain a specified keyword (like a search engine). How can I do this without Java code?
grep, find, a Python script, or open any file with a text editor and try Edit -> Search
(on Windows, use the built-in search-in-files)
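A minimal sketch of the "Python script" route mentioned above, assuming the manual is a local directory of .html files and you just want to know which files (and lines) contain a keyword:

    import sys
    from pathlib import Path

    def search_html(root, keyword):
        """Print every .html file (and line) under root that contains keyword."""
        keyword = keyword.lower()
        for path in Path(root).rglob("*.html"):
            text = path.read_text(encoding="utf-8", errors="ignore")
            for lineno, line in enumerate(text.splitlines(), start=1):
                if keyword in line.lower():
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        # e.g. python search_manual.py ./manual keyword
        search_html(sys.argv[1], sys.argv[2])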
If all of your other code is written in Java, then (without knowing your use case) it would be sensible to use Java for searching as well. You could of course use command-line programs such as grep or find, or the built-in search functionality of a web browser, but if the search should be part of a Java application anyway, why not go with Java and, e.g., Lucene?
If this 'help' is going to be online, then you can embed Google search in it (limiting the results to your site with the site: operator). Alternatively, if you're hosting the pages yourself, you can use htdig for indexing the pages.
However, if it's going to be offline, you'll be better off generating a static index page with links to topics. To create a more help-system-like user experience, you can hide the contents of the index in invisible HTML div tags and add JavaScript that takes the searched phrase as input and unhides the matching words with their links.
Maybe I'm missing something, but have you looked at JavaHelp? It has indexing and searching built in, and can be used online or offline.