Possible to build Jekyll site into two dist folders based on YAML front matter? - jekyll

So I have this site, let's call it example.com, that has two languages: English and Swedish. Up to this point we've served the Swedish version at example.com/sv/{pages-in-swedish} and the English pages and posts at example.com/{pages-in-english}.
Now we’ve decided to move the Swedish version to example.se and serve only the English pages on the .com domain.
Preferably I don't want two parallel repos with duplicate sets of JS and CSS (since they're identical). Instead, based on a YAML front matter tag (language: en vs. language: sv), I'd like the build to spit out two different dist folders (for example dist/ and dist-sv/).
Is this possible?

There is still no standard for internationalization in Jekyll, but I can point you to this useful discussion on Jekyll Talk about how to manage i18n.
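For what it's worth, one workaround (not a standard Jekyll feature) is to run the build twice from the same source, once per language, excluding the pages whose front matter declares the other language. Below is a rough Python sketch of that idea; it is just a wrapper around jekyll build, the dist folder names mirror the question, and PyYAML is assumed to be installed:

```python
#!/usr/bin/env python3
"""Sketch: build one Jekyll source into two dist folders, split by the
`language` front matter key. Scans source files, writes a generated
config that excludes the other language's pages, then runs jekyll build
once per language."""
import subprocess
from pathlib import Path

import yaml  # PyYAML, assumed installed

SOURCE = Path(".")
TARGETS = {"en": "dist", "sv": "dist-sv"}
CONTENT_SUFFIXES = {".md", ".markdown", ".html"}
GENERATED = [f"_config.{lang}.generated.yml" for lang in TARGETS]


def front_matter_language(path):
    """Return the `language` key from a file's YAML front matter, if any."""
    try:
        text = path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError):
        return None
    if not text.startswith("---"):
        return None
    parts = text.split("---", 2)
    if len(parts) < 3:
        return None
    try:
        data = yaml.safe_load(parts[1]) or {}
    except yaml.YAMLError:
        return None
    return data.get("language")


def build(language, destination):
    # Exclude every page/post whose front matter declares a different language.
    # Note: Jekyll overrides (rather than merges) the exclude list when several
    # config files are passed, so excludes from _config.yml would need repeating.
    excludes = list(GENERATED)
    for path in SOURCE.rglob("*"):
        if path.suffix not in CONTENT_SUFFIXES:
            continue
        if any(part in ("_site", "dist", "dist-sv") for part in path.parts):
            continue
        lang = front_matter_language(path)
        if lang and lang != language:
            excludes.append(str(path))

    extra_config = SOURCE / f"_config.{language}.generated.yml"
    extra_config.write_text(yaml.safe_dump({"exclude": excludes}), encoding="utf-8")
    subprocess.run(
        ["bundle", "exec", "jekyll", "build",
         "--config", f"_config.yml,{extra_config}",
         "--destination", destination],
        check=True,
    )


if __name__ == "__main__":
    for lang, dest in TARGETS.items():
        build(lang, dest)
```

Shared JS and CSS stay in the single repo and are simply copied into both destinations.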

Related

Multilingual MediaWiki installation using a wiki family vs. a single multilingual MediaWiki extension

I am trying to set up a multilingual encyclopedia (4 languages), where I can have both:
Articles that are translations of articles in other languages, and
Articles that exist in a specific language only.
As the wiki grows, I understand that the content of each language can vary.
However, I want to be able to work as fluently as possible between languages.
I checked this article, dating back to 2012, which has a comment from Tgr that basically condemns both solutions.
I also checked this MediaWiki help article, but it gives no explanation of the differences between the two systems.
My questions are:
1. What is the preferred option now for a multilingual wiki environment that gives the most capabilities and the best user experience, given that some of the languages I want are right-to-left and some are left-to-right?
I want internationalized category names, I need to link categories to their corresponding translations, and I want users to see the interface in the language the article is written in.
Basically, it should be as if I had 4 encyclopedias, but with the articles linked to their corresponding translations.
2. Which system would give me a main page per language, so that English readers see an English homepage, French readers see a French homepage, etc.?
EDIT:
I have a dedicated server, so the limitations of shared hosting don't apply.
Thank you very much.
The Translate extension is meant for maintaining identical translations and tracking up-to-date status while other solutions (interwiki links, Wikibase, homegrown language templates) typically just link equivalent pages together. Translate is useful for things like documentation, but comes with lots of drawbacks (for example, WYSIWYG editing becomes pretty much impossible and even source editing requires very arcane syntax). It's best used for content which is created once and then almost never changes.
You cannot get internationalized category names in a single wiki as far as I know. (Maybe if you wait a year or so... there is ongoing work to fix that, by more powerful Wikibase integration.) Large multi-language wikis like Wikimedia Commons just do that manually (create a separate category page for each category in each language).

MediaWiki Special:Export

I've just set up a MediaWiki server. I wanted to export data from Wikipedia, but it doesn't allow a pagelink_depth higher than 0 by default. It seems that you can only change the maximum pagelink_depth by setting up your own MediaWiki and adjusting $wgExportMaxLinkDepth. Now I've done all that, but obviously my own MediaWiki has no content. So I was wondering if there was a way to bulk copy all of Wikipedia onto my own server. From the information I've read, this seems doable only about 100 pages at a time. If that's the case, there'd be absolutely no purpose to Special:Export in general, as you'd need to know exactly which pages you want to import prior to doing the export, which defeats the purpose altogether. Any help would be much appreciated.
Special:Export isn't meant for a complete export of a wiki, especially not through the web interface and with so many pages in the database. Special:Export should be used when you want to export a known page (or a small number of pages) with all its content in order to import it into another wiki, e.g. to export a template from one wiki and import it into another. So the Special:Export special page has a valid purpose; you're just trying to use it for a use case it wasn't developed for ;)
If you want to export every page of a MediaWiki wiki, you should use the dumpBackup.php maintenance script (runnable from the command line) or any other backup script in the maintenance folder. That will ensure you get what you want.
In the case of Wikipedia you can't run these scripts yourself (I mention them for general reference only), but the Wikimedia Foundation provides database dumps of the Wikimedia wikis, including Wikipedia.
"So I was wondering if there was a way to bulk copy all of wikipedia into my own server": I would recommend against this, simply because of the sheer size of the data and the vast number of open links (or "redlinks", or "bad links") you would be adding if you didn't actually copy it all in. A better approach is to follow all of Wikipedia's conventions about page NAMING, down to the punctuation mark, and then write a script that checks, say, once a night whether you have linked to something that is already defined in Wikipedia, imports ONLY THAT PAGE, and adds a link up top to the EXACT VERSION OF IT that was imported. That way you only bring in what you actually reference, but your database can integrate with Wikipedia's.
This will also come in immensely handy if you have to support multiple languages, like Spanish or French, since Wikipedia has links to 'the same article in another language', thus translating at least those concepts for you.
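A rough sketch of the nightly check described above, assuming a local MediaWiki you have shell access to and the Python requests library; the local API URL, the Wikipedia language edition and the install path are placeholders. It finds red links via the Wantedpages query page, pulls each page's current revision from Special:Export, and feeds it to the importDump.php maintenance script:

```python
#!/usr/bin/env python3
"""Nightly 'import only what you reference' sketch for a local MediaWiki."""
import subprocess
import tempfile
from pathlib import Path

import requests

LOCAL_API = "https://wiki.example.org/w/api.php"            # assumed local endpoint
WP_EXPORT = "https://en.wikipedia.org/wiki/Special:Export/{title}"
MEDIAWIKI_DIR = Path("/var/www/mediawiki")                  # assumed install path


def wanted_pages(limit=50):
    """Titles that are linked locally but do not exist yet (red links)."""
    r = requests.get(LOCAL_API, params={
        "action": "query", "list": "querypage", "qppage": "Wantedpages",
        "qplimit": limit, "format": "json",
    })
    r.raise_for_status()
    return [row["title"] for row in r.json()["query"]["querypage"]["results"]]


def fetch_export_xml(title):
    """Download the current revision of one Wikipedia page as export XML."""
    r = requests.get(WP_EXPORT.format(title=title.replace(" ", "_")))
    r.raise_for_status()
    return r.text


def import_page(xml_text):
    """Feed the export XML to MediaWiki's importDump.php maintenance script."""
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(xml_text)
        dump = f.name
    subprocess.run(
        ["php", str(MEDIAWIKI_DIR / "maintenance" / "importDump.php"), dump],
        check=True,
    )


if __name__ == "__main__":
    for title in wanted_pages():
        print(f"importing {title} from Wikipedia")
        import_page(fetch_export_xml(title))
```

Adding the "link up top to the exact imported version" and handling missing templates on the imported pages is left out of the sketch.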

Import a local ontology into MediaWiki

I just set up a MediaWiki project and I have an ontology.owl file I want to upload to the wiki. As I understand it, the classes in my ontology and their sub-classes will map onto categories and sub-categories respectively in my wiki, and individuals will become pages. However, the whole concept is still a bit confusing. Can anyone give me clear insight into what I am doing, if I am making sense? I added the Semantic MediaWiki extension and the RDFIO extension, so I now have a SPARQL endpoint (not sure what exactly this means, though), which should let me load my RDF, I think. But my question is how to do this, since I have the ontology locally on my computer. I developed it with the Protégé software. Any help will be appreciated.
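A minimal sketch of the class-to-category and individual-to-page mapping described in the question, using Python's rdflib rather than RDFIO's own import tools, so it is purely an illustration of the mapping: it reads the local ontology.owl exported from Protégé and prints wiki-text stubs that could then be created through the MediaWiki API or pywikibot:

```python
#!/usr/bin/env python3
"""Print wiki-text stubs for OWL classes (categories) and individuals (pages)."""
from rdflib import Graph, URIRef
from rdflib.namespace import OWL, RDF, RDFS


def local_name(uri):
    """Use the fragment or last path segment of a URI as a wiki page title."""
    text = str(uri)
    return text.rsplit("#", 1)[-1].rsplit("/", 1)[-1].replace("_", " ")


g = Graph()
g.parse("ontology.owl", format="xml")  # Protégé usually saves RDF/XML

# Classes become category pages; rdfs:subClassOf becomes a parent category link.
for cls in g.subjects(RDF.type, OWL.Class):
    if not isinstance(cls, URIRef):
        continue  # skip anonymous classes and restrictions
    print(f"== Category:{local_name(cls)} ==")
    for parent in g.objects(cls, RDFS.subClassOf):
        if isinstance(parent, URIRef):
            print(f"[[Category:{local_name(parent)}]]")
    print()

# Named individuals become ordinary pages placed in the categories of their types.
for ind in g.subjects(RDF.type, OWL.NamedIndividual):
    types = [t for t in g.objects(ind, RDF.type)
             if isinstance(t, URIRef) and t != OWL.NamedIndividual]
    print(f"== {local_name(ind)} ==")
    for t in types:
        print(f"[[Category:{local_name(t)}]]")
    print()
```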

Sphinx pages vs wiki pages

If you just want to create HTML documentation, what are the advantages of using Sphinx as a documentation generator compared to using MediaWiki?
It really all depends on the workflow that you and, if applicable, the people working with you are used to. More than advantages, it is a matter of taste and habit. For me the perceived advantages of sphinx-doc are:
Easier editing of the files in an external editor
Version control of documentation using a VCS (svn, fossil or git)
I'm used to markdown syntax
But I think that for each of these points someone could argue an advantage for MediaWiki, hence I think it is a matter of taste and preference.
However, if you are collaborating with non-technical people, MediaWiki or fossil-wiki will be easier for them to add to the project, review pages and make corrections. By the same token, if your documentation needs to be in sync with other documents (code or data), then sphinx-doc in the same version control system would have my preference.
I mention fossil-wiki because it offers a combination of both options and for some projects gives the best of both worlds. You can have wiki-pages that people can edit in the traditional sense (through an online front-end) and .wiki pages that are edited as if they are source and cannot be changed through the front end. Again, whether that makes sense to you depends on your use case and writing style and habits.

GEDCOM to HTML and RDF

I was wondering if anyone knew of an application that would take a GEDCOM genealogy file and convert it to HTML format for viewing and publishing on the web. I'd like to have separate HTML files for each individual and perhaps additional files for other content as well. I know there are some tools out there, but I was wondering if anyone had used any of them and could advise. I'm not sure what form to look for in such applications: they could be Python or PHP files that one can edit, or even JavaScript (maybe), or just executable files.
The next issue might be appropriate for a topic in itself: export of GEDCOM to RDF. My interest here would be to align the information with specific vocabularies, such as BIO or REL, which are both extensions of FOAF.
Thanks,
Bruce
Like Rob Kam said, Ged2Html was the most popular such program for a long time.
GRAMPS can also create static HTML sites and has the advantage of being free software and having a native XML format which you could easily modify to fit your needs.
Several years ago, I created a simple Java program to turn GEDCOM into XML. I then used XSLT to generate HTML and RDF. The HTML I generate is pretty rudimentary, so it would probably be better to look elsewhere for that, but the RDF might be useful to you:
http://jay.askren.net/Projects/SemWeb/
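For illustration, here is a rough Python equivalent of that GEDCOM-to-RDF step (not the Java/XSLT tool linked above): a minimal line parser for INDI and FAM records that emits FOAF plus the RELATIONSHIP vocabulary with rdflib. Only NAME, HUSB, WIFE and CHIL are handled, the base URI and input file name are made up, and a real GEDCOM file has far more structure:

```python
#!/usr/bin/env python3
"""Minimal GEDCOM-to-RDF sketch: individuals become foaf:Person resources,
family records become rel:childOf links."""
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

REL = Namespace("http://purl.org/vocab/relationship/")
BASE = Namespace("http://example.org/genealogy/")   # assumed base URI


def parse_gedcom(path):
    """Return ({individual id: name}, [(child id, parent id), ...])."""
    names, links = {}, []
    record_id, record_tag, parents, children = None, None, [], []

    def flush():
        links.extend((c, p) for c in children for p in parents)

    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.strip().split(" ", 2)
            if len(parts) < 2:
                continue
            level, tag = parts[0], parts[1]
            value = parts[2] if len(parts) > 2 else ""
            if level == "0":
                flush()
                parents, children = [], []
                record_id, record_tag = tag.strip("@"), value
            elif record_tag == "INDI" and tag == "NAME":
                names[record_id] = value.replace("/", "").strip()
            elif record_tag == "FAM" and tag in ("HUSB", "WIFE"):
                parents.append(value.strip("@"))
            elif record_tag == "FAM" and tag == "CHIL":
                children.append(value.strip("@"))
        flush()
    return names, links


def to_rdf(names, links):
    g = Graph()
    g.bind("foaf", FOAF)
    g.bind("rel", REL)
    for pid, name in names.items():
        person = BASE[pid]
        g.add((person, RDF.type, FOAF.Person))
        g.add((person, FOAF.name, Literal(name)))
    for child, parent in links:
        g.add((BASE[child], REL.childOf, BASE[parent]))
    return g


if __name__ == "__main__":
    names, links = parse_gedcom("family.ged")        # assumed input file
    print(to_rdf(names, links).serialize(format="turtle"))
```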
There are a number of these, all listed at http://www.cyndislist.com/gedcom/gedcom-to-web-page-conversion/
Ged2html used to be the most popular and most versatile, but is now no longer being developed. It's an executable, with output customisable through its own scripting syntax.
Family Historian (http://www.family-historian.co.uk) will create exactly what you are looking for, e.g. one file per person, using the built-in Web Site creator, as will a couple of the other major genealogy packages. I have not seen anything for the RDF part of your question.
I have since tried to produce a genealogy application using Semantic MediaWiki (MediaWiki is the software behind Wikipedia, and Semantic MediaWiki adds various extensions related to the Semantic Web). I thought it was very easy to use, with the forms and the ability to upload a GEDCOM file, but some feedback from people into genealogy said that it appeared too technical and didn't seem to offer anything new.
So now the issue is whether to stay with MediaWiki and make it more user friendly, or to create an entirely new application that allows adding and updating data in a triple store as well as displaying it. I'm not sure how to generate a graphical family tree view of the data, like on sites such as ancestry.com, where one can click on a box to see and update details about the person, or click a right or left arrow next to a box to navigate the tree. The data would come from SPARQL queries sent to the data set/triple store, both when displaying the initial view and when navigating the tree, where an Ajax call is needed to fetch more data.
Bruce
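A minimal sketch of the per-box data fetch described above, assuming the GEDCOM has been mapped to FOAF and the RELATIONSHIP vocabulary and that the triple store exposes a standard SPARQL 1.1 Protocol endpoint (the endpoint URL and person URI are placeholders). Each click on a box would trigger a request like this, and the JSON bindings would feed the Ajax update of the tree:

```python
#!/usr/bin/env python3
"""Fetch one person's name, parents and children from a SPARQL endpoint."""
import requests

ENDPOINT = "http://localhost:3030/genealogy/sparql"   # assumed SPARQL endpoint

QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX rel:  <http://purl.org/vocab/relationship/>

SELECT ?name ?parent ?parentName ?child ?childName WHERE {
  <%(person)s> foaf:name ?name .
  OPTIONAL { <%(person)s> rel:childOf ?parent .
             ?parent foaf:name ?parentName . }
  OPTIONAL { ?child rel:childOf <%(person)s> .
             ?child foaf:name ?childName . }
}
"""


def person_details(person_uri):
    """Run the query over the SPARQL 1.1 Protocol and return the JSON bindings."""
    r = requests.post(
        ENDPOINT,
        data={"query": QUERY % {"person": person_uri}},
        headers={"Accept": "application/sparql-results+json"},
    )
    r.raise_for_status()
    return r.json()["results"]["bindings"]


if __name__ == "__main__":
    for row in person_details("http://example.org/genealogy/I1"):
        print({key: binding["value"] for key, binding in row.items()})
```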