How can I have articles in Wintersmith not in their own subdirectory?

In Wintersmith, the default blog template generates posts from contents/articles/<article>/index.md. This is fine as it allows associated files like images to be included with the article. But in practice, most "blog posts" are just text content associated with a template. Having to create subdirectories is a minor annoyance, and when editing multiple entries in a tabbed editor it's a nuisance that every file is named index.md.
The site generator will spit out articles/basic-post.html files, but it does not include these in the generated index or archive pages. How can I get them included without breaking anything?
This may or may not be a simple problem, but I'm new to Wintersmith and haven't seen how to do this. I'm not sure it's as trivial as editing the default paginator (and I'm not that used to CoffeeScript; maybe it's time to address that :)
In paginator.coffee:
getArticles = (contents) ->
  # helper that returns a list of articles found in *contents*
  # note that each article is assumed to have its own directory in the articles directory
  articles = contents[options.articles]._.directories.map (item) -> item.index
  articles.sort (a, b) -> b.date - a.date
  return articles
This looks like the place, but it seems like a bad idea to edit a plugin directly if I want future updates to keep working.
Wintersmith is pretty awesome btw.

You were right: the answer lies in the paginator plugin.
Wintersmith constantly watches the contents folder, building a ContentTree array.
That object array contains a descriptor for each file and folder within contents.
getArticles simply filters these possible candidates, and you just need to enhance it to pick up plain markdown files in the contents/articles folder.
getArticles = (contents) ->
  # helper that returns a list of articles found in *contents*
  # include articles with dedicated directory
  articles = contents[options.articles]._.directories.map (item) -> item.index
  # add articles that are in the *contents/articles* folder
  articles = articles.concat contents[options.articles]._.pages
  articles.sort (a, b) -> b.date - a.date
  return articles

Related

Generate HTML pages locally, from a template and just change the title at the top of the page

I have a list of US cities and a template HTML page. Can I write a script that takes a city name, generates a folder with the same name as the city, and places in that folder the template HTML page with the title in the head section set to the city name?
Or would it be more efficient to use that template, load it each time the page is called, and simply pass the city name etc. into that template?
PHP, or any MVC platform, works beautifully for just this task and avoids the need to create multiple directories/files of duplicate code.
If you absolutely, positively, need to generate all those repetitive files, you could easily write a program in C (or your preferred language) to iterate through the list of cities and scrape code from the template and paste it into a new file with a unique name for that city.
pseudo:
// open the list of cities
cities_file = openfile("citylist")
for (city in cities_file)
    // open template file
    temp_file = openfile("template")
    // open/create new city file
    new_city_file = openfile(city.name + "-file")
    // copy the template line by line
    for (x in lines of temp_file)
        // replace substring where the "title" element appears
        y = x.replace("<title></title>", "<title>" + city.name + "</title>")
        // write line to city file w/updated text
        writeline(new_city_file, y)
    loop
    // close files
    closefile(new_city_file)
    closefile(temp_file)
loop
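If you'd rather have something directly runnable, here is a rough Python equivalent of the pseudocode above ("citylist" and "template.html" are placeholder file names; adjust to whatever you actually use):
import os

# read the list of cities, one per line
with open("citylist") as f:
    cities = [line.strip() for line in f if line.strip()]

# read the template once
with open("template.html") as f:
    template = f.read()

for city in cities:
    os.makedirs(city, exist_ok=True)                        # folder named after the city
    page = template.replace("<title></title>",
                            "<title>" + city + "</title>")  # set the page title
    with open(os.path.join(city, "index.html"), "w") as out:
        out.write(page)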
Of course, you can manipulate the files and directories any way you choose, it just depends on the environment you're working with.
For the sake of simplicity and efficiency, however, I would strongly recommend using the template as it's intended and simply populating certain items when it's rendered rather than creating multiple repetitive files. If, for example, you wish to change the copyright date at the bottom of each page, you would then need a script to update each of those various files instead of just updating the one template file.

Copying fits-file data and/or header into a new fits-file

A similar question was asked before, but it was asked in an ambiguous way and used different code.
My problem: I want to make an exact copy of a .fits-file header into a new file. (I need to process a fits file in such a way that I change the data, keep the header the same, and save the result in a new file.) Here is a short code example, just demonstrating the tools I use and the discrepancy I arrive at:
data_old, header_old = fits.getdata("input_file.fits", header=True)
fits.writeto('output_file.fits', data_old, header_old, overwrite=True)
I would now expect the files to be exact copies (headers and data of both being the same). But if I check for differences, e.g. in this way:
fits.printdiff("input_file.fits", "output_file.fits")
I see that the two files are not exact copies of each other. The report says:
...
Files contain different numbers of HDUs:
a: 3
b: 2
Primary HDU:
Headers contain differences:
Headers have different number of cards:
a: 54
b: 4
...
Extension HDU 1:
Headers contain differences:
Keyword GCOUNT has different comments:
...
Why is there no exact copy? How can I make an exact copy of a header (and/or the data)? Am I forgetting a keyword? Is there an alternative simple way of copy-pasting a fits-file header?
If you just want to update the data array in an existing file while preserving the rest of the structure, have you tried the update function?
The only issue with that is it doesn't appear to have an option to write to a new file rather than update the existing file (maybe it should have this option). However, you can still use it by first copying the existing file, and then updating the copy.
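For example, something like this (a sketch only; the processing step is a placeholder, and which extension you update depends on your file):
import shutil
from astropy.io import fits

# keep the original untouched and work on a copy
shutil.copyfile("input_file.fits", "output_file.fits")

# whatever processing you need; here just a placeholder operation
new_data = fits.getdata("input_file.fits", ext=1) * 2

# replace only the data of extension HDU 1 in the copy; all other HDUs
# and every header card are carried over unchanged
fits.update("output_file.fits", new_data, ext=1)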
Alternatively, you can do things more directly using the object-oriented API. Something like:
with fits.open(filename) as hdu_list:
    hdu = hdu_list[<name or index of the HDU to update>]
    hdu.data = <new ndarray>
    # or hdu.data[<some index>] = <some value> i.e. just directly modify the existing array
    hdu.writeto('updated.fits')  # to write just that HDU to a new file, or
    # hdu_list.writeto('updated.fits')  # to write all HDUs, including the updated one, to a new file
There's nothing not "pythonic" about this :)

How do I protect only part of a Mediawiki article from editing?

I'm building a Mediawiki site which will include a few thousand Bot-generated articles. I want users to be able to edit lower sections of each article, but not edit the bot-generated sections.
I found an abandoned extension called ProtectSection which did this, but I don't have the skills to update it to work with the current Mediawiki release.
I'm considering making the Bot-generated articles protected, and then transcluding them into user-editable articles. If I do that, can I hide the original Bot-generated articles from search engines, and from being navigable within the wiki?
Also, I'd like users to be able to reference prior versions of the bot-generated articles, as their text will be updated from time to time by the bot. If I transclude and hide the bot-generated articles, I'm assuming their history then will be inaccessible. This wouldn't be a problem if I could keep the bot-generated articles available, with user-editable sections in them.
I have bad news: it's really difficult to protect only part of an article. The current MediaWiki architecture doesn't support it out of the box.
What I suggest you do is create a custom namespace and place all of the bot's articles there.
// Define constants for my additional namespaces.
define("NS_FOO", 3000); // This MUST be even.
define("NS_FOO_TALK", 3001); // This MUST be the following odd integer.
// Add namespaces.
$wgExtraNamespaces[NS_FOO] = "Foo";
$wgExtraNamespaces[NS_FOO_TALK] = "Foo_talk"; // Note underscores in the namespace name.
Restrict ordinary users from editing this custom namespace (here is some info), but allow users to view the history of these pages.
# Only allow autoconfirmed users to edit Project namespace
$wgNamespaceProtection[NS_PROJECT] = array( 'autoconfirmed' );
# Don't allow anyone to edit non-talk pages until they've confirmed their
# e-mail address (assuming we have no custom namespaces and allow edits
# from non-emailconfirmed users to start with)
# Note for 1.13: emailconfirmed group and right were removed from default
# setup, if you want to use it, you'll have to re-enable it manually
$wgNamespaceProtection[NS_MAIN] = $wgNamespaceProtection[NS_USER] =
$wgNamespaceProtection[NS_PROJECT] = $wgNamespaceProtection[NS_IMAGE] =
$wgNamespaceProtection[NS_TEMPLATE] = $wgNamespaceProtection[NS_HELP] =
$wgNamespaceProtection[NS_CATEGORY] = array( 'emailconfirmed' );
# Only allow sysops to edit "Policy" namespace
$wgGroupPermissions['sysop']['editpolicy'] = true;
$wgNamespaceProtection[NS_POLICY] = array( 'editpolicy' );
The last step you already know: use transclusion.

Japanese Wikipedia categories DAG defined in "categorylinks" table has a cycle

I have found a reference cycle in the DAG defined by the "categorylinks" and "page" tables inside the Japanese Wikipedia database.
Is this a bug in the data?
Page ids reference cycle:
2904319 -> 133683 -> 988775 -> 424676 -> 2904319
(行動 -> 生活 -> 人間関係 -> コミュニケーション -> 行動)
I am considering only sub-categories (page_namespace = 14).
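For reference, here is a minimal sketch of the kind of parent walk that surfaces such a cycle (it assumes the categorylinks/page rows for namespace 14 have already been loaded into a parents dict mapping a category's page id to the page ids of its parent categories; the loading code is omitted):
def find_cycle(parents, start, path=()):
    # depth-first walk up the category links from *start*;
    # returns the first cycle found as a list of page ids, else None
    path = path + (start,)
    for parent in parents.get(start, []):
        if parent in path:
            return list(path[path.index(parent):]) + [parent]
        cycle = find_cycle(parents, parent, path)
        if cycle:
            return cycle
    return None

# e.g. find_cycle(parents, 2904319) -> [2904319, 133683, 988775, 424676, 2904319]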
In the Wiki Category documentation it explicitly states that:
All categories (except root category 1) should be contained in at least one other category, and there should be no cycles (i.e. a category should not contain itself, directly or indirectly).
Could the data be broken?
Am I misunderstanding anything?
The data is probably not broken. There is nothing in MediaWiki that prevents category loops, or cycles. Category structures like A < B < C < A are valid, and not uncommon in MediaWiki installations. Categories can also be orphaned, not belonging to any category.
The text you are quoting is not from the MediaWiki documentation, but from a Wikimedia help page. It refers to a recommendation by Wikimedia to try and keep categories hierarchical on Wikimedia wikis (e.g. Wikipedia). However, as this depends on the editors, you will find plenty of exceptions in any major Wikimedia wiki. Sometimes they are unintentional, and sometimes they are considered acceptable by the community for one reason or another.
A more relevant place to look in your case is the corresponding help page and the policy page for categories on the Japanese Wikipedia. You'll find them here:
Help:カテゴリ
Wikipedia:カテゴリの方針

Mediawiki and databases

Is there a way I can create a database from which to pull data into my MediaWiki table? Or is there a way to have a database like Drupal and place a MediaWiki-type interface on it?
There is no way to do this directly in stock MediaWiki, although you can fake it up somewhat with templates. For example, you could create a template something like this:
{{#switch:{{{key}}}
| key1 = value1
| key2 = value2
| key3 = value3
...
}}
Template:NUMBEROF/data on the English Wikipedia is an example of this style (with two levels of keys).
Or you can create a set of templates, one for each "record", that each take an "output formatter" template as a parameter and pass that output formatter a named parameter for each column in the record. The Country data templates on the English Wikipedia are an example of this pattern.
Or you could combine the above two styles, with one parameter to select the row (as in the first style) and a second to provide the output formatter (as in the second).
If you don't mind installing extensions, you could use the Labeled Section Transclusion extension to transclude sections of a data page. Or you could install the Semantic MediaWiki extension, which I hear allows all sorts of querying of data from the wiki's pages. Or you could install one of the many Database extensions that may allow you to do what you want. Or you could write your own database extension.
You could also have a look at http://www.mediawiki.org/wiki/Extension:Data_Transfer, which does not require Semantic MediaWiki even though it's written for use with SMW. (If you use SMW there are, as noted in an earlier reply, plenty of extensions and built-in options.)