I have several MediaWiki pages that transclude a list of templates. If I edit one of these pages I can see the edit history. But if I edit a template, that edit is not shown in the history of the page that includes it. I know that, logically, the history belongs to the template, not the main page. But is there a way to have the edit history of a template included in the history of a composite page as well?
Not in MediaWiki itself, but since each page's history is available as an RSS feed, you could combine the feeds into a single merged feed, which should cover your needs. The big downside is that you have to create each combination manually. I used rssmix.com to generate this example out of an article's history and a template used on that page: http://www.rssmix.com/u/4247980/rss.xml
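If you'd rather script the merge than depend on rssmix.com, the history feeds are easy to combine yourself. A minimal sketch in Python, assuming the third-party feedparser library; the wiki URL and page names are placeholders:

    import feedparser  # third-party: pip install feedparser

    # MediaWiki exposes each page's history as a feed via
    # action=history&feed=rss. The wiki and page names below are placeholders.
    feeds = [
        "https://example.org/w/index.php?title=Main_Page&action=history&feed=rss",
        "https://example.org/w/index.php?title=Template:Infobox&action=history&feed=rss",
    ]

    entries = []
    for url in feeds:
        entries.extend(feedparser.parse(url).entries)

    # Newest edits first, across article and template alike
    entries.sort(key=lambda e: e.published_parsed, reverse=True)
    for e in entries:
        print(e.published, "-", e.title, "-", e.link)

Run it on a schedule (or behind a small CGI) and you get one merged history without depending on an external service.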
I can upload an image, then on its File page I can transclude a Cargo-enabled template that stores some metadata about that image, and later query that template's table in order to create a gallery. However, the manual addition of the template to the File page is tedious and error-prone (e.g. incorrectly naming other pages in various template fields). Is there an extension, perhaps something like Page Forms, that would allow me to simplify this process, so that I could upload an image and populate its metadata on a single page? Is there any simpler workflow in base MediaWiki to achieve this result?
I'm not familiar with a Page Forms based solution, but what I've done in a similar case (adding a template to 3 sets of ~1k pages) is to use pywikibot (a library that lets you run automated processes against your MediaWiki as an external tool).
The right approach depends on your template: does it take any arguments?
1. If the template takes no arguments, it's enough to add "{{My_Template}}" to each page. You can achieve this with pywikibot's add_text.py script.
2. If the template takes arguments, it's more complicated. In this case I would write a simple Python script that uses pywikibot to add the required text; there are several options here (see the sketch after this list):
2.1. Add the relevant files to a category with the category.py script, then in your script iterate over all pages in the category using "from pywikibot import pagegenerators" and "pagegenerators.CategorizedPageGenerator".
2.2. Use "pagegenerators.SearchPageGenerator", passing a namespace and filtering the files you want based on what you already know about them.
By the way, if you are uploading many files, you can use BatchUpload.
After picking up basic HTML/CSS/JS and jQuery, I got myself into WordPress. To save time and not build things from zero, I use pre-made templates and modify them to fit the desired future webpage. There might be a huge misconception in my head, but so far I haven't found an answer to this problem.
I have a WordPress site running locally with the help of WAMP. My site would consist of 3 separate HTML files, let's say index.html, contact.html, and about.html. My issue is that after generating those pages in WordPress, I can't find any way to modify the HTML of those pages, neither locally on my computer nor from within WordPress itself. I found the editor function in WP, but apparently it only lets me edit the CSS file.
My main goal is to generate the page with a template, then import it into Brackets / Atom / etc. and custom-shape the HTML and CSS. What am I missing?
Thanks,
WordPress only has templates, which it selects according to the type of content requested (page, blog post, or any custom post type you define in the theme). All your actual data is stored in the MySQL database; it is retrieved, inserted into the template, and the generated page is sent to the client. So you won't find any .html files in the WordPress core. My suggestion is to view the source in the browser, then copy, paste, and edit it in your favourite editor.
I think you are using HTML files as a template that have not been converted into a WordPress theme; that's why you can't edit those files in WordPress. You need to follow these steps:
1. Your index file must be index.php, not index.html.
2. You need a style.css file with valid code, and most importantly you need to learn WordPress theme development. This will help you: https://developer.wordpress.org/themes/basics/template-files/
I have a large .xml file (about 500 MB) which is a dump of a site based on MediaWiki.
My goal is to find all URLs that have image filename extensions, then group the links by second-level domain and export the result, containing only the links, grouped as described.
Example: there are many links of the form domain.com/*.png, host.com/*.png and image.com/*.png. Grouping them into separate files, one per second-level domain, each file containing that domain's links - that's the final result.
So you want to parse the links in the wikitext. Writing a MediaWiki parser is a pain, so you should use an existing parser.
The easiest way (easiest but not easy) is probably to import your dump into a MediaWiki install and rebuild some tables if needed, then export the externallinks table.
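If importing the dump is not an option, the same advice (use an existing parser rather than writing one) still applies to working on the dump directly. A rough Python sketch using the third-party mwparserfromhell parser; the file name and extension list are assumptions:

    import mwparserfromhell  # third-party: pip install mwparserfromhell
    from collections import defaultdict
    from urllib.parse import urlparse
    from xml.etree import ElementTree

    DUMP = "dump.xml"  # your 500 MB export
    IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif")

    links_by_domain = defaultdict(set)

    # iterparse streams the dump instead of loading it all into memory
    for _event, elem in ElementTree.iterparse(DUMP):
        if elem.tag.endswith("}text") and elem.text:
            for link in mwparserfromhell.parse(elem.text).filter_external_links():
                url = str(link.url)
                if url.lower().endswith(IMAGE_EXTS):
                    host = urlparse(url).netloc
                    # Crude: treat the last two labels as the second-level domain
                    sld = ".".join(host.split(".")[-2:])
                    links_by_domain[sld].add(url)
        elem.clear()  # free memory as we go

    # One output file per second-level domain, links only
    for domain, urls in links_by_domain.items():
        with open(domain + ".txt", "w") as out:
            out.write("\n".join(sorted(urls)))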
Imagine I've created a new JavaScript framework and want to showcase some examples that use it, and let other people add examples if they want. Crucially, I want this all to be on GitHub.
I imagine I would need to provide a template HTML document which includes the framework, and sorts out all the header and footer correctly. People would then add examples into the examples folder.
However, doing it this way, I would just end up with a long list of HTML files. What would I need to do if I wanted to add some sort of metadata about each example, like tags/author/date etc, which I could then provide search functionality on? If it was just me working on this, I think I would probably set up a database. But because it's a collaboration, this is a bit tricky.
Would it work if each HTML file had a corresponding entry in a JSON file listing all the examples, where I could put this metadata? Would I be able to create some basic search functionality using this? Would it be a case of: step 1: create a new example file; step 2: add a reference to the file, plus its metadata, to the JSON file?
A good example of something similar to what I want is wbond's package manager http://wbond.net/sublime_packages/community
(There is not going to be a lot of create/update/destroy going on - mainly just reading.)
Check out this JavaScript database: http://www.taffydb.com/
There are other JavaScript databases that let you load JSON data and then run database operations on it. Taffy lets you search for documents.
Your idea sounds good to me, though - HTML files plus an associated JSON document holding metadata about each one.
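Whichever library you pick, the underlying idea is small enough to sketch. Here is a hypothetical examples.json layout and a tag search, shown in Python for brevity; in the browser the same filter is a few lines over the parsed JSON:

    import json

    # Hypothetical layout of examples.json -- one entry per example file:
    # [
    #   {"file": "examples/parallax.html", "title": "Parallax demo",
    #    "author": "jane", "date": "2013-05-01", "tags": ["scroll", "css"]},
    #   ...
    # ]
    with open("examples.json") as f:
        examples = json.load(f)

    def search_by_tag(tag):
        """Return every example carrying the given tag."""
        return [e for e in examples if tag in e.get("tags", [])]

    for e in search_by_tag("scroll"):
        print(e["file"], "-", e["title"], "by", e["author"])

Contributors then follow exactly the two steps you describe: add the example file, and add one entry for it to examples.json.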
I have created an rrdcgi script to display information about system performance, with graphs. Now I would like to add an option for users to create a PDF on the fly with the details on the current page (images and information), plus a header and footer. I also want the generated PDF files to be saved in some location so that they can be easily accessed next time. Is this possible to do with rrdcgi? Any Perl code would be really appreciated.
I need these options.
You need to consider what you want to put in the PDF: do you want an exact replica of the web page the user is viewing (anywhere from very hard to impossible without having the user's browser installed on your side and using its print output), or do you want the same information in a roughly similar layout?
An important issue is how you are generating the HTML: I did something similar once to generate PDF receipts for experiment participants (now, I just output HTML with print styles).
The HTML is generated using HTML::Template, although Template.pm would do just as well.
It is then trivial to write another template, one that generates a LaTeX document which can be processed using pdflatex. If you save the data at the time the snapshot is requested, you can add the snapshot to a queue that generates documents asynchronously, so that requests do not tie up the web server.
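A minimal sketch of the render-and-archive step, shown in Python since it is template-library agnostic; the file names and archive directory are assumptions, and the same few calls translate directly to Perl:

    import pathlib
    import shutil
    import subprocess
    import tempfile

    def latex_to_pdf(tex_source, archive_dir="pdf-archive", name="report"):
        """Run pdflatex on a rendered template and archive the resulting PDF."""
        workdir = tempfile.mkdtemp()
        tex_file = pathlib.Path(workdir, name + ".tex")
        tex_file.write_text(tex_source)
        # nonstopmode keeps pdflatex from pausing on recoverable errors
        subprocess.run(
            ["pdflatex", "-interaction=nonstopmode",
             "-output-directory", workdir, str(tex_file)],
            check=True,
        )
        dest_dir = pathlib.Path(archive_dir)
        dest_dir.mkdir(exist_ok=True)
        dest = dest_dir / (name + ".pdf")
        shutil.move(str(pathlib.Path(workdir, name + ".pdf")), dest)
        return dest  # saved location, ready to serve on the next request

A worker that pops rendered tex_source strings off the queue and calls this keeps pdflatex runs off the web server's request path.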
Update: Looking at rrdcgi, I now realize that it already uses a template. That is perfect: instead of putting HTML in the template, put LaTeX code in the template and run rrdcgi with the --filter option to create a LaTeX source file which you can run through pdflatex. I guess the problem to solve there is being able to use the exact same data that was used to generate the page the user is looking at.
If it is not possible to re-run rrdcgi with the exact same data, consider adding some JavaScript that submits the HTML source of the page the user is reviewing (or some JSON representation thereof) to a CGI script that parses the HTML and outputs LaTeX. Writing clean HTML in the original template, and judicious use of class and id attributes, would help there.
I do not have time to test any of these ideas right now, but I will take a look again within the next couple of days.
Is it worth the effort?
Why don't you add a FAQ explaining how to set up a PDF printer on Windows/Mac/Linux, and provide a 'clean' page that can then be printed?
Since you apparently have to create the PDF yourself, take a look at this post here on SO: what-is-the-best-perl-module-to-use-for-creating-a-pdf-from-scratch.
There is also this post, which could let you combine the 'clean' HTML page with a server-side print.
Regarding the LaTeX route: if you have rrdcgi generate the graphs in PDF format, pdflatex will be able to integrate them directly into the document, producing a top-quality PDF with graphs... very slick. Sorry, no code.