I have created an rrdcgi script to display information about system performance with graphs. Now I would like to add an option for users to create a PDF on the fly with the details on the current page (images and information) plus a header and footer. I also want the generated PDF files to be saved in some location so that they can be easily accessed next time. Is this possible to do with rrdcgi? Any Perl code would be really appreciated.
I need these options.
You need to consider what you want to put in the PDF: do you want an exact replica of the web page the user is viewing (anywhere from very hard to close to impossible without having the user's browser installed on your side and using its print output), or do you want the same information in a roughly similar layout?
An important issue is how you are generating the HTML: I did something similar once to generate PDF receipts for experiment participants (now, I just output HTML with print styles).
The HTML is generated using HTML::Template, although Template.pm would do just as well.
It is then trivial to write another template, one that generates a LaTeX document which can be processed with pdflatex. If you save the data at the time the snapshot is requested, you can add the snapshot to a queue that generates documents asynchronously, so that requests do not tie up the web server.
Update: Looking at rrdcgi, I now realize that it already uses a template. That is perfect: instead of putting HTML in the template, put LaTeX code in it and run rrdcgi with the --filter option to create a LaTeX source file, which you can then run through pdflatex. I guess the problem to solve there is being able to use the exact same data that was used to generate the page the user is looking at.
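A minimal sketch of that pipeline in Python (the template name report.tex.cgi, the /var/reports directory, and the timestamped filename are assumptions; adjust to your setup). Saving the output under a predictable path also covers the requirement of keeping the PDFs around for later access:

    # Run an rrdcgi LaTeX template through pdflatex and keep the resulting PDF.
    import pathlib
    import subprocess
    import time

    outdir = pathlib.Path("/var/reports")          # assumed archive location
    outdir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    tex = outdir / f"report-{stamp}.tex"

    # --filter makes rrdcgi expand the RRD:: tags and write plain output,
    # which in this case is LaTeX source rather than HTML.
    with open(tex, "w") as dest:
        subprocess.run(["rrdcgi", "--filter", "report.tex.cgi"],
                       stdout=dest, check=True)

    # Compile the generated source into /var/reports/report-<stamp>.pdf.
    subprocess.run(["pdflatex", "-interaction=batchmode",
                    f"-output-directory={outdir}", str(tex)], check=True)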
If it is not possible to re-run rrdcgi with the exact same data, consider adding some JavaScript that submits the HTML source of the page the user is reviewing (or some JSON representation thereof) to a CGI script that parses the HTML and outputs LaTeX. Writing clean HTML in the original template and judicious use of class and id attributes would help there.
I do not have time to test any of these ideas right now, but I will take a look again within the next couple of days.
Is it worth the effort?
Why don't you add a FAQ explaining how to set up a PDF printer on Windows/Mac/Linux and provide a 'clean' page that can then be printed?
Since you apparently have to create the PDF,
take a look at this post here on SO (what-is-the-best-perl-module-to-use-for-creating-a-pdf-from-scratch).
There is also this post, that could combine the 'clean' HTML page and a server-side print.
Regarding the LaTeX route: if you have rrdcgi generate the graphs in PDF format, pdflatex will be able to include them directly in the document, producing a very high-quality PDF with graphs ... very slick. Sorry, no code.
Related
I have an Excel file which gets live data from a third party. I want to display that live data from Excel in a web page. Can someone guide me on how to do it?
Any inputs appreciated.
Thank you.
Your server will need to query the same source. The following is one of dozens of ways to do this.
Set up a cron job to do the following (a sketch follows the list):
1. Use curl to pull the data.
2. Use an awk program to reformat the data into a data file laid out as a table with HTML table markup.
3. Concatenate a header file, your data file, and a trailer file to make the valid HTML file you want.
4. Store that file on the web server.
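Here is a minimal sketch of steps 1-4 as a single Python script instead of curl + awk (the URL, the comma-separated field layout, and the /var/www/html output path are assumptions); run it from cron at whatever interval suits the data:

    #!/usr/bin/env python3
    # Sketch: pull the data, turn it into an HTML table, wrap it with a header
    # and trailer, and drop the result where the web server can serve it.
    import urllib.request

    URL = "https://example.com/live-data.csv"        # assumed data source
    raw = urllib.request.urlopen(URL, timeout=30).read().decode("utf-8")

    # Step 2: reformat the data as an HTML table.
    rows = [line.split(",") for line in raw.splitlines() if line.strip()]
    table = "<table>\n" + "\n".join(
        "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
        for row in rows
    ) + "\n</table>\n"

    # Step 3: concatenate header + data + trailer into a valid HTML file.
    with open("header.html") as head, open("trailer.html") as tail:
        page = head.read() + table + tail.read()

    # Step 4: store the file on the web server.
    with open("/var/www/html/live.html", "w") as out:
        out.write(page)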
If you want live updates rather than having the user reload the file, you either have to push the data or write JavaScript that reloads that element of the web page at intervals.
This is an excellent project. Learn this and you’ll have a big step up in your web building skills.
I can upload an image, then on its File page I can transclude a Cargo-enabled template that stores some metadata about that image, and later query that template's table in order to create a gallery. However, the manual addition of the template to the File page is tedious and error-prone (e.g. incorrectly naming other pages in various template fields). Is there an extension, perhaps something like Page Forms, that would allow me to simplify this process, so that I could upload an image and populate its metadata on a single page? Is there any simpler workflow in base MediaWiki to achieve this result?
I'm not familiar with a Page Forms based solution,
but what I've done in a similar case (adding a template to 3 sets of ~1k pages) is to use pywikibot (a library that lets you run automated processes against your MediaWiki from an external tool).
The solution depends on your template: does it receive any arguments?
1. Template without arguments: it's enough to add "{{My_Template}}" to the page, which you can do with pywikibot's add_text.py script.
2. Template with arguments: that's more complicated. In this case I would write a simple Python script that uses pywikibot to add the required text (there are several options here; a sketch of option 2.1 follows after this list).
2.1. Add the relevant files to a category with the category script, then in your script iterate over all pages in the category using
"from pywikibot import pagegenerators" and
"pagegenerators.CategorizedPageGenerator".
2.2. Use
"pagegenerators.SearchPageGenerator", passing a namespace and filtering the files you want based on predefined knowledge.
BTW, if you are uploading many files, you can use BatchUpload
My question is whether or not anybody knows of a better way to do what I'm already doing. I'm creating a report as a list, and trying to render it both in HTML and Excel.
I'm developing a shiny app that generates reports for Qualtrics surveys.
The results table is a list of HTML strings that I paste together and display in a shinydashboard. Here's a dput of the example results tables.
Here's how I'm creating the HTML results tables list -- with the html_tabelize() function in my package. Here's a dput of the example input.
In the Shiny server.R file, I create the Excel file with the following code:
output$downloadResults <- downloadHandler(
  filename = 'tables.xls',
  content = function(file) {
    write(html_tabelize(main()[['blocks']]), file)
  }
)
To summarize: I get the blocks, I run html_tabelize on them, and then I write the HTML output to a file called "tables.xls". When I open that file, because Excel can render HTML, it renders something like this:
My concerns with what I'm doing are two-fold:
1. If I were writing a real Excel document instead of simply rendering HTML in Excel, I could perhaps get a better-formatted document. I'd like that.
2. When you download the results tables .xls file and try to open it, you get a warning from Excel. I don't want the users of my app to see this warning, because it's distracting and could worry them about something that isn't really a concern.
I know that options exist for writing Excel files in R, but so far what I've seen indicates that their input must be either a data frame, or a list of data frames. The list I am rendering from has different types of components, like the question text, as well as data frames of results. Originally I was using pandoc, but pandoc, even when run from R, is a system binary, and it's difficult to list as a dependency (and if I can't list it as a dependency, it's tough to make sure it's installed for the users of my app). Additionally, I found out pandoc doesn't even convert to "real" Excel -- it also just saves HTML in a .xls file. Does anybody have any suggestions as to how I can improve this part of my app?
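Not an R-specific answer, but to illustrate the "real spreadsheet" idea: libraries that write genuine .xlsx files generally let you append arbitrary rows, so mixed content (question text plus result tables) can be written block by block rather than as one data frame. A hypothetical sketch in Python with openpyxl, purely to show the shape of the approach (the block contents are invented, not the app's data; in R, a package such as openxlsx offers similar block-by-block placement):

    # Hypothetical: write question text and a results table into one real .xlsx.
    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active

    blocks = [
        {"question": "How satisfied are you?",
         "table": [["Answer", "N"], ["Satisfied", 42], ["Unsatisfied", 7]]},
    ]

    for block in blocks:
        ws.append([block["question"]])   # one row of question text
        for row in block["table"]:       # then the tabular results
            ws.append(row)
        ws.append([])                    # blank spacer row between blocks

    wb.save("tables.xlsx")               # a genuine Excel file, so no format warning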
Imagine I've created a new JavaScript framework and want to showcase some examples that utilise it, and let other people add examples if they want. Crucially, I want this to all be on GitHub.
I imagine I would need to provide a template HTML document which includes the framework, and sorts out all the header and footer correctly. People would then add examples into the examples folder.
However, doing it this way, I would just end up with a long list of HTML files. What would I need to do if I wanted to add some sort of metadata about each example, like tags/author/date etc, which I could then provide search functionality on? If it was just me working on this, I think I would probably set up a database. But because it's a collaboration, this is a bit tricky.
Would it work if each HTML file had a corresponding entry in a JSON file listing all the examples where I could put this metadata? Would I be able to create some basic search functionality using this? Would it be a case of: Step 1 : create new example file, step 2: add reference to file and file metadata to JSON file?
A good example of something similar to what I want is wbond's package manager http://wbond.net/sublime_packages/community
(There is not going to be a lot of create/update/destroy going on - mainly just reading.)
Check out this JavaScript database: http://www.taffydb.com/
There are other JavaScript databases that let you load JSON data and then do database operations. Taffy lets you search for documents.
It sounds like a good idea to me, though - making HTML files with an associated JSON document that holds metadata about each one.
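For example, the metadata file could be a single JSON array with one entry per example (all field names and values here are made up):

    [
      {
        "file": "examples/parallax-scroll.html",
        "title": "Parallax scrolling demo",
        "author": "jane-doe",
        "date": "2013-05-01",
        "tags": ["scrolling", "animation"]
      },
      {
        "file": "examples/drag-drop.html",
        "title": "Drag and drop",
        "author": "john-doe",
        "date": "2013-05-03",
        "tags": ["dom", "events"]
      }
    ]

A contributor's "step 2" is then just appending one object to this array in the same pull request as the new HTML file, and the search page can fetch the array and filter it by tag, author, or date on the client.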
I wanted to know if the following scenario is possible.
I have some data in an Excel file. I want to make an HTML page which will have this data inside it (no other source of data). Inside the HTML page, will I be able to put text fields, buttons, etc. for a user to input data, and based on that, write queries (jQuery, I guess) to show the data that those queries return?
Can this be done? I have not done anything so far; I just wanted to know if this is possible, and I'd appreciate it if someone could point me in the right direction to start. I want to learn how to do this on my own.
Thanks in advance.
HTML is a markup language - it is the structure of a web page, and has no mechanisms for storing or processing dynamic data.
You will have to use a client-side approach such as JavaScript + cookies, or a server-side language like PHP + MySQL.
You want to look at using JavaScript in the page. On the server (I presume) you need to read the Excel file and generate JS objects in the page that hold the values. That is, the JS, when run, creates a collection of JS objects with the values in it. This script can be embedded in the page so that no other data access is needed.
You can then write more JS, linked to the buttons, that selects data out of these objects and displays it on the page. You probably don't want to do this from scratch -- there are good JS libraries and frameworks to leverage. Consider GWT or YUI.
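A minimal sketch of the server-side step, assuming Python with openpyxl is available (the workbook name data.xlsx, the single header row, and the output file name are all assumptions; any server-side language that can read Excel works the same way):

    # Read the spreadsheet and emit an HTML page with the rows embedded as JS objects.
    import json
    from openpyxl import load_workbook

    wb = load_workbook("data.xlsx", read_only=True)
    rows = list(wb.active.iter_rows(values_only=True))
    header, data = rows[0], rows[1:]

    # One JS object per spreadsheet row, keyed by the header cells.
    records = [dict(zip(header, row)) for row in data]

    page = f"""<!DOCTYPE html>
    <html>
    <head>
    <script>
    // The spreadsheet data, embedded directly in the page.
    var records = {json.dumps(records, default=str)};
    </script>
    </head>
    <body>
    <!-- text fields and buttons that filter the records array with JS go here -->
    </body>
    </html>"""

    with open("report.html", "w", encoding="utf-8") as out:
        out.write(page)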
Perhaps the simplest way is to open the file in Excel and save it as text (tab-separated; comma-separated would do, too), then insert this text data into your HTML document between the tags <script type="text/plain"> and </script>. You can then write, in a rather straightforward way, JavaScript code that reads the content of this element and constructs a JavaScript array of objects (or some other suitable data structure) from it. It will then be easy to access the data in JavaScript.
This will make it possible to run queries and display data. Modifying the data would be a completely different matter.