How can I easily save the HTML pages of my Rails app, to give to a designer?

I am working on a Rails app, and a designer is designing the raw HTML pages separately. It's tough to get his environment set up to use the application directly, so I would like to be able to somehow "store" the HTML of all of the pages that my application generates to a directory somewhere, so that I can pass the current version off to the designer.
Does anyone know of a gem or rake task that would help me do something like this?
I am also open to other suggestions for working in parallel with designers who don't know Rails.
Thanks
Edit
I guess an amendment to my question would be: does anyone also know of ways of generating the list of page links to feed to wget, other than going through them by hand?
Edit 2
Just thinking out loud... to generate every possible page in an app, you'd need to call every action in every controller. So I'd need a program to find which controllers exist in all of my app/gems/plugins, and then find all of the public methods in them. Or maybe I could just use the actions that are routable from the list of routes (see the sketch after this list).
Then you might want to filter out the actions that don't render HTML.
Then you might want to filter out destructive actions (unless this program ran in a test environment, and rebuilt the system every time).
Then as many actions depend on the parameters that are supplied, you'd need to have control over which parameters are sent to each action...
Then you'd also have to be able to send session cookies to log in
what else..
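As a rough sketch of the "routable actions" idea (assuming a reasonably recent Rails; the host, file name and filtering rules here are made up), something like this could dump all parameterless GET routes as a URL list:

# list_get_urls.rb -- hypothetical sketch; run with: rails runner list_get_urls.rb > urls.txt
host = "http://localhost:3000"

Rails.application.routes.routes.each do |route|
  # route.verb is a Regexp in older Rails and a String in newer ones; to_s handles both
  next unless route.verb.to_s =~ /GET/
  path = route.path.spec.to_s.sub("(.:format)", "")
  next if path.include?(":")   # skip routes that need params, e.g. /posts/:id
  puts "#{host}#{path}"
end

That list could then be fed to wget (see below) instead of letting it crawl.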

wget -m http://somewhere.com
This command will fetch all the files / pages from http://somewhere.com and download them to a local directory, to form a local "mirror."
-m
--mirror
    Turn on options suitable for mirroring. This option turns on recursion
    and time-stamping, sets infinite recursion depth and keeps FTP directory
    listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
Note: I don't believe Mac OS X ships with wget. If you are using a Mac, I'd suggest installing Homebrew and then running brew install wget.
Read more: man wget
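If you do end up generating a URL list rather than crawling (urls.txt here is just an example name), wget can read it directly:

wget -i urls.txt -E -k -p -P designer_snapshot

-i reads URLs from a file, -E adds .html extensions to the saved files, -k converts links for local viewing, -p pulls in page requisites such as CSS and images, and -P sets the output directory.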


Alternatives to CGI.pm for header() and param()?

I've been an avid user of CGI.pm since the previous millennium so I was a bit surprised when it disappeared from my old Ubuntu server when I upgraded it recently. My short-term fix was sudo cpan install CGI, but a quick web search to find out why it was missing in the first place revealed CGI::Alternatives which explains why it has gone and offers some suggestions for alternatives. For my purposes, HTML::Tiny looks best for replacing my programmatic HTML generation, but Alternatives is strangely silent on the subject of HTTP headers and CGI parameters.
I broadened my search and found lighter alternatives to CGI.pm on perlmonks where one response suggests CGI::Simple, but the recommendation is less than whole-hearted - "its not quite as up to date as CGI.pm".
So is CGI::Simple the way to go, or is there a better alternative?
Please don't spend time suggesting "rewrite everything using framework XXX". I really don't have the time or energy for that. I'm happy to replace all my HTML generation with HTML::Tiny, so I'm looking for something with a similar (or lower!) amount of rework to replace header() and param().
You're missing the point if you're looking for an alternative that provides header and param.
The argument for the removal of CGI.pm from core (but not from CPAN) is that you shouldn't have to deal with CGI yourself; you should be using a framework that handles this for you.
If you don't agree with this — if you're looking for an equivalent to header and param — go ahead and keep using CGI.pm.
If you do agree, CGI::Simple is no better than CGI.pm.
As others have said, there's no reason not to use CGI together with HTML::Tiny. So that's the answer to your question. For the last five years that I was using CGI, my programs all started something like:
use CGI qw[param header];
which is the approach you're talking about here.
If you wait a year or two, the plan is for the HTML generation functions to be removed from the main module, so your problems will all go away at that point.
But that's not what I'd do in your situation. I'd switch to using PSGI and Plack. You said that you don't want anyone to suggest a new framework, so I'm not going to do that. Plack isn't a framework, it's a toolbox for writing PSGI applications. Certainly, I'd use a framework like Dancer, but you don't have to. You can happily use Plack without any of the frameworks built on top of it.
You'll still get most of the advantages of PSGI. You'll be able to deploy your applications in any way you like. You'll have access to all the awesome Plack middleware. Testing your program will be far easier.
When you're using "raw" Plack, the equivalent of CGI::param is Plack::Request::parameters and the equivalent of CGI::header is Plack::Response::headers.
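A minimal sketch of what that looks like in practice (the parameter name "name" is just for illustration, not from the question):

use Plack::Request;
use Plack::Response;

my $app = sub {
    my $env = shift;

    my $req  = Plack::Request->new($env);
    my $name = $req->parameters->{name};   # roughly what CGI::param gave you

    my $res = Plack::Response->new(200);
    $res->content_type('text/plain');      # roughly what CGI::header gave you
    $res->body("Hello, " . ($name // 'world') . "\n");
    return $res->finalize;                 # the standard PSGI arrayref
};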
So there are three answers to your question.
1. Carry on using CGI.pm. Just stop using the HTML generation functions and replace them with HTML::Tiny.
2. Use raw PSGI/Plack and bring your web development into the 21st century.
3. Use one of Perl's many great web frameworks.
Unfortunately, you don't seem to like any of those answers.
The issue with CGI.pm is not that it's going away, merely that it will no longer be distributed as part of the core Perl distribution. However, that doesn't mean you have to install from CPAN. On your Ubuntu system you can just do:
sudo apt-get install libcgi-pm-perl
and you'll be off and running with the same old CGI you know and love :-)
The correct answer to my question is that use CGI::Simple is better than use CGI qw(header param) because it loads faster.
Answers along the lines of "Use Plack, it's the future of Perl for websites" weren't helpful to me because I didn't have time to learn a new programming paradigm or to discover how to reconfigure my web server to make it work, no matter how insistent the Plack Evangelists were that I was wrong in what I was trying to do.
I've now had a bit of time to wade through the links to documentation and presentation slides I was offered and I can see what they were getting at, but one failing in what I've read so far is the lack of a concise end-to-end working example to help get my head around things ... so here's what I knocked together to get me started (and, no, I haven't finished yet!). I hope that others beginning the journey from CGI to PSGI will find this useful to help get them underway...
First you need to install Plack. I'm running an Ubuntu 14.04 installation so it was simply a matter of running sudo apt-get install libplack-perl. The generic way is to install Task::Plack from CPAN.
Next you need to know where your cgi-bin directory is located. You ought to know already if you're a CGI die-hard! Since I'm running Apache mine is defined in /etc/apache2/conf-available/serve-cgi-bin.conf by ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/.
Now for the magic. We're going to create a CGI script that runs a PSGI app, handing it data from the CGI environment. This is good for experimentation and testing but NOT for deployment, as you don't get any of the speed benefit that PSGI can give you (for that you need something like Plack::Handler::Apache2, Plack::Handler::FCGI or mod_psgi in Apache, or a dedicated PSGI server such as Starman or Starlet, or one of the other handlers mentioned on PlackPerl.org). Create /usr/lib/cgi-bin/psgi-cgi.pl with the following contents and make it executable:-
#!/usr/bin/perl
use Plack::Util;
use Plack::Handler::CGI;

# Apache passes the path of the matched .psgi file in PATH_TRANSLATED;
# load that file as a PSGI app and run it under the CGI handler.
my $app = Plack::Util::load_psgi($ENV{PATH_TRANSLATED});
Plack::Handler::CGI->new->run($app);
Next we need to tell Apache to pass PSGI app files to this handler. I did this by creating /etc/apache2/conf-available/psgi-cgi.conf containing:-
Action psgi-cgi /cgi-bin/psgi-cgi.pl
AddHandler psgi-cgi .psgi
then loaded it into my Apache server by running sudo a2enconf psgi-cgi and sudo service apache2 reload. Basically you need to get these lines into your httpd.conf file and restart the server.
Finally, my first PSGI script, which I created in my server's DocumentRoot as /var/www/html/hello.psgi:-
use Plack::Request;

my $app = sub {
    my $env = shift;
    my $req = Plack::Request->new($env);
    my $par = $req->parameters;
    return [
        200,
        [ 'Content-Type', 'text/plain' ],
        [ "Hello world!\n",
          map("$_ = ".join(", ", $par->get_all($_))."\n", sort keys %$par),
        ]
    ];
};
The application is a coderef which returns a 3-element arrayref: the first is the HTTP status code, the second is the name/value pairs for the HTTP headers, and the third is the body of the response (which could be generated using HTML::Tiny for a web page). The first two elements answer the question of what you need instead of the CGI::header function - nothing! (though for more complex handling you'll need Plack::Response::headers). The example also shows how to replace CGI::param - use Plack::Request::parameters, which returns a Hash::MultiValue object containing the values of URL (GET) and BODY (POST) parameters, including the ones with multiple values.
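For the HTML case, a rough sketch of generating the body with HTML::Tiny inside the same kind of app (the page content here is made up) might look like this:

use HTML::Tiny;

my $h = HTML::Tiny->new;

my $app = sub {
    my $env  = shift;
    # build the response body as an HTML page instead of plain text
    my $page = $h->html( [
        $h->head( $h->title('Hello') ),
        $h->body( $h->p('Hello world!') ),
    ] );
    return [ 200, [ 'Content-Type' => 'text/html' ], [ $page ] ];
};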
Finally, a test:-
$ wget -q -O- 'http://localhost/hello.psgi?a=1&a=2&a=3&b=1&b=4'
Hello world!
a = 1, 2, 3
b = 1, 4
I hope this is useful to other CGI die-hards in taking their first steps towards PSGI proficiency, and I hope the Plack Evangelists will acknowledge that it takes a lot of reading and comprehension to get even this far.
CGI::Minimal would be a good option; it is much lighter than CGI and CGI::Simple, but it lacks the more advanced methods that CGI and CGI::Simple provide.

mediawiki move and upgrade at once

I'm considering moving one of the company's internal wikis (a very basic wiki with few or no extensions and not that many pages) to another machine, and I'm wondering whether I can upgrade the MediaWiki version at the same time, going from 1.6 to the current latest, 1.25 (in order to use extensions only available for the latest versions).
The Upgrade guide
https://www.mediawiki.org/wiki/Manual:Upgrading
seems to omit the scenario in which an upgrade of the underlying software (Apache, MySQL) is also required for setting up the target version,
and the Moving guide
https://www.mediawiki.org/wiki/Manual:Moving_a_wiki
strictly recommends that source and target wikis share the same software level.
So I'm a bit stuck. I would attempt an export/import of an XML dump, but I'm not confident, for the above reason (there is a huge version gap between the source and target wikis).
Or is there a better way to approach the problem? Thx
Edit after some tests
I consider Florian's answer the safest and most advisable, but I'd like to share the final solution I came up with.
1. Install the new wiki (blank).
2. Export an XML dump of the original wiki:
php maintenance\dumpBackup.php --full > dump.xml
I first encountered a "Cannot connect to database" error, so I had to add the following lines to LocalSettings.php:
$wgDBadminuser=...
$wgDBadminpassword=...
3. Import the XML dump into the new wiki (first try in dry-run mode):
php maintenance\importDump.php --dry-run < dump.xml
php maintenance\importDump.php < dump.xml
Then I was prompted to run
php maintenance\rebuildrecentchanges.php
4. Copy the physical files from the old wiki to the new one, in the same path (for common wikis they should be in the "images" folder; that was not my case).
5. Re-create the users (manually) in the new wiki.
6. Finally, edit LocalSettings.php with the most essential settings you want to preserve (groups, restrictions, ...).
And the move was done! The new wiki is OK and already usable at this stage: pages are there and links are working.
In fact, it should work if you move the wiki from one server to another and then upgrade it on the new server. As you may already know, it's important to back up all the files and data you have for the wiki in the "old" environment, so you can easily restore it from there.
If I wanted to do what you want to do, I would first follow the "Moving a wiki" guide, except for the "Test" section. After that I would upgrade the wiki to the newest version. Then I could test the wiki intensively to see whether everything worked well.
If you don't want to do that, you really need to upgrade the wiki in the "old" source environment and move it after that. If I understand you correctly, that would require an update of the server software (I expect PHP and MySQL?).

Wrong HTML report when using -i option in cppcheck

I work for a middleware company. We would like to integrate Cppcheck into our build system to help prevent errors and issues in our code. Our codebase is big, and it's distributed across several modules (each module in a different folder). These modules have many dependencies between them.
When running cppcheck, we want to run it only once over the whole code to give the whole view to the tool. However, some modules are not related to the core ones, and we want to skip those modules from the analysis. Besides, we have implemented APIs for different languages. So for example, we have some modules for C++ that we would like to analyze separately from the C modules.
We have basically two options: 1) call cppcheck with a list of the modules that we want to analyze, or 2) call cppcheck from the top level folder of the code, and use -i options to ignore all the modules that shouldn't be analyzed.
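For reference, option 2 looks roughly like this (module names and paths here are made up, not our real layout):

cd /path/to/source
cppcheck --enable=all --xml --xml-version=2 -i extras -i bindings . 2> report.xml
cppcheck-htmlreport --file=report.xml --report-dir=html_report --source-dir=.

cppcheck writes the XML report to stderr, and cppcheck-htmlreport then turns it into the HTML pages.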
Both approaches worked fine up to the point of creating the XML report. The problem appears when calling cppcheck-htmlreport. We observed that no index.html or stats.html were generated. Besides, only some of the results appearing in the XML were translated into HTML reports. For many results, the HTML pages were not generated.
A memory problem can be ruled out; we already verified this. Also, it's not that the tool starts creating HTML reports from the XML results consecutively and then stops at some point. What actually happens is that the generated HTML reports skip around: the HTML report for error number 1 in the XML is created, then maybe the next one is number 5, and so on.
We called cppcheck-htmlreport with the --source-dir option pointing to the top-level folder of the code. I think the problem may be caused by this. I tried calling cppcheck just from the top-level folder, with no -i options, and then the HTML reports were generated without issues. So it looks like the XML created when using -i options cannot be correctly understood by cppcheck-htmlreport.
Is there a way to provide -i options to cppcheck-htmlreport as well? I think this could solve the problem...
I have also noticed that the problem only seems to appear when many modules and a lot of code are analyzed. When analyzing only a few modules, the HTML report was correct, although we still called cppcheck-htmlreport with the top-level folder as --source-dir.
Is this a known issue in cppcheck HTML generator? Is there any way to solve this?
Any advice is very much appreciated.
Thanks,
Sonia

How to allow web-component-tester to run tests stored with my components

I am experimenting with the framework to build an SPA using Polymer. This will include a large number of custom elements at various levels in the overall application hierarchy. I would like to use web-component-tester to run the module tests on them.
web-component-tester seems to be opinionated about where the tests will be stored - in a separate test directory, where it will run all files found.
I am of the opposite opinion. I would like to store tests in the same directory as the element definition. I would like to differentiate tests by naming them xxx.test.html (or possibly xxx.test.js). I also want to run different "sets" of tests controlled by gulp, some of which will be watching for changes and then running the tests (for the app side of my project) and some of which will be elements that use core-ajax to unit test my server-side scripts. These will more than likely be in a totally different directory hierarchy (my dist directory) and will be served by a proper web server.
I "think" the "suite" config option wct-conf.js file in my project root might be how I can define this, or alternatively a wct command with some file globs. Unfortunately web-component-tester's README is somewhat confusing on any detail and when you have your own web server it says "You'll need to save WCT's browser.js in order to go this route." What does that mean?
Can someone enlighten me on how I can get WCT to run each of the elements/**/*.test.html files as its own "suite"? (I actually intend to use the describe/it format, but I assume I still need to use the term suite.)
Can someone also explain what I need to do with browser.js when I have my own web server?
I ran some experiments and did a bit of debugging with node-inspector. Firstly, the command line overwrites the suites parameter in the config file
wct app/elements/**/*.test.html
does find all my module tests if I have them stored with the elements and ignores the contents of the wct.conf.js file's suites parameter.
Also, putting the same value (i.e. app/elements/**/*.test.html) in the wct.conf.js file for the suites parameter does the same job. In fact, in this mode, gulp test:local
also works correctly.
So to run different tests for module and distribution, I just need to set up wct.conf.js for my module tests, and set up gulp to run a command line with the correct location of my test files.
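As a rough sketch, the wct.conf.js I'm describing looks something like this (the plugins section is just the usual local-browsers setup and is an assumption, not specific to my project):

module.exports = {
  // run every test stored alongside its element
  suites: ['app/elements/**/*.test.html'],
  plugins: {
    local: {
      browsers: ['chrome']
    }
  }
};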
I still haven't understood the instructions for running with your own web server.

Autoupdate ala Google Chrome workflow

At the company I work for, I was asked to write an autoupdate function a la Chrome, i.e. it should check periodically whether a new version is available, download the new version, and apply it silently the next time the application starts.
I already have something up and running, but it is more like a dirty hack than something I feel happy about. So I would like to know how to design and implement such a solution properly. My horrible hack works like this:
Have a mechanism to check whether a new version exists (a database query or a web service)
Download a full zip with the whole new version.
Check file signature. If everything went alright, set a registry value: must update to true.
When the application restarts, if the must-update value is true, launch an update program and exit.
The update deletes the contents of the application folder, unzips the update and replaces the old contents, launches the application and exits.
Now I would like to change it so it works more cleanly. I am planning to send the update as a bsdiff patch. It gets downloaded, but the question is: what happens next?
When do I apply the update?
Who is in charge of applying the patch? Is it the program itself, or is it a third program (as in my current hack) that applies the patch and relaunches the application?
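For reference, creating and applying a bsdiff patch from the command line looks like this (file names are illustrative):

bsdiff old_app.exe new_app.exe app.patch     # on the build server: produce the patch between two versions
bspatch old_app.exe new_app.exe app.patch    # on the client: rebuild the new file from the old one plus the patch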
If you're going down the C++ route you can go to Chromium, download the Chrome source code and dig around to see how the update is done; this might give you a better idea of how to approach it. Here's an article that might help.
If you're familiar with .NET, the recently released NuGet also has an auto-update feature that might be useful to look at; you can get the source code from here. David Ebbo has a blog post about how it's done here.
I'm not up to date on Delphi but you might be able to use either of the above options.
The workflow you proposed is more or less how it should work, but there's no need to re-invent the wheel - there are plenty of libraries out there that will do this for you. Using a 3rd-party library has the benefit of keeping your code cleaner while making sure the dirty process of auto-update is contained and working flawlessly.
Trust me, I know. I'm the author of NAppUpdate, an app update framework for .NET (which you might want to try out or learn from).
So, after giving it a lot of thought, this is what I came up with ("active directory" will refer to the directory where the main program lies, "active program" is the main program, and "update program" is the one that replaces the active program and its resource files):
The active program checks whether there is a new version every certain amount of time. If so, it downloads it.
Prepare the new version in a separate folder (this can be done by copying the contents of the program's directory to a subdirectory and applying a binary patch, or simply unzipping the new version).
Set a flag that indicates that a new version is ready.
When the program is exiting (and one has to account for different interrupts here):
The active program checks the new-version-ready flag; if it's set, it launches the update program and exits.
The update program checks whether it can write in the active directory. If so, it replaces the contents with the prepared version.
The update program has to recheck links and update them accordingly.
So guys, if you have a better workflow, please tell me.
You could literally use the Google Chrome update workflow by using the Google Chrome updater:
http://code.google.com/p/omaha/
They open-sourced it in February 2009.