Removing dovecot v1 "ST"-tagged files via the command line

Is it OK to write a little script to just remove dovecot v1 email files marked seen & deleted (file names ending in ST)? Or do they have to be purged through a mail client?
I have a customer who is using Outlook; their messages have just been getting flagged for the last number of years and are taking up a lot of space. They are afraid to click a mouse, let alone walk through a purge.
If I can do this myself, that would be sweet.

dovecot in general is not restrictive about direct file manipulation. Version 2+ has a tool named doveadm that covers most cases, but direct operations are fine too. You can run something like this to delete all the files whose flags include ST:
find /var/mail -type f -name '*:*,*ST' -exec rm {} +
The more complicated pattern avoids friendly fire on plain files that merely happen to match the *ST pattern.
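Before deleting anything, it is worth previewing what the pattern actually matches, for example:
find /var/mail -type f -name '*:*,*ST' | head -n 20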
It is also a good idea to delete dovecot's index files in the affected mailboxes. If they are absent, dovecot recreates them on the next access, this time without any mention of the purged files.
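A sketch of that cleanup, assuming the standard index file names (dovecot.index, dovecot.index.log, dovecot.index.cache) under the same /var/mail location:
find /var/mail -type f -name 'dovecot.index*' -delete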

Keyboard shortcut for uploading two files in PhpStorm

The problem
In PhpStorm I have a style.css and an app.js file that I have to upload to a server over and over again. I'm trying to automate this.
They're compiled by Webpack, so they are generated files. Which means that I can't simply use 'Tools' >> 'Deployment' >> 'Upload to...' (since those files aren't, and won't ever be, open).
What I currently do
At the moment, every time I want to see the changes I've made, I do this (for each file):
Navigate to the file in the file tree (using the mouse)
Select it
Trigger the shortcut I've set up for Main menu >> Tools >> Deployment >> Upload to..., after which I select the server I want to upload to.
I do this approximately 100+ times per day.
The ideal solution
The ideal solution would be, that if I pressed a shortcut like CMD + Option + Shift + G
That it then uploaded a selection of files (a scope?) to a predefined remote server.
Solution attempts
Open and upload.
Changing to those files (using CMD + p) and then uploading them (once they're open). But the files are generated, which means that it takes PhpStorm a couple of seconds to render the content (which is necessary before I can do anything with the file) - so that's not faster.
Macro.
Recording a macro, uploading the two files, looking like this:
If I go to the menu and trigger the Macro, then it works. So far so good.
But if I assign a shortcut key and trigger that shortcut while in a file, then it shows me this:
And if I press '1' (for it to upload to number 1 on the list), then it uploads the file that I'm currently in(!?), and not the two files from my macro.
I've tried several different shortcuts (to rule out some kind of keyboard-shortcut-clash):
CMD + Option + CTRL + 0
CMD + Shift + 0
CMD + ;
... Same result.
And PhpStorm's macros don't seem to give me that many options anyway.
Keyboard Maestro.
I've tried doing it using Keyboard Maestro.
But I can't get it set up right. If it can't find the folders (because they're off-screen, or because I'm in a different project and forgot to adjust the shortcuts), then it blasts through the rest of the recorded actions, resulting in chaos. Ideally it should stop if it can't find the file on the screen.
Update1 - External program
Even if it's not possible to do in PhpStorm - is there another program I could achieve this with?
Update2 - Automatic Deployment in PhpStorm
I've previously used this, but I've had it happen a few times that I started syncing waaaay too many files, overwriting critical core files. It seems smart, but it can tear down walls if I've forgotten to define an ignore properly.
I wish there was an 'Automatic Deployment for these files' function.
Update3 - File Watchers
I looked into file watchers (recommendation from @LazyOne). Based on this forum thread, file watchers cannot be used to upload files.
It is possible to accomplish this using the external program scp (secure copy):
Steps:
1. Create a Scope (for compiled files app.js and style.css)
2. Create a Custom File Watcher with scp over that Scope
Start with Scope:
Create a Local Scope with the name scp files for your compiled files directory (I will assume that Webpack compiles into a dist directory):
Then, to add the dist directory to the Scope, select that folder and click on Include Recursively. Apply and move to File Watchers.
Create a custom template for File Watcher:
Choose a Name
Choose File type as Any
Choose Scope as scp files (created earlier)
Choose Program as scp
Choose Arguments as $FileName$ REMOTE_USER@REMOTE_HOST:/REMOTE_DIR_PATH/$FileName$
Choose Working directory as $FileDir$
That's it. Basically, what we have done is this: every time a file in that scope changes, it is copied with scp to the corresponding path on the remote server.
Voila. Apply everything and recompile your project, and you will see that everything is uploaded to the server.
(I assume that you have already set up your SSH client, generated a public/private key pair, added the public key on your remote server, and know the SSH credentials needed to connect to it.)
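For illustration only - with hypothetical values REMOTE_USER=deploy, REMOTE_HOST=example.com and /var/www/html as the remote directory, a change to app.js would effectively run:
scp app.js deploy@example.com:/var/www/html/app.js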
I figured this out myself. I posted the answer here.
The two questions are kind of similar but not identical.
The way I found is also not the best, since it stores the server password in clear text. So I'll leave the question open, in case someone can come up with a better way to achieve this.

Apache forbidden error when trying to run a Perl script [duplicate]

I have a Perl CGI script that isn't working and I don't know how to start narrowing down the problem. What can I do?
Note: I'm adding the question because I really want to add my very lengthy answer to Stack Overflow. I keep externally linking to it in other answers and it deserves to be here. Don't be shy about editing my answer if you have something to add.
This answer is intended as a general framework for working through
problems with Perl CGI scripts and originally appeared on Perlmonks as Troubleshooting Perl CGI Scripts. It is not a complete guide to every
problem that you may encounter, nor a tutorial on bug squashing. It
is just the culmination of my experience debugging CGI scripts for twenty (plus!) years. This page seems to have had many different homes, and I seem
to forget it exists, so I'm adding it to Stack Overflow. You
can send any comments or suggestions to me at
bdfoy@cpan.org. It's also community wiki, but don't go too nuts. :)
Are you using Perl's built in features to help you find problems?
Turn on warnings to let Perl warn you about questionable parts of your code. You can do this from the command line with the -w switch so you don't have to change any code or add a pragma to every file:
% perl -w program.pl
However, you should force yourself to always clear up questionable code by adding the warnings pragma to all of your files:
use warnings;
If you need more information than the short warning message, use the diagnostics pragma to get more information, or look in the perldiag documentation:
use diagnostics;
Did you output a valid CGI header first?
The server is expecting the first output from a CGI script to be the CGI header. Typically that might be as simple as print "Content-type: text/plain\n\n"; or with CGI.pm and its derivatives, print header(). Some servers are sensitive to error output (on STDERR) showing up before standard output (on STDOUT).
Try sending errors to the browser
Add this line
use CGI::Carp 'fatalsToBrowser';
to your script. This also sends compilation errors to the browser window. Be sure to remove this before moving to a production environment, as the extra information can be a security risk.
What did the error log say?
Servers keep error logs (or they should, at least).
Error output from the server and from your script should
show up there. Find the error log and see what it says.
There isn't a standard place for log files. Look in the
server configuration for their location, or ask the server
admin. You can also use tools such as CGI::Carp
to keep your own log files.
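For example, on a typical Apache setup you can watch the log while re-requesting the page; the path below is a common default and may well differ on your system:
tail -f /var/log/apache2/error.log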
What are the script's permissions?
If you see errors like "Permission denied" or "Method not
implemented", it probably means that your script is not
readable and executable by the web server user. On flavors
of Unix, changing the mode to 755 is recommended:
chmod 755 filename. Never set a mode to 777!
Are you using use strict?
Remember that Perl automatically creates variables when
you first use them. This is a feature, but sometimes can
cause bugs if you mistype a variable name. The pragma
use strict will help you find those sorts of
errors. It's annoying until you get used to it, but your
programming will improve significantly after a while and
you will be free to make different mistakes.
Does the script compile?
You can check for compilation errors by using the -c
switch. Concentrate on the first errors reported. Rinse,
repeat. If you are getting really strange errors, check to
ensure that your script has the right line endings. If you
FTP in binary mode, checkout from CVS, or something else that
does not handle line end translation, the web server may see
your script as one big line. Transfer Perl scripts in ASCII
mode.
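The check itself is a one-liner (the script name is a placeholder); -w adds warnings on top of the compile check:
perl -cw program.pl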
Is the script complaining about insecure dependencies?
If your script complains about insecure dependencies, you
are probably using the -T switch to turn on taint mode, which is
a good thing since it keeps you from passing unchecked data to the shell. If
it is complaining it is doing its job to help us write more secure scripts. Any
data originating from outside of the program (i.e. the environment)
is considered tainted. Environment variables such as PATH and
LD_LIBRARY_PATH
are particularly troublesome. You have to set these to a safe value
or unset them completely, as I recommend. You should be using absolute
paths anyway. If taint checking complains about something else,
make sure that you have untainted the data. See perlsec
man page for details.
What happens when you run it from the command line?
Does the script output what you expect when run from the
command line? Is the header output first, followed by a
blank line? Remember that STDERR may be merged with STDOUT
if you are on a terminal (e.g. an interactive session), and
due to buffering may show up in a jumbled order. Turn on
Perl's autoflush feature by setting $| to a
true value. Typically you might see $|++; in
CGI programs. Once set, every print and write will
immediately go to the output rather than being buffered.
You have to set this for each filehandle. Use select to
change the default filehandle, like so:
$|++; #sets $| for STDOUT
$old_handle = select( STDERR ); #change to STDERR
$|++; #sets $| for STDERR
select( $old_handle ); #change back to STDOUT
Either way, the first thing output should be the CGI header
followed by a blank line.
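A quick way to check that from the shell (same placeholder script name as above; stdin is redirected so the script does not sit waiting for parameters, and STDERR is discarded so it cannot interleave with the header):
perl program.pl < /dev/null 2>/dev/null | head -3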
What happens when you run it from the command line with a CGI-like environment?
The web server environment is usually a lot more limited
than your command line environment, and has extra
information about the request. If your script runs fine
from the command line, you might try simulating a web server
environment. If the problem appears, you have an
environment problem.
Unset or remove these variables
PATH
LD_LIBRARY_PATH
all ORACLE_* variables
Set these variables
REQUEST_METHOD (set to GET, HEAD, or POST as appropriate)
SERVER_PORT (set to 80, usually)
REMOTE_USER (if you are doing protected access stuff)
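Putting that together, a minimal sketch of simulating a GET request from the shell (env - clears the environment; the query string and script name are placeholders):
env - REQUEST_METHOD=GET \
      QUERY_STRING='name=value' \
      SERVER_PORT=80 \
      SCRIPT_NAME=/cgi-bin/program.pl \
      perl program.pl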
Recent versions of CGI.pm ( > 2.75 ) require the -debug flag to
get the old (useful) behavior, so you might have to add it to
your CGI.pm imports.
use CGI qw(-debug);
Are you using die() or warn?
Those functions print to STDERR unless you have redefined
them. They don't output a CGI header, either. You can get
the same functionality with packages such as CGI::Carp.
What happens after you clear the browser cache?
If you think your script is doing the right thing, and
when you perform the request manually you get the right
output, the browser might be the culprit. Clear the cache
and set the cache size to zero while testing. Remember that
some browsers are really stupid and won't actually reload
new content even though you tell them to do so. This is
especially prevalent in cases where the URL path is the
same, but the content changes (e.g. dynamic images).
Is the script where you think it is?
The file system path to a script is not necessarily
directly related to the URL path to the script. Make sure
you have the right directory, even if you have to write a
short test script to test this. Furthermore, are you sure
that you are modifying the correct file? If you don't see
any effect with your changes, you might be modifying a
different file, or uploading a file to the wrong place.
(This is, by the way, my most frequent cause of such trouble
;)
Are you using CGI.pm, or a derivative of it?
If your problem is related to parsing the CGI input and you
aren't using a widely tested module like CGI.pm, CGI::Request,
CGI::Simple or CGI::Lite, use the module and get on with life.
CGI.pm has a cgi-lib.pl compatibility mode which can help you solve input
problems due to older CGI parser implementations.
Did you use absolute paths?
If you are running external commands with
system, back ticks, or other IPC facilities,
you should use an absolute path to the external program.
Not only do you know exactly what you are running, but you
avoid some security problems as well. If you are opening
files for either reading or writing, use an absolute path.
The CGI script may have a different idea about the current
directory than you do. Alternatively, you can do an
explicit chdir() to put you in the right place.
Did you check your return values?
Most Perl functions will tell you if they worked or not
and will set $! on failure. Did you check the
return value and examine $! for error messages? Did you check
$@ if you were using eval?
Which version of Perl are you using?
The latest stable version of Perl is 5.28 (or not, depending on when this was last edited). Are you using an older version? Different versions of Perl may have different ideas of warnings.
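You can check which perl your shell runs (the one the web server uses may differ if the shebang line points elsewhere):
perl -v                      # full version banner
perl -e 'print "$^V\n"'      # just the version string, e.g. v5.28.1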
Which web server are you using?
Different servers may act differently in the same
situation. The same server product may act differently with
different configurations. Include as much of this
information as you can in any request for help.
Did you check the server documentation?
Serious CGI programmers should know as much about the
server as possible - including not only the server features
and behavior, but also the local configuration. The
documentation for your server might not be available to you
if you are using a commercial product. Otherwise, the
documentation should be on your server. If it isn't, look
for it on the web.
Did you search the archives of comp.infosystems.www.authoring.cgi?
This used to be useful, but all the good posters have either died or wandered off.
It's likely that someone has had your problem before,
and that someone (possibly me) has answered it in this
newsgroup. Although this newsgroup has passed its heyday, the collected wisdom from the past can sometimes be useful.
Can you reproduce the problem with a short test script?
In large systems, it may be difficult to track down a bug
since so many things are happening. Try to reproduce the problem
behavior with the shortest possible script. Knowing the problem
is most of the fix. This may certainly be time-consuming, but you
haven't found the problem yet and you're running out of options. :)
Did you decide to go see a movie?
Seriously. Sometimes we can get so wrapped up in the problem that we
develop "perceptual narrowing" (tunnel vision). Taking a break,
getting a cup of coffee, or blasting some bad guys in [Duke Nukem,Quake,Doom,Halo,COD] might give you
the fresh perspective that you need to re-approach the problem.
Have you vocalized the problem?
Seriously again. Sometimes explaining the problem aloud
leads us to our own answers. Talk to the penguin (plush toy) because
your co-workers aren't listening. If you are interested in this
as a serious debugging tool (and I do recommend it if you haven't
found the problem by now), you might also like to read The Psychology
of Computer Programming.
I think CGI::Debug is worth mentioning as well.
Are you using an error handler while you are debugging?
die statements and other fatal run-time and compile-time errors get
printed to STDERR, which can be hard to find and may be conflated with
messages from other web pages at your site. While you're debugging your
script, it's a good idea to get the fatal error messages to display in your
browser somehow.
One way to do this is to call
use CGI::Carp qw(fatalsToBrowser);
at the top of your script. That call will install a $SIG{__DIE__} handler (see perlvar) that displays fatal errors in your browser, prepending them with a valid header if necessary. Another CGI debugging trick that I used before I ever heard of CGI::Carp was to
use eval with the DATA and __END__ facilities on the script to catch compile-time errors:
#!/usr/bin/perl
eval join '', <DATA>;
if ($@) { print "Content-type: text/plain\n\nError in the script:\n$@\n"; }
__DATA__
# ... actual CGI script starts here
This more verbose technique has a slight advantage over CGI::Carp in that it will catch more compile-time errors.
Update: I've never used it, but it looks like CGI::Debug, as Mikael S
suggested, is also a very useful and configurable tool for this purpose.
I wonder how come no-one mentioned the PERLDB_OPTS option called RemotePort; although admittedly, there aren't many working examples on the web (RemotePort isn't even mentioned in perldebug) - and it was kinda problematic for me to come up with this one, but here it goes (it being a Linux example).
To do a proper example, first I needed something that can do a very simple simulation of a CGI web server, preferably through a single command line. After finding Simple command line web server for running cgis. (perlmonks.org), I found the IO::All - A Tiny Web Server to be applicable for this test.
Here, I'll work in the /tmp directory; the CGI script will be /tmp/test.pl (included below). Note that the IO::All server will only serve executable files in the same directory as CGI, so chmod +x test.pl is required here. So, to do the usual CGI test run, I change directory to /tmp in the terminal, and run the one-liner web server there:
$ cd /tmp
$ perl -MIO::All -e 'io(":8080")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
The webserver command will block in the terminal, and will otherwise start the web server locally (on 127.0.0.1 or localhost) - afterwards, I can go to a web browser, and request this address:
http://127.0.0.1:8080/test.pl
... and I should observe the prints made by test.pl being loaded - and shown - in the web browser.
Now, to debug this script with RemotePort, first we need a listener on the network, through which we will interact with the Perl debugger; we can use the command line tool netcat (nc; saw that here: "How to remote-debug Perl?"). So, first run the netcat listener in one terminal - where it will block and wait for connections on port 7234 (which will be our debug port):
$ nc -l 7234
Then, we'd want perl to start in debug mode with RemotePort, when the test.pl has been called (even in CGI mode, through the server). This, in Linux, can be done using the following "shebang wrapper" script - which here also needs to be in /tmp, and must be made executable:
cd /tmp
cat > perldbgcall.sh <<'EOF'
#!/bin/bash
PERLDB_OPTS="RemotePort=localhost:7234" perl -d -e "do '$@'"
EOF
chmod +x perldbgcall.sh
This is kind of a tricky thing - see shell script - How can I use environment variables in my shebang? - Unix & Linux Stack Exchange. But, the trick here seems to be not to fork the perl interpreter which handles test.pl - so once we hit it, we don't exec, but instead we call perl "plainly", and basically "source" our test.pl script using do (see How do I run a Perl script from within a Perl script?).
Now that we have perldbgcall.sh in /tmp - we can change the test.pl file, so that it refers to this executable file on its shebang line (instead of the usual Perl interpreter) - here is /tmp/test.pl modified thus:
#!./perldbgcall.sh
# this is test.pl
use 5.10.1;
use warnings;
use strict;
my $b = '1';
my $a = sub { "hello $b there" };
$b = '2';
print "YEAH " . $a->() . " CMON\n";
$b = '3';
print "CMON " . &$a . " YEAH\n";
$DB::single=1; # BREAKPOINT
$b = '4';
print "STEP " . &$a . " NOW\n";
$b = '5';
print "STEP " . &$a . " AGAIN\n";
Now, both test.pl and its new shebang handler, perldbgcall.sh, are in /tmp; and we have nc listening for debug connections on port 7234 - so we can finally open another terminal window, change directory to /tmp, and run the one-liner webserver (which will listen for web connections on port 8080) there:
cd /tmp
perl -MIO::All -e 'io(":8080")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
After this is done, we can go to our web browser, and request the same address, http://127.0.0.1:8080/test.pl. However, now when the webserver tries to execute the script, it will do so through perldbgcall.sh shebang - which will start perl in remote debugger mode. Thus, the script execution will pause - and so the web browser will lock, waiting for data. We can now switch to the netcat terminal, and we should see the familiar Perl debugger text - however, output through nc:
$ nc -l 7234
Loading DB routines from perl5db.pl version 1.32
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(-e:1): do './test.pl'
DB<1> r
main::(./test.pl:29): $b = '4';
DB<1>
As the snippet shows, we now basically use nc as a "terminal" - so we can type r (and Enter) for "run" - and the script will run up to the breakpoint statement (see also In perl, what is the difference between $DB::single = 1 and 2?), before stopping again (note that at that point, the browser will still lock).
So, now we can, say, step through the rest of test.pl, through the nc terminal:
....
main::(./test.pl:29): $b = '4';
DB<1> n
main::(./test.pl:30): print "STEP " . &$a . " NOW\n";
DB<1> n
main::(./test.pl:31): $b = '5';
DB<1> n
main::(./test.pl:32): print "STEP " . &$a . " AGAIN\n";
DB<1> n
Debugged program terminated. Use q to quit or R to restart,
use o inhibit_exit to avoid stopping after program termination,
h q, h R or h o to get additional info.
DB<1>
... however, also at this point, the browser locks and waits for data. Only after we exit the debugger with q:
DB<1> q
$
... does the browser stop locking - and finally displays the (complete) output of test.pl:
YEAH hello 2 there CMON
CMON hello 3 there YEAH
STEP hello 4 there NOW
STEP hello 5 there AGAIN
Of course, this kind of debugging can be done even without running the web server - however, the neat thing here is that we don't touch the web server at all; we trigger execution "natively" (for CGI) from a web browser - and the only change needed in the CGI script itself is the change of shebang (and of course, the presence of the shebang wrapper script, as an executable file in the same directory).
Well, hope this helps someone - I sure would have loved to have stumbled upon this, instead of writing it myself :)
Cheers!
For me, I use Log::Log4perl. It's quite useful and easy.
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init( { level => $DEBUG, file => ">>d:\\tokyo.log" } );
my $logger = Log::Log4perl::get_logger();
$logger->debug("your log message");
Honestly you can do all the fun stuff above this post.
ALTHOUGH, the simplest and most proactive solution I found was to just "print it".
In example:
(Normal code)
`$somecommand`;
To see if it's doing what I really want it to do:
(Troubleshooting)
print "$somecommand";
It is probably also worth mentioning that Perl will always tell you on what line the error occurs when you execute the Perl script from the command line (an SSH session, for example).
I will usually do this if all else fails. I will SSH into the server and manually execute the Perl script. For example:
% perl myscript.cgi
If there is a problem then Perl will tell you about it. This debugging method does away with any file permission related issues or web browser or web server issues.
You can run the Perl CGI script in a terminal using the command below:
$ perl filename.cgi
It interprets the code and prints the result, including the HTML output.
It will report any errors.

Selectively updating working directory

I'm working on some code with a partner. Our Makefiles differ slightly, courtesy of different build setups. Because of this, so far we have not been tracking this file. However, it would be nice to have at least one of ours tracked. The problem is, when that is done and the other person runs hg update, their copy gets updated and the code won't compile.
Is there a way to track the file, but have it such that you can update the working directory selectively? Or is there some other way I should deal with this problem?
This is a slight variant of the standard "how do I deal with a config file" question. The standard answer in SVN, Mercurial, and Git is: don't track the file, instead track <file>.example. Then each user copies that over to <file> and tweaks it as needed.
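A sketch of that one-time setup per developer (filenames and the ignore pattern are illustrative; Mercurial's default .hgignore syntax is regexp):
cp Makefile.example Makefile
echo '^Makefile$' >> .hgignore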
But Makefiles are a bit smarter than config files: they execute code and can include other files. In which case, it starts making sense to track the Makefile normally and have it include another local file if it's present that overrides the default rules. For instance, the following will work with GNU Make:
# pull in any local user tweaks
-include Makefile.local
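And a sketch of what one developer's untracked Makefile.local might contain (written here from the shell; the variable and path are purely illustrative):
cat > Makefile.local <<'EOF'
# local tweaks for this machine only
CC = /opt/gcc-12/bin/gcc
EOF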
The MQ extension is the best and The Right Way (tm) to do it (not the easiest, but...):
store the common part of the file in the repo, and the individual personalisation in your own MQ patches.
Is it possible to combine your Makefiles? Then there is no chance of losing your different configurations by not storing them in version control.
For example, you could add a conditional statement based on the username. My username is ryan, and this code echoes my name; if it is run on your computer, it will probably echo "not ryan."
all:
	if [ `whoami` = "ryan" ]; then echo "ryan"; else echo "not ryan"; fi

Solution to programmatically generate an export script from a directory hierarchy

I often have the following scenario: in order to reproduce a bug for reporting, I create a small sample project, sometimes a maven multi module project. So there may be a hierarchy of directories and it will usually contain a few small text files. Standard procedure would of course be to create a zip file and send that. But on some mailing lists attachments are not allowed, and so I am looking for a way to automatically create an installation script, that I can post to such mailing lists.
Basically I would be happy with a Unix-only flavor that creates mkdir statements for the directories and >> statements to write the file contents. (Actually, apart from the relative path delimiters, the Windows and Unix versions could probably be identical.)
Does such a tool exist somewhere? If not, I'll probably write one in java, but I'm happy to accept solutions in all kinds of languages.
(The tool could run under windows or unix, but the target platform for the generated scripts should be either unix or configurable)
I think you're looking for shar, which creates a shell archive (shell script that when run produces a given directory hierarchy). It is available on most systems; you can use GNU sharutils if you don't already have it.
Normal usage for packing up a directory tree would be something like:
shar `find somedirectory -print` > archive.sh
If you're using GNU sharutils, and want to create "vanilla" archives which use only the most portable of shell builtins, mkdir, and sed, then you should invoke it as shar -V. You can remove some more extra baggage from the scripts by using -xQ; -x to remove checks for existing files, and -Q to remove verbose output from the archive.
shar -VxQ `find somedir -print` > archive.sh
If you really want something even simpler, here's a dirt-simple version of shar as a shell script. It takes filenames on standard input instead of arguments for simplicity and to be a little more robust.
#!/bin/sh
# Read file and directory names on standard input; write a shell archive
# to standard output.
while read filename
do
    if test -d "$filename"
    then
        echo "mkdir -p '$filename'"
    else
        # quote the delimiter so file contents are not expanded when unpacking
        echo "sed 's/^X//' <<'EOF' > '$filename'"
        sed 's/^/X/' < "$filename"
        echo 'EOF'
    fi
done
Invoke as:
find somedir -print | simpleshar > archive.sh
You still need to invoke sed, as you need some way of ensuring that no lines in the here document begin with the delimiter, which would close the document and cause later lines to be interpreted as part of the script. I can't think of any really good way to solve the quoting problem using only shell builtins, so you will have to rely on sed (which is standard on any Unix-like system, and has been practically forever).
If your problem is filters that hate non-text files:
in times long forgotten, we used uuencode to get past 8-bit-eating relays -
is that a way to get past attachment-eating mailboxes these days?
So why not zip and uuencode?
(or base64, which is its younger cousin)
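A sketch of that route (directory and file names are placeholders; uuencode ships with GNU sharutils on many systems):
zip -r project.zip somedir
uuencode project.zip project.zip > project.txt    # or: base64 project.zip > project.txt
# the recipient runs: uudecode project.txt        (or: base64 -d project.txt > project.zip)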

Mercurial - How do I create a .zip of files changed between two revisions?

I have a personal Mercurial repository tracking some changes I am working on. I'd like to share these changes with a collaborator, however they don't have/can't get Mercurial, so I need to send the entire file set and the collaborator will merge on their end. I am looking for a way to extract the "tip" version of the subset of files that were modified between two revision numbers. Is there a way to easily do this in Mercurial?
Adding a bounty - This is still a pain for us. We often work with internal "customers" who take our source code releases as a .zip, and testing a small fix is easier to distribute as a .zip overlay than as a patch (since we often don't know the state of their files).
The best case scenario is to put the proper pressure on these folks to get Mercurial, but barring that, a patch is probably better than a zipped set of files, since the patch will track deletes and renames. If you still want a zip file, I've written a short script that makes a zip file:
import os, subprocess, sys
from zipfile import ZipFile, ZIP_DEFLATED

def main(revfrom, revto, destination, *args):
    root, err = getoutput("hg root")
    if "no Mercurial repository" in err:
        print "This script must be run from within an Hg repository"
        return
    root = root.strip()
    filelist, _ = getoutput("hg status --rev %s:%s" % (revfrom, revto))
    paths = []
    for line in filelist.split('\n'):
        try:
            (status, path) = line.split(' ', 1)
        except ValueError:
            continue
        if status != 'D':
            paths.append(path)
    if len(paths) < 1:
        print "No changed files could be found."
        return
    z = ZipFile(destination, "w", ZIP_DEFLATED)
    os.chdir(root)
    for path in paths:
        z.write(path)
    z.close()
    print "Done."

def getoutput(cmd):
    p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return p.communicate()

if __name__ == '__main__':
    main(*sys.argv[1:])
The usage would be nameofscript.py fromrevision torevision destination. E.g., nameofscript.py 45 51 c:\updates.zip
Sorry about the poor command line interface, but hey the script only took 25 minutes to write.
Note: this should be run from a working directory within a repository.
Well, hg export $base:tip > patch.diff will produce a standard patch file, readable by most tools around.
In particular, the GNU patch command can apply the whole patch against the previous files. Isn't that enough? I don't see why you would need the set of files: to me, applying a patch seems easier than extracting files from a zip and copying them to the right place. Plus, if your collaborator has local changes, you will overwrite them. You're not using a version control tool to bluntly force the other person to merge the changes manually, right? Let patch deal with that, honestly :)
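On the receiving end, applying it can be as simple as this (run from the root of the collaborator's tree; -p1 strips the a/ and b/ path prefixes that hg export writes):
patch -p1 < patch.diff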
In UNIX this can be done with:
hg status --rev 1 --rev 2 -m -a -n | xargs zip changes.zip
I also contributed an extension; see the hgexportfiles extension on Bitbucket for more info. The export files extension works on a given revision or revision range and creates the set of changed files in a specified directory. It's easy to zip the directory as part of a script.
To my knowledge, there's not a handy tool for this (though a mercurial plugin might be doable). You can export a patch for the fileset, using hg export from:to (where from and to identify revisions.) If you really need the entire files as seen on tip, you could probably hack something together based on the output of hg diff --stat -r from:to , which outputs a list of files with annotations about how many lines were changed, like:
...
src/test/scala/RegressionTest.scala | 25 +++++++++++++----------
src/test/scala/SLDTest.scala | 2 +-
15 files changed, 111 insertions(+), 143 deletions(-)
If none of your files have spaces or special characters in their names, you could use something like:
hg diff -r156:159 --stat | head - --lines=-1 | sed 's!|.*$!!' | xargs zip ../diffed.zip
I'll leave dealing with special characters as an exercise for the reader ;)
Here is a small and ugly bash script that will do the job, at least if you work in a Linux environment. This has absolutely no checks whatsoever and will most likely break when you have moved a file, but it is a start.
Command:
zipChanges.sh REVISION REPOSITORY DESTINATION
zipChanges.sh 3 /home/hg/repo /home/hg/files.tgz
Code:
#!/bin/bash
REV=$1
SRC_REPO=$2
DST_ZIP=$3
cd "$SRC_REPO"
FILES=$(hg status --rev "$REV" "$SRC_REPO" | cut -c3-)
IFS=$'\n'
FILENAMES=""
for line in ${FILES}
do
    FILENAMES=$FILENAMES" \""$SRC_REPO"/"$line"\""
done
CMD="tar czf \"$DST_ZIP\" $FILENAMES"
eval $CMD
I know you already have a few answers to this one, but a friend of mine had a similar issue and I created a simple program in VB.Net to do this for him. Perhaps it could help you too; the program and a copy of the source are at the bottom of the article linked below.
http://www.simianenterprises.co.uk/blog/mercurial-export-changed-files-80.html
Although this does not let you pick an end revision at the moment, it would be very easy to add that using the source; however, you would have to update to the target revision manually before extracting the files.
If needed you could even mod it to create the zip instead of a folder of files (which is also nice and easy to manually zip)
hope this helps either you or anyone else who wants this functionality.
I just contributed an extension here: https://sites.google.com/site/alessandronegrin/pack-mercurial-extension
I ran into this problem recently. My solution:
hg update null
hg debugsetparents (starting revision)
hg update (ending revision)
This will have the effect of deleting all tracked files that were not changed between those two revisions. You will have to remove any untracked files yourself, though. After doing this, the local branch will be in an inconsistent state; you can fix this by running hg debugrebuildstate (or simply deleting the local branch, if you no longer need it).