IPFS: How to add a file to an existing folder?
Given a rather large folder that has already been pushed to the network and then deleted locally, how would a file be added to that folder without re-downloading the entire folder?
You can only do it by using ipns, after downloading the folder again with ipfs get - which should be fast if it's still pinned in your local storage:
(1) first add (i.e. re-add) your folder to ipfs recursively: ipfs add -r /path/to/folder. The second column of the last line of stdout holds the ipfs hash of the parent folder you just added. (The original files are unchanged, so their hashes will be the same too.)
(2) then publish that hash: ipfs name publish /ipfs/<CURRENT_PARENTFOLDER_HASH>. This will return your peer ID, and you can share the link as /ipns/<PEER_ID>; repeat this step (ipfs name publish) whenever the folder contents (and therefore the parent folder hash) change. The ipns object will then always point to the latest version of your folder.
(3) if you plan on sharing a lot, you can create a new keypair for each folder you share: ipfs key gen --type=rsa --size=2048 new-share-key … and then use that key (instead of your default key) to publish (and later republish) that folder: ipfs name publish --key=new-share-key /ipfs/<CURRENT_PARENTFOLDER_HASH>. A combined sketch of all three steps follows.
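Putting the three steps together, a minimal sketch (paths, hashes, and name IDs are placeholders):
ipfs add -r /path/to/folder
# added <FILE_HASH> folder/file1.txt
# added <CURRENT_PARENTFOLDER_HASH> folder    <- second column of the last line
ipfs key gen --type=rsa --size=2048 new-share-key
ipfs name publish --key=new-share-key /ipfs/<CURRENT_PARENTFOLDER_HASH>
# Published to <NAME_ID>: /ipfs/<CURRENT_PARENTFOLDER_HASH>
# share the folder as /ipns/<NAME_ID>; re-run the publish step after every change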
See also the documentation here: https://docs.ipfs.io/reference/cli/#ipfs-name-publish
I'm a bit late to answer this, but I found the two existing answers a bit unclear.
TL;DR: Just commands and minimal info
If you want a thorough, detailed explanation, scroll down to the section starting with The 2 keys to mutability.
If you just need the commands you should run, plus barebones usage info so you know how to adjust them for your use case, then read this TL;DR section.
Use IPNS / DNSLink for references to IPFS objects that can be updated
IPNS
Create a key, back it up if using in production, then use ipfs name publish to change the object that your key currently points to. Access it by using /ipns/<key_id> in commands and URLs instead of /ipfs/<object_id>.
ipfs key gen test
# backup your key if used in production
ipfs key export -o /home/somewhere/safe/test.key test
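# if you use the FUSE /ipns mount, unmount it first (see the detailed section below)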
umount /ipns
ipfs name publish -k test QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs
# Published to k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0: /ipfs/QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs
ipfs ls /ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
# Qme85tx5Wnsjc5pZZs1JGogBNUVM2WThC18ERh6t2YFJSK 37 lorem.txt
ipfs name publish -k test QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8
# Published to k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0: /ipfs/QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8
# Since it's not a folder this time, we use 'ipfs cat' to read
# it to the console, since we know the file was plain text.
ipfs cat /ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
# foo bar foo bar foo foo foo
# bar foo foo bar bar foo bar
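To check which object the name currently points to, you can resolve it (using the key address from above):
ipfs name resolve k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
# /ipfs/QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8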
DNSLink
Set a TXT record on _dnslink above the (sub)domain you want to use as an IPNS reference. Set the value to dnslink=/ipfs/<id> or dnslink=/ipns/<id> depending on whether you're pointing it at an IPFS object or an IPNS address, and replace <id> with the object ID / IPNS address you want to point it to.
Domain: privex.io
(Subdomain) Name: _dnslink.test
Record Type: TXT
Value: dnslink=/ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
TTL (expiry): 120 (seconds)
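Once the record has propagated, you can verify it with a plain DNS query before involving IPFS at all (assumes the standard dig tool):
dig +short TXT _dnslink.test.privex.io
# "dnslink=/ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0"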
Just like normal IPNS, you should now be able to query it with IPFS CLI tools or IPFS gateways, by using /ipns/<your_domain> instead of /ipfs/<object_id>.
If we now cat /ipns/test.privex.io we can see it's working properly, pointing to the foo bar text file (no wrapped folder).
ipfs#privex ~ $ ipfs cat /ipns/test.privex.io
foo bar foo bar foo foo foo
bar foo foo bar bar foo bar
Add an existing IPFS object ID to another IPFS object (wrapped folder)
Using the following command, you can add an individual IPFS file or an entire wrapped folder to an existing object using their respective object IDs. The command will output a new object ID, referencing a new object that contains both the original folder data and the new data you wanted to add.
The syntax for the command is: ipfs object patch add-link [object-to-add-to] [name-of-newly-added-file-or-folder] [object-to-inject]
ipfs#privex:~$ ipfs object patch add-link QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3 hello/foo.txt QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8
QmaWoYZnSXnKqzskrBwtmZPE74qKe4AF5YfwaY83nzeCCL
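You can confirm the result with ipfs ls - the output below matches the detailed walk-through later in this answer:
ipfs ls QmaWoYZnSXnKqzskrBwtmZPE74qKe4AF5YfwaY83nzeCCL/hello
# QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8 57 foo.txt
# Qme85tx5Wnsjc5pZZs1JGogBNUVM2WThC18ERh6t2YFJSK 37 lorem.txt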
The 2 keys to mutability
1. Having an IPFS address that stays the same despite the content changing
Unfortunately, IPFS object IDs (the ones starting with Q) are immutable, meaning their contents cannot be altered in the future without getting a new ID, due to the fact that an object ID is effectively a hash (usually a form of SHA-256).
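You can see the content addressing in action by adding the same bytes twice (a sketch; <HASH_A> is a placeholder):
echo "hello world" > a.txt
ipfs add -q a.txt
# <HASH_A>
cp a.txt b.txt
ipfs add -q b.txt
# <HASH_A> again - the ID depends only on the content, not the filename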
HOWEVER, both IPNS and DNSLink have a solution for this.
IPNS is "Interplantary Name System", which is strongly integrated into IPFS. It allows you to generate an address (public key) and a private key, similar to how Bitcoin and many other cryptocurrencies work. Using your private key, you can point your IPNS
First, you'll want to generate a key (note: you'll need a key per individual IPNS address you want)
ipfs#privex:~$ ipfs key gen test
k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
If you plan to use your IPNS address for something other than testing, you should export the private key and keep a copy of it somewhere safe. Note that the private key is a binary file, so if you want to store it somewhere that expects plain text, you can convert it into base64 like so: base64 test.key
ipfs key export -o /home/somewhere/safe/test.key test
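If you need a text-safe copy, or want to restore the key on another node later, something like this should work (a sketch; ipfs key import is available in recent go-ipfs versions):
base64 /home/somewhere/safe/test.key > /home/somewhere/safe/test.key.b64
# later, on the other node:
base64 -d /home/somewhere/safe/test.key.b64 > test.key
ipfs key import test test.key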
Next we'll publish to the IPNS address an example IPFS folder containing one file (lorem.txt) with a few lines of lorem ipsum text. If you use the FUSE /ipns mount, you may need to unmount it before you're able to publish via IPNS:
ipfs#privex:~$ umount /ipns
ipfs#privex:~$ ipfs name publish -k test QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs
Published to k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0: /ipfs/QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs
ipfs#privex:~$ ipfs ls /ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
Qme85tx5Wnsjc5pZZs1JGogBNUVM2WThC18ERh6t2YFJSK 37 lorem.txt
That's just one example though - to prove that the IPNS address can actually be updated with different content, in this next example, I'll publish an individual text file directly to the IPNS address (not a wrapped folder).
# Publish the IPFS object 'QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8'
# to our existing named key 'test'
ipfs#privex:~$ ipfs name publish -k test QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8
# Since it's not a folder this time, 'ipfs ls' won't return anything.
# So instead, we use 'ipfs cat' to read it to the console, since we
# know the file was plain text.
ipfs#privex:~$ ipfs cat /ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
foo bar foo bar foo foo foo
bar foo foo bar bar foo bar
DNSLink
DNSLink is a part of IPNS that allows for human-readable IPNS addresses through the standard domain system (e.g. example.com).
Since the IPNS section was rather long, I'll keep this one short and sweet. If you want to know more about DNSLink, please visit dnslink.io.
First, you'll need a domain - either one you already own, or one acquired from a registrar such as Namecheap.
Go to your domain record management panel - if you use Cloudflare, that is your domain management panel. Add a TXT record for _dnslink.yourdomain.com, or, if you want to use a subdomain, _dnslink.mysub.yourdomain.com (on most registrars, you only enter the part before the domain you're managing, i.e. _dnslink or _dnslink.mysub).
In the value box, enter dnslink= followed by either /ipfs/ or /ipns/ depending on whether you want to use an IPFS object ID or an IPNS name address, then append your object ID / IPNS name.
For example, if you were pointing your domain to the IPNS address in the earlier example, you'd enter:
dnslink=/ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
Or if you wanted to point it to the example folder containing lorem.txt with a few lines of lorem ipsum, it would be
dnslink=/ipfs/QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs
For example purposes, here's a summary of how I set up test.privex.io:
Domain: privex.io
(Subdomain) Name: _dnslink.test
Record Type: TXT
Value: dnslink=/ipns/k51qzi5uqu5dkqxbxeulacqmz5ekmopr3nsh9zmgve1dji0dccdy86uqyhq1m0
TTL (expiry): 120 (seconds)
(note: most people are fine with "auto" TTL, or the somewhat standard 600 TTL. If you intend to change the DNSLink value regularly, or you're experimenting and likely updating it constantly, you may want a low TTL of 60 or even 30)
After setting it up, with the IPNS address still pointing at the raw foo bar text data, I used ipfs cat to read the data that the domain pointed to:
ipfs#privex:~$ ipfs cat /ipns/test.privex.io
foo bar foo bar foo foo foo
bar foo foo bar bar foo bar
2. Add existing IPFS objects to your object, without having to download/organise the object being added.
First we create the IPFS object - a wrapped folder containing hello/lorem.txt - which has the object ID QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3
ipfs#privex:~$ mkdir hello
ipfs#privex:~$ echo -e "lorem ipsum dolor\nlorem ipsum dolor\n" > hello/lorem.txt
ipfs#privex:~$ ipfs add -p -r -w hello
added Qme85tx5Wnsjc5pZZs1JGogBNUVM2WThC18ERh6t2YFJSK hello/lorem.txt
added QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs hello
added QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3
37 B / 37 B [=======================================================================] 100.00%
ipfs#privex:~$ ipfs ls QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3
QmWRsWoZjiandZUXLyczXSoWi84hXNHvBQ49BiQx9hPdjs - hello/
ipfs#privex:~$ ipfs ls QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3/hello
Qme85tx5Wnsjc5pZZs1JGogBNUVM2WThC18ERh6t2YFJSK 37 lorem.txt
Next, for the sake of creating an example external object ID that isn't part of the original wrapped folder, I created foo.txt containing a couple of lines of random foo bar text, and uploaded it to IPFS on its own. Its object ID is QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8
ipfs#privex:~$ echo -e "foo bar foo bar foo foo foo\nbar foo foo bar bar foo bar\n" > foo.txt
ipfs#privex:~$ ipfs add foo.txt
added QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8 foo.txt
57 B / 57 B [======================================================================] 100.00%
Finally, we use ipfs object patch add-link to add the foo.txt object (QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8) created earlier inside the hello/ folder of the original wrapped folder (QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3).
The syntax for the command is: ipfs object patch add-link [object-to-add-to] [name-of-newly-added-file-or-folder] [object-to-inject]
ipfs#privex:~$ ipfs object patch add-link QmXCfnzXHThHwaTvSSAKeErxK48XkyVoL6ZNEhkpKmZyW3 hello/foo.txt QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8
QmaWoYZnSXnKqzskrBwtmZPE74qKe4AF5YfwaY83nzeCCL
It outputs a new object ID QmaWoYZnSXnKqzskrBwtmZPE74qKe4AF5YfwaY83nzeCCL which is the ID of the newly created object that contains both hello/lorem.txt from the original, and hello/foo.txt which was injected later on.
NOTE: This command ALSO works when adding entire wrapped folders to another wrapped folder; however, be careful to avoid double nesting. E.g. you have Qxxxx/hello/world and Qyyyy/lorem/ipsum - if you add Qyyyy to Qxxxx specifying the name lorem, it will be added as Qzzzz/lorem/lorem/ipsum. A sketch of how to avoid this follows.
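One way to avoid the double nesting is to link the inner folder rather than the outer wrapper - a sketch using the placeholder IDs from the note above, with ipfs resolve looking up the inner folder's own object ID:
# find the object ID of the inner 'lorem' folder inside the wrapper Qyyyy
ipfs resolve -r /ipfs/Qyyyy/lorem
# /ipfs/Qllll   (hypothetical inner ID)
# link that inner ID, so the result is Qzzzz/lorem/ipsum rather than Qzzzz/lorem/lorem/ipsum
ipfs object patch add-link Qxxxx lorem Qllll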
If we now run ipfs ls on the new object ID, we can see that the hello/ sub-folder contains BOTH foo.txt and lorem.txt - confirming that foo.txt was successfully injected into the new copy, without needing to download both the original and foo.txt and organise them in a folder before re-uploading.
ipfs#privex:~$ ipfs ls QmaWoYZnSXnKqzskrBwtmZPE74qKe4AF5YfwaY83nzeCCL
QmbU3BwdMarL8n6KCzVdYqMh6HEjCv6pLJQZhoVGWZ5bWW - hello/
ipfs#privex:~$ ipfs ls QmaWoYZnSXnKqzskrBwtmZPE74qKe4AF5YfwaY83nzeCCL/hello
QmaDDLFL3fM4sQkQfV82LdNqtNnyaeAmgC46Qc7FDQdkq8 57 foo.txt
Qme85tx5Wnsjc5pZZs1JGogBNUVM2WThC18ERh6t2YFJSK 37 lorem.txt
Summary
As explained in the first section, IPFS object IDs are immutable, thus while it's possible to merge existing objects on IPFS, it still results in a new object ID.
BUT, by using IPNS key addresses and/or DNSLink, you can have a mutable (editable) reference that points to any IPFS object and can be updated to point at a new object ID on demand - e.g. whenever you update the contents of an existing object, or whenever you simply decide you want your IPNS key/domain to point at something completely different :)
This should be easy with the files API. Assuming you have already added the new file to ipfs and obtained its hash, try:
ipfs files cp /ipfs/QmExistingLargeFolderHash /folder-to-modify
ipfs files cp /ipfs/QmNewFileHash /folder-to-modify/new-file
This of course does not add a file to the existing folder (because folders and files are immutable); it just creates a new version of the folder with the new file added. Hence, it will have a new hash:
ipfs files stat /folder-to-modify
The files API does not pin the files that are referenced or retrieve any subfolders unless necessary, so this can be done on any node in the network without incurring lots of traffic.
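To give the new version a stable address, you could then publish its hash under an IPNS name (a sketch; assumes your default key, and <NEW_FOLDER_HASH> is whatever ipfs files stat printed):
ipfs files stat --hash /folder-to-modify
# <NEW_FOLDER_HASH>
ipfs name publish /ipfs/<NEW_FOLDER_HASH>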
[Edit]
A while later, I learned that there are a few more things you can do:
Instead of
ipfs files cp /ipfs/QmNewFileHash /folder-to-modify/new-file
you can use ipfs files write -te if you haven't added the file to ipfs yet (sketched after this list).
You can enable write features of the HTTP API to use PUT requests to obtain hashes of new versions of a folder. See this blogpost.
You can mount ipns via fuse and write to …/ipns/local.
And probably best: you can use ipfs object patch add-link /ipfs/QmExistingLargeFolderHash new-file /ipfs/QmNewFileHash to do it in one step.
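A sketch of the files write variant mentioned above (hypothetical paths; -e creates the file if missing and -t truncates it first):
ipfs files cp /ipfs/QmExistingLargeFolderHash /folder-to-modify
cat new-file.txt | ipfs files write -te /folder-to-modify/new-file
ipfs files stat --hash /folder-to-modify   # hash of the new folder version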