The fswatch documentation shows $ fswatch -0 [opts] [paths] | xargs -0 -n 1 -I {} [command], but I don't really understand how I'm supposed to pass multiple paths to that. I'm watching two paths, lib and test, and I've tried:
fswatch -r lib,test, fswatch -r lib test, and finally fswatch -r [lib test]. How do I watch multiple paths with fswatch at the same time?
Separate the paths with a space (i.e. ' ').
For example:
fswatch -r "path/one" "path/two"
Note that fswatch treats every positional argument as a path to watch, so the command to run on each event belongs after the pipe, not on the fswatch line.
The only possibility I found is to execute the fswatch command multiple times:
do_backup() {
  # what you want to do
  rsync -ahhvzPR --delete "$FILE" "$BACKUP_DIR"
}
fswatch -r lib | while read -r FILE; do
  do_backup
done &
fswatch -r test | while read -r FILE; do
  do_backup
done &
The trailing & starts both watch processes in the background, detached from the current shell.
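To stop the watchers later, standard shell job control works; a minimal sketch (the job numbers are examples, check the jobs output):
jobs        # list the background watchers
kill %1 %2  # stop them by job number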
Note: separating the paths with "," (as in fswatch -r lib,test) does not work; fswatch only accepts space-separated paths, as described above.
I have a large project whose unittest binaries run on other machines, so the gcda files are generated there. I then download them to the local machine, but into different directories. Each of those directories also contains the source code.
For example: dir gcda1/src/{*.gcda, *.gcno, *.h, *.cpp}..., dir gcda2/src/{*.gcda, *.gcno, *.h, *.cpp}....
Because the project is very large, I run multiple lcov processes at the same time to generate the info files and save time, and then merge these info files.
The problem is that when I merge these info files, the per-directory prefixes are kept, for example:
gcda1/src/unittest1.cpp
gcda2/src/unittest1.cpp
I want this:
src/unittest1.cpp
# the second src/unittest1.cpp entry is expected to merge with the one above
The commands I use:
$ cd gcda1
$ lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o gcda1.info
$ cd ../gcda2
$ lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o gcda2.info
$ cd ..
$ lcov -a gcda1/gcda1.info -a gcda2/gcda2.info -o gcda.info
$ genhtml gcda.info -o output
The root dir contains the source code.
Description
Well, I finally found a method to solve this problem.
The info files lcov generates are plain text files, so we can edit them directly.
If you open one, you will see that every source-file entry starts with SF:, like below:
SF:/path/to/your/source/code.h
SF:/path/to/your/source/code.cpp
...
Problem
In my case, these entries are:
// file gcda1.info
SF:/path/to/root_dir/gcda1/src/unittest1.cpp
// file gcda2.info
SF:/path/to/root_dir/gcda2/src/unittest1.cpp
After the lcov merge, the result is:
// file gcda.info
SF:/path/to/root_dir/gcda1/src/unittest1.cpp
SF:/path/to/root_dir/gcda2/src/unittest1.cpp
But I expect this:
// file gcda.info
SF:/path/to/root_dir/src/unittest1.cpp
Method
My method is to edit the info files directly.
First, edit gcda1.info and gcda2.info: change /path/to/root_dir/gcda1/src/unittest1.cpp and /path/to/root_dir/gcda2/src/unittest1.cpp both to /path/to/root_dir/src/unittest1.cpp.
Then merge them like below and generate html report:
$ lcov -a gcda1.info -a gcda2.info -o gcda.info
$ genhtml gcda.info -o output
In a large project you cannot edit every info file by hand, but we can use sed to do it for us. For example, to strip the prefixes from a merged file gcda_tmp.info:
$ sed -E 's/(^SF:.*\/)gcda[0-9]+\/(.*)/\1\2/' gcda_tmp.info > gcda.info
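Putting the whole flow together, a minimal sketch for the two-directory layout above, applying the rewrite to each per-directory info file before merging (directory names and lcov flags are taken from the question; GNU sed is assumed for -i):
for d in gcda1 gcda2; do
  (
    cd "$d" || exit 1
    lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o "$d.info"
    # strip the per-directory prefix from every SF: entry
    sed -E -i "s/(^SF:.*\/)$d\/(.*)/\1\2/" "$d.info"
  )
done
lcov -a gcda1/gcda1.info -a gcda2/gcda2.info -o gcda.info
genhtml gcda.info -o output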
I've added a bunch of files via ipfs add. How do I unpin and remove all of these at once?
to unpin all added content:
ipfs pin ls --type recursive | cut -d' ' -f1 | xargs -n1 ipfs pin rm
then optionally run storage garbage collection to actually remove things:
ipfs repo gc
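If your ipfs version supports the --quiet flag on pin ls, which prints just the pin hashes, the cut step can be dropped; a hedged variant:
ipfs pin ls --type recursive --quiet | xargs -n1 ipfs pin rm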
In addition to jclay's answer, you might also want to delete everything on MFS:
ipfs files ls / | while read f; do ipfs files rm -r "/$f"; done
(Obligatory warning that this won't work if paths contain newlines.)
Based on Daniel's answer, here's how to do it in a Docker container
docker exec ipfs_container_name ipfs pin ls --type recursive | cut -d' ' -f1 | xargs -n1 docker exec ipfs_container_name ipfs pin rm
Replace ipfs_container_name with the name of your Docker container.
Annotate gets you as far as seeing the most recent change to that line; if that change is a merge, I have no choice but to trawl through the revision history and find the next time it was modified.
I've also tried hg grep -l '[contents of line]' but:
a) I can't figure out how to target specific files (so it takes forever in a moderately sized repo)
b) It seems to only return the last revision number
The following link is vaguely similar -
How to find all changesets where a reference to a function was modified?
Use Tortoisehg:
View -> Manifest
Right-click on the file you are interested in and click "File History"
Click "annotate with revision numbers"
The top panel allows you to quickly see the history of the file in terms of commits, the bottom panel shows the annotated file based upon the selected version in the top panel.
Using arielf's response without an extra script:
UNIX:
command:
hg log --template '{rev}\n' <FILE> |
xargs -I # hg grep <PATTERN> -r # <FILE>
You can use this to add an alias to your configuration file (.hgrc):
[alias]
_grep_line_in_history = ! $HG log --template '{rev}\n' $2 |
xargs -I # hg grep '$1' -r # $2
WINDOWS:
command:
FOR /F "usebackq" %i IN (`hg log --template "{rev}\n" <FILE>`) DO
#(echo %i & hg grep <PATTERN> -r %i <FILE>)
alias:
[alias]
_grep_line_in_history = ! FOR /F "usebackq" %i IN
(`%HG% log --template "{rev}\n" "$2"`) DO #(echo %i & %HG% grep "$1" -r %i "$2")
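With the alias defined, usage looks something like this (the pattern and file path are example values):
hg _grep_line_in_history 'some pattern' path/to/file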
I think this requires a bit of (two-step) programming.
The following shell script works pretty well for me. It prints both the revisions and the matching lines. If you only want the list of revisions, you can add a step that strips the matching text and keeps only the revision field, possibly piped through 'sort -u' (see the sketch after the notes below):
#!/bin/bash
#
# script to grep for a pattern in all revisions of a file
# Usage: scriptname 'pattern' filepath
#
function fatal() {
  echo "$@" 1>&2
  exit 1
}
function usage() {
  echo "$@" 1>&2
  fatal "Usage: $0 pattern file"
}
case "$1" in
'') usage 'missing pattern to search for' ;;
*) Pat="$1" ;;
esac
if [ "$2" != '' ]; then
File="$2"
else
usage 'must pass file as 2nd argument'
fi
# -- generate list of revisions (change-sets) involving $File
for rev in $(hg log --template '{rev}\n' "$File"); do
  # -- grep the wanted pattern in that particular revision
  hg grep "$Pat" -r "$rev" "$File"
done
Notes:
not fully foolproof (e.g. quotes in the pattern)
I don't check for file existence, in order to support renamed/removed files as well
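As mentioned above, a sketch of the revisions-only variant. This assumes hg grep's default file:revision:match output format, so cutting on ':' keeps just the revision field:
scriptname 'pattern' filepath | cut -d: -f2 | sort -un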
I am new to Mercurial, and after a cleanup of the image folder in my project I have a ton of files showing with ! in 'hg status'. I can run hg forget on each one, but there must be an easier way.
So how can I tell Mercurial to forget about all the removed (status = !) files in a folder?
If you're also okay with adding any files that exist and aren't ignored then:
hg addremove
would be a popular way to do that.
With fileset (Mercurial 1.9):
hg forget "set:deleted()"
In general, on Linux or Mac:
hg status -dn | while read file ; do hg forget "$file" ; done
Or, if your shell allows it, if there are not too many files, and if the filenames do not contain spaces or special characters, then:
hg forget $(hg st -dn)
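If your Mercurial supports status --print0, a variant that also survives spaces and special characters in filenames:
hg status -dn0 | xargs -0 hg forget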
You can try:
hg forget -I '*'
in order to include all files in your forget command.
By using the -d flag for status, which displays missing files:
for file in $(hg status -d | cut -d " " -f 2); do echo hg forget $file; done
Run this in the root of your repo, and if you're happy with the results, remove the echo
This has the bonus over the accepted answer of not doing any additional work, e.g. adding a bunch of untracked files.
Shorter: instead of
for file in $(hg status -d | cut -d " " -f 2); do echo hg forget $file; done
use
hg status -d | cut -d " " -f 2 | xargs echo hg forget # test case
hg status -d | cut -d " " -f 2 | xargs hg forget # real work
How can I extract a .depot file on HP-UX?
The .depot file is a tarred directory structure, with some of the files gzipped under their original names.
Note that my environment is quite limited - I can't have root, I don't have swinstall.
http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1259826031876+28353475&threadId=1143807
At best, the solution should work on Linux too.
I have tried untarring it and then running gunzip -f -r -d -v --suffix= .
But the problem is that the gzipped files have no suffix, so in the end gzip deletes them.
It was relatively easy:
for f in $(find . -type f); do
  # only decompress files that really are gzipped; others keep their names
  if gzip -t "$f" 2>/dev/null; then
    mv "$f" "$f.gz"
    gunzip "$f.gz"
  fi
done
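For completeness: the loop above assumes the archive has already been unpacked. Assuming the .depot really is a plain tar archive, as described in the question (product.depot is a placeholder file name):
mkdir depot_contents
cd depot_contents
tar -xf ../product.depot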