In my .bash_profile, I have
cd(){ builtin cd $1 ls -F }
to make cd also run ls -F after changing directory. This seems to work for the most part, but when I want to cd into a directory whose name contains spaces, it doesn't work: the two words are treated as two separate arguments. To try fixing this, I also tried:
cd "word1 word2"
cd "word1\ word2"
dir=$"word1 word2"
cd "$dir"
and none of these have worked. Do I need to modify my cd function? Or am I just overlooking a clever input method?
This does what (I think) you want:
cd() {
    builtin cd "${1-$(echo ~)}" && ls -F
}
Note a few things:
The variable is quoted, so that cd 'some dir with spaces' will work.
There's a && between cd and ls, so that the latter runs only if the former succeeds. (They could instead be on separate lines, but then ls would run even if cd failed, and cd's error message would be 'lost' among ls's output. Simply putting the two commands next to each other on one line, as in the question, is a syntax error.)
$1 defaults to ~ so that just cd works as expected.
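The quoting and default-expansion behavior can be checked in isolation; a minimal sketch using positional parameters directly rather than the function itself:

```shell
# "${1-default}" substitutes the default only when $1 is unset
set --                       # clear positional parameters
printf '%s\n' "${1-$HOME}"   # prints your home directory
set -- 'some dir with spaces'
printf '%s\n' "${1-$HOME}"   # prints: some dir with spaces
```

Because the expansion is double-quoted, the result stays a single word even when it contains spaces.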
For example, let's say you have a directory dir/ with an arbitrary number of subdirectories including dir/subdir/, and you want to mount dir/ to a podman container with every subdirectory also mounted except dir/subdir/.
Is this possible in podman? If so, is it possible to do this purely with the arguments of a podman run command?
It is not possible; the entire folder will be available inside the container.
You can work around this with permissions, ACLs, or even symbolic links. In the last case, create a second folder containing links that point only to the folders you want to be available inside the container.
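A minimal sketch of the symbolic-link idea (directory names are made up for illustration):

```shell
# Hypothetical layout: expose everything except dir/subdir
mkdir -p dir/subdir dir/subdir2 exported
ln -sf "$PWD/dir/subdir2" exported/subdir2   # link only what should be visible
ls exported
# → subdir2
```

One caveat: the link targets are absolute host paths, so they must also resolve at the same paths inside the container for the links to work there.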
Use an extra bind-mount to hide the directory dir/subdir/
In other words, first bind-mount dir/ and then bind-mount an empty directory over dir/subdir to hide its contents.
$ mkdir dir
$ mkdir dir/subdir
$ mkdir dir/subdir2
$ mkdir emptydir
$ touch dir/subdir/file1.txt
$ touch dir/subdir2/file2.txt
$ podman pull -q docker.io/library/fedora
b2aa39c304c27b96c1fef0c06bee651ac9241d49c4fe34381cab8453f9a89c7d
$ podman run --rm \
-v ./dir:/dir:Z \
-v ./emptydir:/dir/subdir:Z \
docker.io/library/fedora find /dir
/dir
/dir/subdir
/dir/subdir2
/dir/subdir2/file2.txt
In the output from the command find /dir there is no file /dir/subdir/file1.txt.
I'm having trouble searching for answers because I don't know the right terminology to use; so far all my searching has failed me. I have the following setup:
prependix.html contains the <html> tag, <head>...</head> and the various links to CSS files, etc.
appendix.html contains the closing tags for most things, i.e. </html> etc.
Then I have a list of files in a content/ directory, with things like foo.html which is basically just the <body>...</body> snippet that I generate from emacs org mode via pandoc.
Here is the makefile I currently use:
CXX=cat
CXXPRE=templates/prependix.html
CXXPOST=templates/appendix.html
TARGETS=staging/index.html staging/foo.html staging/bar.html staging/baz.html staging/quux.html
default: $(TARGETS)
stage1/%.html: content/%.org
	mkdir -p stage1/
	pandoc $< -o $@

staging/%.html: stage1/%.html
	mkdir -p staging/
	$(CXX) $(CXXPRE) $< $(CXXPOST) > $@

clean:
	rm -rf staging/
	rm -rf stage1/

deploy:
	mkdir -p staging/css
	cp content/css/styles.css staging/css/
	mkdir -p staging/img
	cp content/img/*.png staging/img/
	cp content/img/*.jpg staging/img/
	rsync -a --delete staging/ $(URI):/home/me/www/mysite.tld
That makefile works, but what I realized is that I can't specify per-file <meta> tags, and I would like to do so. So I will split the prependix into two and provide e.g. foo.meta, bar.meta, etc., which will contain just the <meta> tags. If I were doing a single concatenation on the command line I would perform it as such:
$ cat templates/prependix.html foo.meta templates/prependix2.html foo.html templates/appendix.html > final-product.html
As a make rule, something like:
$(CXX) $(CXXPRE) [somehow specify the .meta file here] $(CXXPRE2) $< $(CXXPOST) > $@
How can I do this? Is it even possible?
The code below solves the issue.
Put all your .meta files in the stage1 folder (for example stage1/foo.meta) and use stage1/$*.meta in the recipe; for the target staging/foo.html this expands to stage1/foo.meta.
CXX=cat
CXXPRE=templates/prependix.html
CXXPOST=templates/appendix.html
TARGETS=staging/index.html staging/foo.html staging/bar.html staging/baz.html staging/quux.html
default: $(TARGETS)
stage1/%.html: content/%.org
	mkdir -p stage1/
	pandoc $< -o $@

# stage1/%.html must be listed first so that $< refers to the page body;
# the dependency on stage1/%.meta can be removed if you don't need it
staging/%.html: stage1/%.html stage1/%.meta
	mkdir -p staging/
	$(CXX) $(CXXPRE) stage1/$*.meta $< $(CXXPOST) > $@

clean:
	rm -rf staging/
	rm -rf stage1/

deploy:
	mkdir -p staging/css
	cp content/css/styles.css staging/css/
	mkdir -p staging/img
	cp content/img/*.png staging/img/
	cp content/img/*.jpg staging/img/
	rsync -a --delete staging/ $(URI):/home/me/www/mysite.tld
Please note the above is just an example, to give you a readable and working solution.
Also, based on Raspy's comments, you can replace the explicit stage1/$*.meta with a $(word n, $^) reference to the corresponding prerequisite; both give the same result, but the positional form avoids discrepancies between the dependency list and the recipe.
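For reference, the five-part concatenation the rule performs can be tried by hand with throwaway stand-ins for the real templates (the file contents below are invented for the demo):

```shell
# Stand-in files; the real ones live under templates/ and stage1/
printf '<html><head>\n'    > prependix.html
printf '<meta name="x">\n' > foo.meta
printf '</head><body>\n'   > prependix2.html
printf '<p>content</p>\n'  > foo.html
printf '</body></html>\n'  > appendix.html
cat prependix.html foo.meta prependix2.html foo.html appendix.html > final-product.html
```

The ordering of the arguments to cat is exactly the ordering of the fragments in the finished page, which is all the make recipe has to reproduce.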
I have a large project whose unittest binaries run on other machines, so the gcda files are generated there. I then download them to different directories on my local machine; each directory also contains the source code.
For example: dir gcda1/src/{*.gcda, *.gcno, *.h, *.cpp}..., dir gcda2/src/{*.gcda, *.gcno, *.h, *.cpp}....
Because the project is very large, I have to run multiple lcov processes at the same time to generate the info files, and then merge them.
The problem is that the merged file keeps the per-directory paths, for example:
gcda1/src/unittest1.cpp
gcda2/src/unittest1.cpp
I want this:
src/unittest1.cpp
#src/unittest1.cpp # this is expected to merge with above
The commands I use:
$ cd gcda1
$ lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o gcda1.info
$ cd ../gcda2
$ lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o gcda2.info
$ cd ..
$ lcov -a gcda1/gcda1.info -a gcda2/gcda2.info -o gcda.info
$ genhtml gcda.info -o output
The root dir contains the source code.
Description
I finally found a method to solve this problem.
The info files lcov generates are plain text files, so we can edit them directly.
Once you open these files, you will see that every source-file record starts with SF, like below:
SF:/path/to/your/source/code.h
SF:/path/to/your/source/code.cpp
...
Problem
In my problem, these will be:
// file gcda1.info
SF:/path/to/root_dir/gcda1/src/unittest1.cpp
// file gcda2.info
SF:/path/to/root_dir/gcda2/src/unittest1.cpp
And, after lcov merge, it will be:
// file gcda.info
SF:/path/to/root_dir/gcda1/src/unittest1.cpp
SF:/path/to/root_dir/gcda2/src/unittest1.cpp
But, I expect this:
// file gcda.info
SF:/path/to/root_dir/src/unittest1.cpp
Method
My method to solve the problem is editing the info files directly.
First, edit gcda1.info and gcda2.info, change /path/to/root_dir/gcda1/src/unittest1.cpp to /path/to/root_dir/src/unittest1.cpp, and /path/to/root_dir/gcda2/src/unittest1.cpp to /path/to/root_dir/src/unittest1.cpp.
Then merge them like below and generate html report:
$ lcov -a gcda1.info -a gcda2.info -o gcda.info
$ genhtml gcda.info -o output
In a large project we cannot edit each info file by hand; that quickly becomes unmanageable.
We can use sed to help us. (Note that in basic regular expressions + is not a repetition operator, so [0-9][0-9]* is used instead.) Like below:
$ sed 's/\(^SF:.*\/\)gcda[0-9][0-9]*\/\(.*\)/\1\2/' gcda_tmp.info > gcda.info
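The rewrite can be checked on a one-line sample before running it over real info files:

```shell
# Sample SF record as lcov would produce it under gcda1/
printf 'SF:/path/to/root_dir/gcda1/src/unittest1.cpp\n' > gcda_tmp.info
sed 's/\(^SF:.*\/\)gcda[0-9][0-9]*\/\(.*\)/\1\2/' gcda_tmp.info
# → SF:/path/to/root_dir/src/unittest1.cpp
```

The first capture group keeps everything up to the gcdaN/ component, which is dropped, and the second keeps the rest of the path.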
I'm looking for an elegant way to populate Mercurial with different versions of the same program, from 50 old versions that have numbered filenames:
prog1.py, prog2.py ... prog50.py
For each version I'd like to retain the dates and original filename, perhaps in the change comment.
I'm new to Mercurial and have searched without finding an answer.
hg commit has -d to specify a date and -m to specify a comment.
hg init
copy prog1.py prog.py /y
hg ci -A prog.py -d 1/1/2015 -m prog1.py
copy prog2.py prog.py /y
hg ci -A prog.py -d 1/2/2015 -m prog2.py
# repeat as needed
One can of course automate the whole thing in a small bash script:
You obtain the modification date of a file via stat -c %y ${FILENAME}. Thus assuming that the files are ordered:
hg init
for i in /path/to/old/versions/*.py; do
    cp "$i" prog.py
    hg ci -A prog.py -d "$(stat -c %y "$i")" -m "Import $i"
done
Mind, lexicographic filename sorting gives prog1, prog11, prog12, ..., prog19, prog2, prog21, .... You might want to rename prog1 to prog01 etc. to ensure natural ordering, or sort the filenames (here by modification time, oldest first) before processing them, e.g.:
hg init
for i in $(ls -tr /path/to/old/versions/*.py); do
    cp "$i" prog.py
    hg ci -A prog.py -d "$(stat -c %y "$i")" -m "Import $i"
done
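If renaming is not an option, GNU sort -V (version sort) also puts numbered names in numeric order; a quick check, assuming GNU coreutils:

```shell
# Version sort compares the digit runs numerically, not character by character
printf '%s\n' prog1.py prog11.py prog2.py | sort -V
# → prog1.py
#   prog2.py
#   prog11.py
```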
What's the best way to check in a script whether there are uncommitted changes in Mercurial's working tree (the way I would with git diff --quiet in Git)?
In mercurial 1.4 and later you can use the summary command, which gives output like this when changes exist:
$ hg summary
parent: 0:ad218537bdef tip
commited
branch: default
commit: 1 modified
update: (current)
and this post-commit:
$ hg summary
parent: 1:ef93d692f646 tip
sfsdf
branch: default
commit: (clean)
update: (current)
Alternately, you could install the prompt extension and do something like this:
$ hg prompt '{status}'
which will output a ! or ? or nothing as appropriate.
Both of those, of course, are just alternate text outputs. I couldn't find anything that uses the exit code directly, but since $? reflects the last command in a pipe, you could do:
hg summary | grep -q 'commit: (clean)'
which will set $? non-zero if any changes are uncommitted:
$ hg summary | grep -q 'commit: (clean)' ; echo $?
0
$ echo more >> that
$ hg summary | grep -q 'commit: (clean)' ; echo $?
1
You can also run hg id. If the hash ends with a + it indicates the working copy has changes. This should even work with old versions of hg.
It sounds like you're already using zsh; well, a couple days ago I helped to update the Mercurial support for the built-in VCS_INFO for putting VCS info in your prompt. Slated for the next release is support for showing changes to the working directory (among other things). If you don't want to wait you can grab the necessary files from CVS.
At the moment my prompt includes this (using only built-in zsh functionality):
(hg)[1801+ branchname somemq.patch, anycurrentbookmarks]
I use:
hg status -m -a -r -d -u
If there are no changes to tracked files, the command's output is an empty string.
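To turn that into an exit status for scripting, test whether the output is empty. Here the hg call is stubbed out with printf so the pattern can be run on its own; in real use, substitute hg status -m -a -r -d -u:

```shell
# Stand-in for: status_output=$(hg status -m -a -r -d -u)
status_output=$(printf '')
if [ -n "$status_output" ]; then
    echo has-changes
else
    echo no-changes
fi
# → no-changes
```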
I use this bash-snippet for some time now:
if [[ $(hg status 2>/dev/null) ]]
then
# do something
fi
Both id and summary are slower than status, so this is the fastest way I currently know, ignoring untracked files:
[[ -z `hg status | grep -v '^?'` ]] && echo no-changes || echo has-changes
There should be something more elegant than simply:
[ `hg st |wc -l` -eq 0 ] && echo hi