How to make gsutil rsync skip symlinks and return error code 0? - google-compute-engine

I have noticed that when gsutil rsync runs, it returns a non-zero exit code if it encounters a symlink it cannot resolve:
$ gsutil -m rsync -r -C /my_folder/ gs://my_bucket/
CommandException: Error opening file "file:////my_folder/my_symlink": .
CommandException: 1 files/objects could not be copied/removed.
Is there any way I can exclude such symlinks during the sync and make gsutil return error code 0?
I do not know the names of the symlinks.

As stated in the gsutil rsync documentation, the -e option tells it to ignore symbolic links.
Your command would look like:
gsutil -m rsync -r -C -e /my_folder/ gs://my_bucket/
I hope this is what you are looking for.
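As a quick check (a minimal sketch using the paths from the question; the 0 assumes the only previous failures were unresolvable symlinks), you can verify the exit status after the sync:
$ gsutil -m rsync -r -C -e /my_folder/ gs://my_bucket/
$ echo $?
0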

Related

Pass flags to the Sphinx runner?

So I've got the following project, OpenFHE-development, and when I run the build process there are lots of warnings. However, most of these warnings are fine to ignore (we vet them before pushing to the main branch).
Specifically, is there a way to take
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia
and convert it to
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia 2> errors.txt
(pipe the stderr to a file instead of having it display on stdout)?
This does not seem to be possible at the moment; see the GitHub discussion.

How do I access the data in a bucket using gsutil

C:\Users\goura\AppData\Local\Google\Cloud SDK>gsutil cp -r gs://299792458bucket/X
CommandException: Wrong number of arguments for "cp" command.
I am getting this error.
You probably need to give it a destination to copy to.
Try:
gsutil cp -r gs://299792458bucket/X .
(be sure you're in a directory that doesn't have a lot of other files in it)
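For example (a sketch; the directory name X_download is arbitrary), copy into a fresh, empty directory:
C:\Users\goura>mkdir X_download
C:\Users\goura>cd X_download
C:\Users\goura\X_download>gsutil cp -r gs://299792458bucket/X .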

How to generate code coverage with running lcov program at the same time

I have a large project whose unit-test binaries run on other machines, so the .gcda files are generated there. I then download them to the local machine, but into different directories. Each directory contains the source code.
For example: dir gcda1/src/{*.gcda, *.gcno, *.h, *.cpp}..., dir gcda2/src/{*.gcda, *.gcno, *.h, *.cpp}....
Because the project is very large, I have to run multiple lcov processes at the same time to generate the info files and then merge them, to save time. The problem is that when I merge these info files, the merged file keeps the per-directory paths, for example:
gcda1/src/unittest1.cpp
gcda2/src/unittest1.cpp
I want this:
src/unittest1.cpp
# i.e. the two entries above are expected to merge into this single record
The commands I use:
$ cd gcda1
$ lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o gcda1.info
$ cd ../gcda2
$ lcov --rc lcov_branch_coverage=1 -c -d ./ -b ./ --no-external -o gcda2.info
$ cd ..
$ lcov -a gcda1/gcda1.info -a gcda2/gcda2.info -o gcda.info
$ genhtml gcda.info -o output
The root dir contains the source code.
Description
Well, I finally found a method to solve this problem.
The info files lcov generates are plain text, so we can edit them directly.
Once you open these files, you will see that every source-file record starts with SF, like below:
SF:/path/to/your/source/code.h
SF:/path/to/your/source/code.cpp
...
Problem
In my case, these records are:
// file gcda1.info
SF:/path/to/root_dir/gcda1/src/unittest1.cpp
// file gcda2.info
SF:/path/to/root_dir/gcda2/src/unittest1.cpp
And, after lcov merge, it will be:
// file gcda.info
SF:/path/to/root_dir/gcda1/src/unittest1.cpp
SF:/path/to/root_dir/gcda2/src/unittest1.cpp
But, I expect this:
// file gcda.info
SF:/path/to/root_dir/src/unittest1.cpp
Method
My method is to edit the info files directly.
First, edit gcda1.info and gcda2.info, changing /path/to/root_dir/gcda1/src/unittest1.cpp to /path/to/root_dir/src/unittest1.cpp and /path/to/root_dir/gcda2/src/unittest1.cpp to /path/to/root_dir/src/unittest1.cpp.
Then merge them like below and generate html report:
$ lcov -a gcda1.info -a gcda2.info -o gcda.info
$ genhtml gcda.info -o output
In a large project we cannot edit each info file by hand; use sed to do the rewrite instead. Note that sed's default basic regex syntax treats + as a literal character, so use -E (or escape it as \+):
$ sed -E "s/(^SF:.*\/)gcda[0-9]+\/(.*)/\1\2/" gcda_tmp.info > gcda.info
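Putting it together (a sketch; the file names follow the layout above, and -i assumes GNU sed for in-place editing):
$ sed -E -i "s/(^SF:.*\/)gcda[0-9]+\//\1/" gcda1/gcda1.info gcda2/gcda2.info   # strip the gcdaN/ path component
$ lcov -a gcda1/gcda1.info -a gcda2/gcda2.info -o gcda.info
$ genhtml gcda.info -o output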

Can I use wildcards in oc exec commands?

I am trying to run remote commands on the openshift pods to delete some files in certain directory and the below command works:
oc exec mypod -i -t -- rm -f /tmp/mydir/1.txt
However, I am unable to use wildcards, e.g. *.txt, to remove all .txt files. The command with wildcards gives no errors but doesn't delete any files.
Any suggestions will be appreciated.
The following command worked:
oc exec mypod -i -t -- find /tmp/mydir -type f -name '*.txt' -delete
Hopefully it will be useful to someone else.
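For context: the wildcard is never expanded inside the pod, because glob expansion is a shell feature and oc exec runs rm directly, without a shell; rm -f then silently ignores the literal, non-matching *.txt argument. An alternative (a sketch, assuming the container image ships /bin/sh) is to run the command through a remote shell so the pod expands the wildcard itself:
oc exec mypod -i -t -- sh -c 'rm -f /tmp/mydir/*.txt'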

Atomicity of mkdir

I'm encountering an odd issue on an NFS v3 file system (I feel this is important) when running two processes in parallel that both do the following (per the comment below and my own knowledge of the matter, I don't think the language should matter, and I think this is readable enough):
if {![file isdirectory $dir]} {
    if {[catch {file mkdir $dir} err]} {
        error "-E- failed to mkdir $dir: $err"
    }
}
For those not familiar: file mkdir in Tcl behaves much like mkdir -p; it should only fail if the path exists and is not a directory. I'm nearly 100% sure (there is no 100% ever) that nothing is creating that file in any process, only file mkdir. The problem does not happen always, but often enough that while running our regressions we might hit:
Error: can't create directory "$dir": file already exists
This should only happen if, during the file mkdir processing, $dir exists as a non-directory file. Two questions (the first is more important to me):
Is mkdir not atomic here? In particular, could the file node in the filesystem exist as a non-directory for any amount of time during creation?
Assuming this really is the error, is there a simple atomic way to do this? I thought about exec mkdir -p, but if I'm right, it will suffer from the same problem.
It's hard enough to reproduce this that I'd rather be as sure as I can before attempting a fix. I came here after following a hint that the NFS file system may be the issue, but I need more expert advice. I don't care if both processes succeed; I just don't want either to fail (on the first try).
Final note
I circled back to this after a long while, and this is indeed a Tcl issue, and not only on NFS, though NFS seems to make it worse!
Still looking for answers explaining why I'm seeing what I'm seeing - see answer.
I opened this as a bug:
https://core.tcl.tk/tcl/tktview/270f78ca95b642fbed81ed03ad381d64a0d0f7df
Bug already fixed!
The people at Tcl core are fast: they fixed this a day after I posted the bug!
Fixed in 1c12ee9e45222d6c.
Thanks to mrcalvin for the suggestion.
The old testing attempts:
After a long while I circled back to this and ran the following tests (on ext4):
Two terminals with tclsh:
1: while {1} {file mkdir bla}
2: while {1} {file mkdir bla; file delete bla}
Eventually, an error on 1:
can't create directory "bla": no such file or directory
Two terminals with tclsh:
1: while {1} {exec mkdir -p bla}
2: while {1} {exec mkdir -p bla; file delete bla}
No error.
One terminal with bash, one with tclsh:
1: while [ 1 ]; do mkdir -p bla; done
2: while {1} {file mkdir bla; file delete bla}
eventually I get on 1:
mkdir: cannot create directory ‘bla’: File exists
but oddly enough
1: while [ 1 ]; do mkdir -p bla; rm -rf bla; done
2: while {1} {file mkdir bla}
no error (is delete the culprit?), and
1: while [ 1 ]; do mkdir -p bla; done
2: while {1} {exec mkdir -p bla; file delete bla}
much less chance of error (so delete is not as bad?). Of course, two bash shells do not conflict:
1: while [ 1 ]; do mkdir -p bla; rm -rf bla; done
2: while [ 1 ]; do mkdir -p bla; done
On NFS, but not on ext4:
1: while {1} {file mkdir bla; exec rm -rf bla}
2: while {1} {file mkdir bla}
fails with
can't create directory "bla": file already exists
on both 1: and 2: (randomly).
Conclusion
file mkdir is not as "thin" a layer as I thought, and it can produce race conditions where one mkdir thinks a directory being made is a file. file delete may have this or a similar issue as well; it may be contributing to the failures in my tests, but not in my original question. The matter is worse on NFS systems, where file mkdir alone easily reproduces the error.
The solution is to use exec mkdir -p. So far this is working for us across the board.
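Applied to the snippet from the question, the workaround looks roughly like this (a sketch; it simply swaps file mkdir for the system mkdir -p, which treats an already-existing directory as success):
if {![file isdirectory $dir]} {
    # delegate to the system mkdir; -p is safe if another process
    # creates the directory concurrently
    if {[catch {exec mkdir -p $dir} err]} {
        error "-E- failed to mkdir $dir: $err"
    }
}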