How do I access the data in a bucket using gsutil - google-cloud-functions

C:\Users\goura\AppData\Local\Google\Cloud SDK>gsutil cp -r gs://299792458bucket/X
CommandException: Wrong number of arguments for "cp" command.
I am getting this error.

You probably need to give it a destination to copy to.
Try:
gsutil cp -r gs://299792458bucket/X .
(be sure you're in a directory that doesn't have a lot of other files in it)
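If you would rather copy into a specific folder instead of the current directory, you can name the destination explicitly (the local path below is just a placeholder):
gsutil cp -r gs://299792458bucket/X C:\path\to\destination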

Related

(OpenGrok) How can I use the '--symlink' option in OpenGrok?

I'm not sure how to use the --symlink option in OpenGrok, so I'm asking.
OpenGrok's source root folder is '/opengrok/src'.
In this folder, I created a symbolic link file with the following command.
ln -s /home/A/workspace/tmp tmp
And I did indexing with the following command.
java -Djava.util.logging.config.file=/opengrok/etc/logging.properties -jar /opengrok/dist/lib/opengrok.jar -c /usr/local/bin/ctags -s /opengrok/src -d /opengrok/data -P -S -W /opengrok/etc/configuration.xml --symlink /opengrok/src/tmp -U http://localhost:8080/source
When I connect to localhost/source, the tmp entry is shown, but when I click it, the files in tmp are not listed and the following error message appears.
Error: File not found!
The requested resource is not available.
Resource lacks history info. Was remote SCM side up when indexing occurred? Cleanup history cache dir(or just the .gz for the file or db record) and rerun indexer making sure remote side will respond during indexing.
How can I access and view the files in tmp using OpenGrok?

How to make gsutil rsync skip symlinks and return error code 0?

I have noticed that when gsutil rsync runs, it returns a non-zero exit code if it encounters a symlink which it cannot resolve:
$ gsutil -m rsync -r -C /my_folder/ gs://my_bucket/
CommandException: Error opening file "file:////my_folder/my_symlink": .
CommandException: 1 files/objects could not be copied/removed.
Is there any way I can exclude such symlinks during the sync and make gsutil return error code 0?
I do not know the names of the symlinks.
As stated in the gsutil rsync documentation, the -e option is used to ignore symbolic links.
Your command would look like:
gsutil -m rsync -r -C -e /my_folder/ gs://my_bucket/
I hope this is what you are looking for.
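If you want to confirm that the sync now finishes cleanly, a quick check of the exit status right after the run (plain shell, nothing gsutil-specific) looks like this:
gsutil -m rsync -r -C -e /my_folder/ gs://my_bucket/
echo $?   # prints 0 when no files failed to copy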

Cannot delete files on docker host

I'm using the following shell script as the entrypoint to extract my databases and start up the container.
#!/bin/bash
# extract the databases only on the first start, then hand over to mysqld
if [ ! -d "/var/lib/mysql/assetmanager" ]; then
    tar -zxvf mysql.tar.gz
fi
exec /usr/bin/mysqld_safe
On startup I mount a local directory to /var/lib/mysql with the -v parameter and then extract the files with the above script.
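For reference, the run command looks roughly like this (the host path and image name are placeholders, not the actual values):
docker run -d -v /path/on/host/mysql-data:/var/lib/mysql my-mysql-image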
But now I can't delete the extracted files on my host because of a permission denied error.
Can someone help me with this problem?
Thanks
You cannot delete them because, by default, the process in the container is executed by the root user, so the extracted files belong to root. If you don't need these files in the mapped directory, use a different location for them, e.g. -v ...:/myassets, and in the script:
if [ ! -d "/var/lib/mysql/assetmanager" ]; then
    tar -zxvf /myassets/mysql.tar.gz
fi
You could also map a single file instead of the whole directory if you only need that file.
There are many other solutions, depending on what you need:
you could delete these files as root: sudo rm ... (or via a throwaway container, as sketched below)
you could delete them in the container before it exits
you could create a user in the container and create the files as that user
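For the sudo-free variant of the first option, a throwaway container can do the cleanup, because it runs as root against the same mounted directory (the host path is a placeholder):
docker run --rm -v /path/on/host/mysql-data:/data busybox rm -rf /data/assetmanager   # rm runs as root inside the container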

hg archive to Remote Directory

Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:
hg archive ssh://user@example.com/path/to/archive
However, that does not appear to work. It instead creates a directory called ssh: in the current directory.
I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping the destination directory. However, I would like to know if there is a better way.
if [[ $# != 1 ]]; then
    echo "Usage: $0 [user@]hostname:remote_dir"
    exit
fi
arg=$1
arg=${arg%/} # remove trailing slash
host=${arg%%:*}
remote_dir=${arg##*:}
# zip named to match lowest directory in $remote_dir
zip=${remote_dir##*/}.zip
# root of archive will match zip name
hg archive -t zip $zip
# make $remote_dir if it doesn't exist
ssh $host mkdir --parents $remote_dir
# copy zip over ssh into destination
scp $zip $host:$remote_dir
# unzip into containing directory (will prompt for overwrite)
ssh $host unzip $remote_dir/$zip -d $remote_dir/..
# clean up zips
ssh $host rm $remote_dir/$zip
rm $zip
Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
Nope, this is not possible -- we always assume that there is a functioning Mercurial installation on the remote host.
I definitely agree with you that this functionality would be nice, but I think it would have to be made in an extension. Mercurial is not a general SCP/FTP/rsync file-copying program, so don't expect to see this functionality in the core.
This reminds me... perhaps you can build on the FTP extension to make it do what you want. Good luck! :-)
Have you considered simply having a clone on the remote and doing hg push to archive?
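That workflow would look roughly like this (it needs Mercurial on the remote host, which your edit rules out; the host and paths are the ones from your example):
hg clone . ssh://user@example.com/path/to/archive      # one-time: create the remote clone
hg push ssh://user@example.com/path/to/archive         # later: push new changesets
ssh user@example.com "hg -R /path/to/archive update"   # refresh the remote working copy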
Could you use an SSH tunnel to mount a remote directory on your local machine and then just do standard hg clone and hg push operations 'locally' (as far as hg knows), while they actually write to a filesystem on the remote computer?
It looks like there are several stackoverflow questions about doing this:
How do I mount a remote Linux folder in Windows through SSH?
Map SSH drive in Windows
How can I mount a remote directory on my computer?
I am often in a similar situation. The way I get around it is with sshfs.
sshfs me@somewhere-else:path/to/repo local/path/to/somewhere-else
hg archive local/path/to/somewhere-else
fusermount -u local/path/to/somewhere-else
The only disadvantage is that sshfs is slower than NFS, Samba or rsync. Generally I don't notice, as I only rarely need to do anything in the remote filesystem.
You could also simply execute hg on the remote host:
ssh user@example.com "cd /path/to/repo; hg archive -r 123 /path/to/archive"

How to extract a .depot file on HP-UX?

How can I extract a .depot file on HP-UX?
The .depot file is a tarred directory structure, with some of the files gzipped under the same name as the original.
Note that my environment is quite limited - I can't have root, I don't have swinstall.
http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1259826031876+28353475&threadId=1143807
Ideally, the solution should also work on Linux.
I have tried to untar it and then run gunzip -f -r -d -v --suffix= .
But the problem is that the gzipped files have no suffix, so in the end gzip deletes them.
It was relatively easy:
for f in `find . -type f` ; do
    mv $f $f.gz      # give the gzipped member a .gz suffix so gunzip accepts it
    gunzip $f.gz
done
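Putting the steps together, the whole extraction might look like this (the depot file name and scratch directory are placeholders):
mkdir extracted && cd extracted
tar -xvf ../software.depot                      # unpack the tarred directory structure
for f in `find . -type f` ; do
    mv $f $f.gz                                 # give each member a .gz suffix
    gunzip $f.gz 2>/dev/null || mv $f.gz $f     # decompress, or restore the name if it was not gzipped
done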