How to do profile-based operations in gsutil while copying files from S3 to GCS? - boto

In the AWS CLI we can create named profiles and switch between them while doing operations like cp, ls, mv, etc. How can I do the same profile-based operations with the gsutil tool? I have added access_key and secret_key information in /etc/boto.cfg, but how do I add a profiles section and how do I use it?
gsutil cp s3://<bucket-name>/ gs://<bucket-name>

If you want functionality similar to AWS named profiles for gcloud, you can use gcloud named configurations, which are managed with the gcloud config configurations commands.
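A minimal sketch of both approaches, assuming hypothetical configuration names and boto config file paths (dev, prod, ~/.boto_dev, ~/.boto_prod are placeholders):
# gcloud named configurations: create, switch, inspect
gcloud config configurations create dev
gcloud config configurations activate dev
gcloud config configurations list
# for gsutil itself, keep one boto config file per set of AWS credentials
# and pick one per invocation with the BOTO_CONFIG environment variable
BOTO_CONFIG=~/.boto_dev gsutil cp s3://<bucket-name>/ gs://<bucket-name>
BOTO_CONFIG=~/.boto_prod gsutil ls s3://<bucket-name>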

Related

List / browse files / directories in OpenShift 3

I want to see the folder structure in OpenShift 3. Is it possible to ssh in? I see that with rsync I can copy files in and out, but how do I list the contents?
To access the container which is running your application, use the oc rsh command. This will give you an interactive shell in which you can use normal Unix commands to change directories, list files, etc.
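For example (the pod name below is hypothetical; list yours with oc get pods):
oc get pods
oc rsh my-app-1-abcde
# you now have an interactive shell inside the container
ls -la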
Consider reading the free eBook at https://www.openshift.com/promotions/for-developers.html and working through the exercises at https://learn.openshift.com to learn more about using OpenShift. You can also find various blog posts at blog.openshift.com.
If your container doesn't include a shell, you can also use oc exec to run commands directly. Example for a specific container and a command with arguments (note the double dash):
oc exec -it POD -c CONTAINER -- ls -lrt /tmp/

Is it possible to copy a directory from a Google Compute Engine instance to my local machine?

With scp I can add the -r flag to download directories to my local machine via ssh.
When using:
gcloud compute scp -r
it says that '-r' is not an available option.
Without -r I get an error saying that my source path is a directory. (Implying I can only download single files.)
Is there an equivalent to -r flag for gcloud compute scp command?
Found it!
GCE offers an equivalent and it is --recurse.
My final command looks like this:
gcloud compute scp --recurse username@instance_name:./* "local_dir"
For some reason I also needed the * after the source folder to avoid a security issue.
Your gcloud setup already has the right credentials, so just do
gcloud compute scp --recurse [the_instance_name]:[the_path_on_gcp_instance_folder] [the_path_on_your_machine]
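For example, with a hypothetical instance name, zone, and remote directory:
gcloud compute scp --recurse my-instance:/home/myuser/logs ./logs --zone us-central1-a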

How do you create a deployment configuration in OpenShift? Is it automatic for new-app based on a docker image?

I'm creating a new-app based on an image stream that corresponds to a docker image in a private OpenShift docker registry. The command is:
oc new-app mynamespace/my-image:latest -n=my-project
Question 1: Does this command automatically create a deployment configuration (dc) that can be referenced as dc/my-image? Is this deployment configuration associated with my-project?
Question 2: What is the oc command to create a deployment configuration? The OpenShift developer guide has a section titled Creating a Deployment Configuration, but surprisingly it does not say how to create a DC or give any examples. It just shows a JSON structure and says DCs can be managed with the oc command.
Yes, your command will create objects in the specified project. You can check which objects were created using the oc get command, e.g. to see which DCs you have, run oc get dc or oc get deploymentconfigs.
Other useful commands are oc describe, which is similar to oc get but shows more information, and oc status -v, which shows broader information about the project, including warnings and errors.
You create a DC, and any other resource type, using the oc create command: e.g. copy the example DC from the URL you link to into a file, then run oc create -f mydc.yaml. Both YAML and JSON are supported.
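As a rough sketch (the name, labels, and port below are placeholders, not taken from your project), you could write a minimal DC to a file and create it like this:
cat > mydc.yaml <<'EOF'
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: mynamespace/my-image:latest
        ports:
        - containerPort: 8080
EOF
oc create -f mydc.yaml -n my-project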
As you can see, some commands can create DCs by themselves without you providing any YAML or JSON. You can later modify existing resources with oc edit service/my-app; there is also the oc patch command, which is suitable for scripting.
You can see the YAML of an existing resource by running oc get dc/mydc -o yaml; the same works for any other resource. Make sure you are currently in the desired project, or use the -n option as you do in your example.
It's not that hard once you understand some basics and learn to use the oc describe and oc logs commands to debug issues with your images/pods, e.g. oc describe pod/my-app-1-asdfg, oc logs my-app-1-asdfg, oc logs -f dc/my-app.
HTH

Simply uploading a file to Google Compute Engine

I want to upload a file from my local machine to the disk attached to my Google Compute Engine VM.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: I get the following error now:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM where you want to copy the files to. In your specific case:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will then place the file(s) into your user's home directory root. Just make sure the remote path exists, and that your user has write rights to the destination.
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud compute tool? If not, install it on your local machine, open up the Windows command line, and type gcloud auth login to authenticate; then you should be able to do what you want with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename - it's a good idea to get out of the habit of using spaces in filenames - and made a couple of assumptions:
Your VM is running linux
You are okay with copying up to your home directory on the VM
If any of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part.
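For example, to copy into a specific directory that already exists on the VM (the VM name and target directory here are placeholders):
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo my-vm:/home/abhigenie92_gmail_com/models/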
Edit: I mangled a file extension in the original, fixed now!

gsutil not working in GCE

So when I bring up a GCE instance using the standard Debian 7 image and issue a "gsutil config" command, it fails with the following message:
jcortez@master:~$ gsutil config
Failure: No handler was ready to authenticate. 4 handlers were checked. ['ComputeAuth', 'OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials.
I've tried it on the Debian 6 and CentOS instances and had the same results. Issuing "gcutil config" works fine, however. I gather I need to set up my ~/.boto file, but I'm not sure what to put in it.
What am I doing wrong?
Using service account scopes as E. Anderson mentions is the recommended way to use gsutil on Compute Engine, so the images are configured, via /etc/boto.cfg, to get OAuth access tokens from the metadata server:
[GoogleCompute]
service_account = default
If you want to manage gsutil config yourself, rename /etc/boto.cfg, and gsutil config should work:
$ sudo mv /etc/boto.cfg /etc/boto.cfg.orig
$ gsutil config
This script will create a boto config file at
/home/<...snipped...>/.boto
containing your credentials, based on your responses to the following questions.
<...snip...>
Are you trying to use a service account to have access to Cloud Storage without needing to enter credentials?
It sounds like gsutil is searching for an OAuth access token with the appropriate scopes and is not finding one. You can ensure that your VM has access to Google Cloud Storage by requesting the storage-rw or storage-full permission when starting your VM via gcutil, or by selecting the appropriate privileges under "Project Access" on the UI console. For gcutil, something like the following should work:
> gcutil addinstance worker-1 \
> --service_account_scopes=https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/compute.readonly
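If you are using the newer gcloud tooling rather than gcutil, the rough equivalent (the instance name is just an example; storage-rw and compute-ro are gcloud's scope aliases for the URLs above) would be:
gcloud compute instances create worker-1 --scopes storage-rw,compute-ro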
When you created your GCE instance, did you set it up with a service account? Older versions of gsutil got confused if you ran gsutil config when you already had service account credentials configured.
If you already have a service account configured you shouldn't need to run gsutil config - you should be able to simply run gsutil ls, cp, etc. (it will use credentials located elsewhere than your ~/.boto file).
If you really do want to run gsutil config (e.g., to set up credentials associated with your login identity, rather than service account credentials), you could try downloading the current gsutil from http://storage.googleapis.com/pub/gsutil.tar.gz, unpacking it, and running that copy of gsutil. Note that if you do this, the personal credentials you create by running gsutil config will essentially "hide" your service account credentials (i.e., you would need to move your .boto file aside if you ever want to use your service account credentials again).
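A rough sequence for that, assuming a Linux VM with wget and tar available:
wget http://storage.googleapis.com/pub/gsutil.tar.gz
tar xzf gsutil.tar.gz
./gsutil/gsutil config
# later, to fall back to service account credentials, move the personal config aside
# (the backup file name is arbitrary)
mv ~/.boto ~/.boto.bak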
Mike Schwartz, Google Cloud Storage team
FYI I'm working on some changes to gsutil now that will handle the problem you encountered more smoothly. That version should be out within the next week or two.
Mike