gsutil not working in GCE - boto

So when I bring up a GCE instance using the standard Debian 7 image and issue a "gsutil config" command, it fails with the following message:
jcortez@master:~$ gsutil config
Failure: No handler was ready to authenticate. 4 handlers were checked. ['ComputeAuth', 'OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials.
I've tried it on the Debian 6 and CentOS images and had the same results. Issuing "gcutil config" works fine, however. I gather I need to set up my ~/.boto file, but I'm not sure what to put in it.
What am I doing wrong?

Using service account scopes, as E. Anderson mentions, is the recommended way to use gsutil on Compute Engine, so the images ship with an /etc/boto.cfg that configures gsutil to fetch OAuth access tokens from the metadata server:
[GoogleCompute]
service_account = default
If you want to manage gsutil config yourself, rename /etc/boto.cfg, and gsutil config should work:
$ sudo mv /etc/boto.cfg /etc/boto.cfg.orig
$ gsutil config
This script will create a boto config file at
/home/<...snipped...>/.boto
containing your credentials, based on your responses to the following questions.
<...snip...>

Are you trying to use a service account to access Cloud Storage without needing to enter credentials?
It sounds like gsutil is searching for an OAuth access token with the appropriate scopes and is not finding one. You can ensure that your VM has access to Google Cloud Storage by requesting the storage-rw or storage-full scope when starting your VM via gcutil, or by selecting the appropriate privileges under "Project Access" in the UI console. For gcutil, something like the following should work:
> gcutil addinstance worker-1 \
> --service_account_scopes=https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/compute.readonly
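To verify which scopes a running VM actually has, you can query the metadata server from inside the instance; a quick check (using the current v1 metadata endpoint):
$ curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
If storage-rw was requested, https://www.googleapis.com/auth/devstorage.read_write should appear in the output.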

When you configured your GCE instance, did you set it up with a service account? Older versions of gsutil got confused when you ran gsutil config while service account credentials were already configured.
If you already have a service account configured, you shouldn't need to run gsutil config - you should be able to simply run gsutil ls, cp, etc. (it will use credentials located elsewhere, not your ~/.boto file).
If you really do want to run gsutil config (e.g., to set up credentials associated with your login identity rather than service account credentials), you could try downloading the current gsutil from http://storage.googleapis.com/pub/gsutil.tar.gz, unpacking it, and running that copy. Note that if you do this, the personal credentials you create by running gsutil config will essentially "hide" your service account credentials (i.e., you would need to move your .boto file aside if you ever want to use your service account credentials again).
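For example, switching between the two credential sets could look like this (assuming the default ~/.boto location):
$ mv ~/.boto ~/.boto.personal   # fall back to service account credentials
$ mv ~/.boto.personal ~/.boto   # switch back to personal credentials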
Mike Schwartz, Google Cloud Storage team

FYI, I'm working on some changes to gsutil that will handle the problem you encountered more smoothly. That version should be out within the next week or two.
Mike

Related

OpenShift on AWS is automatically allowing containers to be deployed as the root user

I tried to deploy the library/cassandra image as a Cassandra container in a Sandbox OpenShift cluster, but it threw this error in the pod logs:
"Running Cassandra as root user or group is not recommended - please start Cassandra using a different system user.
If you really want to force running Cassandra as root, use -R command line option."
When I checked the container description, I could see that the SCC is set to Restricted. So it looks like in Sandbox OpenShift, the "Restricted" SCC is applied to the "default" service account by default.
But when I installed OpenShift on AWS with the installer option, I didn't get this error with the same library/cassandra image.
It looks like the default service account there is not associated with the "Restricted" SCC by default.
Could someone clarify what is different in the Sandbox environment that triggers this error, and how I can configure AWS OpenShift so that the default service account is associated with the restricted SCC?
I can't see your specific environment, but from the error message I suspect it's being triggered by group=0, not user=0.
To confirm:
$ oc get pods (whatever) -o yaml | grep openshift.io/scc
This will show you which SCC admitted the pod into the cluster. It should be "restricted" based on what you said. If so, then we've got some good evidence that it's just the group.
Next, you can look for something like this:
$ oc rsh (podname) id -a
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
UID (user) is in the expected billion+ range defined in the namespace annotation. GID (group) is zero.
With that in place, you can either ignore the error, knowing it's only group=0 that's in play, or you can set a securityContext for your pod (or container) to specify a different GID.
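A minimal sketch of what that might look like, using a container-level securityContext (the pod name and GID here are hypothetical, and whether a given GID is admitted depends on your SCC's group ranges):
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-test
spec:
  containers:
  - name: cassandra
    image: library/cassandra
    securityContext:
      runAsGroup: 999   # non-root primary GID, so Cassandra no longer sees gid=0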
I came to know that the "default" project has a different set of permissions, so even a container with user id 0 can be deployed in the default namespace.
In the Sandbox cluster, the project is dev or stage, so it works with the correct security level.

Zabbix external checks cannot be executed due to SELinux

I'm trying to implement external checks in Zabbix 2.2. I've created a simple bash script for SSL verification which should be executed by the Zabbix service. The script is located in the /var/lib/zabbixsrv/externalscripts directory. Even though the .sh script has 777 permissions, I still receive a message telling me:
unable to execute /var/lib/zabbixsrv/externalscripts/test.sh: Permission denied
I get the same message when I try to run the command, even as root. The ls -Z /var/lib/zabbixsrv/externalscripts/test.sh command output says:
-rwxrwxrwx. zabbixsrv zabbixsrv unconfined_u:object_r:default_t:s0 /var/lib/zabbixsrv/externalscripts/test.sh
There is no message related to this in /var/log/messages. Does anybody know how to get SELinux to allow the zabbixsrv user to execute the script, without disabling SELinux?
Which zabbix service (zabbix-server, zabbix-agent, ...) should execute the external checks script?
Did you try setting AllowRoot=1 in /etc/zabbix/zabbix_agentd.conf?
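One way to confirm whether SELinux is actually the blocker is to look for AVC denials in the audit log rather than /var/log/messages; a quick check, assuming auditd is running:
$ sudo ausearch -m avc -ts recent | grep zabbix
If nothing shows up, the "Permission denied" is probably not coming from SELinux at all; a noexec mount, for example, produces the same error without any AVC denial.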
The main issue was in the /etc/fstab configuration file. Zabbix uses the /var/lib/zabbixsrv/externalscripts directory as its default location for external scripts, but my server has /var mounted with the rw and noexec options.
I've moved the script to a different location and changed the configuration file accordingly. Checks are working fine now.
Thanks everybody for any contribution relating this topic.
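For anyone hitting the same thing: you can check the mount options of the scripts directory and, if noexec is the culprit, either remount or relocate the scripts. A quick sketch (the output line is illustrative):
$ findmnt -n -o OPTIONS --target /var/lib/zabbixsrv/externalscripts
rw,noexec,relatime
$ sudo mount -o remount,exec /var   # temporary; edit /etc/fstab for a permanent fix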

Simply uploading a file to google compute

I want to upload a file from my local machine to the disk attached to my Google Compute Engine VM.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: I now get the following error:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM that you want the files copied to. The general form is:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will place the file(s) in your user's home directory. Just make sure the remote path exists and that your user has write rights to the destination.
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud compute tool? If not, install it on your local machine (follow the link), open the Windows command line, and type gcloud auth login to authenticate; then you should be able to do what you want with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename (it's a good idea to get out of the habit of using spaces in filenames) and made a couple of assumptions:
Your VM is running linux
You are okay with copying up to your home directory on the VM
If either of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part.
Edit: I mangled a file extension in the original, fixed now!
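As an alternative to backslash-escaping, quoting the whole Windows path should also work, using the same file as above:
gcloud compute copy-files "C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo" <VM Name>:~/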

Couchbase Mobile (Sync Gateway) sample TODOlite application doesn't replicate; complains _facebook doesn't exist

My objective: get https://github.com/couchbaselabs/ToDoLite-iOS syncing with a Couchbase Server and sync gateway on localhost rather than the default demo URL.
I run sync gateway like so: bin/sync_gateway -url http://localhost:8091
And then the only thing I changed in the example is:
-#define kSyncGatewayUrl @"http://demo.mobile.couchbase.com/todolite"
+#define kSyncGatewayUrl @"http://localhost:4984/sync_gateway/"
And when I run it, I get:
Error: Error Domain=CBLHTTP Code=404 "404 not_found" UserInfo=0x7ff11941fb50 {NSURL=http://localhost:4984/sync_gateway/_facebook, NSLocalizedFailureReason=not_found, NSLocalizedDescription=404 not_found}
How do I fix this?
I solved it. The reason is that I ran sync_gateway without enabling Facebook registration support.
Normally this is done in config.json file. In fact, this configuration file was supplied in ToDoLite all along.
It is crucial that you launch sync_gateway with this configuration file. The README actually states this but in a loose and casual way...
cd ToDoLite-iOS
sync_gateway -url http://localhost:8091 sync-gateway-config.json
NB: I assume above that sync_gateway has been made accessible through $PATH. It's a good idea to do that anyway.
Also, I didn't pay attention to the dbname. So you'll need to replace
#define kSyncGatewayUrl @"http://demo.mobile.couchbase.com/todolite"
with
#define kSyncGatewayUrl @"http://localhost:4984/todos"
So, what's the complete sequence of steps to get it working?
If you want to wipe everything on the server, rm -rf ~/Library/Application\ Support/Couchbase and start over. The Homebrew cask hides this data somewhere else where it's hard to reset, so a manual install is strongly recommended.
Install Couchbase Server
Set up login credentials if fresh install; otherwise just login
Create a bucket (a database) named todos on the cluster. This is the dbname used by ToDoLite.
Launch sync gateway. Be sure to pass in the replication URL AND the JSON config file.
bin/sync_gateway -url http://localhost:8091 sync-gateway-config.json; keep sync gateway running
In the TODOLite AppDelegate.m, change kSyncGatewayUrl:
#define kSyncGatewayUrl @"http://localhost:4984/todos". Notice the name of the database is necessary!
(Optionally) Access the administrator interface of the sync gateway by going to http://localhost:4985/_admin/db/todos/sync. You can set up the sync function here.
In case you're wondering where those port numbers came from, check out
ports Couchbase Server uses
ports Sync Gateway uses
4984 — SG API port
4985 — SG admin server
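To sanity-check that the gateway is serving your database before pointing the app at it, you can hit the public REST port; for example:
$ curl http://localhost:4984/todos/
This should return a small JSON document describing the database; a 404 here means the database name in your config doesn't match.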
The default remote sync URL is defined in different files depending on which version of the project you download (iOS, Android, PhoneGap, or Motion). To find the appropriate string to change, simply search your project for the URL "http://demo.mobile.couchbase.com/todolite" and replace it with the URL of your new sync gateway database.

CruiseControl.net Fails A Build When There Are no Source Code Changes

CruiseControl.NET correctly detects that there are "No modifications detected" and shows green build reports when I run the program in a visible terminal, but after I quit the program and start the service, the builds fail with the following stack trace:
ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: . Process command: C:\Program Files\TortoiseHg\hg.exe pull https://redacted.kilnhg.com/Code/Repositories/Group/HealthTracker
at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Mercurial.Mercurial.HgPull(IIntegrationResult result)
at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Mercurial.Mercurial.GetModifications(IIntegrationResult from, IIntegrationResult to)
at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)
Project: HealthTracker
System Information:
Windows 7 x64
CCnet 1.8.5.0
Where do I start to debug the problem?
When CruiseControl.NET runs as a service, it runs with the service account's credentials; it is probably running as Network Service. You will either need to provide a password for Mercurial in the ccnet.config file, or copy your authentication certificates from your user account's folder to the Network Service account's folder. That folder is in different places in different versions of Windows.
Since I use a token provided by FogCreek (documented at http://help.fogcreek.com/8375/access-tokens-and-continuous-integration-servers), I have kiln.prefix, kiln.username, and kiln.password values, all stored inside mercurial.ini. Unfortunately, there is no corresponding file for the NETWORK SERVICE user account, so the solution is to run the CruiseControl.NET service under a normal Windows user account and configure that account with the correct mercurial.ini settings.
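For reference, the relevant mercurial.ini section looks roughly like this (the prefix and credential values below are placeholders; see the FogCreek documentation linked above for the exact values):
[auth]
kiln.prefix = https://redacted.kilnhg.com
kiln.username = <your-kiln-username>
kiln.password = <your-access-token>
The file must live in the profile of whichever Windows account the CruiseControl.NET service logs on as.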