I'm using gcutil to access Google Compute Engine instances.
I noticed that when I spin up a new instance, the new instance has a user that I used on a previous machine in this project.
I want to remove that user: not just from this machine, which of course I can do via the normal *nix processes, but I also want to ensure it is not used for any future Compute Engine instances.
How can I do this?
By default, once a user has run the gcloud auth login command and authenticated with the cloud project, their SSH key is added to the project's Compute Engine common metadata, stored under the sshKeys key/value pair. These keys are then inherited by all instances within the project, granting SSH login access to them.
To prevent an existing user from having SSH access to a project's instances, you will need to modify this value, keeping only the public keys of the users you wish to have access. It can be found in the Cloud Console within your project, under Compute Engine and then Metadata. In your case, all of the users may be you, just logged in from different clients.
However, you cannot modify the existing metadata from there; you need to use the gcutil setcommoninstancemetadata command to re-insert the modified sshKeys value (see https://developers.google.com/compute/docs/metadata#common). From my experimentation, this appears to reset ALL common metadata for the project, so if you have more than just the default sshKeys set on your project, you will need to add the other values back in at the same time from the command line.
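A rough sketch of that round trip (the flag names are from the old gcutil tool and worth double-checking against gcutil help setcommoninstancemetadata; my-project and sshkeys.txt are placeholders):

# Print the current common metadata so you can preserve the other values.
$ gcutil --project=my-project getproject

# Put the pruned sshKeys value (one "username:ssh-rsa ..." entry per line)
# in a local file, then write the common metadata back in one call.
$ gcutil --project=my-project setcommoninstancemetadata --metadata_from_file=sshKeys:sshkeys.txt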
I'm trying to set up read/write access to a Cloud Storage bucket from a GCE instance using a service account, but the permissions don't seem to be applied. I have done the following:
Created a service account, let's say 'my-sa'
Created a bucket, let's say 'my-bucket'
In the IAM console for my project, assigned the role 'Cloud Storage admin' to the service account
Created a new GCE instance via the console, assigned to the service account 'my-sa'. The access scope is then automatically set to cloud-platform
Connected to the instance using gcloud compute ssh as my user (project owner)
Ran gsutil ls gs://my-bucket
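Expressed as commands, the setup above is roughly the following (a sketch, not verbatim what I ran; names as in the steps, with my-project and my-instance as placeholders):

# Create the service account and bucket, grant the role, create the VM.
$ gcloud iam service-accounts create my-sa
$ gsutil mb gs://my-bucket
$ gcloud projects add-iam-policy-binding my-project \
    --member=serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
    --role=roles/storage.admin
$ gcloud compute instances create my-instance \
    --service-account=my-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform

# Connect and try the listing.
$ gcloud compute ssh my-instance
$ gsutil ls gs://my-bucket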
Expected behaviour: get list of items in bucket
Observed behaviour:
gsutil takes about 5 seconds to think, then gives:
AccessDeniedException: 403 my-sa@my-project.iam.gserviceaccount.com does not have storage.objects.list access to bucket my-bucket.
Things I've tried:
gcloud auth list on the instance does show the service account, and shows it as being active
I've added more permissions to the service account (up to project owner), but it doesn't make a difference
I also can't use other permissions from the instance. When I give the Compute Engine Admin role to the service account, I can't run gcloud compute instances list from the instance
I've removed the .gsutil dir to make sure the cache is cleared
With the default Compute Engine service account, I can list the buckets, but not write (as expected). When I add the Cloud Storage read/write access scope from the console, I can also write
I really don't have a clue how to debug this anymore, so any help would be much appreciated.
I have a Linux VM on Google Compute Engine that I am accessing via SSH. It works just fine, but when I go to the Cloud Console, it asks me if I want to create a new VM, as if I have none. I know I'm on the right account because it shows my billing balance has gone down. Where did my server go?
It is weird, but it is important to make a distinction that is not obvious when you start using Google Cloud Platform: there are the credentials you use to access the platform (your email or a service account); the project, which is the entity that any resource must be attached to; and the billing account, which is the payment profile and can have several projects associated with it.
In that case, the console could be showing a different project that is associated with the same billing account.
To check which project your machine is in, run this in the shell:
gcloud compute instances list
Here you will see the instances in your currently configured project. If nothing appears, reset the gcloud configuration:
gcloud init
and change the project.
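If you only want to check or switch the active project without the full init flow, these also work (my-other-project is a placeholder):

# Show the project gcloud is currently pointed at.
$ gcloud config list project

# List all projects your account can see, then switch.
$ gcloud projects list
$ gcloud config set project my-other-project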
I am being billed for an unused IP address. I can't find the item that's charging me.
I've gone through the project using console.cloud.google.com looking in Compute Engine and Networking settings, but I can't find any IP addresses.
I'm only using the project for Cloud Storage of 1 text file, and a git repository. I run these commands on the terminal, and I am getting 0 items.
$ gcloud --project=PROJECTNAME compute addresses list
The above command listed 0 items.
$ gcloud --project=PROJECTNAME compute forwarding-rules list
The above command listed 0 items.
Is there a way of telling where this static IP address is, or how I can disable it? I can't find it anywhere. I'd rather not delete the entire project because some of the services are being used by my production app.
I know that it's a global IP address because I can see it listed in my Compute Engine quota. For me to be able to use a command line option to delete the address, I think that I need the name of the address, but I can't find that listed anywhere.
I'm thinking this could be related to me having one of these two things enabled for the project in the past:
I was running an AppEngine project, but have since terminated it.
For the AppEngine project, I registered a custom domain to point to it.
I had used AppEngine Flexible (aef). The unused IP was from my stopped version, which blocks the releasing of the static IP, so it was advised to first delete this version before trying to release the IP address again.
You cannot delete your previous version if it's the only one you have, as you need at least one version for the default module.
To fix this, you could deploy a new version, say a Flexible VM (deployed to another region) or a Standard VM. Then, as a workaround, if you do not have any app to replace it right now, you can deploy an empty app instead: create an app.yaml that serves only static files and has no script to execute, so you will not be charged for any instances.
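A rough sketch of that flow with gcloud (the version names are placeholders):

# See which versions exist, including the stopped Flexible one.
$ gcloud app versions list

# Deploy the empty static app as a new version, then delete the old one,
# which should let the static IP be released.
$ gcloud app deploy app.yaml --version=placeholder
$ gcloud app versions delete old-flex-version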
For a more detailed guide to this workaround, you may check this documentation [1].
[1] http://stackoverflow.com/questions/37679552/cannot-delete-version
Google Compute Engine does not allow root login, nor does it assign any password to the default owner account.
I thought the SSH console in the Compute Engine backend could SSH to the instance regardless of the SSH config.
Obviously I was wrong: I modified the sshd_config file and did not put the default owner account in the AllowUsers parameter. Now I cannot SSH to the instance using the owner account, and have thus lost all sudoer rights and am stuck.
I do, however, have a normal user set up which has no sudoer rights but can SSH to the instance.
Is there any way to solve this or I have to rebuild the server?
You can get around this by attaching the boot disk of the instance in question as a data disk to another instance and editing the sshd_config file there.
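A sketch of that recovery path (instance and disk names are placeholders, and the device path on the rescue VM may differ, so check lsblk first):

# Stop the locked-out instance and detach its boot disk.
$ gcloud compute instances stop locked-vm
$ gcloud compute instances detach-disk locked-vm --disk=locked-disk

# Attach it to a healthy instance as a data disk.
$ gcloud compute instances attach-disk rescue-vm --disk=locked-disk

# On rescue-vm: mount the disk and fix AllowUsers in sshd_config.
$ sudo mount /dev/sdb1 /mnt
$ sudo vi /mnt/etc/ssh/sshd_config
$ sudo umount /mnt

# Move the disk back and boot from it again.
$ gcloud compute instances detach-disk rescue-vm --disk=locked-disk
$ gcloud compute instances attach-disk locked-vm --disk=locked-disk --boot
$ gcloud compute instances start locked-vm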
Here's what I'm trying to do: set up a backup server on Google Compute Engine, where employees at my company can have their computers backup nightly via rdiffbackup. A cron job will run that runs rdiffbackup, which uses SSH to send files just like SCP.
With a "normal" server, I can create each employee a new user, and set permissions so they cannot read another employee's files.
It seems like using the "gcloud compute ssh" tool, or configuring regular ssh using "gcloud compute config-ssh", only allows connections from users who have been added to the project and have connected their computer to their Google account. My issue with this is that I don't see a way for a user to have read/write access on a server without also being a sudoer (as far as I know, anyone added to a project with "Can Edit" can get sudo). Obviously, if they have sudo, they can read others' files.
Can I give someone the ability to SSH remotely without having sudo? Thank you.
I recommend avoiding gcloud altogether for this. gcloud's SSH tools are geared towards easily administering a constantly changing set of machines in your project; they are not made to cover every use case that would also use SSH.
Instead, I recommend you set up your backup service as you would a normal server:
assign a static address
(optional) assign a DNS name
set up users on the box using adduser, as sketched below
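For the user setup, something like this per employee (the username and key file are placeholders):

# Create the account and make the home directory private,
# so other employees cannot read its files.
$ sudo adduser alice
$ sudo chmod 700 /home/alice

# Install the employee's SSH public key.
$ sudo mkdir -p /home/alice/.ssh
$ sudo cp alice.pub /home/alice/.ssh/authorized_keys
$ sudo chown -R alice:alice /home/alice/.ssh
$ sudo chmod 600 /home/alice/.ssh/authorized_keys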
You have a couple of options:
1) You can manage non-root users on your instances as you would on any normal Linux machine, by manually adding them with standard commands like 'adduser' rather than via the gsutil/UI/metadata update path.
2) Alternatively, if you need to manage a large cluster of machines, you can disable the entire ACL management provided by Google and run your own LDAP server for this. The file responsible for the account updates, which needs to be disabled, is this one:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-daemon/etc/init/google-accounts-manager-service.conf
3) Finally, you can lock down write access to the root users, i.e. disable writes propagating from the metadata server, by setting the immutable flag on the sudoers file: 'chattr +i /etc/sudoers'. It's not a graceful solution, but it is effective. This way you lock in root for the already-added users, and any new users will be added without root privileges; any new root-level user then needs to be added manually, machine by machine.
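As a sketch of option 3 (run on each instance; note the flag also blocks your own edits until you clear it):

# Freeze the sudoers file so the accounts daemon cannot rewrite it.
$ sudo chattr +i /etc/sudoers

# Clear the flag later when you need to edit sudoers yourself.
$ sudo chattr -i /etc/sudoers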