How can I prevent GCE from copying ssh keys to all new instances? - google-compute-engine

When I create a new VM instance via Cloud Console, homedirs are automatically created for users that I have created manually on previous instances, and ssh-keys are copied to ~/.ssh/authorized_keys in respective homedirs.
I don't want that! This is IMHO a serious security flaw.
I don't want any users automatically created, I don't want any ssh keys automatically copied.
How can I achieve that?

You can specify the specific users & SSH keys to use for an instance by setting the instance level sshKeys metadata key. You can also do this from the command line using gcutil's --authorized_ssh_keys option:
$ gcutil addinstance <instance-name> --authorized_ssh_keys=username1:/path/to/keyfile1,username2:/path/to/keyfile2,...
If you want to make sure that no instances get the full set of users/keys, you can remove the sshKeys project level metadata key. From the Console, click Compute Engine, then Metadata, then click the trash can icon next to the sshKeys key. You will then need to specify keys for each instance, or you will not be able to log in at all. (which may be what you want in a fully automated environment)
Note: Running gcutil ssh will generate a key-pair (if needed) and add it to the sshKeys key.

Google adds these SSH keys to the project-level SSH keys automatically, so you need to block project-wide SSH keys: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys
You can do it via meta-data:
"block-project-ssh-keys": "true"
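If you prefer the command line, the same metadata can be set with gcloud (the instance name my-instance is a placeholder):

```shell
# Block project-wide SSH keys on a single instance
# (instance name is a placeholder; adjust to your own)
gcloud compute instances add-metadata my-instance \
    --metadata block-project-ssh-keys=TRUE
```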

Related

How to disable logs for whole cluster in gce

Is it possible to disable logging for all servers inside an already created (YARN/Hadoop) cluster?
I can't find anything like it. Is there anything in Dataproc or Compute Engine that can help me disable the logs?
One easy way would be to create an exclusion in Stackdriver Logging that would prevent logs from that cluster from being ingested into Stackdriver.
You can create a resource-based exclusion in Stackdriver: select the Dataproc cluster you want and it will stop collecting any of its logs, and hence stop billing you for them.
Go to the Logs Ingestion page, select Exclusions, and click the blue "Create Exclusion" button.
As the resource type, select "Cloud Dataproc Cluster" > your_cluster_name > All cluster_uuid. Also select "no limit" for the time frame.
Fill in the "Name" field on the right and again click the blue "Create Exclusion" button.
You can create up to 50 exclusion queries in StackDriver.
With a little help from Google support, here is a complete solution for disabling logging for a whole YARN/Hadoop cluster.
This is only possible when creating a new Dataproc cluster, either through the Cloud Console or the command line.
The property to set in the cluster properties field is:
dataproc:dataproc.logging.stackdriver.enable=false
More info at: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/cluster-properties
If you create the cluster from the command line, refer to https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/create#--properties and use a command like:
gcloud dataproc clusters create <cluster-name> --properties='dataproc:dataproc.logging.stackdriver.enable=false'
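To confirm the property took effect, you can describe the cluster afterward (the cluster name and region below are placeholders):

```shell
# Check that the logging property was applied to the new cluster
# (my-cluster and us-central1 are placeholders)
gcloud dataproc clusters describe my-cluster --region=us-central1 \
    --format='value(config.softwareConfig.properties)'
```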

Why am I asked for a password when using the su command on GCE?

Apologies in advance: I am using a translation tool, so my phrasing may be awkward.
The problem is that the su command is no longer available.
I suspect the cause is that the same account was accessed simultaneously from different terminals.
Please tell me how to resolve this.
I had the same problem today. You need to enable OS Login by adding the necessary metadata to your instance (and likely to your project as well).
Instructions below; this solved it for me. Hope it helps.
https://cloud.google.com/compute/docs/instances/managing-instance-access#enable_oslogin
To set enable-oslogin in project-wide metadata so that it applies to all of the instances in your project:
Go to the Metadata page.
Click Edit.
Add a metadata entry where the key is enable-oslogin and the value is TRUE. Alternatively, set the value to FALSE to disable the feature.
Click Save to apply the changes.
To Set enable-oslogin in metadata of an existing instance:
Go to the VM instances page.
Click the name of the instance on which you want to set the metadata value.
At the top of the instance details page, click Edit to edit the instance settings.
Under Custom metadata, add a metadata entry where the key is enable-oslogin and the value is TRUE. Alternatively, set the value to FALSE to exclude the instance from the feature.
At the bottom of the instance details page, click Save to apply your changes to the instance.
To Set enable-oslogin in instance metadata when you create an instance:
In the GCP Console, go to the VM Instances page.
Click Create instance.
On the Create a new instance page, fill in the desired properties for your instance.
In the Metadata section, add a metadata entry where the key is enable-oslogin and the value is TRUE. Alternatively, set the value to FALSE to exclude the instance from the feature.
Click Create to create the instance.
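The same settings can also be applied from the command line; a hedged sketch with gcloud (the instance name is a placeholder):

```shell
# Enable OS Login for every instance in the project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

# Or enable it for a single existing instance (name is a placeholder)
gcloud compute instances add-metadata my-instance --metadata enable-oslogin=TRUE
```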

Run container without assigning a public IP using devops

I want to run a docker container (using Bluemix DevOps Services) without assigning a public IP. Wondering how to do that...its always assigning a public IP.
Thx
The current default deploy script (you can see the git in the script box) for a single container is https://github.com/Osthanes/deployscripts/blob/master/deploycontainer.sh
Looking at that, the port field is optional, but if not set, it defaults it to 80, like you're seeing. Simplest solution would be to point it to an unused port and ignore it, or you could fork the script and modify the git to clone your fork instead.
To not assign a public ip - one way is to switch from the default 'red_black' deployment strategy to 'simple'. A side effect is that simple does not clean up the previous deploy, so if you want it to still do that, add an additional instance of the job on that same stage, with the strategy set to 'clean', and that will remove old instances. As before, if you choose to fork the scripts, you can change that behavior in yours to whatever you like.
The public IP when you create a container on the IBM container service is optional.
You only need to bind the IP when you want to use it from the Internet.
Which DevOps tool are you using? Maybe it is missing an option.
Ralph
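If you are driving the IBM Containers service directly, the ice CLI of that era separated running a container from binding a public IP. A hedged sketch (the image path and names are placeholders, and the exact subcommands should be checked against your installed ice version):

```shell
# Run the container without assigning a public IP
ice run --name my-container registry.ng.bluemix.net/my_namespace/my_image

# Only if/when you need Internet access, request and bind an IP
# (the IP address shown is a placeholder returned by `ice ip request`)
ice ip request
ice ip bind 129.41.0.1 my-container
```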

Cannot create a Google Compute VM instance

I thought that this would be relatively straightforward, but I cannot start a Google Compute Engine instance at all. I am creating an instance through the web interface, but get an error after clicking the "Create" button.
The error that appears in the activity log is:
Invalid value for field 'resource.type':
'https://www.googleapis.com/compute/v1/projects//zones/asia-east1-b/diskTypes/pd-standard'.
Must be a URL to a valid Compute resource of the correct type.
Here is a screen shot of my instance settings:
Any ideas about what is going wrong? I have tried different zones and VM sizes.
Not sure why the Console is failing, but note that the error URL contains an empty project segment ('projects//zones/...'), which suggests the project ID is not being picked up in the request. In any case, you can always create the instance directly from the terminal with gcutil.
This command should do the trick: gcutil addinstance --zone=asia-east1-b --image=debian-7-wheezy-v20140606 --machine_type=g1-small bigquery-bi-instance
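gcutil has since been deprecated; a rough gcloud equivalent would be the following (the Debian 7 image from the original command is long gone, so the image family here is an assumption, not a recommendation):

```shell
# Hedged gcloud equivalent of the gcutil command above
gcloud compute instances create bigquery-bi-instance \
    --zone=asia-east1-b \
    --machine-type=g1-small \
    --image-family=debian-11 \
    --image-project=debian-cloud
```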

how do you make use of AclExtension and mercurial-server/hg-ssh?

mercurial-server manages user database under keys folder. Users and groups are represented by files and folders.
AclExtension relies on linux user group through ssh.
they don't seem to match. or did I miss something?
I have managed to make mercurial-server work. but just don't see how to integrate AclExtension with it so I may have finer grained access control.
Unfortunately, the AclExtension does key its access off of usernames. If you are creating separate UNIX user accounts for each user with hg-ssh, you've got everything you need, but if all of your ssh users share the same UNIX account then the AclExtension isn't going to work for you.
Unless...
I did just look into the acl.py file and it looks like it uses the getpass.py module's getuser which checks the environment for the user name using this code:
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = os.environ.get(name)
        if user:
            return user
so it might be possible to fake that out by setting an environment variable in the hg-ssh user's authorized_keys file like this:
command="hg-ssh path/to/repo" environment="LOGNAME=fakeusername" ssh-dss ...
where then you could put fakeusername in ACL rules, and could have a different fakeusername for each key, all running under the same UNIX account.
BTW: Everyone seems to just use hg-ssh alone, I never see the (non-official) mercurial-server app used anymore.
The environment trick doesn't seem to work on my Solaris box; my solution was to pass in the fakeusername as a parameter to hg-ssh and have that set os.environ['LOGNAME'] so that getpass sees it.
command="hg-ssh fakeusername" ssh-dss ...
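A minimal sketch of that workaround in Python (the function name and argument handling here are assumptions; the real hg-ssh also validates the repository path it receives via SSH_ORIGINAL_COMMAND):

```python
import os
import sys


def apply_fake_username(argv):
    """Export the fake username passed to hg-ssh as LOGNAME, so that
    getpass.getuser() (and hence the AclExtension) sees it instead of
    the shared UNIX account name."""
    if len(argv) < 2:
        raise SystemExit("usage: hg-ssh fakeusername")
    os.environ['LOGNAME'] = argv[1]
    return argv[1]


if __name__ == '__main__':
    user = apply_fake_username(sys.argv)
    # The real hg-ssh would now go on to dispatch `hg serve --stdio`
    # for the requested repository, with ACL rules matching `user`.
    print(user)
```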