Google Compute Engine Cannot SSH using Owner Account - google-compute-engine

Google Compute Engine does not allow root login, nor does it assign a password to the default owner account.
I thought the SSH console in the Compute Engine backend could SSH to the instance regardless of the SSH config.
Obviously I was wrong. I modified the sshd_config file and did not include the default owner account in the AllowUsers parameter. Now I cannot SSH to the instance with the owner account, so I have lost all sudoer rights and am stuck.
I have, however, set up a normal user that has no sudoer rights but can SSH to the instance.
Is there any way to solve this or I have to rebuild the server?

You can get around this by attaching the boot disk of the instance in question as a data disk to another instance and editing the sshd_config file there.
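As a rough sketch of that recovery path using the gcloud CLI (instance names, the zone, and the device path below are placeholders, and the boot disk is assumed to share the broken instance's name):
gcloud compute instances delete locked-instance --zone us-central1-a --keep-disks boot
gcloud compute instances attach-disk rescue-instance --disk locked-instance --zone us-central1-a
# on the rescue instance, mount the disk and fix the config
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue            # check the actual device name with lsblk
sudo nano /mnt/rescue/etc/ssh/sshd_config   # add the owner account back to AllowUsers
sudo umount /mnt/rescue
# detach, then recreate the original instance from the repaired disk
gcloud compute instances detach-disk rescue-instance --disk locked-instance --zone us-central1-a
gcloud compute instances create locked-instance --disk name=locked-instance,boot=yes --zone us-central1-a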

Related

Using SSH tunnel to connect to remote MySQL database from Node-RED

I have a set of data rolling out of Node-RED that I want to send to a remote MySQL database. The Node-RED system is running on a Raspberry Pi. How do I make this work? I know how to do it using Node.js, but I'm not sure how to do this in Node-RED. The IP address of the Pi is dynamic, so simply authorizing its IP address sadly does not work.
Thanks in advance!
EDIT for clarification:
I want to connect to a remote MySQL database that is hosted by my web host. I have connected a Raspberry Pi to a battery, and I want to save this information in the aforementioned database. Since there will be several battery setups in different locations, I cannot save the data locally. So, one way or another, I need to access the remote database through Node-RED. Authorizing one IP address doesn't work, since the IP of the Raspberry Pi network is dynamic and thus changes. I think an SSH tunnel might be the solution, but I have no idea how to do this in Node-RED, and Google isn't very helpful.
OK, so as I said in the comments, you can create a username/password pair for MySQL that is granted permission from any IP address (which is less secure if the username/password is compromised; set the host to '%' to allow all hosts when setting up the grant options).
To reduce the risk, you can restrict the username/password to a specific subnet. This could be a Wi-Fi network or the public IP range of the cellular provider you may be using (it needs to be the public range, as nearly all cellular ISPs use CGNAT). See this question for details: How to grant remote access to MySQL for a whole subnet?
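As a minimal sketch of such a grant, run on the MySQL host (the user name, password, database name, and subnet here are placeholders):
mysql -u root -p -e "
  CREATE USER 'nodered'@'203.0.113.%' IDENTIFIED BY 'change-me';
  GRANT SELECT, INSERT ON batterydata.* TO 'nodered'@'203.0.113.%';
  FLUSH PRIVILEGES;"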
If you want to use an SSH tunnel, then this will normally be done outside Node-RED with the ssh command line, e.g.
ssh -L localhost:3306:localhost:3306 remote.host.com
Then configure the Node-RED MySQL node to point to localhost.
Since the connection will look like it's coming from localhost on the MySQL machine, you need to make sure the username/password is locked down to that host.
You will probably also want to set up public/private key authentication for the ssh connection.
You may be able to run the ssh command in the node-red-daemon node, which should restart the connection if it gets dropped.
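For example, a tunnel using key authentication might look like this (the host name, user, and key path are placeholders; -N keeps ssh from opening a remote shell):
ssh -i ~/.ssh/id_ed25519 -N -L 3306:localhost:3306 tunneluser@remote.host.com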

GCE instance does not get permissions from service account

I'm trying to set up read/write access to a Cloud Storage bucket from a GCE instance using a service account, but the instance does not get the permissions. I have done the following:
Created service account, let's say 'my-sa'
Created a bucket, let's say 'my-bucket'
In the IAM console for my project, assigned the role 'Cloud Storage admin' to the service account
Created a new GCE instance via the console, assigned to service account 'my-sa'. The access scope is then automatically set to cloud-platform
Connected to the instance using gcloud compute ssh as my user (project owner)
Ran gsutil ls gs://my-bucket
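Roughly, the equivalent commands would be something like the following (my-project and my-instance are placeholder names):
gcloud iam service-accounts create my-sa
gsutil mb gs://my-bucket
gcloud projects add-iam-policy-binding my-project --member serviceAccount:my-sa@my-project.iam.gserviceaccount.com --role roles/storage.admin
gcloud compute instances create my-instance --service-account my-sa@my-project.iam.gserviceaccount.com --scopes cloud-platform
gcloud compute ssh my-instance
gsutil ls gs://my-bucket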
Expected behaviour: get list of items in bucket
Observed behaviour:
gsutil takes about 5 seconds to think, then gives:
AccessDeniedException: 403 my-sa@my-project.iam.gserviceaccount.com does not have storage.objects.list access to bucket my-bucket.
Things I've tried:
gcloud auth list on the instance does show the service account, and shows it as being active
I've added more permissions to the service account (up to project owner), but it doesn't make a difference
I also can't use other permissions from the instance. When I give the Compute Engine Admin role to the service account, I can't run gcloud compute instances list from the instance
I've removed the .gsutil dir to make sure the cache is cleared
With the default Compute Engine service account, I can list the buckets, but not write (as expected). When I add the Cloud Storage read/write access scope from the console, I can also write
I really don't have a clue how to debug this anymore, so any help would be much appreciated

Allowing users to connect with SSH without having sudo access?

Here's what I'm trying to do: set up a backup server on Google Compute Engine, where employees at my company can have their computers backed up nightly via rdiff-backup. A cron job will run rdiff-backup, which uses SSH to send files just like SCP.
With a "normal" server, I can create each employee a new user, and set permissions so they cannot read another employee's files.
It seems like using the "gcloud compute ssh" tool, or configuring regular ssh using "gcloud compute config-ssh", only allows users to connect if they have been added to the project and have connected their computer to their Google account. My issue with this is that I don't see a way for a user to have read-write abilities on a server without also being a sudoer (anyone added to a project with "Can Edit" can get sudo, as far as I know). Obviously, if they have sudo, they can read others' files.
Can I give someone the ability to SSH remotely without having sudo? Thank you.
I recommend avoiding gcloud altogether for this. gcloud's SSH tools are geared towards easily administering a constantly changing set of machines in your project; they are not made to cover every use case that also happens to use SSH.
Instead, I recommend you set up your backup service as you would a normal server:
assign a static address
(optional) assign a dns name
set up users on the box using adduser
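A sketch of adding one such user with key-only access, assuming a Debian-based image (the user name and key file are placeholders):
sudo adduser alice                                      # not in the sudo group by default
sudo mkdir -p /home/alice/.ssh
sudo cp alice_key.pub /home/alice/.ssh/authorized_keys  # the employee's public key
sudo chown -R alice:alice /home/alice/.ssh
sudo chmod 700 /home/alice/.ssh
sudo chmod 600 /home/alice/.ssh/authorized_keys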
You have a couple of options:
1) You can manage non-root users on your instances as you would on any normal Linux machine, by manually adding them with standard commands like 'adduser' rather than via the gsutil/UI/metadata update path.
2) Alternatively, if you need to manage a large cluster of machines, you can disable the entire ACL management provided by Google and run your own LDAP server for this. The file that is responsible for the account updates, and that needs to be disabled, is this one:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-daemon/etc/init/google-accounts-manager-service.conf
3) Finally, you can lock down write access to the root users, i.e. disable writes propagating from the metadata server, by setting the immutable flag on the sudoers file: 'chattr +i /etc/sudoers'. It's not a graceful solution, but it is effective. This way you lock in root for the already-added users, and any new users will be added without root privileges; any new root-level user then needs to be added manually, machine by machine.
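A short sketch of that lock and of managing sudo manually afterwards (the 'sudo' group name is a Debian/Ubuntu assumption; the user name is a placeholder):
sudo chattr +i /etc/sudoers        # lock the file so metadata-driven edits no longer apply
sudo chattr -i /etc/sudoers        # temporarily unlock when you genuinely need to change it
sudo usermod -aG sudo bob          # manually grant sudo to a user on this one machine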

GCE SSH Access to VM Instance

I have a few servers that host customer websites. These customers access the system via SSH or SFTP for data manipulation. In GCE, I don't know what the best approach for this type of access is, considering that our hosting application creates a jailed account for the users via a control panel and billing system.
I thought about altering sshd_config to allow SSH access with passwords for users. However, the GCE documentation reveals that if an instance is rebooted or upgraded to a different machine type, SSH settings would be reset based on the image. Therefore I would lose my sshd_config alterations. I was under the impression that as long as I have a persistent boot disk I wouldn't lose such changes.
What options do I have to allow our customers to access the server via SSH, without them having to use gcutil, and to authenticate with passwords?
After some testing, I have found that enabling password SSH is as simple as modifying your sshd_config file. This file DOES NOT get reverted back to GCE defaults when using a persistent disk. So, a reboot or a VM instance migration/upgrade should keep all SSH settings intact as long as you are using a persistent disk or recovering from a snapshot.
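For example, the kind of change involved looks roughly like this, assuming a Debian-style image (the account name is a placeholder):
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo passwd customer1                 # give the account a password
sudo service ssh restart              # or: sudo systemctl restart ssh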
I tested by doing the following:
Modified SSH for password authentication (as needed)
Tested VM connectivity with just ssh vm_fqdn, without using gcutil, and was successful
Rebooted the VM instance, which kept all sshd_config changes, allowing me to still connect with passwords outside of gcutil
Recreated a different GCE instance with the persistent disk, which also kept my SSH settings, allowing me to log in without gcutil
Seems like the documentation for all SSH settings/authentication methods is geared toward VM instances that are not using persistent disks; if you reboot with a non-persistent disk, the SSH settings revert to the image defaults.

Remove user from Google Compute Engine instances

I'm using gcutil to access Google Compute Engine instances.
I noticed that when I spin up a new instance, the new instance has a user that I used on a previous machine in this project.
I want to remove that user - not just from this machine, which of course I can do via the normal *nix processes, but I want to ensure it is not used for any future Compute Engine instances.
How can I do this?
By default, once a user has run the gcloud auth login command and authenticated with the cloud project, their SSH key is added to the project's Compute Engine common metadata, stored under the sshKeys key/value pair. These keys are then inherited by all instances within the project, providing SSH login access to those instances.
To prevent an existing user from having SSH permissions on a project's instances, you will need to modify this value, keeping only the public keys of the users you wish to have access. It can be found in the Cloud Console within your project, under Compute Engine and then Metadata. In your case, all the users may be you, just logged in from different clients.
However, you cannot modify the existing metadata from there; you need to use the gcutil setcommoninstancemetadata command to re-insert the modified sshKeys value (see https://developers.google.com/compute/docs/metadata#common). From my experimentation this appears to reset ALL common metadata for the project, so if you have more than just the default sshKeys set on your project, you will need to add the other values back in at the same time from the command line.
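If you are on the newer gcloud CLI rather than gcutil, a rough sketch of the same edit would be the following (the metadata key may appear as sshKeys or ssh-keys depending on image age, so verify on your project; the file name is a placeholder):
gcloud compute project-info describe
# copy the current sshKeys value into edited-keys.txt, remove the unwanted entries, then re-set it:
gcloud compute project-info add-metadata --metadata-from-file sshKeys=edited-keys.txt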