Here's what I'm trying to do: set up a backup server on Google Compute Engine, where employees at my company can have their computers back up nightly via rdiff-backup. A cron job will run rdiff-backup, which uses SSH to send files, just like scp.
With a "normal" server, I can create a new user for each employee and set permissions so they cannot read another employee's files.
It seems like the "gcloud compute ssh" tool, or configuring regular SSH with "gcloud compute config-ssh", only lets users connect who have been added to the project and have linked their computer to their Google account. My issue with this is that I don't see a way to give a user read-write access on a server without also making them a sudoer (as far as I know, anyone added to a project with "Can Edit" can get sudo). Obviously, if they have sudo, they can read other employees' files.
Can I give someone the ability to SSH remotely without having sudo? Thank you.
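For reference, the nightly client-side job described above could be a crontab entry along these lines (the schedule, username, and paths are made-up placeholders, not taken from the question):

```
# /etc/cron.d/rdiff-backup: push the home directory to the backup server at 02:30
30 2 * * * alice rdiff-backup /home/alice alice@backup-host::/srv/backups/alice
```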
I recommend avoiding gcloud altogether for this. gcloud's SSH tools are geared toward easily administering a constantly changing set of machines in your project; they were not made to cover every use case that would also use SSH.
Instead, I recommend you set up your backup service as you would a normal server:
assign a static address
(optional) assign a DNS name
set up users on the box using adduser
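As a minimal sketch of the isolation this buys you, the snippet below models one mode-700 directory per employee (usernames and paths are examples; on the real VM you would run adduser as root, which gives each employee such a home directory):

```shell
# Model the backup layout: one directory per employee, mode 700,
# so no employee can read another employee's files.
base=$(mktemp -d)
for u in alice bob; do
    mkdir -m 700 "$base/$u"
done
# Only the owner (and root) can enter each directory:
stat -c '%a %n' "$base/alice" "$base/bob"
```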
You have a couple of options:
1) You can manage non-root users on your instances as you would on any normal Linux machine, by adding them manually with standard commands like adduser rather than via the gsutil/UI/metadata update path.
2) Alternatively, if you need to manage a large cluster of machines, you can disable the entire ACL management provided by Google and run your own LDAP server instead. The file responsible for the account updates, which needs to be disabled, is this one:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-daemon/etc/init/google-accounts-manager-service.conf
3) Finally, you can lock down write access to the root users, i.e. stop changes from propagating from the metadata server, by setting the immutable flag on the sudoers file: chattr +i /etc/sudoers. It's not a graceful solution, but it is effective. This way you lock in root for the already-added users, and any new users will be added without root privileges; any new root-level user then needs to be added manually, machine by machine.
Related
Google Compute Engine does not allow root login, nor does it assign any password to the default owner account.
I thought the SSH console in the Compute Engine backend could SSH to the instance regardless of the SSH config.
Obviously I was wrong: I modified the sshd_config file and did not put the default owner account in the AllowUsers parameter. Right now I cannot SSH to the instance using the owner account, so I have lost all sudoer rights and am stuck.
I have, however, set up a normal user which has no sudoer rights but can SSH to the instance.
Is there any way to solve this, or do I have to rebuild the server?
You can get around this by attaching the boot disk of the instance in question as a data disk to another instance and editing the sshd_config file from there.
I have a few servers that host customer websites. These customers access the system via SSH or SFTP for data manipulation. In GCE, I don't know the best approach for this type of access, considering our hosting application creates a jailed account for each user via a control panel and billing system.
I thought about altering sshd_config to allow SSH access with passwords for users. However, the GCE documentation suggests that if an instance is rebooted or upgraded to a different machine type, the SSH settings are reset based on the image, so I would lose my sshd_config alterations. I was under the impression that as long as I have a persistent boot disk, I wouldn't lose such changes.
What options do I have to allow our customers to access the server via SSH, without them having to use gcutil, and to authenticate with passwords?
After some testing, I have found that enabling SSH access is as simple as modifying your sshd_config file. This file does NOT get reverted to GCE defaults if you are using a persistent disk. So a reboot or a VM instance migration/upgrade should keep all SSH settings intact, as long as you are using a persistent disk or recovering from a snapshot.
I tested by doing the following:
Modifying SSH for password authentication (as needed)
Tested VM connectivity with just ssh vm_fqdn, without using gcutil, and it was successful
Rebooted the VM instance, which kept all sshd_config changes, allowing me to still connect with passwords outside of gcutil
Recreated a different GCE instance with the persistent disk, which also kept my SSH settings, allowing me to log in without gcutil
It seems the documentation for SSH settings and authentication methods is geared toward VM instances that are not using persistent disks: with a non-persistent disk, a reboot would bring back the default SSH settings.
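For example, the password-authentication change from step 1 boils down to one line in sshd_config. The sketch below edits a scratch copy of the file; on a real VM you would edit /etc/ssh/sshd_config as root and then restart sshd:

```shell
conf=$(mktemp)
# Stand-in for /etc/ssh/sshd_config with the usual GCE default:
printf 'PasswordAuthentication no\n' > "$conf"
# Flip the setting to allow password logins:
sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' "$conf"
cat "$conf"
# On the real server, follow up with something like: service ssh restart
```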
I'm using gcutil to access Google Compute Engine instances.
I noticed that when I spin up a new instance, the new instance has a user that I used on a previous machine in this project.
I want to remove that user, not just from this machine (which of course I can do via the normal *nix processes); I want to ensure it is not used for any future Compute Engine instances.
How can I do this?
By default, once a user has run the gcloud auth login command and authenticated with the cloud project, their SSH key is added to the project's Compute Engine common metadata, stored under the sshKeys key/value pair. These keys are then inherited by all instances within the project, granting SSH login access to those instances.
To prevent an existing user from having SSH access to a project's instances, you will need to modify this value, keeping only the public keys of the users you wish to have access. It can be found in the Cloud Console within your project, under Compute Engine and then Metadata. In your case, all the users may be you, just logged in from different clients.
However, you cannot modify the existing metadata from there; you need to use the gcutil setcommoninstancemetadata command to re-insert the modified sshKeys value (see https://developers.google.com/compute/docs/metadata#common). From my experimentation, this appears to reset ALL common metadata for the project, so if you have more than just the default sshKeys set on your project, you will need to add the other values back in at the same time from the command line.
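The sshKeys value is just a list of username:public-key entries, one per line, so pruning a user before re-inserting the metadata can be done with a simple filter. The usernames and keys below are made up:

```shell
# A saved copy of the project's current sshKeys metadata value:
cat > sshKeys.txt <<'EOF'
alice:ssh-rsa AAAAB3NzaC1yc2EexampleKey1 alice@laptop
bob:ssh-rsa AAAAB3NzaC1yc2EexampleKey2 bob@desktop
EOF
# Drop bob, keep everyone else; sshKeys.new is what you would
# re-insert with gcutil setcommoninstancemetadata:
grep -v '^bob:' sshKeys.txt > sshKeys.new
cat sshKeys.new
```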
Where does Hudson CI get the user to run cmd.exe?
I'm trying to start and stop some remote services on various slaves, and this needs special credentials that are different from what Hudson is using. I can't find a place to override the user. I've tried running the server as various users, but it doesn't change anything.
Any other ideas?
Since you want to start and stop the services on the remote machine, you need to log in with those credentials on the remote machine; I haven't found a way to start and stop a service on a remote machine otherwise.
There are different ways to do that. You can create a slave that runs on the remote machine with the correct credentials. You can even create more than one slave for the same machine without any issues, so you can use different credentials for the same machine. These slaves can then fire off the net stop and net start commands.
You can also use the SSH plugin. This allows you to configure pre- and post-build SSH scripts. You 'just' need an SSH server on the Windows machine. The password for the connection will be stored encrypted.
Use a command line tool. So far I haven't found an on-board Windows tool for a scripted login to a remote machine, so I would use plink for that task. plink is the scripted version of PuTTY. PuTTY supports different connection types, so you could also use the built-in Telnet service (not recommended, since Telnet does not encrypt the connection). The disadvantage is that the password will sit unencrypted in the job configuration.
We had a similar problem, and I resorted to using PsExec. To my advantage, our machines live on a separate LAN behind two firewalls, so I was OK with unencrypted passwords floating around. I had also explored SSH with PuTTY, which seemed to work but was not straightforward.
If someone can help with a single-line runas command, that could work too.
You don't say how your slaves are connected to Hudson, but I'll assume it's through the "Hudson slave" service, since that's probably the most popular way to connect Windows slaves.
If so, the CMD.EXE is run with the same permissions as the user running the service. This can be checked by:
1. run services.msc
2. double-click hudson-slave service
3. go to Log On tab
By default, the slave service runs as "LocalSystem", which is the most powerful account on the system. It should be able to do whatever you need it to do (e.g. start/stop services).
I'm new to MySQL and I'm using a desktop DB management app called "Querious" to simplify the process while I learn.
I want to work on (mainly just structure & basic population) a database that's hosted elsewhere, but the host won't allow any remote MySQL calls on their server.
What is their reasoning for restricting MySQL calls to localhost only? Is this a security or a performance concern?
This is a security concern. The idea is that if people can't connect remotely, an attacker has to compromise the system itself, not just the files that hold the database information.
You may be able to request that they just add your IP address to a trusted-host list, but I doubt they'll do that either.
It's fairly common practice not to allow remote DB connections.
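Concretely, the localhost-only restriction is usually just the bind-address setting in the server's my.cnf, which tells mysqld to listen on the loopback interface only (a sketch, assuming a stock MySQL config layout):

```ini
[mysqld]
# Accept TCP connections from this machine only:
bind-address = 127.0.0.1
# Even stricter alternative: no TCP listener at all, Unix sockets only.
# skip-networking
```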
I've run into this problem with GoDaddy, where this is implemented by default. You can change it, however, by indicating that you want to allow remote access. If you've already created your DB, though, you can't change the setting, so I would recommend creating a new DB and deleting the old one.
The reason is security: if only your app can call your DB, you don't have to worry about other people trying to access it.
Distill,
An improperly configured MySQL instance is dangerous, whether the user is remote or local. It could allow malicious attackers to cause crashes or to execute arbitrary code remotely (i.e., own the machine).
You can use PuTTY to create a tunnel, if the server allows it, so that your application traffic goes through SSH and is then forwarded to the correct port on localhost.
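In OpenSSH terms (PuTTY's Tunnels page maps onto the -L flag), the tunnel command would look like the sketch below; the user, host, and ports are placeholders, and the command is only printed here rather than run, since it needs a live server:

```shell
# -N: no remote shell, just forwarding.
# -L: connections to local port 3306 are forwarded to port 3306
#     on the server's own loopback, so your client can talk to
#     127.0.0.1:3306 as if MySQL were local.
cmd='ssh -N -L 3306:127.0.0.1:3306 dbuser@db.example.com'
echo "$cmd"
```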