I have a few servers that host customer websites. These customers access the system via SSH or SFTP for data manipulation. In GCE, I'm not sure what the best approach is for this type of access, considering our hosting application creates a jailed account for each user via a control panel and billing system.
I thought about altering sshd_config to allow SSH access with passwords for users. However, the GCE documentation suggests that if an instance is rebooted or upgraded to a different machine type, the SSH settings are reset based on the image, so I would lose my sshd_config alterations. I was under the impression that as long as I have a persistent boot disk I wouldn't lose such changes.
What options do I have to let our customers access the server via SSH, authenticating with passwords, without having to use gcutil?
After some testing, I have found that enabling password SSH is as simple as modifying your sshd_config file. This file does NOT get reverted to the GCE defaults when you are using a persistent disk. So a reboot or a VM instance migration/upgrade should keep all SSH settings intact as long as you are using a persistent disk or recovering from a snapshot.
I tested by doing the following:
Modified sshd_config for password authentication as needed (see the sketch after this list)
Tested VM connectivity with just ssh vm_fqdn, without using gcutil, and connected successfully
Rebooted the VM instance, which kept all sshd_config changes and still let me connect with a password outside of gcutil
Recreated a new GCE instance using the same persistent disk, which also kept my SSH settings and let me log in without gcutil
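For reference, the sshd_config change in the first step is only a directive plus a daemon reload; a minimal sketch (the exact defaults and the service name vary by image):

    # /etc/ssh/sshd_config (edit as root): allow password logins
    PasswordAuthentication yes

    # then reload sshd so the change takes effect
    sudo service ssh reload        # or: sudo systemctl reload sshd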
It seems the documentation for the SSH settings/authentication methods is geared toward VM instances that are not using persistent disks: on a reboot, an instance with a non-persistent disk reverts to the default SSH settings.
Google Compute Engine does not allow root login and does not assign any password to the default owner account.
I thought the SSH console in the Compute Engine backend could SSH to the instance regardless of the SSH config.
Obviously I was wrong: I modified the sshd_config file and did not include the default owner account in the AllowUsers parameter. Now I cannot SSH to the instance with the owner account, so I have lost all sudoer rights and am stuck.
I have, however, set up a normal user which has no sudoer rights but can SSH to the instance.
Is there any way to solve this, or do I have to rebuild the server?
You can work around this by attaching the boot disk of the instance in question as a data disk to another instance and editing the sshd_config file there.
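A minimal sketch of that recovery path using current gcloud commands; the instance names (locked-vm, rescue-vm), the disk name, and the device path are placeholders and will differ in your project:

    # Stop the locked-out instance and detach its boot disk (add --zone as needed)
    gcloud compute instances stop locked-vm
    gcloud compute instances detach-disk locked-vm --disk locked-vm-disk

    # Attach the disk to a healthy instance as a secondary (data) disk
    gcloud compute instances attach-disk rescue-vm --disk locked-vm-disk

    # On rescue-vm: mount the root partition and fix sshd_config
    sudo mkdir -p /mnt/rescue
    sudo mount /dev/sdb1 /mnt/rescue           # check the real device with lsblk
    sudo vi /mnt/rescue/etc/ssh/sshd_config    # re-add the owner account to AllowUsers
    sudo umount /mnt/rescue

    # Move the disk back and start the original instance again
    gcloud compute instances detach-disk rescue-vm --disk locked-vm-disk
    gcloud compute instances attach-disk locked-vm --disk locked-vm-disk --boot
    gcloud compute instances start locked-vm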
Here's what I'm trying to do: set up a backup server on Google Compute Engine, where employees at my company can have their computers backed up nightly via rdiff-backup. A cron job will run rdiff-backup, which uses SSH to send files just like SCP.
With a "normal" server, I can create a new user for each employee and set permissions so they cannot read another employee's files.
It seems that using the "gcloud compute ssh" tool, or configuring regular ssh with "gcloud compute config-ssh", only lets users connect if they have been added to the project and have linked their computer to their Google account. My issue is that I don't see a way for a user to have read-write access on the server without also being a sudoer (as far as I know, anyone added to a project with "Can Edit" can get sudo). Obviously, if they have sudo, they can read other people's files.
Can I give someone the ability to SSH remotely without having sudo? Thank you.
I recommend avoiding gcloud altogether for this. gcloud's SSH tools are geared towards easily administering a constantly changing set of machines in your project; they are not made to cover every use case that involves SSH.
Instead, I recommend you set up your backup service as you would a normal server:
assign a static address
(optional) assign a DNS name
set up users on the box using adduser
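A minimal sketch of the per-employee setup, assuming a hypothetical user alice and a dedicated backup directory; paths, names, and the key file are placeholders:

    # Create a locked-down user (no password login, SSH key only)
    sudo adduser --disabled-password --gecos "" alice

    # Give the user a private backup area that nobody else can read
    sudo mkdir -p /srv/backups/alice
    sudo chown alice:alice /srv/backups/alice
    sudo chmod 700 /srv/backups/alice

    # Install the employee's public key so rdiff-backup can connect over SSH
    sudo -u alice mkdir -p -m 700 /home/alice/.ssh
    cat alice_id_rsa.pub | sudo -u alice tee -a /home/alice/.ssh/authorized_keys
    sudo -u alice chmod 600 /home/alice/.ssh/authorized_keys

The nightly cron job on the employee's machine would then run something like rdiff-backup ~/Documents alice@backuphost::/srv/backups/alice (hostname and paths again hypothetical).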
You have a couple of options:
1) You can manage non-root users on your instances as you would on any normal Linux machine, by adding them manually with standard commands like 'adduser' rather than via the gsutil/UI/metadata update path.
2) Alternatively, if you need to manage a large cluster of machines, you can disable the ACL management provided by Google entirely and run your own LDAP server instead. The file responsible for the account updates, which needs to be disabled, is this one:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-daemon/etc/init/google-accounts-manager-service.conf
3) Finally, you can lock down write access to root users, i.e. stop changes propagating from the metadata server, by setting the immutable flag on the sudoers file with 'chattr +i /etc/sudoers'. It's not a graceful solution, but it is effective. This way root stays locked in for the already-added users, and any new users are added without root privileges; any new root-level user then needs to be added manually, machine by machine.
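A minimal sketch of option 3; the username is a placeholder, and it assumes the image's sudoers file already grants the sudo (or google-sudoers) group:

    # Freeze /etc/sudoers so automated account updates can no longer change it
    sudo chattr +i /etc/sudoers
    lsattr /etc/sudoers                  # the 'i' (immutable) flag should now be listed

    # New users then come in unprivileged; grant root manually, per machine
    sudo adduser newadmin
    sudo usermod -aG sudo newadmin       # or google-sudoers, depending on the image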
I have a MySQL database on Amazon RDS. When I created this database I unselected the "public access" option, which cannot be changed after the database is created. This means that my database instance can only be accessed from inside the VPC.
So now I would like to access the database from my local computer by pointing my MySQL client at an EC2 instance I have inside the network that can reach the database. I want this server to act as my MySQL gateway so I can access the database locally.
I just had to do this same thing. The process is to set up an SSH tunnel through the EC2 instance to the database. I wrote a post about the whole process that should be helpful.
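In short, a local port is forwarded over SSH through the EC2 instance to the RDS endpoint. A minimal sketch; the key path, EC2 host, RDS endpoint, and ports are all placeholders:

    # Forward local port 3307 through the EC2 instance to the RDS endpoint
    ssh -i ~/.ssh/my-ec2-key.pem -N \
        -L 3307:mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306 \
        ec2-user@ec2-198-51-100-10.compute-1.amazonaws.com

    # In another terminal, connect as if the database were local
    mysql -h 127.0.0.1 -P 3307 -u dbuser -p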
There are a couple of options:
Take a snapshot of the database and spin up a new copy that does allow public access. You can then use security groups to allow access only from your IP - that way you keep most of the security benefit while gaining easy access from your machine.
If you don't want to do that, as datasage mentions, your other option is to use an SSH tunnel - this means creating an EC2 instance in the same VPC that CAN access the RDS instance, then using PuTTY or your favourite SSH client to tunnel traffic through that 'bastion' EC2 instance to your database. This adds a layer of security, but it is also more work to manage, depending on your familiarity with SSH - not to mention the added cost of the EC2 instance.
Which user does Hudson CI use to run cmd.exe?
I'm trying to start and stop some remote services on various slaves, and this needs special credentials that are different from what Hudson is using. I can't find a place to override the user. I've tried running the server as various users, but it doesn't change anything.
Any other ideas?
Since you want to start and stop the services on the remote machine, you need to log in with these credentials on that machine; I haven't found a way to start and stop a service on a remote machine directly.
There are different ways to do that. You can create a slave that runs on the remote machine with the correct credentials. You can even create more than one slave for the same machine without any issues, so you can use different credentials for the same machine. These can then fire off the net stop and net start commands.
You can also use the SSH plugin. This allows you to configure pre- and post-build SSH scripts. You 'just' need an SSH server on the Windows machine. The password for the connection will be stored encrypted.
Use a command-line tool. So far I haven't found an on-board Windows tool for a scripted login to the remote machine, so I would use plink for that task. plink is the scripted version of PuTTY. PuTTY supports different connection types, so you can also use the built-in Telnet service (not recommended, since Telnet does not encrypt the connection). The disadvantage is that you will have the password unencrypted in the job configuration.
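A minimal sketch of the plink approach as an 'Execute Windows batch command' build step; host, user, password, and service name are placeholders, and (as noted) the password ends up in plain text in the job configuration:

    rem Stop the service on the remote machine, deploy, then start it again
    plink -ssh -batch svcadmin@remotehost -pw s3cret "net stop MyService"
    rem ... copy files / run the deployment here ...
    plink -ssh -batch svcadmin@remotehost -pw s3cret "net start MyService"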
We had a similar problem, and I resorted to using PsExec. To my advantage, our machines exist on a separate LAN, behind two firewalls, so I was OK with unencrypted passwords floating around. I had also explored SSH with PuTTY, which seemed to work but was not straightforward.
If someone can help with a single-line runas command, that could work too.
You don't say how your slaves are connected to Hudson, but I'll assume it's through the "hudson slave" service, since that's probably the most popular way to connect Windows slaves.
If so, cmd.exe runs with the same permissions as the user running the service. You can check this as follows:
1. run services.msc
2. double-click hudson-slave service
3. go to Log On tab
By default, the slave service runs as "LocalSystem", which is the most powerful account on the system. It should be able to do whatever you need it to do (e.g. start/stop services).
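The same check, and a change of log-on account, can also be scripted with the built-in sc tool; a small sketch, assuming the service is registered as hudson-slave (the service name and account are placeholders):

    rem Show which account the slave service logs on as (SERVICE_START_NAME)
    sc qc hudson-slave

    rem Switch the service to a specific user instead of LocalSystem
    sc config hudson-slave obj= "MYDOMAIN\buildsvc" password= "s3cret"
    sc stop hudson-slave
    sc start hudson-slave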
Hi, thanks for reading my question. I currently use the Mac Terminal to work with MySQL, connecting to either localhost or a remote server. Should I be using SSH?
You won't need SSH to access a DB on your local machine.
You can use SSH to access a remote DB using MySQL. You can also use an app with a GUI like Sequel Pro to access the remote DB via an SSH tunnel.
What specifically are you trying to achieve?
There is not enough information to answer your question.
Normally, SSH tunnels are used more for ad-hoc work while preserving a high level of security (they can be used in production, too).
MySQL normally uses unencrypted traffic, but it can be set up to use SSL, so that's another path you can take.
Other alternatives are VPNs, for example OpenVPN among other solutions, but this is more of an infrastructure decision.
EDIT: For completeness
On the local machine, clients can communicate with mysqld over a socket or a local IP. Normally it is not necessary to encrypt such connections.
For remote connections (which go over IP), as stated, MySQL uses an unencrypted connection by default, and FTP by default is also unencrypted. This might or might not be a security risk (for example, if that particular network segment is on its own VLAN, inside an already encrypted tunnel, or on a physically secured network, it may be acceptable).
If unsure - encrypt it.
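Both encrypted paths are quick to try from the client side; a minimal sketch with placeholder hostnames, ports, and users (--ssl-mode needs a reasonably recent MySQL client):

    # Option 1: SSH tunnel - forward a local port to MySQL on the remote host
    ssh -N -L 3307:127.0.0.1:3306 user@db.example.com
    mysql -h 127.0.0.1 -P 3307 -u dbuser -p

    # Option 2: direct connection with TLS required (the server must have SSL enabled)
    mysql --ssl-mode=REQUIRED -h db.example.com -u dbuser -p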