GCE - different username if I use SSH or log in from terminal? - google-compute-engine

I created a new project with nothing in it.
When I created my first micro instance, I did the following:
Connect to it using the browser-window SSH.
I see:
user_name@instance-1:~$
If I connect using the gcloud command:
gcloud compute --project "projectname-165421" ssh --zone "us-east1-b" "instance-1"
I am brought to:
username@instance-1:~$
Why is this happening, and how do I resolve it?
This creates two separate users, which is causing a great deal of confusion for me!
Thank you

By default, the Cloud SDK will try to connect as the user running the command.
If you look at the docs, it says you can specify the username as you would with your default SSH client.
Meaning, on your computer:
gcloud compute --project "projectname-165421" ssh --zone "us-east1-b" "user_name@instance-1"
Alternatively, switch user in the browser-window SSH:
sudo su username

I use this command to fix it on macOS:
LOGNAME=ubuntu gcloud compute ssh app1

In answer to some of the comments: yes, you can change the default username. This is stored in the environment variable $USER. You can view its current value by running the following in your local terminal:
echo $USER
You can set this variable to the default user you would like by running the following (assuming you are using bash or zsh):
export USER=the_new_default_user
For example, if the new desired default user is "pablo" then:
export USER=pablo
Now, every time you run the following command to SSH into your instance, you'll connect as pablo:
gcloud compute ssh MY_INSTANCE
I hope this helps!

Related

MySQL container password security

I have a couple of questions about password security in a MySQL container. I use the mysql/mysql-server:8.0 image.
The 1st question is:
Is using the MYSQL_PASSWORD env var in a mysql container based on the image above secure?
I elaborate a bit more about this below.
I set the MySQL password for the container by Kubernetes env var injection, that is, by setting the MYSQL_PASSWORD env var in the mysql container using a Kubernetes Secret referenced as an env var in the manifest file. Is this secure? That is my 1st question. The notes following the table on this page say that using the MYSQL_PWD env var (note: this is not MYSQL_PASSWORD) is extremely insecure, because ps can display the environment of running processes and any other user could exploit that. Does this also apply to the container situation using MYSQL_PASSWORD instead of MYSQL_PWD?
The 2nd question is:
Is running mysql -h 127.0.0.1 -p${MYSQL_PASSWORD} in the same mysql container secure?
I need to run a similar command in a k8s readiness probe. The warning section of this page says that running mysql -phard-coded-password is not secure. I'm not sure whether the password is still insecure when the env var is used as above, and I'm also not sure whether this warning applies to the container case.
Thanks in advance!
If your security concerns include protecting your database against an attacker with legitimate login access to the host, then the most secure option is to pass the database credentials in a file. Both command-line options and environment variables, in principle, are visible via ps.
For the case of the database container, the standard Docker Hub images don't have paths to provide credentials this way. If you create the initial database elsewhere and then mount the resulting data directory on your production system (consider this like restoring a backup) then you won't need to set any of the initial data variables.
here$ docker run -it -v "$PWD/mysql:/var/lib/mysql" -e MYSQL_ROOT_PASSWORD=... -e MYSQL_PASSWORD=... mysql
^C
here$ scp -r ./mysql there:
here$ ssh there
# without any -e MYSQL_*=... options
there$ docker run -v "$PWD/mysql:/var/lib/mysql" -p 3306:3306 mysql
More broadly, there are two other things I'd take into account here:
Anyone who can run any docker command at all can very easily root the entire host. So if you're broadly granting Docker socket access to anyone with login permission, they can easily find out the credentials (if nothing else they can docker exec a cat command in the container to dump the credentials file).
Any ENV directives in a Dockerfile will be visible in docker history and docker inspect output to anyone who gets a copy of the image. Never put any sort of credentials in your Dockerfile!
Practically, I'd suggest that, if you're this concerned about your database credentials, you're probably dealing with some sort of production system; and if you're dealing with a production system, the set of people who can log into it is limited and trusted. In that case an environment variable setting isn't exposing credentials to anyone who couldn't read it anyways.
(In the more specific case of a Kubernetes Pod with environment variables injected by a Secret, in most cases almost nobody will have login access to an individual Node and the Secret can be protected by Kubernetes RBAC. This is pretty safe from prying eyes if set up correctly.)
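If you want the probe's credentials out of both argv and the environment, the file-based approach described above can be sketched like this. The paths, user name, and password are illustrative assumptions; in Kubernetes the file would normally come from a Secret mounted as a volume, with /tmp used here only so the sketch is self-contained:

```shell
# Build a MySQL client defaults file; neither `ps` nor the probe's
# command line ever shows the password this way.
CRED_DIR=/tmp/mysql-cred          # stand-in for a Secret volume mount
mkdir -p "$CRED_DIR"
cat > "$CRED_DIR/client.cnf" <<'EOF'
[client]
user=app
password=example-password
EOF
chmod 600 "$CRED_DIR/client.cnf"

# A readiness probe could then run (printed here, not executed):
echo "mysqladmin --defaults-extra-file=$CRED_DIR/client.cnf ping -h 127.0.0.1"
```

Note that --defaults-extra-file must be the first option on the command line for MySQL tools to honor it.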

Can I enter the password once for multiple mysql command-line invocations, where the queries are not known upfront?

You can avoid re-entering the mysql command-line password by putting the queries into a file.
In my case, the later queries are not determined until after the first queries have finished.
This happens in a non-interactive script, so running a mysql console is not an option.
Is there any notion of a session for mysql command-line interactions? Or can I set it up to listen for commands on a local Unix socket (the output is required to be returned)? Or something like that?
User @smcjones mentions using the .my.cnf file or mysql_config_editor. Those are good suggestions, and I give my +1 vote to him.
Another solution is to put the credentials in any file of your choosing and then specify that file when you invoke MySQL tools:
mysql --defaults-extra-file=my_special.cnf ...other arguments...
And finally, just for completeness, you can use environment variables for some options, like host and password. But strangely, not the user. See http://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
export MYSQL_HOST="mydbserver"
export MYSQL_PWD="Xyzzy"
mysql ...other arguments...
I don't really recommend using an environment variable for the password, since anyone who can run ps on your client host can see the environment variables for the mysql client process.
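You can see what ps sees for yourself: on Linux it reads /proc/&lt;pid&gt;/environ, which a sufficiently privileged user on the same host can also read directly. A minimal demonstration, with `sleep` standing in for the mysql client:

```shell
# Launch a throwaway process with MYSQL_PWD set in its environment.
MYSQL_PWD=Xyzzy sleep 30 &
pid=$!
sleep 1   # give the child a moment to start

# Read its environment back, exactly as `ps e` would display it.
tr '\0' '\n' < "/proc/$pid/environ" | grep '^MYSQL_PWD='
# prints: MYSQL_PWD=Xyzzy

kill "$pid"
```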
There are a few ways to handle this in MySQL.
Put the password in a hidden .my.cnf file in the home directory of the user the script runs as.
[client]
user=USER
password=PASSWORD
Use mysql_config_editor:
mysql_config_editor set --login-path=client --host=localhost --user=localuser --password
When prompted, enter your password like you otherwise would. After that, mysql --login-path=client (or plain mysql, since "client" is the default login path) connects without asking.
IMO this is the worst option, but I'll add it for the sake of completeness.
You could always create a function wrapper for mysql that appends your set password.
#!/bin/bash
local_mysql_do_file() {
    mysql -u localuser -h localhost -pPASSWORD_NO_SPACE < "$1"
}
# usage
local_mysql_do_file file.sql
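A variant of that wrapper for the more cautious: keep the password in a mode-600 defaults file and pass it with --defaults-extra-file, so it never appears in the wrapper's argv either. The file path and credentials below are placeholders:

```shell
#!/bin/sh
# One-time setup: write the credentials file with owner-only permissions.
CNF="$HOME/.local_mysql.cnf"
cat > "$CNF" <<'EOF'
[client]
user=localuser
password=PASSWORD_NO_SPACE
EOF
chmod 600 "$CNF"

# Wrapper: --defaults-extra-file must come before any other options.
local_mysql_do_file() {
    mysql --defaults-extra-file="$CNF" -h localhost < "$1"
}

# usage: local_mysql_do_file file.sql
```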

Not able to run remote command in Zabbix

I am trying to restart the flanneld service running on one VM from my Zabbix server UI using triggers and actions. I followed the Zabbix docs. The zabbix user has sudo permissions (verified by logging in as the zabbix user and running sudo yum update). The command used to start flanneld is sudo /usr/bin/flanneld. Does anyone know the cause?
Configuration done:
An action is created on the trigger "flanneld service not running" as follows:
Conditions :-
Trigger = my Zabbix server: flanneld service not running
Host = my Zabbix server
Operations :-
Target list : Host: my Zabbix server
Execute on Zabbix agent
Commands : sudo /usr/bin/flanneld
Thanks in advance.
First, I would move sudo into the relevant sections of the script itself, so you do not need to worry about it in Zabbix, and simply reference the script's location within Zabbix.
You will then need to ensure that you have enabled "EnableRemoteCommands", as running a remote command is disabled by default on a Zabbix agent.
You can do this by adding the following line to your zabbix_agentd.conf file (and restarting the agent afterwards):
EnableRemoteCommands=1
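As a sanity check, you can grep the agent's config for the setting before restarting. A self-contained sketch using a copy in /tmp; on a real host the file usually lives at /etc/zabbix/zabbix_agentd.conf:

```shell
# Stand-in for the real agent config file.
CONF=/tmp/zabbix_agentd.conf
printf 'Server=zabbix.example.com\nEnableRemoteCommands=1\n' > "$CONF"

# Verify the option is present and uncommented.
grep '^EnableRemoteCommands=1' "$CONF" && echo "remote commands enabled"

# On the real host, apply the change with:
#   sudo systemctl restart zabbix-agent
```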

How to get password for sudo

I'm trying to use the command sudo -i -u postgres for PostgreSQL, and the Google Compute Engine VM is asking me for the password for my account (not root).
As I never set a password, and I always log in to my server via SSH key, I'm not sure what the password is, how I can reset it, or where it can be found.
Please tell me where I can get my password?
To become another non-root user on a GCE VM, first become root via password-less sudo (since that's how sudo is configured on GCE VM images):
sudo su -
and then switch to the user you want to become, or run a command as another user; e.g., in your case, that's:
sudo -i -u postgres
Per https://cloud.google.com/compute/docs/instances ,
The instance creator and any users that were added using the metadata
sshKeys value are automatically administrators to the account, with
the ability to run sudo without requiring a password.
So you don't need that non-existent password -- you need to be "added using the metadata sshKeys value"! The canonical way to do that, and I quote from that same page:
$ echo user1:$(cat ~/.ssh/key1.pub) > /tmp/a
$ echo user2:$(cat ~/.ssh/key2.pub) >> /tmp/a
$ gcloud compute project-info add-metadata --metadata-from-file sshKeys=/tmp/a
or you can use the Google Developers Console for similar purposes, see https://cloud.google.com/compute/docs/console#sshkeys if you'd prefer that.
Summary
While creating the VM, specify the ssh user in the "Enter the entire key data" box.
Details
generate the ssh key pair and identify the public key:
if ssh-keygen, a file ending with ".pub"
if PuTTYgen, the text in box "Public key for pasting ..."
Notice the fields, all on one line, separated by spaces: "protocol key-blob username".
For username, you may find your Windows user name or a string like "rsa-key-20191106". You will replace that string with your choice of Linux user name.
Paste the public key info into the "Enter the entire key data" box.
Change the 3rd field to the actual user that you want to create on the VM. If, for example, "gcpuser", then:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
create your VM. (Debian, for example)
Connect to the VM
directly from ssh or PuTTY (not browser window)
use the private key
specify the user
Notice that your public key is present:
gcpuser@instance-1:~/.ssh$ cat authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
Notice that you are in group google-sudoers
gcpuser@instance-1:~/.ssh$ id
uid=1000(gcpuser) gid=1001(gcpuser) groups=1001(gcpuser),4(adm),30(dip),44(video),46(plugdev),1000(google-sudoers)
sudo to root with no password
gcpuser@instance-1:~$ sudo -i -u root
root@instance-1:~#
Notice the sudoers file:
root@instance-1:~# cat /etc/sudoers.d/google_sudoers
%google-sudoers ALL=(ALL:ALL) NOPASSWD:ALL
Conclusion
Specifying the username in "Enter the entire key data" has these results:
the user is created on the virtual machine
the key is uploaded to ~/.ssh/authorized_keys
the user belongs to a passwordless sudo group

MySQL-Python code to query a MySQL database through an SSH tunnel

I have access to a MySQL database through SSH.
Could someone direct me to MySQL-python code that will let me do this?
I need to save my query results on my local Windows computer.
Thanks,
You can use SSH port forwarding to do this; in fact, the first Google hit walks you through this exact thing:
http://www.howtogeek.com/howto/ubuntu/access-your-mysql-server-remotely-over-ssh/
And since you're on Windows, translate that to using PuTTY:
https://intranet.cs.hku.hk/csintranet/contents/technical/howto/putty-portforward.jsp
You'll then connect to localhost:3306 with your python script, SSH will forward that over to the other machine and you'll end up connecting to the remote mysql instance.
You need to open an SSH tunnel to your SQL server, and then you can connect locally (e.g., with paramiko) to the forwarded port. This is done quite easily on *nix systems, and there are SSH command-line tools for Windows too; try PuTTY or plink, see here. What I do is run a shell script like the one below, execute my paramiko Python script, then kill the tunnel:
ssh -N remote_server@54.221.226.240 -i ~/.ssh/my_ssh_key.pem -L 5433:localhost:5432 &
python paramiko_connect.py
pkill -f my_ssh_key.pem  # kill using the pattern;
# see 'ps aux | grep my_ssh_key.pem' to see what it will kill
-N means don't execute any commands; -L is the local port to tunnel from, followed by the remote server's address and port (assuming you are connected to that server already).
Works like a charm for me for my postgres server & I did try it on mysql too.
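For MySQL specifically, the same pattern would look like this. This is a sketch: the host, key path, user names, and the local port 3307 are placeholder assumptions, and the commands are printed rather than executed so nothing here touches a real server:

```shell
# Compose the MySQL version of the tunnel: local 3307 -> remote 3306.
KEY="$HOME/.ssh/my_ssh_key.pem"
TUNNEL="ssh -f -N -i $KEY -L 3307:localhost:3306 user@db.example.com"
CONNECT="mysql -h 127.0.0.1 -P 3307 -u dbuser -p"

echo "$TUNNEL"     # -f backgrounds ssh after authentication
echo "$CONNECT"    # the client talks to the local end of the tunnel
# teardown when finished: pkill -f my_ssh_key.pem
```

Using -f instead of a trailing & means ssh only backgrounds itself after authentication succeeds, so the client never races the tunnel setup.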