I'm trying to use the command sudo -i -u postgres for PostgreSQL, and the Google Compute Engine VM is asking me for my password for my account (not root).
As I never set a password, and I always log in to my server via SSH key, I'm not sure what the password is, how I can reset it, or where it can be found.
Where can I find my password?
To become another non-root user on a GCE VM, first become root via password-less sudo (since that's how sudo is configured on GCE VM images):
sudo su -
and then switch to the user you want to become, or run a command as another user; e.g., in your case, that's:
sudo -i -u postgres
Per https://cloud.google.com/compute/docs/instances,
The instance creator and any users that were added using the metadata
sshKeys value are automatically administrators to the account, with
the ability to run sudo without requiring a password.
So you don't need that non-existent password -- you need to be "added using the metadata sshKeys value"! The canonical way to do that, and I quote from that same page:
$ echo user1:$(cat ~/.ssh/key1.pub) > /tmp/a
$ echo user2:$(cat ~/.ssh/key2.pub) >> /tmp/a
$ gcloud compute project-info add-metadata --metadata-from-file sshKeys=/tmp/a
or you can use the Google Developers Console for similar purposes; see https://cloud.google.com/compute/docs/console#sshkeys if you'd prefer that.
Summary
While creating the VM, specify the ssh user in the "Enter the entire key data" box.
Details
generate the ssh key pair and identify the public key:
if ssh-keygen, a file ending with ".pub"
if PuTTYgen, the text in box "Public key for pasting ..."
Notice the fields, all on one line, separated by spaces: "protocol key-blob username".
For username, you may find your Windows user name or a string like "rsa-key-20191106". You will replace that string with your choice of Linux user name.
Paste the public key info into the "Enter the entire key data" box.
Change the 3rd field to the actual user that you want to create on the VM. If, for example, "gcpuser", then:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
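For instance, the key pair can be generated so that the comment field already matches the intended Linux user (the file name and user here are only examples):
# generate a key pair whose comment is the desired Linux username
ssh-keygen -t rsa -b 2048 -f ~/.ssh/gcp_key -C gcpuser
# the public key to paste into "Enter the entire key data" is then:
cat ~/.ssh/gcp_key.pub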
create your VM. (Debian, for example)
Connect to the VM
directly from ssh or PuTTY (not browser window)
use the private key
specify the user
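For example, connecting with OpenSSH might look like this (the key path and external IP are placeholders):
ssh -i ~/.ssh/gcp_key gcpuser@EXTERNAL_IP_OF_VM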
Notice that your public key is present:
gcpuser@instance-1:~/.ssh$ cat authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
Notice that you are in group google-sudoers
gcpuser@instance-1:~/.ssh$ id
uid=1000(gcpuser) gid=1001(gcpuser) groups=1001(gcpuser),4(adm),30(dip),44(video),46(plugdev),1000(google-sudoers)
sudo to root with no password
gcpuser@instance-1:~$ sudo -i -u root
root@instance-1:~#
Notice the sudoers file:
root@instance-1:~# cat /etc/sudoers.d/google_sudoers
%google-sudoers ALL=(ALL:ALL) NOPASSWD:ALL
Conclusion
Specifying the username in "Enter the entire key data" has these results:
creating the user on the virtual machine
uploading the public key to ~/.ssh/authorized_keys
adding the user to the passwordless sudo group (google-sudoers)
Related
As per the official documentation from OpenShift, we can get the kubeadmin password as below:
crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443'
However, I can log in successfully with developer/developer; kubeadmin fails with "Login failed (401 Unauthorized)". I have restarted CRC multiple times and it still doesn't work. Any idea about this?
$ oc login -u developer -p developer https://api.crc.testing:6443
Login successful.
You have one project on this server: "demo"
Using project "demo"
$ oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
Any input will be appreciated. Thanks in advance.
You said you restarted CRC. Have you tried deleting and recreating the cluster?
One of the first steps in productionizing a cluster is to remove the kubeadmin account - is it possible that you've done that and the "crc console --credentials" is now only displaying what it used to be?
If you have another admin account try:
$ oc get -n kube-system secret kubeadmin
The step to remove that account (see: https://docs.openshift.com/container-platform/4.9/authentication/remove-kubeadmin.html) is to simply delete that secret. If you've done that at some point in this cluster's history you'll either need to use your other admin accounts in place of kubeadmin, or recreate the CRC instance (crc stop; crc delete; crc setup)
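If you do end up recreating the instance, that sequence (plus a fresh start afterwards) is roughly:
crc stop
crc delete
crc setup
crc start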
Just in case others are having this issue and it persists even after trying crc stop, crc delete, crc cleanup, crc setup, and crc start: I was able to sign in as kubeadmin by NOT using the following command after crc start got my CodeReady Container up and running.
eval $(crc oc-env)
Instead, I issue the crc oc-env command. In this example, the output shows that oc is under /home/john.doe/.crc/bin/oc.
~]$ crc oc-env
export PATH="/home/john.doe/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)
I then list the contents of /home/john.doe/.crc/bin/oc, which shows that the oc there is a symbolic link to /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc.
~]$ ll /home/john.doe/.crc/bin/oc
lrwxrwxrwx. 1 john.doe john.doe 61 Jun 8 20:27 oc -> /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc
And I was then able to sign in using the absolute path to the oc command line tool.
~]$ /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc login -u kubeadmin -p 28Fwr-Znmfb-V6ySF-zUu29 https://api.crc.testing:6443
Login successful.
I'm sure I could dig a bit more into this by checking the contents of my user's $PATH, but suffice it to say, this is at least a workaround that lets me sign in as kubeadmin.
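A quick way to check which oc the shell actually resolves, and what is on the PATH (just a diagnostic, not a fix):
which -a oc
echo "$PATH"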
You can avoid re-entering the mysql command-line password by putting the queries into a file.
In my case, the later queries are not determined until after the first queries have finished.
This happens in a non-interactive script so running a mysql console is not an option.
Is there any notion of a session for mysql command line interactions? Or can I set it up to listen for commands on a local unix socket (the output is required to be returned)? Or something like that?
User @smcjones mentions using the .my.cnf file or mysql_config_editor. Those are good suggestions; I give my +1 vote to him.
Another solution is to put the credentials in any file of your choosing and then specify that file when you invoke MySQL tools:
mysql --defaults-extra-file=my_special.cnf ...other arguments...
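For example, my_special.cnf could contain a standard option-file [client] section (the values here are placeholders):
[client]
user=blah
password=PASSWORD
host=database.host.com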
And finally, just for completeness, you can use environment variables for some options, like host and password. But strangely, not the user. See http://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
export MYSQL_HOST="mydbserver"
export MYSQL_PWD="Xyzzy"
mysql ...other arguments...
I don't really recommend using an environment variable for the password, since anyone who can run ps on your client host can see the environment variables for the mysql client process.
There are a few ways to handle this in MySQL.
Put the password in a hidden .my.cnf file in the home directory of the user the script runs as.
[client]
user=USER
password=PASSWORD
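Since that file holds a plaintext password, it is worth restricting its permissions (a general precaution, assuming it lives at ~/.my.cnf):
chmod 600 ~/.my.cnf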
Use mysql_config_editor
mysql_config_editor set --login-path=client --host=localhost --user=localuser --password
When prompted to enter your password, enter it like you otherwise would.
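Assuming the login path above, later invocations can then pick up the stored credentials without a prompt, e.g. (the query is only an illustration):
mysql --login-path=client -e "SELECT VERSION();"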
IMO this is the worst option, but I'll add it for the sake of completeness.
You could always create a function wrapper for MySQL that appends your set password.
#! /bin/bash
local_mysql_do_file() {
mysql -u localuser -h localhost -pPASSWORD_NO_SPACE < "$1"
}
# usage
local_mysql_do_file file.sql
I'm trying to lock down access to a MySQL user account to one IP address, but it seems that every time you start a docker container, the IP address changes.
docker run -it company/my-app bash
Set up mysql-client on it:
apt-get update
apt-get upgrade
apt-get install mysql-client
Now I would connect using:
mysql -u blah -h database.host.com -p
Access denied for user 'blah'@'172.17.0.63' (using password: YES)
Then I would grant all privileges for 'blah'@'172.17.0.63' and I'd be able to access the database from the container. Now I would start a new docker container, repeat the above steps, and once again get:
Access denied for user 'blah'@'172.17.0.64' (using password: YES)
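For reference, the grant I repeat each time looks roughly like this (the database name and password are placeholders):
mysql -u root -h database.host.com -p -e \
  "GRANT ALL PRIVILEGES ON mydb.* TO 'blah'@'172.17.0.63' IDENTIFIED BY 'secret';"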
The IP address seems to increment every time you start a docker container.
I can limit the hosts to %.%.%.%, but that just means any IP address can connect which is not as secure as I want it.
Is there some sort of way to limit access to a mysql account to only one docker container or group of containers?
You can configure a small dnsmasq instance to be used by MySQL, and run a script to automatically update the DNS record when a container's IP address changes.
I've written a small script to do this (pasted below). It automatically updates DNS records named after the containers, pointing each record to its container's IP address:
#!/bin/bash
# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}
declare -A service_map
while true
do
    changed=false
    while read -r line
    do
        # the last column of `docker ps` is the container name
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP addr changed
        then
            service_map[$name]=$ip
            # write to file
            echo "$name has a new IP address $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)
    # a change of IP address occurred, restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi
    ${SLEEP} "$INTERVAL"
done
Then, create your MySQL user with the host equal to the container's name; e.g. if your container's name is blah, create the MySQL user as 'you'@'blah'.
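A sketch of that user creation (the database name and password are placeholders; the MySQL server must resolve hostnames through the dnsmasq instance above):
mysql -u root -p -e "CREATE USER 'you'@'blah' IDENTIFIED BY 'secret';
                     GRANT ALL PRIVILEGES ON mydb.* TO 'you'@'blah';"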
I think your current approach is wrong. You can simply use the official MySQL container and link the containers that need access to it:
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
This will add an entry to the some-app /etc/hosts file with the name "mysql" pointing to the MySQL container, as described in the docker linking docs.
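For completeness, a minimal sketch of the linking approach, assuming the official mysql image (the names and password are placeholders):
# start the official MySQL container
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
# link the application container to it; inside some-app, the hostname "mysql" resolves to the MySQL container
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql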
I have downloaded and installed MySQL by double-clicking on its icon. It was installed successfully.
When I go to the startup preferences I see the MySQL icon added, and when I click on it I see a screen that says 'MySQL server instance is running'.
But when I open the terminal, cd to /usr/local/mysql, and type sudo ./bin/mysqld_safe, I am prompted for a password. I did not add a password when I installed MySQL, so I tried leaving it blank, and then I tried various passwords to log in, but all attempts failed.
So now I need to know: how do I log in to MySQL via the terminal?
mysql version - 5.5.24-osx10.6x86_64
my Mac OS - 10.7.3
What I found installing MySQL on macOS is that there are a few differences. One is that it installs without a password. The other is that it by default allows anonymous logins.
Use this to set the password:
mysqladmin -u root -h localhost password yourpassword
You can remove anonymous logins this way:
shell> mysql -u root -p
Enter password: (enter root password here)
mysql> DROP USER ''@'localhost';
mysql> DROP USER ''@'host_name';
The other thing is that I found that the install does not modify the PATH variable. What I did to run mysql from the command line was to add /usr/local/mysql/bin to the PATH by adding it to /etc/paths or /etc/paths.d. This may be what you need in order to run mysql. Like someone said in the comments, mysqld_safe is one way to start the mysql server, and it seems that is already set to run.
Here are specific instructions to add something to /etc/paths.d
$ cd /etc/paths.d
$ cat > mysql
/usr/local/mysql/bin
(then press Ctrl-D; that should put the file there)
you may have to sudo if you do not have permissions.
The sudo command, by default, lets anyone in the admin group run a command as root by giving his own password. That's why it asked for your password when you typed "sudo ./bin/mysqld_safe". It has nothing whatsoever to do with mysql.
If you don't have a password, you cannot use sudo in the default configuration. Either give yourself a password, or edit the sudoers file. (I would strongly suggest the former over the latter, especially if you have no idea what sudo does.)
For more information, type "man sudo" (and then "man sudoers") from your Terminal.
Meanwhile, the reason it says "-bash: mysql: command not found" when you type mysql in the terminal is that you've clearly installed it into /usr/local/mysql/bin/mysql, and that isn't on your path. If it were on your path, you could have just done "sudo mysqld_safe" above, instead of "sudo ./bin/mysqld_safe". Since it's not, you have to do "./bin/mysqld_safe".
For more information, consult a good primer on the Unix shell.
Finally, if you've got the mysql daemon running, and are trying to start the client, it's "mysql" that you want to run, not "mysqld_safe".
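In other words, with the paths from the question, something like this should work (sudo asks for your macOS account password; a fresh MySQL install's root account has no password yet):
# start the server
sudo /usr/local/mysql/bin/mysqld_safe &
# connect with the client; press Enter at the password prompt if root has no password
/usr/local/mysql/bin/mysql -u root -p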