Trying to change admin password for OpenShift 3.11 cluster

I have set up an OpenShift 3.11 cluster and want to change the admin password. Below are the steps I have run. The final command is, to my understanding, supposed to apply the change, but it prompts me to set the password again, which I think is the problem.
I ssh to the cluster as root, then run these commands:
htpasswd -c -b /etc/origin/master/htpasswd ocadmin NEWPASSWORD
htpasswd -v /etc/origin/master/htpasswd ocadmin
The new password verified correctly
htpasswd /etc/origin/master/htpasswd ocadmin
This just prompts me to enter the password again. It reports that the password is being updated, but nothing changes.

The issue is that I have multiple master nodes in my cluster. After updating the file and copying it to each of the master nodes, it worked.
Final process is:
ssh to master00 node
htpasswd -c -b /etc/origin/master/htpasswd ocadmin NEWPASSWORD
scp /etc/origin/master/htpasswd clusternamemaster-01:/etc/origin/master/htpasswd
scp /etc/origin/master/htpasswd clusternamemaster-02:/etc/origin/master/htpasswd
etc
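If you have many masters, a small loop saves the repetition. A minimal sketch, assuming the master hostnames follow the clusternamemaster-NN pattern above (adjust hostnames, user, and password to your environment):
#!/bin/bash
# Sketch: update the htpasswd file on master00, then copy it to the other masters.
# The hostnames below are placeholders based on the pattern in this post.
HTPASSWD_FILE=/etc/origin/master/htpasswd
# -b takes the password on the command line; omit -c so other users in the file are kept
htpasswd -b "$HTPASSWD_FILE" ocadmin 'NEWPASSWORD'
for master in clusternamemaster-01 clusternamemaster-02; do
    scp "$HTPASSWD_FILE" "${master}:${HTPASSWD_FILE}"
done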

Related

Login to OpenShift with kubeadmin fails: Login failed (401 Unauthorized)

As per the official documentation from OpenShift, we can get the kubeadmin password as below:
crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443'
However, I can log in successfully with developer/developer. kubeadmin fails with "Login failed (401 Unauthorized)". I have restarted CRC multiple times and it still does not work. Any idea about this?
$ oc login -u developer -p developer https://api.crc.testing:6443
Login successful.
You have one project on this server: "demo"
Using project "demo"
$ oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
Any input will be appreciated. Thanks in advance.
You said you restarted CRC. Have you tried deleting and recreating the cluster?
One of the first steps in productionizing a cluster is to remove the kubeadmin account - is it possible that you've done that and the "crc console --credentials" is now only displaying what it used to be?
If you have another admin account try:
$ oc get -n kube-system secret kubeadmin
The step to remove that account (see: https://docs.openshift.com/container-platform/4.9/authentication/remove-kubeadmin.html) is to simply delete that secret. If you've done that at some point in this cluster's history you'll either need to use your other admin accounts in place of kubeadmin, or recreate the CRC instance (crc stop; crc delete; crc setup)
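For reference, the removal step from those docs and the CRC recreation boil down to the following sketch (only run the delete if you really intend to drop kubeadmin):
# deleting this secret is what removes the kubeadmin account:
oc delete -n kube-system secret kubeadmin
# if kubeadmin is already gone and you have no other admin user, recreate the instance:
crc stop && crc delete && crc setup && crc start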
Just in case others are having this issue and it persists even after trying crc stop, crc delete, crc cleanup, crc setup, and crc start: I was able to sign in as kubeadmin by NOT using the following command after crc start got my CodeReady Container up and running.
eval $(crc oc-env)
Instead, I issue the crc oc-env command. In this example, the output shows the directory /home/john.doe/.crc/bin/oc.
~]$ crc oc-env
export PATH="/home/john.doe/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)
I then list the contents of the /home/john.doe/.crc/bin/oc directory, which shows that the oc file inside it is symbolically linked to /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc.
~]$ ll /home/john.doe/.crc/bin/oc
lrwxrwxrwx. 1 john.doe john.doe 61 Jun 8 20:27 oc -> /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc
And I was then able to sign in using the absolute path to the oc command line tool.
~]$ /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc login -u kubeadmin -p 28Fwr-Znmfb-V6ySF-zUu29 https://api.crc.testing:6443
Login successful.
I'm sure I could dig a bit more into this by checking the contents of my user's $PATH, but suffice it to say, this at least is a workaround that lets me sign in as kubeadmin.
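If you want to see which oc your shell is actually resolving before falling back to the absolute path, a quick sketch using standard shell commands:
# list every oc on the PATH, in resolution order
type -a oc
# resolve the symlink of the first match
readlink -f "$(command -v oc)"
# compare with what crc expects you to have on the PATH
crc oc-env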

Bash script for interactive ssh and mysql commands

I'm studying MySQL, and every time I have to:
Enter the ssh XXX@XXX command, and enter my password for the school server.
Enter the mysql -u XXX -p command, and enter my MySQL password.
I want to create a Bash script for performing the steps above automatically.
I can accomplish the first step with this code:
#!/usr/bin/expect -f
set address xxx.com
set password xxx
set timeout 10
spawn ssh xxx@$address
expect { "*yes/no" { send "yes\r"; exp_continue} "*password:" { send "$password\r" } }
send clear\r
interact
But I don't know how to automatically input the next command (mysql -u xxx -p) and the password.
How can I do this?
You don't need such a complex script to just enter the MySQL console on remote machine. Use the features of the ssh tool:
ssh -tt user@host -- mysql -uuser -ppassword
The -t option forces pseudo-terminal allocation. Multiple -t force tty allocation, even if ssh has no local tty (see man ssh). Note the use of -p option. There must be no spaces between -p and password (see the manual for mysql).
Or even connect via mysql directly, if the MySQL host is accessible from your local machine:
mysql -hhost -uuser -p
Don't forget to adjust the shebang:
#!/bin/bash -
Use my.cnf to store your password securely, much like SSH keys.
https://easyengine.io/tutorials/mysql/mycnf-preference/
In the same way, passwordless ssh is possible via the -i parameter, passing the path of the private key for the remote host.
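Putting those two suggestions together, a rough sketch (the key path, user, host, and the remote ~/.my.cnf contents are placeholders to adapt):
# on the school server, store the MySQL credentials once in ~/.my.cnf (chmod 600):
#   [client]
#   user = xxx
#   password = xxx
# then a single command from your machine drops you into the MySQL console with no prompts:
ssh -i ~/.ssh/id_rsa -tt xxx@xxx.com -- mysql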
Best of luck!

MySQL dump CronJob

I'm trying to create a cron job that backs up my MySQL slave daily. The backup.sh content:
#!/bin/bash
#
# Backup mysql from slave
#
#
sudo mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'
sudo mysqldump -u root -p'xxxxx' ng_player | gzip > database_`date +\%Y-\%m-\%d`.sql.gz
sudo mysqladmin -u root -p'xxxxx' start-slave
I made it executable by sudo chmod +x /home/dev/backup.sh
and entered in to crontab by:
sudo crontab -e
0 12 * * * /home/dev/backup.sh
but it doesn't work. If I run it from the command line it works, but not from crontab.
FIXED:
I used the script from this link: mysqldump doesn't work in crontab
Break the problem in half. First, try sending only an email from the cron job to see whether it even runs. Put the following in a file and have your cron job point to it:
#!/bin/bash
/bin/mail -s "test subject" "yourname@yourdomain" < /dev/null
The good thing about using this tester is that it is very simple and more likely to give you some results. It does not depend on your current working directory, which can sometimes be not what you expect it to be.
Try using the full path to the mysql binary in the .sh file (typically /usr/bin/mysql; /var/lib/mysql is the data directory, not the binary), for example:
sudo /usr/bin/mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'
I had this same problem.
I figured out that you can't use the command sudo in a non-interactive script.
The sudo command would prompt you to type in the password for your account (root).
If you are logged into a command prompt, such as over ssh, sudo works without typing any passwords, but when another program runs sudo it asks for the password.
Try the su command instead: it doesn't require any logins and does the same thing.
su --session-command="mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'" root
su --session-command="mysqldump -u root -p'xxxxx' ng_player | gzip > database_`date +\%Y-\%m-\%d`.sql.gz" root
su --session-command="mysqladmin -u root -p'xxxxx' start-slave" root
Replace root with your linux username.
EDIT:
Look at this thread for a different answer.
https://askubuntu.com/questions/173924/how-to-run-cron-job-using-sudo-command
Let's start with the silly stuff in the script.
The only command which you don't run via 'sudo' is, spookily enough, the only command which I would expect might need to be run via sudo (depending on the permissions of the target file).
Prefixing the commands in a script with sudo without a named user (i.e. running as root) serves no useful function if you are invoking the script as root.
On a typical installation, the mysql, mysqladmin and gzip programs are executable by any user - authentication and authorization of the commands are handled by the DBMS using the credentials passed as arguments - hence I would not expect any of the operations here to need elevated privileges, except possibly writing to the output file (depending on its permissions).
You don't specify a path for the backup file - maybe it's writing it somewhere other than you expect?
(similarly, you should check if any of the executables are in a location which is not in the $PATH for the crontab execution environment).
but it doesn't work
....is not an error message.
The output of any command run via cron is mailed to the owner of the crontab - go read your mail.
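Putting those points together, a sketch of the script with absolute paths, no sudo, and an explicit output location (the binary and backup paths are assumptions to adjust for your system):
#!/bin/bash
# Backup mysql from slave - intended to run from root's crontab, so sudo is unnecessary
BACKUP_DIR=/home/dev/backups
/usr/bin/mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'
/usr/bin/mysqldump -u root -p'xxxxx' ng_player | /bin/gzip > "$BACKUP_DIR/database_$(date +%Y-%m-%d).sql.gz"
/usr/bin/mysqladmin -u root -p'xxxxx' start-slave
You can then either read the mail cron sends you, or redirect the output somewhere you will actually look, e.g. 0 12 * * * /home/dev/backup.sh >> /var/log/backup.log 2>&1.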

auto authenticate password in mysql

I am new to MySQL. In Postgres, we can use .pgpass to save the user password so that the database authenticates automatically whenever you access it or execute a SQL script; I don't have to enter the password.
So is there any way to do the same thing for mysql on linux?
Thanks
Yes, you can store default credentials and other options in your home directory, in a file called $HOME/.my.cnf
$ cat > $HOME/.my.cnf
[client]
user = scott
password = tiger
host = mydbserver
^D
In MySQL 5.6, you can also store an encrypted version of this file in $HOME/.mylogin.cnf, see http://dev.mysql.com/doc/refman/5.6/en/mysql-config-editor.html
$ mysql_config_editor set --user=scott --host=mydbserver --password
Enter password: ********
WARNING : 'client' path already exists and will be overwritten.
Continue? (Press y|Y for Yes, any other key for No) : y
$ mysql_config_editor print --all
[client]
user = scott
password = *****
host = mydbserver
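Once the login path is stored, the client programs pick it up automatically; the default path created above is named client, so for example:
mysql                                                    # reads ~/.mylogin.cnf, no password prompt
mysql --login-path=client -e 'SELECT CURRENT_USER();'    # or name the login path explicitly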
You could use the command-line parameters available to the MySQL executable within a quick Bash script to accomplish this. See http://dev.mysql.com/doc/refman/5.6/en/mysql.html for the details. Basically, the following line would log you into MySQL:
$>mysql --user=root --password=toor my_database
The command above would log you into the MySQL database "my_database" as root using the password "toor".
Now put this into a quick Bash script (run_mysql.sh):
#!/bin/bash
/usr/bin/mysql --user=root --password=toor my_database
Make sure the script is executable:
chmod +x ./run_mysql.sh
Of course make sure this script is safely stored somewhere other users cannot access it such as your home folder and set the permissions accordingly.

How to perform a mysqldump without a password prompt?

I would like to know the command to perform a mysqldump of a database without the prompt for the password.
REASON:
I would like to run a cron job, which takes a mysqldump of the database once everyday. Therefore, I won't be able to insert the password when prompted.
How could I solve this ?
Since you are using Ubuntu, all you need to do is add a file to your home directory and it will disable the mysqldump password prompt. This is done by creating the file ~/.my.cnf (permissions need to be 600).
Add this to the .my.cnf file
[mysqldump]
user=mysqluser
password=secret
This lets you connect as a MySQL user who requires a password without having to actually enter the password. You don't even need the -p or --password.
Very handy for scripting mysql & mysqldump commands.
The steps to achieve this can be found in this link.
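For instance, with that [mysqldump] section in ~/.my.cnf, a daily cron entry needs no password at all (the schedule, database name, and output path below are just examples):
0 3 * * * mysqldump mydatabase > /backups/mydatabase_$(date +\%F).sql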
Alternatively, you could use the following command:
mysqldump -u [user name] -p[password] [database name] > [dump file]
but be aware that it is inherently insecure, as the entire command (including password) can be viewed by any other user on the system while the dump is running, with a simple ps ax command.
Adding to @Frankline's answer:
The -p option must be excluded from the command in order to use the password in the config file.
Correct:
mysqldump -u my_username my_db > my_db.sql
Wrong:
mysqldump -u my_username -p my_db > my_db.sql
.my.cnf can omit the username.
[mysqldump]
password=my_password
If your .my.cnf file is not in a default location and mysqldump doesn't see it, specify it using --defaults-file.
mysqldump --defaults-file=/path-to-file/.my.cnf -u my_username my_db > my_db.sql
A few answers mention putting the password in a configuration file.
Alternatively, from your script you can export MYSQL_PWD=yourverysecretpassword.
The upside of this method over using a configuration file is that you do not need a separate configuration file to keep in sync with your script. You only have the script to maintain.
There is no downside to this method.
The password is not visible to other users on the system (it would be visible if it is on the command line). The environment variables are only visible to the user running the mysql command, and root.
The password will also be visible to anyone who can read the script itself, so make sure the script itself is protected. This is in no way different than protecting a configuration file. You can still source the password from a separate file if you want to have the script publicly readable (export MYSQL_PWD=$(cat /root/mysql_password) for example). It is still easier to export a variable than to build a configuration file.
E.g.,
$ export MYSQL_PWD=$(>&2 read -s -p "Input password (will not echo): "; echo "$REPLY")
$ mysqldump -u root mysql | head
-- MySQL dump 10.13 Distrib 5.6.23, for Linux (x86_64)
--
-- Host: localhost Database: mysql
-- ------------------------------------------------------
-- Server version 5.6.23
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
MariaDB
MariaDB documents the use of MYSQL_PWD as:
Default password when connecting to mysqld. It is strongly recommended to use a more secure method of sending the password to the server.
The page makes no mention of what a "more secure" method may be.
MySQL
This method is still supported in the latest documented version of MySQL: https://dev.mysql.com/doc/refman/8.0/en/environment-variables.html though it comes with the following warning:
Use of MYSQL_PWD to specify a MySQL password must be considered extremely insecure and should not be used. Some versions of ps include an option to display the environment of running processes. On some systems, if you set MYSQL_PWD, your password is exposed to any other user who runs ps. Even on systems without such a version of ps, it is unwise to assume that there are no other methods by which users can examine process environments.
The security of environment variables is covered in much detail at https://security.stackexchange.com/a/14009/10002 and that answer also addresses the concerns mentioned in the comments. TL;DR: irrelevant for over a decade.
Having said that, the MySQL documentation also warns:
MYSQL_PWD is deprecated as of MySQL 8.0; expect it to be removed in a future version of MySQL.
To which I'll leave you with maxschlepzig's comment from below:
funny though how Oracle doesn't deprecate passing the password on the command line which in fact is extremely insecure
Final thoughts
Connecting to a system using a single factor of authentication (password) is indeed insecure. If you are worried about security, you should consider adding mutual TLS on top of the regular connection so both the server and the client are properly identified as being authorized.
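For example, if the server is configured for client certificates, the usual TLS client options can be added to the same kind of dump command (the file paths below are placeholders):
mysqldump --ssl-ca=/etc/mysql/ca.pem \
          --ssl-cert=/etc/mysql/client-cert.pem \
          --ssl-key=/etc/mysql/client-key.pem \
          -u my_username my_db > my_db.sql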
To use a file that is anywhere inside of OS, use --defaults-extra-file eg:
mysqldump --defaults-extra-file=/path/.sqlpwd [database] > [desiredoutput].sql
Note: .sqlpwd is just an example filename. You can use whatever you desire.
Note: MySQL will automatically check for ~/.my.cnf which can be used instead of --defaults-extra-file
If you're using cron like me, try this!
mysqldump --defaults-extra-file=/path/.sqlpwd [database] > "$(date '+%F').sql"
Required Permission and Recommended Ownership
sudo chmod 600 /path/.sqlpwd && sudo chown $USER:nogroup /path/.sqlpwd
.sqlpwd contents:
[mysqldump]
user=username
password=password
Other examples to pass in .cnf or .sqlpwd
[mysql]
user=username
password=password
[mysqldiff]
user=username
password=password
[client]
user=username
password=password
If you wanted to log into a database automatically, you would need the [mysql] entry for instance.
You could now make an alias that auto connects you to DB
alias whateveryouwant="mysql --defaults-extra-file=/path/.sqlpwd [database]"
You can also only put the password inside .sqlpwd and pass the username via the script/cli. I'm not sure if this would improve security or not, that would be a different question all-together.
For completeness sake I will state you can do the following, but is extremely insecure and should never be used in a production environment:
mysqldump -u [user_name] -p[password] [database] > [desiredoutput].sql
Note: There is NO SPACE between -p and the password.
Eg -pPassWord is correct while -p Password is incorrect.
Yeah, it is very easy... just one magical command line, nothing more:
mysqldump --user='myusername' --password='mypassword' -h MyUrlOrIPAddress databasename > myfile.sql
and done :)
For me, using MariaDB I had to do this: Add the file ~/.my.cnf and change permissions by doing chmod 600 ~/.my.cnf. Then add your credentials to the file. The magic piece I was missing was that the password needs to be under the client block (ref: docs), like so:
[client]
password = "my_password"
[mysqldump]
user = root
host = localhost
If you happen to come here looking for how to do a mysqldump with MariaDB: place the password under a [client] block, and the user under a [mysqldump] block.
You can achieve this in 4 easy steps:
1. Create a directory to store the script and the DB backups
2. Create ~/.my.cnf
3. Create a script.sh shell script to run the mysqldump
4. Add a cron job to run the mysqldump.
Below are the detailed steps
Step 1
Create a directory in your home directory using mkdir ~/backup (sudo is not needed for your own home directory).
Step 2
In your home directory, run nano ~/.my.cnf, add the text below, and save.
[mysqldump]
#use this if your password has special characters (!@#$%^&... etc.) in it
password="YourPasswordWithSpecialCharactersInIt"
#use this if it has no special characters
password=myPassword
Step 3
cd into ~/backup and create another file script.sh
add the following text to it
#!/bin/bash
# where to write the compressed dump
BACKUP_DIR=/path/to/where/you/want/to/dump
DATABASE=dbname
DB_USER=myUsername
mysqldump --defaults-file="$HOME/.my.cnf" -u "${DB_USER}" "${DATABASE}" | gzip > "${BACKUP_DIR}/${DATABASE}_$(date +%Y%m%d_%H%M).sql.gz"
Step 4
In your console, type crontab -e to open up the cron file where the auto-backup job will be executed from
add the text below to the bottom of the file
0 0 * * * /bin/bash ~/backup/script.sh
The text added to the bottom of the cron file assumes that your back up shall run daily at midnight.
That's all you need folk
;)
Here is a solution for Docker, in a /bin/sh script:
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec echo "[client]" > /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec echo "user=root" >> /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec echo "password=$MYSQL_ROOT_PASSWORD" >> /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec mysqldump --defaults-extra-file=/root/mysql-credentials.cnf --all-databases'
Replace [MYSQL_CONTAINER_NAME] and be sure that the environment variable MYSQL_ROOT_PASSWORD is set in your container.
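Since the last command writes the dump to stdout, on the host you would typically redirect it into a file, for example (the file name is just an example):
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec mysqldump --defaults-extra-file=/root/mysql-credentials.cnf --all-databases' > all-databases.sql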
Hope it helps you like it helped me!
Check your password!
Took me a while to notice that I was not using the correct user name and password in ~/.my.cnf
Check the user/pass basics before adding in extra options to crontab backup entries
If specifying --defaults-extra-file in mysqldump then this has to be the first option
A cron job works fine with .my.cnf in the home folder so there is no need to specify --defaults-extra-file
If using mysqlpump (not mysqldump) amend .my.cnf accordingly
The ~/.my.cnf needs permissions set so only the owner has read/write access with:
chmod 600 ~/.my.cnf
Here is an example .my.cnf:
[mysql]
host = localhost
port = 3306
user = BACKUP_USER
password = CORRECTBATTERYHORSESTAPLE
[mysqldump]
host = localhost
port = 3306
user = BACKUP_USER
password = CORRECTBATTERYHORSESTAPLE
[mysqlpump]
host = localhost
port = 3306
user = BACKUP_USER
password = CORRECTBATTERYHORSESTAPLE
The host and port entries are not required for localhost
If your user name in linux is the same name as used for your backup purposes then user is not required
Another tip, while you are creating the cron job entry for mysqldump: you can make it a low-priority task with ionice -c3 nice -n 19. Combined with the --single-transaction option for InnoDB, you can run backups that will not lock tables or lock out resources that might be needed elsewhere.
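As an illustration, such a crontab entry could look like this (the database name, schedule, and output path are assumptions; % must be escaped inside crontab):
30 2 * * * ionice -c3 nice -n 19 mysqldump --single-transaction mydatabase | gzip > /backups/mydatabase_$(date +\%F).sql.gz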
I have the following.
/etc/mysqlpwd
[mysql]
user=root
password=password
With the following alias (an alias name cannot contain spaces, so use a single word):
alias mysqlp='mysql --defaults-extra-file=/etc/mysqlpwd'
To do a restore I simply use:
mysqlp [database] < [file.sql]
This is how I'm backing up a MariaDB database using an expanding variable.
I'm using a "secrets" file in a Docker-Compose setup to keep passwords out of Git, so I just cat that in an expanding variable in the script.
NOTE: The below command is executed from the Docker host itself:
mysqldump -h192.168.1.2 -p"$(cat /docker-compose-directory/mariadb_root_password.txt)" -uroot DB-Name > /backupsDir/DB-Name_`date +%Y%m%d-%H:%M:%S`.sql
This is tested and known to work correctly in Ubuntu 20.04 LTS with mariadb-client.
I'm doing mine a different way, using Plink (the PuTTY command line) to connect to the remote host. The command below is in the plink file that runs on the remote server; I then use rsync from Windows to fetch the dump and back it up to an on-prem NAS.
sudo mysqldump -u root --all-databases --events --routines --single-transaction > dump.sql
I have keys set up on the remote host, and the PowerShell script is scheduled via Task Scheduler to run weekly.
What about --password=""?
It worked for me running on 5.1.51:
mysqldump -h localhost -u <user> --password="<password>"
I definitely think it would be better and safer to place the full command line in the root crontab, with credentials.
At least editing the crontab is restricted (readable) to someone who already knows the password, so there is no worry about showing it in plain text.
If you need more than a simple mysqldump, just use a bash script that accepts the credentials as params and performs all the amenities inside.
The bash file is simple:
#!/bin/bash
mysqldump -u$1 -p$2 yourdbname > /your/path/save.sql
In the Crontab:
0 0 * * * bash /path/to/above/bash/file.sh root secretpwd > /var/log/mycustomMysqlDump.log 2>&1
You can specify the password on the command line as follows:
mysqldump -h <host> -u <user> -p<password> <database> > dumpfile
The options for mysqldump are case-sensitive!