Bash: execute shutdown for all elements in an array - mysql

I have to shut down some of my VMs, something like a garbage collector: my script finds the names of VMs in the database which are empty and ready to be turned off. The trouble is that when I pass the array of VM names, the echo output is correct, but when I try to do the shutdown it only takes one element of the array and doesn't pass it to the variable.
How can I run the shutdown for all elements in the array, and pass the VM names to it properly?
arr1=($(/usr/bin/mysql --skip-column-names -u $DB_USER -p$DB_PASS $DB -e "SELECT VM_NAME FROM VM_LIST WHERE BUSY='0' AND POOL='LINUX';"))
echo ${#arr1[@]}
if [ ${#arr1[@]} -gt 0 ]
then
for available_node in "${arr1[@]}"
do
shutdown $available_node
echo $available_node
done
fi

Though I couldn't understand everything, if someone wants to do this kind of activity, it should look like:
#!/bin/bash
vmNode=( $(db query) ) # populate the array from your DB query
for node in "${vmNode[@]}"
do
echo $node
ssh user@${node} "sudo shutdown -h now" # user should be the same on all nodes.
# sudo use is subject to requirement
done
Requirements for this to work:
1) ssh should be passwordless
2) sudo should be passwordless and should not need a tty
(the solution will still work if ssh or sudo are not passwordless, but you will have to enter the password twice for every node: once for ssh and then for sudo)
3) The node you are firing this script from should be last in the vmNode array, or not in it at all.
4) If the query fetches node hostnames, they should be configured in /etc/hosts and ~/.ssh/known_hosts.
This will work for OS shutdown (*nix) only.
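Putting the pieces together, here is a minimal sketch that feeds the original query into such a loop. It assumes DB_USER, DB_PASS, and DB are set as in the question, that passwordless ssh/sudo are configured as above, and that user is a placeholder for the account on the VMs:
#!/bin/bash
# Fetch the names of idle Linux VMs; --skip-column-names keeps the output clean.
arr1=($(/usr/bin/mysql --skip-column-names -u "$DB_USER" -p"$DB_PASS" "$DB" -e "SELECT VM_NAME FROM VM_LIST WHERE BUSY='0' AND POOL='LINUX';"))
echo "Found ${#arr1[@]} idle VM(s)"
# Quote the [@] expansion so every VM name is processed, each as a separate word.
for node in "${arr1[@]}"
do
echo "Shutting down $node"
ssh user@"$node" "sudo shutdown -h now"
done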

Docker IP changes makes locking down MySQL access tricky

I'm trying to lock down access to a MySQL user account to one IP address, but it seems that every time you start a docker container, the IP address changes.
docker run -it company/my-app bash
Set up mysql-client on it:
apt-get update
apt-get upgrade
apt-get install mysql-client
Now I would connect using:
mysql -u blah -h database.host.com -p
Access denied for user 'blah'@'172.17.0.63' (using password: YES)
Then I would grant all privileges for 'blah'@'172.17.0.63' and I'd be able to access the database from the container. Now I would start a new docker container, repeat the above steps, and once again get:
Access denied for user 'blah'@'172.17.0.64' (using password: YES)
The IP address seems to increment every time you start a docker container.
I can limit the hosts to %.%.%.%, but that just means any IP address can connect, which is not as secure as I want.
Is there some sort of way to limit access to a mysql account to only one docker container or group of containers?
You can configure a small dnsmasq instance to be used by MySQL, and run a script to automatically update the DNS record when the container's IP address has changed.
I've written a small script to do this (pasted below); it automatically updates DNS records named after the containers, pointing them at the containers' IP addresses:
#!/bin/bash
# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}
declare -A service_map
while true
do
changed=false
while read line
do
name=${line##* }
ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' $name)
if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP addr changed
then
service_map[$name]=$ip
# write to file
echo $name has a new IP Address $ip >&2
echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
changed=true
fi
done < <(${DOCKER} ps | ${TAIL} -n +2)
# a change of IP address occurred, restart dnsmasq
if [ $changed = true ]
then
systemctl restart dnsmasq
fi
${SLEEP} $INTERVAL
done
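To use it, save the script (docker-dns.sh here is a hypothetical name), point it at the directory dnsmasq reads extra configuration from (location depends on your distribution), and leave it running in the background; note that MySQL must actually resolve client host names through this dnsmasq instance, i.e. skip-name-resolve must be off:
chmod +x docker-dns.sh
DNSMASQ_CONFIG=/etc/dnsmasq.d INTERVAL=10 ./docker-dns.sh &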
Then, create your MySQL user with the host equal to the container's name, e.g. if your container's name is blah, create the MySQL user as 'you'@'blah'.
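For example, a sketch (user name, password, and database here are placeholders):
CREATE USER 'you'@'blah' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON mydb.* TO 'you'@'blah';
Because the host part is a name rather than an IP address, MySQL resolves the connecting client through dnsmasq on each connection, so the grant keeps working when the container's IP changes.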
I think your current approach is wrong. You can simply use the official MySQL container and link the containers you want to have access to it:
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
This will add an entry to the some-app /etc/hosts file with the name "mysql" pointing to the MySQL container, as described in the docker linking docs.
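For example, assuming the official image is started as some-mysql (the names and password are placeholders):
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -d mysql
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
# inside some-app the database is then reachable by host name:
mysql -h mysql -u root -p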

MySQL dump CronJob

I'm trying to create a cron job that backs up my MySQL slave daily. The backup.sh content:
#!/bin/bash
#
# Backup mysql from slave
#
#
sudo mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'
sudo mysqldump -u root -p'xxxxx' ng_player | gzip > database_`date +\%Y-\%m-\%d`.sql.gz
sudo mysqladmin -u root -p'xxxxx' start-slave
I made it executable by sudo chmod +x /home/dev/backup.sh
and added it to the crontab by:
sudo crontab -e
0 12 * * * /home/dev/backup.sh
but it doesn't work; if I run it from the command line it works, but not from crontab.
FIXED:
I used the script from this link: mysqldump doesn't work in crontab
Break the problem in half. First try sending only an email from the cron job, to see whether you are getting it to run at all. Put the script below in a file and have your cron job point to it:
#!/bin/bash
/bin/mail -s "test subject" "yourname@yourdomain" < /dev/null
The good thing about using this tester is that it is very simple and more likely to give you some results. It does not depend on your current working directory, which can sometimes be not what you expect it to be.
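Another simple tester (a sketch) is to dump cron's environment to a file, since a reduced PATH is a common reason a script works interactively but not under cron:
#!/bin/bash
# write cron's environment somewhere readable, then compare it with your login shell's
env > /tmp/cron-env.txt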
Try using the full path to the mysql binary in the .sh file,
for example:
sudo /usr/bin/mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'
I had this same problem.
I figured out that you can't use the sudo command in a non-interactive script.
The sudo command would prompt for the password of your account (root).
If you are logged into a command prompt, e.g. over ssh, sudo works without typing in any passwords, but when another program runs sudo it will ask for the password.
Try su instead; the su command doesn't require any login when run as root, and it does the same thing.
su --session-command="mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'" root
su --session-command="mysqldump -u root -p'xxxxx' ng_player | gzip > database_`date +\%Y-\%m-\%d`.sql.gz" root
su --session-command="mysqladmin -u root -p'xxxxx' start-slave" root
Replace root with your linux username.
EDIT:
Look at this thread for a different answer.
https://askubuntu.com/questions/173924/how-to-run-cron-job-using-sudo-command
Let's start with the silly stuff in the script.
The only command which you don't run via 'sudo' is, spookily enough, the only command which I would expect might need to be run via sudo (depending on the permissions of the target file).
Prefixing the commands in a script with sudo without a named user (i.e. running as root) serves no useful function if you are invoking the script as root.
On a typical installation, the mysql, mysqladmin and gzip programs are executable by any user - authentication and authorization are handled by the DBMS itself, using the credentials passed as arguments - hence I would not expect any of the operations here to need root, except possibly writing to the output file (depending on its permissions).
You don't specify a path for the backup file - maybe it's writing it somewhere other than you expect?
(similarly, you should check if any of the executables are in a location which is not in the $PATH for the crontab execution environment).
but it doesn't work
....is not an error message.
The output of any command run via cron is mailed to the owner of the crontab - go read your mail.
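Putting those points together, a revised backup.sh might look like this (a sketch; the binary paths and the backup directory are assumptions to verify on your system, e.g. with command -v mysqldump):
#!/bin/bash
# Backup mysql from slave - absolute paths, no sudo, explicit output location
BACKUP_DIR=/home/dev/backups
/usr/bin/mysql -u root -p'xxxxx' -e 'STOP SLAVE SQL_THREAD;'
/usr/bin/mysqldump -u root -p'xxxxx' ng_player | /bin/gzip > "$BACKUP_DIR/database_$(date +%Y-%m-%d).sql.gz"
/usr/bin/mysqladmin -u root -p'xxxxx' start-slave
(The \% escaping is only needed when the date command is written directly on a crontab line, where % is special; inside a script a plain % works.)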

Expect scripting: remote database backup automation

I'm looking for a kind of remote database backup automation.
Then I came across a scripting language commonly used for administrative tasks, "Expect", and I believe it could serve my purpose very well.
What I'd like to do is log in to a remote server using the following script from my local Linux box (suppose everything has been set up properly: SSH authentication via a generated key pair, so no password is required).
For the most important part, I'd like to send a mysqldump command to perform backup for my database on that server.
#!/usr/bin/expect
set login "root"
set addr "192.168.1.1"
spawn ssh $login@$addr
expect "#"
send "cd /tmp\r"
expect "#"
send "mysqldump -u root -ppassword my_database > my_database.sql\r"
expect "#"
send "exit\r"
The only problem I found here was after the line send "mysqldump -u root....... ".
It never waits for that process to finish, but immediately exits the shell via the send "exit\r" line.
What do I do to make it wait until the mysqldump command finishes and then log off the SSH session properly?
I don't know the answer to your question: add exp_internal 1 to the top of the program to see what's going on.
However, since you have ssh keys set up, you don't really need expect at all:
ssh $login@$addr 'cd /tmp && mysqldump -u root -ppassword my_database > my_database.sql'
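One likely culprit (an assumption; exp_internal 1 would confirm it) is expect's default 10-second timeout: if mysqldump runs longer than that, the expect "#" after it simply times out and the script falls through to send "exit\r". Disabling the timeout makes the script wait indefinitely for each prompt:
#!/usr/bin/expect
set timeout -1 ;# never give up waiting for an expected pattern
set login "root"
set addr "192.168.1.1"
spawn ssh $login@$addr
expect "#"
send "cd /tmp\r"
expect "#"
send "mysqldump -u root -ppassword my_database > my_database.sql\r"
expect "#"
send "exit\r"
expect eof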

execute mysql on remote server via bash script

I need to execute a mysql command on a remote server, but I seem to be hitting a problem when it comes to executing the actual mysql bit:
#!/usr/bin/expect -f
spawn /usr/bin/ssh -t root@10.0.0.2
expect "password: "
sleep 1
send "password\r"
sleep 2
/usr/bin/mysql databasename -e "update device_log set status = 'Y' where device_id in ('1','2');"
Basically I want to change the flag to Y on device IDs 1 and 2,
but the script outputs
invalid command name "/usr/bin/mysql"
Just append the mysql command to the ssh command to run it in one go, like this:
#!/usr/bin/expect -f
spawn /usr/bin/ssh -t root@10.0.0.2 /usr/bin/mysql databasename -e "the query"
expect "password: "
sleep 1
send "password\r"
I'm not very much into expect, but I suspect that the bare mysql line in your attempt isn't valid syntax for expect to run a command.
Additionally:
You should use SSH keys for passwordless login instead of having a root password hardcoded in a script.
Consider connecting to MySQL remotely, e.g. mysql -h 10.0.0.2 -e "the query", or
Use SSH port forwarding to connect to MySQL securely, e.g. run ssh -L 3307:localhost:3306 root@10.0.0.2 in the background and then connect to TCP port 3307 on localhost: mysql -h 127.0.0.1 -P 3307.
It sounds like /usr/bin/mysql is not the path to the mysql binary on that remote server. You could use just mysql instead, assuming the binary is somewhere in the remote server's PATH. Otherwise you will have to find out where the binary is actually located and adjust the absolute path accordingly.
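A quick way to check (assuming the ssh login from the script works) is to ask the remote shell where the binary actually lives:
ssh root@10.0.0.2 'command -v mysql'
# prints e.g. /usr/local/bin/mysql, or nothing if mysql is not in the remote PATH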

Making an alias for SSH MySQL login using ENDSSH heredoc

I want to make an alias that is kept in my bashrc file to log into a remote MySQL db via SSH.
Assume that I can't add/alter any files on the remote machine that I'm SSHing into. Here's the relevant code.
function ssh_mysql {
echo "SSHing to $server"
ssh -t -t $suser@$server <<ENDSSH
eval "mysql -h "$host" -u $user -p $pass $db"
ENDSSH
}
alias wt_mysql=ssh_mysql
The Problem: entering wt_mysql in the terminal SSHes in and logs into MySQL fine, but when I try to enter any command/query at the MySQL prompt, nothing I submit is executed - including the exit command. I have to Ctrl-C to get back to my local terminal. Although it's a bit beyond my understanding, I believe the problem is related to this topic: Terminating SSH session executed by bash script.
How can I make sure that mysql and any subsequent commands are executed remotely?
Thanks!
I don't understand why you're using eval (or why you're passing the -t switch twice).
I would expect this ssh command to do what you want:
ssh -t $suser@$server "mysql -h '$host' -u $user -p $pass $db"
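The underlying issue with the heredoc version is that mysql's standard input is the heredoc text itself rather than your keyboard, so once the heredoc is exhausted the prompt can never receive your keystrokes; passing the command as an ssh argument with -t keeps the tty attached. A sketch of the function rewritten that way (same variables as in the question):
function ssh_mysql {
    echo "SSHing to $server"
    # note: mysql expects the password glued to -p; with a space, the next
    # word is parsed as the database name, so -p"$pass" may be what you want
    ssh -t "$suser@$server" "mysql -h '$host' -u $user -p'$pass' $db"
}
alias wt_mysql=ssh_mysql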