I have a folder with a lot of SQL scripts. I want to run all of them without specifying their names, just a folder name. Is it possible?
You can't do that natively, but here's a simple bash command:
for sql_file in /path/to/directory/*; do mysql -uUSER -pPASSWORD DATABASE < "$sql_file"; done
Here USER, PASSWORD and DATABASE are the corresponding credentials, and /path/to/directory is the full path to the folder that contains your files.
If you want to filter, for example, only .sql files:
for sql_file in /path/to/directory/*.sql; do mysql -uUSER -pPASSWORD DATABASE < "$sql_file"; done
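One thing to be aware of: the glob expands in lexical order, so if your scripts depend on each other, number their filenames. A quick self-contained sketch (the directory and filenames are made up for the demonstration; the actual mysql call is left commented out):

```shell
#!/bin/sh
# Globs expand in lexical order, so numbered prefixes
# (01_schema.sql, 02_data.sql, ...) control execution order.
dir=$(mktemp -d)
touch "$dir/02_data.sql" "$dir/01_schema.sql" "$dir/10_views.sql"
for sql_file in "$dir"/*.sql; do
    basename "$sql_file"    # prints the files in run order
    # mysql -uUSER -pPASSWORD DATABASE < "$sql_file"
done
rm -r "$dir"
```

Running it prints 01_schema.sql, then 02_data.sql, then 10_views.sql, regardless of creation order.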
This is what worked for me:
1. I created a shell script in the folder containing my scripts:
for f in *.sql
do
    echo "Processing $f file..."
    mysql -u USER "-pPASSWORD" -h HOST DATABASE < "$f"
done
I'm trying to import gzipped MySQL dumps listed in a folder.
The gzipped files are located in .mysqldumps/.
$NAME extracts the database name (the files are always named database_name.sql.gz) and passes it to the mysql command line.
Also, since the username and database name are the same, the same argument ($NAME) is passed twice.
As the files are gzipped, we zcat them (i.e. gunzip -c) before piping them to mysql.
The full script is:
#!/bin/bash
FILES='.mysqldumps/*'
PASSWORD='MyPassword'
for f in $FILES
do
NAME=dbprefix_$(basename "$f" .sql.gz)
echo "Processing $f"
set -x
zcat "$f" | mysql -u "$NAME" -p"$PASSWORD" "$NAME"
done
But when I run the script, it outputs:
./.mysqlimport
Processing .mysqldumps/first_database.sql.gz
+ mysql -u dbprefix_first_database -pMyPassword dbprefix_first_database
+ zcat .mysqldumps/first_database.sql.gz
ERROR 1044 (42000) at line 22: Access denied for user 'dbprefix_first_database'@'localhost' to database 'first_database'
As you can see, the selected database is 'first_database' instead of 'dbprefix_first_database', which of course throws an error, and I just can't understand why $NAME is not used as the database name.
What am I doing wrong?
After some investigation, it turned out the problem came from the dump, not from the script.
mysqldump had been run with the --databases option, which writes a USE `dbname`; statement into the dump; on import, that name was used instead of $NAME.
Problem solved!
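If re-dumping without --databases is not an option, the USE line can also be stripped from the dump before it reaches mysql. A minimal sketch (the sample text stands in for a real dump; the commented line shows how it would slot into the loop above):

```shell
#!/bin/sh
# A dump made with --databases contains a line like: USE `first_database`;
# Deleting that line lets the database named on the mysql command line win.
sample='USE `first_database`;
CREATE TABLE t (id INT);'
cleaned=$(printf '%s\n' "$sample" | sed '/^USE `/d')
printf '%s\n' "$cleaned"
# In the real script:
#   zcat "$f" | sed '/^USE `/d' | mysql -u "$NAME" -p"$PASSWORD" "$NAME"
```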
We have a script which works perfectly; we use it to copy a huge database from server A to server B. Now I want to copy just one table from server A to server B, with the table name as a variable, so the script should only ask for the table name and copy that table from A to B.
This is the script I have made; I must confess I am not very experienced in shell scripting.
#!/bin/sh
#Run on Server A
TABLENAME=$1
echo $TABLENAME
_now=$(date +"%A %d-%m-%Y at %T")
#Copy table $TABLENAME from server A to server B
#Dump table into /directory server A
mysqldump -u admin -p'*****' database_name $TABLENAME > /directory/$TABLENAME.sql
# Copy table to server B
scp /directory/$TABLENAME.sql root@server_b.domain.com:/directory/
# Replace table in database on server B
ssh root@server_b.domain.com "mysql -f -u admin -p'******' database_name -e 'source /directory/$TABLENAME.sql'"
#Remove file on server B
ssh root@server_b.domain.com "rm /directory/$TABLENAME.sql"
#Remove file on A
rm /directory/$TABLENAME.sql
This is the error I get:
.sql
./script_file_name: line 19: unexpected EOF while looking for matching `"'
./script_file_name: line 22: syntax error: unexpected end of file
Thank you.
You are missing quotes (",') as part of the command.
ssh root@server_b.domain.com "mysql -f -u admin -p'******' database_name -e 'source /directory/$TABLENAME'"
Try this as your combined ssh and mysql statement:
ssh root@server_b.domain.com "mysql -h localhost -u admin -p'******' database_name -e 'source /directory/$TABLENAME'"
Added -h and the database host.
Removed the -f switch.
Let me know how it went.
Sorry to have bothered you; this is the solution to my problem:
1. I had two $ characters in my MySQL password, which needed to be escaped like 'fds\$fds\$gfds\$'; I did not know this before.
2. The script will not ask for a table name; the table name must be passed as a parameter after the run command, like this:
./filename table_name
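If you do want the script to ask for the table name when none is supplied, a small wrapper around read does it. A sketch (the function name and prompt text are my own choices):

```shell
#!/bin/sh
# Use the first argument if supplied, otherwise prompt for the table name.
get_table_name() {
    if [ -n "$1" ]; then
        printf '%s\n' "$1"
    else
        printf 'Table name: ' >&2
        read -r name
        printf '%s\n' "$name"
    fi
}

TABLENAME=$(get_table_name "$1")
echo "$TABLENAME"
```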
Just to complete my input, here is the solution that worked fine for me.
Except for some changed directory names, it is the same principle, and the problem is solved.
Hope this is useful.
#!/bin/sh
# Copy table $TABLENAME (passed as argument)
# from server A to server B.
# Run on Server A
TABLENAME=$1
echo $TABLENAME
# Dump table into /root/bash on server A.
mysqldump -h localhost -u root -p'***' tablename $TABLENAME > /root/bash/$TABLENAME.sql
# Copy table to server B.
scp /root/bash/$TABLENAME.sql root@<ip address>:/root/bash/$TABLENAME.sql2
# Replace table in database on server B
ssh root@<ip address> "mysql -f -u root -p'***' tablename -e 'source /root/bash/$TABLENAME.sql2'"
# Remove file on server B.
ssh root@<ip address> "rm /root/bash/$TABLENAME.sql2"
# Remove file on A
rm /root/bash/$TABLENAME.sql
I have a bunch of JPEG images stored as blobs in a MySQL database which I need to download to my local machine. The following does not work; can someone please advise?
Note: I know the code below just overwrites the same file, but for the purpose of this exercise it does not matter.
IFS=$'\n'
for i in `mysql -sN -u******* -p******** -h****** -e "select my_images from mutable"`; do
echo $i > myimage.jpg
done
I'm not really sure what isn't working in your code, but you should be able to fetch all the data and save each image like this:
#!/bin/bash
counter=0;
for i in `mysql -s -N -u******* -p******** -h****** -e"select my_images from mutable"`; do
echo "$i" > "image${counter}.jpg";
counter=$((counter+1));
done
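A caveat with both loops above: command substitution and shell variables are not binary-safe (trailing newlines are stripped and NUL bytes are dropped), which is typically why JPEGs extracted this way come out corrupted. A quick demonstration of the problem, no database involved:

```shell
#!/bin/sh
# Command substitution strips trailing newlines, and shells cannot hold
# NUL bytes in variables, so round-tripping binary data through a
# variable corrupts it.
data=$(printf 'abc\n\n')      # the two trailing newlines are lost
printf '%s' "$data" | wc -c   # 3 bytes survive, not 5
```

This is what the INTO DUMPFILE approach below avoids, since the server writes the bytes to disk itself.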
#!/bin/bash
counter=0;
for i in `mysql -N -u * -p* -e "SELECT id FROM multable"`; do
    mysql -N -u * -p* -e "SELECT my_images FROM multable WHERE id=$i INTO DUMPFILE '/var/lib/mysql-files/tmp.jpg';"
    mv /var/lib/mysql-files/tmp.jpg "image${counter}.jpg"
    counter=$((counter+1));
done
This code extracts each image from its blob, and the image will not be invalid.
The key point is to use INTO DUMPFILE; the file can then be saved to the folder indicated by @@secure_file_priv.
You can see @@secure_file_priv with the following command:
$ echo "SELECT @@secure_file_priv" | mysql -u * -p
And you can change the @@secure_file_priv value by setting it in my.cnf.
There are several my.cnf files in MySQL, and you can check the loading sequence with the following command:
$ mysqld --help --verbose | grep cnf -A2 -B2
But I still suggest loading the image file from the database into /var/lib/mysql-files/ first and then copying it, because some versions of MySQL can only use the /var/lib/mysql-files folder even if you have changed @@secure_file_priv to point to another folder.
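For reference, @@secure_file_priv is a read-only server variable, so it has to be set in the [mysqld] section of my.cnf and the server restarted; the path below is a common default and may differ on your system:

```ini
[mysqld]
secure_file_priv = /var/lib/mysql-files
```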
D:
cd Tools/MySQL5/bin
mysql -u root mysql
use xyz;
source C:/Users/abc/Desktop/xyz.sql;
\q
When I run the above lines in the command prompt it works fine, but when I save them as a batch file and run it, it connects to mysql but doesn't execute the SQL scripts.
What's wrong here is that when you execute the above commands one by one at your prompt, running mysql -u root mysql puts you in the mysql console, so your source command works there; it does not work in your batch file, because the batch file is not running inside the mysql console.
Solution:
What you can do for this is, instead of using source inside mysql, use
mysql dbname < filename
in your batch file in place of
mysql -u root mysql
use xyz;
source C:/Users/abc/Desktop/xyz.sql;
This link can assist you further if needed
This should work
mysql -u root xyz < C:/Users/abc/Desktop/xyz.sql;
It sources the SQL commands from your file
You could write something like this
mysql -u dbUsername yourDatabase -e "SELECT * FROM table;"
Or, to run repeating tasks, create a runtasks.bat file, save it under the root of your project, then write your cmd tasks inside:
mysql -u dbUser -e "DROP DATABASE IF EXISTS testDatabase;"
mysql -u dbUser -e "CREATE DATABASE testDatabase;"
rem Run your migration files
php index.php migration latest
cd application\tests
phpunit
This would work.
mysql.exe -u user_name -p -h HOST SCHEMA -e "select 1 from dual;"
This will also give you the output in the same command terminal.
Hi all,
I have a series of MySQL databases with different users and passwords; nevertheless, the DB structure is the same for all of them.
I can't create a user with the same username and password on all of them, and I need to quickly perform operations on all of them.
I was thinking about a bash script to run via cron.
Any suggestions? I was thinking of something like this, but it is not working :(
#!/bin/bash
uconn=(
'mysql -u user_db1 --password=pass_db1 db1 '
'mysql -u user_db2 --password=pass_db2 db2 '
)
for f in "${uconn[@]}"
do
exec ${f}
echo `mysql show tables`
echo `mysql exit`
done
exit
Why not use the documented way?
for f in "${uconn[@]}"
do
${f} <<EOF
show tables
\\q
EOF
done
Just pasting the full code, taking @Ansgar Wiechers' point into consideration:
#!/bin/bash
uconn=(
'mysql -u user_db1 --password=pass_db1 db1'
'mysql -u user_db2 --password=pass_db2 db2'
)
for f in "${uconn[@]}"
do
${f} <<EOF
show tables
\\q
EOF
done
exit
To execute the code from the local machine on the remote one, this works for me:
ssh ssh_user@mydomain.com 'bash -s' < /local/path/to/multiple_db_connections.sh
where the content of multiple_db_connections.sh is the code above.