I wrote the line below to create a MySQL backup, but for some reason I'm getting Errcode 13.
E:\Xampp\xampp\mysql\bin\\mysqldump -u root --add-drop-database -B project_db -r C:\Documents and Settings\Administrator\My Documents\a.sql
Why does the above line fail to execute? I'm trying to create a DB backup with it. Please help.
It's probably because the file "C:\Documents" does not exist: the path contains spaces, so the shell splits it into several arguments. You probably want this:
-r "C:\Documents and Settings\Administrator\My Documents\a.sql"
Try this
E:\Xampp\xampp\mysql\bin\mysqldump -u root --add-drop-database -B project_db > "C:\Documents and Settings\Administrator\My Documents\a.sql"
" " were missing
E:\Xampp\xampp\mysql\bin\mysqldump -u root --add-drop-database -B project_db -r "C:\Documents and Settings\Administrator\My Documents\a.sql"
The above command should work now that the path is quoted.
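A minimal sketch of why the unquoted path breaks (show_args is a hypothetical helper, not part of mysqldump): the shell splits unquoted arguments on whitespace, so -r only receives the first fragment of the path.

```shell
# Hypothetical helper that just counts its arguments, to show how the
# shell splits an unquoted path containing spaces.
show_args() { echo $#; }

show_args My Documents/a.sql    # unquoted: prints 2 (two arguments)
show_args "My Documents/a.sql"  # quoted: prints 1 (one argument)
```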
Related
I'm writing a script to create backups of a MySQL database running in a docker container. The database is correctly up and running.
My current code is
#!/bin/bash
PATH=/usr/bin:/usr/local/bin:/root/.local/bin:$PATH
docker-compose exec -T db mkdir -p /opt/booking-backup
docker_backup_path="/opt/booking-backup/dump_prod_$(date +%F_%R).sql"
copy_backup_path="/root/backup_scripts/booking_prod/dump_prod_$(date +%F_%R).sql"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "$docker_backup_path"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "/opt/booking-backup/dump_prod.sql"
[ -d ./backup ] || mkdir ./backup
docker cp $(docker-compose ps -q db):$docker_backup_path $copy_backup_path
However, when I execute it, it throws this error:
Error: No such container:path: f0baa241becd20d2690bb901fb257a4bbec8cac17e6f1ce6d50adb9532bbae03:/opt/booking-backup/dump_prod_2019-05-28_14:23.sql
What makes this weirder is that I have the exact same code (but with booking switched out for abc, and with PSQL instead of MySQL) that works correctly.
It appears that this line
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > $docker_backup_path
does not create the output file, but when I use tee I can see the contents of the dump and they are correct.
What's going wrong here?
The shell redirections
docker-compose exec db mysqldump ... > "$docker_backup_path"
docker-compose exec db mysqldump ... > "/opt/booking-backup/dump_prod.sql"
# -----------------------------------^ here
... will be interpreted by your local shell, not inside the container, meaning the files are written to your local filesystem, not the container's. That is why docker cp then fails: the dump file was never created inside the container.
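One way to fix it, as a sketch (service name and paths taken from the question; adjust to your setup): let a shell inside the container perform the redirection. The same principle can be seen without Docker, since whichever shell parses the > decides which filesystem the file lands on.

```shell
# Sketch of a fix: the inner shell (inside the container) performs the
# redirection, so the dump lands on the container's filesystem:
#
#   docker-compose exec -T db sh -c \
#     'mysqldump --add-drop-database --add-drop-table \
#        --user=root --password="pw" booking > /opt/booking-backup/dump_prod.sql'
#
# The principle demonstrated without Docker: the shell that parses ">"
# decides where the file is created.
sh -c 'echo "-- fake dump --" > /tmp/demo_dump.sql'  # inner shell writes the file
cat /tmp/demo_dump.sql                               # prints: -- fake dump --
```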
I'm trying to set up a simple script to back up a MySQL database:
#!/bin/bash
#
# Dumps mydatabase to ~/Developer/mydatabase_dumps/
#
dumpfile="~/Developer/mydatabase_dumps/$(date +"%Y-%m-%d-%H-%M-%S").sql"
echo "Dumping mydatabase to $dumpfile ..."
mysqldump -u root -p mydatabase > "$dumpfile"
echo "Dump completed."
That shouldn't be too complicated, but I always get "No such file or directory":
user$: pwd
/home/myname/Developer/bash_scripts
user$: ls -l
-rwxrw-r-- 1 myname myname 268 Apr 27 15:15 dbdump # name of the script
user$: dbdump # ./dbdump or sh dbdump produce the same result
Dumping mydatabase to ~/Developer/mydatabase_dumps/2017-04-27-15-22-06.sql ...
/home/myname/Developer/bash_scripts/dbdump: line 8: ~/Developer/mydatabase_dumps/2017-04-27-15-22-06.sql: No such file or directory
Dump completed.
The script was written on Ubuntu, so it shouldn't be a problem with Windows line endings. Changing the permissions to 777 didn't work either. I've run out of ideas here and would be glad if someone could point me in the right direction.
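The likely culprit (an educated guess, but it matches the literal ~/... path in the error message): ~ is not expanded inside double quotes, so the shell looks for a directory literally named "~". A sketch of the behavior and of the fix using $HOME, which is expanded even inside quotes:

```shell
# ~ stays literal inside double quotes -- the shell then looks for a
# directory actually named "~":
quoted="~/Developer/mydatabase_dumps"
unquoted=~/Developer/mydatabase_dumps
echo "$quoted"     # prints the literal ~/... path seen in the error message
echo "$unquoted"   # prints /home/<user>/Developer/mydatabase_dumps
#
# Fix for the script: use $HOME, and create the directory first in case
# it doesn't exist:
#   dumpdir="$HOME/Developer/mydatabase_dumps"
#   mkdir -p "$dumpdir"
#   dumpfile="$dumpdir/$(date +"%Y-%m-%d-%H-%M-%S").sql"
#   mysqldump -u root -p mydatabase > "$dumpfile"
```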
I'm just getting started with Docker and was able to set up MySQL according to my needs, by running tutum/lamp and doing a bunch of exec. For example:
docker run -d -p 80:80 -p 3306:3306 --name test tutum/lamp
...
docker exec test mysqldump --host somehost --user someuser --password --databases somedatabase > dump.sql
docker exec test mysql -u root < dump.sql
However, I'm having issues converting this to a Dockerfile. Specifically, the following results in ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock':
FROM tutum/lamp
EXPOSE 80 3306
...
RUN mysqldump --host=$DB_IP --user=$DB_USER --password=$DB_PASSWORD --databases somedatabase > dump.sql
RUN mysql -u root < dump.sql
You will need to override run.sh in order to do that, because MySQL is only installed the first time the container runs.
That is why you cannot connect to MySQL before that point (I wasn't aware of that in my previous answer).
I've managed to execute the mysql command by adding this to the Dockerfile:
FROM tutum/lamp
ADD . /custom
RUN chmod 755 /custom/run.sh
CMD ["/custom/run.sh"]
Then in the same folder create a file run.sh
#!/bin/bash
VOLUME_HOME="/var/lib/mysql"
sed -ri -e "s/^upload_max_filesize.*/upload_max_filesize = ${PHP_UPLOAD_MAX_FILESIZE}/" \
-e "s/^post_max_size.*/post_max_size = ${PHP_POST_MAX_SIZE}/" /etc/php5/apache2/php.ini
if [[ ! -d $VOLUME_HOME/mysql ]]; then
    echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
    echo "=> Installing MySQL ..."
    mysql_install_db > /dev/null 2>&1
    echo "=> Done!"
    /create_mysql_admin_user.sh
else
    echo "=> Using an existing volume of MySQL"
fi
( sleep 20 ; mysql -u root < /custom/dump.sql ; echo "*** IMPORT ***" ) &
exec supervisord -n
This file is the same as /run.sh, with one line added that runs the SQL import after 20 seconds, to make sure the MySQL service is up and running (there must be a more elegant way to run a command right after MySQL starts, of course).
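A sketch of a slightly more robust alternative to the fixed 20-second sleep: a small hypothetical wait_for helper that polls until a probe command succeeds. With MySQL, the probe would be mysqladmin ping, which ships with the server.

```shell
# Hypothetical helper: retry a probe command up to N times, one second
# apart, failing if it never succeeds.
wait_for() {  # usage: wait_for <max_tries> <command...>
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# In run.sh, this would replace the fixed sleep:
#   ( wait_for 30 mysqladmin -u root ping >/dev/null 2>&1 \
#       && mysql -u root < /custom/dump.sql \
#       && echo "*** IMPORT ***" ) &
```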
I have a few script files that are used as crons for the different buildings in my company, but I keep having to go into each file and change OAK3 (and the lowercase oak3) to a different building ID. The files are all located in their respective warehouse's folder, e.g. Desktop/CRON/OAK3. What I would like is for each script to use OAK3 and oak3 based on its own location, instead of my having to edit every file each time we create a new DB for a warehouse.
I am new to the Linux world, so I'm not sure if there is a way to do this; I haven't found anything on Google.
Example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
Desired effect (if this is possible):
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/$WAREHOUSE_ID/$warehouse_id_count_portal.txt --ignore-lines=1
If I understand what you want (which I'm not sure I do), this will help you handle all new databases:
databases=$(mysql -B -r -u "${user}" --skip-column-names -p"${pass}" --execute='show databases')
for db in $databases; do
    ## loop through each database name
    echo "$db" # current DB
    mysqldump -u "$user" --password="$pass" "$db" > "$db.sql" # dump DB to file
done
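One refinement you may want (an assumption on my part: that dumps of MySQL's built-in schemas are unwanted): skip the system databases inside the loop, e.g. with a small hypothetical helper.

```shell
# Hypothetical helper: succeeds (returns 0) for MySQL's built-in schemas,
# so the loop can skip them.
is_system_db() {
  case "$1" in
    information_schema|performance_schema|mysql|sys) return 0 ;;
    *) return 1 ;;
  esac
}

# Inside the loop above, before the mysqldump call:
#   is_system_db "$db" && continue
```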
Using a combination of dirname and basename with the Bash special variable $0, you can get everything you need.
The running script's filename is $0, and dirname $0 gives you the directory path of the executing file. But you don't want the full path, just the last component, which basename provides. realpath is used to resolve that directory to an absolute path, so you don't just get "." back.
Getting just the last directory name:
$ ls
tmp.sh # Ok, there's our file
$ dirname tmp.sh
. # The . is current directory
$ dirname $(realpath tmp.sh)
/home/mjb/OAK3 # so we expand it with realpath
$ basename $(dirname $(realpath tmp.sh))
OAK3 # then take only the last one with basename
So here's how it will work for you:
# Get the directory name
warehouse=$(basename "$(dirname "$(realpath "$0")")")
# And lowercase it with `tr` into a new variable
warehouse_lcase=$(echo "$warehouse" | tr '[:upper:]' '[:lower:]')
# Substitute the variables
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${warehouse}/${warehouse_lcase}_count_portal.txt --ignore-lines=1
See also: Can a Bash script tell which directory it's stored in?
There is a much easier way to figure out the basename of the current working directory: pwd -PL | sed sg.\*/ggg
[san#alarmp OAK3]$ pwd; pwd -PL | sed sg.\*/ggg
/opt/local/OAK3
OAK3
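For comparison, an equivalent (and arguably more readable) way to get the same value without the sed trick:

```shell
# basename of the current working directory, no sed required:
this_dir=$(basename "$(pwd)")
echo "$this_dir"   # e.g. OAK3 when run inside /opt/local/OAK3
```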
So, if I understand your requirement correctly, and you don't want to change the script(s) by hand, you can do this while inside that particular directory:
$ cat example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
#
$ this_dir=$(pwd -PL | sed sg.\*/ggg)
#
$ sed -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/oak3_count_portal.txt --ignore-lines=1
#
$ sed -e "s/$(echo $this_dir | tr '[:upper:]' '[:lower:]')/\${warehouse_id}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/${warehouse_id}_count_portal.txt --ignore-lines=1
Use the -i option to make the change permanent in the file (without creating a new one), like this:
sed -i -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
Note it is -i -e, not -ie: GNU sed would parse -ie as -i with backup suffix "e" and leave an example.she file behind.
How can I run this and write the innobackupex output to a file (while still sending it to the display)?
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz
I need to output the innobackupex log, with ... completed OK! on the last line, to a file. How can I do that?
I've also noticed that it is a bit challenging to save the "OK" output from xtrabackup to a log file, as the Perl script plays with the tty. Here is what worked for me.
If you need to execute innobackupex from the command line, you can do:
nohup innobackupex --user=root --password=pass --databases="db" --stream=tar ./ 2>/path/mybkp.log | gzip -c -1 > /var/backup/backup.tar.gz
(note that the 2> comes before the pipe, so it captures innobackupex's stderr rather than gzip's)
If you need to script it and get an OK message, you can do:
/bin/bash -c "innobackupex --user=root --password=pass --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz" 2>/path/mybkp.log
Please note that in the second command, the double quote closes before the 2>
Append
2> >(tee file)
to your command.
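A sketch of what that looks like with the command from the question (bash process substitution; the log path is an assumption): stderr, where innobackupex writes its "... completed OK!" line, goes to both the terminal and the file. The same mechanism is shown below with a stand-in command.

```shell
# With the original command (sketch; log path is hypothetical):
#
#   innobackupex --user=root --password=pass --databases="db" --stream=tar ./ \
#     2> >(tee /var/backup/innobackupex.log) | gzip -c -1 > /var/backup/backup.tar.gz
#
# The mechanism, demonstrated with a stand-in command that writes to stderr:
{ echo "xtrabackup: completed OK!" >&2; } 2> >(tee /tmp/mybkp.log)
sleep 1   # give the background tee a moment to flush
grep "completed OK!" /tmp/mybkp.log   # the line is now in the log file
```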