I need to take a backup of my database using the mysqldump tool.
My test table has a TIMESTAMP column that I have to use as a filter.
I'm using a bash script with the following code:
#!/bin/bash
cmd="mysqldump --verbose --opt --extended-insert --result-file /home/backups/mysql/test/test_table.20161205.bak test --where=\"timestamp_c>='2016-12-03 00:00:00'\" --tables \"test_table\""
echo $cmd
$cmd
I'm printing the command, which I assume should work. The above script produces this output:
mysqldump --verbose --opt --extended-insert --result-file /home/backups/mysql/test/test_table.20161205.bak test --where="timestamp_c>='2016-12-03 00:00:00'" --tables "test_table"
If I copy the printed command into the terminal, it works;
however, when run from the script it prints the error:
mysqldump: Couldn't find table: "00:00:00'""
Is there something I'm not understanding about quote escaping?
Variables hold data, not code. Define a function instead.
cmd () {
    mysqldump --verbose --opt --extended-insert \
        --result-file /home/backups/mysql/test/test_table.20161205.bak test \
        --where="timestamp_c>='2016-12-03 00:00:00'" \
        --tables "test_table"
}
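If you do need the command in a variable (say, to log it before running), a bash array preserves each argument's quoting where a plain string cannot. A minimal sketch, reusing the paths from the question:
cmd=(mysqldump --verbose --opt --extended-insert
    --result-file /home/backups/mysql/test/test_table.20161205.bak test
    --where="timestamp_c>='2016-12-03 00:00:00'"
    --tables test_table)
echo "${cmd[@]}"   # print the command for inspection
"${cmd[@]}"        # execute it with every argument intact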
Try executing your command using 'eval'. Example:
#!/bin/bash
cmd="mysqldump --verbose --opt --extended-insert --result-file /home/backups/mysql/test/test_table.20161205.bak test --where=\"timestamp_c>='2016-12-03 00:00:00'\" --tables \"test_table\""
eval $cmd
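Note that eval "$cmd" (with the expansion quoted) is the more robust form. Also keep in mind that eval re-parses the entire string, so any shell metacharacters in the data will be executed; the function approach above avoids that risk.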
I'm writing a script to create backups of a MySQL database running in a docker container. The database is correctly up and running.
My current code is
#!/bin/bash
PATH=/usr/bin:/usr/local/bin:/root/.local/bin:$PATH
docker-compose exec -T db mkdir -p /opt/booking-backup
docker_backup_path="/opt/booking-backup/dump_prod_$(date +%F_%R).sql"
copy_backup_path="/root/backup_scripts/booking_prod/dump_prod_$(date +%F_%R).sql"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "$docker_backup_path"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "/opt/booking-backup/dump_prod.sql"
[ -d ./backup ] || mkdir ./backup
docker cp $(docker-compose ps -q db):$docker_backup_path $copy_backup_path
However, when I execute it, it throws this error:
Error: No such container:path: f0baa241becd20d2690bb901fb257a4bbec8cac17e6f1ce6d50adb9532bbae03:/opt/booking-backup/dump_prod_2019-05-28_14:23.sql
What makes this weirder is that I have the exact same code (but with booking switched out for abc, and with PSQL instead of MySQL) that works correctly.
It appears that this line
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > $docker_backup_path
does not create the output file, but when I use tee I can see the contents of the dump and they are correct.
What's going wrong here?
The shell redirections
docker-compose exec db mysqldump ... > "$docker_backup_path"
docker-compose exec db mysqldump ... > "/opt/booking-backup/dump_prod.sql"
# -----------------------------------^ here
... are performed by your local shell, not inside the container, meaning the files are written to your local filesystem, not to the container's filesystem.
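One way to fix it, keeping the dump inside the container as the script intends, is to run the redirection in a shell inside the container; alternatively, redirect straight to a host path and drop the docker cp step entirely. A sketch using the question's own variables:
# Redirection happens inside the container, so the file lands in the container's filesystem
docker-compose exec db sh -c "mysqldump --add-drop-database --add-drop-table \
    --user=root --password=pw booking > '$docker_backup_path'"
# Or: let the local shell redirect, but write directly to the host-side path
docker-compose exec -T db mysqldump --add-drop-database --add-drop-table \
    --user=root --password=pw booking > "$copy_backup_path"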
I pass the following as my GCE startup script but it always logs in as the root user and never as the demo-user. How do I fix it?
let startupScript = `#!/bin/bash
su demo-user
WHO_AM_I=$(whoami)
echo WHO_AM_I: $WHO_AM_I &>> debug.txt
cd..`
I think it should work like this:
#!/bin/bash
sudo -u demo-user bash -c 'WHO_AM_I=$(whoami);
echo WHO_AM_I: $WHO_AM_I &>> debug.txt;'
use "sudo-u" to specify the user, then bash -c 'with all the commands between these particular quotes '' and separated by ;
For example: bash -c 'command1; command2;'
You can try an easier test (it worked for me), for example:
#!/bin/bash
sudo -u demo-user bash -c 'touch test.txt'
Then check with ls -l /home/demo-user/test.txt that demo-user is the owner of the new file.
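Applied to the original startup script, the same pattern would look something like this (a sketch; it assumes demo-user's home directory is the right place for debug.txt):
#!/bin/bash
sudo -u demo-user bash -c 'echo "WHO_AM_I: $(whoami)" >> /home/demo-user/debug.txt'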
I have a few script files that are used as crons for the different buildings in my company. The problem is that I have to go into each file and change OAK3 to a different building ID, as well as the lowercase oak3. The files are all located in their respective warehouse's folder, e.g. Desktop/CRON/OAK3. What I would like is for each script to use OAK3 and oak3 (lowercase) based on its own folder, instead of my having to edit every file each time we create a new DB for a warehouse.
I am new to the Linux world, so I'm not sure if there is a way, and I haven't found anything on Google.
Example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
Desired effect (if possible):
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/${warehouse_id}_count_portal.txt --ignore-lines=1
If I get what you want (which I'm not sure I do), this will help handle all new databases:
databases=$(mysql -B -r -u ${user} --skip-column-names -p${pass} --execute='show databases')
for db in $databases; do
    ## loop over each database name
    echo $db # current DB
    mysqldump -u $user --password=$pass $db > "$db.sql" # dump DB to file
done
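If you go that route, you will probably want to skip MySQL's system schemas; a hedged refinement of the same loop:
for db in $databases; do
    case $db in
        information_schema|performance_schema|mysql|sys) continue ;; # skip system schemas
    esac
    mysqldump -u $user --password=$pass $db > "$db.sql"
done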
Using a combination of dirname and basename with the Bash special variable $0, you can get all of what you need.
The running script's filename is $0, and dirname $0 gives the directory path of the executing file. But you don't want the full path, just the last component, which basename provides. realpath is used to expand the directory so that . is not returned.
Getting just the last directory name:
$ ls
tmp.sh # Ok, there's our file
$ dirname tmp.sh
. # The . is current directory
$ dirname $(realpath tmp.sh)
/home/mjb/OAK3 # so we expand it with realpath
$ basename $(dirname $(realpath tmp.sh))
OAK3 # then take only the last one with basename
So here's how it will work for you:
# Get the directory name
warehouse=$(basename "$(dirname "$(realpath "$0")")")
# And lowercase it with `tr` into a new variable
warehouse_lcase=$(echo "$warehouse" | tr '[:upper:]' '[:lower:]')
# Substitute the variables
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${warehouse}/${warehouse_lcase}_count_portal.txt --ignore-lines=1
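For a copy of the script stored in Desktop/CRON/OAK3, $warehouse expands to OAK3 and $warehouse_lcase to oak3, so the same script can be dropped into each warehouse's folder unchanged.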
See also: Can a Bash script tell which directory it's stored in?
There is a much easier way to figure out the basename of the current working directory: pwd -PL | sed sg.\*/ggg (the sed expression uses g as its delimiter, so it is equivalent to s/.*\///).
[san#alarmp OAK3]$ pwd; pwd -PL | sed sg.\*/ggg
/opt/local/OAK3
OAK3
So, if I understand your requirement correctly, and you don't want to change the script(s) manually by hand, you can do the following while inside that particular directory:
$ cat example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
#
$ this_dir=$(pwd -PL | sed sg.\*/ggg)
#
$ sed -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/oak3_count_portal.txt --ignore-lines=1
#
$ sed -e "s/$(echo $this_dir | tr '[:upper:]' '[:lower:]')/\${warehouse_id}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/${warehouse_id}_count_portal.txt --ignore-lines=1
Use the -i option to make the change permanent in the file (without creating a new one), like this:
sed -i -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
(Keep -i and -e separate: GNU sed parses -ie as -i with backup suffix e, which would leave behind an example.she file.)
I'm having a problem: I need to save tail output to MySQL. I can save the output to a file.
Here is the tail command:
tail -f file_ | egrep --line-buffered param_ > path_destinty
For my application it is necessary to save the information at the time it is written.
Any tips?
Example:
tail -f file_ | \
grep -E --line-buffered param_ | \
while read line; do \
    mysql -u root -proot -h 127.0.0.1 -e 'INSERT INTO `test`.`test` (`text`, `updated`) VALUES ("'"${line}"'", NOW());'; done
Pipes:
tail your file
because egrep is deprecated, use grep -E
a loop that reads each line and sends it to MySQL
Params of MySQL:
-e Executes the query
-u Username
-p Password for this user (no space between -p and the password)
-h Host/IP
`test`.`test` is the database and table name
${line} is our variable holding the text
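One caveat: any double quote or backslash in $line will break out of the SQL string. A minimal hardening sketch (same hypothetical table as above) escapes those characters first:
tail -f file_ | grep -E --line-buffered param_ | while IFS= read -r line; do
    escaped=$(printf '%s' "$line" | sed 's/\\/\\\\/g; s/"/\\"/g') # escape \ and "
    mysql -u root -proot -h 127.0.0.1 -e "INSERT INTO \`test\`.\`test\` (\`text\`, \`updated\`) VALUES (\"$escaped\", NOW());"
done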
tmp-file contains:
database_1
database_2
database_3
I want to run a command like "mysqldump DATABASE > database.sql && gzip database.sql" for each line in the above file.
I've got as far as cat /tmp/database-list | xargs -L 1 mysqldump -u root -p
I guess what I want to know is how to use the data passed to xargs more than once (and not just at the end).
EDIT: the following command will dump each database into its own .sql file, then gzip them.
mysql -u root -pPASSWORD -B -e 'show databases' | sed -e '$!N; s/Database\n//' | xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.backup.sql db; gzip *.sql
In your own example you use && to run two commands on one line, so why not do
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql && gzip db.sql'
if you really want to do it all in one line using xargs only. (The sh -c wrapper matters: a bare > db.sql after xargs would be processed once by the outer shell, not once per database.) Though I believe that
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql'; gzip *.sql
would make more sense.
If you have a multicore CPU (most of us have these days) then GNU Parallel http://www.gnu.org/software/parallel/ may reduce the run time:
mysql -u root -pPASSWORD -B -e 'show databases' \
| sed -e '$!N; s/Database\n//' \
| parallel -j+0 "mysqldump -u root -pPASSWORD {} | gzip > {}.backup.sql"
-j+0 will run as many jobs in parallel as you have CPU cores.
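If GNU Parallel is not available, xargs -P offers similar (if less featureful) parallelism; a hedged equivalent of the same pipeline:
mysql -u root -pPASSWORD -B -e 'show databases' \
  | sed -e '$!N; s/Database\n//' \
  | xargs -P"$(nproc)" -L1 -I db sh -c 'mysqldump -u root -pPASSWORD db | gzip > db.backup.sql.gz'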