I'm trying to run a mysqldump from inside the db service of my Docker Compose app.
Problem: Instead of dumping the file to /tmp/mydb.sql, it is printing the output to the screen.
Here is my command: docker compose exec db mysqldump -uroot mydb > /tmp/mydb.sql
There is no file in /tmp when this command is done running. I also confirmed that /tmp is writable.
How can I have the command write to /tmp/mydb.sql?
Shell redirection can be tricky here: by default, the redirection is interpreted by the shell where you ran the docker command, not by a shell inside the container.
If you want to ensure the output is written by the mysqldump command inside the container, use mysqldump with the --result-file=/tmp/mydb.sql option (see the mysqldump documentation) instead of getting confused with shell redirection.
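For example, a minimal sketch using the service and database names from your question:

docker compose exec db mysqldump -uroot --result-file=/tmp/mydb.sql mydb

This writes /tmp/mydb.sql inside the db container. With your original command, the > was handled by your local shell, so the dump would have landed in /tmp/mydb.sql on the host, not in the container.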
Related
I have a database backup command that takes a mysql dump and then uploads that dump file to AWS S3. When I run the command as a normal user it works perfectly, but when I use the same command in a cron job it fails.
I have checked the syslog and there is no error message after the job saying there was a problem. There is only a line saying the job ran, and then cron goes on to the next job.
The command is as follows, I have removed the sensitive parts:
mysqldump -u {{ db_user }} -p{{ db_password }} {{ db_name }} > /home/db_backup.sql | aws s3 cp /home/db_backup.sql s3://{{ s3_url }}/$(date --iso-8601=seconds)_db.sql --profile backupprofile
When this command is run by a normal user there is a warning not to use the mysql password on the command line, but this is essential for the command to work without interaction. There is also a second line of output from S3 saying that the upload worked. Could these outputs be affecting the cron job in some way?
You will need full paths in your cron jobs; I see they are missing for both mysqldump and aws. Run whereis mysqldump and whereis aws to find the full paths you need.
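For example, a sketch of your command with absolute paths (the paths here are assumptions; check yours with whereis). Note it also chains the two commands with && so the upload only starts after the dump has finished, instead of piping them, since a pipe would start the upload while the file is still being written:

/usr/bin/mysqldump -u {{ db_user }} -p{{ db_password }} {{ db_name }} > /home/db_backup.sql && /usr/local/bin/aws s3 cp /home/db_backup.sql s3://{{ s3_url }}/$(date --iso-8601=seconds)_db.sql --profile backupprofile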
Try checking the environment variables: cron passes a minimal set of environment variables to your jobs. You can set PATH easily inside the crontab:
PATH=/usr/local/bin:/usr/sbin
Also, many cron implementations execute commands using sh, and you might be using another shell in your script. You can tell cron to run all commands in bash by setting the shell at the top of your crontab:
SHELL=/bin/bash
Cron also tries to interpret the % symbol; you need to escape it if you have one somewhere in your command.
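For example, a hypothetical crontab line (the schedule, credentials, and paths are placeholders) where the % in the date format must be escaped:

0 2 * * * /usr/bin/mysqldump -u dbuser -pdbpass mydb > /home/db_backup_$(date +\%F).sql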
If the first output after running your command is interactive, that is, if it asks you to hit enter or something like that, then this is the issue; otherwise there shouldn't be any problem with it.
I am using Ubuntu Server. Can I set up / configure mysql to auto-connect to another host? I mean, if I type mysql in my terminal, it will connect automatically to a specific host.
Thanks.
There are two parts to the answer.
First, how to make an alias: a word that, when you type it, executes a specific command.
Create ~/.bash_aliases if it does not exist
touch ~/.bash_aliases
Open ~/.bash_aliases in an editor and append alias MY_COMMAND_ALIAS="THE COMMAND YOU WANT"
or use the command echo 'alias MY_COMMAND_ALIAS="THE COMMAND YOU WANT"' >> ~/.bash_aliases
Save it
Run source ~/.bash_aliases to reload it, or reopen the terminal
Second part - the command you want to run; the mysql command will be something like:
mysql -u $user -p$password -Bse "command1;command2;....;commandn"
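Putting the two together, a sketch (the alias name, user, and host are placeholders for illustration):

echo 'alias mysqlremote="mysql -h db.example.com -u myuser -p"' >> ~/.bash_aliases
source ~/.bash_aliases
mysqlremote

If you want plain mysql itself to do this, you could instead alias mysql='mysql -h db.example.com'.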
You can write a simple wrapper like this:
#!/bin/sh
/usr/bin/mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --host="$MYSQL_HOST" "$@"
That should work almost identically to the default mysql command, with a few tiny exceptions, like how LOAD DATA INFILE will read files only on the server, not your local machine.
You may need to do the same for mysqldump and other related commands like mysqladmin if you use those.
Be sure to specify the actual path to the mysql binary you want to run. I'm using /usr/bin/mysql here but it could be something else.
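A sketch of wiring it up (the values and the ~/bin location are assumptions; any directory that precedes /usr/bin in your PATH works):

export MYSQL_USER=myuser MYSQL_PASSWORD=secret MYSQL_HOST=db.example.com
chmod +x ~/bin/mysql
mysql -e 'SELECT VERSION();'   # now runs against db.example.com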
I have a mysql docker container and it is up and running. I can get into it and see the mysql prompt. I do not want to mount external storage on it. Every time I start this container, I want to execute a test.sql script from my file system to create the DB and do a few such actions. When I run the script, which exists in my current working directory, it complains. I know this is trivial but I am unable to catch the issue.
/microservices/mysqlcontainer]docker exec 3a21e5e3669d /bin/sh -c 'mysql -u root -ppwd <./test.sql'
/bin/sh: ./test.sql: No such file or directory
Since your script is outside the container, there's no need for the input redirection to happen inside the container. Let your local shell handle it and pass the stream through with -i. You should be able to run the following:
docker exec -i 3a21e5e3669d mysql -u root -ppwd <./test.sql
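The -i flag keeps STDIN open, so your local shell reads ./test.sql and docker forwards it to mysql inside the container. If you are running under docker compose, the equivalent (assuming a service named db, and disabling the pseudo-TTY with -T) would be:

docker compose exec -T db mysql -u root -ppwd < ./test.sql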
I am using docker with docker compose on my server (debian 7.5).
I have one mysql container and one postgresql container for 2 applications.
When I want to create a backup of my database (for example the database "mydb" from my mysql container), I do it like this:
docker exec -it <my_container_id> mysqldump --opt mydb > "/backup/mydb$(date +%Y%m%dà%H%M).sql"
It works very well. The backup is saved on my local server.
The thing is, I want to run this task every day, so I created a file "backup.sh" in /etc/cron.daily/ (and then ran chmod +x backup.sh).
But the problem is, the cron doesn't work at all. I can see in the log that the file is executed, but I don't have anything in my backup folder.
Has anyone else had this issue?
Thank you very much
Cron does not set the PATH unless you specify it.
Try with the docker absolute path (/usr/bin/docker most likely) in your shell script.
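A minimal sketch of backup.sh along those lines (the /usr/bin/docker path is an assumption; check yours with whereis docker). Note the -it flags are also dropped, since cron provides no TTY for docker exec to attach to:

#!/bin/sh
/usr/bin/docker exec <my_container_id> mysqldump --opt mydb > "/backup/mydb$(date +%Y%m%dà%H%M).sql"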
Trying to dump a remote wordpress database using this shell script:
db_name=$(php -r 'include("public_html/local-config.php");echo DB_NAME;');
db_user=$(php -r 'include("public_html/local-config.php");echo DB_USER;');
db_password=$(php -r 'include("public_html/local-config.php");echo DB_PASSWORD;');
db_host=$(php -r 'include("public_html/local-config.php");echo DB_HOST;');
mysqldump --user="$db_user" --password="$db_password" "$db_name";
To have the dump locally I run this in my local terminal (host in .ssh/config):
ssh host 'bash -s' < thatscript.sh > dump.sql
Then I would do mysql localdb < dump.sql to import it locally.
Now I tried to put the ssh and local mysql part into the script as well, so that I can just fire up the script and it does everything.
I tried to put the original script into a function and then call it using
mysql localdb < $(ssh host 'bash -s' < myDumpFunction)
but that assumes a file called myDumpFunction and obviously does not work. How would I call the function in that case correctly?
Any other ideas on how to achieve a one line database sync for a local dev system are appreciated as well.
What about writing another .sh file on the remote machine and starting it over the ssh connection? Do not forget to chmod it; the script should be executable.
example (ipaddr being the remote PC's address):
ssh username@ipaddr ./start.sh
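Alternatively, to get a true one-liner without an intermediate file, you could pipe the remote dump straight into your local database; a sketch reusing the names from the question ("host" from your .ssh/config, "localdb" as your local database):

ssh host 'bash -s' < thatscript.sh | mysql localdb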