I am currently using MAMP Pro and I want to create a simple terminal script (like a .bat file on the PC) which lets me create an SQL dump (with the database name preconfigured) and automatically creates the .sql file for me.
This is what i know so far:
1: The script should first cd into /applications/MAMP/library/bin
2: When it is in that directory, it should run the following:
./mysqldump -u root -p databaseName > /Applications/MAMP/htdocs/website/db.sql
Finally, when I run the script, it should not replace the older file but create a new one, maybe just adding the date or an integer after the name.
Is that possible?
Apologies, I have no idea of the best way to do this.
Thanks in advance.
You can use something like this:
#!/bin/sh
# Timestamp used to make each dump file name unique
TS=$(date '+%s')
cd /applications/MAMP/library/bin && \
./mysqldump -u root -p databaseName > "/Applications/MAMP/htdocs/website/db.$TS.sql"
It would create files in the form of db.(timestamp).sql.
If you want a different form, just change the date format string. '+%s' produces the Unix timestamp. You can see other formats with date --help or man date.
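For example, to get more readable names such as db.2024-05-01_143000.sql (an illustrative name, not from your setup), you could use a different format string:
#!/bin/sh
# Same script, but with a human-readable timestamp in the file name
TS=$(date '+%Y-%m-%d_%H%M%S')
cd /Applications/MAMP/library/bin && \
./mysqldump -u root -p databaseName > "/Applications/MAMP/htdocs/website/db.$TS.sql"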
Just a variation on the other answer (vote for it!). You don't have to change directory to execute mysqldump:
#!/bin/bash
TS=$(date '+%s')
/applications/MAMP/library/bin/mysqldump -u root -p databaseName \
> "/Applications/MAMP/htdocs/website/db.$TS.sql"
Please beware that you are mixing /applications and /Applications in your paths. Depending on your filesystem's case sensitivity this might be an error.
Are you using OS X?
#!/bin/bash
cd /applications/MAMP/library/bin && ./mysqldump -u root -p databaseName > /Applications/MAMP/htdocs/website/db.sql
After this, you'll need to set execute permission on your script:
chmod +x your_script_file
Now, you can call...
./your_script_file
or
sh your_script_file
If you want to replace the old file, this command will do so, because the new file name is the same as the old file name.
Related
I've read a post saying that I would have to use the command below to dump my SQL files:
$ mysqldump -u [uname] -p db_name > db_backup.sql
However, I don't quite get where db_backup.sql comes from. Am I supposed to create the file myself, or is it created with whatever name I enter for the backup SQL?
Also, where can I find the db_backup.sql file?
More information: I am doing this with MariaDB/phpMyAdmin on the XAMPP shell.
Whichever directory you are in when you run that command will have db_backup.sql.
The statement:
mysqldump -u user -p db_name
prints its output to the terminal screen, assuming you are SSH'ed into the server. Instead of having the output written to the screen, you can redirect it to a file.
To do that, you put > db_backup.sql after the command.
If you are in directory /home/hyunjae/backups and run the command:
$ mysqldump -u [uname] -p db_name > db_backup.sql
You will see a new file created called db_backup.sql. If there's already a file with that name, it will be overwritten.
If you don't know what directory you are in, you can type pwd for present working directory.
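For example, using the hypothetical directory from above:
$ cd /home/hyunjae/backups
$ mysqldump -u [uname] -p db_name > db_backup.sql
$ pwd
/home/hyunjae/backups
$ ls
db_backup.sql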
db_backup.sql is generated automatically when you run the mysqldump command with the > redirection, and it is located in the current directory (.) by default. If you need a specific directory, you can directly specify /path/to/target/db_backup.sql.
Related
I would like to append the date to the file name when taking a backup with mysqldump. I am storing the command in a properties file and running it via ProcessBuilder and a shell script. I have tried multiple ways to add the date (all the answers here assume the command is run directly in Linux):
mysqldump -u <user> -p <database> | gzip > <backup>$(date +%Y-%m-%d-%H.%M.%S).sql.gz
Got the error: No table found for "+%Y-%m-%d-%H.%M.%S"
mysqldump -u root -ppassword dbName --result-file=/opt/backup/`date -Iminutes`.dbName.sql
Got the error: unknown option '-I'
Is there a workaround to add the date in the command itself? I cannot append the date in the Java method or the shell script.
I can't tell from your question whether you're running in a shell. If so, try these three lines to generate your backup file.
DATE=`date +%Y-%m-%d-%H.%M.%S`
FILENAME=backup${DATE}.sql.gz
mysqldump -u user -p database | gzip > ${FILENAME}
Notice how you should surround the date command in the first line with backticks, not ${}, to get its result into the DATE shell variable.
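If the command is started from ProcessBuilder rather than a shell, backticks and $(...) are never expanded, because no shell is involved. One possible workaround (a sketch; user, database and target path are placeholders) is to hand the whole pipeline to an explicit shell:
# Run the pipeline through /bin/sh so that the embedded date command is expanded
/bin/sh -c 'mysqldump -u user -p database | gzip > /opt/backup/backup_`date +%Y-%m-%d-%H.%M.%S`.sql.gz'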
Related
I'm trying to add a cronjob to the crontab (Ubuntu server) that backs up the MySQL db.
Executing the script in the terminal as root works well, but inserted in the crontab nothing happens. I've tried running it every minute, but no files appear in the folder /var/db_backups.
(Other cronjobs work well)
Here is the cronjob:
* * * * * mysqldump -u root -pHERE THERE IS MY PASSWORD
--all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
What can be the problem?
You need to escape the % character with \:
mysqldump -u 'username' -p'password' DBNAME > /home/eric/db_backup/liveDB_`date +\%Y\%m\%d_\%H\%M`.sql
I was trying the same, but I found that the dump was created with 0 KB. Then I came across the solution below, which saved my time.
Command:
0 0 * * * mysqldump -u 'USERNAME' -p'PASSWORD' DATABASE > /root/liveDB_`date +\%Y\%m\%d_\%H\%M\%S`.sql
NOTE:
1) You can change the time setting as per your requirement. I have set it to run every day at midnight in the above command.
2) Make sure you enter your USERNAME, PASSWORD, and DATABASE inside single quotes (').
3) Put the above command in the crontab.
I hope this helps someone.
Check the cron logs (they should be in /var/log/syslog). You can use grep to filter them:
grep CRON /var/log/syslog
Also, you can check your local mailbox to see if there is any cron mail:
/var/mail/username
You can also set a different receiving address in your crontab file:
MAILTO=your@mail.com
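A minimal example crontab (schedule, address and password are placeholders) combining this with the escaped % trick from above:
MAILTO=your@mail.com
0 3 * * * mysqldump -u root -p'PASSWORD' --all-databases | gzip > /var/db_backups/database_`date +\%d\%m\%y`.sql.gz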
Alternatively, you can create a custom command mycommand, to which you can add more options. You must give it execute permissions.
It is preferable to have a folder where you store all your backups; in this case, a writable folder "backup", which you first create in your home directory, for example.
My command in /usr/local/bin/mycommand:
#!/bin/bash
MY_USER="your_user"
MY_PASSWORD="your_pass"
MY_HOME="your_home"

case $1 in
  "backupall")
    cd $MY_HOME/backup
    mysqldump --opt --password=$MY_PASSWORD --user=$MY_USER --all-databases > bckp_all_$(date +%d%m%y).sql
    tar -zcvf bckp_all_$(date +%d%m%y).tgz bckp_all_$(date +%d%m%y).sql
    rm bckp_all_$(date +%d%m%y).sql;;
  *) echo "Others";;
esac
Cron: Runs the 1st day of each month.
0 0 1 * * /usr/local/bin/mycommand backupall
I hope it helps somewhat.
Ok, I had a similar problem and was able to get it fixed.
In your case you could put that mysqldump command in a script,
then source the profile of the user who normally runs mysqldump, e.g.:
. /home/bla/.bash_profile
and then use the absolute path of the mysqldump command:
/usr/local/mysql/bin/mysqldump -u root -pHERE THERE IS MY PASSWORD --all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
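Putting both points together, a wrapper script for cron might look like this (a sketch; the password and paths are the placeholders used above):
#!/bin/bash
# Load the environment of the user who normally runs mysqldump
. /home/bla/.bash_profile
# Use the absolute path so cron does not depend on PATH; inside a script % needs no escaping
/usr/local/mysql/bin/mysqldump -u root -p'PASSWORD' --all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz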
Localhost MySQL backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(There is no space between the -p and password or -u and username - replace root with a correct database username.)
It works for me: no space between -p and the password, or -u and the username.
Create a new script file and put the dump command in it, writing the dump to a file location and zipping it. Then run that script via cron, as sketched below.
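A minimal sketch of such a script and its cron entry (all names, credentials and paths are placeholders):
#!/bin/bash
# dump_db.sh - dump one database and gzip it with a date suffix
mysqldump -u backupuser -p'PASSWORD' db_name | gzip > /var/db_backups/db_name_$(date +%Y%m%d).sql.gz

# crontab entry, e.g. daily at 02:00:
# 0 2 * * * /usr/local/bin/dump_db.sh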
I am using Percona Server (a MySQL fork) on Ubuntu. The package (very likely the regular MySQL package as well) comes with a maintenance account called debian-sys-maint. In order for this account to be used, the credentials are created when installing the package; and they are stored in /etc/mysql/debian.cnf.
And now the surprise: A symlink /root/.my.cnf pointing to /etc/mysql/debian.cnf gets installed as well.
This file is an option file read automatically when using mysql or mysqldump. So basically you then have login credentials given twice - in that file and on the command line. This was the problem I had.
So one solution to avoid this is to use the --no-defaults option for mysqldump. The option file then won't be read. However, you then provide credentials via the command line, so anyone who can issue a ps can actually see the password while the backup runs. So it's best if you create your own option file with user name and password and pass it to mysqldump via --defaults-file.
You can create the option file by using mysql_config_editor or simply in any editor.
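For example, a hand-written option file and the matching call could look like this (file name and credentials are placeholders); note that --defaults-file must be the first option on the command line:
# /etc/mysql/backup.cnf - make it readable only by the backup user
[client]
user=backupuser
password=PLACEHOLDER

# --defaults-file has to come before any other option
mysqldump --defaults-file=/etc/mysql/backup.cnf --all-databases | gzip > /var/db_backups/all.sql.gz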
Running mysqldump via sudo from the command line as root works, just because sudo usually does not change $HOME, so .my.cnf is not found then. When running as a cronjob, it is.
You might also need to restart the service to load any of your changes.
service cron restart
or
/etc/init.d/cron restart
Related
Currently I am learning MySQL using the command line on Ubuntu, and I made a backup of my database named 'sandwich' using the mysqldump command.
mysql> #mysqldump -u root -p123456 sandwich > db_backup.sql;
Where can I find this 'db_backup.sql' file on the disk? Please tell me a specific file path where I can find it.
It depends where you were when you executed the command - db_backup.sql will be found there as you didn't specify a full path.
Try your home dir or the web dir if you can't remember - anywhere you might have been when entering mysql. If all else fails you can use find:
find / -name 'db_backup.sql'
This may take some time, so if you can narrow down the search area and replace / with ~/, for example, that would help.
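For instance, to limit the search to your home directory:
find ~/ -name 'db_backup.sql'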
Run the following command without logging into the mysql terminal:
mysqldump -u root -p db_name > db_dump.sql
It will ask for the password. Enter the password.
When the dump is complete, type the following command:
ls
You will see the db_dump.sql file in the list.
Related
This should be quick and simple, but after researching on Google quite a bit I am still stumped. I am mostly a newbie with: server admin, CLI, MySQL.
I am developing my PHP site locally, and now need to move some new MySQL tables from my local dev setup to the remote testing site. First step for me is just to dump the tables, one at a time.
I successfully log in to my local MySQL like so:
Govind% /usr/local/mysql/bin/mysql -uroot
but while in this dir (and NOT logged into MySQL):
/usr/local/mysql/bin
...when I try this
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql
...then I keep getting this:
"myTestDumpedTable.sql: Permission denied."
Same result if I do any variation on that (try to dump the whole db, drop the '-p', etc.)
I am embarrassed, as I am sure this is going to be incredibly simple, or will just reveal a gaping (basic) hole in my knowledge... but please help ;-)
The answer came from a helpful person on the MySQL list:
As you guys (Anson and krazybean) were thinking, I did not have permission to write to the /usr/local/mysql/bin/ dir. But starting from any other directory, calls to mysqldump were failing because my shell PATH variable (if I said that right) is not yet set up to run mysqldump from another dir. Also, for some reason I do not really understand yet, I needed to use a full path for the output, even when I was calling mysqldump correctly and had permission to write to the output dir (e.g. ~/myTestDumpedTable.sql). So here was my ticket, for now (quick answer):
Govind% /usr/local/mysql/bin/mysqldump -uroot -p myDBname myTableName > /Users/Govind/myTestDumpedTable.sql
You can write to wherever your shell user has permission to do so. I just chose my user's home dir.
Hope this helps someone someday.
Cheers.
Generally I stick with defining the hostname anyway, but since you are root that doesn't seem like it would be the problem, so I would question where you are writing this to. What happens when you dump to > ~/myTestDumpedTable.sql?
I was facing the same problem; the issue is with user access to the 'root/bin' dir.
Switch your user to root:
sudo -s
Then execute the command
mysqldump -uroot -p homestayadvisorDB > homestayadvisor_backup.sql
This will resolve the issue. Let me know if this doesn't work.
Take a look at the man page for mysqldump for correct argument usage. You need a space between the -u flag and the username, like so:
mysqldump -u root -p myDBname myTableName > myTestDumpedTable.sql
Alternatively you can do
mysqldump --user=root -p myDBname myTableName > myTestDumpedTable.sql
Since you're not providing a password in the list of arguments, you should be prompted for one. You can always provide the password in the list of arguments, but the downside to that is it appears in cleartext and will show up in the shell's command history.
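For example (the password shown is a placeholder), the two forms are:
# Prompted for the password - nothing sensitive on the command line or in the history
mysqldump -u root -p myDBname myTableName > myTestDumpedTable.sql
# Password inline - note there is no space after -p, and it is visible in cleartext
mysqldump -u root -pMyS3cret myDBname myTableName > myTestDumpedTable.sql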
In my case I had created the directory with sudo mkdir /directory/to/store/sql/files, so the owner of that directory was root. Changing the owner with sudo chown me:me /directory/to/store/sql/files and also changing the permissions with, say, sudo chmod 744 /directory/to/store/sql/files did the trick for me.
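As a concrete sketch (the directory path is the placeholder used above):
sudo mkdir -p /directory/to/store/sql/files      # created as root
sudo chown me:me /directory/to/store/sql/files   # hand ownership to your user
sudo chmod 744 /directory/to/store/sql/files     # or 755 if other users must enter it
mysqldump -u root -p db_name > /directory/to/store/sql/files/db_backup.sql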
mysqldump doesn't work with sudo. If you are using sudo mysqldump, then try the solution below:
sudo su
mysqldump -u[username] -p[password] db_name > newbackupfile.bkp
You should provide a full path for the SQL backup file, such as:
mysqldump -u root -p databasexxx > /Users/yourusername/Sites/yoursqlfile.sql
I think you're missing the ./ from the command, try:
while inside the directory:
/usr/local/mysql/bin$ ./mysqldump -u root -p myDBname > "/Users/yourUserName/Documents/myTestDumpedTable.sql"
The ./ is needed because the current directory is usually not on your PATH, so you run an executable that lives in the current directory as ./mysqldump.
I found this just today, and for me, on my Mac OS X, I didn't use the -p, maybe because no password was needed; I'm not sure. So you could also try:
./mysqldump -u root myDBname > "/Users/yourUserName/Documents/myTestDumpedTable.sql"