How do I automate a MySQL database restore every day with crontab? - mysql

I have a demo website where users can register, log in, share posts, and add products, but it is for demo purposes only, so every day 10 to 15 people register and test how it works.
I don't need everyone's data, though. I have a fresh mysql.sql file that contains hardly any data, and I want to automate a task with crontab:
every day it should drop the current database and load my mysql.sql file.
How can I do this?
OS: Ubuntu 19.04

First, if your database contains stored procedures as well, make sure your restore file contains a query to delete those stored procedures before the rest of the database is restored.
To delete stored procedures, add the following line to your mysql.sql file:
DELETE FROM mysql.proc WHERE db LIKE '%{{database_name}}%' AND type = 'PROCEDURE';
(Note that the mysql.proc table only exists up to MySQL 5.7; on MySQL 8.0 you would instead drop each procedure with DROP PROCEDURE IF EXISTS.)
After this, add a cron job that will restore your database every day. To do this, open a terminal and type sudo crontab -e.
Now enter 0 13 * * * mysql -u {{user_name}} -p{{password}} {{database_name}} < {{path_to_your_sql_file}}, assuming you want the database restored at 1 PM daily. Note that there must be no space between -p and the password; with a space, mysql prompts for the password interactively, which will not work from cron.
After adding the job, save the file.
Once the job is added, you can check it by typing sudo crontab -l in the terminal.
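For a complete worked sketch, with hypothetical names throughout (demo_db, demo_user, secret, and the path are placeholders, not values from the question):
# Build the restore file so that every run starts from a clean database.
cat > /home/demo/mysql.sql <<'EOF'
DROP DATABASE IF EXISTS demo_db;
CREATE DATABASE demo_db;
USE demo_db;
-- your schema and seed data follow here
EOF
# Crontab entry (added via sudo crontab -e); no database argument is given,
# since the file itself selects the database and recreates it if missing:
# 0 13 * * * mysql -u demo_user -psecret < /home/demo/mysql.sql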

Since all you want to do is "reinstall" your DB on a daily basis (is that correct?), you can add this to your install script as its first line:
DROP DATABASE <your database>;
# Here you re-create your DB again (your current sql script)
Let's say you call this script "reinstall.sql"; you can then add the following line to your cron table (which runs the command every day at 1 AM; note the lack of a space after -p):
0 1 * * * mysql -u username -ppassword database_name < /path/to/your/reinstall.sql
To open the cron table for editing, you can do this:
sudo crontab -e -u [USER_WITH_RIGHTS_TO_EDIT]
(Use -l instead of -e to simply list the current entries.)
Hope it helps!
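As an alternative to putting the password on the cron line at all, the mysql client also reads credentials from an option file; a sketch, assuming the cron user's home directory (username and password are placeholders):
cat > ~/.my.cnf <<'EOF'
[client]
user=username
password=password
EOF
chmod 600 ~/.my.cnf   # keep the credentials readable only by this user
# The cron entry then needs no -u/-p flags:
# 0 1 * * * mysql database_name < /path/to/your/reinstall.sql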

MySQL keeps all DB data inside one directory. By default this directory resides within the MySQL installation and is called data. For example, if your default installation is at c:/Users/prakash/mysql-8.0.17, a directory named data will be available inside it.
In principle, you will have to keep a fresh copy of this data directory (without any online user information, as it was when you first built the database by running your DDL scripts) somewhere, let us say at c:/Users/prakash/mysql-8.0.17/fresh. You can then write a cron job to achieve the following (a sketch appears after the list) and schedule it at any convenient time:
Shutdown the database
Delete data directory (recursively)
Copy fresh directory (recursively) where data directory resides
Rename the copied directory to data
Restart the database
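A sketch of what that cron job could run on a Linux host (the paths, service name, and mysql:mysql ownership are assumptions; on Windows you would use net stop/net start and a directory copy instead):
sudo systemctl stop mysql                             # 1. shut down the database
sudo rm -rf /path/to/mysql/data                       # 2. delete the data directory
sudo cp -r /path/to/mysql/fresh /path/to/mysql/data   # 3 + 4. copy fresh into place as data
sudo chown -R mysql:mysql /path/to/mysql/data         # restore the expected ownership
sudo systemctl start mysql                            # 5. restart the database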


MySQL Data Directory on Network Drive?

I am new to databases and MySQL and am still in the process of learning it. I have been tasked to see if it is possible to store the MySQL Data Directory on a Network Drive... The purpose is to have a backup of the directory and to allow multiple users to point to that particular directory.
I have been able to successfully move the data directory to a different location on my PC but have been unsuccessful when I tried moving the data directory into a Network Drive.
Is it possible to move the data directory into a shared Network Drive, and if so, what steps should I take?
Notes:
Windows 10
Attempted moving the directory and editing the my.ini file
Perhaps your approach is not optimal, or I'm misunderstanding the question (or whoever gave you the task isn't clear on the best ways to back up MySQL databases). If I were you, I'd put it to whoever asked for this that making plain-text SQL (*.sql) dumps of the databases and putting those into the backup directory would be easier and simpler than backing up the data directory itself, which contains binary file representations of the databases.
From the mysqldump manual page:
To dump all databases:
$ mysqldump --all-databases > dump.sql
To dump only specific databases, name them on the command line and use the --databases option:
$ mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
To dump a single database, name it on the command line:
$ mysqldump --databases test > dump.sql
Exercise for the reader: write a script scheduled via crontab, or set up a Windows scheduled task, to dump the databases and move the output to the network drive.
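A minimal sketch of such a script, assuming the network drive is already mounted and using placeholder credentials and paths:
#!/bin/sh
# Dump every database to a dated plain-text SQL file on the network share.
mysqldump -u backup_user -psecret --all-databases > /mnt/backup_share/all_databases_$(date +%F).sql
# Schedule it nightly, e.g. with the crontab entry: 0 2 * * * /path/to/this_script.sh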
If that's not what is required, but access to the database by multiple people is, create user accounts using the MySQL Server RDBMS instead. (You might need to configure the server to allow remote access. In that case, remove any test or anonymous/blank password accounts and change the root password to something more secure than root, admin or password1.)

OpenShift 3: mysql.user table is damaged. Please run mysql_upgrade - openshift

Issue
When starting a mysql deployment on OpenShift V3 I get the following exception:
mysql.user table is damaged. Please run mysql_upgrade
I cannot run mysql_upgrade as pod isn't ready.
Questions
I have the following questions:
How can I fix this, or
How can I back up the data?
If the pod won't start, you can mount the volume with your data to another pod and download (oc rsync, interactive tutorial here) what you had mounted in the database pod under /var/lib/mysql/data/. Then, you can try recovering the data from that.
Generally, this could happen if you process an older database dump sql script (created using mysqldump) on a newer MySQL version. In that case, chances are the root user was removed from the table too (if it was not in the old database). If you created such a dump, still have the older dump, and it's "good enough", you should be able to proceed as follows to import the original data again and prevent this situation:
Create a backup copy of the database dump that you have previously created using mysqldump from the older MySQL database version, so that you can always get back to it, if things go south.
Edit the database dump SQL file and remove all the content that manipulates the mysql.user table; that is, delete the lines under the -- Table structure for table 'user' and -- Dumping data for table 'user' sections and save the modified file. (I assume here that you have your user and password specified in environment variables in the MySQL deployment configuration.)
Scale down your database pod to 0 replicas.
Delete your mysql persistent volume claim; this will delete the database data (which you have hopefully downloaded after mounting the volume to another pod, as mentioned above).
Recreate the PVC, under the same name.
Scale your MySQL pod up to one replica. That will initialize the database and create a user as per the environment variables.
Copy the modified sql dump file created in step 2 (that is, the one not affecting the mysql.user table) to the database pod using oc rsync.
In the MySQL pod, restore the database using the uploaded file, as per this migration guide (step 6).
Grant all privileges to your user on the application database with GRANT ALL PRIVILEGES ON <database>.* TO '<mysql_user>'@'%'; (replace <database> and <mysql_user> appropriately).
Exit the MySQL CLI on the pod, and run mysql_upgrade -u root in the shell.
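For steps 6-9, the command sequence might look roughly like this (the pod name, file names, database, and user are all hypothetical):
# Step 6: copy the modified dump into the running pod.
oc rsync ./dump-dir/ mysql-1-abcde:/var/lib/mysql/data/dump-dir/
# Step 7: open a shell in the pod, then restore inside it.
oc rsh mysql-1-abcde
mysql -u root < /var/lib/mysql/data/dump-dir/modified-dump.sql
# Step 8: re-grant privileges to the application user.
mysql -u root -e "GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%';"
# Step 9: run the upgrade check.
mysql_upgrade -u root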

How to schedule a MySQL database backup in a remote Ubuntu server to a Dropbox folder in Windows PC?

I have a MySQL database that I want to back up daily to my Dropbox folder on my Windows PC.
How can I do that automatically from Windows 7?
One of the simplest ways to back up a MySQL database is by creating a dump file, and that is what mysqldump is for. Please read the documentation for mysqldump.
In its simplest syntax, you can create a dump with the following command:
mysqldump [connection parameters] database_name > dump_file.sql
where the [connection parameters] are those you need to connect your local client to the MySQL server where the database resides.
mysqldump will create a dump file: a plain text file which contains the SQL instructions needed to create and populate the tables of a database. The > character will redirect the output of mysqldump to a file (in this example, dump_file.sql). You can, of course, compress this file to make it easier to handle.
You can move that file wherever you want.
To restore a dump file:
Create an empty database (let's say restore) in the destination server
Load the dump:
mysql [connection parameters] restore < dump_file.sql
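For example, with hypothetical credentials:
mysqladmin -u root -p create restore          # create the empty target database
mysql -u root -p restore < dump_file.sql      # load the dump into it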
There are, of course, some other "switches" you can use with mysqldump. I frequently use these:
-d: this will tell mysqldump to create an "empty" backup: the tables and views will be exported, but without data (useful if all you want is a database "template")
-R: include the stored routines (procedures and functions) in the dump file
--delayed-insert: uses insert delayed instead of insert for populating tables
--disable-keys: Encloses the insert statements for each table between alter table ... disable keys and alter table ... enable keys; this can make inserts faster
You can include the mysqldump command and any other compression and copy/move commands in a batch file.
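The core of such a script can be a single pipeline; a sketch with placeholder names:
mysqldump -u backup_user -psecret shop_db | gzip > /backups/shop_db_$(date +%F).sql.gz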
My solution to extract a backup and push it onto Dropbox is as below.
A sample Ubuntu shell script can be downloaded here.
In brief
Prepare a shell script backup.sh
Run backup.sh to create a backup file, e.g. backup.sql
Copy backup.sql to the Dropbox folder
Schedule an Ubuntu/Windows task to run backup.sh, e.g. every day at night
Detail steps
All about backing up and restoring a MySQL database can be found here.
Back up to compressed file
mysqldump -u [uname] -p [dbname] | gzip -9 > [backupfile.sql.gz]
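To restore later, the compressed dump can be streamed straight back into mysql (same placeholders as above):
gunzip < [backupfile.sql.gz] | mysql -u [uname] -p [dbname]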
How to connect remotely from Windows to execute the backup command can be found here.
plink.exe -ssh -i "Path\to\private-key\key.ppk" -noagent username@server-ip
How to bring the file to Dropbox can be found here
Create an app
https://www2.dropbox.com/developers/apps
Add an app and choose Dropbox API App. Note the created app key and app secret
Install the Dropbox uploader script on Ubuntu; use the app key and app secret above
$ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh
$ chmod +x dropbox_uploader.sh
Follow the instructions to authorize access for the app, e.g.
http://www2.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXX
Test that the app is working; it should be OK:
$ ./dropbox_uploader.sh info
The app is created, and the folder associated with it is YourDropbox\Apps\<app name>
Commands to use
List files
$ ./dropbox_uploader.sh list
Upload file
$ ./dropbox_uploader.sh upload <filename> <dropbox location>
e.g.
$ ./dropbox_uploader.sh upload backup.sql .
This will store the file backup.sql as YourDropbox\Apps\<app name>\backup.sql
Done
How to schedule a task on Ubuntu using crontab can be viewed here.
Run the command
sudo crontab -e
Insert a line to run the backup.sh script every day, as below:
0 0 * * * /home/userName/pathTo/backup.sh
Explanation:
minute (0-59), hour (0-23, 0 = midnight), day of month (1-31), month (1-12), weekday (0-6, 0 = Sunday), command
Or, more simply, we can use
@daily /home/userName/pathTo/backup.sh
Note:
To monitor crontab tasks, here is a very good guide.
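Tying the detail steps together, a minimal backup.sh might look like this (user, password, database, and paths are hypothetical):
#!/bin/bash
# Dump and compress the database.
mysqldump -u demo_user -psecret demo_db | gzip -9 > /home/demo/backup.sql.gz
# Push the compressed dump to Dropbox with the uploader script installed above.
/home/demo/dropbox_uploader.sh upload /home/demo/backup.sql.gz .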

Configure MySQL to Periodically Dump all Data

I would like to configure MySQL to periodically dump all data to a .sql file so that if something happens to my server and I need to start over, I can use the file to put all of the data back in place. How can I do this? I can take suggestions involving either the MySQL Workbench user interface or a query.
You'll want to look into mysqldump.exe. This is probably the easiest way to dump the entire database into a raw SQL file.
Then you'll want to set up a script that calls mysqldump.exe with the appropriate parameters, and set up a Windows Scheduled Task or Linux cron job to call the script.
You can use the mysqldump command. Assuming you want to make a backup of a database now:
mysqldump -hMY_HOST.COM -uDB_USERNAME -pDB_PASSWORD USERNAME_DATABASENAME > MysqlDump.sql
To run periodic backups, you can set up a server cron job which runs the same mysqldump command at some interval (e.g. every 24 hours).
After the dump file is created, set up another cron job to copy the dump to the target server (say, a local server); make it execute at the same interval as the cron job above. You can use scp (secure copy) for this:
scp user@MY_HOST.COM:/some/path/file user2@MY_HOST2.COM:/some/path/file
NOTE: The mysqldump command can cause high server load depending on the database size (make sure you execute it when the server is under minimum load).
Cron Job:
To set up a cron job, put the above backup command in a .sh file, then create the cron entry by running:
sudo crontab -e    # open your root crontab file
0 0 * * * /path/to/script.sh    # run the job at midnight every day
(Note: a per-user crontab has no username column; that field only exists in the system-wide /etc/crontab.)
More on cronjobs: http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
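A sketch of what script.sh could contain, reusing the placeholder host and credentials from above:
#!/bin/sh
# Dump the database, then copy the dump to the target server.
mysqldump -hMY_HOST.COM -uDB_USERNAME -pDB_PASSWORD USERNAME_DATABASENAME > /some/path/MysqlDump.sql
scp /some/path/MysqlDump.sql user2@MY_HOST2.COM:/some/path/MysqlDump.sql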

Replicate MYSQL data from stage to dev with a script

I have two versions of my application, one "stage" and one "dev."
Right now, "stage" is exposed to the real world for beta-testing.
From time to time, I want an exact replica of the data to be replicated into the "dev" database.
Both databases are on the same hosted Linux machine.
Sometimes I create "dummy" data in the development environment. At this stage, I'd be fine with it being overwritten by the stage data.
Thanks.
Be sure to add security to your script so that only the user you authorize is able to run it. Basically, you want to use the mysql and mysqldump commands.
mysqldump -u username --password=userpass --add-drop-database --add-locks --create-options --disable-keys --extended-insert --result-file=database.sql databasename
mysql -u username --password=userpass -e "source database.sql;"
The first command makes the backup; the second command loads the backup into another database server. Be careful: because the dump was taken with --add-drop-database, running both commands against the same MySQL instance just backs up the database and restores it over itself, so to restore into a different database you have to change the database name in the dump file.
Hope this helps.
Just use mysqldump to create a backup of the staging database and then load the dump file over your dev database. This will give you an exact copy of the stage data.
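Since both databases live on the same machine, no intermediate file is needed; a one-line sketch with hypothetical names and credentials:
mysqldump -u stage_user -pSTAGE_PASS stage_db | mysql -u dev_user -pDEV_PASS dev_db
This streams the stage dump straight into the dev database; mysqldump emits DROP TABLE IF EXISTS before each CREATE TABLE by default, so existing dev tables are overwritten.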