MySQL database automatic reset after 1 hour for demo mode - mysql

I wish to create a demo mode for my CMS. Is there any way I can have my MySQL database reset automatically after 1 hour?

If you have access to the hosting console, start with a database loaded with the initial data set. Then, just once, take a dump of that database to a file:
mysqldump -u DBUSER -pDBPASS --opt DBNAME > /path/to/my/backup.sql
Then create a cron job (run crontab -e) that restores the database from your dump file (see http://www.adminschoice.com/crontab-quick-reference for more information about crontabs):
mysql -u DBUSER -pDBPASS DBNAME < /path/to/my/backup.sql
For example:
# crontab -e
00 * * * * mysql -u root -p123456 demo < /path/to/my/demo_backup.sql
This will restore your database to its original state every hour (at minute 00).
Note: Keep in mind that if a user happens to be trying your demo while the reset runs, their data will be lost in the middle of the session.
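If leftover tables or schema changes made by demo users are a concern, the restore can be wrapped in a small script that recreates the database first and is then called from cron. This is only a sketch, reusing the DBUSER/DBPASS/demo placeholders from above; the script name is made up:
#!/bin/sh
# Sketch: recreate the demo database from scratch, then reload the saved dump.
# DBUSER, DBPASS, the "demo" database name and the dump path are placeholders.
mysql -u DBUSER -pDBPASS -e "DROP DATABASE IF EXISTS demo; CREATE DATABASE demo;"
mysql -u DBUSER -pDBPASS demo < /path/to/my/demo_backup.sql
The cron line then calls the script instead of mysql directly, e.g. 00 * * * * /path/to/reset_demo.sh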

Related

mysqldump does not want to import data inside mariadb [duplicate]

I was given a MySQL database file that I need to restore as a database on my Windows Server 2008 machine.
I tried using MySQL Administrator, but I got the following error:
The selected file was generated by mysqldump and cannot be restored by this application.
How do I get this working?
If the database you want to restore doesn't already exist, you need to create it first.
On the command-line, if you're in the same directory that contains the dumped file, use these commands (with appropriate substitutions):
C:\> mysql -u root -p
mysql> create database mydb;
mysql> use mydb;
mysql> source db_backup.dump;
It should be as simple as running this:
mysql -u <user> -p < db_backup.dump
If the dump is of a single database you may have to add a line at the top of the file:
USE <database-name-here>;
If it was a dump of many databases, the use statements are already in there.
To run these commands, open a command prompt (in Windows) and cd to the directory that contains the mysql.exe executable (you may have to look around a bit for it; it depends on how you installed MySQL, i.e. standalone or as part of a package like WAMP). Once you're in that directory, you should be able to just type the command as I have it above.
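For example, assuming a default standalone install (the exact install folder and the C:\backups location here are only placeholders, yours will differ):
cd "C:\Program Files\MySQL\MySQL Server 5.5\bin"
mysql -u root -p mydb < C:\backups\db_backup.dump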
You simply need to run this:
mysql -p -u[user] [database] < db_backup.dump
If the dump contains multiple databases you should omit the database name:
mysql -p -u[user] < db_backup.dump
To run these commands, open a command prompt (in Windows) and cd to the directory that contains the mysql.exe executable (its location depends on how you installed MySQL, i.e. standalone or as part of a package like WAMP). Once you're in that directory, you should be able to just type the command.
mysql -u username -p -h localhost DATA-BASE-NAME < data.sql
Look here, step 3: this way you don't need the USE statement.
When we make a dump file with mysqldump, what it contains is a big SQL script for recreating the database contents. So we restore it by starting up MySQL's command-line client:
mysql -uroot -p
(where root is our admin user name for MySQL), and once connected to the server we need commands to create the database and read the file into it:
create database new_db;
use new_db;
\. dumpfile.sql
Details will vary according to which options were used when creating the dump file.
Run this command to enter the DB:
# mysql -u root -p
Enter the password for the user, then create a new DB:
mysql> create database MynewDB;
mysql> exit
and exit. After that, run this command:
# mysql -u root -p MynewDB < MynewDB.sql
Then enter the DB and type:
mysql> show databases;
mysql> use MynewDB;
mysql> show tables;
mysql> exit
That's it. Your dump will be restored into the new DB.
Alternatively, there is another way to restore the dump:
# mysql -u root -p
Then enter the DB and type:
mysql> create database MynewDB;
mysql> show databases;
mysql> use MynewDB;
mysql> source MynewDB.sql;
mysql> show tables;
mysql> exit
If you want to view the progress of the dump try this:
pv -i 1 -p -t -e /path/to/sql/dump | mysql -u USERNAME -p DATABASE_NAME
You'll of course need 'pv' installed. This command works only on *nix.
I got it to work following these steps…
Open MySQL Administrator and connect to server
Select "Catalogs" on the left
Right click in the lower-left box and choose "Create New Schema"
Name the new schema (example: "dbn")
Open Windows Command Prompt (cmd)
Change directory to MySQL installation folder
Execute command:
mysql -u root -p dbn < C:\dbn_20080912.dump
…where "root" is the name of the user, "dbn" is the database name, and "C:\dbn_20080912.dump" is the path/filename of the mysqldump .dump file
Enjoy!
As a specific example of a previous answer:
I needed to restore a backup so I could import/migrate it into SQL Server. I installed MySql only, but did not register it as a service or add it to my path as I don't have the need to keep it running.
I used windows explorer to put my dump file in C:\code\dump.sql. Then opened MySql from the start menu item. Created the DB, then ran the source command with the full path like so:
mysql> create database temp;
mysql> use temp;
mysql> source c:\code\dump.sql
You can try SQLyog 'Execute SQL script' tool to import sql/dump files.
Using a 200 MB dump file created on Linux to restore on Windows with MySQL 5.5, I had more success with the
source file.sql
approach from the mysql prompt than with the
mysql < file.sql
approach on the command line, which caused some Error 2006 "server has gone away" errors (on Windows).
Weirdly, the service created during the MySQL install refers to a my.ini file that did not exist. I copied the "large" example file to my.ini, which I had already modified with the advised increases.
My values are
[mysqld]
max_allowed_packet = 64M
interactive_timeout = 250
wait_timeout = 250
./mysql -u <username> -p<password> -h <host-name, e.g. localhost> <database-name> < db_dump-file
(Note that there is no space between -p and the password.)
You cannot use the Restore menu in MySQL Admin if the backup / dump wasn't created from there. It's worth a shot though. If you choose to "ignore errors" with the checkbox for that, it will say it completed successfully, although it clearly exits with only a fraction of rows imported...this is with a dump, mind you.
One-liner command to restore the generated SQL from mysqldump
mysql -u <username> -p<password> -e "source <path to sql file>;"
Assuming you already have the blank database created, you can also restore a database from the command line like this:
mysql databasename < backup.sql
You can also use the restore menu in MySQL Administrator. You just have to open the back-up file, and then click the restore button.
If you are already inside the mysql prompt and your dump file is dump.sql, you can also use the command below to restore the dump:
mysql> \. dump.sql
If your dump is large, set max_allowed_packet to a higher value. Setting this value will help the dump restore faster.
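As a minimal sketch of that, max_allowed_packet can be raised on the server for the duration of the restore and also passed to the client; the 512M value and the mydb database name are only illustrations:
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 536870912;"
mysql --max_allowed_packet=512M -u root -p mydb < dump.sql
(SET GLOBAL needs admin privileges and is lost on server restart; for a permanent change put the value in my.cnf/my.ini as shown above.)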
How to Restore MySQL Database with MySQLWorkbench
You can run the drop and create commands in a query tab.
Drop the Schema if it Currently Exists
DROP DATABASE `your_db_name`;
Create a New Schema
CREATE SCHEMA `your_db_name`;
Open Your Dump File
Click the Open an SQL script in a new query tab icon and choose your db dump file.
Then Click Run SQL Script...
It will then let you preview the first lines of the SQL dump script.
You will then choose the Default Schema Name
Next choose the Default Character Set. utf8 is normally a safe bet, but you may be able to discern it by looking at the preview lines for something like character_set.
Click Run
Be patient for large DB restore scripts and watch as your drive space melts away! 🎉
Local mysql:
mysql -u root --password=YOUR_PASS --database=YOUR_DB < ./dump.sql
And if you use docker:
docker exec -i DOCKER_NAME mysql -u root --password=YOUR_PASS --database=YOUR_DB < ./dump.sql

mysqldump doesn't work in crontab

I'm trying to add a cron job to the crontab (Ubuntu server) that backs up the MySQL DB.
Executing the script in the terminal as root works well, but when it's inserted in the crontab nothing happens. I've tried running it every minute, but no files appear in the folder /var/db_backups.
(Other cronjobs work well)
Here is the cronjob:
* * * * * mysqldump -u root -pHERE THERE IS MY PASSWORD --all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
What could be the problem?
You need to escape the % character with \, because in a crontab an unescaped % ends the command line (everything after it is passed to the command as standard input):
mysqldump -u 'username' -p'password' DBNAME > /home/eric/db_backup/liveDB_`date +\%Y\%m\%d_\%H\%M`.sql
I was trying the same thing, but found that the dump was created with 0 KB. Then I found the solution below, which saved me time.
Command:
0 0 * * * mysqldump -u 'USERNAME' -p'PASSWORD' DATABASE > /root/liveDB_`date +\%Y\%m\%d_\%H\%M\%S`.sql
NOTE:
1) You can change the schedule as per your requirement; I have set it to run every day at midnight in the command above.
2) Make sure you put your USERNAME, PASSWORD, and DATABASE inside single quotes (').
3) Add the above command to your crontab.
I hope this helps someone.
Check the cron logs (they should be in /var/log/syslog). You can use grep to filter them out:
grep CRON /var/log/syslog
Also, you can check your local mailbox to see if there are any cron mails:
/var/mail/username
You can also set a different recipient address in your crontab file:
MAILTO=your@mail.com
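For instance, a minimal crontab sketch with mail delivery enabled looks like this (the address, schedule, and password are placeholders; note the escaped % signs):
MAILTO=you@example.com
0 2 * * * mysqldump -u root -pPASSWORD --all-databases | gzip > /var/db_backups/database_`date +\%d\%m\%y`.sql.gz
Any error output from the job will then land in that mailbox.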
Alternatively, you can create a custom command, mycommand, to which you can add more options. You must give it execute permissions.
It is preferable to have a folder that stores all your backups; in this case, a writable folder named "backup", created first in your home directory, for example.
My command in "/usr/local/bin/mycommand":
#!/bin/bash
# Simple backup helper; currently only the "backupall" action is implemented.
MY_USER="your_user"
MY_PASSWORD="your_pass"
MY_HOME="your_home"
case $1 in
"backupall")
    cd "$MY_HOME/backup"
    # Dump all databases, compress the dump, then remove the plain .sql file
    mysqldump --opt --password="$MY_PASSWORD" --user="$MY_USER" --all-databases > bckp_all_$(date +%d%m%y).sql
    tar -zcvf bckp_all_$(date +%d%m%y).tgz bckp_all_$(date +%d%m%y).sql
    rm bckp_all_$(date +%d%m%y).sql;;
*) echo "Others";;
esac
Cron: Runs the 1st day of each month.
0 0 1 * * /usr/local/bin/mycommand backupall
I hope it helps somewhat.
Ok, I had a similar problem and was able to get it fixed.
In your case, you could put that mysqldump command into a script,
then source the profile of the user who is executing the mysqldump command,
for example:
. /home/bla/.bash_profile
and then use the absolute path of the mysqldump command:
/usr/local/mysql/bin/mysqldump -u root -pHERE THERE IS MY PASSWORD --all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
Local Host mysql Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(There is no space between -p and the password, or between -u and the username; replace root with a valid database username.) It works for me.
Create a new script that runs the dump to a file location and zips it, then run that script via cron.
I am using Percona Server (a MySQL fork) on Ubuntu. The package (very likely the regular MySQL package as well) comes with a maintenance account called debian-sys-maint. In order for this account to be used, the credentials are created when installing the package; and they are stored in /etc/mysql/debian.cnf.
And now the surprise: A symlink /root/.my.cnf pointing to /etc/mysql/debian.cnf gets installed as well.
This file is an option file read automatically when using mysql or mysqldump. So basically you then had login credentials given twice - in that file and on command line. This was the problem I had.
So one solution to avoid this condition is to use --no-defaults option for mysqldump. The option file then won't be read. However, you provide credentials via command line, so anyone who can issue a ps can actually see the password once the backup runs. So it's best if you create an own option file with user name and password and pass this to mysqldump via --defaults-file.
You can create the option file by using mysql_config_editor or simply in any editor.
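As a sketch of that approach, the option file could look like this (the user name, password, and file path are placeholders; keep the file readable only by its owner, e.g. chmod 600):
[client]
user=backupuser
password=SuperSecret
Then point mysqldump at it; --defaults-file has to be the very first option, and no other option files will be read:
0 2 * * * mysqldump --defaults-file=/home/backupuser/.mysqldump.cnf --all-databases | gzip > /var/db_backups/database_`date +\%d\%m\%y`.sql.gz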
Running mysqldump via sudo from the command line as root works, just because sudo usually does not change $HOME, so .my.cnf is not found then. When running as a cronjob, it is.
You might also need to restart the service to load any of your changes.
service cron restart
or
/etc/init.d/cron restart

How to import multiple database at once through SSH?

I am attempting to restore my MySQL database for my website. Each table in the database was dumped into an individual file, so I am trying to figure out how I can restore all of the .sql files through SSH with a single (or easy) command instead of restoring all 100 tables individually.
cat *.sql > data.sql
mysql -u <username> -p < data.sql
It depends on how you created the individual files - if they have all the instructions for recreating the tables (i.e., "Drop if exists...", "Create ...", and "Insert into ..."), then you can either concatenate them and pipe them into mysql:
cat *.sql | mysql -u xxx -pxxx dbname
or write a script to do it
#!/bin/sh
mysql -u xxx -pxxx dbname < file001.sql
mysql -u xxx -pxxx dbname < file002.sql
The second choice lets you more easily control the order of files processed.
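Equivalently, a small loop avoids listing each file by hand and processes them in glob (alphabetical) order; the xxx credentials and dbname are the same placeholders as above:
#!/bin/sh
# Sketch: import every .sql file in the current directory, one after another.
for f in *.sql; do
    echo "Importing $f"
    mysql -u xxx -pxxx dbname < "$f"
done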
Finally, you might want to create your backups in a more convenient way - check out mysqldump for how to dump a database (or several!) into one file (basically, "mysqldump -u xxx -pxxx dbname > dbname.sql", but there are some helpful flags you might want to add).
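For instance, a sketch of dumping several databases (or all of them) into a single file so a later restore is one command; the database names are placeholders:
mysqldump -u xxx -pxxx --databases db1 db2 db3 > alldbs.sql
mysqldump -u xxx -pxxx --all-databases > everything.sql
mysql -u xxx -pxxx < alldbs.sql
Because --databases adds CREATE DATABASE and USE statements to the dump, the restore does not need a database name on the command line.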

Restoring a MySQL table back to the database

I am having trouble restoring a MySQL table back to the database from the command line. Taking a backup of a table works with mysqldump; backing up and restoring a whole database also works properly. I have used:
mysql -uroot -p DatabaseName TableName < path\TableName.sql
Thanks in advance
Ah, I think I see the problem here.
Your backup script looks fine. tbl_name works correctly as the optional 2nd argument.
To restore, you should simply run
mysql -uroot -p DatabaseName < path\TableName.sql
Running man mysql would have shown you the correct arguments and options
mysql [options] db_name
As your backup script only contains one table, only that table will be restored into your database.
Taking a backup:
mysqldump -u <user> -p mydatabase table1 > database_dump.sql
Restoring from the backup file need not include the table name:
mysql -u <user> -p mydatabase < database_dump.sql
Best way to restore your database:
Open cmd in the bin folder.
Log in to mysql:
mysql -uroot -pyour_password
show databases;
use db_name;
Now type source followed by the complete path (from the address bar) to where your sql file is stored, and end with ;
for example :
source db_name.sql;
Copy your db.sql file to your MySQL server if you are on a remote machine:
$ rsync -Cravzp --progress db.sql user@192.168.10.1:/home/user
Now you can go to your remote server as:
$ssh -l user 192.168.10.1
On the MySQL server you must do this:
user@machine:~$ mysql -h localhost -u root -p
Note: the file db.sql must be in the same place (/home/user).
Now type this command in your MySQL server and press Enter:
mysql> \. db.sql

Copy data from 1 DB to another DB

I have to delete some table data in a production DB, and the records that are going to be deleted should first be copied to another local DB as a backup. This involves two databases residing on two different servers/instances.
Is it possible to do this via a SQL (MySQL) query?
I would use mysqldump with a WHERE condition to get the records out. Once you have everything saved, you can then delete them from prod. These commands should work from the command line; including the password to avoid the prompt is optional.
mysqldump -u user -pPassword -h hostname1 dbname tablename \
  --where 'field1="delete"' \
  --skip-add-drop-table --no-create-db --no-create-info > deleted.sql
mysql -u user -pPassword -h hostname2 dbname < deleted.sql
mysql -u user -pPassword -h hostname1 dbname -e 'DELETE FROM tablename WHERE field1="delete"'
I'm trying to do exactly the same thing, copy data from a table to another server, then delete it from the original.
So far I see two options:
copy data to a local database then replicate that database to another server
use the Federated storage engine
Both require some serious reconfiguration of our servers, as neither the Federated engine nor binary logging (required for replication) is enabled. This would take time, and it would be best if other solutions could be found.
The process needs to be executed on a daily basis, so it needs to be fully automated.
Perhaps a third option is to automate things with a cron job (a rough sketch follows this list):
copy the data to a separate database on the same server
back up that database with mysqldump to a folder that is also linked on the other server
on the second server, restore the database from the SQL dump
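A rough sketch of that third option as a script, reusing the placeholder names from the accepted answer; the staging and backupdb database names and the /shared/backups folder are hypothetical:
#!/bin/sh
# Sketch only: user, Password, hostname1/hostname2, dbname, tablename,
# staging, backupdb and /shared/backups are all placeholders.
# 1) Copy the matching rows to a staging database on the production server
mysql -u user -pPassword -h hostname1 -e "
  CREATE DATABASE IF NOT EXISTS staging;
  CREATE TABLE IF NOT EXISTS staging.tablename LIKE dbname.tablename;
  INSERT INTO staging.tablename SELECT * FROM dbname.tablename WHERE field1='delete';"
# 2) Dump the staging database into a folder shared with the second server
mysqldump -u user -pPassword -h hostname1 staging > /shared/backups/staging_$(date +%Y%m%d).sql
# 3) Restore it into an existing backup database on the second server
mysql -u user -pPassword -h hostname2 backupdb < /shared/backups/staging_$(date +%Y%m%d).sql
This could then be scheduled daily from cron, e.g. 0 3 * * * /path/to/this_script.sh.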