By default, mysqldump takes a backup of an entire database. I need to back up a single table in MySQL. Is it possible? How do I restore it?
Dump and restore a single table from .sql
Dump
mysqldump db_name table_name > table_name.sql
Dumping from a remote database
mysqldump -u <db_username> -h <db_host> -p db_name table_name > table_name.sql
For further reference:
http://www.abbeyworkshop.com/howto/lamp/MySQL_Export_Backup/index.html
Restore
mysql -u <user_name> -p db_name
mysql> source <full_path>/table_name.sql
or in one line
mysql -u username -p db_name < /path/to/table_name.sql
Dump and restore a single table from a compressed (.sql.gz) format
Credit: John McGrath
Dump
mysqldump db_name table_name | gzip > table_name.sql.gz
Restore
gunzip < table_name.sql.gz | mysql -u username -p db_name
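The remote and compressed variants combine naturally; a minimal sketch with the same placeholder host and credentials as above:
# dump one table from a remote host and compress it in a single step
mysqldump -u <db_username> -h <db_host> -p db_name table_name | gzip > table_name.sql.gz
# restore it into the same (or another) server
gunzip < table_name.sql.gz | mysql -u <db_username> -h <db_host> -p db_name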
mysqldump can take one or more tbl_name parameters, so that it only backs up the given tables.
mysqldump -u <username> -p yourdb yourtable > c:\backups\backup.sql
try
for line in $(mysql -u... -p... -AN -e "show tables from NameDataBase");
do
mysqldump -u... -p.... NameDataBase $line > $line.sql ;
done
$line contains the table names ;)
We can take a MySQL dump of any particular table with any given condition, as shown below:
mysqldump -uusername -p -hhost databasename tablename --skip-lock-tables
If we want to add a specific where condition on the table, then we can use the following command:
mysqldump -uusername -p -hhost databasename tablename --where="date=20140501" --skip-lock-tables
You can either use mysqldump from the command line:
mysqldump -u username -ppassword dbname tablename > "path where you want to dump"
You can also use MySQL Workbench:
In the left pane, go to Data Export, select the schema, select the tables, and click on Export.
You can easily dump selected tables using the MySQL Workbench tool, either individually or as a group of tables in one dump, and then import it as follows. You can also add host information by adding -h IP.ADDRESS.NUMBER after -u username if the server is not local:
mysql -u root -p databasename < dumpfileFOurTableInOneDump.sql
You can use the below code:
For a single table's structure only (no data):
mysqldump -d <database name> <tablename> > <filename.sql>
For a single table's structure with data:
mysqldump <database name> <tablename> > <filename.sql>
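If you need the structure of several tables at once, mysqldump accepts a list of table names after the database name; a sketch with placeholder names:
mysqldump -d <database name> <table1> <table2> > <schema_only.sql>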
Hope it will help.
You can use this code:
This example takes a backup of the sugarcrm database and dumps the output to sugarcrm.sql.
# mysqldump -u root -ptmppassword sugarcrm > sugarcrm.sql
# mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql
The sugarcrm.sql will contain drop table, create table, and insert commands for all the tables in the sugarcrm database. The following is a partial output of sugarcrm.sql, showing the dump information of the accounts_contacts table:
--
-- Table structure for table `accounts_contacts`
DROP TABLE IF EXISTS `accounts_contacts`;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `accounts_contacts` (
`id` varchar(36) NOT NULL,
`contact_id` varchar(36) default NULL,
`account_id` varchar(36) default NULL,
`date_modified` datetime default NULL,
`deleted` tinyint(1) NOT NULL default '0',
PRIMARY KEY (`id`),
KEY `idx_account_contact` (`account_id`,`contact_id`),
KEY `idx_contid_del_accid` (`contact_id`,`deleted`,`account_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;
--
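Since the question is about a single table, the same dump can be restricted to just accounts_contacts by naming it after the database; a sketch reusing the placeholder password from above:
# mysqldump -u root -p[root_password] sugarcrm accounts_contacts > accounts_contacts.sql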
Just use mysqldump -u root database table
or, if using a password, mysqldump -u root -ppass database table
I've come across this and wanted to extend others' answers with our fully working example:
This will back up the schema in its own file, then each database table in its own file.
The date format means you can run this as often as your hard drive space allows.
DATE=`date '+%Y-%m-%d-%H'`
BACKUP_DIR=backups/
DATABASE_NAME=database_name
mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --no-data --databases ${DATABASE_NAME} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}--schema.sql.gz
for table in $(mysql --user=fake --password=secure --host=10.0.0.1 -AN -e "SHOW TABLES FROM ${DATABASE_NAME};");
do
echo ""
echo ""
echo "mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --databases ${DATABASE_NAME} --tables ${table} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}-${table}.sql.gz"
mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --databases ${DATABASE_NAME} --tables ${table} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}-${table}.sql.gz
done
We run this as a bash script on an hourly basis; it actually has hour checks, so only some tables are backed up during the day, and all tables at night.
To keep some space on the drives, the script also runs this to remove backups older than X days.
# HOW MANY DAYS SHOULD WE KEEP
DAYS_TO_KEEP=25
DAYSAGO=$(date --date="${DAYS_TO_KEEP} days ago" +"%Y-%m-%d-%H")
echo $DAYSAGO
rm -Rf ${BACKUP_DIR}${DAYSAGO}-*
echo "rm -Rf ${BACKUP_DIR}${DAYSAGO}-*"
Related
I have 10 million rows and want to back up the data in SQL files of 100k rows each (breaking the data into chunks). Is it possible to run a query like this:
SELECT * FROM `myTable` WHERE `counter` > 200000 and `counter` <= 300000 ---> Send to .sql file
I want to replace the pseudocode at the end of that statement with real code.
Using the "export" feature of PHPMyAdmin more than 100 times would take too long.
You should be able to use the mysqldump command:
mysqldump -u root -p [database_name] [tablename]
--where="'counter' > 200000 and 'counter' <= 300000" > [dumpfile.sql]
Additional info on the command here:
https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html
You can try to use the mysqldump command with a script to back up your data in chunks.
for i in {1..100}
do
beginRow=$(( ($i - 1) * 100000 ))
endRow=$(( $i * 100000 ))
mysqldump -h <hostname> -u <username> -p <databasename> myTable --where="counter > $beginRow and counter <= $endRow" --no-create-info > "./data-$i.sql"
done
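To load the chunks back in later, here is a minimal sketch with the same placeholder credentials. Since --no-create-info was used when dumping, the table must already exist on the target:
for i in {1..100}
do
    # each file only contains INSERTs for its 100k-row slice
    mysql -h <hostname> -u <username> -p <databasename> < "./data-$i.sql"
done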
I have a set of tables in my database that I have to take a dump ( :D ) of. My problem is that from some tables I only want data that dates back a certain number of days, while keeping the remaining tables intact.
The query I came up with was something like:
mysqldump -h<hostname> -u<username> -p <databasename>
<table1> <table2> <table3>
<table4> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)',
<table5> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)
--single-transaction --no-create-info | gzip
> $(date +%Y-%m-%d-%H)-dump.sql.gz
The trouble with the above code is that table1, table2, and table3 will try to take the where clause meant for table4. I don't want that, because it would spit out an error that the created field does not exist in those tables.
I tried putting a comma (,) after the table names, as I did after the where clause, but it doesn't work.
At this point I'm pretty much stuck and have no alternative except to create two different SQL dump files, which I wouldn't want to do.
Make two dumps, or if you don't want two separate dump files, then try two commands that write to the same file:
a.
mysqldump -h<hostname> -u<username> -p
<databasename> <table1> <table2> <table3>
--single-transaction --no-create-info > dumpfile.sql
b.
mysqldump -h<hostname> -u<username> -p <databasename>
<table4> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)'
<table5> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)'
--single-transaction --no-create-info >> dumpfile.sql
c.
gzip dumpfile.sql
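If there are more than a couple of filtered tables, the same append pattern can be wrapped in a loop. A minimal sketch, assuming bash; the credentials are placeholders and the table/condition pairs are purely illustrative:
dumpfile=dumpfile.sql
: > "$dumpfile"    # start from an empty file
for spec in "table4:created > DATE_SUB(now(), INTERVAL 7 DAY)" \
            "table5:created > DATE_SUB(now(), INTERVAL 7 DAY)"
do
    table=${spec%%:*}    # part before the first colon
    cond=${spec#*:}      # part after the first colon
    mysqldump -h<hostname> -u<username> -p'<password>' <databasename> "$table" \
        --where="$cond" --single-transaction --no-create-info >> "$dumpfile"
done
gzip "$dumpfile"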
To dump multiple tables with multiple where conditions.
mysqldump -h <hostname> -u <username> -p'<password>' <db_name> <table_name> --where 'condition=true' --no-create-info > dumpfile.sql
Then, for the second table, use ">>". It will append to the previous dump file.
--no-create-info >> dumpfile.sql
mysqldump -h <hostname> -u <username> -p'<password>' <db_name> <table_name_2> --where 'condition=true' --no-create-info >> dumpfile.sql
So the solution above won't work unless the tables have a common foreign key field.
If you look at my example below, the user_addresses, user_groups, and user_payment_methods tables all have the user_id field in common. When mysqldump executes the where clause, it will filter those tables.
mysqldump -u <username> -p<password>
user_addresses user_groups user_payment_methods
-w "user_id
in (select id from users where email like '%@domain.com')"
--single-transaction | gzip > sqldump.sql.gz
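To load that compressed dump back in, the gunzip pipe from earlier in the thread applies; placeholders assumed:
gunzip < sqldump.sql.gz | mysql -u <username> -p<password> <db_name>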
Basically my batch file contains:
mysql -u root -pMypassword use myTableDB update myTable set extracted='Y'
but due to some syntax error it doesn't update the table. However, when I run it through the command line:
mysql -u root -pMypassword use myTableDB
mysql update myTable set extracted='Y'
it works through the command line. Can anyone point out what syntax error I have in the batch file?
The cleanest way would be the following:
mysql -u root -pMypassword -DmyTableDB -ANe"update myTable set extracted='Y'"
or if you want the SQL command placed in a variable, you could do this
set sqlstmt=update myTable set extracted='Y'
mysql -u root -pMypassword -DmyTableDB -ANe"%sqlstmt%"
Here is an example I just ran:
set sqlstmt=show databases
mysql -u root -pMypassword -DmyTableDB -ANe"%sql%"
and I got this
C:\WINDOWS\system32> set sqlstmt=show databases
C:\WINDOWS\system32> mysql ... -ANe"%sqlstmt%"
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
C:\WINDOWS\system32>
The mysql client reads SQL commands from STDIN. To do what you want, you would have to do something like the following in your batch file:
echo "update myTable set extracted='Y'" | mysql -u root -pMypassword myTableDB
I need some help, I have this command:
mysqldump -u myusername -pmypassword --skip-add-drop-table --no-data --single-transaction database_name | sed 's/CREATE TABLE/CREATE TABLE IF NOT EXISTS/g' > db.sql
which adds CREATE TABLE IF NOT EXISTS to my mysqldump output, but I also want to add a TRUNCATE TABLE command before the CREATE TABLE IF NOT EXISTS command. How should I do this?
Just add a little more to your regex in sed:
mysqldump -u myusername -pmypassword --skip-add-drop-table --no-data --single-transaction database_name | sed -r 's/CREATE TABLE (`[^`]+`)/TRUNCATE TABLE \1; CREATE TABLE IF NOT EXISTS \1/g' > db.sql
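To sanity-check the regex before running the full dump, you can feed it a sample line; the users table name here is purely hypothetical:
echo 'CREATE TABLE `users` (' | sed -r 's/CREATE TABLE (`[^`]+`)/TRUNCATE TABLE \1; CREATE TABLE IF NOT EXISTS \1/g'
# prints: TRUNCATE TABLE `users`; CREATE TABLE IF NOT EXISTS `users` (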
I am trying to copy a table "table1" from "db1" on "server1" to "db1" on "server2". Here is what I attempted:
mysqldump -u USER -pPASSWORD --single-transaction db1 table1 \ | mysql --host=SERVER1 -u USER -pPASSWORD db1 table1;
My username and password on both servers are the same. Database name and table name on both servers are same.
But this returns the warnings:
Warning: Using unique option prefix database instead of databases is deprecated and will be removed in a future release. Please use the full name instead.
Warning: mysqldump: ignoring option '--databases' due to invalid value ''
mysqldump: Couldn't find table: "table1"
Try this:
mysqldump -u <user_name> -p db_name table_name > table_name.sql
Now take this table_name.sql file to server2, create the database (db_name), exit from the mysql command line, and use the following command:
mysql -u <user_name> -p db_name < table_name.sql
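To move the file between the servers, something like scp works; a sketch with a placeholder user and destination path:
scp table_name.sql <user_name>@server2:/tmp/table_name.sql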
The following worked:
mysqldump -u USER -pPASSWORD --single-transaction --add-drop-table db1 table1 | mysql --host=SERVER1 -u USER -pPASSWORD db1
I shouldn't have specified the table name at the end of the mysql command, and I needed to use --add-drop-table after --single-transaction!