I'm using mysqldump to export old records. However, the dump file has slightly more rows than the specified --where condition should match. My table has 2905338 rows. The export should contain 635314 rows, but mysqldump is exporting 134 extra rows.
mysqldump table --no-create-db --no-create-info --skip-add-drop-table --skip-add-locks --skip-disable-keys --skip-set-charset --skip-triggers --where "created BETWEEN '2013-01-01 00:00:00' and '2016-12-01 00:00:00'"
It is exporting rows from one extra hour beyond the condition. The same thing happens with other large tables.
Try using
mysqldump table --no-create-db --no-create-info --skip-add-drop-table --skip-add-locks --skip-disable-keys --skip-set-charset --skip-triggers --where "created >= '2013-01-01 00:00:00' and created <= '2016-12-01 00:00:00'"
Related
I have 10 million rows and want to back up the data into SQL files of 100k rows each (breaking the data into chunks). Is it possible to run a query like this:
SELECT * FROM `myTable` WHERE `counter` > 200000 and `counter` <= 300000 ---> Send to .sql file
I want to replace the pseudocode at the end of that statement with real code.
Using the "export" feature of phpMyAdmin more than 100 times would take too long.
You should be able to use the mysqldump command:
mysqldump -u root -p [database_name] [tablename] --where="counter > 200000 and counter <= 300000" > [dumpfile.sql]
Additional info on the command here:
https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html
You can use the mysqldump command in a shell script to back up the table in chunks:
for i in {1..100}
do
  # each iteration dumps a 100k-row slice based on the counter column
  beginRow=$(( ($i - 1) * 100000 ))
  endRow=$(( $i * 100000 ))
  mysqldump -h <hostname> -u <username> -p <databasename> myTable --where="counter > $beginRow and counter <= $endRow" --no-create-info > "./data-$i.sql"
done
What I'm trying to do is find the command to export a table while excluding specific entries. Here are the various options I tried, without achieving the desired result.
This approach works for exporting a table while excluding a single record:
mysqldump --user=... --password=... --host=... DB_NAME usertable --where username!='root'
If I try to use other operators to exclude more data, as below, the export fails:
mysqldump --user=... --password=... --host=... DB_NAME usertable --where username!='root' and username!='root2'
mysqldump --user=... --password=... --host=... DB_NAME usertable --where "username not in('root','root2')"
What is a functional approach?
Your syntax for the --where option is wrong; mysqldump needs the whole condition passed as a single quoted string:
mysqldump --user=... --password=... --host=... DB_NAME usertable --where="username!='root' and username!='root2'"
mysqldump --user=... --password=... --host=... DB_NAME usertable --where="username not in('root','root2')"
Please read more in the manual.
In MySQL, how do I copy data from one table to another within the same database?
I know INSERT INTO ... SELECT, but it takes forever, and on a live database we can't take that risk.
There are some constraints:
1. table1 is the source table and table1_archives is the destination table.
2. table1_archives already has data, so we can only append.
My attempt:
time mysqldump --log-error=$logfile --complete-insert --insert-ignore
--no-create-info --skip-triggers --user=$dbuser --host=$host $dbname table1
--where="created < now()-interval 10 month" > $filename
But it has the name of table1, so I can't insert it into table1_archives.
Any guidance will be appreciated.
Thanks in advance.
In your output file, you need to change the table name table1 to table1_archives. Unfortunately mysqldump does not have any way to do this. You will have to do it on the fly using sed, which will rename everything in the output file from table1 to table1_archives.
Since your column values could also contain text like table1, it's safer to search and replace with the table name enclosed in backticks.
You can also use gzip to compress the output file.
Here is the command that worked for me:
mysqldump -u USER -h HOST -p --skip-add-drop-table --no-create-info --skip-triggers --compact DB table1 |\
sed -e 's/`table1`/`table1_archives`/' | gzip > filename.sql.gz
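To load the renamed dump back into the database afterwards, a minimal sketch (reusing the USER, HOST, and DB placeholders from the command above):
gunzip < filename.sql.gz | mysql -u USER -h HOST -p DB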
"but it is taking forever to do this"
There is a small trick to avoid this, after which the INSERT INTO ... SELECT will run faster:
INSERT INTO table1 SELECT * FROM table2;
Trick:
step-1: drop all indices from the table you are inserting into (table1 here)
step-2: execute the query
step-3: create the indices again
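Applied to the tables from the question, a rough sketch of these steps (assuming table1_archives has the same schema as table1 and a secondary index named idx_created, which is only an illustrative name):
-- step 1: drop the secondary indices on the destination (archive) table
ALTER TABLE table1_archives DROP INDEX idx_created;
-- step 2: run the bulk append (INSERT IGNORE mirrors the --insert-ignore dump option)
INSERT IGNORE INTO table1_archives
SELECT * FROM table1
WHERE created < NOW() - INTERVAL 10 MONTH;
-- step 3: recreate the indices
ALTER TABLE table1_archives ADD INDEX idx_created (created);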
I have a set of tables in my database that I have to take a dump ( :D ) of. For some tables I only want data that dates back a certain number of days, and I would like to keep the remaining tables intact.
The query I came up with was something like:
mysqldump -h<hostname> -u<username> -p <databasename>
<table1> <table2> <table3>
<table4> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)',
<table5> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)
--single-transaction --no-create-info | gzip
> $(date +%Y-%m-%d-%H)-dump.sql.gz
The trouble with the above code is that table1, table2 and table3 will try to take the where clause meant for table4. I don't want that, because it would raise an error that the created field does not exist in those tables.
I tried putting a comma (,) after the table names, as I did after the where clause, but it doesn't work.
At this point I'm pretty much stuck and have no alternative except creating two different SQL dump files, which I would rather not do.
Make two dumps, or if you don't want two separate dump files, run two commands that append to the same file:
a.
mysqldump -h<hostname> -u<username> -p
<databasename> <table1> <table2> <table3>
--single-transaction --no-create-info > dumpfile.sql
b.
mysqldump -h<hostname> -u<username> -p <databasename>
<table4> <table5> --where 'created > DATE_SUB(now(), INTERVAL 7 DAY)'
--single-transaction --no-create-info >> dumpfile.sql
c.
gzip dumpfile.sql
To dump multiple tables with different where conditions:
mysqldump -h <hostname> -u <username> -p'<password>' <db_name> <table_name> --where 'condition=true' --no-create-info > dumpfile.sql
Then, for the second table, use ">>" so that its output is appended to the same dump file:
mysqldump -h <hostname> -u <username> -p'<password>' <db_name> <table_name_2> --where 'condition=true' --no-create-info >> dumpfile.sql
So the solution above won't work unless the tables have a common foreign key field.
If you look at my example below, the user_addresses, user_groups, and user_payment_methods tables all have the user_id field in common. When mysqldump evaluates the where clause, it will filter each of those tables by it.
mysqldump -u <username> -p <database_name>
user_addresses user_groups user_payment_methods
-w "user_id in (select id from users where email like '%@domain.com')"
--single-transaction | gzip > sqldump.sql.gz
By default, mysqldump takes the backup of an entire database. I need to back up a single table in MySQL. Is it possible? How do I restore it?
Dump and restore a single table from .sql
Dump
mysqldump db_name table_name > table_name.sql
Dumping from a remote database
mysqldump -u <db_username> -h <db_host> -p db_name table_name > table_name.sql
For further reference:
http://www.abbeyworkshop.com/howto/lamp/MySQL_Export_Backup/index.html
Restore
mysql -u <user_name> -p db_name
mysql> source <full_path>/table_name.sql
or in one line
mysql -u username -p db_name < /path/to/table_name.sql
Dump and restore a single table from a compressed (.sql.gz) format
Credit: John McGrath
Dump
mysqldump db_name table_name | gzip > table_name.sql.gz
Restore
gunzip < table_name.sql.gz | mysql -u username -p db_name
mysqldump can take a tbl_name parameter, so that it only backs up the given tables.
mysqldump -u <username> -p yourdb yourtable > c:\backups\backup.sql
Try:
for line in $(mysql -u... -p... -AN -e "show tables from NameDataBase");
do
mysqldump -u... -p.... NameDataBase $line > $line.sql ;
done
$line contains the table names ;)
We can take a MySQL dump of any particular table with any given condition, as below:
mysqldump -uusername -p -hhost databasename tablename --skip-lock-tables
If we want to add a specific where condition on the table, then we can use the following command:
mysqldump -uusername -p -hhost databasename tablename --where="date=20140501" --skip-lock-tables
You can either use mysqldump from the command line:
mysqldump -u username -p'password' dbname tablename > "path where you want to dump"
You can also use MySQL Workbench:
Go to left > Data Export > Select Schema > Select tables and click on Export
You can easily dump selected tables using the MySQL Workbench tool, either individually or as a group of tables in one dump, and then import the result as follows. You can also pass host information by adding -h IP.ADDRESS.NUMBER after -u username if you are not running it against your local server:
mysql -u root -p databasename < dumpfileFOurTableInOneDump.sql
You can use the commands below.
For the structure of a single table only:
mysqldump -d <database name> <tablename> > <filename.sql>
For the structure of a single table with data:
mysqldump <database name> <tablename> > <filename.sql>
Hope it helps.
You can use this command:
This example takes a backup of the sugarcrm database and dumps the output to sugarcrm.sql:
# mysqldump -u root -ptmppassword sugarcrm > sugarcrm.sql
# mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql
The sugarcrm.sql file will contain DROP TABLE, CREATE TABLE, and INSERT commands for all the tables in the sugarcrm database. The following is a partial output of sugarcrm.sql, showing the dump information for the accounts_contacts table:
--
-- Table structure for table accounts_contacts
DROP TABLE IF EXISTS `accounts_contacts`;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `accounts_contacts` (
`id` varchar(36) NOT NULL,
`contact_id` varchar(36) default NULL,
`account_id` varchar(36) default NULL,
`date_modified` datetime default NULL,
`deleted` tinyint(1) NOT NULL default '0',
PRIMARY KEY (`id`),
KEY `idx_account_contact` (`account_id`,`contact_id`),
KEY `idx_contid_del_accid` (`contact_id`,`deleted`,`account_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;
--
Just use mysqldump -u root database table
or, if using a password, mysqldump -u root -ppass database table
I've come across this and wanted to extend others' answers with our fully working example:
This will back up the schema in its own file, then each database table in its own file.
The date format means you can run this as often as your hard drive space allows.
DATE=`date '+%Y-%m-%d-%H'`
BACKUP_DIR=backups/
DATABASE_NAME=database_name
mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --no-data --databases ${DATABASE_NAME} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}--schema.sql.gz
for table in $(mysql --user=fake --password=secure --host=10.0.0.1 -AN -e "SHOW TABLES FROM ${DATABASE_NAME};");
do
echo ""
echo ""
echo "mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --databases ${DATABASE_NAME} --tables ${table} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}-${table}.sql.gz"
mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --databases ${DATABASE_NAME} --tables ${table} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}-${table}.sql.gz
done
We run this as a bash script on an hourly basis; in practice we also check the HOUR and only back up some tables during the day, then all tables at night.
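For reference, a minimal sketch of what that hour check could look like (the cutoff hour and the daytime table list are made-up examples, not what we actually use):
HOUR=$(date '+%H')
if [ "$HOUR" = "02" ]; then
    # night run: dump every table in the database
    TABLES=$(mysql --user=fake --password=secure --host=10.0.0.1 -AN -e "SHOW TABLES FROM ${DATABASE_NAME};")
else
    # daytime runs: only a hand-picked subset (example names)
    TABLES="users orders"
fi
# then loop over ${TABLES} exactly as in the loop above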
To save some space on the drives, the script also runs this to remove backups older than X days:
# HOW MANY DAYS SHOULD WE KEEP
DAYS_TO_KEEP=25
DAYSAGO=$(date --date="${DAYS_TO_KEEP} days ago" +"%Y-%m-%d-%H")
echo $DAYSAGO
rm -Rf ${BACKUP_DIR}${DAYSAGO}-*
echo "rm -Rf ${BACKUP_DIR}${DAYSAGO}-*"