I'm on MySQL 5.7
I have a table that is about 150GB, and the storage on the machine is only 200GB.
I want to get rid of data older than 9 months in this table.
My plan was to take a dump of the table with a WHERE clause, then truncate the table and reinsert the dump.
Does creating a dump with a WHERE clause create a temp table, such that I would run out of storage before being able to export all that data?
What I ran into when I tried a regular DELETE statement was table locking, and storage filling up quickly from the temporary data created for the delete. At least I think that's what happened when I tried to just delete.
You can make mysqldump run without using any temporary space. Use the --opt switch on the command line (in recent versions it's enabled by default). At a minimum, use the --quick switch, which streams rows one at a time instead of buffering each whole table in memory.
You can add a simple WHERE clause with the --where option and it will still work.
And be sure to run the command on a machine with enough disk space to store the output .sql file.
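A minimal sketch of the dump/truncate/reload plan, assuming a table big_table in schema mydb with a created_at timestamp column (all placeholder names):

# Dump only the rows you want to keep; --quick streams them row by row
mysqldump --single-transaction --quick \
  --where="created_at >= NOW() - INTERVAL 9 MONTH" \
  mydb big_table > keep.sql

# Truncating reclaims the space, then the kept rows are reloaded
mysql mydb -e "TRUNCATE TABLE big_table"
mysql mydb < keep.sql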
I have a situation where I restored databases (with the --all-databases flag) two weeks ago, but now I need newer data in a few of the databases. My question is: can I just run mysql < newbackup on top of the already-installed databases, or do I need to remove all the data first? Please suggest the simplest or fastest way to restore the new instance.
In the dump file I have statements like CREATE DATABASE /*!32312 IF NOT EXISTS*/
By default, your dump file will include DROP TABLE and CREATE TABLE statements, so any tables that exist and the data in them will be dropped first. Then the tables as they exist in your dump will be restored.
Note that this even means the restored tables might not end up with the same columns and indexes as your current ones. They'll be recreated as they are defined in your dump file, and any alterations made since the dump was created will be lost when the table is dropped.
Any tables that exist in your current database but aren't in the dump file won't be touched. That is, if you created another table after your dump file was taken, there's no DROP & CREATE for it in the dump file, so restoring the dump won't do anything to the newer tables. This might lead to some inconsistencies if the newer tables reference data in the restored tables.
It's possible that the DROP TABLE and CREATE TABLE statements will be missing from the dump file. There are options for mysqldump that make the dump omit these statements (refer to documentation or mysqldump --help). But by default, these statements are present.
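If you're unsure, you can check the dump before restoring. A quick sketch (the filename is a placeholder):

# Count the DROP TABLE and CREATE TABLE statements in the dump:
grep -c 'DROP TABLE' newbackup.sql
grep -c 'CREATE TABLE' newbackup.sql

If either count is zero, the corresponding statements were suppressed when the dump was taken (e.g. with --skip-add-drop-table or --no-create-info).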
I have a mysqldump backup that was created using the --no-create-info option. I want to restore it to a new database that does not have certain tables (approximately 50 tables were removed from the target database as they were no longer needed).
So I am getting Table 'table_name' doesn't exist for the obvious reason.
So what is the MySQL way of restoring to a database that does not have all the tables present in the backup file?
I could use --insert-ignore to avoid this failure, but I suspect it might also mask genuine errors such as data type mismatches.
You cannot insert rows to a table that doesn't exist, obviously.
To restore the data in your dump file, you need to create those tables first. You could go back to your source MySQL instance and dump those table definitions with mysqldump --no-data.
If you don't care about the data for the missing tables, and you only want to restore data for tables that do exist, then you could filter out the offending INSERT statements before trying to import the script.
You could use grep -v, for example, to eliminate those lines.
Or you could use sed to delete the lines between "-- Dumping data for table tablename" and the start of the next table's section.
If you don't want to filter the data, but you don't care about restoring data for the tables that don't exist, you could create dummy tables with the right columns, but define them with the BLACKHOLE storage engine, so the INSERTs won't actually store any data.
One more option: Import the dump file with mysql --force so it continues even if it gets errors on some of the INSERTs.
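Sketches of a couple of those options, assuming the missing table is old_table and the dump is backup.sql (both placeholder names):

# Filter out the INSERTs for the table that no longer exists:
grep -v 'INSERT INTO `old_table`' backup.sql | mysql target_db

# Or just plow through the errors:
mysql --force target_db < backup.sql

With the default --extended-insert format, each batch of rows is a single (long) INSERT line, so grep -v removes it cleanly.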
I have a large database, "devDB", that I want to duplicate on the same server to become my live database, "liveDB". Can I make a duplicate without using mysqldump? Last time I used mysqldump it took a really long time. It seems like there could be a quicker way if it's just a matter of copying the files. Can you create a new database and copy all the tables?
If you don't want to use mysqldump, create your database/schema,
and copy the tables from one DB to the other:
CREATE TABLE `liveDB`.`sample_table` SELECT * FROM `devDB`.`sample_table`;
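One caveat worth knowing: CREATE TABLE ... SELECT copies the columns and data but not the indexes or primary key. If you need an exact structural copy, a two-step sketch (same placeholder names):

CREATE TABLE `liveDB`.`sample_table` LIKE `devDB`.`sample_table`;
INSERT INTO `liveDB`.`sample_table` SELECT * FROM `devDB`.`sample_table`;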
Michael's answer above is a good idea if you want to put liveDB in the same MySQL instance as devDB. If you want to put liveDB on a separate instance, you could pipe the output of mysqldump directly into mysql on the target, so that you avoid writing an intermediate dump file to disk. Also, to improve performance, you could disable MySQL's binlog on the target DB while inserting the data.
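A sketch of that pipe, assuming the live instance is reachable at live-host (hostnames and credentials are placeholders):

# Stream the dump straight into the target instance; no intermediate .sql file
mysqldump --single-transaction devDB | mysql --host=live-host liveDB

# Variant that skips binary logging for the import session (needs SUPER):
mysqldump --single-transaction devDB | \
  mysql --host=live-host --init-command="SET SESSION sql_log_bin=0" liveDB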
I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I have in mind also involves schema changes, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a php/python script (the two languages I know) to connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, where the extra column should default to 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail. I have FOUR old databases, which I'll call 'DB_a', 'DB_b', 'DB_c', and 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column identifying its source database, 'DB_x' (x indicates which database the row comes from). Since I can't tell the source database from a row's content, the only way to identify it is through some user input parameters.
Are there any tools, or a better method than writing a script myself? I don't need to worry about concurrent writes etc.; the old database will be down (not open to public usage, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
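A rough sketch of the invocation (database name and target path are placeholders); mysqlhotcopy has to run on the database host itself:

# Copy the raw MyISAM table files to a backup directory
mysqlhotcopy devDB /var/backups/

You'd then transfer the copied files into the new server's data directory, e.g. with rsync, while that server's mysqld is stopped.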
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
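For example, the change script might contain statements along these lines (table, column, and type names are placeholders, not your actual schema):

-- upgrade.sql: bring the old schema up to the new version
ALTER TABLE a ADD COLUMN x INT NOT NULL DEFAULT 0;
-- record which source database each row came from
ALTER TABLE a ADD COLUMN source_db VARCHAR(8) NOT NULL DEFAULT '';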
Use mysqldump to dump the data, then load it on the new server with mysql < output.txt. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) Make a dump of the current DB using mysqldump with the --no-data option to fetch the schema only
2) Alter the schema you have dumped, adding the new columns
3) Create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using mysqldump's --complete-insert option (see the link above)
5) Import your data, using mysql < data.sql
This should do the job for you, good luck!
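A rough command-line sketch of those five steps (database and file names are placeholders):

# 1) schema only
mysqldump --no-data olddb > schema.sql
# 2) edit schema.sql by hand to add the new columns
# 3) create the new schema
mysql newdb < schema.sql
# 4) data only, with column names spelled out in every INSERT
mysqldump --no-create-info --complete-insert olddb > data.sql
# 5) import the data
mysql newdb < data.sql

Because --complete-insert names the columns explicitly in each INSERT, the old 28-column rows load cleanly into the new 29-column table, and the extra column takes its default value.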
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will set the new column to the default value for all existing rows.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that modifies your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (if it fails, go back to step 2).
5) If all is OK, go back to step 1 and go live with your new database.
These are all valid approaches, but I believe what you actually want is to write a SQL statement that generates the other INSERT statements, supporting the new columns you have.
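A sketch of such a generator, run once per source database (all table and column names are placeholders; QUOTE() handles the escaping):

SELECT CONCAT(
  'INSERT INTO new_db.table_a (col1, col2, source_db) VALUES (',
  QUOTE(col1), ', ', QUOTE(col2), ', ''DB_a'');'
) AS insert_stmt
FROM DB_a.table_a;

Run it with mysql --batch --skip-column-names, redirect the output to a file, and replay that file against the new database.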