After searching SO, I found answers to the following:
How to copy an entire MySQL schema using mysqldump
How to copy an entire MySQL schema using PHP
How to copy an entire MySQL schema using the enterprise edition of MySQL
How to copy an entire Microsoft SQL Server schema using the menus.
I also found a few hints about copying a MySQL schema using SQL commands.
My question: If I use the following SQL commands to copy a MySQL schema, what parts of the old schema would not be copied? Indexes? Constraints? Views? Anything else?
CREATE SCHEMA new_schema DEFAULT CHARACTER SET utf8;
CREATE TABLE new_schema.table1 LIKE old_schema.table1;
CREATE TABLE new_schema.table2 LIKE old_schema.table2;
CREATE TABLE new_schema.table3 LIKE old_schema.table3;
...;
INSERT INTO new_schema.table1 SELECT * FROM old_schema.table1;
INSERT INTO new_schema.table2 SELECT * FROM old_schema.table2;
INSERT INTO new_schema.table3 SELECT * FROM old_schema.table3;
...;
CREATE TABLE ... LIKE will take care of column definitions and indexes. Note, however, that foreign key definitions are not copied by CREATE TABLE ... LIKE, so you would have to recreate those in the new schema with ALTER TABLE ... ADD FOREIGN KEY.
If you do recreate the foreign keys, SET FOREIGN_KEY_CHECKS=0 while you run the INSERTs, because if table1 has a foreign key to table2, inserting data into the tables in the wrong order will fail.
Your script does not cover:
Views
Triggers
Stored procedures
Stored functions
Events
There are no CREATE ... LIKE ... statements for these other objects. You'll have to use SHOW CREATE ... for each one and then run the statement it returns in the context of the new schema. See the various SHOW CREATE ... statements here: http://dev.mysql.com/doc/refman/5.6/en/show.html
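For example (a sketch only — some_view, some_trigger, etc. are placeholder names for whatever objects your old schema actually contains):

SHOW CREATE VIEW old_schema.some_view;
SHOW CREATE TRIGGER old_schema.some_trigger;
SHOW CREATE PROCEDURE old_schema.some_procedure;
SHOW CREATE FUNCTION old_schema.some_function;
SHOW CREATE EVENT old_schema.some_event;
-- for each one, copy the CREATE statement it returns and run it
-- with the new schema selected as the default database:
USE new_schema;
-- ...paste the CREATE statement here and run it...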
I also caution that the way you INSERT INTO... SELECT FROM... will work, but can fill up your rollback segment if the table is very large. Tools like pt-archiver try to copy tables in batches, ascending along the primary key, to avoid this problem.
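If you want to stay in plain SQL, a rough sketch of that batching idea (assuming table1 has an integer primary key named id; the batch size of 10000 is arbitrary) is to repeat something like this with an advancing id range until you pass MAX(id), instead of one giant INSERT ... SELECT:

INSERT INTO new_schema.table1
  SELECT * FROM old_schema.table1
  WHERE id > 0 AND id <= 10000;
COMMIT;
INSERT INTO new_schema.table1
  SELECT * FROM old_schema.table1
  WHERE id > 10000 AND id <= 20000;
COMMIT;
-- ...and so on; committing after each batch keeps any one transaction small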
I think routines can't be copied directly with SQL commands (as far as I know there is nothing like CREATE PROCEDURE myProc LIKE old.myProc).
I would recommend you use mysqldump, since it takes care of copying everything, including the data (if you don't want to copy the data, you can use the -d switch to prevent creating the insert statements).
If you want to create a "template" (a database that is exactly like another database, but without the data), you can use the following:
mysqldump [connectionParameters] -d -R -v yourOldDatabase > databaseTemplate.sql
The options explained:
[connectionParameters]: host, user and password
-d: Don't copy data
-R: Include routines in the dump
-v: Output what mysqldump is doing to the console
You can open this "light" sql script to check how the objects were created.
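To actually build a new, empty database from the template, a minimal sketch (yourNewDatabase is a placeholder name you choose) would be:

mysql [connectionParameters] -e "CREATE DATABASE yourNewDatabase"
mysql [connectionParameters] yourNewDatabase < databaseTemplate.sql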
Hope this helps
Related
I have a dump of part of a table from a specific date, and I would like to restore this dump into that table in a replica database, but when I try to restore it, MySQL gives me an error: the table already exists.
In case it helps, this is how I create the dump:
mysqldump --user=root my_db my_table --where="YEAR(created)='2021' AND MONTH(created)='21'" > week21.sql
I know that I can create the dump with the --opt option, but that option drops the whole table first, so I would lose the current data in this table, right?
Any idea how to do this?
Thanks
mysqldump (or mariadb-dump) emits a mess of SQL statements into its output file. You can read those statements by looking at the file in a text editor. And, you can edit the file if need be (but that's a brittle way to handle a workflow like yours).
You need to get it to write the correct SQL statements for your particular application. In your case the CREATE TABLE statements mess up your workflow, so leave them out.
If you use the command-line option --no-create-info mysqldump won't write CREATE TABLE statements into its output file. So that will solve your immediate problem.
If the rows you attempt to restore with your mysqldump output might already exist in your new table, you can use mysqldump's --insert-ignore command line option to get it to write INSERT IGNORE statements rather than plain INSERT statements.
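Putting that together with the command you already have, the dump and restore might look roughly like this (a sketch; my_replica_db is a placeholder for the database you restore into, and WEEK() is assumed here because the file is named week21.sql — MONTH() can never equal 21):

mysqldump --user=root --no-create-info --insert-ignore my_db my_table --where="YEAR(created)='2021' AND WEEK(created)=21" > week21.sql
mysql --user=root my_replica_db < week21.sql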
I have a broken MySQL database and another one which is good, with all its tables and columns intact.
How can I import into the broken database only the missing pieces that exist in the good database? I mean the missing tables, columns, and values, not the information that is already stored.
I exported the good database, and when I try to import it into the broken database I get: #1060 - Duplicate column name 'id_advice'
So what I need is to skip the duplicate items and add only what does not already exist.
You can make use of mysqldump. There is an option to skip the data.
mysqldump -uYourUserName -pYourPassword databasename --no-data --routines > "dump.sql"
Then you can import the table structure. There are also options along the lines of CREATE IF NOT EXISTS or DROP IF EXISTS, so you can tailor it to your needs. I recommend downloading MySQL Workbench; it's easily done with that tool.
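For example, a command-line sketch of that approach (good_database and broken_database are placeholders; the sed rewrite assumes standard mysqldump output, where each CREATE TABLE starts at the beginning of a line):

mysqldump -uYourUserName -pYourPassword good_database --no-data --routines > structure.sql
sed -e 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' structure.sql | mysql -uYourUserName -pYourPassword broken_database

Note that IF NOT EXISTS only skips whole tables that already exist; it will not add missing columns to them — for that you would still need ALTER TABLE statements.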
Info about mysqldump
http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
You can use "IF EXIST" in your SQL statement.
You get every record/table/what-you-want with PHP or an other programmation language.
Then, you build some statement like this :
IF NOT EXISTS (SELECT what-you-want FROM BrokenTable WHERE some-test=some-value)
INSERT INTO ClearTable(value, that, you, need, to, insert);
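If you would rather do the existence check inside MySQL instead of from PHP, a rough single-statement sketch (the table and column names here — good_db.advice, broken_db.advice, id, id_advice — are made up for illustration):

INSERT INTO broken_db.advice (id, id_advice)
SELECT g.id, g.id_advice
FROM good_db.advice AS g
WHERE NOT EXISTS (SELECT 1 FROM broken_db.advice AS b WHERE b.id = g.id);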
Hope I helped you.
I have a 22GB .sql file (100+ tables) and I only need, let's say, 5 of them. I have tried all the Oracle tools, but none of them is capable of extracting only specific tables.
Is there ANY way to extract only specific tables ?
If you created the file with mysqldump, I believe you can use text utilities to extract the CREATE TABLE and INSERT statements.
Specifically, you can use sed addresses to extract all the lines between two regular expressions. It won't have trouble with a 22 gig file.
I dumped my sandbox database (a small database I use mainly for answering questions on SO) for testing.
In the version of MySQL that I have installed here, this sed one-liner extracts the CREATE table statement and INSERT statements for the table "DEP_FACULTY".
$ sed -n -e '/^CREATE TABLE `DEP_FACULTY`/,/UNLOCK TABLES/p' mysql.sql > output.file
This regular expression identifies the start of the CREATE TABLE statement.
/^CREATE TABLE `DEP_FACULTY`/
CREATE TABLE statements seem to always be immediately followed by INSERT statements. So we just need a regular expression that identifies the end of the INSERT statements.
/UNLOCK TABLES/
If your version of mysqldump produces the same output, you should be able to just replace the table name, change the name of the output file to something meaningful, and go drink a cup of coffee.
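If you need to do this for all five tables, a small shell sketch (table1 ... table5 are placeholder names; it assumes your dump uses the same backquoted CREATE TABLE format shown above):

for TABLE in table1 table2 table3 table4 table5; do
  sed -n -e "/^CREATE TABLE \`$TABLE\`/,/UNLOCK TABLES/p" mysql.sql > "$TABLE.sql"
done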
I just stumbled over a very interesting script that creates a separate .sql file for each table that exists in the huge main .sql: MYSQLDUMPSPLITTER.sh
http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script
In case the link is 404, please see the github gist here.
If you created the dump yourself, then you should go back and create a new one with the --tables option.
mysqldump database_name --tables table1 table2 table3
If you didn't, then just load the whole database, do the above, and then blow it away.
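If you only have the 22GB file, that "load it all and blow it away" route might look something like this (a sketch; scratch_db, huge_dump.sql and the table names are placeholders):

mysql -u root -p -e "CREATE DATABASE scratch_db"
mysql -u root -p scratch_db < huge_dump.sql
mysqldump -u root -p scratch_db table1 table2 table3 table4 table5 > five_tables.sql
mysql -u root -p -e "DROP DATABASE scratch_db"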
I would like to select data from a second MySQL database in order to migrate data from one server to another.
I'm looking for syntax like
SELECT * FROM username:password@serverip.databaseName.tableName
Is this possible? I would be able to do this in Microsoft SQL Server using linked servers, so I'm assuming it's possible in MySQL as well.
You can create a table using FEDERATED storage engine:
CREATE TABLE tableName (id INT NOT NULL, …)
ENGINE=FEDERATED
CONNECTION='mysql://username:password@serverip/databaseName/tableName'
SELECT *
FROM tableName
Basically, it will serve as a view over the remote tableName.
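One caveat worth checking before you rely on this: in many MySQL builds the FEDERATED engine is compiled in but not enabled by default, so you may need to enable it and restart the server before the CREATE TABLE above will work. A minimal my.cnf sketch:

[mysqld]
federated

You can verify with SHOW ENGINES whether FEDERATED is listed as supported.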
There are generally two approaches you can take, although neither of them sound like what you're after:
Use replication and set up a master/slave relationship between the two databases.
Simply dump the data (using the command line mysqldump tool) from the 1st database and import it into the 2nd.
However, both of these will ultimately migrate all of the data (i.e.: not a subset), although you can specify specific table(s) via mysqldump. Additionally, if you use the mysqldump approach and you're not using InnoDB you'll need to ensure that the source database isn't in use (i.e.: has integrity) when the dump is created.
You can't do this directly, but as someone else alluded to in a comment, you can use mysqldump to export the contents of a table as a SQL script.
At that point you could run the script on the new server to create the table, or if more manipulation of the data is required, import that data into a table with a different name on the new server, then write a query to copy the data from there.
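A rough sketch of that second route, with made-up names (source_db.source_table on the old server, staged into staging_db on the new server, then copied into target_db.target_table):

mysql -h new-server -u root -p -e "CREATE DATABASE IF NOT EXISTS staging_db"
mysqldump -h old-server -u root -p source_db source_table > source_table.sql
mysql -h new-server -u root -p staging_db < source_table.sql
mysql -h new-server -u root -p -e "INSERT INTO target_db.target_table (col_a, col_b) SELECT col_a, col_b FROM staging_db.source_table"

The INSERT ... SELECT at the end is where you would do any renaming or manipulation of the data.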
I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. The migration I mean here also involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
The only method I can think of is to use a PHP/Python script (the two scripting languages I know): connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column simply has a default value of 0 for all the old rows. My script still has to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail. For example: I have FOUR old databases, call them 'DB_a', 'DB_b', 'DB_c', 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column 'DB_x' identifying which database it came from (x indicates the source database). Since I can't tell the source database from a row's content, the only way I can identify the rows is through some user-input parameters.
Are there any tools, or a better method, than writing a script yourself? I don't need to worry about concurrent-write problems here; the old database will be down (not open to public use, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after the migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create-dump / import-dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
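For example, a change script for the situation described in the question might contain statements along these lines (a sketch; table_a, new_col, source_db and the 'DB_a' value are placeholders for your real names):

-- add the new 29th column, defaulting to 0 for all existing rows
ALTER TABLE table_a ADD COLUMN new_col INT NOT NULL DEFAULT 0;
-- add a column recording which old database each row came from
ALTER TABLE table_a ADD COLUMN source_db VARCHAR(10) NOT NULL DEFAULT '';
UPDATE table_a SET source_db = 'DB_a' WHERE source_db = '';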
Use mysqldump to dump the data, then load it on the new server with mysql < output.txt. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump --complete-insert option (see the link above)
5) Import your data, using mysql < data.sql
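Strung together as shell commands, those five steps might look roughly like this (a sketch; old_db, new_db and the connection details are placeholders — --no-create-info is added in step 4 so the data dump does not recreate the tables and undo your edits from step 2):

# 1) schema only
mysqldump -u root -p --no-data old_db > dump.sql
# 2) edit dump.sql by hand to add the new columns
# 3) create the new schema from the edited dump
mysql -u root -p -e "CREATE DATABASE new_db"
mysql -u root -p new_db < dump.sql
# 4) dump the data, naming every column explicitly
mysqldump -u root -p --no-create-info --complete-insert old_db > data.sql
# 5) import the data; the new columns keep their defaults
mysql -u root -p new_db < data.sql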
This should do the job for you, good luck!
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will set the new column to the default value for all existing rows.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that will modify your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (if it fails, go back to (2)).
5) If all is OK, go back to (1) and go live with your new database.
These are all valid approaches, but I believe what you want is to write a SQL statement that generates the INSERT statements accounting for the new columns you have.