Duplicate MySQL Database on the Same Server Without Using mysqldump - mysql

I have a large database, "devDB", that I want to duplicate on the same server to become my live database, "liveDB". Can I make a duplicate without using mysqldump? Last time I used mysqldump it took a really long time. Seems like there could be a quicker way if it's just a matter of copying the files. Can you create a new database and copy all the tables?

If you don't want to use mysqldump, create your new database/schema,
and copy the tables from one DB to the other:
CREATE TABLE `liveDB`.`sample_table` SELECT * FROM `devDB`.`sample_table`;
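Note that CREATE TABLE ... SELECT copies the data but not indexes, keys, or AUTO_INCREMENT attributes. If you need an exact structural copy, a two-step sketch using the same table names:
CREATE TABLE `liveDB`.`sample_table` LIKE `devDB`.`sample_table`;
INSERT INTO `liveDB`.`sample_table` SELECT * FROM `devDB`.`sample_table`;
CREATE TABLE ... LIKE carries over the column definitions and indexes; the INSERT ... SELECT then copies the rows.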

Michael's answer above is a good idea if you want to put liveDB in the same MySQL instance as devDB. If you want to put liveDB on a separate instance, you could use mysqldump to pipe the output directly into a mysql client session on liveDB, so that you avoid disk I/O. Also, to improve performance, you could disable MySQL's binlog on the target DB while inserting the data.
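For example, a sketch (the host name and credentials are placeholders, and SET sql_log_bin=0 requires the SUPER privilege):
mysqldump -uUSER -pPASS devDB | mysql -h live-host -uUSER -pPASS --init-command="SET SESSION sql_log_bin=0" liveDB
This streams the dump straight into the target instance without writing an intermediate file, and the --init-command keeps the bulk load out of the target's binary log.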

Related

Auto backup MySQL Database using batch file

I am trying to make a normal/restorable backup of a MySQL database. My problem is that I only need to back up a single table, the one that was last created or edited.
Is it possible to set mysqldump to do that?
MySQL can find the last inserted table, but how can I include it in the mysqldump command? I need to do this without locking the table, and the database has partitioning enabled.
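A sketch of one possible approach (assuming InnoDB tables and that information_schema's timestamps are populated, which not every version guarantees): find the most recently created or updated table,
SELECT TABLE_NAME FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
ORDER BY COALESCE(UPDATE_TIME, CREATE_TIME) DESC
LIMIT 1;
then dump only that table with --single-transaction, which gives a consistent snapshot of InnoDB tables without locking them (the table name here is whatever the query above returned):
mysqldump --single-transaction -u USER -p mydb the_table_from_above > last_table.sql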

copy tables between databases

I am using MySQL v5.1.
I am developing a Rails app and writing a Ruby script to copy a database. So far, I have an array of table names; the number of tables is 2090. I need to create all the tables in a new database. My code looks like:
# "table_names" was fetched by executing a 'SHOW TABLES' SQL command
table_names.each do |tbl_name|
  ActiveRecord::Base.connection.execute(
    "CREATE TABLE #{new_db_name}.#{tbl_name} LIKE #{old_db_name}.#{tbl_name}"
  )
end
This code works, but it takes a long time to complete, because it has to execute the CREATE TABLE command one table at a time, and there are 2090 tables to create.
I am wondering whether there is any way to create tables in bulk (like bulk inserting of data) in SQL to save time. If not, how can I improve the speed of creating the tables? That is, copy all 2090 tables from one database to another.
P.S. I don't want to hard-code all 2090 table names in a SQL file.
The simplest method in MySQL is to do a mysqldump of the database in question, then restore it into the new database, e.g.:
mysqldump -pPASSWORD -uUSERNAME name_of_db > name_of_db.sql
mysql -pPASSWORD -uUSERNAME name_of_new_db < name_of_db.sql
The dump file will contain all the necessary DDL/DML queries to recreate the database, plus statements that disable foreign keys and whatnot, so that the dump can be loaded without causing foreign-key problems while the restored DB is in a halfway state.
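Since the question only needs the table definitions, the dump and restore can also be combined into a single pipe, with --no-data limiting the dump to the schema (database names and credentials are placeholders; the target database must already exist):
mysqladmin -uUSERNAME -pPASSWORD create new_db
mysqldump --no-data -uUSERNAME -pPASSWORD old_db | mysql -uUSERNAME -pPASSWORD new_db
Dropping --no-data would copy the rows as well, but for 2090 empty tables the schema-only pipe avoids both the per-table round trips and the intermediate file.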
Sounds like what you're looking for is a schema-compare tool as opposed to a data-compare tool. Visual Studio has one built in for SQL Server; for MySQL, this tool will do it: http://toadformysql.com/index.jspa

MySQL backup multi-client DB for single client

I am facing a problem with a task I have to do at work.
I have a MySQL database which holds the information of several clients of my company, and I have to create a backup/restore procedure to back up and restore such information for any single client. To clarify: if my client A loses his data, I have to be able to recover that data while being sure I am not modifying the data of clients B, C, ...
I am not a DB administrator, so I don't know if I can do this using standard mysql tools (such as mysqldump) or any other backup tools (such as Percona Xtrabackup).
To back up, my research (and my intuition) led me to this possible solution:
create the restore INSERT statements using the INSERT ... SELECT syntax (http://dev.mysql.com/doc/refman/5.1/en/insert-select.html);
save these INSERTs into a SQL file, either in the proper order or letting the script temporarily disable foreign key checks to satisfy the foreign key constraints;
of course, I do this for all my clients on a daily basis, using a file for each client (and day).
Then, in case I have to restore the data for a specific client:
I delete all of his remaining data;
I restore the correct data using the SQL file I created during the backup.
This way I believe I can recover the right data for client A without touching the data of client B. Would my solution actually work? Is there a better way to achieve the same result? Or do you need more information about my problem?
Please forgive me if this question is not well-formed, but I am new here and this is my first question, so I may be imprecise... thanks anyway for the help.
Note: we will also back up the entire database with mysqldump.
You can use mysqldump's --where parameter to provide a condition like client_id=N. Of course I am making an assumption, since you don't provide any information on your schema.
If you have a star schema, then you could probably write a small script that backs up all lookup tables (assuming they are adequately small) using the --tables parameter, and use the --where condition for your client data table. For additional performance, you could partition that table by client_id.
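A minimal sketch of that idea (the table names, the client_id column, and the credentials are assumptions about your schema):
mysqldump -u USER -p mydb client_data --where="client_id=42" > client_42.sql
mysqldump -u USER -p mydb --tables countries statuses > lookups.sql
Restoring client 42 is then a matter of deleting his remaining rows and replaying client_42.sql.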

Select from second MySQL Server

I would like to select data from a second MySQL database in order to migrate data from one server to another.
I'm looking for syntax like
SELECT * FROM username:password@serverip.databaseName.tableName
Is this possible? I would be able to do this in Microsoft SQL Server using linked servers, so I'm assuming it's possible in MySQL as well.
You can create a table using the FEDERATED storage engine:
CREATE TABLE tableName (id INT NOT NULL, …)
ENGINE=FEDERATED
CONNECTION='mysql://username:password@serverip/databaseName/tableName';
SELECT *
FROM tableName;
Basically, it will serve as a view over the remote tableName.
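Once the FEDERATED table exists, the migration itself is an ordinary cross-table copy, e.g. (local_copy is a placeholder for a local table with the same structure):
INSERT INTO local_copy SELECT * FROM tableName;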
There are generally two approaches you can take, although neither of them sounds like what you're after:
Use replication and set up a master/slave relationship between the two databases.
Simply dump the data (using the command line mysqldump tool) from the 1st database and import it into the 2nd.
However, both of these will ultimately migrate all of the data (i.e.: not a subset), although you can specify specific table(s) via mysqldump. Additionally, if you use the mysqldump approach and you're not using InnoDB you'll need to ensure that the source database isn't in use (i.e.: has integrity) when the dump is created.
You can't do this directly, but as someone else alluded to in a comment, you can use mysqldump to export the contents of a table as a SQL script.
At that point you could run the script on the new server to create the table, or if more manipulation of the data is required, import that data into a table with a different name on the new server, then write a query to copy the data from there.
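For example, a sketch (the server name, credentials, and the table name are placeholders):
mysqldump -u USER -p sourceDB tableName > tableName.sql
mysql -h newserver -u USER -p targetDB < tableName.sql
If you want the data under a different table name, edit the CREATE TABLE and INSERT lines in tableName.sql before loading it.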

question about MySQL database migration

I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. Of course, the migration I mean here involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP/Python script (the two languages I know) to connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, with the extra column having a default value of 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail. For example: I have FOUR old databases, which I can name 'DB_a', 'DB_b', 'DB_c', 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column identifying its source database ('DB_x', where x indicates which database it comes from). If I can't tell the source database from a row's content, the only way I can identify the rows is through some user input parameters.
Are there any tools, or a better method, than writing a script yourself? Here I don't need to worry about multithreaded writing problems etc.; I mean the old database will be down for a while (not open to public usage, only for the upgrade).
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump steps completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
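A sketch of such a change script (the table and column names echo the example above; the source_db column is a hypothetical name for the question's 'DB_x' marker):
-- upgrade.sql
ALTER TABLE a ADD COLUMN x INT NOT NULL DEFAULT 0;
ALTER TABLE a ADD COLUMN source_db VARCHAR(8) NOT NULL DEFAULT 'DB_a';
The DEFAULT clauses mean every existing row picks up a value automatically, which solves the 28-vs-29-column problem without touching the data row by row.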
Use mysqldump to dump the data, then load it on the new server with mysql < output.txt. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current DB, using mysqldump with the --no-data option, to fetch the schema only
2) You alter the schema you have dumped, adding the new columns
3) You create your new schema (mysql < dump.sql - just google for "mysql backup restore" for more help on the syntax)
4) You dump your data using mysqldump's --complete-insert option (see the link above)
5) You import your data, using mysql < data.sql
This should do the job for you, good luck!
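Put together as a shell sketch (database and file names are placeholders; --no-create-info keeps the data dump from re-creating the tables, and --complete-insert writes the column names into each INSERT so the rows load correctly into the altered schema):
mysqldump --no-data old_db > dump.sql
# hand-edit dump.sql to add the new columns
mysql new_db < dump.sql
mysqldump --no-create-info --complete-insert old_db > data.sql
mysql new_db < data.sql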
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will set the new column to the default value for all existing rows.
So here is what I would do:
1) Make a copy of your old database with mysqldump.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that will modify your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (if it fails, go back to step 2).
5) If all is OK, go back to step 1 and go live with your new database.
These are all valid approaches, but I believe what you want is to write a SQL statement that generates the INSERT statements supporting the new columns you have.
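A sketch of that idea against the question's four old databases (the column names c1 and c2 and the source_db column name are hypothetical):
INSERT INTO new_db.A (c1, c2, source_db)
SELECT c1, c2, 'DB_a' FROM DB_a.A;
Repeat once per old database, changing the literal to 'DB_b', 'DB_c', and 'DB_d'.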