How to copy data from one table to another in the same database? - mysql

In MySQL, how do I copy data from one table to another within the same database?
I know about INSERT INTO ... SELECT, but it takes forever, and on a live database we can't take that risk.
Some conditions are there:
1. table1 is the source table and table1_archives is the destination table.
2. table1_archives already has data, so we can only append.
My attempt:
time mysqldump --log-error=$logfile --complete-insert --insert-ignore
--no-create-info --skip-triggers --user=$dbuser --host=$host $dbname table1
--where="created < now()-interval 10 month" > $filename
But the dump still references table1, so I can't load it into table1_archives.
Any guidance will be appreciated.
Thanks in advance.

In your output file, you need to change the table name table1 to table1_archives. Unfortunately, mysqldump does not have an option to do this, so you will have to do it on the fly using sed, which renames every occurrence of table1 in the output file to table1_archives.
Since your column data can also contain the string table1, it is safer to search and replace with the names enclosed in backticks.
You can also use gzip to compress the output file.
Here is the command that worked for me:
mysqldump -u USER -h HOST -p --skip-add-drop-table --no-create-info --skip-triggers --compact DB table1 |\
sed -e 's/`table1`/`table1_archives`/' | gzip > filename.sql.gz
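To load the resulting file back into table1_archives, a minimal sketch (assuming the same host, user and database variables as the mysqldump attempt above) would be:
gunzip -c filename.sql.gz | mysql --user=$dbuser --host=$host $dbname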

"but it is taking forever to do this"
There is a small trick to avoid this, after which the INSERT INTO ... SELECT runs much faster:
INSERT INTO table1 SELECT * FROM table2
Trick:
Step 1: drop all secondary indexes from the destination table (table1)
Step 2: execute the query
Step 3: create the indexes again
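As a rough sketch of those three steps (the index name idx_created and its column are hypothetical; adapt them to your schema):
-- Step 1: drop a secondary index on the destination table
ALTER TABLE table1 DROP INDEX idx_created;
-- Step 2: run the bulk copy
INSERT INTO table1 SELECT * FROM table2;
-- Step 3: recreate the index afterwards
ALTER TABLE table1 ADD INDEX idx_created (created);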

Related

MySQL: Insert into a table from another HOST

So I would like to copy some records from one table to another, but the catch is that the other table is on a different HOST. I will try to explain by giving you MySQL query pseudo code.
Another_host = "192.168.X.X";
INSERT INTO database_original.table_1( id, name, surname)
SELECT id, name, surname
FROM Another_host.database_another.table_2
WHERE Another_host.database_another.table_2.id > 1000;
I would probably have to declare the user for the "Another_host" somewhere.
This is what I am trying to do. Is this even possible the way I imagine it?
Thx
There is one workaround that will do what you want.
Step 1:
Take a dump of the table (mysqldump's --where option can restrict it to just the rows you want):
mysqldump -h <<firsthost>> -u myuser -pxxxxxxxx mydatabase myTable > mydumpfile.sql
Step 2: Restore the dump on the second host:
mysql -h <<secondhost>> -u myuser -pxxxxxxxx mydatabase < mydumpfile.sql
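If both hosts are reachable from the same shell, you can also skip the intermediate file and pipe the dump straight across (a sketch, assuming the same database and table exist on both hosts):
mysqldump -h <<firsthost>> -u myuser -pxxxxxxxx mydatabase myTable | mysql -h <<secondhost>> -u myuser -pxxxxxxxx mydatabase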

How to use mysqldump to preserve dbspace name on table statements

Background
We currently dump our database basically like this:
mysqldump --complete-insert --opt --hex-blob --all-databases -u -p
The dump will look something like this:
USE `DB1`
-- Table structure for table `MYTABLE`
DROP TABLE IF EXISTS `MYTABLE`
CREATE TABLE `MYTABLE`
...
INSERT INTO `MYTABLE` ...
-- Table structure for table `NEXTABLE`
...
USE `DB2`
-- Table structure for table `MYTABLE`
DROP TABLE IF EXISTS `MYTABLE`
CREATE TABLE `MYTABLE`
...
INSERT INTO `MYTABLE` ...
-- Table structure for table `NEXTABLE`
Problem
In some recovery scenarios we need to pull a specific table out of the backup. We might do something like this:
cat backup | sed -n -e '/Table structure for table .MYTABLE.$/,/Table structure for table .NEXTABLE.$/p' | mysql -u -p DB2
Because the individual table statements do not qualify the dbspace, in this case the table information for DB1.MYTABLE is going to be extracted, and thus DB2 is going to be populated with the backup from DB1.
Question
Is there a way to get the backup to qualify the dbspace name on each table statement such that the USE statement becomes unnecessary for this scenario? E.g.
USE `DB2`
-- Table structure for table `DB2`.`MYTABLE`
DROP TABLE IF EXISTS `DB2`.`MYTABLE`
CREATE TABLE `DB2`.`MYTABLE`
...
INSERT INTO `DB2`.`MYTABLE` ...
-- Table structure for table `DB2`.`NEXTABLE`
With no answer and seemingly no way to add the space name to the dump, I am forced to scan the dump differently. Also note that this is a recovery scenario, so we cannot simply change the way we already dumped the database; it is too late at that point.
Since the table name within a space is unique what I ended up going with was to first isolate the dbspace instructions in the dump and then isolate that table.
Use this to restore the table from dump.sql to the same space it came from:
sed -n '/^USE .SPACENAMEHERE.;$/,/^USE .*$/p' dump.sql | sed -E -n '/^(USE .*;|-- Table structure for table .TABLENAMEHERE.)$/,/^-- Table structure for table /p' | mysql -u -p
Substitute SPACENAMEHERE with the dbspace name and TABLENAMEHERE with the table name. Because this usage of sed includes the USE statement in the output, we do not need to specify which database to connect to on the mysql command line; as long as the user has permission to USE that space, it will work. But if you want to insert this into a different dbspace (e.g. a temporary one), use the command below.
Use this to restore the table from dump.sql to a different space (e.g. a temporary one):
sed -n '/^USE .SPACENAMEHERE.;$/,/^USE .*$/p' dump.sql | sed -E -n '/^-- Table structure for table .TABLENAMEHERE.$/,/^-- Table structure for table /p' | mysql -u -p DESTINATIONSPACE
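For instance, with the example dump above, pulling MYTABLE out of DB1 and loading it into a scratch database might look like this (RECOVERY_TMP is a hypothetical destination space that must already exist):
sed -n '/^USE .DB1.;$/,/^USE .*$/p' dump.sql | sed -E -n '/^-- Table structure for table .MYTABLE.$/,/^-- Table structure for table /p' | mysql -u -p RECOVERY_TMP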

Easiest way to copy a table from one database to another?

What is the best method to copy the data from a table in one database to a table in another database when the databases are under different users?
I know that I can use
INSERT INTO database2.table2 SELECT * from database1.table1
But here the problem is that both database1 and database2 are under different MySQL users. So user1 can access database1 only and user2 can access database2 only. Any idea?
CREATE TABLE db1.table1 SELECT * FROM db2.table1
where db1 is the destination and db2 is the source
If you have shell access, you may use mysqldump to dump the content of database1.table1 and pipe it into mysql connected to database2. The problem here is that the table is still named table1.
mysqldump --user=user1 --password=password1 database1 table1 \
| mysql --user=user2 --password=password2 database2
You could then rename table1 to table2 with another query, or alternatively use sed to change table1 to table2 between the two pipes.
mysqldump --user=user1 --password=password1 database1 table1 \
| sed -e 's/`table1`/`table2`/' \
| mysql --user=user2 --password=password2 database2
If table2 already exists, you can add parameters to the first mysqldump so that it does not emit the CREATE TABLE statements.
mysqldump --no-create-info --no-create-db --user=user1 --password=password1 database1 table1 \
| sed -e 's/`table1`/`table2`/' \
| mysql --user=user2 --password=password2 database2
If you are using PHPMyAdmin, it could be really simple.
Suppose you have the following databases:
DB1 & DB2
DB1 has a table users which you would like to copy to DB2.
Under PHPMyAdmin, open DB1, then go to users table.
On this page, click on the "Operations" tab on the top right.
Under Operations, look for section Copy table to (database.table):
Enter the destination database and table name, press Go, and you are done!
MySQL Workbench: Strongly Recommended
This will easily handle migration problems. You can migrate selected tables of selected databases between MySQL and SQL Server. You should definitely give it a try.
I use Navicat for MySQL...
It makes all database manipulation easy!
You simply select both databases in Navicat and then run:
INSERT INTO Database2.Table1 SELECT * from Database1.Table1
It worked well for me.
CREATE TABLE dbto.table_name like dbfrom.table_name;
insert into dbto.table_name select * from dbfrom.table_name;
If your tables are on the same mysql server you can run the following
CREATE TABLE destination_db.my_table SELECT * FROM source_db.my_table;
ALTER TABLE destination_db.my_table ADD PRIMARY KEY (id);
ALTER TABLE destination_db.my_table MODIFY COLUMN id INT AUTO_INCREMENT;
Here is another easy way:
1. use DB1; show create table TB1;
2. Copy the resulting CREATE TABLE syntax to your clipboard.
3. use DB2; and paste the syntax there to create table TB1.
4. INSERT INTO DB2.TB1 SELECT * from DB1.TB1;
I know this is an old question; I'm just answering so that anyone who lands here gets a better approach.
As of 5.6.10 you can do:
CREATE TABLE new_tbl LIKE orig_tbl;
Refer documentation here: https://dev.mysql.com/doc/refman/5.7/en/create-table-like.html
Use MySQL Workbench's Export and Import functionality.
Steps:
1. Select the values you want, e.g. select * from table1;
2. Click on the Export button and save the result as CSV.
3. Create a new table using similar columns as the first one, e.g. create table table2 like table1;
4. Select all from the new table, e.g. select * from table2;
5. Click on Import and select the CSV file you exported in step 2.
Try mysqldbcopy (documentation)
Or you can create a "federated table" on your target host. Federated tables allow you to see a table from a different database server as if it was a local one. (documentation)
After creating the federated table, you can copy data with the usual insert into TARGET select * from SOURCE
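A minimal sketch of such a federated table (all names and connection details are placeholders; the FEDERATED engine must be enabled, and the column definitions must match the remote table exactly):
CREATE TABLE source_remote (
  id INT NOT NULL,
  name VARCHAR(64),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
CONNECTION='mysql://remote_user:remote_pass@source_host:3306/source_db/source_table';
INSERT INTO target_table SELECT * FROM source_remote;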
With MySQL Workbench you can use Data Export to dump just the table to a local SQL file (Data Only, Structure Only or Structure and Data) and then Data Import to load it into the other DB.
You can have multiple connections (different hosts, databases, users) open at the same time.
One simple way to get all the queries you need is to use the data from information_schema and concat.
SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM `TABLES` WHERE TABLE_SCHEMA = 'old_db';
You'll then get a list of results that looks like this:
CREATE TABLE new_db.articles LIKE old_db.articles;
CREATE TABLE new_db.categories LIKE old_db.categories;
CREATE TABLE new_db.users LIKE old_db.users;
...
You can then just run those queries.
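If you would rather not copy and paste, one possible shortcut (a sketch; connection credentials are omitted, and the second mysql must be allowed to create tables in new_db) is to pipe the generated statements straight back into mysql:
mysql -N -e "SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'old_db'" | mysql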
However, it won't work with MySQL views. You can exclude them by appending AND TABLE_TYPE = 'BASE TABLE' to the initial query.
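For example, the amended query (written with the schema-qualified form so it runs from any default database) would look something like this:
SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'old_db' AND TABLE_TYPE = 'BASE TABLE';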
First create the dump. Add the --no-create-info --no-create-db flags if table2 already exists:
mysqldump -u user1 -p database1 table1 > dump.sql
Then enter user1 password. Then:
sed -i -e 's/`table1`/`table2`/' dump.sql
mysql -u user2 -p database2 < dump.sql
Then enter user2 password.
Same as helmor's answer, but this approach is more secure as passwords aren't exposed in raw text to the console (reverse-i-search, password sniffers, etc). The other approach is fine if it's executed from a script file with appropriate restrictions placed on its permissions.
Is this something you need to do regularly, or just a one off?
You can do an export (e.g. using phpMyAdmin or similar) that will script out your table and its contents to a text file, and then re-import that into the other database.
Use the steps below to copy and insert some columns from a table in one database into a table in another database:
1. CREATE TABLE db2.tablename ( columnname1 datatype(size), columnname2 datatype(size) );
2. INSERT INTO db2.tablename SELECT columnname1, columnname2 FROM db1.tablename;
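A concrete sketch with hypothetical database, table and column names:
CREATE TABLE db2.customers_copy ( id INT, email VARCHAR(255) );
INSERT INTO db2.customers_copy SELECT id, email FROM db1.customers;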
For me, I needed to qualify the schema as information_schema.TABLES, for example:
SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'old_db';
In XAMPP, just export the required table as a .sql file and then import it into the required database.
create table destination_customer like sakila.customer (i.e. database_name.tablename) will only copy the structure of the source table. To copy the data along with the structure, use: create table destination_customer as select * from sakila.customer

How to dump data from one table and insert to another

I have two databases. I want to dump data from a table in the 1st database and insert it into a table with a different name in the 2nd database.
So I have DB1 that has tables tbl1 and tbl2, and DB2 that has tables tbl3 and tbl4. I know that tbl1 and tbl3 have the same structure. How do I copy data from one to the other using the mysqldump command?
I've tried to do this, but it does not work:
mysqldump --user root --password=password --no-create-info DB1 tbl1 > c:/dump.sql
mysql --user root --password=password DB2 tbl3 < c:/dump.sql
This is not going to work because the table names differ.
If both databases are on the same server, served by the same daemon, you can directly run:
insert into DB2.tbl3 select * from DB1.tbl1;
If tbl1 does not exist in DB2, the pseudo code for this is:
# import as tbl1 from DB1 into tbl1 in DB2
mysqldump DB1 tbl1 | mysql DB2
# then rename tbl1 in DB2 to tbl3
mysql DB2 -N <<< "rename table tbl1 to tbl3"
I use this on a Linux shell command line:
mysqldump --user=username --password=xxxx dbname | mysql --host=remotehost.com --user=username --password=xxxx -C dbname
This transfers the whole database from the local host to a remote host.
If you also want to copy the contents of the table, you can do:
CREATE TABLE `new_table_name` LIKE `old_table_name`;
INSERT INTO `new_table_name` SELECT * FROM `old_table_name`;
If you have to copy a table from one database to another database, then use the following:
CREATE TABLE `db1`.`new_table_name` LIKE `db2`.`old_table_name`;
INSERT INTO `db1`.`new_table_name` SELECT * FROM `db2`.`old_table_name`;
This worked for me, since dumping a single table and importing it was throwing a syntax error with MariaDB.

How do I dump only results of one table in a database in MySQL and replicate to another remote empty table with the same structure?

It's a long sentence; does anyone know?
To create the dump:
mysqldump --user [username] --password=[password] --no-create-info [database name] [table name] > /tmp/dump.sql
To restore:
mysql --user=[username] --password=[password] [database name] < /tmp/dump.sql
Some thing like this:
SELECT *
INTO new_table_name [IN externaldatabase]
FROM old_tablename
WHERE 1=0
I'm not sure this works with MySQL but you get the idea.
This DOES NOT duplicate PK, FK, Indexes, Stored Procedures, or anything else. Just the columns and data types.
W3Schools
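For what it's worth, a rough MySQL equivalent of that structure-only copy (table names are placeholders) would be:
CREATE TABLE new_table_name AS SELECT * FROM old_tablename WHERE 1=0;
Like the SQL Server form, this copies only the columns and data types; CREATE TABLE ... LIKE (shown in other answers) also preserves indexes.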
D'oh
I must have misunderstood your question. It is almost 3 am, way past bedtime.
OK, so there is probably a better way to do this, but this would get the job done (target_table and source_table here are placeholders for your own table names).
SELECT CONCAT('INSERT INTO target_table (COL1, COL2) VALUES (', COL1, ', ', COL2, ');') FROM source_table;
You will need to add quotes (') for some data types, so:
SELECT CONCAT('INSERT INTO target_table (COL1, COL2) VALUES (', COL1, ', ''', COL2, ''');') FROM source_table;
Then take the output and run it on the remote db.
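A slightly more robust variant of the same idea (again with placeholder table names) is to let MySQL's QUOTE() function handle the quoting, escaping and NULLs for you:
SELECT CONCAT('INSERT INTO target_table (COL1, COL2) VALUES (', QUOTE(COL1), ', ', QUOTE(COL2), ');') FROM source_table;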