INSERT INTO ... SELECT in MySQL

I have a big table (more than 60k rows) and I am trying to copy its unique rows to another table. The query is as follows:
INSERT INTO tbl2(field1, field2)
SELECT DISTINCT field1, field2
FROM tbl1;
But this query is taking ages to run. Can someone suggest a way to accelerate the process?

Execute a mysqldump of your table to generate a data file (one row per line), then filter out the duplicated rows with a shell command. Note that uniq only removes adjacent duplicate lines, so the data must be sorted first:
sort dump.sql | uniq > dump_filtered.sql
Check the generated file. Then create your new table and load dump_filtered.sql into it with LOAD DATA INFILE.
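A sketch of that pipeline under a few assumptions: the database is called mydb, the server may write to /tmp/dump (check secure_file_priv), and mysqldump runs on the server host, since --tab makes the server itself write the rows as a tab-separated .txt file, which is the one-row-per-line format that sort/uniq and LOAD DATA INFILE expect:
mysqldump --tab=/tmp/dump --user=user --password=password mydb tbl1
sort /tmp/dump/tbl1.txt | uniq > /tmp/dump/tbl1_filtered.txt
mysql --user=user --password=password mydb \
  -e "LOAD DATA INFILE '/tmp/dump/tbl1_filtered.txt' INTO TABLE tbl2"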

Try this:
1. Drop the destination table: DROP TABLE destination_table;
2. Recreate and fill it in one statement: CREATE TABLE destination_table AS (SELECT * FROM source_table);
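Applied to the question's tables, keeping the DISTINCT filter so only unique rows land in the copy (a sketch reusing the names from the question):
DROP TABLE tbl2;
CREATE TABLE tbl2 AS (SELECT DISTINCT field1, field2 FROM tbl1);
Note that CREATE TABLE ... AS copies data but not indexes, so re-add any keys tbl2 needs afterwards.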

Related

duplicating a MariaDB table with a small /tmp filesystem

I am duplicating a table with this command:
CREATE TABLE new_table LIKE old_table;
INSERT INTO new_table SELECT * FROM old_table;
Unfortunately, on my system /tmp is placed on a separate filesystem with only 1 GB of space.
If a large query is executed, that 1 GB gets filled by MariaDB very quickly, making it impossible to execute large queries. It is a production server, so I'd rather leave the filesystems as they are and instruct MariaDB to make smaller temporary files and delete them on the fly.
How do I instruct MariaDB to split a large query into multiple temporary files so that /tmp doesn't get jammed with temporary files that cause query termination?
You can always split the INSERT into smaller pieces.
In the example below I assume that i is the primary key:
WITH RECURSIVE cte1 AS (
    SELECT 1 AS s, 9999 AS e
    UNION ALL
    SELECT e + 1, e + 9999
    FROM cte1
    WHERE e <= (SELECT COUNT(*) FROM old_table)
)
SELECT CONCAT('INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN ', s, ' AND ', e)
FROM cte1 LIMIT 10;
This script produces several INSERT statements that you can run one after the other.
output:
INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN 1 AND 9999
INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN 10000 AND 19998
INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN 19999 AND 29997
You are, of course, free to change the 9999 to any other number.
It may be related to the table content. As explained in the doc:
Some query conditions prevent the use of an in-memory temporary table, in which case the server uses an on-disk table instead.
If you have the privileges, you can try to set another disk location, since:
Temporary files are created in the directory defined by the tmpdir variable.
Other than that, the doc is unfortunately pretty clear:
In some cases, the server creates internal temporary tables while processing statements. Users have no direct control over when this occurs.
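A sketch of relocating those files, assuming a path like /var/lib/mysql-tmp on a filesystem with more room; tmpdir is not a dynamic variable, so it goes in the server configuration file and takes effect after a restart:
[mysqld]
tmpdir = /var/lib/mysql-tmp
You can verify the active setting afterwards with SELECT @@tmpdir;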

mysqlimport won't load records into a table created with a LIKE statement

I have a MySQL table products, and I use mysqlimport to load data into it from a CSV file. I created another table copy_products using the following command: CREATE TABLE copy_products LIKE products;
Now when I try to load data into copy_products using mysqlimport, it gives the same message as for the old one: "db.copy_products: Records: 1000 Deleted: 0 Skipped: 0 Warnings: 0", which indicates that all rows in the CSV file were inserted (as I understand it). However, the table is empty and has no records! So, any clue here? Are tables created using a LIKE statement special in some way?
I think that after creating the clone of the table, you need to insert the records into the new table, as follows:
CREATE TABLE copy_products LIKE products;
INSERT INTO copy_products SELECT * FROM products GROUP BY id;
If you want to do it in one statement, try the following:
CREATE TABLE copy_products SELECT * FROM products GROUP BY id;

how to duplicate all the databases with limited rows in the tables

How can I duplicate my databases with a limited number of rows in the tables?
Basically, the duplicated DB must have the same properties as the original database, but with limited rows in the tables.
Try this: first create a similar table using
CREATE TABLE tbl_name_duplicate LIKE tbl_name;
then insert a limited number of records into it using
INSERT INTO tbl_name_duplicate (SELECT * FROM tbl_name LIMIT 10);
to insert 10 records.
Another approach is to use the --where option of mysqldump, so you can express something similar to this SQL query:
SELECT * FROM table_name WHERE id > (SELECT MAX(id) FROM table_name) - 10
rewritten for mysqldump (but you'll have to dump one table at a time, not the whole database):
mysqldump [options] --where="id > (SELECT MAX(id) FROM table_name) - 10" | mysql --host=host --user=user --password=password some_database
More information is in the MySQL Reference Manual.
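A concrete sketch of that pipeline with placeholder names (source_db, table_name, and the credentials are illustrative); the [options] above must at least name the source database and table:
mysqldump --user=user --password=password --where="id > (SELECT MAX(id) FROM table_name) - 10" \
  source_db table_name \
| mysql --host=host --user=user --password=password duplicate_db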

Easiest way to copy a table from one database to another?

What is the best method to copy the data from a table in one database to a table in another database when the databases are under different users?
I know that I can use
INSERT INTO database2.table2 SELECT * from database1.table1
But here the problem is that both database1 and database2 are under different MySQL users. So user1 can access database1 only and user2 can access database2 only. Any idea?
CREATE TABLE db1.table1 SELECT * FROM db2.table1
where db1 is the destination and db2 is the source
If you have shell access, you may use mysqldump to dump the content of database1.table1 and pipe it into mysql for database2. The problem here is that the table name stays table1.
mysqldump --user=user1 --password=password1 database1 table1 \
| mysql --user=user2 --password=password2 database2
You may then need to rename table1 to table2 with another query. Alternatively, you can use sed to change table1 to table2 between the two pipes.
mysqldump --user=user1 --password=password1 database1 table1 \
| sed -e 's/`table1`/`table2`/' \
| mysql --user=user2 --password=password2 database2
If table2 already exists, you can add the parameters to the first mysqldump that suppress the CREATE statements:
mysqldump --no-create-info --no-create-db --user=user1 --password=password1 database1 table1 \
| sed -e 's/`table1`/`table2`/' \
| mysql --user=user2 --password=password2 database2
If you are using PHPMyAdmin, it could be really simple.
Suppose you have the following databases:
DB1 & DB2
DB1 has a table users which you would like to copy to DB2.
Under PHPMyAdmin, open DB1, then go to users table.
On this page, click on the "Operations" tab on the top right.
Under Operations, look for the section Copy table to (database.table), enter DB2 as the target database, and submit. You are done!
MySQL Workbench: Strongly Recommended
This will easily handle migration problems. You can migrate selected tables of selected databases between MySQL and SQL Server. You should definitely give it a try.
I use Navicat for MySQL...
It makes all database manipulation easy!
You simply select both databases in Navicat and then use:
INSERT INTO Database2.Table1 SELECT * from Database1.Table1
It has worked well for me.
CREATE TABLE dbto.table_name LIKE dbfrom.table_name;
INSERT INTO dbto.table_name SELECT * FROM dbfrom.table_name;
If your tables are on the same MySQL server, you can run the following:
CREATE TABLE destination_db.my_table SELECT * FROM source_db.my_table;
ALTER TABLE destination_db.my_table ADD PRIMARY KEY (id);
ALTER TABLE destination_db.my_table MODIFY COLUMN id INT AUTO_INCREMENT;
Here is another easy way:
1. USE DB1; then run SHOW CREATE TABLE TB1; and copy the CREATE statement it prints to your clipboard.
2. USE DB2; and paste that statement to create the table TB1 there.
3. INSERT INTO DB2.TB1 SELECT * FROM DB1.TB1;
I know this is an old question; I'm just answering so that anyone who lands here gets a better approach.
As of 5.6.10 you can do
CREATE TABLE new_tbl LIKE orig_tbl;
Refer documentation here: https://dev.mysql.com/doc/refman/5.7/en/create-table-like.html
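Note that LIKE copies only the table structure (including indexes), not the rows; to copy the data as well, follow it with the usual INSERT ... SELECT:
CREATE TABLE new_tbl LIKE orig_tbl;
INSERT INTO new_tbl SELECT * FROM orig_tbl;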
Use MySQL Workbench's Export and Import functionality.
Steps:
1. Select the values you want, e.g. SELECT * FROM table1;
2. Click on the Export button and save it as CSV.
3. Create a new table using similar columns as the first one, e.g. CREATE TABLE table2 LIKE table1;
4. Select all from the new table, e.g. SELECT * FROM table2;
5. Click on Import and select the CSV file you exported in step 2.
Try mysqldbcopy (documentation)
Or you can create a "federated table" on your target host. Federated tables allow you to see a table from a different database server as if it were a local one. (documentation)
After creating the federated table, you can copy the data with the usual INSERT INTO target SELECT * FROM source.
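A minimal sketch of the federated approach, assuming the FEDERATED engine is enabled on the target server and using placeholder credentials and columns; the local column list must match the remote table's definition:
CREATE TABLE table1_federated (
  id INT NOT NULL,
  name VARCHAR(50)
) ENGINE=FEDERATED
  CONNECTION='mysql://user1:password1@source_host:3306/database1/table1';
INSERT INTO database2.table2 SELECT * FROM table1_federated;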
With MySQL Workbench you can use Data Export to dump just the table to a local SQL file (Data Only, Structure Only or Structure and Data) and then Data Import to load it into the other DB.
You can have multiple connections (different hosts, databases, users) open at the same time.
One simple way to get all the queries you need is to use the data from information_schema and concat.
SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM `TABLES` WHERE TABLE_SCHEMA = 'old_db';
You'll then get a list of results that looks like this:
CREATE TABLE new_db.articles LIKE old_db.articles;
CREATE TABLE new_db.categories LIKE old_db.categories;
CREATE TABLE new_db.users LIKE old_db.users;
...
You can then just run those queries.
However, it won't work with MySQL views. You can skip them by appending AND TABLE_TYPE = 'BASE TABLE' to the WHERE clause of the initial query:
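SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM `TABLES` WHERE TABLE_SCHEMA = 'old_db' AND TABLE_TYPE = 'BASE TABLE';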
First create the dump (add the --no-create-info --no-create-db flags if table2 already exists):
mysqldump -u user1 -p database1 table1 > dump.sql
Then enter user1's password. Next, rewrite the table name in place (note the -i, without which sed would only print the result instead of modifying dump.sql) and load the file:
sed -i -e 's/`table1`/`table2`/' dump.sql
mysql -u user2 -p database2 < dump.sql
Then enter user2's password.
Same as helmor's answer, but this approach is more secure as passwords aren't exposed in raw text to the console (reverse-i-search, password sniffers, etc.). The other approach is fine if it's executed from a script file with appropriate restrictions placed on its permissions.
Is this something you need to do regularly, or just a one-off?
You can do an export (e.g. using phpMyAdmin or similar) that will script out your table and its contents to a text file, and then re-import it into the other database.
Use the steps below to copy selected columns from a table in one database to a table in another database:
1. CREATE TABLE tablename (columnname datatype(size), columnname datatype(size));
2. INSERT INTO db2.tablename SELECT columnname1, columnname2 FROM db1.tablename;
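A concrete sketch of those two steps, with hypothetical table and column names:
CREATE TABLE db2.customers_copy (id INT, name VARCHAR(50));
INSERT INTO db2.customers_copy SELECT id, name FROM db1.customers;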
For me, I needed to qualify the schema explicitly, i.e. information_schema.TABLES.
For example:
SELECT concat('CREATE TABLE new_db.', TABLE_NAME, ' LIKE old_db.', TABLE_NAME, ';') FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'old_db';
In XAMPP, just export the required table as a .sql file and then import it into the required database.
CREATE TABLE destination_customer LIKE sakila.customer (i.e. database_name.table_name) will only copy the structure of the source table. For the data to be copied along with the structure, do this instead: CREATE TABLE destination_customer AS SELECT * FROM sakila.customer

how to manipulate the data from a "show databases" query result set row by row

Suppose I have a database data1 which gives me this:
show tables;
table1
table2
table3
Now, instead of individually executing "SELECT * FROM ..." on each table, I want to create a procedure which goes through each database shown in the "SHOW DATABASES;" result set and then executes SELECT * on each table of that database. I thought of using cursors, which would scroll down the result set, hold each database name in a variable, and then execute the SELECT statement on each table of that database, traversing in the same way. Can someone kindly help me out with how to use cursors in this case, as I am only aware of using cursors for SELECT and UPDATE statements?
BTW, I use MySQL.
I'll refrain from asking why you would do this. Here is a general strategy in pseudocode;
[words in parentheses are (SQL commands and tables) which will run on your MySQL server.]
Connect to your MySQL server with your favourite tool/programming language and:
-- (USE information_schema;)
for db in (SELECT DISTINCT table_schema FROM tables;)
do:
    for table in (SELECT table_name FROM tables WHERE table_schema = '$db';)
    do:
        SELECT field, column, attribute FROM $table;
    done
done
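Since the question asks about cursors specifically, here is a minimal sketch of the same strategy as a stored procedure (the procedure name is illustrative); table names can't be bound as parameters, so each SELECT is built as a string and run via PREPARE:
DELIMITER //
CREATE PROCEDURE select_all_tables()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE db_name VARCHAR(64);
  DECLARE tbl_name VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
      AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO db_name, tbl_name;
    IF done THEN LEAVE read_loop; END IF;
    -- identifiers can't be placeholders, so concatenate the statement text
    SET @sql = CONCAT('SELECT * FROM `', db_name, '`.`', tbl_name, '`');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;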
Good luck!
You can get the list from information_schema instead of 'SHOW DATABASES'.