I want to copy data from one table to another, but only the rows that have processed='1' and a date after a specific date.
I have code that does this, but it's taking a very long time to execute.
"INSERT INTO eamglo5_billingsystem.`consignment` (
`consignment_status`,
`account`,
`awb`,
`hawb`,
`service`,
`handling`,
`reference`,
`date_submitted`,
`date_imported`,
`date_printed`,
`printed_file_id`,
`date_received`,
`date_booked`,
`booked_file_id`,
`date_exported`,
`export_file_id`,
`company`,
`contact`,
`address_line_1`,
`address_line_2`,
`address_line_3`,
`id`
)
SELECT
'Y',
`account`,
`awb`,
`hawb`,
`service`,
`handling`,
`reference`,
`date_submitted`,
`date_imported`,
`date_printed`,
`printed_file_id`,
`date_received`,
`date_booked`,
`booked_file_id`,
`date_exported`,
`export_file_id`,
`company`,
`contact`,
`address_line_1`,
`address_line_2`,
`address_line_3`,
`id`
FROM `eamglo5_singaporelive`.`consignment`
left join (
SELECT eamglo5_billingsystem.`consignment`.`id` as id1
FROM eamglo5_billingsystem.`consignment`
) t ON `eamglo5_singaporelive`.`consignment`.id >id1
WHERE `eamglo5_singaporelive`.`consignment`.`processed`=1
and `eamglo5_singaporelive`.`consignment`.date_booked>'2018-07-17'
Expected: data should be copied from the eamglo5_singaporelive.consignment table into the eamglo5_billingsystem.consignment table, with only processed=1 rows.
Actual: the query takes a seemingly infinite time to execute and fetch the rows.
Your LEFT JOIN with the condition consignment.id > id1 is almost creating a Cartesian product. What you probably want is to insert only the rows from the source table whose id is higher than the highest id in the destination table. Use a SELECT MAX(id) subquery instead:
SELECT [..]
FROM `eamglo5_singaporelive`.`consignment`
WHERE `eamglo5_singaporelive`.`consignment`.`processed`=1
and `eamglo5_singaporelive`.`consignment`.date_booked>'2018-07-17'
and `eamglo5_singaporelive`.`consignment`.id > (
SELECT MAX(id) FROM eamglo5_billingsystem.`consignment`
)
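For reference, the complete INSERT could look like the sketch below, reusing the column list from the question. One caveat worth hedging: if the destination table is empty, MAX(id) returns NULL and the id comparison would then match no rows, so wrapping it in COALESCE is a safe default (assuming id values start above 0):
INSERT INTO eamglo5_billingsystem.`consignment` (
`consignment_status`, `account`, `awb`, `hawb`, `service`, `handling`,
`reference`, `date_submitted`, `date_imported`, `date_printed`, `printed_file_id`,
`date_received`, `date_booked`, `booked_file_id`, `date_exported`, `export_file_id`,
`company`, `contact`, `address_line_1`, `address_line_2`, `address_line_3`, `id`
)
SELECT
'Y', `account`, `awb`, `hawb`, `service`, `handling`,
`reference`, `date_submitted`, `date_imported`, `date_printed`, `printed_file_id`,
`date_received`, `date_booked`, `booked_file_id`, `date_exported`, `export_file_id`,
`company`, `contact`, `address_line_1`, `address_line_2`, `address_line_3`, `id`
FROM `eamglo5_singaporelive`.`consignment`
WHERE `processed` = 1
AND `date_booked` > '2018-07-17'
-- COALESCE covers an empty destination table, where MAX(`id`) would be NULL
AND `id` > (SELECT COALESCE(MAX(`id`), 0) FROM eamglo5_billingsystem.`consignment`);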
I have a query that goes something like this :
INSERT IGNORE INTO `destination_table` (`id`, `field1`, `field2`, `field3`)
SELECT `id`, `field1`, `field2`, `field3`
FROM `source_table`
WHERE `source_table`.`id` IN (
SELECT DISTINCT `id` FROM `some_table`
UNION DISTICT SELECT DISTINCT `id` FROM `some_other_table`
);
This does not work; the query hangs indefinitely. The size of the tables is definitely not the problem; all tables have a fairly small number of records (< 100k). The query is fine and quite fast if I run it without the UNION:
INSERT IGNORE INTO `destination_table` (`id`, `field1`, `field2`, `field3`)
SELECT `id`, `field1`, `field2`, `field3`
FROM `source_table`
WHERE `source_table`.`id` IN (
SELECT DISTINCT `id` FROM `some_table` -- I tried with `some_other_table` too, same result
);
or
INSERT IGNORE INTO `destination_table` (`id`, `field1`, `field2`, `field3`)
SELECT `id`, `field1`, `field2`, `field3`
FROM `source_table`
both work and are nice and fast (well under a second). So I imagine that the UNION DISTICT SELECT ... is the culprit here, but I don't know why.
What's wrong with that query, and why does it hang?
I'm using MySQL 5.7, if that makes a difference.
Your first query seems to have a few typos, but I would suggest using exists logic here:
INSERT IGNORE INTO destination_table (id, field1, field2, field3)
SELECT id, field1, field2, field3
FROM source_table t1
WHERE
EXISTS (SELECT 1 FROM some_table s1 WHERE s1.id = t1.id) OR
EXISTS (SELECT 1 FROM some_other_table s2 WHERE s2.id = t1.id);
The possible advantage of using EXISTS in this way is that MySQL can stop searching as soon as it finds the first matching id in either of the two subqueries. You may find that adding an index on the id column in each of the two other tables helps (assuming id is not already indexed):
CREATE INDEX some_idx_1 ON some_table (id);
CREATE INDEX some_idx_2 ON some_other_table (id);
This should speed up the lookup of the id in the two dependent tables.
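If you want to confirm that the new indexes are actually used, running EXPLAIN on the SELECT part is a quick diagnostic (shown here only as a check, not as part of the fix):
EXPLAIN
SELECT id, field1, field2, field3
FROM source_table t1
WHERE
    EXISTS (SELECT 1 FROM some_table s1 WHERE s1.id = t1.id) OR
    EXISTS (SELECT 1 FROM some_other_table s2 WHERE s2.id = t1.id);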
You could work around the problem by rephrasing the query:
INSERT IGNORE INTO `destination_table` (`id`, `field1`, `field2`, `field3`)
SELECT `id`, `field1`, `field2`, `field3`
FROM `source_table`
WHERE `source_table`.`id` IN (
SELECT DISTINCT `id` FROM `some_table`
)
OR `source_table`.`id` IN (
SELECT DISTINCT `id` FROM `some_other_table`
);
So here is the perfectly working query I need to run, though it is missing the necessary condition:
INSERT INTO content (`id`,`id_pages`,`content`, `date`)
SELECT `id`, `id`, `content`, `date_modified` FROM `pages`;
Unfortunately, not all of the databases are synced properly, so some of the tables are populated and some are not.
How do I INSERT data from one table into another only IF the destination table is empty?
A couple of queries I've tried:
IF (
SELECT count(id) FROM content='0',
INTO content (`id`,`id_pages`,`content`, `date`)
SELECT `id`, `id`, `content`, `date_modified` FROM `pages`)
...as well as:
IF (SELECT count(id) FROM content)=0
THEN (INSERT INTO content (`id`,`id_pages`,`content`, `date`)
SELECT `id`, `id`, `content`, `date_modified` FROM `pages`);
Try this:
INSERT INTO content (`id`,`id_pages`,`content`, `date`)
SELECT `id`, `id`, `content`, `date_modified`
FROM `pages`
WHERE NOT EXISTS (SELECT 1 FROM content)
The SELECT of the above INSERT statement will return all pages records unless there is at least one record in the content table.
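As a side note, the IF ... THEN form attempted in the question is only valid inside a stored program, not as a standalone statement; a rough sketch of that approach (the procedure name copy_pages_if_empty is made up for illustration):
DELIMITER //
CREATE PROCEDURE copy_pages_if_empty()
BEGIN
    -- IF ... THEN only works inside stored programs
    IF (SELECT COUNT(*) FROM content) = 0 THEN
        INSERT INTO content (`id`, `id_pages`, `content`, `date`)
        SELECT `id`, `id`, `content`, `date_modified` FROM `pages`;
    END IF;
END //
DELIMITER ;
CALL copy_pages_if_empty();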
I am working on a scraping project to crawl items and their scores over different schedules. A schedule is a user-defined period (date) when the script is intended to run.
Table structure is as follows:
--
-- Table structure for table `test_join`
--
CREATE TABLE IF NOT EXISTS `test_join` (
`schedule_id` int(11) NOT NULL,
`player_name` varchar(50) NOT NULL,
`type` enum('celebrity','sportsperson') NOT NULL,
`score` int(11) NOT NULL,
PRIMARY KEY (`schedule_id`,`player_name`,`type`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
--
-- Dumping data for table `test_join`
--
INSERT INTO `test_join` (`schedule_id`, `player_name`, `type`, `score`) VALUES
(1, 'sachin', 'sportsperson', 100),
(1, 'ganguly', 'sportsperson', 80),
(1, 'dravid', 'sportsperson', 60),
(1, 'sachin', 'celebrity', 100),
(2, 'sachin', 'sportsperson', 120),
(2, 'ganguly', 'sportsperson', 100),
(2, 'sachin', 'celebrity', 120);
The scraping is done over periods, and each schedule is expected to have about 10k+ entries. Schedules could be created on a daily basis, so the data would grow to around 2 million rows in 5-6 months.
Over this data I need to run queries that aggregate the players who appear in every schedule within a selected range of schedules.
For example:
I need to aggregate the same players who appear across multiple schedules. If schedules 1 and 2 are selected, only the items that appear in both schedules should be selected.
I am using the following query to aggregate results based on the type,
For schedule 1:
SELECT fullt.type,COUNT(*) as count,SUM(fullt.score) FROM
(SELECT tj.*
FROM `test_join` tj
RIGHT JOIN
(SELECT `player_name`,`type`,COUNT(`schedule_id`) as c FROM `test_join` WHERE `schedule_id` IN (1,2) GROUP BY `player_name`,`type` HAVING c=2) stj
on tj.player_name = stj.player_name
WHERE tj.`schedule_id`=1
GROUP BY tj.`type`,tj.`player_name`)AS fullt
GROUP BY fullt.type
Reason for c = 2:
WHERE `schedule_id` IN (1,2) GROUP BY `player_name`,`type` HAVING c=2
Here we are selecting two schedules, 1 and 2. Hence the count of 2 is used so that the query fetches only records that belong to both schedules, i.e. occur twice.
It would generate results as follows:
Schedule 1: expected results
Schedule 2: expected results
This is my expected result, and the query returns the results as above.
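More generally, the HAVING threshold should equal the number of schedules selected; for example, if schedules 1, 2 and 3 were selected, the inner query would become:
SELECT `player_name`, `type`, COUNT(`schedule_id`) as c
FROM `test_join`
WHERE `schedule_id` IN (1,2,3)
GROUP BY `player_name`, `type`
HAVING c = 3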
(In the real case I have to work across pretty big MySQL tables)
From my understanding of MySQL best practices, using subqueries, WHERE IN, comparisons on varchar fields, and multiple GROUP BYs can hurt query performance.
I need the aggregated results in real time, so query speed as well as good practice are both concerns. How could this be optimized for better performance in this context?
EDIT:
I have now reduced the subqueries:
SELECT fullt.type,COUNT(*) as count,SUM(fullt.score) FROM (
SELECT t.*
FROM `test_join` t
INNER JOIN test_join t1 ON t.`player_name` = t1.player_name AND t1.schedule_id = 1
INNER JOIN test_join t2 ON t.player_name = t2.player_name AND t2.schedule_id = 2
WHERE t.schedule_id = 2
GROUP BY t.`player_name`,t.`type`) AS fullt
GROUP BY fullt.type
Is this a better way to do it? I have replaced WHERE IN with JOINs.
Any advice would be highly appreciated; I would be happy to provide any supporting information if needed.
Try the SQL query below in MySQL:
SELECT tj.`type`,COUNT(*) as count,SUM(tj.`score`) FROM
`test_join` tj
where tj.`schedule_id`=1
and `player_name` in
(
select tj1.`player_name` from `test_join` tj1
group by tj1.`player_name` having count(tj1.`player_name`) > 1
)
group by tj.`type`
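One hedge on this: HAVING count(tj1.player_name) > 1 counts rows across all schedules and types, which happens to work for this sample data but may over-match once more schedules exist. A variant that restricts the subquery to the selected schedules (mirroring the HAVING c = 2 idea from the question) would be:
SELECT tj.`type`, COUNT(*) as count, SUM(tj.`score`)
FROM `test_join` tj
WHERE tj.`schedule_id` = 1
AND tj.`player_name` IN
(
    SELECT tj1.`player_name`
    FROM `test_join` tj1
    WHERE tj1.`schedule_id` IN (1,2)
    GROUP BY tj1.`player_name`
    HAVING COUNT(DISTINCT tj1.`schedule_id`) = 2
)
GROUP BY tj.`type`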
Actually, I tried the same data in Sybase as I don't have MySQL installed on my machine. It worked as expected!
CREATE TABLE #test_join
(
schedule_id int NOT NULL,
player_name varchar(50) NOT NULL,
type1 varchar(15) NOT NULL,
score int NOT NULL
)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES
(1, 'sachin', 'sportsperson', 100)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES(1, 'ganguly', 'sportsperson', 80)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES(1, 'dravid', 'sportsperson', 60)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES(1, 'sachin', 'celebrity', 100)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES(2, 'sachin', 'sportsperson', 120)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES(2, 'ganguly', 'sportsperson', 100)
INSERT INTO #test_join (schedule_id, player_name, type1, score) VALUES(2, 'sachin', 'celebrity', 120)
select * from #test_join
Print 'Solution #1 : Inner join'
select type1,count(*),sum(score) from
#test_join
where schedule_id=1 and player_name in (select player_name from #test_join t1 group by player_name having count(player_name) > 1 )
group by type1
select player_name,type1,sum(score) Score into #test_join_temp
from #test_join
group by player_name,type1
having count(player_name) > 1
Print 'Solution #2 using Temp Table'
--select * from #test_join_temp
select type1,count(*),sum(score) from
#test_join
where schedule_id=1 and player_name in (select player_name from #test_join_temp )
group by type1
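For completeness, the temp-table idea from Solution #2 translates to MySQL with CREATE TEMPORARY TABLE; a sketch (untested, and assuming the original test_join table from the question):
CREATE TEMPORARY TABLE test_join_temp AS
SELECT `player_name`, `type`, SUM(`score`) AS score
FROM `test_join`
GROUP BY `player_name`, `type`
HAVING COUNT(`player_name`) > 1;

SELECT `type`, COUNT(*), SUM(`score`)
FROM `test_join`
WHERE `schedule_id` = 1
  AND `player_name` IN (SELECT `player_name` FROM test_join_temp)
GROUP BY `type`;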
I hope this helps :)
I have two tables. The first is "users_counts":
id int(11) AUTO_INCREMENT
name varchar(250)
And the second table is "counts_data":
id int(11) AUTO_INCREMENT
id_user int(11)
count int(11)
date datetime
I want to select all records from the first table, get some data from the second, and then merge them. For one request I want to create a temporary column that holds the latest count (ordered by date) from the second table, and a second column that holds the second-to-last count (ordered by date) from the second table.
INSERT INTO `users_counts` (`id`,`name`) VALUES ('1','John');
INSERT INTO `users_counts` (`id`,`name`) VALUES ('2','Michael');
INSERT INTO `users_counts` (`id`,`name`) VALUES ('3','Den');
INSERT INTO `counts_data` (`id`,`id_user`, `count`, `date`) VALUES ('1','1', '200', '2012.09.09');
INSERT INTO `counts_data` (`id`,`id_user`, `count`, `date`) VALUES ('2','1', '212', '2012.09.01');
INSERT INTO `counts_data` (`id`,`id_user`, `count`, `date`) VALUES ('3','2', '20', '2012.01.09');
INSERT INTO `counts_data` (`id`,`id_user`, `count`, `date`) VALUES ('4','3', '210', '2012.02.09');
INSERT INTO `counts_data` (`id`,`id_user`, `count`, `date`) VALUES ('5','3', '2033', '2012.03.09');
INSERT INTO `counts_data` (`id`,`id_user`, `count`, `date`) VALUES ('6','3', '1', '2012.04.09');
In the end, after the request, I want to get something like this:
id name count count_before
1 John 200 212
2 Michael 20 0
3 Den 1 2033
Thanks.
Another possible way to do this:
select uc.id,
uc.name,
(select count
from counts_data cd
where cd.id_user = uc.id
order by date desc limit 1) as count,
ifnull((select count
from counts_data cd
where cd.id_user = uc.id
order by date desc limit 1 offset 1),0) as count_before
from users_counts uc;
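Both correlated subqueries filter on id_user and sort by date, so a composite index on counts_data should help them, assuming one does not already exist (the index name is arbitrary):
CREATE INDEX idx_counts_data_user_date ON counts_data (id_user, date);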
Since you only need one value from counts_data for each row/record, you can use inline subqueries in MySQL:
select uc.id
, uc.name
, cd1.count
, cd3.count as count_before
from users_counts uc
left join
counts_data cd1
on cd1.id_user = uc.id
and cd1.date =
(
select max(date)
from counts_data cd2
where cd2.id_user = uc.id
)
left join
counts_data cd3
on cd3.id_user = uc.id
and cd3.date =
(
select max(date)
from counts_data cd4
where cd4.id_user = uc.id
and cd4.date <> cd1.date
)
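If count_before should be 0 rather than NULL when a user has only one row in counts_data (as in the expected output for Michael), wrapping the joined column in IFNULL is enough; only the select list changes:
select uc.id
, uc.name
, cd1.count
, ifnull(cd3.count, 0) as count_before
from users_counts uc
left join counts_data cd1
  on cd1.id_user = uc.id
 and cd1.date = (select max(date) from counts_data cd2 where cd2.id_user = uc.id)
left join counts_data cd3
  on cd3.id_user = uc.id
 and cd3.date = (select max(date) from counts_data cd4
                 where cd4.id_user = uc.id and cd4.date <> cd1.date)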
I want to read all the data from one table and insert some of it into another table. My query is:
INSERT INTO mt_magazine_subscription (
magazine_subscription_id,
subscription_name,
magazine_id,
status )
VALUES (
(SELECT magazine_subscription_id,
subscription_name,
magazine_id
FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ASC), '1')
But I got this error:
#1136 - Column count doesn't match value count at row 1
Please help me.
You can use the INSERT ... SELECT syntax. Note that you can put the literal '1' directly in the SELECT part.
INSERT INTO mt_magazine_subscription (
magazine_subscription_id,
subscription_name,
magazine_id,
status )
SELECT magazine_subscription_id,
subscription_name,
magazine_id,
'1'
FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ASC
If you want to insert all of the data from one table into another, there is a very simple SQL statement:
INSERT INTO destinationTable (SELECT * FROM sourceDbName.SourceTableName);
It won't work like this.
When you insert rows using a query, the number of selected values must match the number of target columns.
With the statement above you want to insert four columns:
magazine_subscription_id, subscription_name, magazine_id, status
but the SELECT inside VALUES only returns magazine_subscription_id, subscription_name and magazine_id, with '1' supplied separately, so the counts do not match.
To insert, you need to use either an INSERT ... SELECT query or direct values.
Actually, the MySQL query to copy data from one table to another is:
INSERT INTO table2_name (column_names) SELECT column_names FROM table1
where the values are copied from table1 to table2.
If there is a primary key like "id", you have to exclude it. For example, my php table has the columns id, col2, col3, col4, where id is the primary key, so if I run this code:
INSERT INTO php (SELECT * FROM php2);
I probably get this error:
#1062 - Duplicate entry '1' for key 'PRIMARY'
So here is the solution; I excluded the "id" key:
INSERT INTO php ( col2,col3,col4) (SELECT col2,col3,col4 FROM php2);
So my new php table now has all of the php2 table's rows.
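If the id values should be kept and rows that already exist simply skipped, INSERT IGNORE is another option (shown with the same example tables; it relies on the primary key to detect duplicates):
INSERT IGNORE INTO php (id, col2, col3, col4)
SELECT id, col2, col3, col4 FROM php2;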
INSERT INTO mt_magazine_subscription (
magazine_subscription_id,
subscription_name,
magazine_id,
status )
SELECT magazine_subscription_id,
subscription_name,
magazine_id, '1' as status
FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ASC;
This inserts data from one table into another with a condition in MySQL, and the same works in SQL Server as well. Only non-existing data will get inserted. Both tables have the same structure, so the column list does not need to be passed.
insert into table_A
select * from table_A_copy
where not exists
(
select * from table_A where table_A_copy.clm_a=table_A.clm_a and table_A_copy.clm_b=table_A.clm_b and table_A_copy.clm_c=table_A.clm_c
);
Try this. You're doing it the wrong way.
INSERT INTO mt_magazine_subscription(
magazine_subscription_id,
subscription_name,
magazine_id, status)
SELECT magazine_subscription_id, subscription_name,
magazine_id, 1 as status FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ASC
INSERT INTO mt_magazine_subscription (
magazine_subscription_id,
subscription_name,
magazine_id,
status )
SELECT magazine_subscription_id,
subscription_name,
magazine_id, '1' as status
FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ASC
Try using this:
INSERT INTO mt_magazine_subscription (
magazine_subscription_id,
subscription_name,
magazine_id,
status )
SELECT magazine_subscription_id,
subscription_name,
magazine_id,
'1'
FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ;
Use the hard-coded value in the SELECT clause:
INSERT INTO destination_table (
Field_1,
Field_2,
Field_3)
SELECT Field_1,
Field_2,
Field_3
FROM source_table;
BUT this is bad MySQL.
Do this instead:
Drop the destination table: DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
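Keep in mind that CREATE TABLE ... AS SELECT does not copy indexes or the primary key definition; if those matter, CREATE TABLE ... LIKE followed by INSERT ... SELECT preserves the full table definition:
CREATE TABLE DESTINATION_TABLE LIKE SOURCE_TABLE;
INSERT INTO DESTINATION_TABLE SELECT * FROM SOURCE_TABLE;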
INSERT INTO mt_magazine_subscription (magazine_subscription_id, subscription_name, magazine_id, status)
SELECT magazine_subscription_id, subscription_name, magazine_id, '1'
FROM tbl_magazine_subscription
ORDER BY magazine_subscription_id ASC