I've seen lots of questions with similar titles, but they are different from what I am looking for.
I have a table with some data in it already, including an id column. I want the IDs of the rows already in the table, a couple of thousand of them, to remain the same. I want to INSERT different data from another table, which also includes IDs (just over 92,000 rows in this set), and I need these IDs to change to some other non-existing IDs so as not to overwrite or displace the existing data.
Is this possible? Is there some kind of increment I can do upon INSERT?
I tried an INSERT IGNORE statement but it displaced the data already there.
Any advice on what to try?
If id in your first table is an auto_increment column, then you can just omit it:
INSERT INTO table1 (column2, column3, column4, ...) -- id is omitted
SELECT column2, column3, column4, ... -- id is also omitted
FROM table2
If that's not the case, and assuming it's a one-time operation, you can try to assign new ids in the following way:
INSERT INTO table1 (id, column2, column3, column4, ...)
SELECT t.max_id + s.rn, s.column2, s.column3, s.column4, ...
FROM
(
SELECT @n := @n + 1 rn, column2, column3, column4, ...
FROM table2 CROSS JOIN (SELECT @n := 0) i
ORDER BY id
) s CROSS JOIN
(
SELECT MAX(id) max_id
FROM table1
) t
Here is a SQLFiddle demo.
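On MySQL 8.0 and later, a window function can replace the counter variable. A minimal sketch, keeping the placeholder column names from the example above:

-- Capture the current maximum id once, then offset each incoming row by it.
SET @offset = (SELECT COALESCE(MAX(id), 0) FROM table1);

INSERT INTO table1 (id, column2, column3, column4)
SELECT @offset + ROW_NUMBER() OVER (ORDER BY id),
       column2, column3, column4
FROM table2;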
To start with an AUTO_INCREMENT value other than 1, set that value with CREATE TABLE or ALTER TABLE, like this:
mysql> ALTER TABLE tbl AUTO_INCREMENT = 100;
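Applied to the original question, a rough sketch might look like this (assuming id is AUTO_INCREMENT in table1 and that 100000 is safely above the existing IDs; both are assumptions, not values from the question):

-- Move table1's next auto-increment value past the existing rows, then insert
-- without specifying id so each new row gets a freshly generated value.
ALTER TABLE table1 AUTO_INCREMENT = 100000;

INSERT INTO table1 (column2, column3, column4)
SELECT column2, column3, column4
FROM table2;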
I tried to insert a complete row into a table. But the problem is that I would be using the same IDs. Is there an easy way to insert a complete row except for one value (the ID)?
INSERT INTO new_table SELECT * FROM old_table WHERE q_value = 12345
I would rather not list every single value, because there are hundreds of columns.
Thanks for your help in advance,
Yab86
Maybe something like this. You can do it without selecting Column2:
Insert Into new_table (Column1,Column3,Column4 ...)
Select Column1,Column3,Column4 ... From old_table
WHERE q_value = 12345
If ID is part of a unique key, the only (good) way is to write out all the other columns, like this:
insert into new_table (col1, col2, col3)
select col1, col2, col3
from old_table
where q_value = 12345;
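Since the question mentions hundreds of columns, one way to avoid typing the list by hand (assuming MySQL; 'your_db' is a placeholder schema name) is to let information_schema build it for you:

-- Produces a comma-separated list of every column in old_table except ID,
-- which you can paste into the INSERT and SELECT column lists above.
-- Note: raise group_concat_max_len first if the table is very wide.
SELECT GROUP_CONCAT(column_name)
FROM information_schema.columns
WHERE table_schema = 'your_db'
  AND table_name   = 'old_table'
  AND column_name <> 'ID';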
If ID isn't part of the unique key, there might be some ways to do it in two queries (easier to write, but perhaps not better):
insert into new_table
select *
from old_table
where q_value = 12345;
update new_table
set ID = null
where q_value = 12345;
If the first column in new_table is your auto_increment primary id, you could use this. You cannot use the asterisk, though; you have to list all columns except the first one and insert 0 in its place. The 0 does the trick for the auto_increment column, meaning the next auto-increment value is generated for it (unless the NO_AUTO_VALUE_ON_ZERO SQL mode is enabled):
INSERT INTO new_table
SELECT 0, column2, column3, ...
FROM old_table WHERE q_value = 12345
I am trying to fetch all rows from a particular table where a given value is found in any column of that table.
You can just use IN, e.g.
SELECT *
FROM tbName
WHERE yourValue IN (column1, column2, column3, ....)
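For example, with a purely hypothetical table and value just to make the pattern concrete:

-- Returns every row of tickets where 'urgent' appears in any of the three note columns.
SELECT *
FROM tickets
WHERE 'urgent' IN (note_1, note_2, note_3);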
You can probably use EXISTS to do the job. The query below will get the store names only if the city with id 2 has a name.
SELECT store_name FROM stores WHERE EXISTS (SELECT name FROM cities WHERE id = 2);
I'm trying to update or insert data depending on whether the account number is already in the existing data.
First I added the new variables to the records with an account number already in the table, using this:
drop table #test1
select a.*, B.Delq_Sep12, b.Bal_Sep12, b.Queue_Sep12
into #test1
from pcd1 a
left join #pcd_sep12 b on (a.ACCOUNT_NUMBER = B.account_number)
Then I add all the records from #pcd_sep12 whose account number is not in #test1 (created above):
INSERT #test1
SELECT * FROM #pcd_sep12 WHERE account_number NOT IN(SELECT account_number FROM #test1)
I get the error "Column name or number of supplied values does not match table definition."
I realise it's because the tables don't have the same number of fields, but is there a way around this?
Why not use the MERGE (aka "upsert") statement?
MERGE INTO pcd1 M
USING (SELECT * FROM #pcd_sep12) src ON M.account_number = src.account_number
WHEN MATCHED THEN
-- UPDATE stuff
WHEN NOT MATCHED BY TARGET THEN
-- INSERT stuff;
This way you don't need a temp table or any existence tests; those wouldn't be concurrency-safe under load anyway.
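A fuller sketch of that MERGE, assuming pcd1 carries (or has been given) the account_number, Delq_Sep12, Bal_Sep12 and Queue_Sep12 columns mentioned in the question:

MERGE INTO pcd1 AS M
USING #pcd_sep12 AS src
    ON M.account_number = src.account_number
WHEN MATCHED THEN
    UPDATE SET M.Delq_Sep12  = src.Delq_Sep12,
               M.Bal_Sep12   = src.Bal_Sep12,
               M.Queue_Sep12 = src.Queue_Sep12
WHEN NOT MATCHED BY TARGET THEN
    INSERT (account_number, Delq_Sep12, Bal_Sep12, Queue_Sep12)
    VALUES (src.account_number, src.Delq_Sep12, src.Bal_Sep12, src.Queue_Sep12);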
You have to specify the columns, like this:
INSERT INTO #test1 (column1, column2, column3)
SELECT column1, column2, column3
FROM #pcd_sep12
WHERE account_number NOT IN (SELECT account_number FROM #test1)
If I am not mistaken, your #test1 table also has columns from the pcd1 table, but in your second query you are only selecting from #pcd_sep12, so the column counts don't match.
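One way around the mismatch, then, is to list #test1's columns explicitly and pad the pcd1-only ones with NULL. A sketch; pcd1_col1 and pcd1_col2 are placeholders for whatever extra columns pcd1 contributed:

-- account_number exists in both tables; the pcd1-only columns are filled with NULL.
INSERT INTO #test1 (account_number, pcd1_col1, pcd1_col2, Delq_Sep12, Bal_Sep12, Queue_Sep12)
SELECT s.account_number, NULL, NULL, s.Delq_Sep12, s.Bal_Sep12, s.Queue_Sep12
FROM #pcd_sep12 s
WHERE s.account_number NOT IN (SELECT account_number FROM #test1);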
I have a table that contains some duplicate records. I want to make the records unique. I created a new table (say, destination) and specified a unique column in it. How can I copy records from table1 (the source) such that, if a record has already been inserted into the destination table, it is not inserted again?
You can use the INSERT ... SELECT construct and insert only distinct rows, like this:
insert into table_without_dupes (column0, column1) select distinct column0, column1 from table_with_dupes
If you have an autoincrement column or other columns that make the rows distinct, you can just leave them out of the insert and select parts of the statement.
Edit:
If you want to detect duplicates by a single column, you can use group by:
insert into table_without_dupes (column0, column1) select column0, column1 from table_with_dupes group by column0
MySQL will allow you to refer to non-aggregated columns in the SELECT, but remember that the documentation says "The server is free to choose any value from each group". If you want one specific row from each group instead, one pattern is sketched below.
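For instance, assuming table_with_dupes also has an id column, a sketch that keeps the row with the smallest id for each column0 value instead of an arbitrary one:

-- For each column0 value, join back to the row whose id is the smallest in that group.
INSERT INTO table_without_dupes (column0, column1)
SELECT t.column0, t.column1
FROM table_with_dupes t
JOIN (
    SELECT column0, MIN(id) AS min_id
    FROM table_with_dupes
    GROUP BY column0
) pick ON pick.min_id = t.id;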
Generic approach
insert into destination(col1,col2)
select DISTINCT col1,col2 from source as s where not exists
(select * from destination as d where d.col1=s.col1)
Edited
insert into destination(col1,col2)
SELECT distinct col1,col2 from source
Edited (Assuming col3 is duplicated and you want only one copy of it.)
insert into destination(col1,col2,col3,....,colN)
SELECT s1.col1, s1.col2, s1.col3, ..., s1.colN from source as s1 inner join
(
select col1,col2,max(col3) as col3
from source
group by col1,col2
) as s2 on s1.col1=s2.col1 and s1.col2=s2.col2 and s1.col3=s2.col3
insert into <dest_table_name>
select distinct * from <source_table_name>;
I have a trigger on a table; it's basically this:
ALTER TRIGGER xx
FOR UPDATE,DELETE,INSERT
AS
DELETE FROM other WHERE id in (SELECT id from deleted)
DELETE FROM other WHERE id in (SELECT id from inserted)
INSERT INTO other() VALUES() WHERE id in (SELECT id from inserted)
GO
It runs extremely slowly when it does the insert (20 seconds); the deletes are fast. Playing around, I tried doing this instead:
ALTER TRIGGER xx
FOR UPDATE,DELETE,INSERT
AS
DECLARE #tinserted TABLE ( id int)
INSERT INTO @tinserted select id from inserted;
DELETE FROM other WHERE id in (SELECT id from deleted)
DELETE FROM other WHERE id in (SELECT id from inserted)
INSERT INTO other() VALUES() WHERE id in (SELECT id from @tinserted)
GO
By using a table variable it now runs instantly (under 1 second).
I'm not sure why though. Is there any reason why changing to a table variable would make such a difference?
Not sure why you'd need the WHERE clause at all for the INSERT operation.
INSERT INTO other(column1, column2, ...)
SELECT column1, column2, ...
FROM inserted;
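Putting it together, a hedged sketch of the whole trigger along those lines; the trigger's table name and other's column list aren't given in the question, so some_table, column1 and column2 are placeholders:

ALTER TRIGGER xx ON some_table   -- some_table is a placeholder; the question omits the table name
FOR UPDATE, DELETE, INSERT
AS
BEGIN
    -- Remove rows for both old and new ids, then re-insert straight from inserted.
    DELETE FROM other WHERE id IN (SELECT id FROM deleted);
    DELETE FROM other WHERE id IN (SELECT id FROM inserted);

    INSERT INTO other (id, column1, column2)   -- column1/column2 stand in for other's real columns
    SELECT id, column1, column2
    FROM inserted;
END
GO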