I would like to write an INSERT INTO query in which one of the fields I insert is a calculated variable 'mycount', holding the ordinal number of the inserted row within the query.
For example:
If I insert 3 rows, I'd like this var to be '1' for the first row inserted, '2' for the second, and so forth.
SET @mycount=0;
INSERT INTO my_table
(@mycount,field1,field2,field3...)
SET @mycount=@mycount+1;
SELECT @mycount,field1,field2,field3..
FROM my_table
WHERE id IN (id1,id2,id3..);
This code returns an error.
How can I declare a variable inside an INSERT INTO query and have it incremented with every row inserted?
IMPORTANT - I do not need an AUTO_INCREMENT column; this counter is needed only in this specific INSERT INTO query, and it is only one part of a larger calculation.
What I need is really a calculation of (number_of_inserted_row+some_other_calculation) but I just simplified it for the sake of the question.
Well, usually an auto_increment column is used for this. If you don't want to use one for whatever reason, you can do it like this:
INSERT INTO my_table
(your_quasi_auto_increment_column, field1, field2, field3...)
SELECT (@mycount := @mycount + 1) + <other_calculation>, field1, field2, field3..
FROM my_table
, (SELECT @mycount := 0) var_init_query_alias
WHERE id IN (id1,id2,id3..);
You can also do this by adding a new field (for example count_number) to the table and defining it as AUTO_INCREMENT; then you do not need to set it in this query at all.
SET @test=1;
INSERT INTO test (`test`,`test2`,`test3`)
SELECT (@test := @test +1) AS `test`,`test2`,`test3`
FROM test
If you want to add a new field, then check this code:
SET @count_value=0;
INSERT INTO test (`count_value`,`test`,`test2`,`test3`)
SELECT (@count_value := @count_value +1) AS `count_value`,`test`,`test2`,`test3`
FROM test
In my_table1, add a count_number field of type INT with AUTO_INCREMENT, and then it will work.
How to assign unique auto incrementing values to a certain column? Kind of like AUTO_INCREMENT does but it should be NULL at the time of insertion and assigned at some later point.
I have a table that gets regular data inserts, and a few workers that process that data and set a processed_at datetime field when they're done. Now I want to incrementally select newly processed rows since the last call. If I naively use WHERE processed_at > @last_update_time, I'm afraid there might be a situation where some records are processed at the same second and I miss some rows.
Update: can I just do
begin;
select @max := max(foo) from table1;
update table1 set foo = @max + 1 where id = 'bar' limit 1;
commit;
if the foo column is indexed?
You can use a trigger to implement that.
CREATE TABLE my_increment (value INT, table_name TEXT);
INSERT INTO my_increment VALUES (0, 'your_table_name');
DELIMITER $$
CREATE TRIGGER pk BEFORE UPDATE ON your_table_name
FOR EACH ROW
BEGIN
UPDATE my_increment
SET value = value + 1
WHERE table_name = 'your_table_name';
SET NEW.ID2 = (
SELECT value
FROM my_increment
WHERE table_name = 'your_table_name');
END$$
DELIMITER ;
But bear in mind that this trigger will work on every execution of the Update query.
You can also do it manually:
Create a table to store the increment value:
CREATE TABLE my_increment (value INT, table_name TEXT);
INSERT INTO my_increment VALUES (0, 'your_table_name');
Then, whenever you want to update the table, read the last value from this table, write value + 1 into the column that needs to be incremented, and store the new value back.
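A minimal sketch of that manual flow, using the my_increment table above and the your_table_name/ID2 columns from the trigger example (the row id 123 and the @next user variable are just placeholders for illustration):
-- Read the current counter value into a user variable.
SELECT value INTO @next FROM my_increment WHERE table_name = 'your_table_name';
-- Write value + 1 into the row being processed (123 is a hypothetical id).
UPDATE your_table_name SET ID2 = @next + 1 WHERE id = 123;
-- Store the new value back so the next caller gets value + 2.
UPDATE my_increment SET value = @next + 1 WHERE table_name = 'your_table_name';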
There is a table with three columns: id, field1, field2.
And there is a row: id=1, field1=1, field2=1.
Run an UPDATE statement: UPDATE my_table SET field1=field2+1, field2=field1+1 WHERE id=1;
I expected the result to be: id=1, field1=2, field2=2. But in fact I got: id=1, field1=2, field2=3, because by the time field2=field1+1 is evaluated, the value of field1 has already changed!
I figured out a query that solves this problem:
UPDATE my_table dest, (SELECT * FROM my_table) src
SET dest.field1=src.field2+1, dest.field2=src.field1+1
WHERE dest.id=1;
However, I want to insert a record, and if the row already exists, then do an update just like the one above.
INSERT INTO my_table (id, field1, field2) VALUES(1, 1, 1)
ON DUPLICATE KEY UPDATE
field1=field2+1, field2=field1+1;
This statement has the same problem as the first one. So how can I make the update use the values from before the change, with the ON DUPLICATE KEY UPDATE clause?
Thanks for any help!
Couldn't think of anything else but a temp variable. However, couldn't think of a way to make SQL syntax work, other than this:
set @temp = 0;
update test.test set
f1 = (@temp:=f1),
f1 = f2 + 1,
f2 = @temp + 1
where id = 1;
Hope this helps, and hope even more it helps you find a better way :)
I found a tricky way to do this: use an IF() expression to assign temp variables, then use those temp variables in the field calculations.
INSERT INTO my_table (id, f1, f2) VALUES(1, 1, 1)
ON DUPLICATE KEY UPDATE
id=IF((@t1:=f1 & @t2:=f2), 1, 1), f1=@t2+1, f2=@t1+1;
There are some points to notice:
The performance is a bit slower, especially when copying a TEXT value into a temp variable.
If the id field itself needs an IF condition, the expression becomes more complicated, like:
((@t1:=f1 & @t2:=f2) || TRUE) AND (Your Condition)
I have this statement:
INSERT INTO qa_costpriceslog (item_code, invoice_code, item_costprice)
VALUES (1, 2, (SELECT item_costprice FROM qa_items WHERE item_code = 1));
I'm trying to insert a row that copies the value of item_costprice, but it shows me the error:
Error Code: 1136. Column count doesn't match value count at row 1
How can I solve this?
Use numeric literals with aliases inside a SELECT statement. No () are necessary around the SELECT component.
INSERT INTO qa_costpriceslog (item_code, invoice_code, item_costprice)
SELECT
/* Literal number values with column aliases */
1 AS item_code,
2 AS invoice_code,
item_costprice
FROM qa_items
WHERE item_code = 1;
Note that in context of an INSERT INTO...SELECT, the aliases are not actually necessary and you can just SELECT 1, 2, item_costprice, but in a normal SELECT you'll need the aliases to access the columns returned.
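For example, the shorter alias-free form of the same insert:
INSERT INTO qa_costpriceslog (item_code, invoice_code, item_costprice)
SELECT 1, 2, item_costprice
FROM qa_items
WHERE item_code = 1;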
You can also simply do, e.g.:
INSERT INTO modulesToSections (fk_moduleId, fk_sectionId, `order`) VALUES
((SELECT id FROM modules WHERE title="Top bar"),0,-100);
I was disappointed at the "all or nothing" answers. I needed (again) to INSERT some data and SELECT an id from an existing table.
INSERT INTO table1 (id_table2, name) VALUES ((SELECT id FROM table2 LIMIT 1), 'Example');
The sub-select in an INSERT query needs its own parentheses in addition to the commas as delimiters.
For those having trouble with using a SELECT within an INSERT I recommend testing your SELECT independently first and ensuring that the correct number of columns match for both queries.
Your insert statement contains too many columns on the left-hand side or not enough columns on the right hand side. The part before the VALUES has 7 columns listed, but the second part after VALUES only has 3 columns returned: 1, 2, then the sub-query only returns 1 column.
EDIT: Well, it did before someone modified the query....
As a sidenote to the good answer of Michael Berkowski:
You can also dynamically add fields (or have them prepared if you're working with PHP scripts) like so:
INSERT INTO table_a(col1, col2, col3)
SELECT
col1,
col2,
CURRENT_TIMESTAMP()
FROM table_B b
WHERE b.col1 = 'some_value';
If you need to transfer without adding new data, you can use NULL as a placeholder.
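For instance, a sketch of the same kind of transfer where col2 has no source data yet ('some_value' is still just a hypothetical filter):
INSERT INTO table_a(col1, col2, col3)
SELECT
col1,
NULL, -- placeholder: nothing to transfer into col2 yet
CURRENT_TIMESTAMP()
FROM table_B b
WHERE b.col1 = 'some_value';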
If you have multiple string values you want to add, you can put them into a temporary table and then cross join it with the value you want.
-- Create temp table
CREATE TEMPORARY TABLE NewStrings (
NewString VARCHAR(50)
);
-- Populate temp table
INSERT INTO NewStrings (NewString) VALUES ('Hello'), ('World'), ('Hi');
-- Insert desired rows into permanent table
INSERT INTO PermanentTable (OtherID, NewString)
WITH OtherSelect AS (
SELECT OtherID AS OtherID FROM OtherTable WHERE OtherName = 'Other Name'
)
SELECT os.OtherID, ns.NewString
FROM OtherSelect os, NewStrings ns;
This way, you only have to define the strings in one place, and you only have to do the query in one place. If you used subqueries like I initially did and like Elendurwen and John suggest, you have to type the subquery into every row. But using temporary tables and a CTE in this way, you can write the query only once.
In MySQL I am trying to copy a row with an autoincrement column ID=1 and insert the data into same table as a new row with column ID=2.
How can I do this in a single query?
Use INSERT ... SELECT:
insert into your_table (c1, c2, ...)
select c1, c2, ...
from your_table
where id = 1
where c1, c2, ... are all the columns except id. If you want to explicitly insert with an id of 2 then include that in your INSERT column list and your SELECT:
insert into your_table (id, c1, c2, ...)
select 2, c1, c2, ...
from your_table
where id = 1
You'll have to take care of a possible duplicate id of 2 in the second case of course.
IMO, the best approach is to use plain SQL statements to copy the row, while only referencing the columns you actually need or want to change.
CREATE TEMPORARY TABLE temp_table ENGINE=MEMORY
SELECT * FROM your_table WHERE id=1;
UPDATE temp_table SET id=0; /* Update other values at will. */
INSERT INTO your_table SELECT * FROM temp_table;
DROP TABLE temp_table;
See also av8n.com - How to Clone an SQL Record
Benefits:
The SQL statements mention only the fields that need to be changed during the cloning process. They do not know about – or care about – other fields. The other fields just go along for the ride, unchanged. This makes the SQL statements easier to write, easier to read, easier to maintain, and more extensible.
Only ordinary MySQL statements are used. No other tools or programming languages are required.
A fully-correct record is inserted in your_table in one atomic operation.
Say the table is user(id, user_name, user_email).
You can use this query:
INSERT INTO user (SELECT NULL,user_name, user_email FROM user WHERE id = 1)
This helped me, and it supports BLOB/TEXT columns.
CREATE TEMPORARY TABLE temp_table
AS
SELECT * FROM source_table WHERE id=2;
UPDATE temp_table SET id=NULL WHERE id=2;
INSERT INTO source_table SELECT * FROM temp_table;
DROP TEMPORARY TABLE temp_table;
For a quick, clean solution that doesn't require you to name columns, you can use a prepared statement as described here:
https://stackoverflow.com/a/23964285/292677
If you need a complex solution so you can do this often, you can use this procedure:
DELIMITER $$
CREATE PROCEDURE `duplicateRows`(_schemaName text, _tableName text, _whereClause text, _omitColumns text)
SQL SECURITY INVOKER
BEGIN
SELECT IF(TRIM(_omitColumns) <> '', CONCAT('id', ',', TRIM(_omitColumns)), 'id') INTO @omitColumns;
SELECT GROUP_CONCAT(COLUMN_NAME) FROM information_schema.columns
WHERE table_schema = _schemaName AND table_name = _tableName AND FIND_IN_SET(COLUMN_NAME,@omitColumns) = 0 ORDER BY ORDINAL_POSITION INTO @columns;
SET @sql = CONCAT('INSERT INTO ', _tableName, ' (', @columns, ')',
' SELECT ', @columns,
' FROM ', _schemaName, '.', _tableName, ' ', _whereClause);
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
END$$
DELIMITER ;
You can run it with:
CALL duplicateRows('database', 'table', 'WHERE condition = optional', 'omit_columns_optional');
Examples
(MySQL stored procedures have no optional parameters, so pass '' for the arguments you do not need.)
duplicateRows('acl', 'users', 'WHERE id = 200', ''); -- will duplicate the row for the user with id 200
duplicateRows('acl', 'users', 'WHERE id = 200', 'created_ts'); -- same as above but will not copy the created_ts column value
duplicateRows('acl', 'users', 'WHERE id = 200', 'created_ts,updated_ts'); -- same as above but also omits the updated_ts column
duplicateRows('acl', 'users', '', ''); -- will duplicate all records in the table
DISCLAIMER: This solution is only for someone who will be repeatedly duplicating rows in many tables, often. It could be dangerous in the hands of a rogue user.
If you're able to use MySQL Workbench, you can do this by right-clicking the row and selecting 'Copy row', and then right-clicking the empty row and selecting 'Paste row', and then changing the ID, and then clicking 'Apply'.
(Screenshots: copy the row; paste the copied row into the blank row; change the ID; click Apply.)
insert into MyTable(field1, field2, id_backup)
select field1, field2, uniqueId from MyTable where uniqueId = @Id;
A lot of great answers here. Below is a sample of the stored procedure that I wrote to accomplish this task for a Web App that I am developing:
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON
-- Create Temporary Table
SELECT * INTO #tempTable FROM <YourTable> WHERE Id = @Id
--To trigger the auto increment
UPDATE #tempTable SET Id = NULL
--Update new data row in #tempTable here!
--Insert duplicate row with modified data back into your table
INSERT INTO <YourTable> SELECT * FROM #tempTable
-- Drop Temporary Table
DROP TABLE #tempTable
You can also pass in '0' as the value for the column to auto-increment; the correct value will be used when the record is created. This is so much easier than temporary tables.
Source:
Copying rows in MySQL
(see the second comment, by TRiG, to the first solution, by Lore)
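For example, a sketch of that approach against the your_table/id example from earlier (this relies on the default SQL mode; if NO_AUTO_VALUE_ON_ZERO is enabled, the literal 0 would be stored instead):
insert into your_table (id, c1, c2)
select 0, c1, c2 -- 0 lets auto_increment assign the next id
from your_table
where id = 1;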
I tend to use a variation of what mu is too short posted:
INSERT INTO something_log
SELECT NULL, s.*
FROM something AS s
WHERE s.id = 1;
As long as the tables have identical fields (excepting the auto increment on the log table), then this works nicely.
Since I use stored procedures whenever possible (to make life easier on other programmers who aren't too familiar with databases), this solves the problem of having to go back and update procedures every time you add a new field to a table.
It also ensures that if you add new fields to a table they will start appearing in the log table immediately without having to update your database queries (unless of course you have some that set a field explicitly)
Warning: You will want to make sure to add any new fields to both tables at the same time so that the field order stays the same... otherwise you will start getting odd bugs. If you are the only one that writes database interfaces AND you are very careful then this works nicely. Otherwise, stick to naming all of your fields.
Note: On second thought, unless you are working on a solo project that you are sure won't have others working on it stick to listing all field names explicitly and update your log statements as your schema changes. This shortcut probably is not worth the long term headache it can cause... especially on a production system.
INSERT INTO `dbMyDataBase`.`tblMyTable`
(
`IdAutoincrement`,
`Column2`,
`Column3`,
`Column4`
)
SELECT
NULL,
`Column2`,
`Column3`,
'CustomValue' AS Column4
FROM `dbMyDataBase`.`tblMyTable`
WHERE `tblMyTable`.`Column2` = 'UniqueValueOfTheKey'
;
/* mySQL 5.6 */
Try this:
INSERT INTO test_table (SELECT null,txt FROM test_table)
Every time you run this query, it will insert all the rows again with new ids, so the number of rows in your table will grow exponentially.
I used a table with two columns, id and txt, where id is AUTO_INCREMENT.
I was looking for the same feature, but I don't use MySQL. I wanted to copy ALL the fields except, of course, the primary key (id). This was a one-shot query, not to be used in any script or code.
I found my way around with PL/SQL but I'm sure any other SQL IDE would do. I did a basic
SELECT *
FROM mytable
WHERE id=42;
Then export it to a SQL file where I could find the
INSERT INTO table (col1, col2, col3, ... , col42)
VALUES (1, 2, 3, ..., 42);
I just edited it and used it:
INSERT INTO table (col1, col2, col3, ... , col42)
VALUES (mysequence.nextval, 2, 3, ..., 42);
insert into your_table(col1,col2,col3) select col1+1,col2,col3 from your_table where col1=1;
Note: make sure that, after the increment, the new value of col1 is not a duplicate entry if col1 is a primary key.
CREATE TEMPORARY TABLE IF NOT EXISTS `temp_table` LIKE source_table;
INSERT INTO temp_table SELECT * FROM source_table where columnid = 2;
ALTER TABLE temp_table MODIFY id INT NOT NULL;
ALTER TABLE temp_table DROP PRIMARY KEY;
UPDATE temp_table SET id=NULL ;
INSERT INTO source_table SELECT * FROM temp_table;
DROP TEMPORARY TABLE IF EXISTS temp_table ;
Dump the row you want to sql and then use the generated SQL, less the ID column to import it back in.
I want to do all these updates in one statement.
update table set ts=ts_1 where id=1
update table set ts=ts_2 where id=2
...
update table set ts=ts_n where id=n
Is it possible?
Use this:
UPDATE `table` SET `ts`=CONCAT('ts_', `id`);
Yes you can, but that would require a table (even if only a virtual/temporary one) where you'd store the id + ts value pairs, and then run a multi-table UPDATE joining against it.
Assuming tmpList is a table with an id and a ts_value column, filled with the id/ts pairs you wish to apply:
UPDATE table, tmpList
SET table.ts = tmpList.ts_value
WHERE table.id = tmpList.id
-- AND table.id IN (1, 2, 3, .. n)
-- the above AND is only needed if you wish to limit it, i.e.
-- if tmpList has more ids than you wish to update
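For completeness, a sketch of building that tmpList as a temporary table (the VARCHAR(50) type and the sample values are just assumptions for illustration):
CREATE TEMPORARY TABLE tmpList (id INT, ts_value VARCHAR(50));
INSERT INTO tmpList (id, ts_value) VALUES
(1, 'ts_1'),
(2, 'ts_2'),
(3, 'ts_3');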
A possibly table-less (but similar) approach would involve a CASE statement, as in:
UPDATE table
SET ts = CASE id
WHEN 1 THEN 'ts_1'
WHEN 2 THEN 'ts_2'
-- ..
WHEN n THEN 'ts_n'
END
WHERE id in (1, 2, ... n) -- here this is necessary I believe
Well, without knowing what the data is, I'm not sure whether the answer is yes or no.
It certainly is possible to update multiple rows at once:
update table1 set field1='value' where field2='bar'
This will update every row in table1 whose field2 value is 'bar'.
update table1 set field1='value' where field2 in (1, 2, 3, 4)
This will update every row in the table whose field2 value is 1, 2, 3 or 4.
update table1 set field1='value' where field2 > 5
This will update every row in the table whose field2 value is greater than 5.
update table1 set field1=concat('value', id)
This will update every row in the table, setting the field1 value to 'value' plus the value of that row's id field.
You could do it with a case statement, but it wouldn't be pretty:
UPDATE table
SET ts = CASE id WHEN 1 THEN ts_1 WHEN 2 THEN ts_2 ... WHEN n THEN ts_n END
I think that you should expand the context of the problem. Why do you want/need all the updates to be done in one statement? What benefit does that give you? Perhaps there's another way to get that benefit.
Presumably you are interacting with SQL via some code, so you can certainly make sure that the three updates all happen atomically by creating a function that performs all three of the updates.
e.g. pseudocode:
function update_all_three(val){
// all the updates in one function
}
The difference between a single function update and some kind of update that performs multiple updates at once is probably not a very useful distinction.
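If atomicity is the actual goal, here is a sketch of getting it on the SQL side instead, by wrapping the individual updates in one transaction (assuming an InnoDB table; the ts_1/ts_2 values are placeholders as in the question):
START TRANSACTION;
UPDATE `table` SET ts = 'ts_1' WHERE id = 1;
UPDATE `table` SET ts = 'ts_2' WHERE id = 2;
-- ... one UPDATE per id/value pair ...
COMMIT;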
generate the statements:
select concat('update table set ts = ts_', id, ' where id = ', id, '; ')
from table
or generate the case conditions, then connect it to your update statement:
select concat('when ', id, ' then ts_', id) from table
You can use INSERT ... ON DUPLICATE KEY UPDATE. See this question: Multiple Updates in MySQL
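A sketch of what that looks like here, assuming id is the primary key so that each VALUES row updates the matching existing row instead of inserting a new one:
INSERT INTO `table` (id, ts)
VALUES
(1, 'ts_1'),
(2, 'ts_2'),
(3, 'ts_3')
ON DUPLICATE KEY UPDATE ts = VALUES(ts);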
ts_1, ts_2, ts_3, etc. are different fields on the same table? There's no way to do that with a single statement.