I am trying to insert a new row and set the customer_id to MAX(customer_id)+1. The reason for this is that the table already has an AUTO_INCREMENT on another column named id, and the table will have multiple rows with the same customer_id.
With this:
INSERT INTO customers
( customer_id, firstname, surname )
VALUES
((SELECT MAX( customer_id ) FROM customers) +1, 'jim', 'sock')
...I keep getting the following error:
#1093 - You can't specify target table 'customers' for update in FROM clause
Also, how would I stop two different customers who are added at the same time from ending up with the same customer_id?
You can use the INSERT ... SELECT statement to get the MAX()+1 value and insert at the same time:
INSERT INTO
customers( customer_id, firstname, surname )
SELECT MAX( customer_id ) + 1, 'jim', 'sock' FROM customers;
Note: you need to drop the VALUES clause from your INSERT and make sure the fields selected by the SELECT match the fields declared in the INSERT.
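One caveat worth adding (my note, not part of the original answer): if customers is empty, MAX(customer_id) is NULL and NULL + 1 is still NULL. A minimal guard, assuming numbering should then start at 1, is to wrap the MAX() in IFNULL:
INSERT INTO customers ( customer_id, firstname, surname )
SELECT IFNULL(MAX(customer_id), 0) + 1, 'jim', 'sock'
FROM customers;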
Correct, you cannot modify and select from the same table in the same query; you would have to perform the above in two separate queries.
The best way is to use a transaction (a sketch follows the locking example below), but if you're not using InnoDB tables then the next best thing is to lock the table and then perform your queries. So:
LOCK TABLES customers WRITE;
-- grab the max id (pseudocode: store it in an application variable)...
$max = SELECT MAX( customer_id ) FROM customers;
-- ...and then perform the insert
INSERT INTO customers ( customer_id, firstname, surname )
VALUES ($max + 1, 'jim', 'sock');
UNLOCK TABLES;
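For InnoDB, the transactional route mentioned above would look roughly like the sketch below. This is my own hedged illustration, not part of the original answer; @new_id is a user variable introduced here, and SELECT ... FOR UPDATE is used so the rows read by MAX() stay locked until the insert commits:
START TRANSACTION;
-- read the current maximum and hold the lock until COMMIT,
-- so a concurrent session running the same code has to wait
SELECT @new_id := IFNULL(MAX(customer_id), 0) + 1 FROM customers FOR UPDATE;
INSERT INTO customers ( customer_id, firstname, surname )
VALUES (@new_id, 'jim', 'sock');
COMMIT;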
Use an alias name for the inner query, like this:
INSERT INTO customers
( customer_id, firstname, surname )
VALUES
((SELECT MAX( customer_id )+1 FROM customers cust), 'sharath', 'rock')
SELECT MAX(col) + 1 is not safe -- it does not ensure that you aren't inserting more than one customer with the same customer_id value, regardless of whether you are selecting from the same table or any other. The proper way to ensure a unique integer value is assigned on insertion into your table in MySQL is to use AUTO_INCREMENT. The ANSI standard is to use sequences, but MySQL doesn't support them. An AUTO_INCREMENT column is declared on the table itself, for example in the CREATE TABLE statement (it can also be added later with ALTER TABLE):
CREATE TABLE `customers` (
`customer_id` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(45) DEFAULT NULL,
`surname` varchar(45) DEFAULT NULL,
PRIMARY KEY (`customer_id`)
)
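For completeness, a short usage sketch assuming the AUTO_INCREMENT definition above: you omit customer_id from the insert, let MySQL assign it, and read it back with LAST_INSERT_ID() if you need it:
INSERT INTO customers ( firstname, surname ) VALUES ('jim', 'sock');
SELECT LAST_INSERT_ID();  -- the customer_id that MySQL just assigned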
That said, this worked fine for me on 5.1.49:
CREATE TABLE `customers` (
`customer_id` int(11) NOT NULL DEFAULT '0',
`firstname` varchar(45) DEFAULT NULL,
`surname` varchar(45) DEFAULT NULL,
PRIMARY KEY (`customer_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO customers VALUES (1, 'a', 'b');
INSERT INTO customers
SELECT MAX(customer_id) + 1, 'jim', 'sock'
FROM customers;
INSERT INTO table1 (id1) SELECT (MAX(id1) + 1) FROM table1;
Use a table alias in the subquery:
INSERT INTO customers
( customer_id, firstname, surname )
VALUES
((SELECT MAX( customer_id ) FROM customers C) +1, 'jim', 'sock')
None of the above answers worked for my case. I got the answer from here, and my SQL is:
INSERT INTO product (id, catalog_id, status_id, name, measure_unit_id, description, create_time)
VALUES (
(SELECT id FROM (SELECT COALESCE(MAX(id),0)+1 AS id FROM product) AS temp),
(SELECT id FROM product_catalog WHERE name="AppSys1"),
(SELECT id FROM product_status WHERE name ="active"),
"prod_name_x",
(SELECT id FROM measure_unit WHERE name ="unit"),
"prod_description_y",
UNIX_TIMESTAMP(NOW())
)
Your sub-query is just incomplete, that's all. See the query below with my additions:
INSERT INTO customers ( customer_id, firstname, surname )
VALUES ((SELECT MAX( customer_id ) FROM customers) + 1, 'jim', 'sock')
You can't do it in a single query, but you could do it within a transaction. Do the initial MAX() select and lock the table, then do the insert. The transaction ensures that nothing will interrupt the two queries, and the lock ensures that nothing else can try doing the same thing elsewhere at the same time.
We declare a user variable @a:
SET @a = (SELECT MAX( customer_id ) FROM customers) + 1;
INSERT INTO customers
( customer_id, firstname, surname )
VALUES
(@a, 'jim', 'sock');
This is a select-then-insert SQL.
I am trying to get the maximum serial_no + 1 value, and it gives the correct value.
SELECT MAX(serial_no) + 1 INTO @var FROM sample.kettle;
INSERT INTO kettle (serial_no, name, age, salary) VALUES (@var, 'aaa', 23, 2000);
I have two tables, user and user_tmp, each having only id as the primary key. The fields of both tables are as follows:
`id` PRIMARY AUTO_INCREMENT INT,
`Name` VARCHAR(80),
`Contact No` VARCHAR(10),
`Status` INT
I need a simple query along the lines of: IF the record EXISTS in the user_tmp table THEN update the value in user, ELSE insert into the user table.
I tried REPLACE and ON DUPLICATE KEY, but they didn't work, as id is the PRIMARY KEY and its value varies between the two tables.
Regards.
You need to do it as two queries. One query to update the records that match:
UPDATE user AS u
JOIN user_tmp AS t ON u.Contact_No = t.Contact_No
SET u.Name = t.Name, u.Status = t.Status;
and a second to insert the ones that don't already exist:
INSERT INTO user (Name, Contact_No, Status)
SELECT t.Name, t.Contact_No, t.Status
FROM user_tmp AS t
LEFT JOIN user AS u ON u.Contact_No = t.Contact_No
WHERE u.id IS NULL
See Return row only if value doesn't exist for an explanation of how the SELECT query in the INSERT works to find rows that don't match.
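As a hedged alternative (my suggestion, not part of the answer above): if you can add a UNIQUE index on Contact_No in user, the two queries can be collapsed into a single INSERT ... ON DUPLICATE KEY UPDATE:
-- assumes: ALTER TABLE user ADD UNIQUE KEY uq_contact (Contact_No);
INSERT INTO user (Name, Contact_No, Status)
SELECT t.Name, t.Contact_No, t.Status
FROM user_tmp AS t
ON DUPLICATE KEY UPDATE
    Name   = VALUES(Name),
    Status = VALUES(Status);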
I am cleaning records that have poorly recorded and inconsistent socio-demographic information over time for the same person. I want to take the most commonly occurring value (the mode) for each person.
One way to do that is to partition by id and then count how many times each value occurs, retaining the highest count for each id:
DROP TABLE dbo.table;
SELECT DISTINCT [id], [ethnic_group] AS [ethnic_mode], ct
INTO dbo.table
FROM (
    SELECT ROW_NUMBER() OVER (PARTITION BY [id] ORDER BY COUNT([ethnic_group]) DESC) AS rn,
           COUNT([ethnic_group]) AS ct,
           [ethnic_group],
           [id]
    FROM dbo.mytable
    GROUP BY [id], [ethnic_group]
) ranked
WHERE rn = 1
ORDER BY ct DESC;
But I want to do this for several variables (ethnic group, income group and several more).
How can I select the mode for several variables within one statement and insert into one table (rather than creating separate tables for each variable)?
The table below illustrates an example of what I want to do:
DROP TABLE mytable;
CREATE TABLE mytable(
id VARCHAR(2) NOT NULL
,ethnic_group VARCHAR(12) NOT NULL
,ethnic_mode VARCHAR(11) NOT NULL
,income VARCHAR(6) NOT NULL
,income_mode VARCHAR(11) NOT NULL
);
INSERT INTO mytable(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('1','white','white','middle','middle');
INSERT INTO mytable(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('1','white','white','middle','middle');
INSERT INTO mytable(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('1','mixed','white','high','middle');
INSERT INTO mytable(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('2','asian','asian','middle','middle');
INSERT INTO mytable(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('2','mixed','asian','middle','middle');
INSERT INTO mytable(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('2','asian','asian','middle','middle');
I would use sub-queries to accomplish this in 1 insert statement.
Here is an example based on the table structure from your illustration:
/* This is the original table and contains duplicate ID's */
DECLARE @source_table TABLE(
id VARCHAR(2) NOT NULL
,ethnic_group VARCHAR(12) NULL
,ethnic_mode VARCHAR(11) NULL
,income VARCHAR(6) NULL
,income_mode VARCHAR(11) NULL
);
/* This is the destination table and will not contain duplicate ID's */
DECLARE @destination_table TABLE(
id VARCHAR(2) NOT NULL PRIMARY KEY
,ethnic_group VARCHAR(12) NULL
,income VARCHAR(6) NULL
);
/* Populate the source table with data */
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('1','white','white','middle','middle');
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('1','white','white','middle','middle');
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('1','mixed','white','high','middle');
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('2','asian','asian','middle','middle');
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('2','mixed','asian','middle','middle');
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('2','asian','asian','middle','middle');
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('3','asian', NULL, NULL, NULL);
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('3',NULL, NULL,'middle', NULL);
INSERT INTO @source_table(id,ethnic_group,ethnic_mode,income,income_mode) VALUES ('3',NULL, NULL, NULL, NULL);
/* Insert from source into destination (removing duplicates) */
INSERT INTO @destination_table
(
id
, ethnic_group
, income
)
SELECT st.id
, (
SELECT TOP 1 ethnic_group
FROM @source_table sub_st
WHERE sub_st.id = st.id
GROUP BY ethnic_group
ORDER BY COUNT(sub_st.id) DESC
)
, (
SELECT TOP 1 income
FROM @source_table sub_st
WHERE sub_st.id = st.id
GROUP BY income
ORDER BY COUNT(sub_st.id) DESC
)
FROM @source_table st
GROUP BY st.id
/* View the destination to see there are no duplicates */
SELECT id
, ethnic_group
, income
FROM @destination_table
Here are the two tables created:
CREATE TABLE category_tbl(
id int NOT NULL AUTO_INCREMENT,
name varchar(255) NOT NULL,
subcategory varchar(255) NOT NULL,
PRIMARY KEY(id),
CONSTRAINT nameSubcategory UNIQUE KEY(name, subcategory)
) ENGINE=InnoDB;
CREATE TABLE device(
id INT NOT NULL AUTO_INCREMENT,
cid INT DEFAULT NULL,
name VARCHAR(255) NOT NULL,
received DATE,
isbroken BOOLEAN,
PRIMARY KEY(id),
FOREIGN KEY(cid) REFERENCES category_tbl(id)
) ENGINE=InnoDB;
Below is the instruction that was given to me:
-- insert the following device instances into the device table (you should use a subquery to set up foreign key references, no hard-coded numbers):
-- cid - reference to name: phone subcategory: maybe a tablet?
-- name - Samsung Atlas
-- received - 1/2/1970
-- isbroken - True
I'm getting errors on the INSERT statement below when attempting to use a sub-query within it. How would you solve this issue?
INSERT INTO devices(cid, name, received, isbroken)
VALUES((SELECT id FROM category_tbl WHERE subcategory = 'tablet') , 'Samsung Atlas', 1/2/1970, 'True');
You have different table names in CREATE TABLE and INSERT INTO, so just choose one: device or devices.
When inserting the date, use a proper format such as DATE('1970-02-01').
When inserting the boolean, just use TRUE with no quotes, I believe.
http://sqlfiddle.com/#!9/b7180/1
INSERT INTO devices(cid, name, received, isbroken)
VALUES((SELECT id FROM category_tbl WHERE subcategory = 'tablet') , 'Samsung Atlas', DATE('1970-02-01'), TRUE);
It's not possible to use a SELECT in an INSERT ... VALUES ... statement. The key here is the VALUES keyword. (EDIT: It is actually possible, my bad.)
If you remove the VALUES keyword, you can use the INSERT ... SELECT ... form of the INSERT statement.
For example:
INSERT INTO mytable ( a, b, c) SELECT 'a','b','c'
In your case, you could run a query that returns the needed value of the foreign key column, e.g.
SELECT c.id
FROM category_tbl c
WHERE c.name = 'tablet'
ORDER BY c.id
LIMIT 1
If we add some literals in the SELECT list, like this...
SELECT c.id AS `cid`
, 'Samsung Atlas' AS `name`
, '1970-01-02' AS `received`
, 'True' AS `isBroken`
FROM category_tbl c
WHERE c.name = 'tablet'
ORDER BY c.id
LIMIT 1
That will return a "row" that we could insert. Just precede the SELECT with
INSERT INTO device (`cid`, `name`, `received`, `isbroken`)
NOTE: The expressions returned by the SELECT are "lined up" with the columns in the column list by position, not by name. The aliases assigned to the expressions in the SELECT list are arbitrary; they are basically ignored. They could be omitted, but I think having the aliases assigned makes it easier to understand when we run just the SELECT portion.
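Putting the two pieces together (and using the boolean literal TRUE instead of the string 'True', since isbroken is declared BOOLEAN), the assembled statement would look roughly like this:
INSERT INTO device (`cid`, `name`, `received`, `isbroken`)
SELECT c.id, 'Samsung Atlas', '1970-01-02', TRUE
FROM category_tbl c
WHERE c.name = 'tablet'
ORDER BY c.id
LIMIT 1;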
I am using INSERT ... SELECT to insert data from specific columns of specific rows of a view into a table. Here's the target table:
CREATE TABLE IF NOT EXISTS `queue` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`customerId` int(11) NOT NULL,
`productId` int(11) NOT NULL,
`priority` int(11) NOT NULL,
PRIMARY KEY (`ID`),
KEY `customerId` (`customerId`),
KEY `productId` (`productId`),
KEY `priority` (`priority`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
The INSERT ... SELECT SQL I have works, but I would like to improve it if possible, as follows: I would like the inserted rows to start with 1 in the priority column, and each subsequent row to increment the priority value by 1. So, if three rows were inserted, the first would be priority 1, the second 2, and the third 3.
An exception to the "start at 1" rule: if there are existing rows in the target table for the specified customer, I would like the inserted rows to start with MAX(priority)+1 for that customer.
I thought I could use a subquery, but here's the problem: sometimes the subquery returns NULL (when there are no records in the queue table for the specified customer), which breaks the insert, as the priority column does not allow nulls.
I tried to CAST the column to an integer, but that still gave me NULL back when there are no records with that customer ID in the table.
I've hardcoded the customer ID in this example, but naturally in my application that would be an input parameter.
INSERT INTO `queue`
(
`customerId`,
`productId`,
`priority`,
`status`,
`orderId`)
SELECT
123, -- This is the customer ID
`PRODUCT_NO`,
(SELECT (MAX(`priority`)+1) FROM `queue` WHERE `customerId` = 123),
'queued',
null
FROM
`queue_eligible_products_view`
Is there a way to do this in one SQL statement, or a small number of SQL statements, i.e., fewer than one SQL statement per row?
I do not think I can set the priority column to auto_increment, as this column is not necessarily unique, and the auto_increment attribute is used to generate a unique identity for new rows.
As Barmar mentions in the comments: use IFNULL to handle your sub-query returning NULL. Hence:
INSERT INTO `queue`
(
`customerId`,
`productId`,
`priority`,
`status`,
`orderId`)
SELECT
123, -- This is the customer ID
`PRODUCT_NO`,
IFNULL((SELECT (MAX(`priority`)+1) FROM `queue` WHERE `customerId` = 123),1),
'queued',
null
FROM
`queue_eligible_products_view`
Here's how to do the incrementing:
INSERT INTO queue (customerId, productId, priority, status, orderId)
SELECT 123, product_no, @priority := @priority + 1, 'queued', null
FROM queue_eligible_products_view
JOIN (SELECT @priority := IFNULL(MAX(priority), 0)
      FROM queue
      WHERE customerId = 123) var
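On MySQL 8.0 and later, a hedged variant of the same idea (my sketch, not part of the answer) can avoid the user variable by using ROW_NUMBER(); it assumes the view exposes the PRODUCT_NO column as in the question:
INSERT INTO queue (customerId, productId, priority, status, orderId)
SELECT 123,
       v.PRODUCT_NO,
       base.start_priority + ROW_NUMBER() OVER (ORDER BY v.PRODUCT_NO),
       'queued',
       NULL
FROM queue_eligible_products_view v
CROSS JOIN (SELECT IFNULL(MAX(priority), 0) AS start_priority
            FROM queue
            WHERE customerId = 123) base;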
I have to insert a lot of rows into a table basic_data like this:
basic_data
customer | max_sale | total_sales
CREATE TABLE `basic_data` (
`id` int(11) NOT NULL AUTO_INCREMENT ,
`customer` int(11) NOT NULL ,
`total_sales` decimal(15,2) NOT NULL ,
`max_sale` tinyint(2) NOT NULL ,
PRIMARY KEY (`id`),
UNIQUE INDEX `customer` USING BTREE (`customer`)
)
As I get (customer, max_sale) and (customer, total_sales) in different queries, I would like to know if it is possible to fill the table so that one customer has just one row with both values max_sale and total_sales.
My insert queries are:
INSERT INTO
basic_data (customer, max_sale)
SELECT a.customer, a.max_sale
FROM sales a
....
GROUP BY customer;
INSERT INTO
basic_data (customer,total_value)
SELECT customer, SUM(sales) total_value
FROM sales a
GROUP BY customer;
And I have read that I can use ON DUPLICATE KEY UPDATE, but I cannot manage to use it properly.
My attempts along the lines of
INSERT INTO
basic_data (customer,total_value)
SELECT customer, SUM(sales) total_value
FROM sales a
GROUP BY customer
ON DUPLICATE KEY person UPDATE total_value = ...
have been unsuccessful and I think it is because ON DUPLICATE KEY does not work with GROUP BY.
Any idea how to store data this way? Big thanks.
UPDATE 1
INSERT INTO basic_data (customer,total_value)
SELECT customer, total_value
FROM
(
SELECT customer, SUM(sales) total_value
FROM sales a
GROUP BY customer
) c
ON DUPLICATE KEY UPDATE total_value = c.total_value
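For symmetry, the same derived-table pattern should also work for the max_sale insert. This is only a sketch: the inner query stands in for your original one, whose body was elided with "...." above, and it again relies on the UNIQUE index on customer:
INSERT INTO basic_data (customer, max_sale)
SELECT customer, max_sale
FROM
(
    SELECT a.customer, a.max_sale   -- your original max_sale query goes here (the "...." part above)
    FROM sales a
    GROUP BY customer
) c
ON DUPLICATE KEY UPDATE max_sale = c.max_sale;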