INSERT statement for MySQL table

CREATE TABLE IF NOT EXISTS `MyTable` (
`ID` SMALLINT NOT NULL AUTO_INCREMENT,
`Name` VARCHAR(50) NOT NULL,
PRIMARY KEY (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO MyTable (ID,Name) VALUES (ID=4,Name='xxx')
or
INSERT INTO MyTable (Name) VALUES (Name='xxx')
The problem is that both INSERT statements produce the entry (4,0). Why 0 instead of "xxx"?
UPDATE: Primary key changed.

This should do the job:
INSERT INTO MyTable (ID, Name) VALUES (4, 'xxx')

I'm pretty sure it would be something like this, instead...
INSERT INTO MyTable (Name) VALUES ('xxx')
No need for the Name= part, since you've already specified which column you want to insert into in the (Name) column list.

Because the expression Name='xxx' is parsed as a comparison, which is false and therefore evaluates to zero.
The column=expression form is used in ON DUPLICATE KEY UPDATE clauses, as described here, not in the regular VALUES part of an INSERT. An example of that:
insert into mytable (col1,col2) values (1,2)
on duplicate key update col1 = col1 + 1
You should be using the syntax:
INSERT INTO MyTable (ID,Name) VALUES (4,'xxx')
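If you want to see where the 0 comes from, here is a minimal sketch (assuming the MyTable definition from the question, where the NOT NULL Name column falls back to an empty-string default in non-strict mode): inside VALUES, Name='xxx' is evaluated as a comparison, and a comparison returns 1 or 0, never the string.
-- Name='xxx' in VALUES compares the column's implicit default ('') with 'xxx':
SELECT '' = 'xxx';    -- 0, the value that ended up in the Name column
SELECT 'xxx' = 'xxx'; -- 1, a true comparison is just the number 1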

Is that Name='xxx' syntax valid? I've never seen it before; I assume it is seeing it as an unquoted literal, trying to convert it to a number and coming up with 0? I'm not sure at all.
Try this:
INSERT INTO MyTable (Name) VALUES ('xxx')

This is because you should not repeat the column names in the VALUES part, and also because you did not define your primary key correctly (airlineID is not part of the field list).
CREATE TABLE IF NOT EXISTS `MyTable` (
`ID` SMALLINT NOT NULL AUTO_INCREMENT,
`Name` VARCHAR(50) NOT NULL,
PRIMARY KEY (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO MyTable (ID,Name) VALUES (4,'xxx')
INSERT INTO MyTable (Name) VALUES ('xxx')

Try this:
INSERT INTO MyTable (ID,Name) VALUES (4,'xxx')
For more info, just visit this link.

Related

Duplicate entry '111-222' for key 'PRIMARY' when inserting a new value to MySQL database

I have a MySQL table running on AWS RDS with structure like the following:
CREATE TABLE `my_table` (
`col1` int(11) NOT NULL,
`col2` int(11) NOT NULL DEFAULT '0',
`f_name` varchar(45) DEFAULT NULL,
`l_name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`col1`,`col2`),
KEY `idx_col1` (`col1`),
KEY `idx_col2` (`col2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The query
SELECT * FROM my_table WHERE col1=111 AND col2=222;
returns 0 rows.
But when I run an insert query
INSERT INTO my_table
(col1, col2, f_name, l_name)
VALUES (111, 222, 'John', 'Doe')
I got an error saying
Duplicate entry '111-222' for key 'PRIMARY'.
Why does this happen? The table doesn't contain a row with col1=111 and col2=222.
There's already a row with values col1=111, col2=111, f_name='John', and l_name='Doe'. But I don't think this would cause a duplicate entry error.
=========================== EDIT ======================================
There's a trigger that generates the duplicate error. Here's the script to reproduce the error.
# Initialize the tables
CREATE TABLE `my_table` (
`col1` int(11) NOT NULL,
`col2` int(11) NOT NULL DEFAULT '0',
`f_name` varchar(45) DEFAULT NULL,
`l_name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`col1`,`col2`),
KEY `idx_col1` (`col1`),
KEY `idx_col2` (`col2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `triggered_table` (
`col1` int(11) NOT NULL,
`col2` int(11) NOT NULL DEFAULT '0',
`update_date` bigint(20) DEFAULT NULL,
PRIMARY KEY (`col1`,`col2`),
KEY `idx_col1` (`col1`),
KEY `idx_col2` (`col2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
# Insert the data that causes the duplicate error
INSERT INTO triggered_table (col1, col2) VALUES (111, 222);
# Create the trigger
DELIMITER $$
CREATE TRIGGER weird_trigger AFTER INSERT
ON my_table
FOR EACH ROW
BEGIN
INSERT INTO triggered_table
(col1, col2)
VALUES (NEW.col1, NEW.col2);
END$$
DELIMITER ;
# Create the duplicate error
INSERT INTO my_table
(col1, col2, f_name, l_name)
VALUES (111, 222, 'John', 'Doe');
I really don't understand why the developers created the triggered_table table. Why didn't they put the update_date column in my_table?
This is so weird.
All you have to do is:
Truncate your table, then run (assuming you only have test data; if not, do a backup first)
INSERT INTO my_table
(col1, col2, f_name, l_name)
VALUES (111, 222, 'John', 'Doe')
Now if the error still exists, it is a much bigger problem.
Your error looks as if col1 and col2 were concatenated into your primary key ('111-222').
You can try
select * from yourTable where FieldPrimary = '111-222'
to check whether it already exists.
The duplicate key error does not come from the my_table table but from the triggered_table table instead. When you add a row in triggered_table for the key (111, 222) and then add a new row in the my_table table (with the same key), your trigger will also try to add a new row with the key (111, 222) in your triggered_table. However, there is already such a key in use, and you will get the duplicate key error.
Depending on what you want to do with the my_table and triggered_table tables, you might want to change the trigger to use REPLACE INTO instead of INSERT INTO. Or you run a check with SELECT first to see if you need to add a new row or not. After that you can run an UPDATE query to change the value of update_date. But to answer your question, the duplicate key error comes from the duplicate key in the triggered_table table.
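If you go the REPLACE INTO route, a minimal sketch of the changed trigger could look like the following (you would DROP TRIGGER weird_trigger first; note that REPLACE deletes the old (col1, col2) row, so any existing update_date for that key is reset):
DELIMITER $$
CREATE TRIGGER weird_trigger AFTER INSERT
ON my_table
FOR EACH ROW
BEGIN
-- Overwrite any existing (col1, col2) row instead of failing on it
REPLACE INTO triggered_table (col1, col2)
VALUES (NEW.col1, NEW.col2);
END$$
DELIMITER ;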

Unable to insert a value from a char column that has been CAST as an integer and incremented by 1

I am converting an id that is stored in a char column. After that, I want to add 1 to it.
Could you help me? Why is my query not working?
query:
INSERT INTO `countries` (`id`, `country_name`) VALUES ((SELECT MAX(CAST(`id` as INTEGER)) AS `max_id` FROM `countries`) + 1, 'India');
The following would run:
INSERT INTO `countries` (`id`, `country_name`)
SELECT MAX(CAST(`id` AS SIGNED)) + 1, 'India'
FROM `countries`;
But I think it would be easier if you just make the id column an AUTO_INCREMENT.
This is not how you should be doing identifiers.
If you want incrementing id values, you want to use the AUTO_INCREMENT feature when creating your table.
Your way is dangerous: there's always a possibility of two transactions running at the same time and picking the same "next ID".
Just create a table with the flag on:
CREATE TABLE countries (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
);
INSERT INTO countries (`name`) VALUES ('India');
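As an optional follow-up sketch: if you need the id that AUTO_INCREMENT just generated, for example to insert related rows, you can read it back on the same connection with LAST_INSERT_ID(). 'France' below is just an illustrative value.
INSERT INTO countries (`name`) VALUES ('France');
-- Returns the id generated for the 'France' row, scoped to this connection
SELECT LAST_INSERT_ID();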

ON DUPLICATE KEY UPDATE with WHERE

I've got an INSERT ... ON DUPLICATE KEY UPDATE query and I am trying to add a WHERE clause to it:
INSERT INTO `product_description` (
`product_id`,`language_id`,`name`,
`description`,`meta_description`,
`meta_keyword`,`tag`
) VALUES (
$getProductId, $languageId, '$pName', '$pDescription', '', '', ''
)
ON DUPLICATE KEY UPDATE
`name` = '$pName',
`description` = '$pDescription'
I want to restrict the UPDATE to those 2 conditions:
WHERE `model` = 'specific-model' AND `sku` NOT LIKE '%B15%'
If I add this part of query to the end of the original query I get a MySQL syntax error. What would be a working solution?
Update: Please note that model and sku are in another table, and the common key is product_id
I would suggest you use some sort of prepared statement instead of concatenating strings, so you should do something like this:
INSERT INTO `product_description` (
`product_id`, `language_id`, `name`,
`description`, `meta_description`,
`meta_keyword`, `tag`
) VALUES (?, ?, ?, ?,'','','')
but this is not part of the question.
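Purely as an illustration of the prepared-statement idea (still not part of the question), this is what the same binding looks like with MySQL's server-side PREPARE / EXECUTE; your client library's prepared-statement API does the equivalent for you, and the values below are made-up placeholders:
-- Double quotes delimit the statement text (assumes the default sql_mode, without ANSI_QUOTES)
PREPARE ins FROM
"INSERT INTO product_description
(product_id, language_id, name, description, meta_description, meta_keyword, tag)
VALUES (?, ?, ?, ?, '', '', '')";
SET @pid = 1, @lang = 1, @name = 'Some name', @descr = 'Some description';
EXECUTE ins USING @pid, @lang, @name, @descr;
DEALLOCATE PREPARE ins;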
I was thinking of answering with a simple CASE WHEN, but the challenging part of your question is that the restricting conditions are not in the product_description table but come from another table. So I think we can just use a TRIGGER:
DELIMITER //
CREATE TRIGGER product_description_upd
BEFORE UPDATE ON product_description
FOR EACH ROW
IF NOT EXISTS(SELECT * FROM models
WHERE product_id=new.product_id
AND model='Abc' AND `sku` NOT LIKE '%B15%') THEN
SET new.name=old.name;
SET new.description=old.description;
END IF;
//
DELIMITER ;
then you can use an INSERT query like:
INSERT INTO `product_description` (col1, col2, ...)
VALUES (..., ..., ...)
ON DUPLICATE KEY
UPDATE name=VALUES(name), description=VALUES(description)
Please see a fiddle here.
The only thing to note here is that even a standard UPDATE query will be affected (see the sketch after the schema below).
CREATE TABLE product_description (
product_id INT PRIMARY KEY,
name VARCHAR(100),
description VARCHAR(100)
);
CREATE TABLE models (
product_id INT,
model VARCHAR(100),
sku VARCHAR(100)
);
INSERT INTO models VALUES
(1, "Abc", "ZZZ"),
(2, "Abc", "B15");
INSERT INTO product_description VALUES
(1, "Car", "Red"),
(2, "Truck", "Pink");
INSERT INTO `product_description` VALUES (1, "NewCar", "DeepRed")
ON DUPLICATE KEY UPDATE name=VALUES(name), description=VALUES(description);
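To see the note above about standard UPDATE queries, assuming the product_description_upd trigger from earlier has been created on this schema:
-- Product 2 has sku 'B15', so the trigger silently restores the old values:
UPDATE product_description SET name = 'Van', description = 'Blue' WHERE product_id = 2;
SELECT name, description FROM product_description WHERE product_id = 2; -- still 'Truck', 'Pink'
-- Product 1 matches model='Abc' with a non-B15 sku, so this update goes through:
UPDATE product_description SET name = 'Van' WHERE product_id = 1;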
Assuming product_id must be present in models:
INSERT INTO `product_description` (product_id, name, description)
SELECT models.product_id, "SuperCar" as name, "DarkRed" as description
FROM `models` WHERE model="Abc" AND `sku` NOT LIKE "%B15%"
ON DUPLICATE KEY UPDATE name="UpdatedCar", description="UpdatedRed";
refer to http://sqlfiddle.com/#!9/69624e/1
Hopefully this solves the problem. You can play with the SELECT query for different results.

Keep from creating duplicate entry above zero

Would it be possible to create a database-level restriction to prevent creating a row that has a column x INT with a value that already exists and is above 0?
Is there a way to use CONSTRAINT for this purpose?
A possible solution is to do the following:
CREATE TABLE test
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
myfield INT,
CONSTRAINT check_myfield UNIQUE ( myfield )
);
Now the column myfield can be NULL, so when we do the following there will be no errors at all.
INSERT INTO `test` VALUES ( NULL, 1 );
INSERT INTO `test` VALUES ( NULL, 0 );
INSERT INTO `test` VALUES ( NULL, 5 );
INSERT INTO `test` VALUES ( NULL, 7 );
etc, you get the point...
Each and every row has a unique value in the column myfield, but it is still possible to create rows where the value in this particular column is NULL, which is almost exactly what I wanted. I wanted all values above 0 to be unique; this makes everything above NULL unique. The beauty of this solution is that it feels more 'professional': no unnecessary logic.
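If you actually need to keep storing 0 (rather than NULL) for the "no value" rows, one possible alternative, assuming MySQL 5.7 or later, is a UNIQUE index on a generated column that maps 0 to NULL. This is only a sketch of the idea, not part of the solution above:
CREATE TABLE test2
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
myfield INT NOT NULL DEFAULT 0,
-- NULLIF turns 0 into NULL, so any number of rows may hold 0,
-- while duplicate values above 0 violate the UNIQUE index
myfield_key INT AS (NULLIF(myfield, 0)) VIRTUAL,
UNIQUE KEY uq_myfield_key (myfield_key)
);
INSERT INTO test2 (myfield) VALUES (0), (0), (5); -- works
INSERT INTO test2 (myfield) VALUES (5); -- rejected: duplicate 5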

Insert on duplicate key causing problems in auto increment field

CREATE TABLE IF NOT EXISTS `foo` (
`foo_id` INT NOT NULL AUTO_INCREMENT ,
`unique` CHAR(255) NULL ,
`not_unique` CHAR(255) NULL ,
PRIMARY KEY (`foo_id`) ,
UNIQUE INDEX `unique_UNIQUE` (`unique` ASC) )
ENGINE = InnoDB;
This is the table.
INSERT INTO foo (`unique`,`not_unique`) VALUES ('John','Doe')
ON DUPLICATE KEY UPDATE `foo_id`=LAST_INSERT_ID(`foo_id`);
SELECT LAST_INSERT_ID();
LAST_INSERT_ID here returns 1. That is correct.
INSERT INTO foo (`unique`,`not_unique`) VALUES ('John','Doe')
ON DUPLICATE KEY UPDATE `foo_id`=LAST_INSERT_ID(`foo_id`);
SELECT LAST_INSERT_ID();
LAST_INSERT_ID here returns 1. That is correct.
INSERT INTO foo (`unique`,`not_unique`) VALUES ('Jane','Doe')
ON DUPLICATE KEY UPDATE `foo_id`=LAST_INSERT_ID(`foo_id`);
SELECT LAST_INSERT_ID();
LAST_INSERT_ID here returns 3. Why? I was hoping it would be 2. If this is a bug, is there a workaround for it?
The auto-increment id was reserved at the beginning of the attempted insert and was discarded when the statement hit the duplicate key, so the sequence skips a value. This is expected InnoDB auto-increment behaviour rather than a bug.
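One hedged workaround (a sketch, not the only option): only attempt the INSERT when the unique value is genuinely new, so no auto-increment value gets reserved for rows that already exist. There is still a small race window between the check and the insert under concurrency, so you may want to keep the ON DUPLICATE KEY clause as a safety net and accept an occasional gap.
INSERT INTO foo (`unique`,`not_unique`)
SELECT 'Jane', 'Doe' FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM foo WHERE `unique` = 'Jane');
-- When nothing was inserted, look the existing id up instead of using LAST_INSERT_ID():
SELECT foo_id FROM foo WHERE `unique` = 'Jane';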