insert update multiple rows mysql

I need to add multiple records to a MySQL database. I tried it with multiple queries and it works fine, but it's not efficient. So I tried it with just one query like below,
INSERT INTO data (block, length, width, rows) VALUES
("BlockA", "200", "10", "20"),
("BlockB", "330", "8", "24"),
("BlockC", "430", "7", "36")
ON DUPLICATE KEY UPDATE
block=VALUES(block),
length=VALUES(length),
width=VALUES(width),
rows=VALUES(rows)
But it always updates the table (the columns are block_id, block, length, width, rows).
Should I change the query to also include block_id? block_id is the primary key. Any help would be appreciated.

I've run your query without any problem. Are you sure you don't have other keys defined on the data table? Also make sure you have AUTO_INCREMENT set for the id field; without auto_increment, the query always updates the existing row.
***** Updated **********
Sorry, I misread your question. Yes, with only one auto_increment key, your query will always insert new rows instead of updating existing ones (because the primary key is the only way to detect an 'existing'/duplicate row), and since the key is auto_increment there is never a duplicate if the primary key is not given in the INSERT.
I think what you want to achieve is different: you might want to set up a composite unique key on all the fields (i.e. block, length, width, rows).
By the way, I've set up an SQL fiddle for you.
http://sqlfiddle.com/#!2/e7216/1
The syntax to add the unique key:
CREATE TABLE `data` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`block` varchar(10) DEFAULT NULL,
`length` int(11) DEFAULT NULL,
`width` int(11) DEFAULT NULL,
`rows` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `uniqueme` (`block`,`length`,`width`,`rows`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
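With that unique key in place, the original multi-row insert can be rerun as-is; any row whose (block, length, width, rows) combination already exists hits the ON DUPLICATE KEY UPDATE branch instead of creating a duplicate. A minimal sketch against the fiddle schema above, using the values from the question:
INSERT INTO `data` (`block`, `length`, `width`, `rows`) VALUES
('BlockA', 200, 10, 20),
('BlockB', 330, 8, 24),
('BlockC', 430, 7, 36)
ON DUPLICATE KEY UPDATE
`block` = VALUES(`block`),
`length` = VALUES(`length`),
`width` = VALUES(`width`),
`rows` = VALUES(`rows`);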

Related

How to suppress unique key checking while sql insert

I have a MySQL database with some tables.
Into one of these tables I want to insert some new rows via an SQL script.
Unfortunately I have to insert an empty string into two columns, and those two columns are part of a unique key for that table.
So I tried to set UNIQUE_CHECKS before and after the insert, but I'm getting errors because of duplicate entries.
Here is the definition of the table:
CREATE TABLE `Table_A` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(100) NOT NULL,
`number` varchar(25) DEFAULT NULL,
`changedBy` varchar(150) DEFAULT NULL,
`changeDate` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`,`number`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
And the INSERT statement which causes error:
SET UNIQUE_CHECKS = 0;
INSERT INTO `Table_A`
(`name`, `number`, `changedBy`, `changeDate`)
SELECT DISTINCT '', '', 'myUser', CURRENT_TIMESTAMP
FROM Table_A
WHERE id NOT IN
(
SELECT DISTINCT id
FROM Table_A
);
SET UNIQUE_CHECKS = 1;
As you can see, I'm using UNIQUE_CHECKS.
But as I said, this doesn't work properly.
Any help or suggestion would be appreciated.
Patrick
Switching off unique checks for the insert operation doesn't mean that uniqueness will only be checked for operations that happen after you switch it back on. It just means that the database won't waste time checking the constraint while it is switched off, but it will check the constraint when you switch it back on.
What that means is that you need to ensure the columns covered by the unique key actually contain unique values before you turn it back on, which you don't do.
If you want to somehow maintain uniqueness for new records you insert after some point in time, you would need to create a trigger and manually check the new records against the already existing data. The same possibly goes for updates. But I don't recommend it: you should probably redesign the data so that either the unique key isn't there, or the data is truly unique for all the records there are and will be.
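If you do go the trigger route, a minimal sketch could look like the following (the trigger name and error message are illustrative, and it assumes the unique key itself has been dropped, since otherwise the key would still reject the rows first):
DELIMITER $$
CREATE TRIGGER trg_table_a_unique_check
BEFORE INSERT ON `Table_A`
FOR EACH ROW
BEGIN
-- Reject the insert if an identical (name, number) pair already exists;
-- <=> is the NULL-safe equality operator, needed because number is nullable.
IF EXISTS (SELECT 1 FROM `Table_A`
WHERE `name` = NEW.`name` AND `number` <=> NEW.`number`) THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Duplicate (name, number) pair';
END IF;
END$$
DELIMITER ;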

update table takes long time in mysql?

CREATE TABLE fa (
book varchar(100) DEFAULT NULL,
PRODUCTION varchar(1000) DEFAULT NULL,
VENDOR_LEVEL varchar(100) DEFAULT NULL,
BOOK_NO int(10) DEFAULT NULL,
UNSTABLE_TIME_PERIOD varchar(100) DEFAULT NULL,
`PERIOD_YEAR` int(10) DEFAULT NULL,
promo_3_visuals_manual_drag int(10) DEFAULT NULL,
PRODUCT_LEVEL_DIST varchar(100) DEFAULT NULL,
PRODUCT_LEVEL_ACV_TREND varchar(100) DEFAULT NULL,
KEY book (BOOK_NO),
KEY period (PERIOD_YEAR)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Indexes we added to the columns:
Index: keys on BOOK_NO and PERIOD_YEAR have been added.
We can't add a unique or primary key to either column as they contain plenty of duplicate values.
There are 46 million rows.
We tried partitioning by period year with catno as a sub-partition, but it didn't work as it still takes a long time.
When I run the update query:
update fa set UNSTABLE_TIME_PERIOD = NULL where BOOK_NO = 0 and PERIOD_YEAR = 201502;
it takes more than 7 minutes. How can I optimize the query?
Instead of creating 2 different keys, create a single composite key covering both columns, like:
KEY book_period (BOOK_NO, PERIOD_YEAR)
Also, filter the records first on the column that returns the smaller set of records compared to the other.
If you think BOOK_NO will return fewer records than PERIOD_YEAR, use BOOK_NO first in the WHERE clause; otherwise use PERIOD_YEAR first, and create the key accordingly.
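For example, assuming you replace the two existing single-column keys, the composite index could be added like this (a sketch; on a 46-million-row table the ALTER itself will take a while, so try it on a copy first):
ALTER TABLE fa
DROP KEY book,
DROP KEY period,
ADD KEY book_period (BOOK_NO, PERIOD_YEAR);
-- The UPDATE ... WHERE BOOK_NO = 0 AND PERIOD_YEAR = 201502 can then locate
-- the matching rows through this one index instead of scanning a wider set.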
As Álvaro González said, you should use some sort of key (e.g. a primary key).
Adding a Primary Key:
CREATE TABLE fa (
<your_id>,
{...},
PRIMARY KEY(<your_id>),
{...}
)
or
CREATE TABLE fa (
<your_id> PRIMARY KEY,
{...}
)
It'd be a good idea to make your PRIMARY KEY AUTO_INCREMENT too for convenience, but this is not essential.
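Since the fa table already exists, the same idea can be applied with ALTER TABLE rather than recreating the table. A sketch, with an illustrative column name (again, expect this rebuild to be slow on 46 million rows):
ALTER TABLE fa
ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT,
ADD PRIMARY KEY (id);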

MySQL: Enforce a unique column without using a unique key

I have a column with data that exceeds MySQL's index length limit. Therefore, I can't use a unique key.
There's a solution here to the problem without using an unique key: MySQL: Insert record if not exists in table
However, in the comments, people are having issues with inserting the same value into multiple columns. In my case, a lot of my values are 0, so I'll get duplicate values very often.
I'm using Node and node-mysql to access the database. I'm thinking I could keep a variable that tracks all values currently being inserted. Before inserting, I'd check whether the value is currently being inserted; if so, I'd wait until that insert finishes and then continue execution as if the value had been inserted originally. However, I feel like this would be very error prone.
Here's part of my table schema:
CREATE TABLE `links` (
`id` int(10) UNSIGNED NOT NULL,
`url` varchar(2083) CHARACTER SET latin1 COLLATE latin1_general_cs NOT NULL,
`likes` int(10) UNSIGNED NOT NULL,
`tweets` int(10) UNSIGNED NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
ALTER TABLE `links`
ADD PRIMARY KEY (`id`),
ADD KEY `url` (`url`(50));
I cannot put a unique key on url because it can be 2083 bytes, which is over MySQL's key size limit. likes and tweets will often be 0, so the linked solution will not work.
Is there another possible solution?
If you phrase your INSERT in a certain way, you can make use of WHERE NOT EXISTS to check that the URL does not already exist before completing the insert:
INSERT INTO links (`url`, `likes`, `tweets`)
SELECT 'http://www.google.com', 10, 15 FROM DUAL
WHERE NOT EXISTS
(SELECT 1 FROM links WHERE url='http://www.google.com');
This assumes that the id column is a primary key/auto increment, and MySQL will automatically assign a value to it.
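One way to convince yourself the statement is safe to repeat is to run it twice and watch the affected-row count (an illustrative check, reusing the example URL above):
-- First run: 1 row affected. Second run: 0 rows affected, because the
-- NOT EXISTS subquery now finds the URL and the SELECT produces no row.
INSERT INTO links (`url`, `likes`, `tweets`)
SELECT 'http://www.google.com', 10, 15 FROM DUAL
WHERE NOT EXISTS
(SELECT 1 FROM links WHERE url='http://www.google.com');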

Unique (multiple columns) and null in one column

I have a simple categories table. A category can have a parent category (the par_cat column), or NULL if it is a main category, and under the same parent category there shouldn't be 2 or more categories with the same name or url.
Code for this table:
CREATE TABLE IF NOT EXISTS `categories` (
`id` int(10) unsigned NOT NULL,
`par_cat` int(10) unsigned DEFAULT NULL,
`lang` varchar(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'pl',
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`url` varchar(120) COLLATE utf8_unicode_ci NOT NULL,
`active` tinyint(3) unsigned NOT NULL DEFAULT '1',
`accepted` tinyint(3) unsigned NOT NULL DEFAULT '1',
`priority` int(10) unsigned NOT NULL DEFAULT '1000',
`entries` int(10) unsigned NOT NULL DEFAULT '0',
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ;
ALTER TABLE `categories`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `categories_name_par_cat_unique` (`name`,`par_cat`),
ADD UNIQUE KEY `categories_url_par_cat_unique` (`url`,`par_cat`),
ADD KEY `categories_par_cat_foreign` (`par_cat`);
ALTER TABLE `categories`
MODIFY `id` int(10) unsigned NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=3;
ALTER TABLE `categories` ADD CONSTRAINT `categories_par_cat_foreign`
FOREIGN KEY (`par_cat`) REFERENCES `categories` (`id`);
The problem is that even though I have unique keys, it doesn't work. If I try to insert 2 categories that have par_cat set to NULL and the same name and url, both categories can be inserted into the database without a problem (and they shouldn't be). However, if I select some other par_cat for those categories (for example 1, assuming a category with id 1 exists), only the first record will be inserted (and that's the desired behaviour).
Question: how do I handle this case? I read that:
A UNIQUE index creates a constraint such that all values in the index
must be distinct. An error occurs if you try to add a new row with a
key value that matches an existing row. This constraint does not apply
to NULL values except for the BDB storage engine. For other engines, a
UNIQUE index permits multiple NULL values for columns that can contain
NULL. If you specify a prefix value for a column in a UNIQUE index,
the column values must be unique within the prefix.
However, since I have a unique key over multiple columns, I expected that not to be the case (only par_cat can be NULL; name and url cannot be NULL). Because par_cat references the id of the same table, but some categories don't have a parent category, it should allow NULL values.
This works as defined by the SQL standard. NULL means unknown. If you have two records with par_cat = NULL and name = 'X', the two NULLs are not regarded as holding the same value, so they don't violate the unique key constraint. (Well, one could argue that the NULLs might still mean the same value, but applying that rule would make working with unique indexes and nullable fields almost impossible, for NULL could just as well mean 1, 2 or any other value. So in my opinion they did well to define it the way they did.)
As MySQL does not support functional indexes, where you could have an index on (ISNULL(par_cat, -1), name), your only option is to make par_cat a NOT NULL column with 0 or -1 or whatever for "no parent", if you want your constraints to work.
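A sketch of that NOT NULL alternative, using 0 for "no parent" (note that the existing self-referencing foreign key on par_cat has to be dropped, or a dummy root row with id = 0 added, because 0 will not reference a real category):
-- Drop the foreign key so the placeholder value 0 is allowed,
-- then convert existing NULLs and forbid NULL going forward.
ALTER TABLE `categories` DROP FOREIGN KEY `categories_par_cat_foreign`;
UPDATE `categories` SET `par_cat` = 0 WHERE `par_cat` IS NULL;
ALTER TABLE `categories` MODIFY `par_cat` int(10) unsigned NOT NULL DEFAULT 0;
-- The existing unique keys on (name, par_cat) and (url, par_cat) now also
-- apply to top-level categories, because 0 is a real, comparable value.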
I see that this was asked in 2014.
However, this feature is often requested from MySQL: https://bugs.mysql.com/bug.php?id=8173 and https://bugs.mysql.com/bug.php?id=17825 for example.
People can click "Affects me" on those bugs to try to get attention from MySQL.
Since MySQL 5.7 we can now use the following workaround:
ALTER TABLE categories
ADD generated_par_cat INT UNSIGNED AS (ifNull(par_cat, 0)) NOT NULL,
ADD UNIQUE INDEX categories_name_generated_par_cat (name, generated_par_cat),
ADD UNIQUE INDEX categories_url_generated_par_cat (url, generated_par_cat);
The generated_par_cat is a virtual generated column, so it takes no storage space. When a user inserts (or updates), the unique indexes cause the value of generated_par_cat to be generated on the fly, which is a very quick operation.
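With the generated column in place, a quick check shows the constraint now covering top-level categories as well (illustrative values):
INSERT INTO `categories` (`par_cat`, `name`, `url`) VALUES (NULL, 'Books', 'books');
-- Running the same insert again now fails with a duplicate-key error on
-- categories_name_generated_par_cat, because both rows map to generated_par_cat = 0.
INSERT INTO `categories` (`par_cat`, `name`, `url`) VALUES (NULL, 'Books', 'books');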
Just in case you come from Laravel...
This is Laravel's migration version of the virtual column used to work around the UNIQUE issue when one of the columns is NULL:
$table->integer('generated_par_cat')->virtualAs('ifNull(par_cat, 0)');
$table->unique(['name', 'generated_par_cat'], 'name_par_cat_unique');

Order by two fields - Indexing

So I've got a table with all users and their values, and I want to order them by how much "money" they have. The problem is that the money is in two separate fields: users.money and users.bank.
So this is my table structure:
CREATE TABLE IF NOT EXISTS `users` (
`id` int(4) unsigned NOT NULL AUTO_INCREMENT,
`username` varchar(54) COLLATE utf8_swedish_ci NOT NULL,
`money` bigint(54) NOT NULL DEFAULT '10000',
`bank` bigint(54) NOT NULL DEFAULT '10000',
PRIMARY KEY (`id`),
KEY `users_all_money` (`money`,`bank`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci AUTO_INCREMENT=100 ;
And this is the query:
SELECT id, (money+bank) AS total FROM users FORCE INDEX (users_all_money) ORDER BY total DESC
Which works fine, but when I run EXPLAIN it shows "Using filesort", and I'm wondering if there is any way to optimize it?
Because you want to sort by a derived value (one that must be calculated for each row), MySQL can't use the index to help with the ordering.
The only solution I can see would be to create an additional total_money (or similar) column and, whenever you update money or bank, update that value too. You could do this in your application code, or it would also be possible to do it in MySQL with triggers if you wanted.
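A sketch of the trigger-flavoured variant of that extra column (the column, key and trigger names here are illustrative; on MySQL 5.7+ an indexed stored generated column would achieve the same without triggers):
ALTER TABLE `users`
ADD COLUMN `total_money` bigint(54) NOT NULL DEFAULT '0',
ADD KEY `users_total_money` (`total_money`);
UPDATE `users` SET `total_money` = `money` + `bank`;
DELIMITER $$
CREATE TRIGGER trg_users_total_ins BEFORE INSERT ON `users`
FOR EACH ROW SET NEW.total_money = NEW.money + NEW.bank$$
CREATE TRIGGER trg_users_total_upd BEFORE UPDATE ON `users`
FOR EACH ROW SET NEW.total_money = NEW.money + NEW.bank$$
DELIMITER ;
-- ORDER BY can now read the users_total_money index (which also covers id
-- via the InnoDB primary key) instead of doing a filesort:
SELECT id, total_money AS total FROM users ORDER BY total_money DESC;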