This is a very strange issue...
select count(*) from imprint_users;
+----------+
| count(*) |
+----------+
|      461 |
+----------+
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387,165);
Query OK, 1 row affected (0.00 sec)
select count(*) from imprint_users;
+----------+
| count(*) |
+----------+
|      461 |
+----------+
1) Cannot see anything in the MySQL error log.
2) Checked the status of the table just in case:
+--------------------------+-------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+--------------------------+-------+----------+----------+
| coresource.imprint_users | check | status | OK |
+--------------------------+-------+----------+----------+
3) Here is the CREATE statement:
CREATE TABLE "imprint_users" (
"imprint_sid" bigint(20) NOT NULL,
"users_sid" bigint(20) NOT NULL,
PRIMARY KEY ("imprint_sid","users_sid"),
KEY "FK__IMPRINT_U__IMPRI__47E69B3D" ("imprint_sid"),
KEY "FK__IMPRINT_U__USERS__48DABF76" ("users_sid"),
CONSTRAINT "fk__imprint_u__impri__47e69b3d" FOREIGN KEY ("imprint_sid") REFERENCES "imprint" ("imprint_sid"),
CONSTRAINT "fk__imprint_u__users__48dabf76" FOREIGN KEY ("users_sid") REFERENCES "users" ("users_sid")
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
4) This is in a master-master (M-M) setup and we are writing to M1.
Server version: 5.6.24-72.2-log Percona Server (GPL), Release 72.2
Any help or direction would be appreciated.
Try a test.
First flush the tables and, if autocommit is set to zero, set it to 1:
flush tables;
set autocommit = 1;
select count(*) from imprint_users;
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387,165);
select count(*) from imprint_users;
If the problem still remains, check whether any TRIGGER or EVENT is working in the background. Note that stock MySQL has no single variable that disables all triggers (they have to be dropped or edited by hand), but you can disable the entire event scheduler (all events):
set global event_scheduler = OFF;
Then run the query and test again.
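Before disabling anything, it helps to see what is actually there; a minimal sketch for listing triggers and events (assuming the coresource schema from the question):
show triggers from coresource;
-- list scheduled events and their status
select EVENT_SCHEMA, EVENT_NAME, STATUS from information_schema.EVENTS;
show variables like 'event_scheduler';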
I created a sysbench table, shown below, with 25,000,000 records (5.7 GB in size):
Create Table: CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=25000001 DEFAULT CHARSET=latin1
Then I added an index on c using the ALTER statement directly, which took about 18 minutes to update the table, as shown below:
mysql> alter table sbtest1 add index c_1(c);
Query OK, 0 rows affected (18 min 47.32 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> show create table sbtest1\G
*************************** 1. row ***************************
Table: sbtest1
Create Table: CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`),
KEY `c_1` (`c`)
) ENGINE=InnoDB AUTO_INCREMENT=25000002 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
During the 18 minutes the table was being updated, I tried to perform some transactions on it, inserting new records and updating column c of existing records, and to my surprise they all worked, when I expected a lock to prevent this from happening. I had always understood that performing an ALTER on an InnoDB table, especially a large table, could lock the records for the duration of the process, so why was I able to perform inserts and updates without any problems?
Here is some info about my server:
mysql> show variables like '%isolation%';
+-----------------------+-----------------+
| Variable_name | Value |
+-----------------------+-----------------+
| transaction_isolation | REPEATABLE-READ |
| tx_isolation | REPEATABLE-READ |
+-----------------------+-----------------+
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.25-28 |
+-----------+
To me, it now seems like in MySQL 5.7 it's okay to run the ALTER statement directly without any worries about locks. Is this an accurate conclusion?
UPDATED
When I tried to delete the added index c_1, it took less than a second, which also surprised me because I expected it to take longer than adding the index. I had always believed that adding an index is simple and quick, yet deleting or updating one takes a long time since the entire table structure has to be altered. So I am a bit confused about this.
Adding a secondary index can be done in place and permits concurrent DML. Dropping a secondary index is even cheaper: it is essentially a metadata-only change, which is why it finished in under a second.
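In MySQL 5.7 you can also state the expected behaviour explicitly; the server then raises an error instead of silently falling back to a blocking table copy. A minimal sketch against the sbtest1 table above:
-- online, in-place index build; errors out if INPLACE with no lock is impossible
alter table sbtest1 add index c_1(c), ALGORITHM=INPLACE, LOCK=NONE;
-- dropping a secondary index only touches metadata
alter table sbtest1 drop index c_1, ALGORITHM=INPLACE, LOCK=NONE;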
I've got a MyISAM table of 90 million records and over 18 GB of data, and tests suggest it's a candidate for partitioning.
Original schema:
CREATE TABLE `email_tracker` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`hash` varchar(65) COLLATE utf8_unicode_ci NOT NULL,
`userId` int(11) NOT NULL,
`dateSent` datetime NOT NULL,
`dateViewed` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `userId` (`userId`),
KEY `dateSent` (`dateSent`),
KEY `dateViewed` (`dateViewed`),
KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
1 row in set (0.01 sec)
I've previously partitioned the table on a test server with "ALTER TABLE email_tracker PARTITION BY HASH..." and ran typical queries against it, and there were no problems with the queries. To avoid locking the table on the production DB, I'm testing again on the test server using the approach below, as we can afford to lose some tracking data while this runs:
RENAME TABLE email_tracker TO email_tracker_orig;
CREATE TABLE email_tracker LIKE email_tracker_orig;
CREATE TABLE email_tracker_part LIKE email_tracker_orig;
ALTER TABLE email_tracker_part DROP PRIMARY KEY, ADD PRIMARY KEY (id, userId);
ALTER TABLE email_tracker_part PARTITION BY HASH (id + userId) partitions 30;
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig);
The _orig table has 90,795,103 records. After the query, the _part table only has 68,282,298, and I have no idea why that might be. Any ideas?
mysql> select count(*) from email_tracker_orig;
+----------+
| count(*) |
+----------+
| 90795103 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from email_tracker_part;
+----------+
| count(*) |
+----------+
| 68274818 |
+----------+
1 row in set (0.00 sec)
(On subsequent tests, the _part table contains slightly different numbers of records, which is weirder still.)
Edit #1: Just realised that half of the partitions are empty due to auto-increment-increment = 2 for replication, so I'm going to repartition BY KEY (userId) and see how that works out.
Edit #2: Still the same after re-partitioning, so I'm trying to identify the missing rows to establish a pattern.
I am not sure of your requirements, but the MySQL documentation states that "the use of hashing expressions involving multiple columns is not particularly recommended." I would recommend that you just partition by id; partitioning by id + userId doesn't give any obviously better distribution of your rows across the partitions.
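Following that recommendation, the repartitioning statement would look like this (a sketch against the _part table defined above, whose primary key already includes id):
ALTER TABLE email_tracker_part PARTITION BY HASH (id) PARTITIONS 30;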
Looks like the INSERT query merely terminated prematurely, exactly 40 mins in this case. Just re-running it for the missing records is doing the trick:
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig WHERE id > 148893974);
There's nothing in the my.cnf that suggests a timeout of 40 mins, and I've been running longer queries than this on this test server, but I have my solution so I'll close this even though the underlying reason remains unclear to me.
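For anyone hitting the same thing: since the failure was one giant INSERT ... SELECT, an alternative is to copy the rows in id ranges so that no single statement runs for 40 minutes. A sketch, with a hypothetical step of 10 million:
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig WHERE id <= 10000000);
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig WHERE id > 10000000 AND id <= 20000000);
-- ...and so on, until the maximum id is covered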
Suppose I have the two tables below:
CREATE TABLE post (
id bigint(20) NOT NULL AUTO_INCREMENT,
text text ,
PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=1;
CREATE TABLE post_path (
ancestorid bigint(20) NOT NULL DEFAULT '0',
descendantid bigint(20) NOT NULL DEFAULT '0',
length int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (ancestorid,descendantid),
KEY descendantid (descendantid),
CONSTRAINT f_post_path_ibfk_1
FOREIGN KEY (ancestorid) REFERENCES post (id)
ON DELETE CASCADE
ON UPDATE CASCADE,
CONSTRAINT f_post_path_ibfk_2
FOREIGN KEY (descendantid) REFERENCES post (id)
ON DELETE CASCADE
ON UPDATE CASCADE
) ENGINE=InnoDB;
And inserted these rows:
INSERT INTO
post (text)
VALUES ('a'); # inserted row with id=1
INSERT INTO
post_path (ancestorid ,descendantid ,length)
VALUES (1, 1, 0);
When I try to update the post row's id:
UPDATE post SET id = '10' WHERE post.id =1
MySQL said:
#1452 - Cannot add or update a child row: a foreign key constraint fails (test.post_path, CONSTRAINT f_post_path_ibfk_2 FOREIGN KEY (descendantid) REFERENCES post (id) ON DELETE CASCADE ON UPDATE CASCADE)
Why? What is wrong?
Edit:
When I inserted these rows:
INSERT INTO
post (text)
VALUES ('b'); # inserted row with id=2
INSERT INTO
post_path (ancestorid, descendantid, length)
VALUES (1, 2, 0);
And updated:
UPDATE post SET id = '20' WHERE post.id =2
MySQL successfully updated both the child and parent rows.
So why can I not update the first post (id=1)?
OK, I ran your schema and queries through a test database I have access to, and noticed the following: after inserting both rows into both tables, and before any updates, the data looks like:
mysql> select * from post;
+----+------+
| id | text |
+----+------+
| 1 | a |
| 2 | b |
+----+------+
2 rows in set (0.00 sec)
mysql> select * from post_path;
+------------+--------------+--------+
| ancestorid | descendantid | length |
+------------+--------------+--------+
| 1 | 1 | 0 |
| 1 | 2 | 0 |
+------------+--------------+--------+
2 rows in set (0.00 sec)
After I issue the update statement to update post.id to 20:
mysql> UPDATE `post` SET `id` = '20' WHERE `post`.`id` =2;
Query OK, 1 row affected (0.08 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from post_path;
+------------+--------------+--------+
| ancestorid | descendantid | length |
+------------+--------------+--------+
| 1 | 1 | 0 |
| 1 | 20 | 0 |
+------------+--------------+--------+
2 rows in set (0.00 sec)
Notice that ancestorid is still 1. This appears to be a known issue with MySQL:
If you use a multiple-table UPDATE statement involving InnoDB tables for which there are foreign key constraints, the MySQL optimizer might process tables in an order that differs from that of their parent/child relationship. In this case, the statement fails and rolls back. Instead, update a single table and rely on the ON UPDATE capabilities that InnoDB provides to cause the other tables to be modified accordingly. See Section 14.3.5.4, “InnoDB and FOREIGN KEY Constraints”.
The reason your first query fails is that descendantid is being updated to 10 but ancestorid is not: you are trying to set post.id to 10 while ancestorid in the post_path table still references the value 1, which would no longer exist.
You should consider altering your schema to avoid this, and also avoid updating an auto_increment column, so you avoid collisions.
I believe the solution to your problem is to remove descendantid as a constraint and use a trigger to perform an update on the field.
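Removing that constraint would look like this (the constraint name comes from the CREATE TABLE above):
ALTER TABLE post_path DROP FOREIGN KEY f_post_path_ibfk_2;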
delimiter $$
CREATE TRIGGER post_trigger
AFTER UPDATE ON post
FOR EACH ROW
BEGIN
  UPDATE post_path
  SET post_path.descendantid = NEW.id
  WHERE post_path.descendantid = OLD.id;
END$$
delimiter ;
The main reason the second update worked is that you kept different values for ancestorid and descendantid. When a single change has to cascade through two different constraints on the same row, only the first constraint is applied, not the second, and that is exactly the situation in your first update attempt.
The reason the first update fails and the second does not is that in the second instance your ancestorid and descendantid reference different rows in your post table:
ancestorid = 1
descendantid = 2
The first update fails when it attempts to update post_path.ancestorid: in doing so, the constraint between post.id and post_path.descendantid fails, as those values would no longer match (1 vs. 10).
Assuming that any given post cannot be both an ancestor and a descendant, the issue here is only in the execution of the first insert:
INSERT INTO `post_path` (`ancestorid` ,`descendantid` ,`length`) VALUES (1, 1, 0);
I have this MySQL table:
video (id int, name varchar(30), view_count int)
I tried to run the following query on my computer:
update `video` set view_count = view_count + 1 where id = 1;
It works fine.
However, after I moved the database to another server, the above update query sometimes works and sometimes doesn't. There is no error message, but the value doesn't change.
I tried to run:
update `video` set name = 'testing', view_count = view_count + 1 where id = 1;
The name can be updated successfully, but the view_count doesn't change.
Does anyone know what the problem is?
Here is the output from MySQL prompt:
update `video` set view_count = view_count + 1 where id = 1;
Query OK, 1 row affected (0.13 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> show create table video \G
*************************** 1. row ***************************
Table: video
Create Table: CREATE TABLE `video` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(30) default NULL,
`view_count` int(10) default NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql> select * from video;
+----+------+------------+
| id | name | view_count |
+----+------+------------+
|  1 | ABC  |       8510 |
+----+------+------------+
MyISAM performs a full table lock on any DML (INSERT, UPDATE, DELETE).
If the video table is MyISAM, that should take care of this problem, because only one connection at a time can hold the exclusive write lock on the table while executing a single DML statement, even if 1000 people are trying to execute
update `video` set name = 'testing', view_count = view_count + 1 where id = 1;
InnoDB performs MVCC on any data you are updating. Since the default is REPEATABLE-READ, 1000 people may see the same value of view_count just before attempting to increment it. Thus, if DB connection #1 sees view_count as 12 and increments it, it becomes 13. However, if DB connection #2 also saw view_count as 12, it will increment it and write 13 AGAIN!
Keep in mind that I am only speculating based on multiple DB Connections attempting the exact same UPDATE query.
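If that kind of read-then-write race is the concern, a locking read inside a transaction sidesteps the stale snapshot; a minimal sketch:
START TRANSACTION;
-- FOR UPDATE locks the row and reads its latest committed version
SELECT view_count FROM video WHERE id = 1 FOR UPDATE;
UPDATE video SET view_count = view_count + 1 WHERE id = 1;
COMMIT;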
Check that view_count doesn't contain a NULL value: in SQL, NULL + 1 yields NULL, so the increment would silently leave the column NULL.
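A defensive version of the increment that copes with a NULL starting value (a sketch):
update `video` set view_count = COALESCE(view_count, 0) + 1 where id = 1;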
Consider the following SQL:
CREATE TABLE USER1
(
pkUSER1_ID INT UNSIGNED NOT NULL AUTO_INCREMENT,
DATE_UPDATED TIMESTAMP NULL DEFAULT NULL,
NAME VARCHAR(25) NOT NULL,
CONSTRAINT PRIMARY KEY (pkUSER1_ID),
CONSTRAINT UNIQUE (NAME)
)
ENGINE = INNODB;
INSERT INTO USER1
SET NAME = 'asdf'
ON DUPLICATE KEY
UPDATE DATE_UPDATED = NOW();
INSERT INTO USER1
SET NAME = 'asdf'
ON DUPLICATE KEY
UPDATE DATE_UPDATED = NOW();
INSERT INTO USER1
SET NAME = 'asdf1'
ON DUPLICATE KEY
UPDATE DATE_UPDATED = NOW();
SELECT * FROM USER1;
And now notice the result set: the AUTO_INCREMENT value was increased despite nothing being inserted.
+------------+---------------------+-------+
| pkUSER1_ID | DATE_UPDATED | NAME |
+------------+---------------------+-------+
| 1 | 2010-02-09 13:29:15 | asdf |
| 3 | NULL | asdf1 |
+------------+---------------------+-------+
I get different behavior on two separate servers... the output above is from MySQL v5.0.45 running on 2.6.9-023stab048.6-enterprise (I think it's Red Hat). The problem doesn't exist on MySQL v5.0.51a-24+lenny2-log running on 2.6.26-2-amd64 (which is obviously Debian).
Is there a configuration setting I can change to avoid this? I have around 300 users in my database, but due to the frequency with which the insert/update statement is run, the latest user id is over 600,000.
This is a bug... http://bugs.mysql.com/bug.php?id=28781
Not sure why the client is running a 3 year old version of MySQL.
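If upgrading isn't an option, a common workaround is to split the statement so that a duplicate never consumes an id: update first, and insert only when no row matched. A sketch using the USER1 table above:
UPDATE USER1 SET DATE_UPDATED = NOW() WHERE NAME = 'asdf';
-- insert only when the name is absent; a racing duplicate hits the
-- UNIQUE constraint instead of silently burning an auto_increment value
INSERT INTO USER1 (NAME)
SELECT 'asdf' FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM USER1 WHERE NAME = 'asdf');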