MySQL keeps ignoring MAX_EXECUTION_TIME

I am running a Node server with MySQL 8.
This query keeps popping up and freezing:
SELECT /*+ MAX_EXECUTION_TIME(2000) */ COUNT(*) FROM my_table
Even though this query has MAX_EXECUTION_TIME specified, it keeps executing well past that limit (basically, it never stops). The longest I have seen it running was around 20-30 days, and even then it didn't stop; I had to restart the server.
The main issue is that I can't even kill this query. Even after killing it, it keeps executing and never stops.
I can't even restart MySQL: after trying to restart it, MySQL just shuts down and never starts back up.
I had to reboot the server (which is totally unacceptable).
SHOW OPEN TABLES;
This shows that the table is in use. It doesn't affect reading, updating, or inserting data in the table. But as soon as I try to alter this table, or any other table that has a reference from or to it, the whole Node server freezes: a deadlock of queries occurs, and every query keeps waiting for the first one to end, which, as already stated, it never does.
The query in question isn't anywhere in my Node.js code, but it is being executed by the MySQL user that my Node server uses. I also use this user with Adminer.
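A diagnostic sketch that may help narrow this down, using only standard information_schema tables (the thread id passed to KILL is a placeholder):
-- Note: the MAX_EXECUTION_TIME hint and the max_execution_time
-- system variable (both in milliseconds) apply only to read-only
-- SELECT statements.
SELECT @@global.max_execution_time, @@session.max_execution_time;

-- Find the runaway statement and its transaction state.
SELECT p.id, p.user, p.time, p.state, t.trx_state, t.trx_started
FROM information_schema.processlist AS p
LEFT JOIN information_schema.innodb_trx AS t
       ON t.trx_mysql_thread_id = p.id
WHERE p.info LIKE '%my_table%';

-- Placeholder thread id: KILL QUERY stops only the statement,
-- while KILL <id> terminates the whole connection.
KILL QUERY 123;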
This query always shows up with this specific table. Below is the sample output of SHOW CREATE TABLE my_table; (table and column names have been changed):
CREATE TABLE `my_table` (
`id` int NOT NULL AUTO_INCREMENT,
`col1` varchar(150) COLLATE utf8_unicode_ci DEFAULT NULL,
`col2` int NOT NULL,
`col3` longtext COLLATE utf8_unicode_ci,
`col4` longtext COLLATE utf8_unicode_ci,
`col5a` mediumtext COLLATE utf8_unicode_ci,
`col5` int DEFAULT NULL COMMENT 'in seconds',
`col6` tinyint(1) NOT NULL DEFAULT '0',
`col7` int DEFAULT NULL,
`col8` int DEFAULT NULL,
`col9` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `test_level` (`col7`),
KEY `col9` (`col9`),
KEY `col6` (`col6`),
KEY `col5` (`col5`),
KEY `col2` (`col2`),
CONSTRAINT `my_table_ibfk_1` FOREIGN KEY (`col7`) REFERENCES `table_1` (`id`),
CONSTRAINT `my_table_ibfk_2` FOREIGN KEY (`col2`) REFERENCES `table_2` (`id`) ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE=InnoDB AUTO_INCREMENT=4078 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
The number of rows in this table is around 4,000.
Exact MySQL version: 8.0.19.

Related

Update index values extremely slow on MySQL

I have three tables, one is in database db1 and two are in database db2, all on the same MySQL server:
CREATE TABLE `db1`.`user` (
`id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`user_name` varchar(20) NOT NULL,
`password_hash` varchar(71) DEFAULT NULL,
`email_address` varchar(100) NOT NULL,
`registration_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`registration_hash` char(16) DEFAULT NULL,
`active` bit(1) NOT NULL DEFAULT b'0',
`public` bit(1) NOT NULL DEFAULT b'0',
`show_name` bit(1) NOT NULL DEFAULT b'0',
PRIMARY KEY (`id`),
UNIQUE KEY `user_name` (`user_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `db2`.`ref` (
`id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`name` varchar(100) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `db2`.`combination` (
`ref_id` bigint(20) UNSIGNED NOT NULL,
`user_id` bigint(20) UNSIGNED NOT NULL,
`arbitrary_number` tinyint(3) UNSIGNED NOT NULL DEFAULT '0',
PRIMARY KEY (`ref_id`,`user_id`),
KEY `combination_user` (`user_id`),
KEY `combination_number` (`user_id`,`arbitrary_number`),
CONSTRAINT `combination_ref` FOREIGN KEY (`ref_id`) REFERENCES `ref` (`id`) ON UPDATE CASCADE,
CONSTRAINT `combination_user` FOREIGN KEY (`user_id`) REFERENCES `db1`.`user` (`id`) ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The table db1.user has around 600 records, the table db2.ref has around 800 records and the table db2.combination has around 300K records.
Now using Perl and DBD::mysql I perform the following query:
UPDATE `db1`.`user` SET `id` = (`id` + 1000)
ORDER BY `id` DESC
However, this query always aborts with a message that the connection to the MySQL server was lost. Executing the same query via phpMyAdmin results in a timeout as well. Somehow the query just takes a very long time to execute. I guess this is because all the foreign key values need to be updated.
Setting the FOREIGN_KEY_CHECKS variable to OFF will not update the user_id column in the db2.combination table, which does need to be updated.
I have also tried to manipulate the different timeouts (as suggested all over the internet), like this:
SET SESSION net_read_timeout=3000;
SET SESSION net_write_timeout=3000;
SET SESSION wait_timeout=6000;
I have verified that the new values are actually set by retrieving them again. However, even with these long timeouts, the query still fails, and after about 30 seconds the connection to the MySQL server is lost again (while the UPDATE query is still executing).
Any suggestions on how to speed up this query are more than welcome.
BTW: The PK columns have a very large integer type. I will also make this type smaller (change to INT). Could this type change also improve the speed significantly?
UPDATE
I also performed an EXPLAIN for the query and it mentions in the Extra column that the query is doing a filesort. I would have expected that due to the indexes on the table (added them, as they were not there in the first place), no filesort would take place.
The 300K CASCADEs are probably the really slow part of the task, so let's avoid them. (However, there may be a check to verify the resulting links; this should be not-too-slow.)
1. Disable FOREIGN KEY processing.
2. Create new tables without FOREIGN KEYs: new_user, new_combination. (I don't know if new_ref is needed.)
3. Populate the tables:
INSERT INTO new_xx (user_id, ...)
    SELECT user_id + 1000, ...;
4. ALTER TABLE new_xx ADD FOREIGN KEY ...; (for each xx)
5. RENAME TABLE xx TO old_xx, new_xx TO xx;
6. DROP TABLE old_xx;
7. Re-enable FOREIGN KEY processing.
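A rough sketch of those steps, assuming the table definitions above (the new_ table names are placeholders, and column lists should be adjusted to match your real schema):
SET FOREIGN_KEY_CHECKS = 0;

-- LIKE copies indexes but not FOREIGN KEYs.
CREATE TABLE `db1`.`new_user` LIKE `db1`.`user`;
INSERT INTO `db1`.`new_user`
SELECT `id` + 1000, `user_name`, `password_hash`, `email_address`,
       `registration_time`, `registration_hash`, `active`, `public`, `show_name`
FROM `db1`.`user`;

CREATE TABLE `db2`.`new_combination` LIKE `db2`.`combination`;
INSERT INTO `db2`.`new_combination` (`ref_id`, `user_id`, `arbitrary_number`)
SELECT `ref_id`, `user_id` + 1000, `arbitrary_number`
FROM `db2`.`combination`;

-- InnoDB keeps these FKs pointing at the right parents through
-- the RENAMEs below.
ALTER TABLE `db2`.`new_combination`
  ADD FOREIGN KEY (`ref_id`) REFERENCES `db2`.`ref` (`id`) ON UPDATE CASCADE,
  ADD FOREIGN KEY (`user_id`) REFERENCES `db1`.`new_user` (`id`) ON UPDATE CASCADE;

RENAME TABLE `db1`.`user` TO `db1`.`old_user`,
             `db1`.`new_user` TO `db1`.`user`;
RENAME TABLE `db2`.`combination` TO `db2`.`old_combination`,
             `db2`.`new_combination` TO `db2`.`combination`;
DROP TABLE `db1`.`old_user`, `db2`.`old_combination`;

SET FOREIGN_KEY_CHECKS = 1;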

Foreign key constraint fails but referenced row exists

I'm running MySQL 5.7.21 on Amazon RDS.
I know this question has been asked a thousand times, but I'm getting the issue in a scenario where I wouldn't expect it, so please read through before downvoting or marking as duplicate.
I'm not restoring the database, just running single INSERT queries, so it is not a matter of ordering.
The referenced row does exist in the table; my colleagues and I have triple-checked it.
As one might expect, disabling the FK checks with SET foreign_key_checks = 0 does make the query work.
I've seen this happening because of different table charsets, but in this case, both use utf8mb4. Also both have collation set to utf8mb4_general_ci.
This is happening in a production environment, so dropping the tables and recreating them is something I would like to avoid.
Some additional information:
The FK constraint was created AFTER the original tables were already populated.
Here is the relevant portion of the current DDL:
CREATE TABLE `VehicleTickets` (
`id` varchar(50) NOT NULL,
`vehiclePlate` char(7) NOT NULL,
`organizationId` varchar(50) NOT NULL,
`createdAt` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updatedAt` timestamp NULL DEFAULT NULL,
`status` varchar(15) NOT NULL DEFAULT 'OPEN',
`description` text NULL DEFAULT NULL,
`ticketInfo` json DEFAULT NULL,
`externalId` varchar(100) GENERATED ALWAYS AS (json_unquote(json_extract(`ticketInfo`,'$.externalId'))) VIRTUAL,
`value` decimal(10,2) GENERATED ALWAYS AS (json_unquote(json_extract(`ticketInfo`,'$.value'))) VIRTUAL,
`issuedAt` timestamp GENERATED ALWAYS AS (json_unquote(json_extract(`ticketInfo`,'$.issuedAt'))) VIRTUAL NOT NULL,
`expiresAt` timestamp GENERATED ALWAYS AS (json_unquote(json_extract(`ticketInfo`,'$.expiresAt'))) VIRTUAL NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `VehicleTickets_externalId_unq_idx` (`externalId`,`organizationId`),
KEY `VehicleTickets_vehiclePlate_idx` (`vehiclePlate`),
KEY `VehicleTickets_organizationId_idx` (`organizationId`),
KEY `VehicleTickets_issuedAt_idx` (`createdAt`),
KEY `VehicleTickets_expiresAt_idx` (`expiresAt`),
CONSTRAINT `VehicleTickets_Organizations_fk`
FOREIGN KEY (`organizationId`) REFERENCES `Organizations` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE `Organizations` (
`id` varchar(50) NOT NULL,
`name` varchar(100) NOT NULL,
`taxPayerId` varchar(50) DEFAULT NULL,
`businessName` varchar(100) DEFAULT NULL,
`status` varchar(15) NOT NULL DEFAULT 'TESTING',
`createdAt` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updatedAt` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`activatedAt` timestamp NULL DEFAULT NULL,
`assetConfiguration` json DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
When I run:
select * from VehicleTickets where organizationId not in (
select id from Organizations
);
I get an empty result set.
However, if I run a query like this:
insert into `VehicleTickets` (
`id`,
`createdAt`,
`organizationId`,
`ticketInfo`,
`vehiclePlate`
)
values (
'... application generated id',
'... current date ',
'cjlchoksi01r8nfks3f51kht8', -- DOES EXIST on Organizations
'{ ... some JSON payload }',
'... vehicle plate'
)
This produces the following error:
Cannot add or update a child row: a foreign key constraint fails
(VehicleTickets, CONSTRAINT VehicleTickets_Organizations_fk
FOREIGN KEY (organizationId) REFERENCES Organizations (id))
Additionally, it gives me:
"errno": 1452,
"sqlState": "23000",
I've read through several threads regarding this issue, but couldn't find a similar case.
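For reference, one hedged check that sometimes surfaces this kind of mismatch is to compare the parent key at the byte level, since trailing whitespace or invisible characters can survive a visual inspection:
-- If num_chars and num_bytes differ unexpectedly, or raw_bytes
-- contains anything beyond plain ASCII, the "matching" parent id
-- is not byte-identical to the value being inserted.
SELECT id, CHAR_LENGTH(id) AS num_chars, LENGTH(id) AS num_bytes, HEX(id) AS raw_bytes
FROM Organizations
WHERE id LIKE 'cjlchoksi01r8nfks3f51kht8%';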

How to break a find-duplicates SQL query on a large table into multiple parts

I have a large table with ~3 million records in a MySQL database. I am trying to find duplicate rows in this table using the following query:
SELECT package_id
FROM version
WHERE metadata IS NOT NULL AND metadata <> '{}'
GROUP BY package_id, metadata HAVING COUNT(package_id) > 1
This query takes ~23 seconds to run on the database. However, our database host kills any query taking longer than 3 seconds using pt-kill. So I need a way to break this query down such that each subpart is a separate query and each one takes less than 3 seconds. Adding just a LIMIT constraint doesn't do it for this query, so how do I break a query up to work on different parts of the table?
Result of SHOW CREATE TABLE version
CREATE TABLE `version` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`package_id` bigint(20) unsigned NOT NULL,
`version_number` int(11) unsigned NOT NULL,
`current_state_id` tinyint(2) unsigned NOT NULL,
`md5sum` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_cs NOT NULL DEFAULT '',
`uri` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_cs NOT NULL DEFAULT '',
`filename` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_cs NOT NULL DEFAULT '',
`size` bigint(11) unsigned NOT NULL DEFAULT '0',
`metadata` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_cs DEFAULT NULL,
`storage_type_id` tinyint(2) unsigned NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
UNIQUE KEY `idx_version_package_id_version_number` (`package_id`,`version_number`),
KEY `idx_version_md5sum` (`md5sum`),
KEY `idx_version_metadata` (`metadata`(255)),
KEY `idx_version_current_state_id` (`current_state_id`),
KEY `storage_type_id` (`storage_type_id`),
CONSTRAINT `_fk_version_current_state_id` FOREIGN KEY (`current_state_id`) REFERENCES `state` (`id`),
CONSTRAINT `_fk_version_package_id` FOREIGN KEY (`package_id`) REFERENCES `package` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=3248761 DEFAULT CHARSET=utf8
As can be seen, there are many indexes on the table, including a composite index on the package_id + version_number combination. The problem is that this table is only going to get bigger, and I don't think optimization would scale even if it pulled me back into the 3-second range. So I need a way to partition this table and run queries on the separate parts.
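Because every duplicate group shares a single package_id, one hedged way to split the scan is into fixed package_id ranges; the bounds below are placeholders to tune until each chunk finishes in under 3 seconds:
-- Duplicates never span chunks, because the GROUP BY key starts
-- with package_id. Repeat with the next range (100000-200000, ...)
-- until the maximum package_id is covered.
SELECT package_id
FROM version
WHERE package_id >= 0 AND package_id < 100000
  AND metadata IS NOT NULL AND metadata <> '{}'
GROUP BY package_id, metadata
HAVING COUNT(package_id) > 1;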
Steps to improve speed:
Create a table version_small with just the columns id and package_id, with an index on package_id.
INSERT INTO version_small SELECT id, package_id FROM version;
Run your original query on the optimised table above; it should be much faster on the smaller table.
OR
Create a table version_small with just the columns id and package_id, plus an int counter, with a unique index on package_id.
INSERT INTO version_small (id, package_id) SELECT id, package_id FROM version ON DUPLICATE KEY UPDATE counter = counter + 1;
The rows with counter > 1 are the package_ids that have more than one entry.
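A self-contained sketch of that second variant, assuming the column types from the version table above (names are illustrative):
-- The unique key turns repeated package_ids into counter bumps.
CREATE TABLE version_small (
  id bigint unsigned NOT NULL,
  package_id bigint unsigned NOT NULL,
  counter int unsigned NOT NULL DEFAULT 1,
  UNIQUE KEY uk_package_id (package_id)
) ENGINE=InnoDB;

INSERT INTO version_small (id, package_id)
SELECT id, package_id FROM version
ON DUPLICATE KEY UPDATE counter = counter + 1;

-- package_ids with more than one version row:
SELECT package_id FROM version_small WHERE counter > 1;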

Slow Updates for Single Records by Primary Key

I am using MySQL 5.5.
I have an InnoDB table definition as follows:
CREATE TABLE `table1` (
`col1` int(11) NOT NULL AUTO_INCREMENT,
`col2` int(11) DEFAULT NULL,
`col3` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`col4` int(11) DEFAULT NULL,
`col5` datetime DEFAULT NULL,
`col6` tinyint(1) NOT NULL DEFAULT '0',
`col7` datetime NOT NULL,
`col8` datetime NOT NULL,
`col9` int(11) DEFAULT NULL,
`col10` tinyint(1) NOT NULL DEFAULT '0',
`col11` tinyint(1) DEFAULT '0',
PRIMARY KEY (`col1`),
UNIQUE KEY `index_table1_on_ci_ai_tn_sti` (`col2`,`col4`,`col3`,`col9`),
KEY `index_shipments_on_applicant_id` (`col4`),
KEY `index_shipments_on_shipment_type_id` (`col9`),
KEY `index_shipments_on_created_at` (`col7`),
KEY `idx_tracking_number` (`col3`)
) ENGINE=InnoDB AUTO_INCREMENT=7634960 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
The issue is UPDATEs. There are about 2M rows in this table.
A typical UPDATE query would be:
UPDATE table1 SET col6 = 1 WHERE col1 = 7634912;
We have about 5-10k QPS on this production server. These queries are often in the "Updating" state when viewed in the process list. The InnoDB lock information shows many record locks, but no gap locks, on index_table1_on_ci_ai_tn_sti. No transaction is waiting for a lock.
My feeling is that the unique index is causing the lag, but I'm not sure why. This is the only table we have that is defined this way, using a unique index.
I don't think the UNIQUE key has any impact (in this case).
Are you really setting a DATETIME to "1"? (Please check for other typos -- they could make a big difference.)
Are you trying to do 10K UPDATEs per second?
Is innodb_buffer_pool_size bigger than the table, but no bigger than 70% of available RAM?
What is the value of innodb_flush_log_at_trx_commit? 1 is default and secure, but slower than 2.
Can you put a bunch of updates into a single transaction? That would cut down the transaction overhead.
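A minimal sketch of that batching idea, with placeholder ids:
-- One COMMIT amortizes the per-transaction overhead (log flush,
-- etc.) across many primary-key updates.
START TRANSACTION;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634912;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634913;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634914;
COMMIT;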

Why is a full-text search index not supported in InnoDB, whereas it is supported in MyISAM?

My MySQL script:
DROP TABLE IF EXISTS `informationposting`;
CREATE TABLE `informationposting` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`StexId` varchar(9) DEFAULT NULL,
`TargetContinent` int(11) DEFAULT NULL,
`TargetCountry` int(11) DEFAULT NULL,
`TargetCity` varchar(15) DEFAULT NULL,
`InfoType` int(11) DEFAULT NULL,
`WebsiteLink` varchar(30) DEFAULT NULL,
`InfoPost` varchar(200) DEFAULT NULL,
`PostingDate` datetime DEFAULT NULL,
`ExpiryDate` datetime DEFAULT NULL,
`Title` varchar(100) DEFAULT NULL,
`NameOfOwner` varchar(45) DEFAULT NULL,
`RegistrationTypeIdOfOwner` int(11) DEFAULT NULL,
PRIMARY KEY (`Id`),
KEY `FK_InformationUser_Id_idx` (`StexId`),
FULLTEXT KEY `InfoPost` (`InfoPost`),
FULLTEXT KEY `NameOfOwner` (`NameOfOwner`),
FULLTEXT KEY `NameOfOwner_2` (`NameOfOwner`,`InfoPost`),
CONSTRAINT `FK_InformationUser_Id` FOREIGN KEY (`StexId`) REFERENCES `userdetails` (`StexId`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=20 DEFAULT CHARSET=utf8;
This script produces the following error:
Error Code: 1214. The used table type doesn't support FULLTEXT indexes
My MySQL version is 5.6. When I change the engine to MyISAM it works fine. Can anyone explain why MySQL behaves like this? I want to use the InnoDB engine.
InnoDB in 5.6 does support FULLTEXT index types. I just tested your CREATE TABLE statement on a test instance of MySQL 5.6.17, and it works fine.
I suggest that you double-check that you're running that statement on a 5.6 instance:
mysql> SELECT VERSION();
Also, InnoDB wants you to name your primary key column FTS_DOC_ID, in all capital letters. It doesn't give you an error if you name it something else, but in my experience loading data into the table will cause runaway memory growth. Maybe that is a bug they have since fixed; anyway, look out for it.
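A minimal sketch of that convention (ft_demo is a throwaway name; per the MySQL docs, the reserved column must be BIGINT UNSIGNED NOT NULL and its unique index must be named FTS_DOC_ID_INDEX):
-- InnoDB recognizes FTS_DOC_ID and reuses it as the full-text
-- document id instead of adding a hidden one.
CREATE TABLE ft_demo (
  FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  InfoPost varchar(200) DEFAULT NULL,
  UNIQUE KEY FTS_DOC_ID_INDEX (FTS_DOC_ID),
  FULLTEXT KEY (InfoPost)
) ENGINE=InnoDB;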