Trigger not working with longtext field - MySQL

I have 2 MySQL tables named uploads and uploads_log.
The uploads table has a field named json_values (datatype: longtext).
The uploads_log table has 2 fields, old_value and new_value (both datatype: longtext).
I have written an AFTER UPDATE trigger on the uploads table which simply puts the whole content of uploads.json_values into the uploads_log table's old_value and new_value columns.
The trigger is:
BEGIN
IF (NEW.json_values != OLD.json_values) THEN
INSERT INTO uploads_log (`file_id`, `user_id`, `field_name`, `old_value`, `new_value`, `ip`, `created_at`)
VALUES (OLD.`file_id`,
OLD.`user_id`,
'json_values',
OLD.json_values,
NEW.json_values,
NEW.user_ip,
NOW());
END IF;
END
My issue is: when I'm editing a small string in uploads.json_values the trigger works fine, but when I'm editing a really long string (378369 characters long) I get the following error:
SQLSTATE[42000]: Syntax error or access violation: 1118 Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.
To debug the issue I removed the trigger and EDITED uploads.json_values with the long string, and it worked fine. I also manually INSERTED that long string into uploads_log.old_value and that worked fine too, so the issue is with the trigger.
Does a trigger have some limitation on length?
Both tables use the InnoDB storage engine, and the MySQL version is 5.6.21.
uploads table Structure
CREATE TABLE `uploads` (
`file_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`user_id` int(11) unsigned NOT NULL,
`json_values` longtext COLLATE utf8_unicode_ci NOT NULL,
`read_values` longtext COLLATE utf8_unicode_ci,
`user_ip` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`file_id`),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=34444 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
uploads_log table Structure
CREATE TABLE `uploads_log` (
`action_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`file_id` int(11) unsigned DEFAULT NULL,
`user_id` int(11) unsigned DEFAULT NULL,
`field_name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`old_value` longtext COLLATE utf8_unicode_ci,
`new_value` longtext COLLATE utf8_unicode_ci,
`ip` varchar(30) COLLATE utf8_unicode_ci DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
PRIMARY KEY (`action_id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
I found this question and this one, but they are not related to an UPDATE trigger.
Any help/suggestion will be very much appreciated.
Thanks.

I also had a similar issue, though not with a trigger; this question helped me out.
Change the innodb_log_file_size value located in C:\xampp\mysql\bin\my.ini (if you are using XAMPP) to something higher than 5M,
or, as pointed out by @Vatev, you can set innodb_log_file_size = 128M.
You can use this MySQL command to get the innodb_log_file_size value; it will give you the result in bytes.
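The command itself is not shown above; presumably it is something along these lines (a standard way to read the variable, value reported in bytes):
SHOW VARIABLES LIKE 'innodb_log_file_size';
-- or, equivalently:
SELECT @@GLOBAL.innodb_log_file_size;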

Related

Replace the ON DUPLICATE KEY UPDATE statement

MySQL version 5.7, ONLY_FULL_GROUP_BY is enabled
Please tell me an effective way to solve the following problem: I need to insert a row into the table only if there is no existing record matching on several fields. If there is a matching record, then it should be updated with the new data.
The ON DUPLICATE KEY UPDATE statement does not suit me, because the type and uuid_session fields are not unique (for example, there may be several records with different type but the same uuid_session).
Here is an example of a fake query to better illustrate my question:
INSERT INTO cameraStatus(time, uuid_session)
VALUES ('2022-12-14 16:01:00', '01234567-8901-2345-6789-012345678901')
ON DUPLICATE KEY UPDATE (type, uuid_session) = ("WORK", "55555555-8901-2345-6789-012345678901");
My table:
CREATE TABLE `cameraStatus` (
`id` int NOT NULL,
`camera_id` int NOT NULL DEFAULT '0',
`time` timestamp NOT NULL DEFAULT '2021-12-31 21:00:00',
`type` varchar(50) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'INFO',
`message` mediumtext COLLATE utf8_unicode_ci,
`uuid_session` varchar(36) CHARACTER SET utf8mb3 COLLATE utf8_unicode_ci DEFAULT '01234567-8901-2345-6789-012345678901'
)
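One common pattern for this kind of "update the matching row, otherwise insert" requirement, when no unique key covers the matching columns, is to try the UPDATE first and only INSERT when nothing matched. A minimal sketch of that idea against the table above (the matching columns and values are illustrative, id/camera_id handling is omitted just as in the fake query, and without a unique key the two statements should run inside one transaction to avoid races):
-- Try to update an existing row matching on type + uuid_session (values illustrative).
UPDATE cameraStatus
   SET time = '2022-12-14 16:01:00'
 WHERE type = 'WORK'
   AND uuid_session = '01234567-8901-2345-6789-012345678901';

-- Insert only if no matching row exists.
INSERT INTO cameraStatus (time, type, uuid_session)
SELECT '2022-12-14 16:01:00', 'WORK', '01234567-8901-2345-6789-012345678901'
  FROM DUAL
 WHERE NOT EXISTS (SELECT 1
                     FROM cameraStatus
                    WHERE type = 'WORK'
                      AND uuid_session = '01234567-8901-2345-6789-012345678901');
If the real rule is that each (type, uuid_session) pair should occur at most once, adding a composite UNIQUE key on those two columns would let plain INSERT ... ON DUPLICATE KEY UPDATE work as usual.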

MySQL : SELECT on big table takes a lot of time. Solutions?

My app gets stuck for hours on simple queries like:
SELECT COUNT(*) FROM `item`
Context:
This table is around 200 GB+ with 50M+ rows.
We have an RDS instance on AWS with 2 CPUs and 16 GiB RAM (db.r6g.large).
This is the table structure SQL dump:
/*
Target Server Type : MySQL
Target Server Version : 80023
File Encoding : 65001
*/
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;
DROP TABLE IF EXISTS `item`;
CREATE TABLE `item` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`status` tinyint DEFAULT '1',
`source_id` int unsigned DEFAULT NULL,
`type` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`url` varchar(2048) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`title` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`sku` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`price` decimal(20,4) DEFAULT NULL,
`price_bc` decimal(20,4) DEFAULT NULL,
`price_original` decimal(20,4) DEFAULT NULL,
`currency` varchar(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`description` text CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,
`image` varchar(1024) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`time_start` datetime DEFAULT NULL,
`time_end` datetime DEFAULT NULL,
`block_update` tinyint(1) DEFAULT '0',
`status_api` tinyint(1) DEFAULT '1',
`data` json DEFAULT NULL,
`created_at` int unsigned DEFAULT NULL,
`updated_at` int unsigned DEFAULT NULL,
`retailer_id` int DEFAULT NULL,
`hash` char(32) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`count_by_hash` int DEFAULT '1',
`item_last_update` int DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `sku_retailer_idx` (`sku`,`retailer_id`),
KEY `updated_at_idx` (`updated_at`),
KEY `time_end_idx` (`time_end`),
KEY `retailer_id_idx` (`retailer_id`),
KEY `hash_idx` (`hash`),
KEY `source_id_hash_idx` (`source_id`,`hash`) USING BTREE,
KEY `count_by_hash_idx` (`count_by_hash`) USING BTREE,
KEY `created_at_idx` (`created_at`) USING BTREE,
KEY `title_idx` (`title`),
KEY `currency_idx` (`currency`),
KEY `price_idx` (`price`),
KEY `retailer_id_title_idx` (`retailer_id`,`title`) USING BTREE,
KEY `source_id_idx` (`source_id`) USING BTREE,
KEY `source_id_count_by_hash_idx` (`source_id`,`count_by_hash`) USING BTREE,
KEY `status_idx` (`status`) USING BTREE,
CONSTRAINT `fk-source_id` FOREIGN KEY (`source_id`) REFERENCES `source` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1858202585 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
SET FOREIGN_KEY_CHECKS = 1;
Could partitioning the table help with a simple query like this?
Do I need to increase the RAM of the RDS instance? If so, what configuration do I need?
Is NoSQL better suited to this kind of structure?
Do you have any advice/solutions/fixes so the app can run those queries (we would like to keep all the data and not erase it, if possible)?
"SELECT COUNT(*) FROM item" needs to scan an index. The smallest index is about 200MB, so that seems like it should not take "minutes".
There are probably multiple queries that do full table scans. Such will bump out all the cached data from the ~11GB of cache (the buffer_pool) and do that about 20 times. That's a lot of I/O and a lot of elapsed time. Meanwhile, most other queries will run slowly because their cached data is being bumped out.
The resolution:
Locate these naughty queries. RDS probably gives you access to the "slowlog".
Grab the slowlog and run pt-query-digest or mysqldumpslow -s t to find the "worst" queries (see the sketch after this list for a table-based alternative).
Then we can discuss them.
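If pulling the slow log file from RDS is awkward, and assuming the slow query log is enabled with log_output=TABLE (which RDS supports), the worst offenders can also be read straight from the mysql.slow_log table; a minimal sketch:
SELECT start_time, query_time, rows_examined, LEFT(sql_text, 200) AS sql_snippet
  FROM mysql.slow_log
 ORDER BY query_time DESC
 LIMIT 20;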
There are some redundant indexes; removing them won't solve the problem. A rule: If you have INDEX(a), INDEX(a,b), you don't need the former.
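Applying that rule to the schema above, source_id_idx is covered by source_id_hash_idx (and source_id_count_by_hash_idx), and retailer_id_idx is covered by retailer_id_title_idx. A sketch of dropping them (verify against your own query patterns first; this alone will not fix the COUNT problem):
ALTER TABLE item
  DROP INDEX source_id_idx,
  DROP INDEX retailer_id_idx;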
If hash is some kind of scrambled value, it is likely that a single-row lookup (or update) will require a disk hit (and bump something else out of the cache).
decimal(20,4) takes 10 bytes and allows values up to 9,999,999,999,999,999.9999; that seems excessive. (Shrinking it won't save much space; something to keep in mind for the future.)
I see that AUTO_INCREMENT has reached 1.8 billion. If there are only 50M rows, does the processing do a lot of DELETEs? Or maybe REPLACE? IODKU (INSERT ... ON DUPLICATE KEY UPDATE) is better than REPLACE.
Thanks for all the advice here, but the problem was that we were using the MySQL json type for a very heavy column. Removing this column, or even changing it to varchar, made the COUNT(id) around 1000x faster (adding WHERE id > 1 also helped).
Note: it was impossible to just delete the column as it was; we had to change it to varchar first.
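A rough sketch of that two-step change on the data column from the schema above (the varchar length is an arbitrary assumption, and on a table this size each ALTER is a long, disk-heavy rebuild):
-- Step 1: convert the json column to varchar first (length chosen arbitrarily here; it must fit the stored values).
ALTER TABLE item MODIFY `data` varchar(2048) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL;
-- Step 2: drop the column once the conversion has gone through.
ALTER TABLE item DROP COLUMN `data`;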

How to overcome performance issue when converting utf8mb4 to latin1?

Out of ignorance, I altered a few tables without specifying the collation.
That caused the changed columns, which used to be latin1, to be converted to utf8mb4.
This brought a HUGE performance loss when running joins. And when I say HUGE, I mean a fraction of a second changed to one hour or more!
So I made another request to convert it back to latin1.
And here comes the problem: a mere 60k-row table with ONE utf8mb4 column of 64 characters required 10 hours to complete. No, it is not a mistake. TEN hours. And my even bigger problem is that I have other tables with millions of rows, giving me an ETA years from today!
So now I wonder what my options are, because I can't afford to have these tables read-only for longer than one day.
I know that MySQL ALTER creates a copy of the table. That makes sense because this is a field size change, so I doubt I have the option to use ALGORITHM=INPLACE.
If I cannot do INPLACE, then I cannot use the LOCK=NONE option.
Why in the world could a utf8mb4 -> latin1 conversion make such a big impact?
Note that the converted column is indexed, and this may be a reason for the impact!
ANY suggestion or a link would be greatly appreciated!
Maybe the solution would be to drop the index (to avoid funky multibyte issues in the index conversion), do a fast ALTER, and then add the index back? (A rough sketch of that idea follows the table definitions below.)
Thanks in advance for any serious suggestion; I suspect I may not find much help because of the uniqueness of the problem.
EDIT
jobs | CREATE TABLE `jobs` (
`auto_inc_key` int(11) NOT NULL AUTO_INCREMENT,
`request_entered_timestamp` datetime NOT NULL,
`hash_id` char(64) CHARACTER SET latin1 NOT NULL,
`name` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
`host` char(20) CHARACTER SET latin1 NOT NULL,
`user_id` int(11) NOT NULL,
`start_date` datetime NOT NULL,
`end_date` datetime NOT NULL,
`state` char(12) CHARACTER SET latin1 NOT NULL,
`location` varchar(50) NOT NULL,
`value` int(10) NOT NULL DEFAULT '0',
`aggregation_job_id` char(64) CHARACTER SET latin1 DEFAULT NULL,
`aggregation_job_order` int(11) DEFAULT NULL,
PRIMARY KEY (`auto_inc_key`),
KEY `host` (`host`),
KEY `hash_id` (`hash_id`),
KEY `user_id` (`user_id`,`request_entered_timestamp`),
KEY `request_entered_timestamp_idx` (`request_entered_timestamp`)
) ENGINE=InnoDB AUTO_INCREMENT=9068466 DEFAULT CHARSET=utf8mb4
jobs_archive | CREATE TABLE `jobs_archive` (
`auto_inc_key` int(11) NOT NULL AUTO_INCREMENT,
`request_entered_timestamp` datetime NOT NULL,
`hash_id` char(64) CHARACTER SET latin1 NOT NULL,
`name` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
`host` char(20) CHARACTER SET latin1 NOT NULL,
`user_id` int(11) NOT NULL,
`start_date` datetime NOT NULL,
`end_date` datetime NOT NULL,
`state` char(12) CHARACTER SET latin1 NOT NULL,
`value` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`auto_inc_key`),
KEY `host` (`host`),
KEY `hash_id` (`hash_id`),
KEY `user_id` (`user_id`,`request_entered_timestamp`)
) ENGINE=InnoDB AUTO_INCREMENT=239432 DEFAULT CHARSET=utf8mb4
(taken from PROCEDURE, but you catch the drift...)
INSERT INTO jobs_archive (SELECT * FROM jobs WHERE (TIMESTAMPDIFF(DAY, request_entered_timestamp, starttime) > days));
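A rough sketch of the drop-index / convert / re-add idea mentioned above, using the jobs table's utf8mb4 name column purely for illustration (no index on name appears in the definition above, so the index name here is hypothetical; substitute the column and index actually involved, and remember latin1 cannot hold characters outside its range):
ALTER TABLE jobs DROP INDEX name_idx;  -- hypothetical index on the affected column
ALTER TABLE jobs MODIFY `name` varchar(128) CHARACTER SET latin1 NOT NULL;  -- convert back to latin1 (table copy, but no secondary index to rebuild)
ALTER TABLE jobs ADD INDEX name_idx (`name`);  -- re-create the index afterwards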

The size of a table does not match the records in MySQL, AWS RDS

I have a problem with a MySQL table. The table has only 385 records, and if I export them via Workbench the file is 894 KB in size. The problem is that the Data length is 10 GB when I inspect the table.
Note: when querying the table, it only shows 385 records.
The only solution I have found to restore the size of the table is to drop it and import it again.
I hope you can help me verify whether it is a MySQL error or an attack; I don't know what the behavior is due to.
The MySQL server is on AWS RDS.
I have read the following article: https://docs.aws.amazon.com/es_es/AmazonRDS/latest/UserGuide/MySQL.KnownIssuesAndLimitations.html
CREATE TABLE `logs` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(191) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
`file` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci NOT NULL,
`status` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT 'yes',
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=130204 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci;
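For what it's worth, the "Data length" figure comes from the InnoDB table statistics, and a huge value next to a tiny row count often means space left behind by deleted rows with large longtext values (the file column here) that has not been reclaimed. A minimal sketch of checking the statistics and rebuilding the table (the schema name is illustrative; OPTIMIZE TABLE on InnoDB rebuilds the table and briefly locks it):
SELECT table_rows, data_length, index_length, data_free
  FROM information_schema.TABLES
 WHERE table_schema = 'your_database'  -- illustrative schema name
   AND table_name = 'logs';

OPTIMIZE TABLE logs;  -- InnoDB maps this to a table rebuild, reclaiming unused space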

1548 [Warning] Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT

I got the following warning when running this simple statement, and I was curious as to why I got it:
UPDATE `Table1`
SET `City`='Miami',
`ExpDate`='201227',
`User`='JDoe',
`UpdDate`='2015-02-17 16:11:25'
WHERE `id` = 61
Here is the Table1 structure:
CREATE TABLE `Table1` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`User` varchar(10) COLLATE utf8_bin DEFAULT NULL,
`City` varchar(25) COLLATE utf8_bin DEFAULT NULL,
`ExpDate` varchar(10) COLLATE utf8_bin DEFAULT NULL,
`UpdDate` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `Unique_Index` (`User`,`City`),
UNIQUE KEY `id` (`id`),
KEY `ALT1_IDX_Table1` (`User`,`City`)
) ENGINE=InnoDB AUTO_INCREMENT=64 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
Full Error from log:
2015-02-17 16:10:08 1548 [Warning] Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it invokes a trigger or a stored function that inserts into an AUTO_INCREMENT column. Inserted values cannot be logged correctly. Statement: UPDATE `Table1`
SET `City`='Miami',
`ExpDate`='201227',
`User`='JDoe',
`UpdDate`='2015-02-17 16:11:25'
WHERE `id` = 61
Finally, the error shows exactly what's wrong here.
Statement is unsafe because it invokes a trigger or a stored function that inserts into an AUTO_INCREMENT column
The error is not in the table itself; it is in a trigger or stored function that fires when this table gets updated.
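A minimal sketch of how one might track down the trigger (or the triggers calling a stored function) attached to this table, using standard metadata queries:
SHOW TRIGGERS LIKE 'Table1';

SELECT trigger_name, action_timing, event_manipulation
  FROM information_schema.TRIGGERS
 WHERE event_object_table = 'Table1';
Switching binlog_format to MIXED or ROW would also silence the warning, since such statements are then logged row-based, but reviewing what the trigger inserts is the more direct fix.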