longtext in select makes query extremely slow - mysql

I have a plain flat table with the structure below:
CREATE TABLE `oc_pipeline_logging` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`source` TEXT,
`comments` TEXT,
`data` LONGTEXT,
`query` TEXT,
`date_added` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`ip` VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MYISAM AUTO_INCREMENT=20 DEFAULT CHARSET=latin1
In this table I basically log every error raised anywhere in the code.
The data column is defined as LONGTEXT, and the values currently stored in it are almost 32 MB per record.
So now a plain SELECT query takes a lot of time to fetch the results, e.g.:
SELECT * FROM oc_pipeline_logging limit 10
In fact, when I run a query against this table in the terminal I get the error below:
mysql> SELECT COMMENTs,DATA FROM oc_pipeline_logging WHERE id = 18;
ERROR 2020 (HY000): Got packet bigger than 'max_allowed_packet' bytes
The same query runs fine in SQLyog, but it takes a lot of time.
How can I execute this query faster and fetch my rows quickly?

I tried the same thing on my end and got the same kind of error.
There is a solution: increase the packet size limit in my.ini, e.g.:
max_allowed_packet=2048M
You can change the limit to suit your data (note that the server caps max_allowed_packet at 1 GB, so anything above 1024M is truncated to that maximum). Hope this resolves the problem.
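As a quick sketch of the same idea at runtime, without editing my.ini (assumes sufficient privileges; note that the mysql command-line client also has its own max_allowed_packet, which is where the ERROR 2020 above comes from, and it can be raised with mysql --max_allowed_packet=1024M):
-- Check the current server-side value (in bytes)
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Raise it for new connections without a restart; 1 GB is the maximum
SET GLOBAL max_allowed_packet = 1024 * 1024 * 1024;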

Related

Select query on MYSQL table taking long time and getting timed out

I have a MySQL table with 2 million rows. When I run any SELECT query on the table it takes a long time to execute and ultimately does not return any result.
I have tried running the SELECT query from both MySQL Workbench and the terminal; the same thing happens in both.
Below is the table:
CREATE TABLE `object_master` (
`key` varchar(300) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`bucket` varchar(255) DEFAULT NULL,
`object_name` varchar(300) DEFAULT NULL,
`object_type` varchar(50) DEFAULT NULL,
`last_modified_date` datetime DEFAULT NULL,
`last_accessed_date` datetime DEFAULT NULL,
`is_deleted` tinyint(1) DEFAULT '0',
`p_object` varchar(300) DEFAULT NULL,
`record_insert_time` datetime DEFAULT CURRENT_TIMESTAMP,
`record_update_time` datetime DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`key`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
And below is the SELECT query I'm running:
select `key` from object_master;
Even with LIMIT 1 it takes a long time and does not return a result; it just times out:
select `key` from object_master limit 1;
Could anyone tell me what the real reason might be here?
I should also mention that before I ran these SELECT queries, an ALTER TABLE statement was executed on this table; it timed out after 10 minutes and the table remained unaltered.
This was the ALTER statement:
alter table object_master
modify column object_name varchar(1096) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL;
Note: Using MySQL version 5.7.24, with MySQL running in a Linux Docker container.
So I got this resolved:
A Java batch program had been executing a query on the same table for a long time and was holding a lock on it. I found this through the "processlist" table of information_schema.
I had to kill the long-running query from the terminal:
mysql> kill <processlist_id> ;
That released the lock on the table and everything was resolved.
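For anyone hitting the same thing, a minimal sketch of the processlist lookup (column names as in information_schema.PROCESSLIST on MySQL 5.7):
-- Find long-running statements that may be holding locks
SELECT id, user, host, db, command, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;
-- the id column is the value to pass to KILL, as above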
I got help from the SO answers below:
Unlocking tables if thread is lost
How do I find which transaction is causing a "Waiting for table metadata lock" state?

Mysql IO write too much

I have a table on my server that uses the MyISAM engine. There are 10 UPDATE statements per second on average. I found that the mysqld process writes far more to disk than the theoretical amount. After experimenting, I suspect that modifying any column rewrites the entire row of data. The following is the experiment...
My table:
CREATE TABLE `test_update` (
`id` int(11) NOT NULL DEFAULT '0',
`str1` blob,
`str2` blob,
`str3` blob,
`update_time` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
KEY `update_time` (`update_time`)
) ENGINE=MyISAM;
I inserted 100,000 rows; each row holds about 30 KB of string data (10 KB per blob). After that I randomly update the update_time column at 1 row/sec:
import random
import time

# cur/conn: an already-open MySQL cursor and connection
end = time.time()
while True:
    now = int(time.time())
    randomid = random.randint(1, 100000)  # ids assumed 1..100000
    sql = "update test_update set update_time=%d where id=%d" % (now, randomid)
    cur.execute(sql)
    conn.commit()
    # throttle to roughly one update per second
    slp_t = 1 - (time.time() - end)
    if slp_t > 0:
        time.sleep(slp_t)
    end = time.time()
and iotop shows:
https://i.stack.imgur.com/sJa8y.png
It seems like modifying an int column rewrites the entire row (or even more). Is that true? If so, why was it designed like this, and what should I do to avoid this waste?

MySQL query will not return a value or give an error message (query optimization)

As a homework assignment I've been given three different data dumps of Reddit posts that I'm supposed to write a bunch of queries for. The data dumps are 12 MB, 1 GB, and 2.5 GB compressed. I started with the smallest dataset and wrote queries that worked fine; however, when I run them on the larger datasets they take a lot of time to execute. Most of them work, but one query takes so long that it never finishes.
The query is supposed to find the users with the highest and lowest total post score (the sum of each user's post scores).
(SELECT `post_author` AS AUTHOR, SUM(`post_score`) AS SCORE FROM `post` GROUP BY `post_author` ORDER BY `SCORE` ASC LIMIT 1)
UNION
(SELECT `post_author` AS AUTHOR, SUM(`post_score`) AS SCORE FROM `post` GROUP BY `post_author` ORDER BY `SCORE` DESC LIMIT 1)
I'm using EasyPHP and managing the database through phpMyAdmin.
Now I'm not sure whether this is a memory problem or a time problem. I tried raising 'ExecTimeLimit' in the phpMyAdmin config, but that didn't seem to make a difference. I would also appreciate any tips on what I can look into to make the query more efficient.
Create SQL:
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET time_zone = "+00:00";
CREATE TABLE IF NOT EXISTS `post` (
`post_id` bigint(11) NOT NULL,
`post_body` mediumtext NOT NULL,
`post_parent` int(11) NOT NULL,
`post_link` int(11) NOT NULL,
`post_created` date NOT NULL,
`post_author` varchar(50) NOT NULL,
`post_sub_id` int(11) NOT NULL,
`post_score` int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
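One commonly suggested thing to look into (a sketch only, not a guaranteed fix; the index name idx_author_score is made up): a composite index on the grouping column lets the GROUP BY read and aggregate from the index instead of scanning and sorting the whole table.
ALTER TABLE `post` ADD INDEX `idx_author_score` (`post_author`, `post_score`);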

MySQL UPDATE Statement using LIKE with 2 Tables Takes Decades

Can you please advise why such a query would take so long (literally 20-30 minutes)?
I seem to have proper indexes set up, don't I?
UPDATE `temp_val_import_435` t1,
`attr_upc` t2 SET t1.`attr_id` = t2.`id` WHERE t1.`value` LIKE t2.`upc`
CREATE TABLE `attr_upc` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`upc` varchar(255) NOT NULL,
`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `upc` (`upc`),
KEY `last_update` (`last_update`)
) ENGINE=InnoDB AUTO_INCREMENT=102739 DEFAULT CHARSET=utf8
CREATE TABLE `temp_val_import_435` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`attr_id` int(11) DEFAULT NULL,
`translation_id` int(11) DEFAULT NULL,
`source_value` varchar(255) NOT NULL,
`value` varchar(255) DEFAULT NULL,
`count` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `core_value_id` (`core_value_id`),
KEY `translation_id` (`translation_id`),
KEY `source_value` (`source_value`),
KEY `value` (`value`),
KEY `count` (`count`)
) ENGINE=InnoDB AUTO_INCREMENT=32768 DEFAULT CHARSET=utf8
Ed Cottrell's solution worked for me. Using = instead of LIKE sped up a smaller test query on 1000 rows by a lot.
I measured two ways: one in phpMyAdmin, the other by looking at the DOM load time (which of course involves other processes).
DOM load went from 44 seconds to 1 second, a roughly 98% reduction.
But the difference in query execution time was much more dramatic, going from 43.4 seconds to 0.0052 seconds, a decrease of 99.988%. Pretty good. I will report back on results from huge datasets.
Use = instead of LIKE. = should be much faster than LIKE -- LIKE is only for matching patterns, as in '%something%', which matches anything with "something" anywhere in the text.
If you have this query:
SELECT * FROM myTable where myColumn LIKE 'blah'
MySQL can optimize this by pretending you typed myColumn = 'blah', because it sees that the pattern is fixed and has no wildcards. But what if you have this data in your upc column:
blah
foo
bar
%foo%
%bar
etc.
MySQL can't optimize your query in advance, because it's possible that the text it is trying to match is a pattern, like %foo%. So it has to evaluate a LIKE pattern match for every single value of temp_val_import_435.value against every single value of attr_upc.upc. With a simple = and the indexes you have defined, this is unnecessary, and the query should be dramatically faster.
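Applied to the statement in the question, the rewrite with = would look roughly like this (a sketch, assuming exact matches are what is wanted):
UPDATE `temp_val_import_435` t1
INNER JOIN `attr_upc` t2 ON t1.`value` = t2.`upc`
SET t1.`attr_id` = t2.`id`;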
In essence you are joining on a LIKE, which is going to be problematic (you would need EXPLAIN to see if MySQL is utilizing indexes at all). Try this:
UPDATE `temp_val_import_435` t1
INNER JOIN `attr_upc` t2
ON t1.`value` LIKE t2.`upc`
SET t1.`attr_id` = t2.`id`
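To check the index-usage point, something like the following would show the join plan (a sketch; EXPLAIN on an UPDATE requires MySQL 5.6 or later):
EXPLAIN UPDATE `temp_val_import_435` t1
INNER JOIN `attr_upc` t2 ON t1.`value` LIKE t2.`upc`
SET t1.`attr_id` = t2.`id`;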

Mysql Unknown column on existing column (only insert)

I have been researching this problem for quite some time but have not been able to find any helpful results.
I have a table:
CREATE TABLE `jobs` (
`jb_id` MEDIUMINT(7) UNSIGNED NOT NULL AUTO_INCREMENT,
`wo_id` MEDIUMINT(7) UNSIGNED NOT NULL,
`file_name` VARCHAR(140) NOT NULL COLLATE 'latin1_bin',
`jb_status` TINYINT(1) UNSIGNED NOT NULL DEFAULT '0',
`descr` TEXT NULL COLLATE 'latin1_bin',
`syncronized` TINYINT(2) UNSIGNED NOT NULL,
`failedcnt` TINYINT(3) UNSIGNED NOT NULL,
`clip_title` TINYTEXT NULL COLLATE 'latin1_bin',
`clip_description` TEXT NULL COLLATE 'latin1_bin',
`clip_tags` TINYTEXT NULL COLLATE 'latin1_bin',
PRIMARY KEY (`jb_id`),
INDEX `woid` (`wo_id`),
INDEX `job_stat` (`jb_status`),
INDEX `synced` (`syncronized`),
INDEX `failedcnt` (`failedcnt`),
INDEX `file_name` (`file_name`)
)
COLLATE='latin1_bin'
ENGINE=MyISAM;
When I run SELECT or UPDATE commands, everything works OK.
select jobs.clip_description from jobs Limit 1;
/* 0 rows affected, 1 rows found. Duration for 1 query: 0.768 sec. */
UPDATE `jobs` SET `clip_description`='test' WHERE `jb_id`=2 LIMIT 1;
But when I try to run
INSERT INTO `jobs` (`clip_description`) VALUES ('test');
/* SQL Error (1054): Unknown column 'clip_description' in 'field list' */
This also happened yesterday, but as I did not have much time to deal with the issue then, I created a new table with a different name but the same structure, copied over all the data, and then renamed both tables, and it worked again. That is, until about two hours ago when the issue returned. It is not really an option to keep copying the table every 12 hours.
To create the copy I used:
CREATE TABLE jobs_new LIKE jobs; INSERT jobs_new SELECT * FROM jobs;
After which previously mentioned insert would work.
Any help will be greatly appreciated.
EDIT:
If it makes any difference I'm running
Server version: 5.5.28-0ubuntu0.12.04.2-log (Ubuntu)
On ubuntu server 12.04 LTS 64bit
It looks like you have some other constraint related to the table, maybe a trigger or a calculated column that depends on the clip_description column, doesn't it?
Please check the dependencies and triggers on this table.
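A minimal way to check for such triggers (the LIKE pattern of SHOW TRIGGERS matches the table name):
SHOW TRIGGERS LIKE 'jobs';
-- or via information_schema
SELECT trigger_name, event_manipulation, action_statement
FROM information_schema.triggers
WHERE event_object_table = 'jobs';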