I have the following database:
CREATE TABLE IF NOT EXISTS `musics` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`active` tinyint(1) NOT NULL DEFAULT '1',
`slug` varchar(50) NOT NULL,
`movie_id` int(11) NOT NULL,
`added` int(11) NOT NULL,
`updated` int(11) NOT NULL,
`featured` tinyint(1) NOT NULL,
`hits` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `slug` (`slug`),
KEY `active` (`active`),
KEY `featured` (`featured`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3339 ;
And the query UPDATE musics SET hits = 7 WHERE id = '1770' gets run on each page load of a music album. Is there any way to optimize this query?
First you could check whether the statement actually shows up as slow, e.g. via the slow query log. For example, at runtime:
set global long_query_time=1;
and permanently in my.cnf:
[mysqld]
...
long_query_time=1
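Depending on the MySQL version, the slow query log itself may also need to be switched on there (a sketch for MySQL 5.1+, where the slow_query_log variable exists; the file path is only an example):
slow_query_log=1
slow_query_log_file=/var/log/mysql/mysql-slow.log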
You can also use EXPLAIN to see how the lookup is executed:
explain select * from musics where id = 1770;
As the MySQL docs say: "The slow query log consists of SQL statements that took more than long_query_time seconds to execute." long_query_time is a server system variable, so it can be changed as shown above.
As for the optimization question: you could write = 1770 without the quotes, because id is an integer and not a string.
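For example, the statement from the question would then read:
UPDATE musics SET hits = 7 WHERE id = 1770;
With an integer literal MySQL does not have to cast the value, although for a primary-key lookup the difference is usually negligible.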
I have three almost identical queries executed one by one in my app. Two of the queries execute fairly fast, but one is much slower. Here are the queries:
update `supremeshop_product_attributes` set `id_step` = 899 where `id_step` = 1 and `id_product` = 32641
540ms
update `supremeshop_product_attributes` set `id_step` = 1 where `id_step` = 0 and `id_product` = 32641
1.71s
update `supremeshop_product_attributes` set `id_step` = 0 where `id_step` = 899 and `id_product` = 32641
9.75ms
Create table query
CREATE TABLE `supremeshop_product_attributes` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`id_product` int(11) NOT NULL,
`id_attribute` int(11) NOT NULL,
`id_attribute_group` int(11) NOT NULL,
`id_step` int(11) NOT NULL,
`id_alias` int(9) NOT NULL,
`price_retail` int(11) NOT NULL DEFAULT '0',
`behaviour` int(1) NOT NULL DEFAULT '0',
`active` int(1) NOT NULL DEFAULT '1',
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `supremeshop_product_attributes_id_product_index` (`id_product`),
KEY `supremeshop_product_attributes_id_attribute_index` (`id_attribute`),
KEY `supremeshop_product_attributes_id_attribute_group_index` (`id_attribute_group`),
KEY `supremeshop_product_attributes_id_step_index` (`id_step`)
) ENGINE=InnoDB AUTO_INCREMENT=3012991 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Facts:
the table has 1,500,000 rows
each query updates around 650 rows
each searched/updated field has an index (id (primary), id_product, id_step, id_alias)
only the second query takes much longer to execute (every time, even when run as a single query rather than one by one)
each variable used in the queries is an integer
if I execute the queries directly in phpMyAdmin -> SQL, I get the same execution times as in my Laravel app, so the query itself is somehow slow
The question is: why? And is there a good explanation or fix for this problem?
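One thing worth checking (a sketch; EXPLAIN accepts UPDATE statements from MySQL 5.6 on) is whether all three statements get the same execution plan, i.e. whether the optimizer picks the same index for each combination of id_step and id_product:
EXPLAIN UPDATE `supremeshop_product_attributes` SET `id_step` = 899 WHERE `id_step` = 1 AND `id_product` = 32641;
EXPLAIN UPDATE `supremeshop_product_attributes` SET `id_step` = 1 WHERE `id_step` = 0 AND `id_product` = 32641;
EXPLAIN UPDATE `supremeshop_product_attributes` SET `id_step` = 0 WHERE `id_step` = 899 AND `id_product` = 32641;
If the slow statement is the one where the chosen single-column index matches far more rows (for example if id_step = 0 is very common), a composite index covering both conditions might help; the index name below is just an example:
ALTER TABLE `supremeshop_product_attributes` ADD INDEX `idx_product_step` (`id_product`, `id_step`);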
Many thanks for any help!
Mark.
Hello, I am having trouble running a MySQL UPDATE across databases. The UPDATE query hangs with the error "Lock wait timeout exceeded; try restarting transaction". I know similar questions have already been raised, but I could not find any working solution. I am using this query inside a PHP script, but when I run it directly on the MySQL server it does the same thing. When I kill the process, it completes. Here is what hangs. PS: This started about 2 days ago; before that, the query ran without problems. The temp table is only filled when an admin pushes data into it from an HTML form. The teams_to_esc table grows by about 20 rows per day.
UPDATE query -
UPDATE escalations.teams_to_esc
JOIN domguard.temp_table ON escalations.teams_to_esc.ticket_number = domguard.temp_table.task_number
AND
escalations.teams_to_esc.team = domguard.temp_table.team
SET
escalations.teams_to_esc.`status` = domguard.temp_table.`status`,
escalations.teams_to_esc.`wiw_assigned` = domguard.temp_table.`person`
WHERE
(escalations.teams_to_esc.team,escalations.teams_to_esc.ticket_number) IN
(SELECT domguard.temp_table.team, domguard.temp_table.task_number
FROM domguard.temp_table);
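As a side note, the IN subquery only repeats the JOIN condition, so the statement could be written more simply (a sketch using the same tables and columns):
UPDATE escalations.teams_to_esc AS t
JOIN domguard.temp_table AS s
  ON t.ticket_number = s.task_number
 AND t.team = s.team
SET t.`status` = s.`status`,
    t.`wiw_assigned` = s.`person`;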
First table - temp_table
CREATE TABLE `temp_table` (
`ID` INT(32) NOT NULL AUTO_INCREMENT,
`task_number` INT(10) NULL DEFAULT '0',
`type` VARCHAR(50) NULL DEFAULT '0',
`assign_date` DATE NULL DEFAULT NULL,
`team` VARCHAR(50) NULL DEFAULT NULL,
`person` VARCHAR(50) NULL DEFAULT NULL,
`task` VARCHAR(50) NULL DEFAULT NULL,
`comment` TEXT NULL,
`short_text` VARCHAR(500) NULL DEFAULT NULL,
`delay_comment` VARCHAR(500) NULL DEFAULT NULL,
`color` VARCHAR(500) NULL DEFAULT NULL,
`status` VARCHAR(500) NULL DEFAULT NULL,
`end_time` TIME NULL DEFAULT NULL,
`tel_it` INT(2) NULL DEFAULT '0',
`co_allocation` VARCHAR(500) NULL DEFAULT '0',
`co_allocation_text` VARCHAR(500) NULL DEFAULT '0',
`delay_code_text` VARCHAR(500) NULL DEFAULT '0',
PRIMARY KEY (`ID`)
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
AUTO_INCREMENT=2483
;
Second table
CREATE TABLE `teams_to_esc` (
`ID` INT(11) NOT NULL AUTO_INCREMENT,
`esc` INT(11) NOT NULL,
`team` VARCHAR(500) NOT NULL,
`ticket_number` VARCHAR(500) NOT NULL,
`closed_time` DATE NOT NULL,
`checked` VARCHAR(500) NULL DEFAULT NULL,
`reaction_from_tl` VARCHAR(50) NULL DEFAULT NULL,
`status` VARCHAR(50) NULL DEFAULT NULL,
`wiw_assigned` VARCHAR(50) NULL DEFAULT NULL,
PRIMARY KEY (`ID`),
INDEX `esc` (`esc`),
CONSTRAINT `teams_to_esc_ibfk_1` FOREIGN KEY (`esc`) REFERENCES `main_table` (`ID`)
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
AUTO_INCREMENT=2670
;
=== ERROR INVESTIGATION ===
Output from SHOW processlist;
17107338 root localhost \N Query 1278 Sending data UPDATE escalations.teams_to_esc JOIN domguard.temp_table ON
escalations.teams_to_esc.ticket_number
Output from Transaction settings
SELECT @@GLOBAL.tx_isolation, @@tx_isolation, @@session.tx_isolation;
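While the UPDATE hangs, the transaction holding the blocking lock can be looked up (a sketch; these INFORMATION_SCHEMA tables exist in MySQL 5.5-5.7):
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;
The blocking thread is often a forgotten transaction that was never committed; killing it (KILL <thread id>) releases the lock.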
When I try to update my torrents table (my torrent site only permits sharing open source stuff) with the following query
UPDATE `torrents` SET `leech` = '0', `seed` = '1' WHERE `id` = '26784'
it takes approximately 0.5 seconds to update a table which contains only 20,000 records. My other (SELECT) queries execute in less than 0.0478s.
CREATE TABLE IF NOT EXISTS `torrents` (
`id` int(11) NOT NULL,
`hash_info` varchar(255) NOT NULL,
`category_slug` varchar(255) NOT NULL,
`name` varchar(255) NOT NULL,
`size` bigint(20) NOT NULL,
`age` int(11) NOT NULL,
`description` text NOT NULL,
`trackers` longtext NOT NULL,
`magnet` longtext,
`files` longtext,
`parent_category` int(11) NOT NULL,
`category` int(11) NOT NULL,
`publish_date` int(11) DEFAULT NULL,
`uploader` int(11) NOT NULL,
`seed` int(11) DEFAULT '0',
`leech` int(11) DEFAULT '0',
`file_key` varchar(255) DEFAULT NULL,
`comments_count` int(11) DEFAULT '0'
) ENGINE=InnoDB AUTO_INCREMENT=26816 DEFAULT CHARSET=latin1;
Any solution?
Lookups on indexed columns are much faster than lookups on non-indexed columns, and the difference becomes more visible as the amount of data grows.
Create an index on the id column and check whether it improves the performance of the query.
id is declared to be an integer, so first of all your comparison should be to an integer, not a string:
UPDATE `torrents`
SET `leech` = '0', `seed` = '1'
WHERE `id` = 26784;
Second, you need an index on the id. You can create one by:
create index idx_torrents_id on torrents(id);
Alternatively, make it a primary key in the table.
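For example, assuming the id values are already unique, the primary key could be added like this:
ALTER TABLE torrents ADD PRIMARY KEY (id);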
I am trying to execute the following query:
SELECT
a.sessionID AS `sessionID`,
firstSeen, birthday, gender,
isAnonymous, LanguageCode
FROM transactions AS trx
INNER JOIN actions AS a ON a.sessionID = trx.SessionID
WHERE a.ActionType = 'PURCHASE'
GROUP BY trx.TransactionNumber
Explain provides the following output
id  select_type  table  type  possible_keys                key        key_len  ref                           rows    Extra
1   SIMPLE       trx    ALL   TransactionNumber,SessionID  NULL       NULL     NULL                          225036  Using temporary; Using filesort
1   SIMPLE       a      ref   sessionID                    sessionID  98       infinitiExport.trx.SessionID  1       Using index
The problem is that I am using one field for the join and a different field for the GROUP BY.
How can I tell MySQL to use different indices for the same table?
CREATE TABLE `transactions` (
`SessionID` varchar(32) NOT NULL DEFAULT '',
`date` datetime DEFAULT NULL,
`TransactionNumber` varchar(32) NOT NULL DEFAULT '',
`CustomerECommerceTrackID` int(11) DEFAULT NULL,
`SKU` varchar(45) DEFAULT NULL,
`AmountPaid` double DEFAULT NULL,
`Currency` varchar(10) DEFAULT NULL,
`Quantity` int(11) DEFAULT NULL,
`Name` tinytext NOT NULL,
`Category` varchar(45) NOT NULL DEFAULT '',
`customerInfoXML` text,
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
KEY `TransactionNumber` (`TransactionNumber`),
KEY `SessionID` (`SessionID`)
) ENGINE=InnoDB AUTO_INCREMENT=212007 DEFAULT CHARSET=utf8;
CREATE TABLE `actions` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`sessionActionDate` datetime DEFAULT NULL,
`actionURL` varchar(255) DEFAULT NULL,
`sessionID` varchar(32) NOT NULL DEFAULT '',
`ActionType` varchar(64) DEFAULT NULL,
`CustomerID` int(11) DEFAULT NULL,
`IPAddressID` int(11) DEFAULT NULL,
`CustomerDeviceID` int(11) DEFAULT NULL,
`customerInfoXML` text,
PRIMARY KEY (`id`),
KEY `ActionType` (`ActionType`),
KEY `CustomerDeviceID` (`CustomerDeviceID`),
KEY `sessionID` (`sessionID`)
) ENGINE=InnoDB AUTO_INCREMENT=15042833 DEFAULT CHARSET=utf8;
Thanks
EDIT 1: My indexes were broken; I had to add a (SessionID, TransactionNumber) index to the transactions table. However, now, when I try to include trx.customerInfoXML, MySQL stops using the index.
EDIT 2: Another answer did not really solve my problem, because it is not standard SQL syntax and forcing indices is generally not a good idea. For ORM users such syntax is an unattainable luxury.
EDIT 3: I updated my indices and it solved the problem, see EDIT 1.
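For reference, the composite index mentioned in EDIT 1 could be created like this (the index name is just an example):
ALTER TABLE `transactions` ADD INDEX `idx_session_transaction` (`SessionID`, `TransactionNumber`);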
I ran this sql query in my database:
update payments set method = 'paysafecard' AND amount = 25 WHERE payment_id IN (1,2,3,4,5,...)
Of course I meant SET method = 'paysafecard', amount = 25.
However, I ran it in phpMyAdmin and it showed that rows were affected. After running it again it showed 0 rows affected.
I don't know what may have changed in the database; what could this have done?
My table looks like this:
CREATE TABLE IF NOT EXISTS `payments` (
`payment_id` int(11) NOT NULL AUTO_INCREMENT,
`method_unique_id` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`method` enum('moneybookers','paypal','admin','wallet','voucher','sofortueberweisung','bitcoin','paysafecard','paymentwall') COLLATE utf8_unicode_ci NOT NULL,
`method_tid` int(11) DEFAULT NULL,
`uid` int(11) NOT NULL,
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`plan` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`expires_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`amount` decimal(8,2) NOT NULL,
`currency` enum('EUR','USD','BTC') COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`payment_id`),
UNIQUE KEY `method` (`method`,`method_tid`),
UNIQUE KEY `method_unique_id` (`method_unique_id`,`method`),
KEY `expires_at` (`expires_at`),
KEY `uid` (`uid`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=8030 ;
I am running
-- Server version: 5.1.41
-- PHP Version: 5.3.2-1ubuntu4.11
This would result in the method field being set to 0 for every record matching the WHERE clause (for an ENUM column, the numeric value 0 is the special empty-string error value).
It is interpreted as the following:
set method = ('paysafecard' AND amount = 25)
This is a logical AND: 'paysafecard' evaluates to 0 in a numeric context, so the whole expression yields 0 for every record, and that value is then stored in the method column.
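A quick way to see what each matching row received (a sketch against the same table; the ID list from the question is abbreviated here to its first few values):
SELECT payment_id, ('paysafecard' AND amount = 25) AS stored_value
FROM payments
WHERE payment_id IN (1,2,3,4,5);
-- 'paysafecard' casts to 0 in a numeric context, so stored_value is 0 for every row.
The intended statement separates the two assignments with a comma:
UPDATE payments SET method = 'paysafecard', amount = 25 WHERE payment_id IN (1,2,3,4,5);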