I've created a database to store movie data. My tables are the following:
movies:
CREATE TABLE IF NOT EXISTS `movies` (
`movieId` int(11) NOT NULL AUTO_INCREMENT,
`imdbId` varchar(255) DEFAULT NULL,
`imdbRating` float DEFAULT NULL,
`movieTitle` varchar(255) NOT NULL,
`movieLength` varchar(255) NOT NULL,
`imdbRatingCount` varchar(255) NOT NULL,
`poster` varchar(255) NOT NULL,
`year` varchar(255) NOT NULL,
PRIMARY KEY (`movieId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
I have a table in which I store movie actors:
CREATE TABLE IF NOT EXISTS `actors` (
`actorId` int(10) NOT NULL AUTO_INCREMENT,
`actorName` varchar(255) NOT NULL,
PRIMARY KEY (`actorId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
And another in which I store the relation between movies and actors (movieActor):
CREATE TABLE IF NOT EXISTS `movieActor` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`movieId` int(10) NOT NULL,
`actorId` int(10) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Now, when I want to select a list of movies in which all of the selected actors appear, my query is:
SELECT *
FROM movies m
INNER JOIN (
    SELECT movieId
    FROM movieActor
    WHERE actorId IN (1,2,3)
    GROUP BY movieId
    HAVING COUNT(*) = 3
) ma ON m.movieId = ma.movieId
WHERE imdbRating IS NOT NULL
ORDER BY imdbRating DESC
This is working perfectly, but I don't know whether this is the optimal table structure and query to accomplish this. Is there a better table structure for storing the data, or a better query for getting the list?
First of all, use indexes on your tables. In my opinion it would be useful to have three indexes on movieActor: one on movieId, one on actorId, and a composite one on (movieId, actorId).
Second, try to use foreign keys. These help the database identify the best execution plan.
Third, try to avoid generating temporary tables in the execution plan of your query. Subselects often create temp tables, which are used when the database has to temporarily store intermediate results in RAM. To check this, put EXPLAIN in front of your query.
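For example, something like this should do it on movieActor (not tested; the index and constraint names are placeholders):
-- the three indexes suggested above, plus foreign keys to movies and actors
-- index and constraint names are placeholders
ALTER TABLE movieActor
  ADD INDEX idx_movie (movieId),
  ADD INDEX idx_actor (actorId),
  ADD INDEX idx_movie_actor (movieId, actorId),
  ADD CONSTRAINT fk_ma_movie FOREIGN KEY (movieId) REFERENCES movies (movieId),
  ADD CONSTRAINT fk_ma_actor FOREIGN KEY (actorId) REFERENCES actors (actorId);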
I would write it like this:
SELECT m.*
FROM movies m
INNER JOIN movieActor ma ON m.movieId = ma.movieId
WHERE m.imdbRating IS NOT NULL
  AND ma.actorId IN (1,2,3)
GROUP BY m.movieId
HAVING COUNT(*) = 3
ORDER BY m.imdbRating DESC
(Not tested)
Just try to optimize it with the EXPLAIN keyword. It can also help you to create the right indexes.
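For example:
EXPLAIN SELECT m.*
FROM movies m
INNER JOIN movieActor ma ON m.movieId = ma.movieId
WHERE m.imdbRating IS NOT NULL AND ma.actorId IN (1,2,3)
GROUP BY m.movieId
HAVING COUNT(*) = 3
ORDER BY m.imdbRating DESC;
The key and rows columns show which index each table uses and roughly how many rows are examined, and "Using temporary" in the Extra column tells you whether a temp table is still being created.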
I have an SQL query on tables with a lot of rows, so this query runs for a very long time. How can I optimize it?
The tables already have indexes on id and friend_id.
SELECT u.id, u.first, u.last,
group_concat(u2.first, " " , u2.last) MyFriends
FROM Users u
INNER JOIN Friends f ON f.user_id = u.id
INNER JOIN Users u2 ON u2.id = f.friend_id
GROUP BY u.id;
These are the table structures:
CREATE TABLE Users (
id int(10) unsigned NOT NULL,
first varchar(50) DEFAULT NULL,
last varchar(50) DEFAULT NULL,
city varchar(20) DEFAULT NULL,
country varchar(20) DEFAULT NULL,
Age tinyint(3) unsigned NOT NULL,
KEY users_idx_id (id))
ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE Friends (
user_id int(10) unsigned NOT NULL,
friend_id int(10) unsigned NOT NULL,
KEY idx_friends (friend_id))
ENGINE=InnoDB DEFAULT CHARSET=latin1;
A many-to-many mapping table (Friends) needs improved indexes. Drop all the indexes you have now and add
PRIMARY KEY(user_id, friend_id),
INDEX(friend_id, user_id)
More discussion: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
Age is a moving target; consider storing the birth date instead and deriving the age when you need it.
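A minimal sketch of that (birth_date is a hypothetical new column):
-- birth_date is a hypothetical column; age is computed on the fly instead of stored
ALTER TABLE Users ADD COLUMN birth_date DATE NULL;
SELECT id, first, last, TIMESTAMPDIFF(YEAR, birth_date, CURDATE()) AS age FROM Users;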
There are about 6 countries with names longer than 20. "Saint Vincent and the Grenadines" is 32.
As for cities, 'Poselok Uchebnogo Khozyaystva Srednego Professionalno-Tekhnicheskoye Uchilishche Nomer Odin' is 91 chars.
ALTER TABLE Friends
DROP INDEX idx_friends,
ADD PRIMARY KEY(user_id, friend_id),
ADD INDEX(friend_id, user_id);
Every table should have a PRIMARY KEY:
ALTER TABLE Users
DROP INDEX users_idx_id,
ADD PRIMARY KEY(id);
Read about AUTO_INCREMENT.
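If the ids are meant to be generated by the database, something like this would turn the column into one (after adding the PRIMARY KEY above, which AUTO_INCREMENT requires):
-- requires id to be indexed, which the PRIMARY KEY above provides
ALTER TABLE Users MODIFY id int(10) unsigned NOT NULL AUTO_INCREMENT;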
The "execution plan" can be had by running EXPLAIN SELECT .... However it won't provide many clues in this case.
I want to get all the neighborhoods (based on different zips) which the user is not a member of already.
I have a users table and several other tables like this:
table name: neighborhood
CREATE TABLE neighborhood(
`neighborhood_id` INT(11) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(255) NOT NULL,
`description` TEXT DEFAULT NULL,
`neighborhood_postal_code` VARCHAR(255) NOT NULL,
`region_neighborhood` VARCHAR(255) NOT NULL,
`created_at` DATETIME DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`neighborhood_id`),
INDEX `neighborhood_region_neighborhood_FI_1` (`region_neighborhood`)
) ENGINE = InnoDB;
table name: user_neighborhood
CREATE TABLE user_neighborhood(
`user_id` INT(11) NOT NULL,
`neighborhood_id` INT(11) NOT NULL,
`activity_circle` INT(1) DEFAULT 0,
`duo_circle` INT(1) DEFAULT 0,
FOREIGN KEY (`user_id`) REFERENCES `users` (`user_id`),
FOREIGN KEY (`neighborhood_id`) REFERENCES `neighborhood` (`neighborhood_id`)
) ENGINE = InnoDB;
I have tried the following query, but the result is not correct:
SELECT n.*
FROM `neighborhood` as n
left join user_neighborhood as un on n.neighborhood_id = un.neighborhood_id
where un.user_id != 1 and n.neighborhood_postal_code IN ('2000', '2100')
UPDATE: I managed to make the query seem correct at first glance using a subquery like this:
select *
from neighborhood
where neighborhood_id NOT IN (select neighborhood_id from user_neighborhood where user_id != 1)
AND neighborhood_postal_code IN ('2000', '2100')
However, it also returns some of the neighborhoods I am already in. It does not make much sense to me why only some.
Why exactly are you adding user_id != 1 in your subquery? If you know the id of the user you want to fetch for, let's say user_id is 10, then use where user_id = 10 in the subquery, like this:
select *
from neighborhood
where neighborhood_id NOT IN (select distinct neighborhood_id from user_neighborhood where user_id = 10)
AND neighborhood_postal_code IN ('2000', '2100')
But if you want to fetch all the neighborhoods which have no users at all, then you can use this query:
select *
from neighborhood
where neighborhood_id NOT IN (select distinct neighborhood_id from user_neighborhood)
AND neighborhood_postal_code IN ('2000', '2100')
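The first case can also be written as a LEFT JOIN anti-join instead of NOT IN, which behaves better if the subquery could ever return NULLs (untested sketch, again with user_id = 10):
-- keep only neighborhoods with no matching row for this user
select n.*
from neighborhood n
left join user_neighborhood un
  on un.neighborhood_id = n.neighborhood_id and un.user_id = 10
where un.neighborhood_id is null
  and n.neighborhood_postal_code IN ('2000', '2100')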
Hope this helps!
I have a query that is executed in 35s, which is waaaaay too long.
Here are the 3 tables concerned by the query (each table is approx. 13,000 rows long, and should be much longer in the future):
Table 1: Domains
CREATE TABLE IF NOT EXISTS `domain` (
`id_domain` int(11) NOT NULL AUTO_INCREMENT,
`domain_domain` varchar(255) NOT NULL,
`projet_domain` int(11) NOT NULL,
`date_crea_domain` int(11) NOT NULL,
`date_expi_domain` int(11) NOT NULL,
`active_domain` tinyint(1) NOT NULL,
`remarques_domain` text NOT NULL,
PRIMARY KEY (`id_domain`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Table 2: Keywords
CREATE TABLE IF NOT EXISTS `kw` (
`id_kw` int(11) NOT NULL AUTO_INCREMENT,
`kw_kw` varchar(255) NOT NULL,
`clics_kw` int(11) NOT NULL,
`cpc_kw` float(11,3) NOT NULL,
`date_kw` int(11) NOT NULL,
PRIMARY KEY (`id_kw`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Table 3: Linking between domain and keyword
CREATE TABLE IF NOT EXISTS `kw_domain` (
`id_kd` int(11) NOT NULL AUTO_INCREMENT,
`kw_kd` int(11) NOT NULL,
`domain_kd` int(11) NOT NULL,
`selected_kd` tinyint(1) NOT NULL,
PRIMARY KEY (`id_kd`),
KEY `kw_to_domain` (`kw_kd`,`domain_kd`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The query is as follows:
SELECT ng.*, kd.*, kg.*
FROM domain ng
LEFT JOIN kw_domain kd ON kd.domain_kd = ng.id_domain
LEFT JOIN kw kg ON kg.id_kw = kd.kw_kd
GROUP BY ng.id_domain
ORDER BY kd.selected_kd DESC, kd.id_kd DESC
Basically, it selects all domains and, for each of these domains, the last associated keyword.
Does anyone have an idea how to optimize the tables or the query?
The following will get the last keyword, according to your logic:
select ng.*,
(select kw_kd
from kw_domain kd
where kd.domain_kd = ng.id_domain and kd.selected_kd = 1
order by kd.id_kd desc
limit 1
) as kw_kd
from domain ng;
For performance, you want an index on kw_domain(domain_kd, selected_kd, kw_kd). In this case, the order of the fields matters.
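Something along these lines (the index name is a placeholder):
-- composite index, columns in the order given above; index name is a placeholder
ALTER TABLE kw_domain ADD INDEX idx_domain_selected_kw (domain_kd, selected_kd, kw_kd);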
You can use this as a subquery to get more information about the keyword:
select ng.*, kg.*
from (select ng.*,
(select kw_kd
from kw_domain kd
where kd.domain_kd = ng.id_domain and kd.selected_kd = 1
order by kd.id_kd desc
limit 1
) as kw_kd
from domain ng
) ng left join
kw kg
on kg.id_kw = ng.kw_kd;
In MySQL, group by can have poor performance, so this might work better, particularly with the right indexes.
I need some help with a MySQL query. I have two tables, one with offers and one with statuses. An offer can have one or more statuses. What I would like to do is get all the offers and their latest status. For each status there's a table field named 'added' which can be used for sorting.
I know this can be easily done with two queries, but I need to make it with only one because I also have to apply some filters later in the project.
Here's my setup:
CREATE TABLE `test`.`offers` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`client` TEXT NOT NULL ,
`products` TEXT NOT NULL ,
`contact` TEXT NOT NULL
) ENGINE = MYISAM ;
CREATE TABLE `statuses` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`offer_id` int(11) NOT NULL,
`options` text NOT NULL,
`deadline` date NOT NULL,
`added` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
This should work, but it's not very optimal IMHO:
SELECT *
FROM offers
INNER JOIN statuses ON (statuses.offer_id = offers.id
AND statuses.id =
(SELECT allStatuses.id
FROM statuses allStatuses
WHERE allStatuses.offer_id = offers.id
ORDER BY allStatuses.added DESC LIMIT 1))
Try this:
SELECT
o.*, s.*
FROM offers o
INNER JOIN statuses s ON o.id = s.offer_id
LEFT JOIN statuses newer
  ON newer.offer_id = o.id AND newer.added > s.added
WHERE newer.id IS NULL
Looking at this query there's got to be something bogging it down that I'm not noticing. I ran it for 7 minutes and it only updated 2 rows.
//set product count for makes
$tru->query->run(array(
'name' => 'get-make-list',
'sql' => 'SELECT id, name FROM vehicle_make',
'connection' => 'core'
));
while($tempMake = $tru->query->getArray('get-make-list')) {
$tru->query->run(array(
'name' => 'update-product-count',
'sql' => 'UPDATE vehicle_make SET product_count = (
SELECT COUNT(product_id) FROM taxonomy_master WHERE v_id IN (
SELECT id FROM vehicle_catalog WHERE make_id = '.$tempMake['id'].'
)
) WHERE id = '.$tempMake['id'],
'connection' => 'core'
));
}
I'm sure this query can be optimized to perform better, but I can't think of how to do it.
vehicle_make = 45 rows
taxonomy_master = 11,223 rows
vehicle_catalog = 5,108 rows
All tables have appropriate indexes
UPDATE: I should note that this is a 1-time script so overhead isn't a big deal as long as it runs.
CREATE TABLE IF NOT EXISTS `vehicle_make` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(32) NOT NULL,
`product_count` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=46 ;
CREATE TABLE IF NOT EXISTS `taxonomy_master` (
`product_id` int(10) NOT NULL,
`v_id` int(10) NOT NULL,
`vehicle_requirement` varchar(255) DEFAULT NULL,
`is_sellable` enum('True','False') DEFAULT 'True',
`programming_override` varchar(25) DEFAULT NULL,
PRIMARY KEY (`product_id`,`v_id`),
KEY `idx2` (`product_id`),
KEY `idx3` (`v_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
CREATE TABLE IF NOT EXISTS `vehicle_catalog` (
`v_id` int(10) NOT NULL,
`id` int(11) NOT NULL,
`v_make` varchar(255) NOT NULL,
`make_id` int(11) NOT NULL,
`v_model` varchar(255) NOT NULL,
`model_id` int(11) NOT NULL,
`v_year` varchar(255) NOT NULL,
PRIMARY KEY (`v_id`,`v_make`,`v_model`,`v_year`),
UNIQUE KEY `idx` (`v_make`,`v_model`,`v_year`),
UNIQUE KEY `idx2` (`v_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Update: The successful query to get what I needed is here....
SELECT
m.id,COUNT(t.product_id) AS CountOf
FROM taxonomy_master t
INNER JOIN vehicle_catalog v ON t.v_id=v.id
INNER JOIN vehicle_make m ON v.make_id=m.id
GROUP BY m.id;
Without the tables/columns, this is my best guess from reverse-engineering the given queries:
UPDATE vehicle_make m
INNER JOIN (
    SELECT v.make_id, COUNT(t.product_id) AS CountOf
    FROM taxonomy_master t
    INNER JOIN vehicle_catalog v ON t.v_id=v.id
    GROUP BY v.make_id
) c ON c.make_id = m.id
SET m.product_count = c.CountOf
The given code loops over each make and then runs a query that counts products for each one. My answer just does them all in one query and should be a lot faster.
Have an index for each of these:
vehicle_make.id covering name, i.e. (id, name)
vehicle_catalog.id covering make_id, i.e. (id, make_id)
taxonomy_master.v_id
EDIT
give this a try:
CREATE TEMPORARY TABLE CountsOf (
id int(11) NOT NULL
, CountOf int(11) NOT NULL DEFAULT 0
);
INSERT INTO CountsOf
(id, CountOf )
SELECT
m.id,COUNT(t.product_id) AS CountOf
FROM taxonomy_master t
INNER JOIN vehicle_catalog v ON t.v_id=v.id
INNER JOIN vehicle_make m ON v.make_id=m.id
GROUP BY m.id;
UPDATE vehicle_make,CountsOf
SET vehicle_make.product_count=CountsOf.CountOf
WHERE vehicle_make.id=CountsOf.id;
Instead of using a nested query, you can split it into 2 or 3 queries and, in PHP, feed the result of the inner query into the outer query. It's faster!
#haim-evgi Separating the queries will not increase the speed significantly, it will just shift the load from the DB server to the Web server and create overhead of moving data between the two servers.
I am not sure how, with the appropriate indexes, such a query runs for 7 minutes. Could you please show the structure of the tables involved in these queries?
Seems like you need the following indices:
INDEX BTREE('make_id') on vehicle_catalog
INDEX BTREE('v_id') on taxonomy_master
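In MySQL DDL that would look something like this (index names are placeholders; the second may be redundant, since the posted taxonomy_master DDL already has KEY idx3 on v_id):
-- index names are placeholders
ALTER TABLE vehicle_catalog ADD INDEX idx_make_id (make_id) USING BTREE;
ALTER TABLE taxonomy_master ADD INDEX idx_v_id (v_id) USING BTREE;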