MySQL -- joining then joining then joining again

My MySQL setup, step by step:
programs -> linked to -> speakers (by program_id)
At this point, it's easy for me to query all the data:
SELECT *
FROM programs
JOIN speakers on programs.program_id = speakers.program_id
Nice and easy.
The trick for me is this. My speakers table is also linked to a third table, "books." So in the "speakers" table, I have "book_id" and in the "books" table, the book_id is linked to a name.
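(For reference, the schema described above looks roughly like this; any column names and types beyond the ones I mentioned are assumptions.)
CREATE TABLE programs (
    program_id  INT PRIMARY KEY,
    category_id INT             -- used in the WHERE clause below
);
CREATE TABLE books (
    book_id   INT PRIMARY KEY,
    book_name VARCHAR(255)      -- the name I actually want back
);
CREATE TABLE speakers (
    speaker_id INT PRIMARY KEY, -- assumed key column
    program_id INT,             -- links the speaker to a program
    book_id    INT              -- links the speaker to a book
);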
I've tried this (including a WHERE you'll notice):
SELECT *
FROM programs
JOIN speakers on programs.program_id = speakers.program_id
JOIN books on speakers.book_id = books.book_id
WHERE programs.category_id = 1
LIMIT 5
No results.
My questions:
What am I doing wrong?
What's the most efficient way to make this query?
Basically, I want to get back all the programs data and the books data, but instead of the book_id, I need it to come back as the book name (from the 3rd table).
Thanks in advance for your help.
UPDATE:
(rather than opening a brand new question)
The left join worked for me. However, I have a new problem: multiple books can be assigned to a single speaker.
Using the left join, that returns two rows! What do I need to add to return only a single row, with the two books kept separate?

Is there any chance that the books table doesn't have any matching rows for speakers.book_id?
Try using a LEFT JOIN, which will still return the program/speaker combinations even if there are no matches in books.
SELECT *
FROM programs
JOIN speakers on programs.program_id = speakers.program_id
LEFT JOIN books on speakers.book_id = books.book_id
WHERE programs.category_id = 1
LIMIT 5
Btw, could you post the table schemas for all tables involved, and exactly what output (or reasonable representation) you'd expect to get?
Edit: in response to the OP's comment
You can use GROUP BY and GROUP_CONCAT to put all the books on one row.
e.g.
SELECT speakers.speaker_id,
speakers.speaker_name,
programs.program_id,
programs.program_name,
group_concat(books.book_name) AS book_names
FROM programs
JOIN speakers on programs.program_id = speakers.program_id
LEFT JOIN books on speakers.book_id = books.book_id
WHERE programs.category_id = 1
GROUP BY speakers.speaker_id
LIMIT 5
Note: since I don't know the exact column names, these may be off

That's typically efficient. There is some assumption you are making that isn't true. Do your speakers all have books assigned? If they don't, that last JOIN should be a LEFT JOIN.
This kind of query is typically pretty efficient, since you almost certainly have primary keys as indexes. The main issue would be whether your indexes are covering (which is more likely to occur if you don't use SELECT *, but instead select only the columns you need).
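For example, a rough sketch (the index and column choices are assumptions, not taken from the question): if you only need a few columns, a composite index can cover the whole query.
CREATE INDEX idx_programs_cat ON programs (category_id, program_id);
CREATE INDEX idx_speakers_prog ON speakers (program_id, book_id);

-- select only what you need instead of SELECT *
SELECT programs.program_id, speakers.book_id
FROM programs
JOIN speakers ON programs.program_id = speakers.program_id
WHERE programs.category_id = 1;
With those two indexes, this narrowed-down SELECT can be answered from the indexes alone (EXPLAIN would show "Using index"), whereas SELECT * always has to read the full table rows.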

Related

2 Queries vs inner join with where

I have 2 tables: Articles and Comments;
"Comments.articleID" is a foreign key.
I want to query the database to compose a website that shows the article text of a certain article (given an articleID) and all the article's comments.
I can think of 2 ways to query the data:
Use 2 separate queries:
SELECT articles.text FROM articles where id = givenArticleID
SELECT comments.* FROM comments where comments.articleID = givenArticleID
Use an Inner join:
SELECT articles.text, comments.*
FROM articles
INNER JOIN comments on articles.id = comments.articleID
WHERE articles.id = givenArticleID
The first option only returns the data I am interested in - that is good.
The second option returns all the data I am interested in, but much more data than necessary: every row in the result set repeats the articles.text column, which could be a lot of (unnecessary) data.
I think the join would be better for queries that have no WHERE condition restricting them to a single article (and therefore return rows from different articles).
Which way would you generally prefer in the situation above?
Or is there an even better alternative...?
Option 2 is probably better, because it is only one client-server round trip.
Also don't forget that each query has to be parsed by the database server.
I'd recommend that you benchmark both versions and see which one performs better.
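If you want a rough server-side comparison, one option is MySQL's profiling feature (deprecated, but still available in most versions); 42 stands in for givenArticleID here:
SET profiling = 1;

-- version 1: two separate queries
SELECT articles.text FROM articles WHERE id = 42;
SELECT * FROM comments WHERE comments.articleID = 42;

-- version 2: single join
SELECT articles.text, comments.*
FROM articles
INNER JOIN comments ON articles.id = comments.articleID
WHERE articles.id = 42;

SHOW PROFILES;  -- lists the execution time of each statement above
Keep in mind this only measures execution time on the server; the round-trip savings of option 2 only show up when you time the whole exchange from the client.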

SQL Return All Rows Where 2 Values Are Repeated

I am sure this question has already been answered, but I can't find it or the answer was too complicated. I am new to SQL and am not sure how to word this generically.
I have a MySQL database of software installed on devices. My query to pull all the data has more fields and more joins, but for brevity I just included a few. I need to add another dimension to create a report that lists every case where a device has more than one installation of software from the same product family.
Right now I have code kind of like this, and it is not doing what I need. I have seen some info on EXISTS, but the examples didn't account for multiple joins, so the syntax escapes me. Help?
select
devices.name,
sw_inventory.product,
products.family_name,
sw_inventory.ignore_usage
from sw_inventory
inner join products
on sw_inventory.product=products.product_name
inner join devices
on sw_inventory.device_name=devices.name
where sw_inventory.ignore=0
group by devices.name, products.family_name
There are plenty of answers out there on this topic, but I definitely understand not always knowing the terminology: you are looking for how to find duplicate values.
Basically this is a two-step process: (1) find the duplicates, and (2) relate that back to the original records if you want those. Note that the second part is optional.
So, to literally find all of the duplicates of the query you provided, add HAVING COUNT(*) > 1 after the GROUP BY clause. If you want to know how many duplicates there are, add a calculated column to count them.
select
devices.name,
sw_inventory.product,
products.family_name,
sw_inventory.ignore_usage,
COUNT(*) AS NumberOfDuplicates
from sw_inventory
inner join products
on sw_inventory.product=products.product_name
inner join devices
on sw_inventory.device_name=devices.name
where sw_inventory.ignore=0
group by devices.name, products.family_name
HAVING COUNT(*) > 1
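If you also want step 2 (the individual installation rows, not just one summary row per device and family), one possible sketch, sticking to the column names above, is to join those duplicate groups back to the detail rows as a derived table:
select
devices.name,
sw_inventory.product,
products.family_name,
sw_inventory.ignore_usage
from sw_inventory
inner join products
on sw_inventory.product = products.product_name
inner join devices
on sw_inventory.device_name = devices.name
inner join (
    -- device/family combinations with more than one installation
    select devices.name as device_name, products.family_name
    from sw_inventory
    inner join products on sw_inventory.product = products.product_name
    inner join devices on sw_inventory.device_name = devices.name
    where sw_inventory.ignore = 0
    group by devices.name, products.family_name
    having count(*) > 1
) dupes
on dupes.device_name = devices.name
and dupes.family_name = products.family_name
where sw_inventory.ignore = 0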

Are indexes useful when used in combination with an INNER JOIN?

I have three tables:
Orders
OrdersPromotions
Promotions
Most of my queries are of this kind:
SELECT `promotions`.* FROM `promotions` INNER JOIN `orders_promotions` ON `promotions`.`id` = `orders_promotions`.`promotions_id` WHERE `orders_promotions`.`orders_id` = 3 AND `promotions`.`code` = 'my_promotion_code'
So, I never fetch promotions directly, but always within the scope of an order. An order won't have many promotions. I am wondering whether it would be useful to place an INDEX on the code column of promotions, knowing that the result set after the INNER JOIN is small, and so it would be fine to scan through all of those rows to find the promotion whose code matches.
Would an index make sense in my previous query, knowing that just this query:
SELECT `promotions`.* FROM `promotions` INNER JOIN `orders_promotions` ON `promotions`.`id` = `orders_promotions`.`promotions_id` WHERE `orders_promotions`.`orders_id` = 3
would return no more than 20 rows?
You should almost always use an index on any fields you are going to use for joins, sorts, grouping, or filtering in WHERE clauses. I would say ALWAYS, but there could be exceptions to the rule (for example, a table with a very heavy write load that is only infrequently used for the kinds of reads where indexes would help).
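For the query above, that would mean something like this (the index names are just examples):
-- covers the lookup by order and the join to promotions
CREATE INDEX idx_orders_promotions_order ON orders_promotions (orders_id, promotions_id);

-- covers the filter on the promotion code
CREATE INDEX idx_promotions_code ON promotions (code);
promotions.id is presumably already the primary key, so it does not need an extra index for the join.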

Database design to enable Multiple tags like Stackoverflow?

I have the following tables.
Articles table
a_id INT primary unique
name VARCHAR
Description VARCHAR
c_id INT
Category table
id INT
cat_name VARCHAR
For now I simply use
SELECT a_id, name, Description, cat_name FROM Articles LEFT JOIN Category ON Articles.c_id = Category.id WHERE c_id = {$id}
This gives me all articles which belong to a certain category along with category name.
Each article is having only one category.
And I use a sub-category in a similar way (I have another table named sub_cat). But not every article has a sub-category; it may belong to multiple categories instead.
I now think of tagging an article with more than one category, just like questions at Stack Overflow are tagged (e.g. with multiple tags like PHP, MySQL, SQL, etc.). Later I have to display (filter) all articles with certain tags (e.g. tagged with php, or php + MySQL), and I also have to display the tags along with the article name and Description.
Can anyone help me redesign the database? (I am using PHP + MySQL at the back-end.)
Create a new table:
CREATE TABLE ArticleCategories(
A_ID INT,
C_ID INT,
Constraint PK_ArticleCategories Primary Key (A_ID, C_ID)
)
(this is the SQL server syntax, may be slightly different for MySQL)
This is called a "Junction Table" or a "Mapping Table" and it is how you express Many-to-Many relationships in SQL. So, whenever you want to add a Category to an Article, just INSERT a row into this table with the IDs of the Article and the Category.
For instance, you can initialize it like this:
INSERT Into ArticleCategories(A_ID,C_ID)
SELECT A_ID,C_ID From Articles
Now you can remove c_id from your Articles table.
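In MySQL that would be something along the lines of (assuming nothing else, such as a foreign key, still references the column):
ALTER TABLE Articles DROP COLUMN c_id;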
To get back all of the Categories for a single Article, you would use a query like this:
SELECT a_id,name,Description,cat_name
FROM Articles
LEFT JOIN ArticleCategories ON Articles.a_id=ArticleCategories.a_id
INNER JOIN Category ON ArticleCategories.c_id=Category.id
WHERE Articles.a_id={$a_id}
Alternatively, to return all articles that have a category LIKE a certain string:
SELECT a_id,name,Description
FROM Articles
WHERE EXISTS( Select *
From ArticleCategories
INNER JOIN Category ON ArticleCategories.c_id=Category.id
WHERE Articles.a_id=ArticleCategories.a_id
AND Category.cat_name LIKE '%'+{$match}+'%'
)
(You may have to adjust the last line, as I am not sure how string parameters are passed MySQL+PHP.)
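In MySQL specifically, + is not the string concatenation operator, so that last line would most likely become something like this ({$match} still standing in for a properly escaped or bound parameter):
AND Category.cat_name LIKE CONCAT('%', {$match}, '%')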
OK RBarryYoung, you asked me for a reference/analysis, so here is one.
This reference/analysis is based on the documentation and source code of the MySQL server.
INSERT Into ArticleCategories(A_ID,C_ID)
SELECT A_ID,C_ID From Articles
On a large Articles table with many rows, this copy will push one CPU core to 100% load and will create a disk-based temporary table, which will slow down overall MySQL performance because the disk will be stressed by that copy.
If this is a one-time process that is not so bad, but do the math if you run this every time.
SELECT a_id,name,Description
FROM Articles
WHERE EXISTS( Select *
From ArticleCategories
INNER JOIN Category ON ArticleCategories.c_id=Category.id
WHERE Articles.a_id=ArticleCategories.a_id
AND Category.cat_name LIKE '%'+{$match}+'%'
)
Note: don't take the execution times on SQL Fiddle at face value; it's a busy server and the times vary too much to draw conclusions from, but look at what View Execution Plan has to say.
See http://sqlfiddle.com/#!2/48817/21 for a demo.
Both queries always trigger a complete table scan on the Articles table plus two DEPENDENT SUBQUERYs; that's not good if you have a large Articles table with many records.
This means the performance depends on the number of rows in Articles, even when you only want the articles that are in the category.
Select *
From ArticleCategories
INNER JOIN Category ON ArticleCategories.c_id=Category.id
WHERE Articles.a_id=ArticleCategories.a_id
AND Category.cat_name LIKE '%'+{$match}+'%'
This query is the inner subquery, but if you try to run it on its own, MySQL can't, because it depends on a value from the Articles table. So this is a correlated subquery: a subquery type that is evaluated once for each row processed by the outer query. Not good indeed.
There are more ways of rewriting RBarryYoung's query; I will show one.
The INNER JOIN way is much more efficient, even with the LIKE operator.
Note: I've made a habit of starting with the table with the lowest number of records and working my way up; if you start with the Articles table the execution will be the same, provided the MySQL optimizer chooses the right plan.
SELECT
Articles.a_id
, Articles.name
, Articles.description
FROM
Category
INNER JOIN
ArticleCategories
ON
Category.id = ArticleCategories.c_id
INNER JOIN
Articles
ON
ArticleCategories.a_id = Articles.a_id
WHERE
cat_name LIKE '%php%';
See http://sqlfiddle.com/#!2/43451/23 for a demo. Note that this looks worse at first glance because it seems like more rows need to be checked.
Note: if the Articles table has a low number of records, RBarryYoung's EXISTS way and the INNER JOIN way will perform more or less the same based on execution times. As further proof that the INNER JOIN way scales better when the record count grows larger:
http://sqlfiddle.com/#!2/c11f3/1 EXISTS: oops, more Articles records need to be checked now (even when they are not linked with the ArticleCategories table), so the query is less efficient now.
http://sqlfiddle.com/#!2/7aa74/8 INNER JOIN: the same explain plan as the first demo.
Extra note about scaling: it gets even worse when you also want to ORDER BY or GROUP BY; the EXISTS way has a bigger chance of creating a disk-based temporary table, which will kill MySQL performance.
Let's also analyse LIKE '%php%' vs = 'php' for the EXISTS way and the INNER JOIN way.
The EXISTS way:
http://sqlfiddle.com/#!2/48817/21 / http://sqlfiddle.com/#!2/c11f3/1 (more Articles): the explain tells me both patterns are more or less the same, but = 'php' should be a little faster because of the const type vs ref in the TYPE column, while LIKE '%php%' will use more CPU because a string-comparison algorithm needs to run.
The INNER JOIN way:
http://sqlfiddle.com/#!2/43451/23 / http://sqlfiddle.com/#!2/7aa74/8 (more Articles): the explain tells me LIKE '%php%' should be slower because 3 more rows need to be analysed, but not shockingly slower in this case (you can see the index is not really used in the best way).
RBarryYoung's way works, but it doesn't keep its performance, at least not on a MySQL server.
See http://sqlfiddle.com/#!2/b2bd9/1 or http://sqlfiddle.com/#!2/34ea7/1
for examples that will scale on large tables with lots of records; this is what the topic starter needs.

Efficiently selecting from many-to-many relation in H2

I'm using H2, and I have a database of books (table Entries) and authors (table Persons), connected through a many-to-many relationship, itself stored in a table Authorship.
The database is fairly large (900'000+ persons and 2.5M+ books).
I'm trying to efficiently select the list of all books authored by at least one author whose name matches a pattern (LIKE '%pattern%'). The trick here is that the pattern should severely restrict the number of matching authors, and each author has a reasonably small number of associated books.
I tried two queries:
SELECT p.*, e.title FROM (SELECT * FROM Persons WHERE name LIKE '%pattern%') AS p
INNER JOIN Authorship AS au ON au.authorId = p.id
INNER JOIN Entries AS e ON e.id = au.entryId;
and:
SELECT p.*, e.title FROM Persons AS p
INNER JOIN Authorship AS au ON au.authorId = p.id
INNER JOIN Entries AS e ON e.id = au.entryId
WHERE p.name like '%pattern%';
I expected the first one to be much faster, as I'm joining a much smaller (sub)table of authors; however, they both take just as long. So long, in fact, that I can manually decompose the query into three selects and find the result I want faster.
When I try to EXPLAIN the queries, I observe that they are indeed very similar (a full join on the tables and only then a WHERE clause), so my question is: how can I achieve a fast select that relies on the fact that the filter on authors should result in a much smaller join with the other two tables?
Note that I tried the same queries with MySQL and got results in line with what I expected (selecting first is much faster).
Thank you.
OK, here is something that finally worked for me.
Instead of running the query:
SELECT p.*, e.title FROM (SELECT * FROM Persons WHERE name LIKE '%pattern%') AS p
INNER JOIN Authorship AS au ON au.authorId = p.id
INNER JOIN Entries AS e ON e.id = au.entryId;
...I ran:
SELECT title FROM Entries e WHERE id IN (
SELECT entryId FROM Authorship WHERE authorId IN (
SELECT id FROM Persons WHERE name LIKE '%pattern%'
)
)
It's not exactly the same query, because now I don't get the author id as a column in the result, but that does what I wanted: take advantage of the fact that the pattern restricts the number of authors to a very small value to search only through a small number of entries.
What is interesting is that this worked great with H2 (much, much faster than the join), but with MySQL it is terribly slow. (This has nothing to do with the LIKE '%pattern%' part, see comments in other answers.) I suppose queries are optimized differently.
SELECT * FROM Persons WHERE name LIKE '%pattern%' will always take LONG on a 900,000+ row table no matter what you do, because when your pattern starts with a %, MySQL can't use any ordinary index and has to do a full table scan. You should look into full-text indexes and the MATCH ... AGAINST function.
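A rough sketch of that approach in MySQL (assuming the Persons table uses a storage engine with FULLTEXT support, i.e. MyISAM or InnoDB from 5.6 on):
ALTER TABLE Persons ADD FULLTEXT INDEX ft_persons_name (name);

SELECT p.*, e.title
FROM Persons AS p
INNER JOIN Authorship AS au ON au.authorId = p.id
INNER JOIN Entries AS e ON e.id = au.entryId
WHERE MATCH(p.name) AGAINST('pattern');
Note that full-text search matches whole words rather than arbitrary substrings, so it is not an exact drop-in replacement for LIKE '%pattern%'.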
Well, since the LIKE condition starts with a wildcard, it will result in a full table scan, which is always slow, and no internal caching can take place.
If you want to do full-text searches, MySQL is not your best bet. Look into other software (Solr, for instance) to solve this kind of problem.