Query using multiple secondary indexes on the same table - sqlalchemy

I want to make a Spanner SQL query on a single table, with WHERE conditions that use multiple different secondary indexes that exist on the table (prop1_index and prop2_index).
I don't think it's possible to specify multiple FORCE_INDEX hints for the same FROM table (see below). So maybe there is a SQL way to chain/combine/join multiple SELECT statements that each use a different secondary index.
SELECT
  c.uid,
  c.prop1,
  c.prop2
FROM
  mycolumn@{FORCE_INDEX=prop1_index} AS c -- Also need `{FORCE_INDEX=prop2_index}` here
WHERE
  c.prop1 BETWEEN 100 AND 101
  AND c.prop2 BETWEEN 50 AND 1000
Ideally, I'm also looking to do the same using SQLAlchemy.

Yes, you are right: multiple indexes for the same FROM table are not supported.
However, you can use a self-join to achieve what you want:
SELECT
  c1.uid,
  c1.prop1,
  c2.prop2
FROM
  mycolumn@{FORCE_INDEX=prop1_index} AS c1
HASH JOIN
  mycolumn@{FORCE_INDEX=prop2_index} AS c2
  ON c1.uid = c2.uid
WHERE
  c1.prop1 BETWEEN 100 AND 101
  AND c2.prop2 BETWEEN 50 AND 1000
Also, if these two indexes are always used together, consider combining them into a single index.
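The question also asks about SQLAlchemy. Since the @{FORCE_INDEX=...} table hint is Spanner-specific syntax, one hedged option is to send the self-join as a textual statement via sqlalchemy.text(). This is only a sketch: the table and index names come from the question, and actually executing it would additionally require an engine configured with the Spanner SQLAlchemy dialect, which is assumed here.

```python
from sqlalchemy import text

# Spanner-specific FORCE_INDEX hints are easiest to express as raw SQL.
# Table/index names come from the question; running this requires an
# engine created with the Spanner SQLAlchemy dialect (an assumption here).
stmt = text("""
    SELECT c1.uid, c1.prop1, c2.prop2
    FROM mycolumn@{FORCE_INDEX=prop1_index} AS c1
    HASH JOIN mycolumn@{FORCE_INDEX=prop2_index} AS c2
      ON c1.uid = c2.uid
    WHERE c1.prop1 BETWEEN :p1_lo AND :p1_hi
      AND c2.prop2 BETWEEN :p2_lo AND :p2_hi
""")

# With a real engine:
#   conn.execute(stmt, {"p1_lo": 100, "p1_hi": 101, "p2_lo": 50, "p2_hi": 1000})
print("FORCE_INDEX=prop1_index" in str(stmt))  # the hint survives rendering
```

Using text() keeps the hint syntax intact, since SQLAlchemy's expression language has no portable construct for vendor-specific table hints like this one.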

Related

MYSQL search query optimization from two many-to-many tables

I have three tables.
tbl_post for a table of posts. (post_idx, post_created, post_title, ...)
tbl_mention for a table of mentions. (mention_idx, mention_name, mention_img, ...)
tbl_post_mention for a unique many-to-many relation between the two tables. (post_idx, mention_idx)
For example,
PostA can have MentionA and MentionB.
PostB can have MentionA and MentionC.
PostC cannot have MentionC twice (the relation is unique).
tbl_post has about a million rows, tbl_mention has fewer than a hundred rows, and tbl_post_mention has a couple of million rows. All three tables are heavily loaded with foreign keys, unique indexes, etc.
I am trying to make two separate search queries.
Search for post ids with all the given mention ids[AND condition]
Search for post ids with any of the given mention ids[OR condition]
Then join with tbl_post and tbl_mention to populate the results with meaningful data, order them, and return the top n. In the end, I hope to have a list of n posts with all the data required for my service to display on the front end.
Here are the respective simpler queries:
SELECT post_idx
FROM
(SELECT post_idx, count(*) as c
FROM tbl_post_mention
WHERE mention_idx in (1,95)
GROUP BY post_idx) AS A
WHERE c >= 2;
The problem with this query is that it is already inefficient before the joins and ordering. This process alone takes 0.2 seconds.
SELECT DISTINCT post_idx
FROM tbl_post_mention
WHERE mention_idx in (1,95);
This is a simple index range scan, but because of the IN statement, the query becomes expensive again once you start joining it with other tables.
I tried more complex and "clever" queries and tried indexing different sets of columns, to no avail. Are there special syntaxes that I could use in this case? Maybe a clever trick? Partitioning? Or am I missing some fundamental concept here... :(
Send help.
The query you want is this:
SELECT post_idx
FROM tbl_post_mention
WHERE mention_idx in (1,95)
GROUP BY post_idx
HAVING COUNT(*) >= 2
The HAVING clause does your post-GROUP BY filtering.
The index that will help you is this.
CREATE INDEX mentionsdex ON tbl_post_mention (mention_idx, post_idx);
It covers your query by allowing rapid lookup by mention_idx then grouping by post_idx.
Often so-called join tables with two columns -- like your tbl_post_mention -- work most efficiently when they have a pair of indexes with the columns in opposite orders.
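The combination of the HAVING filter and the suggested index can be sanity-checked on a toy copy of tbl_post_mention. A minimal sketch, using SQLite as a stand-in for MySQL (the query text is the same):

```python
import sqlite3

# Toy tbl_post_mention with the (mention_idx, post_idx) index suggested above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_post_mention (post_idx INTEGER, mention_idx INTEGER)")
con.execute("CREATE INDEX mentionsdex ON tbl_post_mention (mention_idx, post_idx)")
# Post 1 has mentions 1 and 95; post 2 has only 1; post 3 has only 95.
con.executemany("INSERT INTO tbl_post_mention VALUES (?, ?)",
                [(1, 1), (1, 95), (2, 1), (3, 95)])

# AND condition: posts that have *all* of the given mention ids.
rows = con.execute("""
    SELECT post_idx
    FROM tbl_post_mention
    WHERE mention_idx IN (1, 95)
    GROUP BY post_idx
    HAVING COUNT(*) >= 2
""").fetchall()
print(rows)  # → [(1,)] : only post 1 carries both mentions
```

Dropping the HAVING clause (or keeping just the DISTINCT variant) gives the OR-condition result instead.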

Index when using OR in query

What is the best way to create index when I have a query like this?
... WHERE (user_1 = '$user_id' OR user_2 = '$user_id') ...
I know that only one index can be used in a query so I can't create two indexes, one for user_1 and one for user_2.
Also, could the solution for this type of query be used for this query?
WHERE ((user_1 = '$user_id' AND user_2 = '$friend_id') OR (user_1 = '$friend_id' AND user_2 = '$user_id'))
MySQL has a hard time with OR conditions. In theory, there's an index merge optimization that @duskwuff mentions, but in practice it doesn't kick in when you think it should. Besides, even when it does, it doesn't give as good performance as a single index.
The solution most people use to work around this is to split up the query:
SELECT ... WHERE user_1 = ?
UNION
SELECT ... WHERE user_2 = ?
That way each query can use its own choice of index, without relying on the unreliable index merge feature.
Your second query is optimizable more simply. It's just a tuple comparison. It can be written this way:
WHERE (user_1, user_2) IN (('$user_id', '$friend_id'), ('$friend_id', '$user_id'))
In old versions of MySQL, tuple comparisons would not use an index, but since 5.7.3, it will (see https://dev.mysql.com/doc/refman/5.7/en/row-constructor-optimization.html).
P.S.: Don't interpolate application code variables directly into your SQL expressions. Use query parameters instead.
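Both rewrites can be sketched end to end. A minimal example on SQLite (the friendship table name and data are illustrative; SQLite has supported row-value comparisons since 3.15, mirroring the MySQL 5.7.3 behavior noted above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE friendship (user_1 TEXT, user_2 TEXT)")
con.executemany("INSERT INTO friendship VALUES (?, ?)",
                [("alice", "bob"), ("bob", "carol"), ("carol", "alice")])

# First query: the OR split into a UNION, so each branch can use
# its own single-column index. Query parameters, never interpolation.
union_rows = con.execute("""
    SELECT user_1, user_2 FROM friendship WHERE user_1 = ?
    UNION
    SELECT user_1, user_2 FROM friendship WHERE user_2 = ?
""", ("alice", "alice")).fetchall()

# Second query: the tuple (row-value) comparison.
pair_rows = con.execute("""
    SELECT user_1, user_2 FROM friendship
    WHERE (user_1, user_2) IN (VALUES (?, ?), (?, ?))
""", ("alice", "bob", "bob", "alice")).fetchall()

print(sorted(union_rows))  # rows where alice appears on either side
print(pair_rows)           # → [('alice', 'bob')]
```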
I know that only one index can be used in a query…
This is incorrect. Under the right circumstances, MySQL will routinely use multiple indexes in a query. (For example, a query JOINing multiple tables will almost always use at least one index on each table involved.)
In the case of your first query, MySQL will use an index merge union optimization. If both columns are indexed, the EXPLAIN output will give an explanation along the lines of:
Using union(index_on_user_1,index_on_user_2); Using where
The query shown in your second example is covered by an index on (user_1, user_2). Create that index if you plan on running those queries routinely.
The two cases are different.
In the first case, both columns need to be searched for the same value. If you have a two-column index (u1, u2), it may be used for column u1, but it cannot be used for column u2. If you have two separate indexes on u1 and u2, probably both of them will be used. The choice comes from statistics on how many rows are expected to be returned: if few rows are expected, an index seek will be selected (if the appropriate index is available); if the number is high, a scan is preferable, either of the table or of an index.
In the second case, both columns again need to be checked, but each search contains two sub-searches, where the second sub-search operates on the results of the first one due to the AND condition. Here two separate indexes on u1 and u2 help more, since whichever field is searched first will have an index. The choice to use an index is as described above.
In either case, every OR forces one more search (or set of searches). So the proposed solution of breaking the query up with UNION does not cost more: the table will be searched x times whether you run one SELECT with ORs or x SELECTs with UNION, regardless of index selection and type of search (seek or scan). And since each SELECT in the UNION gets its own part of the execution plan, it is more likely that (single-column) indexes will be used, and the row sets from all the parts around the ORs are then combined. If you do not want to copy a large SELECT statement into many UNION branches, you can first fetch the primary key values and then select by those, or use a view so that the bulk of the statement lives in one place.
Finally, if you rule out the UNION option, there is a way to trick the optimizer into using a single index. Create a two-column index u1,u2 (or u2,u1; whichever column has higher cardinality goes first) and modify your statement so that every OR branch uses all the columns:
... WHERE (user_1 = '$user_id' OR user_2 = '$user_id') ...
will be converted to:
... WHERE ((user_1 = '$user_id' and user_2=user_2) OR (user_1=user_1 and user_2 = '$user_id')) ...
This way the two-column index (u1,u2) will be used at all times. Note that this only works if the columns are not nullable, since user_1 = user_1 is not true for NULL values; working around that with ISNULL or COALESCE may cause the index not to be selected. It will work with ANSI_NULLS OFF, however.
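A quick sanity check that the rewritten predicate returns the same rows as the plain OR on NOT NULL columns. This is a sketch with illustrative table and column names, run on SQLite, which shares the relevant comparison semantics:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chat (user_1 TEXT NOT NULL, user_2 TEXT NOT NULL)")
con.execute("CREATE INDEX chat_u1_u2 ON chat (user_1, user_2)")
con.executemany("INSERT INTO chat VALUES (?, ?)",
                [("u1", "u9"), ("u5", "u1"), ("u5", "u9")])

plain = con.execute(
    "SELECT * FROM chat WHERE user_1 = ? OR user_2 = ?",
    ("u1", "u1")).fetchall()

# Every OR branch now touches both indexed columns; the extra
# comparisons (user_2 = user_2, user_1 = user_1) are always true
# because the columns are NOT NULL.
tricked = con.execute(
    """SELECT * FROM chat
       WHERE (user_1 = ? AND user_2 = user_2)
          OR (user_1 = user_1 AND user_2 = ?)""",
    ("u1", "u1")).fetchall()

print(sorted(plain) == sorted(tricked))  # → True
```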

SQL query takes too much time (3 joins)

I'm facing an issue with an SQL query. I'm developing a PHP website, and to avoid making too many queries, I prefer to make one big one that looks like:
select m.*, cj.*, cjb.*, me.pseudo as pseudo_acheteur
from mercato m
JOIN cartes_joueur cj
ON m.ID_carte = cj.ID_carte_joueur
JOIN cartes_joueur_base cjb
ON cj.ID_carte_joueur_base = cjb.ID_carte_joueur_base
JOIN membres me
ON me.ID_membre = cj.ID_membre
where not exists (select * from mercato_encheres me where me.ID_mercato = m.ID_mercato)
and cj.ID_membre = 2
and m.status <> 'cancelled'
ORDER BY total_carac desc, cj.level desc, cjb.nom_carte asc
This should return all cards sold by the member without any bet on it. In the result, I need all the information to display them.
Here is the approximate rows in each table :
mercato : 1200
cartes_joueur : 800 000
carte_joueur_base : 62
membres : 2000
mercato_enchere : 15 000
I tried to reduce the row counts (in a dev environment) by deleting old data, but the query still needs 10-15 seconds to execute (which is way too long for a website).
Thanks for your help.
Let's take a look.
The use of * in SELECT clauses is harmful to query performance. Why? It's wasteful. It needlessly adds to the volume of data the server must process, and in the case of JOINs, can force the processing of columns with duplicate values. If you possibly can do so, try to enumerate the columns you need.
You may not have useful indexes on your tables for accelerating this. We can't tell. Please notice that MySQL generally can't exploit multiple indexes on the same table in a single query, so to make a query fast you often need a well-chosen compound index. I suggest you try defining the index (ID_membre, ID_carte_joueur, ID_carte_joueur_base) on your cartes_joueur table. Why? Your query matches for equality on the first of those columns, and then uses the second and third columns in ON conditions.
I have often found that writing a query with the largest table (most rows) first helps me think clearly about optimizing. In your case your largest table is cartes_joueur, and you are choosing just one ID_membre value from that table. Your clearest path to optimization is the knowledge that you only need to examine approximately 400 rows from that table, not 800 000. An appropriate compound index will make that possible, and it's easiest to imagine that index's columns if the table comes first in your query.
You have a correlated subquery -- this one.
where not exists (select *
from mercato_encheres me
where me.ID_mercato = m.ID_mercato)
MySQL's query planner can be stupidly literal-minded when it sees this, running it thousands of times. In your case it's even worse: it's got SELECT * in it: see point 1 above.
It should be refactored to use the LEFT JOIN ... IS NULL pattern. Here's how that goes.
select whatever
from mercato m
JOIN ...
JOIN ...
LEFT JOIN mercato_encheres mench ON mench.ID_mercato = m.ID_mercato
WHERE mench.ID_mercato IS NULL
and ...
ORDER BY ...
Explanation: The use of LEFT JOIN rather than an ordinary inner JOIN allows rows from the mercato table to be preserved in the output even when the ON condition does not match them to any rows in the mercato_encheres table. The mismatched rows get NULL values for the second table's columns. The mench.ID_mercato IS NULL condition in the WHERE clause then selects only the mismatched rows.
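A minimal sketch of this anti-join pattern on toy versions of the two tables (only the columns the pattern needs, run on SQLite with the same query shape):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mercato (ID_mercato INTEGER PRIMARY KEY, status TEXT)")
con.execute("CREATE TABLE mercato_encheres (ID_enchere INTEGER, ID_mercato INTEGER)")
con.executemany("INSERT INTO mercato VALUES (?, ?)",
                [(1, "open"), (2, "open"), (3, "cancelled")])
# Only sale 1 has a bet on it.
con.execute("INSERT INTO mercato_encheres VALUES (10, 1)")

rows = con.execute("""
    SELECT m.ID_mercato
    FROM mercato m
    LEFT JOIN mercato_encheres mench ON mench.ID_mercato = m.ID_mercato
    WHERE mench.ID_mercato IS NULL       -- keep only the unmatched rows
      AND m.status <> 'cancelled'
""").fetchall()
print(rows)  # → [(2,)] : the only non-cancelled sale without a bet
```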

mysql in clause vs big table joins

I have a query which gets data by joining 3 big tables (~1mm records each), in addition they are very busy tables.
Is it better to do the traditional joins? Or rather to first fetch values from the first table, and then run a secondary query passing the retrieved values in a comma-delimited IN clause?
Option #1
SELECT *
FROM BigTable1 a
INNER JOIN BigTable2 b using(someField2)
INNER JOIN BigTable3 c using(someField3)
WHERE a.someField1 = 'value'
vs
Option #2
$values = SELECT someField2 FROM BigTable1 WHERE someField1 = 'value'; # (~20-200 values)
SELECT *
FROM BigTable2
INNER JOIN BigTable3 c using(someField1)
WHERE someField2 in ($values)
Option #3
create a temp table to store these values from BigTable1
and use it instead of joining BigTable1 directly
any other option?
I think the best option is to try both approaches and run EXPLAIN on them.
One optimization you could make would be to use a stored procedure for the second approach, which would reduce the time/overhead of having to run 2 queries from the client.
Finally, joining is quite an expensive operation for very large tables, since you're essentially projecting and selecting over 1M x 1M rows. (Terms: what are projection and selection?)
There is no definitive answer to your question; you should profile both ways, since the outcome depends on multiple factors.
However, the first approach is the one usually taken, and it should be faster if all of the tables are correctly indexed and the row sizes are "standard".
Also take into account that with the second approach the network latency will be considerably worse, since you will need multiple round trips to the DB.
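Option #2 can be sketched explicitly, which also makes the extra round trip visible. Toy tables and data, with SQLite used as a stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BigTable1 (someField1 TEXT, someField2 INTEGER)")
con.execute("CREATE TABLE BigTable2 (someField2 INTEGER, payload TEXT)")
con.executemany("INSERT INTO BigTable1 VALUES (?, ?)",
                [("value", 1), ("value", 2), ("other", 3)])
con.executemany("INSERT INTO BigTable2 VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

# Trip 1: fetch the ~20-200 key values.
keys = [r[0] for r in con.execute(
    "SELECT someField2 FROM BigTable1 WHERE someField1 = ?", ("value",))]

# Trip 2: build an IN list of placeholders (never string-concatenate values).
marks = ",".join("?" * len(keys))
rows = con.execute(
    f"SELECT payload FROM BigTable2 WHERE someField2 IN ({marks})",
    keys).fetchall()
print(rows)  # → [('a',), ('b',)]
```

With Option #1, the same result comes back in a single round trip, which is the latency point made above.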

Reduce number of database IO or size of data operation?

To make the system more efficient, should we reduce the number of database IO or reduce the size of data operation?
More specifically, suppose I want to get objects ranked 60-70.
1st approach:
Join several tables into one huge table, sort it based on some attributes, and return the top 70 objects with all their attributes; I then use only objects 60-70.
2nd approach:
Join fewer tables, sort them, and get the top 70 objects' ids; then do a second lookup for objects 60-70 based on their ids.
So which one is better in terms of efficiency, especially for MySQL?
It will depend on how you designed your query.
Usually JOIN operations are more efficient than using IN (group) or nested SELECTs, but when joining 3 or more tables you have to choose the order carefully to optimize it.
And of course, every join condition should involve a PRIMARY KEY.
If the query remains too slow despite your efforts, then you should use a cache: a new table, or even a file, that stores the results of this query until a given expiration time, at which point it is refreshed.
This is a common practice when the results of a heavy query are needed frequently in the system.
You can always count on MySQL Workbench to measure the speed of your queries and play with your options.
Ordinarily, the best way to take advantage of query optimization is to combine the two approaches you present.
SELECT col, col, col, col, etc
FROM tab1
JOIN tab2 ON col = col
JOIN tab3 ON col = col
WHERE tab1.id IN
    ( SELECT DISTINCT tab1.id
      FROM whatever
      JOIN whatever ON col = col
      WHERE whatever
      ORDER BY col DESC
      LIMIT 70
    )
See how that goes? You make a subquery to select the IDs, then use it in the main query.
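A minimal sketch of this pattern (toy table and data, run on SQLite; note that some MySQL versions reject LIMIT directly inside an IN subquery, in which case the subquery can be joined as a derived table instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab1 (id INTEGER PRIMARY KEY, score INTEGER, name TEXT)")
con.executemany("INSERT INTO tab1 VALUES (?, ?, ?)",
                [(1, 50, "low"), (2, 90, "high"), (3, 70, "mid")])

# Subquery picks the top-2 ids cheaply; outer query fetches full rows.
rows = con.execute("""
    SELECT name
    FROM tab1
    WHERE tab1.id IN
          ( SELECT id
              FROM tab1
             ORDER BY score DESC
             LIMIT 2 )
    ORDER BY score DESC
""").fetchall()
print(rows)  # → [('high',), ('mid',)]
```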