I need to select data from three MySQL database tables, tally up results from one table (points), order by highest points to lowest, and show only 100 results.
I have this query, which I believe is on the cusp of success but not quite there.
The 3 tables are users, dealerships and sales_list.
Your assistance with achieving the above and correcting the query is appreciated.
$query = " SELECT t1.*, t2.*, t3.sales_points
FROM users t1
JOIN dealerships t2
ON t1.dealership_id = t2.dealership_id
INNER JOIN sales_list t3
ON t1.users_sales_guild_id = t3.users_sales_guild_id
ORDER BY t3.sales_points
LIMIT 100";
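A sketch of one possible correction, assuming the goal is to sum sales_points per user and that users_sales_guild_id identifies a single user (the total_points alias is illustrative); with ONLY_FULL_GROUP_BY enabled you may need to list the selected columns explicitly instead of using t1.*, t2.*:
SELECT t1.*, t2.*, SUM(t3.sales_points) AS total_points
FROM users t1
JOIN dealerships t2
  ON t1.dealership_id = t2.dealership_id
JOIN sales_list t3
  ON t1.users_sales_guild_id = t3.users_sales_guild_id
GROUP BY t1.users_sales_guild_id
ORDER BY total_points DESC
LIMIT 100;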
I'm new to MySQL, and I'd like some help in setting up a MySQL query to pull some data from a few tables (~100,000 rows) in a particular output format.
This problem involves three SQL tables:
allusers : This one contains user information. The columns of interest are userid and vip
table1 and table2 contain data, but they also have a userid column, which matches the userid column in allusers.
What I'd like to do:
I'd like to create a query which searches through allusers, finds the userid of those that are VIP, and then count the number of records in each of table1 and table2 grouped by the userid. So, my desired output is:
userid | Count in Table1 | Count in Table2
1 | 5 | 21
5 | 16 | 31
8 | 21 | 12
What I've done so far:
I've created this statement:
SELECT userid, count(1)
FROM table1
WHERE userid IN (SELECT userid FROM allusers WHERE vip IS NOT NULL)
GROUP BY userid
This gets me close to what I want. But now, I want to add another column with the respective counts from table2
I also tried using joins like this:
select A.userid, count(T1.userid), count(T2.userid) from allusers A
left join table1 T1 on T1.userid = A.userid
left join table2 T2 on T2.userid = A.userid
where A.vip is not null
group by A.userid
However, this query took a very long time and I had to kill the query. I'm assuming this is because using Joins for such large tables is very inefficient.
Similar Questions
This one is looking for a similar result as I am, but doesn't need nearly as much filtering with subqueries
This one sums up the counts across tables, while I need the counts separated into columns
Could someone help me set up the query to generate the data I need?
Thanks!
You need to pre-aggregate first, then join; otherwise the results will not be what you expect when a user has several rows in both table1 and table2, because the join multiplies the rows and inflates both counts. Besides, pre-aggregation is usually more efficient than aggregating after the join in a situation such as yours.
Consider:
select a.userid, t1.cnt cnt1, t2.cnt cnt2
from allusers a
left join (select userid, count(*) cnt from table1 group by userid) t1
on t1.userid = a.userid
left join (select userid, count(*) cnt from table2 group by userid) t2
on t2.userid = a.userid
where a.vip is not null
This is a case where I would recommend correlated subqueries:
select a.userid,
(select count(*) from table1 t1 where t1.userid = a.userid) as cnt1,
(select count(*) from table2 t2 where t2.userid = a.userid) as cnt2
from allusers a
where a.vip is not null;
The reason I recommend this approach is that you are filtering the allusers table. That means the pre-aggregation approach may be doing additional, unnecessary work, since it counts rows for every user rather than only the VIPs you keep.
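Whichever form you use, indexes on the userid columns let each count be satisfied by an index range scan instead of a full scan; a sketch with illustrative index names, assuming no such indexes exist yet:
-- speeds up the per-user counts in both the derived-table and correlated-subquery versions
CREATE INDEX idx_table1_userid ON table1 (userid);
CREATE INDEX idx_table2_userid ON table2 (userid);
-- an index on the filter column can also help the scan of allusers
CREATE INDEX idx_allusers_vip ON allusers (vip);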
I have a query for a list of genes, and I am doing an INNER JOIN to retrieve the results that match those genes in a database (see the EER diagram):
SELECT t1.*, database1.*
FROM t1
INNER JOIN database1
ON t1.GeneSymbol = database1.GeneSymbol;
I have multiple databases containing gene interactions with different numbers of rows (varying from 5,000 to 70,000,000 rows), and I would like to add up all the matching rows together. I have tried to perform a simple UNION ALL instead of an INNER JOIN, like the following:
SELECT t1.*, database1.*
FROM t1, database1
WHERE t1.GeneSymbol = database1.GeneSymbol
UNION ALL
SELECT t1.*, database2.*
FROM t1, database2
WHERE t1.GeneSymbol = database2.GeneSymbol;
However, if I try to add more and more databases using UNION ALL to merge the results, it takes forever. I was wondering: if I did an INNER JOIN + INSERT INTO a table with the correct number of columns for each database, would it go a lot faster?
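A sketch of that INSERT INTO approach, assuming a pre-created target table (matched_interactions is an illustrative name) whose columns match the select list, and that every databaseN table has the same column layout; an index on GeneSymbol in each source table is what usually makes the individual joins fast:
-- run one statement per source database table; each can be executed and committed separately
INSERT INTO matched_interactions
SELECT t1.*, database1.*
FROM t1
INNER JOIN database1
  ON t1.GeneSymbol = database1.GeneSymbol;

INSERT INTO matched_interactions
SELECT t1.*, database2.*
FROM t1
INNER JOIN database2
  ON t1.GeneSymbol = database2.GeneSymbol;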
I have two tables in MySQL: User (user_id, first_name, ...) and login_history (user_id, login_time).
Every time a user logs in, the system records the time in login_history.
I want to run a query to fetch all the fields from the User table and the latest login time from login_history. Can anyone help, please?
You have to use a join then:
SELECT *, login_history.login_time
FROM User
INNER JOIN login_history
ON User.user_id=login_history.user_id;
This query will give you all the columns of User plus the login_time.
SELECT t1.col1
,t1.col2
,[...repeat for all columns in User table]
,max(t2.login_time)
FROM user t1
INNER JOIN login_history t2 ON t1.user_id = t2.user_id
GROUP BY t1.col1
,t1.col2
,[..repeat for all columns in User table]
This should work, assuming login_time is stored in a sane data type and/or format.
The following are two queries that can help you select the latest login time along with the user details:
SELECT * FROM User C,login_history O where C.user_id=O.user_id order by O.login_time desc limit 1
or
SELECT * FROM User C, login_history O where C.user_id=O.user_id and O.login_time = (SELECT MAX(login_time) FROM login_history WHERE user_id = C.user_id)
Due to its geographic capabilities I'm migrating my database from MySQL to PostgreSQL/PostGIS, and SQL that used to be trivial is now becoming painfully slow to get right.
In this case I use a nested query to obtain results in two columns, an ID in the first and a count in the second, and write those results into table1.
EDIT: This is the original MySQL code (which works) that I need to get working in PostgreSQL:
UPDATE table1 INNER JOIN (
SELECT id, COUNT(*) AS cnt
FROM table2
GROUP BY id
) AS c ON c.id = table1.id
SET table1.cnt = c.cnt
The result I'm getting is that every row ends up with the same count, namely the first count returned by the nested select.
In MySQL this would be solved easily.
How would this work in PostgreSQL?
Thank you!
UPDATE table1 dst
SET cnt = src.cnt
FROM (SELECT id, COUNT (*) AS cnt
FROM table2
GROUP BY id) as src
WHERE src.id = dst.id
;
I have two tables: one holds user info (id, name, etc.) and another holds user tickets and ticket status (ticket_id, user_id, ticket_status, etc.).
I want to produce a list of ALL the users for example: ( SELECT * FROM user_table )
And for each user I need a count of their tickets for example:
(SELECT t1.user_id, COUNT(*) FROM user_tickets t1 WHERE t1.ticket_status = 15 GROUP BY t1.ticket_status, t1.user_id )
I can do this query to achieve what I'm looking for, but it takes 5 seconds to run on 50,000 tickets, while each query run separately takes only a fraction of a second.
SELECT t1.user_id, COUNT(*)
FROM user_tickets t1
LEFT JOIN user_table t2 ON t1.user_id = t2.id
WHERE t2.group_id = 20 AND t1.status_id = 15
GROUP BY t1.status_id, user_id
Any idea how to write the query to get the same performance as running each query separately?
Adding an index on the WHERE clause columns fixed the problem.
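For reference, a sketch of the kind of index that helps here, assuming the column names from the query above (the index names are illustrative):
-- covers the status filter and the grouping on user_tickets
CREATE INDEX idx_tickets_status_user ON user_tickets (status_id, user_id);
-- covers the group filter and the join key on user_table
CREATE INDEX idx_user_group_id ON user_table (group_id, id);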