Get count of complex query - mysql

I would like to count the total number of rows returned by the following query:
SELECT table1.*, COUNT(table2.fk) * (100/18) AS 'number'
FROM table1 INNER JOIN table2 ON table1.pk = table2.fk
WHERE table1.Street LIKE '$Street%'
AND table1.City LIKE '$City%'
AND table1.Zip LIKE '%$Zip'
AND table1.DOBY LIKE '%$DOBY'
AND table1.DOBM LIKE '%$DOBM'
AND table1.DOBD LIKE '%$DOBD'
AND table1.Gender LIKE '$gender%'
AND table2.year>= 2004
AND table2.type IN ('AA', 'AB', 'AC')
GROUP BY table2.fk
HAVING (COUNT(table2.fk) * (100/18)) >= '$activity'
ORDER BY DOBY, DOBM, DOBD ASC
The query counts the number of times the primary key of table1 occurs as the foreign key in table2, and calculates a percentage ('number') from a fixed base. It works well enough, but I'm having trouble getting the total number of records found for my pagination script.
I would appreciate it if anyone could offer some suggestions or solutions.

You can use SQL_CALC_FOUND_ROWS (google for the exact syntax),
and then SELECT FOUND_ROWS() AS total.

Going with what Itay Moav says, your language's database library should have a wrapper for the FOUND_ROWS() function. Per the documentation, it returns the number of rows the last SELECT statement with a LIMIT clause would have returned without the LIMIT.
If it doesn't have one, you can just send another query to the database, SELECT FOUND_ROWS();, which returns the same information.
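A sketch of how the two pieces fit the query from the question (the WHERE filters are shortened for brevity, and the LIMIT values stand in for whatever page window the pagination script uses):
SELECT SQL_CALC_FOUND_ROWS table1.*, COUNT(table2.fk) * (100/18) AS 'number'
FROM table1 INNER JOIN table2 ON table1.pk = table2.fk
WHERE table1.Street LIKE '$Street%'
AND table2.year >= 2004
AND table2.type IN ('AA', 'AB', 'AC')
GROUP BY table2.fk
HAVING (COUNT(table2.fk) * (100/18)) >= '$activity'
ORDER BY DOBY, DOBM, DOBD ASC
LIMIT 0, 20;
SELECT FOUND_ROWS() AS total;
Note that SQL_CALC_FOUND_ROWS is deprecated as of MySQL 8.0.17, where a separate COUNT(*) query over the same conditions is the recommended replacement.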

SQL - Nested query optimization

How can I optimize this SQL query?
CREATE TABLE table1 AS
SELECT * FROM temp
WHERE Birth_Place IN
(SELECT c.DES_COM
FROM tableCom AS c
WHERE c.COD_PROV IS NULL)
ORDER BY Cod, Birth_Date
I think the problem is the IN clause.
First of all, it's not quite valid SQL, since you are selecting and sorting by columns that are not part of the group. What you want to do is called "select top N per group"; see Select first row in each GROUP BY group?
Your query doesn't make sense, because you have SELECT * with GROUP BY. Ignoring that, I would recommend writing the query as:
SELECT t.*
FROM temp t
WHERE EXISTS (SELECT 1
FROM tableCom c
WHERE t.Birth_Place = c.DES_COM AND
c.COD_PROV IS NULL
)
ORDER BY Cod, Birth_Date;
For this, I recommend an index on tableCom(DES_COM, COD_PROV). Your database might also be able to use an index on temp(Cod, Birth_Date, Birth_Place).
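For example (the index names here are just placeholders):
CREATE INDEX idx_tablecom_des_com ON tableCom (DES_COM, COD_PROV);
CREATE INDEX idx_temp_cod_birth ON temp (Cod, Birth_Date, Birth_Place);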

The query gives "single-row subquery returns more than one row"

I'm trying to show staff_code, staff_name and dept_name for staff who have taken out more than one book.
Here's my query:
SELECT SM.STAFF_CODE,SM.STAFF_NAME,DM.DEPT_NAME,BT.BOOK_CODE
FROM STAFF_MASTER SM,DEPARTMENT_MASTER DM,BOOK_TRANSACTIONS BT
WHERE SM.DEPT_CODE =DM.DEPT_CODE
AND SM.STAFF_CODE = (
SELECT STAFF_CODE
FROM BOOK_TRANSACTIONS
HAVING COUNT(*) > 1
GROUP BY STAFF_CODE)
It gives the error:
single-row subquery returns more than one row.
How to solve this?
Change = to IN:
WHERE SM.STAFF_CODE IN (SELECT ...)
Because the subquery returns multiple values, equals won't work, but IN returns true if any value in the list matches. The list can be a hard-coded list of values or, as in your query, a subquery that returns a single column.
That will fix the error, but you also need to remove BOOK_TRANSACTIONS from the table list and remove BOOK_CODE from the select list.
After making these changes, your query would look like this:
SELECT SM.STAFF_CODE, SM.STAFF_NAME, DM.DEPT_NAME
FROM STAFF_MASTER SM, DEPARTMENT_MASTER DM
WHERE SM.DEPT_CODE = DM.DEPT_CODE
AND SM.STAFF_CODE IN (
SELECT STAFF_CODE
FROM BOOK_TRANSACTIONS
GROUP BY STAFF_CODE
HAVING COUNT(*) > 1)
I recommend learning the modern (now over 25-year-old) explicit JOIN syntax.
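For reference, the same fixed query written with an explicit join would look something like this (a sketch using the tables above):
SELECT SM.STAFF_CODE, SM.STAFF_NAME, DM.DEPT_NAME
FROM STAFF_MASTER SM
JOIN DEPARTMENT_MASTER DM ON SM.DEPT_CODE = DM.DEPT_CODE
WHERE SM.STAFF_CODE IN (
SELECT STAFF_CODE
FROM BOOK_TRANSACTIONS
GROUP BY STAFF_CODE
HAVING COUNT(*) > 1)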

SQL: How to decrease the statement execution time?

I'm not an expert in SQL. I have this SQL statement:
SELECT * FROM articles WHERE article_id IN
(SELECT distinct(content_id) FROM contents_by_cats WHERE cat_id='$cat')
AND permission='true' AND date <= '$now_date_time' ORDER BY date DESC;
Table contents_by_cats has 11000 rows.
Table articles has 2700 rows.
Variables $now_date_time and $cat are php variables.
This query takes about 10 seconds to return the values (I think because of the nested SELECT), and 10 seconds is a long time.
How can I achieve this another way, with a view or a JOIN?
I think a JOIN will help here, but I don't know how to use it properly for the SQL statement above.
Thanks in advance.
A JOIN is exactly what you are looking for. Try something like this:
SELECT DISTINCT articles.*
FROM articles
JOIN contents_by_cats ON articles.article_id = contents_by_cats.content_id
WHERE contents_by_cats.cat_id='$cat'
AND articles.permission='true'
AND articles.date <= '$now_date_time'
ORDER BY date DESC;
If your query is still not as fast as you would like, check that you have indexes on articles.article_id, contents_by_cats.content_id, and contents_by_cats.cat_id. Depending on the data, you may want an index on articles.date as well.
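For example (the index names are made up, a single composite index can cover both contents_by_cats columns, and articles.article_id is usually already indexed as the primary key):
CREATE INDEX idx_cbc_cat_content ON contents_by_cats (cat_id, content_id);
CREATE INDEX idx_articles_date ON articles (date);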
Also note that if the $cat and $now_date_time values come from a user, you should really prepare the query and bind the parameters rather than dumping the values straight into the SQL string.
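A server-side sketch of that idea using MySQL's PREPARE/EXECUTE (from PHP you would normally use PDO or mysqli prepared statements instead; the values set below are just examples):
SET @cat = '3';
SET @cutoff = NOW();
PREPARE stmt FROM
'SELECT DISTINCT a.*
FROM articles a
JOIN contents_by_cats c ON a.article_id = c.content_id
WHERE c.cat_id = ? AND a.permission = ''true'' AND a.date <= ?
ORDER BY a.date DESC';
EXECUTE stmt USING @cat, @cutoff;
DEALLOCATE PREPARE stmt;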
This is the query we are starting with:
SELECT a.*
FROM articles a
WHERE article_id IN (SELECT distinct(content_id)
FROM contents_by_cats
WHERE cat_id ='$cat'
) AND
permission ='true' AND
date <= '$now_date_time'
ORDER BY date DESC;
Two things will help this query. The first is to rewrite it using EXISTS rather than IN and to simplify the subquery:
SELECT a.*
FROM articles a
WHERE EXISTS (SELECT 1
FROM contents_by_cats cbc
WHERE cbc.content_id = a.article_id and cat_id = '$cat'
) AND
permission ='true' AND
date <= '$now_date_time'
ORDER BY date DESC;
Second, you want indexes on both articles and contents_by_cats:
create index idx_articles_3 on articles(permission, date, article_id);
create index idx_contents_by_cats_2 on contents_by_cats(content_id, cat_id);
By the way, instead of $now_date_time, you can just use the NOW() function in MySQL.
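Putting the EXISTS rewrite and NOW() together, the final statement would look something like this (a sketch; '$cat' should still be bound as a parameter rather than interpolated):
SELECT a.*
FROM articles a
WHERE EXISTS (SELECT 1
FROM contents_by_cats cbc
WHERE cbc.content_id = a.article_id AND cbc.cat_id = '$cat'
) AND
permission = 'true' AND
date <= NOW()
ORDER BY date DESC;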

mysql search query returns the same record x times

I have the following MySQL query:
SELECT *, DATE(file_created) as created, s.disk_id, s.url,
(SELECT COUNT(*) FROM Comments WHERE cmt_type=1 AND file_id=cmt_id) as comments FROM (files, servers s)
WHERE usr_id=1 AND (file_name LIKE CONCAT('%','sample','%') OR file_descr LIKE CONCAT('%','sample','%'))
ORDER BY file_created DESC
When I run this query, I get 40 records back if there is at least one record matching the query, and all 40 results are the same record with the same ID!
I can't see any obvious problem with the query, so I'm not sure what is causing this.
Here is your query, formatted so it is better understood:
SELECT *, DATE(file_created) as created, s.disk_id, s.url,
(SELECT COUNT(*) FROM Comments WHERE cmt_type=1 AND file_id=cmt_id) as comments
FROM files, servers s
WHERE usr_id = 1 AND
(file_name LIKE CONCAT('%','sample','%') OR file_descr LIKE CONCAT('%','sample','%'))
ORDER BY file_created DESC
You have no join condition between files and servers, so it is no surprise that you are getting duplicates. The comma in the FROM clause means cross join, i.e. "create a Cartesian product". Simply do not use commas in the FROM clause; it's a simple rule that will save future frustration.
So, if the file has a server id, then you might want:
SELECT *, DATE(file_created) as created, s.disk_id, s.url,
(SELECT COUNT(*) FROM Comments WHERE cmt_type=1 AND file_id=cmt_id) as comments
FROM files JOIN
servers s
ON files.serverid = s.serverid
WHERE usr_id = 1 AND
(file_name LIKE CONCAT('%','sample','%') OR file_descr LIKE CONCAT('%','sample','%'))
ORDER BY file_created DESC
You are joining two tables (files and servers) but I can't see anything to restrict the servers, so the query is going to repeat each matching row of files with each matching row of servers (all of them, I suppose).
I think:
First, you should add a join criterion between the tables, something like files.server_id = servers.id, to the WHERE clause.
Second, you should change the file_id=cmt_id condition to Comments.file_id = files.file_id so the counts are correct.
SELECT *, DATE(file_created) as created, s.disk_id, s.url, (SELECT COUNT(*) FROM Comments WHERE cmt_type=1 AND Comments.file_id = files.file_id) as comments
FROM (files, servers s)
WHERE files.file_server_id = s.server_id AND usr_id=1 AND (file_name LIKE CONCAT('%','sample','%') OR file_descr LIKE CONCAT('%','sample','%'))
ORDER BY file_created DESC
I hope it works.

Why is this SQL query with subquery very slow?

I have this query:
select *
from transaction_batch
where id IN
(
select MAX(id) as id
from transaction_batch
where status_id IN (1,2)
group by status_id
);
The inner query runs very fast (less than 0.1 seconds) and returns two IDs, one for status 1 and one for status 2; the outer query then selects by primary key, so it is indexed. EXPLAIN says it's scanning 135k rows using WHERE only, and I cannot for the life of me figure out why this is so slow.
The inner query is run separately for every row of your table, over and over again.
Since the inner query does not reference the outer query, I suggest you split the two queries and just insert the results of the inner query into the WHERE clause of the outer one.
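A sketch of that two-step approach (the ids in the second statement are placeholders for whatever the first one returns):
-- Step 1: get the two ids; this part is already fast
SELECT MAX(id) AS id
FROM transaction_batch
WHERE status_id IN (1, 2)
GROUP BY status_id;
-- Step 2: fetch the rows by primary key using the values from step 1
SELECT *
FROM transaction_batch
WHERE id IN (12345, 67890);
Alternatively, the derived-table join below folds both steps into a single statement: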
select b.*
from transaction_batch b
inner join (
select max(id) as id
from transaction_batch
where status_id in (1, 2)
group by status_id
) bm on b.id = bm.id
My first post here... sorry about the lack of formatting.
I had a performance problem, shown below:
90 sec: WHERE [Column] LIKE (SELECT [Value] FROM [Table]) -- dynamic, slow
1 sec: WHERE [Column] LIKE ('A','B','C') -- hardcoded, fast
1 sec: WHERE @CSV LIKE CONCAT('%',[Column],'%') -- solution, below
I had tried joining rather than subquerying.
I had also tried a hardcoded CTE.
I had lastly tried a temp table.
None of these standard options worked, and I was not willing to go the sp_execute route.
The only solution that worked was:
DECLARE @CSV nvarchar(max) = (SELECT STRING_AGG([Value], ',') FROM [Table]);
-- This yields @CSV = 'A,B,C'
...
WHERE @CSV LIKE CONCAT('%',[Column],'%')