I have the following query that I'm trying to use to output each day in a date range and show the number of leads, assignments, and returns:
select
date_format(from_unixtime(date_created), '%m/%d/%Y') as date_format,
(select count(distinct(id_lead)) from lead_history where (date_format(from_unixtime(date_created), '%m/%d/%Y') = date_format) and (id_vertical in (2)) and (id_website in (3,8))) as leads,
(select count(id) from assignments where deleted=0 and (date_format(from_unixtime(date_assigned), '%m/%d/%Y') = date_format) and (id_vertical in (2)) and (id_website in (3,8))) as assignments,
(select count(id) from assignments where deleted=1 and (date_format(from_unixtime(date_deleted), '%m/%d/%Y') = date_format) and (id_vertical in (2)) and (id_website in (3,8))) as returns
from lead_history
where date_created between 1509494400 and 1512086399
group by date_format
The date_created, date_assigned, and date_deleted fields are integers representing timestamps. id, id_lead, id_vertical and id_website are already indexed.
Would adding indexes to date_created, date_assigned, date_deleted, and deleted help make this faster? The issue I'm having is that it is very slow, and I'm not sure an index will help when using date_format(from_unixtime(...
Looking at your code, you could rewrite the query as follows:
select
    date_format(from_unixtime(h.date_created), '%m/%d/%Y') as date_format
  , count(distinct h.id_lead) as leads
  , sum(case when a.deleted = 0 then 1 else 0 end) as assignments
  , sum(case when b.deleted = 1 then 1 else 0 end) as returns
from lead_history h
inner join assignments a
    on a.date_assigned = h.date_created
    and a.id_vertical = 2
    and a.id_website in (3,8)
inner join assignments b
    on b.date_deleted = h.date_created
    and b.id_vertical = 2
    and b.id_website in (3,8)
where h.date_created between 1509494400 and 1512086399
group by date_format
Anyway, you should avoid useless parentheses and nested parentheses, avoid unnecessary conversions between dates, and use joins instead of subselects, or at least reduce similar subselects by using CASE.
PS: as far as the indexes are concerned, remember that applying a conversion or function to a column value prevents the use of the related index.
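For example, a filter written against the raw timestamp column can use an index on date_created, while one that wraps the column in functions cannot. A minimal sketch, assuming the session time zone matches how the timestamps were stored (the literal date is only an illustration):

-- Sargable: the indexed column is compared directly to constants.
select count(*)
from lead_history
where date_created >= unix_timestamp('2017-11-01 00:00:00')
  and date_created <  unix_timestamp('2017-11-02 00:00:00');

-- Not sargable: every row must be converted before the comparison can run.
select count(*)
from lead_history
where date_format(from_unixtime(date_created), '%m/%d/%Y') = '11/01/2017';

The same applies to date_assigned and date_deleted: indexes on them only help if the predicates compare the raw integer values.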
Here is my query
SELECT
SUM(o.order_disc + o.order_disc_vat) AS manualsale
FROM
orders o
WHERE
o.order_flag IN (0 , 2, 3)
AND o.order_status = '1'
AND (o.assign_sale_id IN (SELECT GROUP_CONCAT(CAST(id AS SIGNED)) AS ids FROM users WHERE team_id = 92))
AND DATE(o.payment_on) = DATE(NOW())
The above query returns NULL when I run it in the terminal.
When I run the subquery below on its own, it returns data:
SELECT GROUP_CONCAT(CAST(id AS SIGNED)) AS ids FROM users WHERE team_id = 92
The above query returns:
'106,124,142,179'
and when I run my first query like below:
SELECT
SUM(o.order_disc + o.order_disc_vat) AS manualsale
FROM
orders o
WHERE
o.order_flag IN (0 , 2, 3)
AND o.order_status = '1'
AND (o.assign_sale_id IN (106,124,142,179))
AND DATE(o.payment_on) = DATE(NOW())
it returns a value.
Why is it not working with the subquery? Please help.
This does not do what you want:
AND (o.assign_sale_id IN (SELECT GROUP_CONCAT(CAST(id AS SIGNED)) AS ids FROM users WHERE team_id = 92))
This compares a single value against a comma-separated list of values, so it never matches (unless there is just one row in users for the given team).
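In effect, GROUP_CONCAT collapses the ids into a single string, so your IN list contains one string value rather than four ids; it behaves roughly like this (using the list from your own test):

AND o.assign_sale_id IN ('106,124,142,179')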
You could phrase this as:
AND assign_sale_id IN (SELECT id FROM users WHERE team_id = 92)
But this would probably be more efficiently expressed with EXISTS:
AND EXISTS(SELECT 1 FROM users u WHERE u.team_id = 92 AND u.id = o.assign_sale_id)
Side note: I would also recommend rewriting this condition:
AND DATE(o.payment_on) = DATE(NOW())
To the following, which can take advantage of an index:
AND o.payment_on >= current_date AND o.payment_on < current_date + interval 1 day
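Putting both changes together, the whole query might look like this (a sketch that only recombines the columns and filters already shown in the question):

SELECT
    SUM(o.order_disc + o.order_disc_vat) AS manualsale
FROM
    orders o
WHERE
    o.order_flag IN (0 , 2, 3)
        AND o.order_status = '1'
        AND EXISTS (SELECT 1 FROM users u WHERE u.team_id = 92 AND u.id = o.assign_sale_id)
        AND o.payment_on >= current_date
        AND o.payment_on < current_date + INTERVAL 1 DAY

With the range condition written this way, an index on orders(payment_on) can be used for the date filter.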
I'm stuck on a query where I need to concatenate the IDs of the table, and from that group of IDs I need to fetch those rows in a subquery. But when I try to do so, MySQL treats the group_concat() result as a single string, so the condition becomes false.
select count(*)
from rides r
where r.ride_status = 'cancelled'
and r.id IN (group_concat(rides.id))
*************** Original Query Below **************
-- Daily Earnings for 7 days [Final]
select
group_concat(rides.id) as ids,
group_concat(ride_category.name) as rideType,
group_concat(ride_cars.amount + ride_cars.commission) as rideAmount ,
group_concat(ride_types.name) as carType,
count(*) as numberOfRides,
(
select count(*) from rides r where r.ride_status = 'cancelled' and r.id IN (group_concat(rides.id) )
) as cancelledRides,
(
select count(*) from rides r where r.`ride_status` = 'completed' and r.id IN (group_concat(rides.id))
) as completedRides,
group_concat(ride_cars.status) as status,
sum(ride_cars.commission) + sum(ride_cars.amount) as amount,
date_format(from_unixtime(rides.requested_at/1000 + rides.offset*60), '%Y-%m-%d') as requestedDate,
date_format(from_unixtime(rides.requested_at/1000 + rides.offset*60), '%V') as week
from
ride_cars,
rides,
ride_category,
ride_type_cars,
ride_types
where
ride_cars.user_id = 166
AND (rides.ride_status = 'completed' or rides.ride_status = 'cancelled')
AND ride_cars.ride_id = rides.id
AND (rides.requested_at >= 1559347200000 AND requested_at < 1561852800000)
AND rides.ride_category = ride_category.id
AND ride_cars.car_model_id = ride_type_cars.car_model_id
AND ride_cars.ride_type_id = ride_types.id
group by
requestedDate;
Any solutions will be appreciated.
Try to replace the sub-query
(select count(*) from rides r where r.ride_status = 'cancelled' and r.id IN (group_concat(rides.id) )) as cancelledRides,
with the expression below, which counts using SUM and CASE and makes use of the GROUP BY:
SUM(CASE WHEN rides.ride_status = 'cancelled' THEN 1 ELSE 0 END) as cancelledRides
and the same for completedRides
And move to explicit JOIN syntax instead of the implicit (comma-separated) joins.
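Put together, the reworked query might look something like this (a sketch that keeps the tables and filters of the original query; some of the group_concat columns are trimmed for brevity):

select
    group_concat(rides.id) as ids,
    count(*) as numberOfRides,
    sum(case when rides.ride_status = 'cancelled' then 1 else 0 end) as cancelledRides,
    sum(case when rides.ride_status = 'completed' then 1 else 0 end) as completedRides,
    sum(ride_cars.commission) + sum(ride_cars.amount) as amount,
    date_format(from_unixtime(rides.requested_at/1000 + rides.offset*60), '%Y-%m-%d') as requestedDate
from ride_cars
inner join rides on ride_cars.ride_id = rides.id
inner join ride_category on rides.ride_category = ride_category.id
inner join ride_type_cars on ride_cars.car_model_id = ride_type_cars.car_model_id
inner join ride_types on ride_cars.ride_type_id = ride_types.id
where ride_cars.user_id = 166
  and rides.ride_status in ('completed', 'cancelled')
  and rides.requested_at >= 1559347200000
  and rides.requested_at < 1561852800000
group by requestedDate;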
I have a table (id, employee_id, device_id, logged_time) [simplified] that logs attendances of employees from biometric devices.
I generate reports showing the first in and last out time of each employee by date.
Currently, I am able to fetch the first in and last out time of each employee by date, but I also need to fetch the first in and last out device_ids of each employee. The entries are not in sequential order of the logged time.
I do not want to (and probably cannot) use joins as in one of the reports the columns are dynamically generated and can lead to thousands of joins. Furthermore, these are subqueries and are joined to other queries to get further details.
A sample setup of the table and queries are at http://sqlfiddle.com/#!9/3bc755/4
The first one just lists the entry and exit times by date for every employee:
select
attendance_logs.employee_id,
DATE(attendance_logs.logged_time) as date,
TIME(MIN(attendance_logs.logged_time)) as entry_time,
TIME(MAX(attendance_logs.logged_time)) as exit_time
from attendance_logs
group by date, attendance_logs.employee_id
The second one builds up an attendance chart given a date range
select
`attendance_logs`.`employee_id`,
DATE(MIN(case when DATE(`attendance_logs`.`logged_time`) = '2017-09-18' THEN `attendance_logs`.`logged_time` END)) as date_2017_09_18,
MIN(case when DATE(`attendance_logs`.`logged_time`) = '2017-09-18' THEN `attendance_logs`.`logged_time` END) as entry_2017_09_18,
MAX(case when DATE(`attendance_logs`.`logged_time`) = '2017-09-18' THEN `attendance_logs`.`logged_time` END) as exit_2017_09_18,
DATE(MIN(case when DATE(`attendance_logs`.`logged_time`) = '2017-09-19' THEN `attendance_logs`.`logged_time` END)) as date_2017_09_19,
MIN(case when DATE(`attendance_logs`.`logged_time`) = '2017-09-19' THEN `attendance_logs`.`logged_time` END) as entry_2017_09_19,
MAX(case when DATE(`attendance_logs`.`logged_time`) = '2017-09-19' THEN `attendance_logs`.`logged_time` END) as exit_2017_09_19
/*
* dynamically generated columns for dates in date range
*/
from `attendance_logs`
where `attendance_logs`.`logged_time` >= '2017-09-18 00:00:00' and `attendance_logs`.`logged_time` <= '2017-09-19 23:59:59'
group by `attendance_logs`.`employee_id`;
What I tried:
Similar to getting the max and min logged_time of each date using CASE, I tried to select the device_id where logged_time is the max/min.
MIN(case
    when `attendance_logs`.`logged_time` = MIN(
        case when DATE(`attendance_logs`.`logged_time`) = '2017-09-18'
             THEN `attendance_logs`.`logged_time` END
    )
    then `attendance_logs`.`device_id` end) as entry_device_2017_09_18
This results in an "invalid use of group function" error.
A quick hack for your query: pick the device id for the in and out times by using GROUP_CONCAT together with SUBSTRING_INDEX:
SUBSTRING_INDEX(GROUP_CONCAT(case when DATE(`l`.`logged_time`) = '2017-09-18' THEN `l`.`device_id` END ORDER BY `l`.`logged_time` desc),',',1) exit_device_2017_09_18,
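The entry device follows the same pattern, just ordered ascending so the device at the earliest logged_time comes first (a sketch using the same aliases):

SUBSTRING_INDEX(GROUP_CONCAT(case when DATE(`l`.`logged_time`) = '2017-09-18' THEN `l`.`device_id` END ORDER BY `l`.`logged_time` asc),',',1) entry_device_2017_09_18,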
Or, if the device id will be the same for each in and its corresponding out, then it can simply be written with GROUP_CONCAT only:
GROUP_CONCAT(DISTINCT case when DATE(`l`.`logged_time`) = '2017-09-18' THEN `l`.`device_id` END)
To avoid joins I suggest you try "correlated subqueries" instead:
select
employee_id
, logdate
, TIME(entry_time) entry_time
, (select MIN(l.device_id)
from attendance_logs l
where l.employee_id = d.employee_id
and l.logged_time = d.entry_time) entry_device
, TIME(exit_time) exit_time
, (select MAX(l.device_id)
from attendance_logs l
where l.employee_id = d.employee_id
and l.logged_time = d.exit_time) exit_device
from (
select
attendance_logs.employee_id
, DATE(attendance_logs.logged_time) as logdate
, MIN(attendance_logs.logged_time) as entry_time
, MAX(attendance_logs.logged_time) as exit_time
from attendance_logs
group by
attendance_logs.employee_id
, DATE(attendance_logs.logged_time)
) d
;
see: http://sqlfiddle.com/#!9/06e0e2/3
Note: I have used MIN() and MAX() on those subqueries only to avoid any possibility that these return more than one value. You could use limit 1 instead if you prefer.
Note also: I do not normally recommend correlated subqueries as they can cause performance issues, but they do supply the data you need.
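If the correlated subqueries do become a bottleneck, a composite index covering the correlation columns should let them be resolved by index lookups alone; a hypothetical sketch (the index name is mine):

ALTER TABLE attendance_logs
    ADD INDEX idx_employee_logged_device (employee_id, logged_time, device_id);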
Oh, and please try to avoid using date as a column name; it isn't good practice.
For readability, I would like to modify the below statement. Is there a way to extract the CASE statement, so I can use it multiple times without having to write it out every time?
select
mturk_worker.notes,
worker_id,
count(worker_id) answers,
count(episode_has_accepted_imdb_url) scored,
sum( case when isnull(imdb_url) and isnull(accepted_imdb_url) then 1
when imdb_url = accepted_imdb_url then 1
else 0 end ) correct,
100 * ( sum( case when isnull(imdb_url) and isnull(accepted_imdb_url) then 1
when imdb_url = accepted_imdb_url then 1
else 0 end)
/ count(episode_has_accepted_imdb_url) ) percentage
from
mturk_completion
inner join mturk_worker using (worker_id)
where
timestamp > '2015-02-01'
group by
worker_id
order by
percentage desc,
correct desc
You can actually eliminate the case statements. MySQL will interpret boolean expressions as integers in a numeric context (with 1 being true and 0 being false):
select mturk_worker.notes, worker_id, count(worker_id) answers,
count(episode_has_accepted_imdb_url) scored,
sum(imdb_url = accepted_imdb_url or imdb_url is null and accepted_imdb_url is null) as correct,
(100 * sum(imdb_url = accepted_imdb_url or imdb_url is null and accepted_imdb_url is null) / count(episode_has_accepted_imdb_url)
) as percentage
from mturk_completion inner join
mturk_worker
using (worker_id)
where timestamp > '2015-02-01'
group by worker_id
order by percentage desc, correct desc;
If you like, you can simplify it further by using the null-safe equals operator:
select mturk_worker.notes, worker_id, count(worker_id) answers,
count(episode_has_accepted_imdb_url) scored,
sum(imdb_url <=> accepted_imdb_url) as correct,
(100 * sum(imdb_url <=> accepted_imdb_url) / count(episode_has_accepted_imdb_url)
) as percentage
from mturk_completion inner join
mturk_worker
using (worker_id)
where timestamp > '2015-02-01'
group by worker_id
order by percentage desc, correct desc;
This isn't standard SQL, but it is perfectly fine in MySQL.
Otherwise, you would need to use a subquery, and there is additional overhead in MySQL associated with subqueries.
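For reference, the subquery approach would mean computing the flag once per row in a derived table and aggregating over it; a sketch along those lines (the correct_flag alias is just illustrative):

select notes, worker_id, count(worker_id) answers,
       count(episode_has_accepted_imdb_url) scored,
       sum(correct_flag) as correct,
       (100 * sum(correct_flag) / count(episode_has_accepted_imdb_url)) as percentage
from (select mturk_worker.notes, worker_id, episode_has_accepted_imdb_url,
             (imdb_url <=> accepted_imdb_url) as correct_flag
      from mturk_completion inner join
           mturk_worker
           using (worker_id)
      where timestamp > '2015-02-01'
     ) t
group by worker_id
order by percentage desc, correct desc;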
I have a database including certain strings, such as '{TICKER|IBM}' to which I will refer as ticker-strings. My target is to count the amount of ticker-strings per day for multiple strings.
My database table 'tweets' includes the columns 'tweet_id', 'created_at' (dd/mm/yyyy hh/mm/ss) and 'processed_text'. The ticker-strings, such as '{TICKER|IBM}', are within the 'processed_text' column.
At this moment, I have a working SQL query for counting one ticker-string (thanks to the help of other Stackoverflow-ers). What I would like to have is a SQL query in which I can count multiple strings (next to '{TICKER|IBM}' also '{TICKER|GOOG}' and '{TICKER|BAC}' for instance).
The working SQL query for counting one ticker-string is as follows:
SELECT d.date, IFNULL(t.count, 0) AS tweet_count
FROM all_dates AS d
LEFT JOIN (
SELECT COUNT(DISTINCT tweet_id) AS count, DATE(created_at) AS date
FROM tweets
WHERE processed_text LIKE '%{TICKER|IBM}%'
GROUP BY date) AS t
ON d.date = t.date
The eventual output should thus give a column with the date, a column with {TICKER|IBM}, a column with {TICKER|GOOG} and one with {TICKER|BAC}.
I was wondering whether this is possible and whether you have a solution for this? I have more than 100 different ticker-strings. Of course, doing them one-by-one is an option, but it is a very time-consuming one.
If I understand correctly, you can do this with conditional aggregation:
SELECT d.date, coalesce(IBM, 0) as IBM, coalesce(GOOG, 0) as GOOG, coalesce(BAC, 0) AS BAC
FROM all_dates d LEFT JOIN
(SELECT DATE(created_at) AS date,
COUNT(DISTINCT CASE WHEN processed_text LIKE '%{TICKER|IBM}%' then tweet_id
END) as IBM,
COUNT(DISTINCT CASE WHEN processed_text LIKE '%{TICKER|GOOG}%' then tweet_id
END) as GOOG,
COUNT(DISTINCT CASE WHEN processed_text LIKE '%{TICKER|BAC}%' then tweet_id
END) as BAC
FROM tweets
GROUP BY date
) t
ON d.date = t.date;
I'd return the specified resultset like this, adding expressions to the SELECT list for each "ticker" I want returned as a separate column:
SELECT d.date
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|IBM}%' ),0) AS `cnt_ibm`
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|GOOG}%'),0) AS `cnt_goog`
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|BAC}%' ),0) AS `cnt_bac`
, IFNULL(SUM(t.processed_text LIKE '%{TICKER|...}%' ),0) AS `cnt_...`
FROM all_dates d
LEFT
JOIN tweets t
ON t.created_at >= d.date
AND t.created_at < d.date + INTERVAL 1 DAY
GROUP BY d.date
NOTES: The expressions within the SUM aggregates above are evaluated as booleans, so they return 1 (if true), 0 (if false), or NULL. I'd avoid wrapping the created_at column in a DATE() function, and use a range scan instead, especially if a predicate is added (WHERE clause) that restricts the values of `date` being returned from `all_dates`.
As an alternative, expressions like this will return an equivalent result:
, SUM(IF(t.processed_text LIKE '%{TICKER|IBM}%' ,1,0)) AS `cnt_ibm`