Select timeline in MySQL database - mysql

This is my MySQL table.
+-----------+-------------+---------------------+
| element | status | hour |
+-----------+-------------+---------------------+
| 18 | Available | 2020-01-19 14:23:49 |
| 18 | Unavailable | 2019-09-13 18:19:47 |
| 18 | Available | 2019-09-13 18:18:49 |
| 18 | Unavailable | 2019-09-09 08:22:45 |
| 19 | Available | 2019-09-07 19:13:56 |
| 19 | Available | 2019-09-03 18:13:49 |
+-----------+-------------+---------------------+
Normally, for each element, the status timeline in this MySQL table alternates Unavailable / Available.
But for element number 19 the timeline contains two consecutive Available rows:
+----------+-------------+---------------------+
| element | status | hour |
+----------+-------------+---------------------+
| 19 | Available | 2019-09-07 19:13:56 |
| 19 | Available | 2019-09-03 18:13:49 |
+----------+-------------+---------------------+
This is an anomaly.
I need to detect these cases, that is, all the rows for each element where the status timeline shows Available / Available in a row.
How can I resolve this?
Can you please help me?
Edit 01:
+-----------+-------------+---------------------+---------+
| element | status | hour | ID |
+-----------+-------------+---------------------+---------+
| 18 | Available | 2020-01-19 14:23:49 | 6 |
| 18 | Unavailable | 2019-09-13 18:19:47 | 5 |
| 18 | Available | 2019-09-13 18:18:49 | 4 |
| 18 | Unavailable | 2019-09-09 08:22:45 | 3 |
| 19 | Available | 2019-09-07 19:13:56 | 2 |
| 19 | Available | 2019-09-03 18:13:49 | 1 |
+-----------+-------------+---------------------+---------+

Based on your originally published sample data, you can use sub-queries to look at the next and previous status for each row and then test:
select s.element, s.hour, s.`status`
from
(
    select t.*,
           (select concat(t1.status, ',', t1.hour)
            from t t1
            where t1.element = t.element and t1.hour < t.hour
            order by t1.element, t1.hour desc
            limit 1) prev,
           (select concat(t1.status, ',', t1.hour)
            from t t1
            where t1.element = t.element and t1.hour > t.hour
            order by t1.element, t1.hour
            limit 1) nxt
    from t
) s
where s.status = substring_index(s.nxt, ',', 1)
   or s.status = substring_index(s.prev, ',', 1);
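On MySQL 8.0 or later the same check can also be written with window functions instead of correlated sub-queries. A sketch only, assuming the table is named t as in the answer above:

-- Flag rows whose status repeats the previous or next status for the same element
SELECT element, hour, status
FROM (
    SELECT t.*,
           LAG(status)  OVER (PARTITION BY element ORDER BY hour) AS prev_status,
           LEAD(status) OVER (PARTITION BY element ORDER BY hour) AS next_status
    FROM t
) s
WHERE status = prev_status
   OR status = next_status;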

Related

Calculate read pages through select query in reading progress table

I have a small program that I use to track my progress in reading books, similar to Goodreads, so I can see how much I read per day.
I created two tables for that: tbl_materials(material_id int, name varchar) and tbl_progress(date_of_update timestamp, material_id int foreign key, read_pages int, skipped bit).
Whenever I read some pages, I insert the current page I've reached into tbl_progress.
I may read the same book multiple times in a day. If I skip some pages, I also insert a row into tbl_progress and set the skipped bit to true. The problem is that I can't query tbl_progress to find out how much I read per day.
What I have tried is to find the last inserted progress row for every single material on every single day,
so for example:
+-------------+------------+---------+---------------------+
| material_id | read_pages | skipped | last_update |
+-------------+------------+---------+---------------------+
| 4 | 1 | | 2017-09-22 00:56:02 |
| 3 | 1 | | 2017-09-22 00:56:14 |
| 12 | 1 | | 2017-09-24 20:13:01 |
| 4 | 30 | | 2017-09-25 01:56:38 |
| 4 | 34 | | 2017-09-25 02:19:47 |
| 54 | 1 | | 2017-09-29 04:22:11 |
| 59 | 9 | | 2017-10-14 15:25:14 |
| 4 | 68 | T | 2017-10-18 02:33:04 |
| 4 | 72 | | 2017-10-18 03:50:51 |
| 2 | 3 | | 2017-10-18 15:02:46 |
| 2 | 5 | | 2017-10-18 15:10:46 |
| 4 | 82 | | 2017-10-18 16:18:03 |
| 4 | 84 | | 2017-10-20 18:06:40 |
| 4 | 87 | | 2017-10-20 19:11:07 |
| 4 | 103 | T | 2017-10-21 19:50:29 |
| 4 | 104 | | 2017-10-22 19:56:14 |
| 4 | 108 | | 2017-10-22 20:08:08 |
| 2 | 6 | | 2017-10-23 00:35:45 |
| 4 | 111 | | 2017-10-23 02:29:32 |
| 4 | 115 | | 2017-10-23 03:06:15 |
+-------------+------------+---------+---------------------+
I calculate total pages read per day as: the last read page on this day minus the last read page on the most recent date prior to this day. This works, but the problem is that I can't exclude skipped pages.
On 2017-09-22 I read 1 page (material_id 4) and then another 1 page (material_id 3), so the total read on this day = 2.
On 2017-09-25 the last update for material_id 4 is 34 pages, which means I read 34 - 1 = 33 pages (the last update on this day, 34, minus the last update prior to this date, 1).
Up to this point everything works well, but I couldn't handle skipped pages. For example:
on 2017-10-18 the last recorded page for material_id = 4 was 34 (from 2017-09-25), then I skipped 34 pages so the current page became 68, then I read 4 pages (2017-10-18 03:50:51) and another 10 pages (2017-10-18 16:18:03), so the total for material_id = 4 on that day should be 14.
I created a view to select the most recent last_update for every book on every day:
create view v_mostRecentPerDay as
select material_id id,
(select title from materials where materials.material_id = id) title,
completed_pieces,
last_update,
date(last_update) dl,
skipped
from progresses
where last_update = (
select max(last_update)
from progresses s2
where s2.material_id = progresses.material_id
and date(s2.last_update) = date(progresses.last_update)
and s2.skipped = false
);
So if there are many updates for a single book in one day, this view retrieves the last one (the one with the max last_update), which carries the largest number of read pages, and it does this for every single book.
And another view to get the total pages read every day:
create view v_totalReadInDay as
select dl, sum(diff) totalReadsInThisDay
from (
select dl,
completed_pieces - ifnull((select completed_pieces
from progresses
where material_id = id
and date(progresses.last_update) < dl
ORDER BY last_update desc
limit 1
), 0) diff
from v_mostRecentPerDay
where skipped = false
) omda
group by dl;
But the problem is that this last view also counts skipped pages in its totals.
expected result:
+------------+------------------+
| day | total_read_pages |
+------------+------------------+
| 2017-09-22 | 2 |
+------------+------------------+
| 2017-09-24 | 1 |
+------------+------------------+
| 2017-09-25 | 33 |
+------------+------------------+
| 2017-09-29 | 1 |
+------------+------------------+
| 2017-10-14 | 9 |
+------------+------------------+
| 2017-10-18 | 19 |
+------------+------------------+
| 2017-10-20 | 5 |
+------------+------------------+
| 2017-10-21 | 0 |
+------------+------------------+
| 2017-10-22 | 21 |
+------------+------------------+
| 2017-10-23 | 8 |
+------------+------------------+
mysql> SELECT VERSION();
+-----------------------------+
| VERSION() |
+-----------------------------+
| 5.7.26-0ubuntu0.16.04.1-log |
+-----------------------------+
This seems like a super-convoluted way to evaluate pages read per day. Have you considered denormalising your data slightly and storing both the current page and the number of pages read?
The current page may make more sense stored in the material table, or in a separate bookmark table e.g.
bookmark - id, material_id, page_number
reading - id, bookmark_id, pages_complete, was_skipped, ended_at
When a reading (or skipping!) session is complete, the pages_complete can easily be calculated from the current page minus the old current page in the bookmark, and this can be done in your application logic
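A minimal DDL sketch of that layout (exact column types, keys, and defaults are my assumptions, not part of the original suggestion):

-- Hypothetical tables for the denormalised layout described above
CREATE TABLE bookmark (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    material_id INT NOT NULL,            -- references tbl_materials.material_id
    page_number INT NOT NULL             -- current page in this material
);

CREATE TABLE reading (
    id             INT AUTO_INCREMENT PRIMARY KEY,
    bookmark_id    INT NOT NULL,
    pages_complete INT NOT NULL,         -- pages finished (or skipped) in this session
    was_skipped    BOOLEAN NOT NULL DEFAULT FALSE,
    ended_at       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (bookmark_id) REFERENCES bookmark (id)
);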
Your pages-per-day query then simply becomes:
SELECT SUM(pages_complete) pages_read
FROM reading
WHERE ended_at >= :day
AND ended_at < :day + INTERVAL 1 DAY
AND was_skipped IS NOT TRUE
You can create a view that uses the same columns as the progresses table plus another derived column, based on the same idea that @Arth suggested (a pages_completed column).
This derived column holds the current completed_pieces minus the completed_pieces of the update immediately prior to it, i.e. the difference.
So, for example, if your progress table looks like this:
+-------------+------------+---------+---------------------+
| material_id | read_pages | skipped | last_update |
+-------------+------------+---------+---------------------+
| 4 | 68 | T | 2017-10-18 02:33:04 |
| 4 | 72 | | 2017-10-18 03:50:51 |
| 2 | 3 | | 2017-10-18 15:02:46 |
| 2 | 5 | | 2017-10-18 15:10:46 |
| 4 | 82 | | 2017-10-18 16:18:03 |
+-------------+------------+---------+---------------------+
We will add another derived column called diff,
where diff = read_pages at 2017-10-18 02:33:04 minus the read_pages directly prior to 2017-10-18 02:33:04:
+-------------+------------+---------+---------------------+------------------+
| material_id | read_pages | skipped | last_update         | Derived_col_diff |
+-------------+------------+---------+---------------------+------------------+
| 4           | 68         | T       | 2017-10-18T02:33:04 | 68 - null = 0    |
+-------------+------------+---------+---------------------+------------------+
| 4           | 72         |         | 2017-10-18T03:50:51 | 72 - 68 = 4      |
+-------------+------------+---------+---------------------+------------------+
| 2           | 3          |         | 2017-10-18T15:02:46 | 3 - null = 0     |
+-------------+------------+---------+---------------------+------------------+
| 2           | 5          |         | 2017-10-18T15:10:46 | 5 - 3 = 2        |
+-------------+------------+---------+---------------------+------------------+
| 4           | 82         |         | 2017-10-18T16:18:03 | 82 - 72 = 10     |
+-------------+------------+---------+---------------------+------------------+
Note: 68 - null is actually NULL, but I wrote 0 for clarity.
The derived column is the difference between this row's read_pages and the read_pages directly before it.
Here is a view:
create view v_progesses_with_read_pages as
select s0.*,
       completed_pieces - ifnull((select completed_pieces
                                  from progresses s1
                                  where s1.material_id = s0.material_id
                                    and s1.last_update = (select max(last_update)
                                                          from progresses s2
                                                          where s2.material_id = s1.material_id
                                                            and s2.last_update < s0.last_update)
                                 ), 0) read_pages
from progresses s0;
Then you can select the sum of this derived column per day:
select date(last_update) dl, sum(read_pages) totalReadsInThisDay from v_progesses_with_read_pages where skipped = false group by dl;
Which will result in something like this:
+------------+------------------------------+
| dl         | totalReadsInThisDay          |
+------------+------------------------------+
| 2017-10-18 | 16                           |
+------------+------------------------------+
| 2017-10-19 | 20 (just for clarification)  |
+------------+------------------------------+
Note that the last row is made up, just for illustration.

Return multi columns result from single table with zero values

I have a single table like :
mysql> select RefID,State,StartTime,EndTime from execReports limit 5;
+--------------------------------------+-----------+---------------------+---------------------+
| RefID | State | StartTime | EndTime |
+--------------------------------------+-----------+---------------------+---------------------+
| 00019a52-8480-4431-9ad2-3767c3933627 | Completed | 2016-04-18 13:45:00 | 2016-04-18 13:45:01 |
| 00038a8a-995e-4cb2-a335-cb05d5b3e92d | Aborted | 2016-05-03 04:00:00 | 2016-05-03 04:00:02 |
| 001013f8-0b86-456f-bd59-a7ef066e565f | Completed | 2016-04-14 03:30:00 | 2016-04-14 03:30:11 |
| 001f8d23-3022-4271-bba0-200494de678a | Failed | 2016-04-30 05:00:00 | 2016-04-30 05:00:02 |
| 0027ba42-1c37-4e50-a7d6-a4e24056e080 | Completed | 2016-04-18 03:45:00 | 2016-04-18 03:45:02 |
+--------------------------------------+-----------+---------------------+---------------------+
I can extract the count of executions for each state with:
mysql> select distinct State,count(StartTime) as nbExec from execReports group by State;
+-----------+--------+
| State | nbExec |
+-----------+--------+
| Aborted | 3 |
| Completed | 14148 |
| Failed | 49 |
+-----------+--------+
3 rows in set (0.02 sec)
I can extract the count of executions for each week with:
mysql> select distinct extract(week from StartTime) as Week, count(StartTime) as nbExec from execReports group by Week;
+------+--------+
| Week | nbExec |
+------+--------+
| 14 | 1317 |
| 15 | 3051 |
| 16 | 3066 |
| 17 | 3059 |
| 18 | 3059 |
| 19 | 652 |
+------+--------+
6 rows in set (0.01 sec)
But I would like to extract a cross-tabulated table like this:
+------+---------+-----------+--------+---------+---------+
| Week | nbExec | Completed | Failed | Running | Aborted |
+------+---------+-----------+--------+---------+---------+
| 14 | 1317 | 1312 | 3 | 1 | 1 |
| 15 | 3051 | 3050 | 1 | 0 | 0 |
| 16 | 3066 | 3060 | 3 | 2 | 1 |
| 17 | 3059 | 3058 | 0 | 1 | 0 |
| 18 | 3059 | 3057 | 1 | 0 | 1 |
| 19 | 652 | 652 | 0 | 0 | 0 |
+------+---------+-----------+--------+---------+---------+
I've been stuck on this for a few days. Any help appreciated.
Best regards
select extract(week from StartTime) as Week, count(StartTime) as nbExec,
       sum(if(state="Completed",1,0)) Completed,
       sum(if(state="Failed",1,0)) Failed,
       sum(if(state="Running",1,0)) Running,
       sum(if(state="Aborted",1,0)) Aborted
from execReports
group by Week;
demo
You can also join the table to itself multiple times for this. If you need a dynamic row-to-column pivot, check this: MySQL pivot row into dynamic number of columns
SELECT
    extract(week from a.StartTime) as Week,
    count(a.StartTime) as nbExec,
    count(b1.StartTime) as Completed,
    count(b2.StartTime) as Failed,
    count(b3.StartTime) as Running,
    count(b4.StartTime) as Aborted
FROM execReports a
LEFT JOIN execReports b1 ON a.refID = b1.refID and b1.state = 'Completed'
LEFT JOIN execReports b2 ON a.refID = b2.refID and b2.state = 'Failed'
LEFT JOIN execReports b3 ON a.refID = b3.refID and b3.state = 'Running'
LEFT JOIN execReports b4 ON a.refID = b4.refID and b4.state = 'Aborted'
GROUP BY 1

Conditionally move MySQL data between rows in same table

Working in Redmine, I need to copy (not move) data from certain rows to other rows, based on matching project ID numbers with time entries.
I have included a diagram of the table "custom_values" and my understanding of the design below (current data):
+----+-----------------+---------------+-----------------+-------+
| id | customized_type | customized_id | custom_field_id | value |
+----+-----------------+---------------+-----------------+-------+
| 1 | Project | 1 | 1 | 01 |
| 2 | TimeEntry | 1 | 4 | 01 |
| 3 | Project | 2 | 1 | 02 |
| 4 | TimeEntry | 2 | 4 | 02 |
| 5 | Project | 3 | 1 | 03 |
| 6 | TimeEntry | 3 | 4 | |
| 7 | Project | 4 | 1 | 04 |
| 8 | TimeEntry | 4 | 4 | |
+----+-----------------+---------------+-----------------+-------+
At the risk of oversimplifying,
"id" = The primary key for each entry in custom_values
"customized_type" = Specifies which db table the row is referring to.
"customized_id" = Specifies the primary key for the db table entry previously specified in "customized_type".
"custom_field_id" = Specifies which custom field the row is referring to. Redmine admins can arbitrarily add and remove custom fields.
"value" = The data contained within the custom field specified by
"custom_field_id"
In my situation, the values listed in "value" represent unique customer ID numbers. The customer ID numbers did not always get entered with each time entry. I need to copy the customer numbers from the project rows to the matching time entry rows. Each time entry has a project_id field.
So far, here is my mangled SQL query:
SELECT
custom_field_id,
custom_values.value AS 'CUSTOMER_NUMBER',
custom_values.customized_id AS 'PROJECT_ID_NUMBER',
custom_values.customized_type,
time_entries.comments AS 'TIME_ENTRY_COMMENTS'
FROM
redmine_tweaking.custom_values
LEFT JOIN
redmine_tweaking.time_entries ON custom_values.customized_id = time_entries.project_id
WHERE
custom_values.customized_type='Project' AND custom_values.custom_field_id=1;
The query I have so far allows me to see that I have the time entries connected properly to their matching projects, but that is all I have been able to figure out. So in other words, this SQL statement does not exactly solve my problem.
Plus, even if it did work, I think the way I laid it out looks like 200 lbs of bird poop. There must be a better/more optimized way to do this.
Any help would be greatly appreciated. I am relatively new and I have been pouring hours into solving this problem.
UPDATE:
Ok, here is the time_entries table:
+----+------------+---------+----------+-------+----------+-------------+------------+-------+--------+-------+---------------------+---------------------+
| id | project_id | user_id | issue_id | hours | comments | activity_id | spent_on | tyear | tmonth | tweek | created_on | updated_on |
+----+------------+---------+----------+-------+----------+-------------+------------+-------+--------+-------+---------------------+---------------------+
| 1 | 1 | 1 | 1 | .25 | test | 9 | 2015-11-04 | 2015 | 11 | 45 | 2015-11-04 08:18:12 | 2015-11-04 10:18:12 |
| 2 | 2 | 1 | 1 | .25 | test2 | 9 | 2015-11-04 | 2015 | 11 | 45 | 2015-11-04 09:18:12 | 2015-11-04 12:18:12 |
+----+------------+---------+----------+-------+----------+-------------+------------+-------+--------+-------+---------------------+---------------------+
Compared with the original table I first posted, the expected output would look like this:
+----+-----------------+---------------+-----------------+-------+
| id | customized_type | customized_id | custom_field_id | value |
+----+-----------------+---------------+-----------------+-------+
| 1 | Project | 1 | 1 | 01 |
| 2 | TimeEntry | 1 | 4 | 01 |
| 3 | Project | 2 | 1 | 02 |
| 4 | TimeEntry | 2 | 4 | 02 |
| 5 | Project | 3 | 1 | 03 |
| 6 | TimeEntry | 3 | 4 | 03 |
| 7 | Project | 4 | 1 | 04 |
| 8 | TimeEntry | 4 | 4 | 04 |
+----+-----------------+---------------+-----------------+-------+
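No answer is shown for this question, but one possible approach is a multi-table UPDATE. This is a sketch only, assuming Redmine's usual layout where customized_id points at time_entries.id for TimeEntry rows and at the project ID for Project rows, and that custom fields 1 and 4 hold the customer number as in the sample:

-- Sketch: copy the project's customer number onto matching, empty TimeEntry rows
UPDATE custom_values AS cv_te
JOIN time_entries AS te
    ON cv_te.customized_type = 'TimeEntry'
   AND cv_te.custom_field_id = 4
   AND cv_te.customized_id   = te.id
JOIN custom_values AS cv_p
    ON cv_p.customized_type = 'Project'
   AND cv_p.custom_field_id = 1
   AND cv_p.customized_id   = te.project_id
SET cv_te.value = cv_p.value
WHERE cv_te.value IS NULL OR cv_te.value = '';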

How to get sum for different entry in mysql

I have a table in MySQL like this:
+--------------+---------+---------------+----------+---------+
| service_code | charges | caller_number | duration | minutes |
+--------------+---------+---------------+----------+---------+
| 10 | 15 | 8281490235 | 00:00:00 | 1.0000 |
| 11 | 12 | 9961621709 | 00:00:00 | 0.0000 |
| 10 | 15 | 8281490235 | 01:00:44 | 60.7333 |
| 11 | 2 | 9744944316 | 01:00:44 | 60.7333 |
+--------------+---------+---------------+----------+---------+
From this table I want to get charges * minutes for each separate caller_number.
I have tried this:
SELECT sum(charges*minutes) as cost from t8_m4_bill groupby caller_number
but I am not getting the expected output. Please help.
SELECT caller_number,sum(charges*minutes) as cost
from t8_m4_bill
group by caller_number
order by caller_number

SQL statement to combine both fields in one table that match the id element in another table

If you are familiar with Drupal: this uses 2 taxonomy terms to describe each content_type_event node. If you don't know Drupal, everything you need is still included below.
I have built a SQL Fiddle that is easier to follow and test with than my drawn-out tables. The fiddle has the actual database content, which is a little different from the sample info shown below, but I have tried to make them as similar as possible.
I have three tables:
content_type_event:
____________________________________
| nid | field_eventstartdate_value |
------------------------------------
| 17 | 1984581600 |
| 18 | 1984581600 |
| 19 | 1984581600 |
| 20 | 1984581600 |
| 22 | 1984581600 |
====================================
term_node:
_____________
| nid | tid |
-------------
| 17 | 6 |
| 17 | 15 |
| 18 | 7 |
| 18 | 17 |
| 19 | 6 |
| 19 | 15 |
| 20 | 16 |
| 20 | 9 |
| 22 | 10 |
| 22 | 15 |
=============
term_data:
__________________________
| tid | vid | name |
--------------------------
| 6 | 4 | Location 1 |
| 15 | 3 | Event 1 |
| 7 | 4 | Location 2 |
| 9 | 4 | Location 3 |
| 10 | 4 | Location 4 |
| 16 | 3 | Event 2 |
| 17 | 3 | Event 3 |
==========================
The content_type_event table has information about the event but for the location and event type I have to dig deeper.
The term_node table has all the tids (term IDs) that go with each nid (node ID). Each nid should have 2 tids for our events: one tid gives us the location, the other gives us the event_type.
The term_data table gives the name for each tid, along with a vid that tells whether the name is an event_type or a location.
My goal is to get the nid, location, event_type, and field_eventstartdate_value for all events. All of the following start in the future, so it should look like:
______________________________________________________________
| nid | location | event_type | field_eventstartdate_value |
--------------------------------------------------------------
| 17 | Location 1 | Event 1 | 1984581600 |
| 18 | Location 2 | Event 3 | 1984581600 |
| 19 | Location 1 | Event 1 | 1984581600 |
| 20 | Location 3 | Event 2 | 1984581600 |
| 22 | Location 4 | Event 1 | 1984581600 |
==============================================================
I am not so good with SQL. This is what I have so far:
SELECT event.nid, event.field_eventstartdate_value, location.location, event_type.event_type
FROM content_type_event AS event
JOIN term_node ON term_node.nid = event.nid
LEFT JOIN (
    SELECT tid, name AS location FROM term_data WHERE vid = 4
) AS location ON location.tid = term_node.tid
LEFT JOIN (
    SELECT tid, name AS event_type FROM term_data WHERE vid = 3
) AS event_type ON event_type.tid = term_node.tid;
But this gives me:
______________________________________________________________
| nid | location | event_type | field_eventstartdate_value |
--------------------------------------------------------------
| 17 | NULL | Event 1 | 1984581600 |
| 17 | Location 1 | NULL | 1984581600 |
| 18 | NULL | Event 3 | 1984581600 |
| 18 | Location 2 | NULL | 1984581600 |
| 19 | NULL | Event 1 | 1984581600 |
| 19 | Location 1 | NULL | 1984581600 |
| 20 | NULL | Event 2 | 1984581600 |
| 20 | Location 3 | NULL | 1984581600 |
| 22 | NULL | Event 1 | 1984581600 |
| 22 | Location 4 | NULL | 1984581600 |
==============================================================
I cannot seem to group these results together so that I get just one full row per event instead of 2 rows containing partial info.
If you want them, you can grab the table build statements from the fiddle: SQL Fiddle.
I think you need this:
SELECT
event.nid,
location.location,
event_type.event_type,
event.field_eventstartdate_value
FROM
content_type_event AS event
LEFT OUTER JOIN (
SELECT
nid,
name AS location
FROM
term_data JOIN
term_node ON term_data.tid=term_node.tid
WHERE term_data.vid = 4) AS location ON location.nid = event.nid
LEFT OUTER JOIN (
SELECT
nid,
name AS event_type
FROM
term_data JOIN
term_node ON term_data.tid = term_node.tid
WHERE term_data.vid = 3) AS event_type ON event_type.nid = event.nid;
http://sqlfiddle.com/#!2/c2459/17/0
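For comparison, a conditional-aggregation version (a sketch, not tested against the fiddle) that collapses the two term rows per node without the derived-table joins:

SELECT
    event.nid,
    MAX(CASE WHEN term_data.vid = 4 THEN term_data.name END) AS location,
    MAX(CASE WHEN term_data.vid = 3 THEN term_data.name END) AS event_type,
    event.field_eventstartdate_value
FROM content_type_event AS event
JOIN term_node ON term_node.nid = event.nid
JOIN term_data ON term_data.tid = term_node.tid
GROUP BY event.nid, event.field_eventstartdate_value;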