I'm creating a simple project where I track employee attendance and related things.
When an employee is late to work, I have to enter the delay time. For example,
if an employee came to work 30 minutes late, I will enter 30:00 or 00:30:00 in the field. At the end of each day, week, or month, I want to total those times, for example 30 min + 1 h + 25 min, and at the end of each month the report should show me both the sum and the average of that time.
For the following fields (Timing, Logged in Duration, Total Talking Time, Work Time, Not ready Time) I want to calculate the hours and minutes.
What I'm getting in the report is the SUM of the values I entered, displayed as a time of day with AM/PM, which is not what I'm looking for.
This is the MySQL query:
SELECT
ID,
`Date`,
`Shift Time`,
`In charge`,
`Agent Name`,
Attendance,
Timing,
`Logged in Duration`,
`Total Talking Time`,
`Work Time`,
`Not ready Time`,
`Calls Handled`,
RNA,
`Shift Calls Presented`,
`Shift Calls Handled`,
`HD Calls Abandoned`,
`Upload Your Report`
FROM `shift report`
I thought I would post an answer from the perspective of the time-value-summing issues developers and users have faced so far. It's not just about formatting.
You may or may not have noticed that there is a possibility the sum of your total time can be miscalculated by the engine. Furthermore, the MySQL engine can return NULL even though you have values.
When you add/aggregate TIME values directly, they are treated as numbers and the result is a number. Suppose you try to add up time values like the following:
e.g. 01:38:50 + 03:40:25 --> should ideally result in 05:19:15
If you do SUM() over the two values above --> you get a number (the times are treated as the numerals 13850 and 34025, giving 47875)
If you use CAST(SUM(...) AS TIME) --> you get NULL, because 47875 would read as 04:78:75, which is not a valid time
There are two other possibilities shown in my code snippet.
Sample code for reference; it shows the different results we just discussed. For now, the three-function conversion below is the reliable approach:
TIME_FORMAT(SEC_TO_TIME(SUM(TIME_TO_SEC(logged_duration))), '%H:%i:%s') AS total_log_duration
(Note '%H', not '%h': '%h' is the 12-hour specifier and would misreport durations over 12 hours.)
The bug reported to MySQL has not been fixed yet.
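The same seconds-based trick can be sketched outside SQL. Here is a minimal Python illustration (the helper names mirror the MySQL functions; they are not part of any library) of why converting to seconds before summing is safe:

```python
def time_to_sec(t):
    """'HH:MM:SS' -> total seconds, like MySQL's TIME_TO_SEC."""
    h, m, s = (int(p) for p in t.split(":"))
    return h * 3600 + m * 60 + s

def sec_to_time(total):
    """Seconds -> 'HH:MM:SS', like SEC_TO_TIME; hours may exceed 23."""
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

durations = ["01:38:50", "03:40:25"]

# Naive digit-wise addition -- effectively what SUM() over raw TIME values does:
naive = sum(int(t.replace(":", "")) for t in durations)  # 13850 + 34025 = 47875

# Correct: convert to seconds, sum, convert back.
correct = sec_to_time(sum(time_to_sec(t) for t in durations))
print(naive, correct)  # 47875 05:19:15
```

This is exactly what the TIME_TO_SEC / SEC_TO_TIME / TIME_FORMAT chain does in the query above.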
Related
I am trying to determine the average time of the day that a particular event takes place. For most events, I am able to do this by using the following query:
SEC_TO_TIME(ROUND(AVG(TIME_TO_SEC(TIME(MY_EVENT_TIME))), 0))
Where this logic breaks is when the time range spans midnight.
Given the following dataset,
21:37:37
22:00:29
23:01:13
23:09:41
23:56:37
00:02:43
00:15:31
01:19:52
02:55:59
04:27:56
I would expect the average time to be somewhere around 00:00:00.
Averaging the UNIX_TIMESTAMPs doesn't help, because that takes the dates into account.
Is there a purely MySQL method to solve this?
The problem you have is that without a date, the events are all treated as happening on the same day. Hence you end up with the average being midday and not midnight.
Adding a date to your data gets the correct result. For example
create table day_event_log
(
an_event timestamp not null
);
insert day_event_log(an_event)
values ('2023-01-01 21:37:37'),
('2023-01-01 22:00:29'),
('2023-01-01 23:01:13'),
('2023-01-01 23:09:41'),
('2023-01-01 23:56:37'),
('2023-01-02 00:02:43'),
('2023-01-02 00:15:31'),
('2023-01-02 01:19:52'),
('2023-01-02 02:55:59'),
('2023-01-02 04:27:56');
select time(FROM_UNIXTIME(AVG(UNIX_TIMESTAMP(an_event)))) avg_event
from day_event_log;
avg_event = 00:16:45.8000
Although this explains why you are seeing a midday result with the data set you have, as mentioned in the comments it does not solve your problem. Time is continuous, so you have to decide how to partition it and where the "centre" of that time period lies in order to calculate an average.
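One common way to choose that centre is a circular mean: map each time of day onto a 24-hour circle, average the unit vectors, and map the resulting angle back to a time. This is not a pure-MySQL answer, just a Python sketch of the idea:

```python
import math

def avg_time_of_day(times):
    """Circular mean of 'HH:MM:SS' times of day on a 24-hour clock."""
    day = 24 * 3600
    xs = ys = 0.0
    for t in times:
        h, m, s = (int(p) for p in t.split(":"))
        angle = 2 * math.pi * (h * 3600 + m * 60 + s) / day
        xs += math.cos(angle)
        ys += math.sin(angle)
    mean_angle = math.atan2(ys, xs) % (2 * math.pi)
    sec = round(mean_angle / (2 * math.pi) * day) % day
    h, rem = divmod(sec, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

events = ["21:37:37", "22:00:29", "23:01:13", "23:09:41", "23:56:37",
          "00:02:43", "00:15:31", "01:19:52", "02:55:59", "04:27:56"]
print(avg_time_of_day(events))  # a time a few minutes after midnight
```

For the dataset above this lands within minutes of midnight, close to the date-anchored average computed earlier, without having to attach dates to the data.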
I know there are lots of possible duplicates similar to this question; I tested all I could find, but I still have an issue understanding the logic of the query...
I have a bar chart that I want to connect to a query showing how many "Seatbelt", "Speeding", etc. violations happened per day.
The query I have now is:
select violationType as 'Violations', count( DISTINCT violationType) as 'Total', day(violationDateTime) as 'Day' from traffic_violations group by day(violationDateTime);
The result:
Do you think the result is correct? What I see is that "Wrong Parking" is repeated for each day. Yes, I know it shows how many wrong-parking violations happened on days 1, 2 and 3, but what about "Speeding"? Didn't it happen on day 1, 2 or 3?
Basically I want to create a daily chart in the dashboard: it starts at 0 at the beginning of the day, counts up the traffic violations until the end of the day, then resets the next day.
I also tried this:
select count(id) as 'Total', violationType as 'Violation', violationDateTime as 'Date' from traffic_violations group by violationType, day(violationDateTime);
results in:
Which I think is more correct than the above?
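The reason "Speeding" disappears from some days is that GROUP BY only produces rows for key combinations that actually occur; a chart that starts at 0 each day needs the missing (type, day) pairs filled in with zeros. A small Python sketch of that tally (the sample rows are made up):

```python
from collections import Counter
from itertools import product

# Hypothetical sample rows: (violation type, day of month).
rows = [("Wrong Parking", 1), ("Wrong Parking", 1), ("Speeding", 2),
        ("Wrong Parking", 2), ("Seatbelt", 3)]

# Counts per (type, day) -- what GROUP BY violationType, DAY(violationDateTime) gives.
counts = Counter(rows)

# Zero-fill every (type, day) pair so each type appears on every day.
types = sorted({t for t, _ in rows})
days = sorted({d for _, d in rows})
table = {(t, d): counts[(t, d)] for t, d in product(types, days)}
print(table[("Speeding", 1)])  # 0 -- Speeding simply has no rows on day 1
print(table[("Wrong Parking", 1)])  # 2
```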
A table 'Log' has the below columns:
SystemName
User
Datetime - it's a timestamp
Status - has the values Start or Stop.
I need to write a query which will give me :
Number of hours spent per user per day on system X.
Please see example data below:
X, Amit, 05/01/2019 08:45:00, Start
X, Amit, 05/03/2019 13:25:00, Stop
X, Dave, 05/01/2019 09:10:35, Start
X, Dave, 05/01/2019 17:35:42, Stop
Output:
Amit,05/01/2019, 15h
Amit,05/02/2019, 24h
Amit,05/03/2019, 9h
Dave,05/01/2019, 8h
My approach till now :
I was thinking I could use LEAD or LAG to get the consecutive times in the same row, but in the case of user Amit the interval spans multiple days. Also, a user could start and stop multiple times on the same day. And even if I do that, how would I generate hours for the dates in the middle of the range? Can you please help me?
This should work. Note that you will only get hours spent when both a Start and a Stop status exist for a user within a single day.
SELECT SystemName, [User],
       CONVERT(varchar, CAST([Datetime] AS DATETIME), 1) AS [Datetime],
       DATEDIFF
       (
           HH,
           MAX(CASE WHEN Status = 'Start' THEN [Datetime] ELSE NULL END),
           MAX(CASE WHEN Status = 'Stop' THEN [Datetime] ELSE NULL END)
       ) AS HourSpent
FROM your_table A
GROUP BY SystemName, [User],
         CONVERT(varchar, CAST([Datetime] AS DATETIME), 1)
Since the output consists of one row per user + day, you would need to JOIN the data to a calendar table of dates.
You would need a way to extract the Start and Stop timestamp pairs for a given user, join them to the calendar table, then count the number of hours on each day that fall between the start and stop times. (The hour count could use a user-defined function.)
That's pretty complex. Frankly, I would rather write a Python program to parse the data than do it via SQL. It would be very simple:
Read start line
Read end line
Loop through days, outputting hours per day (quite simple in Python)
Sometimes the best hammer is a spanner. (Translation: Sometimes a different tool is better.)
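A sketch of that Python approach, splitting each Start/Stop interval into per-day hours. This uses plain clock hours; the question's sample output shows 9h for Amit's last day, which suggests some additional business-hours rule that is not specified, so the last day differs here:

```python
from datetime import datetime, timedelta

def hours_per_day(start, stop):
    """Split the [start, stop) interval into clock hours per calendar day."""
    out = {}
    cur = start
    while cur < stop:
        # Midnight at the end of cur's calendar day.
        next_midnight = datetime.combine(cur.date(), datetime.min.time()) + timedelta(days=1)
        end = min(stop, next_midnight)
        out[cur.date()] = out.get(cur.date(), 0.0) + (end - cur).total_seconds() / 3600
        cur = end
    return out

# Amit's Start/Stop pair from the sample data above.
for day, hrs in sorted(hours_per_day(datetime(2019, 5, 1, 8, 45),
                                     datetime(2019, 5, 3, 13, 25)).items()):
    print(day, round(hrs, 2))
# 2019-05-01 15.25
# 2019-05-02 24.0
# 2019-05-03 13.42
```

Handling multiple Start/Stop pairs per user is then just a matter of calling this for each pair and merging the per-day totals.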
I am having trouble understanding the structure of the query I wish to perform. What I have is a large set of data in a table with multiple UnitIDs. The units have temperatures and timestamps of when the temperatures were recorded.
I want to be able to display the data where I can see the Average temperature of each unit separated in a weekly interval.
Apologies for my previous post; I'm still a novice at querying. But I will show you what I have done so far.
SELECT UnitID AS 'Truck ID',
AVG(Temp) As 'AVG Temp',
LogTime AS 'Event Time',
DAY(g.`LogTime`) as 'Day',
MONTH(g.`LogTime`) as 'Month',
COUNT(*) AS 'Count'
FROM `temperature` as g
WHERE DATE_SUB(g.`LogTime`,INTERVAL 1 WEEK)
AND Ana > 13 AND Ana < 16 AND NOT g.Temp = -100
GROUP BY 'truck id', YEAR(g.`LogTime`),MONTH(g.`LogTime`),WEEK(g.`LogTime`)
Order BY 'truck id', YEAR(g.`LogTime`),MONTH(g.`LogTime`),WEEK(g.`LogTime`)
;
(Sorry, I don't know how to display a table result at the moment)
This result gives me the weekly temperature averages of a truck, and shows me on which day of the month the temperature was recorded, as well as a count of temperatures per week, per truck.
The query I want to perform creates 5 columns: UnitID, Week1, Week2, Week3, Week4.
Within the 'Week' columns I want to display a weekly (every day of the week) temperature average for each truck, where each week starts one week after the previous one (i.e. Week2 displays the avg(Temp) for the week after Week1).
And this is where I am stuck on how to structure the query. I'm not sure if I need subqueries or a UNION clause. I have tried a couple of queries, but I deleted them because they did not work. I'm not sure if this query is too complex or even possible.
If anyone will be able to help I would greatly appreciate it. If there is any other info I can supply that will help, I will try to do so.
Hopefully this is solvable. :p
MySQL has a WEEK function that returns the week of the year as an integer (0-53). You can use that in your GROUP BY clause, and then use the AVG aggregate function to get the average temperature. Your query would look something like this:
SELECT unitID, WEEK(dateColumn) AS week, AVG(tempColumn) AS averageTemperature
FROM myTable
GROUP BY unitID, WEEK(dateColumn);
Here is a list of other helpful Date and Time Functions that may be useful for querying your database.
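To get the UnitID, Week1..Week4 layout asked for, the per-(unit, week) averages then need to be pivoted into columns. A plain-Python sketch of that pivot (the sample rows and week numbers are made up):

```python
from collections import defaultdict

# Hypothetical rows: (unit id, week number, temperature reading).
rows = [("T1", 1, 4.0), ("T1", 1, 6.0), ("T1", 2, 5.0),
        ("T2", 1, -18.0), ("T2", 3, -20.0)]

# Accumulate sum and count per (unit, week), like GROUP BY unitID, WEEK(...).
sums = defaultdict(lambda: [0.0, 0])
for unit, week, temp in rows:
    sums[(unit, week)][0] += temp
    sums[(unit, week)][1] += 1

# Pivot: one row per unit, one column per week (None where no readings exist).
weeks = [1, 2, 3, 4]
pivot = {}
for unit in sorted({u for u, _, _ in rows}):
    pivot[unit] = [sums[(unit, w)][0] / sums[(unit, w)][1] if sums[(unit, w)][1] else None
                   for w in weeks]
print(pivot)  # {'T1': [5.0, 5.0, None, None], 'T2': [-18.0, None, -20.0, None]}
```

In SQL the same pivot is usually done with conditional aggregation (one AVG(CASE WHEN WEEK(...) = n THEN temp END) per week column).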
I'm reasonably new to Access and having trouble solving what should be (I hope) a simple problem; I think I may be looking at it through Excel goggles.
I have a table named importedData into which I (not so surprisingly) import a log file each day. This log file is from a simple data-logging application on some mining equipment, and essentially it saves a timestamp and status for the point at which the current activity changes to a new activity.
A sample of the data looks like this:
This information is then filtered using a query to define the range I want to see information for, say from 29/11/2013 06:00:00 AM until 29/11/2013 06:00:00 PM
Now the object of this is to take a status entry's timestamp and get the time difference between it and the record on the subsequent row of the query results. As the equipment works for a 12hr shift, I should then be able to build a picture of how much time the equipment spent doing each activity during that shift.
In the above example, the equipment was in status "START_SHIFT" for 00:01:00, in status "DELAY_WAIT_PIT" for 06:08:26 and so-on. I would then build a unique list of the status entries for the period selected, and sum the total time for each status to get my shift summary.
You can use a correlated subquery to fetch the next timestamp for each row.
SELECT
i.status,
i.timestamp,
(
SELECT Min([timestamp])
FROM importedData
WHERE [timestamp] > i.timestamp
) AS next_timestamp
FROM importedData AS i
WHERE i.timestamp BETWEEN #2013-11-29 06:00:00#
AND #2013-11-29 18:00:00#;
Then you can use that query as a subquery in another query where you compute the duration between timestamp and next_timestamp. And then use that entire new query as a subquery in a third where you GROUP BY status and compute the total duration for each status.
Here's my version which I tested in Access 2007 ...
SELECT
sub2.status,
Format(Sum(Nz(sub2.duration,0)), 'hh:nn:ss') AS SumOfduration
FROM
(
SELECT
sub1.status,
(sub1.next_timestamp - sub1.timestamp) AS duration
FROM
(
SELECT
i.status,
i.timestamp,
(
SELECT Min([timestamp])
FROM importedData
WHERE [timestamp] > i.timestamp
) AS next_timestamp
FROM importedData AS i
WHERE i.timestamp BETWEEN #2013-11-29 06:00:00#
AND #2013-11-29 18:00:00#
) AS sub1
) AS sub2
GROUP BY sub2.status;
If you run into trouble or need to modify it, break out the innermost subquery, sub1, and test that by itself. Then do the same for sub2. I suspect you will want to change the WHERE clause to use parameters instead of hard-coded times.
Note the query Format expression would not be appropriate if your durations exceed 24 hours. Here is an Immediate window session which illustrates the problem ...
' duration greater than one day:
? #2013-11-30 02:00# - #2013-11-29 01:00#
1.04166666667152
' this Format() makes the 25 hr. duration appear as 1 hr.:
? Format(#2013-11-30 02:00# - #2013-11-29 01:00#, "hh:nn:ss")
01:00:00
However, if you're dealing exclusively with data from 12 hr. shifts, this should not be a problem. Keep it in mind in case you ever need to analyze data which spans more than 24 hrs.
If subqueries are unfamiliar, see Allen Browne's page: Subquery basics. He discusses correlated subqueries in the section titled Get the value in another record.
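The overall technique (pair each row with the next row's timestamp, subtract, then group by status) is easy to check outside Access; here is a small Python sketch using made-up shift data modelled on the example above:

```python
from datetime import datetime, timedelta
from collections import defaultdict

# (timestamp, status) rows sorted by time -- made-up sample shift data.
rows = [
    (datetime(2013, 11, 29, 6, 0, 0), "START_SHIFT"),
    (datetime(2013, 11, 29, 6, 1, 0), "DELAY_WAIT_PIT"),
    (datetime(2013, 11, 29, 12, 9, 26), "DIG"),
    (datetime(2013, 11, 29, 18, 0, 0), "END_SHIFT"),
]

# Pair each row with the next row's timestamp (the correlated subquery's job),
# then total the durations per status (the GROUP BY's job).
totals = defaultdict(timedelta)
for (ts, status), (next_ts, _) in zip(rows, rows[1:]):
    totals[status] += next_ts - ts

for status, duration in totals.items():
    print(status, duration)
# START_SHIFT 0:01:00
# DELAY_WAIT_PIT 6:08:26
# DIG 5:50:34
```

Because timedelta handles durations directly, the over-24-hour formatting pitfall described in the Immediate window session above does not arise here.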