I've spent a lot of time searching for a solution to my problem. I think I'm close, but my final query doesn't work...
First of all, I have a table that represents a water meter index, sampled every 10 minutes.
+---------------------+---------+
| DateTime            | Counter |
+---------------------+---------+
| 2020-05-13 15:00:03 |   38450 |
| 2020-05-13 15:10:03 |   38454 |
| 2020-05-15 15:00:03 |   38500 |
| 2020-06-02 12:10:03 |   38510 |
| 2020-06-15 12:10:03 |   38600 |
+---------------------+---------+
Some samples may be missing from the table, and I would like to extract a table showing my consumption by day, week, month, or year.
I have found many examples, but none works as I expect...
For the example table above, I expect to get:
+---------------------+---------------------+-------------+-----------+------+
| fromDateTime        | toDateTime          | fromCounter | toCounter | diff |
+---------------------+---------------------+-------------+-----------+------+
| 2020-05-13 15:00:03 | 2020-06-02 12:10:03 |       38450 |     38510 |   60 |
| 2020-06-02 12:10:03 | 2020-06-15 12:10:03 |       38510 |     38600 |   90 |
+---------------------+---------------------+-------------+-----------+------+
I have written a query:
select mt1.DateTime as fromDateTime,
       mt2.DateTime as toDateTime,
       mt1.Counter as fromCounter,
       mt2.Counter as toCounter,
       (mt2.Counter - mt1.Counter) as diff
from WaterTest as mt1
left join WaterTest as mt2
  on mt2.DateTime = (
      select max(dd.DateTime) as DateTime
      from (select min(DateTime) as DateTime
            from WaterTest as mt3
            where month(mt3.DateTime) = month(mt1.DateTime + interval 1 month)
            union all
            select max(DateTime) as DateTime
            from WaterTest as mt4
            where month(mt4.DateTime) = month(mt1.DateTime)
           ) as dd
  )
But MySQL fails with the error "Unknown column 'mt1.DateTime' in 'where clause'".
Can someone help me find where I'm wrong?
Am I on the right track to achieve this?
(And of course, if there is a more powerful query, I'll take it... :) )
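The error is expected: MySQL does not let a derived table (the dd subquery) see columns of the outer query such as mt1.DateTime; correlated derived tables only became possible with LATERAL in MySQL 8.0.14. Also, comparing month() alone would confuse May 2020 with May 2021. A minimal sketch of an alternative, assuming MySQL 8+ (for CTEs): take the first and last reading of each month, then diff each month's first reading against the next month's first reading, falling back to the month's last reading when there is no next month yet.

with monthly as (
    -- first and last sample datetime of each month
    select date_format(DateTime, '%Y-%m') as ym,
           min(DateTime) as firstDT,
           max(DateTime) as lastDT
    from WaterTest
    group by ym
),
bounds as (
    -- attach the counter values read at those datetimes
    select m.ym,
           m.firstDT, w1.Counter as firstCounter,
           m.lastDT,  w2.Counter as lastCounter
    from monthly m
    join WaterTest w1 on w1.DateTime = m.firstDT
    join WaterTest w2 on w2.DateTime = m.lastDT
)
select b.firstDT                                                 as fromDateTime,
       coalesce(n.firstDT, b.lastDT)                             as toDateTime,
       b.firstCounter                                            as fromCounter,
       coalesce(n.firstCounter, b.lastCounter)                   as toCounter,
       coalesce(n.firstCounter, b.lastCounter) - b.firstCounter  as diff
from bounds b
left join bounds n
  on n.ym = date_format(b.firstDT + interval 1 month, '%Y-%m');

For the sample data above this yields exactly the two expected rows (diff 60 and 90), and the same pattern should work for day, week, or year buckets by changing the date_format() mask.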
I have a table "ips", where I store my download logs. Accidentally, I forgot to add timestamp for it (yea, stupid mistake)... Now, I have fixed it, but there are already 65.5k entries without timestamp.. Is there a way, how to add random timestamp from date range to fill NULL timestamps?
I was able to generate timestamps list using this queries:
SET @MIN = '2020-04-05 18:30:00';
SET @MAX = NOW();

SELECT TIMESTAMPADD(SECOND, FLOOR(RAND() * TIMESTAMPDIFF(SECOND, @MIN, @MAX)), @MIN) as dldate
FROM ips
WHERE name = "filename1"
ORDER BY dldate ASC;
It generated exactly the number of entries I need for a specific filename, but I have absolutely no idea how to use this list to update the already existing entries in my "ips" table and KEEP IT ORDERED by "dldate"...
When I was testing I came close with this query (I was afraid of messing my data up with UPDATE, so I used just SELECT):
SELECT ips.id, ips.name, t1.dldate
FROM (SELECT id, name FROM ips WHERE name = "filename1") ips
INNER JOIN (SELECT ips.id as id,
                   TIMESTAMPADD(SECOND, FLOOR(RAND() * TIMESTAMPDIFF(SECOND, @MIN, @MAX)), @MIN) as dldate
            FROM ips
            WHERE name = "filename1"
            ORDER BY dldate ASC) t1 ON (ips.id = t1.id)
ORDER BY ips.id ASC;
That worked, but the timestamps are purely random (obviously :D), and I need them to "respect" the id in the "ips" table (the lowest timestamp for the lowest id, then continuously higher timestamps for higher ids).
I'm getting this:
+------+-----------+---------------------+
| id | name | dldate |
+------+-----------+---------------------+
| 15 | filename1 | 2020-12-18 21:35:03 |
| 1118 | filename1 | 2020-12-18 13:34:47 |
| 1141 | filename1 | 2020-08-07 12:49:46 |
| 1142 | filename1 | 2020-11-29 00:43:31 |
| 1143 | filename1 | 2020-05-13 03:00:16 |
| 1286 | filename1 | 2020-12-14 09:58:50 |
| 1393 | filename1 | 2021-04-14 06:45:23 |
| 1394 | filename1 | 2021-03-03 17:42:25 |
| 1395 | filename1 | 2020-09-03 05:56:56 |
| .... |
|62801 | filename1 | 2021-01-05 21:21:29 |
+------+-----------+---------------------+
And I would like to get this:
+------+-----------+---------------------+
| id | name | dldate |
+------+-----------+---------------------+
| 15 | filename1 | 2020-04-05 21:35:03 |
| 1118 | filename1 | 2020-04-18 13:34:47 |
| 1141 | filename1 | 2020-05-07 12:49:46 |
| 1142 | filename1 | 2020-06-29 00:43:31 |
| 1143 | filename1 | 2020-08-13 03:00:16 |
| 1286 | filename1 | 2020-10-14 09:58:50 |
| 1393 | filename1 | 2020-12-14 06:45:23 |
| 1394 | filename1 | 2021-01-03 17:42:25 |
| 1395 | filename1 | 2021-03-03 05:56:56 |
| .... |
|62801 | filename1 | 2021-04-29 14:21:29 |
+------+-----------+---------------------+
Is there any way to achieve this output, and how can I use it in an UPDATE statement instead of a SELECT with an INNER JOIN?
Thank you for your help!
How about just starting with a date and adding a time unit?
update ips
set timestamp = '2000-01-01' + interval id second
where timestamp is null;
I'm not sure if second is the right unit, or if '2000-01-01' is a good base time, but this gives you an approach for doing what you want.
You can, of course, test this using a select first.
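For instance, a quick preview of the generated values (a sketch using the same expression as the UPDATE above):

-- preview what the UPDATE would assign, without touching any data
select id, '2000-01-01' + interval id second as new_timestamp
from ips
where timestamp is null
order by id;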
If you do want randomness, you can do something like this:
select ips.*,
       '2021-04-01' - interval running second
from (select ips.*,
             sum(rnd) over (order by id desc) as running
      from (select ips.*,
                   rand() * 1000 as rnd
            from ips
            where timestamp is null
           ) ips
     ) ips;
This calculates a random number of seconds per row, computes a reverse cumulative sum of those seconds (ordered by id descending), and subtracts the running total from a base date, so lower ids end up with earlier timestamps.
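To turn that into an UPDATE, one hedged sketch (assuming MySQL 8+ for window functions, and that id is the primary key of ips):

-- join the generated running offsets back to the table by id
update ips
join (
    select id,
           sum(rnd) over (order by id desc) as running
    from (select id, rand() * 1000 as rnd
          from ips
          where timestamp is null
         ) t
) gen on gen.id = ips.id
set ips.timestamp = '2021-04-01' - interval gen.running second;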
I have a table like this:
// reset_password_emails
+----+----------+--------------------+-------------+
| id | id_user | token | unix_time |
+----+----------+--------------------+-------------+
| 1 | 2353 | 0c274nhdc62b9dc... | 1339412843 |
| 2 | 2353 | 0934jkf34098joi... | 1339412864 |
| 3 | 5462 | 3408ujf34o9gfvr... | 1339412894 |
| 4 | 3422 | 2309jrgv0435gff... | 1339412899 |
| 5 | 3422 | 34oihfc3lpot4gv... | 1339412906 |
| 6 | 2353 | 3498hfjp34gv4r3... | 1339412906 |
| 16 | 2353 | asdf3rf3409kv39... | 1466272801 |
| 7 | 7785 | 123dcoj34f43kie... | 1339412951 |
| 9 | 5462 | 3fcewloui493e4r... | 1339413621 |
| 13 | 8007 | 56gvb45cf3454g3... | 1339424860 |
| 14 | 7785 | vg4er5y2f4f45v4... | 1339424822 |
+----+----------+--------------------+-------------+
Each row is an email. Now I'm trying to implement a limit on sending reset-password emails: a user can receive at most 3 emails per day.
So I need a query that checks the user's history for the number of emails:
SELECT count(1) FROM reset_password_emails WHERE token = :token AND {from now until last day}
How can I implement this:
. . . {from now until last day}
Actually I can do that like: NOW() <= (unix_time + 86400). But I guess there is a better approach using an interval. Can anybody tell me what that would be?
Your expression will work, but it has three problems:
the way you've coded it means the subtraction must be performed for every row (a performance hit)
because you're not using the raw column value, an index on the time column (if one existed) could not be used
it isn't clear to read
Try this:
unix_time > unix_timestamp(subdate(now(), interval '1' day))
Here the threshold datetime is calculated once per query, so all of the problems above are addressed.
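Applied to the 3-per-day limit, the check might look like this (a sketch; :id_user is a placeholder bind parameter, on the assumption that the limit is per user rather than per token):

SELECT count(*)
FROM reset_password_emails
WHERE id_user = :id_user
  AND unix_time > unix_timestamp(subdate(now(), interval '1' day));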
See SQLFiddle demo
You can convert your unix_time using the from_unixtime function:
select r.*
from reset_password_emails r
where now() <= from_unixtime(r.unix_time) - interval '1' day
Just add the extra filters you want.
See it here: http://sqlfiddle.com/#!9/4a7a9/3
It evaluates to no rows because your given data for the unix_time field is all from 2012.
Edited with a SQLFiddle that shows the conversion:
http://sqlfiddle.com/#!9/4a7a9/4
I have a single-table SQL database built from DHCPD logs, structured as below:
+------+-------+------+----------+---------+-------------------+-----------------+
| id | Month | Day | Time | Type | MAC | ClientIP |
+------+-------+------+----------+---------+-------------------+-----------------+
| 9305 | Nov | 24 | 03:20:00 | DHCPACK | 00:04:f2:4b:dd:51 | 10.123.246.116 |
| 9307 | Nov | 24 | 03:20:07 | DHCPACK | 00:04:f2:99:4c:ba | 10.123.154.176 |
| 9310 | Nov | 24 | 03:20:08 | DHCPACK | 00:19:bb:cf:cd:28 | 10.99.107.3 |
| 9311 | Nov | 24 | 03:20:08 | DHCPACK | 00:19:bb:cf:cd:28 | 10.99.107.3 |
Every DHCP event from the log will eventually make its way into this database, so events from any point in time can potentially be used in the construction of graphs. To make use of the data for graphing, I need to be able to create an output table with multiple columns, where the values are derived from counts of rows in a single column matching specific patterns.
The closest thing I've managed to come up with is this query:
select 'Data' as ClientIP, count(*)
from Log
where ClientIP like '10.99%' and MAC like '00:04:f2%'
union
select 'Voice' as ClientIP, count(*)
from Log
where ClientIP like '10.123%' and MAC like '00:04:f2%';
Which yields the following result:
+-----------+-------+
| ClientIP | Count |
+-----------+-------+
| Data | 4618 |
| Voice | 13876 |
+-----------+-------+
Fine for a one-off query, but I want to take those two rows, turn them into two columns, and run the same query with one row per hour (for instance). I want something like this:
+------+-------+------+
| Hour | Voice | Data |
+------+-------+------+
| 03 | 22 | 4 |
| 04 | 123 | 23 |
| 05 | 45 | 5 |
Any advice is greatly welcomed.
Thanks
You can group by hour and use conditional aggregation to count the Data and Voice traffic.
For example:
SELECT
HOUR(time) AS `Hour`,
SUM(CASE WHEN ClientIP like '10.99%' and MAC like '00:04:f2%' THEN 1 ELSE 0 END) AS `Data`,
SUM(CASE WHEN ClientIP like '10.123%' and MAC like '00:04:f2%' THEN 1 ELSE 0 END) AS `Voice`
FROM log
GROUP BY HOUR(time)
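One caveat, since the log spans many days: grouping by HOUR(time) alone merges the same hour of different days. A hedged variant that also groups by the Month and Day columns shown in the question:

SELECT Month, Day, HOUR(Time) AS `Hour`,
       SUM(CASE WHEN ClientIP like '10.99%' and MAC like '00:04:f2%' THEN 1 ELSE 0 END) AS `Data`,
       SUM(CASE WHEN ClientIP like '10.123%' and MAC like '00:04:f2%' THEN 1 ELSE 0 END) AS `Voice`
FROM Log
GROUP BY Month, Day, HOUR(Time);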
Or create a separate table shaped the way you want:
+------+-------+------+
| Hour | Voice | Data |
+------+-------+------+
and update it every hour using the event scheduler.
I'm having trouble with a certain query in MySQL, and I hope someone can help me.
A little background info:
We have a callcenter reporting API available to us, from our "telephony as a service" company. The pertinent fields I'm grabbing from their XML interface are:
agent_name
interaction_id
origination <-- this is the "caller ID", which is not always accurate
create_timestamp
accept_timestamp
abandon_timestamp
queue_id
Regular phone calls (interactions, in this case) are answered by each agent, after having queued in our "Main" queue. The create_timestamp field is the time the call starts queuing to agents belonging to "Main", and the accept_timestamp is the time when the agent answers the call. The abandon_timestamp is the time the caller gets tired of queuing and 1) hangs up, or 2) presses a menu option to go to voicemail. The voicemail is saved as an .mp3 file and is queued to the same group of agents as if it were a new, inbound call, except it is associated with the "Main_VM" queue rather than the "Main" queue.
The tricky part is this:
If a call comes in and is "abandoned" to voicemail, the interaction_id does not stay the same for the voicemail .mp3 that queues to the agents. Nor is it always incremented by 1 ... there are times when other calls come in during the time the person has been queuing. Here are example record snippets:
A)
+----------------+--------------+---------------------+---------------------+---------------------+---------------+
| interaction_id | origination | create_timestamp | accept_timestamp | abandon_timestamp | queue_id |
+----------------+--------------+---------------------+---------------------+---------------------+---------------+
| 21771 | NNNPPPXXXX | 2012-09-04 08:26:15 | 0000-00-00 00:00:00 | 2012-09-04 08:27:17 | Main |
| 21772 | NNNPPPXXXX | 2012-09-04 08:27:44 | 2012-09-04 08:32:07 | 0000-00-00 00:00:00 | Main_VM |
+----------------+--------------+---------------------+---------------------+---------------------+---------------+
B)
+----------------+--------------+---------------------+---------------------+---------------------+---------------+
| interaction_id | origination | create_timestamp | accept_timestamp | abandon_timestamp | queue_id |
+----------------+--------------+---------------------+---------------------+---------------------+---------------+
| 2195 | AAAAAAAAAA | 2011-10-28 09:21:02 | 2011-10-28 09:23:50 | 0000-00-00 00:00:00 | Main |
| 2197 | NNNPPPXXXX | 2011-10-28 09:22:37 | 0000-00-00 00:00:00 | 2011-10-28 09:26:42 | Main |
| 2199 | BBBBBBBBBB | 2011-10-28 09:23:38 | 2011-10-28 09:27:23 | 0000-00-00 00:00:00 | Main |
| 2200 | CCCCCCCCCC | 2011-10-28 09:24:40 | 2011-10-28 09:33:09 | 0000-00-00 00:00:00 | Main |
| 2201 | NNNPPPXXXX | 2011-10-28 09:27:16 | 2011-10-28 09:42:28 | 0000-00-00 00:00:00 | Main_VM |
+----------------+--------------+---------------------+---------------------+---------------------+---------------+
In MySQL, I need to be able to associate interaction_id 2197 with 2201, and 21771 with 21772, for example. I'll be doing things like TIMESTAMPDIFF() to calculate the "total" time to answer a call, SLA-met and abandoned percentages, etc., while also accounting for hours of operation and holidays. I think I have most of that worked out; my main trouble is what I've just described.
NOTE: I intend to change the "0000-00-00 00:00:00" timestamps to NULL. I'm still in planning.
I made some headway on this, and I thought I'd share. I just had one of the fields return the most recent matching call, using LIMIT 1:
select interaction_id, origination, create_timestamp, accept_timestamp,
       abandon_timestamp, queue_name, parent_call, agi.agent_name
from (
    (
        select interaction_id, origination, create_timestamp, accept_timestamp,
               abandon_timestamp, queue_name,
               (
                   select q1.interaction_id
                   from queue_interactions q1
                   where q1.origination = q2.origination
                     and ABS(timestampdiff(SECOND, q1.abandon_timestamp, q2.create_timestamp)) < 180
                   -- order by the candidate's own timestamp, newest first,
                   -- so LIMIT 1 returns the most recent matching call
                   order by q1.abandon_timestamp desc
                   LIMIT 1
               ) as parent_call
        from queue_interactions q2
        where q2.queue_name = "Service Desk VM"
    )
    UNION
    (
        select interaction_id, origination, create_timestamp, accept_timestamp,
               abandon_timestamp, queue_name, NULL as parent_call
        from queue_interactions q3
        where q3.queue_name = "Service Desk"
    )
) a natural left join agent_interactions agi
order by a.create_timestamp;
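From here, the "total" time to answer a voicemail-abandoned call could be computed along these lines (a sketch, assuming the result of the query above is saved as a view or table, hypothetically named paired_calls, with the parent_call column):

-- seconds from the original call's creation to the voicemail's answer
select child.interaction_id,
       timestampdiff(SECOND, parent.create_timestamp, child.accept_timestamp) as seconds_to_answer
from paired_calls child
join paired_calls parent on parent.interaction_id = child.parent_call;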
I'd like to use GROUP BY on multiple columns; I think it's best to start with an example:
SELECT
eventsviews.eventId,
showsActive.showId,
showsActive.venueId,
COUNT(*) AS count
FROM eventsviews
INNER JOIN events ON events.eventId = eventsviews.eventId
INNER JOIN showsActive ON showsActive.eventId = eventsviews.eventId
WHERE events.status = 1
GROUP BY showsActive.venueId, showsActive.showId, showsActive.eventId
ORDER BY count DESC
LIMIT 100;
Output:
+---------+--------+---------+-------+
| eventId | showId | venueId | count |
+---------+--------+---------+-------+
[...snip...]
|      95 |  92099 |    9770 |    32 |
|      95 | 105472 |   10702 |    32 |
|    3804 |  41225 |    8165 |    17 |
|    3804 |  41226 |    8165 |    17 |
|     923 |   2866 |    5451 |    14 |
|     923 |  20184 |    5930 |    14 |
[...snip...]
What I would like instead:
+---------+--------+---------+-------+
| eventId | showId | venueId | count |
+---------+--------+---------+-------+
|      95 |  92099 |    9770 |    32 |
|    3804 |  41226 |    8165 |    17 |
|     923 |  20184 |    5930 |    14 |
So, I want my data grouped by eventId, but with only one row for each showId and venueId...
I actually have an SQL query that does this, but it has 8 subqueries and is as slow as a T-Ford... And since it is executed on every page load, speeding things up looks like a good idea!
There are a few questions like this one, and I've tried many different things, but I've been at this query for an hour and I can't seem to get it to work the way I want :-(
Thanks!
You probably want either a MIN or a MAX on showId, and then to leave it out of the GROUP BY. I can't tell which, because looking at your preferred resultset, you have both.
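For example, the MAX variant might look like this (a sketch; note that MAX(venueId) is taken independently of MAX(showId), so the two values may come from different rows of the group):

SELECT eventsviews.eventId,
       MAX(showsActive.showId)  AS showId,
       MAX(showsActive.venueId) AS venueId,
       COUNT(*) AS count
FROM eventsviews
INNER JOIN events ON events.eventId = eventsviews.eventId
INNER JOIN showsActive ON showsActive.eventId = eventsviews.eventId
WHERE events.status = 1
GROUP BY eventsviews.eventId
ORDER BY count DESC
LIMIT 100;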
If you want your data grouped by eventId, group just by eventId and you'll get exactly the result you're looking for.
This is a MySQL feature (?): it allows you to select non-aggregated columns, in which case it returns the first row available. Other DBMSs achieve this with DISTINCT ON, which is not available in MySQL.
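If you need that "first row per group" pick to be deterministic, MySQL 8+ window functions offer one way (a sketch; here the row kept per eventId is simply the one with the highest count):

SELECT eventId, showId, venueId, count
FROM (
    SELECT eventsviews.eventId,
           showsActive.showId,
           showsActive.venueId,
           COUNT(*) AS count,
           ROW_NUMBER() OVER (PARTITION BY eventsviews.eventId
                              ORDER BY COUNT(*) DESC) AS rn
    FROM eventsviews
    INNER JOIN events ON events.eventId = eventsviews.eventId
    INNER JOIN showsActive ON showsActive.eventId = eventsviews.eventId
    WHERE events.status = 1
    GROUP BY showsActive.venueId, showsActive.showId, eventsviews.eventId
) x
WHERE rn = 1
ORDER BY count DESC
LIMIT 100;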