How do I fetch data from a machine running log with separate 'start' and 'stop' entries in MySQL? - mysql

My application keeps track of machines that are 'started' and 'stopped' as many times as needed, and this information is saved in a single table with the fields machine_ID, timestamp, and entry_type (start/stop).
A start creates a row, and a stop creates a different row.
Sample start row: M1 - 2019-06-27 12:08:30 - 'start'
Sample stop row: M1 - 2019-06-27 14:30:05 - 'stop'
I need to be able to display:
1) All currently running machines (machines that don't have a 'stop' entry after their latest 'start' entry).
2) A timeline of previous machine activity, preferably in chronological order, e.g. Machine 1 started 9 AM, stopped 10 AM; Machine 2 started 1 PM, stopped 2 PM; etc.
I imagine a result table like the following would be what I need, but I can't work out how to build a query to fetch it.
M1 - 2019-06-27 12:08:30 - 'start'
M1 - 2019-06-27 14:30:05 - 'stop'
M1 - 2019-06-27 16:00:00 - 'start'
M1 - 2019-06-27 16:30:00 - 'stop'
M2 - 2019-06-27 16:00:00 - 'start'
M2 - 2019-06-27 16:30:00 - 'stop'
I plan to use PHP to parse through such a table.
I am unable to alter the table structure in any way.
I believe the general principle behind solutions to this kind of problem is well documented; however, not knowing what to call it, I haven't been able to do my own research.
Thanks.
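For reference, a minimal sketch of one common approach, assuming the table is called machine_log with the columns machine_ID, `timestamp`, and entry_type described above (adjust the names to your actual schema). The first query finds machines whose newest row is a 'start'; the second pairs every 'start' with the first 'stop' at or after it, so a NULL stopped_at means the machine is still running:

-- 1) Currently running machines: the latest row per machine is a 'start'
SELECT m.machine_ID, m.`timestamp` AS started_at
FROM machine_log m
WHERE m.entry_type = 'start'
  AND m.`timestamp` = (SELECT MAX(x.`timestamp`)
                       FROM machine_log x
                       WHERE x.machine_ID = m.machine_ID);

-- 2) Timeline: pair each 'start' with the first 'stop' that follows it
SELECT s.machine_ID,
       s.`timestamp` AS started_at,
       (SELECT MIN(e.`timestamp`)
        FROM machine_log e
        WHERE e.machine_ID = s.machine_ID
          AND e.entry_type = 'stop'
          AND e.`timestamp` >= s.`timestamp`) AS stopped_at
FROM machine_log s
WHERE s.entry_type = 'start'
ORDER BY s.machine_ID, s.`timestamp`;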

Related

How to update the status of an order in MySQL after 30 days using Sequelize (Node.js)

The problem I have is: auto-update the order status if that status hasn't had any update in 30 days.
For example, the current status is "processing" and over the next 30 days there is no further update on it, so MySQL should auto-update the order status to "on hold".
I found something and guess it is related to hooks, but I don't know how to implement it.
It depends on how you run your app. I'd suggest using e.g. some cron to get such instances and update their status (e.g. https://www.npmjs.com/package/node-cron).
Set it, for example, to run once a day and query such instances based on the built-in updatedAt column, or add some other column that gets updated with the current date, depending on what rules you want to apply.
You can use a cron job. As npm says, cron is a tool that allows you to execute something on a schedule.
To install cron: npm i cron
The asterisks represent, in order:
Seconds: 0-59
Minutes: 0-59
Hours: 0-23
Day of Month: 1-31
Months: 0-11 (Jan-Dec)
Day of Week: 0-6 (Sun-Sat)
How to initialize the cron job:
var CronJob = require('cron').CronJob;
var job = new CronJob(
  '0 0 0 */30 * *',        // sec min hour day-of-month month day-of-week
  function() {
    console.log('runs roughly every 30 days (at midnight on matching days of the month)');
  },
  null,                    // onComplete callback
  true,                    // start the job immediately
  'America/Los_Angeles'    // time zone
);
Make sure the other parameters are passed according to your application. Also, if you run the cron job as a separate process, it should use a different port than your application.
As suggested by Damian, you need to use a kind of CRON job that runs every day to check for expired orders (created more than 30 days ago without any update).
You can use Bull with PM2.
Create a queue and a processor in Bull, and use Bull's CRON functionality to set this job to run every day, usually at 12:05 AM or any time you see suitable.
The job would fetch all orders that have the status "processing" and were created more than 30 days ago, and update them (roughly the SQL sketched below).
Use PM2 to run the Bull worker.
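In raw SQL, that daily job boils down to a single statement. A rough sketch, assuming a hypothetical orders table with status and updated_at columns (with Sequelize you would express the same condition via Model.update with an Op.lt filter on updatedAt):

-- Hypothetical table/column names; adjust to your actual schema.
UPDATE orders
SET status = 'on hold'
WHERE status = 'processing'
  AND updated_at < NOW() - INTERVAL 30 DAY;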

Missing Firebase app_update events

I'm seeing a huge difference between the app_update events automatically sent by Firebase and the real conversion of the user base to a newer version when it's released.
For example, over 5 days during the release of a new version: 120 events vs 3k users (a 20x difference).
I did check another "alpha" update with data exported to BigQuery:
Count app_update events (to any version):
SELECT
  COUNT(user_pseudo_id)
FROM
  `analytics_151378081.events_*`
WHERE
  _TABLE_SUFFIX BETWEEN '20190820' AND '20190827'
  AND event_name = "app_update"
Result: 49
Count users on the new version whose first app launch was before this version was pushed to alpha:
SELECT
  COUNT(DISTINCT(user_pseudo_id))
FROM
  `analytics_151378081.events_*`
WHERE
  _TABLE_SUFFIX BETWEEN '20190820' AND '20190827'
  AND app_info.version = "0.4.2.1"
  AND user_first_touch_timestamp < 1566259200000000
Result: 73
So it's 49 events (to any version) vs 73 "old" users on the new version (a query that isolates the overlap is sketched below).
Is anyone seeing the same?
My platform is Android TV and a lot of sessions have no UI (TV Input Framework integration through a service).
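For what it's worth, a sketch of a query that isolates that overlap, i.e. counts the "old" users on 0.4.2.1 who never produced an app_update event in the same window (same dataset and fields as the queries above; the version, dates, and timestamp are the same ones used above):

SELECT
  COUNT(DISTINCT user_pseudo_id)
FROM
  `analytics_151378081.events_*`
WHERE
  _TABLE_SUFFIX BETWEEN '20190820' AND '20190827'
  AND app_info.version = "0.4.2.1"
  AND user_first_touch_timestamp < 1566259200000000
  AND user_pseudo_id NOT IN (
    SELECT user_pseudo_id
    FROM `analytics_151378081.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20190820' AND '20190827'
      AND event_name = "app_update"
      AND user_pseudo_id IS NOT NULL
  )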

How to get an optimized solution for a VRP using GraphHopper/jsprit

While solving a Vehicle Routing Problem using GraphHopper/jsprit, we get, for example, a solution as follows:
Pickup1 - job1
Pickup2 - job2
Pickup3 - job3
Delivery1 - job1
Delivery2 - job2
Pickup4 - job4
The problem I have with this solution is that it goes for the pickup of job4 even though job3 is still in the vehicle. My ultimate aim is to go for a pickup only when the vehicle is empty. Also, if a job isn't going to be delivered before another pickup, then it shouldn't have been picked up in the first place (e.g. job3 in the example above).
Any suggestions for a possible hard constraint to satisfy the above conditions? Please suggest; I will add any further details if needed.

Rails 4 / RSpec - Testing time values persisted to a MySQL database in controller specs

I'm writing pretty standard RSpec controller tests for my Rails app. One issue I'm running into is simply testing that time values have been persisted in an update action.
In my controller I have:
def update
  if @derp.update_attributes(derp_params)
    redirect_to @derp, flash: { success: "Updated derp successfully." }
  else
    render :edit
  end
end
@derp has a time attribute of type time. I can test all of its other attributes in the update action as follows:
describe "PATCH #update" do
before do
#attr = {
attribute_1: 5, attribute_2: 6, attribute_3: 7,
time: Time.zone.now
}
end
end
The error I'm getting is:
1) DerpsController logged in PATCH #update updates the derp's attributes
   Failure/Error: expect(derp.time).to eq(@attr[:time])
     expected: 2015-08-24 18:30:32.096943000 -0400
          got: 2000-01-01 18:30:32.000000000 +0000
     (compared using ==)
     Diff:
     @@ -1,2 +1,2 @@
     -2015-08-24 18:30:32 -0400
     +2000-01-01 18:30:32 UTC
I've tried using Timecop and also comparing with to_s or to_i, but every attribute of data type time is completely off as far as the year goes. I've seen a couple of posts saying you can expect the value to be within 1 second and how to deal with that, but in my case the year itself is completely off.
This can't be that difficult - I just want to test that the controller can take a time sent to it and save it to the database.
What am I missing here?
EDIT: No help here after a couple of days. Let me try to reiterate what's happening: the date is being stripped because the column is of MySQL type TIME. Notice the 2000-01-01...
let(:time) { 'Mon, 24 Aug 2015 23:19:09' }
describe "PATCH #update" do
  before do
    @attr = {
      attribute_1: 5, attribute_2: 6, attribute_3: 7,
      time: time.in_time_zone('Eastern Time (US & Canada)').to_datetime
    }
  end
end
Rather than saving Time.zone.now in the record, define a time variable and update the record with that time so you know which time should be saved in the database. Then, expect to get that time back when you compare.
I ended up doing this:
let(:time) { '01 Jan 2000 16:20:00' }
I really can't find any good explanation as to how or why Rails is storing the time as 2000-01-01, or any helper method to format a time like that. I checked the controller params and it's actually being sent to the controller with that 2000-01-01 date in it.
Some people say to just use a datetime, but I am truly trying to store a time of the day, so I don't think that makes sense for this use case.
Anyway, I can just use that time variable anywhere in my specs and it works.
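For anyone who lands here, the truncation is easy to reproduce on the MySQL side; a minimal check (the 2000-01-01 is just the dummy date ActiveRecord attaches when it reads a TIME value back, as seen in the failing expectation above):

SELECT TIME('2015-08-24 18:30:32');
-- returns '18:30:32'; a MySQL TIME column only ever stores the time of day,
-- so the date part of whatever value you send is discarded on the way in.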

DbConnectionBroker + MySQL multiple connection issue

I have a Struts 2 application that connects to a MySQL database, and I am using DbConnectionBroker for DB connection pooling.
While performing select-after-insert operations I am facing strange issues, as below:
I insert a record into a table and then select it (by primary key id) and receive a success message, but the result is not displayed until a refresh is performed.
When I successfully update a record, the current changes are not reflected; instead the previous value is displayed.
Multiple refreshes keep displaying different values each time (i.e. old as well as new changes).
Below are the logs.log file details created by DbConnectionBroker:
===============================================================================
Starting DbConnectionBroker Version 1.0.13:
dbDriver = com.mysql.jdbc.Driver
dbServer = jdbc:mysql://localhost:3306/dev01?profileSQL=true
dbLogin = esmdev
log file = c:\DBlogs.log
minconnections = 1
maxconnections = 20
Total refresh interval = 1.0 days
logAppend = false
maxCheckoutSeconds = 60
debugLevel = 2
Wed Aug 21 20:45:51 IST 2013 Opening connection 0 com.mysql.jdbc.JDBC4Connection#1ff5c98:
Wed Aug 21 20:45:52 IST 2013 Opening connection 1 com.mysql.jdbc.JDBC4Connection#a6e0a9:
-----> Connections Exhausted! Will wait and try again in loop 1
================================================================================
My initial debugging/analysis, done by setting profileSQL=true on my DB URL, shows the following logs in my Eclipse IDE console:
===============================================================================
duration: 1 ms, connection-id: 29, statement-id: 14, resultset-id: 16
duration: 2 ms, connection-id: 28, statement-id: 15, resultset-id: 17
===============================================================================
The interesting part I identified here is the connection-id:
for the same query I can see two different IDs (min connections is 1), but the result set is different.
I also observed that when I start Tomcat in debug mode and set a breakpoint to watch a few of the objects above, it causes some delay and slows down the execution; then only one connection-id is displayed and everything seems to work fine, apparently because of the delay.
I am confused as to why a new connection is created even though I free the connection after the queries are executed.
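One thing that may be worth checking (this is an assumption, not a confirmed diagnosis) is per-connection transaction state: a pooled connection left inside an open REPEATABLE READ transaction will keep returning the snapshot from its first read until it commits, which would explain seeing different results on different connection-ids. On MySQL 5.x you can inspect this on a connection handed out by the broker:

-- Run on a connection obtained from the pool (MySQL 5.x syntax)
SELECT @@autocommit, @@tx_isolation;
-- If autocommit is 0 and the level is REPEATABLE-READ, make sure every
-- code path commits (or rolls back) before the connection is freed.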