Jess rules reacting to elapsed time

I'm new to Jess and need rules to fire based on elapsed time.
That is, if no new facts are asserted but a certain amount of time has elapsed, I need rules to fire. Example: a watchman registers at checkpoint 1 but fails to register at checkpoint 2, and more than 30 minutes have elapsed.
The obvious solution is to post time facts every (say) 5 minutes from a thread, but is there a better way?
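For illustration, a minimal Java sketch of that thread-based approach, assuming Jess's jess.Rete API; the clock and registered templates and the rule are made up for the watchman example:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import jess.Fact;
import jess.JessException;
import jess.Rete;

public class TimePulse {
    private static Fact clockFact; // the most recently asserted clock fact

    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();
        // Invented templates and rule: fire when checkpoint 1 was registered,
        // checkpoint 2 was not, and more than 30 minutes (1,800,000 ms) have passed.
        engine.executeCommand("(deftemplate clock (slot now))");
        engine.executeCommand("(deftemplate registered (slot checkpoint) (slot at))");
        engine.executeCommand(
            "(defrule missed-checkpoint-2 " +
            "  (registered (checkpoint 1) (at ?t)) " +
            "  (not (registered (checkpoint 2))) " +
            "  (clock (now ?now&:(> (- ?now ?t) 1800000))) " +
            "  => (printout t \"Watchman overdue at checkpoint 2\" crlf))");

        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                synchronized (engine) {
                    if (clockFact != null) {
                        engine.retract(clockFact);   // drop the stale time fact
                    }
                    clockFact = engine.assertString(
                        "(clock (now " + System.currentTimeMillis() + "))");
                    engine.run();                    // let time-based rules fire
                }
            } catch (JessException e) {
                e.printStackTrace();
            }
        }, 0, 5, TimeUnit.MINUTES);
    }
}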

Related

How is the time to re-execute a Google Apps Script calculated?

Let's suppose that I have a script running every 10 minutes, and now I add the line
Utilities.sleep(30000)
in the middle of it.
Will it now keep running every 10 minutes, or every 15 minutes?
If a timed trigger is set to fire every 10 minutes, that is what it will do. It does not depend on how long the function takes to execute. In principle, you could have a 5 minute timeout inside a function that's triggered to run every 1 minute. Except that will quickly run into problems:
Total trigger-based execution time limit: 90 minutes per day
"There are too many scripts running simultaneously for this Google user account" (how many is "too many" is not documented as far as I know).

Database setup for activity logging

I'm currently working on an application which requires detecting activity within the past 12 hours (divided into 10-minute intervals, i.e. 6 per hour). My initial thought has been to run a cronjob every 10 minutes that detects and accumulates the activity level (rows) from a table for the past 10 minutes, and then inserts it into a new table which collects and updates the activity level for, e.g., 1:20 ago (an hour and twenty minutes before the current timestamp).
I'm struggling, though, with the logic of "pushing" (for lack of a better word) all the other values to the next slot within the table. E.g. the value for 1 hour and 20 minutes ago should then be "pushed" to 1 hour and 30 minutes ago, and so on.
I realize that my thoughts on the setup are limited by my understanding of PHP/MySQL and my use of them, but I am open to other setups such as NodeJS/MongoDB if that seems more flexible and feasible. The output should be a JSON file showing the activity level for each hour, divided into 10-minute slots, for the past 12 hours.
Would love some thoughts/feedback on the approach and how to handle this. Thanks a bunch in advance.
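For what it's worth, a minimal Java sketch (names are purely illustrative) of the 10-minute "pushing" the question describes, kept as a fixed window of 72 slots covering 12 hours; in a real setup the slots would live in MySQL or MongoDB rather than in memory:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ActivityWindow {
    // slots[0] is the current 10-minute bucket, slots[71] covers 11h50m-12h ago.
    private final long[] slots = new long[72];

    // Called whenever a new activity row is detected.
    public synchronized void recordActivity() {
        slots[0]++;
    }

    // Shifts every bucket one step "older"; run this every 10 minutes (the cronjob's role).
    public synchronized void rotate() {
        System.arraycopy(slots, 0, slots, 1, slots.length - 1); // oldest bucket falls off
        slots[0] = 0;                                           // fresh "current" bucket
    }

    // Serializes the window as a JSON array, newest bucket first.
    public synchronized String toJson() {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < slots.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(slots[i]);
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        ActivityWindow window = new ActivityWindow();
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(window::rotate, 10, 10, TimeUnit.MINUTES);
    }
}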

MySQL time table data structure

I'm making an air conditioning scheduler website for school. A user will be able to add a temperature and humidity setting for any of the 30 minute intervals throughout the day, for seven days of the week. For example, a user will be able to say that on Sunday, at 3:30 PM, they want the cooler (rather than the heater) to cool their home down to 70 degrees and a humidity index of 50 for 15 minutes. I could use advice setting up a MySQL table (or tables) to handle such commands. It's not the individual variables for all the potential settings I'm worried about, but rather handling all those times for all seven days.
So far I am thinking of having one big table called Scheduler which would handle the entire week. The day AND time slots for the seven days of the week could go into a VARCHAR column called time_slot, and would have both the day and the time slot in military time. For example:
time_slot (a VARCHAR column)
sunday_0000 (this is sunday at midnight)
.....
sunday_1630 (this is sunday at 4:30 pm)
.....
sunday_2330 (this is the final possible sunday time slot at 11:30 PM)
monday_0000 (this is the start of monday)
(continue for all seven days)
the remaining columns of the table would be all the necessary settings a user could set, as well as a duration from 30 seconds up to the full 30 minutes before the next potential time slot. Does anyone have any ideas for a more efficient MySQL table? Perhaps something that gives each individual day its own table?
You may want to consider having multiple columns, using TINYINT for the day (1-7) and TIME for the time of day (00:00-23:59). This way one could set the schedule for each day individually or for all days at once.
e.g.
UPDATE scheduler
SET ...
WHERE time = '12:00';

find execution time of a program, given cycle count and GHz

I have to find the execution time (in microseconds) of a small block of MIPS code, given that:
it will take a total of 30 cycles
total of 10 MIPS instructions
2.0 GHz CPU
That's all the information I am given to solve this question with (I already added up the total number of cycles, given the assumptions I am supposed to make about how many cycles different kinds of instructions take). I have been playing around with the formulas from the book trying to find the execution time, but I can't get an answer that seems right. What's the process for solving a problem like this? Thanks.
My best guess at interpreting your problem is that on average each instruction takes 3 cycles to complete. Because you were given the total number of cycles I'm not sure that the instruction count even matters.
You have a 2 GHz machine, so that is 2 * 10^9 cycles per second. This equates to each cycle taking 5 * 10^(-10) seconds (twice as fast as a 1 GHz machine, where each cycle takes 1 * 10^(-9) seconds).
We have 30 cycles to complete to run the program, so...
30 * (5 * 10^(-10)) = 1.5 * 10^(-8) seconds, or 15 nanoseconds (0.015 microseconds), to execute all 10 instructions in 30 cycles.
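The same arithmetic as a tiny Java check, with the numbers taken straight from the question:

public class ExecutionTime {
    public static void main(String[] args) {
        long cycles = 30;              // total cycle count for the block
        double clockHz = 2.0e9;        // 2.0 GHz CPU
        double seconds = cycles / clockHz;
        System.out.printf("%.1e s = %.1f ns = %.3f us%n",
                seconds, seconds * 1e9, seconds * 1e6);
        // prints: 1.5e-08 s = 15.0 ns = 0.015 us
    }
}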

Computing estimated times of file copies / movements?

Inspired by this xkcd cartoon, I wondered: exactly what is the best mechanism for providing the user with a time estimate for a file copy / move?
The alt tag on xkcd reads as follows:
They could say "the connection is probably lost," but it's more fun to do naive time-averaging to give you hope that if you wait around for 1,163 hours, it will finally finish.
Ignoring the funny, is that really how it's done in Windows? How about other OS? Is there a better way?
Have a look at my answer to a similar question (and the other answers there) on how the remaining time is estimated in Windows Explorer.
In my opinion, there is only one way to get good estimates:
Calculate the exact number of bytes to be copied before you begin the copy process
Recalculate your estimate regularly (every 1, 5 or 10 seconds, YMMV) based on the current transfer speed
The current transfer speed can fluctuate heavily when you are copying over a network, so use an average, for example based on the amount of bytes transferred since your last estimate.
Note that the first point may require quite some work if you are copying many files. That is probably why the folks at Microsoft decided to go without it. You need to decide for yourself whether the additional overhead created by that calculation is worth giving your user a better estimate.
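As a sketch of that second point (class and method names invented), the rate can be recomputed from the bytes moved since the previous tick rather than since the start of the copy:

public class WindowedRateEstimator {
    private long lastCopied;        // bytes copied at the previous tick
    private long lastTickMillis;    // wall-clock time of the previous tick

    public WindowedRateEstimator(long nowMillis) {
        this.lastTickMillis = nowMillis;
    }

    // Call every few seconds; returns estimated seconds remaining, or -1 if no progress yet.
    public double tick(long nowMillis, long copiedSoFar, long totalBytes) {
        long deltaBytes = copiedSoFar - lastCopied;
        double deltaSeconds = (nowMillis - lastTickMillis) / 1000.0;
        lastCopied = copiedSoFar;
        lastTickMillis = nowMillis;
        if (deltaBytes <= 0 || deltaSeconds <= 0) {
            return -1;  // stalled (or clock went backwards) in this window
        }
        double bytesPerSecond = deltaBytes / deltaSeconds;
        return (totalBytes - copiedSoFar) / bytesPerSecond;
    }
}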
I've done something similar to estimate when a queue will be empty, given that items are being dequeued faster than they are being enqueued. I used linear regression over the most recent N readings of (time, queue size).
This gives better results than a naive
bytes_left_to_copy / (bytes_copied_so_far / elapsed_time)
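A minimal sketch of that regression approach (class and method names are made up): fit a straight line to the most recent samples of (time, bytes remaining) and extrapolate to the point where it reaches zero:

import java.util.ArrayDeque;
import java.util.Deque;

public class RegressionEstimator {
    private final int maxSamples;
    private final Deque<double[]> samples = new ArrayDeque<>(); // {timeSeconds, bytesRemaining}

    public RegressionEstimator(int maxSamples) {
        this.maxSamples = maxSamples;
    }

    public void addSample(double timeSeconds, double bytesRemaining) {
        samples.addLast(new double[] { timeSeconds, bytesRemaining });
        if (samples.size() > maxSamples) {
            samples.removeFirst();   // keep only the most recent N readings
        }
    }

    // Returns estimated seconds until bytesRemaining hits zero, or -1 if there is no usable trend.
    public double secondsRemaining(double nowSeconds) {
        int n = samples.size();
        if (n < 2) {
            return -1;
        }
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (double[] s : samples) {
            sumX += s[0];
            sumY += s[1];
            sumXY += s[0] * s[1];
            sumXX += s[0] * s[0];
        }
        double denom = n * sumXX - sumX * sumX;
        if (denom == 0) {
            return -1;               // all samples at the same instant
        }
        double slope = (n * sumXY - sumX * sumY) / denom;   // least-squares fit
        double intercept = (sumY - slope * sumX) / n;
        if (slope >= 0) {
            return -1;               // remaining bytes are not shrinking
        }
        double zeroCrossing = -intercept / slope;           // time at which the line hits zero
        return Math.max(0, zeroCrossing - nowSeconds);
    }
}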
Start a global timer that fires, say, every 1000 milliseconds and updates a total elapsed time counter. Let's call this variable "elapsedTime"
While the file is being copied, update some local variable with the amount already copied. Let's call this variable "totalCopied"
In the timer event that is periodically raised, divide totalCopied by elapsedTime to give the number of bytes copied per timer interval (in this case, 1000 ms, i.e. per second). Let's call this variable "bytesPerSec"
Divide the total file size by bytesPerSec to obtain the total number of seconds theoretically required to copy this file. Let's call this variable "totalTime"
Subtract elapsedTime from totalTime and you have a somewhat accurate estimate of the remaining copy time.
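Put together, those steps amount to something like this sketch (variable names follow the ones above; totalSize is assumed for the full file size):

public class CopyEstimate {
    // Estimated seconds remaining, following the steps above; returns -1 if nothing copied yet.
    static double remainingSeconds(long totalSize, long totalCopied, double elapsedTime) {
        if (totalCopied <= 0 || elapsedTime <= 0) {
            return -1;                                   // no data yet
        }
        double bytesPerSec = totalCopied / elapsedTime;  // average throughput so far
        double totalTime = totalSize / bytesPerSec;      // theoretical total duration
        return totalTime - elapsedTime;                  // what is still ahead of us
    }
}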
I think dialogs should just admit their limitations. It's not annoying because it's failing to give a useful time estimate, it's annoying because it's authoritatively offering an estimate that's obvious nonsense.
So, estimate however you like, based on current rate or average rate so far, rolling averages discarding outliers, or whatever. Depends on the operation and the typical durations of events which delay it, so you might have different algorithms when you know the file copy involves a network drive. But until your estimate has been fairly consistent for a period of time equal to the lesser of 30 seconds or 10% of the estimated time, display "oh dear, there seems to be some kind of holdup" when it's massively slowed, or just ignore it if it's massively sped up.
For example, dialog messages taken at 1-second intervals when a connection briefly stalls:
remaining: 60 seconds // estimate is 60 seconds
remaining: 59 seconds // estimate is 59 seconds
remaining: delayed [was 59 seconds] // estimate is 12 hours
remaining: delayed [was 59 seconds] // estimate is infinity
remaining: delayed [was 59 seconds] // got data: estimate is 59 seconds
// six seconds later
remaining: 53 seconds // estimate is 53 seconds
Most of all I would never display seconds (only hours and minutes). I think it's really frustrating when you sit there and wait for a minute while the timer jumps between 10 and 20 seconds. And always display real information like: xxx/yyyy MB copied.
I would also include something like this:
if timeLeft > 5h --> Inform user that this might not work properly
if timeLeft > 10h --> Inform user that there might be better ways to move the file
if timeLeft > 24h --> Abort and check for problems
I would also inform the user if the estimated time varies too much.
And if it's not too complicated, there should be an auto-check function that checks if the process is still alive and working properly every 1-10 minutes (depending on the application).
Speaking of network file copies, the best thing is to take into account the file size to be transferred, the network response time, etc. An approach I used once was:
Connection speed: ping and measure the round-trip time for 15 KB packets.
Take the file size and work out, theoretically, how long it would take if it were broken into 15 KB packets sent at that connection speed.
Recalculate the connection speed after the transfer has started and adjust the estimated time accordingly.
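A rough sketch of that first, packet-based estimate (purely illustrative; it assumes the transfer behaves like sending 15 KB packets back to back, one round trip each):

public class NetworkEstimate {
    static double firstEstimateSeconds(long fileSizeBytes, double roundTripSecondsPer15Kb) {
        double packets = Math.ceil(fileSizeBytes / (15.0 * 1024));  // number of 15 KB packets
        return packets * roundTripSecondsPer15Kb;                   // serial-transfer estimate
    }
}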
I've been pondering this one myself. I have a copy routine - via a Windows Explorer-style interface - which allows the transfer of selected files from an Android device to a PC.
At the start, I know the total size of the file(s) to be copied, and as I am using C#.NET, I am using a Stopwatch to get the elapsed time; while the copy is in progress, I keep a running total of what has been copied so far, in bytes.
I haven't actually tested it yet, but the best way seems to be this -
estimated = elapsed * ((totalSize - copiedSoFar) / copiedSoFar)
I never saw it the way you guys are explaining it - by transferred bytes and total bytes.
The "experience" always made a lot more sense (not better or more accurate) if you assume it instead uses the bytes of each file and the file count. That would explain how the estimate swings so wildly.
If you are transferring large files first, the estimate runs long, even with the connection steady. It is as if it naively assumes that all files are the average size of those transferred so far, and then guesses that this average file size will hold for the entire job.
This, and the other ways, all get worse when the connection 'speed' varies...