I'm writing a script that will, given parameters for start/end datetimes, find the first mutually available timeslot on two different calendars in order to book a meeting.
The script I have so far runs really slowly. I suspect this is because I'm looping through both calendars in 30-minute increments and calling CalendarApp.getCalendarById(email).getEvents() each time to check whether a free 30-minute timeslot exists.
I've thought about making a single batch call to .getEvents() to minimize the number of reads, but I get stuck there because the result is an array of busy timeslots, whereas I'm trying to find free timeslots.
Is there a better way to approach this to make my script run faster?
Find Open Time Slots
I've done something similar to this recently, and what I did was create an object similar to this:
var timeSlotsObj={"8:00-8:30":0,"8:30-9:00":0,"9:00-9:30":0,"9:30-10:00":0,...."7:30-8:00":0,slotA:["8:00-8:30","8:30-9:00",...]}
Then I went through each calendar and, for each event on the given day, incremented the value of every time slot that the event overlapped. After that I looped through the slotA array looking for any time slot that still had 0 in it.
The loop looks like the following:
for(var i=0;i<timeSlotsObj.slotA.length;i++){
  if(timeSlotsObj[timeSlotsObj.slotA[i]]==0){
    //You just found an empty time slot, and its key is timeSlotsObj.slotA[i]
  }
}
Any object property that still has 0 as its value is an open time slot across the given set of calendars on that day.
Javascript Object Reference
In my case I actually used Date Objects as the Object Properties or Keys but the idea is the same. Whatever slots have no events in them are free slots.
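To make the counting idea concrete, here is a minimal sketch in plain JavaScript, with a hard-coded array of busy intervals standing in for the merged getEvents() results of both calendars (the dates, slot count, and function name are illustrative):

```javascript
// Build half-hour slots for a day, then mark each slot that any busy
// event (from either calendar) overlaps; slots left at 0 are free.
function findFreeSlots(dayStart, slotCount, busyEvents) {
  var HALF_HOUR = 30 * 60 * 1000;
  var timeSlotsObj = { slotA: [] };
  for (var i = 0; i < slotCount; i++) {
    var key = new Date(dayStart.getTime() + i * HALF_HOUR).toISOString();
    timeSlotsObj[key] = 0;
    timeSlotsObj.slotA.push(key);
  }
  busyEvents.forEach(function (ev) {
    timeSlotsObj.slotA.forEach(function (key) {
      var slotStart = new Date(key).getTime();
      var slotEnd = slotStart + HALF_HOUR;
      // an event overlaps a slot if it starts before the slot ends
      // and ends after the slot starts
      if (ev.start.getTime() < slotEnd && ev.end.getTime() > slotStart) {
        timeSlotsObj[key]++;
      }
    });
  });
  return timeSlotsObj.slotA.filter(function (key) {
    return timeSlotsObj[key] === 0;
  });
}

// Example: an 8:00-10:00 window; one meeting 8:30-9:00 on calendar 1
// and one 9:00-9:30 on calendar 2 leave 8:00-8:30 and 9:30-10:00 free.
var dayStart = new Date('2024-01-08T08:00:00Z');
var busy = [
  { start: new Date('2024-01-08T08:30:00Z'), end: new Date('2024-01-08T09:00:00Z') },
  { start: new Date('2024-01-08T09:00:00Z'), end: new Date('2024-01-08T09:30:00Z') }
];
var free = findFreeSlots(dayStart, 4, busy);
```

In Apps Script you would populate `busy` from one getEvents() call per calendar, so the total number of calendar reads stays at two regardless of how many slots you check.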
I have a firestore database that looks like this
/entries/ ....
/users/{userid}...
A bunch of documents is being sent into ... of entries, and userid contains 8 docs of user profile information.
My problem is that the entries doc contains the field hours but has no relation to the user doc, which contains the field weekly_capacity.
I need to aggregate the two fields hours and weekly_capacity into a full-time-equivalency variable.
But the full-time equivalency needs to be accurate, and at this company FTE can change, so it would need to calculate the FTE over a range of dates even if the user changed their FTE status x number of times.
And the current app only fetches the entries when the user logs into the app, which can be whenever.
None of the API requests I am using will give me a JSON that holds both weekly_capacity and hours in the same fetch. If Firestore fetches all entries over HTTP every time a user logs into the app, how can I compare the hours field on the collection's entries to the weekly_capacity field?
Just a little context: FTE = full-time equivalency, a standard measure of how an employee's actual hours compare to the core hours they committed to, which is 40. So if I agreed to work 40 hours and I actually work 40 hours, I am 1 whole FTE. If I worked 20 but was supposed to work 40, I am .5 FTE. The math is really simple; it's just that in my situation the FTE variable can change at any time, and the app will let the user enter a range of dates, fetch the total actual hours they worked and their FTE, and tell them how many hours they were supposed to work vs. how many they actually worked. Since the variable changes, I need some way in Firestore to track the changes and aggregate them correctly against the hours actually worked. A concrete example: say I changed my FTE from 1 to .7 on March 20th; I then want to generate a report for March 1st to March 30th showing my hours worked and my FTE status, i.e. whether I reached my goal. The kicker is that I can't fetch or merge the entries, which hold the hours variable, and /users/, which holds the weekly_capacity variable.
I don't even think a Cloud Function would solve the problem, since entries are only fetched when the user logs in, right?
I'm assuming the following in answering your question.
Requirement: calculate FTE for a user when the user's weekly_capacity is updated or the user logs in.
Problems:
Some way in Firestore to track the change.
Calculating FTE correctly according to the change.
Here's what I think will solve the problems.
Google Cloud Firestore supports listeners on the collections in which you store the data. So you can listen for any change in the users collection and the entries collection. This is how you can track the change.
To calculate FTE when the weekly_capacity of a user document changes or a new document is added to the entries collection, query both collections separately to get the records corresponding to the affected user. You could also use a collection group query for this purpose, but whether that helps depends on your database design.
Hope that helps.
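Once the capacity history is available, the aggregation itself is a small calculation. Below is a minimal sketch in plain JavaScript under the assumption that each weekly_capacity change is stored with an effective date (for example in a subcollection) rather than overwriting a single field; all names and values here are illustrative, not your actual schema:

```javascript
// Dates are ISO strings, so they compare correctly as plain strings.
function capacityAt(changes, date) {
  // changes are sorted by effective date; take the latest one <= date
  var current = changes[0].weeklyCapacity;
  changes.forEach(function (c) {
    if (c.from <= date) current = c.weeklyCapacity;
  });
  return current;
}

// FTE over a range = actual hours / expected hours, where the expected
// daily hours are weekly_capacity / 5 at each entry's date.
function fteForRange(changes, entries) {
  var actual = 0, expected = 0;
  entries.forEach(function (e) {
    actual += e.hours;
    expected += capacityAt(changes, e.date) / 5;
  });
  return actual / expected;
}

// Example: capacity drops from 40 (1.0 FTE) to 28 (0.7 FTE) on March 20th.
var changes = [
  { from: '2024-03-01', weeklyCapacity: 40 },
  { from: '2024-03-20', weeklyCapacity: 28 }
];
var entries = [
  { date: '2024-03-05', hours: 8 },   // 8 hours expected that day
  { date: '2024-03-21', hours: 5.6 }  // 5.6 hours expected after the change
];
var fte = fteForRange(changes, entries);
```

This way, a user who always works exactly their current expected hours comes out at FTE 1.0 even across a mid-range capacity change.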
We are building a warehouse stock management system and have a stock movements table that records stock into, through and out of the system, for each product and each location it is stored. i.e.
10 units of Product A is received into Location A
10 units Product A are moved to Location B and removed from Location A.
1 unit is removed (sold) from Location B
... and so on.
This means that to work out how much of each product is stored in each location, we would run:
SELECT location, product, SUM(qty) FROM stock_movements GROUP BY location, product
(we actually use Eloquent, but I have used SQL as an example)
Over time, this will mean our stock movements table grows to millions of rows, and I am wondering how best to manage this. The options I can think of:
Sum the rows as grouped above and accept that it may get slow over time. I'm not sure how many rows it will take before it actually starts to cause performance issues. When requesting a whole inventory log via our API, every row would have to be summed for every product, so this compiles into a fairly large calculation.
Create a snapshot of the summed rows every day/week/month etc. on a cron and then just add the sum of the most recent rows on the fly.
Create a separate table with a live stock level which is added to and subtracted with every stock movement. The stock movements table shows an entire history of all movements while the new table just shows the live amounts. We would use database transactions here to ensure they keep in sync.
Is there a defined and best practice way to handle this kind of thing already? Would love to hear your thoughts!
The good news is that your system is already where a lot of people say the database world should be moving: event sourcing. ES just stores every event against an object (in this case your location), and to get the current state you start with an empty object and replay all of that object's events.
Of course, this can be time-consuming, and your last two bullet points are the standard ways of dealing with it. First, you can create regular snapshots with the current-as-of-then totals for that location, and then when someone asks for the current-as-of-now totals you only need to replay events since the last snapshot. Second, you can have a separate table of current values, and whenever you insert a record into your event store you also update the current value. If they ever get out-of-sync, you can always start fresh and replay the entire event series again.
Both of these scenarios are typically managed through an intermediary queue service, like SQL Server's Service Broker, RabbitMQ, or Amazon's SQS: instead of inserting an event directly into your event store, you send the change into a queue, and the code that processes the queue updates your snapshot.
Good luck!
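The snapshot-plus-replay idea can be sketched in a few lines. This is a plain JavaScript illustration with an invented event shape, not your actual schema:

```javascript
// Events are stock movements; a snapshot stores totals as of some point
// in the event stream, so the current level only needs the events
// recorded since that snapshot.
function applyEvent(levels, ev) {
  var key = ev.location + '/' + ev.product;
  levels[key] = (levels[key] || 0) + ev.qty;
  return levels;
}

function currentLevels(snapshot, eventsSinceSnapshot) {
  // start from a copy of the snapshot, then replay only the newer events
  var levels = Object.assign({}, snapshot);
  eventsSinceSnapshot.forEach(function (ev) { applyEvent(levels, ev); });
  return levels;
}

// Example mirroring the question: receive 10 of Product A into Location A,
// move all 10 from A to B, then sell 1 from B.
var events = [
  { location: 'A', product: 'ProdA', qty: 10 },
  { location: 'A', product: 'ProdA', qty: -10 },
  { location: 'B', product: 'ProdA', qty: 10 },
  { location: 'B', product: 'ProdA', qty: -1 }
];
// pretend the first two events were already folded into a snapshot
var snapshot = events.slice(0, 2).reduce(applyEvent, {});
var levels = currentLevels(snapshot, events.slice(2));
```

The same applyEvent fold also rebuilds everything from scratch if a snapshot or live-totals table ever drifts out of sync, which is what makes the event store the source of truth.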
I am trying to reduce the number of events I get from a query on a MySQL table that has a lot of events stored in it. There is roughly one event each minute; each event has a datetime and some other sensor readings. I would like to reduce the amount of data so that I'm only getting one reading every hour or so.
I realise I can do something like:
IncomingData.objects.filter(utctime__range=('2016-10-07', '2016-10-14'))[::60]
This will give me one event an hour (assuming they are ordered by time?), but it is still fetching 60 events per hour from the database.
Potentially I might want to read a bigger date range with fewer events, for instance one event a day over an entire year, and this method wouldn't work because it would be reading millions of unnecessary rows.
I have seen some solutions using ROWNUM but I want to keep away from raw sql if possible (e.g. https://dba.stackexchange.com/a/56389)
I have also tried the following which I would have thought would return the first event each hour but it returns an empty queryset:
IncomingHcData.objects.filter(utctime__minute=0)
It outputs the following SQL as the generated query:
SELECT
"incoming_hc_data"."uid",
"incoming_hc_data"."utctime",
"incoming_hc_data"."temp_1",
"incoming_hc_data"."temp_2",
"incoming_hc_data"."temp_3"
FROM "incoming_hc_data"
WHERE django_datetime_extract('minute', "incoming_hc_data"."utctime", GMT) = 0
Use the extra function, which injects the condition into the WHERE clause as raw SQL. (Your __minute lookup probably returns an empty queryset because, with USE_TZ enabled, Django's datetime lookups on MySQL depend on the MySQL time zone tables being loaded; the raw minute() call sidesteps that conversion.)
IncomingData.objects.extra(where=['minute(utctime)=0'])
I am an admin for a Google Apps for Business domain and we want to be able to run a report to tell us what groups have been created in the last week. There is no such "Date Created" column for the groups. The best I have been able to do so far is run a list of the groups on a weekly basis but I want to be able to automate comparing that to the list from the week before.
You might as well store the list you have gotten in 'permanent storage' (a spreadsheet, ScriptDB, or Script Properties) and do a comparison every week to see if something has been added or removed... This is maybe less straightforward and elegant, but it might be simpler to get working.
The weekly triggered function could do this :
get the list of names
sort it
write the sorted list to spreadsheet
retrieve the sorted list from last week by reading the preceding row in the spreadsheet
compare both sorted lists at array level
and send yourself a mail with the difference (optionally writing the log to the spreadsheet)
This is certainly possible but requires a bit of coding.
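The compare step above can be sketched like this in plain JavaScript (the same code runs in Apps Script); the group addresses are placeholders:

```javascript
// Given last week's list of group emails and this week's list, report
// which groups were added and which were removed.
function diffLists(lastWeek, thisWeek) {
  return {
    added: thisWeek.filter(function (g) { return lastWeek.indexOf(g) === -1; }),
    removed: lastWeek.filter(function (g) { return thisWeek.indexOf(g) === -1; })
  };
}

var lastWeek = ['eng@example.com', 'sales@example.com'];
var thisWeek = ['eng@example.com', 'hr@example.com'];
var diff = diffLists(lastWeek, thisWeek);
```

In the triggered function, `diff.added` is what you would mail yourself as "groups created in the last week" (and `diff.removed` catches deletions for free).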
You'll have to use the Audit API for this. See this response for some starter code on how to make basic calls to the API. The one tricky part is setting up OAuth 2, but it's very doable after that.
Once you have the setup working you can then add additional startTime and endTime parameters to define your week interval along with the CREATE_GROUP event filter in the URL.
I'm quite new to VB and I'm working on a project to record the details of employees clocking in and clocking out. I want it so that when the 'clock in' button is clicked the time starts being recorded, and when the 'clock out' button is pressed the recording stops. Also, once 'clock out' is clicked, the hours between clock-in and clock-out should be calculated and stored in a MySQL database.
This information will be outputted onto a DataGrid showing the time and date of when the employee has clocked in.
Then the number of hours will be multiplied by a pre-written hourly wage, which is already stored in one of the tables in my MySQL database.
Any help would be appreciated.
You should store the event instead of the result.
Store a row for the clock-in time as well as a row for the clock-out time.
Then you will need a procedure either on your database or in the application that will iterate over the rows and match clock-ins to clock-outs.
This approach will let the application crash/terminate and restart without losing data.
Alternatively, you could put the clock-in and clock-out in the same record (different columns), and just insert the clock-out into the first row that matches the employee and has a null clock-out.
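As a rough illustration of that single-record variant, here is the clock-out logic in plain JavaScript, with an in-memory array standing in for the table (the record shape and function name are made up for the example):

```javascript
// On clock-out, find the employee's open record (null clockOut), stamp
// it with the clock-out time, and compute the hours worked.
function clockOut(records, employeeId, outTime) {
  // first record for this employee with no clock-out yet
  var open = records.find(function (r) {
    return r.employeeId === employeeId && r.clockOut === null;
  });
  if (!open) return null; // nothing to close; handle as you see fit
  open.clockOut = outTime;
  open.hours = (outTime - open.clockIn) / (1000 * 60 * 60);
  return open;
}

var records = [
  { employeeId: 7, clockIn: new Date('2024-01-08T09:00:00Z'), clockOut: null, hours: null }
];
var closed = clockOut(records, 7, new Date('2024-01-08T17:30:00Z'));
```

In the real application the array lookup becomes an UPDATE against the row WHERE employee_id matches AND clock_out IS NULL, and the hours column (or the wage multiplication) can be computed in SQL at the same time.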
I would have the clock-in button fire an event in the program that creates a record for the employee ID, which I'm assuming you have at that time.
Then, once the clock-out button is clicked, you would fire an event that goes out to your database and pulls in the first record it finds with the employee ID you are looking for, a valid clock-in time, and a null clock-out time. If the program doesn't find a record matching all that criteria, you would have to handle that however you want. (I would do the lookup when the employee logs in, and only allow them to use the clock-in button if no open record is present, and only the clock-out button if an open record is found for their ID.)
Once you have that record in memory you should set the clock out time and calculate the difference using the clock in time that was written to the database earlier.
I would use a stored procedure in the database to handle adding/updating/managing the record, and do all the calculations and whatever else you want at clock-in/out inside the program itself. But I think it's all just preference as far as where the actual processing takes place.
The most obvious reason for this is that the program can be shut down between clock-ins and clock-outs without losing anything at all. If you try to keep track of it all in memory, you will lose all your clock-ins once the program is shut down for whatever reason (closed manually, 'End Task'ed through Task Manager, or an unhandled error).