Is it possible to restrict the amount of time the user code runs in JRuby?
Not sure how you're embedding the Ruby code, but you could easily have a background timer thread that evaluates
raise YourTimeLimitExceededException, 'no more time'
after a period of time. You'd also want to disable Thread.new and Thread.start (as well as java.lang.Thread.new) for the user code, or they could easily circumvent your timer.
Cannot find a clean way to set Stackdriver alert notifications on errors in cloud functions
I am using a Cloud Function to process data into Cloud Datastore. There are 2 types of errors that I want to be alerted on:
Technical exceptions which might cause function to 'crash'
Custom errors that we are logging from the cloud function
I have done the below:
Created a log metric searching for specific errors (although this will not work for 'crash' as the error message can be different each time)
Created an alert for this metric in Stackdriver monitoring with parameters as in below code section
This is done as per the answer to the question,
how to create alert per error in stackdriver
For the first trigger of the condition I receive an email. However, on subsequent triggers, let's say on the next day, I don't. Also, the incident stays in the 'opened' state.
Resource type: cloud function
Metric: from point 2 above
Aggregation: Aligner: count, Reducer: none, Alignment period: 1m
Configuration: Condition triggers if: Any time series violates, Condition: is above, Threshold: 0.001, For: 1 min
So I have 3 questions:
Is this the right way to satisfy my requirement of creating alerts?
How can I still receive alert notifications for subsequent errors?
How can I set the incident to 'resolved', either automatically or manually?
I was having a similar problem and managed to at least get a mail every time. The "trick" seems to be to use sum instead of count, in combination with "for: most recent value".
This causes Stackdriver to send a mail every time a matching log entry is found, and to close the incident a minute later.
Normally, alerts resolve themselves once the alerting policy stops firing. The problem you're having with your alerts not resolving is because your metric only writes non-zero points - if there are no errors, it doesn't write zero. That means that the policy never gets an unambiguous signal that everything is fine, so the alerts just sit there (they'll automatically close after 7 days, but I imagine that's not all that useful for you).
This is a common problem and it's a tricky one to solve. One possibility is to write your policy as a ratio of errors to something non-zero, like request count. As long as the request count is non-zero, the ratio will compute zero if there are no errors, and so an alert on the ratio will automatically resolve. You need to be a bit careful about rounding errors, though - if your request count is high enough, you might potentially miss a single error because the ratio could round to zero.
Aaron Sher, Stackdriver engineer
We got around this issue by having the insertId as a label of the log-based metric we created for every log record we get from the pods running our services.
In the alerting policy, this label helped in two things:
We grouped by it (named as record_id), which made each incident unique, so it got reported without waiting for other incidents to be resolved, and at the same time it got resolved instantly.
We used it in the documentation of the notification to include a direct link to the issue (log record) itself which was a nice and essential feature to have. https://console.cloud.google.com/logs/viewer?project=MY_PROJECT&advancedFilter=insertId%3D%22${metric.label.record_id}%22
As @Aaron Sher mentioned in his answer, it is a tricky problem. We might have done something not recommended or not efficient, but it works fine and of course we are open to improvement recommendations.
Hi guys, I am constructing a task distribution management system for my team, to which I want to add the following functionality:
When I create a task, I will have an option to choose "how long is this task valid for being taken". For example, when creating the task I put "2 hours" in the <input id="valid-for">, then this task will only be displayed on the dashboard for 2 hours from the time it was created, and after 2 hours it gets "display: none".
I've searched the web for the mechanism of achieving this feature but I didn't get a satisfying answer, probably because I don't know the right terminology to google. I tried to use AJAX and a TIMESTAMP column in MySQL but didn't know how to proceed. Could anybody tell me how to achieve this feature with MySQL, jQuery or any other technique that could fulfill it? No code necessary, I just need some explanation.
Thanks guys!
Without knowing any more details, here is how I would consider writing the code:
In the database, have a start time and a use-by time.
In your browser page, you can run a script periodically, say every minute (this is called polling). In this case, you can use Ajax to call back to the server for updates.
At the server end, check for new tasks as well as expired tasks. Then send the results back to the Ajax caller.
Back at the browser, update the dashboard accordingly.
I would be inclined to remove the task on the browser rather than simply hide it.
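As a rough sketch of the polling side in jQuery (the /tasks.php endpoint, the JSON shape and the one-minute interval are my own illustrative assumptions, not from the question):

// Poll the server once a minute for tasks that have not yet expired.
// The server-side handler would return only rows whose use-by time is
// still in the future, e.g. SELECT ... FROM tasks WHERE valid_until > NOW().
function refreshTasks() {
  $.getJSON('/tasks.php', function (tasks) {
    var $dashboard = $('#dashboard').empty(); // rebuild the dashboard each time
    $.each(tasks, function (i, task) {
      $dashboard.append($('<div class="task">').text(task.title));
    });
  });
}
setInterval(refreshTasks, 60 * 1000); // poll every minute
refreshTasks();                       // and once on page load

Because the list is rebuilt from the server's answer each time, expired tasks simply disappear instead of being hidden with "display: none".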
I have written a standard Google Apps Script web app with server code and an HTML interface. The web interface prompts the user for a month and year, then creates a Google Doc (by copying a template document) for every day in the month selected.
I have got the code working, but I would like to provide some sort of status as the code is running. It takes about 4 minutes. I put out a message before calling the server code and a message when it completes, but I would like to provide progress updates, e.g. "Docs created 1", "Docs created 2" etc.
I guess what I am really asking is: can Google Apps Script server code update something on the web page while it is running?
Thanks.
Will
No, it cannot (AFAIK).
But since it is taking 4 minutes (which is dangerously close to the 6 minute limit), I suggest you create only one document per call to your server function, then return to the client side. The client side will then update the status on the screen and call your server-side function again, which will create the second document, return, and so on until you finish.
By doing this you not only update your client side but also avoid getting close to the maximum execution limit. The downside, of course, is that you'll add some seconds to your total execution time. But for long tasks I don't see much of a problem; actually, informing the user of your script's progress will make it feel faster in comparison.
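A minimal sketch of that pattern, assuming a server-side function called createDocForDay and a status element in the HTML page; the names, template lookup and success-handler chaining are illustrative, not taken from the question:

// Server side (Code.gs): create a single document per call.
var TEMPLATE_ID = 'your-template-doc-id'; // placeholder
function createDocForDay(year, month, day) {
  var template = DriveApp.getFileById(TEMPLATE_ID);
  template.makeCopy('Daily doc ' + year + '-' + month + '-' + day);
  return day; // tell the client which day has just been created
}

// Client side (in the HTML file): create the documents one at a time,
// updating the status line between calls.
function createAll(year, month, daysInMonth) {
  var current = 1;
  function next() {
    if (current > daysInMonth) {
      document.getElementById('status').textContent = 'Done';
      return;
    }
    google.script.run
      .withSuccessHandler(function (day) {
        document.getElementById('status').textContent = 'Docs created: ' + day;
        current++;
        next(); // kick off the next server call
      })
      .createDocForDay(year, month, current);
  }
  next();
}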
Is there any way of catching this GAS error: "Exceeded maximum execution time"?
I mean catching it with try ... catch(e); so far it's not working for me.
Thanks
As written in the comments to your question, that's not possible. However, you can set a flag in ScriptDb or in the script properties when execution starts and clear that flag when execution comes to a normal end, so you can find out during the next run whether your script came to a regular end the last time it was run, and try to take corrective action if not.
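A rough sketch of that flag idea using the Properties service (the property name and the helper function are arbitrary, illustrative choices):

function myLongRunningTask() {
  var props = PropertiesService.getScriptProperties();
  if (props.getProperty('lastRunFinished') === 'false') {
    // The previous run never cleared the flag, so it probably hit the limit.
    // Take corrective action here (clean up, resume from a checkpoint, ...).
  }
  props.setProperty('lastRunFinished', 'false');
  doTheActualWork(); // hypothetical helper: whatever the script normally does
  props.setProperty('lastRunFinished', 'true'); // only reached on a normal end
}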
The answer above is correct; it's not possible. An easy alternative to the workaround that pbhd mentioned would be to simply track the runtime of your script (e.g. comparing results of new Date().getTime() at regular intervals) and run whatever you'd include under your catch statement right before you hit the maximum execution time. The maximum is 6 minutes (reference).
That way, you don't have to catch the error -- you can preempt it.
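A minimal sketch of that self-timing idea (the 5-minute budget and the helper functions are illustrative assumptions):

var MAX_RUNTIME_MS = 5 * 60 * 1000; // stop well before the 6-minute cap

function processItems(items) {
  var start = new Date().getTime();
  for (var i = 0; i < items.length; i++) {
    if (new Date().getTime() - start > MAX_RUNTIME_MS) {
      saveProgress(i); // hypothetical helper: remember where to resume
      return;          // exit cleanly instead of being killed mid-loop
    }
    handleItem(items[i]); // hypothetical helper: the actual work
  }
}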
During normal testing, it is possible to accidentally create an infinite (or very long running) loop that consumes 100% of the daily execution time limit.
Even if you realize what you have done wrong immediately, you cannot re-try with Google scripts for another 24 hours, thus slowing down ongoing development significantly and maybe forcing the developer to do some other work, taking his focus/"attention stream" away from the current problem. This is almost always bad.
My product ("IBM OLIVER CICS test/debug" - see Wikipedia article) solved this problem - and many others - around 37 years ago - by having a time limit on any particular transaction and intercepting the resulting time out, allowing options of:-
continuation or
examine/modify variables
"manual" re-try (for the same time) or
abort.
Google could implement this approach just as easily, by "pausing" if execution time is looking too heavy. I had a similar solution for other resources in OLIVER, such as excessive API calls ("possible macro loop") and excessive memory usage.
It seems it takes an "old timer" like me to provide solutions to problems that have existed "since the beginning of time" (and certainly before PC's were thought of).
Google's current "solution" (i.e. absolute limits) only helps Google keep its own servers from being swamped. It would be easy for them to do what OLIVER did all those years ago. By the way, there should be no "IBM" prefix on the Wikipedia article - it was my own product and some clown of a Wikipedia editor altered it to include the prefix.
(By the way, Google does not prevent other scripts on the same spreadsheet from running - ones that perhaps only use minimal amounts of extra time - i.e. scripts on the same spreadsheet still work. I tried renaming the original script as an experiment, but it was stopped after a very short time with an "exceeded execution time" error.)
GIZ-A-JOB Google - you know it's worth it!
Okay, so I have this small procedural SVG editor in Clojure.
It has a code pane where the user creates code that generates an SVG document, and a preview pane. The preview pane is updated whenever the code changes.
Right now, on a text change event, the code gets recompiled on the UI thread (Ewwwww!) and the preview pane updated. The compilation step should instead happen asynchronously, and agents seem a good answer to that problem: ask an agent to recompile the code on an update, and pass the result to the image pane.
I have not yet used agents, and I do not know whether they work with an implicit queue, but I suspect so. In my case, I have zero interest in computing "intermediate" steps (think about fast keystrokes: if a keystroke happens before a recompilation has been started, I simply want to discard the recompilation) -- ie I want a send to overwrite any pending agent computation.
How do I make that happen? Any hints? Or even a code sample? Is my rambling even making sense?
Thanks!
You describe a problem that has more to do with execution flow control than with shared state management. Hence, you might want to set STM aside for a moment and look into futures: they're still executed in a thread pool, like agents, but unlike agents they can be stopped by calling future-cancel, and you can inspect their status with future-cancelled?.
There are no strong guarantees that the thread the future is executing on can be effectively stopped. Still, your code will be able to try to cancel the future and move on to schedule the next recompilation.
Agents do indeed work on a queue: each sent function gets the current state of the agent and produces the next state of the agent. Agents track an identity over time. This sounds like a little more than you need; an atom is a slightly better fit for your task and is used in a very similar manner.