I'd like to know if I can buffer an execution error in Tricentis Tosca.
I am creating automated tests on Google Chrome, and I would like to buffer the errors that Tosca reports when something goes wrong (that log would be used in the recovery/cleanup scenario).
I have deployed a storage-triggered Cloud Function that needs more memory. I deployed the function in the following manner with the appropriate flags:
gcloud functions deploy GCF_name --runtime python37 --trigger-resource bucket_name --trigger-event google.storage.object.finalize --timeout 540s --memory 8192MB
But I observed in the Google Cloud console that the memory utilization chart is not going beyond 2 GB, and in the logs I am getting this error: Function execution took 34566 ms, finished with status: 'connection error', which happens because of a memory shortage. Can I get some help with this?
Edit: The application uploads text files containing a certain number of samples to the storage bucket. Each file is read when it is uploaded and its data is appended to a pre-existing file. The total number of samples will be a maximum of 75600002; that's why I need 8 GB of memory. It gives the connection error while appending the data to the file.
def write_to_file(filename, data, write_meta=False, metadata=[]):
    # Append the samples (and, optionally, a metadata header) to a file in /tmp
    file1 = open('/tmp/' + filename, "a+")
    if write_meta:
        file1.write(":".join(metadata))
        file1.write('\n')
    file1.write(",".join(data.astype(str)))
    file1.close()
The memory utilization chart was the same after every upload.
You are writing a file to /tmp, which is an in-memory filesystem, so start by deleting that file when you are finished with it. In fact, the documentation notes:
Files that you write consume memory available to your function, and sometimes persist between invocations. Failing to explicitly delete these files may eventually lead to an out-of-memory error and a subsequent cold start.
Ref: https://cloud.google.com/functions/docs/bestpractices/tips#always_delete_temporary_files
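For example, a minimal sketch of the pattern, assuming your existing write_to_file helper and a hypothetical upload_to_bucket step that persists the result:

import os

def handle_upload(filename, data):
    tmp_path = '/tmp/' + filename
    write_to_file(filename, data)  # append the new samples (function from the question)
    upload_to_bucket(tmp_path)     # hypothetical: persist the combined file to GCS
    os.remove(tmp_path)            # free the in-memory /tmp space before returning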
Is there a way to get more detail about unsupported events in google cloud datastream?
I am running a datastream from MySQL and have a few UNSUPPORTED_EVENTS_DISCARDED and I would like to understand what these events are.
In the Logs Explorer, the detail is limited to something like the following:
message: "Discarded 1 unsupported events with reason code: MYSQL_UNKNOWN_ERROR. Latest discarded event details: An unexpected error occurred while fetching log: mysql-bin.013919, log_pos: 91832523."
event_code: "UNSUPPORTED_EVENTS_DISCARDED"
Here are some limitations regarding Datastream with MySQL:
Events have a size limitation of 3 MB
Tables that have more than 100 million rows
Not all changes to the source schema can be detected automatically
I suspect that some of the data you are fetching hits one of these limitations, which returns the error. I recommend reviewing the documentation on the limitations and making sure all the data can be fetched.
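If you want to inspect every discarded-event entry more closely, here is a hedged sketch using the google-cloud-logging client; the filter on jsonPayload.event_code is an assumption based on the log entry shown above, so adjust it to match your project's entries:

from google.cloud import logging

client = logging.Client()
# Assumed filter, modelled on the event_code field in the question's log entry.
log_filter = 'jsonPayload.event_code="UNSUPPORTED_EVENTS_DISCARDED"'
for entry in client.list_entries(filter_=log_filter):
    # Print the full structured payload, which may carry more detail
    # than the summary line shown in the Logs Explorer.
    print(entry.timestamp, entry.payload)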
I deployed a Google Cloud Function with lazy loading that loads data from Google Datastore. The last update time of my function is 7/25/18, 11:35 PM. It worked well last week.
Normally, if the function is called less than about 30 minutes after the last call, it does not need to load the data from Datastore again. But I found that the lazy loading has not been working since yesterday, even when the time between two calls is less than 1 minute.
Has anyone encountered the same problem? Thanks!
Cloud Functions can fail for several reasons, such as an uncaught exception or an internal process crash. Therefore, you should check the log files / HTTP response error messages to verify the root cause and determine whether the function is being restarted and generating function execution timeouts, which could explain why your function is not working.
I suggest you take a look at the Reporting Errors documentation, which explains how to return a function error in the recommended way, so you can validate the exact error message thrown by the service. Keep in mind that when errors are returned correctly, the function instance that returned the error is labelled as behaving normally, which avoids the cold starts that lead to higher latency, and keeps the instance available to serve future requests if need be.
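As a minimal sketch of both points, assuming an HTTP function and a hypothetical fetch_from_datastore loader: cache the data in a module-level variable so warm instances reuse it, and log and return errors instead of letting the process crash:

import logging

cached_data = None  # module-level cache; reused while the instance stays warm

def handler(request):
    global cached_data
    try:
        if cached_data is None:
            cached_data = fetch_from_datastore()  # hypothetical Datastore loader
        return 'OK'
    except Exception:
        # Logging and returning the error (rather than crashing) keeps the
        # instance labelled as healthy and available for future requests.
        logging.exception('failed to load data from Datastore')
        return 'Internal error', 500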
I am using Matlab's Image Acquisition Toolbox to acquire high-speed video over gigabit Ethernet. I'm having some trouble with frame-dropping, but that's not what this question is about. What I really want to do is tell Matlab to continue running the script even after encountering the frame-dropping error.
I used a try/catch statement for this purpose but it just doesn't work. Here is my code, sparing some of the details relating to setting up the camera and using the data:
%% setting up camera
while(1)
    % continue acquiring data forever
    while(vidObj.FramesAvailable < vidObj.FramesPerTrigger)
        % wait until we're ready to get the data
        try
            pause(.1)
        catch exception
            disp('i got an error')
        end
    end
    % get the data
    [img, t] = getdata(vidObj);
    %% do something with the data
    %% ...
end
What happens is that, every once in a while, some frames are dropped and the toolbox raises an error. This happens inside the try block, but Matlab raises an exception anyway! The output looks something like:
Error event occurred at 21:08:20 for video input object: Mono8-gige-1.
gige: Block/frame 1231 is being dropped because a lost packet is unable to be resent....
Error in script_name (line 82)
pause(.1)
You can see that the error occurs while we're waiting to collect data (the "pause" statement), which is inside the try block, and yet the exception is not caught correctly because my debugging message doesn't print and the program grinds to a halt.
How can I get Matlab to observe the try/catch structure and continue after this error happens?
I figured it out. The error message is not a true error, but more of a warning. Execution does not stop. However, vidObj stops collecting frames and my code keeps looping forever, waiting for enough frames to be collected.
You can insert a check for this condition like so:
% wait until enough frames are available
while(vidObj.FramesAvailable < vidObj.FramesPerTrigger)
    pause(.1)
    if strcmp(vidObj.Running, 'off')
        % It has stopped running, probably because frames were dropped
        start(vidObj)
    end
end
Now, upon frame dropping, the object will be restarted and acquisition continues. Obviously the dropped frames cannot be recovered, so there will be a gap in the video.
I've created a website using the HTML5 offline Application Cache and it works well in most cases, but for some users it fails. In Chrome, when the application is being cached, the progress is displayed for each file, as well as error messages if something goes wrong, like:
Application Cache Checking event
Application Cache Downloading event
...
Application Cache Progress event (7 of 521) http://localhost/HTML5App/js/main.js
...
Application Cache Error event: Failed to commit new cache to storage, would exceed quota.
I've added event listeners to window.applicationCache (error, noupdate, obsolete, etc.), but there is no information stored on the nature of the error.
Is there a way to access this information from the website using JavaScript?
I would like to determine somehow which file caused the error or what kind of error occurred.
I believe the spec doesn't say that the exact cause of the error should be included in the error event. Currently the console is your only friend.
To wit, your current "exceed quota" error is due to the fact that Chrome currently limits the storage to 5 MB. You can work around this by creating an app package that requests unlimitedStorage via the permission model. See http://code.google.com/chrome/apps/docs/developers_guide.html#live for more details.
If you want specific error messages in the "onerror" handler, raise a bug at http://crbug.com/new.