I need a watcher. In one of our main Mule applications we are getting some Not Found errors. So we implemented a new application: payloads that fail with Not Found errors in the main application are sent to it, and it retries them n times; if a payload still fails after n retries, it goes to the error queue. For the errors that land in the error queue after the retries, I need a watcher notification in Kibana. The watcher should trigger every 24 hours and notify the team via email.
I need a Kibana watcher script for the above scenario.
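Here is roughly the shape I have in mind — a minimal sketch that registers the watch through the official @elastic/elasticsearch Node.js client (a v7-style putWatch call; the index pattern error-queue-*, the @timestamp field, and the email address are placeholders to replace with your own):

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' }); // placeholder URL

// Watch: every 24h, search the error-queue index for documents from the
// last day; if any are found, email the team. The email action needs an
// account configured under xpack.notification.email in elasticsearch.yml.
async function createWatch() {
  await client.watcher.putWatch({
    id: 'error-queue-daily-watch',
    body: {
      trigger: { schedule: { interval: '24h' } },
      input: {
        search: {
          request: {
            indices: ['error-queue-*'], // placeholder index pattern
            body: {
              query: { range: { '@timestamp': { gte: 'now-24h' } } }
            }
          }
        }
      },
      condition: {
        compare: { 'ctx.payload.hits.total': { gt: 0 } }
      },
      actions: {
        notify_team: {
          email: {
            to: 'team@example.com', // placeholder address
            subject: 'Payloads failed after retries (last 24h)',
            body: { text: '{{ctx.payload.hits.total}} error(s) reached the error queue in the last 24 hours.' }
          }
        }
      }
    }
  });
}

createWatch().catch(console.error);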
I'm creating a Forge application which needs to get version information from a BIM 360 hub. Sometimes it works, but sometimes (usually after the code has already been run once this session) I get the following error:
Exception thrown: 'Autodesk.Forge.Client.ApiException' in mscorlib.dll
Additional information: Error calling GetItem: {
"fault":{
"faultstring":"Unexpected EOF at target",
"detail": {
"errorcode":"messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
}
}
}
The above error is thrown from a call to an API, such as one of these:
dynamic item = await itemApi.GetItemAsync(projectId, itemId);
dynamic folder = await folderApi.GetFolderAsync(projectId, folderId);
var folders = await projectApi.GetProjectTopFoldersAsync(hubId, projectId);
Where the APIs are initialized as follows:
ItemsApi itemApi = new ItemsApi();
itemApi.Configuration.AccessToken = Credentials.TokenInternal;
The IDs (such as 'projectId', 'itemId', etc.) don't seem to be any different between when this error is thrown and when it isn't, so I'm not sure what is causing the error.
I based my application on the .Net version of this tutorial: http://learnforge.autodesk.io/#/datamanagement/hubs/net
But I adapted it so I can retrieve multiple nodes asynchronously (for example, all of the nodes a user has access to) without changing the jstree. I did this to allow extracting information in the background without disrupting the user's workflow. The main change I made was to add another route on the server side that calls "GetTreeNodeAsync" (from the tutorial) asynchronously on the root of the tree, then calls it on each of the returned children, then on each of their children, and so on. The function waits until all of the nodes are processed using Task.WhenAll, then returns data from each of the nodes to the client.
This means that there could be many API calls running asynchronously, and there might be duplicate API calls if a node was already opened in the jstree and then its information is requested for the background extraction, or if the background extraction happens more than once. This seems to be when the error is most likely to happen.
I was wondering if anyone else has encountered this error, and whether you know what I can do to avoid it, or how to recover when it is caught. Currently, once this error occurs, every subsequent API call seems to throw it as well, and the only way I've found to fix it is to rerun the code (I use Visual Studio, so I just rerun the server and client, and my browser launches automatically).
Those are sporadic errors from our Apigee router due to latency issues in the authorization process that we are currently looking into internally.
When they occur, please cease all your upcoming requests, wait for a few minutes, and then retry. Take a look at stuff like this or this to help you out.
Our existing reports calling out similar errors seem to point to concurrency as one of the factors leading up to the issue, so you might also want to limit your concurrent requests and see if that mitigates the issue; a sketch of both patterns follows.
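The retry-with-backoff and concurrency-cap patterns are language-agnostic; here is a minimal sketch in Node.js (callWithRetry, createLimiter, and the numbers are all illustrative, not part of the Forge SDK):

// Retry a failing call, backing off for a while between attempts.
async function callWithRetry(fn, retries = 3, delayMs = 60000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Wait a few minutes before retrying, as suggested above.
      await new Promise(resolve => setTimeout(resolve, delayMs * (attempt + 1)));
    }
  }
}

// Cap the number of requests in flight with a simple queue-based limiter.
function createLimiter(max) {
  let active = 0;
  const queue = [];
  const tryNext = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => { active--; tryNext(); });
  };
  return fn => new Promise((resolve, reject) => {
    queue.push({ fn, resolve, reject });
    tryNext();
  });
}

const limit = createLimiter(5); // at most 5 concurrent calls (illustrative)
// usage: await limit(() => callWithRetry(() => someForgeCall()));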
I deployed a Google Cloud Function, with lazy loading, that loads data from Google Datastore. The last update time of my function is 7/25/18, 11:35 PM. It worked well last week.
Normally, if the function is called within about 30 minutes of the last call, it does not need to load the data from Google Datastore again. But I found that the lazy loading has not been working since yesterday, even when the time between two invocations is less than 1 minute.
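For reference, the lazy-loading pattern looks roughly like this (a simplified sketch; the kind name and cache variable are placeholders):

const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

// Module-scope cache: it survives between invocations only while this
// function instance stays warm; a fresh instance starts with null again.
let cachedData = null;

exports.myFunction = async (req, res) => {
  if (cachedData === null) {
    // Only hit Datastore when this instance has no cached copy yet.
    const query = datastore.createQuery('MyKind'); // placeholder kind
    [cachedData] = await datastore.runQuery(query);
  }
  res.send(cachedData);
};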
Has anyone met the same problem? Thanks!
Cloud Functions can fail for several reasons, such as uncaught exceptions or internal process crashes; therefore, you need to check the log files / HTTP response error messages to verify the root cause of the issue and determine whether the function is being restarted and generating function execution timeouts, which could explain why your function is not working.
I suggest you take a look at the Reporting Errors documentation, which explains how to return a function error, in order to validate the exact error message thrown by the service and return the error in the recommended way. Keep in mind that when errors are returned correctly, the function instance that returned the error is labelled as behaving normally, which avoids the cold starts that lead to higher latency and keeps the instance available to serve future requests.
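For an HTTP function in Node.js, returning the error in the recommended way looks roughly like this (a minimal sketch; loadData is a placeholder for your own Datastore call):

exports.handler = async (req, res) => {
  try {
    const data = await loadData(); // placeholder for your Datastore logic
    res.status(200).send(data);
  } catch (err) {
    // Log the error so it shows up in Cloud Logging, then return it as an
    // HTTP response instead of letting the instance crash.
    console.error(err);
    res.status(500).send({ error: err.message });
  }
};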
The SSIS package in question runs a series of stored procedures, fills 13 different Excel files with the results, and sends those Excel files to 13 different users as attachments. The package run stops with the message in the title of this question, sometimes right in the middle of sending, or, like today, on the 4th user. The files do get created, because I can see them in their directories, so only the Send Mail task is failing. When I go back to Visual Studio and execute each send task manually, it works fine; even when it still gives me the error, it sends the right file to the right person, just not through the SSIS package run in SQL Server. I tried delaying the SMTP processes, thinking that might be in the way (up to 660000 milliseconds), but it did not help. Has this happened to anybody? Thanks for all your answers in advance.
Here is the full message for a task that sent the e-mail with the attachment, regardless of the error, when the task was manually executed:
[Send Mail Task] Error: An error occurred with the following error message: "The operation has timed out.".
Progress: The SendMail task is completed. - 100 percent complete
Task Send Mail Task for Inventory Reports 038 failed
Finished, 12:03:03 PM, Elapsed time: 00:00:00.655
I think I figured out why this was happening. In case anybody is interested, here is what I think happened.
I was trying to extend the timeout period through a script task, playing with the Threading.Thread.Sleep value, but I neglected to do the same in my SMTP connection properties. When I changed the timeout value in the SMTP connection's properties, the error messages stopped coming :)
I wish I could post a picture to show you where exactly that property is located but my reputation failed me!.. :( (less than 10 points yet)
I am in the process of completing all of my changes; then I will post again with the final result, hoping that it will resolve all of my problems.
Thanks to all who showed interest.
I have written a script that updates my DB table after reading data from DB tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after the successful completion of all operations. I have also used a DB connection pool, thinking that might be making the script wait infinitely.
I want to put this script in crontab, and if it does not exit properly it will create a hell of a lot of unnecessary instances.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles, but never destroying them.
It was processing a directory and putting the data into OrientDB.
Some of the things I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left and the extra handles would have filled up the available working memory, which meant it could not continue and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
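For example, dumping the constructor names of the active handles and requests gives a quick picture of what is keeping the process alive (these are undocumented internals, so treat the output as a debugging hint only):

// Print what is still keeping the event loop busy.
process._getActiveHandles().forEach(handle => {
  console.log('handle:', handle.constructor.name);
});
process._getActiveRequests().forEach(request => {
  console.log('request:', request.constructor.name);
});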
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
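A minimal sketch of that shape (the task bodies are placeholders):

const async = require('async');

async.waterfall([
  callback => {
    // ... read from the DB / Solr, then pass the results along:
    callback(null, []); // placeholder results
  },
  (results, callback) => {
    // ... update the table using `results`, then signal completion:
    callback(null);
  }
], err => {
  // Final callback: all the asynchronous work has finished by now.
  if (err) console.error(err);
  process.exit(err ? 1 : 0);
});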
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active and not allowing the Node process to end.
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
  await app.close();
  log();
});
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can quit the execution by using:
connection.destroy();
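Assuming a driver like the mysql package (the answer doesn't say which client is in use), an open connection is exactly the kind of handle that keeps the process alive; ending or destroying it lets the event loop drain:

const mysql = require('mysql');
const connection = mysql.createConnection({
  host: 'localhost', user: 'me', database: 'mydb' // placeholder config
});

connection.query('SELECT 1', (err, rows) => {
  if (err) console.error(err);
  // end() sends a quit packet and closes gracefully;
  // destroy() closes the underlying socket immediately.
  connection.end();
});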
If you use Visual Studio Code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command.
When you invoke the command, VS Code will prompt you for the Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();
I am using an SSIS event handler to trigger an email whenever an error occurs anywhere in the package (package-level OnError). Here, the number of emails triggered is equal to the number of errors generated. How can I restrict it to one mail, even if the same error occurs 10 times?
Please suggest...
You have a few options. The problem with setting an OnError email at the package level is that it will send an email for each error the package encounters. This gets ugly if you have a failure in a deep-level transform, which will raise an error at every level as it bubbles back up to the package.
I suggest that you either:
1) Set up OnError events at the task level and remove the package-level event. Usually this will be good enough. Most tasks will only have one error to report. Be careful with Data Flows; they can act in a similar fashion to the package-level events.
2) Set up some sort of advanced logging. I've seen this done several ways. I've seen people set up Script tasks to log the errors (at the task level) to a variable, and then send a final email containing the variable in the body (at the control flow level). I have also seen people call stored procedures (at the task level and package level) for each error that occurs. The sproc would log the errors to the DB and allow the package to continue on to the next step/container. The logged errors can then be dumped into a CSV and emailed as an attachment.
If you like your current setup, you can try changing the error properties for each container/task. I haven't ever done this, but I do know you can change the way tasks handle errors. I don't like this option because you could possibly miss errors (maybe? kind of guessing).
Update, from another solution: if you want to keep your current OnError email and simply prevent certain errors from "bubbling" up and sending emails, you can follow this link to learn how to gracefully handle errors. You could prevent certain task errors from reaching your OnError event at the package level. Good luck.