I'm thinking about a problem. Suppose I have a simple application that sends files from one place to another (SFTP), and I catch the possible exceptions when this SFTP connection breaks. If there were, e.g., 20 files, the exception strategy would send 20 emails, one for each attempted file transfer. What if there are thousands of files? Is it possible to gather all these caught exceptions into one mail?
I came up with an imperfect solution: the exception strategy writes the exceptions to a file, and that file is then consumed by another flow which sends the "error mail". I don't think it's ideal. Do you have better solutions?
I've seen
https://www.appnovation.com/blog/handling-multiple-errors-mule-collection-aggregator
but it doesn't work for me.
Have you tried encapsulating the "send mail" component in a Cache scope that expires, let's say, every 10 minutes? That way you could limit the number of emails you receive.
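Not Mule-specific, but the "gather all caught exceptions into one mail" idea from the question has a simple shape. Here is a minimal sketch in Go, where transfer and sendMail are hypothetical stand-ins for the real SFTP send and mail steps:

package main

import (
    "errors"
    "fmt"
    "strings"
)

// Hypothetical stand-ins for the real SFTP send and the mail step.
func transfer(file string) error    { return errors.New("connection broken") }
func sendMail(subject, body string) { fmt.Printf("%s\n%s\n", subject, body) }

func main() {
    files := []string{"a.csv", "b.csv", "c.csv"}
    var failures []string
    for _, f := range files {
        if err := transfer(f); err != nil {
            failures = append(failures, fmt.Sprintf("%s: %v", f, err))
        }
    }
    // One summary mail for the whole batch instead of one mail per file.
    if len(failures) > 0 {
        sendMail("SFTP transfer failures", strings.Join(failures, "\n"))
    }
}

In Mule terms, that is what the collection-aggregator approach from the linked article is after; the Cache scope trick above instead trades completeness for simplicity by rate-limiting the mails.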
I have implemented a complex CSV import script in Golang.
I use a worker pool implementation for it. Inside that worker pool, workers run through thousands of small CSV files, categorizing, tagging, and branding the products.
And they all write to the same database table. So far so good.
The problem I'm facing is that if I choose more than 2 workers, the process crashes randomly (the error always occurs in the driver at packets.go:1102, as mentioned below).
The workflow is:
foreach (csv) {
    workerPool.submit(csv)
}

func worker(csv) {
    foreach (line) {
        import(line)
    }
}

import(line) {
    product = get(line)
    product.category = determine_category(product)
    product.brand = determine_brand(product)
    save(brand)
    product.tags = determine_tags(product)
    // and after all
    save(product)
}
I tried to wrap the save() calls in transactions, but it didn't help.
Now I have the following questions:
1) Is MySQL suited to concurrent writes to one table?
2) If transactions are needed to accomplish this, where should they be set?
3) Is the Go SQL driver (where the error ALWAYS happens in packets.go:1102) suited to do this?
4) Could anyone help me (maybe by hiring someone for a few hours)?
I'm completely stuck. I can also share the source code if that helps. But first I wanted to know whether you'd guess that it's my code or a general issue.
Open a new db connection in each goroutine (or thread, for languages that use threads).
MySQL's protocol is stateful, which means if multiple goroutines attempt to use the same connection, the requests and responses get very confused.
You would have the same problem trying to share any other kind of stateful protocol connection between goroutines.
For example, FTP is also a stateful protocol, and that may be easier to understand. A client goroutine might send a message like "get file x", and the response should be a series of messages containing the content of that file. If another goroutine tries to use the same connection while that request/response is in progress, both clients will be confused: the second goroutine will read packets that belong to a file it didn't request, and the first goroutine, which requested the file, will find that some packets it was expecting have already been read.
Similarly, MySQL's protocol does not support multiple client goroutines sharing a single connection.
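In Go specifically you don't usually open connections by hand: the standard database/sql package keeps a pool of connections inside a single *sql.DB handle, which is safe to share across goroutines because each query checks out its own connection from the pool. A minimal sketch, with a placeholder DSN and table:

package main

import (
    "database/sql"
    "log"
    "sync"

    _ "github.com/go-sql-driver/mysql" // the driver mentioned in the question
)

func main() {
    // One *sql.DB for the whole process; it is safe for concurrent use
    // because it hands each goroutine its own pooled connection.
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/shop")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    db.SetMaxOpenConns(10) // optional: cap the pool size

    var wg sync.WaitGroup
    for i := 0; i < 4; i++ { // four workers, all sharing the one handle
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            // Each Exec borrows its own connection from the pool.
            if _, err := db.Exec("INSERT INTO products (name) VALUES (?)", id); err != nil {
                log.Println("worker", id, ":", err)
            }
        }(i)
    }
    wg.Wait()
}

A crash inside the driver's packets.go is consistent with exactly the confusion described above, e.g. one transaction, statement, or raw connection being touched from several goroutines at once.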
Our portal has been running on Liferay 6.2 for several years. We have many services that use HTML forms (usually written with Alloy UI in FreeMarker) to let users submit requests. The server code is written in Java and uses the Liferay PortletRequest objects to read the submitted form data.
However, recently these forms suddenly stopped working.
Specifically: if the form includes a file for uploading, then the ActionRequest object does not return any of the form fields as parameters the way it usually does (request.getParameter(parameterName) returns null instead of the string value that the user entered into the form). If the user does not include any files, then it works normally.
This doesn't seem to be an issue with the forms or the Java code, as many forms whose code has not been touched in years suddenly stopped working. What's more, this stopped working partway through a day on which we didn't make any changes to the application.
I'm struggling to understand what I'm seeing in the logs. The error messages that feel most promising look like:
Caused by: org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. Stream ended unexpectedly
at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:351)
at org.apache.commons.fileupload.portlet.PortletFileUpload.parseRequest(PortletFileUpload.java:109)
at org.springframework.web.portlet.multipart.CommonsPortletMultipartResolver.parseRequest(CommonsPortletMultipartResolver.java:151)
... 208 more
But I haven't been able to find anything that seems relevant. Another type of error that might be related looks like:
10:00:09,095 WARN [http-bio-8080-exec-272][FileImpl:422] Unable to extract text from Scan4.JPG
org.apache.tika.exception.TikaException: Unexpected RuntimeException from org.apache.tika.parser.jpeg.JpegParser#3e34efc2
We've been trying to track down the issue for days now; I'm desperate and out of ideas. Can anyone think of any possible reasons why files would fail to upload?
I've got an application that's been working for a long time.
Recently we created a new app/keys for it, and it's behaving strangely.
(I did figure out the scope requirements had been put in place. I am requesting bucket:create bucket:read data:read data:write).
When I upload a file to a bucket, I've traditionally made a call to get the object details afterwards, to verify that it was successfully uploaded.
With the new key, I am intermittently getting this error:
GetObjectDetails: InternalServerError {"fault":{"faultstring":"Execution of ServiceCallout servicecallout-auth-acm-request failed. Reason: timeout occurred servicecallout-auth-acm-request","detail":{"errorcode":"steps.servicecallout.ExecutionFailed"}}}
Is this something I should be retrying with a sleep in between? Or is it indicative of something wrong with the upload?
(FYI: putting in a retry seems to have resolved this for me, but I still don't know if that's the right answer, and whether this issue might happen on other calls.)
It could be that the service requires a slight delay between a put of an object and a get, so I would suggest either a timer or a retry, as you mentioned. However, a successful response from the upload should be enough to ensure your object has been placed in the bucket, without the need to double-check.
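If you do keep the verification call, a bounded retry with a growing delay is the usual shape. A sketch in Go, where getObjectDetails is a hypothetical wrapper around the real API call:

package main

import (
    "errors"
    "fmt"
    "time"
)

// getObjectDetails is a hypothetical wrapper around the real API call.
func getObjectDetails(key string) error { return errors.New("InternalServerError") }

// withRetry runs fn up to attempts times, sleeping between tries and
// doubling the delay each time; it returns the last error on failure.
func withRetry(attempts int, delay time.Duration, fn func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = fn(); err == nil {
            return nil
        }
        time.Sleep(delay)
        delay *= 2 // simple exponential backoff
    }
    return err
}

func main() {
    err := withRetry(3, 500*time.Millisecond, func() error {
        return getObjectDetails("my-bucket/my-object")
    })
    fmt.Println(err) // nil on success, the last error otherwise
}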
The Mule documentation states that catch-exception-strategy is similar to a Java catch block. But unfortunately, the payload is consumed (the message is consumed); from the catch block the payload is lost (unlike a Java method, where you can still access the method's input parameters from a catch/finally block).
The problem with this design is that, from the catch strategy flow, it is impossible to know both the error and the last known enriched payload that was in use (and which caused the error). This complicates auditing of the data that caused the error.
If a flow has 10 message processors, it becomes tedious to identify the message processor which threw the error.
I can see 2 workarounds regarding the payload:
1) After the inbound endpoint, push the payload to a flow variable before every message processor (another disadvantage: what happens to the inbound properties and attachments?)
2) Use a rollback exception strategy with zero attempts (the transaction will be rolled back), so the original input message may be available. (Drawback: it is difficult to introspect why the error happened and on which message processor; for example, I may have 5 or 6 DB processors.)
The reason this matters is that it makes supporting an ESB application in production easier.
For example, if from the catch block we are able to pipe the payload and the exception details (linked to a single UID), then you can run a log monitoring tool, push it to a real-time dashboard for monitoring, and raise alerts. The same approach can be applied uniformly to all applications/flows, Java components, etc.
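The piping idea is language-agnostic. As a sketch of the shape (in Go, with hypothetical step names), every log line for one message carries the same UID, the failing step, and the last known payload, so the pieces can be joined later by a log monitor or dashboard:

package main

import (
    "errors"
    "fmt"
)

// step is a stand-in for one message processor in a flow.
type step struct {
    name string
    run  func(payload string) (string, error)
}

// process runs the steps in order and, on failure, emits one line
// carrying the UID, the failing step, the last known payload, and
// the error, so all of them can be correlated later.
func process(uid, payload string, steps []step) {
    for _, s := range steps {
        out, err := s.run(payload)
        if err != nil {
            fmt.Printf("uid=%s step=%s payload=%q error=%v\n",
                uid, s.name, payload, err)
            return
        }
        payload = out
    }
}

func main() {
    steps := []step{
        {"enrich", func(p string) (string, error) { return p + "+enriched", nil }},
        {"db-insert", func(p string) (string, error) { return "", errors.New("duplicate key") }},
    }
    process("3f2a-0001", "original payload", steps)
}

The point is only the shape: one UID per message, stamped on both the payload snapshot and the exception, whatever the logging backend.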
MMC is weak in this area; for example, if you want to suppress alerts from a batch job after 5 occurrences, MMC cannot do it.
My questions are:
1) Is there any reason why the payload is made unavailable? A possible workaround is to push the last known data to another variable on the message, called originalPayload or originalInboundProperties.
2) Is there any other straightforward way of piping the exception and payload to an appender (instead of these workarounds)?
Ananth Krishnan (WHISHWORKS.com)
I am using an SSIS event handler to trigger an email whenever an error occurs anywhere in the package (package-level OnError). Here the number of emails triggered equals the number of errors generated. How can I restrict it to one mail even though the same error occurred 10 times?
Please suggest.
You have a few options. The problem with setting an OnError email at the package level is that it will send an email for each error the package encounters. This gets ugly if a transform fails deep down, because the error fires again at each level as it bubbles back up to the package.
I suggest that you either:
1) Set up OnError events at the task level and remove the package-level event. Usually this will be good enough; most tasks will only have one error to report. Be careful with Data Flows, as they can act in a similar fashion to the package-level events.
2) Set up some sort of advanced logging. I've seen this done several ways. Some people set up Script Tasks to log the errors (at the task level) to a variable and then send a final email containing the variable in the body (at the control flow level); a sketch of that idea follows this list. I have also seen people call stored procedures (at the task level and package level) for each error that occurs. The sproc would log errors to the DB and allow the package to continue on to the next step/container. The logged errors can then be dumped into a CSV and emailed as an attachment.
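Not SSIS-specific, but the "log to a variable, then send one final email" idea in 2) boils down to deduplicating the collected errors before mailing, which is exactly the "one mail even though the same error occurred 10 times" requirement from the question. A sketch of that shape in Go, with made-up error messages:

package main

import "fmt"

func main() {
    // Errors collected over a run; the same message can repeat many times.
    collected := []string{
        "timeout on task A", "timeout on task A",
        "bad row in task B", "timeout on task A",
    }
    counts := map[string]int{}
    for _, msg := range collected {
        counts[msg]++
    }
    // Build one body listing each distinct error with its count.
    body := ""
    for msg, n := range counts {
        body += fmt.Sprintf("%s (x%d)\n", msg, n)
    }
    fmt.Print(body) // hand this to a single final mail step
}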
If you like your current setup, you can try changing the error properties for each container/task. I haven't ever done this, but I do know you can change the way tasks handle errors! I don't like this option because you would possibly be missing errors (maybe? kind of guessing).
Update, from another solution: if you want to keep your current package-level OnError email and simply prevent certain errors from "bubbling" up and sending emails, you can follow this link to learn how to handle errors gracefully. You could prevent certain tasks' errors from reaching your OnError event at the package level. Good luck.