WinInet FtpOpenFile timeout

I have an app that regularly uploads files using WinInet's FTP functions. It had been running for years but started failing on 4/1/2021. It fails opening a file using FtpOpenFile with a status of 12002 Internet Timeout. The call looks like this:
hiOpenFile = FtpOpenFile(
hiSiteConnect,
"TEMP.htm",
GENERIC_WRITE,
FTP_TRANSFER_TYPE_ASCII,
NULL
);
The file does get created on the server.
I'm wondering what the timeout value for this function is and if there is any way to change it.

I kept getting 12002 Internet Timeout with both FtpOpenFile and FtpGetFile, but both work now after adding the INTERNET_FLAG_PASSIVE flag to my InternetConnect call.
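For reference, here is a minimal sketch of what that connect call looks like with the flag added; the server name, credentials, and the hiOpen session handle are placeholders:
hiSiteConnect = InternetConnect(
  hiOpen,                    // session handle from InternetOpen (placeholder)
  "ftp.example.com",         // server name (placeholder)
  INTERNET_DEFAULT_FTP_PORT,
  "user",                    // user name (placeholder)
  "password",                // password (placeholder)
  INTERNET_SERVICE_FTP,
  INTERNET_FLAG_PASSIVE,     // request passive-mode FTP transfers
  0
);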
Regarding timeouts, normally you would use INTERNET_OPTION_CONNECT_TIMEOUT, INTERNET_OPTION_RECEIVE_TIMEOUT, or INTERNET_OPTION_SEND_TIMEOUT with InternetSetOption. See here for details on the option flags: https://learn.microsoft.com/en-us/windows/win32/wininet/option-flags
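For example, setting a 30-second timeout on the session handle would normally look like this (hiOpen is assumed to be the handle returned by InternetOpen):
DWORD dwTimeoutMs = 30000; // 30 seconds, in milliseconds
InternetSetOption(hiOpen, INTERNET_OPTION_CONNECT_TIMEOUT, &dwTimeoutMs, sizeof(dwTimeoutMs));
InternetSetOption(hiOpen, INTERNET_OPTION_SEND_TIMEOUT, &dwTimeoutMs, sizeof(dwTimeoutMs));
InternetSetOption(hiOpen, INTERNET_OPTION_RECEIVE_TIMEOUT, &dwTimeoutMs, sizeof(dwTimeoutMs));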
However, due to a very old MS bug, setting the timeouts as above simply has no effect. There is a workaround to reduce the timeout, but not to increase it: perform the blocking call on a worker thread and wait on it with your own timeout. See this article for details:
https://mskb.pkisolutions.com/kb/224318
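In outline, the workaround looks like this; a minimal sketch assuming hiSiteConnect is visible to the worker, with error handling omitted:
// Worker thread: performs the blocking WinInet call.
DWORD WINAPI OpenWorker(LPVOID param)
{
  *(HINTERNET *)param = FtpOpenFile(hiSiteConnect, "TEMP.htm",
                                    GENERIC_WRITE, FTP_TRANSFER_TYPE_ASCII, 0);
  return 0;
}

// Main thread: wait up to 10 seconds, then cancel by closing the handle.
HINTERNET hiOpenFile = NULL;
HANDLE hThread = CreateThread(NULL, 0, OpenWorker, &hiOpenFile, 0, NULL);
if (WaitForSingleObject(hThread, 10000) == WAIT_TIMEOUT) {
  // Closing the connection handle forces the blocked call to return.
  InternetCloseHandle(hiSiteConnect);
  WaitForSingleObject(hThread, INFINITE);
}
CloseHandle(hThread);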

Related

Couchbase Java SDK times out with BUCKET_NOT_AVAILABLE

I am doing a lookup operation with Couchbase Java SDK 3.0.9, which looks like this:
// Set up
bucket = cluster.bucket("my_bucket")
collection = bucket.defaultCollection()
// Look up operation
val specs = listOf(LookupInSpecStandard.get("hash"))
collection.lookupIn(id, specs)
The error I get is BUCKET_NOT_AVAILABLE. Here is the full message:
com.couchbase.client.core.error.UnambiguousTimeoutException: SubdocGetRequest, Reason: TIMEOUT {"cancelled":true,"completed":true,"coreId":"0xdb7f8e4800000003","idempotent":true,"reason":"TIMEOUT","requestId":608806,"requestType":"SubdocGetRequest","retried":39,"retryReasons":["BUCKET_NOT_AVAILABLE"],"service":{"bucket":"export","collection":"_default","documentId":"export:main","opaque":"0xcfefb","scope":"_default","type":"kv"},"timeoutMs":15000,"timings":{"totalMicros":15008977}}
The strange part is that this code hasn't been touched for months and the lookup broke all of a sudden. The CB cluster is working fine. Its version is Enterprise Edition 6.5.1 build 6299.
Do you have any ideas what might have gone wrong?
Note that in Couchbase Java SDK 3.x, the Cluster::bucket method returns instantly and continues opening the bucket in the background. So the first operation you perform - a lookupIn here - needs to wait for that resource opening to complete before it can proceed. It looks like it took a little longer than usual to access the Couchbase bucket, and you got a timeout.
I recommend using the Bucket::waitUntilReady method after opening a bucket, to block until the resource opening is complete:
bucket = cluster.bucket("my_bucket")
bucket.waitUntilReady(Duration.ofMinutes(1))
This problem can also occur because of a firewall. You need to allow these ports:
Client-to-node
Unencrypted: 8091-8097, 9140, 11210
Encrypted: 11207, 18091-18095, 18096, 18097
You can find more details here:
https://docs.couchbase.com/server/current/install/install-ports.html#_footnotedef_2

Autodesk DM API: Is Retry appropriate here?

I've got an application that's been working for a long time.
Recently we created a new app/keys for it, and it's behaving strangely.
(I did figure out that scope requirements had been put in place; I am requesting bucket:create, bucket:read, data:read, and data:write.)
When I upload a file to a bucket, I've traditionally made a call to get the object details afterwards, to verify that it uploaded successfully.
With the new key, I am intermittently getting this error:
GetObjectDetails: InternalServerError {"fault":{"faultstring":"Execution of ServiceCallout servicecallout-auth-acm-request failed. Reason: timeout occurred servicecallout-auth-acm-request","detail":{"errorcode":"steps.servicecallout.ExecutionFailed"}}}
Is this something I should be retrying with a sleep in between? Or is it indicative of something wrong with the upload?
(FYI - putting in a retry seems to have resolved this for me, but I still don't know if that's the right answer - and if this issue might happen on other calls.)
It could be that the service requires a slight delay between a put object and a get, so I would suggest either using a timer or a retry, as you mentioned. However, a successful response from the upload should be enough to ensure your object has been placed in the bucket, without the need to double-check.
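If you do keep the retry, a short exponential backoff is a common pattern, since it avoids hammering the service while the fault clears. Here is a minimal sketch; getObjectDetails is a placeholder for whatever client call you are actually making:
async function getObjectDetailsWithRetry(maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await getObjectDetails(); // placeholder for the real API call
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      // wait 1s, 2s, 4s, ... before the next attempt
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}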

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP I've been having an issue that keeps happening. As I try to develop a definition to perform a task, the CEP server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the AuthoringTool to alter the definition bit by bit and then I re-upload it to the server through the authoring tool's export feature.
To reinitiate the Proton with the new definition each time I alter it, I use Google's Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type': 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By "tailing" tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
Ssh is still responsive and I can look at tomcat's log this way.
To get over this and continue, I exit ssh connections and restart CEP's instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to developing?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine run, using REST PUT:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine, using REST PUT:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating "ChangeDefinitions" action, only influences the next run of the CEP, and has no influence on the current run.
This answer your question about how you should update a CEP definition.
Hope it will solve your issue.

my nodejs script is not exiting on its own after successful execution

I have written a script to update my db table after reading data from db tables and solr. I am using the async.waterfall module. The problem is that the script does not exit after successful completion of all operations. I have also used a db connection pool, thinking that may be what causes the script to wait infinitely.
I want to put this script in crontab, and if it does not exit properly it will create a lot of instances unnecessarily.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles, but never destroying them.
It was processing a directory and putting data into orientdb.
So some of the things that I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left and the extra handles would have filled up the available working memory, which meant it could not continue, and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
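For illustration, here is a minimal sketch of that shape with placeholder steps; the commented-out pool.end() stands in for however you close your db connection pool:
var async = require('async');

async.waterfall([
  function readData(callback) {
    // read from the db tables and solr here ...
    callback(null, []); // pass the results on to the next step
  },
  function updateTable(rows, callback) {
    // update the db table here ...
    callback(null);
  }
], function done(err) {
  // all steps have finished (or one failed): clean up and exit
  if (err) console.error(err);
  // pool.end(); // close your db connection pool here (placeholder)
  process.exit(err ? 1 : 0);
});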
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active and not allowing the Node process to end.
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
await app.close();
log();
})
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
If an open database connection is what is keeping the process alive, we can quit the execution by destroying it:
connection.destroy();
If you use Visual Studio Code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command. When you invoke it, VS Code will prompt you to pick which Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

HTML 5 Application Cache catch events in Chrome

I've created a website using HTML 5 offline Application Cache and it works well in most cases, but for some users it fails. In Chrome, when the application is being cached, the progress is displayed for each file and also error messages if something goes wrong, like:
Application Cache Checking event
Application Cache Downloading event
...
Application Cache Progress event (7 of 521) http://localhost/HTML5App/js/main.js
...
Application Cache Error event: Failed to commit new cache to storage, would exceed quota.
I've added event listeners to window.applicationCache (error, noupdate, obsolete, etc.), but there is no information stored on the nature of the error.
Is there a way to access this information from the web site using JavaScript?
I would like to determine somehow which file caused the error or what kind of error occurred.
I believe the spec doesn't require the exact cause of the failure to be included in the error event. Currently the console is your only friend.
To wit, your current "exceed quota" error is due to the fact that Chrome currently limits the storage to 5MB. You can work around this by creating an app package that requests unlimitedStorage via the permission model. See http://code.google.com/chrome/apps/docs/developers_guide.html#live for more details.
If you want specific error messages in the "onerror" handler, raise a bug at http://crbug.com/new
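To illustrate the first point, an error listener only ever receives a generic event object, with no file name or failure reason attached; the specifics show up only in the console:
window.applicationCache.addEventListener('error', function (e) {
  // e carries no detail about which file failed or why
  console.log('AppCache error event:', e);
}, false);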