WebSocket disconnects on WildFly on OpenShift

I have a web application deployed on WildFly Application Server 8.2.0.Final on OpenShift.
My application serves a WebSocket endpoint.
I connect to the WebSocket endpoint with my Java client application (Tyrus implementation), and after a short period (a few hours) the connection is closed by the server side. I receive the close reason "Closed abnormally" with close code 1006.
The client reconnects automatically, and then exactly every hour the connection is broken again with the same close reason.
Is this a built-in mechanism working on the OpenShift server side? Some sort of cleanup mechanism?
I would like to have a permanent WebSocket connection to the server.
Would buying OpenShift bronze/silver support solve this problem?

The problem is in your client, not in the server:
Close code 1006 is a special code that means the connection was closed abnormally (locally) by the client implementation.
If your client reports close code 1006, then you should be looking at the websocket.onerror(evt) event for details.
See this SO answer for more details:
https://stackoverflow.com/a/19305172/212224
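
Since the question's client is a Java (Tyrus) application rather than a browser, the equivalent of websocket.onerror is the Endpoint.onError callback of the JSR-356 API that Tyrus implements. A minimal diagnostic client sketch (the endpoint URI is a placeholder):

```java
import java.net.URI;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.CloseReason;
import javax.websocket.ContainerProvider;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

public class DiagnosticClient extends Endpoint {

    @Override
    public void onOpen(Session session, EndpointConfig config) {
        // Connection established; message handlers would be registered here.
    }

    @Override
    public void onClose(Session session, CloseReason reason) {
        // 1006 (CLOSED_ABNORMALLY) is generated locally; it never arrives in a
        // close frame from the server, so the interesting detail is in onError.
        System.err.println("Closed: " + reason.getCloseCode()
                + " / " + reason.getReasonPhrase());
    }

    @Override
    public void onError(Session session, Throwable thr) {
        // The underlying exception usually names the real cause, e.g. an
        // EOFException when an intermediate proxy drops the idle TCP link.
        thr.printStackTrace();
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        container.connectToServer(new DiagnosticClient(),
                ClientEndpointConfig.Builder.create().build(),
                URI.create("wss://myapp-example.rhcloud.com:8443/ws")); // placeholder
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive to observe events
    }
}
```

Whatever the exception turns out to be, logging it is the first step toward telling a client-side failure from an idle-timeout disconnect imposed by an intermediary.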

Related

Connect Timeout error on CloudHub: Mule version 4.2.2

I am trying to hit an HTTPS client API which works fine in Postman (it responds in 800 ms) and in my local Mule flow, but it is not working on CloudHub. I am getting a Connect Timeout error: it tries to connect for 30 seconds (as per the logs) and then fails with an HTTP:CONNECTIVITY error.
failed: Connect timeout.
errorType=HTTP:CONNECTIVITY
cause=org.mule.extension.http.api.error.HttpRequestFailedException
The Response Timeout I have set is 5 minutes.
The flow worked fine when it was previously deployed on CloudHub. It stopped working a few days ago, though I didn't make any changes to my code. I am unable to debug this issue because it is not reproducible in my local environment (where it works perfectly). Any help would be appreciated.
There are four different types of general timeouts that Mule HTTP calls offer, each with its own behavior:
Connection Idle Timeout
Response Timeout
Max Idle Timeout
Query or Transaction Timeout (applies to DB connectors)
Since you are getting an HTTP:CONNECTIVITY error, applying a 5-minute Response Timeout doesn't help.
The Response Timeout (which covers a server taking too long to respond) only comes into play after the connection handshake has been established.
Your problem is with the connection itself.
The only thing you can try from your side is applying a Connection Idle Timeout and a reconnection strategy with some frequency gaps.
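
The distinction can be illustrated outside Mule with plain Java. A minimal sketch (the URL is a placeholder) showing that the connect timeout and the read/response timeout guard different phases of the request, which is why raising the response timeout cannot fix a connect failure:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical endpoint; substitute the API you are calling.
        URL url = new URL("https://api.example.com/resource");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Connect timeout: how long to wait for the TCP/TLS handshake.
        // This is the budget an HTTP:CONNECTIVITY-style failure exhausts.
        conn.setConnectTimeout(30_000);

        // Read (response) timeout: how long to wait for data once the
        // connection is established. It is never reached if the
        // handshake itself fails.
        conn.setReadTimeout(300_000);

        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```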
Since you are sure about your local tests, I suggest the following two steps:
1. Use the same HTTP connector configuration in a separate, new Mule app. Try it with a simple listener and the failing requester, and also add a freely available online REST service in one extra flow. Now test both and see which one works and which one fails.
This will tell you whether it is a real HTTP connectivity problem or something else, such as a Mule bug.
2. Check your configuration once again and make sure you are hitting the same endpoint in the CloudHub version.
Finally, I hope you did not accidentally leave any proxy configuration in the local version.
If it was working before, there was probably a networking change on the other side that now prevents access from the CloudHub application. You didn't share the URL, so it is not clear whether it is an internal or a public host. We also don't know if there is some kind of whitelisting on the server side.
You can test connectivity to the HTTP host and port using the Network Tools application, to see if it is accessible from your CloudHub environment.
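
If the Network Tools application is not at hand, the same reachability check can be done with a few lines of Java run from the environment in question; a minimal sketch (host and port are placeholders):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        String host = "api.example.com"; // placeholder target host
        int port = 443;                  // placeholder target port
        try (Socket socket = new Socket()) {
            // Fails fast if the host/port is unreachable or filtered.
            socket.connect(new InetSocketAddress(host, port), 10_000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        } catch (IOException e) {
            System.out.println("Cannot reach " + host + ":" + port + ": " + e);
        }
    }
}
```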

WebLogic Bridge Message: "Failure of Web Server bridge: No backend server available for connection..."

I have an application (packaged software from a vendor) that runs on Oracle WebLogic.
There are a few operations where, if I try them, I consistently get the following error page:
(WebLogic Bridge Message) Failure of Web Server bridge: No backend server available for connection: timed out after 10 seconds or idempotent set to OFF or method not idempotent.
The error occurs consistently almost exactly five minutes after I try the operation.
The page does not look like the typical error page you get when the application logic fails. It looks like something to do with the infrastructure (e.g., WebLogic configuration).
I am pursuing the issue with the software vendor, but that's not going well.
Has anyone seen this message, and/or can anyone suggest an approach for diagnosing the root cause here?
It looks like you are using a proxy server between the browser and the WebLogic server. From the error, it is evident that the proxy server is unable to connect to the back-end WLS server. You may have to enable proxy debug logging to get more information.

HTTP Request Automatic Retries

How can I prevent automatically repeated HTTP requests from a client machine to a Tomcat server?
I have a problem in my live web application, which runs over a local LAN: when a network failure occurs, sometimes multiple insertions take place in my MySQL database.
Does anyone know about this type of issue?
My application uses Tomcat as the web server and runs on CentOS.

Unable to Handle huge HTTP request

I am using Apache Tomcat version 8. It hosts 5-8 web applications developed in J2EE with some frameworks (e.g., Spring, Struts); most of them are built on servlets, and they connect to a database on the same system. These applications run on Linux with Java 7.
Every time I restart the Apache Tomcat server, it is able to handle a certain number of HTTP requests; after that it cannot handle any more, every new request takes too long, and the web page connection times out.
I have tried many ways to solve this issue but failed. I have also increased the connection limits in Apache Tomcat as well as at the OS level, but that was no solution.
If anyone has any idea, please guide me to solve this issue.
OR
Is there any other web server which can solve my issue?

HTML 5 - Web Sockets on Browser close

How does a server know to close a WebSocket connection in HTML5 in the scenarios below, among other cases?
Browser closed abruptly
Browser refresh (is a new socket connection created, or will the existing connection still be used?)
Abrupt system power-off
If the client quits without being able to notify the server, the basic characteristics of the TCP implementation define the behavior.
As long as your application (and the host system itself) does not attempt to send any data over this broken connection, the host will not realize that something is wrong. Hence, from the server's point of view, the connection could stay 'open' for a long while and keep resources allocated.
However, the moment data is sent to the remote end, the remote end will not acknowledge receipt and TCP retransmission comes into play. It involves a certain number of repetitions and timeouts; the exact parameters depend on the implementation (the operating system in use). When the retransmission finally fails, the TCP connection is closed and resources are freed on the server side. So you can
rely on the fact that at some point your application will want to write to the missing remote end and, in doing so, trigger detection of the dead connection, or
detect missing remote ends yourself by using something like pings at the application level (see the sketch after this list), or
use something like pings at the operating-system level, via TCP keepalive techniques.
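
As an illustration of application-level pings, here is a minimal server-side sketch using the standard javax.websocket (JSR-356) API. The Heartbeat class name and the 30-second interval are arbitrary choices for the example, not part of any framework:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.websocket.Session;

public class Heartbeat {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Ping the remote end periodically. On a dead TCP connection the
    // write eventually fails, which lets us close and free resources
    // instead of holding the session open indefinitely.
    public void start(Session session) {
        scheduler.scheduleAtFixedRate(() -> {
            if (!session.isOpen()) {
                scheduler.shutdown();
                return;
            }
            try {
                session.getBasicRemote().sendPing(ByteBuffer.allocate(0));
            } catch (IOException e) {
                // The write failed: the remote end is gone.
                try {
                    session.close();
                } catch (IOException ignored) {
                }
                scheduler.shutdown();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}
```

A compliant peer answers each ping with a pong frame automatically, so no extra application code is needed on the other side.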
The easiest part of your question is the browser-refresh part. IE, FF and Chrome will close the opened connection and open a new one; I guess any other browser will do the same.
For points 1 and 3 I can only guess: if the client can still close the TCP connection cleanly, the server will immediately recognize that the connection has been closed. If you are using Tomcat, the onClose method of the MessageInbound instance will be called.
If the client could not close the TCP connection cleanly, the server will wait for some kind of timeout. The server will time out quickly once it tries to write something to the socket; you could implement a heartbeat mechanism to force that. WebSockets have an option for automatic heartbeats (ping/pong frames), but not all browsers and servers seem to support it.
If a user closes a browser tab with an open WebSocket, the server will not know right away that it has been closed. However, as Jan-Philip says, if you attempt to write, the operation will fail, and from the error you know the current state of the connection.
For example, when using the ws lib for Node.js, if you try to send data to a closed WebSocket an exception will be thrown, saying something like [Error: not opened]. There you know the connection no longer exists and you can do any cleanup needed.
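
The same cleanup-on-failed-write pattern can be written against Java's javax.websocket API; a sketch, assuming sessions are registered in onOpen and removed in onClose:

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.Session;

public class Broadcaster {

    // Thread-safe set of currently known sessions.
    private final Set<Session> sessions = ConcurrentHashMap.newKeySet();

    public void register(Session session) {
        sessions.add(session);
    }

    public void unregister(Session session) {
        sessions.remove(session);
    }

    // A send to a dead connection throws IOException; treat that as the
    // signal that the connection no longer exists and clean it up.
    public void broadcast(String message) {
        for (Session session : sessions) {
            try {
                session.getBasicRemote().sendText(message);
            } catch (IOException e) {
                sessions.remove(session);
                try {
                    session.close();
                } catch (IOException ignored) {
                }
            }
        }
    }
}
```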