I am using Apache Tomcat 8. It hosts 5-8 web applications developed in Java EE with frameworks such as Spring and Struts; most of them are plain servlet applications, and they connect to a database on the same system. These applications run on Linux with Java 7.
Every time I restart the Tomcat server, it handles a certain number of HTTP requests, after which it stops responding: every new request takes too long and the page eventually times out.
I have tried many ways to solve this issue without success. I have also increased the connection limits in Tomcat as well as at the OS level, but that did not help.
If anyone has any idea, please guide me toward a solution. Alternatively, is there another web server that would avoid this problem?
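For what it's worth, this symptom (fine after a restart, then stalling once a certain number of requests have been served) very often points to leaked JDBC connections exhausting the pool rather than to connector limits. A minimal sketch of the leak-proof pattern, assuming a container-managed DataSource under the illustrative JNDI name jdbc/AppDB:

import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/users")
public class UserServlet extends HttpServlet {

    // Container-managed connection pool; the JNDI name is illustrative.
    @Resource(name = "jdbc/AppDB")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // try-with-resources (available since Java 7) closes the connection,
        // statement and result set even on exceptions, so the pool never leaks.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM users WHERE id = ?")) {
            ps.setInt(1, Integer.parseInt(req.getParameter("id")));
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    resp.getWriter().println(rs.getString("name"));
                }
            }
        } catch (SQLException e) {
            throw new ServletException("Database lookup failed", e);
        }
    }
}

Taking a thread dump (jstack <pid>) while the server is stalled will confirm or rule this out: leaked connections show up as many request threads blocked waiting on the pool.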
I am trying to call an HTTPS client API that works fine in Postman (responds in about 800 ms) and in my local Mule flow, but it does not work on CloudHub. I am getting a Connect Timeout error: it tries to connect for 30 seconds (per the logs) and then fails with an HTTP:CONNECTIVITY error.
failed: Connect timeout.
errorType=HTTP:CONNECTIVITY
cause=org.mule.extension.http.api.error.HttpRequestFailedException
The response timeout I have set is 5 minutes.
The flow was working fine when previously deployed on CloudHub. It stopped working a few days ago, even though I didn't make any changes to my code. I am unable to debug this issue because it is not reproducible in my local environment (everything works perfectly there). Any help would be appreciated.
Mule HTTP calls offer four general types of timeout, each behaving differently:
Connection Idle Timeout
Response Timeout
Max Idle Timeout
Query or Transaction Timeout (applies to DB connectors)
Since you are getting an HTTP:CONNECTIVITY error, applying a 5-minute response timeout doesn't help. The response timeout (which covers a server taking too long to respond) only comes into play after the connection handshake has been established. Your problem is with the connection itself.
The most promising fix is to apply a Connection Idle Timeout and a reconnection strategy with some frequency gaps.
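To see why the 5-minute setting never gets a chance to matter, here is the same distinction in plain Java (the URL is a placeholder): the connect timeout fires while the TCP/TLS handshake is still being attempted, before the read ("response") timeout is ever consulted.

import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; substitute the real API URL.
        URL url = new URL("https://api.example.com/resource");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setConnectTimeout(30_000); // fails here if the handshake never completes
        con.setReadTimeout(300_000);   // the 5-minute "response timeout" only matters after connecting
        try {
            System.out.println("HTTP status: " + con.getResponseCode());
        } catch (SocketTimeoutException e) {
            // A connect timeout surfaces here before the read timeout is ever used.
            System.err.println("Timed out: " + e.getMessage());
        }
    }
}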
Since you are sure about the tests in your local environment, I suggest the following two steps:
1. Use the same HTTP connector configuration in a separate, new Mule app with a simple listener and the failing requester. In an extra flow, also add a call to a freely available online REST service. Test both and see which one works and which one fails. This will tell you whether it is a real HTTP connectivity problem or something else, such as a Mule bug.
2. Check your configuration once again and make sure you are hitting the same endpoint in the CloudHub version.
Finally, I hope you did not accidentally leave any proxy configuration in the local version.
If it was working before, there was probably a networking change on the other side that now prevents access from the CloudHub application. You didn't share the URL, so it is not clear whether it is an internal host or a public one. We also don't know whether there is some kind of whitelisting on the server side.
You can test connectivity to the HTTP host and port using the Network Tools application to see whether it is accessible from your CloudHub environment.
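If you would rather script the check than use the Network Tools app, a plain TCP probe (host and port below are placeholders) is enough to tell whether the endpoint is reachable at all:

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        String host = "api.example.com"; // placeholder: the API host you are calling
        int port = 443;                  // placeholder: the API port
        try (Socket socket = new Socket()) {
            // Fail fast: a 5-second limit is plenty to detect a blocked route.
            socket.connect(new InetSocketAddress(host, port), 5_000);
            System.out.println("Reachable: " + host + ":" + port);
        } catch (Exception e) {
            System.out.println("Unreachable: " + e);
        }
    }
}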
How can I prevent automatically repeated HTTP requests from a client machine to a Tomcat server?
I have a problem in my live web application, which runs over a local LAN: when a network failure occurs, multiple insertions sometimes end up in my MySQL database.
Does anyone know about this type of issue?
My application uses Tomcat as the web server and runs on CentOS.
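One common defense, independent of where the retry comes from, is to make the insert idempotent: the client sends a unique token per submission and the database enforces it with a UNIQUE constraint, so a retried request cannot create a second row. A minimal sketch (the table, column, and class names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

public class OrderDao {
    // Assumes: ALTER TABLE orders ADD CONSTRAINT uq_request_token UNIQUE (request_token);
    private static final String INSERT =
        "INSERT INTO orders (request_token, item, quantity) VALUES (?, ?, ?)";

    /** Returns false when the same token was already inserted (a client retry). */
    public boolean insertOnce(Connection con, String token, String item, int qty)
            throws Exception {
        try (PreparedStatement ps = con.prepareStatement(INSERT)) {
            ps.setString(1, token); // generated once per form submission, e.g. a UUID
            ps.setString(2, item);
            ps.setInt(3, qty);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException duplicate) {
            return false; // duplicate retry: ignore instead of inserting twice
        }
    }
}

The token can be a UUID written into a hidden form field when the page is rendered, so every retry of the same submission carries the same token.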
We are experiencing an issue connecting to the cluster after updating the version of the Java SDK.
The setup of the system is as follows:
We have a web application that uses the Java SDK, and a Couchbase cluster. In between we have a VIP (virtual IP address). We realise that isn't ideal, but we're not able to change it immediately since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications to the cluster and ensure that when the application starts it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the nodes currently in the cluster, and the cluster would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would run without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Now our application starts and the Java SDK initiates a request on port 11210 to the VIP. Since this port is not open, the request fails and the Java SDK throws an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further requests are made on any port.
It appears the order in which ports are used for cluster discovery has changed between versions. Could somebody please confirm or dispute that? Could somebody also advise on how we could resolve the issue? We are trying to understand the client's behavior: if we opened all of those ports on the VIP, would the client then function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing potential solutions since that would interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map used to bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache rather than the cluster manager (port 8091). The SDK should fall back to 8091 if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me; it cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the 2.2.x Java SDK, get rid of the VIP entirely, and go with a DNS SRV record instead. That gives you a single DNS entry for the SDK connection object, and you simply manage the node list in DNS. It works great. I say SDK 2.2 because the DNS SRV solution is fully supported there; in 2.1 it is experimental. Couchbase specifically recommends against VIPs these days. In older versions of the SDKs a VIP was fine and helped limit the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
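As a sketch of what that looks like in code, assuming a 2.2.x SDK on the classpath and an SRV record published for the illustrative hostname couchbase.example.com:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class ClusterBootstrap {
    public static void main(String[] args) {
        // Resolve the node list from a DNS SRV record instead of a
        // fixed seed list or a VIP (hostname is illustrative).
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .dnsSrvEnabled(true)
                .build();
        CouchbaseCluster cluster = CouchbaseCluster.create(env, "couchbase.example.com");
        Bucket bucket = cluster.openBucket("default");
        // ... use the bucket ...
        cluster.disconnect();
    }
}

With DNS SRV enabled, exactly one hostname is passed to the connection object, and adding or removing nodes becomes a DNS change rather than a code or VIP change.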
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrap (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I can't guarantee that it will work with a VIP even after that, but it may be worth a try if you're in a hurry.
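A minimal sketch of that workaround (the VIP hostname is a placeholder):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class CarrierlessBootstrap {
    public static void main(String[] args) {
        // Skip the 11210 (carrier) bootstrap so the SDK falls back to the
        // HTTP bootstrap on 8091, the only port the VIP has open.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .bootstrapCarrierEnabled(false)
                .build();
        CouchbaseCluster cluster = CouchbaseCluster.create(env, "vip.example.com");
        Bucket bucket = cluster.openBucket("default");
        // ... use the bucket ...
        cluster.disconnect();
    }
}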
I have deployed a Java web application on OpenShift using a Tomcat server. The application runs fine, but after 1-2 days the Tomcat server gets killed automatically, so I have to start it manually. Can anyone tell me what is going wrong here? I'm using a free account.
Thanks,
A.G.Ishara
Applications on the free account are idled automatically after 24 hours without external HTTP requests. The application should then restart itself automatically once a request is made.
I am currently running a virtualized environment for my web and DB servers. When I access the web server or the MySQL server individually, they are both fast. I also have websites on the web server that do not require the DB server, and those all load quickly. However, when I access a hosted website that requires the web server to call the DB server, there is a 5-7 second latency on every page load. I have confirmed this with both a very simple site and a WordPress setup. Here is the configuration:
Web server - CentOS 6.5, Apache 2.2.15
DB server - CentOS 6.5, MySQL 5.1.73
My question is: are the servers continuously re-authenticating with one another (and thus causing latency) on every single DB call? If that is the case, does anyone know how to authenticate permanently between the two?
I might be way off with this assumption, and authentication could have nothing to do with it. I am completely open to any and all ideas at this point. Thank you very much.
V/R,
Tony
To me it seems to be a network issue.
And the DB server does need to authenticate each connection as it is opened, so every page load that opens a fresh connection pays that cost.
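If the application opens a new MySQL connection per request, each page load pays the TCP handshake plus the authentication exchange; a connection pool amortizes that by keeping authenticated connections open. A minimal sketch using Apache Commons DBCP2 (the JDBC URL and credentials are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

public class Pool {
    private static final BasicDataSource ds = new BasicDataSource();
    static {
        // Placeholders: substitute your DB host, schema and credentials.
        ds.setUrl("jdbc:mysql://db.example.com:3306/appdb");
        ds.setUsername("app");
        ds.setPassword("secret");
        ds.setMaxTotal(20); // cap concurrent connections
        ds.setMinIdle(5);   // keep a few authenticated connections warm
    }
    public static DataSource get() { return ds; }
}

Callers borrow a connection with try-with-resources so it is returned to the pool rather than closed. If the latency persists even with pooled connections, it is also worth checking whether the MySQL server performs a slow reverse-DNS lookup on each new connection; adding skip-name-resolve to my.cnf disables that lookup.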