I need to write code that sends requests to a server one after another; each request is an XML-format string. After doing some research, I chose to use multiple threads and a connection pool manager to handle the requests and responses. I use PoolingHttpClientConnectionManager from HttpClient 4.3. When I use one thread, the code works: it sends the request and gets the response. But when I make it multi-threaded, using a single HttpClient instance, the connection seems broken. I used netstat to check and the TCP status is TIME_WAIT. My code looks like this:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(50);
cm.setDefaultMaxPerRoute(10);
cm.setMaxPerRoute(new HttpRoute(new HttpHost(ENV_ABCD)), 20);
CloseableHttpClient httpclient = HttpClients.custom().setConnectionManager(cm).build();
I use a thread pool too; each thread handles one task (a Runnable). The task generates one request, sends it to the server, and gets and processes the response. Once the task finishes, the thread goes back to the thread pool and the connection is also put back into the connection pool.
All requests are sent to the same server so far.
The Runnable task includes the following code:
HttpPost post = new HttpPost(url);
try {
    // the request is an xml string
    post.setEntity(new StringEntity(request));
} catch (UnsupportedEncodingException e1) {
    e1.printStackTrace();
}

String responseStr = null;
try {
    // commonHttpClient is the same httpclient instance declared above
    HttpResponse response = commonHttpClient.execute(post);
    HttpEntity entity = response.getEntity();
    if (entity != null) {
        responseStr = getString(entity.getContent());
    }
} catch (ClientProtocolException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
I do not get any response. I can see the code hanging on the line commonHttpClient.execute(post);. From netstat, it looks like the client side is closing the connection.
I do not have this problem when there is only one thread. Can anyone tell me what part of my code is wrong? Am I missing a step when configuring the connections? It is hard to find examples that use Apache HttpClient 4.3.
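For reference, this is my understanding of how a response is supposed to be read and the pooled connection released in 4.3, using CloseableHttpResponse and EntityUtils (which my code above does not do). It is only a sketch of the pattern I think is expected, not something I have verified:

// Sketch (my assumption): closing the response should return the connection to the pool.
try (CloseableHttpResponse response = commonHttpClient.execute(post)) {
    HttpEntity entity = response.getEntity();
    if (entity != null) {
        responseStr = EntityUtils.toString(entity); // fully consumes the entity
    }
} catch (IOException e) {
    e.printStackTrace();
}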
Thanks
A "side effect" of using Netty is that you need to handle stuff you never thought about, like sockets closing and connection resets. A recurring theme is having your logs stuffed full of java.lang.IOException: Connection reset by peer.
What I am wondering is how to handle these "correctly" from a web-server perspective. AFAIK, this error simply means the other side closed its socket (for instance, when reloading the web page or similar) while a request was being sent to the server.
This is how we currently handle exceptions happening in our pipeline (I think it does not make full sense):
s, not the handler I have attached to the end of the pipeline.
current setup
pipeline.addLast(
        new HttpServerCodec(),
        new HttpObjectAggregator(MAX_CONTENT_LENGTH),
        new HttpChunkContentCompressor(),
        new ChunkedWriteHandler(),
        // lots of handlers
        // ...
        new InterruptingExceptionHandler()
);
pipeline.addFirst(new OutboundExceptionRouter());
the exception handler
private class InterruptingExceptionHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        final var id = ctx.channel().id();
        // This needs to go before the next section, as the interrupt handler might shut down
        // the server before we are able to notify the client of the error
        ctx.writeAndFlush(serverErrorJSON("A server error happened. Examine the logs for channel id " + id));
        if (cause instanceof Error) {
            logger.error(format("Error caught at end of pipeline in channel %s, interrupting!", id), cause);
            ApplicationPipelineInitializer.this.serverInterruptHook.run();
        } else {
            logger.error(format("Uncaught user land exception in channel %s for request %s: ", id, requestId(ctx)), cause);
        }
    }
}
If some exception, like the IOException, is thrown, we try to write a response back. In the case of a closed socket, that write will then fail, right? So I guess we should detect "connection reset by peer" somehow and just ignore the exception silently, to avoid triggering a new issue by writing to a closed socket. If so, how? Should I do cause instanceof IOException && cause.getMessage().equals("Connection reset by peer"), or are there more elegant solutions? To me, it seems like this should be handled by some handler further down in the stack, closer to the HTTP handler.
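A minimal sketch of the kind of check I am considering; comparing against the exception's message string is my own assumption, not a documented contract:

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    if (cause instanceof IOException
            && "Connection reset by peer".equals(cause.getMessage())) {
        // The client went away (e.g. page reload); don't try to write a
        // response to a closed socket, just close the channel quietly.
        ctx.close();
        return;
    }
    // ... fall through to the existing handling shown above ...
}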
If you wonder about the OutboundExceptionRouter:
/**
* This is the first outbound handler invoked in the pipeline. What it does is add a listener to the
* outbound write promise which will execute future.channel().pipeline().fireExceptionCaught(future.cause())
* when the promise fails.
* The fireExceptionCaught method propagates the exception through the pipeline in the INBOUND direction,
* eventually reaching the ExceptionHandler.
*/
private class OutboundExceptionRouter extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        promise.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);
        super.write(ctx, msg, promise);
    }
}
My MSMQ is located on a remote machine.
My code is as follows:
private void OnReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
{
    System.Messaging.Message msg = _queue.EndReceive(e.AsyncResult);
    FireReceiveEvent(msg.Body); // Here msg.Body throws an exception
    _queue.BeginReceive();
}
I'm running this as a Windows service; not sure if that makes a difference. But msg.Body throws an InvalidOperationException.
In fact, most of the msg's properties throw exceptions. Any ideas?
Here is a screenshot.
Why don't you try casting the source parameter to MessageQueue?
private void MessageQueueReceiveCompleted(Object source, ReceiveCompletedEventArgs asyncReceive)
{
    // Get a handle to the MessageQueue that raised the event
    MessageQueue messageQueue = (MessageQueue)source;
    try
    {
        Message message = messageQueue.EndReceive(asyncReceive.AsyncResult);
        if (message != null)
        {
            ProcessMsmqMessage(message.Body);
        }
    }
    catch (Exception e)
    {
        Exception err = new Exception(String.Format("Error in QueueListener: {0}. Detail: {1}", queueName, e.Message), e);
        OnListeningError(err);
    }
    finally
    {
        // Start listening for the next message
        messageQueue.BeginReceive();
    }
}
OK. So after much work and reading, and banging my head against the wall, I found what the problem was.
REMOTE queues work very differently than local private queues.
You may ask, why? Well... this is probably a deficiency in MS' API.
Remote queues are very icky. They do not support a lot of the features that are available for regular local queues.
For example, on a remote queue, unless it is transactional, you cannot do a BeginPeek. You cannot even access message.Body, because it throws an error.
But that's not all. You cannot even accidentally subscribe to an event like OnPeekCompleted (even if you never call BeginPeek). The entire MessageQueue object goes crazy when you do that.
This silly reason was the cause of my headache.
Some use cases require being able to count the requests sent through the Apache API. For example, when massively querying a web API whose access requires authentication through an API key, and whose TOS limits the number of requests per key over time.
To be more specific about my case: I request https://domain1/fooNeedNoKey and, depending on its analyzed response data, I request https://domain2/fooNeedKeyWithRequestsCountRestrictions. All of those 1-to-2-request sequences are sent through a single org.apache.http.impl.client.FutureRequestExecutionService.
As of now, depending on org.apache.httpcomponents:httpclient:4.3.3, I'm using these API elements:
org.apache.http.impl.client.FutureRequestExecutionService, to perform multi-threaded HTTP requests. It offers time metrics (how much time an HTTP thread took until it terminated), but no request-counter metrics.
The following custom CloseableHttpClient, which customizes the Apache API retry behavior:
final CloseableHttpClient httpClient = HttpClients.custom()
        // the auto-retry feature of the Apache API will retry up to 5
        // times on failure, and is also allowed to re-send requests
        // that were already sent if necessary (I don't really understand
        // the purpose of the second parameter below)
        .setRetryHandler(new StandardHttpRequestRetryHandler(5, true))
        // for HTTP 503 'Service unavailable' errors, also retry up to
        // 5 times, waiting 500 ms between retries. My guess is that those
        // 5 retries are part of the previous "global" 5-retries setting.
        // The setting below, when used alone, would allow retries to be
        // enabled only for HTTP 503, or a larger retry count for this
        // specific error
        .setServiceUnavailableRetryStrategy(new DefaultServiceUnavailableRetryStrategy(5, 500))
        .build();
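For context, here is a rough sketch of how those two pieces fit together in my code; the executor size and the response handler below are placeholders, not the real values:

void sketch(final CloseableHttpClient httpClient) throws Exception {
    // Sketch: the FutureRequestExecutionService wraps the customized client above.
    ExecutorService execService = Executors.newFixedThreadPool(8); // arbitrary size
    FutureRequestExecutionService requestExecService =
            new FutureRequestExecutionService(httpClient, execService);

    HttpRequestFutureTask<String> task = requestExecService.execute(
            new HttpGet("https://domain1/fooNeedNoKey"),
            HttpClientContext.create(),
            new BasicResponseHandler());
    String body = task.get(); // time metrics are exposed via requestExecService.metrics()
}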
Getting back to the topic:
A request counter could be created by extending the Apache API retry-related classes quoted above.
Alternatively, an unrelated Apache API support ticket suggests that such request-counter metrics could be made available and forwarded out of the API, into Java NIO.
Edit 1:
It looks like the Apache API won't permit this to be done.
Quote from the inside of the API, RetryExec not being extendable from outside the API code:
package org.apache.http.impl.execchain;

public class RetryExec implements ClientExecChain {
    ..
    public CloseableHttpResponse execute(
            final HttpRoute route,
            final HttpRequestWrapper request,
            final HttpClientContext context,
            final HttpExecutionAware execAware) throws IOException, HttpException {
        ..
        for (int execCount = 1;; execCount++) {
            try {
                return this.requestExecutor.execute(route, request, context, execAware);
            } catch (final IOException ex) {
                ..
                if (retryHandler.retryRequest(ex, execCount, context)) {
                    ..
                }
                ..
            }
        }
    }
The 'execCount' variable is the needed info, and it can't be accessed since it is only used locally.
As well, one can extend 'retryHandler' and manually count requests in it, but retryHandler.retryRequest(ex, execCount, context) is not given the 'request' variable, making it impossible to know what we are incrementing a counter for (one may only want to increment the counter for requests sent to a specific domain).
I'm out of Java ideas for it. A third-party alternative: have the Java process poll a file on disk, managed by a shell script that counts the desired requests. Sure, it would cause a lot of disk read accesses and would be a hardware-killer option.
OK, the workaround was easy; the HttpContext class of the API is intended for this:
// optionally, in case your HttpClient is configured for retry
class URIAwareHttpRequestRetryHandler extends StandardHttpRequestRetryHandler {
    public URIAwareHttpRequestRetryHandler(final int retryCount, final boolean requestSentRetryEnabled)
    {
        super(retryCount, requestSentRetryEnabled);
    }

    @Override
    public boolean retryRequest(final IOException exception, final int executionCount, final HttpContext context)
    {
        final boolean ret = super.retryRequest(exception, executionCount, context);
        if (ret) {
            doForEachRequestSentOnURI((String) context.getAttribute("requestURI"));
        }
        return ret;
    }
}

// optionally, in addition to the previous one, in case your HttpClient has specific settings for the 'Service unavailable' retries
class URIAwareServiceUnavailableRetryStrategy extends DefaultServiceUnavailableRetryStrategy {
    public URIAwareServiceUnavailableRetryStrategy(final int maxRetries, final int retryInterval)
    {
        super(maxRetries, retryInterval);
    }

    @Override
    public boolean retryRequest(final HttpResponse response, final int executionCount, final HttpContext context)
    {
        final boolean ret = super.retryRequest(response, executionCount, context);
        if (ret) {
            doForEachRequestSentOnURI((String) context.getAttribute("requestURI"));
        }
        return ret;
    }
}
// main HTTP querying code: retain the URI in the HttpContext to make it available to the custom retry handlers
try {
    httpContext.setAttribute("requestURI", httpGET.getURI().toString());
    httpClient.execute(httpGET, getHTTPResponseHandlerLazy(), httpContext);
    // if the request succeeded with no need for retries, or if it succeeded on the last send:
    // in any case, this is the last query sent to the server and it succeeded
    doForEachRequestSentOnURI(httpGET.getURI().toString());
} catch (final ClientProtocolException e) {
    // the request definitively failed after retries: it's the last query sent to the server, and it failed
    doForEachRequestSentOnURI(httpGET.getURI().toString());
} catch (final IOException e) {
    // the request definitively failed after retries: it's the last query sent to the server, and it failed
    doForEachRequestSentOnURI(httpGET.getURI().toString());
} finally {
    // restore the context as it was initially
    httpContext.removeAttribute("requestURI");
}
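In the code above, doForEachRequestSentOnURI is just whatever per-URI accounting you need. For illustration, a hypothetical sketch of it as a per-host counter (this class is not part of the original code and assumes Java 8):

import java.net.URI;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

class RequestCounter {
    // Hypothetical per-host request counter; adapt it to whatever the TOS requires.
    private final ConcurrentMap<String, AtomicLong> countsByHost = new ConcurrentHashMap<>();

    void doForEachRequestSentOnURI(final String uri) {
        final String host = URI.create(uri).getHost();
        countsByHost.computeIfAbsent(host, h -> new AtomicLong()).incrementAndGet();
    }

    long countFor(final String host) {
        final AtomicLong count = countsByHost.get(host);
        return count == null ? 0L : count.get();
    }
}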
Solved.
I am writing a bulk email program using the JavaMail API. I have a Microsoft Exchange server through which I am trying to send the emails. When I run my program I get the following error:
com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:2057)
    at com.sun.mail.smtp.SMTPTransport.finishData(SMTPTransport.java:1862)
    at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:1100)
    at javax.mail.Transport.send0(Transport.java:195)
    at javax.mail.Transport.send(Transport.java:124)
    at SendEmail.postMail(SendEmail.java:100)
    at EmailGenerator.main(EmailGenerator.java:52)
The part of my code trying to send the message is as follows:
Properties props = new Properties();
props.put("mail.smtp.host", email_server);
props.put("mail.transport.protocol", "smtp");
props.put("mail.smtp.auth", true);

class EmailAuthenticator extends Authenticator {
    String user;
    String pw;

    EmailAuthenticator(String FROM, String PASSWORD)
    {
        super();
        this.user = FROM;
        this.pw = PASSWORD;
    }

    public PasswordAuthentication getPasswordAuthentication()
    {
        return new PasswordAuthentication(user, pw);
    }
}

Session session = Session.getInstance(props, new EmailAuthenticator(USER, PASSWORD));
session.setDebug(debug);
System.out.println("Session created");

.. CREATED MESSAGE HERE...

Transport transport = session.getTransport("smtp");
transport.connect(exchange_server, user, password);
transport.send(msg);
transport.close();
I wonder: am I missing some configuration on the Exchange server side, or is this an issue with my code?
OK, I figured out where I was going wrong and am posting the answer in case anybody else can get some value out of it. I had the following line of code:
props.put("mail.smtp.auth", true);
This was telling my application that it needed to authenticate to the SMTP server, when in fact it didn't. This was preventing my application from logging into the SMTP server and sending the email, and thus producing the error message. Setting this property to false, or removing the line altogether, fixed the issue for me. This line of code is only necessary for SMTP servers that require you to log in, which my Exchange server didn't.
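For reference, a minimal sketch of the setup that reflects this fix (the host and variable names are placeholders from my code above; no Authenticator is needed when the server does not require SMTP auth):

Properties props = new Properties();
props.put("mail.smtp.host", email_server);
props.put("mail.transport.protocol", "smtp");
// "mail.smtp.auth" is omitted (or set to "false") because this
// Exchange server accepts mail without SMTP authentication.
Session session = Session.getInstance(props);
// ... create msg from this session as before ...
Transport.send(msg); // static send; uses the session's transport settings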
There is a web service deployed on Tomcat 6 and exposed via Apache CXF 2.3.3. I generated client stubs with wsdl2java to be able to call this service.
Things seemed fine until I sent a big request (~1 MB). This request wasn't processed and failed with the exception:
Interceptor for {http://localhost/}ResourceAllocationServiceSoapService has thrown
exception, unwinding now org.apache.cxf.binding.soap.SoapFault:
Error reading XMLStreamReader.
...
com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
at [row,col {unknown-source}]: [1,0]
Is there some kind of max request length here? I'm totally stuck on it.
Vladimir's suggestion worked. The code below will help others understand where to put the 1000000.
public void handleMessage(SoapMessage message) throws Fault {
    // Get the message content for dirty editing...
    InputStream inputStream = message.getContent(InputStream.class);
    if (inputStream != null)
    {
        String processedSoapEnv = "";
        // Cache the InputStream so it can be read independently
        CachedOutputStream cachedInputStream = new CachedOutputStream(1000000);
        try {
            IOUtils.copy(inputStream, cachedInputStream);
            inputStream.close();
            cachedInputStream.close();
            InputStream tmpInputStream = cachedInputStream.getInputStream();
            try {
                String inputBuffer = "";
                int data;
                while ((data = tmpInputStream.read()) != -1) {
                    byte x = (byte) data;
                    inputBuffer += (char) x;
                }
                /**
                 * At this point you can choose to reformat the SOAP
                 * envelope or simply view it. Just make sure you put
                 * an InputStream back when you are done (see below),
                 * otherwise CXF will complain.
                 */
                processedSoapEnv = fixSoapEnvelope(inputBuffer);
            }
            catch (IOException e) {
            }
        }
        catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

        // Re-set the SOAP InputStream with the new envelope
        message.setContent(InputStream.class, new ByteArrayInputStream(processedSoapEnv.getBytes()));

        /**
         * If you just want to read the InputStream and not
         * modify it, then you just need to put it back where
         * it was using the CXF cached InputStream:
         *
         * message.setContent(InputStream.class, cachedInputStream.getInputStream());
         */
    }
}
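For completeness, handleMessage above lives inside a custom interceptor. A minimal sketch of how such an interceptor could be declared and attached on the client side follows; the class name, phase choice, and registration call are my assumptions, not from the original post:

import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;

public class SoapEnvelopeRewriteInterceptor extends AbstractSoapInterceptor {
    public SoapEnvelopeRewriteInterceptor() {
        // Run early, while the raw InputStream is still the message content.
        super(Phase.RECEIVE);
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        // body as shown above
    }
}

// Attaching it to a generated client proxy, e.g.:
// ClientProxy.getClient(port).getInInterceptors().add(new SoapEnvelopeRewriteInterceptor());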
I figured out what was wrong. Actually, it was a bug inside the interceptor's code:
CachedOutputStream requestStream = new CachedOutputStream();
When I replaced this with
CachedOutputStream requestStream = new CachedOutputStream(1000000);
things started working fine.
So the request was just being truncated during the copying of the streams.
I ran into the same issue of getting "com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog" when using the CachedOutputStream class.
Looking at the sources of the CachedOutputStream class, the threshold is used to switch between storing the stream's data "in memory" and in "a file".
Assuming the stream operates on data that exceeds the threshold, the data gets stored in a file, and thus the following code is going to break:
IOUtils.copy(inputStream, cachedInputStream);
inputStream.close();
cachedInputStream.close(); // closes the stream; the file on disk gets deleted
InputStream tmpInputStream = cachedInputStream.getInputStream(); // the returned tmpInputStream is a brand new *empty* one
// ... reading tmpInputStream here will produce the WstxEOFException
Increasing the 'threshold' helps because all of the stream data is then kept in memory, and in that scenario calling cachedInputStream.close() does not really close the underlying stream implementation, so one can still read from it later on.
Here is a 'fixed' version of the above code (at least it worked without exceptions for me):
IOUtils.copy(inputStream,cachedInputStream);
inputStream.close();
InputStream tmpInputStream = cachedInputStream.getInputStream();
cachedInputStream.close();
// reading from tmpInputStream here works fine
The temporary file gets deleted when close() is called on tmpInputStream and there are no other references to it; see the source code of CachedOutputStream.maybeDeleteTempFile().