How to fix ConnectionPoolTimeoutException in Apache HttpClient - apache-httpclient-4.x

In our code we use version 4.5.3 of the Apache HttpClient. PoolingHttpClientConnectionManager is used as follows:
final PoolingHttpClientConnectionManager connManager =
        new PoolingHttpClientConnectionManager(600_000, TimeUnit.MILLISECONDS);
I understand that in order to fix ConnectionPoolTimeoutException, we either need to call EntityUtils.consume(httpEntity) or use a ResponseHandler wherever the HttpClient is used. However, in our code base the same HttpClient is used from many places. After fixing much of the code to use one of those two approaches, the frequency is reduced, but we still get ConnectionPoolTimeoutException at times.
Can this exception be fixed by using the evictExpiredConnections() or evictIdleConnections(maxIdleTime) methods when creating the HttpClientBuilder? What is the ideal value for maxIdleTime - more than connectionTimeToLive?
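For reference, this is roughly how we would wire those options (a sketch against the HttpClientBuilder 4.5.x API; the idle timeout value is a placeholder, not a recommendation):
// Builds a client that runs a background eviction thread for expired
// and idle pooled connections.
CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(connManager)
        .evictExpiredConnections()
        .evictIdleConnections(30, TimeUnit.SECONDS) // placeholder idle timeout
        .build();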
One code example is as follows.
final HttpResponse resp;
try
{
    resp = client.execute(request);
}
catch (IOException e)
{
    logException(e);
    return;
}
processResponse(resp);
return;
We noticed a correlation between logException invocations and ConnectionPoolTimeoutException in the logs. In other words, ConnectionPoolTimeoutException starts happening on a node shortly after there are a number of logException calls on that node. Is there something wrong with this code snippet?

Finally, the solution was to use a ResponseHandler wherever we had missed it. That was the only approach that worked, and it solved the ConnectionPoolTimeoutExceptions.
In an enterprise application you cannot simply close the HttpClient after each use, so using a ResponseHandler is the best coding practice.
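For reference, a minimal sketch of that pattern (client and request are the same variables as in the snippets above):
// execute(request, handler) consumes the entity and releases the connection
// back to the pool in all cases, success or failure.
String body = client.execute(request, new ResponseHandler<String>() {
    @Override
    public String handleResponse(HttpResponse response) throws IOException {
        HttpEntity entity = response.getEntity();
        return entity != null ? EntityUtils.toString(entity) : null;
    }
});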

Related

Mockito - Function calls other function, should I mock both?

I have code with two methods. Method A calls method B. Should I mock method B? Or can I let method A call method B, since it's only business logic without database connections or HTTP requests?
public Response InsertAsset(UpdateRequest apiRequest, String token) throws IOException, InterruptedException
{
    /* TODO
     * Change hard-coded URL implementation
     */
    String url = "http://test:8080/update";
    User user = userRepository.findByToken(token);
    UpdateRequest request = new UpdateRequest();
    generateRequestAPI(request, user);
    request.setAsset(apiRequest.getAsset());
    request.setKey(generateCombinedKey(request, user));
    // Will throw NullPointerException in case HTTP body cannot be generated
    HttpRequest httpRequest = generateHttpPostRequest(url, request, token);
    HttpResponse<String> httpResponse = httpClient.send(httpRequest, HttpResponse.BodyHandlers.ofString());
    return objectMapper.readValue(httpResponse.body(), Response.class);
}
Edited because I had gotten the question wrong at first.
Short answer: you can probably just use the real generateHttpPostRequest().
Longer answer ...
The original answer:
Without knowing your code, an exact answer is impossible. Mocks are for unit tests. In a unit test you have the system under test (SUT) and external dependencies. For a unit test you want to get rid of all behaviour in the dependencies and instead completely control what your SUT will see during the test. Also, unit tests must be easy to read, hence complex configurations are a no-go.
Some hints for your decision:
Never mock the SUT!
If the dependency has no behaviour of its own and you can easily determine what state it will present to your SUT, you may not need to mock it.
Configuring a mock to return a mock may sometimes be needed, but it should generally be avoided if possible.
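To make those hints concrete, here is a minimal sketch (JUnit 5 plus Mockito; the class name AssetServiceTest and the way httpClient, userRepository and objectMapper are wired are assumptions based on the snippet in the question):
// Mock the external dependencies (HTTP client, repository, mapper) - never the SUT.
@ExtendWith(MockitoExtension.class)
class AssetServiceTest {

    @Mock HttpClient httpClient;
    @Mock UserRepository userRepository;
    @Mock ObjectMapper objectMapper;
    @InjectMocks AssetService service; // the SUT itself stays real

    @Test
    void insertAssetPostsRequestAndParsesResponse() throws Exception {
        when(userRepository.findByToken("token")).thenReturn(new User());

        @SuppressWarnings("unchecked")
        HttpResponse<String> httpResponse = mock(HttpResponse.class);
        when(httpResponse.body()).thenReturn("{}");
        when(httpClient.<String>send(any(), any())).thenReturn(httpResponse);

        service.InsertAsset(new UpdateRequest(), "token");

        verify(httpClient).send(any(), any());
    }
}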

How to Publish JSON Object on ActiveMQ

I am trying to publish a JSON message (object) onto an ActiveMQ queue/topic.
Currently I am converting the JSON object into a String and then publishing it.
But I don't want to convert it into a String; I want to send the JSON object itself as the message.
Below is my code
public void sendMessage(final JSONObject msg) {
    logger.info("Producer sends---> " + msg);
    jmsTemplate.send(destination, new MessageCreator() {
        public Message createMessage(Session session) throws JMSException {
            String s = msg.toString();
            return session.createTextMessage(s);
            // createTextMessage(msg);
        }
    });
}
Using text on the queue is best practice, since you will be able to debug much more easily and you are not restricted to exactly the same language/framework, or even the same version of the libraries, in the applications on both sides of the queue.
If you really want that hard coupling (i.e. when you are using the queue inside a single application and don't need to inspect messages manually on the queues) you can do it:
instead of return session.createTextMessage(s); do return session.createObjectMessage(msg);
One more thing: Be aware that using JMS ObjectMessage may cause security issues if you don't have 100% control of the code posting messages. Therefore this is not allowed in default ActiveMQ settings. You need to enable this in both client and server settings. For reference, see this page: http://activemq.apache.org/objectmessage.html
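If you do go the ObjectMessage route, here is a rough sketch of what changes (the broker URL and trusted package list are made-up examples; note that the payload must implement java.io.Serializable, which org.json.JSONObject does not, so a HashMap is used here via JSONObject.toMap(), available in recent org.json versions):
// Client side: whitelist the packages your payload classes live in.
// The broker needs a matching setting; see the ActiveMQ page linked above.
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setTrustedPackages(Arrays.asList("java.util", "java.lang"));

// Sending: convert the JSONObject to a Serializable HashMap first.
jmsTemplate.send(destination, new MessageCreator() {
    public Message createMessage(Session session) throws JMSException {
        return session.createObjectMessage(new HashMap<>(msg.toMap()));
    }
});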

How to stop a Flink streaming job from a program

I am trying to create a JUnit test for a Flink streaming job which writes data to a Kafka topic and reads data from the same Kafka topic, using FlinkKafkaProducer09 and FlinkKafkaConsumer09 respectively. I am passing test data in the producer:
DataStream<String> stream = env.fromElements("tom", "jerry", "bill");
And checking whether the same data is coming from the consumer:
List<String> expected = Arrays.asList("tom", "jerry", "bill");
List<String> result = resultSink.getResult();
assertEquals(expected, result);
using TestListResultSink.
I am able to see the data coming from the consumer as expected by printing the stream. But I could not get the JUnit test result, because the consumer keeps running even after the messages are finished, so execution never reaches the assertion part.
Is there any way in Flink or FlinkKafkaConsumer09 to stop the process, or to run it for a specific time?
The underlying problem is that streaming programs are usually not finite and run indefinitely.
The best way, at least for the moment, is to insert a special control message into your stream which lets the source properly terminate (simply stop reading more data by leaving the reading loop). That way Flink will tell all down-stream operators that they can stop after they have consumed all data.
Alternatively, you can throw a special exception in your source (e.g. after some time) such that you can distinguish a "proper" termination from a failure case (by checking the error cause). Throwing an exception in the source will fail the program.
In your test you can start the job execution in a separate thread, wait some time to allow it to process the data, cancel the thread (it will interrupt the job) and then make the assertions.
CompletableFuture<Void> handle = CompletableFuture.runAsync(() -> {
    try {
        environment.execute(jobName);
    } catch (Exception e) {
        e.printStackTrace();
    }
});
try {
    handle.get(seconds, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    handle.cancel(true); // this will interrupt the job execution thread, cancel and close the job
}
// Make assertions here
Can you not use the isEndOfStream override within the Deserializer to stop fetching from Kafka? If I read correctly, Flink's Kafka09Fetcher has the following code in its run method, which breaks the event loop:
if (deserializer.isEndOfStream(value)) {
    // end of stream signaled
    running = false;
    break;
}
My thought was to use Till Rohrmann's idea of a control message in conjunction with this isEndOfStream method to tell the KafkaConsumer to stop reading.
Any reason that will not work? Or maybe some corner cases I'm overlooking?
https://github.com/apache/flink/blob/07de86559d64f375d4a2df46d320fc0f5791b562/flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java#L146
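A minimal sketch of that idea (the class name and the sentinel value are made up; the schema extends Flink's SimpleStringSchema):
// A schema that signals end-of-stream when a sentinel record arrives,
// making the Kafka fetcher leave its read loop (see the linked code).
public class StoppableStringSchema extends SimpleStringSchema {

    private static final String POISON_PILL = "##END##"; // made-up sentinel

    @Override
    public boolean isEndOfStream(String nextElement) {
        return POISON_PILL.equals(nextElement);
    }
}
You would then pass it to the consumer, e.g. new FlinkKafkaConsumer09<>(topic, new StoppableStringSchema(), props), and send "##END##" as the last test record.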
Following @TillRohrmann:
You can combine the special-exception method and handle it in your unit test if you use an EmbeddedKafka instance, then read off the EmbeddedKafka topic and assert on the consumed values.
I found https://github.com/asmaier/mini-kafka/blob/master/src/test/java/de/am/KafkaProducerIT.java to be extremely useful in this regard.
The only problem is that you will lose the element that triggers the exception but you can always adjust your test data to account for that.

Mule: JUnit test case to call a service which is in middle of the Mule flow

I'm a newbie to JUnit test cases; please help me with this issue. I have two Mule flows: the first flow has MQ as its inbound endpoint and a DataMapper to transform the XML. From the first flow I'm calling a second flow, which calls an existing service (SOAP/HTTP). Please find my JUnit test below. I'm able to get a success response, but my requirements are:
1. I need to see the transformed response coming out of the transformer (like how we see it via a logger component in our flow).
2. I need to override the (HTTP) URL through JUnit, in order to test the error scenario.
public class Request_SuccessPath extends FunctionalTestCase {

    @Test
    public void BulkRequest() throws Exception {
        MuleClient client = muleContext.getClient();
        System.out.println("test");
        String payload = " <root> <messageName>str1234</messageName><messageId>12345</messageId><DS>123</DS></root>";
        MuleMessage reply = client.send("vm://test", payload, null);
    }

    @Override
    protected String getConfigResources() {
        // TODO Auto-generated method stub
        return "src/main/app/project.xml";
    }
}
I thought the following snippet would override the URL, but it does not:
DefaultHttpClient client1 = new DefaultHttpClient();
HttpGet httpGet = new HttpGet("http://localhost:7800/service");
// execute the GET so there is a response to assert on
HttpResponse response = client1.execute(httpGet);
assertNotNull(response);
3. How do I take control of the flow and see the response in between flow steps?
For testing purposes I have replaced the WMQ inbound endpoint with a VM endpoint.
4. Is there any way to call the flow directly with WMQ from a JUnit test case, without replacing it with VM? Kindly help me with this.
I'm using Mule 3.4 and not using Maven as of now. Please help me. Thanks in advance.
1) What do you mean by "see"? Would logging it work? Inspecting it while debugging?
2) You should parametrize your endpoint with placeholders such as ${http.host}, ${http.port} and ${http.path}, and configure a property placeholder as explained here: http://www.mulesoft.org/documentation/display/current/Using+Parameters+in+Your+Configuration+Files
Add the http.port, http.host and http.path variables to mule-app.properties, taking into account that you must set system-properties-mode="OVERRIDE", and then start your Mule server using bin/mule -M-Dhttp.host=your-host -M-Dhttp.port=your-port -M-Dhttp.path=your-path.
3) Yes, WMQ has a Java API you can use to interact with it: http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/index.jsp?topic=%2Fcom.ibm.mq.csqzaw.doc%2Fuj41013_.htm - you will probably find hundreds of examples by googling it.
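Since the Mule context in a FunctionalTestCase starts in the same JVM as the test, you should also be able to override those placeholders from the test itself; a sketch (property names from point 2, values made up):
@BeforeClass
public static void overrideEndpoint() {
    // Point the parametrized endpoint at a stub that simulates the error scenario.
    System.setProperty("http.host", "localhost");
    System.setProperty("http.port", "7801"); // made-up stub port
    System.setProperty("http.path", "service");
}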
Regards.

ejb-3.0 customized exception

My EJB3 application running on JBoss 6 already has a customized exception "Ejbexception.java", which extends the Exception class.
I want to use it to trap exceptions with some number and send that number back to the client code, so the client can show a gentle message.
Example:
try {
    .....
} catch (SQLException ex) {
    throw new EjbException("1001");
}
Now, how do I get the "1001" in the client code?
Thanks in advance,
karthik
Did you write this Ejbexception class yourself? If so, that's a poor choice of name, because there's already a javax.ejb.EJBException in the library. However, it will work: when you throw it, the container will transport it to the client, who can then catch it. The string you inserted will be available from the exception's getMessage() method, just like normal.
If you're actually throwing a javax.ejb.EJBException here, then things are slightly different. That exception is aimed at the container, not the client. I actually don't know how it's made visible to the client. My suggestion would be to switch to using a custom exception, which the container will then pass to the client.
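A minimal sketch of that suggestion (BusinessException and the bean call are made-up names; @ApplicationException tells the container to pass the exception to the client as-is instead of wrapping it):
// A custom application exception; the container transports it to the client unwrapped.
@ApplicationException(rollback = true)
public class BusinessException extends Exception {
    public BusinessException(String errorCode) {
        super(errorCode);
    }
}

// In the bean:
try {
    // ... database work ...
} catch (SQLException ex) {
    throw new BusinessException("1001");
}

// On the client:
try {
    myBean.doWork();
} catch (BusinessException e) {
    String code = e.getMessage(); // "1001"
}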