HttpClient connection pool over two internets - apache-httpclient-4.x

I am experiencing a problem in a client's environment.
It has two internet providers; the second is a backup that takes over when the first one goes down, and both are managed by a pfSense instance.
I am using HttpClient to make requests to another server over a VPN that is managed by the same pfSense instance.
When the first link goes down the second takes over, but HttpClient keeps an internal connection pool (PoolingHttpClientConnectionManager), and that is where my problem starts.
The pool holds a socket that never times out (I don't know why), so the next requests, which should go out over the second link, stop responding: the pool keeps handing out the socket that was established over the first link, which is now down, and this never ends in a read timeout or any other error.
How can I solve this? How can I check the pool and discard these stale connections in the right way?

I created a health-check thread, and each time the probe connection is refused I evict connections from the pool:
Thread healthCheck = new Thread(() -> {
    while (true) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("ip-server-address", 80), 3000);
        } catch (IOException e) {
            myPoolingHttpClientConnectionManager.closeExpiredConnections();
            myPoolingHttpClientConnectionManager.closeIdleConnections(0, TimeUnit.MILLISECONDS);
        }
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            return;
        }
    }
});
healthCheck.setDaemon(true);
healthCheck.start();
This solution was based on this article:
https://www.baeldung.com/httpclient-connection-management#eviction
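Besides an external health check, HttpClient 4.4+ can handle part of this on its own: the pool can re-validate connections that have been idle for a while, and the builder can run a background thread that evicts expired and idle connections. A minimal sketch, assuming HttpClient 4.4+ (the timeout values are illustrative, not taken from the setup above):
import java.util.concurrent.TimeUnit;

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PooledClientFactory {

    public static CloseableHttpClient build() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        // re-check a pooled connection that has been idle for more than 2s before leasing it
        cm.setValidateAfterInactivity(2000);

        // hard timeouts so a request over a dead route fails instead of hanging forever
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(5000)
                .setSocketTimeout(10000)
                .setConnectionRequestTimeout(5000)
                .build();

        return HttpClients.custom()
                .setConnectionManager(cm)
                .setDefaultRequestConfig(config)
                // background eviction of expired and idle connections
                .evictExpiredConnections()
                .evictIdleConnections(30, TimeUnit.SECONDS)
                .build();
    }
}
Note that setValidateAfterInactivity only catches connections the operating system already reports as closed; the socket timeout is still what stops a request from hanging when the first link dies silently.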

Related

My javax.jms.MessageListener stops receiving messages from IBM MQ Server

I am building a service that continuously consumes messages from an IBM MQ queue. This service runs in a private cloud with 80 replicas (containers, or pods) listening to the queue. Looking at the application logs, I can see about 20 pods that are not consuming any messages, even though the queue's PUT metric counter shows about 200 req/s (rush hour). Most of the messages sit in the queue for a while (20 seconds) and then expire.
Outside rush hour, the application receives about 40 req/s and can process all the messages without any of them expiring.
Is there a limit on the number of active listeners that the IBM MQ server supports? If so, is there any way to increase this limit?
I create a JMSContext using this method:
public JMSContext buildContext() {
    if (this.context == null) {
        try {
            JmsConnectionFactory connection = JmsFactoryFactory.getInstance(JmsConstants.WMQ_PROVIDER).createConnectionFactory();
            String hostname = System.getenv("HOSTNAME");
            String appName = application.get();
            if (hostname != null && !hostname.trim().isEmpty()) {
                if (hostname.length() > 28) { // maximum length of the WMQ_APPLICATIONNAME property
                    appName = hostname.substring(hostname.length() - 28);
                } else {
                    appName = hostname;
                }
            }
            connection.setStringProperty(CommonConstants.WMQ_HOST_NAME, host.get());
            connection.setIntProperty(CommonConstants.WMQ_PORT, port.get());
            connection.setStringProperty(CommonConstants.WMQ_CHANNEL, channel.get());
            connection.setStringProperty(CommonConstants.WMQ_QUEUE_MANAGER, queueManager);
            connection.setStringProperty(JmsConstants.USERID, userid);
            connection.setStringProperty(JmsConstants.PASSWORD, password);
            connection.setBooleanProperty(JmsConstants.USER_AUTHENTICATION_MQCSP, isUserAuthenticationMqcsp);
            connection.setIntProperty(CommonConstants.WMQ_CONNECTION_MODE, CommonConstants.WMQ_CM_CLIENT);
            connection.setStringProperty(CommonConstants.WMQ_APPLICATIONNAME, appName);
            connection.setIntProperty(CommonConstants.WMQ_CLIENT_RECONNECT_OPTIONS, CommonConstants.WMQ_CLIENT_RECONNECT);
            LOGGER.info(String.format("buildContext - high-platform queue parameters [%s],[%s],[%s],[%s],[%s]",
                    host.get(), channel.get(), application.get(), queueManager, userid));
            this.context = connection.createContext();
            this.context.start();
            this.connectionFactory = connection;
        } catch (JMSException e) {
            LOGGER.error(String.format("buildContext - error - code: [%s], message: [%s].", e.getErrorCode(), e.getMessage()));
        }
    }
    return this.context;
}
I also configure my message listener in a separate thread, as follows:
@Override
public void run() {
    JMSContext context = factory.buildContext();
    this.creditoQueueConsumer = context.createConsumer(context.createQueue(Constantes.PREFIXO_QUEUE.concat(creditoQueue)));
    // mqAltaPlataformaCreditoListener is an implementation of MessageListener
    this.creditoQueueConsumer.setMessageListener(mqAltaPlataformaCreditoListener);
    context.start();
}
The application seems to work fine; the problem is that some listeners suddenly stop receiving messages.
I assume the MQ server distributes messages evenly across all connected listeners, but for some reason some of them stop receiving anything. The MQ server team mentioned that some connection timeouts are occurring on the server side (they shared a screenshot showing this).
Is there any configuration that can be done on the client to avoid those timeouts, or to prevent the service from ceasing to receive messages?
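One client-side option worth exploring (a sketch, not a confirmed fix): register an ExceptionListener on the JMSContext so a broken connection is at least detected and the consumer can be rebuilt, and bound the automatic reconnect with WMQ_CLIENT_RECONNECT_TIMEOUT. The property and the listener API come from the IBM MQ classes for JMS and JMS 2.0; the rebuild logic itself is only illustrative and reuses the connection and LOGGER names from the snippet above.
// Inside buildContext(), next to the existing property settings (sketch):
// bound how long the client keeps retrying the automatic reconnect (value in seconds)
connection.setIntProperty(CommonConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 600);

this.context = connection.createContext();
// detect broken connections instead of letting the listener go silent
this.context.setExceptionListener(e -> {
    LOGGER.error("JMS connection problem: " + e.getMessage());
    // illustrative: trigger your own rebuild of the context and consumer here
});
this.context.start();
This does not prevent the server-side timeouts themselves, but it stops a pod from silently holding on to a dead connection.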

Port Scanning with WebSockets

Recently a post was featured on Hacker News about websites abusing WebSockets to find open ports on the client's machine.
The post does not go into any details, so I decided to give it a try.
I opened a web server on port 8080 and tried running this script in Chrome's console:
function test(port) {
    try {
        var start = performance.now();
        var socket = new WebSocket('ws://localhost:' + port);
        socket.onerror = function (event) {
            console.log('error', performance.now() - start, event);
        };
        socket.addEventListener('close', function (event) {
            console.log('close', performance.now() - start, event);
        });
        socket.addEventListener('open', function (event) {
            console.log('open', performance.now() - start, event);
            socket.send('Hello Server!');
        });
        socket.addEventListener('message', function (event) {
            console.log('message', performance.now() - start, event);
        });
    } catch (ex) {
        console.log(ex);
    }
}
Indeed, Chrome logs a different error message (ERR_CONNECTION_REFUSED) when I try to connect to a port that is not open:
test(8081)
VM1886:3 WebSocket connection to 'ws://127.0.0.1:8081/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
And when I try to connect to a port that is open but is not listening to WebSockets (Unexpected response code: 200):
test(8080)
WebSocket connection to 'ws://127.0.0.1:8080/' failed: Error during WebSocket handshake: Unexpected response code: 200
But I can't find any way to access and read these errors in JavaScript.
Control flow does not reach the catch clause catch(ex) { console.log(ex) } and the event objects that Chrome passes to socket.onerror do not seem to be any different whether the port is open or not.
Timing attacks also don't seem to help, at least in Chrome. The delta between the new WebSocket() call and the onerror event seems to increase after calling test(...) a few times.
So is there actually a way for a web page to determine if a port is open on my computer?
The presentation slides linked below show this was already well known in 2016, and the lack of a timing difference in your tests suggests mitigations may have been applied upstream.
https://datatracker.ietf.org/meeting/96/materials/slides-96-saag-1/
It might only work on Windows:
https://blog.avast.com/why-is-ebay-port-scanning-my-computer-avast

AMS doesn't receive unpublish command SOMETIMES over rtmpt

This one has had me going for at least a week. I am trying to record a video file to AMS. It works great almost all of the time, except that in about 1 in 10 or 15 recording sessions I never receive 'NetStream.Unpublish.Success' on my NetStream from AMS when I close the stream. I am connecting to AMS using rtmpt when this happens; it seems to work fine over rtmp. Also, it seems like this only happens in Safari on Mac, but since it's so intermittent I don't really trust that observation. Here is my basic flow:
// just a way to use promises with netStatusEvents
private function netListener(code:String, netObject:*):Promise {
    var deferred:Deferred = new Deferred();
    var netStatusHandler:Function = function (event:NetStatusEvent):void {
        if (event.info.level == 'error') {
            deferred.reject(event);
        } else if (event.info.code == code) {
            deferred.resolve(netObject);
            // we want this to be a one time listener since the connection can swap between record/playback
            netObject.removeEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
        }
    };
    netObject.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
    return deferred.promise;
}

// set up for recording
private function initRecord():void {
    Settings.recordFile = Settings.uniquePrefix + (new Date()).getTime();
    // detach any existing NetStream from the video
    _view.video.attachNetStream(null);
    // dispose of existing NetStream
    if (_videoStream) {
        _videoStream.dispose();
        _videoStream = null;
    }
    // disconnect before connecting anew
    (_nc.connected ? netListener('NetConnection.Connect.Closed', _nc) : Promise.when(_nc))
        .then(function (nc:NetConnection):void {
            netListener('NetConnection.Connect.Success', _nc)
                .then(function (nc:NetConnection):void {
                    _view.video.attachCamera(_webcam);
                    // get new NetStream
                    _videoStream = getNetStream(_nc);
                    ExternalInterface.call("CTplayer." + Settings.instanceName + ".onRecordReady", true);
                }, function (error:NetStatusEvent):void {
                    ExternalInterface.call("CTplayer." + Settings.instanceName + ".onError", error.info);
                });
            _nc.connect(Settings.recordServer);
        }); // end ncClose
    if (_nc.connected) _nc.close();
}

// stop recording
private function stop():void {
    netListener('NetStream.Unpublish.Success', _videoStream)
        .then(function (ns:NetStream):void {
            ExternalInterface.call("CTplayer." + Settings.instanceName + ".onRecordStop", Settings.recordFile);
        });
    _videoStream.attachCamera(null);
    _videoStream.attachAudio(null);
    _videoStream.close();
}

// start recording
private function record():void {
    netListener('NetStream.Publish.Start', _videoStream)
        .then(function (ns:NetStream):void {
            ExternalInterface.call("CTplayer." + Settings.instanceName + ".onRecording");
        });
    _videoStream.attachCamera(_webcam);
    _videoStream.attachAudio(_microphone);
    _videoStream.publish(Settings.recordFile, "record"); // fires NetStream.Publish.Success
}
Update
I am now using a new NetConnection per connection attempt and am no longer forcing port 80 (see my 'answer' below). This has not solved my connection woes, only made the failures less frequent. Now, every week or so, I still get some random failure of AMS or Flash. Most recently someone made a recording and then Flash Player was unable to load the video for playback. The AMS logs show a connection attempt and then nothing; there should at least be a play event logged when I load the metadata. This is quite frustrating and impossible to debug.
I would try two distinct NetConnection objects, one for record and one for replay. This removes the complexity around adding/removing listeners and the connect/reconnect/disconnect logic, and would IMO be cleaner.
NetConnections are cheap, and I've always used one per task at hand. The other advantage is that you can connect both at startup, so the replay connection is ready instantly.
I've not seen a Promise used here before, so I'm not qualified to comment on whether that may cause a problem or not.
I think my issue was connecting over port 80. I originally thought I had to use port 80 with rtmpt, so I set my Settings.recordServer variable to rtmpt://myamsserver.net:80/app. I'm now using a shotgun approach where I try a bunch of port/protocol combos at once and pick the first one to connect. It almost always picks port 443 over rtmpt, which seems much faster and more stable all around than 80, and I haven't had this issue since. It could also be due to not reusing the same NetConnection object, as Stefan suggested; it's hard to say.

Web Socket Connection Disconnecting - ApacheAMQ

I'm trying to use STOMP with Apache AMQ, as I was hoping web sockets would give me better performance than the typical org.activemq.Amq AJAX connection.
Anyway, my activemq config file has the proper entry:
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
And I'm connecting to it via the following means:
function amqWebSocketConn() {
    var url = "ws://my.ip.address:61614/stomp";
    var client = Stomp.client(url);
    var connect_callback = function () {
        alert('connected to stomp');
        // define the message callback before subscribing, so it is not undefined when passed to subscribe
        var callback = function (message) {
            if (message.body) {
                alert("got message with body " + message.body);
            } else {
                alert("got empty message");
            }
        };
        client.subscribe("topic://MY.TOPIC", callback);
    };
    client.connect("", "", connect_callback);
}
When I first open up the web browser and navigate to http://localhost:8161/admin/connections.jsp, it shows the following:
Name                                   Remote Address          Active   Slow
ID:mymachine-58770-1406129136930-4:9   StompSocket_657224557   true     false
Shortly thereafter, it removes itself. Is there something else I need, such as a heartbeat, to keep the connection alive?
The following kept the connection up for the TCP AJAX connection:
var amq = org.activemq.Amq;
amq.init({
    uri: '/myDomain/amq',
    timeout: 50,
    clientId: (new Date()).getTime().toString()
});
I have faced a similar problem and solved it using this:
client.heartbeat.incoming = 0;
client.heartbeat.outgoing = 0;
You have to add these two lines before connect.
Even after this I have seen disconnections after 5-10 minutes if there are no incoming messages. To solve that, you have to implement the ondisconnect callback of the connect method.
client.connect('', '', connect_callback, function (frame) {
    // connection lost
    console.log(frame);
    // reconnect and subscribe again from here
});
This is successfully working in my application.

HttpClient times out before the specified timeout

I'm using HttpClient from WP8 to make a POST request. I know the call may take a long time, as I'm actually simulating slow network scenarios, so I set HttpClient.Timeout to 5 minutes.
However, I get a timeout at around 60s, so I believe the Timeout setting is not working.
I believe there is a related issue on WP, as described in this question:
HttpClient Portable returns 404 notfound on WP8.
They use a workaround, but it does not apply to my scenario; I actually do want to wait a long time.
My questions:
1) Is this a bug/issue of HttpClient for WP8, or am I not setting it properly?
2) Can you think of a workaround that still uses HttpClient?
I've read that HttpWebRequest may be an option, but I believe HttpClient should be ideal for this 'simple' scenario.
My code is simple:
private static async Task<HttpResponseMessage> PostAsync(Uri serverUri, HttpContent httpContent)
{
    var client = new HttpClient();
    client.Timeout = TimeSpan.FromMinutes(5);
    return await client.PostAsync(serverUri, httpContent).ConfigureAwait(false);
}
The server receives the request, and while it is processing it, the client aborts.
UPDATE: The HttpResponseMessage returned by HttpClient.PostAsync is this: "{StatusCode: 404, ReasonPhrase: '', Version: 0.0, Content: System.Net.Http.StreamContent, Headers: { Content-Length: 0 }}". As I said, the server is found and is receiving and processing the data.
After some searching and some tests, I've come to the conclusion that the problem is Windows Phone itself: it has a 60-second timeout (irrespective of the HttpClient) that, to my knowledge, cannot be changed. See http://social.msdn.microsoft.com/Forums/en-US/faf00a04-8a2e-4a64-b1c1-74c52cf685d3/httpwebrequest-60-seconds-timeout.
As I'm programming the server as well, I will try the advice given by Darin Rousseau in the link above, specifically to send an OK and then do some more processing.
UPDATE: The problem seems to be the Windows Phone emulator, as stated here:
http://social.msdn.microsoft.com/forums/wpapps/en-us/6c114ae9-4dc1-4e1f-afb2-a6b9004bf0c6/httpclient-doesnt-work-on-windows-phone?forum=wpdevelop. In my experience the TCP connection times out if it doesn't hear anything for 60s.
Therefore my solution is to use the HTTP header characters as a keep-alive. The first line of an HTTP response always starts with HTTP/1.0, so I send those characters one by one with a delay of less than 60s between them. Of course, once the real response is ready, everything that is left is sent right away. This buys some time; for instance, with a delay of 50s per character and 9 characters, we get about 450s.
This is a project for my degree, so I wouldn't recommend it for production.
By the way, I also tried other characters instead of a substring of the header, for instance the space character, but that results in an HTTP protocol violation.
This is the main part of the code:
private const string Header1 = @"HTTP/1.0 ";
private int _keepAliveCounter = 0;
private readonly object _sendingLock = new object();
private bool _keepAliveDone = true;

private void StartKeepAlive()
{
    Task.Run(() => KeepAlive());
}

/// <summary>
/// Keeps the connection alive by sending the first characters of the HTTP response at an interval.
/// This is a hack for Windows Phone 8, which needs responses within a 60s interval.
/// </summary>
private void KeepAlive()
{
    try
    {
        _keepAliveDone = false;
        _keepAliveCounter = 0;
        while (!_keepAliveDone && _keepAliveCounter < Header1.Length)
        {
            Task.Delay(TimeSpan.FromSeconds(50)).Wait();
            lock (_sendingLock)
            {
                if (!_keepAliveDone)
                {
                    var sw = new StreamWriter(OutputStream);
                    sw.Write(Header1[_keepAliveCounter]);
                    Console.Out.WriteLine("Wrote keep alive char '{0}'", Header1[_keepAliveCounter]);
                    _keepAliveCounter++;
                    sw.Flush();
                }
            }
        }
        _keepAliveCounter = 0;
        _keepAliveDone = true;
    }
    catch (Exception e)
    {
        // log the exception
        Console.Out.WriteLine("Error while sending keepalive: " + e.Message);
    }
}
Then, the actual processing happens in a different thread.
Comments and critics are appreciated.
It is possible that you are hitting the timeout of the underlying network stream. You can change this by doing the following:
var handler = new WebRequestHandler();
handler.ReadWriteTimeout = 5 * 60 * 1000;
var client = new HttpClient(handler);
client.Timeout = TimeSpan.FromMinutes(5);
return await client.PostAsync(serverUri, httpContent).ConfigureAwait(false);
The default on the desktop OS is already 5 minutes; however, it is possible that it has been reduced by default on Windows Phone.