HikariCP debug output, is this normal? - swing

I've just started using HikariCP in a Swing application with Hibernate. I'm maintaining an old project, so there is a lot of crazy stuff going on in there. The connection leak detection feature helped me understand that sessions are closed only on certain events, for example when a user clicks the "Save" button. In other cases, there is a leak. I'm thinking the previous developers were trying to implement the "long conversations" unit of work, but they missed some (most) cases.
So my goal now is to find all the leaks and fix them. I'm planning to use the HikariCP debug output to help me do that. I don't know whether there is a wiki page in the HikariCP documentation that explains the debug output, but I was wondering if this output, taken while the application is idle, is normal, or whether there is something strange going on that I should investigate further:
2015-09-14 01:12:51 DEBUG HikariPool - After fill pool stats HikariPool-0 (total=10, inUse=3, avail=7, waiting=0)
2015-09-14 01:13:21 DEBUG HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=3, avail=7, waiting=0)
2015-09-14 01:13:21 DEBUG HikariPool - After cleanup pool stats HikariPool-0 (total=6, inUse=3, avail=3, waiting=0)
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#4fb38272
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#417465f4
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#454be902
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#496fcf
If this is normal behaviour, I would also like to know what these 4 connections are for, and why they are being closed at that point. Thanks.

This is all normal output. These connections are likely closing because they were idle for the "idleTimeout" period, or because they reached their maximum lifetime ("maxLifetime"). I do recommend updating to the latest version (2.4.x), which typically includes the reason a connection was closed in the debug log message.
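For reference, both timeouts, and the leak detection feature mentioned in the question, are plain HikariConfig properties. Here is a minimal sketch of a pool configured for leak hunting (the JDBC URL and credentials are hypothetical; the values shown for idleTimeout and maxLifetime are the defaults):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // hypothetical URL
        config.setUsername("user");
        config.setPassword("secret");
        // Log a warning (with stack trace) when a connection stays out of
        // the pool for more than 30 seconds -- the leak detection feature.
        config.setLeakDetectionThreshold(30_000);
        // Retire connections that sit idle for 10 minutes (the default)...
        config.setIdleTimeout(600_000);
        // ...and cap total connection lifetime at 30 minutes (the default).
        config.setMaxLifetime(1_800_000);
        return new HikariDataSource(config);
    }
}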

Related

fatal: the remote hung up unexpectedly. Can’t push things to my git repo

My first problem looked like this:
Writing objects: 60% (9/15)
It froze there for some time with a very low upload speed (in kB/s), then, after a long time, gave this message:
fatal: the remote end hung up unexpectedly
Everything up-to-date
I found what seemed to be a solution:
git config http.postBuffer 524288000
This created a new problem that looks like this:
MacBook-Pro-Liana:LC | myWebsite Liana$ git config http.postBuffer 524288000
MacBook-Pro-Liana:LC | myWebsite Liana$ git push -u origin master
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 4 threads
Compressing objects: 100% (14/14), done.
Writing objects: 100% (15/15), 116.01 MiB | 25.16 MiB/s, done.
Total 15 (delta 2), reused 0 (delta 0)
error: RPC failed; curl 56 LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
Please help, I have no idea what’s going on...
First, Git 2.25.1 made it clear that:
Users in a wide variety of situations find themselves with HTTP push problems.
Oftentimes these issues are due to antivirus software, filtering proxies, or other man-in-the-middle situations; other times, they are due to simple unreliability of the network.
However, a common solution found online is to increase http.postBuffer. This works for none of the aforementioned situations and is only useful in a small, highly restricted number of cases: essentially, when the connection does not properly support HTTP/1.1.
Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.
Second, it depends on your actual remote (GitHub? GitLab? Bitbucket? An on-premise server?). That remote server might have an incident in progress.
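And if raising http.postBuffer didn't help, which per the notes above is the usual outcome, you can revert it before investigating the network path or the remote (assuming you set it in the repository config as shown above):
git config --unset http.postBuffer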

Hono adapters cannot connect to enmasse

I'm currently installing Hono together with EnMasse on top of OpenShift/OKD. Everything goes fine except for the connection between the adapters and EnMasse. When I deploy the AMQP adapter, for example (it happens with the HTTP and MQTT adapters as well), I get the following logging from the Hono adapter:
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - starting attempt [#5] to connect to server [messaging-hono-default.enmasse-infra.svc:5672]
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - connecting to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
EnMasse logs the following:
2019-01-07 12:36:24.962160 +0000 SERVER (info) [160]: Accepted connection to 0.0.0.0:5672 from 10.128.0.1:44664
2019-01-07 12:36:24.962258 +0000 SERVER (info) [160]: Connection from 10.128.0.1:44664 (to 0.0.0.0:5672) failed: amqp:connection:framing-error No valid protocol header found
Additional info:
Hono version: 0.8.x
Enmasse version: 0.24.1
Can somebody tell me what I'm missing?
Thanks!
PS: if somebody with enough reputation could add a new "enmasse" tag, that would be nice.
I've found the solution to this problem.
First of all: the framing errors are not coming from Hono's connections. I already see this logging when EnMasse is installed without Hono, so I don't know where they come from. If somebody has an idea, please tell me.
As for the real problem: it turned out I needed to allow communication between the two projects (enmasse-infra and hono). This is covered in the OpenShift documentation.
TL;DR
Used solution: oc adm pod-network make-projects-global enmasse-infra. I used this because EnMasse needs to be reachable from all projects (including hono, but also ditto and our custom backend application).
Should also work (not tested): oc adm pod-network join-projects --to=enmasse-infra hono
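If you want to verify that the change took effect (this assumes the ovs-multitenant SDN plugin, which oc adm pod-network requires), a project made global should show up with NETID 0 in:
oc get netnamespaces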

DIY cartridge stops on git push

I've been developing an application for some weeks, and it's been running in an OpenShift small gear with DIY 0.1 + PostgreSQL cartridges for several days, including ~5 new deployments. Everything was fine, and each new deploy stopped and started everything in seconds.
Nevertheless, today pushing master as usual stopped the cartridge, and it won't start. This is the trace:
Counting objects: 2688, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (1930/1930), done.
Writing objects: 100% (2080/2080), 10.76 MiB | 99 KiB/s, done.
Total 2080 (delta 1300), reused 13 (delta 0)
remote: Stopping DIY cartridge
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Logging in with SSH and running the start action hook manually fails because the database is stopped. Restarting the gear makes everything work again.
The deployment itself can't be the cause, since it only adds a few lines of code: nothing about configuration or anything else that might break the boot.
Logs (at $OPENSHIFT_LOG_DIR) reveal nothing. Quota usage seems fine:
Cartridges Used Limit
---------------------- ------ -----
diy-0.1 postgresql-9.2 0.6 GB 1 GB
Any suggestions about what I could check?
Oh, dumb mistake. My last working deployment involved a change in the binary name, which now matches the gear name. The stop script, which finds the process with ps and grep, was therefore matching too much: it killed not only the application but also the SSH connection handling the push. Changing it fixed the issue.
Solution inspired by this blogpost.
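In case it helps anyone else: a more robust stop script matches the exact process name instead of grepping the full ps output, so it cannot catch the SSH session whose command line happens to contain the gear name. A sketch, with a hypothetical binary name:
pgrep -x myapp | xargs -r kill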

How can WebRTC reconnect to the same peer after disconnection?

Now I am working on a peer-to-peer chatting system based on WebRTC. This system can pair with any person who is listening on the peer list at the same time, and I have finished the basic functionality of real-time audio and video communication. But I have no idea how to reconnect to the same peer if it disconnects accidentally.
Thanks! As mido22 mentioned, iceConnectionState automatically changes back to connected if the connection was lost due to some transient network problem. I found the documentation here: https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/iceConnectionState, and it cleared up my confusion about how reconnection to the same peer recovers automatically on a flaky network.
Put an additional constraint:
1: cons.mandatory.add(new MediaConstraints.KeyValuePair("IceRestart", "true"));
Generate the SDP:
2: pc.createOffer(new WebRtcObserver(callbacks), cons);
Set the resulting SDP on the PeerConnection:
3: pc.setLocalDescription(new WebRtcObserver(callbacks), sdp);
4: Send it to the remote peer.
So steps 2-4 are the same as for a regular offer.
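Pulling those steps into one place, here is a minimal sketch against the Android WebRTC API (org.webrtc); the WebRtcObserver from the snippets above is replaced by an inline SdpObserver, and SignalingChannel is a hypothetical stand-in for whatever signaling transport the app already uses:
import org.webrtc.MediaConstraints;
import org.webrtc.PeerConnection;
import org.webrtc.SdpObserver;
import org.webrtc.SessionDescription;

public class IceRestartHelper {
    // Hypothetical stand-in for the app's own signaling transport.
    public interface SignalingChannel {
        void send(SessionDescription sdp);
    }

    public static void restartIce(PeerConnection pc, SignalingChannel signaling) {
        MediaConstraints cons = new MediaConstraints();
        // Step 1: request new ICE credentials so candidates are regathered.
        cons.mandatory.add(new MediaConstraints.KeyValuePair("IceRestart", "true"));
        // Step 2: create the offer exactly as for a regular (re)negotiation.
        pc.createOffer(new SdpObserver() {
            @Override public void onCreateSuccess(SessionDescription sdp) {
                pc.setLocalDescription(this, sdp); // step 3
                signaling.send(sdp);               // step 4: deliver to the remote peer
            }
            @Override public void onSetSuccess() { }
            @Override public void onCreateFailure(String error) { }
            @Override public void onSetFailure(String error) { }
        }, cons);
    }
}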

Webdriver (ChromeDriver) with Play?

I'm using ChromeDriver with the Play! framework. I have a unit test where ChromeDriver is instantiated and makes a GET request to my DynDNS URL. When the test starts, it opens Chrome and makes the request, but there is no response; it waits indefinitely. And when I close Chrome, the test runner fails with this exception:
A org.openqa.selenium.WebDriverException has been caught, Chrome did
not respond to 'NavigateToURL'. Elapsed time was 116077 ms. Request
details:
({"command":"NavigateToURL","navigation_count":1,"tab_index":0,"url":"http://myurl.dyndns.org:9000/test/","windex":0}).
(WARNING: The server did not provide any stacktrace information) Build
info: version: '2.5.0', revision: '13516', time: '2011-08-23 18:29:57'
System info: os.name: 'Windows Vista', os.arch: 'x86', os.version:
'6.0', java.version: '1.6.0_21' Driver info: driver.version:
RemoteWebDriver
When I do not use the unit test (and TestRunner) and start my test directly from a main method (also initializing Play! myself), the test runs with no problem. But I need JUnit's assert methods, and it's surely better that all tests run from the same module (I have many other unit and functional tests).
Any ideas how to fix this?
Thanks.
What happens is that http://localhost:9000/#tests fires off a web request to http://localhost:9000/#tests/<your_test_class>.class to run your test class, taking up one thread. Your test then fires off a request to http://localhost:9000/your_path, which blocks until the request for http://localhost:9000/#tests/<your_test_class>.class finishes. So you wait indefinitely, because by default the number of threads is one. If you increase play.pool above 1, your test suite will work properly.
See conf/application.conf
# Execution pool
# ~~~~~
# Default to 1 thread in DEV mode or (nb processors + 1) threads in PROD mode.
# Try to keep a low as possible. 1 thread will serialize all requests (very useful for debugging purpose)
# play.pool=3
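So in DEV mode you would uncomment that line and set it explicitly; any value above 1 resolves the deadlock described above, for example:
play.pool=3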
Note: one thing I found helpful in understanding how #tests works was turning on the Network tab in Chrome; then I could easily trace how it works, and it made more sense where the block was.