I am using c3p0 version 0.9.1.2, and after an hour or so I see numUnclosedOrphanedConnections increasing slowly, about 1 per hour. The c3p0 docs say that
numUnclosedOrphanedConnections will only be non-zero following a call
to softReset(). It represents the number of Connections that were
checked out when a soft reset occurred and were therefore silently
excluded from the pool, and which remain unclosed by the client
application.
Why is c3p0 doing a soft reset? My c3p0 settings are:
initialPoolSize=1
minPoolSize=1
maxPoolSize=100
maxIdleTime=60
checkoutTimeout=5000
testConnectionOnCheckin=true
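(For context, the counter can be read via JMX or directly from the pool; a rough sketch, assuming a ComboPooledDataSource and the default database user:)
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolStats {
    // prints the same counter I am watching, for the default user
    public static void logOrphans(ComboPooledDataSource cpds) throws SQLException {
        int orphans = cpds.getNumUnclosedOrphanedConnectionsDefaultUser();
        System.out.println("numUnclosedOrphanedConnections = " + orphans);
    }
}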
Thanks to Steve for helping me fix it. This is how I did it.
Enable DEBUG-level logging for c3p0:
<logger name="com.mchange" additivity="false">
<level value="DEBUG" />
<appender-ref ref="C3p0Appender" />
</logger>
c3p0 settings:
debugUnreturnedConnectionStackTraces=true
# 30 sec is enough for me but you should change it for your case
unreturnedConnectionTimeout=30
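If you configure the pool in code rather than via a properties file, the equivalent would be roughly this (a sketch, assuming a ComboPooledDataSource; 30 seconds is the value from above and should be tuned for your own workload):
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0DebugConfig {
    public static ComboPooledDataSource configure() {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        // log the stack trace of the code that checked a connection out
        // whenever an unreturned connection is reaped
        cpds.setDebugUnreturnedConnectionStackTraces(true);
        // treat a checked-out connection as abandoned after 30 seconds
        cpds.setUnreturnedConnectionTimeout(30);
        return cpds;
    }
}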
And the keyword to search for in the c3p0 log file is: "Overdue resource check-out "
This logging is only enabled in the trunk version of c3p0. It should be included in the pre6 release.
How can I configure the size of the executor service that the Weld subsystem of Wildfly uses to execute asynchronous event observer methods? Specifically, I want to increase the size of the thread pool.
The Weld documentation has some config parameters, but points out that those can be ignored by integrators and that Wildfly is one that does. The Wildfly documentation, on the other hand, contains configuration options for nearly every subsystem except the Weld subsystem.
I'm using Wildfly 19.
The actual executor service that WFLY uses for Weld purposes is WeldExecutorServices; more precisely, for async observer notification, this method returns the executor.
With a little bit of digging I found that this is set in WeldSubsystemAdd, here. So it has some defaults, but it pulls the config from somewhere before falling back to the default.
Therefore, you should be able to adjust this by configuring the given WildFly subsystem, in this case Weld.
I have found that the documentation mentions certain options for the Weld subsystem, one of which is thread-pool-size. See https://docs.wildfly.org/19/wildscribe/subsystem/weld/index.html
I don't know exactly how to pass these options to WFLY because it has been a long time since I last used it, but there is a generic way to pass in options for any subsystem. Once you figure that out, you should be good to go.
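For example, with the jboss-cli (a sketch only; I haven't verified this against WildFly 19, so check the attribute name against the wildscribe page linked above):
# connect to a running server with bin/jboss-cli.sh --connect, then:
/subsystem=weld:write-attribute(name=thread-pool-size, value=20)
# some attributes only take effect after a reload
reload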
I have an appender configured like...
<appender name="DAILY_ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>logs/dm.log</File>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>logs/dm.%d{yyyyMMdd}.log</FileNamePattern>
</rollingPolicy>
<encoder>
<pattern>%m%n</pattern>
</encoder>
</appender>
...
<root level="info">
<appender-ref ref="DAILY_ROLLING" />
<appender-ref ref="SYSLOG" />
</root>
...which typically has the effect of logging current data to the dm.log file and, every day at midnight, rolling dm.log over into a file named for the date, dm.20130205.log. Yesterday, however, for the first time ever, this rollover did not occur. My dm.log file now has 2 days' worth of data and I am wondering what went wrong. I expected to find a RolloverFailure or some other indication of what went wrong lying in the dm.log file, but there is nothing there.
Where do I look to figure out what went wrong in logback? I have never seen this mechanism fail in either logback or log4j.
Try adding:
-Dlogback.statusListenerClass=ch.qos.logback.core.status.OnConsoleStatusListener
Or even implement your own status listener. Then try configuring a file roll every 10 seconds in a test system, and leave it to soak test.
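If you'd rather not pass a JVM flag, the same listener can be declared directly at the top of logback.xml (a minimal sketch):
<configuration>
  <!-- print logback's own internal status messages (config errors,
       rollover problems, failed renames, ...) to the console -->
  <statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />
  <!-- appenders and loggers as before -->
</configuration>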
It might well be a NAS issue. We've had our NAS fail silently in hard-to-detect ways. Also, if the file roll is in the middle of the night and it's an overnight batch processing system, then the NAS load can be high at that time.
What version of logback are you using?
Edit: The rollover can be skipped if the rename of the file fails. In my own code I would retry this, but in RenameUtil.rename() logback will just warn you via the status listener.
I saw these in a template configuration file:
<property>
<name>mapred.map.tasks</name>
<value>2</value>
<description>The default number of map tasks per job. Typically set
to a prime several times greater than number of available hosts.
Ignored when mapred.job.tracker is "local".
</description>
</property>
...
<property>
<name>mapred.reduce.tasks</name>
<value>1</value>
<description>The default number of reduce tasks per job. Typically set
to a prime close to the number of available hosts. Ignored when
mapred.job.tracker is "local".
</description>
</property>
I couldn't find any other reference, either online or in the Hadoop O'Reilly book, as to why these should be prime. Anyone have any ideas?
Thanks.
See HADOOP-5519; this is no longer in the configuration file as there was no (or little) reason for it.
I haven't seen it for at least two versions, and JIRA says it was resolved a couple of years ago.
I would like to use slf4j+logback for logging on an JBossAS7.
Additionally, I have to meet the following requirements:
I need to share one logback configuration / context within multiple deployed applications/EARs
I need to change the logback configuration on runtime without a redeploy/restart of the EARs
make as many log entries of the JBoss server as possible visible inside my logging configuration (e.g. deployment logs, etc.)
What I know now is that JBoss uses its own logging layer. For architectural reasons, I cannot use this. I would like to stick with only SLF4J as the logging API and Logback as the logging framework.
I would be happy to get some hints, how this could be solved.
Regards,
Lars
Lars,
The only way I can think of to do this would be to write a custom handler. While it's not very well documented at the moment, you can create custom java.util.logging.Handlers. You could write a wrapper, in a sense, around logback's configuration. I think they have a BasicConfigurator or something like that.
You register a custom handler like so:
<custom-handler name="logbackHandler" class="org.jboss.LogbackHandler" module="org.jboss.logback">
<level name="DEBUG"/>
<properties>
<property name="nameOfASetterMethod" value="the value to set" />
</properties>
</custom-handler>
<root-logger>
<level name="INFO"/>
<handlers>
<handler name="CONSOLE"/>
<handler name="FILE"/>
<handler name="logbackHandler"/>
</handlers>
</root-logger>
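A rough sketch of what such a handler could look like (the class name org.jboss.LogbackHandler above is just a placeholder, and delegating straight to slf4j/logback as below is my assumption, not an existing module):
package org.jboss;

import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackHandler extends Handler {

    @Override
    public void publish(LogRecord record) {
        // look up a logback-backed logger under the original logger name
        String name = record.getLoggerName() != null ? record.getLoggerName() : "jboss.server";
        Logger logger = LoggerFactory.getLogger(name);

        // note: j.u.l. parameter substitution is omitted in this sketch
        String message = record.getMessage();
        Throwable thrown = record.getThrown();

        int level = record.getLevel().intValue();
        if (level >= Level.SEVERE.intValue()) {
            logger.error(message, thrown);
        } else if (level >= Level.WARNING.intValue()) {
            logger.warn(message, thrown);
        } else if (level >= Level.INFO.intValue()) {
            logger.info(message, thrown);
        } else {
            logger.debug(message, thrown);
        }
    }

    @Override
    public void flush() {
        // logback flushes its own appenders
    }

    @Override
    public void close() {
        // nothing to release in this sketch
    }
}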
That said, there is probably no real need to do that. The application server logger will log the messages even if you are logging through a different façade. You can set up different file handlers if you want to write to your own files.
I realize logging in JBoss AS7 could really use some better documentation. I do plan on updating that when I find the time :-) And really I just need to make the time.
I am pretty sure that you can use slf4j+logback for your own applications within JBoss and completely bypass its logging. JBoss will continue logging all of its own log messages to its own logs, but your software will not connect to jboss-logging at all and will have its own logs. I have tried this under JBoss 6; we have not yet tried JBoss 7, so things may be different there, but I doubt it. Just make sure slf4j and logback jars are in your applications' classpaths, and you should be good.
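For the classpath part, that usually just means packaging the two jars inside the application, e.g. as Maven dependencies (the versions here are only examples):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.36</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version>
</dependency>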
If you search through the System properties available to you, you will find some jboss.* properties that may be useful in your logback configuration for finding a place to put your log files.
Personally, I wish JBoss would switch to using slf4j.
I'm using Hibernate and Spring with the DAO pattern (all Hibernate dependencies in a *DAO.java class). I have nine unit tests (JUnit) which create some business objects, save them, and perform operations on them; the objects are in a hash (so I'm reusing the same objects all the time).
My JUnit setup method calls my DAO.deleteAllObjects() method which calls getSession().createSQLQuery("DELETE FROM <tablename>").executeUpdate() for my business object table (just one).
One of my unit tests (#8/9) freezes. I presumed it was a database deadlock, because the Hibernate log file shows my delete statement last. However, debugging showed that it's simply HibernateTemplate.save(someObject) that's freezing. (Eclipse shows that it's freezing on HibernateTemplate.save(Object), line 694.)
Also interesting to note is that running this test by itself (not in the suite of 9 tests) doesn't cause any problems.
How on earth do I troubleshoot and fix this?
Also, I'm using @Entity annotations, if that matters.
Edit: I removed reuse of my business objects (use unique objects in every method) -- didn't make a difference (still freezes).
Edit: This started trickling into other tests, too (I can't run more than one test class without something freezing).
Edit: Breaking the freezing tests into two classes works. I'm going to do that for now, as shamefully un-DRY as it is to have two or more test classes unit-testing the same one business object class.
Transaction configuration:
<bean id="txManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:advice id="txAdvice" transaction-manager="txManager">
<!-- the transactional semantics... -->
<tx:attributes>
<!-- all methods starting with 'get' are read-only -->
<tx:method name="get*" read-only="true" />
<tx:method name="find*" read-only="true" />
<!-- other methods use the default transaction settings (see below) -->
<tx:method name="*" />
</tx:attributes>
</tx:advice>
<!-- my bean which is exhibiting the hanging behavior -->
<aop:config>
<aop:pointcut id="beanNameHere"
expression="execution(* com.blah.blah.IMyDAO.*(..))" />
<aop:advisor advice-ref="txAdvice" pointcut-ref="beanNameHere" />
</aop:config>
When the freeze happens, pause the application in the debugger, find the main thread and capture the stack trace. Poke through it until you find exactly which DB query is running and blocking in the DB.
You mention running the test on its own works OK, but running the full suite causes a problem. If this is the case, then I would guess that one of the prior tests still has a transaction open and holds locks on some rows that the blocking test is trying to access.
Do your tests run concurrently? If so stop doing that as they could interfere with each other.
Turn on the hibernate.show_sql option so you can see all the SQL being generated in the console.
At the point the freeze happens, can you find out which rows are locked in the DB? E.g. in SQL Server you can run sp_lock to see this, and sp_who to see which SQL process IDs are blocking one another.
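For the hibernate.show_sql suggestion above, a minimal sketch, assuming the session factory is defined in Spring XML (adjust to wherever your Hibernate properties actually live):
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="hibernateProperties">
        <props>
            <!-- echo every generated SQL statement to the console -->
            <prop key="hibernate.show_sql">true</prop>
        </props>
    </property>
</bean>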
A few things to check:
proper transaction management - it appears from your configuration that you have transactions around one of your DAOs. Generally it is advisable to have transactions around your service layer, not the DAO. But anyway - make sure you have a transaction around the DAO used by the test, or make the test @Transactional (if using Spring's JUnit runner; see the sketch after this list)
change the logging threshold to INFO for the datasource (c3p0, perhaps?). It reports deadlocks.
watch the database logs for deadlocks (if there is such an option)
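As a sketch of the first point, a Spring-runner test with a transaction (rolled back by default) around each test method; the class, context file and field names here are made up, only IMyDAO, deleteAllObjects() and the txManager bean come from the question:
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

import com.blah.blah.IMyDAO;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:applicationContext-test.xml")
// the transaction manager bean in the question is named txManager, not the
// default transactionManager, so point the test infrastructure at it
@TransactionConfiguration(transactionManager = "txManager")
@Transactional
public class MyDAOTest {

    @Autowired
    private IMyDAO dao;

    @Test
    public void saveAndReload() {
        // runs inside a transaction that Spring rolls back afterwards,
        // so no locks or data are left behind for the next test
        dao.deleteAllObjects();
    }
}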