Low logging level preventing shutdown hook from running properly - mysql

I am using the MariaDb4j library for my integration tests, and it registers a shutdown hook like this:
protected void cleanupOnExit() {
    String threadName = "Shutdown Hook Deletion Thread for Temporary DB " + dataDir.toString();
    final DB db = this;
    Runtime.getRuntime().addShutdownHook(new Thread(threadName) {
        @Override
        public void run() {
            // ManagedProcess DestroyOnShutdown ProcessDestroyer does
            // something similar, but it shouldn't hurt to be safe
            // rather than sorry and do it again ourselves here as well.
            try {
                // Stay quiet and don't log if stop() was already called before
                if (mysqldProcess != null && mysqldProcess.isAlive()) {
                    logger.info("cleanupOnExit() ShutdownHook now stopping database");
                    db.stop();
                }
            } catch (ManagedProcessException e) {
                logger.warn("cleanupOnExit() ShutdownHook: An error occurred while stopping the database", e);
            }
            if (dataDir.exists() && Util.isTemporaryDirectory(dataDir.getAbsolutePath())) {
                logger.info("cleanupOnExit() ShutdownHook quietly deleting temporary DB data directory: " + dataDir);
                FileUtils.deleteQuietly(dataDir);
            }
            if (baseDir.exists() && Util.isTemporaryDirectory(baseDir.getAbsolutePath())) {
                logger.info("cleanupOnExit() ShutdownHook quietly deleting temporary DB base directory: " + baseDir);
                FileUtils.deleteQuietly(baseDir);
            }
        }
    });
}
This was working fine.
But then I added Logback and created a Console appender.
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<Pattern>${defaultPattern}</Pattern>
</encoder>
</appender>
<root level="DEBUG">
<appender-ref ref="STDOUT" />
</root>
If I set the logging level to WARN or ERROR it still works fine, but when I set it to INFO or lower I get this exception:
Exception: java.lang.IllegalStateException thrown from the UncaughtExceptionHandler in thread "Shutdown Hook Deletion Thread for Temporary DB /var/folders/t5/lr8ytf257hb9_649cjp9hkn40000gn/T/MariaDB4j/data/3306"
"Shutdown Hook Deletion Thread for Temporary DB... " is the name of the thread registered in the first code snippet above.
The result is that I am left with a mysqld process running, which prevents the tests from running again: MariaDB4j complains about it and won't start a new database.
Once I kill the mysqld process I can run my tests again, but then the same thing happens.
I assume this is a JVM problem; I don't see how the logging level could prevent a shutdown hook from working properly.
I use MariaDB4j in my integration tests. When I run them with IntelliJ or Eclipse I do not get this error; I only get it when I run them with Gradle (as part of the build task).
What could be causing that and how to get around it?
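For what it's worth, the failure mode itself is easy to reproduce without MariaDB4j: if anything inside a shutdown hook throws (for example, a logging call against a context that has already been torn down), the remainder of that hook's cleanup never runs. A minimal stdlib-only sketch (the class name and exception message are made up for illustration):

```java
public class HookFailureDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread("failing-cleanup-hook") {
            @Override
            public void run() {
                // Simulates e.g. a logging call that blows up during shutdown.
                if (true) {
                    throw new IllegalStateException("logging context already stopped");
                }
                // Any cleanup below this point (stopping mysqld, deleting
                // temp directories) silently never executes.
                System.out.println("cleanup done");
            }
        });
        System.out.println("main exiting");
    }
}
```

Running this prints "main exiting", then the hook's exception is reported on stderr at shutdown, and the line after the throw is never reached.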

I had a similar problem. It was caused by a Gradle issue, which is described there.
It can be solved by downgrading Gradle to version 3.2 or upgrading to version 3.5.
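If you'd rather pin the build than rely on whatever Gradle happens to be installed, the wrapper can be pointed at a fixed release; for example, in gradle/wrapper/gradle-wrapper.properties (URL shown for the 3.5 binary distribution):

```
distributionUrl=https\://services.gradle.org/distributions/gradle-3.5-bin.zip
```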

Related

Adding Splunk logback appender prevents application termination

I have the following logback configuration, and I am using it in a very simple Java application that does nothing except log one line. When I uncomment the Splunk appender line, it doesn't let the application exit, even though the application is finished.
Is there a way to terminate all the logging threads so that the main application exits?
logback.xml
<appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>${splunkUrl}</url>
<token>${splunkToken}</token>
<source>${projectName}</source>
<host>${COMPUTERNAME}</host>
<sourcetype>batch_application_log:json</sourcetype>
<disableCertificateValidation>true</disableCertificateValidation>
<!--<messageFormat>json</messageFormat>-->
<!--<retries_on_error>1</retries_on_error>-->
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>"%msg"</pattern>
</layout>
</appender>
<root level="INFO">
<!--<appender-ref ref="SPLUNK"/>--> <!-- if I uncomment this line the application never exits -->
</root>
Java code
public class Main {
public static void main(String[] args) {
final Logger logger = LoggerFactory.getLogger(Main.class);
logger.info("******");
}
}
You could add a Logback shutdown hook; this will close all appenders and stop any active threads related to Logback.
For example:
<configuration debug="true">
    <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook">
        <!--
        the default value is 0 millis; I have included a non-default
        value here just to show you how it can be supplied
        -->
        <delay>10</delay>
    </shutdownHook>
    ...
</configuration>
With the shutdown hook in place and debug="true" Logback will emit its own log events like so ...
08:57:19,410 |-INFO in ch.qos.logback.core.hook.DelayingShutdownHook@2bafec4c - Sleeping for 10 milliseconds
08:57:19,421 |-INFO in ch.qos.logback.core.hook.DelayingShutdownHook@2bafec4c - Logback context being closed via shutdown hook
Note: there is no requirement to use debug="true"; I have only included that to show you how to verify that the shutdown hook has been executed.
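As an alternative to the XML shutdownHook, the Logback context can also be stopped programmatically at the end of main; LoggerContext.stop() flushes the appenders and stops their worker threads. This sketch assumes SLF4J is backed by Logback, so the cast is safe:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import ch.qos.logback.classic.LoggerContext;

public class Main {
    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(Main.class);
        logger.info("******");
        // Stop the Logback context explicitly so appender worker threads
        // (e.g. the Splunk HTTP appender's) don't keep the JVM alive.
        ((LoggerContext) LoggerFactory.getILoggerFactory()).stop();
    }
}
```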

How to properly run migrations and seed a docker MySql DB using Entity Framework Core

I implemented database migrations in my ASP.NET Core solution as recommended in the following issue: Pattern for seeding database with EF7 in ASP.NET 5
My solution is set up for working on Linux Docker, and the application depends on a MySql container that is configured in the docker-compose file and set up on the first run.
The migrations run in the Startup.Configure method as:
using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>().CreateScope())
{
var context = serviceScope.ServiceProvider.GetService<ApplicationDbContext>();
context.Database.Migrate();
context.EnsureSeedData();
}
But running the application for the first time always throws the following error:
An exception of type 'MySql.Data.MySqlClient.MySqlException' occurred in System.Private.CoreLib.ni.dll but was not handled in user code
Then if I wait a few seconds and re-launch the debug session, the code executes without problems and the first-run data is there.
Is there a way that it could wait for the DB server to be ready before running the migrations?
EDIT:
If I change the migration method for the one in this question: Cannot get the UserManager class
instead of the previous error I get this one:
An exception of type 'System.AggregateException' occurred in System.Private.CoreLib.ni.dll but was not handled in user code
Is there a way that it could wait for the DB server to be ready before running the migrations?
In your Program.Main, you could add code that attempts to open a connection to MySql, and loop until the connection opens successfully.
For example:
public static void Main()
{
    MySqlConnection connection;
    while (true)
    {
        try
        {
            connection = new MySqlConnection("Database=mysql; Server=server;User ID=user;Password=password");
            connection.Open();
            break;
        }
        // ex.Number == 1042 when the server isn't up yet, assuming you're using
        // MySql.Data and not some other MySql implementation
        catch (MySqlException ex) when (ex.Number == 1042)
        {
            Console.Error.WriteLine("Waiting for db.");
            Thread.Sleep(1000);
        }
    }
    // ... continue launching website
}

Qt - QSqlDatabase open() blocked?

I have a weird behaviour in my application using QSqlDatabase.
This is the simple code I'm using:
QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL", QString::number(this->m_id));
db.setHostName(SERVER_DATABASE_DATABASE_HOST);
db.setDatabaseName(SERVER_DATABASE_DATABASE_NAME);
db.setUserName(SERVER_DATABASE_USERNAME);
db.setPassword(SERVER_DATABASE_PASSWORD);
if( !db.open() ){
...
}
...
This snippet of code is executed inside the run() method of a QRunnable, and I have n (n ~ 20) of these tasks running asynchronously without problems (no duplicated DB connections, the connections are removed, etc.).
The problem is that, after many iterations, the execution crashes.
The crash is reproducible but not deterministic.
I tried to run the application in debug mode, but the debugger stops at the line where I call db.open(), without further information (no stack trace, no signal).
My system specification:
MySQL v5.7.17 (community edition)
Mac OSX 10.11.6
Qt 5.7.0
Any suggestion is much appreciated.

The web application [] appears to have started a thread named [Abandoned connection cleanup thread] com.mysql.jdbc.AbandonedConnectionCleanupThread

In the middle of development I closed my web app in my Eclipse IDE, and about a minute later I saw a WARNING in the Eclipse console.
WARNING: The web application [/Spring.MVC] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
Sep 06, 2014 8:31:55 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
WARNING: The web application [/Spring.MVC] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(Unknown Source)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:40)
Sep 06, 2014 8:32:00 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Sep 06, 2014 8:32:00 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Sep 06, 2014 8:32:03 PM org.hibernate.jpa.internal.util.LogHelper logPersistenceUnitInformation
INFO: HHH000204: Processing PersistenceUnitInfo [
name: personPU
...]
It has not caused any memory leak so far; I checked VisualVM and everything is working as usual. But as I searched more about this, I realized the warning is caused by the MySQL driver not releasing its resources (or not being closed properly), and I ended up in this post at SO about a related issue.
The OP is right that the answer of "don't worry about it" won't be sufficient. This warning bothers me because it may give me persistence problems in the future, and that worries me a lot. I tried the code the OP wrote, but I'm having a problem with which libraries I should use to make it work. This is what I have so far:
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.mysql.jdbc.AbandonedConnectionCleanupThread;

@WebListener
public class ContextFinalizer implements ServletContextListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(ContextFinalizer.class);

    public void contextInitialized(ServletContextEvent sce) {
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Deregister every JDBC driver this webapp registered, so Tomcat
        // doesn't have to forcibly unregister them on shutdown.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        Driver d = null;
        while (drivers.hasMoreElements()) {
            try {
                d = drivers.nextElement();
                DriverManager.deregisterDriver(d);
                LOGGER.warn(String.format("Driver %s deregistered", d));
            } catch (SQLException ex) {
                LOGGER.warn(String.format("Error deregistering driver %s", d), ex);
            }
        }
        try {
            // Stop the MySQL driver's cleanup thread as well.
            AbandonedConnectionCleanupThread.shutdown();
        } catch (InterruptedException e) {
            LOGGER.warn("SEVERE problem cleaning up: " + e.getMessage(), e);
        }
    }
}
I just want to know which libraries I need, or whether I'm using the right ones to implement this properly; I do not even know which Logger I should use. Thank you for any help.
See this answer. It seems that the MySQL driver should be in $TOMCAT/lib, shared between applications. Check that you are not including it with each application. At least it worked for me, and I have been able to remove the warning.
If you are using Maven, mark the dependency as provided.
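For example, the dependency with provided scope might look like this (the version number is illustrative):

```
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.32</version>
    <scope>provided</scope>
</dependency>
```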
UPDATE:
The root cause is that Tomcat has problems garbage-collecting the driver because it is registered in a singleton shared by several applications. Closing one application does not allow Tomcat to release the driver. See this answer.
For me, I stopped MySQL (service mysql stop) and then stopped Tomcat (./shutdown.sh in the bin folder), and the problem was fixed.
This works for me. The issue was with Elasticsearch in my case.
Send a DELETE request from Postman to the endpoint below and restart your application:
http://localhost:9200/*

How to fix LockObtainFailedException: Lock obtain timed out?

My integration tests are failing when I run them from a Gradle task.
org.springframework.data.solr.UncategorizedSolrException: SolrCore 'collection1' is not available due to init failure: Error opening new searcher; nested exception is org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: Error opening new searcher
at org.springframework.data.solr.core.SolrTemplate.execute(SolrTemplate.java:122)
at org.springframework.data.solr.core.SolrTemplate.saveDocuments(SolrTemplate.java:206)
at org.springframework.data.solr.core.SolrTemplate.saveDocuments(SolrTemplate.java:201)
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/opt/solr/example/solr/collection1/data/index/write.lock
When I run the integration tests directly in IntelliJ, they run successfully. Here is my bean definition for the embedded server; I added the destroyMethod and it had no effect.
@Bean(destroyMethod = "shutdown")
public SolrServer solrServer(org.apache.commons.configuration.Configuration configuration) {
    EmbeddedSolrServerFactory factory;
    try {
        factory = new EmbeddedSolrServerFactory(configuration.getString("solr.home"));
    } catch (ParserConfigurationException | IOException | SAXException e) {
        String errorMsg = "Encountered an exception while initializing the SolrServer bean.";
        log.error(errorMsg, e);
        throw new OrdersClientRuntimeException(errorMsg, e);
    }
    return factory.getSolrServer();
}
Here are the logs. Everything seems to be shutting down correctly.
2014-09-02 17:32:15.757 thread="Thread-6" level="DEBUG" logger="o.s.b.f.s.DisposableBeanAdapter" - Invoking destroy method 'shutdown' on bean with name 'solrServer'
2014-09-02 17:32:15.759 thread="Thread-8" level="DEBUG" logger="o.s.b.f.s.DefaultListableBeanFactory" - Retrieved dependent beans for bean 'solrDocumentRepository': [net.nike.orders.client.search.repository.DocumentRepositorySpec]
2014-09-02 17:32:15.759 thread="Thread-6" level="INFO " logger="org.apache.solr.core.CoreContainer" - Shutting down CoreContainer instance=179265569
2014-09-02 17:32:15.760 thread="Thread-8" level="DEBUG" logger="o.s.b.f.s.DisposableBeanAdapter" - Invoking destroy method 'shutdown' on bean with name 'solrServer'
2014-09-02 17:32:15.760 thread="Thread-8" level="INFO " logger="org.apache.solr.core.CoreContainer" - Shutting down CoreContainer instance=1604485329
2014-09-02 17:32:15.762 thread="Thread-6" level="INFO " logger="org.apache.solr.core.SolrCore" - [collection1] CLOSING SolrCore org.apache.solr.core.SolrCore@28da98e2
2014-09-02 17:32:15.769 thread="Thread-8" level="DEBUG" logger="o.a.h.i.c.PoolingClientConnectionManager" - Connection manager is shutting down
2014-09-02 17:32:15.769 thread="Thread-6" level="INFO " logger="org.apache.solr.update.UpdateHandler" - closing DirectUpdateHandler2{commits=23,autocommit maxTime=15000ms,autocommits=0,soft autocommits=2,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=33,cumulative_deletesById=32,cumulative_deletesByQuery=0,cumulative_errors=0,transaction_logs_total_size=5302,transaction_logs_total_number=10}
2014-09-02 17:32:15.771 thread="Thread-8" level="DEBUG" logger="o.a.h.i.c.PoolingClientConnectionManager" - Connection manager shut down
2014-09-02 17:32:15.773 thread="Thread-8" level="DEBUG" logger="o.a.h.i.c.PoolingClientConnectionManager" - Connection manager is shutting down
2014-09-02 17:32:15.774 thread="Thread-8" level="DEBUG" logger="o.a.h.i.c.PoolingClientConnectionManager" - Connection manager shut down
Here is my environment information:
Linux Mint 17
Solr 4.9.0
Solr Test Framework 4.9.0
Oracle Java 1.7
Spring Data Solr 1.2.2.RELEASE
IntelliJ 13.1.4
Gradle 1.12
Tests are developed in Spock
Any help would be greatly appreciated. Thanks!
I ran into the same thing with my tests, but the unlockOnStartup flag didn't work for me. I ended up changing the lock type (and leaving unlockOnStartup commented out):
<lockType>${solr.lock.type:single}</lockType>
<!--<unlockOnStartup>true</unlockOnStartup>-->
The note on the default lock type (native) makes me think that I don't have the server shutting down cleanly between test runs, but I haven't been able to confirm it yet :(
native = NativeFSLockFactory - uses OS native file locking.
Do not use when multiple solr webapps in the same
JVM are attempting to share a single index.
Just had a similar issue and got it fixed.
I use the EmbeddedSolrServer in my unit tests and dynamically create new cores during runtime.
When creating the CoreContainer be sure to call shutdown() after the tests.
Also be sure all SolrCore instances are closed after your tests.
Calling coreContainer.create(CoreDescriptor, name, ...) opens a SolrCore which you have to close manually.
When creating the EmbeddedSolrServer by passing the coreName, the SolrCore is not opened; open and close are handled by the EmbeddedSolrServer for each request/action.
OK, so I was able to fix the issue by setting
<unlockOnStartup>true</unlockOnStartup>
in the solrconfig.xml file.