Logback MDC doesn't seem to work with custom class loaders - logback

I have a webapp using Spring 5.1.10, running on Jetty 9.4.20. Throughout the app, JCL is used for logging. Jetty is configured by enabling the jcl-slf4j module (to capture webapp and Spring messages) and the logging-logback module, and by editing the corresponding resources/logback.xml. In that configuration file a logger is defined whose pattern contains two MDC lookups: %mdc{instance:-internal} and %mdc{user:-default}. The MDC key instance is set by a Filter and user by a RequestInterceptor. Basically they work: when a logging statement is issued by some controller, the correct values for instance and user end up in the log file.
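For reference, here is a minimal sketch of how such an MDC key might be populated in a filter; the class name and the way the instance value is derived are illustrative assumptions, not the app's actual code:

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.web.filter.OncePerRequestFilter;

public class InstanceMdcFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        // %mdc{instance:-internal} falls back to "internal" whenever this key is absent.
        MDC.put("instance", request.getServerName()); // hypothetical way of deriving the instance
        try {
            filterChain.doFilter(request, response);
        } finally {
            MDC.remove("instance");
        }
    }
}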
The problem is that there is a controller that deals with a legacy part of the system. It looks up a class file, loads that class using a custom class loader, does some setup (setting some properties) and then executes a method that actually does the job. All loaded classes that emit log messages end up with both MDC keys at their default values (internal and default respectively), despite both values having been set correctly.
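A rough sketch of what that controller does, with hypothetical class and method names, assuming a URLClassLoader and reflective invocation:

import java.net.URL;
import java.net.URLClassLoader;

public class LegacyJobRunner {
    // Illustrative only: load a legacy class from a looked-up location with a child
    // class loader, configure it, then invoke the method that does the actual work.
    public void runLegacyJob(URL legacyClassesDir) throws Exception {
        try (URLClassLoader loader = new URLClassLoader(new URL[] { legacyClassesDir },
                Thread.currentThread().getContextClassLoader())) {
            Class<?> jobClass = Class.forName("legacy.SomeJob", true, loader); // hypothetical class
            Object job = jobClass.getDeclaredConstructor().newInstance();
            jobClass.getMethod("setProperty", String.class, String.class)
                    .invoke(job, "someKey", "someValue"); // the "some setup" step
            jobClass.getMethod("execute").invoke(job); // log statements here show only the default MDC values
        }
    }
}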
I have added log statements to the filter and the request interceptor, and it looks like they run on the same thread as the loaded class. I have also added a test log statement to the controller, which is emitted after the custom class is loaded but before its method executes. The result is that, despite everything executing within the same thread, MDC works in controllers, filters and interceptors but doesn't work in the loaded classes. That leads me to believe that class loading is somehow involved.
The question is: how can I get MDC to work within classes loaded by a custom class loader?

Related

How can I externalize IScheduledExecutorService to run tasks in an external Hazelcast cluster (Hazelcast 5.2) without using UserCodeDeployment?

I am working on externalizing our IScheduledExecutorService so I can run tasks on an external cluster. I am able to write a test and get the Runnable to actually run ONLY if I turn on user code deployment. If I then want to change this task at all and run the tests again, I get the following in my external cluster member's logs:
java.lang.IllegalStateException: Class com.mycompany.task.ScheduledTask is already in local cache and has conflicting byte code representation
I want to be able to change the task and have Hazelcast just handle the redeployment. I already do this kind of thing with our external maps, which can handle different versions of our objects using compact serialization.
Am I stuck using user code deployment for these functional objects? If I need to make a change to one, I have to change the class name and redeploy to production. I'm hoping to get this task right the first time and never have to do that, but I have a way of handling it if I do.
The cluster is already running in production and I'll have to add the following to each member
HZ_USERCODEDEPLOYMENT_ENABLED=true
and the appropriate client code (listed below) to enable this.
What I've done...
Added the following to my local Docker file:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and also, in the code that creates a Hazelcast client connecting to my external cluster, added:
ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig = new ClientUserCodeDeploymentConfig();
clientUserCodeDeploymentConfig.addClass("com.mycompany.task.ScheduledTask");
clientUserCodeDeploymentConfig.setEnabled(true);
clientConfig.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);
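For context, a sketch of the task class and how it might be submitted to the scheduled executor; the class and executor names mirror the ones in the logs above, everything else is an assumption:

import java.io.Serializable;
import java.util.concurrent.TimeUnit;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;

// The task must be deserializable on the members, which is why the class has to be
// available there (via user code deployment or the member classpath).
public class ScheduledTask implements Runnable, Serializable {
    @Override
    public void run() {
        // the actual work executed on the member
    }
}

// Submitting the task from the client:
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IScheduledExecutorService scheduler = client.getScheduledExecutorService("myScheduledExecutorService");
scheduler.scheduleAtFixedRate(new ScheduledTask(), 0, 5, TimeUnit.MINUTES);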
However, if I remove those two pieces, I get the following exception and a failing test; the cluster doesn't know about my class at all.
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.mycompany.task.ScheduledTask
Side Note:
We are already using compact serialization for several maps, and when I try to configure this Runnable task via compact serialization I get the error below. I don't think that's the right approach either.
[Scheduler: myScheduledExecutorService][Partition: 121][Task: 7afe68d5-3185-475f-b375-5a82a7088de3] Exception occurred during run
java.lang.ClassCastException: class com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord cannot be cast to class java.lang.Runnable (com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord is in unnamed module of loader 'app'; java.lang.Runnable is in module java.base of loader 'bootstrap')
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:49) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) ~[hazelcast-5.2.0.jar:5.2.0]

Grails JSON marshalling using introspection causes severe bottleneck on ClassLoader.loadClass()

I am using Grails 2.2.4 and have a controller endpoint which converts a domain object list to JSON. Under load (as little as 5 concurrent requests) the marshalling performance is very poor. Taking thread dumps, the threads are blocked on:
java.lang.ClassLoader.loadClass(ClassLoader.java:291)
There is a single marshaller registered to marshal all domain objects using reflection and introspection. Realizing that reflection and introspection are slower than direct method calls, I am still seeing unexpected behavior in that the class loader is called every time and, in turn, blocking occurs. An example stack trace is as follows:
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.ClassLoader.loadClass(ClassLoader.java:291)
- waiting to lock <785e31830> (a org.grails.plugins.tomcat.ParentDelegatingClassLoader)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.beans.Introspector.instantiate(Introspector.java:1470)
at java.beans.Introspector.findExplicitBeanInfo(Introspector.java:431)
at java.beans.Introspector.<init>(Introspector.java:380)
at java.beans.Introspector.getBeanInfo(Introspector.java:167)
at java.beans.Introspector.getBeanInfo(Introspector.java:230)
at java.beans.Introspector.<init>(Introspector.java:389)
at java.beans.Introspector.getBeanInfo(Introspector.java:167)
at java.beans.Introspector.getBeanInfo(Introspector.java:230)
at java.beans.Introspector.<init>(Introspector.java:389)
at java.beans.Introspector.getBeanInfo(Introspector.java:167)
at java.beans.Introspector.getBeanInfo(Introspector.java:230)
at java.beans.Introspector.<init>(Introspector.java:389)
at java.beans.Introspector.getBeanInfo(Introspector.java:167)
at org.springframework.beans.CachedIntrospectionResults.<init>(CachedIntrospectionResults.java:217)
at org.springframework.beans.CachedIntrospectionResults.forClass(CachedIntrospectionResults.java:149)
at org.springframework.beans.BeanWrapperImpl.getCachedIntrospectionResults(BeanWrapperImpl.java:324)
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:727)
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:721)
at org.springframework.beans.PropertyAccessor$getPropertyValue.call(Unknown Source)
at com.ngs.id.RestDomainClassMarshaller.extractValue(RestDomainClassMarshaller.groovy:203)
...
...
A simple benchmark hitting the same endpoint with the same parameters results in the loadClass call every time.
I was under the impression the classes would be at least cached by the class loader and not loaded on every method call to get the property to be marshaled.
The code to retrieve the property value is as follows:
BeanWrapper beanWrapper = PropertyAccessorFactory.forBeanPropertyAccess(domainObject);
return beanWrapper.getPropertyValue(property.getName());
Is there a configuration setting needed to ensure the classes are only loaded once? Or perhaps a different way to get the property that doesn't result in class loading every time? Or a more performant way to achieve this?
Writing a custom marshaller per domain class would avoid the reflection and introspection but would mean a lot of repeated code.
Appreciate any input.
So after much digging this is what I found out.
Using the BeanUtils.getPropertyDescriptors and getValue calls will always try to find a BeanInfo class describing the bean, using the class loader. In this case we don't provide BeanInfo classes for our Grails domain classes, so this call is redundant. I found some information saying you can provide a custom BeanInfoFactory to bypass this and exclude your packages, but I couldn't find how to configure it with Grails.
Also, searching the Spring Framework documentation, there is a configuration option, Introspector.IGNORE_ALL_BEANINFO, that tells CachedIntrospectionResults never to look up BeanInfo classes. However, this was not available in version 3.1.4 of Spring Framework, which was current for Grails 2.2.4; newer versions do appear to have this option.
So, when using BeanUtils, you can't bypass this initial lookup on the class loader. Subsequent lookups should, however, be cached by CachedIntrospectionResults. Unfortunately that doesn't happen in our scenario; there looks to be a bug in the test that decides whether the lookup is cacheable. See more on this below.
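For what it's worth, in the Spring versions that do have it, the switch is exposed through CachedIntrospectionResults as a system property. A minimal sketch, assuming one of those newer Spring versions rather than the 3.1.4 that ships with Grails 2.2.4:

// org.springframework.beans.CachedIntrospectionResults exposes the flag as a system property;
// it must be set before the first introspection happens, or passed as
// -Dspring.beaninfo.ignore=true on the JVM command line.
System.setProperty(CachedIntrospectionResults.IGNORE_BEANINFO_PROPERTY_NAME, "true");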
The fix was ultimately to fall back to pure reflection. Instead of:
beanWrapper.getPropertyValue(property.getName());
we now use:
PropertyDescriptor pd = BeanUtils.getPropertyDescriptor(domainObject.getClass(), property.getName())
pd.readMethod.invoke(domainObject)
Where the pd is cached.
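A minimal sketch of what that caching might look like, assuming a simple per-class/per-property map (the real marshaller code is not shown in the question):

import java.beans.PropertyDescriptor;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.BeanUtils;

public class CachedPropertyReader {
    // Cache descriptors so the Introspector/class-loader lookup happens only once per class and property.
    private final Map<String, PropertyDescriptor> cache = new ConcurrentHashMap<String, PropertyDescriptor>();

    public Object read(Object domainObject, String propertyName) throws Exception {
        String key = domainObject.getClass().getName() + "#" + propertyName;
        PropertyDescriptor pd = cache.get(key);
        if (pd == null) {
            // Assumes the property exists on the class; a null check would be needed otherwise.
            pd = BeanUtils.getPropertyDescriptor(domainObject.getClass(), propertyName);
            cache.put(key, pd);
        }
        return pd.getReadMethod().invoke(domainObject);
    }
}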
After fixing this, the profiler still showed a lack of caching in CachedIntrospectionResults for the out-of-the-box Grails marshaller. This was due to the poor caching implementation in CachedIntrospectionResults. The workaround was to add the correct class loader to the accepted class loaders in CachedIntrospectionResults:
import java.beans.Introspector;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.springframework.beans.CachedIntrospectionResults;

public class EnhanceCachedIntrospectionResultsAcceptedClassLoadersListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        // Accept the parent class loader so introspection results for classes it loads are cached.
        CachedIntrospectionResults.acceptClassLoader(Thread.currentThread().getContextClassLoader().getParent());
    }

    public void contextDestroyed(ServletContextEvent event) {
        CachedIntrospectionResults.clearClassLoader(Thread.currentThread().getContextClassLoader().getParent());
        Introspector.flushCaches();
    }
}
Note that it was required to add the parent class loader to the accepted class loader list rather than the current one. I'm not sure whether this is specific to Grails, but it fixed the issue; nor am I sure whether there may be a side effect to this fix.
In summary we went from 10 requests/sec in the original setup to 120 requests/sec after using direct reflection and fixing the CachedIntrospectionResults cache.
However, the real eye opener was that with a 1-to-1 marshaller per domain class we saw another 2x improvement in performance over the generic marshaller, where we test objects for whether they're instances of a class, etc. We're saving a lot of code with the generic marshaller, but there's a lot more work to do to get comparable performance to writing a 1-to-1 marshaller.
Hopefully this will be useful to someone else who runs into this ...

Grails JSON Marshalling Works after first running compile

I'm experiencing a discrepancy between the first compilation of a Grails app and the compilation that happens when a file changes while the app is running.
Background:
My app creates some spring beans from Spring LDAP (docs) using conf/spring/resources.groovy.
I have an LdapUser.groovy class in src/groovy (I'm using it similarly to a domain class, except it isn't in grails-app/domain as it doesn't map to a database table).
In BootStrap.groovy I register a JSON marshaller for LdapUser (using JSON.registerObjectMarshaller).
I have a controller with an index method that responds a list of LdapUser objects. This renders correctly in JSON (according to the marshaller).
With that background, here are the pieces of the problem:
When the show method, which responds a single LdapUser, gets called, I get an exception that LdapUser cannot be converted to grails.converters.JSON. (fair enough)
But, if I save the LdapUser.groovy file, thus invoking a recompile on the file while the app is running, the JSON marshaller suddenly works fine.
Before saving the LdapUser.groovy file, my controller has a reference to an LdapUserRepo (a class instantiated via an @EnableLdapRepositories annotation on the controller), but this reference becomes null after I save LdapUser.groovy. I'm not sure how this relates to the problem, as I was also able to reproduce it in a controller lacking an injected LdapUserRepo (but with the annotated controllers still in the app).
At one point I also added an asType method to the LdapUser class, which was called as expected before the save-invoked recompile. After the recompile, however, my asType method was no longer called and the JSON marshaller took over. (I was doing exception-worthy things in the asType that were throwing before the recompile and not after...)
My understanding of the problem is therefore:
Somehow the asType method of the LdapUser.groovy class is not being automatically generated on first compile when running the app, but is being generated on subsequent compiles.
The LdapUser class is tied to the LdapUserRepo in more ways than merely being a type the Repo uses, and the recompile is not reflecting that connection correctly.
Methods rendering lists of objects are somehow unaffected by the asType method. This leads me to believe that the JSON marshaller gets called directly on list elements (instead of via asType) when the list asType has been called (whether or not the "as" operation is implicit...).
My question then is:
What is the Grails compiler doing differently on run-app vs. the recompile that happens while the app is running that could be causing this behavior?
How can I restructure things to ensure it works properly out of the box?
If I need to RTFM, what would be the FM section? (My google-fu is sadly quite weak).
Note: this question is vaguely similar, but its answer isn't meaningful here:
Grails: Defining a JSON custom marshaller as static method in domain

Windsor 'Scope cache was already disposed' within Envers custom Revision Listener

Update: I think this is down to a Windsor configuration issue. Does anyone have any idea what I have not configured correctly with Windsor?
I am currently using Envers within a C# WebApi project. Windsor is used for IoC.
I have a custom RevisionEntity which adds a User property to audit the user who made the data change.
To ensure all configurations were correct, I started off with a "simple string here" being set in the NewRevision method:
public class AuditRevisionListener : IRevisionListener
{
    public void NewRevision(object revisionEntity)
    {
        ((AuditRevision)revisionEntity).User = "Simple string here";
    }
}
and all persisted as expected.
The next step is to assign a full User object, for which I need to obtain the UserServices:
public class AuditRevisionListener : IRevisionListener
{
    public void NewRevision(object revisionEntity)
    {
        var userServices = (IUserServices)GlobalConfiguration.Configuration.DependencyResolver.GetService(typeof(IUserServices));
        var user = userServices.GetRequestingUser();
        ((AuditRevision)revisionEntity).User = user;
    }
}
However, the DependencyResolver.GetService call is throwing the error:
"Cannot access a disposed object. Object name: 'Scope cache was already disposed. This is most likely a bug in the calling code.'. "
UPDATE
I have now created a demo project available at https://github.com/ScottFindlater/WindsorEnversIssue
On first setting up the solution all will run fine because the custom Envers RevisionListener is not performing any dependency resolving.
Run the solution, which performs a GET to the HomeController; this simply loads one User and modifies another:
Dependency resolving is shown to be working as there is an ActionFilter called DependencyResolverDoesWork which successfully resolves the UserServices.
Envers is shown to be working as the UserAudit table is populated.
To “turn on” the dependency resolving in the custom RevisionListener, navigate to the Domain NHibernate project, Auditing folder, AuditRevisionListener class, NewRevision method, and uncomment the 2 lines of code.
Do a full rebuild, then run the solution again, and the project will throw a runtime exception in the WindsorDependencyResolver class, GetService method, with “Cannot access a disposed object”; clicking the View Detail action expands this message to “{"Cannot access a disposed object.\r\nObject name: 'Scope cache was already disposed. This is most likely a bug in the calling code.'."}”.
The comment posted by Roger (thank you so much), which suggests changing the LifeStyle to Singleton, does work. However, this demo has been purposely kept simple, and the PerWebRequest lifestyle is needed because the ApplicationServices in the real project have contextual data injected, such as the requesting user, which is used to enforce security.
I am so stuck now, and any pointers/answers as to what I have set up wrong will be gratefully received. In addition, I know this has been posted on SO and the Envers forum; I WILL update the answer on both.
I think this is down to a Windsor configuration issue; does anyone have any idea what I have not configured correctly with Windsor?
I haven't tried to run your sample, but I think this is down to an interplay between the two HTTP modules defined in your web.config (https://github.com/ScottFindlater/WindsorEnversIssue/blob/master/API%20Endpoints/Web.config):
Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule - Controls the lifetime of "per web request" components
APIEndpoints.HttpModules.NHibernateSessionCoordinator - Opens a session and begins a transaction at the beginning of each web request, then commits the transaction and disposes the session at the end of the web request
It is at the point where you commit your transaction - at the end of the request, triggered by NHibernateSessionCoordinator, that any changes you've made to objects within your NHibernate ISession actually get written to the database. This is the point at which Envers does its stuff and, in turn, at which you attempt to resolve IUserService from your Windsor container. The exception is thrown because IUserService is registered with the "per web request" lifestyle and Windsor is treating the current web request as complete and has disposed any objects tied to the request.
Have you tried reversing the order in which the HttpModules are defined, e.g. NHibernateSessionCoordinator before PerWebRequestLifestyleModule? This will result in your NHibernate transaction being committed before per web request components are disposed.

How to log SQL queries to a log file with CakePHP

I have a CakePHP 1.2 application that makes a number of AJAX calls using the AjaxHelper object. The AjaxHelper makes a call to a controller function which then returns some data back to the page.
I would like to log the SQL queries that are executed by the AJAX controller functions. Normally, I would just turn the debug level up to 2 in config/core.php; however, this breaks my AJAX functionality because it causes the SQL queries to be appended to the output that is returned to the client side.
To get around this issue, I would like to be able to log any SQL queries performed to a log file. Any suggestions?
I found a nice way of adding this logging functionality at this link:
http://cakephp.1045679.n5.nabble.com/Log-SQL-queries-td1281970.html
Basically, in your cake/libs/model/datasources/dbo/ directory, you can make a subclass of the dbo that you're using. For example, if you're using the dbo_mysql.php database driver, then you can make a new class file called dbo_mysql_with_log.php. The file would contain some code along the lines of the following:
App::import('Core', array('Model', 'datasource', 'dbosource', 'dbomysql'));

class DboMysqlWithLog extends DboMysql {
    function _execute($sql) {
        $this->log($sql);
        return parent::_execute($sql);
    }
}
In a nutshell, this class modifies (i.e. overrides) the _execute function of the superclass to log the SQL query before doing whatever logic it normally does.
You can modify your app/config/database.php configuration file to use the new driver that you just created.
DebugKit is also a fantastic way to debug things like this: https://github.com/cakephp/debug_kit