According to the Ehcache documentation, starting with version 2.0, an Ehcache cache may participate in a JTA transaction based on the value of attribute transactionalMode on element <cache/>.
If this is true, then why does Ehcache, when it encounters this attribute in my Ehcache configuration file, throw the following exception complaining that the element does not allow the attribute "transactionalMode"?
Caused by: net.sf.ehcache.CacheException: Error configuring from zip:C:/Program Files/Oracle/Middleware/user_projects/domains/abstrack1/servers/AdminServer/tmp/_WL_user/_appsdir_middleware-ear-1.0-SNAPSHOT_ear/n8rga7/middleware-ejb-1.0-SNAPSHOT.jar!/ehcache.xml. Initial cause was Error configuring from input stream. Initial cause was null:35: Element <cache> does not allow attribute "transactionalMode".
at net.sf.ehcache.config.ConfigurationFactory.parseConfiguration(ConfigurationFactory.java:95)
at net.sf.ehcache.config.ConfigurationFactory.parseConfiguration(ConfigurationFactory.java:131)
at net.sf.ehcache.CacheManager.parseConfiguration(CacheManager.java:241)
at net.sf.ehcache.CacheManager.init(CacheManager.java:190)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:183)
at net.sf.ehcache.hibernate.EhCacheProvider.start(EhCacheProvider.java:128)
at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:183)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1291)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:814)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:732)
at org.springframework.orm.hibernate3.AbstractSessionFactoryBean.afterPropertiesSet(AbstractSessionFactoryBean.java:211)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1369)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335)
... 76 more
Here is a sample cache definition from my ehcache.xml file in which I've set transactionalMode to "xa":
<cache
name="com.db.spgit.abstrack.model.Security"
maxElementsInMemory="500"
eternal="false"
timeToIdleSeconds="300"
timeToLiveSeconds="86400"
overflowToDisk="false"
transactionalMode="xa" />
Turns out that Maven had also included Ehcache 1.2.3 in my project's EAR file, because Hibernate Ehcache Integration 3.3.2.GA requires Ehcache 1.2.3.
That means you need to exclude the transitive ehcache 1.2.3 dependency and keep only the 2.x version. It remains to be seen whether that works without problems against Hibernate 3.2.X, though.
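For reference, the transitive Ehcache 1.2.3 can be excluded in the pom.xml and the 2.x version declared explicitly. This is a sketch: the exact coordinates (artifact ids and versions) are assumptions, so verify them against the output of mvn dependency:tree before using it.

```xml
<!-- Sketch: exclude the transitive Ehcache 1.2.3 pulled in by the Hibernate
     Ehcache integration, then declare an Ehcache 2.x artifact explicitly.
     Coordinates below are assumptions; check them with mvn dependency:tree. -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-ehcache</artifactId>
  <version>3.3.2.GA</version>
  <exclusions>
    <exclusion>
      <groupId>net.sf.ehcache</groupId>
      <artifactId>ehcache</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>net.sf.ehcache</groupId>
  <artifactId>ehcache-core</artifactId>
  <version>2.0.1</version>
</dependency>
```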
I have an application deployed in WAS 9 using a custom JSF provider (set to DEFAULT in WAS). The jars are in a shared lib with an isolated class loader. Everything worked fine until we migrated from RichFaces to PrimeFaces. We use javax.faces 2.1.29, but for some reason PrimeFaces seems to detect that we are using 2.2 and makes a call to a method that only exists in 2.2 (getPassThroughAttributes). Looking at the stack, the versions in play seem correct, so I'm not sure why the 2.2 method call is being made. Has anyone run into this?
[3/19/19 17:19:07:671 CDT] 00000091 ServletWrappe E com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0014E: Uncaught service() exception root cause Faces Servlet: javax.servlet.ServletException: javax/faces/component/UIComponent.getPassThroughAttributes(Z)Ljava/util/Map; (loaded from file:/opt/IBM/WebSphere/AppServer_2/trunkLib/javax.faces-2.1.29-10.jar by
com.ibm.ws.classloader.CompoundClassLoader#abecddd0[library:trunkLib]
Local ClassPath: /opt/IBM/WebSphere/AppServer_2/trunkLib/javax.faces-2.1.29-10.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/httpclient-4.5.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/httpcore-4.4.4.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/commons-codec-1.11.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/hk2-api-2.4.0-b34.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/hk2-locator-2.4.0-b34.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/hk2-utils-2.4.0-b34.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/javax.annotation-api-1.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/jaxrs-ri-2.22.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/jersey-guava-2.22.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/validation-api-1.1.0.Final.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/classes:/opt/IBM/WebSphere/AppServer_2/trunkLib/javassist-3.23.1-GA.jar
Parent: com.ibm.ws.classloader.ProtectionClassLoader#a5c5ece8
and
Caused by: java.lang.NoSuchMethodError: javax/faces/component/UIComponent.getPassThroughAttributes(Z)Ljava/util/Map; (loaded from file:/opt/IBM/WebSphere/AppServer_2/trunkLib/javax.faces-2.1.29-10.jar by
com.ibm.ws.classloader.CompoundClassLoader#abecddd0[library:trunkLib]
Local ClassPath: /opt/IBM/WebSphere/AppServer_2/trunkLib/javax.faces-2.1.29-10.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/httpclient-4.5.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/httpcore-4.4.4.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/commons-codec-1.11.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/hk2-api-2.4.0-b34.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/hk2-locator-2.4.0-b34.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/hk2-utils-2.4.0-b34.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/javax.annotation-api-1.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/jaxrs-ri-2.22.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/jersey-guava-2.22.2.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/validation-api-1.1.0.Final.jar:/opt/IBM/WebSphere/AppServer_2/trunkLib/classes:/opt/IBM/WebSphere/AppServer_2/trunkLib/javassist-3.23.1-GA.jar
Parent: com.ibm.ws.classloader.ProtectionClassLoader#a5c5ece8
Delegation Mode: PARENT_LAST) called from class org.primefaces.util.Jsf22Helper (loaded from file:/opt/IBM/WebSphere/AppServer_2/profiles/server1/installedApps/loggerheadNode03Cell/trunk80_war.ear/trunk80.war/WEB-INF/lib/primefaces-6.2.jar
It looks like PrimeFaces searches the classpath for JSF 2.2 classes, and unfortunately in this case it must be finding those classes in the WAS-provided JSF 2.2 bundle. Moving primefaces-6.2 out of your application trunk80.war and into the isolated shared library trunkLib should resolve this.
First, note that the PrimeFaces source is open, so it is easily debugged. A simple search in the source (either locally in your IDE or on GitHub) will get you the source of Jsf22Helper.java. You can then inspect where it is called. Running in debug mode is easiest, but a search of the PrimeFaces repository on GitHub shows just one location, in CoreRenderer.java:
protected void renderDynamicPassThruAttributes(FacesContext context, UIComponent component) throws IOException {
    if (PrimeApplicationContext.getCurrentInstance(context).getEnvironment().isAtLeastJsf22()) {
        Jsf22Helper.renderPassThroughAttributes(context, component);
    }
}
Next you should inspect the
PrimeApplicationContext.getCurrentInstance(context).getEnvironment().isAtLeastJsf22()
And the getter for this returns a property that gets its boolean value from
atLeastJsf22 = LangUtils.tryToLoadClassForName("javax.faces.flow.Flow") != null;
Here you see that to determine the 'minimal' version, they try loading a class that is only present in JSF 2.2 or later. This means that regardless of whether you use parent-first class loading, and regardless of where you put PrimeFaces, if javax.faces.flow.Flow is on the classpath, PrimeFaces will think JSF 2.2 is available. It does not matter if JSF 2.1 is also on the classpath, even ahead of JSF 2.2, since this specific class is not present in JSF 2.1 and will always be loaded from the JSF 2.2 jars.
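The detection mechanism is easy to reproduce outside PrimeFaces. This is a minimal sketch of class-presence probing in the style of LangUtils.tryToLoadClassForName; VersionProbe and probeClass are hypothetical names, not PrimeFaces API:

```java
// Minimal sketch of class-presence version detection, mimicking the approach
// used by PrimeFaces' LangUtils.tryToLoadClassForName. VersionProbe and
// probeClass are hypothetical names for illustration.
public class VersionProbe {

    static boolean probeClass(String className) {
        try {
            // Resolves through the current class loader chain. Even with
            // PARENT_LAST delegation, a class that exists only in the
            // server-provided JSF 2.2 bundle is still found, because no
            // application jar shadows it.
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // javax.faces.flow.Flow exists only in JSF 2.2+, so this reports
        // whether any JSF 2.2 jar is reachable from the classpath.
        System.out.println("JSF 2.2 detected: " + probeClass("javax.faces.flow.Flow"));
    }
}
```

This is why relocating jars alone does not change the outcome: the probe succeeds as long as the class is loadable from anywhere in the delegation chain.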
To fix this you have three options:
Override PrimeFaces' PrimeEnvironment.java (e.g. by reading an explicit context parameter from web.xml) so you can manually override the version detection, and create a pull request with PrimeFaces so they can accept it as an improvement.
'Correct' the way the JSF version is overridden, similar to How to make websphere 8.5 use mojarra not myfaces.
Switch to JSF 2.2.
The latter would be best and might even work out of the box.
I know we can start spark-shell with the logs set to error, but is there an explanation for these warnings? The first few warnings make sense, since the paths in the duplicate-plugin warnings follow the symbolic link /usr/local/bin/spark.
scala> val people = spark.read.json("/path/to/people.json").show
18/11/12 09:41:33 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/bin/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/share/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
18/11/12 09:41:33 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/bin/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/share/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
18/11/12 09:41:37 WARN Query: Query for candidates of org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics and subclasses resulted in no possible candidates
Cannot add `SDS`.`SD_ID` as referenced FK column for `TBLS`
org.datanucleus.exceptions.NucleusException: Cannot add `SDS`.`SD_ID` as referenced FK column for `TBLS`
at org.datanucleus.store.rdbms.key.ForeignKey.setColumn(ForeignKey.java:232)
at org.datanucleus.store.rdbms.key.ForeignKey.addColumn(ForeignKey.java:207)
at org.datanucleus.store.rdbms.table.TableImpl.getExistingForeignKeys(TableImpl.java:1057)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/11/12 09:41:40 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
My Liferay 7 server was using SomeModule happily, until I deployed a new version of SomeModule which has an additional required field favoriteColor.
Now whenever I try to load the portlet Liferay says:
java.lang.RuntimeException: Unable to create snapshot class for interface some.SomeModuleConfiguration
at com.liferay.portal.configuration.metatype.bnd.util.ConfigurableUtil._createConfigurableSnapshot(ConfigurableUtil.java:77)
at com.liferay.portal.configuration.metatype.bnd.util.ConfigurableUtil.createConfigurable(ConfigurableUtil.java:51)
at some.SomeModule.activate(SomeModule.java:50)
...
Caused by: java.lang.IllegalStateException: Attribute is required but not set favoriteColor
at aQute.bnd.annotation.metatype.Configurable$ConfigurableHandler.invoke(Configurable.java:75)
at com.sun.proxy.$Proxy1220.favoriteColor(Unknown Source)
at some.SomeModuleConfigurationSnapshot407.<init>(Unknown Source)
The configuration UI for SomeModule does not show anything about favoriteColor.
How can I fix that, for instance by setting favoriteColor to its default value?
An alternative path would be using an OSGi configuration file to set defaults and missing values. You can use those files just as you do for the modules that ship with Liferay, e.g. the Elasticsearch config (check your osgi/configs directory).
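For example, a file dropped into osgi/configs and named after the configuration PID could supply the missing value. This is a sketch: the PID is taken from the interface name in the question, and the value "blue" is an assumption.

```properties
# osgi/configs/some.SomeModuleConfiguration.config  (hypothetical PID/path)
# Supplies the missing required attribute so the configuration snapshot
# can be created without the activation error.
favoriteColor="blue"
```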
If you are lucky enough to have the source code of the module, you can solve this problem like this:
Temporarily make the new field optional by replacing required = true with required = false in SomeModuleConfiguration.java.
Deploy the module.
Load the configuration page, save.
Restore to required = true.
Deploy again.
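Step 1 could look roughly like this. It is a sketch only: the stack trace suggests the configuration interface uses bnd metatype annotations (aQute.bnd.annotation.metatype), but the exact annotation attributes on your interface are an assumption.

```java
// Hypothetical sketch of SomeModuleConfiguration.java for step 1, assuming
// bnd metatype annotations as the aQute.bnd classes in the stack trace suggest.
import aQute.bnd.annotation.metatype.Meta;

public interface SomeModuleConfiguration {

    // Temporarily optional with an assumed default; restore required = true
    // after loading and saving the configuration page once.
    @Meta.AD(deflt = "blue", required = false)
    public String favoriteColor();
}
```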
Alternative answers welcome!
I am trying to deploy the Pulse Web Application to an external Tomcat. I get this error when deploying. How should I fix this?
org.springframework.beans.factory.NoSuchBeanDefinitionException: No
bean named 'org.springframework.security.authenticationManager' is
defined: Did you forget to add a gobal
<authentication-manager> element to your configuration (with child
<authentication-provider> elements)? Alternatively you can use the authentication-manager-ref
attribute on your <http> and <global-method-security> elements.
OK, this is fixed. For everyone also experiencing this: you must set the Spring profile "pulse.authentication.default" or it will not load the AuthenticationManager bean.
The overall issue is with the RowStore documentation, which says this is optional when in fact it is required.
http://rowstore.docs.snappydata.io/docs/manage_guide/pulse/quickstart.html#topic_795C97B46B9843528961A094EE520782
Step 4 says that configuring security is optional, when in fact you have to pass a Spring profile. Also, in the section "Authenticating Pulse Users", it says this is not a requirement.
To fix the issue I had to pass the Spring Profile "pulse.authentication.default" to activate the Bean in spring-security.xml and deploy pulse.war properly.
A better way for the SnappyData pulse.war to handle this in the future might be to use "!pulse.authentication.custom", which would always load the default AuthenticationManager bean as long as a custom one is not configured.
Example change for future to make it truly optional:
<beans:beans profile="!pulse.authentication.custom">
    <authentication-manager>
        <authentication-provider>
            <user-service>
                <user name="admin" password="admin" authorities="ROLE_USER" />
            </user-service>
        </authentication-provider>
    </authentication-manager>
</beans:beans>
Which version of Tomcat are you using?
Here is another thread on the same issue with TC authentication.
Else, can you just try Pulse in the "embedded mode" ?
Which version of SnappyData are you using?
You need to provide a pulse.properties file on the classpath. For details, see http://rowstore.docs.snappydata.io/docs/manage_guide/pulse/quickstart.html#topic_795C97B46B9843528961A094EE520782.
Let us know if you have any further problems.
I am getting OOM exception (Java heap space) for reduce child. I read in the documentation that increasing the value of mapred.reduce.child.java.opts to -Xmx512M or more would help. Since I am not the admin, I cannot change that value in mapred-site.xml. I would like to set that value only for my job through the java program. I tried setting it using Configuration class as follows, but that didn't work.
Configuration config = new Configuration();
config.set("mapred.reduce.child.java.opts", "-Xmx512M");
JobConf conf1 = new JobConf(config, this.getClass());
The version of Hadoop is 1.0.3
What is the proper way of setting the configuration values programmatically?
As @ThomasJungblut and @octo have pointed out, the procedure I mentioned in the question is the right way of doing it. The OOM exception still persists, so I will start a new thread instead of continuing here.