Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name: - mysql

I am trying to set up an Apache Ignite cache store using MySQL as external storage.
I have read all the official documentation about it and examined many other examples, but I can't make it run:
[2022-06-02 16:45:56:551] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[2022-06-02 16:45:56:874] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[2022-06-02 16:45:56:874] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[16:45:56] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[2022-06-02 16:45:56:898] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[2022-06-02 16:45:56:926] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Collision resolution is disabled (all jobs will be activated upon arrival).
[16:45:56] Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:56:927] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:57:204] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=b397c114-d34d-4245-9645-f78c5d184888]
[2022-06-02 16:45:57:242] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
[2022-06-02 16:45:57:253] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured data regions initialized successfully [total=4]
[2022-06-02 16:45:57:307] [ERROR] - 55333 - org.apache.ignite.logger.java.JavaLogger.error(JavaLogger.java:310) - Exception during start processors, node will be stopped and close connections
org.apache.ignite.IgniteCheckedException: Failed to start processor: GridProcessorAdapter []
at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1989) ~[ignite-core-2.10.0.jar:2.10.0]
Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name: StockConfigCache
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: CacheJdbcPojoStoreFactory [batchSize=512, dataSrcBean=null, dialect=org.apache.ignite.cache.store.jdbc.dialect.MySQLDialect@14993306, maxPoolSize=8, maxWrtAttempts=2, parallelLoadCacheMinThreshold=512, hasher=org.apache.ignite.cache.store.jdbc.JdbcTypeDefaultHasher@73ae82da, transformer=org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformer@6866e740, dataSrc=null, dataSrcFactory=com.anyex.ex.memory.model.CacheConfig$$Lambda$310/1421763091@31183ee2, sqlEscapeAll=false]
Caused by: java.io.NotSerializableException: com.anyex.ex.database.DynamicDataSource
Any advice or idea would be appreciated, thank you!
public static CacheConfiguration cacheStockConfigCache(DataSource dataSource, Boolean writeBehind)
{
    CacheConfiguration ccfg = new CacheConfiguration();
    ccfg.setSqlSchema("public");
    ccfg.setName("StockConfigCache");
    ccfg.setCacheMode(CacheMode.REPLICATED);
    ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ccfg.setIndexedTypes(Long.class, StockConfigMem.class);
    CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
    cacheStoreFactory.setDataSourceFactory((Factory<DataSource>) () -> dataSource);
    //cacheStoreFactory.setDialect(new OracleDialect());
    cacheStoreFactory.setDialect(new MySQLDialect());
    cacheStoreFactory.setTypes(JdbcTypes.jdbcTypeStockConfigMem(ccfg.getName(), "StockConfig"));
    ccfg.setCacheStoreFactory(cacheStoreFactory);
    ccfg.setReadFromBackup(false);
    ccfg.setCopyOnRead(true);
    if (writeBehind) {
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
    }
    return ccfg;
}

public static JdbcType jdbcTypeStockConfigMem(String cacheName, String tableName)
{
    JdbcType type = new JdbcType();
    type.setCacheName(cacheName);
    type.setKeyType(Long.class);
    type.setValueType(StockConfigMem.class);
    type.setDatabaseTable(tableName);
    type.setKeyFields(new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"));
    type.setValueFields(
        new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"),
        new JdbcTypeField(Types.NUMERIC, "stockinfoId", Long.class, "stockinfoId"),
        new JdbcTypeField(Types.VARCHAR, "remark", String.class, "remark"),
        new JdbcTypeField(Types.TIMESTAMP, "updateTime", Timestamp.class, "updateTime")
    );
    return type;
}

igniteConfiguration.setCacheConfiguration(
    CacheConfig.cacheStockConfigCache(dataSource, igniteProperties.getJdbc().getWriteBehind())
);

@Bean("igniteInstance")
@ConditionalOnProperty(value = "ignite.enable", havingValue = "true", matchIfMissing = true)
public Ignite ignite(IgniteConfiguration igniteConfiguration)
{
    log.info("igniteConfiguration info:{}", igniteConfiguration.toString());
    Ignite ignite = Ignition.start(igniteConfiguration);
    log.info("{} ignite started with discovery type {}", ignite.name(), igniteProperties.getType());
    return ignite;
}
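A note on the root cause, not from the original post: the lambda passed to setDataSourceFactory captures the dataSource method argument, so serializing the factory drags in the non-serializable com.anyex.ex.database.DynamicDataSource. A minimal sketch of one workaround, assuming the data source is registered as a Spring bean (the bean name "dataSource" here is an assumption): reference it by name so the factory only carries a String and each node resolves the bean locally.

// Inside cacheStockConfigCache(...): refer to the DataSource by bean name
// rather than capturing the instance in a lambda; only the name is serialized.
CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
cacheStoreFactory.setDataSourceBean("dataSource"); // assumed Spring bean name
cacheStoreFactory.setDialect(new MySQLDialect());
cacheStoreFactory.setTypes(JdbcTypes.jdbcTypeStockConfigMem(ccfg.getName(), "StockConfig"));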

Related

Spring Batch - Partitioning Timeout

I have to migrate millions of blob records from multiple MySQL databases to a physical location as files over a WAN network.
I chose to use Spring Batch and have already made it work. However, I am struggling with a timeout error that happens in random partitioned steps.
Here is some context:
There are multiple MySQL databases storing >10m records spanning 20 years.
The source tables are indexed on two composite varchar keys (there is no ID key), so I have to use an un-indexed date-time column to partition the records by year and week, keeping the number of records per partition reasonable at an average of 200. If there is any better advice, it would be welcome!
My issue: when the number of records per partition is high enough, the stepExecutors will randomly fail due to a timeout:
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms
I have done some tweaks with the DataSource properties and transaction properties, but no luck. Can I get some advice please? Thanks!
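For the partitioning itself, a minimal sketch of a week-range Partitioner of the kind used above (the class name, date bounds, and context keys are illustrative assumptions, not the original code):

import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

// Emits one execution context per week; the reader is expected to use
// fromDate/toDate in its WHERE clause against the date-time column.
public class DateRangePartitioner implements Partitioner {

    private final LocalDate start = LocalDate.of(2002, 1, 1); // assumed bounds
    private final LocalDate end = LocalDate.of(2022, 1, 1);

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> partitions = new HashMap<>();
        LocalDate from = start;
        int week = 0;
        while (from.isBefore(end)) {
            LocalDate to = from.plusWeeks(1);
            ExecutionContext ctx = new ExecutionContext();
            ctx.putString("fromDate", from.toString());
            ctx.putString("toDate", to.toString());
            partitions.put("week" + week++, ctx);
            from = to;
        }
        return partitions;
    }
}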
Terminal log:
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:309) ~[spring-jdbc-5.3.16.jar:5.3.16]
...
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
2022-03-05 10:05:43.146 ERROR 15624 --- [main] o.s.batch.core.step.AbstractStep : Encountered an error executing step managerStep in job mainJob
org.springframework.batch.core.JobExecutionException: Partition handler returned an unsuccessful step at ...
The job is marked as [FAILED] or [UNKNOWN] sometimes, and not restartable.
org.springframework.batch.core.partition.support.PartitionStep.doExecute(PartitionStep.java:112) ~[spring-batch-core-4.3.5.jar:4.3.5]
2022-03-05 10:05:43.213 INFO 15624 --- [main] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=mainJob]] completed with the following parameters: [{run.id=20}] and the following status: [FAILED] in 3m13s783ms
2022-03-05 10:05:43.590 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-2 - Shutdown initiated...
2022-03-05 10:05:43.624 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-2 - Shutdown completed.
2022-03-05 10:05:43.626 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-03-05 10:05:43.637 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
Datasource builder: I have tried to increase the connection timeout and pool size, but the settings do not seem to be applied.
#Bean(name = "srcDataSource")
// #ConfigurationProperties(prefix = "spring.datasource.hikari")
public HikariDataSource dataSource() {
HikariDataSource hikariDS = new HikariDataSource();
hikariDS.setDriverClassName("com.mysql.jdbc.Driver");
hikariDS.setJdbcUrl("jdbc:mysql://dburl");
hikariDS.setUsername("dbuser");
hikariDS.setPassword("dbpwd");
// properties below does not solve the problem
hikariDS.setMaximumPoolSize(16);
hikariDS.setConnectionTimeout(30000);
// hikariDS.addDataSourceProperty("serverName",
// getConfig().getString("mysql.host"));
// hikariDS.addDataSourceProperty("port", getConfig().getString("mysql.port"));
// hikariDS.addDataSourceProperty("databaseName",
// getConfig().getString("mysql.database"));
// hikariDS.addDataSourceProperty("user", getConfig().getString("mysql.user"));
// hikariDS.addDataSourceProperty("password",
// getConfig().getString("mysql.password"));
// hikariDS.addDataSourceProperty("autoReconnect", true);
// hikariDS.addDataSourceProperty("cachePrepStmts", true);
// hikariDS.addDataSourceProperty("prepStmtCacheSize", 250);
// hikariDS.addDataSourceProperty("prepStmtCacheSqlLimit", 2048);
// hikariDS.addDataSourceProperty("useServerPrepStmts", true);
// hikariDS.addDataSourceProperty("cacheResultSetMetadata", true);
return hikariDS;
}
ManagerStep:
@Bean
public Step managerStep() {
    return stepBuilderFactory.get("managerStep")
        .partitioner(workerStep().getName(), dateRangePartitioner())
        .step(workerStep())
        // .gridSize(52) // number of workers, not necessary with date partitioning
        .taskExecutor(new SimpleAsyncTaskExecutor())
        .build();
}
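A side note, an observation rather than something from the original post: SimpleAsyncTaskExecutor starts a new thread per partition with no upper bound, so every partition competes for the 16 Hikari connections at once and the losers hit the 30000ms timeout. A minimal sketch that caps concurrency below the pool size (the limit of 8 is an assumption to tune):

// Bound concurrent partitions so they cannot exhaust the connection pool.
SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor("partition-");
taskExecutor.setConcurrencyLimit(8); // assumed value; keep below maximumPoolSize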
WorkerStep: I also tried to increase the transaction timeout, but no luck.
@Bean
public Step workerStep() {
    DefaultTransactionAttribute attribute = new DefaultTransactionAttribute();
    attribute.setPropagationBehavior(Propagation.REQUIRED.value());
    attribute.setIsolationLevel(Isolation.DEFAULT.value());
    // attribute.setTimeout(30);
    attribute.setTimeout(1000000);
    return stepBuilderFactory.get("workerStep")
        .<Image, Image>chunk(10)
        .reader(jdbcPagingReader(null))
        .processor(new ImageItemProcessor())
        .writer(imageConverter())
        // .listener(wrkrStepExecutionListener)
        .transactionAttribute(attribute)
        .build();
}
Job builder:
@Bean
public Job mainJob() {
    return jobBuilderFactory.get("mainJob")
        // .incrementer(new RunIdIncrementer())
        .start(managerStep())
        // .listener()
        .build();
}

Cloud Foundry v2 in Grails

I have a Grails project and I want to deploy it on Cloud Foundry, but the console shows this:
[CONTAINER] n.spring.CloudProfileApplicationContextInitializer INFO Adding 'cloud' to list of active profiles
[CONTAINER] g.CloudPropertySourceApplicationContextInitializer INFO Adding 'cloud' PropertySource to ApplicationContext
[CONTAINER] udAutoReconfigurationApplicationContextInitializer INFO Adding cloud service auto-reconfiguration to ApplicationContext
[CONTAINER] ing.DataSourceCloudServiceBeanFactoryPostProcessor INFO Auto-reconfiguring beans of type javax.sql.DataSource
[CONTAINER] ing.DataSourceCloudServiceBeanFactoryPostProcessor INFO No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
[CONTAINER] lina.core.ContainerBase.[Catalina].[localhost].[/] SEVERE Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pluginManager' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NullPointerException: Cannot invoke method getAt() on null object
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Caused by: java.lang.NullPointerException: Cannot invoke method getAt() on null object
at java.lang.Thread.run(Thread.java:745)
[CONTAINER] org.apache.catalina.core.StandardContext SEVERE Error listenerStart
... 5 more
[CONTAINER] org.apache.catalina.util.SessionIdGeneratorBase INFO Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [135] milliseconds.
[CONTAINER] org.apache.catalina.core.StandardContext SEVERE Context [] startup failed due to previous errors
[CONTAINER] org.apache.catalina.loader.WebappClassLoader WARNING The web application [] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
[CONTAINER] org.apache.catalina.loader.WebappClassLoader WARNING The web application [] registered the JDBC driver [org.h2.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
[CONTAINER] org.apache.catalina.loader.WebappClassLoader WARNING The web application [] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:142)
I think it's a problem with the DB connection but I don't know how to fix it. I use MySQL in my app with the ClearDB MySQL Database Spark DB service plan.
My Datasource.groovy is:
import org.springframework.cloud.CloudFactory

def cloud
try {
    cloud = new CloudFactory().cloud
} catch (e) {}

dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
    username = "root"
    password = "root"
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
    //cache.region.factory_class = 'org.hibernate.cache.EhCacheProvider'
}
environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost:8080/bbddSRL"
            username = "root"
            password = "root"
        }
    }
    test {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost:8080/bbddSRL"
            username = "root"
            password = "root"
        }
    }
    production {
        dataSource {
            pooled = true
            dbCreate = 'update'
            driverClassName = 'com.mysql.jdbc.Driver'
            if (cloud) {
                def dbInfo = cloud.getServiceInfo('mysql-instance') // mysql-instance is the name of the ClearDB service
                url = dbInfo.jdbcUrl
                username = dbInfo.userName
                password = dbInfo.password
            } else {
                url = 'jdbc:mysql://localhost:8080/bbddSRLprod'
                username = 'root'
                password = 'root'
            }
        }
    }
}
Any answers or suggestions?
Try eliminating the top-level dataSource block where you define the dialect. You will still have to define the items from that block, but do it in the dataSource of each environment, as sketched below. I did this because any changes I made to the dataSource within an environment seemed to have no effect.
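A minimal sketch of what that looks like, with the settings from the top-level block repeated per environment (values copied from the question; the production URL is illustrative):

// DataSource.groovy without a top-level dataSource block; each environment
// defines everything it needs, including the dialect.
environments {
    production {
        dataSource {
            pooled = true
            driverClassName = "com.mysql.jdbc.Driver"
            dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
            dbCreate = "update"
            url = "jdbc:mysql://localhost:3306/bbddSRLprod" // illustrative
            username = "root"
            password = "root"
        }
    }
}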

How to start spring-boot app without depending on Database?

I am using "Spring-boot + Hibernate4 + mysql" for my application. As part of which I have a requirement where my sprint-boot app should be able to start even when database is down. Currently it gives the below exception when I try to start my spring boot app without DB being up.
I researched a lot and found out that this exception has to do with hibernate.temp.use_jdbc_metadata_defaults property.
I tried setting this in "application.yml" of spring boot but this property's value is not being reflected at runtime.
Exception Stack Trace:
2014-05-25 04:09:43.193 INFO 59755 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {4.0.4.Final}
2014-05-25 04:09:43.250 WARN 59755 --- [ main] o.h.e.jdbc.internal.JdbcServicesImpl : HHH000342: Could not obtain connection to query metadata : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
2014-05-25 04:09:43.263 INFO 59755 --- [ main] o.apache.catalina.core.StandardService : Stopping service Tomcat
Error starting ApplicationContext. To display the auto-configuration report enable debug logging (start with --debug)
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1553)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:973)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:750)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:120)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:648)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:311)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:909)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:898)
at admin.Application.main(Application.java:36)
Caused by: org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.determineDialect(DialectFactoryImpl.java:104)
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.buildDialect(DialectFactoryImpl.java:71)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:205)
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:89)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:206)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:178)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1885)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1843)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:850)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:843)
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.withTccl(ClassLoaderServiceImpl.java:399)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:842)
at org.hibernate.jpa.HibernatePersistenceProvider.createContainerEntityManagerFactory(HibernatePersistenceProvider.java:150)
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:336)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:318)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1612)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1549)
... 15 more
application.yml:
spring:
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: none
      naming_strategy: org.hibernate.cfg.DefaultNamingStrategy
      temp:
        use_jdbc_metadata_defaults: false
It was indeed a tough nut to crack.
After a lot of research, and actually debugging spring-boot, spring, hibernate, the tomcat pool, etc., I got it done.
I think this will save a lot of time for people trying to achieve this type of requirement.
Below are the settings required to achieve the following:
Spring Boot apps will start fine even if the DB is down or there is no DB.
Apps will pick up connections on the fly as the DB comes up, which means there is no need to restart the web server or redeploy the apps.
There is no need to restart Tomcat or redeploy the apps if the DB goes down from a running state and comes up again.
application.yml:
spring:
  datasource:
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/schema
    username: root
    password: root
    continueOnError: true
    initialize: false
    initialSize: 0
    timeBetweenEvictionRunsMillis: 5000
    minEvictableIdleTimeMillis: 5000
    minIdle: 0
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: none
      naming_strategy: org.hibernate.cfg.DefaultNamingStrategy
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
        hbm2ddl:
          auto: none
        temp:
          use_jdbc_metadata_defaults: false
I am answering here and will close the issue that you have cross-posted.
Any "native" property of the JPA implementation (Hibernate) can be set using the spring.jpa.properties prefix, as explained here.
I haven't looked much further into the actual issue here, but to answer this particular question, you can set that Hibernate key as follows:
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults
Adding this alone worked for me:
spring.jpa.properties.hibernate.dialect: org.hibernate.dialect.Oracle10gDialect
Just replace the last part with your database dialect.
The solution is really useful for me. Thanks
I used an "application.properties" file that includes the following lines:
app.sqlhost=192.168.10.11
app.sqlport=3306
app.sqldatabase=logs
spring.main.web-application-type=none
# Datasource
spring.datasource.url=jdbc:mysql://${app.sqlhost}:${app.sqlport}/${app.sqldatabase}
spring.datasource.username=user
spring.datasource.password=password
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.hbm2dll.auto = none
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults = false
spring.datasource.continue-on-error=true
spring.datasource.initialization-mode=never
spring.datasource.hikari.connection-timeout=5000
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.max-lifetime=1800000
spring.datasource.hikari.initialization-fail-timeout= -1
spring.jpa.hibernate.use-new-id-generator-mappings=true
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=true
spring.output.ansi.enabled=always
But you cannot use the @Transactional annotation at class level:
@Service
//@Transactional // do not use at class level to touch the Repository
@EnableAsync
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
public class LogService {
    ....

    @Async
    @Transactional // you can use it at function level
    public void deleteLogs() {
        logRepository.deleteAllBy ...
    }
}
Adding the following config should work:
spring.jpa.database-platform: org.hibernate.dialect.MySQL5Dialect

Issue accessing backend DB through openRDF Sesame

I have the following Java code to run a SPARQL query over the backend DB (PostgreSQL).
import rdfProcessing.RDFRepository;
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.manager.RepositoryManager;
import org.openrdf.sail.config.SailImplConfig;
import org.openrdf.sail.memory.config.MemoryStoreConfig;
import org.openrdf.repository.config.RepositoryImplConfig;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.repository.config.RepositoryConfig;
public class Qeryrdf {

    Connection connection;

    private static final String REPO_ID = "C:\\RDF_triples\\univData10m\\repositories\\SYSTEM\\memorystore.data";

    private static final String q1 = ""
        + "PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>"
        + "PREFIX ub:<http://univ.org#>"
        + "PREFIX owl:<http://www.w3.org/2002/07/owl#>"
        + "PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>"
        + " select distinct ?o ?p where"
        + "{ ?s rdf:type ?o."
        + "}";

    public static void main(String[] args) throws Exception {
        LocalRepositoryManager manager = new LocalRepositoryManager(new File("C:\\RDF triples\\univData1"));
        manager.initialize();
        try {
            Qeryrdf queryrdf = new Qeryrdf();
            queryrdf.executeQueries(manager);
        } finally {
            manager.shutDown();
        }
    }

    private void executeQueries(RepositoryManager manager) throws Exception {
        SailImplConfig backendConfig = new MemoryStoreConfig();
        RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
        String repositoryId = REPO_ID;
        RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
        manager.addRepositoryConfig(repConfig);
        Repository repo = manager.getRepository(repositoryId);
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        RDFRepository repository = new RDFRepository();
        String repoDir = "C:\\RDF triples\\univData1";
        repository.initializeRepository(repoDir);
        System.out.println("Executing the query");
        executeQuery(q1, con);
        con.close();
        repo.shutDown();
    }

    private void executeQuery(String query, RepositoryConnection con) {
        getConnection();
        try {
            TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
            int resultCount = 0;
            long time = System.currentTimeMillis();
            while (result.hasNext()) {
                result.next();
                resultCount++;
            }
            time = System.currentTimeMillis() - time;
            System.out.printf("Result count: %d in %fs.\n", resultCount, time / 1000.0);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void getConnection() {
        try {
            Class.forName("org.postgresql.Driver");
            connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/myDB01", "postgres", "aabbcc");
        } catch (Exception e) {
            e.printStackTrace();
            System.err.println(e.getClass().getName() + ": " + e.getMessage());
            System.exit(0);
        }
        System.out.println("The database opened successfully");
    }
}
And I got the following result:
16:46:44.546 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.578 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Reading data from C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data...
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data file read successfully
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.765 [main] DEBUG org.openrdf.sail.memory.MemoryStore - syncing data to file...
16:46:44.796 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data synced to file
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - React to commit on SystemRepository for contexts [_:node18j9mufr0x1]
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Processing modified context _:node18j9mufr0x1.
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Is _:node18j9mufr0x1 a repository config context?
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Reacting to modified repository config for C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Modified repository C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data has not been initialized, skipping...
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.contextaware.config.ContextAwareFactory
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.dataset.config.DatasetRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.http.config.HTTPRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.SailRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.ProxyRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sparql.config.SPARQLRepositoryFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.federation.config.FederationFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.ForwardChainingRDFSInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.DirectTypeHierarchyInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.CustomGraphQueryInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.memory.config.MemoryStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.nativerdf.config.NativeStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.rdbms.config.RdbmsStoreFactory
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Initializing NativeStore...
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Data dir is C:\RDF triples\univData1
16:46:44.970 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - NativeStore initialized
Executing the query
The database opened successfully
16:46:45.735 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.serql.SeRQLParserFactory
16:46:45.751 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.sparql.SPARQLParserFactory
Result count: 0 in 0.000000s.
My problem is:
1. I changed the SPARQL query many times but it still retrieves 0 rows.
2. Does OpenRDF Sesame connect to a backend DB like PostgreSQL, MySQL, etc.?
3. If so, does OpenRDF Sesame translate the SPARQL query to SQL and then bring results from the backend DB?
Thanks in advance.
First, answers to your specific questions, in order:
if the query gives no results, that means that either the repository over which you're executing it is empty, or the query matches no data in that repository. Since it looks like the way you set up and initialize your repository is completely wrong (see remarks below), it is probably empty.
in general, yes, Sesame can connect to a PostgreSQL or MySQL database for storage and query. However, your code does not do this, because you are not using a Sesame RDBMSStore as your SAIL storage backend, but a MemoryStore (which, as the name implies, is an in-memory database).
If you were using a Sesame PostgreSQL/MySQL store, then yes, it would translate SPARQL queries to SQL queries. But you're not using it. Also, the Sesame PostgreSQL/MySQL support is now deprecated - it's recommended not to use it, but instead a NativeStore or MemoryStore or any of the many available third-party Sesame store implementations.
More generally, looking at your code, it is unclear what you're trying to accomplish, and I can't believe your code actually compiles, let alone runs.
You're using a class RDFRepository in there somewhere, which doesn't exist in Sesame 2, and a method initializeRepository to which you give a directory, which also does not exist. It looks vaguely like how things worked in Sesame 1, but that version of Sesame has been out of commission for at least 6 years now.
Then you have a method getConnection which sets up a connection to a PostgreSQL database, but that method doesn't accomplish anything - it just creates a Connection object and then nothing is ever done with it.
I recommend that you go back to basics and have a good look through the Sesame documentation, especially the tutorial, and the chapter on Programming with Sesame, which explains how to create and manage repositories and how to work with them.
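For reference, a minimal sketch of the Sesame 2 pattern described above, assuming a repository with id "myRepo" (an illustrative id) already exists under the manager's base directory:

// Open an existing repository via the manager and run a single query;
// no new RepositoryConfig is added for a repository that already exists.
LocalRepositoryManager manager = new LocalRepositoryManager(new File("C:\\RDF triples\\univData1"));
manager.initialize();
Repository repo = manager.getRepository("myRepo"); // illustrative repository id
repo.initialize();
RepositoryConnection con = repo.getConnection();
try {
    TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL,
            "SELECT DISTINCT ?type WHERE { ?s a ?type }").evaluate();
    while (result.hasNext()) {
        System.out.println(result.next());
    }
} finally {
    con.close();
    repo.shutDown();
    manager.shutDown();
}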

MySQL Connection Timeout Issue - Grails Application on Tomcat using Hibernate and ORM

I have a small Grails application running on Tomcat in Ubuntu on a VPS. I use MySQL as my datastore and everything works fine unless I leave the application for more than half a day (8 hours?). I did some searching and apparently this is the default wait_timeout in mysql.cnf, so after 8 hours the connection will die but Tomcat won't know, and the next user who tries to view the site will see the connection failure error. Refreshing the page fixes this, but I want to get rid of the error altogether. For my version of MySQL (5.0.75) I only have my.cnf and it doesn't contain such a parameter; in any case, changing this parameter doesn't solve the problem.
This blog post seems to report a similar error, but I still don't fully understand what I need to configure to get this fixed, and I am hoping there is a simpler solution than another third-party library. The machine I'm running on has 256MB of RAM and I'm trying to keep the number of programs/services running to a minimum.
Is there something I can configure in Grails / Tomcat / MySql to get this to go away?
Thanks in advance,
Gav
From my Catalina.out;
2010-04-29 21:26:25,946 [http-8080-2] ERROR util.JDBCExceptionReporter - The last packet successfully received from the server was 102,906,722 milliseconds$
2010-04-29 21:26:25,994 [http-8080-2] ERROR errors.GrailsExceptionResolver - Broken pipe
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
...
2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed.
2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed.
2010-04-29 21:26:26,017 [http-8080-2] ERROR servlet.GrailsDispatcherServlet - HandlerInterceptor.afterCompletion threw exception
org.hibernate.exception.GenericJDBCException: Cannot release connection
at java.lang.Thread.run(Thread.java:619)
Caused by: java.sql.SQLException: Already closed.
at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:84)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:181)
... 1 more
Referring to this article, you have stale connections in your DBCP connection pool that are silently dropped by the OS or firewall.
The solution is to define a validation query and do a sanity check of the connection before you actually use it in your application.
In Grails this is actually done by modifying the grails-app/conf/spring/resources.groovy file and adding the following:
beans = {
    dataSource(BasicDataSource) {
        // run the evictor every 30 minutes and evict any connections older than 30 minutes
        minEvictableIdleTimeMillis = 1800000
        timeBetweenEvictionRunsMillis = 1800000
        numTestsPerEvictionRun = 3
        // test the connection while it's idle, before borrowing, and on return
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}
In Grails 1.3.x, you can modify the evictor values in the DataSource.groovy file to make sure pooled connections are validated while idle. This will make sure the MySQL server does not time out the connection.
production {
    dataSource {
        pooled = true
        // Other database parameters..
        properties {
            maxActive = 50
            maxIdle = 25
            minIdle = 5
            initialSize = 5
            minEvictableIdleTimeMillis = 1800000
            timeBetweenEvictionRunsMillis = 1800000
            maxWait = 10000
        }
    }
}
A quick way to verify this works is to modify the MySQL my.cnf configuration file: in the [mysqld] section, add a wait_timeout parameter with a low value.
Try increasing the number of open MySQL connections by putting the following in your DataSource.groovy:
dataSource {
    driverClassName = "com.mysql.jdbc.Driver"
    pooled = true
    maxActive = 10
    initialSize = 5
    // Remaining connection params
}
If you want to go the whole hog, try implementing a connection pool; here is a useful link on this.
For Grails 1.3.x, I had to add the following code to BootStrap.groovy:
def init = { servletContext ->
    def ctx = servletContext.getAttribute(ApplicationAttributes.APPLICATION_CONTEXT)
    // implement test-on-borrow on the underlying pooled data source
    def dataSource = ctx.dataSource
    dataSource.targetDataSource.setMinEvictableIdleTimeMillis(1000 * 60 * 30)
    dataSource.targetDataSource.setTimeBetweenEvictionRunsMillis(1000 * 60 * 30)
    dataSource.targetDataSource.setNumTestsPerEvictionRun(3)
    dataSource.targetDataSource.setTestOnBorrow(true)
    dataSource.targetDataSource.setTestWhileIdle(true)
    dataSource.targetDataSource.setTestOnReturn(false)
    dataSource.targetDataSource.setValidationQuery("SELECT 1")
}
I also had to import org.codehaus.groovy.grails.commons.ApplicationAttributes
Add these parameters to your dataSource:
testOnBorrow = true
testWhileIdle = true
testOnReturn = true
See this article for more information: http://sacharya.com/grails-dbcp-stale-connections/
Starting from Grails 2.3.6, the default configuration already has options for preventing connections from being closed by timeout.
These are the new defaults:
properties {
    // See http://grails.org/doc/latest/guide/conf.html#dataSource for documentation
    ....
    minIdle = 5
    maxIdle = 25
    maxWait = 10000
    maxAge = 10 * 60000
    timeBetweenEvictionRunsMillis = 5000
    minEvictableIdleTimeMillis = 60000
    validationQuery = "SELECT 1"
    validationQueryTimeout = 3
    validationInterval = 15000
    testOnBorrow = true
    testWhileIdle = true
    testOnReturn = false
    jdbcInterceptors = "ConnectionState;StatementCache(max=200)"
    defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED
}
What does your JDBC connection string look like? You can set an autoReconnect param in your data source config, e.g.
jdbc:mysql://hostname/mydb?autoReconnect=true