Strange issue in Quarkus pod initialization after OpenShift cluster starts

I'm using quarkus-oidc to protect a resource through Keycloak. This is the code used:

@Path("/api")
public class NamasteResource {

    @Inject
    JsonWebToken jwt;

    @GET
    @Path("health")
    @Produces(MediaType.TEXT_PLAIN)
    public String health() {
        return "I'm ok";
    }

    @GET
    @RolesAllowed("USERS")
    @Path("namaste-secured")
    @Produces(MediaType.TEXT_PLAIN)
    public String namasteSecured() {
        String userName = jwt.getName();
        return "Hello " + userName;
    }
}
The health resource is used for the pod's readiness probe.
The issue is that when the OpenShift cluster starts and the pod is deployed, I get an internal server error and the application doesn't work anymore. This is the exception trace:
2021-06-24 01:59:44,457 ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (vert.x-eventloop-thread-0) HTTP Request to /api/health failed, error id: 80e18320-2973-48c6-a377-edfc0f1db56b-1: io.quarkus.oidc.OIDCException: Tenant configuration has not been resolved
at io.quarkus.oidc.runtime.OidcAuthenticationMechanism.resolve(OidcAuthenticationMechanism.java:61)
at io.quarkus.oidc.runtime.OidcAuthenticationMechanism.authenticate(OidcAuthenticationMechanism.java:40)
at io.quarkus.oidc.runtime.OidcAuthenticationMechanism_ClientProxy.authenticate(OidcAuthenticationMechanism_ClientProxy.zig:189)
at io.quarkus.vertx.http.runtime.security.HttpAuthenticator.attemptAuthentication(HttpAuthenticator.java:100)
at io.quarkus.vertx.http.runtime.security.HttpAuthenticator_ClientProxy.attemptAuthentication(HttpAuthenticator_ClientProxy.zig:157)
at io.quarkus.vertx.http.runtime.security.HttpSecurityRecorder$2.handle(HttpSecurityRecorder.java:101)
at io.quarkus.vertx.http.runtime.security.HttpSecurityRecorder$2.handle(HttpSecurityRecorder.java:51)
at io.vertx.ext.web.impl.RouteState.handleContext(RouteState.java:1038)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:137)
at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:132)
at io.quarkus.vertx.http.runtime.cors.CORSFilter.handle(CORSFilter.java:92)
at io.quarkus.vertx.http.runtime.cors.CORSFilter.handle(CORSFilter.java:18)
at io.vertx.ext.web.impl.RouteState.handleContext(RouteState.java:1038)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:137)
at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:132)
at io.vertx.ext.web.impl.RouterImpl.handle(RouterImpl.java:54)
at io.vertx.ext.web.impl.RouterImpl.handle(RouterImpl.java:36)
at io.quarkus.vertx.http.runtime.VertxHttpRecorder$9.handle(VertxHttpRecorder.java:426)
at io.quarkus.vertx.http.runtime.VertxHttpRecorder$9.handle(VertxHttpRecorder.java:423)
at io.quarkus.vertx.http.runtime.VertxHttpRecorder$1.handle(VertxHttpRecorder.java:149)
at io.quarkus.vertx.http.runtime.VertxHttpRecorder$1.handle(VertxHttpRecorder.java:131)
at io.vertx.core.http.impl.WebSocketRequestHandler.handle(WebSocketRequestHandler.java:50)
at io.vertx.core.http.impl.WebSocketRequestHandler.handle(WebSocketRequestHandler.java:32)
at io.vertx.core.http.impl.Http1xServerConnection.handleMessage(Http1xServerConnection.java:136)
at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366)
at io.vertx.core.impl.EventLoopContext.execute(EventLoopContext.java:43)
at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:229)
at io.vertx.core.net.impl.VertxHandler.channelRead(VertxHandler.java:164)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
at io.netty.handler.codec.http.websocketx.extensions.WebSocketServerExtensionHandler.channelRead(WebSocketServerExtensionHandler.java:101)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.vertx.core.http.impl.Http1xUpgradeToH2CHandler.channelRead(Http1xUpgradeToH2CHandler.java:109)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.vertx.core.http.impl.Http1xOrH2CHandler.end(Http1xOrH2CHandler.java:61)
at io.vertx.core.http.impl.Http1xOrH2CHandler.channelRead(Http1xOrH2CHandler.java:38)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried delaying the pod's initial health check, but I always get the exception the first time the pod is deployed.
If the pod is redeployed, everything works fine.
So, what should I do to make pod initialization succeed after the OpenShift cluster starts?
This is my application.properties file configuration:
quarkus.http.cors=true
quarkus.oidc.auth-server-url=http://keycloak-myproject.192.168.1.110.nip.io/auth/realms/secured-realm
quarkus.oidc.client-id=namaste
I'm using OpenShift 3.11 and Quarkus 1.13.6.Final.

It most likely has to do with Keycloak not being ready yet by the time the pod is deployed - quarkus-oidc does not use any special connection logic when deployed in OpenShift; it always attempts to connect to the URL set in quarkus.oidc.auth-server-url.
The way to handle it in 1.13.x is to use the quarkus.oidc.connection-delay property; for example, set it to 3M and quarkus-oidc will keep trying to connect for 3 minutes. Note there is a small race condition here: even once Keycloak becomes contactable, it may still not have finished loading a custom realm file, as a Quarkus user discovered recently - a PR will be opened soon to resolve that particular issue.
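As a concrete sketch of that workaround in application.properties (the 3M value comes from the example above; tune it to how long Keycloak typically takes to come up):

```properties
# Keep retrying the initial connection to Keycloak for up to 3 minutes
# at startup instead of failing after a single attempt.
quarkus.oidc.connection-delay=3M
```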
The best option is to try 2.0.0.CR3 - there quarkus-oidc will recover even if the connection to Keycloak failed at startup.
HTH

Related

Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name:

I am trying to set up an Apache Ignite cache store using MySQL as external storage.
I have read the official documentation about it and examined many other examples, but I can't make it run:
[2022-06-02 16:45:56:551] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[2022-06-02 16:45:56:874] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[2022-06-02 16:45:56:874] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[16:45:56] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[2022-06-02 16:45:56:898] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[2022-06-02 16:45:56:926] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Collision resolution is disabled (all jobs will be activated upon arrival).
[16:45:56] Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:56:927] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:57:204] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=b397c114-d34d-4245-9645-f78c5d184888]
[2022-06-02 16:45:57:242] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
[2022-06-02 16:45:57:253] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured data regions initialized successfully [total=4]
[2022-06-02 16:45:57:307] [ERROR] - 55333 - org.apache.ignite.logger.java.JavaLogger.error(JavaLogger.java:310) - Exception during start processors, node will be stopped and close connections
org.apache.ignite.IgniteCheckedException: Failed to start processor: GridProcessorAdapter []
at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1989) ~[ignite-core-2.10.0.jar:2.10.0]
Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name: StockConfigCache
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: CacheJdbcPojoStoreFactory [batchSize=512, dataSrcBean=null, dialect=org.apache.ignite.cache.store.jdbc.dialect.MySQLDialect#14993306, maxPoolSize=8, maxWrtAttempts=2, parallelLoadCacheMinThreshold=512, hasher=org.apache.ignite.cache.store.jdbc.JdbcTypeDefaultHasher#73ae82da, transformer=org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformer#6866e740, dataSrc=null, dataSrcFactory=com.anyex.ex.memory.model.CacheConfig$$Lambda$310/1421763091#31183ee2, sqlEscapeAll=false]
Caused by: java.io.NotSerializableException: com.anyex.ex.database.DynamicDataSource
Any advice or idea would be appreciated, thank you!
public static CacheConfiguration cacheStockConfigCache(DataSource dataSource, Boolean writeBehind)
{
    CacheConfiguration ccfg = new CacheConfiguration();
    ccfg.setSqlSchema("public");
    ccfg.setName("StockConfigCache");
    ccfg.setCacheMode(CacheMode.REPLICATED);
    ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ccfg.setIndexedTypes(Long.class, StockConfigMem.class);
    CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
    cacheStoreFactory.setDataSourceFactory((Factory<DataSource>) () -> dataSource);
    //cacheStoreFactory.setDialect(new OracleDialect());
    cacheStoreFactory.setDialect(new MySQLDialect());
    cacheStoreFactory.setTypes(JdbcTypes.jdbcTypeStockConfigMem(ccfg.getName(), "StockConfig"));
    ccfg.setCacheStoreFactory(cacheStoreFactory);
    ccfg.setReadFromBackup(false);
    ccfg.setCopyOnRead(true);
    if (writeBehind) {
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
    }
    return ccfg;
}

public static JdbcType jdbcTypeStockConfigMem(String cacheName, String tableName)
{
    JdbcType type = new JdbcType();
    type.setCacheName(cacheName);
    type.setKeyType(Long.class);
    type.setValueType(StockConfigMem.class);
    type.setDatabaseTable(tableName);
    type.setKeyFields(new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"));
    type.setValueFields(
        new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"),
        new JdbcTypeField(Types.NUMERIC, "stockinfoId", Long.class, "stockinfoId"),
        new JdbcTypeField(Types.VARCHAR, "remark", String.class, "remark"),
        new JdbcTypeField(Types.TIMESTAMP, "updateTime", Timestamp.class, "updateTime")
    );
    return type;
}

igniteConfiguration.setCacheConfiguration(
    CacheConfig.cacheStockConfigCache(dataSource, igniteProperties.getJdbc().getWriteBehind())
);

@Bean("igniteInstance")
@ConditionalOnProperty(value = "ignite.enable", havingValue = "true", matchIfMissing = true)
public Ignite ignite(IgniteConfiguration igniteConfiguration)
{
    log.info("igniteConfiguration info:{}", igniteConfiguration.toString());
    Ignite ignite = Ignition.start(igniteConfiguration);
    log.info("{} ignite started with discovery type {}", ignite.name(), igniteProperties.getType());
    return ignite;
}
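The NotSerializableException above comes from the lambda `() -> dataSource` capturing the live DynamicDataSource, which Ignite then tries to serialize as part of the cache configuration. One general way around this is a factory that holds only serializable state (such as a JDBC URL) and builds or looks up the DataSource inside create(). A minimal standalone sketch of that pattern (DataSourceFactory here is my stand-in for javax.cache.configuration.Factory, and the create() body is a placeholder):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for javax.cache.configuration.Factory, which Ignite serializes.
interface DataSourceFactory extends Serializable {
    Object create();
}

public class SerializableFactoryDemo {

    // Broken pattern from the question: a lambda capturing the live DataSource
    // drags the non-serializable object into the cache configuration.
    // Fixed pattern: capture only serializable state (the JDBC URL) and build
    // the DataSource inside create(), on whichever node deserializes the factory.
    static DataSourceFactory fixedFactory(final String jdbcUrl) {
        return new DataSourceFactory() {
            @Override
            public Object create() {
                // Placeholder: a real implementation would construct e.g. a
                // MysqlDataSource configured with jdbcUrl, or look up the bean
                // from the local Spring context.
                return "datasource-for-" + jdbcUrl;
            }
        };
    }

    // Round-trips the object through Java serialization, which is essentially
    // the check Ignite performs when validating the cache store factory.
    static byte[] serialize(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException("not serializable: " + e, e);
        }
    }

    public static void main(String[] args) {
        byte[] bytes = serialize(fixedFactory("jdbc:mysql://localhost/stock"));
        System.out.println("serialized factory: " + bytes.length + " bytes");
    }
}
```

The point is only that the factory itself must survive serialization; how create() obtains the real DataSource depends on your deployment.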

Thingsboard: Fails to read valid JSON payload when timestamp is in ISO 8601 format

I send this valid JSON to TB CE edition, and it fails to read it:
mosquitto_pub -d -q 1 -h "192.168.0.108" -t "device/sck/ybuers/readings" -i xxxxx -m
{"ts":"2021-11-08T16:17Z","value1":"99","value2":"24"}
If I send (just changing the ts format to UNIX epoch style):
mosquitto_pub -d -q 1 -h "192.168.0.108" -t "device/sck/ybuers/readings" -i xxxxx -m
{"ts":"12345678910","value1":"99","value2":"24"}
it works.
Is this a big limitation of the platform, or am I missing something basic?
I'm working with TB CE v3.3.1 on Windows.
I paste the error from /var/log/thingsboard below.
Thank you!
2021-11-09 15:25:23,583 [nioEventLoopGroup-4-2] INFO o.t.s.t.mqtt.MqttTransportHandler - [d13716f2-1a55-4056-93c8-11b81bc7794b] Processing connect msg for client: ybuers!
2021-11-09 15:25:23,583 [nioEventLoopGroup-4-2] INFO o.t.s.t.mqtt.MqttTransportHandler - [d13716f2-1a55-4056-93c8-11b81bc7794b] Processing connect msg for client with user name: null!
2021-11-09 15:25:23,639 [DefaultTransportService-28-60] INFO o.t.s.t.mqtt.MqttTransportHandler - [d13716f2-1a55-4056-93c8-11b81bc7794b] Client connected!
2021-11-09 15:25:23,644 [nioEventLoopGroup-4-2] WARN o.t.s.t.mqtt.MqttTransportHandler - [d13716f2-1a55-4056-93c8-11b81bc7794b] **Failed to process publish msg [device/sck/ybuers/readings][1]**
org.thingsboard.server.common.transport.adaptor.AdaptorException: com.google.gson.JsonSyntaxException: **com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 18 path $.t**
at org.thingsboard.server.transport.mqtt.adaptors.JsonMqttAdaptor.convertToPostTelemetry(JsonMqttAdaptor.java:67)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.processDevicePublish(MqttTransportHandler.java:343)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.processPublish(MqttTransportHandler.java:298)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.processRegularSessionMsg(MqttTransportHandler.java:255)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.lambda$processMsgQueue$0(MqttTransportHandler.java:249)
at org.thingsboard.server.transport.mqtt.session.DeviceSessionCtx.tryProcessQueuedMsgs(DeviceSessionCtx.java:181)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.processMsgQueue(MqttTransportHandler.java:249)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.enqueueRegularSessionMsg(MqttTransportHandler.java:241)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.processMqttMsg(MqttTransportHandler.java:183)
at org.thingsboard.server.transport.mqtt.MqttTransportHandler.channelRead(MqttTransportHandler.java:156)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 18 path $.t
at com.google.gson.internal.Streams.parse(Streams.java:60)
at com.google.gson.JsonParser.parse(JsonParser.java:84)
at com.google.gson.JsonParser.parse(JsonParser.java:59)
at com.google.gson.JsonParser.parse(JsonParser.java:45)
at org.thingsboard.server.transport.mqtt.adaptors.JsonMqttAdaptor.convertToPostTelemetry(JsonMqttAdaptor.java:65)
... 30 common frames omitted
Caused by: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 18 path $.t
at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1567)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:495)
at com.google.gson.stream.JsonReader.hasNext(JsonReader.java:418)
at com.google.gson.internal.bind.TypeAdapters$29.read(TypeAdapters.java:742)
at com.google.gson.internal.bind.TypeAdapters$29.read(TypeAdapters.java:718)
at com.google.gson.internal.Streams.parse(Streams.java:48)
... 34 common frames omitted
2021-11-09 15:25:23,645 [nioEventLoopGroup-4-2] INFO o.t.s.t.mqtt.MqttTransportHandler - [d13716f2-1a55-4056-93c8-11b81bc7794b] Closing current session due to invalid publish msg [device/sck/ybuers/readings][1]
2021-11-09 15:25:23,646 [nioEventLoopGroup-4-2] INFO o.t.s.t.mqtt.MqttTransportHandler - [d13716f2-1a55-4056-93c8-11b81bc7794b] Client disconnected!
2021-11-09 15:25:23,664 [nioEventLoopGroup-4-1] INFO o.t.s.t.mqtt.MqttTransportHandler - [f03fba50-a2be-4c2f-a301-a48c79d6baaf] Client disconnected!
If you're using Windows to execute that command, try using Windows-style escaped quotes around the payload:
"{\"ts\":\"12345678910\",\"value1\":\"99\",\"value2\":\"24\"}"
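Separately from the quoting, the working example in the question uses a UNIX-style ts, since ThingsBoard expects the ts field as epoch milliseconds. If your source data is ISO 8601, one option is to convert before publishing; a minimal sketch (class and method names are mine):

```java
import java.time.OffsetDateTime;

public class TsConverter {

    // Converts an ISO 8601 timestamp such as "2021-11-08T16:17Z" to epoch
    // milliseconds, the format ThingsBoard accepts in the "ts" field.
    static long toEpochMillis(String isoTs) {
        return OffsetDateTime.parse(isoTs).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        // Build the telemetry payload with a numeric ts instead of the ISO string.
        String payload = String.format(
            "{\"ts\":%d,\"value1\":\"99\",\"value2\":\"24\"}",
            toEpochMillis("2021-11-08T16:17Z"));
        System.out.println(payload);
    }
}
```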

Unable to commit against JDBC Connection

I am using Spring Boot to connect to a MySQL database. Please find my configuration below:
spring.datasource.url=jdbc:<connection-url>
spring.datasource.username=<username>
spring.datasource.password=<password>
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.format_sql=true
spring.datasource.tomcat.max-active=50
spring.datasource.tomcat.max-idle=20
spring.datasource.tomcat.max-wait=20000
spring.datasource.tomcat.min-idle=15
API code:
@CrossOrigin(origins = "*", allowedHeaders = "*")
@GetMapping(value = "/validateuser/{consumerName}")
@Transactional
public Boolean validateUser(@PathVariable String consumerName) {
    LOGGER.info("Inside validateuser -1");
    ConsumerName user = consumerRepository.findByName(consumerName);
    LOGGER.info("Inside validateuser -2 :::: " + user);
    if (user != null) {
        return Boolean.TRUE;
    }
    return Boolean.FALSE;
}
Below is my exception
org.springframework.orm.jpa.JpaSystemException: Unable to commit against JDBC Connection; nested exception is org.hibernate.TransactionException: Unable to commit against JDBC Connection
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:353) ~[spring-orm-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:255) ~[spring-orm-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:538) ~[spring-orm-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743) [spring-tx-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711) [spring-tx-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:665) [spring-tx-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:370) [spring-tx-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118) [spring-tx-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749) [spring-aop-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691) [spring-aop-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at com.server.controller.SubscribeController$$EnhancerBySpringCGLIB$$14f090fd.subscribeTopic(<generated>) [classes!/:0.0.1-SNAPSHOT]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_251]
I got the answer: I updated resource.properties as follows.
spring.datasource.url=jdbc:<connection-url>
spring.datasource.username=<username>
spring.datasource.password=<password>
#spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.datasource.hikari.connectionTimeout=20000
spring.datasource.hikari.maximumPoolSize=5
I was experiencing the same exception on my Spring/Postgres stack. Basically, the DB could not return/commit all the rows matching the query in time.
It can be fixed by creating indexes on the columns used in the particular query, which speeds the query up:
CREATE INDEX index_redflag_person
ON redflag_person (firstname, alias, lastname, address, birthplace);
I use Spring, Hibernate and PostgreSQL for practice and got a similar exception:
org.springframework.transaction.TransactionSystemException: Could not commit Hibernate transaction;
nested exception is org.hibernate.TransactionException:
Unable to commit against JDBC Connection] with root cause
org.postgresql.util.PSQLException: Cannot commit when autoCommit is enabled.
My hibernate.prop has the following config:
hibernate.dialect=org.hibernate.dialect.PostgreSQL10Dialect
hibernate.connection.handling_mode=DELAYED_ACQUISITION_AND_RELEASE_AFTER_STATEMENT
packages.to.scan=org.practice.dao.entity
It's weird because the default hibernate.connection.autocommit value in Hibernate 5.6 is false.
After a whole day of searching I could not find the same error on the internet. Finally I figured it out: somehow I had added hibernate.connection.handling_mode to my config file; after removing it (the default is fine), the app works as expected.
So maybe checking the config and using the simplest parameters will help someone else.

How to set a bucket password with spring-data couchbase

I have followed the tutorial for spring-data-couchbase and have a successful example project, with unit tests, that persists a number of custom entities with a range of views implemented to query them.
This works correctly in both a local dev environment and a CI environment when using the "default" bucket name and no password as the authentication.
Moving beyond the example, I want to make use of a different bucket and ultimately a password.
When I create a new bucket (named "test_bucket") and update the property injected into CouchbaseConfig (extends AbstractCouchbaseConfiguration) to use this new bucket in place of "default", I get the following exception when running the unit tests.
I also tried adding a password to the creation script and adding the same password (the string "psswd" in both cases) to the properties used in CouchbaseConfig, but get the same exception shown below.
So is it possible to use a bucket other than "default" (with its no-authorisation-required setup), and how do I configure a password for use on this bucket?
I have verified from the Admin GUI that the bucket(s) and the expected views have been created correctly in Couchbase.
2015-06-09 16:41:40 INFO ClasspathLoggingApplicationListener:55 - Application failed to start with classpath: [file:/C:/tools/cmd/cygwin64/home/akirby/workspaces/repos/blackjack/persistence/target/surefire/surefirebooter7615727324811258159.jar]
2015-06-09 16:41:40 INFO AutoConfigurationReportLoggingInitializer:107 -
Error starting ApplicationContext. To display the auto-configuration report enabled debug logging (start with --debug)
2015-06-09 16:41:40 ERROR SpringApplication:338 - Application startup failed
java.lang.NoSuchMethodError: org.apache.commons.codec.binary.Base64.encodeBase64String([B)Ljava/lang/String;
at com.couchbase.client.http.HttpUtil.buildAuthHeader(HttpUtil.java:55)
at com.couchbase.client.ViewConnection.addOp(ViewConnection.java:205)
at com.couchbase.client.CouchbaseClient.addOp(CouchbaseClient.java:803)
at com.couchbase.client.CouchbaseClient.asyncGetView(CouchbaseClient.java:342)
at com.couchbase.client.CouchbaseClient.getView(CouchbaseClient.java:430)
at org.springframework.data.couchbase.core.CouchbaseTemplate$2.doInBucket(CouchbaseTemplate.java:223)
at org.springframework.data.couchbase.core.CouchbaseTemplate$2.doInBucket(CouchbaseTemplate.java:220)
at org.springframework.data.couchbase.core.CouchbaseTemplate.execute(CouchbaseTemplate.java:244)
at org.springframework.data.couchbase.core.CouchbaseTemplate.queryView(CouchbaseTemplate.java:220)
at org.springframework.data.couchbase.repository.support.SimpleCouchbaseRepository.deleteAll(SimpleCouchbaseRepository.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.executeMethodOn(RepositoryFactorySupport.java:416)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:401)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:373)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$DefaultMethodInvokingMethodInterceptor.invoke(RepositoryFactorySupport.java:486)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.data.couchbase.repository.support.ViewPostProcessor$ViewInterceptor.invoke(ViewPostProcessor.java:87)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy50.deleteAll(Unknown Source)
at com.pubtech.cms.persistence.RepositoryService.doWork(RepositoryService.java:47)
at com.pubtech.cms.persistence.ApplicationRepository.lambda$commandLineRunner$0(ApplicationRepository.java:83)
at com.pubtech.cms.persistence.ApplicationRepository$$Lambda$9/594916129.run(Unknown Source)
at org.springframework.boot.SpringApplication.runCommandLineRunners(SpringApplication.java:672)
at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:690)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:321)
at org.springframework.boot.test.SpringApplicationContextLoader.loadContext(SpringApplicationContextLoader.java:101)
at org.springframework.test.context.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:68)
at org.springframework.test.context.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:86)
at org.springframework.test.context.DefaultTestContext.getApplicationContext(DefaultTestContext.java:72)
at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:170)
at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:110)
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:212)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:200)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:259)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:261)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:219)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:83)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:68)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:163)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
2015-06-09 16:41:40 INFO GenericWebApplicationContext:862 - Closing org.springframework.web.context.support.GenericWebApplicationContext#6302bbb1: startup date [Tue Jun 09 16:41:33 BST 2015]; root of context hierarchy
2015-06-09 16:41:40 INFO CouchbaseConnection:87 - Shut down Couchbase client
2015-06-09 16:41:40 INFO ViewConnection:87 - I/O reactor terminated
When using a bucket that requires a password (bucket "t1", password "pswd"), I see this authentication error in the logs. Is there some format other than plain text that the password should be encoded in?
2015-06-10 10:55:58 INFO DefaultListableBeanFactory:822 - Overriding bean definition for bean 'beanNameViewResolver': replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter.class]]
2015-06-10 10:55:59 INFO Version:27 - HV000001: Hibernate Validator 5.1.3.Final
2015-06-10 10:56:00 ERROR SASLStepOperationImpl:93 - Error: Auth failure
2015-06-10 10:56:00 WARN BinaryMemcachedNodeImpl:90 - Discarding partially completed op: SASL steps operation
2015-06-10 10:56:00 WARN AuthThread:90 - Authentication failed to localhost/127.0.0.1:11210, Status: {OperationStatus success=false: cancelled}
2015-06-10 10:56:02 WARN AuthThread:90 - Authentication failed to localhost/127.0.0.1:11210, Status: {OperationStatus success=false: Invalid arguments}
2015-06-10 10:56:02 WARN AuthThread:90 - Authentication failed to localhost/127.0.0.1:11210, Status: {OperationStatus success=false: Invalid arguments}
I use the couchbase-cli to create the buckets from a script, using the same script to create both the working "default" bucket and the non-working "test_bucket" (properties are correctly injected using a Maven filter):
# Create Bucket
couchbase-cli bucket-create -c $COUCHBASE_HOST:$COUCHBASE_PORT -u $CB_REST_USERNAME -p $CB_REST_PASSWORD \
--bucket=$BUCKET_NAME \
--bucket-type=couchbase \
--bucket-ramsize=200 \
--bucket-replica=1 \
--wait
CouchbaseConfig class:
..
@Configuration
@EnableCouchbaseRepositories(basePackages = {"com.persistence.db"})
@EnableAutoConfiguration
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {
@Value("${couchbase.bucket:boris}")
private String bucketName;
@Value("${couchbase.bucket.password:nopwd}")
private String password;
@Value("${couchbase.host:127.0.0.1}")
private String ip;
..
I think you have a similar issue to what I was experiencing. For me, the problem was that using @Value in an @Configuration class has a slightly special requirement. I was using YAML for my properties file, if that matters at all.
Add this to your class (it must be static as well):
/**
 * this is required for some reason: https://jira.spring.io/browse/SPR-11773
 *
 * @return
 */
@Bean
public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
    return new PropertySourcesPlaceholderConfigurer();
}
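Putting that static bean into the CouchbaseConfig from the question would look roughly like the sketch below. This is an assumption-laden illustration, not a drop-in fix: the getBootstrapHosts/getBucketName/getBucketPassword overrides are the usual AbstractCouchbaseConfiguration hooks in spring-data-couchbase 1.x/2.x, so check the exact signatures against the version you are using.

```java
import java.util.Collections;
import java.util.List;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;
import org.springframework.data.couchbase.repository.config.EnableCouchbaseRepositories;

@Configuration
@EnableCouchbaseRepositories(basePackages = {"com.persistence.db"})
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Value("${couchbase.bucket:boris}")
    private String bucketName;

    @Value("${couchbase.bucket.password:nopwd}")
    private String password;

    @Value("${couchbase.host:127.0.0.1}")
    private String ip;

    // Static, so the configurer is registered before this @Configuration
    // class itself is processed; otherwise the ${...} placeholders above
    // are not resolved. See https://jira.spring.io/browse/SPR-11773
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList(ip);
    }

    @Override
    protected String getBucketName() {
        return bucketName;
    }

    @Override
    protected String getBucketPassword() {
        return password;
    }
}
```

With the static configurer in place, the injected bucket name and password reach the Couchbase SDK at connect time, which is worth ruling out before digging further into the SASL authentication failures in the logs.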
Issue accessing backend DB through openRDF Sesame

I have the following code in java to query SPARQL query over the Backend DB (postgreSQL).
import rdfProcessing.RDFRepository;
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.manager.RepositoryManager;
import org.openrdf.sail.config.SailImplConfig;
import org.openrdf.sail.memory.config.MemoryStoreConfig;
import org.openrdf.repository.config.RepositoryImplConfig;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.repository.config.RepositoryConfig;
public class Qeryrdf {
Connection connection;
private static final String REPO_ID = "C:\\RDF_triples\\univData10m\\repositories\\SYSTEM\\memorystore.data";
private static final String q1 = ""
+ "PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>" +
"PREFIX ub:<http://univ.org#>" +
"PREFIX owl:<http://www.w3.org/2002/07/owl#>" +
"PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>" +
" select distinct ?o ?p where"+
"{ ?s rdf:type ?o." +
"}";
public static void main(String[] args)
throws Exception {
LocalRepositoryManager manager = new LocalRepositoryManager(new File("C:\\RDF triples\\univData1"));
manager.initialize();
try {
Qeryrdf queryrdf = new Qeryrdf();
queryrdf.executeQueries(manager);
} finally {
manager.shutDown();
}
}
private void executeQueries(RepositoryManager manager)
throws Exception {
SailImplConfig backendConfig = new MemoryStoreConfig();
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
String repositoryId = REPO_ID;
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
manager.addRepositoryConfig(repConfig);
Repository repo = manager.getRepository(repositoryId);
repo.initialize();
RepositoryConnection con = repo.getConnection();
RDFRepository repository = new RDFRepository();
String repoDir = "C:\\RDF triples\\univData1" ;
repository.initializeRepository(repoDir );
System.out.println("Executing the query");
executeQuery(q1, con);
con.close();
repo.shutDown();
}
private void executeQuery(String query, RepositoryConnection con) {
getConnection();
try {
TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
int resultCount = 0;
long time = System.currentTimeMillis();
while (result.hasNext()) {
result.next();
resultCount++;
}
time = System.currentTimeMillis() - time;
System.out.printf("Result count: %d in %fs.\n", resultCount, time / 1000.0);
} catch (Exception e) {
e.printStackTrace();
}
}
public void getConnection() {
try {
Class.forName("org.postgresql.Driver");
connection = DriverManager.getConnection(
"jdbc:postgresql://localhost:5432/myDB01", "postgres",
"aabbcc");
} catch (Exception e) {
e.printStackTrace();
System.err.println(e.getClass().getName() + ": " + e.getMessage());
System.exit(0);
}
System.out.println("The database opened successfully");
}
}
And I got the following result:
16:46:44.546 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.578 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Reading data from C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data...
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data file read successfully
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.765 [main] DEBUG org.openrdf.sail.memory.MemoryStore - syncing data to file...
16:46:44.796 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data synced to file
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - React to commit on SystemRepository for contexts [_:node18j9mufr0x1]
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Processing modified context _:node18j9mufr0x1.
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Is _:node18j9mufr0x1 a repository config context?
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Reacting to modified repository config for C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Modified repository C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data has not been initialized, skipping...
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.contextaware.config.ContextAwareFactory
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.dataset.config.DatasetRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.http.config.HTTPRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.SailRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.ProxyRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sparql.config.SPARQLRepositoryFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.federation.config.FederationFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.ForwardChainingRDFSInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.DirectTypeHierarchyInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.CustomGraphQueryInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.memory.config.MemoryStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.nativerdf.config.NativeStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.rdbms.config.RdbmsStoreFactory
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Initializing NativeStore...
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Data dir is C:\RDF triples\univData1
16:46:44.970 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - NativeStore initialized
Executing the query
The database opened successfully
16:46:45.735 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.serql.SeRQLParserFactory
16:46:45.751 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.sparql.SPARQLParserFactory
Result count: 0 in 0.000000s.
My problem is:
1. I changed the SPARQL query many times but it still returns 0 rows.
2. So, does OpenRDF Sesame connect to a backend DB like PostgreSQL, MySQL, etc.?
3. If so, does OpenRDF Sesame translate the SPARQL query to SQL and then return results from the backend DB?
Thanks in advance.
First, answers to your specific questions, in order:
If the query gives no results, that means that either the repository over which you're executing it is empty, or the query matches no data in that repository. Since it looks like the way in which you set up and initialize your repository is completely wrong (see remarks below), it is probably empty.
In general, yes, Sesame can connect to a PostgreSQL or MySQL database for storage and query. However, your code does not do this: you are not using a Sesame RDBMS store as your SAIL storage backend, but a MemoryStore (which, as the name implies, is an in-memory database).
If you were using a Sesame PostgreSQL/MySQL store, then yes, it would translate SPARQL queries to SQL queries. But you're not using one. Also, the Sesame PostgreSQL/MySQL support is now deprecated: it's recommended to use a NativeStore or MemoryStore instead, or any one of the many available third-party Sesame store implementations.
More generally, looking at your code, it is unclear what you're trying to accomplish, and I cannot believe your code actually compiles, let alone runs.
You're using a class RDFRepository in there somewhere, which doesn't exist in Sesame 2, and a method initializeRepository to which you pass a directory, which also does not exist. It looks vaguely like how things worked in Sesame 1, but that version of Sesame has been out of commission for at least 6 years now.
Then you have a method getConnection which sets up a connection to a PostgreSQL database, but that method doesn't accomplish anything: it just creates a Connection object, and then nothing is ever done with that Connection.
I recommend that you go back to basics and have a good look through the Sesame documentation, especially the tutorial, and the chapter on Programming with Sesame, which explains how to create and manage repositories and how to work with them.
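To make the "back to basics" advice concrete, here is a minimal sketch of the usual Sesame 2 pattern: open a store in a plain data directory (not a path into another repository's SYSTEM store), load data, then query over a connection. The file paths are placeholders, and the openrdf calls should be checked against the Sesame 2 javadoc for your version.

```java
import java.io.File;

import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.rio.RDFFormat;
import org.openrdf.sail.nativerdf.NativeStore;

public class SesameBasics {
    public static void main(String[] args) throws Exception {
        // A NativeStore persists to an ordinary data directory of its own.
        Repository repo = new SailRepository(new NativeStore(new File("C:\\sesame-data")));
        repo.initialize();

        RepositoryConnection con = repo.getConnection();
        try {
            // Load some data first; querying an empty repository always yields 0 rows.
            con.add(new File("C:\\data\\univ.rdf"), "http://univ.org#", RDFFormat.RDFXML);

            // Every selected variable must actually be bound in the pattern.
            TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL,
                    "SELECT DISTINCT ?type WHERE { ?s a ?type }").evaluate();
            try {
                while (result.hasNext()) {
                    System.out.println(result.next().getValue("type"));
                }
            } finally {
                result.close();
            }
        } finally {
            con.close();
            repo.shutDown();
        }
    }
}
```

Note also that your original query selects ?p but never binds it in the graph pattern, so even with data present that variable would always be empty.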