Issue accessing backend DB through OpenRDF Sesame - MySQL

I have the following Java code to run a SPARQL query over the backend DB (PostgreSQL).
import rdfProcessing.RDFRepository;
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.manager.RepositoryManager;
import org.openrdf.sail.config.SailImplConfig;
import org.openrdf.sail.memory.config.MemoryStoreConfig;
import org.openrdf.repository.config.RepositoryImplConfig;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.repository.config.RepositoryConfig;
public class Qeryrdf {
    Connection connection;
    private static final String REPO_ID = "C:\\RDF_triples\\univData10m\\repositories\\SYSTEM\\memorystore.data";
    private static final String q1 = ""
            + "PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>"
            + "PREFIX ub:<http://univ.org#>"
            + "PREFIX owl:<http://www.w3.org/2002/07/owl#>"
            + "PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>"
            + " select distinct ?o ?p where"
            + "{ ?s rdf:type ?o."
            + "}";

    public static void main(String[] args) throws Exception {
        LocalRepositoryManager manager = new LocalRepositoryManager(new File("C:\\RDF triples\\univData1"));
        manager.initialize();
        try {
            Qeryrdf queryrdf = new Qeryrdf();
            queryrdf.executeQueries(manager);
        } finally {
            manager.shutDown();
        }
    }

    private void executeQueries(RepositoryManager manager) throws Exception {
        SailImplConfig backendConfig = new MemoryStoreConfig();
        RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
        String repositoryId = REPO_ID;
        RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
        manager.addRepositoryConfig(repConfig);
        Repository repo = manager.getRepository(repositoryId);
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        RDFRepository repository = new RDFRepository();
        String repoDir = "C:\\RDF triples\\univData1";
        repository.initializeRepository(repoDir);
        System.out.println("Executing the query");
        executeQuery(q1, con);
        con.close();
        repo.shutDown();
    }

    private void executeQuery(String query, RepositoryConnection con) {
        getConnection();
        try {
            TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
            int resultCount = 0;
            long time = System.currentTimeMillis();
            while (result.hasNext()) {
                result.next();
                resultCount++;
            }
            time = System.currentTimeMillis() - time;
            System.out.printf("Result count: %d in %fs.\n", resultCount, time / 1000.0);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void getConnection() {
        try {
            Class.forName("org.postgresql.Driver");
            connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/myDB01", "postgres", "aabbcc");
        } catch (Exception e) {
            e.printStackTrace();
            System.err.println(e.getClass().getName() + ": " + e.getMessage());
            System.exit(0);
        }
        System.out.println("The database opened successfully");
    }
}
And I got the following result:
16:46:44.546 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.578 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Reading data from C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data...
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data file read successfully
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.765 [main] DEBUG org.openrdf.sail.memory.MemoryStore - syncing data to file...
16:46:44.796 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data synced to file
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - React to commit on SystemRepository for contexts [_:node18j9mufr0x1]
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Processing modified context _:node18j9mufr0x1.
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Is _:node18j9mufr0x1 a repository config context?
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Reacting to modified repository config for C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Modified repository C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data has not been initialized, skipping...
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.contextaware.config.ContextAwareFactory
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.dataset.config.DatasetRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.http.config.HTTPRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.SailRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.ProxyRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sparql.config.SPARQLRepositoryFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.federation.config.FederationFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.ForwardChainingRDFSInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.DirectTypeHierarchyInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.CustomGraphQueryInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.memory.config.MemoryStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.nativerdf.config.NativeStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.rdbms.config.RdbmsStoreFactory
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Initializing NativeStore...
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Data dir is C:\RDF triples\univData1
16:46:44.970 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - NativeStore initialized
Executing the query
The database opened successfully
16:46:45.735 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.serql.SeRQLParserFactory
16:46:45.751 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.sparql.SPARQLParserFactory
Result count: 0 in 0.000000s.
My problem is:
1. I have changed the SPARQL query many times, but I am still retrieving 0 rows.
2. Does OpenRDF Sesame connect to a backend DB like PostgreSQL, MySQL, etc.?
3. If so, does OpenRDF Sesame translate the SPARQL query to SQL and then bring the results back from the backend DB?
Thanks in advance.

First, answers to your specific questions, in order:
1. If the query gives no results, that means either that the repository over which you're executing it is empty, or that the query matches no data in that repository. Since the way you set up and initialize your repository looks completely wrong (see remarks below), it is probably empty.
2. In general, yes: Sesame can connect to a PostgreSQL or MySQL database for storage and querying. However, your code does not do this, because you are not using a Sesame RDBMS store as your SAIL storage backend, but a MemoryStore (which, as the name implies, is an in-memory database).
3. If you were using a Sesame PostgreSQL/MySQL store, then yes, it would translate SPARQL queries to SQL queries. But you're not using one. Also, the Sesame PostgreSQL/MySQL support is now deprecated; the recommendation is to use a NativeStore, a MemoryStore, or one of the many available third-party Sesame store implementations instead.
More generally, looking at your code, it is unclear what you're trying to accomplish, and I find it hard to believe the code actually compiles, let alone runs.
You're using a class RDFRepository in there somewhere, which doesn't exist in Sesame 2, and a method initializeRepository to which you give a directory, which doesn't exist either. It looks vaguely like how things worked in Sesame 1, but that version of Sesame has been out of commission for at least six years now.
Then you have a method getConnection which sets up a connection to a PostgreSQL database, but that method doesn't accomplish anything: it just creates a Connection object, and nothing is ever done with that Connection.
I recommend that you go back to basics and have a good look through the Sesame documentation, especially the tutorial and the chapter on Programming with Sesame, which explains how to create and manage repositories and how to work with them.
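To illustrate that documented workflow, here is a minimal sketch, assuming Sesame 2.7; the base directory, repository ID, and data file below are placeholders, not values from the question. The two key points are that a repository ID is a simple name (not a path to a memorystore.data file) and that data must be loaded before any query can return rows:

import java.io.File;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.config.RepositoryConfig;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.rio.RDFFormat;
import org.openrdf.sail.nativerdf.config.NativeStoreConfig;

public class SesameExample {
    public static void main(String[] args) throws Exception {
        // The manager owns everything under this base directory.
        LocalRepositoryManager manager = new LocalRepositoryManager(new File("C:\\sesame-data"));
        manager.initialize();
        try {
            String repoId = "univData"; // a repository ID is a name, not a file path
            if (!manager.hasRepositoryConfig(repoId)) {
                manager.addRepositoryConfig(new RepositoryConfig(repoId,
                        new SailRepositoryConfig(new NativeStoreConfig())));
            }
            Repository repo = manager.getRepository(repoId);
            RepositoryConnection con = repo.getConnection();
            try {
                // Load data first; querying an empty repository yields 0 rows.
                con.add(new File("C:\\data\\univ.rdf"), "http://univ.org#", RDFFormat.RDFXML);
                TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL,
                        "SELECT DISTINCT ?type WHERE { ?s a ?type }").evaluate();
                while (result.hasNext()) {
                    System.out.println(result.next().getValue("type"));
                }
                result.close();
            } finally {
                con.close();
            }
        } finally {
            manager.shutDown();
        }
    }
}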

Related

Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name:

I am trying to set up an Apache Ignite cache store using MySQL as external storage.
I have read all the official documentation about it and examined many other examples, but I can't make it run:
[2022-06-02 16:45:56:551] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[2022-06-02 16:45:56:874] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[2022-06-02 16:45:56:874] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[16:45:56] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[2022-06-02 16:45:56:898] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[2022-06-02 16:45:56:926] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Collision resolution is disabled (all jobs will be activated upon arrival).
[16:45:56] Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:56:927] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:57:204] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=b397c114-d34d-4245-9645-f78c5d184888]
[2022-06-02 16:45:57:242] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
[2022-06-02 16:45:57:253] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured data regions initialized successfully [total=4]
[2022-06-02 16:45:57:307] [ERROR] - 55333 - org.apache.ignite.logger.java.JavaLogger.error(JavaLogger.java:310) - Exception during start processors, node will be stopped and close connections
org.apache.ignite.IgniteCheckedException: Failed to start processor: GridProcessorAdapter []
at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1989) ~[ignite-core-2.10.0.jar:2.10.0]
Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name: StockConfigCache
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: CacheJdbcPojoStoreFactory [batchSize=512, dataSrcBean=null, dialect=org.apache.ignite.cache.store.jdbc.dialect.MySQLDialect@14993306, maxPoolSize=8, maxWrtAttempts=2, parallelLoadCacheMinThreshold=512, hasher=org.apache.ignite.cache.store.jdbc.JdbcTypeDefaultHasher@73ae82da, transformer=org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformer@6866e740, dataSrc=null, dataSrcFactory=com.anyex.ex.memory.model.CacheConfig$$Lambda$310/1421763091@31183ee2, sqlEscapeAll=false]
Caused by: java.io.NotSerializableException: com.anyex.ex.database.DynamicDataSource
Any advice or idea would be appreciated, thank you!
public static CacheConfiguration cacheStockConfigCache(DataSource dataSource, Boolean writeBehind) {
    CacheConfiguration ccfg = new CacheConfiguration();
    ccfg.setSqlSchema("public");
    ccfg.setName("StockConfigCache");
    ccfg.setCacheMode(CacheMode.REPLICATED);
    ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ccfg.setIndexedTypes(Long.class, StockConfigMem.class);
    CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
    cacheStoreFactory.setDataSourceFactory((Factory<DataSource>) () -> dataSource);
    //cacheStoreFactory.setDialect(new OracleDialect());
    cacheStoreFactory.setDialect(new MySQLDialect());
    cacheStoreFactory.setTypes(JdbcTypes.jdbcTypeStockConfigMem(ccfg.getName(), "StockConfig"));
    ccfg.setCacheStoreFactory(cacheStoreFactory);
    ccfg.setReadFromBackup(false);
    ccfg.setCopyOnRead(true);
    if (writeBehind) {
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
    }
    return ccfg;
}

public static JdbcType jdbcTypeStockConfigMem(String cacheName, String tableName) {
    JdbcType type = new JdbcType();
    type.setCacheName(cacheName);
    type.setKeyType(Long.class);
    type.setValueType(StockConfigMem.class);
    type.setDatabaseTable(tableName);
    type.setKeyFields(new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"));
    type.setValueFields(
            new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"),
            new JdbcTypeField(Types.NUMERIC, "stockinfoId", Long.class, "stockinfoId"),
            new JdbcTypeField(Types.VARCHAR, "remark", String.class, "remark"),
            new JdbcTypeField(Types.TIMESTAMP, "updateTime", Timestamp.class, "updateTime")
    );
    return type;
}

igniteConfiguration.setCacheConfiguration(
        CacheConfig.cacheStockConfigCache(dataSource, igniteProperties.getJdbc().getWriteBehind())
);

@Bean("igniteInstance")
@ConditionalOnProperty(value = "ignite.enable", havingValue = "true", matchIfMissing = true)
public Ignite ignite(IgniteConfiguration igniteConfiguration) {
    log.info("igniteConfiguration info:{}", igniteConfiguration.toString());
    Ignite ignite = Ignition.start(igniteConfiguration);
    log.info("{} ignite started with discovery type {}", ignite.name(), igniteProperties.getType());
    return ignite;
}
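For what it's worth, the NotSerializableException above comes from the Factory<DataSource> lambda: it captures the live dataSource reference (a com.anyex.ex.database.DynamicDataSource), and Ignite serializes the cache store factory together with the cache configuration. One common remedy is a self-contained factory that creates its own DataSource on the node instead of capturing one. A minimal sketch, with an assumed MySQL DataSource class and placeholder connection details:

import javax.cache.configuration.Factory; // Factory extends Serializable
import javax.sql.DataSource;
import com.mysql.cj.jdbc.MysqlDataSource; // Connector/J 8; adjust for 5.x

public class MysqlDataSourceFactory implements Factory<DataSource> {
    @Override
    public DataSource create() {
        // Runs on the node after deserialization, so nothing
        // non-serializable travels inside the factory itself.
        MysqlDataSource ds = new MysqlDataSource();
        ds.setURL("jdbc:mysql://localhost:3306/mydb"); // placeholder
        ds.setUser("dbuser");                          // placeholder
        ds.setPassword("dbpwd");                       // placeholder
        return ds;
    }
}

With this, cacheStoreFactory.setDataSourceFactory(new MysqlDataSourceFactory()) serializes cleanly; CacheJdbcPojoStoreFactory.setDataSourceBean("myDataSourceBean") is another option, since it stores only a bean name instead of the instance.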

Spring Batch - Partitioning Timeout

I have to migrate millions of BLOB records from multiple MySQL databases to a physical location as files, over a WAN network.
I chose to use Spring Batch and have already made it work. However, I am struggling with a timeout error that happens with random partitioned steps.
Here is some context:
There are multiple MySQL databases storing >10m records spanning 20 years.
The source tables are indexed on two composite keys of varchar type (there is no ID key), so I have to use an un-indexed date-time column to partition the records by year and week, keeping the number of records per partition at a reasonable average of about 200 (a sketch of such a partitioner is shown below). If there is any better advice, it would be welcome!
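For reference, such a year/week partitioner might look like the following minimal sketch (the class name, date range, and step-context keys are hypothetical; a range-based reader would consume the "fromDate"/"toDate" values in its WHERE clause):

import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

public class DateRangePartitioner implements Partitioner {
    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> partitions = new HashMap<>();
        LocalDate from = LocalDate.of(2002, 1, 1); // assumed start of the 20-year archive
        LocalDate end = LocalDate.now();
        int i = 0;
        while (from.isBefore(end)) {
            LocalDate to = from.plusWeeks(1);
            ExecutionContext ctx = new ExecutionContext();
            ctx.putString("fromDate", from.toString()); // consumed by the reader's query
            ctx.putString("toDate", to.toString());
            partitions.put("week-" + i++, ctx);
            from = to;
        }
        return partitions;
    }
}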
My issue: when the number of records per partition is high enough, the step executors will randomly fail due to a timeout:
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms
I have made some tweaks to the DataSource and transaction properties, but no luck. Can I get some advice, please? Thanks!
Terminal log:
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:309) ~[spring-jdbc-5.3.16.jar:5.3.16]
...
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
2022-03-05 10:05:43.146 ERROR 15624 --- [main] o.s.batch.core.step.AbstractStep : Encountered an error executing step managerStep in job mainJob
org.springframework.batch.core.JobExecutionException: Partition handler returned an unsuccessful step at ...
org.springframework.batch.core.partition.support.PartitionStep.doExecute(PartitionStep.java:112) ~[spring-batch-core-4.3.5.jar:4.3.5]
The job is sometimes marked as [FAILED] or [UNKNOWN], and is not restartable.
2022-03-05 10:05:43.213 INFO 15624 --- [main] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=mainJob]] completed with the following parameters: [{run.id=20}] and the following status: [FAILED] in 3m13s783ms
2022-03-05 10:05:43.590 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-2 - Shutdown initiated...
2022-03-05 10:05:43.624 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-2 - Shutdown completed.
2022-03-05 10:05:43.626 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-03-05 10:05:43.637 INFO 15624 --- [SpringApplicationShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
Datasource builder: I have tried to increase the connection timeout and the pool size, but it seems not to be applied.
@Bean(name = "srcDataSource")
// @ConfigurationProperties(prefix = "spring.datasource.hikari")
public HikariDataSource dataSource() {
    HikariDataSource hikariDS = new HikariDataSource();
    hikariDS.setDriverClassName("com.mysql.jdbc.Driver");
    hikariDS.setJdbcUrl("jdbc:mysql://dburl");
    hikariDS.setUsername("dbuser");
    hikariDS.setPassword("dbpwd");
    // properties below do not solve the problem
    hikariDS.setMaximumPoolSize(16);
    hikariDS.setConnectionTimeout(30000);
    // hikariDS.addDataSourceProperty("serverName", getConfig().getString("mysql.host"));
    // hikariDS.addDataSourceProperty("port", getConfig().getString("mysql.port"));
    // hikariDS.addDataSourceProperty("databaseName", getConfig().getString("mysql.database"));
    // hikariDS.addDataSourceProperty("user", getConfig().getString("mysql.user"));
    // hikariDS.addDataSourceProperty("password", getConfig().getString("mysql.password"));
    // hikariDS.addDataSourceProperty("autoReconnect", true);
    // hikariDS.addDataSourceProperty("cachePrepStmts", true);
    // hikariDS.addDataSourceProperty("prepStmtCacheSize", 250);
    // hikariDS.addDataSourceProperty("prepStmtCacheSqlLimit", 2048);
    // hikariDS.addDataSourceProperty("useServerPrepStmts", true);
    // hikariDS.addDataSourceProperty("cacheResultSetMetadata", true);
    return hikariDS;
}
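(One additional diagnostic, sketched here with an assumed threshold rather than taken from the original post: Hikari's leak detection logs a warning whenever a connection is held longer than the configured time, which would show whether partitioned steps are hoarding connections.)

hikariDS.setLeakDetectionThreshold(60000); // warn when a connection is held > 60s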
ManagerStep:
@Bean
public Step managerStep() {
    return stepBuilderFactory.get("managerStep")
            .partitioner(workerStep().getName(), dateRangePartitioner())
            .step(workerStep())
            // .gridSize(52) // number of workers, not necessary with date partitioning
            .taskExecutor(new SimpleAsyncTaskExecutor())
            .build();
}
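(Note: SimpleAsyncTaskExecutor spawns one new, unbounded thread per partition, so dozens of concurrent worker steps can hold all 16 Hikari connections and starve the rest, which matches the 30000ms timeout above. A bounded executor, sketched below with assumed pool sizes, is one way to cap concurrency below the connection pool size; it would be wired in via .taskExecutor(partitionTaskExecutor()) in place of new SimpleAsyncTaskExecutor().)

// org.springframework.core.task.TaskExecutor
// org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor
@Bean
public TaskExecutor partitionTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(8);  // keep below HikariCP maximumPoolSize (16)
    executor.setMaxPoolSize(8);
    executor.setThreadNamePrefix("partition-");
    executor.initialize();
    return executor;
}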
WorkerStep: I also tried to increase the transaction timeout, but no luck.
@Bean
public Step workerStep() {
    DefaultTransactionAttribute attribute = new DefaultTransactionAttribute();
    attribute.setPropagationBehavior(Propagation.REQUIRED.value());
    attribute.setIsolationLevel(Isolation.DEFAULT.value());
    // attribute.setTimeout(30);
    attribute.setTimeout(1000000);
    return stepBuilderFactory.get("workerStep")
            .<Image, Image>chunk(10)
            .reader(jdbcPagingReader(null))
            .processor(new ImageItemProcessor())
            .writer(imageConverter())
            // .listener(wrkrStepExecutionListener)
            .transactionAttribute(attribute)
            .build();
}
Job builder:
@Bean
public Job mainJob() {
    return jobBuilderFactory.get("mainJob")
            // .incrementer(new RunIdIncrementer())
            .start(managerStep())
            // .listener()
            .build();
}

Cloud Foundry v2 in Grails

I have a Grails project and I want to deploy it on Cloud Foundry, but the console shows this:
[CONTAINER] n.spring.CloudProfileApplicationContextInitializer INFO Adding 'cloud' to list of active profiles
[CONTAINER] g.CloudPropertySourceApplicationContextInitializer INFO Adding 'cloud' PropertySource to ApplicationContext
[CONTAINER] udAutoReconfigurationApplicationContextInitializer INFO Adding cloud service auto-reconfiguration to ApplicationContext
[CONTAINER] ing.DataSourceCloudServiceBeanFactoryPostProcessor INFO Auto-reconfiguring beans of type javax.sql.DataSource
[CONTAINER] ing.DataSourceCloudServiceBeanFactoryPostProcessor INFO No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
[CONTAINER] lina.core.ContainerBase.[Catalina].[localhost].[/] SEVERE Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pluginManager' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NullPointerException: Cannot invoke method getAt() on null object
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Caused by: java.lang.NullPointerException: Cannot invoke method getAt() on null object
at java.lang.Thread.run(Thread.java:745)
... 5 more
[CONTAINER] org.apache.catalina.core.StandardContext SEVERE Error listenerStart
[CONTAINER] org.apache.catalina.util.SessionIdGeneratorBase INFO Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [135] milliseconds.
[CONTAINER] org.apache.catalina.core.StandardContext SEVERE Context [] startup failed due to previous errors
[CONTAINER] org.apache.catalina.loader.WebappClassLoader WARNING The web application [] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
[CONTAINER] org.apache.catalina.loader.WebappClassLoader WARNING The web application [] registered the JDBC driver [org.h2.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
[CONTAINER] org.apache.catalina.loader.WebappClassLoader WARNING The web application [] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:142)
I think it is a problem with the DB connection, but I don't know how to fix it. I use MySQL in my app, with the ClearDB MySQL Database Spark DB service plan.
My DataSource.groovy is:
import org.springframework.cloud.CloudFactory

def cloud
try {
    cloud = new CloudFactory().cloud
} catch (e) {}

dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
    username = "root"
    password = "root"
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
    //cache.region.factory_class = 'org.hibernate.cache.EhCacheProvider'
}
environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost:8080/bbddSRL"
            username = "root"
            password = "root"
        }
    }
    test {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost:8080/bbddSRL"
            username = "root"
            password = "root"
        }
    }
    production {
        dataSource {
            pooled = true
            dbCreate = 'update'
            driverClassName = 'com.mysql.jdbc.Driver'
            if (cloud) {
                def dbInfo = cloud.getServiceInfo('mysql-instance') // mysql-instance is the name of the ClearDB service
                url = dbInfo.jdbcUrl
                username = dbInfo.userName
                password = dbInfo.password
            } else {
                url = 'jdbc:mysql://localhost:8080/bbddSRLprod'
                username = 'root'
                password = 'root'
            }
        }
    }
}
Any answers or suggestions?
Try eliminating the top-level dataSource block where you define dialect. You will still have to define the items from that block, but do it in the dataSource of each environment. I did this because any changes I made to the dataSource within an environment seemed to have no effect otherwise.

What is wrong with my ActionScript 3 code and my Red5 media server?

I'm building a website that requires live video streaming. We have a VPS from Arvixe with Red5 installed, but I cannot connect to Red5 over RTMP. I need to know whether this is a code error on my part, or whether I just need help finding the proper RTMP address to connect to in my ActionScript 3.
Here is my code so far:
import flash.media.Camera;
import flash.media.Video;
import flash.media.Microphone;
import flash.net.URLVariables;
import flash.net.URLRequest;
import flash.net.URLLoader;
import flash.events.*;
import flash.display.MovieClip;
import flash.net.*;
import flash.text.*;
var nc:NetConnection = new NetConnection();
nc.connect("rtmp://198.58.95.110:1935/");
var ns:NetStream = new NetStream(nc);
var camera:Camera = Camera.getCamera();
var video:Video = new Video();
video.smoothing = true;
video.attachCamera(camera);
ns.attachCamera(camera);
ns.publish("cam1");
addChild(video);
Then I changed it to this:
import flash.media.Camera;
import flash.media.Video;
import flash.media.Microphone;
import flash.net.URLVariables;
import flash.net.URLRequest;
import flash.net.URLLoader;
import flash.events.*;
import flash.display.MovieClip;
import flash.net.*;
import flash.text.*;
var nc:NetConnection = new NetConnection();
var ns:NetStream = new NetStream(nc);
var camera:Camera = Camera.getCamera();
var video:Video = new Video();
video.smoothing = true;
video.attachCamera(camera);
nc.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
nc.connect("rtmp://cam320.arvixevps.com:5080");
function netStatusHandler(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        ns.attachCamera(camera);
        ns.publish("livecam1");
    }
}
The next code I tried:
import flash.media.Camera;
import flash.media.Video;
import flash.media.Microphone;
import flash.net.URLVariables;
import flash.net.URLRequest;
import flash.net.URLLoader;
import flash.events.*;
import flash.display.MovieClip;
import flash.net.*;
import flash.text.*;
var nc:NetConnection = new NetConnection();
nc.connect("rtmp://cam320.arvixevps.com/webapps/oflaDemo");
var ns:NetStream = new NetStream(nc);
var camera:Camera = Camera.getCamera();
var video:Video = new Video();
video.smoothing = true;
video.attachCamera(camera);
nc.addEventListener(NetStatusEvent.NET_STATUS, publish);
addChild(video);
function publish(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        ns.attachCamera(camera);
        ns.publish("cam1");
    }
}
The red5.log file looks like this:
2013-02-02 14:07:34,612 [main] INFO org.red5.server.Launcher - Red5 Server 0.9.0 $Rev: 4030 $ (http://code.google.com/p/red5/)
2013-02-02 14:07:34,726 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext#5ff06dc3: startup date [Sat Feb 02 14:07:34 PST 2013]; root of context hierarchy
2013-02-02 14:07:35,281 [main] INFO o.s.b.f.c.PropertyPlaceholderConfigurer - Loading properties file from class path resource [red5.properties]
2013-02-02 14:07:35,293 [main] INFO o.s.b.f.s.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#69d95da8: defining beans [placeholderConfig,red5.common,red5.core,context.loader,pluginLauncher,tomcat.server]; root of factory hierarchy
2013-02-02 14:07:35,326 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext#7cf01771: startup date [Sat Feb 02 14:07:35 PST 2013]; root of context hierarchy
2013-02-02 14:07:35,546 [main] INFO o.s.b.f.c.PropertyPlaceholderConfigurer - Loading properties file from class path resource [red5.properties]
2013-02-02 14:07:35,552 [main] INFO o.s.b.f.s.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#1118fa47: defining beans [placeholderConfig,red5.server,jmxFactory,jmxAgent,serializer,deserializer,statusObjectService,rtmpCodecFactory,rtmptCodecFactory,remotingCodecFactory,streamableFileFactory,filePersistenceThread,sharedObjectService,streamService,providerService,consumerService,bandwidthFilter,schedulingService,warDeployService,remotingClient,object.cache,keyframe.cache,flv.impl,flvreader.impl,mp4reader.impl,mp3reader.impl,org.springframework.beans.factory.config.MethodInvokingFactoryBean#0,org.springframework.beans.factory.config.MethodInvokingFactoryBean#1,streamExecutor,playlistSubscriberStream,clientBroadcastStream]; root of factory hierarchy
2013-02-02 14:07:35,717 [main] WARN o.s.b.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'enableRmiAdapter' being accessed! Ambiguous write methods found next to actually used [public void org.red5.server.jmx.JMXAgent.setEnableRmiAdapter(java.lang.String)]: [public void org.red5.server.jmx.JMXAgent.setEnableRmiAdapter(boolean)]
2013-02-02 14:07:35,717 [main] WARN o.s.b.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'enableSsl' being accessed! Ambiguous write methods found next to actually used [public void org.red5.server.jmx.JMXAgent.setEnableSsl(java.lang.String)]: [public void org.red5.server.jmx.JMXAgent.setEnableSsl(boolean)]
2013-02-02 14:07:35,717 [main] WARN o.s.b.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'enableMinaMonitor' being accessed! Ambiguous write methods found next to actually used [public void org.red5.server.jmx.JMXAgent.setEnableMinaMonitor(java.lang.String)]: [public void org.red5.server.jmx.JMXAgent.setEnableMinaMonitor(boolean)]
2013-02-02 14:07:36,400 [main] INFO org.red5.server.service.WarDeployer - War deployer service created
2013-02-02 14:07:36,493 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext#9c6a99d: startup date [Sat Feb 02 14:07:36 PST 2013]; parent: ApplicationContext 'red5.common'
2013-02-02 14:07:36,683 [main] INFO o.s.b.f.c.PropertyPlaceholderConfigurer - Loading properties file from class path resource [red5.properties]
2013-02-02 14:07:36,691 [main] WARN o.s.b.f.c.CustomEditorConfigurer - Passing PropertyEditor instances into CustomEditorConfigurer is deprecated: use PropertyEditorRegistrars or PropertyEditor class names instead. Offending key [java.net.SocketAddress; offending editor instance: org.apache.mina.integration.beans.InetSocketAddressEditor#77a9f87c
2013-02-02 14:07:36,695 [main] INFO o.s.b.f.s.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#7bbf68a9: defining beans [customEditorConfigurer,placeholderConfig,rtmpMinaConnManager,rtmpHandler,rtmpMinaIoHandler,rtmpTransport,rtmpMinaConnection,rtmptConnManager,rtmptHandler,rtmptServlet,rtmptConnection,rtmpsMinaIoHandler,rtmpsTransport]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory#1118fa47
2013-02-02 14:07:36,718 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - RTMP Mina Transport bound to /0.0.0.0:1935
2013-02-02 14:07:36,719 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - RTMP Mina Transport Settings
2013-02-02 14:07:36,719 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - Connection Threads: 4
2013-02-02 14:07:36,719 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - I/O Threads: 16
2013-02-02 14:07:36,785 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - TCP No Delay: true
2013-02-02 14:07:36,785 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - Receive Buffer Size: 65536
2013-02-02 14:07:36,785 [main] INFO o.r.s.net.rtmp.RTMPMinaTransport - Send Buffer Size: 271360
2013-02-02 14:07:36,795 [main] INFO o.s.b.f.s.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#7bbf68a9: defining beans [customEditorConfigurer,placeholderConfig,rtmpMinaConnManager,rtmpHandler,rtmpMinaIoHandler,rtmpTransport,rtmpMinaConnection,rtmptConnManager,rtmptHandler,rtmptServlet,rtmptConnection,rtmpsMinaIoHandler,rtmpsTransport]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory#1118fa47
2013-02-02 14:07:36,796 [main] INFO o.s.b.f.s.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#69d95da8: defining beans [placeholderConfig,red5.common,red5.core,context.loader,pluginLauncher,tomcat.server]; root of factory hierarchy
2013-02-02 14:07:36,796 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Closing ApplicationContext 'red5.common': startup date [Sat Feb 02 14:07:35 PST 2013]; root of context hierarchy
2013-02-02 14:07:36,796 [main] INFO o.s.b.f.s.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#1118fa47: defining beans [placeholderConfig,red5.server,jmxFactory,jmxAgent,serializer,deserializer,statusObjectService,rtmpCodecFactory,rtmptCodecFactory,remotingCodecFactory,streamableFileFactory,filePersistenceThread,sharedObjectService,streamService,providerService,consumerService,bandwidthFilter,schedulingService,warDeployService,remotingClient,object.cache,keyframe.cache,flv.impl,flvreader.impl,mp4reader.impl,mp3reader.impl,org.springframework.beans.factory.config.MethodInvokingFactoryBean#0,org.springframework.beans.factory.config.MethodInvokingFactoryBean#1,streamExecutor,playlistSubscriberStream,clientBroadcastStream]; root of factory hierarchy
2013-02-02 14:09:12,350 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Connecting to: [WebScope#218c6982 Depth = 1, Path = '/default', Name = 'installer']
2013-02-02 14:09:39,316 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Connecting to: [WebScope#218c6982 Depth = 1, Path = '/default', Name = 'installer']
2013-02-02 14:15:10,259 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Connecting to: [WebScope#218c6982 Depth = 1, Path = '/default', Name = 'installer']
2013-02-02 14:34:34,089 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Scope webapps/oflaDemo not found on cam320.arvixevps.com
2013-02-02 14:34:36,935 [Red5_Scheduler_Worker-11] WARN o.r.server.net.rtmp.RTMPConnection - Closing RTMPMinaConnection from 184.63.74.33 : 11053 to cam320.arvixevps.com (in: 3433 out 3266 ), with id 415926013 due to long handshake
2013-02-02 14:34:55,728 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Scope webapps/oflaDemo not found on cam320.arvixevps.com
2013-02-02 14:34:58,998 [Red5_Scheduler_Worker-12] WARN o.r.server.net.rtmp.RTMPConnection - Closing RTMPMinaConnection from 184.63.74.33 : 11081 to cam320.arvixevps.com (in: 3433 out 3266 ), with id 814254349 due to long handshake
2013-02-02 15:59:32,847 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Scope webapps/oflaDemo not found on cam320.arvixevps.com
2013-02-02 15:59:35,845 [Red5_Scheduler_Worker-10] WARN o.r.server.net.rtmp.RTMPConnection - Closing RTMPMinaConnection from 184.63.74.33 : 13181 to cam320.arvixevps.com (in: 3433 out 3266 ), with id 2104928456 due to long handshake
2013-02-02 16:00:27,050 [NioProcessor-1] INFO o.red5.server.net.rtmp.RTMPHandler - Scope webapps/oflaDemo not found on cam320.arvixevps.com
2013-02-02 16:00:30,974 [Red5_Scheduler_Worker-11] WARN o.r.server.net.rtmp.RTMPConnection - Closing RTMPMinaConnection from 184.63.74.33 : 13226 to cam320.arvixevps.com (in: 3433 out 3266 ), with id 9885998 due to long handshake
Still no luck getting it to work. Also, the webcam used to show up before I added the NetConnection and NetStream code to my ActionScript. Of course I've been trying to figure this out for a few days with no luck. I think the problem is at nc.connect("rtmp://cam320.arvixevps.com:5080");, but I'm not sure what the proper address for Red5's default RTMP is, so I have no clue what to put there. I'm also not sure if there are any errors in my code, since I haven't been working with ActionScript for very long.
Unfortunately I have a deadline to meet to build this website and I'm running short on time, but if someone could at least point me in the right direction or show me what I did wrong, I can fix it and finish the website. The VPS is running Linux CentOS 5. I hope someone can help me here; this Red5 RTMP connection issue is pissing me off. I just want to know how to fix this and make it connect to Red5 correctly without creating a webapp in Flex: I should be able to use just ActionScript 3 to connect to Red5 and then stream live video. Do I need anything besides Red5 and my ActionScript to stream live webcam video? And what does "Scope webapps/oflaDemo not found on cam320.arvixevps.com" really mean?
Red5 uses port 1935 for RTMP. Port 5080 is for HTTP access (the Red5 control panel and RTMPT). The "Scope webapps/oflaDemo not found" error in your log means the server could not resolve that application path: the RTMP URL takes the application name only, e.g. rtmp://cam320.arvixevps.com/oflaDemo, not the webapps directory prefix.

ESB Mule Client starting with xml-properties fails

I use Mule 3.x
I have code that tests MuleClient connectivity.
This first test is OK and works properly:
public void testHello() throws Exception {
    MuleClient client = new MuleClient(muleContext);
    MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
    assertNotNull(result);
    assertNull(result.getExceptionPayload());
    assertFalse(result.getPayload() instanceof NullPayload);
    // TODO Assert the correct data has been received
    assertEquals("hello", result.getPayloadAsString());
}
But this test is not OK; it fails with a connection exception:
public void testHello_with_Spring() throws Exception {
    MuleClient client = new MuleClient("mule-config-test.xml");
    client.getMuleContext().start();
    // it fails here
    MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
    assertNotNull(result);
    assertNull(result.getExceptionPayload());
    assertFalse(result.getPayload() instanceof NullPayload);
    // TODO Assert the correct data has been received
    assertEquals("hello", result.getPayloadAsString());
}
The same 'mule-config-test.xml' is used in both tests, and the path to this file is correct; I checked.
This is the error message I get in the end:
Exception stack is:
1. Address already in use (java.net.BindException) java.net.PlainSocketImpl:-2 (null)
2. Failed to bind to uri "http://127.0.0.1:8080/hello" (org.mule.transport.ConnectException) org.mule.transport.tcp.TcpMessageReceiver:81 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/transport/ConnectException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.BindException: Address already in use
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
    at java.net.ServerSocket.bind(ServerSocket.java:328)
    + 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
[10-05 16:33:37] ERROR HttpConnector [main]: org.mule.transport.ConnectException: Failed to bind to uri "http://127.0.0.1:8080/hello"
[10-05 16:33:37] ERROR ConnectNotifier [main]: Failed to connect/reconnect: HttpConnector { name=connector.http.mule.default lifecycle=stop this=7578a7d9 numberOfConcurrentTransactedReceivers=4 createMultipleTransactedReceivers=true connected=false supportedProtocols=[http] serviceOverrides=<none> }. Root Exception was: Address already in use. Type: class java.net.BindException
[10-05 16:33:37] ERROR DefaultSystemExceptionStrategy [main]: Failed to bind to uri "http://127.0.0.1:8080/hello"
org.mule.api.lifecycle.LifecycleException: Cannot process event as "connector.http.mule.default" is stopped
I think the problem is in what you're not showing: testHello_with_Spring() is probably executing while Mule is already running. The second Mule you're starting in it then port-conflicts with the other one.
Are testHello() and testHello_with_Spring() in the same test suite? If yes, seeing that testHello() relies on an already-running Mule, I'd say that is the cause of the port conflict for testHello_with_Spring().
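If so, a minimal sketch of a workaround (assuming a JUnit-style test class where the embedded muleContext field used by the first test is available) is to dispose of the already-running context before starting the one built from the XML config, so that port 8080 is free:

public void testHello_with_Spring() throws Exception {
    muleContext.dispose(); // release port 8080 held by the embedded Mule instance
    MuleClient client = new MuleClient("mule-config-test.xml");
    client.getMuleContext().start();
    try {
        MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
        assertEquals("hello", result.getPayloadAsString());
    } finally {
        client.getMuleContext().dispose(); // free the port for subsequent tests
    }
}

Alternatively, simply bind the second configuration to a different port in mule-config-test.xml.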