I am unable to submit a simple job that just performs a System.out.println(). Here is the error I get back from the SnappyData Lead.
snappy-job.sh submit --lead 10.0.18.66:8090 --app-name SimpleJobApp --class snappydata.jobs.SimpleJob --app-jar simpleJob.jar

OK
{
  "status": "ERROR",
  "result": {
    "message": "null",
    "errorClass": "scala.MatchError",
    "stack": [
      "spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:244)",
      "scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)",
      "scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)",
      "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
      "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
      "java.lang.Thread.run(Thread.java:745)"
    ]
  }
}
Here is the Job:
public class SimpleJob implements SnappySQLJob {

    public SimpleJob() {
        System.out.println(getClass().getSimpleName() + " Created");
    }

    @Override
    public Object runJob(Object sparkContext, Config jobConfig) {
        SnappyContext snappyContext = (SnappyContext) sparkContext;
        System.out.println(getClass().getSimpleName() + ".runJob: executed");
        return null;
    }

    @Override
    public SparkJobValidation validate(Object sparkContext, Config jobConfig) {
        SnappyContext snappyContext = (SnappyContext) sparkContext;
        System.out.println(getClass().getSimpleName() + ".validate: executed");
        return null;
    }
}
Here is the SnappyData Lead Log:
16/08/05 17:44:07.352 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-14 INFO JarManager: Storing jar for app SimpleJobApp, 1052 bytes
16/08/05 17:44:07.368 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-19 INFO LocalContextSupervisorActor: Creating a SparkContext named snappyContext1470419047337607598
16/08/05 17:44:07.369 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-16 INFO JobManagerActor: Starting actor spark.jobserver.JobManagerActor
16/08/05 17:44:07.371 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-16 INFO JobStatusActor: Starting actor spark.jobserver.JobStatusActor
16/08/05 17:44:07.371 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-14 INFO JobResultActor: Starting actor spark.jobserver.JobResultActor
16/08/05 17:44:07.371 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO LocalContextSupervisorActor: SparkContext snappyContext1470419047337607598 initialized
16/08/05 17:44:07.375 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-16 INFO RddManagerActor: Starting actor spark.jobserver.RddManagerActor
16/08/05 17:44:07.389 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO JobManagerActor: Loading class snappydata.jobs.SimpleJob for app SimpleJobApp
16/08/05 17:44:07.389 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO SparkContext: Added JAR /tmp/spark-jobserver/filedao/data/SimpleJobApp-2016-08-05T17_44_07.353Z.jar at http://10.0.18.66:50772/jars/SimpleJobApp-2016-08-05T17_44_07.353Z.jar with timestamp 1470419047389
16/08/05 17:44:07.390 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO ContextURLClassLoader: Added URL file:/tmp/spark-jobserver/filedao/data/SimpleJobApp-2016-08-05T17_44_07.353Z.jar to ContextURLClassLoader
16/08/05 17:44:07.390 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO JarUtils$: Loading object snappydata.jobs.SimpleJob$ using loader spark.jobserver.util.ContextURLClassLoader@709f3e69
16/08/05 17:44:07.391 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO JarUtils$: Loading class snappydata.jobs.SimpleJob using loader spark.jobserver.util.ContextURLClassLoader@709f3e69
16/08/05 17:44:07.392 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO JobManagerActor: Starting Spark job 376c8d23-6b49-4138-aadd-e4cff8f9f945 [snappydata.jobs.SimpleJob]...
16/08/05 17:44:07.398 UTC pool-29-thread-1 INFO JobManagerActor: Starting job future thread
16/08/05 17:44:07.402 UTC SnappyLeadJobServer-akka.actor.default-dispatcher-17 INFO JobStatusActor: Job 376c8d23-6b49-4138-aadd-e4cff8f9f945 finished with an error
16/08/05 17:44:07.402 UTC pool-29-thread-2 WARN JobManagerActor: Exception from job 376c8d23-6b49-4138-aadd-e4cff8f9f945: scala.MatchError: null
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:244)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
A Java program using the Scala APIs gives this error. I have rewritten SimpleJob using the Java APIs:
public class SimpleJob extends JavaSnappySQLJob {

    public SimpleJob() {
        System.out.println(getClass().getSimpleName() + " Created");
    }

    @Override
    public Object runJavaJob(SnappyContext snappyContext, Config config) {
        System.out.println(getClass().getSimpleName() + ".runJob: executed");
        return null;
    }

    @Override
    public JSparkJobValidation isValidJob(SnappyContext snappyContext, Config config) {
        System.out.println(getClass().getSimpleName() + ".validate: executed");
        return new JSparkJobValid();
    }
}
I followed https://spring.io/guides/gs/accessing-data-mysql/#initial to start learning Spring Boot with MySQL, and I ran into the following error.
2021-09-23 01:28:31.193 INFO 1196 --- [ main] com.example.demo.DemoApplication : Starting DemoApplication using Java 1.8.0_144 on DESKTOP-PFH9867 with PID 1196 (C:\Users\Admin\IdeaProjects\demo\target\classes started by Admin in C:\Users\Admin\IdeaProjects\demo)
2021-09-23 01:28:31.196 INFO 1196 --- [ main] com.example.demo.DemoApplication : No active profile set, falling back to default profiles: default
2021-09-23 01:28:33.014 INFO 1196 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2021-09-23 01:28:33.027 INFO 1196 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-09-23 01:28:33.027 INFO 1196 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.52]
2021-09-23 01:28:33.145 INFO 1196 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2021-09-23 01:28:33.146 INFO 1196 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1875 ms
2021-09-23 01:28:33.218  WARN 1196 --- [           main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'studentController': Unsatisfied dependency expressed through field 'studentRepository'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.example.demo.repository.StudentRepository' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
2021-09-23 01:28:33.221 INFO 1196 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2021-09-23 01:28:33.242 INFO 1196 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-09-23 01:28:33.269 ERROR 1196 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Field studentRepository in com.example.demo.cotroller.StudentController required a bean of type 'com.example.demo.repository.StudentRepository' that could not be found.
The injection point has the following annotations:
- @org.springframework.beans.factory.annotation.Autowired(required=true)
Action:
Consider defining a bean of type 'com.example.demo.repository.StudentRepository' in your configuration.
Process finished with exit code 1
Obviously, the main problem is the @Autowired field in the controller, so I searched for many ways to deal with it. I don't want to add new code or files: as the tutorial says, when the repository interface is created, Spring automatically implements it in a bean that has the same name (with a change in case, so it is called userRepository). I thought the issue was in my application.properties or the relative location of my directories, but changing those didn't help at all.
And my code is as follows.
DemoApplication.java
package com.example.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
Student.java
package com.example.demo.entity;
import javax.persistence.Entity;
// https://spring.io/guides/gs/accessing-data-mysql/
@Entity // This tells Hibernate to make a table out of this class
public class Student {
    // 2147483647
    private Integer studentID;
    private String name;
    private String department;
    private String major;

    public Integer getStudentID() {
        return studentID;
    }

    public void setStudentID(Integer studentID) {
        this.studentID = studentID;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getDepartment() {
        return department;
    }

    public void setDepartment(String department) {
        this.department = department;
    }

    public String getMajor() {
        return major;
    }

    public void setMajor(String major) {
        this.major = major;
    }
}
StudentController.java
package com.example.demo.cotroller;
import com.example.demo.entity.Student;
import com.example.demo.repository.StudentRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;
@Controller // This means that this class is a Controller
@RequestMapping(path="/api/v1/student") // This means URLs start with /api/v1/student (after Application path)
public class StudentController {
    // This means to get the bean called studentRepository,
    // which is auto-generated by Spring; we will use it to handle the data
    @Autowired
    private StudentRepository studentRepository;

    @PostMapping(path="/add") // Map ONLY POST Requests
    public @ResponseBody String addNewUser(@RequestBody Student student) {
        // @ResponseBody means the returned String is the response, not a view name
        // @RequestParam means it is a parameter from the GET or POST request
        studentRepository.save(student);
        return "Saved";
    }
}
StudentRepository.java
package com.example.demo.repository;
import com.example.demo.entity.Student;
import org.springframework.data.repository.CrudRepository;
// This will be AUTO IMPLEMENTED by Spring into a Bean called studentRepository
// CRUD refers Create, Read, Update, Delete
public interface StudentRepository extends CrudRepository<Student, Integer> {
}
My directories are organized as follows.
java
|--com.example.demo
|  |--controller
|  |  |--StudentController.java
|  |--entity
|  |  |--Student.java
|  |--repository
|  |  |--StudentRepository.java
|  |--DemoApplication.java
SOLVED!
Finally, I found the real reason: I was too careless when following the tutorial. It says Click Dependencies and select Spring Web, Spring Data JPA, and MySQL Driver, but I didn't select Spring Data JPA!
I recreated the project in IDEA with Spring Web, Spring Data JPA, and MySQL Driver selected, copied in the files shown in the question description, and the problem was solved.
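For reference, the dependency that selecting Spring Data JPA adds can also be applied by hand without recreating the project. A Maven sketch (the version is managed by the Spring Boot parent, so none is given here):

```xml
<!-- Pulls in Spring Data JPA, Hibernate, and repository auto-configuration -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
```

Without this starter on the classpath, Spring Boot never scans for `CrudRepository` interfaces, which is why no `StudentRepository` bean existed to inject.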
Yes, you're right that Spring performs the repository configuration on its side, but you have to tell Spring which class you want this functionality applied to. For that, add the @Repository annotation to your repository class.
And one more thing for your knowledge: structure is an essential part of any program, and to keep it clean you should always follow best practices. Here you use the repository directly in the controller, which is poor practice. A controller is just for handling incoming requests and sending responses back, so this kind of business logic always belongs in a service class or a DAO implementation. Don't reference database objects from other classes.
In addition, if you work with multiple entities, such as User and Department, and you want to perform an action across them, only service-to-service communication should take place; repository objects should not be shared.
I have a docker-compose setup to start my SpringBoot application and a MySQL database. If the database starts first, then my application can connect successfully. But if my application starts first, no database exists yet, so the application throws the following exception and exits:
app_1 | 2018-05-27 14:15:03.415 INFO 1 --- [ main]
com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
app_1 | 2018-05-27 14:15:06.770 ERROR 1 --- [ main]
com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization
app_1 | com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
I could edit my docker-compose file to make sure the database is always up before the application starts up, but I want the application to be able to handle this case on its own, and not immediately exit when it cannot reach the database address.
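For context, the compose-file approach being set aside here would look roughly like the following (service names, image tag, and healthcheck command are illustrative, not taken from the question):

```yaml
version: "2.1"
services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy   # start app only after the healthcheck passes
```

This works, but it couples startup ordering to the compose file instead of making the application resilient on its own.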
There are ways to configure the datasource in the application.properties file to make the application reconnect to the database, as answered here and here. But that doesn't work for a startup connection to the datasource.
How can I make my SpringBoot application retry the connection at startup to the database at a given interval until it successfully connects to the database?
Set HikariCP's initializationFailTimeout property to 0 (zero), or a negative number. As documented here:
initializationFailTimeout
This property controls whether the pool will "fail fast" if the pool cannot be seeded with an initial connection successfully. Any positive number is taken to be the number of milliseconds to attempt to acquire an initial connection; the application thread will be blocked during this period. If a connection cannot be acquired before this timeout occurs, an exception will be thrown. This timeout is applied after the connectionTimeout period. If the value is zero (0), HikariCP will attempt to obtain and validate a connection. If a connection is obtained, but fails validation, an exception will be thrown and the pool not started. However, if a connection cannot be obtained, the pool will start, but later efforts to obtain a connection may fail. A value less than zero will bypass any initial connection attempt, and the pool will start immediately while trying to obtain connections in the background. Consequently, later efforts to obtain a connection may fail. Default: 1
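With Spring Boot, assuming the default HikariCP pool is in use, the same setting can be expressed in application.properties via relaxed binding of the Hikari property:

```properties
# Do not fail fast at startup: start the pool immediately and keep trying
# to obtain connections in the background (HikariCP initializationFailTimeout)
spring.datasource.hikari.initialization-fail-timeout=-1
```

With this in place, the application starts even while the database container is still coming up, and the first real query simply waits for a connection.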
There is an alternative way to do this that doesn't rely on a specific connection pool library or a specific database. Note that you will need to use spring-retry to achieve the desired behaviour with this approach.
First you need to add spring-retry to your dependencies :
<dependency>
<groupId>org.springframework.retry</groupId>
<artifactId>spring-retry</artifactId>
<version>${spring-retry.version}</version>
</dependency>
Then you can create a decorator over DataSource that extends AbstractDataSource, like below:
@Slf4j
@RequiredArgsConstructor
public class RetryableDataSource extends AbstractDataSource {

    private final DataSource dataSource;

    @Override
    @Retryable(maxAttempts = 5, backoff = @Backoff(multiplier = 1.3, maxDelay = 10000))
    public Connection getConnection() throws SQLException {
        log.info("getting connection ...");
        return dataSource.getConnection();
    }

    @Override
    @Retryable(maxAttempts = 5, backoff = @Backoff(multiplier = 2.3, maxDelay = 10000))
    public Connection getConnection(String username, String password) throws SQLException {
        log.info("getting connection by username and password ...");
        return dataSource.getConnection(username, password);
    }
}
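As a side note, the @Backoff settings above produce an exponential delay schedule. A plain-Java sketch of the resulting waits, assuming Spring Retry's default initial delay of 1000 ms (the snippets above do not override it):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the delay schedule behind @Retryable(maxAttempts = 5,
// backoff = @Backoff(multiplier = 1.3, maxDelay = 10000)).
public class BackoffSchedule {
    static List<Long> delays(int maxAttempts, double multiplier, long initialDelay, long maxDelay) {
        List<Long> result = new ArrayList<>();
        double delay = initialDelay;
        // maxAttempts counts the first call, so there are maxAttempts - 1 waits between retries
        for (int i = 0; i < maxAttempts - 1; i++) {
            result.add((long) Math.min(delay, maxDelay));
            delay *= multiplier;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(delays(5, 1.3, 1000, 10000)); // prints [1000, 1300, 1690, 2197]
    }
}
```

So with multiplier 1.3 the five attempts span roughly six seconds in total, while the 2.3 multiplier on the second overload hits the 10-second maxDelay cap by the fourth wait.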
Then you will need to inject this custom DataSource decorator into the Spring context by creating a custom BeanPostProcessor:
@Slf4j
@Order(value = Ordered.HIGHEST_PRECEDENCE)
@Component
public class RetryableDatabasePostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource) {
            log.info("-----> configuring a retryable datasource for beanName = {}", beanName);
            return new RetryableDataSource((DataSource) bean);
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Last but not least, you will need to enable Spring Retry by adding the @EnableRetry annotation to your Spring main class, for example:
@EnableRetry
@SpringBootApplication
public class RetryableDbConnectionApplication {
    public static void main(String[] args) {
        SpringApplication.run(RetryableDbConnectionApplication.class, args);
    }
}
I have this simple class (Spring Boot + JPA/Hibernate) that is being used just for testing.
@Entity
@Table
public class User {

    @Id
    @Column
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "first_name")
    private String firstName;

    @Column(name = "last_name")
    private String lastName;

    // getters + setters
    // ...
}
Since Spring Data REST seems to return Content-Type: application/hal+json by default, and my Ember.js front-end client app needs application/vnd.api+json, I made the following change:
spring.data.rest.defaultMediaType=application/vnd.api+json
Now after I make a request from the client app, I get matching ContentType for both request and response.
But now when I try to access the API directly or via Postman at localhost:8080/api/users/1, the app enters an infinite loop and a StackOverflowError occurs after some time.
I tried some workarounds using Jackson annotations like @JsonIgnoreProperties, etc., but that didn't help.
What confuses me most is that this class isn't related to any other classes/entities, so what could be causing this loop?
EDIT:
2017-11-04 18:03:35.594 INFO 17468 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2017-11-04 18:03:35.607 INFO 17468 --- [ main] c.i.restapp.RestAppApplication : Started RestAppApplication in 27.08 seconds (JVM running for 66.023)
2017-11-04 18:04:08.780 INFO 17468 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet'
2017-11-04 18:04:08.781 INFO 17468 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started
2017-11-04 18:04:08.882 INFO 17468 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 101 ms
Hibernate: select user0_.id as id1_1_0_, user0_.first_name as first_na2_1_0_, user0_.last_name as last_nam3_1_0_ from user user0_ where user0_.id=?
2017-11-04 18:04:54.726 WARN 17468 --- [nio-8080-exec-1] .w.s.m.s.DefaultHandlerExceptionResolver : Failed to write HTTP message: org.springframework.http.converter.HttpMessageNotWritableException: Could not write JSON: Infinite recursion (StackOverflowError); nested exception is com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError) (through reference chain:
org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
.JpaPersistentPropertyImpl["owner"]->org.springframework.data.jpa.mapping
.JpaPersistentEntityImpl["idProperty"]->org.springframework.data.jpa.mapping
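For intuition, the reference chain above is a cycle (owner -> idProperty -> owner -> ...), and any recursive serializer walking such a graph without cycle detection recurses until the stack overflows. A minimal plain-Java sketch of the mechanism (purely illustrative, unrelated to Spring or Jackson internals):

```java
// Two objects referencing each other form a cycle; rendering one recursively
// visits the other, which visits it back, and so on without end.
class Node {
    Node peer;
    String render(int depth, int limit) {
        if (depth >= limit) return "...";   // cut-off so this sketch terminates
        return "Node(" + (peer == null ? "null" : peer.render(depth + 1, limit)) + ")";
    }
}

public class CycleDemo {
    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.peer = b;
        b.peer = a; // cycle: a -> b -> a -> ...
        // Without the depth cut-off this recursion would throw StackOverflowError,
        // just like Jackson walking owner -> idProperty -> owner -> ...
        System.out.println(a.render(0, 4)); // prints Node(Node(Node(Node(...))))
    }
}
```

Here the cycle lives in Spring Data's internal metadata objects being serialized, not in the entity itself, which is why the entity having no relations doesn't prevent it.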
Spring Data REST currently supports only the following media types:
application/hal+json
application/json
https://docs.spring.io/spring-data/rest/docs/current/reference/html/#repository-resources.item-resource
but you can add application/vnd.api+json to the supported media types of the HAL Jackson HTTP message converter.
You can customize the message converters by extending the WebMvcConfigurerAdapter class and overriding the extendMessageConverters method:
package com.example;
import org.springframework.context.annotation.Configuration;
import org.springframework.hateoas.mvc.TypeConstrainedMappingJackson2HttpMessageConverter;
import org.springframework.http.MediaType;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import java.util.ArrayList;
import java.util.List;
@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    private static final MediaType APPLICATION_VND_API_JSON = MediaType.valueOf("application/vnd.api+json");
    private static final String HAL_JSON_SUBTYPE = "hal+json";

    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.stream()
                .filter(TypeConstrainedMappingJackson2HttpMessageConverter.class::isInstance)
                .map(TypeConstrainedMappingJackson2HttpMessageConverter.class::cast)
                .filter(this::isHalConverter)
                .forEach(this::addVndApiMediaType);
        super.extendMessageConverters(converters);
    }

    private boolean isHalConverter(TypeConstrainedMappingJackson2HttpMessageConverter converter) {
        return converter.getSupportedMediaTypes().stream()
                .anyMatch(type -> type.getSubtype().equals(HAL_JSON_SUBTYPE));
    }

    private void addVndApiMediaType(TypeConstrainedMappingJackson2HttpMessageConverter converter) {
        List<MediaType> supportedMediaTypes = new ArrayList<>(converter.getSupportedMediaTypes());
        supportedMediaTypes.add(APPLICATION_VND_API_JSON);
        converter.setSupportedMediaTypes(supportedMediaTypes);
    }
}
It still requires the parameter in application.properties:
spring.data.rest.defaultMediaType=application/vnd.api+json
Unfortunately requests with application/hal+json will not work after this dirty hack.
I am developing some unmanaged extensions for Neo4j 2.3.0. Now I want to test the functionality of my code with JUnit. Is there a way to test my methods locally without a Neo4j instance running on my PC?
I want something like this:
- create a temporary instance of neo4j before executing test
- fill instance with data
- call my extension via rest
- check the results
I have created a Neo4jTestServer class (using neo4j-harness):
public final class Neo4jTestServer {

    public static final String EXTENSION_MOUNT_POINT = "/v1";
    public static final String EXTENSION_RESOURCES = "my.company.neo4j.extension";

    private static Neo4jTestServer INSTANCE = null;

    public static synchronized Neo4jTestServer getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new Neo4jTestServer();
        }
        return INSTANCE;
    }

    private final ServerControls serverControls;

    private Neo4jTestServer() {
        serverControls = TestServerBuilders.newInProcessBuilder()
                .withExtension(EXTENSION_MOUNT_POINT, EXTENSION_RESOURCES)
                .newServer();
    }

    public ServerControls getServerControls() {
        return serverControls;
    }

    public void shutdown() {
        serverControls.close();
    }
}
And my test class is looking like this:
public class TestResource {

    private Neo4jTestServer server;

    @Before
    public void prepare() {
        this.server = Neo4jTestServer.getInstance();
    }

    @After
    public void endup() {
        // Shutdown server
        this.server.shutdown();
    }

    @Test
    public void test() {
        // TODO fill neo4j with data
        HTTP.Response response = HTTP.GET(this.server.getServerControls().httpURI().resolve("/v1/calculation/test").toString());
    }
}
Here is a link to the resource I used.
I also checked the questions from here, but I don't understand whether they run the tests on a "real" Neo4j instance or not.
Can this only be executed on the server, or is there a way to run such tests locally?
EDIT:
I always get a 500 response when calling the REST method.
This is the output when executing the test:
Nov 04, 2015 10:33:17 AM com.sun.jersey.api.core.PackagesResourceConfig init
INFORMATION: Scanning for root resource and provider classes in the packages:my.company.neo4j.extension
Nov 04, 2015 10:33:17 AM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFORMATION: Root resource classes found: class my.company.neo4j.extension.Resource
Nov 04, 2015 10:33:17 AM com.sun.jersey.api.core.ScanningResourceConfig init
INFORMATION: No provider classes found.
Nov 04, 2015 10:33:17 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFORMATION: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Nov 04, 2015 10:33:17 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFORMATION: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Nov 04, 2015 10:33:17 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFORMATION: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Nov 04, 2015 10:33:17 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFORMATION: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Is the "INFORMATION: No provider classes found." line perhaps the problem?
EDIT: this is my extension (it is just for testing):
@Path("/calculation")
public class ResourceV1 {

    @Path("/test")
    @GET
    public Response test() throws Exception {
        return Response.ok().build();
    }
}
I also found the neo4j.log file from the temporary database:
2015-11-04 11:52:39.876+0100 INFO [o.n.s.d.LifecycleManagingDatabase] Successfully started database
2015-11-04 11:52:39.896+0100 INFO [o.n.s.CommunityNeoServer] Starting HTTP on port 7474 (4 threads available)
2015-11-04 11:52:40.087+0100 INFO [o.n.s.m.ThirdPartyJAXRSModule] Mounted unmanaged extension [my.company.neo4j.extension] at [/v1]
2015-11-04 11:52:40.156+0100 INFO [o.n.s.w.Jetty9WebServer] Mounting static content at /webadmin
2015-11-04 11:52:40.233+0100 WARN [o.n.s.w.Jetty9WebServer] No static content available for Neo Server at port 7474, management console may not be available.
2015-11-04 11:52:40.252+0100 INFO [o.n.s.w.Jetty9WebServer] Mounting static content at /browser
2015-11-04 11:52:41.267+0100 INFO [o.n.s.CommunityNeoServer] Remote interface ready and available at http://localhost:7474/
What you are trying is the following:
- The Neo4jTestServer class is a factory for an in-process Neo4j server. It's basically the same as a "real" one.
- The TestResource class is a simple test for your extension.
The problem could be how you load your extension into the test. Is your extension in the package named "my.company.neo4j.extension"?
Could you please show us the code of your extension?
My suggestions are the following:
- read http://neo4j.com/docs/stable/server-unmanaged-extensions-testing.html
- look at GraphUnit, which provides much better facilities for testing extensions - https://github.com/graphaware/neo4j-framework/tree/master/tests
After setting up a new project I found the solution to my problem. My Maven dependencies included both
<dependency>
<groupId>org.neo4j.test</groupId>
<artifactId>neo4j-harness</artifactId>
<version>${neo4j.version}</version>
<scope>test</scope>
</dependency>
and
<dependency>
<groupId>javax.ws.rs</groupId>
<artifactId>javax.ws.rs-api</artifactId>
<version>2.0</version>
<scope>provided</scope>
</dependency>
And in my case, that was the problem: after removing the javax.ws.rs dependency the test case was successful and I could call the REST methods.
EDIT: After removing the JAX-RS dependency it was no longer possible to build the extension because of the missing classes.
My solution: add the following dependency instead of javax.ws.rs-api.
<dependency>
<groupId>org.neo4j.3rdparty.javax.ws.rs</groupId>
<artifactId>jsr311-api</artifactId>
<scope>provided</scope>
<version>1.1.2.r612</version>
</dependency>
It seems to work fine with neo4j-harness.
I am using Azure Service Bus queues (AMQP protocol) with Apache Qpid (0.3) as the Java client.
I am also using Spring's JmsTemplate to produce messages and DefaultMessageListenerContainer to manage my consumers (Spring JMS 4.0.6).
Spring configurations:
@PostConstruct
private void JndiLookup() throws NamingException {
    // Configure JNDI environment
    Hashtable<String, String> envPrp = new Hashtable<String, String>();
    envPrp.put(Context.INITIAL_CONTEXT_FACTORY,
            PropertiesFileInitialContextFactory.class.getName());
    envPrp.put("connectionfactory.SBCF", "amqps://owner:{primaryKey}@{namespace}.servicebus.windows.net");
    envPrp.put("queue.STORAGE_NEW_QUEUE", "QueueName");
    context = new InitialContext(envPrp);
}

@Bean
public ConnectionFactory connectionFactory() throws NamingException {
    ConnectionFactory cf = (ConnectionFactory) context.lookup("SBCF");
    return cf;
}

@Bean
public DefaultMessageListenerContainer messageListenerContainer() throws NamingException {
    DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
    messageListenerContainer.setConnectionFactory(connectionFactory());
    Destination queue = (Destination) context.lookup("QueueName");
    messageListenerContainer.setDestination(queue);
    messageListenerContainer.setConcurrency("3-10");
    MessageListenerAdapter adapter = new MessageListenerAdapter();
    adapter.setDelegate(new MessageWorker());
    adapter.setDefaultListenerMethod("onMessage");
    messageListenerContainer.setMessageListener(adapter);
    return messageListenerContainer;
}

@Bean
public JmsTemplate jmsTemplate() throws NamingException {
    JmsTemplate jmsTemplate = new JmsTemplate();
    jmsTemplate.setConnectionFactory(connectionFactory());
    return jmsTemplate;
}
Nothing fancy in the configuration, just straightforward.
Running the code, everything seems to work, but after a few minutes without traffic in the queue the consumers seem to lose their connection to the queue and stop taking messages.
I don't know if it is related, but every 5 minutes I am getting the following warning:
Fri Nov 07 15:23:53 +0000 2014, (DefaultMessageListenerContainer.java:842) WARN : Setup of JMS message listener invoker failed for destination 'org.apache.qpid.amqp_1_0.jms.impl.QueueImpl@8fb0427b' - trying to recover. Cause: Force detach the link because the session is remotely ended.
Fri Nov 07 15:23:56 +0000 2014, (DefaultMessageListenerContainer.java:891) INFO : Successfully refreshed JMS Connection
I have messages sitting in the queue for hours without being handled by the consumers; only when I restart the app do the consumers renew the connection properly and take the messages.
Is the problem with the Spring listener container properties or the Qpid connection factory, or is it an issue with Azure Service Bus?
I couldn't find a post related to my situation; I would appreciate any help!