Configuration of Axon Framework 4.2 without Spring Boot - configuration

I'm new to Axon framework. I'm evaluating it with Axon Server.
Using Spring Boot is very convenient and worked right away.
The problem is that we are aiming to use Spring Boot processes tied together with non-Spring Boot processes (Hybris Commerce from SAP).
My question is: what do I have to do to configure Axon Framework 4.2 without Spring Boot? (Without Spring Boot the AxonServerAutoConfiguration does not work.)
Thanks in advance!

Axon provides the Configuration API, which has been given its own module as of release 4.
This module contains the Configurer, AggregateConfigurer, EventProcessingConfigurer and SagaConfigurer, which should give you all the handles to define any setup with buses, aggregates and query models.
The Reference Guide typically provides snippets on how to configure a given component, both through the Configuration API and Spring Boot as separate code tabs.
Additionally, it is to be expected that the AxonServerAutoConfiguration does not work without Spring Boot, as it is a Spring Boot-specific solution for configuring your application.
Even without it though, Axon will (through a Service Loader mechanism) auto-load the ServerConnectorConfigurerModule once you've created a default Configurer. This should give you the required infrastructure components to configure an application using Axon Server without utilizing Spring Boot.
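For completeness, here is a minimal sketch of such a setup, assuming axon-configuration and axon-server-connector are on the classpath; the commented-out registrations are placeholders for your own aggregates and query-model handlers:

import org.axonframework.config.Configuration;
import org.axonframework.config.DefaultConfigurer;

public class AxonApplication {

    public static void main(String[] args) {
        // DefaultConfigurer is the entry point of the Configuration API.
        // With axon-server-connector on the classpath, the ServerConnectorConfigurerModule
        // is picked up through the ServiceLoader mechanism and wires the command bus,
        // event store and query bus to Axon Server (localhost:8124 by default).
        Configuration configuration = DefaultConfigurer.defaultConfiguration()
                // .configureAggregate(YourAggregate.class)            // register aggregates here
                // .registerEventHandler(conf -> new YourProjection()) // and query-model handlers here
                .buildConfiguration();

        configuration.start();

        // shut the configuration down cleanly when the JVM exits
        Runtime.getRuntime().addShutdownHook(new Thread(configuration::shutdown));
    }
}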

Related

Deploy Spring Microservices on Openshift

I need to deploy a few microservices on OpenShift. These microservices are implemented using Spring Cloud. I use Spring Eureka for service discovery/load balancing and Spring Zuul for service routing.
From what I understand, OpenShift already provides these features (service discovery, load balancing, routing) via Kubernetes.
With this being said, can I integrate Spring Eureka and Spring Zuul with the OpenShift platform?
Wouldn't it be redundant to add Spring Eureka and Spring Zuul components into OpenShift since the platform itself already provides these microservice features?
I was thinking of removing the service registry and routing Spring components and just implementing routing using OpenShift. However, that would leave the project heavily dependent on this cloud platform.
What would your approach be? Use the features provided by the OpenShift (routing, load balancing) or use the ones provided by the Spring framework and try to integrate them with the cloud platform?
Thanks
It would indeed be redundant.
Eureka can be replaced by Kubernetes Services (they provide a load balancer and a domain name for a group of pods).
Zuul can be replaced by OpenShift Routes for exposing your services.
If you are using a platform, use the platform-provided functionality. Kubernetes Services will be available on any Kubernetes-based platform, so that's the easy one to replace while keeping your coupling to the platform low. The routing can be more difficult: if Zuul is only used for routing, replace it with the OpenShift router; if Zuul also has other responsibilities like security, it might be better to stick with Zuul.
I agree with Jeff, and I want to add a few points about using Spring Zuul as a gateway instead of OpenShift Routes:
If you use Spring Zuul as a gateway, you provide a single point of access to your cluster. Otherwise, your clients must know the URLs exposed by OpenShift Routes, which increases the complexity of your code and makes it harder to maintain. A major benefit of using an API Gateway is that it encapsulates the internal structure of the application.
The other point is about security. If you use OpenShift Routes to expose your internal microservices, you effectively open the door of each microservice to the public world directly. In addition, if you want to use JWT or a security token, you should choose Spring Zuul.
The API Gateway also provides each kind of client with a specific API. This reduces the number of round trips between the client and the application.
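To illustrate that last point, a cross-cutting concern like a token check can be centralized in a single Zuul filter instead of being handled behind every Route. This is only a sketch, assuming spring-cloud-starter-netflix-zuul is on the classpath; the header check below stands in for real JWT validation:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.stereotype.Component;

@Component
public class AuthHeaderFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre"; // run before the request is routed to a backing service
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true; // apply to every routed request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        String authHeader = ctx.getRequest().getHeader("Authorization");
        if (authHeader == null || !authHeader.startsWith("Bearer ")) {
            // reject the call at the gateway before it ever reaches a microservice;
            // a real implementation would validate the JWT here
            ctx.setSendZuulResponse(false);
            ctx.setResponseStatusCode(401);
        }
        return null;
    }
}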

Spring Batch integration with spring cloud data flow on local server to add spring admin capabilities

I have a basic Spring Batch app that runs on embedded Apache Tomcat in Spring Boot. I need to add Spring Batch Admin capabilities to it. As per the latest Spring docs, I need to use Spring Cloud Data Flow to do this (https://docs.spring.io/spring-batch-admin/). So now I need to use Spring Cloud Data Flow and integrate my Spring Batch app on a local server. I just want it to run on my local machine under Tomcat without deploying it to any cloud environment like Cloud Foundry or OpenShift. Is it possible? I am sure it's possible. I would like to get some references/examples on this type of integration and a starter guide for integrating a Spring Batch app. Do I need to create tasks in Spring Cloud Data Flow to run my Spring Batch app? If there are any sample examples/pseudo code to guide me, that would make it easy.
As described in the migration-guide, you can use the "local" variant of Spring Cloud Data Flow (SCDF) as a replacement to Spring Batch Admin (SBA).
SCDF is a simple Spring Boot application that you can run as a standalone Java process, similar to how you're running the application today.
Also, as described in the migration-steps, you'd have to port your existing batch workload to the Spring Cloud Task model, and that should be a straightforward process - use this Spring Batch sample. For the most part, you'd copy/paste the business logic into a Spring Cloud Task application, and all the infrastructure including schemas, the repository, and other batch goodies will continue to work. There are a few complex implementations in the task-app-starters, which can be used as a reference, too.
Lastly, you can use SCDF's dashboard for monitoring and management.
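For a rough idea of what the ported workload can look like (a sketch under the assumptions above, not the referenced sample itself), a regular Spring Batch job is wrapped as a Spring Cloud Task so the local SCDF server can launch and track it; class, step and job names are placeholders:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@EnableTask            // records each run in the task repository that SCDF reads
@EnableBatchProcessing
@SpringBootApplication
public class BatchTaskApplication {

    @Bean
    public Step step(StepBuilderFactory steps) {
        return steps.get("step")
                .tasklet((contribution, chunkContext) -> {
                    // the existing business logic is copied/pasted here unchanged
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Job job(JobBuilderFactory jobs, Step step) {
        return jobs.get("job").start(step).build();
    }

    public static void main(String[] args) {
        SpringApplication.run(BatchTaskApplication.class, args);
    }
}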

Eureka's hsqldb overriding MySQL

I have a Spring Boot project that currently consists of three microservices (all of them are maven children of the mentioned project), namely:
eureka-server : as the name says, it's simply a Eureka project that works as a server for registering other microservices
user-server : a project that holds a 'monolithic stack' (model, DAO, service and controller). Here is where the problem is. More on this later.
web-server : a project that contains the AngularJS application and a controller that is accessible from AngularJS and that communicates with the user-server module.
Eureka forces me to include an hsqldb dependency in the parent pom in order to launch the three mentioned applications.
The problem is that I was using MySQL in user-server and hsqldb has somehow overridden the MySQL data source.
In other words, the database engine of user-server is now hsqldb and I want to keep working with MySQL, and if I remove the dependency, the application will obviously not launch.
Is there any way to solve this and work with, maybe, two databases in user-server?
Thank you everyone!
I finally figured out how to get it working. I'll just post it here in case someone faces a similar problem.
It seems that application.properties wasn't being read when launching the application: by telling Spring Boot which .yml configuration file to use for Eureka, it was being overridden.
In the .yml file of the microservice I wasn't able to set the datasource to MySQL either, so the solution was to hard-code the datasource properties when launching the microservice, as follows:
System.setProperty("spring.datasource.platform","mysql");
System.setProperty("spring.datasource.url","jdbc:mysql...");

How to configure Netty Client Handler in Spring Web Application

Is there a specific way of configuring a Netty client handler as a message receiving point in a Spring JSF web application?
If some standalone Java applications act as Netty servers, how can I receive their messages in the Spring JSF web application?
I am assuming your question is how to configure Netty with Spring in a web application that uses JSF. If so, see this link: Using JAVA NIO framework in SPRING server.
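For a rough idea of what that can look like (my own sketch, not taken from the linked answer), the Netty client handler can be a plain class whose received messages are handed to a Spring-managed service that backs the JSF views:

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;

public class NettyClientHandler extends SimpleChannelInboundHandler<String> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        // hand the message over to a Spring-managed service here
        System.out.println("received: " + msg);
    }

    // typically started from a Spring lifecycle callback (e.g. @PostConstruct)
    public static void connect(String host, int port, NettyClientHandler handler) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(new StringDecoder(), handler);
                    }
                });
        bootstrap.connect(host, port).sync();
    }
}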

Running Mule Standalone vs Tomcat in Production

There are many ways of deploying Mule ESB into a production environment. According to the documentation, it appears that running Mule as a standalone service is the recommended way of doing so.
Are there any reasons for NOT running Mule standalone in production? I'm sure it's stable, but how does it compare to Tomcat as far as performance, reliability, and resource utilization go?
Should I still consider running it within Tomcat for any reason?
Using Tomcat, or any other web container, allows you to use the web tier of that container for HTTP inbound endpoints (via the Servlet transport) instead of either Mule's HTTP or Jetty transports.
Other differences are found in class loading, handling of hot redeployment and logging.
Now, the main reason why people do not use Mule standalone is corporate policy, i.e. "thou shalt deploy on _". When production teams have gained experience babysitting a particular Java app/web server, they want you to deploy your Mule project in that context so they can administer/monitor it in a well-known and consistent manner.
But if you're happy with the inbound HTTP layer you get in Mule standalone and you are allowed to deploy it in production, then go for it. It's production ready.
Mule actually recommends deploying standalone. Inside a container like Tomcat it has to share the thread pool, heap, etc., which can obviously prevent it from performing at its best.
The main reason you'd want to run inside a container like Tomcat is to get automatic deployment, i.e. you can just update your Mule application .war and the container will restart Mule with the new application. This helps in testing.
Also, some transports are specific to running inside a container, like the Servlet transport. On the other hand, if you design your solution so that Mule transports messages between your container and your servlets, you're doing it wrong.