We currently run our WCF services in a WAS environment and host NServiceBus in the same host by using App_Code\InitializeService.AppInitialize().
All works fine until we want to use our container in our message handlers with registrations whose lifestyle is PerWcfOperation. These registrations are out of scope for the bus, which is logical.
We are now looking for a way to either:
Use the same container for the WCF services and NServiceBus, or
Clone or partially reuse the container we have in the WCF service for NServiceBus.
Any ideas?
I need to deploy a few microservices on OpenShift. These microservices are implemented using Spring Cloud. I use Spring Eureka for service discovery/load balancing and Spring Zuul for service routing.
From what I understand, OpenShift already provides these features (service discovery, load balancing, routing) via Kubernetes.
With this being said, can I integrate Spring Eureka and Spring Zuul with the OpenShift platform?
Wouldn't it be redundant to add the Spring Eureka and Spring Zuul components to OpenShift, since the platform itself already provides these microservice features?
I was thinking of removing the Spring service-registry and routing components and just implementing routing using OpenShift. However, that would leave the project heavily dependent on this cloud platform.
What would your approach be? Use the features provided by OpenShift (routing, load balancing), or use the ones provided by the Spring framework and try to integrate them with the cloud platform?
Thanks
It would indeed be redundant.
Eureka can be replaced by Kubernetes services, which provide a load balancer and a domain name for a group of pods.
Zuul can be replaced by OpenShift Routes for exposing your services.
If you are using a platform, use the platform-provided functionality. Kubernetes services are available on any Kubernetes-based platform, so that's the easy one to replace while keeping your coupling to the platform low. The routing can be more difficult: if Zuul is only used for routing, replace it with the OpenShift router; if Zuul also has other responsibilities, like security, it might be better to stick with Zuul.
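To make the first replacement concrete, here is a minimal sketch of a Spring service calling a sibling service by its Kubernetes Service name instead of going through a Eureka client; the service name orders-service and port 8080 are hypothetical:

import org.springframework.web.client.RestTemplate;

public class OrdersClient {

    // Plain RestTemplate: no Eureka or Ribbon needed, because the cluster
    // DNS resolves the Service name and kube-proxy load-balances across pods.
    private final RestTemplate restTemplate = new RestTemplate();

    public String fetchOrder(String id) {
        // "orders-service" is the Kubernetes Service name (hypothetical).
        return restTemplate.getForObject(
                "http://orders-service:8080/orders/" + id, String.class);
    }
}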
I agree with @Jeff, and I want to add a point about using Spring Zuul as a gateway instead of OpenShift routes:
If you use Spring Zuul as a gateway, you provide a single point of access to your cluster. Otherwise, your clients must know the URLs exposed by the OpenShift routes, which increases the complexity of your code and makes it harder to maintain. A major benefit of using an API gateway is that it encapsulates the internal structure of the application.
The other point is security. If you use OpenShift routes to expose your internal microservices, you effectively open each microservice's door to the public directly. In addition, if you want to use JWT or a security token, you should choose Spring Zuul.
The API gateway can also provide each kind of client with a specific API. This reduces the number of round trips between the client and the application.
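For comparison, keeping Zuul as the single entry point is mostly one annotation on a Spring Boot application (the actual routes would be declared in application.yml); the class name below is hypothetical:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// A minimal Zuul gateway: all clients hit this one application, and Zuul
// forwards each request to the matching internal service.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}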
I have an EJB application running on a GlassFish server that stores data in a MySQL DB, which I call the global DB.
I have two identical remote Swing applications, standalone applications that access the EJBs using RMI. Each has its own local DB in case the connection is lost.
My aim is to implement the two-phase commit protocol, i.e., to make one participant the coordinator and the others the participants.
One method I could think of was to implement this using JMS, i.e., send a message across a queue and make the remote clients listen to these messages and take appropriate action.
I do this by sending a message on a button click in one of the Swing applications.
The problem is, even though I have implemented MessageListener, the onMessage() method never receives any message on the other client.
Each remote client has the following properties set:
props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
props.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
props.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
This is to connect to the GlassFish server and access the ConnectionFactory and Queue that I have already configured.
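For context, the listener on each remote client is wired up roughly as below; the JNDI names are placeholders for the resources configured on the server. Note that a JMS connection delivers nothing until it is explicitly started, which is a common reason for onMessage() never firing.

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class RemoteListener {
    // "props" is the Properties object shown above; the JNDI names
    // ("jms/ConnectionFactory", "jms/CommitQueue") are placeholders.
    static void listen(Properties props) throws Exception {
        InitialContext ctx = new InitialContext(props);
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/CommitQueue");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> System.out.println("received: " + message));

        // Without this call, onMessage() is never invoked.
        connection.start();
    }
}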
Is it because only applications running on the server are allowed to receive messages, and not remote applications?
Any suggestions for a 2PC topology are welcome.
For this, we used JMS to exchange the messages between these systems, i.e., one acts as the coordinator and initiates the process by sending a message on the queue, and the others respond accordingly by sending a message back on the queue.
Since you are using EJB, you can use JTA to manage transactions; it is a standard implementation of the two-phase commit protocol, and JMS supports JTA too.
Here are my steps:
Configure the trans-attribute as Required/Mandatory/Supports, depending on your needs.
In your client, get a UserTransaction via a JNDI lookup from the EJB server.
Start the transaction from the client.
Commit or roll back the transaction on the client side.
This is the so-called "client owner transaction" design pattern. I suggest you read the Java Transactions book.
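A minimal sketch of steps 2-4 from a standalone client; the JNDI name used for the UserTransaction lookup is an assumption and varies by container:

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class ClientOwnedTransaction {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();

        // Step 2: look up UserTransaction from the server's JNDI tree
        // (the name below is an assumption).
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        // Step 3: start the transaction on the client.
        utx.begin();
        try {
            // ... invoke EJB methods here; with trans-attribute set to
            // Required/Mandatory/Supports they enlist in this transaction ...

            // Step 4: commit, letting the transaction manager drive the
            // two-phase commit across the enlisted resources.
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}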
There are many ways of deploying Mule ESB into a production environment. According to the documentation, running Mule as a standalone service appears to be the recommended way of doing so.
Are there any reasons for NOT running Mule standalone in production? I'm sure it's stable, but how does it compare to Tomcat as far as performance, reliability, and resource utilization go?
Should I still consider running it within Tomcat for any reason?
Using Tomcat, or any other web container, allows you to use the web tier of that container for HTTP inbound endpoints (via the servlet transport) instead of either Mule's HTTP or Jetty transports.
Other differences are found in class loading, handling of hot redeployment and logging.
Now, the main reason people do not use Mule standalone is corporate policy, i.e. "thou shalt deploy on _". When production teams have gained experience babysitting a particular Java app/web server, they want you to deploy your Mule project in that context so they can administer and monitor it in a well-known and consistent manner.
But if you're happy with the inbound HTTP layer you get in Mule standalone and you are allowed to deploy it in production, then go for it. It's production ready.
Mule actually recommends deploying standalone. Inside a container such as Tomcat, it has to share the thread pool, heap, etc., which can obviously prevent it from performing at its best.
The main reason you'd want to run inside a container like Tomcat is to get automatic deployment: you can just update your Mule application's WAR and the container will restart Mule with the new application. This helps in testing.
Also, some transports are specific to running inside a container, like the servlet transport. On the other hand, if you design your solution so that Mule is just transporting between your container and your servlets, you're doing it wrong.
Can I deploy Mule on any application server? If so, how do we deploy the Mule examples?
I have configured my Eclipse to run JBoss, but Mule flows don't get deployed to the JBoss server; the synchronisation gives an error (null pointer).
But when I run it as a Mule application, it runs fine and starts the Mule server.
How is Mule deployed in production? Do we need the Mule server in production running alongside the application server?
Can we package everything in one (ESB + application) and deploy it to the application server?
You have the choice regarding production deployments of Mule:
Use Mule standalone as your first choice, as it comes packed with all modules/transports, is production grade (control scripts...), and supports hot application reloads.
If your deployment environment forces you to deploy on application servers (like JBoss), package your Mule application as a web application (WAR).
If you decide to package your Mule application as a WAR, I strongly suggest you use Maven to do so, as each Mule module/transport requires numerous dependencies. Dealing with this by hand would be insane.
Also, be sure to use servlet inbound endpoints instead of HTTP inbound endpoints; otherwise Mule will open another HTTP server inside the web container. You want Mule to use the servlet container for its inbound HTTP requests.
Yes, you can. You might want to take a look at the Embedding Mule in a Java Application or Webapp manual page, and at Deploying Mule as a Service to Tomcat.
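For reference, a minimal sketch of embedding Mule 3.x in a plain Java application; the config file name mule-config.xml is a placeholder:

import org.mule.api.MuleContext;
import org.mule.config.spring.SpringXmlConfigurationBuilder;
import org.mule.context.DefaultMuleContextFactory;

public class EmbeddedMule {
    public static void main(String[] args) throws Exception {
        // Build a Mule context from an XML configuration on the classpath.
        MuleContext context = new DefaultMuleContextFactory()
                .createMuleContext(new SpringXmlConfigurationBuilder("mule-config.xml"));

        // Start all flows/services defined in the configuration.
        context.start();
    }
}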
I have a Swing-based application. Currently, the Swing clients communicate with EJBs running on a remote server through a third-party HTTP tunneling tool (JProxy), which is a commercial product.
My questions are:
Are there any other open-source/free HTTP tunneling tools, equivalent to JProxy, with which the Swing clients can communicate with the EJBs?
What ways, other than the RMI protocol, can a Swing client communicate with an EJB?
I suppose you could create a web service to communicate with your backend:
http://en.wikipedia.org/wiki/Web_service
http://java.sun.com/webservices/
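To sketch that route: with JAX-WS you can expose a plain class over HTTP and generate a Swing-side client from its WSDL, so the traffic is ordinary HTTP rather than RMI/IIOP. The class name, method, and URL below are hypothetical:

import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A plain JAX-WS endpoint the Swing client can call over HTTP instead of RMI;
// in practice this class would delegate to the session beans.
@WebService
public class OrderFacade {
    public String findOrder(String id) {
        return "order-" + id; // placeholder; delegate to the EJB here
    }

    public static void main(String[] args) {
        // Publish the service (and its WSDL) at the given URL; generate the
        // Swing-side client with wsimport from that WSDL.
        Endpoint.publish("http://localhost:8080/orders", new OrderFacade());
    }
}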