Is it mandatory to use domain mode instead of standalone mode to enable session replication in JBoss EAP 6.2?
And can anyone provide a proper way to configure session replication on a JBoss EAP 6.2 server?
You do not need to use domain mode. You can use the standalone-ha.xml or standalone-full-ha.xml configuration profiles to enable session replication across multiple standalone instances. However, the standalone instances need to be on the same subnet for the default configuration to work. You also need to make sure the same version of your application is deployed to each instance - something that happens automatically in domain mode.
Your application also needs to be configured for session replication:
1) Add a <distributable/> tag to the WEB-INF/web.xml file (see the snippet after this list).
2) In the load balancer (in front of the JBoss instances), enable sticky sessions so that each user is assigned a JBoss instance and directed back to that same instance as long as it is available.
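For step 1, a minimal web.xml sketch; the surrounding web-app element and version shown here are illustrative, the only required addition is the empty distributable element:

<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Marks the application as distributable so JBoss replicates its HTTP sessions -->
    <distributable/>
</web-app>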
I have 2 pods (1 mysql + 1 idm) on a Kubernetes cluster (1 master + 1 worker node on VirtualBox).
Although Keyrock creates the idm database, it cannot be migrated.
So the superuser is never inserted into the database and many fields of the tables are missing.
Below are the idm logs from the corresponding container:
To find out the concrete problem with your installation, I would need more information about the config and versions of MySQL and Keyrock.
To avoid such problems, I would recommend using the Keyrock helm chart:
https://github.com/FIWARE/helm-charts/tree/main/charts/keyrock
It's tested to run out of the box with the Bitnami MySQL chart (https://github.com/bitnami/charts/tree/master/bitnami/mysql).
I am setting up a new OpenShift system with origin v3.11. The only information provided on the official website is: Disabling Features Using Feature Gates. And the Service Proxy Mode section of the document says only two proxy modes are supported, so I am not sure if the ipvs mode has been removed.
I think Kubernetes 1.11 supports IPVS as a proxy mode, and OpenShift 3.11 is based on k8s 1.11:
Kubelet Version: v1.11.0+d4cacc0
Kube-Proxy Version: v1.11.0+d4cacc0
Has anyone ever tried to enable the IPVS proxy mode on OpenShift 3.11? I have tried to modify the master-config.yaml and node-config.yaml files, and also tried to modify the config map. I added something like:
feature-gates:
- SupportIPVSProxyMode=true
proxyArguments:
  cluster-cidr:
  - 10.128.0.0/14
  proxy-mode:
  - ipvs
  ipvs-min-sync-period:
  - 5s
  ipvs-sync-period:
  - 5s
  ipvs-scheduler:
  - rr
Then I restarted the master and node services.
I also have ipvsadm installed on all nodes.
But it does not seem to be working.
OpenShift only supports two service proxy modes:
iptables
userspace
In general, OpenShift by design only supports a subset of upstream Kubernetes features (though in some cases, such as route objects or the web console, OpenShift has extra features).
These features are cherry-picked by Red Hat engineering based on stability, supportability, and customer demand.
We are experiencing an issue when trying to connect to the cluster after updating the version of the Java SDK.
The setup of the system is as follows:
We have a web application that is using the Java SDK and a Couchbase cluster. In between we have a VIP (Virtual IP Address). We realise that isn't ideal, but we're not able to change it immediately since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that when the application starts it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the cluster nodes currently in use and the cluster would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would run without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Our application would start and the Java SDK would initiate a request on port 11210 to the VIP. Since this port is not open, the request would fail and the Java SDK would throw an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further request would be made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order in which ports are used for cluster discovery has changed between versions? Also, could somebody please provide some advice on how we could resolve the issue? We are trying to understand the client's behavior: if we opened all those ports on the VIP, would the client then function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing out potential solutions since that would interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map and bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache and not the cluster manager (8091). The SDK should use 8091 as a fallback if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me. It cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely and go with a DNS SRV record instead. That gives you one DNS entry for the SDK connection object and you just manage the node list in DNS. It works great. I say SDK 2.2 as the DNS SRV record solution is fully supported there, in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this and it helped with limiting the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
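A minimal sketch of the DNS SRV approach on SDK 2.2+; the hostname couchbase.example.com and bucket name myBucket are placeholders for whatever SRV record and bucket you actually publish:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class DnsSrvBootstrap {
    public static void main(String[] args) {
        // Enable DNS SRV lookups (fully supported from SDK 2.2, experimental in 2.1).
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .dnsSrvEnabled(true)
                .build();

        // Pass the single DNS name; the SDK resolves the SRV record to the node list,
        // so node membership is managed purely in DNS instead of a VIP.
        Cluster cluster = CouchbaseCluster.create(env, "couchbase.example.com");
        Bucket bucket = cluster.openBucket("myBucket");

        System.out.println("Connected to bucket: " + bucket.name());
        cluster.disconnect();
    }
}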
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I don't guarantee that it'll work with a VIP even after that, but it may be worth a try if you're in a hurry.
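A minimal sketch of that shorter-term workaround; the VIP hostname and bucket name are placeholders:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class HttpBootstrapOnly {
    public static void main(String[] args) {
        // Disable the carrier (11210) bootstrap so the SDK falls back to
        // HTTP bootstrap on port 8091, the only port open on the VIP.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .bootstrapCarrierEnabled(false)
                .build();

        Cluster cluster = CouchbaseCluster.create(env, "vip.example.com");
        Bucket bucket = cluster.openBucket("myBucket");

        System.out.println("Connected to bucket: " + bucket.name());
        cluster.disconnect();
    }
}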
I have created a new OpenShift account for a new application I'm developing.
I have added a MongoDB cartridge for the database, and a Tomcat cartridge for the Java web application.
I now need to connect to the database from my Java web app, but I am missing two connection details:
$OPENSHIFT_MONGODB_DB_HOST
$OPENSHIFT_MONGODB_DB_PORT
As far as I know, I have to type rhc env list -a the_name_of_my_app in the console, but my application seems to have no environment variables set.
What can I do?
Apparently, the default environment variables are visible only via SSH.
In order to see them, you have to type rhc ssh <appid-as-seen-on-openshift-console> followed by env.
You can see the environment variables by SSHing into OpenShift. You can also use the OpenShift port-forwarding feature to set up a local connection to your database.
OpenShift blog link for port forwarding
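As an illustration of how those variables are typically consumed from a Java app, here is a minimal sketch; it assumes the mongo-java-driver is on the classpath and that the cartridge also exposes username/password variables under the names shown (verify the exact names with env on your gear):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;

public class OpenShiftMongo {
    public static void main(String[] args) {
        // Set automatically inside the OpenShift gear; when developing locally,
        // export them yourself (e.g. against an rhc port-forward tunnel).
        String host = System.getenv("OPENSHIFT_MONGODB_DB_HOST");
        String port = System.getenv("OPENSHIFT_MONGODB_DB_PORT");
        String user = System.getenv("OPENSHIFT_MONGODB_DB_USERNAME");
        String pass = System.getenv("OPENSHIFT_MONGODB_DB_PASSWORD");

        // Build the connection string from the environment and connect.
        MongoClient client = new MongoClient(new MongoClientURI(
                "mongodb://" + user + ":" + pass + "@" + host + ":" + port));

        System.out.println("Databases: " + client.getDatabaseNames());
        client.close();
    }
}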
I am using IntelliJ IDEA to develop my applications, and I use GlassFish as the application server.
When I want to run/debug my application I can configure it from Glassfish Server -> Local and define arguments there. However, besides the Local section there is also a Remote section for configuration, and I can easily configure and debug my application just by defining the host and port.
So my question is: why is the Glassfish Server Local configuration needed (except for defining extra parameters), and what is the difference between them (in terms of performance, etc.)?
There are a number of development workflow optimizations and automations that can be performed by an IDE when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they may have implemented:
1) Using in-place|exploded|directory deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver.
2) Linked to 1 is smarter redeployment. In some cases, a file change (like changing a jsp or an html file) does not need to trigger redeployment.
3) JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deployment of an app.
4) Access to server log files during deployment and execution.
5) The ability to start and stop the server... even today, you do need to restart GlassFish sometimes.
6) Viewing the generated Java sources of a JSP.
Most of these features are not available with a remote server and that has a negative effect on iterative development since the break between edit and validate can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.
Local starts GlassFish for you and performs the deployment. With Remote, you start GlassFish manually. Remote can be used to debug apps running on other machines; Local is useful for development and testing.