I've installed an ArangoDB instance on a Google Cloud virtual machine (tcp://10.240.0.2). I would like to set up an asymmetrical cluster with another VM where I've also installed ArangoDB (tcp://10.240.0.3).
I followed the official guide to configure the production scenario: one coordinator and one DBServer on the same machine.
I also tried a second configuration to cluster the two VM instances, but it doesn't work, showing this error in the Google Chrome console:
{"error":true,"code":500,"errorNum":500,
"errorMessage":"Cannot check port on dispatcher tcp://10.240.0.3:8529"}
Here you can find the configurations that I have tried.
What could be causing the error?
PS: I've opened ports 8529, 8530 and 8629 in the firewall.
Thanks in advance.
Daniele
Have you installed ArangoDB on both virtual machines and changed the configuration (on both) to set
[cluster]
disable-dispatcher-kickstarter = false
disable-dispatcher-frontend = false
and then restarted the database servers? I assume so, since you get "Connection OK" for both servers. Your browser would then talk to the first dispatcher, which in turn will contact the second one. The error message you get suggests that this latter step does not work, since checking ports is the first request the first dispatcher would send to the second one.
Is it possible that processes in the first VM cannot access tcp://10.240.0.3:8529 on the second VM? Maybe the respective other subnets are not routed from within the VMs?
Furthermore, once you have got this to work, you will almost certainly also need to open port 4001 on the first VM, because that is where our etcd (the Agency) will listen. In addition, ports 8530 and 8629 are the defaults that are tried first; if they are not usable for some reason, the dispatchers will instead assign subsequent port numbers to the coordinators and DBservers. In that case you would have to open those as well, at least from the respective other VM.
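As a sketch, the extra ports could be opened with the gcloud CLI roughly like this (the rule name, network and source range are assumptions; adjust them to your project):
# Allow the cluster ports between the two VMs (name, network and range are illustrative)
gcloud compute firewall-rules create arango-cluster \
  --network default \
  --source-ranges 10.240.0.0/16 \
  --allow tcp:4001,tcp:8529-8629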
I have a basic stack of containers on their own user-defined network with a subnet of 172.21.0.0/16. My MySQL container's address is 172.21.0.2 and the PHP/Apache container's address is 172.21.0.3.
Until this point I had to permit MySQL to allow incoming connections from PHP at 172.21.0.3, which made perfect sense. Now it seems as though the connections are coming from 172.21.0.1, the gateway, and this doesn't make much sense to me. My (basic to intermediate) understanding is that the gateway should only be used when traffic is destined for an address outside the local network - but obviously in this case MySQL and PHP/Apache are on the same network.
Two of our environments have started acting like this, and while it's a simple fix to permit connections from the gateway address, I'm hesitant to proceed without an understanding as to what has happened and why. This also seems to add extra delay to database queries within the application.
Logging in to an affected environment via phpMyAdmin displays "User: root@172.21.0.1" in the "Database Server" information pane. An unaffected environment displays "root@phpmyadmin_1.test_default" (user@[container].[network]).
Both environments use the exact same images and the same version of Docker - 18.06.1-ce. Other than the Docker version upgrade, nothing has changed in the docker-compose.yml I was using.
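For anyone debugging the same thing, the addresses and the gateway can be confirmed with Docker itself (the network and container names below are taken from the description above):
# Show the network's subnet, gateway and attached containers
docker network inspect test_default
# Show a container's address on its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' phpmyadmin_1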
Why has my environment started acting like this? Should I prefer the connection coming in from the actual source, and not via the gateway? How can I return to that way of operation?
Thank you for any guidance or knowledge.
For anyone else who hits a similar rut: I'm of the mind that this was caused by an upgrade of Docker from 18.03.1-ce to 18.06.1-ce via Docker's own repository. Performing a server reboot after this operation has (for now) restored sense to the networking of the stack.
The connection to my MySQL container is now correctly coming from the PHP/Apache container and not from the gateway address of the bridge network. The lag this introduced is gone, and I'm able to remove the privilege associated with the gateway address.
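With connections coming from the container's real address again, the temporary grant for the gateway address can be dropped; a minimal sketch with the mysql client (the account name is a placeholder):
# Remove the temporary account tied to the gateway address (account name is a placeholder)
mysql -u root -p -e "DROP USER 'app_user'@'172.21.0.1';"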
I am using Tomcat7, which is running on port 80.
Calling services directly via the instance IP works just fine, but calling services via the load balancer IP throws a 502 error.
Assuming you are using a managed instance group to maintain homogeneous instances, you need to establish a service endpoint that the load balancer can use to direct traffic. This might be the problem.
I have written the steps to set up a load balancer here. As a load balancer contains a lot of moving parts (target proxies, forwarding rules, backend services), it is difficult to debug without any config files. Posting your config here would help us debug it better.
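For example, the relevant pieces can be dumped with the gcloud CLI (the resource names are placeholders):
# List/describe the load balancer's moving parts (resource names are placeholders)
gcloud compute forwarding-rules list
gcloud compute target-http-proxies list
gcloud compute url-maps describe MY_URL_MAP
gcloud compute backend-services describe MY_BACKEND_SERVICE --global
gcloud compute backend-services get-health MY_BACKEND_SERVICE --global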
What I did to make load balancing (LB) work is described below.
I created an nginx layer, which by default runs on port 80.
I connected it to the Tomcat7 layer using nginx's default site file; Tomcat now runs on its default port, i.e. 8080.
So when the LB connects to my instance group, it connects over HTTP on port 80.
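A minimal sketch of the nginx default site used for that (the exact proxy settings are assumptions):
# /etc/nginx/sites-enabled/default (sketch)
server {
    listen 80 default_server;
    location / {
        # Forward everything to the local Tomcat7 on its default port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}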
The health check is really important: the LB's health check must pass. To make it pass, keep a file on the instance group's instances, e.g. "/foo/bar/index.html" at "/var/lib/tomcat7/webapps/foo/bar/index.html", so that the LB can request this file directly. Once the health check passes, the instances will no longer show as unhealthy.
Keep the same health check for the instance group; it checks the same path as mentioned above.
Ideally the health check should have passed without this file, but I have tried it several times and it does not; the only way I could make it pass was to keep that file.
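As a sketch, such a health check could be created with the gcloud CLI like this (the name is a placeholder; the path must exist on every instance):
# Legacy HTTP health check hitting the file kept on each instance
gcloud compute http-health-checks create lb-health-check \
  --port 80 \
  --request-path /foo/bar/index.html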
I have a problem when considering multiple Couchbase (CB) instances running on the same PC. The screen that allows adding another server only provides an option for the second server's IP, and no port. This might be because each CB instance communicates over the same ports. However, without being able to specify the connecting port, how can I add another server that is running on the same PC? (The already-running server's IP is 127.0.0.1, so what should I enter as the second server's IP?)
The best solution for running this would be to use virtual machines for the CB instances: one VM per node/instance, which can be quickly provisioned using Vagrant. This (particularly the Vagrant approach) allows the nodes/instances to communicate with each other on the correct ports, as each node is given a unique IP from the reserved private address ranges, and it is well tested in terms of resource usage and performance.
More information, along with prebuilt Vagrant configurations, can be found on GitHub and on this blog (by one of Couchbase's engineers).
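A minimal Vagrantfile sketch for two such nodes (the box name and IPs are assumptions; Couchbase still has to be installed inside each VM):
# Vagrantfile: two nodes on a private network, each with its own IP
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  (1..2).each do |i|
    config.vm.define "cb#{i}" do |node|
      node.vm.network "private_network", ip: "192.168.56.10#{i}"
    end
  end
end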
We're using OpenNebula to simulate a simple replicated JBoss application.
We've installed all the OpenNebula packages, QEMU, KVM and libvirt.
We have created a simple ad hoc Ethernet network between my PC (a node) and my friend's PC (which is both a node and the front-end) by connecting the two machines with an Ethernet cable (10.0.0.1 and 10.0.0.2).
We can ping each other correctly, and we've set everything up so that we can SSH to each other without a password as the "oneadmin" user.
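(For anyone reproducing this, the passwordless SSH part can be set up roughly like this, run as oneadmin on each host; the peer address is just one of the IPs above:)
# as the oneadmin user, on each host
ssh-keygen -t rsa
ssh-copy-id oneadmin@10.0.0.2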
We've configured all the relevant files, such as:
/etc/libvirt/libvirtd.conf
/etc/default/libvirtd-bin
And so on...
kvm and kvm-intel are both enabled.
The daemon
libvirtd -d -l
seems to start correctly.
In fact, from the OpenNebula GUI on the front-end we can see both hosts being monitored.
However, there's a problem when we try to start a virtual machine on the node that is not the front-end, i.e. when we try to deploy a VM on the other node. The error is something like this:
cannot stat `/var/lib/one/datastores/1/f5394317d377beaa09fc07697df9ff68
but if, from the front-end which has virtual machine n°1, we run
cd /var/lib/one/datastores/1
then we can see that file; we've also given it full permissions...
Any idea? :(
This may be related to the datastore configuration. If you left the default values, OpenNebula expects a shared filesystem (i.e. NFS) between the front-end and the virtualization nodes.
More context on the error (which I believe can be found in /var/lib/one/oned.log) would help in analysing this problem.
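If you do want to keep the default shared-datastore setup, a rough sketch (assuming the front-end is 10.0.0.2 and the other node is 10.0.0.1, per the description above) is to export the datastores directory from the front-end over NFS and mount it on the node:
# On the front-end (10.0.0.2): export the datastores directory
# (requires an NFS server, e.g. nfs-kernel-server)
echo "/var/lib/one/datastores 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
# On the node (10.0.0.1): mount it at the same path
mount -t nfs 10.0.0.2:/var/lib/one/datastores /var/lib/one/datastores
Alternatively, the datastore can be switched to the ssh transfer driver so that images are copied to the node instead of shared.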
I downloaded the jboss tar file.
Copied into my test server.
Untarred it and installed it at $HOME/jboss/
Now, I need to have three instances running at the same time - Dev, QA, UAT - on a SINGLE server.
Is Domain mode meant for this situation?
My conclusion was that it is not, and that Domain mode is for managing JVMs across multiple servers.
For example, if I wanted QA to be in server1 and server2.
Is that correct?
However, my need is NOT to manage JBOSS instances across multiple servers.
Given that should I be using standalone mode?
If so, how would I run three instances of JBoss (Dev, QA and UAT) concurrently?
I tried the instructions given here (Approach 2) : https://community.jboss.org/wiki/MultipleInstancesOfJBossAS7OnTheSameMachine
But I keep getting errors like this:
MSC00001: Failed to start service jboss.serverManagement.controller.management.http: org.jboss.msc.service.StartException in service jboss.serverManagement.controller.management.http: Address already in use /127.0.0.1:9990
Is there any simple tutorial that I can follow?
I see this question asked multiple times, but none of the answers seem satisfactory... or at least helpful to me. Is this a black art that lowly developers should not attempt at home alone?
SGB
To get multiple JBoss instances running on Linux, I changed a single line in JBOSS_HOME/standalone/configuration/standalone.xml from:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
to the following...
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:100}">
NOTE:
The reason I was having problems was that I had set JBOSS_HOME in my .bash_profile, as per the JBoss installation instructions. I needed to remove this so that the instances would not all use the same JBOSS_HOME.
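To get the three instances (Dev, QA and UAT) from the question running off a single install, a sketch along these lines should work (the directory names are illustrative; each copy of the standalone directory gets its own port offset, roughly Approach 2 from the wiki linked in the question):
# One copy of the standalone directory per instance
cp -r $HOME/jboss/standalone $HOME/jboss/standalone-dev
cp -r $HOME/jboss/standalone $HOME/jboss/standalone-qa
cp -r $HOME/jboss/standalone $HOME/jboss/standalone-uat
# Start each instance with its own base dir and port offset
$HOME/jboss/bin/standalone.sh -Djboss.server.base.dir=$HOME/jboss/standalone-dev -Djboss.socket.binding.port-offset=0
$HOME/jboss/bin/standalone.sh -Djboss.server.base.dir=$HOME/jboss/standalone-qa -Djboss.socket.binding.port-offset=100
$HOME/jboss/bin/standalone.sh -Djboss.server.base.dir=$HOME/jboss/standalone-uat -Djboss.socket.binding.port-offset=200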
A slight change to the answer above:
bash$ ./standalone.sh -Djboss.socket.binding.port-offset=10000
This will start the server on port 18080: the default port 8080 plus the offset of 10000 gives 18080.
It's easier to add "-Djboss.socket.binding.port-offset=1000" while starting standalone.sh, e.g.:
./standalone.sh -Djboss.socket.binding.port-offset=1000
This will start JBoss with all ports shifted by +1000 from the standard ones (so 8080 becomes 9080). No need to change any XML files.
If you are using JBoss from IntelliJ, you can add the offset to the server run configuration: go to Run --> Edit Configurations and set the offset there.