Clustering ejabberd across 2 Linux machines

I am currently trying to cluster ejabberd using 2 Ubuntu instances, but I am facing some problems. I have 2 instances inside Oracle VirtualBox. My current ejabberd.yml file for both instances has the following host:
hosts:
- "xyz-VirtualBox"
For node 1 I modified ejabberdctl.cfg and changed the Erlang node name, for example:
ERLANG_NODE=ejabberd@1.1.1.1
INET_DIST_INTERFACE=1.1.1.1
where 1.1.1.1 is the IP of my first Ubuntu machine. I made sure that the 2nd Ubuntu machine has the same .erlang.cookie and made the corresponding NODE and INTERFACE changes on machine 2. I start my first instance using
ejabberdctl start
and it works fine, because I can access the web admin console. I start ejabberd on the second instance using
ejabberdctl start
and it also runs fine, but when I try to join it to the first instance's cluster using the following command, I get an error:
ejabberdctl --no-timeout join_cluster ejabberd@1.1.1.1
Error: {no_ping,'ejabberd@1.1.1.1'}
Note: hostname --fqdn on machine 1 outputs xyz-VirtualBox.

I guess you have this in machine 1:
ERLANG_NODE=ejabberd@1.1.1.1
and something like this in machine 2:
ERLANG_NODE=ejabberd@1.1.1.2
Can machine 2 connect to machine 1? I mean, on machine 2, does this work?
ping 1.1.1.1
If that pings correctly, then you can test with step 4 of this tutorial, to see whether the Erlang nodes can reach each other: https://ejabberd.im/interconnect-erl-nodes/index.html
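That step boils down to starting a bare Erlang shell on each machine with the same cookie and pinging one node from the other; a minimal sketch, assuming the cookie lives at /var/lib/ejabberd/.erlang.cookie (the path and the test node names are placeholders):
# on machine 1
erl -name test1@1.1.1.1 -setcookie "$(cat /var/lib/ejabberd/.erlang.cookie)"
# on machine 2
erl -name test2@1.1.1.2 -setcookie "$(cat /var/lib/ejabberd/.erlang.cookie)"
(test2@1.1.1.2)1> net_adm:ping('test1@1.1.1.1').
pong
If you get pang instead of pong, make sure port 4369 (epmd) and the Erlang distribution port range are open between the two machines.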


Internal 500 error on Google Compute Engine, installing littlest jupyter

"Internal 500 server error" after VM runs for a day or two.
This is the second time it has happened, I start the instance, install littlest Jupyterhub
(see details below). I can login to the external ip, for a day, but then it stops
with internal 500 error. I cannot ssh or get into the instance, only alternate is to
create a new instance and re-do. What is the problem?
I have installed littlest jupyterhub using on this instance, using
#!/bin/bash
curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin master
I would recommend you enable access to the serial console on your instance [1].
You will also need to set up a password for your user, following this documentation [2].
With these two steps done, you should be able to reconnect to your instance once you are locked out, as you mentioned, by following [3].
You should then be able to investigate what is going on in the instance:
verify whether your application is still running, whether the SSH server is still running, etc.
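For reference, both steps can also be done from the CLI with gcloud; a minimal sketch, assuming an instance named tljh-vm in zone us-central1-a (both names are placeholders):
# enable interactive serial console access on the instance
gcloud compute instances add-metadata tljh-vm \
    --zone us-central1-a --metadata serial-port-enable=TRUE
# set a Linux password for your user (run inside the VM while SSH still works)
sudo passwd $USER
# connect to the serial console once you are locked out
gcloud compute connect-to-serial-port tljh-vm --zone us-central1-a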
Frederic
[1] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#enable_instance_access
[2] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#setting_up_a_local_password
[3] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#connectserialconsole

How to set up MySQL Developer for a PCF MySQL database to manage it

I am trying to understand PCF concepts, and I am wondering: once I am done creating a MySQL service in PCF, how can I manage that database (creating tables, maintaining those tables) just like we do in our traditional environment using MySQL Developer? I came across one service, PivotalMySQLWeb, and tried it, but didn't like it much. So if I can somehow get the connection details of the MySQL service, I can use them to connect using SQL Developer.
The links @khalid mentioned are definitely good.
http://docs.pivotal.io/p-mysql/2-0/use.html
https://github.com/andreasf/cf-mysql-plugin#usage
More generally, you can use an SSH tunnel to access any service, not just MySQL. This also allows you to use whatever tool you would like to access the service.
This is documented here, but in case that link goes away, here are the steps.
Create your target service instance, if you don't have one already.
Push an app, any app. It really doesn't matter, it can be a hello world app. The app doesn't even need to use the service. We just need something to connect to.
Either bind the service from #1 to the app in #2, or create a service key using the service from #1. If you bind to the app, run cf env <app>; if you use a service key, run cf service-key MY-DB EXTERNAL-ACCESS-KEY. Either one will give you your service credentials.
Run cf ssh -L 63306:us-cdbr-iron-east-01.p-mysql.net:3306 YOUR-HOST-APP, where 63306 is the local port you'll connect to on your machine and us-cdbr-iron-east-01.p-mysql.net:3306 are the host and port from the credentials in step #3.
The tunnel is now up, use whatever client you'd like to connect to your service. For example: mysql -u b5136e448be920 -h localhost -p -D ad_b2fca6t49704585d -P 63306, where b5136e448be920 and ad_b2fca6t49704585d are the username and database name from step #3 and 63306 is the local port you picked from step #4.
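Putting those steps together end to end; a sketch that reuses the example names above (the service, key, app, and credential values are all placeholders, not fixed values):
# step 3: create a service key and read the credentials
cf create-service-key MY-DB EXTERNAL-ACCESS-KEY
cf service-key MY-DB EXTERNAL-ACCESS-KEY
# step 4: open the tunnel and leave it running
cf ssh -L 63306:us-cdbr-iron-east-01.p-mysql.net:3306 my-host-app
# step 5: in another terminal, connect through the tunnel
mysql -u b5136e448be920 -h 127.0.0.1 -P 63306 -p -D ad_b2fca6t49704585d
One gotcha: with the stock mysql client, prefer -h 127.0.0.1 over -h localhost, because localhost makes the client use the local Unix socket and silently ignore -P.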
Additionally, if you want to connect aws-rds-mysql (instantiated from Pivotal Cloud Foundry) from IntelliJ, you can use the DB-Navigator Plugin (https://plugins.jetbrains.com/plugin/1800-database-navigator) inside IntelliJ, through which, database manipulation can be performed.
After creating the ssh tunnel $ cf ssh -L 63306:<DB_HOSTNAME>:3306 YOUR-HOST-APP (as also mentioned in https://docs.pivotal.io/pivotalcf/2-4/devguide/deploy-apps/ssh-services.html),
Go to DB Navigator plugin and click on custom under new connection.
Enter the URL as: jdbc:mysql://<username>:<password>@localhost:63306/<database_name>
The following thread might be helpful for you as well How do I connect to my MySQL service on Pivotal Cloud Foundry (PCF) via MySQL Workbench or CLI or MySQLWeb Database Management App?

Configure a Wildfly 10 MySQL datasource in OpenShift v3

I have an application currently working on my local Dev machine. It uses Wildfly 10, MySQL 5.7 and Hibernate. My application looks for the 'AppDS' datasource from within Wildfly.
I've created a Wildfly 10 container and a MySQL container on OpenShift V3. Typically, I would log into Wildfly and configure a datasource, but all that configuration is lost when a container restarts. I thought it would be a matter of finding my connection environment settings, and using the pre-configured database connections, but I can't find what the variables should be set to, and the default connections don't work without them.
I downloaded and read OpenShift for Developers, but they side-step the issue by creating a direct database connection, rather than going through a datasource.
Exporting the environment variables failed with 'no matches for apps.openshift.io/, Kind=DeploymentConfig'. Is the book out of date? Are they not using a deployment config to store environment variables?
I would appreciate it greatly if someone could point me in the right direction.
I have a project running locally on my machine that uses Wildfly 10, Mysql 5.7 and Hibernate. I found the documentation to be incomplete. After a few days of working with it, I have figured out how to deploy a simple J2EE project with this stack.
I am updating my question with the step-by-step I wish I'd had. I hope this saves someone some time in the future.
create new openshift user
create project dbtest
add MySQL to dbtest project:
The following service(s) have been created in your project: mysql:
Username: test
Password: test
Database Name: testdb
Connection URL: mysql://mysql:3306/
add Wildfly to the project:
oc login https://api.starter-us-west-1.openshift.com
oc project dbtest
oc status
scale the current wildfly pod to 0 (you won't have enough CPU to run 3 pods, and a redeploy tries to start a new one and hot-swap them)
From left menu: Applications->Deployments->(dbtest)Wildfly10 pod->environment(tab)-> add:
MYSQL_DATABASE=testdb
MYSQL_DB_ENABLED=true
MYSQL_USER=test
MYSQL_PASSWORD=test
scale the wildfly pod back to 1.
use terminal in Wildfly to run ./add-user.sh
oc port-forward wildfly10-6-rkr58 :9990 (replace wildfly10-6-rkr58 with your pod name, found by clicking on the running pod [the circle with a 1 in it] and noting the pod name in the upper left corner)
log in to Wildfly from 127.0.0.1:<forwarded port> and test the MySQLDS. It should now connect.
Go through the environment variables mentioned here to get a better understanding.
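For reference, the console steps above can also be scripted with oc; a sketch, assuming the deployment config is named wildfly10 (check oc get dc for the real name):
# scale the wildfly pod down while changing configuration
oc scale dc/wildfly10 --replicas=0
# set the datasource environment variables on the deployment config
oc set env dc/wildfly10 MYSQL_DATABASE=testdb MYSQL_DB_ENABLED=true MYSQL_USER=test MYSQL_PASSWORD=test
# scale back up; the new pod starts with the variables applied
oc scale dc/wildfly10 --replicas=1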

Openshift - "Unable to connect to gear" when running: rhc show-app <app> --gear quota

I created an app called "world" following the instructions from:
https://blog.openshift.com/12-tips-for-hosting-wordpress-on-openshift/.
It's a hosted Wordpress blog, with PHP 5.4 scalable up to 1GB, with a Web Load Balancer and MySQL 5.5.
Every time I try to check the space used, I get the same error.
rhc show-app world --gears quota
Unable to connect to gear 54d48383fcf933f91f0000aa@54d48383fcf933f91f0000aa-laurapons.rhcloud.com
Unable to connect to gear 54d48383fcf933f91f0000a9@world-laurapons.rhcloud.com
Gear                     Cartridges          Used  Limit
------------------------ ------------------- ----- -----
54d48383fcf933f91f0000aa mysql-5.5           error 1 GB
54d48383fcf933f91f0000a9 haproxy-1.4 php-5.4 error 1 GB
I tried to restart the application (using restart and stop&start commands) but nothing seems to work.
I am also facing some other connection problems (probably related to the same issue):
I have the same problem when trying to clone the application with git clone:
ssh: connect to host world-laurapons.rhcloud.com port 22: Bad file number
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
And also with rhc port-forward world.
I copied the URL for git clone from the openshift online dashboard, and I can open the wordpress blog and see all the information, but somehow, I'm unable to access to the data.
I have already created a default Public Key and 2 authorisations (one to access through the browser and the other to access through RHC)...
What should I try?
How can I get the usage?
Do I need to set up anything else?
I am stuck... any suggestion?
Sounds like your SSH key is not working properly. Make sure you installed your keys and that they are working. Try running 'rhc setup'. If that still doesn't work, try
ssh -vvv 54d48383fcf933f91f0000a9@world-laurapons.rhcloud.com
and look at the output.
You can also try using
ssh -i /path/to/your/ssh.key 54d48383fcf933f91f0000a9@world-laurapons.rhcloud.com
And see if that works (specifies what ssh key to use)
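To check whether the key OpenShift has on file matches the one your client is offering, compare fingerprints; a sketch, assuming your key pair is at ~/.ssh/id_rsa (the path is an assumption):
# list the public keys registered with your OpenShift account
rhc sshkey list
# print the fingerprint of your local public key and compare
ssh-keygen -lf ~/.ssh/id_rsa.pub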
rhc with some Ruby versions has issues with Pageant (PuTTY). I closed Pageant, ran the rhc command again, and then it worked.

Site to site OpenSWAN VPN tunnel issues with AWS

We have a VPN tunnel with Openswan between two AWS regions and our colo facility (we used AWS's guide: http://aws.amazon.com/articles/5472675506466066). Regular usage works OK (ssh, etc.), but we are having some MySQL issues over the tunnel between all areas. Whether using the mysql command-line client on a Linux server or connecting with MySQL Connector/J, it basically stalls: it seems to open the connection, but then gets stuck. It doesn't get denied or anything, it just hangs there.
After initial research I thought this was an MTU issue, but I've messed with that a lot and had no luck.
Connecting to the server works fine, and we can choose a database to use and such, but with the Java connector the client doesn't appear to receive any network traffic after the query is made.
When running a SELECT in the MySQL client on Linux, we get a max of 2 or 3 rows before it goes dead.
With this said, I also have a separate Openswan VPN on the AWS side for client (Mac and iOS) VPN connections. Everything works fantastically through the client VPN, and it seems more stable in general. The main difference I've noticed is that the static connection uses "tunnel" as the type while the client uses "transport"; but when I switch the static tunnel connection to transport, it reports something like 30 open connections and doesn't work.
I'm very new to Openswan, so I'm hoping someone can point me in the right direction to get the static tunnel working as well as the client VPN does.
As always, here's my config files:
ipsec.conf for BOTH static tunnel servers:
# basic configuration
config setup
# Debug-logging controls: "none" for (almost) none, "all" for lots.
# klipsdebug=none
# plutodebug="control parsing"
# For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
protostack=netkey
nat_traversal=yes
virtual_private=
oe=off
# Enable this if you see "failed to find any available worker"
# nhelpers=0
#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf
VPC1-to-colo tunnel conf
conn vpc1-to-DT
type=tunnel
authby=secret
left=%defaultroute
leftid=54.213.24.xxx
leftnexthop=%defaultroute
leftsubnet=10.1.4.0/24
right=72.26.103.xxx
rightsubnet=10.1.2.0/23
pfs=yes
auto=start
colo-to-VPC1 tunnel conf
conn DT-to-vpc1
type=tunnel
authby=secret
left=%defaultroute
leftid=72.26.103.xxx
leftnexthop=%defaultroute
leftsubnet=10.1.2.0/23
right=54.213.24.xxx
rightsubnet=10.1.4.0/24
pfs=yes
auto=start
Client point VPN ipsec.conf
# basic configuration
config setup
interfaces=%defaultroute
klipsdebug=none
nat_traversal=yes
nhelpers=0
oe=off
plutodebug=none
plutostderrlog=/var/log/pluto.log
protostack=netkey
virtual_private=%v4:10.1.4.0/24
conn L2TP-PSK
authby=secret
pfs=no
auto=add
keyingtries=3
rekey=no
type=transport
forceencaps=yes
right=%any
rightsubnet=vhost:%any,%priv
rightprotoport=17/0
# Using the magic port of "0" means "any one single port". This is
# a work around required for Apple OSX clients that use a randomly
# high port, but propose "0" instead of their port.
left=%defaultroute
leftprotoport=17/1701
# Apple iOS doesn't send delete notify so we need dead peer detection
# to detect vanishing clients
dpddelay=10
dpdtimeout=90
dpdaction=clear
Found the solution. I needed to add the following iptables rule on both ends:
iptables -t mangle -I POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
This, along with an MTU of 1400, and we're looking very solid. The rule clamps the TCP maximum segment size to the path MTU, so full-size packets (like multi-row query results) fit through the tunnel instead of being silently dropped.
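If you want to confirm the workable MTU empirically before settling on a number, probe with don't-fragment pings; a sketch, assuming 10.1.2.10 is a host on the far side of the tunnel (the address is a placeholder):
# 1372 bytes of ICMP payload + 28 bytes of headers = a 1400-byte packet;
# raise -s until replies stop coming back to find the real path MTU
ping -M do -s 1372 10.1.2.10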
We had the same issue with a server connecting from the EU region to an RDS instance in the US. This appears to be a known issue with RDS instances not responding to the ICMP messages needed to auto-discover the path MTU. As a workaround, you'll need to configure a smaller MTU on the instance that is performing the query.
On the server that is making the connection to the RDS instance (not the VPN tunnel instances), run the following command to set an MTU of 1422 (which worked for us):
sudo ifconfig eth0 mtu 1422
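Note that an ifconfig change does not survive a reboot; a sketch of the modern equivalent and one way to persist it (the interface name and the Debian/Ubuntu config location are assumptions):
# same effect with the ip tool
sudo ip link set dev eth0 mtu 1422
# to persist on Debian/Ubuntu with ifupdown, add "mtu 1422" under the
# iface eth0 stanza in /etc/network/interfaces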