HAProxy agent-check - MySQL

I am following the guide below to set up HAProxy as a MySQL load balancer and to detect slave lag and adjust server weight accordingly:
https://www.percona.com/blog/2014/12/18/making-haproxy-1-5-replication-lag-aware-in-mysql/#comment-10967915
I managed to set up the PHP file (run as a service), and it listens on the defined port (3307). Telnet to port 3307 succeeds and returns the correct seconds_behind_master value.
Now the Haproxy part:
After configuring HAProxy and reloading it, HAProxy doesn't perform any agent-check on port 3307.
I stopped the slave so that seconds_behind_master became NULL and checked the HAProxy web interface, but nothing changed: the slave server is still shown as up and running.
Can anyone please point me in the right direction?
I tried both HAProxy 1.5.19 (upgraded from a previous version) and 1.6.3 (fresh install).
Update:
Haproxy configuration
frontend read_only-front
    bind *:3310
    mode tcp
    option tcplog
    log global
    default_backend read_only-back

backend read_only-back
    mode tcp
    balance leastconn
    server db01 1.1.1.1:3306 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server db02 2.2.2.2:3306 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
I managed to telnet to the agent port, and the PHP script fputs() the expected value when I stop the slave on one of the MySQL servers:
telnet 127.0.0.1 6789
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
down
However, the stats still show the server up with weight 100. I even tried other values such as "up 1%", but was still unable to change the weight.
echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,db-vu01,UP,100
read_only-back,db-vu02,UP,100
read_only-back,BACKEND,UP,200
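For context on what HAProxy expects from the agent: on each check it opens a TCP connection to agent-port and reads one ASCII line terminated by a newline (e.g. up, down, drain, or a weight such as 75%). A reply that isn't newline-terminated, or an agent listening on a different port than the configured agent-port, are both easy ways to end up with stats that never change. Below is a minimal Python stand-in for the PHP agent, just a sketch for testing the HAProxy side in isolation (the reply string and port are placeholders):

```python
import socket
import threading

def serve_agent(reply, host="127.0.0.1", port=0):
    """Tiny agent-check responder: send one newline-terminated
    status line to every connection, then close it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))          # port=0 -> pick any free port
    srv.listen(5)

    def loop():
        while True:
            conn, _ = srv.accept()
            # The trailing newline matters: HAProxy reads one line.
            conn.sendall(reply.encode("ascii") + b"\n")
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()        # (host, actual_port)
```

For example, serve_agent("down", port=6789) lets you verify whether HAProxy reacts to a hard-coded "down" at all; if the stats still don't move, the problem is on the HAProxy side rather than in the PHP script.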


Accidentally expose port?

I'm a beginner with both Docker and MySQL, and I used the command below to run a MySQL container:
docker container run --publish 3306:3306 --name mysqlDB -d --env MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
It runs successfully, and to grab the generated password I ran:
docker container logs [containerID]
Within the logs I can find my GENERATED ROOT PASSWORD, but while reading the logs I noticed the entry below:
[System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
May I know what this means? Did I by any chance open port 33060? And how do I verify that?
This seems to be a MySQL plugin that adds a document-oriented API to MySQL. You can find some more info here: https://www.percona.com/blog/2019/01/07/understanding-mysql-x-all-flavors/
That port number is unrelated to your bindings; it's just the default port number for that plugin.
Also, that port is not published, so there is nothing to fear: the attack surface is still the same.
And if you want to disable that thing, here are the instructions: https://dev.mysql.com/doc/refman/8.0/en/x-plugin-disabling.html (the command-line option is probably your best bet, considering the Docker environment).
To make sure the port is not published, run the container and do docker ps; you'll see something like this:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
43dd96119ded lb_geo-api "/bin/sh -c 'exec sh…" 6 months ago Up 7 days 80/tcp, 0.0.0.0:4203->8080/tcp lb_geo-api_1_a86ebad528fc
The last column, "PORTS", lists the container's ports and their bindings on your host:
80/tcp -- port 80 is exposed from inside the container but not mapped to a host port, so nobody from outside can connect to it
0.0.0.0:4203->8080/tcp -- port 8080 is exposed and mapped to port 4203 on all network interfaces, so it can be reached from outside
So if there is no port 33060 in your output, or if it is there but not mapped, you're safe. In any case, only you can map it, when you start the container; so if you did not do that, it is not mapped.
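The rule above (mapped entries contain "->", bare entries are container-only) can be sketched as a small parser; a hypothetical helper, assuming the PORTS field format shown by docker ps:

```python
def published_ports(ports_field):
    """Return (host_binding, container_port) pairs for entries in a
    `docker ps` PORTS field that are actually mapped to the host.
    Bare entries like '80/tcp' are exposed but not published."""
    published = []
    for entry in ports_field.split(","):
        entry = entry.strip()
        if "->" in entry:
            host_part, _, container_part = entry.partition("->")
            published.append((host_part, container_part))
    return published
```

For the example row above, published_ports("80/tcp, 0.0.0.0:4203->8080/tcp") returns only the 8080 mapping; a PORTS field of just "33060/tcp" would mean the X Plugin port is not reachable from the host.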
I was surprised by a MySQL log entry equivalent to yours, @Isaac, which led me to your question, although I'm not working with Docker. Here is what I think I've learned and what I've done.
MySQL's "X plugin" extends MySQL to be able to function as a document store. See MySQL manual section on server plugins, manual section on document store features, and April 2018 document store availability announcement.
By default, for its X plugin features, MySQL listens on port 33060, bound to all IP addresses. See manual section on X plugin options and system variables (indicating default values for "mysqlx_port" and "mysqlx_bind_address"), and X plugin option and variable reference. For its traditional features, MySQL still uses port 3306 by default.
I believe the default X Plugin port and network address are what are reflected in the log entry you posted. In particular, I believe the excerpt X Plugin ... bind-address: '::' indicates MySQL's default wildcard IP address binding for X Plugin connections.
If you'd like to use the X plugin features but refrain from listening to all IP addresses for them, you can specify the address(es) to which it listens for TCP/IP connections with the mysqlx_bind_address option. The command line format would be
--mysqlx-bind-address=addr
Alternatively, you could set that system variable in a MySQL option file, like this for example:
[mysqld]
<... other mysqld option group settings>
mysqlx_bind_address = 127.0.0.1
The MySQL manual provides helpful general information about specifying options on the command line or in an option file. Here is some information about setting MySQL options in a Docker container, although I have never tried it.
It seems there are distinct settings for the network addresses listened to by MySQL's X-plugin-enabled features and MySQL's traditional features. You set the network address(es) for the traditional features with the bind_address option. So if you want to limit both sets of features to listening for TCP/IP connections from localhost, you could, for example, put this in your MySQL options file, which is what I've just tried in mine:
[mysqld]
bind_address = 127.0.0.1
mysqlx_bind_address = 127.0.0.1
In contrast, it appears, you could set a single system variable -- skip_networking -- to permit only local, non-TCP/IP connections (e.g., Unix sockets, or Windows named pipes or shared memory) for both traditional and X Plugin features.
If you don't want to use the X Plugin features at all, you could disable them as @alx suggested.
To verify which network addresses and ports MySQL is listening on, you have a variety of options. In my non-docker Linux environment, I found
netstat -l | grep tcp
and
sudo lsof -i | grep mysql
helpful.
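If Python is handy, a plain TCP connect gives the same yes/no answer as netstat or lsof for any single port; a small sketch (the ports in the comment are just the MySQL defaults discussed above):

```python
import socket

def is_listening(host, port, timeout=1.0):
    """True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. check the classic and X Plugin default ports on localhost:
#   is_listening("127.0.0.1", 3306), is_listening("127.0.0.1", 33060)
```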
You have published your port. That --publish 3306:3306 actually publishes your container's port to a host port, and now your host port 3306 is occupied by MySQL. If you do not want that, just remove --publish 3306:3306 and the container port will not be bound to a host port.

Cannot remote access MySQL database of my openshift mysql cartridge [duplicate]

This question already has an answer here:
OpenShift: How to connect to postgresql from my PC
(1 answer)
Closed 6 years ago.
I've deployed a nodejs application at openshift.redhat.com with a mysql and phpmyadmin cartridge. I can access my database fine by going to mywebsite.rhcloud.com/phpmyadmin and logging in with my credentials, but when I try to add a connection to MySQL workbench on my local computer it doesn't seem to connect.
The information I'm using is from sshing into my application and typing:
echo $OPENSHIFT_MYSQL_DB_USERNAME
echo $OPENSHIFT_MYSQL_DB_PASSWORD
echo $OPENSHIFT_MYSQL_DB_HOST
echo $OPENSHIFT_MYSQL_DB_PORT
This gives my username, password, host and port which I use in MySQL workbench.
I've tried this: https://stackoverflow.com/a/27333276/2890156
I changed the bind-address from my database IP to 0.0.0.0 and added a new user via the phpMyAdmin web interface with % to allow the account to connect from any IP, but none of it seems to work.
I can't figure out what I'm doing wrong or missing; can anyone help me out?
EDIT:
It seems the bind-address I changed has reverted to my remote database IP after restarting the MySQL cartridge...
It's likely that a firewall is blocking access to your hosted database. You can verify this by using a network scan utility like nmap.
I'm going to assume the following for this example, change the respective values if they differ:
echo $OPENSHIFT_MYSQL_DB_HOST is mywebsite.rhcloud.com
echo $OPENSHIFT_MYSQL_DB_PORT is 3306
After installing nmap on your local machine, run the command:
nmap -Pn -p 3306 mywebsite.rhcloud.com
If it's blocked, then you'll get a filtered scan that looks like this:
Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-05 13:05 CDT
Nmap scan report for rhcloud.com (54.174.51.64)
Host is up.
Other addresses for rhcloud.com (not scanned): 52.2.3.89
rDNS record for 54.174.51.64: ec2-54-174-51-64.compute-1.amazonaws.com
PORT STATE SERVICE
3306/tcp filtered mysql
Nmap done: 1 IP address (1 host up) scanned in 2.10 seconds
Otherwise, you'll get an open scan like this:
Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-05 13:05 CDT
Nmap scan report for rhcloud.com (54.174.51.64)
Host is up.
Other addresses for rhcloud.com (not scanned): 52.2.3.89
rDNS record for 54.174.51.64: ec2-54-174-51-64.compute-1.amazonaws.com
PORT STATE SERVICE
3306/tcp open mysql
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
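nmap's open/filtered distinction can be roughly reproduced with a plain TCP connect; a sketch, assuming the usual firewall behavior (a silent drop produces a timeout, while a reachable-but-closed port refuses immediately):

```python
import socket

def probe(host, port, timeout=2.0):
    """Classify a TCP port roughly the way nmap does: "open" if the
    connect succeeds, "filtered" on a timeout (packets silently
    dropped), "closed" on an immediate refusal."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except socket.timeout:      # must be caught before OSError
        return "filtered"
    except OSError:
        return "closed"
    finally:
        s.close()
```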

Reverse tunnel works manually, not for replication

My MASTER MySQL server is on a local network, and I have a new slave which is remote (i.e. on the internet). As the MASTER does not have an accessible IP, I gathered from the docs that I should establish a reverse tunnel. I execute this on the MASTER:
ssh -f -N -T -R 7777:localhost:3306 user@slave.slave.com
The connection seems to work: I can go to the slave and connect with mysql to the MASTER without problems. For some reason, though, replication does not start. The MASTER is already replicating to two other slaves without problems, so the configuration seems to be correct there.
I initiated replication on the slave as:
CHANGE MASTER TO MASTER_HOST='127.0.0.1',
MASTER_PORT=7777,
MASTER_USER='my_repl',
MASTER_PASSWORD='xxxxx',
MASTER_LOG_FILE='mysql-bin.nnnnn',
MASTER_LOG_POS=mm;
SHOW SLAVE STATUS reports mysql trying to connect to the remote master, but never succeeding:
error connecting to master 'my_repl@127.0.0.1:7777' - retry-time: 60 retries: 86400
Can anyone suggest how to diagnose this problem?
BTW: OS is Linux.
My apologies... I didn't realize I had to define a new user with 127.0.0.1 as the host.
So 'intranet' connections use
replication_user@machine_name
as the id, while the connection which comes through the reverse tunnel uses
replication_user@127.0.0.1
as the id. Both have to be declared to MySQL separately. The rest of the info in the original message is valid - maybe this helps someone...
Greetings,
John
PS: Forgot to mention - I debugged this remotely (both MASTER and SLAVE are remote to me) using tcpdump:
tcpdump -i lo 'tcp port 7777'
on the SLAVE side, and
tcpdump -i lo 'tcp port 3306'
on the MASTER (of course this would not be very useful when there is a lot of traffic).

mysql client port

I'm connecting to a remote MySQL server (on the default port 3306) using the C API call mysql_real_connect().
How can I discover which TCP port is used on the client host?
Is it possible to specify the port that I wish to use?
You can use lsof.
Type the following in your shell:
$ lsof | grep TCP
Then look for the line showing your mysql connection; it includes the local port.
You can also make use of netstat.
Details can be found with man netstat.
As far as I know, you cannot specify the client-side port.
However, the MYSQL structure has a file descriptor buried in it (for me at least; tested on CentOS 7 with MariaDB 5.5.58).
You can use that to find the local address and port:
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/socket.h>   /* getsockname */
#include <arpa/inet.h>    /* ntohl, ntohs */
#include <mysql.h>        /* MYSQL, mysql.net.fd */

struct sockaddr_in laddr;
socklen_t sb = sizeof(laddr);
if (getsockname(mysql.net.fd, (struct sockaddr *)&laddr, &sb) == -1)
    printf("getsockname() failed, err %s\n", strerror(errno));
else
    printf("local address [%x] port [%u]\n", ntohl(laddr.sin_addr.s_addr), ntohs(laddr.sin_port));

How to close this ssh tunnel? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I opened a ssh tunnel as described in this post: Zend_Db: How to connect to a MySQL database over SSH tunnel?
But now I don't know what I actually did. Does this command affect anything on the server?
And how do I close this tunnel, because now I can't use my local mysql properly.
I use OSX Lion and the server runs on Ubuntu 11.10.
Assuming you ran this command, as described in the post you linked: ssh -f user@mysql-server.com -L 3306:mysql-server.com:3306 -N
A breakdown of the command:
ssh: that's pretty self-explanatory. Invokes ssh.
-f: (From the man ssh page)
Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or
passphrases, but the user wants it in the background.
Essentially: send ssh to the background once you've entered any passwords to establish the connection; it gives the shell prompt back to you at localhost rather than logging you in to the remote host.
user@mysql-server.com: the remote server you'd like to log into.
-L 3306:mysql-server.com:3306: This is the interesting bit. -L (from the man ssh page):
[bind_address:]port:host:hostport
Specifies that the given port on the local (client) host is to be
forwarded to the given host and port on the remote side.
So -L 3306:mysql-server.com:3306 binds the local port 3306 to the remote port 3306 on host mysql-server.com.
When you connect to local port 3306, the connection is forwarded over the secure channel to mysql-server.com. The remote host, mysql-server.com then connects to mysql-server.com on port 3306.
-N: don't execute a command. This is useful for "just forwarding ports" (quoting the man page).
Does this command affect anything on the server?
Yes, it establishes a connection between localhost and mysql-server.com on port 3306.
And how do I close this tunnel...
If you've used -f, you'll notice that the ssh process you've opened heads into the background. The nicer method of closing it is to run ps aux | grep 3306, find the pid of the ssh -f ... -L 3306:mysql-server.com:3306 -N process, and kill <pid>. (Or maybe kill -9 <pid>; I forget if plain kill works.) That has the beautiful benefit of not killing all your other ssh connections; if you've got more than one, re-establishing them can be a slight... pain.
... because now I can't use my local mysql properly.
This is because you've effectively "captured" local port 3306 and forwarded any traffic that attempts to connect to it off to the remote mysql process. A much nicer solution is to not use local port 3306 in the port-forward. Use something that's not used, like 33060. (Higher numbers are generally less used; it's pretty common to port-forward combinations like "2525->25", "8080->80", or "33060->3306". It makes remembering slightly easier.)
So, if you used ssh -f user#mysql-server.com -L 33060:mysql-server.com:3306 -N, you'd then point your Zend connect-to-mysql function to localhost on port 33060, which would connect to mysql-server.com on port 3306. You can obviously still connect to localhost on port 3306, so you can still use the local mysql server.
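To avoid guessing which local port is free for the -L forward, you can also ask the OS for one; a small Python sketch (binding port 0 makes the kernel pick an unused port):

```python
import socket

def free_local_port():
    """Return a TCP port that is currently unused on the loopback
    interface, e.g. as the local end of `ssh -L <port>:host:3306`."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))   # port 0 = "any free port"
        return s.getsockname()[1]
```

There is a small race (the port could be taken between picking it and starting ssh), but in practice this is a common trick for one-off tunnels.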
This will kill all ssh sessions that you have open from the terminal.
sudo killall ssh
Note: adding this as an answer since comments don't support code blocks.
In my opinion it is better NOT to use -f and instead just background the process as normal with &. That gives you the exact pid you need to kill:
ssh -N -L1234:other:1234 server &
pid=$!
echo "waiting a few seconds to establish tunnel..."
sleep 5
# ... do your stuff... launch mysql workbench, whatever
echo "killing ssh tunnel $pid"
kill $pid
Or better yet, just create this as a wrapper script:
# backend-tunnel <your cmd line, possibly 'bash'>
ssh -N -L1234:other:1234 server &
pid=$!
echo "waiting a few seconds to establish tunnel..."
sleep 5
"$#"
echo "killing ssh tunnel $pid"
kill $pid
backend-tunnel mysql-workbench
backend-tunnel bash