What's the reason ARP runs while trying the ping command? - ping

When I try the ping command and watch it in Wireshark, an ARP request goes out first, and only after the ARP reply does the ICMP request go out.
I think the reason the ARP request goes first is this:
while trying to ping, it needs to know the MAC address of the target device,
so it gets the MAC address first and then sends the ICMP request.
If that is true, is it possible to specify the MAC address in the ping command (so it doesn't try ARP)?
If that is not true, what's the reason?

You'll note that the ARP request only happens the first time you run ping. If you run it a second time (shortly after the first run), you'll see that the ping starts immediately with an ICMP request. This is because when a system discovers the IP address/MAC address association via ARP, it stores the result in a local ARP cache. Entries in the cache do expire after some amount of time.
You can manually populate the ARP cache using the arp command:
arp -s <ipaddr> <macaddr>
E.g. (the MAC address here is an example):
arp -s 192.168.1.1 00:11:22:33:44:55
You can see the contents of your ARP cache like this:
arp -an
So if you were to manually update the ARP cache with the MAC address of your target host, you would be able to ping it without an ARP request going over the network.
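To see the cache behavior end to end, here's a sketch (example IP address; the commands are printed as a dry run so they can be inspected safely; to actually execute them, change `run` to invoke its arguments and run as root):

```shell
# Dry-run sketch (example IP address): clearing the cached entry forces
# the next ping to be preceded by a fresh ARP request.
run() { echo "$@"; }   # stub that prints commands; use run() { "$@"; } to execute

run arp -d 192.168.1.1      # delete the cached entry (needs root)
run ping -c 1 192.168.1.1   # this ping triggers a new ARP exchange first
run arp -an                 # the entry is back in the cache afterwards
```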

Related

Simulate socket.io temporary disconnection in chrome

I have a simple chat app written in JavaScript (client and Node server) with socket.io. Users are permanently connected to the server. I need to test how my application behaves when the connection with the node-socket.io server is dropped. I need to block the socket.io connection for a few seconds and then allow my app to connect again. I need to do it using the browser, without stopping the server.
I know that the Chrome developer tools have a feature for simulating offline mode, but this feature does not drop/block socket.io connections.
So, how can I drop the socket.io connection for a few seconds using the Chrome browser?
It's possible to simulate disconnection using firewall rules on either the backend or the client side.
On the client side you need to drop all outbound packets to the server for a few seconds.
Example using iptables (works on Linux clients):
SERVER_IP="1.1.1.1"
# Append rule
iptables -A OUTPUT -d $SERVER_IP -j DROP
sleep 5
# Delete rule
iptables -D OUTPUT -d $SERVER_IP -j DROP
On the server side you need to drop all inbound packets from the specific client for a few seconds.
Example using iptables (works on Linux servers):
CLIENT_PUBLIC_IP="2.2.2.2"
# Append rule
iptables -A INPUT -s $CLIENT_PUBLIC_IP -j DROP
sleep 5
# Delete rule
iptables -D INPUT -s $CLIENT_PUBLIC_IP -j DROP
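Both fragments follow the same pattern, so they can be wrapped in a small helper (a sketch; the IPs are the same example values as above, printed as a dry run so the sequence is safe to inspect; switch the `run` stub to direct execution as root to actually apply it):

```shell
# Sketch: block traffic matching a host for N seconds, then restore it.
run() { echo "$@"; }   # dry-run stub; use run() { "$@"; } to really apply rules

simulate_outage() {
  chain="$1"; flag="$2"; host="$3"; secs="${4:-5}"
  run iptables -A "$chain" "$flag" "$host" -j DROP   # start the outage
  sleep "$secs"                                      # disconnection window
  run iptables -D "$chain" "$flag" "$host" -j DROP   # restore connectivity
}

simulate_outage OUTPUT -d 1.1.1.1 1    # client side: drop packets to the server
# simulate_outage INPUT -s 2.2.2.2 5   # server side: drop packets from the client
```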

Haproxy agent-check

I am following the guides to set up HAProxy as a MySQL load balancer, and to detect slave lag and change the weight accordingly.
https://www.percona.com/blog/2014/12/18/making-haproxy-1-5-replication-lag-aware-in-mysql/#comment-10967915
I managed to set up the PHP file (run as a service) and it listens on the defined port (3307). Telnet to port 3307 succeeds and it returns the correct seconds_behind_master value.
Now the Haproxy part:
After configuring HAProxy and reloading, HAProxy doesn't do any agent-check on port 3307.
I shut down the slave to make seconds_behind_master = NULL and checked the HAProxy web interface; nothing changed. The slave server is still shown as up and running.
Can anyone please point me to the right direction?
I tried both HAProxy 1.5.19 (upgraded from a previous version) and 1.6.3 (fresh install).
Update:
Haproxy configuration
frontend read_only-front
    bind *:3310
    mode tcp
    option tcplog
    log global
    default_backend read_only-back

backend read_only-back
    mode tcp
    balance leastconn
    server db01 1.1.1.1:3306 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server db02 2.2.2.2:3306 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
I managed to telnet and "fputs" the weight in the PHP script when I STOP SLAVE on one of the MySQL servers.
telnet 127.0.0.1 6789
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
down
However, when checking the stats, it still shows weight 100 and UP. I even tried other values such as "up 1%", but was still unable to change the weight.
echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,db-vu01,UP,100
read_only-back,db-vu02,UP,100
read_only-back,BACKEND,UP,200
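For reference, the agent-check protocol itself is simple: on every TCP connection to the agent port, HAProxy reads one short ASCII line such as "up", "down", "drain", or a weight like "50%", and the reply must be terminated by a newline. A sketch of the kind of mapping the agent script might implement (the thresholds here are made-up examples, not taken from the linked post):

```shell
# Sketch: map seconds_behind_master to an agent-check reply string.
# Threshold values are illustrative only. HAProxy expects each reply
# to end with a newline (echo supplies it here).
agent_response() {
  lag="$1"                  # seconds_behind_master, or NULL if replication stopped
  if [ "$lag" = "NULL" ]; then
    echo "down"             # mark the server down
  elif [ "$lag" -lt 10 ]; then
    echo "up 100%"          # healthy: full weight
  else
    echo "up 50%"           # lagging: reduce the weight
  fi
}

agent_response 3   # prints "up 100%"
```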

Cannot remote access MySQL database of my openshift mysql cartridge [duplicate]

This question already has an answer here:
OpenShift: How to connect to postgresql from my PC
(1 answer)
Closed 6 years ago.
I've deployed a nodejs application at openshift.redhat.com with a mysql and phpmyadmin cartridge. I can access my database fine by going to mywebsite.rhcloud.com/phpmyadmin and logging in with my credentials, but when I try to add a connection to MySQL workbench on my local computer it doesn't seem to connect.
The information I'm using is from sshing into my application and typing:
echo $OPENSHIFT_MYSQL_DB_USERNAME
echo $OPENSHIFT_MYSQL_DB_PASSWORD
echo $OPENSHIFT_MYSQL_DB_HOST
echo $OPENSHIFT_MYSQL_DB_PORT
This gives my username, password, host and port which I use in MySQL workbench.
I've tried this: https://stackoverflow.com/a/27333276/2890156
I changed the bind-address from my database IP to 0.0.0.0 and added a new user from the phpMyAdmin web interface with % to allow this account to connect from any IP, but it all doesn't seem to work.
I can't figure out what I'm doing wrong or missing. Can anyone help me out?
EDIT:
Seems the bind-address I've changed has changed back to my remote database ip after restarting the mysql cartridge...
It's likely that a firewall is blocking access to your hosted database. You can verify this by using a network scan utility like nmap.
I'm going to assume the following for this example, change the respective values if they differ:
echo $OPENSHIFT_MYSQL_DB_HOST is mywebsite.rhcloud.com
echo $OPENSHIFT_MYSQL_DB_PORT is 3306
After installing nmap on your local machine, run the command:
nmap -Pn -p 3306 mywebsite.rhcloud.com
If it's blocked, then you'll get a filtered scan that looks like this:
Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-05 13:05 CDT
Nmap scan report for rhcloud.com (54.174.51.64)
Host is up.
Other addresses for rhcloud.com (not scanned): 52.2.3.89
rDNS record for 54.174.51.64: ec2-54-174-51-64.compute-1.amazonaws.com
PORT STATE SERVICE
3306/tcp filtered mysql
Nmap done: 1 IP address (1 host up) scanned in 2.10 seconds
Otherwise, you'll get an open scan like this:
Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-05 13:05 CDT
Nmap scan report for rhcloud.com (54.174.51.64)
Host is up.
Other addresses for rhcloud.com (not scanned): 52.2.3.89
rDNS record for 54.174.51.64: ec2-54-174-51-64.compute-1.amazonaws.com
PORT STATE SERVICE
3306/tcp open mysql
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
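To avoid eyeballing the output, the state column can be extracted mechanically. A small sketch (the host and port are the same assumed example values as above; a live run needs nmap installed):

```shell
# Sketch: read nmap output on stdin and print the STATE column for a port.
port_state() {
  grep "^$1/tcp" | awk '{print $2}'
}

# Live usage (requires nmap and network access):
#   nmap -Pn -p 3306 mywebsite.rhcloud.com | port_state 3306
echo "3306/tcp filtered mysql" | port_state 3306   # prints "filtered"
```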

Reverse tunnel works manually, not for replication

My MASTER mysql server is on a local network, and I have a new slave which is remote (i.e. on the internet). As MASTER does not have an accessible IP, I gathered from the docs that I should establish a reverse tunnel. I execute this:
ssh -f -N -T -R 7777:localhost:3306 user@slave.slave.com
on the MASTER. The connection seems to work - I can go to the slave and connect
with mysql to the MASTER without problem. For some reason though, replication does
not start. MASTER is already replicating to two other slaves without problems - seems the configuration is correct there.
I initiated replication on the slave as:
CHANGE MASTER TO MASTER_HOST='127.0.0.1',
MASTER_PORT=7777,
MASTER_USER='my_repl',
MASTER_PASSWORD='xxxxx',
MASTER_LOG_FILE='mysql-bin.nnnnn',
MASTER_LOG_POS=mm;
SHOW SLAVE STATUS reports MySQL trying to connect to the remote, but never succeeding:
error connecting to master 'my_repl@127.0.0.1:7777' - retry-time: 60 retries: 86400
Can anyone suggest how to diagnose this problem?
BTW: OS is Linux.
My apologies... I didn't realize I had to define a new user with 127.0.0.1 as the IP.
So, 'intranet' connections use
replication_user@machine_name
as id, the connection which comes through the reverse tunnel uses
replication_user@127.0.0.1
as id. Both have to be declared to mysql separately. The rest of the info in the original message is valid - maybe this helps someone...
Greetings,
John
PS: Forgot to mention - I debugged this remotely (both MASTER and SLAVE are remote to me) using tcpdump:
tcpdump -i lo 'tcp port 7777'
on the SLAVE side, and
tcpdump -i lo 'tcp port 3306'
on the MASTER (of course that would not be very useful when there is much traffic).
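To make the fix above concrete: the account for the tunnel connection has to be declared separately on the MASTER, because MySQL treats each user@host pair as a distinct account. A sketch of the statements involved (the password placeholder matches the one in the question; the script only prints the SQL, pipe it into `mysql -u root -p` on the MASTER to apply):

```shell
# Sketch: print the SQL that declares the replication user as seen
# through the reverse tunnel (127.0.0.1). Password is a placeholder.
repl_user_sql() {
  cat <<'SQL'
CREATE USER 'my_repl'@'127.0.0.1' IDENTIFIED BY 'xxxxx';
GRANT REPLICATION SLAVE ON *.* TO 'my_repl'@'127.0.0.1';
FLUSH PRIVILEGES;
SQL
}

repl_user_sql
```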

What's the best way to allow MySQL on one server to listen to requests from two other different servers?

I have my MySQL database server on Server 1. I want to have my Rails apps on two other servers - say A and B to be able to connect to this Server 1. What's the best way to do this?
In the my.cnf file it appears I can use the bind-address to bind to one and only one IP address. I can't specify the IP addresses of both A and B in my.cnf.
On the other hand, if I comment out skip-networking, the gates are wide open.
Is there a golden mean? What are you folks doing to allow a DB server to listen to requests from multiple app servers and still stay secure?
If MySQL is running on Linux:
I am very biased towards using iptables (a.k.a. netfilter, the Linux firewall) to control incoming traffic to various ports. It's simple to use and very robust.
iptables -A INPUT -p tcp -s server1address/32 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp -s server2address/32 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
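If the number of app servers grows, the same three-rule pattern generalizes. A sketch that prints the rules for any list of client IPs (example addresses; pipe the output to `sh` as root to apply):

```shell
# Sketch: emit ACCEPT rules for each allowed app server, then a final
# DROP for everyone else on the MySQL port.
mysql_allowlist() {
  for ip in "$@"; do
    echo "iptables -A INPUT -p tcp -s ${ip}/32 --dport 3306 -j ACCEPT"
  done
  echo "iptables -A INPUT -p tcp --dport 3306 -j DROP"
}

mysql_allowlist 10.0.0.5 10.0.0.6   # IPs of app servers A and B (examples)
```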
The bind address is the local IP address of the server, not the allowable client addresses. In your situation, you can provide the static address of your server (in place of localhost) or, if your IP might change, just comment it out.
Again, to clarify: the bind-address is the address on which the server listens for client connections (you could have multiple NICs, or multiple IP addresses, etc.). It is also possible to change the port you want mysql to listen to.
You will want to make sure you configure the root password if you haven't already:
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('yourpassword');
You would then use other means to restrict access to MySql to something like the local network (i.e. your firewall).
More info about iptables:
The iptables commands above must either be inserted in the existing iptables tables, or else you must delete the existing stuff and start from scratch with the commands above.
Insertion is not hard, but it depends a little bit on the Linux distribution you use, so I'm not sure what to recommend.
To start from scratch, you need to Flush and eXpunge the existing tables first:
iptables -F
iptables -X
Then insert the iptables firewall rules that you need to use, following the model indicated in my previous answer.
Then save the iptables rules. This is again distribution-dependent. On most Red Hat derivatives (Red Hat, Fedora, CentOS), it's enough to run:
service iptables save
Voila, your custom rules are saved. If the iptables service is enabled (check with "chkconfig --list iptables"; it must be ":on" for runlevels 3 and 5, and it's safe to set it ":on" for both in any case), then your rules will survive a reboot.
At any time, you can check the current running iptables rules. Here's a few commands that do that, with various levels of verbosity:
iptables -L
iptables -L -n
iptables -L -n -v
Without -n, it will try to lookup the domain names and display them instead of IP addresses - this may not be desirable if DNS is not working 100% perfect.
So that's why I almost always use -n.
-v means "verbose", a bit harder to read but it gives more information.
NOTE: If you start from scratch, other services running on that machine may not be protected by iptables anymore. Spend some time and figure out how to insert the MySQL rules in the existing tables. It's better for your system's security.
In addition to getting the bind address right, you'll need to open the correct port, create or configure the users, and handle some other details. This explains it pretty clearly.
A DB server will listen to an indefinite number of clients.
Each client Rails app identifies the DB server.
The DB server waits patiently for connections. It has no idea how many clients there are or where the connections come from.
Edit
"how do you securely configure the DB wrt what servers to accept requests from?"
That's what networks, firewalls and routers are for.
That's why the database requires credentials from the Rail apps.