I'm trying to scan a list of IP addresses using the command below:
nmap -v -n -sP -iL <IP-list-file.txt>
I'm looking for a retry option in nmap to retransmit failed probes. The above command sends a single ICMP probe to each IP/host. I tried --max-retries, with no result. So I'm looking for an option similar to ping -c 2 IP, but for nmap.
I even tried "-A -T5", with no result.
Note: my purpose is only to check whether the host/IP is alive or dead, that's it. Preferably with the nmap utility.
Nmap uses a lot of different methods for host discovery. The options that you used will do one of two things depending on whether you have root privileges:
If you do run Nmap as root, it will send four probes: ICMP Echo Request, TCP SYN to port 443, TCP ACK to port 80, and ICMP Timestamp Request. Only if all four fail to get a response will it mark the target as down.
If you do not run Nmap as root, it will attempt to make a TCP connection to port 80 and port 443. If both of these time out, it will mark the target as down.
So this method is already more robust than simply using /bin/ping. Nmap also retries probes a certain number of times depending on how reliable the network seems. For host discovery, this starts out at 2 retransmits per probe. There doesn't really seem to be a way to increase this without Nmap detecting network problems, so the best way to increase confidence in a "down" determination is to add more host discovery probes using the various -P* options.
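For example, a sketch of a more thorough ping scan along those lines (the extra probe types and port numbers here are illustrative choices, not required values; -sn is the modern spelling of -sP):
nmap -v -n -sn -PE -PP -PS22,80,443 -PA80,443 -PU53 -iL IP-list-file.txt
Each additional -P* option adds another probe type, so a host only has to answer one of them to be marked as up.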
The -A and -T5 options will not help at all. -A turns on extra features, none of which will run if the target is considered down, and -T5 simply tells Nmap to assume a very fast and reliable network. It will never retransmit more than 2 times, and will time out probes very quickly. This is almost certainly the opposite of what you want.
I'm checking my comprehension on a homework question for my class. I looked over the man page for ping, and the -t flag didn't have a lot of info on it, so I had to infer quite a bit. If someone could verify that my understanding is correct, and perhaps point me to a resource that explains the -t flag better than the man pages, I'd appreciate it.
The question:
Write a bash command line that will verify that no more than two network devices are used to pass messages from the syccuxas01.pcc.edu server to the www.pcc.edu server. Use ping with the -t option and see the TTL Details section of man ping.
ping -t 2 www.pcc.edu
My understanding of what that command means, and my thoughts: the question is poorly worded, as "network devices" would be more accurately described as hops. Hops are the steps a packet takes to get to the server you want to send it to, much like if you're going on a car trip, you'll drive through other towns along the way to your destination.
So if we ping with the TTL (time to live, which is very dramatic sounding!), aka -t, set to 2, we're able to get results, which means there are no more than 2 hops to www.pcc.edu from the server we access through PuTTY. TTL, from what I understand, is the number of hops the ping packet is allowed to take. So ping -t 1 www.pcc.edu will fail, and ping -t 3 www.pcc.edu will succeed, but it allows up to 3 hops, which is not what we want to solve the problem.
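One way to check this understanding empirically (a sketch assuming Linux iputils ping, where -t sets the TTL; on BSD/macOS the same flag means something else):
ping -c 1 -t 1 www.pcc.edu   # should fail with "Time to live exceeded" from the first hop
ping -c 1 -t 2 www.pcc.edu   # should get a reply if the target really is within 2 hops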
We have a VPN tunnel with Openswan between two AWS regions and our colo facility (we used AWS's guide: http://aws.amazon.com/articles/5472675506466066). Regular usage works OK (ssh, etc.), but we are having some MySQL issues over the tunnel between all areas. Whether using the mysql command-line client on a Linux server or connecting with MySQL Connector/J, it basically stalls: it seems to open the connection, but then gets stuck. It doesn't get denied or anything, it just hangs there.
After initial research I thought this was an MTU issue, but I've messed with that a lot and had no luck.
Connecting to the server works fine, and we can choose a database to use and such, but with the Java connector it appears that the client isn't receiving any network traffic after the query is made.
When running a SELECT in the MySQL client on Linux, we can get at most 2 or 3 rows before it goes dead.
With this said, I also have a separate Openswan VPN on the AWS side for client (Mac and iOS) VPN connections. Everything works fantastically through the client VPN, and it seems more stable in general. The main difference I've noticed is that the static connection uses "tunnel" as its type while the client one uses "transport"; but when I switch the static tunnel connection to transport, it reports something like 30 open connections and doesn't work.
I'm very new to Openswan, so I'm hoping someone can point me in the right direction to get the static tunnel working as well as the client VPN does.
As always, here are my config files:
ipsec.conf for BOTH static tunnel servers:
# basic configuration
config setup
    # Debug-logging controls: "none" for (almost) none, "all" for lots.
    # klipsdebug=none
    # plutodebug="control parsing"
    # For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
    protostack=netkey
    nat_traversal=yes
    virtual_private=
    oe=off
    # Enable this if you see "failed to find any available worker"
    # nhelpers=0
# You may put your configuration (.conf) files in "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf
VPC1-to-colo tunnel conf:
conn vpc1-to-DT
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=54.213.24.xxx
    leftnexthop=%defaultroute
    leftsubnet=10.1.4.0/24
    right=72.26.103.xxx
    rightsubnet=10.1.2.0/23
    pfs=yes
    auto=start
colo-to-VPC1 tunnel conf:
conn DT-to-vpc1
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=72.26.103.xxx
    leftnexthop=%defaultroute
    leftsubnet=10.1.2.0/23
    right=54.213.24.xxx
    rightsubnet=10.1.4.0/24
    pfs=yes
    auto=start
Client VPN ipsec.conf:
# basic configuration
config setup
    interfaces=%defaultroute
    klipsdebug=none
    nat_traversal=yes
    nhelpers=0
    oe=off
    plutodebug=none
    plutostderrlog=/var/log/pluto.log
    protostack=netkey
    virtual_private=%v4:10.1.4.0/24
conn L2TP-PSK
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    rekey=no
    type=transport
    forceencaps=yes
    right=%any
    rightsubnet=vhost:%any,%priv
    rightprotoport=17/0
    # Using the magic port of "0" means "any one single port". This is
    # a workaround required for Apple OSX clients that use a randomly
    # high port, but propose "0" instead of their port.
    left=%defaultroute
    leftprotoport=17/1701
    # Apple iOS doesn't send delete notify, so we need dead peer detection
    # to detect vanishing clients
    dpddelay=10
    dpdtimeout=90
    dpdaction=clear
Found the solution. I needed to add the following iptables rule on both ends:
iptables -t mangle -I POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
This, along with an MTU of 1400, and we're looking very solid.
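For reference, this is roughly how the MTU side of it was set (a sketch; eth0 is an assumption, use whichever interface carries the tunnel traffic):
sudo ip link set dev eth0 mtu 1400
Note that this does not persist across reboots on its own; it needs to go into your distribution's network configuration to stick.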
We had the same issue with a server connecting from the EU region to an RDS instance in the US. This appears to be a known issue with RDS instances not responding to the ICMP messages needed for path MTU discovery. As a workaround, you'll need to configure a smaller MTU on the instance that is performing the query.
On the server that is making the connection to the RDS instance (not the VPN tunnel instances), run the following command to set an MTU of 1422 (which worked for us):
sudo ifconfig eth0 mtu 1422
I'm setting up a PrestaShop installation on a development server, which is a GCE instance, with Cloud SQL as the database server. Everything works just fine except one thing: whenever there is a long period of inactivity on the site, the first page load after that always gives me this error:
Link to database cannot be established: SQLSTATE[HY000] [2003]
If I refresh the page, the error is gone and never appears again until I stop using the site for an hour or so. It almost looks like the database instance is going into sleep mode or something like that.
The reason I mentioned PrestaShop is that I never get this error when using Adminer or connecting to the database from the mysql console client.
With the per-use billing model, instances are spun down after a 15-minute timeout to save you money. They then take a few seconds to spin up when next accessed. It may be that PrestaShop is timing out on these first requests (though I have no experience with that application).
Try changing your instance to package billing, which has a 12-hour timeout, to see if this helps:
https://developers.google.com/cloud-sql/faq#how_usage_calculated
According to GCE documentation,
Once a connection has been established with an instance, traffic is permitted in both directions over that connection, until the connection times out after 10 minutes of inactivity
I suspect that might be the cause. To get around it, you can try lowering the TCP keepalive time.
Refer here: https://cloud.google.com/sql/docs/compute-engine-access
To keep long-lived unused connections alive, you can set the TCP keepalive. The following commands set the TCP keepalive value to one minute and make the configuration permanent across instance reboots.
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
When I was checking the MySQL load time on my site, I got results showing connections in TIME_WAIT, even though I close the connection on every page. Sometimes the site doesn't load, saying there are too many connections. What could be the solution to this problem?
Thanks in advance for any replies or suggestions.
If a client connects to a MySQL server, it usually opens a local port, for example:
localhost:12345 -> mysqlserver:3306
If the client closes the connection, the client side goes into TIME_WAIT. Because packets can be delayed in transit, a packet might still arrive late on that temporary port. A connection in TIME_WAIT simply discards such packets. Without TIME_WAIT, the local port might be reused for another connection, which could then receive packets intended for the former one.
On a high-traffic web application that opens a MySQL connection per request, a high number of TIME_WAIT connections is to be expected. There is nothing wrong with that.
Problems can occur if your local port range is too small, so that you cannot open outgoing connections any more. The usual timeout is 60 seconds, so a problem can already occur at more than 400 requests per second on small ranges (for example, roughly 28,000 usable ports divided by 60 seconds is about 466 new connections per second).
Check:
To check the number of sockets in TIME_WAIT, you can use the following command:
$ cat /proc/net/sockstat
sockets: used 341
TCP: inuse 12 orphan 0 tw 33365 alloc 23 mem 16
UDP: inuse 9 mem 2
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
The value after "tw", in this case 33365, shows the number of sockets in TIME_WAIT.
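On systems with iproute2 you can get the same number more directly with ss (a sketch; the tail skips ss's header line):
ss -tan state time-wait | tail -n +2 | wc -l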
Solutions:
a. TIME_WAIT tuning (Linux-based OS examples):
Reduce the timeout:
# small values are OK if your MySQL server is in the same local network
echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
# (note: despite the common advice, on Linux this tunes the FIN_WAIT_2
# timeout; the TIME_WAIT length itself is fixed at 60 seconds in the kernel)
Increase the port range for local ports:
# check what your highest listening ports are before setting this
echo 15000 65000 > /proc/sys/net/ipv4/ip_local_port_range
The settings /proc/sys/net/ipv4/tcp_tw_recycle and /proc/sys/net/ipv4/tcp_tw_reuse might look interesting, too, but we experienced strange side effects with them, so they are better avoided.
b. Persistent Connections
Some programming languages and libraries support persistent connections. Another solution might be to use a locally installed proxy like ProxySQL. Either approach reduces the number of connections being opened and closed.
If you are getting a lot of TIME_WAIT connections on the MySQL server, it means that the MySQL server is closing the connections. The most likely cause in this case is that a host, or several hosts, ended up on the block list. You can clear this by running:
mysqladmin flush-hosts
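The threshold that triggers this blocking is the max_connect_errors server variable. A quick sketch for inspecting it, and raising it if the default is too tight for your environment (it is a dynamic variable, so no restart is needed):
mysql -e "SHOW VARIABLES LIKE 'max_connect_errors';"
mysql -e "SET GLOBAL max_connect_errors = 1000;"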
To get a list of the number of connections you have per IP, run:
netstat -nat | awk '{print $5}' | cut -d ":" -f1 | sort | uniq -c | sort -n
You can also confirm this is happening by going to one of the clients that is having trouble connecting and telnetting to port 3306. It will show a message something like:
telnet mysqlserver 3306
Trying 192.168.1.102...
Connected to mysqlserver.
Escape character is '^]'.
sHost 'clienthost.local' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'Connection closed by foreign host.
As @Zimbabao suggested in the comments, debug your code for any potential errors that may halt execution before the MySQL connection is closed.
If nothing works, check your my.cnf for a system variable called wait_timeout. If it's not present, add it to the [mysqld] section and restart your MySQL server.
[mysqld]
wait_timeout = 3600
It's the number of seconds the server waits for activity on a noninteractive connection before closing it. Further information can be found at http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
Tune the figure of 3600 (1 hour) to your requirements.
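To verify the value the running server actually uses, a quick check from the mysql client:
mysql -e "SHOW VARIABLES LIKE 'wait_timeout';"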
HTH
I'm curious whether it is possible to map a UNIX socket onto an INET socket. The situation is simply that I'd like to connect to a MySQL server. Unfortunately, it has INET sockets disabled, and therefore I can only connect with UNIX sockets. The tools I'm using/writing have to connect to an INET socket, so I'm trying to see if I can map one onto the other.
It took a fair amount of searching, but I did find socat, which purportedly does what I'm looking for. I was wondering if anyone has suggestions on how to accomplish this. The command line I've been using (with partial success) is:
socat -v UNIX-CONNECT:/var/lib/mysql/mysql.sock TCP-LISTEN:6666,reuseaddr
Now I can make connections and talk to the server. Unfortunately, any attempt at making multiple connections fails: I need to use the fork option, but that option seems to render the connections nonfunctional.
I know I can tackle the issue with Perl (my preferred language), but I'd rather avoid writing the entire implementation myself. I'm familiar with the IO::Socket libraries; I'm simply hoping someone has experience doing this sort of thing. Open to suggestions/ideas.
Thanks.
Reverse the order of your arguments to socat, and it works.
socat -v tcp-l:6666,reuseaddr,fork unix:/var/lib/mysql/mysql.sock
This instructs socat to:
1. Listen on TCP port 6666 (with SO_REUSEADDR)
2. Wait to accept a connection
3. When a connection is made, fork. In the child, continue with the steps below; in the parent, go to step 2.
4. Open a UNIX domain connection to the /var/lib/mysql/mysql.sock socket.
5. Transfer data between the two endpoints, then exit.
Writing it the other way around
socat -v unix:/var/lib/mysql/mysql.sock tcp-l:6666,reuseaddr,fork
doesn't work, because this instructs socat to:
1. Open a UNIX domain connection to the /var/lib/mysql/mysql.sock socket.
2. Listen on TCP port 6666 (with SO_REUSEADDR)
3. Wait to accept a connection
4. When a connection is made, spawn a worker child to transfer data between the two addresses.
The parent continues to accept connections on the second address, but no longer has the first address available: it was given to the first child. So nothing useful can be done from this point on.
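Once the listener is up, a quick sanity check of the relay with the mysql client (the user name is a placeholder; --protocol=TCP forces a TCP connection so the client doesn't silently fall back to the local socket):
mysql --host=127.0.0.1 --port=6666 --protocol=TCP --user=someuser -p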
Yes, you can do this in Perl.
Look at perlipc, IO::Select, IO::Socket and Beej's Guide to Network Programming.
You might want to consider doing it in POE - it's an asynchronous library for dealing with events, so it looks great for the task.
It is not 100% relevant, but I used POE to write a proxy between a stateless protocol (HTTP) and a stateful protocol (a telnet session, more specifically a MUD session), and it was rather simple. You can check the code here: http://www.depesz.com/index.php/2009/04/08/learning-poe-http-2-mud-proxy/
In the comments somebody also suggested Coro/AnyEvent - I haven't played with them yet, but you might want to check them out.