count packets received from running tcpdump

I am using tcpdump to check whether any packets are being received. I listen continuously on the port, and from a bash script I want to keep a counter that is incremented each time a packet arrives. Any thoughts, please?
I get packets, but I don't get a count until I Ctrl+C out of the continuous command.

Why wouldn't you use Wireshark and filter there? Honest question, if this is about debugging only.
If, however, this is about counting packets in order to take some action in your own code using tcpdump, you'd have to either:
parse the output of tcpdump, counting one packet per line that starts with a timestamp (continuation lines are not prefixed that way), or
create a capture file after n packets using the -c switch and await the termination of tcpdump.
For further reference, see man tcpdump.
In any case, I'd strongly advocate for Wireshark if you just need this for debugging.
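For the line-parsing approach, the key detail is tcpdump's -l flag, which line-buffers stdout so each packet line arrives immediately instead of only when tcpdump exits at Ctrl+C. A minimal bash sketch (the interface eth0 and the filter port 5000 are placeholders for your setup):

```shell
# Count packets as they arrive on stdin, one line per packet.
count_packets() {
    count=0
    while IFS= read -r _line; do
        count=$((count + 1))
        printf 'packets so far: %d\n' "$count"
    done
}

# Real usage (requires root; eth0 and port 5000 are placeholders):
#   sudo tcpdump -l -n -i eth0 port 5000 2>/dev/null | count_packets

# Simulated input standing in for three tcpdump output lines:
printf 'pkt1\npkt2\npkt3\n' | count_packets
# prints "packets so far: 1" through "packets so far: 3"
```

Without -l, tcpdump block-buffers its output when writing to a pipe, which is exactly the "no count until Ctrl+C" symptom described in the question.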

Related

postfix: timing of client responses in a milter and in after-queue processing?

I'm currently using postfix-2.11.3, and I am doing a lot of message processing through a milter. This processing takes place before the client is notified that the message is accepted, and it sometimes involves enough work that it delays the client's receipt of the initial SMTP 250 2.0.0 Ok: queued as xxxxxxxxxxx message.
During large email blasts to my server, this milter processing can cause a backlog, and in some cases, the client connections time out while waiting for that initial 250 ... message.
My question is this: if I rewrite my milter as a postfix after-queue filter with no before-queue processing, will clients indeed get the initial 250 messages right away, with perhaps subsequent SMTP messages coming later? Or will the 250 message still be deferred until after postfix completes the after-queue filtering?
And is it possible for an initial 250 message to be received by the client with a subsequent 4xx or 5xx message received and processed later by that same client, in case the after-queue filter decides to subsequently reject the message?
I know I could test this by writing an after-queue filter. However, my email server is busy, and I don't have a test server available, and so I'd like to know in advance whether an after-queue filter can behave in this manner.
Thank you for any wisdom you could share about this.
I managed to set up a postfix instance on a test machine, and I was able to install a dummy after-queue filter. This allowed me to figure out the answer to my question. It turns out that postfix indeed sends the 250 2.0.0 Ok: queued as xxxxxxxxxxx message before the after-queue filter completes.
This means that I can indeed move my slower milter processing to the after-queue filter in order to give senders a quicker SMTP response.

Detecting hanging processes in Perl/MySQL (FreeBSD)

I have a Perl script running on a FreeBSD/Apache system, which makes some simple queries to a MySQL database via DBI. The server is fairly active (150k pages a day) and every once in a while (as much as once a minute) something is causing a process to hang. I've suspected a file lock might be holding up a read, or maybe it's a SQL call, but I have not been able to figure out how to get information on the hanging process.
Per Practical mod_perl, it sounds like the way to identify the operation giving me the headache is a system trace, a Perl trace, or the interactive debugger. I gather the system trace tool is ktrace on FreeBSD, but when I attach it to one of the hanging processes (found in top), the only output after the process is killed is:
50904 perl5.8.9 PSIG SIGTERM SIG_DFL
That isn't very helpful to me. Can anyone suggest a more meaningful approach? I am not terribly advanced in Unix admin, so your patience if I sound stupid is greatly appreciated... :o)
If I understood correctly, your Perl process is hanging while querying MySQL, which is itself still operational. The MySQL server has a built-in troubleshooting feature for this, the log_slow_queries option. Putting the following lines in your my.cnf enables it:
[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 10
After that, restart or reload the MySQL daemon. Let the server run for a while to collect the stats and analyse what's going on:
mysqldumpslow -s at /var/log/mysql/mysql-slow.log | less
On one server of mine, the top record (-s at orders by average query time, BTW) is:
Count: 286 Time=101.26s (28960s) Lock=14.74s (4214s) Rows=0.0 (0), iwatcher[iwatcher]#localhost
INSERT INTO `wp_posts` (`post_author`,`post_date`,`post_date_gmt`,`post_content`,`post_content_filtered`,`post_title`,`post_excerpt`,`post_status`,`post_type`,`comment_status`,`ping_status`,`post_password`,`post_name`,`to_ping`,`pinged`,`post_modified`,`post_modified_gmt`,`post_parent`,`menu_order`,`guid`) VALUES ('S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S')
FWIW, it is a WordPress with over 30K posts.
Ktracing only gives you system calls, signals, I/O and namei processing. And it generates a lot of data very quickly, so it might not be ideal for fishing out trouble spots.
If you can see the standard output of your script, put some strategically placed print statements in your code around suspected trouble spots. Running the program should then show you where the hang occurs:
$| = 1;  # unbuffer stdout so the prints appear immediately
print "Before query X\n";
$dbh->do($statement);
print "After query X\n";
If you cannot see the standard output, either use e.g. the Sys::Syslog Perl module, or call FreeBSD's logger(1) program to write the debugging info to a logfile. It is probably easiest to encapsulate that into a debug() function and use that instead of print statements.
Edit: If you don't want a lot of logging on disk, write the logging info to a socket (Sys::Syslog supports that with setlogsock()), and write another script to read from that socket and dump the debug text to a terminal, prefixed with the time the data was received. Once the program hangs, you can see what it was doing.
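The reader side of that idea can be sketched in shell: prefix every incoming line with the time it arrived. The port number and the use of nc(1) as a quick UDP listener are assumptions for illustration (nc flags vary between variants):

```shell
# Prefix each line read on stdin with its arrival time (HH:MM:SS).
stamp() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$line"
    done
}

# Hypothetical usage: listen on UDP port 5140 (arbitrary choice) and
# timestamp whatever the Perl script logs to that socket:
#   nc -ulk 5140 | stamp
```

Once the Perl process hangs, the last timestamped line on the terminal tells you what it was doing and when.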

Ejabberd Message Acknowledgment from Server

I have set up an ejabberd server for my little mobile chat app, and have implemented XEP-0184 for message delivery status as well.
But I am having an issue: how can my app know whether a message has indeed reached the ejabberd server?
My scenario: I am walking into an area with a weak signal, barely strong enough to keep the connection alive, with frequent timeouts. I try to send a message out; how can I confirm that the message reaches the server?
Hope I am clear enough on my question. Thanks in advance!
I wrote an ejabberd mod for this which you can find at:
https://github.com/kmtsar/ejabberd-mods
A possible approach would be to implement XEP-0198 Stream Management. Stream management is a standard feature in recent ejabberd versions.
With that in place, a client can ask the server to keep a count of the received stanzas and, when interested, ask the server to confirm the number received.
The client can then tell whether one or more stanzas were received.
This can be done for every single stanza: the client requests an ack after each sent stanza and expects the server's answer.
In theory you could implement just the "Basic Ack Scenarios" - no need for the full XEP (which includes stream resumption).
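For reference, the Basic Ack exchange in XEP-0198 is tiny on the wire: the client sends a request element and the server answers with its count of handled stanzas. A sketch (the `h` value here is illustrative):

```xml
<!-- client: "how many stanzas have you handled?" -->
<r xmlns='urn:xmpp:sm:3'/>
<!-- server: "I have handled 42 stanzas on this stream" -->
<a xmlns='urn:xmpp:sm:3' h='42'/>
```

If the counter has advanced past the message you just sent, it reached the server even if the connection then drops.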

TCP underlying transmission mechanism / network programming

I have searched but I could not find the following:
Process1 transmits data over TCP socket. The code that does the transmission is (pseudocode)
//Section 1
write(sock, data, len); // any language, just write the data
//Section 2
Process1 after the write could continue in section 2, but this does not mean that data has been transmitted. TCP could have buffered the data for later transmission.
Now Process2 is running concurrently with Process1. Both processes try to send data concurrently. I.e. both will have code as above.
Question 1: If both processes write data to a TCP socket simultaneously, how will the data eventually be transmitted over the wire by IP/the OS?
a) All data of Process1 followed by all data of Process2 (or the reverse), i.e. in some FIFO order?
or
b) Data from Process1 and Process2 would be multiplexed by the IP layer (or the OS) over the wire and sent "concurrently"?
Question 2: If, for example, I added a delay, would I be sure that data from the two processes was sent serially over the wire (e.g. all data of Process1 followed by all data of Process2)?
UPDATE:
Process1 and Process2 are not parent and child. Also, they are working on different sockets.
Thanks
Hmm, are you talking about a single socket shared by two processes (like parent and child)? In that case the data will be buffered in the order of the output system calls (write(2)s).
If, which is more likely, you are talking about two unrelated TCP sockets in two processes, then there is no guarantee of the order in which the data will hit the wire. The reason is that the sockets might be connected to remote points that consume data at different speeds; TCP flow control then makes sure that a fast sender does not overwhelm a slow receiver.
Answer 1: the order is unspecified, at least on the sockets-supporting OS's that I've seen. Processes 1 & 2 should be designed to cooperate, e.g. by sharing a lock/mutex on the socket.
Answer 2: not if you mean just a fixed-time delay. Instead, have process 1 give a go-ahead signal to process 2, indicating that process 1 has done sending. Use pipes, local sockets, signals, shared memory or whatever your operating system provides in terms of interprocess communication. Only send the signal after "flushing" the socket (which isn't actually flushing).
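The go-ahead idea can be sketched with two shell processes synchronizing over a FIFO; the printf lines stand in for the real socket writes, and the names here are made up for illustration:

```shell
# Process 1 sends its data and then signals; process 2 waits for the
# signal before sending. The FIFO write/read pair enforces the ordering.
go_ahead_demo() {
    fifo=$(mktemp -u) && mkfifo "$fifo"
    ( printf 'data-from-p1\n'        # p1: its "socket write"
      echo go >"$fifo" ) &           # p1: signal that it is done
    ( IFS= read -r _ <"$fifo"        # p2: block until p1 signals
      printf 'data-from-p2\n' ) &    # p2: now safe to send
    wait
    rm -f "$fifo"
}

go_ahead_demo    # always prints p1's line first, then p2's line
```

A fixed sleep would give no such guarantee; the explicit signal does, which is the point of Answer 2.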
A TCP socket is identified by a tuple that usually is at least (source IP, source port, destination IP, destination port). Different sockets have different identifying tuples.
Now, if you are using the same socket in two processes, it depends on the order of the write(2) calls. But you should take into account that write(2) may not consume all the data you've passed to it: if the send buffer is full, this causes either a short write (write()'ing less than asked for and returning the number of bytes written as the return value), a write() that blocks/sleeps until there is some buffer space, or a write() that fails with EAGAIN/EWOULDBLOCK (for non-blocking sockets).
write() is atomic; ditto send() and friends. Whichever one executed first would transmit all its data while the other one blocks.
The delay is unnecessary, see (1).
EDIT: but if, as I now see, you are talking about different sockets per process, your question seems pointless. There is no way for an application to know how TCP used the network, so what does it matter? TCP will transmit in packets of up to an MTU each, in whatever order it sees fit.

More TCP and POSIX sockets listen() and accept() semantics

Situation: The server calls listen() (but not accept()!). The client sends a SYN to the server. The server gets the SYN, and then sends a SYN/ACK back to the client. However, the client now hangs up / dies, so it never sends an ACK back to the server. The connection is in the SYN_SENT state.
Now another client sends a SYN, gets a SYN/ACK back from the server, and sends back an ACK. This connection is now in the ESTABLISHED state.
Now the server finally calls accept(). What happens? Does accept() block on the first, faulty connection, until some kind of timeout occurs? Does it check the queue for any ESTABLISHED connections and return those, first?
Well, what you're describing here is a typical syn-flood attack ( http://en.wikipedia.org/wiki/SYN_flood ) when executed more than once.
Looking for example at http://lkml.indiana.edu/hypermail/linux/kernel/0307.0/1258.html, there are two separate queues, one SYN queue and one established queue. Apparently the first connection will remain in the SYN queue (since it is in the SYN_RCVD state), while the second connection will be in the established queue, where accept() will get it. A netstat should still show the first in the SYN_RCVD state.
Note: see also my comment, it is the client who will be in the SYN_SENT state, the server (which we are discussing) will be in the SYN_RCVD state.
You should note that in some implementations, the half open connection (the one in the SYN_RCVD state), may not even be recorded on the server. Implementations may use SYN cookies, in which they encode all of the information they need to complete establishing the connection into the sequence number of the SYN+ACK packet. When the ACK packet is returned, with the sequence number incremented, they can decrement it and get the information back. This can help protect against SYN floods by not allocating any resources on the server for these half-open connections; thus no matter how many extra SYN packets a client sends, the server will not run out of resources.
Note that SCTP implements a 4-way handshake, with cookies built into the protocol, to protect against SYN floods while allowing more information to be stored in the cookie, and thus not having to limit the protocol features supported because the size of the cookie is too small (in TCP, you only get 32 bits of sequence number to store all of the information).
So to answer your question, the user-space accept() will only ever see fully established connections, and will have no notion of the half-open connections that are purely an implementation detail of the TCP stack.
You have to remember that listen(), accept(), et al. are not under-the-hood protocol debugging tools. From the accept man page: "accept - accept a connection on a socket". Incomplete connections aren't reported, nor should they be. The application doesn't need to worry about setup and teardown of sockets, or retransmissions, or fragment reassembly, or ...
If you are writing a network application, covering the things that you should be concerned about is more than enough work. If you have a working application but are trying to figure out problems then use a nice network debugging tool, tools for inspecting the state of your OS, etc. Do NOT try to put this in your applications.
If you're trying to write a debugging tool, then you can't accomplish what you want by using application level TCP/IP calls. You'll need to drop down at least one level.