how to save a new file when tcpdump file size reaches 10MB - tcpdump

I want to capture my network traffic using tcpdump, and once the capture file reaches 10 MB I want tcpdump to start a new file. How can I set this up with tcpdump? Please be kind enough to help me. Thank you.

tcpdump -W 5 -C 10 -w capfile
What the above command does is create a rotating buffer of 5 files (-W 5) and tcpdump switches to another file once the current file reaches 10,000,000 bytes, about 10MB (-C works in units of 1,000,000 bytes, so -C 10 = 10,000,000 bytes). The prefix of the files will be capfile (-w capfile), and a one-digit integer will be appended to each.
So your directory will have 5 files rotating with constant capture data:
capfile0
capfile1
capfile2
capfile3
capfile4
Each will be approximately 10,000,000 bytes, but will probably be slightly larger (depending on the space remaining and the size of the last packet received). If you want a larger rolling data set, set -W to a higher count (e.g. -W 50).
It is also very important that the tcpdump user and group have write access to the location where you are storing these files, even if you are running tcpdump as root (on many systems tcpdump drops its root privileges before writing capture files).
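For example, a minimal sketch of preparing a capture directory that the tcpdump user can write to (the /var/captures path is just a placeholder; adjust it to your setup):
# create a directory owned by the tcpdump user/group, then capture into it
mkdir -p /var/captures
chown tcpdump:tcpdump /var/captures
tcpdump -W 5 -C 10 -w /var/captures/capfile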

Worth mentioning that omitting -W disables rotation, so new files keep being added instead of old ones being overwritten. This is useful when you are investigating a network problem, need to capture traffic for a long time, and want all the files to be kept.
% tcpdump -i eth0 -w file.pcap -G 3600 -C 100 -K -n &
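If you prefer time-stamped file names instead of numeric suffixes, -w accepts strftime(3) patterns when combined with -G; a sketch (interface and name pattern are just examples):
# rotate every hour into files named trace-YYYYMMDD-HHMMSS.pcap
tcpdump -i eth0 -G 3600 -w 'trace-%Y%m%d-%H%M%S.pcap' -n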

Since this is a very useful thread, it might be worthwhile to mention that the directory where you run this tcpdump command must be writable by the capturing user: either world-writable, or owned by your user or by a group your user belongs to. Otherwise you will see an error when tcpdump tries to write the files.

Related

How many bytes does ffprobe need?

I would like to use ffprobe to look at the information of media files. However, the files are not on my local disk, and I have to read them from remote storage. I can read the first n bytes, write them to a temporary file, and use ffprobe to read the information. I would like to know the smallest such n.
I tested with a few files, and 512KB worked with all the files that I tested. However, I am not sure if that will work for all media files.
ffprobe (and ffmpeg) aims to parse two things when opening an input:
the input container's header
payload data from each stream, enough to ascertain salient stream parameters like codec attributes and frame rate.
The header size is generally proportional to the number of packets in the file, i.e. a 3-hour MP4 file will have a larger header than a 3-minute one.
(if the header is at the end of the file, then access to the first 512 kB won't help)
From each stream, ffmpeg will decode packets till its stream attributes have been populated. The amount of bytes consumed will depend on stream bitrate, and how many streams are present.
So, the strict response to 'I am not sure if that will work for all media files' is it won't.
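A quick way to check whether a given n is enough for a particular file is to cut off its head locally and probe just that; a sketch, where 524288 bytes (512 KiB) is simply the value from your own tests, not a guaranteed minimum, and the file names are placeholders:
head -c 524288 input.mp4 > head_only.mp4
ffprobe -hide_banner head_only.mp4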

How to resolve tcpdump dropped packets?

I am using tcpdump to capture network packets and am running into an issue where I start dropping packets. I ran an application which exchanges packets rapidly over the network, resulting in high network bandwidth.
>> tcpdump -i eno1 -s 64 -B 919400
126716 packets captured
2821976 packets received by filter
167770 packets dropped by kernel
Since I am only interested in the protocol-related part of the TCP packets, I want to collect TCP packets without the data/payload. I hope this strategy can also help in capturing more packets before they get dropped. It appears that I can only increase the buffer size (-B argument) up to a certain limit. Even with a higher limit I am dropping more packets than I capture.
Can you help me understand the above messages? My questions are:
What are "packets captured"?
What are "packets received by filter"?
What are "packets dropped by kernel"?
How can I capture all packets at high bandwidth without dropping any? My test application runs for 3 minutes and exchanges packets at a very high rate. I am only interested in the protocol-related information, not in the actual data/payload being sent.
From Guy Harris himself:
the "packets captured" number is a number that's incremented every time tcpdump sees a packet, so it counts packets that tcpdump reads from libpcap and thus that libpcap reads from BPF and supplies to tcpdump.
The "packets received by filter" number is the "ps_recv" number from a call to pcap_stats(); with BPF, that's the bs_recv number from the BIOCGSTATS ioctl. That count includes all packets that were handed to BPF; those packets might still be in a buffer that hasn't yet been read by libpcap (and thus not handed to tcpdump), or might be in a buffer that's been read by libpcap but not yet handed to tcpdump, so it can count packets that aren't reported as "captured".
And from the tcpdump man page:
packets ``dropped by kernel'' (this is the number of packets that were dropped, due to a lack of buffer space, by the packet capture mechanism in the OS on which tcpdump is running, if the OS reports that information to applications; if not, it will be reported as 0).
To attempt to improve capture performance, here are a few things to try (a combined command sketch follows after the lists):
Don't capture in promiscuous mode if you don't need to. That will cut down on the amount of traffic that the kernel has to process. Do this by using the -p option.
Since you're only interested in TCP traffic, apply a capture expression that limits the traffic to TCP only. Do this by appending "tcp" to your command.
Try writing the packets to a file (or files to limit size) rather than displaying packets to the screen. Do this with the -w file option or look into the -C file_size and -G rotate_seconds options if you want to limit file sizes.
You could try to improve tcpdump's scheduling priority via nice.
From Wireshark's Performance wiki page:
stop other programs running on that machine, to remove system load
buy a bigger, faster machine :)
increase the buffer size (which you're already doing)
set a snap length (which you're already doing)
write capture files to a RAM disk
Try using PF_RING.
You could also try using dumpcap instead of tcpdump, although I would be surprised if the performance was drastically different.
You could try capturing with an external, dedicated device using a TAP or Switch+SPAN port. See Wireshark's Ethernet Capture Setup wiki page for ideas.
Another promising possibility: Capturing Packets in Linux at a Speed of Millions of Packets per Second without Using Third Party Libraries.
See also Andrew Brown's Sharkfest '14 Maximizing Packet Capture Performance document for still more ideas.
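Putting several of those suggestions together, a sketch of a reduced-load capture (snaplen, file sizes and output path are just examples; eno1 is the interface from your question):
# no promiscuous mode, headers only, TCP only, written to rotating files instead of the screen
tcpdump -i eno1 -p -s 96 -w /tmp/tcp_trace -C 100 -W 10 tcp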
Good luck!
I would try actually lowering the value of your -B option.
The unit is 1 KiB (1024 bytes), thus the buffer size you specified (919400) is almost 1 gigabyte.
I suppose you would get better results by using a value closer to your CPU cache size, e.g. -B 16384.
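For example (a sketch only, keeping the snaplen and interface from your command):
tcpdump -i eno1 -s 64 -B 16384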

When is memory scratch space 15 used in BPF (Berkeley Packet Filter) or tcpdump?

My question is regarding the tcpdump command.
The command "tcpdump -i eth1 -d" lists the assembly instructions involved in the filter.
I am curious to see that no instruction is accessing M[15] (memory slot 15).
Can someone let me know whether there are any filters for which this memory slot is used?
What is it reserved for, and how is it used?
Memory slots aren't assigned to specific purposes; they're allocated dynamically by pcap_compile() as needed.
For most filters on most network types, pcap_compile()'s optimizer will remove all memory slot uses, or, at least, reduce them so that the code doesn't need 16 memory slots.
For 802.11 (native 802.11 that you see in monitor mode, not the "fake Ethernet" you get when not in monitor mode), the optimizer currently isn't used (it's designed around assumptions that don't apply to the more complicated decision making required to handle 802.11, and fixing it is a big project), so you'll see more use of memory locations. However, you'll probably need a very complicated filter to use M[15] - or M[14] or M[13] or most of the lower-address memory locations.
(You can also run tcpdump with the -O option to disable the optimizer.)
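For example, you can compare the optimized and unoptimized code generated for the same filter ('tcp port 80' is just an arbitrary example expression):
tcpdump -i eth1 -d 'tcp port 80'
tcpdump -i eth1 -O -d 'tcp port 80'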

How to increase a soft limit of a stack size for processes run by some user (uid) on Solaris 10

Our enterprise runs on Oracle Tuxedo 10, under Solaris 10. As a result of some recent development (customization source code across the system was changed, increasing the sizes of local variables declared in C functions), we run into stack overflow problems from time to time (depending on how long the function call chain is).
As a workaround we decided to increase the soft limit of the stack size for all Tuxedo processes run by a single user. We are considering ulimit, /etc/project, etc.
A clear and short step-by-step instruction for our on-site support team on how to extend the stack size (per user) in Solaris 10 would be much appreciated! Thank you in advance.
You could use the shell's limit/ulimit built-in.
You can apply it in /etc/.login (csh, where the syntax is limit stacksize 8192) or /etc/profile (sh/ksh, where the syntax is ulimit -s 8192).
Setting 8192 kbytes in one of these 2 files would apply to all users on the system.
For a specific user, use projects, or add the limit/ulimit line to that user's .profile or .bash_profile.
This will affect every session connecting as that user.
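For the per-user variant, a minimal sketch of a profile fragment (sh/ksh syntax; the 64 MB value is a placeholder for your environment):
# in the Tuxedo user's $HOME/.profile: raise only the soft stack limit, value in KB
ulimit -S -s 65536
# verify in a new session of that user
ulimit -S -s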

Ensuring one Job Per Node on StarCluster / SunGridEngine (SGE)

When qsubing jobs on a StarCluster / SGE cluster, is there an easy way to ensure that each node receives at most one job at a time? I am having issues where multiple jobs end up on the same node leading to out of memory (OOM) issues.
I tried using -l cpu=8 but I think that does not check the number of USED cores just the number of cores on the box itself.
I also tried -l slots=8 but then I get:
Unable to run job: "job" denied: use parallel environments instead of requesting slots explicitly.
In your config file (.starcluster/config) add this section:
[plugin sge]
setup_class = starcluster.plugins.sge.SGEPlugin
slots_per_host = 1
This largely depends on how the cluster resources are configured, i.e. memory limits, etc. However, one thing to try is to request a lot of memory for each job:
-l h_vmem=xxG
This will have the side effect of excluding other jobs from running on the node, since most of the memory on that node is already requested by a previously running job.
Just make sure the memory you request is not above the allowable limit for the node. You can see whether it is exceeding this limit by checking the output of qstat -j <jobid> for errors.
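For example, on a node with 32 GB of RAM you might submit with something like the following (the 28G figure is a placeholder chosen to leave headroom for the OS, and it assumes h_vmem is configured as a requestable/consumable resource on your cluster):
qsub -l h_vmem=28G myjob.sh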
I accomplished this by setting the number of slots on each of my nodes to 1 using:
qconf -aattr queue slots "[nodeXXX=1]" all.q
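You can check that the change took effect with something like:
qconf -sq all.q | grep slots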