I am writing a fairly simple pcap "live" capture engine; however, the packet-processing callback implementation for pcap_dispatch will take a relatively long time per packet.
Does pcap run every "pcap_handler" callback in a separate thread? If so, is "pcap_handler" thread-safe, or should care be taken to protect it with critical sections?
Alternatively, does the pcap_dispatch callback work in a serial fashion? That is, is "pcap_handler" for packet 2 called only after "pcap_handler" for packet 1 is done? If so, is there an approach to avoid accumulating latency?
Thanks,
-V
Pcap basically works like this: There is a kernel-mode driver capturing the packets and placing them in a buffer of size B. The user-mode application may request any number of packets at any time using pcap_loop, pcap_dispatch, or pcap_next (the latter is basically pcap_dispatch for a single packet).
Therefore, when you use pcap_dispatch to request some packets, libpcap goes to the kernel and asks for the next packet in the buffer (if there isn't one, the timeout code kicks in, but that is irrelevant for this discussion), transfers it into userland, and deletes it from the buffer. After that, pcap_dispatch calls your handler, decrements its packets-to-do counter, and starts from the beginning. As a result, pcap_dispatch only returns once the requested number of packets has been processed, an error occurred, or a timeout happened.
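For reference, the calling pattern being described looks roughly like this (a minimal sketch; the device name, snaplen, and timeout values are placeholders):

#include <pcap/pcap.h>
#include <stdio.h>

static void handler(u_char *user, const struct pcap_pkthdr *header,
                    const u_char *data)
{
    /* Called once per packet, serially, in the thread that called
       pcap_dispatch(). */
    printf("captured %u bytes\n", header->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* "eth0", 65535-byte snaplen, promiscuous, 500 ms timeout: placeholders. */
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 500, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Ask for up to 10 packets; returns once they have been processed,
       an error occurs, or the packet buffer timeout expires. */
    pcap_dispatch(p, 10, handler, NULL);

    pcap_close(p);
    return 0;
}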
As you can see, libpcap is completely non-threaded, as most C APIs are. The kernel-mode driver, however, is obviously happy enough to deliver packets to multiple threads (otherwise you wouldn't be able to capture from more than one process), and is completely thread-safe (there is a separate buffer for each user-mode handle).
This implies that you must implement all parallelism yourself. You'd want to do something like this:
pcap_dispatch(P, count, handler, data);
.
.
.
struct pcap_work_item {
    struct pcap_pkthdr header;
    u_char data[];
};

void handler(u_char *user, const struct pcap_pkthdr *header, const u_char *data)
{
    struct pcap_work_item *item =
        malloc(sizeof(struct pcap_work_item) + header->caplen);
    item->header = *header;
    memcpy(item->data, data, header->caplen);
    queue_work_item(item);
}
Note that we have to copy the packet into the heap, because the header and data pointers are invalid after the callback returns.
The function queue_work_item should find a worker thread and assign it the task of handling the packet. Since you said that your callback takes a 'relatively long time', you likely need a large number of worker threads. Finding a suitable number of workers is subject to fine-tuning.
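A minimal sketch of one way queue_work_item and the workers could look, assuming POSIX threads (the queue layout and the process_packet() function are illustrative, not part of libpcap):

#include <pcap/pcap.h>
#include <pthread.h>
#include <stdlib.h>

/* Your slow per-packet processing; the name is illustrative. */
void process_packet(const struct pcap_pkthdr *header, const u_char *data);

/* Simple linked-list queue protected by a mutex and condition variable. */
struct work_node {
    struct work_node      *next;
    struct pcap_work_item *item;   /* the struct from the snippet above */
};

static struct work_node *queue_head, *queue_tail;
static pthread_mutex_t   queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t    queue_cond = PTHREAD_COND_INITIALIZER;

/* Called from the pcap_dispatch() handler: enqueue and wake one worker. */
void queue_work_item(struct pcap_work_item *item)
{
    struct work_node *node = malloc(sizeof(*node));
    node->item = item;
    node->next = NULL;

    pthread_mutex_lock(&queue_lock);
    if (queue_tail)
        queue_tail->next = node;
    else
        queue_head = node;
    queue_tail = node;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* Each worker thread runs this loop (start N of them with pthread_create). */
void *worker_main(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_cond, &queue_lock);
        struct work_node *node = queue_head;
        queue_head = node->next;
        if (queue_head == NULL)
            queue_tail = NULL;
        pthread_mutex_unlock(&queue_lock);

        process_packet(&node->item->header, node->item->data);
        free(node->item);
        free(node);
    }
    return NULL;
}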
At the beginning of this post I said that the kernel-mode driver has a buffer to collect incoming packets which await processing. The size of this buffer is implementation-defined. The snaplen parameter to pcap_open_live only controls how many bytes of one packet are captured; the number of packets that can be buffered cannot be controlled in a portable fashion. The buffer might be fixed-size, or it might grow as more and more packets arrive. However, if it overflows, all further packets are discarded until there is enough space for the next one to arrive. If you want to use your application in a high-traffic environment, you must make sure that your pcap_dispatch callback completes quickly. My sample callback simply assigns the packet to a worker, so it works fine even in high-traffic environments.
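If your libpcap is version 1.0 or newer, you can at least request a larger kernel buffer through the pcap_create()/pcap_set_buffer_size() activation API (whether and how the request is honored is still platform-dependent); a rough sketch, with the device name and sizes as placeholders:

#include <pcap/pcap.h>
#include <stdio.h>

/* Open a device requesting a 16 MB kernel buffer (values are placeholders). */
pcap_t *open_with_big_buffer(const char *device, char *errbuf)
{
    pcap_t *p = pcap_create(device, errbuf);
    if (p == NULL)
        return NULL;

    pcap_set_snaplen(p, 65535);                 /* bytes kept per packet     */
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 500);                   /* packet buffer timeout, ms */
    pcap_set_buffer_size(p, 16 * 1024 * 1024);  /* kernel buffer request     */

    if (pcap_activate(p) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        pcap_close(p);
        return NULL;
    }
    return p;
}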
I hope this answers all your questions.
Related
I have a library which uses libpcap to capture packets. I'm using pcap_loop() in a dedicated thread for the capture and pcap_breakloop() to stop the capture.
The packet buffer timeout is set to 500ms.
In some rare cases I am missing the last packets that my application sends before calling pcap_breakloop().
Reading the libpcap documentation I ended up wondering if the packet loss is related to the packet buffer timeout. The documentation says:
packets are not delivered as soon as they arrive, but are delivered after a short delay (called a "packet buffer timeout")
What happens if pcap_breakloop() is called during this delay? Are the packets in the buffer passed to the callback, or are they dropped before pcap_loop() returns?
I was unable to find the answer in the documentation.
Are the packets in the buffer passed to the callback
No.
or are they dropped before pcap_loop() returns ?
Yes. In capture mechanisms that buffer packets in kernel code and deliver them only when the buffer fills up or the timeout expires, pcap_breakloop() doesn't force the packets to be delivered.
For some of those capture mechanisms there might be a way to force the timeout to, in effect, expire, but I don't know of any documented way to do that with Linux PF_PACKET sockets, BPF, or WinPcap/Npcap NPF.
Update, giving more details:
On Linux and Windows, pcap_breakloop() attempts to wake up anything that's blocked waiting for packets on the same pcap_t.
On Linux, this is implemented by having the poll() call in libpcap block on both the PF_PACKET socket being used for capturing and on an "event" descriptor; pcap_breakloop() causes the "event" descriptor to supply an event, so that the poll() wakes up even if there are no packets to pick up from the socket yet. That does not force the current chunk in the buffer (memory shared between the kernel and userland code) to be assigned to userland, so the packets in it are not provided to the caller of libpcap.
On Windows, with Npcap, an "event object" is used by the driver and Packet32 library (the libpcap part of Npcap calls routines in the Packet32 library) to allow the library to block waiting for packets and the driver to wake the library up when packets are available. pcap_breakloop() does a SetEvent() call on the handle for that object, which forces userland code waiting for packets to wake up; it then tries to read from the device. I'd have to spend more time looking at the driver code to see whether, if there are buffered-but-not-delivered packets at that point, they will be delivered.
On all other platforms, pcap_breakloop() does not deliver a wakeup, as the capture mechanism either does no buffering or provides no mechanism to force a wakeup, so:
if no buffering is done, there's no packet buffer to flush;
if there's a timeout, code blocked on a read will be woken up when the timeout expires, and that buffer will be delivered to userland;
if there's no timeout, code blocked on a read could be blocked for an indefinite period of time.
The ideal situation would be if the capture mechanism provided, on all platforms that do buffering, a way for userland code to force the current buffer to be delivered, and thus to cause a wakeup. That would require changes to the NPF driver and Packet32 library in Npcap, and would require kernel changes in Linux, *BSD, macOS, Solaris, and AIX.
Update 2:
Note also that "break loop" means break out of the loop immediately, so even if all of the above were done, when the loop is exited, there might be packets remaining in libpcap's userland buffer. If you want those packets - even though, by calling pcap_breakloop(), you told libpcap "stop giving me packets" - you'll have to put the pcap_t into non-blocking mode and call pcap_dispatch() to drain the userland buffer. (That won't drain the kernel buffer.)
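A minimal sketch of that drain step, assuming 'handler' and 'user' are whatever you already pass to pcap_loop():

#include <pcap/pcap.h>

/* Call after pcap_loop() has returned because of pcap_breakloop(). */
void drain_userland_buffer(pcap_t *p, pcap_handler handler, u_char *user)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Non-blocking mode: pcap_dispatch() won't wait for new packets,
       it just hands over whatever is already in libpcap's userland buffer. */
    if (pcap_setnonblock(p, 1, errbuf) == -1)
        return;

    /* A count of -1 means "process all packets currently available";
       keep going until the userland buffer is empty. */
    while (pcap_dispatch(p, -1, handler, user) > 0)
        ;
}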
We have an application that allows a user to pass an arbitrary Tcl code block (as a callback) to a custom API that invokes it on individual elements of a large data tree. For performance, this is done using a thread pool, so things can get ripping.
The problem is, we have no control over user code, and in one case they are doing a puts that causes memory to explode and the app to crash. I can prevent this by redirecting stdout to /dev/null, which leads me to believe that Tcl's internal buffers can't be emptied fast enough, so it keeps buffering. Heap analysis seems to confirm this.
What I don't understand is that I haven't messed with any of stdout's options, so it should be line buffered, blocking, 4k. So, my first question would be: why is this happening? Shouldn't there already be backpressure applied to prevent this?
My second question would be: how do I prevent this? If the user wants to do something stupid, I'm more than willing to throttle their performance, but I don't want the app to crash. I suppose one solution would be to redefine puts to write to a file (or simply do nothing) before the callback is invoked, but I'd be interested if there was a way to ensure backpressure on the channel to prevent it from continuing to buffer.
Thanks for any thoughts!
It depends on the channel type and how you've configured it. However, the normal model is that writes to a synchronous channel (-blocking true) will either buffer or write immediately (according to the -buffering option), and writes to an asynchronous channel (-blocking false) will, if not processed immediately, be queued to be carried out later by an internal event handler. For most applications, that does the right thing; it sounds like you've passed an asynchronous channel to code that doesn't call into the event loop (or at least not frequently). Try using chan configure to make the channel synchronous before starting the user code; you're in a separate thread, so the blocking behaviour shouldn't be a problem for the rest of the application.
Some channels are more tricky. The one that people most normally encounter is the console channel in Tk on platforms such as Windows, where the channel ends up writing into a widget that doesn't have a maximum number of retained lines.
I am trying to use eventlet to process a large number of data requests: approx. 100,000 requests at a time to a remote server, each of which should generate a 10k-15k byte JSON response. I have to decode the JSON, then perform some data transformations (some field name changes, some simple transforms like English->metric, but a few require minor parsing), and send the results for all 100,000 requests out the back end as XML in a couple of formats expected by a legacy system.
I'm using the code from the eventlet example which uses imap() ("for body in pool.imap(fetch, urls): ..."), lightly modified. eventlet is working well so far on a small sample (5K urls) to fetch the JSON data.
My question is whether I should add the non-I/O processing (JSON decode, field transform, XML encode) to the "fetch()" function so that all that transform processing happens in the greenthread, or do the bare minimum in the greenthread, return the raw response body, and do the main processing in the "for body in pool.imap():" loop. I'm concerned that if I do the latter, the amount of data from completed threads will start building up and bloat memory, whereas doing the former would essentially throttle the process so that the XML output keeps up.
Suggestions as to the preferred way to implement this are welcome. Oh, and this will eventually run off of cron hourly, so it really has a time window it has to fit into. Thanks!
Ideally, you put each data processing operation into a separate green thread. Then, only when required, combine several operations into a batch or use a pool to throttle concurrency.
When you do non-I/O-bound processing in one loop, you essentially throttle concurrency to one simultaneous task. But you can run those in parallel using an (OS) thread pool via the eventlet.tpool module.
Throttle concurrency only when you have too much parallel CPU-bound code running.
I'm developing a web app that needs to handle bursts of very high load.
Once per minute I get a burst of requests within a very few seconds (~1M-3M/sec), and then for the rest of the minute I get nothing.
What's my best strategy to handle as many requests per second as possible at each front server? Just send a reply and store the request in memory somehow, to be processed in the background by the DB writer worker later?
The aim is to do as little as possible during the burst, and write the requests to the DB as soon as possible after the burst.
Edit: the order of transactions is not important;
we can lose some transactions, but 99% need to be recorded;
the latency of getting all requests to the DB can be a few seconds after the last request has been received. Let's say no more than 15 seconds.
This question is kind of vague. But I'll take a stab at it.
1) You need limits. A simple implementation will open millions of connections to the DB, which will obviously perform badly. At the very least, each connection eats MBs of RAM on the DB. Even with connection pooling, each 'thread' could take a lot of RAM to record its (incoming) state.
If your app server has a limited number of processing threads, you can use HAProxy to "pick up the phone" and buffer the request in a queue for a few seconds until there is a free thread on your app server to handle it.
In fact, you could just use a web server like nginx to take the request and say "200 OK". Then later, a simple app reads the web log and inserts into DB. This will scale pretty well, although you probably want one thread reading the log and several threads inserting.
2) If your language has coroutines, it may be better to handle the buffering yourself. You should measure the overhead of relying on your language runtime for scheduling.
For example, if each HTTP request is 1K of headers + data, you want to parse it and throw away everything but the one or two pieces of data that you actually need (i.e. the DB ID). If you rely on your language's coroutines as an 'implicit' queue, each coroutine will hold a 1K buffer while its request is being parsed. In some cases, it's more efficient/faster to have a finite number of workers and manage the queue explicitly. When you have a million things to do, small overheads add up quickly, and the language runtime won't always be optimized for your app.
Also, Go will give you far better control over your memory than Node.js. (Structs are much smaller than objects. The 'overhead' for the Keys to your struct is a compile-time thing for Go, but a run-time thing for Node.js)
3) How do you know it's working? You want to be able to know exactly how you are doing. When you rely on the language co-routines, it's not easy to ask "how many threads of execution do I have and what's the oldest one?" If you make an explicit queue, those questions are much easier to ask. (Imagine a handful of workers putting stuff in the queue, and a handful of workers pulling stuff out. There is a little uncertainty around the edges, but the queue in the middle very explicitly captures your backlog. You can easily calculate things like "drain rate" and "max memory usage" which are very important to knowing how overloaded you are.)
My advice: Go with Go. Long term, Go will be a much better choice. The Go runtime is a bit immature right now, but every release is getting better. Node.js is probably slightly ahead in a few areas (maturity, size of community, libraries, etc.)
How about a channel with a buffer size equal to what the DB writer can handle in 15 seconds? When the request comes in, it is sent on the channel. If the channel is full, give some sort of "System Overloaded" error response.
Then the DB writer reads from the channel and writes to the database.
I'm writing an application under Linux, using the Qt library.
So, there are two QThreads. In one of the threads, pcap_next() is called in a while loop. The threads frequently access each other's public members while running.
Without the pcap library (for example, reading packets from the hard disk), everything works fine, but when I try to put pcap's functions into a separate thread, I get a SEGFAULT.
I can't understand how pcap works. It looks like pcap freezes the whole process, and because of this the threads can't access each other's public members.
The main run() function of pcap's thread looks like:
while (true)
{
    Data = pcap_next(handle, &header);
    if (Data != NULL)
    {
        // processing functions
    }
}
any ideas?
"Freezing the whole process" would keep the other threads from even running; it wouldn't cause the process to crash.
If your program makes simultaneous calls on a single pcap_t in more than one thread, other than some safe calls such as pcap_breakloop() (which will not interrupt a thread that's blocked - you'd need to deliver a signal in UN*X to do that), there is no guarantee that it will work.
If you never make simultaneous pcap calls on the same pcap_t in different threads, it should work.
I.e., you could open the device/savefile in one thread, getting a pcap_t, and, once that's done, have the same thread or another thread read packets from the pcap_t. You could not, however, have more than one thread read packets from the pcap_t.
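A minimal sketch of that pattern, with a single dedicated capture thread (pcap_next_ex() is used here instead of pcap_next() because it distinguishes timeouts from errors):

#include <pcap/pcap.h>

/* Runs in exactly one thread; no other thread makes calls on 'handle'. */
void capture_loop(pcap_t *handle)
{
    struct pcap_pkthdr *header;
    const u_char       *data;

    for (;;) {
        int ret = pcap_next_ex(handle, &header, &data);
        if (ret == 1) {
            /* Got a packet. Copy whatever you need before handing it to
               other threads: 'header' and 'data' are only valid until the
               next call on this handle. */
        } else if (ret == 0) {
            continue;   /* packet buffer timeout expired, no packet */
        } else {
            break;      /* error, pcap_breakloop(), or end of savefile */
        }
    }
}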
However, there could be something wrong with the way you're using pcap, in a fashion that would crash even in a single-threaded program. We'd have to see all your pcap calls to see whether that's the case.