ALL,
The documentation here talks about two different functions.
Is the pcap_next() call for the pcap format while pcap_next_ex() is for the pcapng format? Or can both of those functions read both formats?
The page doesn't indicate it.
I have code that parses a pcap file and uses the former call, and I'm wondering whether it will be enough to just check for a pcapng file and use the latter call instead.
TIA!!
Is the pcap_next() call for the pcap format while pcap_next_ex() is for the pcapng format?
No. pcap_next() is for applications that don't bother checking for errors, and pcap_next_ex() is for applications that do. To quote the RETURN VALUE section of the man page:
pcap_next_ex() returns 1 if the packet was read without problems, 0 if packets are being read from a live capture and the packet buffer timeout expired, PCAP_ERROR if an error occurred while reading the packet, and PCAP_ERROR_BREAK if packets are being read from a ``savefile'' and there are no more packets to read from the savefile. If PCAP_ERROR is returned, pcap_geterr(3PCAP) or pcap_perror(3PCAP) may be called with p as an argument to fetch or display the error text.
pcap_next() returns a pointer to the packet data on success, and returns NULL if an error occurred, or if no packets were read from a live capture (if, for example, they were discarded because they didn't pass the packet filter, or if, on platforms that support a packet buffer timeout that starts before any packets arrive, the timeout expires before any packets arrive, or if the file descriptor for the capture device is in non-blocking mode and no packets were available to be read), or if no more packets are available in a "savefile." Unfortunately, there is no way to determine whether an error occurred or not.
You should use pcap_next_ex().
Or can both of those functions read both formats?
Yes. (pcap_next_ex() was added before libpcap could even read pcapng files.)
I have code that parses a pcap file and uses the former call, and I'm wondering whether it will be enough to just check for a pcapng file and use the latter call instead.
It would be enough just to use pcap_next_ex() regardless of whether the file is a pcap or pcapng file.
As part of a Revit add-in that I am running in Design Automation, I need to extract some data from the file, send it in JSON format to an external server for analysis, and use the result to update my Revit file with new features. I was able to satisfy this requirement by following the approach described in https://forge.autodesk.com/blog/communicate-servers-inside-design-automation, which worked as I needed. The problem arises when the size of the data to send for analysis grows; it results in the following error:
[11/12/2020 07:54:08] Error: Payload for "onProgress" callback exceeds 5120 bytes limit.
When checking my data, it turns out that the payload is around 27000 bytes. Are there other ways to send data from Design Automation for payloads larger than 5120 bytes?
I was unable to find documentation related to the use of ACESAPI: acesHttpOperation
There is no other way at the moment to send data from your work item to another server.
So either you would have to split the data into multiple 5120-byte parts and send them like that, or use two work items: one for getting the data from the file before doing the analysis and one for updating the file afterwards.
I am working on a TCP-based proxy that must first do a REQ/REPLY handshake in JSON on a given connection. Because JSON is a self-delimiting format, I reach for Go's json.Decoder, which does the job nicely.
Here are the steps I take:
Dial a connection to a remote server
Write a single JSON request to the remote server (REQ)
Read a single JSON reply from the same remote server (completing the proxy handshake REPLY)
Upon a valid JSON handshake, pass the client connection on to another part of the code, which switches to a text-based protocol from that point on
The problem is that when json.Decoder reads data into its internal buffer, it can potentially read more data than it needs, in which case json.Decoder's Buffered() method gives back an io.Reader with the remainder of the data.
This data (available via the Buffered() method) is the start of the text-based protocol, which needs to be read from the connection after the JSON handshake has done its work. But if I pass the connection forward as-is, without accounting for the leftover buffer, the connection gets into a locked state, because it waits to read data that never comes. The code that deals with the text-based protocol expects a net.Conn, and once I pass the connection forward (after the JSON handshake has been made), the code using the connection speaks only the text-based protocol. So there should be a clear boundary of work.
My question is: what is the ideal way to solve this issue so that I can still take advantage of json.Decoder, while ensuring that when I pass the connection to a different part of the proxy, the start of the text-based protocol data is still readable? I somehow need to take the remaining data from the json.Decoder's Buffered() method and put it back in front of the connection so it can be properly read going forward.
Any insight is much appreciated.
You can try wrapping the connection so the leftover buffered data is read before the connection itself:
type ConnWithBuffIncluded struct { // implements net.Conn so it can be passed through the pipeline
	net.Conn
	reader io.Reader // the decoder's leftover bytes followed by the connection itself
}

func NewConnWithBuffIncluded(conn net.Conn, dec *json.Decoder) *ConnWithBuffIncluded {
	// Build the MultiReader once: calling dec.Buffered() inside every Read would
	// hand back the same leftover bytes again instead of consuming them.
	return &ConnWithBuffIncluded{Conn: conn, reader: io.MultiReader(dec.Buffered(), conn)}
}

func (x *ConnWithBuffIncluded) Read(p []byte) (n int, err error) { // reads both sources in order
	return x.reader.Read(p)
}
I am using the Logback socket appender, and everything is OK; I can get the log output from the socket.
My scenario is: we have a distributed app, and all logs are saved to a log server's log file via SocketAppender. I just use the SimpleSocketServer provided by Logback to receive the logs from all the apps, and the logs are received and saved.
The only problem is that no encoder can be added to the socket appender, so the log messages are formatted in some default format, but I must save them in a specific format.
One way I can see is to write a log server like SimpleSocketServer; that server would receive the serialized object (ILoggingEvent), and I would format the object myself.
But that way I would need to write too much code. I think there should be a more convenient way to add an encoder.
I don't think you need to worry about the serialized version. You will give the SocketAppender on the various clients String messages.
Then, so long as you configure the SimpleSocketServer to use your desired encoder in its configuration, all your messages should be in the correct format on disk.
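For example, SimpleSocketServer is started with a port and a configuration file (java ch.qos.logback.classic.net.SimpleSocketServer 6000 server-logback.xml), and that configuration can attach whatever encoder you need to the appender that writes out the received events. A minimal sketch, where the port, file path, and pattern are placeholders:

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/var/log/apps/combined.log</file>
    <encoder>
      <!-- format the received ILoggingEvents however you need -->
      <pattern>%d{ISO8601} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE" />
  </root>
</configuration>
```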
When an "onmessage" event fires in the WebSocket protocol, are you guaranteed the full message, or is it more like a straight TCP connection, where you buffer the data first and then try to extract packets?
There is protocol-level support for fragmented messages and streaming, but this behavior is not exposed in the current JavaScript API (reference). So yes, if you receive a message, it is indeed an entire message, even if it was sent as many fragments.
I am trying to understand how GetResponseStream works.
When I call request.GetResponseStream, does it download all the data and then return the Stream object?
GetResponseStream returns a stream from which you can read the response. Internally, it can work any way it wants to. It can read the entire stream from the server before you call GetResponseStream, it can do it when you call GetResponseStream, it can do it later when you read from the stream. No guarantees are made as to how and when it will read from the server.