zlib - inflate only works after calling inflateInit twice - exception

I have some pre-compressed data (compressed with zlib-flate on Linux) in RAM. To use this compressed data I want to uncompress it using zlib's inflate.
I have no dynamic memory management on this system, but I have provided a big enough buffer for the uncompressed data. The problem is that if I call the inflate routine after calling the inflateInit routine, I get an unhandled exception.
But if I call the inflateInit function twice, the following inflate (= decompression) works fine and I get the correct decompressed data in my provided buffer. This is strange, isn't it?
I can also run a compression at any time before calling inflate and it will also work. What is going on here?
Let me show you the behaviour:
initInflate
inflate > fail
new run..
initInflate
initInflate
inflate > success
new run..
initDeflate
deflate (success but I don't use the result)
initInflate
inflate > success
There is an array somewhere holding the compressed data:
uint8_t src [] = {.....};
This is my buffer, which is definitely big enough to contain the complete decompressed data.
#define BUF_SIZE 1000
uint8_t buf[BUF_SIZE];
And this is the code of my decompressing:
z_stream strm;
strm.zalloc = Z_NULL;
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;
strm.avail_in = srcLen;
strm.next_in = src;
strm.avail_out = BUF_SIZE;
strm.next_out = buf;
strm.data_type = Z_BINARY;
inflateInit(&strm);
inflateInit(&strm); // the following inflate only works with this second init
inflate(&strm, Z_NO_FLUSH);
I can see that the state member of the stream changes from 0x40193678 after the first init to 0x40195250 after the second init (maybe this is important info for you). And both inits return Z_OK.
And now I hope you can help me..

What it's doing is allocating memory for the stream twice and using only the second allocation. I can only guess that you are overwriting just the memory allocated by the first inflateInit(), due to some other error in your program. The overwriting crashes inflate() when it tries to use the first allocation, but succeeds when using the second allocation, which is not overwritten by the other bug.


Use libpcap to capture multiple interfaces to same file

I would like to use libpcap to capture on multiple specific interfaces (not 'any') and write to the same file.
I have the following code (error handling and some args removed):
static gpointer pkt_tracing_thread(gpointer data)
{
while (1)
{
pcap_dispatch(g_capture_device1, .., dump_file1);
pcap_dispatch(g_capture_device2, .., dump_file2);
}
}
fp1 = calloc(1, sizeof(struct bpf_program));
fp2 = calloc(1, sizeof(struct bpf_program));
cap_dev1 = pcap_open_live(interface1,...
cap_dev2 = pcap_open_live(interface2,...
pcap_compile(cap_dev1, fp1, ...
pcap_compile(cap_dev2, fp2, ...
pcap_setfilter(cap_dev1, fp1);
pcap_setfilter(cap_dev2, fp2);
dump_file1 = pcap_dump_open(g_capture_device1, filename);
dump_file2 = pcap_dump_open(g_capture_device2, filename);
g_thread_create_full(pkt_tracing_thread, (gpointer)fp1, ...
g_thread_create_full(pkt_tracing_thread, (gpointer)fp2, ...
This does not work: what I see in filename is packets from only one of the interfaces. I'm guessing there could be threading issues in the above code.
I've read https://seclists.org/tcpdump/2012/q2/18 but I'm still not clear.
I've read that libpcap does not support writing in pcapng format, which would be required for the above to work, although I'm not clear about why.
Is there any way to capture multiple interfaces and write them to the same file?
Is there any way to capture multiple interfaces and write them to the same file?
Yes, but 1) you have to open the output file only once, with one call to pcap_dump_open() (otherwise, as with your program, you may have two threads writing to the same file independently and stepping on each other) and 2) you would need to have some form of mutex to prevent both threads from writing to the file at the same time.
Also, you should have one thread reading only from one capture device and the other thread reading from the other capture device, rather than having both threads reading from both devices.
As user9065877 said, you have to open the output file only once and write to it from only one thread at a time.
However, since you'd be serializing everything anyway, you may prefer to ask libpcap for pollable file descriptors for the interfaces and poll in a round-robin fashion for packets, using a single thread and no mutexes.

MariaDB non-blocking with EPOLL

I have a single-threaded server written in C that accepts TCP/UDP connections based on EPOLL and supports plugins for the multitude of protocol layers we need to support. That bit is fine.
Due to the single-threaded nature, I wanted to implement a database layer that could utilize the same EPOLL architecture rather than separately iterating over all of the open connections.
We use MariaDB and the MariaDB connector, which supports non-blocking functions in its API.
https://mariadb.com/kb/en/mariadb/using-the-non-blocking-library/
But what I'm finding is not what I expected, and what I was expecting is described below.
First I call mysql_real_connect_start(), and if it returns zero we dispatch the query immediately, as this indicates no blocking was required (although this never happens in practice).
Otherwise, I fetch the file descriptor, which is available immediately, register it with EPOLL, and bail back to the main EPOLL loop to wait for events.
s = mysql_get_socket(mysql);
if(s > 0)
{
brt_socket_set_fds(endpoint, s);
struct epoll_event event;
event.data.fd = s;
event.events = EPOLLRDHUP | EPOLLIN | EPOLLET | EPOLLOUT;
s = epoll_ctl(efd, EPOLL_CTL_ADD, s, &event);
if (s == -1) {
syslog(LOG_ERR, "brd_db : epoll error.");
// handle error.
}
...
So, then some time later I do get the EPOLLOUT indicating the socket has been opened.
And I dutifully call mysql_real_connect_cont(), but at this stage it still returns a non-zero value, indicating I must wait longer?
But then that is the last EPOLL event I get, except for the EPOLLRDHUP when I guess the MariaDB hangs up after 10 seconds.
Can anyone help me understand if this idea is even workable?
Thanks so much.
OK, for anyone else who lands here: I fixed it, or rather un-broke it.
Notice that, in the examples, the status returned from the _start/_cont calls is passed in as a parameter to the next _cont. It turns out this is critical.
The status contains the flags MYSQL_WAIT_READ, MYSQL_WAIT_WRITE, MYSQL_WAIT_EXCEPT and MYSQL_WAIT_TIMEOUT, and if it is not passed to the next _cont, my guess is you are messing up the _cont state machine.
I was not saving the status between the different places where _start and _cont were being called.
typedef struct MC
{
MYSQL *mysql;
int status;
} MC;
...
// Initial call
mc->status = mysql_real_connect_start(&ret, mc->mysql, host, user, password, NULL, 0, NULL, 0);
// EPOLL raised calls.
mc->status = mysql_real_connect_cont(&ret, mc->mysql, mc->status);
if(mc->status) return... // keep waiting check for errors.

Reading byte array the second time causes error?

I am using the following code to read an error message from a byte array. It works fine the first time, but if I try to access it a second time it throws an error:
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
StandardError is of type InboundPipe?
The error is:
Error: Error #3212: Cannot perform operation on a NativeProcess that is not running.
even though the process is running (process.running is true). It's the second call to readUTFBytes that seems to be the cause.
Update:
Here is the code making the same call twice in a row. The error happens on the second call, and process.running has not changed from true.
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
I also found out that standardError is an InboundPipe instance and implements IDataInput.
Update 2:
Thanks for all the help. I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading
in the input buffer. User code must call bytesAvailable to ensure that
sufficient data is available before trying to read it with one of the
read methods.
When I call readUTFBytes() it resets bytesAvailable to 0. So when I read a second time and there are no bytes available, it causes the error. The error message is misleading in my opinion, or the native process.running flag is incorrect.
I looked into seeing if it has a position property and it does not, at least not in this instance.
Could you try setting position to zero before reading, especially before repeated access? From the documentation:
Moves, or returns the current position, in bytes, of the file pointer into the ByteArray object. This is the point at which the next call to a read method starts reading or a write method starts writing.
//ByteArray example
var source: String = "Some data";
var data: ByteArray = new ByteArray();
data.writeUTFBytes(source);
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
This was a tricky problem, since the object was not a byte array although it looks and acts like one (same methods and almost the same properties). It is an InboundPipe that also implements IDataInput.
I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading
in the input buffer. User code must call bytesAvailable to ensure that
sufficient data is available before trying to read it with one of the
read methods.
When I call readUTFBytes() it resets bytesAvailable to 0. So when I call it a second time and there are no bytes available, it causes the error. The error message is misleading in my opinion, or the native process.running flag is incorrect, although I have reason to believe it's the former.
The solution is to check bytesAvailable before calling read operations and store the value if it needs to be accessed later.
if (process.standardError.bytesAvailable) {
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
errorDataArray.push(errorData);
}
I looked into seeing if it has a position property and it does not, at least not in this instance.

MmFile Empty Files throws Exception in Destructor

I'm having trouble getting MmFile to work in a directory scanning algorithm.
When I'm stress-testing it as follows
foreach (dent; dirEntries(..)) {
const size_t K = ...;
const ulong size = ...;
scope auto mf = new MmFile(dent.name, MmFile.Mode.read, size, null, win);
}
I can't find a combination of size and win that works for all cases when reading data.
When I set
const size = 0;
const win = 64*1024;
the length gets calculated correctly.
But when dent.name is an existing empty file, it crashes in the destructor of the MmFile, throwing a
core.exception.FinalizeError...std.exception.ErrnoException#std.mmfile.d(490): munmap failed (Invalid argument).
And I can't recover from this error by catching core.exception.FinalizeError, because it's thrown in the destructor. I haven't tried
try { delete mm; } catch (core.exception.FinalizeError) { ; /* pass */}
Maybe that works.
Is this the default behavior when calling mmap in C on existing empty files?
If so I think that MmFile should check for this error during construction.
The exception is also thrown when I replace scope with an explicit delete.
For now I simply skip calling MmFile on empty files.
It sounds like a bug to me for MmFile to barf on empty files regardless of what mmap itself does. Please report it.
On a side note, I'd advise against using either scope or delete, as they're going to be removed from the language, because they're both unsafe. std.typecons.scoped replaces scope in this context if you want to do that (though it's still unsafe). And as for delete, destroy will destroy the object without freeing its memory, and core.memory can be used to free memory if you really want to, but in general, if you want to be worrying about freeing memory, then you should be manually managing your memory (with malloc and free and possibly emplace) and not using the GC at all.

Correct way to use InternetReadFile() asynchronously

I've got code that's performing HTTP requests using WinInet API's asynchronously. In general, my code works, but I'm confused about the 'right' way to do things. In the documentation for InternetReadFile(), it states:
To ensure all data is retrieved, an application must continue to call
the InternetReadFile function until the function returns TRUE and the
lpdwNumberOfBytesRead parameter equals zero.
but in asynchronous mode, it may (or may not) return FALSE with an error of ERROR_IO_PENDING, indicating it will do the work asynchronously and call my callback when finished. If I read the documentation literally, it seems the asynchronous calls could also do just a partial read of the requested buffer and require the caller to keep calling InternetReadFile until a read of 0 bytes is encountered.
A typical implementation using InternetReadFile() synchronously would look something like this:
while(InternetReadFile(Request, Buffer, BufferSize, &BytesRead) && BytesRead != 0)
{
// do something with Buffer
}
but with the possibility that any one call to InternetReadFile() could signal that it's going to do the work asynchronously (and perhaps read part, but not all of your request), it becomes much more complicated. If I turn to MSDN sample code for guidance, the implementation is simple, simply calling InternetReadFile() once, and expecting a single return having read the entire requested buffer either instantly or asynchronously. Is this the correct way to use this function, or is MSDN Sample Code ignoring the possibility that InternetReadFile() will only read part of the requested buffer?
After a more careful reading of the asynchronous example, I see now that it is reading repeatedly until a successful read of 0 bytes is encountered. So to answer my own question, you must call InternetReadFile() over and over again, and be prepared for either a synchronous or asynchronous response.
Reading InternetReadFile() repeatedly until it returns TRUE and BytesRead is 0 is a correct way to use InternetReadFile(), but not enough if you work asynchronously.
As MSDN says
When running asynchronously, if a call to InternetReadFile does not result in a completed transaction, it will return FALSE and a subsequent call to GetLastError will return ERROR_IO_PENDING. When the transaction is completed the InternetStatusCallback specified in a previous call to InternetSetStatusCallback will be called with INTERNET_STATUS_REQUEST_COMPLETE.
So InternetReadFile() may return FALSE and set the last error to ERROR_IO_PENDING value if you work in asynchronous mode.
When the InternetStatusCallback is called with INTERNET_STATUS_REQUEST_COMPLETE, the lpvStatusInformation parameter will contain the address of an INTERNET_ASYNC_RESULT structure (see the InternetStatusCallback callback function). The INTERNET_ASYNC_RESULT.dwResult member will contain the result of the asynchronous operation (TRUE or FALSE, since you called InternetReadFile), and INTERNET_ASYNC_RESULT.dwError will contain an error code only if dwResult is FALSE.
If dwResult is TRUE then your Buffer contains data read from Internet, and the BytesRead contains the number of bytes read asynchronously.
So one of the most important things when working asynchronously is that the Buffer and the BytesRead variables must persist between InternetStatusCallback calls, i.e. they must not be allocated on the stack. Otherwise you get undefined behaviour, memory corruption, etc.