I'm having trouble getting MmFile to work in a directory scanning algorithm.
When I stress-test it as follows:
foreach (dent; dirEntries(..)) {
    const size_t K = ...;
    const ulong size = ...;
    scope auto mf = new MmFile(dent.name, MmFile.Mode.read, size, null, win);
}
I can't find a combination of size and win that works for all cases when reading data.
When I set
const size = 0;
const win = 64*1024;
the length gets calculated correctly.
But when dent.name is an existing empty file, it crashes in the destruction of the MmFile, throwing a
core.exception.FinalizeError...std.exception.ErrnoException#std.mmfile.d(490): munmap failed (Invalid argument).
And I can't recover from this error by catching core.exception.FinalizeError because it's thrown in the destructor. I haven't tried
try { delete mf; } catch (core.exception.FinalizeError) { /* pass */ }
Maybe that works.
Is this the default behavior when calling mmap in C on existing empty files?
If so I think that MmFile should check for this error during construction.
The exception is also thrown when I replace scope with an explicit delete.
For now I simply skip calling MmFile on empty files.
It sounds like a bug to me for MmFile to barf on empty files regardless of what mmap itself does. Please report it.
On a side note, I'd advise against using either scope or delete, as they're both unsafe and are going to be removed from the language. std.typecons.scoped replaces scope in this context if you want to do that (though it's still unsafe). As for delete: destroy will destroy the object without freeing its memory, and core.memory can be used to free the memory if you really want to. In general, though, if you're going to worry about freeing memory yourself, you should be managing your memory manually (with malloc and free and possibly emplace) and not using the GC at all.
Related
I have some pre-compressed data (compressed with the help of zlib-flate on Linux) in RAM. To use this compressed data I want to uncompress it using zlib and inflate.
I have no dynamic memory management on this system, but I provide a big enough buffer for the uncompressed data. The problem is that if I call the inflate routine after calling the inflateInit routine, I get an unhandled exception.
But if I call the inflateInit function twice, the following inflate (= decompression) works fine and I get the correct decompressed data in my provided buffer. This is strange, isn't it?
I can also run a compression at any time before calling inflate, and then it also works. What is going on here?
Let me show you the behaviour:
initInflate
inflate > fail
new run..
initInflate
initInflate
inflate > success
new run..
initDeflate
deflate (success but I don't use the result)
initInflate
inflate > success
There is an array somewhere holding the compressed data:
uint8_t src[] = { ... };
This is my buffer, which is definitely big enough to contain the complete decompressed data.
#define BUF_SIZE 1000
uint8_t buf[BUF_SIZE];
And this is my decompression code:
z_stream strm;
strm.zalloc = Z_NULL;
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;
strm.avail_in = srcLen;
strm.next_in = src;
strm.avail_out = BUF_SIZE;
strm.next_out = buf;
strm.data_type = Z_BINARY;
inflateInit(&strm);
inflateInit(&strm); // the following inflate only works with this second init
inflate(&strm, Z_NO_FLUSH);
I can see that the state member of the stream changes from 0x40193678 after the first init to 0x40195250 after the second init (maybe this is important info for you). Both inits return Z_OK.
And now I hope you can help me.
What this is doing is allocating memory for the stream twice and using only the second allocation. I can only guess that you are overwriting the memory allocated by the first inflateInit(), due to some other error in your program. The overwriting crashes inflate() when it tries to use the first allocation, but succeeds when using the second allocation, which is not overwritten by the other bug.
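For reference, and independent of where the overwrite comes from, a single init is all inflate needs. Here is a minimal C++ sketch of such a cycle; the buffer names mirror the question, while srcLen, decompressOnce and the error handling are placeholders rather than anything from the original code:

#include <cstdint>
#include <cstring>
#include <zlib.h>

extern std::uint8_t src[];   // compressed input (placeholder)
extern unsigned srcLen;      // number of compressed bytes (placeholder)
const unsigned BUF_SIZE = 1000;
static std::uint8_t buf[BUF_SIZE];

int decompressOnce()
{
    z_stream strm;
    std::memset(&strm, 0, sizeof(strm));  // zero every field so no member carries garbage

    if (inflateInit(&strm) != Z_OK)
        return -1;

    strm.next_in   = src;
    strm.avail_in  = srcLen;
    strm.next_out  = buf;
    strm.avail_out = BUF_SIZE;

    int ret = inflate(&strm, Z_FINISH);   // Z_STREAM_END means the whole stream was consumed
    inflateEnd(&strm);                    // release the state allocated by inflateInit
    return ret == Z_STREAM_END ? 0 : -1;
}

If this pattern also only works after a second inflateInit, that strongly suggests the corruption is happening elsewhere, as described above.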
Why doesn't C++ have a placement delete that directly corresponds to placement new, i.e. one that calls the destructor and then the appropriate placement operator delete?
For example:
MyType *p = new(arena) MyType;
...
//current technique
p->~MyType();
operator delete(p, arena);
//proposed technique
delete(arena) p;
operator delete is unique in being a non-member or static member function that is dynamically dispatched. A type with a virtual destructor performs the call to its own delete from the most derived destructor.
#include <iostream>

struct abc {
    virtual ~abc() = 0;
};
abc::~abc() {}  // a pure virtual destructor still needs a definition

struct d : abc {
    // static member operator delete, reached through the virtual destructor
    void operator delete(void *p) { std::cout << "goodbye\n"; ::operator delete(p); }
};

int main() {
    abc *p = new d;
    delete p;   // prints "goodbye"
}
For this to work with placement delete, the destructor would have to somehow pass the additional arguments to operator delete.
Solution 1: Pass the arguments through the virtual function. This requires a separate virtual destructor for every static member and global operator delete overload with different arguments.
Solution 2: Let the virtual destructor return a function pointer to the caller specifying what operator delete should be called. But if the destructor does lookup, this hits the same problem of requiring multiple virtual function definitions as #1. Some kind of abstract overload set would have to be created, which the caller would resolve.
You have a perfectly good point, and it would be a nice addition to the language. Retrofitting it into the existing semantics of delete is probably even possible, in theory. But most of the time we don't use the full functionality of delete, and it suffices to use an explicit destructor call followed by something like arena.release(p).
Probably because there was syntax for explicitly calling a destructor without deallocation (exactly as in your question), but no syntax for explicit construction in raw memory?
Actually there is a placement delete which is called by the implementation for an object that was "allocated" using placement new if the constructor threw an exception.
From Wikipedia.
The placement delete functions are called from placement new expressions. In particular, they are called if the constructor of the object throws an exception. In such a circumstance, in order to ensure that the program does not incur a memory leak, the placement delete functions are called.
The whole point of placement new is to separate object creation from its memory management, so it makes no sense to tie them back together during object destruction.
If the memory for your objects comes from the heap and you want the objects and their memory to have the same lifetime, just use operator new and operator delete, possibly overriding them if you need special behavior.
Placement new is useful, for example, in vector, which keeps a large chunk of raw memory and constructs and destroys objects inside it without releasing the memory each time.
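As a rough illustration of that pattern (the Widget type is invented for the example): construct into raw storage with placement new, destroy with an explicit destructor call, and the storage itself stays available for reuse:

#include <new>        // placement new
#include <iostream>

struct Widget {
    int id;
    explicit Widget(int i) : id(i) { std::cout << "construct " << id << "\n"; }
    ~Widget()                      { std::cout << "destroy "   << id << "\n"; }
};

int main() {
    // Raw, properly aligned storage; no object lives here yet.
    alignas(Widget) unsigned char storage[sizeof(Widget)];

    Widget *w = new (storage) Widget(42);  // construct in place, no allocation
    w->~Widget();                          // destroy in place, storage untouched

    // The same storage can be reused for another construction.
    Widget *w2 = new (storage) Widget(7);
    w2->~Widget();
}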
I am using the following code to read an error message from a byte array, and it works fine the first time, but if I try to access it a second time it throws an error:
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
StandardError is of type InboundPipe?
The error is:
Error: Error #3212: Cannot perform operation on a NativeProcess that is not running.
even though the process is running (process.running is true). It's the second call to readUTFBytes that seems to be the cause.
Update:
Here is the code making the same call twice in a row. The error happens on the second line, even though process.running has not changed from true.
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
I also found out that standardError is an InboundPipe instance and implements IDataInput.
Update 2:
Thanks for all the help. I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading
in the input buffer. User code must call bytesAvailable to ensure that
sufficient data is available before trying to read it with one of the
read methods.
When I call readUTFBytes() it resets the bytes available to 0, so when I read a second time and there are no bytes available, it causes the error. In my opinion either the error is misleading or the process.running flag is incorrect.
I checked whether it has a position property, and it does not, at least not in this instance.
Could you try setting position to zero before reading, especially before repeated access? The docs describe position as follows:
Moves, or returns the current position, in bytes, of the file pointer into the ByteArray object. This is the point at which the next call to a read method starts reading or a write method starts writing.
//ByteArray example
var source: String = "Some data";
var data: ByteArray = new ByteArray();
data.writeUTFBytes(source);
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
This was a tricky problem, since the object was not a ByteArray although it looks and acts like one (same methods and almost the same properties). It is an InboundPipe that also implements IDataInput.
I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading
in the input buffer. User code must call bytesAvailable to ensure that
sufficient data is available before trying to read it with one of the
read methods.
When I call readUTFBytes() it resets the bytes available to 0, so when I call it a second time and there are no bytes available, it causes the error. In my opinion either the error is misleading or the process.running flag is incorrect, although I have reason to believe it's the former.
The solution is to check bytesAvailable before calling read operations and store the value if it needs to be accessed later.
if (process.standardError.bytesAvailable) {
    errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
    errorDataArray.push(errorData);
}
I checked whether it has a position property, and it does not, at least not in this instance.
I am attempting to understand where exceptional conditions come from. My question is at the end, but I will present an example that might make it clearer.
Take this Java code, for example. It gets the path to a file and sets up a File object. If the path is null, an exception is thrown.
String path = getPathName();
try {
    File file = new File(path);
} catch (NullPointerException e) {
    // ...
}
This is hardly an exceptional circumstance, though, and we could modify it so that something like this might be preferable:
String path = getPathName();
if (path == null) {
    path = DEFAULT_PATH;
}
File file = new File(path); // we've removed the need for an exception
But moving further, we run into a new exception when we try to make the File readable.
try {
    file.setReadable(true);
} catch (SecurityException e) {
    // ...
}
We can skirt around this issue by checking the conditions up front instead:
SecurityManager sm = System.getSecurityManager();
if (sm != null) {
    // a SecurityManager is installed: adjust it, or check the permission up front
    // (note that sm.checkWrite(path) signals denial by throwing, not by returning false)
} else {
    file.setReadable(true);
}
With this example in mind, on to my question...
If we move down the stack, going from Java to the OS, etc., is it possible to replace all exception handling code with if-else branches? Or is there some root cause of exceptions (hardware?) that means they are "baked" into programming?
If we move down the stack, going from Java to the OS, etc., is it possible to replace all exception handling code with if-else branches?
Yes. This is how it used to be done, and still is in languages without exceptions. Exceptions are used because they are easier in a number of senses. The primary advantages are that cases not anticipated by the programmer can be aggregated in a general handler; and that information about the exceptional condition does not need to be explicitly preserved in every single function until it is properly handled.
Or is there some root cause of exceptions (hardware?) that means they are "baked" into programming?
Also yes. In general, unexpected hardware conditions need to be handled in some way, unless you are comfortable with undefined behaviour in such cases.
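For instance (a minimal C++ sketch, not tied to the Java example above): a condition that would otherwise surface as a hardware trap, such as integer division by zero, can be turned into an ordinary branch with an explicit check.

#include <iostream>
#include <optional>

// The would-be "exceptional" case becomes an ordinary branch.
std::optional<int> safeDivide(int a, int b) {
    if (b == 0)
        return std::nullopt;   // handled explicitly instead of trapping
    return a / b;
}

int main() {
    if (auto q = safeDivide(10, 0))
        std::cout << *q << "\n";
    else
        std::cout << "division by zero avoided\n";
}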
If all the methods in a program returned a pointer/reference to some kind of "exception" object (for other return values, pass in a pointer or reference to a caller-allocated storage location), and if every call to every method which might directly or indirectly want to throw an exception were bracketed with something like:
ret = callFunction( ...parameters... );
if (ret != NULL)
    return AddToExceptionStacktrace(ret, ...info about this call site... );
then there would be no need for any other form of exception handling (note that if the language supports scoped variables, the "return" statement would have to insert code to clean them up before it actually returns to the caller).
Unfortunately, that's a lot of extra code. This approach would be workable in a language which had only "checked" exceptions (meaning a method can neither throw exceptions nor pass them through unless it is declared as doing so), but adding that overhead to every function which might directly or indirectly call a function which throws an exception would be very expensive. Exception-handling mechanisms generally eliminate 99% of the extra overhead in the no-exceptions case, at the expense of increasing the overhead in the "exception" case.
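To make that scheme concrete, here is a hedged C++ sketch (all names are invented for illustration) of fallible functions returning a hand-rolled "exception object" and callers propagating it while appending call-site information:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// A hand-rolled "exception object" carrying a message and a manual call trace.
struct Error {
    std::string message;
    std::vector<std::string> trace;
};

// Fallible functions return an Error (nullptr on success) and deliver their
// real result through an out-parameter, as described above.
std::unique_ptr<Error> parsePort(const std::string &text, int &out) {
    if (text.empty())
        return std::make_unique<Error>(Error{"empty port string", {"parsePort"}});
    out = std::stoi(text);   // assume the text is all digits for this sketch
    return nullptr;
}

std::unique_ptr<Error> loadConfig(const std::string &portText, int &port) {
    if (auto err = parsePort(portText, port)) {
        err->trace.push_back("loadConfig");   // manually extend the "stack trace"
        return err;                           // propagate to the caller
    }
    return nullptr;
}

int main() {
    int port = 0;
    if (auto err = loadConfig("", port)) {
        std::cout << "error: " << err->message << "\n";
        for (const auto &frame : err->trace)
            std::cout << "  at " << frame << "\n";
        return 1;
    }
    std::cout << "port = " << port << "\n";
}

Every caller has to write the propagation boilerplate by hand, which is exactly the overhead that built-in exception handling removes from the no-error path.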
I have a problem: my program core dumps when I iterate over a set. The code is below. When the size of the set is below 50000 it runs okay, but it fails when the size is bigger than roughly 50000. I do nothing in the for loop, but it still core dumps. What is the problem?
set<CRoute *>::iterator it = route_list.begin();
for (; it != route_list.end(); ++it)
{
    // Nothing to do
}
What is the problem?
It's impossible to say given the data you've provided.
There are several common causes:
You have corrupted the set earlier in the program (e.g. by accessing it from multiple threads without proper locking)
You have used a sorting predicate that violates the strict weak ordering requirements of std::set (see the sketch at the end of this answer)
You have left a dangling pointer in your std::set, and your sorting predicate uses the dangling data and crashes when given garbage.
To figure out what's happening, stop guessing and look, e.g. by running the program in a debugger to see exactly where the core dump is happening.
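To illustrate the second cause, here is a sketch (CRoute and the ByCost comparator are invented for the example) of a predicate that violates strict weak ordering; a set built with it has undefined behaviour, and the resulting tree corruption may only show up as a crash once the container gets large:

#include <set>

struct CRoute {
    int cost;
};

// BROKEN comparator: with <=, cmp(a, b) and cmp(b, a) can both be true for
// equal costs, which violates strict weak ordering.
struct ByCost {
    bool operator()(const CRoute *a, const CRoute *b) const {
        return a->cost <= b->cost;   // should be: a->cost < b->cost
    }
};

int main() {
    std::set<CRoute *, ByCost> route_list;
    for (int i = 0; i < 100000; ++i)
        route_list.insert(new CRoute{i % 10});   // leaked on purpose to keep the sketch short

    // Iterating a tree whose ordering invariant was broken may well core dump.
    for (std::set<CRoute *, ByCost>::iterator it = route_list.begin();
         it != route_list.end(); ++it) {
        // nothing to do
    }
}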