I guess I've found a G++ bug, but I'm not sure. I cannot explain it. The compiler shouldn't accept the BAD code below, but it does: g++-4.5 and g++-4.6 with -std=c++0x pass it without any warning.
As written, the compiler treats a pointer to a Bar object as if it were the Bar object itself.
It's driving me crazy. I spent many hours tracking this down. Is there any technique to protect against this kind of bug?
The BAD code gives:
g++-4.6 for_stackoverflow.cpp && ./a.out
address of bar in main() 0xbff18fc0
Foo 0x9e80008 Bar 0xbff18fec
Foo 0x9e80028 Bar 0xbff18fec
Foo 0x9e80048 Bar 0xbff18fec
end
Source code:
#include <iostream>
#include <list>
#include <iomanip>
#include <algorithm>

#define BAD

using namespace std;

class Bar;

class Foo {
public:
    virtual void tick(Bar & b) {
        cout << "Foo " << this << " Bar " << setw(14) << (&b) << endl;
    }
};

class Bar : public list<Foo*> {};

int main() {
    Bar bar;
    cout << "address of bar in main() " << &bar << endl;
    bar.push_back(new Foo());
    bar.push_back(new Foo());
    bar.push_back(new Foo());
#ifdef GOOD
    for_each(bar.begin(), bar.end(), bind2nd(mem_fun(&Foo::tick), bar));
#elif defined(BAD)
    for_each(bar.begin(), bar.end(), bind2nd(mem_fun(&Foo::tick), &bar));
#else
#error "define GOOD xor BAD"
#endif
    cout << "end" << endl;
    return 0;
}
bind2nd is declared as:
template <class Fn, class T>
binder2nd<Fn> bind2nd(const Fn&, const T&);
This means that the type T is deduced, in this case as Bar *.
On my system it's implemented as:
template<typename _Operation, typename _Tp>
inline binder2nd<_Operation>
bind2nd(const _Operation& __fn, const _Tp& __x)
{
    typedef typename _Operation::second_argument_type _Arg2_type;
    return binder2nd<_Operation>(__fn, _Arg2_type(__x));
}
To see why that would compile, consider:
class Bar {};

int main() {
    Bar *b = 0;
    typedef const Bar& type;
    const type t = type(b);
}
which seems to be the real problem, and it does compile with g++: the function-style cast type(b) from Bar* to const Bar& is basically a reinterpret_cast.
The simplest workaround is changing it to use boost::bind (or std::bind for C++11):
#include <boost/bind.hpp>
...
boost::bind(mem_fun(&Foo::tick), _1, &bar)
A lambda function would likewise give the error you'd expect to see.
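For illustration, here is the lambda version as a drop-in replacement for the for_each line in main() above (a minimal sketch, assuming C++11):

// Compiles: bar is passed as the Bar& that tick() expects.
for_each(bar.begin(), bar.end(), [&bar](Foo * f) { f->tick(bar); });

// Rejected by the compiler, because a Bar* does not convert to Bar&:
// for_each(bar.begin(), bar.end(), [&bar](Foo * f) { f->tick(&bar); });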
Related
I just want to use C++ to read LevelDB features extracted from Caffe.
I use the following code in Eclipse:
// Copyright 2014 BVLC and contributors.
#include <glog/logging.h>
#include <stdio.h> // for snprintf
#include <google/protobuf/text_format.h>
#include <leveldb/db.h>
#include <leveldb/write_batch.h>
#include <string>
#include <vector>
#include <cassert>
#include <iostream>
#include <map>
//#include "cpp/sample.pb.h"
#include "caffe/proto/caffe.pb.h" // for: Datum
using namespace caffe;
#define NUMBER_FEATURES_PER_IMAGE 16
using namespace std;
int main(int argc, char** argv)
{
    //google::InitGoogleLogging(argv[0]);
    if (argc < 2)
    {
        printf("ERROR! Not enough arguments!\nusage: %s <feature_folder>", argv[0]);
        exit(1);
    }

    LOG(INFO) << "Creating leveldb object\n";
    leveldb::DB* db;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status status = leveldb::DB::Open(options, argv[1], &db);
    assert(status.ok());

    leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
    int i = 0;
    double count = 0.0f;
    for (it->SeekToFirst(); it->Valid(); it->Next())
    {
        Datum d;
        d.clear_float_data();
        d.clear_data();
        d.ParseFromString(it->value().ToString());
        for (int j = 0; j < d.height(); ++j)
            count += d.float_data(j);
        i++;
    }
    assert(it->status().ok());

    LOG(INFO) << "Number of datums (or feature vectors): " << i << "\n";
    LOG(INFO) << "Reduction of All Vectors to A Scalar Value: " << count << "\n";
    delete it;
}
It builds without error, but when running it says:
/home/deep/cuda-workspace/ReadLevelDB/Debug/ReadLevelDB: error while loading shared libraries: libcaffe.so.1.0.0-rc3: cannot open shared object file: No such file or directory
What is the problem?
Your program fails to find the *.so file at run time. There are three methods to fix this:

1. Create links to the *.so in /usr/lib:

ln -s /where/you/install/lib/*.so /usr/lib
sudo ldconfig

2. Modify LD_LIBRARY_PATH (the dynamic linker reads this environment variable directly, so no ldconfig is needed):

export LD_LIBRARY_PATH=/where/you/install/lib:$LD_LIBRARY_PATH

3. Modify /etc/ld.so.conf: edit it with vim /etc/ld.so.conf, add the line /where/you/install/lib, then run:

sudo ldconfig
#include <iostream>
using namespace std;

int main()
{
    unsigned u = 10;
    int i = -42;
    cout << i + i << endl; // prints -84
    cout << u + i << endl; // if 32-bit ints, prints 4294967264
}
I have that code, and on the second expression, u + i, I get the value 4294967264. Why is that?
Can you explain?
I'm still early in learning C++, so please explain step by step and refrain from using complicated terminology! Please!
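What the comment in the code already hints at: before the addition, i is converted to unsigned, and that conversion wraps modulo 2^32 when int is 32 bits wide. A small sketch of the arithmetic (assuming 32-bit int and unsigned):

#include <iostream>

int main()
{
    int i = -42;
    unsigned u = 10;
    // Converting -42 to a 32-bit unsigned wraps around:
    // 4294967296 - 42 = 4294967254
    unsigned converted = static_cast<unsigned>(i);
    std::cout << converted << "\n";     // prints 4294967254
    std::cout << converted + u << "\n"; // prints 4294967264
}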
I am having trouble understanding how clang handles exceptions when I try to allocate an object whose size would exceed the implementation limit. For instance, if I compile and run the following bit of code:
#include <limits>
#include <new>
#include <iostream>
int main(int argc, char** argv) {
    typedef unsigned char byte;
    byte* gb;
    try {
        gb = new byte[std::numeric_limits<std::size_t>::max()];
    }
    catch (const std::bad_alloc&) {
        std::cout << "Normal" << std::endl;
        return 0;
    }
    delete[] gb;
    std::cout << "Abnormal" << std::endl;
    return 1;
}
then when I compile using "clang++ -O0 -std=c++11 main.cpp", the result I get is "Normal", as expected, but as soon as I enable optimization levels 1 through 3, the program unexpectedly prints "Abnormal".
I say unexpectedly because, according to the C++11 standard, 5.3.4.7:
When the value of the expression in a noptr-new-declarator is zero, the allocation function is called to allocate an array with no elements. If the value of that expression is less than zero or such that the size of the allocated object would exceed the implementation-defined limit, or if the new-initializer is a braced-init-list for which the number of initializer-clauses exceeds the number of elements to initialize, no storage is obtained and the new-expression terminates by throwing an exception of a type that would match a handler (15.3) of type std::bad_array_new_length (18.6.2.2).
[This behavior is observed with both clang 3.5 using libstdc++ on Linux and clang 3.3 using libc++ on Mac. The same behavior is also observed when the -std=c++11 flag is removed.]
The plot thickens when I compile the same program using gcc 4.8, using the exact same command line options. In that case, the program returns "Normal" for any chosen optimization level.
I cannot find any undefined behavior in the code posted above that would explain why clang would feel free not to throw an exception when code optimizations are enabled. As far as the bug database is concerned, the closest I can find is http://llvm.org/bugs/show_bug.cgi?id=11644 but it seems to be related to the type of exception being thrown rather than a behavior difference between debug and release code.
So is this a bug in Clang? Or am I missing something? Thanks.
It appears that clang eliminates the allocation as the array is unused:
#include <limits>
#include <new>
#include <iostream>

int main(int argc, char** argv)
{
    typedef unsigned char byte;
    byte* gb;
    const std::size_t max = std::numeric_limits<std::size_t>::max();
    try
    {
        gb = new byte[max];
    }
    catch (const std::bad_alloc&)
    {
        std::cout << "Normal" << std::endl;
        return 0;
    }
    try
    {
        gb[0] = 1;
        gb[max - 1] = 1;
        std::cout << gb[0] << gb[max - 1] << "\n";
    }
    catch (...)
    {
        std::cout << "Exception on access\n";
    }
    delete[] gb;
    std::cout << "Abnormal" << std::endl;
    return 1;
}
This code prints "Normal" with both -O0 and -O3 (see this demo). That means the allocation is actually attempted here, and it indeed fails, hence we get the exception. Note that if we remove the output, clang is still smart enough to optimize away even the writes.
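A related variant (our own sketch, not from the original question): making the pointer itself escape, for example by printing it, is in practice enough to keep the allocation in the optimized build, since the program's output now depends on the result of the new-expression:

#include <limits>
#include <new>
#include <iostream>

int main()
{
    unsigned char* gb = nullptr;
    try
    {
        gb = new unsigned char[std::numeric_limits<std::size_t>::max()];
    }
    catch (const std::bad_alloc&)
    {
        std::cout << "Normal" << std::endl;
        return 0;
    }
    // Printing the pointer makes the allocation observable in practice,
    // so current optimizers do not drop it.
    std::cout << static_cast<void*>(gb) << std::endl;
    delete[] gb;
    std::cout << "Abnormal" << std::endl;
    return 1;
}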
It appears that clang++ on Mac OSX does throw bad_alloc, but it also prints an error message from malloc.
Program:
// bad_alloc example
#include <iostream> // std::cout
#include <sstream>
#include <new> // std::bad_alloc
int main(int argc, char *argv[])
{
    unsigned long long memSize = 10000;
    if (argc < 2)
        memSize = 10000;
    else {
        std::istringstream is(argv[1]); // C++ atoi
        is >> memSize;
    }
    try
    {
        int* myarray = new int[memSize];
        std::cout << "alloc of " << memSize << " succeeded" << std::endl;
    }
    catch (std::bad_alloc& ba)
    {
        std::cerr << "bad_alloc caught: " << ba.what() << '\n';
    }
    std::cerr << "Program exiting normally" << std::endl;
    return 0;
}
Mac terminal output:
david@Godel:~/Dropbox/Projects/Miscellaneous$ badalloc
alloc of 10000 succeeded
Program exiting normally
david@Godel:~/Dropbox/Projects/Miscellaneous$ badalloc 1234567891234567890
badalloc(25154,0x7fff7622b310) malloc: *** mach_vm_map(size=4938271564938272768)
failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
bad_alloc caught: std::bad_alloc
Program exiting normally
I also tried the same program using g++ on Windows 7:
C:\Users\David\Dropbox\Projects\Miscellaneous>g++ -o badallocw badalloc.cpp
C:\Users\David\Dropbox\Projects\Miscellaneous>badallocw
alloc of 10000 succeeded
C:\Users\David\Dropbox\Projects\Miscellaneous>badallocw 1234567890
bad_alloc caught: std::bad_alloc
Note: The program is a modified version of the example at
http://www.cplusplus.com/reference/new/bad_alloc/
I prefer to work with std::string, but I'd like to figure out what is going wrong here.
I am unable to understand why std::find isn't working properly for type T**, even though pointer arithmetic works on it correctly, like:
std::cout << *(argv+1) << "\t" << *(argv+2) << std::endl;
But it works fine for the type T*[N]:
#include <iostream>
#include <algorithm>

int main( int argc, const char ** argv )
{
    std::cout << *(argv+1) << "\t" << *(argv+2) << std::endl;
    const char ** cmdPtr = std::find(argv+1, argv+argc, "Hello");
    const char * testAr[] = { "Hello", "World" };
    const char ** testPtr = std::find(testAr, testAr+2, "Hello");
    if( cmdPtr == argv+argc )
        std::cout << "String not found" << std::endl;
    if( testPtr != testAr+2 )
        std::cout << "String found: " << *testPtr << std::endl;
    return 0;
}
Arguments passed: Hello World
Output:
Hello World
String not found
String found: Hello
Thanks.
Comparing values of type char const* amounts to comparing addresses. The address of "Hello" is guaranteed to be different from any other string's address; only when compared against another "Hello" literal may the pointers compare equal (the compiler is allowed, but not required, to fold identical literals). What you actually want is to compare the characters being pointed to.
In the first case, you're comparing the pointer values themselves and not what they're pointing to. And the literal "Hello" doesn't have the same address as any of the strings in argv.
Try using:
const char ** cmdPtr = std::find(argv+1, argv+argc, std::string("Hello"));
std::string knows to compare contents and not addresses.
For the array version, the compiler can fold all literals into a single one, so every time "Hello" is seen throughout the code it's really the same pointer. Thus, comparing for equality in
const char * testAr[] = { "Hello", "World" };
const char ** testPtr = std::find(testAr, testAr+2, "Hello");
yields the correct result.
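Alternatively, if you'd rather not construct a std::string, a find_if with strcmp compares the contents directly. A minimal sketch (assuming C++11; a drop-in for the question's main()):

#include <algorithm>
#include <cstring>

// Compare the characters pointed to, not the pointer values.
const char ** cmdPtr = std::find_if(argv + 1, argv + argc,
    [](const char * s) { return std::strcmp(s, "Hello") == 0; });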
I just started to dig into Boost.Spirit, the latest version as of now, V2.4.
The essence of my problem is the following:
I would like to parse strings like "1a2" or "3b4".
So the rule I use is:
(double_ >> lit('b') >> double_)
| (double_ >> lit('a') >> double_);
The attribute of the rule must be vector<double>, and I'm reading it into the container.
The complete code:
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix_core.hpp>
#include <boost/spirit/include/phoenix_operator.hpp>
#include <iostream>
#include <algorithm>
#include <string>
#include <vector>
#include <cstring>
int main(int argc, char * argv[])
{
    using namespace std;
    using namespace boost::spirit;
    using namespace boost::spirit::qi;
    using boost::phoenix::arg_names::arg1;

    char const * first = "1a2";
    char const * last = first + std::strlen(first);

    vector<double> h;
    rule<char const *, vector<double>()> or_test;
    or_test %= (double_ >> lit('b') >> double_)
             | (double_ >> lit('a') >> double_);

    if (parse(first, last, or_test, h)) {
        cout << "parse success: ";
        for_each(h.begin(), h.end(), (cout << arg1 << " "));
        cout << "end\n";
    } else cout << "parse error\n" << endl;
    return 0;
}
I'm compiling it with g++ 4.4.3, and it prints "1 1 2", while I expect "1 2".
As far as I understand, this happens because the parser:
goes to the first alternative
reads a double_ and stores it in the container
then stops at "a", while expecting lit('b')
goes to the second alternative
reads two more doubles
My question is: is this the correct behavior, and if yes, why?
That's expected behavior. During backtracking, Spirit does not 'unmake' changes to attributes. Therefore, you should use the hold[] directive, which explicitly forces the parser to hold on to a copy of the attribute (allowing it to roll back any attribute change):
or_test =
      hold[double_ >> lit('b') >> double_]
    | (double_ >> lit('a') >> double_)
    ;
This directive needs to be applied to all alternatives modifying the attribute, except the last one.
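As a side note (our suggestion, not part of the original answer): for this particular grammar you can avoid backtracking over attribute-modifying parsers entirely by factoring out the common structure:

// lit() exposes no attribute, so the rule attribute is still vector<double>:
// a double, then 'a' or 'b', then another double.
or_test %= double_ >> (lit('a') | lit('b')) >> double_;

This only works because the two alternatives differ in a single literal; hold[] remains the general tool.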