Stopping SWIG from expanding #if expressions - swig

I have a case where I need to put a guard on a definition so that it doesn't get included in the SWIG output more than once under certain circumstances. Unfortunately, SWIG is expanding the #if statement before writing the .cc file. Here's the situation:
%define FOO(BAR)
%{
#if !defined(_##BAR##_DECLARED)
#define _##BAR##_DECLARED
// declaration stuff
#endif
%}
// implementation stuff
%enddef
SWIG generates FOO(CLS) as follows:
#if !0
#define _CLS_DECLARED
/*#SWIG:dummy.swg,46,FOO#*/
// declaration stuff
/*#SWIG#*/;
#endif
// implementation stuff
So the #if statement is expanded by SWIG before writing the output (creating the useless #if !0), but not the #define. Is there a way to tell SWIG not to expand the #if?
I tried adding an auxiliary macro GUARD(SYM) which I've defined in various ways:
// Using C-style macros, including !defined()
#define GUARD(SYM) !defined(_##SYM##DECLARED)
// Using C-style macros, excluding !defined()
#define GUARD(SYM) _##SYM##DECLARED
// Using SWIG-style macros, including !defined()
%define GUARD(SYM) !defined(_##SYM##DECLARED)
%enddef
// Using SWIG-style macros, excluding !defined()
%define GUARD(SYM) _##SYM##DECLARED
%enddef
For each of these I've modified the #if statement accordingly. The output is always the same.
In case it's useful, here's the output of swig -version:
SWIG Version 2.0.8
Compiled with g++ [i386-apple-darwin11.4.2]
Configured options: +pcre
Please see http://www.swig.org for reporting bugs and further information

If you want SWIG to pass statements directly into the generated file unchanged, then put them into
%{ .. %}
Yes, it's used for delivering #include directives into the SWIG-generated output as well.

Related

Incorrect number of arguments in preprocessor macro when passing CUDA kernel call as argument macro

I have the following macro
#define TIMEIT( variable, body ) \
variable = omp_get_wtime(); \
body; \
variable = omp_get_wtime() - variable;
which I use to very simply time sections of code.
However, macro calls are sensitive to commas, and a CUDA kernel call (using the triple chevron syntax) causes the preprocessor to believe that the macro is being passed more than 2 arguments.
Is there a way around this?
Since C99/C++11, you can use a variadic argument (varargs) macro to solve this problem. You write a varargs macro using ... as the last parameter; in the body of the macro, __VA_ARGS__ will be replaced with the trailing arguments from the macro call, with commas intact:
#define TIMEIT( variable, ... ) \
variable = omp_get_wtime(); \
__VA_ARGS__; \
variable = omp_get_wtime() - variable;
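For example, here is a minimal self-contained sketch (it repeats the variadic definition above; a std::map declaration stands in for the triple-chevron call, since its template argument list contains an unprotected comma that splits in exactly the same way; compile with -fopenmp):
#include <omp.h> // omp_get_wtime
#include <map>

#define TIMEIT( variable, ... ) \
    variable = omp_get_wtime(); \
    __VA_ARGS__; \
    variable = omp_get_wtime() - variable;

int main()
{
    double elapsed;
    // The comma in <int, int> is not protected by parentheses, but the trailing
    // ... swallows both pieces and __VA_ARGS__ re-emits them with the comma
    // intact, so the expansion is exactly the statement that was written.
    TIMEIT(elapsed, std::map<int, int> lookup);
    lookup[1] = 2; // 'lookup' is in scope because the macro adds no braces
    return 0;
}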
For compilers without varargs macro support, your only alternative is to try to protect all commas by using them only inside parenthetic expressions. Because parentheses protect commas from being treated as macro argument delimiters, many commas are naturally safe. But there are lots of exceptions, such as C++ template argument lists (<…> doesn't protect commas), declarations of multiple objects, and -- as you say -- triple chevron calls. Some of these may be harder to protect than others.
In particular, I don't know if you can put redundant parentheses around a CUDA kernel call, for example. Of course, if nvcc does handle varargs macros, you wouldn't need to. But based on this bug report, I'm not so sure. nvcc is based on the EDG compiler, which is conformant, but it does not seem to have occurred to NVIDIA to document which version of the standard is being used.


What are undefined reference/unresolved external symbol errors? What are common causes and how to fix/prevent them?
Compiling a C++ program takes place in several steps, as specified by 2.2 (credits to Keith Thompson for the reference):
The precedence among the syntax rules of translation is specified by the following phases [see footnote].
Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set
(introducing new-line characters for end-of-line indicators) if
necessary. [SNIP]
Each instance of a backslash character (\) immediately followed by a new-line character is deleted, splicing physical source lines to
form logical source lines. [SNIP]
The source file is decomposed into preprocessing tokens (2.5) and sequences of white-space characters (including comments). [SNIP]
Preprocessing directives are executed, macro invocations are expanded, and _Pragma unary operator expressions are executed. [SNIP]
Each source character set member in a character literal or a string literal, as well as each escape sequence and universal-character-name
in a character literal or a non-raw string literal, is converted to
the corresponding member of the execution character set; [SNIP]
Adjacent string literal tokens are concatenated.
White-space characters separating tokens are no longer significant. Each preprocessing token is converted into a token. (2.7). The
resulting tokens are syntactically and semantically analyzed and
translated as a translation unit. [SNIP]
Translated translation units and instantiation units are combined as follows: [SNIP]
All external entity references are resolved. Library components are linked to satisfy external references to entities not defined in the
current translation. All such translator output is collected into a
program image which contains information needed for execution in its
execution environment. (emphasis mine)
[footnote] Implementations must behave as if these separate phases occur, although in practice different phases might be folded together.
The specified errors occur during this last stage of compilation, most commonly referred to as linking. It basically means that you compiled a bunch of implementation files into object files or libraries and now you want to get them to work together.
Say you defined symbol a in a.cpp. Now, b.cpp declared that symbol and used it. Before linking, it simply assumes that that symbol was defined somewhere, but it doesn't yet care where. The linking phase is responsible for finding the symbol and correctly linking it to b.cpp (well, actually to the object or library that uses it).
If you're using Microsoft Visual Studio, you'll see that projects generate .lib files. These contain a table of exported symbols, and a table of imported symbols. The imported symbols are resolved against the libraries you link against, and the exported symbols are provided for the libraries that use that .lib (if any).
Similar mechanisms exist for other compilers/platforms.
Common error messages are error LNK2001, error LNK1120, error LNK2019 for Microsoft Visual Studio and undefined reference to symbolName for GCC.
The code:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
struct A
{
virtual ~A() = 0;
};
struct B: A
{
virtual ~B(){}
};
extern int x;
void foo();
int main()
{
x = 0;
foo();
Y y;
B b;
}
will generate the following errors with GCC:
/home/AbiSfw/ccvvuHoX.o: In function `main':
prog.cpp:(.text+0x10): undefined reference to `x'
prog.cpp:(.text+0x19): undefined reference to `foo()'
prog.cpp:(.text+0x2d): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD1Ev[B::~B()]+0xb): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD0Ev[B::~B()]+0x12): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1Y[typeinfo for Y]+0x8): undefined reference to `typeinfo for X'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1B[typeinfo for B]+0x8): undefined reference to `typeinfo for A'
collect2: ld returned 1 exit status
and similar errors with Microsoft Visual Studio:
1>test2.obj : error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo##YAXXZ)
1>test2.obj : error LNK2001: unresolved external symbol "int x" (?x##3HA)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual __thiscall A::~A(void)" (??1A##UAE#XZ)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall X::foo(void)" (?foo#X##UAEXXZ)
1>...\test2.exe : fatal error LNK1120: 4 unresolved externals
Common causes include:
Failure to link against appropriate libraries/object files or compile implementation files
Declared but did not define a variable or function.
Common issues with class-type members
Template implementations not visible.
Symbols were defined in a C program and used in C++ code.
Incorrectly importing/exporting methods/classes across modules/dll. (MSVS specific)
Circular library dependency
undefined reference to `WinMain@16'
Interdependent library order
Multiple source files of the same name
Mistyping or not including the .lib extension when using the #pragma (Microsoft Visual Studio)
Problems with template friends
Inconsistent UNICODE definitions
Missing "extern" in const variable declarations/definitions (C++ only)
Visual Studio Code not configured for a multiple file project
Errors on Mac OS X when building a dylib, but a .so on other Unix-y systems is OK
Class members:
A pure virtual destructor needs an implementation.
Declaring a destructor pure still requires you to define it (unlike a regular function):
struct X
{
virtual ~X() = 0;
};
struct Y : X
{
~Y() {}
};
int main()
{
Y y;
}
//X::~X(){} //uncomment this line for successful definition
This happens because base class destructors are called when the object is destroyed implicitly, so a definition is required.
virtual methods must either be implemented or defined as pure.
This is similar to non-virtual methods with no definition, with the added reasoning that
the pure declaration generates a dummy vtable and you might get the linker error without using the function:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
int main()
{
Y y; //linker error although there was no call to X::foo
}
For this to work, declare X::foo() as pure:
struct X
{
virtual void foo() = 0;
};
Non-virtual class members
Some members need to be defined even if not used explicitly:
struct A
{
~A();
};
The following would yield the error:
A a; //destructor undefined
The implementation can be inline, in the class definition itself:
struct A
{
~A() {}
};
or outside:
A::~A() {}
If the implementation is outside the class definition, but in a header, the methods have to be marked as inline to prevent a multiple definition.
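For example, a minimal sketch with a hypothetical header-only class (a.h is a made-up name):
// a.h
#pragma once
struct A
{
    ~A();
};
// Out-of-class definition in the same header: marking it inline keeps every
// translation unit that includes a.h from emitting a clashing definition.
inline A::~A() {}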
All used member methods need to be defined if used.
A common mistake is forgetting to qualify the name:
struct A
{
void foo();
};
void foo() {}
int main()
{
A a;
a.foo();
}
The definition should be
void A::foo() {}
static data members must be defined outside the class in a single translation unit:
struct X
{
static int x;
};
int main()
{
int x = X::x;
}
//int X::x; //uncomment this line to define X::x
An initializer can be provided for a static const data member of integral or enumeration type within the class definition; however, odr-use of this member will still require a namespace scope definition as described above. C++11 allows initialization inside the class for all static const data members.
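A minimal sketch (hypothetical names) combining both cases:
#include <iostream>
struct X
{
    static const int n = 7; // in-class initializer: OK for const integral/enum types
    static int counter;     // declaration only
};
// Exactly one translation unit must provide the namespace-scope definitions:
int X::counter = 0;
const int X::n;             // needed only if X::n is odr-used (e.g. its address is taken)
int main()
{
    ++X::counter;
    std::cout << X::n + X::counter << '\n'; // prints 8
}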
Failure to link against appropriate libraries/object files or compile implementation files
Commonly, each translation unit will generate an object file that contains the definitions of the symbols defined in that translation unit.
To use those symbols, you have to link against those object files.
Under gcc you would specify all object files that are to be linked together in the command line, or compile the implementation files together.
g++ -o test objectFile1.o objectFile2.o -lLibraryName
-l... must be to the right of any .o/.c/.cpp files.
The libraryName here is just the bare name of the library, without platform-specific additions. So e.g. on Linux library files are usually called libfoo.so but you'd only write -lfoo. On Windows that same file might be called foo.lib, but you'd use the same argument. You might have to add the directory where those files can be found using -L‹directory›. Make sure to not write a space after -l or -L.
For Xcode: Add the User Header Search Paths -> add the Library Search Path -> drag and drop the actual library reference into the project folder.
Under MSVS, files added to a project automatically have their object files linked together and a lib file would be generated (in common usage). To use the symbols in a separate project, you'd
need to include the lib files in the project settings. This is done in the Linker section of the project properties, in Input -> Additional Dependencies. (the path to the lib file should be
added in Linker -> General -> Additional Library Directories) When using a third-party library that is provided with a lib file, failure to do so usually results in the error.
It can also happen that you forget to add the file to the compilation, in which case the object file won't be generated. In gcc you'd add the files to the command line. In MSVS adding the file to the project will make it compile it automatically (albeit files can, manually, be individually excluded from the build).
In Windows programming, the tell-tale sign that you did not link a necessary library is that the name of the unresolved symbol begins with __imp_. Look up the name of the function in the documentation, and it should say which library you need to use. For example, MSDN puts the information in a box at the bottom of each function in a section called "Library".
Declared but did not define a variable or function.
A typical variable declaration is
extern int x;
As this is only a declaration, a single definition is needed. A corresponding definition would be:
int x;
For example, the following would generate an error:
extern int x;
int main()
{
x = 0;
}
//int x; // uncomment this line for successful definition
Similar remarks apply to functions. Declaring a function without defining it leads to the error:
void foo(); // declaration only
int main()
{
foo();
}
//void foo() {} //uncomment this line for successful definition
Be careful that the function you implement exactly matches the one you declared. For example, you may have mismatched cv-qualifiers:
void foo(int& x);
int main()
{
int x;
foo(x);
}
void foo(const int& x) {} //different function, doesn't provide a definition
//for void foo(int& x)
Other examples of mismatches include
Function/variable declared in one namespace, defined in another.
Function/variable declared as class member, defined as global (or vice versa).
Function return type, parameter number and types, and calling convention do not all exactly agree.
The error message from the compiler will often give you the full declaration of the variable or function that was declared but never defined. Compare it closely to the definition you provided. Make sure every detail matches.
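For instance, the first kind of mismatch listed above might look like this minimal sketch (lib and helper are made-up names):
namespace lib
{
    void helper();   // declared inside namespace lib
}
void helper() {}     // defines ::helper, a different function from lib::helper
int main()
{
    lib::helper();   // undefined reference to `lib::helper()'
}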
The order in which interdependent linked libraries are specified is wrong.
The order in which libraries are linked DOES matter if the libraries depend on each other. In general, if library A depends on library B, then libA MUST appear before libB in the linker flags.
For example:
// B.h
#ifndef B_H
#define B_H
struct B {
B(int);
int x;
};
#endif
// B.cpp
#include "B.h"
B::B(int xx) : x(xx) {}
// A.h
#include "B.h"
struct A {
A(int x);
B b;
};
// A.cpp
#include "A.h"
A::A(int x) : b(x) {}
// main.cpp
#include "A.h"
int main() {
A a(5);
return 0;
};
Create the libraries:
$ g++ -c A.cpp
$ g++ -c B.cpp
$ ar rvs libA.a A.o
ar: creating libA.a
a - A.o
$ ar rvs libB.a B.o
ar: creating libB.a
a - B.o
Compile:
$ g++ main.cpp -L. -lB -lA
./libA.a(A.o): In function `A::A(int)':
A.cpp:(.text+0x1c): undefined reference to `B::B(int)'
collect2: error: ld returned 1 exit status
$ g++ main.cpp -L. -lA -lB
$ ./a.out
So to repeat again, the order DOES matter!
Symbols were defined in a C program and used in C++ code.
The function (or variable) void foo() was defined in a C program and you attempt to use it in a C++ program:
void foo();
int main()
{
foo();
}
The C++ linker expects names to be mangled, so you have to declare the function as:
extern "C" void foo();
int main()
{
foo();
}
Equivalently, instead of being defined in a C program, the function (or variable) void foo() was defined in C++ but with C linkage:
extern "C" void foo();
and you attempt to use it in a C++ program with C++ linkage.
If an entire library is included in a header file (and was compiled as C code), the include will need to be as follows;
extern "C" {
#include "cheader.h"
}
what is an "undefined reference/unresolved external symbol"
I'll try to explain what is an "undefined reference/unresolved external symbol".
note: i use g++ and Linux and all examples is for it
For example we have some code
// src1.cpp
void print();
static int local_var_name; // 'static' makes variable not visible for other modules
int global_var_name = 123;
int main()
{
print();
return 0;
}
and
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
//extern int local_var_name;
void print ()
{
// printf("%d%d\n", global_var_name, local_var_name);
printf("%d\n", global_var_name);
}
Make object files
$ g++ -c src1.cpp -o src1.o
$ g++ -c src2.cpp -o src2.o
After the assembler phase we have an object file, which contains the symbols it exports.
Look at the symbols
$ readelf --symbols src1.o
Num: Value Size Type Bind Vis Ndx Name
5: 0000000000000000 4 OBJECT LOCAL DEFAULT 4 _ZL14local_var_name # [1]
9: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 global_var_name # [2]
I've omitted some lines from the output because they do not matter.
So, we see the following symbols to export:
[1] - this is our static (local) variable (important: its Bind is LOCAL)
[2] - this is our global variable
src2.cpp's symbols are not shown here.
Link our object files
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123
The linker sees the exported symbols and links them. Now we try to uncomment the lines in src2.cpp, like here:
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
extern int local_var_name;
void print ()
{
printf("%d%d\n", global_var_name, local_var_name);
}
and rebuild an object file
$ g++ -c src2.cpp -o src2.o
OK (no errors), because we only built the object file; linking is not done yet.
Try to link
$ g++ src1.o src2.o -o prog
src2.o: In function `print()':
src2.cpp:(.text+0x6): undefined reference to `local_var_name'
collect2: error: ld returned 1 exit status
This happened because our local_var_name is static, i.e. it is not visible to other modules.
Now let's go deeper. Get the compilation (assembly) phase output:
$ g++ -S src1.cpp -o src1.s
// src1.s
Look at src1.s:
.file "src1.cpp"
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
.globl global_var_name
.data
.align 4
.type global_var_name, @object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, @function
main:
; assembler code, not interesting for us
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",@progbits
So, we've seen there is no global label for local_var_name, which is why the linker hasn't found it. But we are hackers :) and we can fix it. Open src1.s in your text editor and change
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
to
.globl local_var_name
.data
.align 4
.type local_var_name, @object
.size local_var_name, 4
local_var_name:
.long 456789
i.e. you should have something like this:
.file "src1.cpp"
.globl local_var_name
.data
.align 4
.type local_var_name, @object
.size local_var_name, 4
local_var_name:
.long 456789
.globl global_var_name
.align 4
.type global_var_name, @object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, @function
main:
; ...
We have changed the visibility of local_var_name and set its value to 456789.
Try to build an object file from it:
$ g++ -c src1.s -o src1.o
OK, now look at the readelf output (symbols):
$ readelf --symbols src1.o
8: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 local_var_name
Now local_var_name has Bind GLOBAL (it was LOCAL).
Link:
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123456789
OK, we hacked it :)
So, as a result - an "undefined reference/unresolved external symbol error" happens when the linker cannot find global symbols in the object files.
If all else fails, recompile.
I was recently able to get rid of an unresolved external error in Visual Studio 2012 just by recompiling the offending file. When I re-built, the error went away.
This usually happens when two (or more) libraries have a cyclic dependency. Library A attempts to use symbols in B.lib and library B attempts to use symbols from A.lib. Neither exists to start off with. When you attempt to compile A, the link step will fail because it can't find B.lib. A.lib will be generated, but no DLL. You then compile B, which will succeed and generate B.lib. Re-compiling A will now work because B.lib is now found.
Template implementations not visible.
Unspecialized templates must have their definitions visible to all translation units that use them. That means you can't separate the definition of a template
to an implementation file. If you must separate the implementation, the usual workaround is to have an impl file which you include at the end of the header that
declares the template. A common situation is:
template<class T>
struct X
{
void foo();
};
int main()
{
X<int> x;
x.foo();
}
//differentImplementationFile.cpp
template<class T>
void X<T>::foo()
{
}
To fix this, you must move the definition of X::foo to the header file or some place visible to the translation unit that uses it.
Specialized templates can be implemented in an implementation file and the implementation doesn't have to be visible, but the specialization must be previously declared.
For further explanation and another possible solution (explicit instantiation) see this question and answer.
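A minimal sketch of the specialized-template case (hypothetical file names):
// x.h
template<class T>
struct X
{
    void foo();
};
// Declare the full specialization in the header so every user sees it...
template<>
void X<int>::foo();

// x.cpp
#include "x.h"
// ...and define it in a single implementation file:
template<>
void X<int>::foo() { /* ... */ }

// main.cpp
#include "x.h"
int main()
{
    X<int> x;
    x.foo();    // resolved by the definition in x.cpp
}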
This is one of the most confusing error messages that every VC++ programmer has seen time and time again. Let's clarify things first.
A. What is symbol?
In short, a symbol is a name. It can be a variable name, a function name, a class name, a typedef name, or anything except the names and tokens that belong to the C++ language itself. It is either user-defined or introduced by a dependency library (another user's definitions).
B. What is external?
In VC++, every source file (.cpp, .c, etc.) is considered a translation unit; the compiler compiles one unit at a time and generates one object file (.obj) for the current translation unit. (Note that every header file that this source file includes will be preprocessed and considered part of this translation unit.) Everything within a translation unit is considered internal; everything else is considered external. In C++, you may reference an external symbol by using keywords like extern, __declspec(dllimport) and so on.
C. What is “resolve”?
Resolve is a link-time term. At link time, the linker attempts to find the external definition for every symbol in the object files that cannot find its definition internally. The scope of this search includes:
All object files generated at compile time
All libraries (.lib) that are either explicitly or implicitly specified as additional dependencies of the application being built.
This searching process is called resolving.
D. Finally, why Unresolved External Symbol?
If the linker cannot find the external definition for a symbol that has no definition internally, it reports an Unresolved External Symbol error.
E. Possible causes of LNK2019: Unresolved External Symbol error.
We already know that this error is due to the linker failing to find the definition of external symbols; the possible causes can be sorted as:
Definition exists
For example, if we have a function called foo defined in a.cpp:
int foo()
{
return 0;
}
In b.cpp we want to call function foo, so we add
void foo();
to declare function foo(), and call it in another function body, say bar():
void bar()
{
foo();
}
Now when you build this code you will get a LNK2019 error complaining that foo is an unresolved symbol. In this case, we know that foo() has its definition in a.cpp, but it is different from the one we are calling (different return type). This is the case where a definition exists.
Definition does not exist
Suppose we want to call some functions in a library, but the import library is not added to the additional dependency list (set from: Project | Properties | Configuration Properties | Linker | Input | Additional Dependencies) of your project settings. Now the linker will report LNK2019 since the definition does not exist in the current search scope.
Incorrectly importing/exporting methods/classes across modules/dll (compiler specific).
MSVS requires you to specify which symbols to export and import using __declspec(dllexport) and __declspec(dllimport).
This dual functionality is usually obtained through the use of a macro:
#ifdef THIS_MODULE
#define DLLIMPEXP __declspec(dllexport)
#else
#define DLLIMPEXP __declspec(dllimport)
#endif
The macro THIS_MODULE would only be defined in the module that exports the function. That way, the declaration:
DLLIMPEXP void foo();
expands to
__declspec(dllexport) void foo();
and tells the compiler to export the function, as the current module contains its definition. When including the declaration in a different module, it would expand to
__declspec(dllimport) void foo();
and tells the compiler that the definition is in one of the libraries you linked against (also see 1)).
You can similarly import/export classes:
class DLLIMPEXP X
{
};
undefined reference to WinMain@16 or similar 'unusual' main() entry point reference (especially for visual-studio).
You may have missed choosing the right project type in your IDE. The IDE may want to bind e.g. Windows Application projects to such an entry-point function (as specified in the missing reference above), instead of the commonly used int main(int argc, char** argv); signature.
If your IDE supports Plain Console Projects you might want to choose this project type instead of a Windows application project.
Here are case 1 and case 2 handled in more detail from a real-world problem.
Also, if you're using 3rd-party libraries, make sure you have the correct 32/64-bit binaries.
Microsoft offers a #pragma to reference the correct library at link time;
#pragma comment(lib, "libname.lib")
In addition to the library path including the directory of the library, this should be the full name of the library.
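For example (Ws2_32.lib, the Winsock import library, is just an illustrative choice):
// Equivalent to adding Ws2_32.lib to Linker -> Input -> Additional Dependencies;
// note the full file name, including the .lib extension.
#pragma comment(lib, "Ws2_32.lib")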
Visual Studio NuGet package needs to be updated for new toolset version
I just had this problem trying to link libpng with Visual Studio 2013. The problem is that the package file only had libraries for Visual Studio 2010 and 2012.
The correct solution is to hope the developer releases an updated package and then upgrade, but it worked for me by hacking in an extra setting for VS2013, pointing at the VS2012 library files.
I edited the package (in the packages folder inside the solution's directory) by finding packagename\build\native\packagename.targets and, inside that file, copying all the v110 sections. I changed the v110 to v120 in the condition fields only, being very careful to leave the filename paths all as v110. This simply allowed Visual Studio 2013 to link to the libraries for 2012, and in this case, it worked.
Suppose you have a big project written in C++ with a thousand .cpp files and a thousand .h files, and the project also depends on ten static libraries. Let's say we are on Windows and we build our project in Visual Studio 20xx. When you press Ctrl + F7, Visual Studio starts compiling the whole solution (suppose we have just one project in the solution).
What does compilation mean?
Visual Studio searches the .vcxproj file and starts compiling each file that has the .cpp extension. The order of compilation is undefined, so you must not assume that main.cpp is compiled first.
Each .cpp file may depend on additional .h files in order to find symbols that may or may not be defined in that .cpp file.
If there is a .cpp file in which the compiler cannot find a symbol, a compile-time error is raised with the message Symbol x could not be found.
For each file with the .cpp extension an object file (.obj) is generated, and Visual Studio also writes the output to a file named ProjectName.Cpp.Clean.txt, which contains all object files that must be processed by the linker.
The second step of the build is done by the linker. The linker merges all the object files and finally builds the output (which may be an executable or a library).
Steps in linking a project
Parse all the object files and find the definitions which were only declared in headers (e.g. the code of one method of a class, as mentioned in previous answers, or even the initialization of a static variable which is a member of a class).
If a symbol cannot be found in the object files, it is also searched for in additional libraries. To add a new library to a project, use Configuration Properties -> VC++ Directories -> Library Directories, where you specify additional folders for searching libraries, and Configuration Properties -> Linker -> Input for specifying the name of the library.
If the linker cannot find a symbol which you use in some .cpp file, it raises a link-time error, which may look like
error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo@@YAXXZ)
Observations
Once the linker finds a symbol, it doesn't search other libraries for it.
The order of linking libraries does matter.
If the linker finds an external symbol in a static library, it includes the symbol in the output of the project. However, if the library is shared (dynamic), it doesn't include the code (the symbols) in the output, but run-time crashes may occur.
How to solve this kind of error
Compile-time error:
Make sure your C++ project is syntactically correct.
Link-time error:
Define all the symbols which you declare in your header files.
Use #pragma once to allow the compiler not to include a header if it was already included in the current .cpp being compiled.
Make sure that your external library doesn't contain symbols that may conflict with other symbols you defined in your header files.
When you use templates, make sure you include the definition of each template function in the header file to allow the compiler to generate appropriate code for any instantiation.
Use the linker to help diagnose the error
Most modern linkers include a verbose option that prints out to varying degrees;
Link invocation (command line),
Data on what libraries are included in the link stage,
The location of the libraries,
Search paths used.
For gcc and clang; you would typically add -v -Wl,--verbose or -v -Wl,-v to the command line. More details can be found here;
Linux ld man page.
LLVM linker page.
"An introduction to GCC" chapter 9.
For MSVC, /VERBOSE (in particular /VERBOSE:LIB) is added to the link command line.
The MSDN page on the /VERBOSE linker option.
A bug in the compiler/IDE
I recently had this problem, and it turned out it was a bug in Visual Studio Express 2013. I had to remove a source file from the project and re-add it to overcome the bug.
Steps to try if you believe it could be a bug in compiler/IDE:
Clean the project (some IDEs have an option to do this, you can also
manually do it by deleting the object files)
Try starting a new project, copying all source code from the original one.
Linked .lib file is associated with a .dll
I had the same issue. Say I have the projects MyProject and TestProject. I had effectively linked the lib file for MyProject to TestProject. However, this lib file was produced as the DLL for MyProject was built. Also, it did not contain source code for all methods in MyProject, but only access to the DLL's entry points.
To solve the issue, I built MyProject as a LIB and linked TestProject to this .lib file (I copy-pasted the generated .lib file into the TestProject folder). I can then build MyProject again as a DLL. It compiles since the lib to which TestProject is linked does contain code for all methods in the classes of MyProject.
Since people seem to be directed to this question when it comes to linker errors I am going to add this here.
One possible reason for linker errors with GCC 5.2.0 is that a new libstdc++ library ABI is now chosen by default.
If you get linker errors about undefined references to symbols that involve types in the std::__cxx11 namespace or the tag [abi:cxx11] then it probably indicates that you are trying to link together object files that were compiled with different values for the _GLIBCXX_USE_CXX11_ABI macro. This commonly happens when linking to a third-party library that was compiled with an older version of GCC. If the third-party library cannot be rebuilt with the new ABI then you will need to recompile your code with the old ABI.
So if you suddenly get linker errors when switching to a GCC after 5.1.0 this would be a thing to check out.
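A minimal sketch, assuming the third-party code was built with the old ABI: the macro can be defined before the first standard library include (more commonly it is passed on the compiler command line as -D_GLIBCXX_USE_CXX11_ABI=0), which switches this translation unit back to the old ABI so its symbols match:
// Must appear before any libstdc++ header is included.
#define _GLIBCXX_USE_CXX11_ABI 0
#include <string>
#include <iostream>
// greeting() is now mangled with the old std::string,
// not std::__cxx11::basic_string.
std::string greeting() { return "hello"; }
int main() { std::cout << greeting() << '\n'; }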
Your linkage consumes libraries before the object files that refer to them
You are trying to compile and link your program with the GCC toolchain.
Your linkage specifies all of the necessary libraries and library search paths
If libfoo depends on libbar, then your linkage correctly puts libfoo before libbar.
Your linkage fails with undefined reference to something errors.
But all the undefined somethings are declared in the header files you have
#included and are in fact defined in the libraries that you are linking.
Examples are in C. They could equally well be C++
A minimal example involving a static library you built yourself
my_lib.c
#include "my_lib.h"
#include <stdio.h>
void hw(void)
{
puts("Hello World");
}
my_lib.h
#ifndef MY_LIB_H
#define MY_LIB_H
extern void hw(void);
#endif
eg1.c
#include <my_lib.h>
int main()
{
hw();
return 0;
}
You build your static library:
$ gcc -c -o my_lib.o my_lib.c
$ ar rcs libmy_lib.a my_lib.o
You compile your program:
$ gcc -I. -c -o eg1.o eg1.c
You try to link it with libmy_lib.a and fail:
$ gcc -o eg1 -L. -lmy_lib eg1.o
eg1.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
The same result if you compile and link in one step, like:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
/tmp/ccQk1tvs.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
A minimal example involving a shared system library, the compression library libz
eg2.c
#include <zlib.h>
#include <stdio.h>
int main()
{
printf("%s\n",zlibVersion());
return 0;
}
Compile your program:
$ gcc -c -o eg2.o eg2.c
Try to link your program with libz and fail:
$ gcc -o eg2 -lz eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
Same if you compile and link in one go:
$ gcc -o eg2 -I. -lz eg2.c
/tmp/ccxCiGn7.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
And a variation on example 2 involving pkg-config:
$ gcc -o eg2 $(pkg-config --libs zlib) eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
What are you doing wrong?
In the sequence of object files and libraries you want to link to make your
program, you are placing the libraries before the object files that refer to
them. You need to place the libraries after the object files that refer
to them.
Link example 1 correctly:
$ gcc -o eg1 eg1.o -L. -lmy_lib
Success:
$ ./eg1
Hello World
Link example 2 correctly:
$ gcc -o eg2 eg2.o -lz
Success:
$ ./eg2
1.2.8
Link the example 2 pkg-config variation correctly:
$ gcc -o eg2 eg2.o $(pkg-config --libs zlib)
$ ./eg2
1.2.8
The explanation
Reading is optional from here on.
By default, a linkage command generated by GCC, on your distro,
consumes the files in the linkage from left to right in
commandline sequence. When it finds that a file refers to something
and does not contain a definition for it, it will search for a definition
in files further to the right. If it eventually finds a definition, the
reference is resolved. If any references remain unresolved at the end,
the linkage fails: the linker does not search backwards.
First, example 1, with static library my_lib.a
A static library is an indexed archive of object files. When the linker
finds -lmy_lib in the linkage sequence and figures out that this refers
to the static library ./libmy_lib.a, it wants to know whether your program
needs any of the object files in libmy_lib.a.
There is only one object file in libmy_lib.a, namely my_lib.o, and there's only one thing defined
in my_lib.o, namely the function hw.
The linker will decide that your program needs my_lib.o if and only if it already knows that
your program refers to hw, in one or more of the object files it has already
added to the program, and that none of the object files it has already added
contains a definition for hw.
If that is true, then the linker will extract a copy of my_lib.o from the library and
add it to your program. Then, your program contains a definition for hw, so
its references to hw are resolved.
When you try to link the program like:
$ gcc -o eg1 -L. -lmy_lib eg1.o
the linker has not added eg1.o to the program when it sees
-lmy_lib. Because at that point, it has not seen eg1.o.
Your program does not yet make any references to hw: it
does not yet make any references at all, because all the references it makes
are in eg1.o.
So the linker does not add my_lib.o to the program and has no further
use for libmy_lib.a.
Next, it finds eg1.o and adds it to the program. An object file in the
linkage sequence is always added to the program. Now, the program makes
a reference to hw, and does not contain a definition of hw; but
there is nothing left in the linkage sequence that could provide the missing
definition. The reference to hw ends up unresolved, and the linkage fails.
Second, example 2, with shared library libz
A shared library isn't an archive of object files or anything like it. It's
much more like a program that doesn't have a main function and
instead exposes multiple other symbols that it defines, so that other
programs can use them at runtime.
Many Linux distros today configure their GCC toolchain so that its language drivers (gcc,g++,gfortran etc)
instruct the system linker (ld) to link shared libraries on an as-needed basis.
You have got one of those distros.
This means that when the linker finds -lz in the linkage sequence, and figures out that this refers
to the shared library (say) /usr/lib/x86_64-linux-gnu/libz.so, it wants to know whether any references that it has added to your program that aren't yet defined have definitions that are exported by libz.
If that is true, then the linker will not copy any chunks out of libz and
add them to your program; instead, it will just doctor the code of your program
so that:-
At runtime, the system program loader will load a copy of libz into the
same process as your program whenever it loads a copy of your program, to run it.
At runtime, whenever your program refers to something that is defined in
libz, that reference uses the definition exported by the copy of libz in
the same process.
Your program wants to refer to just one thing that has a definition exported by libz,
namely the function zlibVersion, which is referred to just once, in eg2.c.
If the linker adds that reference to your program, and then finds the definition
exported by libz, the reference is resolved.
But when you try to link the program like:
gcc -o eg2 -lz eg2.o
the order of events is wrong in just the same way as with example 1.
At the point when the linker finds -lz, there are no references to anything
in the program: they are all in eg2.o, which has not yet been seen. So the
linker decides it has no use for libz. When it reaches eg2.o, it adds it to the program,
which then has an undefined reference to zlibVersion. At that point the linkage sequence is finished;
that reference is unresolved, and the linkage fails.
Lastly, the pkg-config variation of example 2 has a now obvious explanation.
After shell-expansion:
gcc -o eg2 $(pkg-config --libs zlib) eg2.o
becomes:
gcc -o eg2 -lz eg2.o
which is just example 2 again.
I can reproduce the problem in example 1, but not in example 2
The linkage:
gcc -o eg2 -lz eg2.o
works just fine for you!
(Or: That linkage worked fine for you on, say, Fedora 23, but fails on Ubuntu 16.04)
That's because the distro on which the linkage works is one of the ones that
does not configure its GCC toolchain to link shared libraries as-needed.
Back in the day, it was normal for unix-like systems to link static and shared
libraries by different rules. Static libraries in a linkage sequence were linked
on the as-needed basis explained in example 1, but shared libraries were linked unconditionally.
This behaviour is economical at linktime because the linker doesn't have to ponder
whether a shared library is needed by the program: if it's a shared library,
link it. And most libraries in most linkages are shared libraries. But there are disadvantages too:-
It is uneconomical at runtime, because it can cause shared libraries to be
loaded along with a program even if it doesn't need them.
The different linkage rules for static and shared libraries can be confusing
to inexpert programmers, who may not know whether -lfoo in their linkage
is going to resolve to /some/where/libfoo.a or to /some/where/libfoo.so,
and might not understand the difference between shared and static libraries
anyway.
This trade-off has led to the schismatic situation today. Some distros have
changed their GCC linkage rules for shared libraries so that the as-needed
principle applies for all libraries. Some distros have stuck with the old
way.
Why do I still get this problem even if I compile-and-link at the same time?
If I just do:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
surely gcc has to compile eg1.c first, and then link the resulting
object file with libmy_lib.a. So how can it not know that object file
is needed when it's doing the linking?
Because compiling and linking with a single command does not change the
order of the linkage sequence.
When you run the command above, gcc figures out that you want compilation +
linkage. So behind the scenes, it generates a compilation command, and runs
it, then generates a linkage command, and runs it, as if you had run the
two commands:
$ gcc -I. -c -o eg1.o eg1.c
$ gcc -o eg1 -L. -lmy_lib eg1.o
So the linkage fails just as it does if you do run those two commands. The
only difference you notice in the failure is that gcc has generated a
temporary object file in the compile + link case, because you're not telling it
to use eg1.o. We see:
/tmp/ccQk1tvs.o: In function `main'
instead of:
eg1.o: In function `main':
See also
The order in which interdependent linked libraries are specified is wrong
Putting interdependent libraries in the wrong order is just one way
in which you can get files that need definitions of things coming
later in the linkage than the files that provide the definitions. Putting libraries before the
object files that refer to them is another way of making the same mistake.
A wrapper around GNU ld that doesn't support linker scripts
Some .so files are actually GNU ld linker scripts, e.g. the libtbb.so file is an ASCII text file with these contents:
INPUT (libtbb.so.2)
Some more complex builds may not support this. For example, if you include -v in the compiler options, you can see that the mainwin gcc wrapper mwdip discards linker script command files in the verbose output list of libraries to link in. A simple workaround is to replace the linker script input command file with a copy of the file (or a symlink) instead, e.g.
cp libtbb.so.2 libtbb.so
Or you could replace the -l argument with the full path of the .so, e.g. instead of -ltbb do /home/foo/tbb-4.3/linux/lib/intel64/gcc4.4/libtbb.so.2
Befriending templates...
Given the code snippet of a template type with a friend operator (or function);
template <typename T>
class Foo {
friend std::ostream& operator<< (std::ostream& os, const Foo<T>& a);
};
The operator<< is being declared as a non-template function. For every type T used with Foo, there needs to be a non-templated operator<<. For example, if there is a type Foo<int> declared, then there must be an operator implementation as follows;
std::ostream& operator<< (std::ostream& os, const Foo<int>& a) {/*...*/}
Since it is not implemented, the linker fails to find it and results in the error.
To correct this, you can declare a templated operator before the Foo type and then declare the appropriate instantiation as a friend. The syntax is a little awkward, but it looks as follows;
// forward declare the Foo
template <typename>
class Foo;
// forward declare the operator <<
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&);
template <typename T>
class Foo {
friend std::ostream& operator<< <>(std::ostream& os, const Foo<T>& a);
// note the required <> ^^^^
// ...
};
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&)
{
// ... implement the operator
}
The above code limits the friendship of the operator to the corresponding instantiation of Foo, i.e. the operator<< <int> instantiation is limited to accessing the private members of the Foo<int> instantiation.
Alternatives include;
Allowing the friendship to extend to all instantiations of the templates, as follows;
template <typename T>
class Foo {
template <typename T1>
friend std::ostream& operator<<(std::ostream& os, const Foo<T1>& a);
// ...
};
Or, the implementation for the operator<< can be done inline inside the class definition;
template <typename T>
class Foo {
friend std::ostream& operator<<(std::ostream& os, const Foo& a)
{ /*...*/ }
// ...
};
Note, when the declaration of the operator (or function) only appears in the class, the name is not available for "normal" lookup, only for argument dependent lookup, from cppreference;
A name first declared in a friend declaration within class or class template X becomes a member of the innermost enclosing namespace of X, but is not accessible for lookup (except argument-dependent lookup that considers X) unless a matching declaration at the namespace scope is provided...
There is further reading on template friends at cppreference and the C++ FAQ.
Code listing showing the techniques above.
As a side note to the failing code sample; g++ warns about this as follows
warning: friend declaration 'std::ostream& operator<<(...)' declares a non-template function [-Wnon-template-friend]
note: (if this is not what you intended, make sure the function template has already been declared and add <> after the function name here)
When your include paths are different
Linker errors can happen when a header file and its associated shared library (.lib file) go out of sync. Let me explain.
How do linkers work? The linker matches a function declaration (declared in the header) with its definition (in the shared library) by comparing their signatures. You can get a linker error if the linker doesn't find a function definition that matches perfectly.
Is it possible to still get a linker error even though the declaration and the definition seem to match? Yes! They might look the same in source code, but it really depends on what the compiler sees. Essentially you could end up with a situation like this:
// header1.h
typedef int Number;
void foo(Number);
// header2.h
typedef float Number;
void foo(Number); // this only looks the same lexically
Note how even though both function declarations look identical in the source code, they are really different according to the compiler.
You might ask how one ends up in a situation like that? Include paths of course! If when compiling the shared library, the include path leads to header1.h and you end up using header2.h in your own program, you'll be left scratching your header wondering what happened (pun intended).
An example of how this can happen in the real world is explained below.
Further elaboration with an example
I have two projects: graphics.lib and main.exe. Both projects depend on common_math.h. Suppose the library exports the following function:
// graphics.lib
#include "common_math.h"
void draw(vec3 p) { ... } // vec3 comes from common_math.h
And then you go ahead and include the library in your own project.
// main.exe
#include "other/common_math.h"
#include "graphics.h"
int main() {
draw(...);
}
Boom! You get a linker error and you have no idea why it's failing. The reason is that the common library uses different versions of the same include common_math.h (I have made it obvious here in the example by including a different path, but it might not always be so obvious. Maybe the include path is different in the compiler settings).
Note in this example, the linker would tell you it couldn't find draw(), when in reality you know it obviously is being exported by the library. You could spend hours scratching your head wondering what went wrong. The thing is, the linker sees a different signature because the parameter types are slightly different. In the example, vec3 is a different type in both projects as far as the compiler is concerned. This could happen because they come from two slightly different include files (maybe the include files come from two different versions of the library).
Debugging the linker
DUMPBIN is your friend, if you are using Visual Studio. I'm sure other compilers have other similar tools.
The process goes like this:
Note the weird mangled name given in the linker error. (e.g. draw@graphics@XYZ).
Dump the exported symbols from the library into a text file.
Search for the exported symbol of interest, and notice that the mangled name is different.
Pay attention to why the mangled names ended up different. You would be able to see that the parameter types are different, even though they look the same in the source code.
Reason why they are different. In the example given above, they are different because of different include files.
[1] By project I mean a set of source files that are linked together to produce either a library or an executable.
Inconsistent UNICODE definitions
A Windows UNICODE build is built with TCHAR etc. being defined as wchar_t etc. When not building with UNICODE defined, the build has TCHAR defined as char etc. These UNICODE and _UNICODE defines affect all the "T" string types: LPTSTR, LPCTSTR and their ilk.
Building one library with UNICODE defined and attempting to link it in a project where UNICODE is not defined will result in linker errors since there will be a mismatch in the definition of TCHAR; char vs. wchar_t.
The error usually involves a function taking a value with a char- or wchar_t-derived type; these could include std::basic_string<> etc. as well. When browsing through the affected function in the code, there will often be a reference to TCHAR or std::basic_string<TCHAR> etc. This is a tell-tale sign that the code was originally intended for both a UNICODE and a Multi-Byte Character (or "narrow") build.
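As a minimal sketch of such a mismatch (SetTitle and title.h are made-up names, shown only to illustrate):
// title.h, included by both the library and the consumer
#pragma once
#include <windows.h>
void SetTitle(LPCTSTR title); // const wchar_t* in a UNICODE build, const char* otherwise
// A library built with /DUNICODE /D_UNICODE exports SetTitle(const wchar_t*),
// while a consumer built without those defines calls SetTitle(const char*):
// two different mangled names, hence the unresolved external symbol.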
To correct this, build all the required libraries and projects with a consistent definition of UNICODE (and _UNICODE).
This can be done with either;
#define UNICODE
#define _UNICODE
Or in the project settings;
Project Properties > General > Project Defaults > Character Set
Or on the command line;
/DUNICODE /D_UNICODE
The alternative applies as well: if UNICODE is not intended to be used, make sure the defines are not set and/or the multi-byte character setting is used in the projects, applied consistently.
Do not forget to be consistent between the "Release" and "Debug" builds as well.
Clean and rebuild
A "clean" of the build can remove the "dead wood" that may be left lying around from previous builds, failed builds, incomplete builds and other build system related build issues.
In general the IDE or build will include some form of "clean" function, but this may not be correctly configured (e.g. in a manual makefile) or may fail (e.g. the intermediate or resultant binaries are read-only).
Once the "clean" has completed, verify that the "clean" has succeeded and all the generated intermediate file (e.g. an automated makefile) have been successfully removed.
This process can be seen as a final resort, but is often a good first step; especially if the code related to the error has recently been added (either locally or from the source repository).
Missing "extern" in const variable declarations/definitions (C++ only)
For people coming from C it might be a surprise that in C++ global const variables have internal (or static) linkage. In C this was not the case, as all global variables are implicitly extern (i.e. when the static keyword is missing).
Example:
// file1.cpp
const int test = 5; // in C++ same as "static const int test = 5"
int test2 = 5;
// file2.cpp
extern const int test;
extern int test2;
void foo()
{
int x = test; // linker error in C++ , no error in C
int y = test2; // no problem
}
The correct approach would be to use a header file and include it in file2.cpp and file1.cpp:
extern const int test;
extern int test2;
Alternatively, one could declare the const variable in file1.cpp with an explicit extern.
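A minimal sketch of that alternative; file2.cpp stays exactly as above and now links without error:
// file1.cpp
extern const int test = 5; // 'extern' plus the initializer makes this a definition with external linkage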
Even though this is a pretty old question with multiple good answers, I'd like to share how to resolve an obscure "undefined reference to" error.
Different versions of libraries
I was using an alias to refer to std::filesystem::path: filesystem is in the standard library since C++17, but my program needed to also compile in C++14, so I decided to use a type alias:
#if (defined _GLIBCXX_EXPERIMENTAL_FILESYSTEM) //is the included filesystem library experimental? (C++14 and newer: <experimental/filesystem>)
using path_t = std::experimental::filesystem::path;
#elif (defined _GLIBCXX_FILESYSTEM) //not experimental (C++17 and newer: <filesystem>)
using path_t = std::filesystem::path;
#endif
Let's say I have three files: main.cpp, file.h, file.cpp:
file.h #include's <experimental/filesystem> and contains the code above
file.cpp, the implementation of file.h, #include's "file.h"
main.cpp #include's <filesystem> and "file.h"
Note the different libraries used in main.cpp and file.h. Since main.cpp #include'd "file.h" after <filesystem>, the version of filesystem used there was the C++17 one. I used to compile the program with the following commands:
$ g++ -g -std=c++17 -c main.cpp -> compiles main.cpp to main.o
$ g++ -g -std=c++17 -c file.cpp -> compiles file.cpp and file.h to file.o
$ g++ -g -std=c++17 -o executable main.o file.o -lstdc++fs -> links main.o and file.o
This way any function contained in file.o and used in main.o that required path_t gave "undefined reference" errors because main.o referred to std::filesystem::path but file.o to std::experimental::filesystem::path.
Resolution
To fix this I just needed to change <experimental/filesystem> in file.h to <filesystem>.
When linking against shared libraries, make sure that the used symbols are not hidden.
The default behavior of gcc is that all symbols are visible. However, when the translation units are built with option -fvisibility=hidden, only functions/symbols marked with __attribute__ ((visibility ("default"))) are external in the resulting shared object.
You can check whether the symbols you are looking for are external by invoking:
# -D shows (global) dynamic symbols that can be used from the outside of XXX.so
nm -D XXX.so | grep MY_SYMBOL
The hidden/local symbols are shown by nm with a lowercase symbol type, for example t instead of T for the code section:
nm XXX.so
00000000000005a7 t HIDDEN_SYMBOL
00000000000005f8 T VISIBLE_SYMBOL
You can also use nm with the option -C to demangle the names (if C++ was used).
Similar to Windows DLLs, one would mark public functions with a define, for example DLL_PUBLIC, defined as:
#define DLL_PUBLIC __attribute__ ((visibility ("default")))
DLL_PUBLIC int my_public_function(){
...
}
Which roughly corresponds to Windows'/MSVC-version:
#ifdef BUILDING_DLL
#define DLL_PUBLIC __declspec(dllexport)
#else
#define DLL_PUBLIC __declspec(dllimport)
#endif
More information about visibility can be found on the gcc wiki.
When a translation unit is compiled with -fvisibility=hidden, the resulting symbols still have external linkage (shown with an uppercase symbol type by nm) and can be used for external linkage without problem if the object files become part of a static library. The linkage becomes local only when the object files are linked into a shared library.
To find which symbols in an object file are hidden run:
>>> objdump -t XXXX.o | grep hidden
0000000000000000 g F .text 000000000000000b .hidden HIDDEN_SYMBOL1
000000000000000b g F .text 000000000000000b .hidden HIDDEN_SYMBOL2
Functions or class methods are defined in source files with the inline specifier.
An example:
main.cpp
#include "gum.h"
#include "foo.h"
int main()
{
gum();
foo f;
f.bar();
return 0;
}
foo.h (1)
#pragma once
struct foo {
void bar() const;
};
gum.h (1)
#pragma once
extern void gum();
foo.cpp (1)
#include "foo.h"
#include <iostream>
inline /* <- wrong! */ void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (1)
#include "gum.h"
#include <iostream>
inline /* <- wrong! */ void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
If you specify that gum (similarly, foo::bar) is inline at its definition then
the compiler will inline gum (if it chooses to), by:
not emitting any unique definition of gum, and therefore
not emitting any symbol by which the linker can refer to the definition of gum, and instead
replacing all calls to gum with inline copies of the compiled body of gum.
As a result, if you define gum inline in a source file gum.cpp, it is
compiled to an object file gum.o in which all calls to gum are inlined
and no symbol is defined by which the linker can refer to gum. When you
link gum.o into a program together with another object file, e.g. main.o
that make references to an external symbol gum, the linker cannot resolve
those references. So the linkage fails:
Compile:
$ g++ -c main.cpp foo.cpp gum.cpp
Link:
$ g++ -o prog main.o foo.o gum.o
main.o: In function `main':
main.cpp:(.text+0x18): undefined reference to `gum()'
main.cpp:(.text+0x24): undefined reference to `foo::bar() const'
collect2: error: ld returned 1 exit status
You can only define gum as inline if the compiler can see its definition in every source file in which gum may be called. That means the inline definition needs to live in a header file that you include in every source file where gum may be called. Do one of two things:
Either don't inline the definitions
Remove the inline specifier from the source file definition:
foo.cpp (2)
#include "foo.h"
#include <iostream>
void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (2)
#include "gum.h"
#include <iostream>
void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Rebuild with that:
$ g++ -c main.cpp foo.cpp gum.cpp
imk#imk-Inspiron-7559:~/develop/so/scrap1$ g++ -o prog main.o foo.o gum.o
imk#imk-Inspiron-7559:~/develop/so/scrap1$ ./prog
void gum()
void foo::bar() const
Success.
Or inline correctly
Inline definitions in header files:
foo.h (2)
#pragma once
#include <iostream>
struct foo {
void bar() const { // In-class definition is implicitly inline
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
};
// Alternatively...
#if 0
struct foo {
void bar() const;
};
inline void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
#endif
gum.h (2)
#pragma once
#include <iostream>
inline void gum() {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Now we don't need foo.cpp or gum.cpp:
$ g++ -c main.cpp
$ g++ -o prog main.o
$ ./prog
void gum()
void foo::bar() const

Conditionally adding a define to a configuration .h file with CMake

I know that you can generate config.h files with CMake by processing a template, where whenever you write
#define FOO ${SOME_VARIABLE}
and in CMake you set SOME_VARIABLE to, say, 123, then you'll get
#define FOO 123
And that's great. But now I want BAR to be defined only conditionally, i.e. sometimes config.h will have
#define BAR
and sometimes it won't, not even to a null/default value. How can I do this with CMake?
The way to achieve this is with:
#cmakedefine BAR
in the config.h file template. This will be replaced with either
#define BAR
or
/* #undef BAR */
depending on whether BAR is set in CMake to any value not considered a false constant by the if() command.
Cf. the CMake documentation page on the configure_file() command.
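Putting it together, a minimal sketch could look like the following; the option name ENABLE_BAR and the file names are my own assumptions, not part of the question:
CMakeLists.txt:
option(ENABLE_BAR "Define BAR in config.h" ON)      # user-togglable switch
set(BAR ${ENABLE_BAR})                              # variable name must match the template
configure_file(config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)
config.h.in:
#cmakedefine BAR
Configuring with -DENABLE_BAR=OFF then produces the /* #undef BAR */ line instead of #define BAR.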

How to Rename SWIG Generated Proxy Java classes created from C enum types

I'm trying to use SWIG's %rename to change the name of the auto-generated proxy Java class from test_cache_t.java to Example.java. I've tried the below, which works fine for C structs as per this question, but it's not working for C enums. Any ideas? I'm getting some warnings that don't quite lead me to the problem...
%module Example
%rename (Example) test_cache_t_;
typedef enum test_cache_t_ {
CACHE_FALSE = 0,
CACHE_TRUE = 1
} test_cache_t;
%{
#include "Example.h"
%}
%include "Example.h"
[exec] /test/include/Example.h:84: Warning 302: Identifier 'test_cache_t' redefined (ignored) (Renamed from 'test_cache_t_'),
[exec] test.i:7: Warning 302: previous definition of 'test_cache_t' (Renamed from 'test_cache_t_').
[exec] /test/include/Example.h:82: Warning 302: Identifier 'CACHE_FALSE' redefined (ignored),
[exec] test.i:5: Warning 302: previous definition of 'CACHE_FALSE'.
[exec] /test/include/Example.h:84: Warning 302: Identifier 'CACHE_TRUE' redefined (ignored),
[exec] test.i:7: Warning 302: previous definition of 'CACHE_TRUE'.
You have two problems here I think:
Your module has the same name as your (%renamed) type, so you have two things wanting to be Example.java.
Solution: change the name of either the module or the new name from %rename
It looks like you've provided SWIG two definitions of the same enum, once in the interface file and once in the header file.
Solution: remove the typedef enum test_cache_t_ definition from the interface file; alternatively, use %ignore before the %include, or drop the %include altogether.
My final interface file when testing ended up looking like:
%module SomeOtherName
%{
#include "Example.h"
%}
%rename (Example) test_cache_t;
%include "Example.h"
Oddly, for this to work I had to use the typedef'd name in the %rename, not the enum tag name. I'm not quite sure why; that seems to be the opposite of the case for struct/class.
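For completeness, the test above assumes Example.h declares the enum roughly like this (a guess for reproduction only; the real header clearly contains more, per the line numbers in the warnings):
/* Example.h -- assumed minimal content, for reproduction only */
#ifndef EXAMPLE_H
#define EXAMPLE_H
typedef enum test_cache_t_ {
    CACHE_FALSE = 0,
    CACHE_TRUE  = 1
} test_cache_t;
#endif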

SunStudio C++ compiler pragma to disable warnings?

The STLport bundled with SunStudio 11 generates a lot of warnings. I believe most compilers have a way to disable warnings from certain source files, like this:
Sun C
#pragma error_messages off
#include <header.h>
// ...
#pragma error_messages on
MSVC
#pragma warning(push, 0)
#include <header.h>
// ...
#pragma warning(pop)
How do you do this in the SunStudio C++ compiler? (btw, the sunstudio C pragmas don't work in sunstudio C++)
In SunStudio 12, #pragma error_messages works as documented in the C User's Guide.
You can see the tags with the -errtags=yes option, and use it like this:
// Disable badargtypel2w:
// String literal converted to char* in formal argument
#pragma error_messages (off, badargtypel2w )
and then compile with CC (the C++ compiler).
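So, to silence a specific diagnostic only around the noisy STLport include, something like the following sketch should do (badargtypel2w is just an example tag; find the real ones with -errtags=yes):
// suppress one tag only while the offending header is processed
#pragma error_messages (off, badargtypel2w)
#include <vector>   // the STLport header that triggers the warning
#pragma error_messages (default, badargtypel2w)   // restore default handling afterwards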
If you'd rather use a command-line option than #pragmas, the simple answer is that you can use -erroff=%all on your compile line.
You can suppress specific warning messages with -erroff=tag.
You can print out the tags for the warning messages by adding -errtags=yes to your compile line, and then give -erroff a comma-separated list of just the tags you want to suppress.
See http://docs.oracle.com/cd/E19205-01/820-7599/bkapa/index.html for more info.
Note that Sun Studio 12 update 1 is now available, and I'm referencing the SS12u1 doc here.
Can't help with turning the warnings off, but when I last looked at SunStudio, it shipped with two STLs - an older one for backward compatibility with earlier compiler versions and STLport. Might be worth checking that you're actually using STLport before trying to turn off the warnings.
Add -w to your $CC (or whatever variable you use) to suppress all warnings.