Why do I get these warnings under Cython? - cython

I am trying to reproduce some examples from the Cython tutorial to learn Cython:
http://docs.cython.org/en/latest/src/tutorial/external.html
I think the two following warnings are not related, so I have two questions:
(1)
Using the following code as input to
python setup.py build_ext --inplace -c mingw32
from libc.math cimport sin
cdef extern from "math.h":
    cdef double sin(double x)
cpdef double f(double x):
    return sin(x*x)
cpdef test(double x):
    return f(x)
I get:
D:\python\cython>python setup.py build_ext --inplace -c mingw32
Compiling primes.pyx because it changed.
[1/1] Cythonizing primes.pyx
warning: primes.pyx:4:19: Function signature does not match previous declaration
running build_ext
building 'primes' extension
C:\MinGW\bin\gcc.exe -mdll -O -Wall -IC:\Python34\include -IC:\Python34\include -c primes.c -o build\temp.win32-3.4\Release\primes.o
writing build\temp.win32-3.4\Release\primes.def
C:\MinGW\bin\gcc.exe -shared -s build\temp.win32-3.4\Release\primes.o build\temp.win32-3.4\Release\primes.def -LC:\Python34\libs -LC:\Python34\PCbuild -lpython34 -lmsvcr100 -o D:\python\cython\primes.pyd
D:\python\cython>
Why do I get the warning "Function signature does not match previous declaration"?
(2)
When I declare
cdef extern from "math.h":
cpdef double sin(double x)
I get the additional warning
warning: primes.pyx:4:20: Function 'sin' previously declared as 'cpdef'
However, this is exactly how it is written in the "External declarations" chapter of the linked page. Also, in a Python module that imports the compiled module, sin is not available as an attribute of the package. Where is the problem?
The description in the tutorial is:
Note that you can easily export an external C function from your Cython module by declaring it as cpdef. This generates a Python wrapper for it and adds it to the module dict.

The different parts of the tutorial show different ways to call C functions.
For functions for which a Cython .pxd header is provided, you can use from libc.math cimport sin. For all other libraries, you can use the lengthier method of including the .h header and re-declaring the function.
You cannot mix the two, however, because that creates two declarations of the same function, even if they are identical.

Related

Is cython compatible with typing.NamedTuple?

I have the following code in file temp.py
from typing import NamedTuple

class C(NamedTuple):
    a: int
    b: int

c = C(1, 2)
I compile it using the command:
cythonize -3 -i temp.py
and run it using the command
python3 -c 'import temp'
I get the following exception:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "temp.py", line 7, in init temp
    c = C(1, 2)
TypeError: __new__() takes 1 positional argument but 3 were given
Version of python: 3.6.15
Version of cython: 0.29.14
Is there anything wrong in the above code/build steps ?
It'll work in the current Cython 3 alpha version (and later). It won't work in Cython 0.29.x (you're using a pretty outdated version of this, but that won't affect this feature).
It requires classes to have an __annotations__ dictionary, which is a feature that was added in the Cython 3 alpha releases.
You won't get much/any speed advantage from compiling this in Cython though - it'll still generate a normal Python class. But it will work.
In short, NO, it is not compatible. Edit: not currently compatible.
Named tuples are just Python magic (creating classes at runtime); Cython doesn't know about it, so you have to execute that code through the interpreter at runtime, using exec.
# temp.pyx
temp_global = {}
exec("""
from typing import NamedTuple
class C(NamedTuple):
    a: int
    b: int
""", temp_global)
C = temp_global['C']
c = C(1, 2)
print(c)
to test it
import pyximport
pyximport.install()
import temp
This ends up being Python code that is executed whenever you import your binary: the entire snippet is passed to exec at import time, so it isn't really "Cython code". You could just write it as a plain Python .py file and avoid Cython, or implement your "Cython class" without relying on Python magic (no named tuples or dynamically created code).


What are undefined reference/unresolved external symbol errors? What are common causes and how to fix/prevent them?
Compiling a C++ program takes place in several steps, as specified by 2.2 (credits to Keith Thompson for the reference):
The precedence among the syntax rules of translation is specified by the following phases [see footnote].

1. Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary. [SNIP]
2. Each instance of a backslash character (\) immediately followed by a new-line character is deleted, splicing physical source lines to form logical source lines. [SNIP]
3. The source file is decomposed into preprocessing tokens (2.5) and sequences of white-space characters (including comments). [SNIP]
4. Preprocessing directives are executed, macro invocations are expanded, and _Pragma unary operator expressions are executed. [SNIP]
5. Each source character set member in a character literal or a string literal, as well as each escape sequence and universal-character-name in a character literal or a non-raw string literal, is converted to the corresponding member of the execution character set; [SNIP]
6. Adjacent string literal tokens are concatenated.
7. White-space characters separating tokens are no longer significant. Each preprocessing token is converted into a token (2.7). The resulting tokens are syntactically and semantically analyzed and translated as a translation unit. [SNIP]
8. Translated translation units and instantiation units are combined as follows: [SNIP]
9. All external entity references are resolved. Library components are linked to satisfy external references to entities not defined in the current translation. All such translator output is collected into a program image which contains information needed for execution in its execution environment. (emphasis mine)

[footnote] Implementations must behave as if these separate phases occur, although in practice different phases might be folded together.
The specified errors occur during this last stage of compilation, most commonly referred to as linking. It basically means that you compiled a bunch of implementation files into object files or libraries and now you want to get them to work together.
Say you defined symbol a in a.cpp. Now, b.cpp declared that symbol and used it. Before linking, it simply assumes that that symbol was defined somewhere, but it doesn't yet care where. The linking phase is responsible for finding the symbol and correctly linking it to b.cpp (well, actually to the object or library that uses it).
If you're using Microsoft Visual Studio, you'll see that projects generate .lib files. These contain a table of exported symbols, and a table of imported symbols. The imported symbols are resolved against the libraries you link against, and the exported symbols are provided for the libraries that use that .lib (if any).
Similar mechanisms exist for other compilers/ platforms.
Common error messages are error LNK2001, error LNK1120, error LNK2019 for Microsoft Visual Studio and undefined reference to symbolName for GCC.
The code:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
struct A
{
virtual ~A() = 0;
};
struct B: A
{
virtual ~B(){}
};
extern int x;
void foo();
int main()
{
x = 0;
foo();
Y y;
B b;
}
will generate the following errors with GCC:
/home/AbiSfw/ccvvuHoX.o: In function `main':
prog.cpp:(.text+0x10): undefined reference to `x'
prog.cpp:(.text+0x19): undefined reference to `foo()'
prog.cpp:(.text+0x2d): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD1Ev[B::~B()]+0xb): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD0Ev[B::~B()]+0x12): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1Y[typeinfo for Y]+0x8): undefined reference to `typeinfo for X'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1B[typeinfo for B]+0x8): undefined reference to `typeinfo for A'
collect2: ld returned 1 exit status
and similar errors with Microsoft Visual Studio:
1>test2.obj : error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo@@YAXXZ)
1>test2.obj : error LNK2001: unresolved external symbol "int x" (?x@@3HA)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual __thiscall A::~A(void)" (??1A@@UAE@XZ)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall X::foo(void)" (?foo@X@@UAEXXZ)
1>...\test2.exe : fatal error LNK1120: 4 unresolved externals
Common causes include:
Failure to link against appropriate libraries/object files or compile implementation files
Declared but did not define a variable or function.
Common issues with class-type members
Template implementations not visible.
Symbols were defined in a C program and used in C++ code.
Incorrectly importing/exporting methods/classes across modules/dll. (MSVS specific)
Circular library dependency
undefined reference to `WinMain@16'
Interdependent library order
Multiple source files of the same name
Mistyping or not including the .lib extension when using the #pragma (Microsoft Visual Studio)
Problems with template friends
Inconsistent UNICODE definitions
Missing "extern" in const variable declarations/definitions (C++ only)
Visual Studio Code not configured for a multiple file project
Errors on Mac OS X when building a dylib, but a .so on other Unix-y systems is OK
Class members:
A pure virtual destructor needs an implementation.
Declaring a destructor pure still requires you to define it (unlike a regular function):
struct X
{
virtual ~X() = 0;
};
struct Y : X
{
~Y() {}
};
int main()
{
Y y;
}
//X::~X(){} //uncomment this line for successful definition
This happens because base class destructors are called when the object is destroyed implicitly, so a definition is required.
virtual methods must either be implemented or defined as pure.
This is similar to non-virtual methods with no definition, with the added reasoning that
the pure declaration generates a dummy vtable and you might get the linker error without using the function:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
int main()
{
Y y; //linker error although there was no call to X::foo
}
For this to work, declare X::foo() as pure:
struct X
{
virtual void foo() = 0;
};
Non-virtual class members
Some members need to be defined even if not used explicitly:
struct A
{
~A();
};
The following would yield the error:
A a; //destructor undefined
The implementation can be inline, in the class definition itself:
struct A
{
~A() {}
};
or outside:
A::~A() {}
If the implementation is outside the class definition, but in a header, the methods have to be marked as inline to prevent a multiple definition.
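For instance, a small sketch of the out-of-class-but-in-header case:
// A.h
struct A
{
    ~A();
};

// Out-of-class definition kept in the same header; 'inline' permits this
// definition to appear in every translation unit that includes A.h.
inline A::~A() {}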
All used member methods need to be defined if used.
A common mistake is forgetting to qualify the name:
struct A
{
void foo();
};
void foo() {}
int main()
{
A a;
a.foo();
}
The definition should be
void A::foo() {}
static data members must be defined outside the class in a single translation unit:
struct X
{
static int x;
};
int main()
{
int x = X::x;
}
//int X::x; //uncomment this line to define X::x
An initializer can be provided for a static const data member of integral or enumeration type within the class definition; however, odr-use of this member will still require a namespace scope definition as described above. C++11 additionally allows in-class initialization of static data members declared constexpr.
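For example, a small sketch of the odr-use case described above:
struct X
{
    static const int N = 10;   // in-class initializer (integral type)
};

const int* p = &X::N;          // odr-use: the address is taken

// Still required in exactly one .cpp; without it the linker reports
// an undefined reference to X::N:
const int X::N;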
Failure to link against appropriate libraries/object files or compile implementation files
Commonly, each translation unit will generate an object file that contains the definitions of the symbols defined in that translation unit.
To use those symbols, you have to link against those object files.
Under gcc you would specify all object files that are to be linked together in the command line, or compile the implementation files together.
g++ -o test objectFile1.o objectFile2.o -lLibraryName
-l... must be to the right of any .o/.c/.cpp files.
The libraryName here is just the bare name of the library, without platform-specific additions. So e.g. on Linux library files are usually called libfoo.so but you'd only write -lfoo. On Windows that same file might be called foo.lib, but you'd use the same argument. You might have to add the directory where those files can be found using -L‹directory›. Make sure to not write a space after -l or -L.
For Xcode: Add the User Header Search Paths -> add the Library Search Path -> drag and drop the actual library reference into the project folder.
Under MSVS, files added to a project automatically have their object files linked together and a lib file would be generated (in common usage). To use the symbols in a separate project, you'd
need to include the lib files in the project settings. This is done in the Linker section of the project properties, in Input -> Additional Dependencies. (the path to the lib file should be
added in Linker -> General -> Additional Library Directories) When using a third-party library that is provided with a lib file, failure to do so usually results in the error.
It can also happen that you forget to add the file to the compilation, in which case the object file won't be generated. In gcc you'd add the files to the command line. In MSVS adding the file to the project will make it compile it automatically (albeit files can, manually, be individually excluded from the build).
In Windows programming, the tell-tale sign that you did not link a necessary library is that the name of the unresolved symbol begins with __imp_. Look up the name of the function in the documentation, and it should say which library you need to use. For example, MSDN puts the information in a box at the bottom of each function in a section called "Library".
Declared but did not define a variable or function.
A typical variable declaration is
extern int x;
As this is only a declaration, a single definition is needed. A corresponding definition would be:
int x;
For example, the following would generate an error:
extern int x;
int main()
{
x = 0;
}
//int x; // uncomment this line for successful definition
Similar remarks apply to functions. Declaring a function without defining it leads to the error:
void foo(); // declaration only
int main()
{
foo();
}
//void foo() {} //uncomment this line for successful definition
Be careful that the function you implement exactly matches the one you declared. For example, you may have mismatched cv-qualifiers:
void foo(int& x);
int main()
{
int x;
foo(x);
}
void foo(const int& x) {} //different function, doesn't provide a definition
//for void foo(int& x)
Other examples of mismatches include
Function/variable declared in one namespace, defined in another.
Function/variable declared as class member, defined as global (or vice versa).
Function return type, parameter number and types, and calling convention do not all exactly agree.
The error message from the compiler will often give you the full declaration of the variable or function that was declared but never defined. Compare it closely to the definition you provided. Make sure every detail matches.
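For instance, a minimal sketch of the first mismatch above (declared in one namespace, defined in another); the file and function names are hypothetical:
// decl.h
namespace lib { void init(); }

// impl.cpp
#include "decl.h"
void init() {}                 // defines ::init, not lib::init

// main.cpp
#include "decl.h"
int main() { lib::init(); }    // undefined reference to `lib::init()'

The fix is to define the qualified name instead: void lib::init() {}.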
The order in which interdependent linked libraries are specified is wrong.
The order in which libraries are linked DOES matter if the libraries depend on each other. In general, if library A depends on library B, then libA MUST appear before libB in the linker flags.
For example:
// B.h
#ifndef B_H
#define B_H
struct B {
B(int);
int x;
};
#endif
// B.cpp
#include "B.h"
B::B(int xx) : x(xx) {}
// A.h
#include "B.h"
struct A {
A(int x);
B b;
};
// A.cpp
#include "A.h"
A::A(int x) : b(x) {}
// main.cpp
#include "A.h"
int main() {
A a(5);
return 0;
};
Create the libraries:
$ g++ -c A.cpp
$ g++ -c B.cpp
$ ar rvs libA.a A.o
ar: creating libA.a
a - A.o
$ ar rvs libB.a B.o
ar: creating libB.a
a - B.o
Compile:
$ g++ main.cpp -L. -lB -lA
./libA.a(A.o): In function `A::A(int)':
A.cpp:(.text+0x1c): undefined reference to `B::B(int)'
collect2: error: ld returned 1 exit status
$ g++ main.cpp -L. -lA -lB
$ ./a.out
So to repeat again, the order DOES matter!
Symbols were defined in a C program and used in C++ code.
The function (or variable) void foo() was defined in a C program and you attempt to use it in a C++ program:
void foo();
int main()
{
foo();
}
The C++ linker expects names to be mangled, so you have to declare the function as:
extern "C" void foo();
int main()
{
foo();
}
Equivalently, instead of being defined in a C program, the function (or variable) void foo() was defined in C++ but with C linkage:
extern "C" void foo();
and you attempt to use it in a C++ program with C++ linkage.
If an entire library is included in a header file (and it was compiled as C code), the include will need to be wrapped as follows:
extern "C" {
#include "cheader.h"
}
what is an "undefined reference/unresolved external symbol"
I'll try to explain what is an "undefined reference/unresolved external symbol".
note: i use g++ and Linux and all examples is for it
For example we have some code
// src1.cpp
void print();
static int local_var_name; // 'static' makes variable not visible for other modules
int global_var_name = 123;
int main()
{
print();
return 0;
}
and
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
//extern int local_var_name;
void print ()
{
// printf("%d%d\n", global_var_name, local_var_name);
printf("%d\n", global_var_name);
}
Make object files
$ g++ -c src1.cpp -o src1.o
$ g++ -c src2.cpp -o src2.o
After the assembler phase we have an object file, which contains the symbols it exports.
Look at the symbols
$ readelf --symbols src1.o
Num: Value Size Type Bind Vis Ndx Name
5: 0000000000000000 4 OBJECT LOCAL DEFAULT 4 _ZL14local_var_name # [1]
9: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 global_var_name # [2]
I've omitted some lines from the output because they do not matter.
So, we see the following exported symbols:
[1] - this is our static (local) variable (important: Bind has type "LOCAL")
[2] - this is our global variable
src2.cpp exports nothing, and we have not looked at its symbols.
Link our object files
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123
The linker sees the exported symbols and links them. Now we try to uncomment the lines in src2.cpp like here
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
extern int local_var_name;
void print ()
{
printf("%d%d\n", global_var_name, local_var_name);
}
and rebuild an object file
$ g++ -c src2.cpp -o src2.o
OK (no errors), because we only built an object file; linking is not done yet.
Try to link
$ g++ src1.o src2.o -o prog
src2.o: In function `print()':
src2.cpp:(.text+0x6): undefined reference to `local_var_name'
collect2: error: ld returned 1 exit status
This happened because our local_var_name is static, i.e. it is not visible to other modules.
Now let's go deeper. Get the translation phase output:
$ g++ -S src1.cpp -o src1.s
and look at src1.s:
.file "src1.cpp"
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
.globl global_var_name
.data
.align 4
.type global_var_name, #object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, #function
main:
; assembler code, not interesting for us
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",#progbits
So, we've seen there is no label for local_var_name, and that's why the linker couldn't find it. But we are hackers :) and we can fix it. Open src1.s in your text editor and change
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
to
.globl local_var_name
.data
.align 4
.type local_var_name, @object
.size local_var_name, 4
local_var_name:
.long 456789
i.e. you should have something like below:
.file "src1.cpp"
.globl local_var_name
.data
.align 4
.type local_var_name, #object
.size local_var_name, 4
local_var_name:
.long 456789
.globl global_var_name
.align 4
.type global_var_name, #object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, #function
main:
; ...
We have changed the visibility of local_var_name and set its value to 456789.
Try to build an object file from it:
$ g++ -c src1.s -o src1.o
ok, see readelf output (symbols)
$ readelf --symbols src1.o
8: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 local_var_name
now local_var_name has Bind GLOBAL (was LOCAL)
link
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123456789
OK, we hacked it :)
So, as a result, an "undefined reference/unresolved external symbol" error happens when the linker cannot find global symbols in the object files.
If all else fails, recompile.
I was recently able to get rid of an unresolved external error in Visual Studio 2012 just by recompiling the offending file. When I re-built, the error went away.
This usually happens when two (or more) libraries have a cyclic dependency. Library A attempts to use symbols in B.lib and library B attempts to use symbols from A.lib. Neither exist to start off with. When you attempt to compile A, the link step will fail because it can't find B.lib. A.lib will be generated, but no dll. You then compile B, which will succeed and generate B.lib. Re-compiling A will now work because B.lib is now found.
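On GNU toolchains, one way around such a cycle is to let the linker rescan a group of archives until no new references can be resolved; a sketch:
$ g++ -o app main.o -Wl,--start-group -lA -lB -Wl,--end-group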
Template implementations not visible.
Unspecialized templates must have their definitions visible to all translation units that use them. That means you can't separate the definition of a template into an implementation file. If you must separate the implementation, the usual workaround is to have an impl file which you include at the end of the header that declares the template. A common situation is:
template<class T>
struct X
{
void foo();
};
int main()
{
X<int> x;
x.foo();
}
//differentImplementationFile.cpp
template<class T>
void X<T>::foo()
{
}
To fix this, you must move the definition of X::foo to the header file or some place visible to the translation unit that uses it.
Specialized templates can be implemented in an implementation file and the implementation doesn't have to be visible, but the specialization must be previously declared.
For further explanation and another possible solution (explicit instantiation) see this question and answer.
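For example, a minimal sketch of the explicit instantiation approach (the file layout here is hypothetical):
// X.h
template<class T>
struct X
{
    void foo();
};

// X.cpp
#include "X.h"
template<class T>
void X<T>::foo() {}
template struct X<int>;   // explicit instantiation: X<int>::foo is emitted here

// main.cpp
#include "X.h"
int main()
{
    X<int> x;
    x.foo();   // resolved against the instantiation emitted in X.cpp
}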
This is one of the most confusing error messages that every VC++ programmer has seen time and time again. Let's clarify things first.
A. What is symbol?
In short, a symbol is a name. It can be a variable name, a function name, a class name, a typedef name, or anything except the names and tokens that belong to the C++ language itself. It is either user-defined or introduced by a dependency library (i.e. defined by another user).
B. What is external?
In VC++, every source file (.cpp, .c, etc.) is considered a translation unit. The compiler compiles one unit at a time and generates one object file (.obj) for the current translation unit. (Note that every header file that this source file includes will be preprocessed and will be considered part of this translation unit.) Everything within a translation unit is considered internal; everything else is considered external. In C++, you may reference an external symbol by using keywords like extern, __declspec(dllimport) and so on.
C. What is “resolve”?
Resolve is a link-time term. At link time, the linker attempts to find an external definition for every symbol in the object files whose definition cannot be found internally. The scope of this search includes:
All object files generated at compile time
All libraries (.lib) that are either explicitly or implicitly specified as additional dependencies of the application being built.
This search process is called resolution.
D. Finally, why Unresolved External Symbol?
If the linker cannot find the external definition for a symbol that has no definition internally, it reports an Unresolved External Symbol error.
E. Possible causes of LNK2019: Unresolved External Symbol error.
We already know that this error occurs because the linker failed to find the definition of an external symbol; the possible causes can be sorted as:
Definition exists
For example, if we have a function called foo defined in a.cpp:
int foo()
{
return 0;
}
In b.cpp we want to call function foo, so we add
void foo();
to declare function foo(), and call it in another function body, say bar():
void bar()
{
foo();
}
Now when you build this code, you will get an LNK2019 error complaining that foo is an unresolved symbol. In this case, we know that foo() has its definition in a.cpp, but it is different from the one we are calling (different return type). This is the case where the definition exists.
Definition does not exist
If we want to call some functions in a library, but the import library is not added to the additional dependency list (set from: Project | Properties | Configuration Properties | Linker | Input | Additional Dependencies) of your project settings, the linker will report an LNK2019 because the definition does not exist in the current search scope.
Incorrectly importing/exporting methods/classes across modules/dll (compiler specific).
MSVS requires you to specify which symbols to export and import using __declspec(dllexport) and __declspec(dllimport).
This dual functionality is usually obtained through the use of a macro:
#ifdef THIS_MODULE
#define DLLIMPEXP __declspec(dllexport)
#else
#define DLLIMPEXP __declspec(dllimport)
#endif
The macro THIS_MODULE would only be defined in the module that exports the function. That way, the declaration:
DLLIMPEXP void foo();
expands to
__declspec(dllexport) void foo();
and tells the compiler to export the function, as the current module contains its definition. When including the declaration in a different module, it would expand to
__declspec(dllimport) void foo();
and tells the compiler that the definition is in one of the libraries you linked against (also see 1)).
You can similarly import/export classes:
class DLLIMPEXP X
{
};
undefined reference to WinMain@16 or similar 'unusual' main() entry point reference (especially for visual-studio).
You may have failed to choose the right project type in your IDE. The IDE may want to bind e.g. Windows Application projects to such an entry point function (as specified in the missing reference above), instead of the commonly used int main(int argc, char** argv); signature.
If your IDE supports Plain Console Projects, you might want to choose this project type instead of a Windows Application project.
Here are case1 and case2 handled in more detail from a real world problem.
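For reference, a sketch of the entry point a Windows (GUI subsystem) project expects instead of main():
#include <windows.h>

// Entry point used by /SUBSYSTEM:WINDOWS projects; a plain console
// project expects int main(int argc, char** argv) instead.
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nShowCmd)
{
    return 0;
}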
Also if you're using 3rd party libraries make sure you have the correct 32/64 bit binaries
Microsoft offers a #pragma to reference the correct library at link time;
#pragma comment(lib, "libname.lib")
In addition to the library path including the directory of the library, this should be the full name of the library.
Visual Studio NuGet package needs to be updated for new toolset version
I just had this problem trying to link libpng with Visual Studio 2013. The problem is that the package file only had libraries for Visual Studio 2010 and 2012.
The correct solution is to hope the developer releases an updated package and then upgrade, but it worked for me by hacking in an extra setting for VS2013, pointing at the VS2012 library files.
I edited the package (in the packages folder inside the solution's directory) by finding packagename\build\native\packagename.targets and inside that file, copying all the v110 sections. I changed the v110 to v120 in the condition fields only being very careful to leave the filename paths all as v110. This simply allowed Visual Studio 2013 to link to the libraries for 2012, and in this case, it worked.
Suppose you have a big project written in C++ that has a thousand .cpp files and a thousand .h files, and the project also depends on ten static libraries. Let's say we are on Windows and we build our project in Visual Studio 20xx. When you press Ctrl + F7, Visual Studio starts compiling the whole solution (suppose we have just one project in the solution).
What does compilation mean?
Visual Studio searches the .vcxproj file and starts compiling each file that has the extension .cpp. The order of compilation is undefined, so you must not assume that main.cpp is compiled first.
The .cpp files may depend on additional .h files in order to find symbols that may or may not be defined in the .cpp file itself.
If there is a .cpp file in which the compiler cannot find a symbol, a compile-time error is raised with the message Symbol x could not be found.
For each file with the extension .cpp an object file (.obj) is generated, and Visual Studio also writes the output to a file named ProjectName.Cpp.Clean.txt, which contains all object files that must be processed by the linker.
The second step of the build is done by the linker. The linker merges all the object files and finally builds the output (which may be an executable or a library).
Steps in linking a project
Parse all the object files and find the definitions that were only declared in headers (e.g. the code of a class method, as mentioned in previous answers, or even the initialization of a static data member of a class).
If a symbol cannot be found in the object files, it is also searched for in the additional libraries. To add a new library to a project, use Configuration Properties -> VC++ Directories -> Library Directories to specify additional folders for searching libraries, and Configuration Properties -> Linker -> Input to specify the name of the library.
If the linker cannot find a symbol that you use in a .cpp file, it raises a link-time error, which may look like
error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo@@YAXXZ)
Observations
Once the linker finds a symbol, it doesn't search other libraries for it.
The order of linking libraries does matter.
If the linker finds an external symbol in a static library, it includes the symbol in the output of the project. However, if the library is shared (dynamic), it doesn't include the code (symbols) in the output, but run-time crashes may occur.
How to solve this kind of error
Compile-time error:
Make sure your C++ project is syntactically correct.
Link-time error:
Define all the symbols you declare in your header files.
Use #pragma once to allow the compiler not to include a header if it was already included in the .cpp currently being compiled.
Make sure that your external library doesn't contain symbols that may conflict with other symbols you defined in your header files.
When you use templates, make sure you include the definition of each template function in the header file, so the compiler can generate appropriate code for any instantiation.
Use the linker to help diagnose the error
Most modern linkers include a verbose option that prints out to varying degrees;
Link invocation (command line),
Data on what libraries are included in the link stage,
The location of the libraries,
Search paths used.
For gcc and clang; you would typically add -v -Wl,--verbose or -v -Wl,-v to the command line. More details can be found here;
Linux ld man page.
LLVM linker page.
"An introduction to GCC" chapter 9.
For MSVC, /VERBOSE (in particular /VERBOSE:LIB) is added to the link command line.
The MSDN page on the /VERBOSE linker option.
A bug in the compiler/IDE
I recently had this problem, and it turned out it was a bug in Visual Studio Express 2013. I had to remove a source file from the project and re-add it to overcome the bug.
Steps to try if you believe it could be a bug in compiler/IDE:
Clean the project (some IDEs have an option to do this; you can also do it manually by deleting the object files)
Try starting a new project, copying all source code from the original one.
Linked .lib file is associated to a .dll
I had the same issue. Say I have projects MyProject and TestProject. I had effectively linked the lib file for MyProject to TestProject. However, this lib file was produced as the DLL for MyProject was built, and it did not contain code for all the methods in MyProject, but only access to the DLL's entry points.
To solve the issue, I built MyProject as a LIB and linked TestProject to this .lib file (I copied the generated .lib file into the TestProject folder). I could then build MyProject again as a DLL. It compiles because the lib to which TestProject is linked does contain code for all methods in the classes of MyProject.
Since people seem to be directed to this question when it comes to linker errors I am going to add this here.
One possible reason for linker errors with GCC 5.2.0 is that a new libstdc++ library ABI is now chosen by default.
If you get linker errors about undefined references to symbols that involve types in the std::__cxx11 namespace or the tag [abi:cxx11] then it probably indicates that you are trying to link together object files that were compiled with different values for the _GLIBCXX_USE_CXX11_ABI macro. This commonly happens when linking to a third-party library that was compiled with an older version of GCC. If the third-party library cannot be rebuilt with the new ABI then you will need to recompile your code with the old ABI.
So if you suddenly get linker errors when switching to a GCC after 5.1.0 this would be a thing to check out.
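For example, a sketch of forcing the old ABI when you must link against such a prebuilt library (the macro is documented by libstdc++):
$ g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c myfile.cpp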
Your linkage consumes libraries before the object files that refer to them
You are trying to compile and link your program with the GCC toolchain.
Your linkage specifies all of the necessary libraries and library search paths
If libfoo depends on libbar, then your linkage correctly puts libfoo before libbar.
Your linkage fails with undefined reference to something errors.
But all the undefined somethings are declared in the header files you have
#included and are in fact defined in the libraries that you are linking.
Examples are in C. They could equally well be C++
A minimal example involving a static library you built yourself
my_lib.c
#include "my_lib.h"
#include <stdio.h>
void hw(void)
{
puts("Hello World");
}
my_lib.h
#ifndef MY_LIB_H
#define MY_LIB_H
extern void hw(void);
#endif
eg1.c
#include <my_lib.h>
int main()
{
hw();
return 0;
}
You build your static library:
$ gcc -c -o my_lib.o my_lib.c
$ ar rcs libmy_lib.a my_lib.o
You compile your program:
$ gcc -I. -c -o eg1.o eg1.c
You try to link it with libmy_lib.a and fail:
$ gcc -o eg1 -L. -lmy_lib eg1.o
eg1.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
The same result if you compile and link in one step, like:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
/tmp/ccQk1tvs.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
A minimal example involving a shared system library, the compression library libz
eg2.c
#include <zlib.h>
#include <stdio.h>
int main()
{
printf("%s\n",zlibVersion());
return 0;
}
Compile your program:
$ gcc -c -o eg2.o eg2.c
Try to link your program with libz and fail:
$ gcc -o eg2 -lz eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
Same if you compile and link in one go:
$ gcc -o eg2 -I. -lz eg2.c
/tmp/ccxCiGn7.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
And a variation on example 2 involving pkg-config:
$ gcc -o eg2 $(pkg-config --libs zlib) eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
What are you doing wrong?
In the sequence of object files and libraries you want to link to make your
program, you are placing the libraries before the object files that refer to
them. You need to place the libraries after the object files that refer
to them.
Link example 1 correctly:
$ gcc -o eg1 eg1.o -L. -lmy_lib
Success:
$ ./eg1
Hello World
Link example 2 correctly:
$ gcc -o eg2 eg2.o -lz
Success:
$ ./eg2
1.2.8
Link the example 2 pkg-config variation correctly:
$ gcc -o eg2 eg2.o $(pkg-config --libs zlib)
$ ./eg2
1.2.8
The explanation
Reading is optional from here on.
By default, a linkage command generated by GCC, on your distro,
consumes the files in the linkage from left to right in
commandline sequence. When it finds that a file refers to something
and does not contain a definition for it, it will search for a definition
in files further to the right. If it eventually finds a definition, the
reference is resolved. If any references remain unresolved at the end,
the linkage fails: the linker does not search backwards.
First, example 1, with static library my_lib.a
A static library is an indexed archive of object files. When the linker
finds -lmy_lib in the linkage sequence and figures out that this refers
to the static library ./libmy_lib.a, it wants to know whether your program
needs any of the object files in libmy_lib.a.
There is only one object file in libmy_lib.a, namely my_lib.o, and there's only one thing defined
in my_lib.o, namely the function hw.
The linker will decide that your program needs my_lib.o if and only if it already knows that
your program refers to hw, in one or more of the object files it has already
added to the program, and that none of the object files it has already added
contains a definition for hw.
If that is true, then the linker will extract a copy of my_lib.o from the library and
add it to your program. Then, your program contains a definition for hw, so
its references to hw are resolved.
When you try to link the program like:
$ gcc -o eg1 -L. -lmy_lib eg1.o
the linker has not added eg1.o to the program when it sees
-lmy_lib. Because at that point, it has not seen eg1.o.
Your program does not yet make any references to hw: it
does not yet make any references at all, because all the references it makes
are in eg1.o.
So the linker does not add my_lib.o to the program and has no further
use for libmy_lib.a.
Next, it finds eg1.o, and adds it to the program. An object file in the
linkage sequence is always added to the program. Now, the program makes
a reference to hw, and does not contain a definition of hw; but
there is nothing left in the linkage sequence that could provide the missing
definition. The reference to hw ends up unresolved, and the linkage fails.
Second, example 2, with shared library libz
A shared library isn't an archive of object files or anything like it. It's
much more like a program that doesn't have a main function and
instead exposes multiple other symbols that it defines, so that other
programs can use them at runtime.
Many Linux distros today configure their GCC toolchain so that its language drivers (gcc,g++,gfortran etc)
instruct the system linker (ld) to link shared libraries on an as-needed basis.
You have got one of those distros.
This means that when the linker finds -lz in the linkage sequence, and figures out that this refers
to the shared library (say) /usr/lib/x86_64-linux-gnu/libz.so, it wants to know whether any references that it has added to your program that aren't yet defined have definitions that are exported by libz
If that is true, then the linker will not copy any chunks out of libz and
add them to your program; instead, it will just doctor the code of your program
so that:-
At runtime, the system program loader will load a copy of libz into the
same process as your program whenever it loads a copy of your program, to run it.
At runtime, whenever your program refers to something that is defined in
libz, that reference uses the definition exported by the copy of libz in
the same process.
Your program wants to refer to just one thing that has a definition exported by libz,
namely the function zlibVersion, which is referred to just once, in eg2.c.
If the linker adds that reference to your program, and then finds the definition
exported by libz, the reference is resolved
But when you try to link the program like:
gcc -o eg2 -lz eg2.o
the order of events is wrong in just the same way as with example 1.
At the point when the linker finds -lz, there are no references to anything
in the program: they are all in eg2.o, which has not yet been seen. So the
linker decides it has no use for libz. When it reaches eg2.o, adds it to the program,
and then has undefined reference to zlibVersion, the linkage sequence is finished;
that reference is unresolved, and the linkage fails.
Lastly, the pkg-config variation of example 2 has a now obvious explanation.
After shell-expansion:
gcc -o eg2 $(pkg-config --libs zlib) eg2.o
becomes:
gcc -o eg2 -lz eg2.o
which is just example 2 again.
I can reproduce the problem in example 1, but not in example 2
The linkage:
gcc -o eg2 -lz eg2.o
works just fine for you!
(Or: That linkage worked fine for you on, say, Fedora 23, but fails on Ubuntu 16.04)
That's because the distro on which the linkage works is one of the ones that
does not configure its GCC toolchain to link shared libraries as-needed.
Back in the day, it was normal for unix-like systems to link static and shared
libraries by different rules. Static libraries in a linkage sequence were linked
on the as-needed basis explained in example 1, but shared libraries were linked unconditionally.
This behaviour is economical at linktime because the linker doesn't have to ponder
whether a shared library is needed by the program: if it's a shared library,
link it. And most libraries in most linkages are shared libraries. But there are disadvantages too:-
It is uneconomical at runtime, because it can cause shared libraries to be
loaded along with a program even if it doesn't need them.
The different linkage rules for static and shared libraries can be confusing
to inexpert programmers, who may not know whether -lfoo in their linkage
is going to resolve to /some/where/libfoo.a or to /some/where/libfoo.so,
and might not understand the difference between shared and static libraries
anyway.
This trade-off has led to the schismatic situation today. Some distros have
changed their GCC linkage rules for shared libraries so that the as-needed
principle applies for all libraries. Some distros have stuck with the old
way.
Why do I still get this problem even if I compile-and-link at the same time?
If I just do:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
surely gcc has to compile eg1.c first, and then link the resulting
object file with libmy_lib.a. So how can it not know that object file
is needed when it's doing the linking?
Because compiling and linking with a single command does not change the
order of the linkage sequence.
When you run the command above, gcc figures out that you want compilation +
linkage. So behind the scenes, it generates a compilation command, and runs
it, then generates a linkage command, and runs it, as if you had run the
two commands:
$ gcc -I. -c -o eg1.o eg1.c
$ gcc -o eg1 -L. -lmy_lib eg1.o
So the linkage fails just as it does if you do run those two commands. The
only difference you notice in the failure is that gcc has generated a
temporary object file in the compile + link case, because you're not telling it
to use eg1.o. We see:
/tmp/ccQk1tvs.o: In function `main'
instead of:
eg1.o: In function `main':
See also
The order in which interdependent linked libraries are specified is wrong
Putting interdependent libraries in the wrong order is just one way
in which you can get files that need definitions of things coming
later in the linkage than the files that provide the definitions. Putting libraries before the
object files that refer to them is another way of making the same mistake.
A wrapper around GNU ld that doesn't support linker scripts
Some .so files are actually GNU ld linker scripts, e.g. the libtbb.so file is an ASCII text file with these contents:
INPUT (libtbb.so.2)
Some more complex builds may not support this. For example, if you include -v in the compiler options, you can see that the mainwin gcc wrapper mwdip discards linker script command files in the verbose output list of libraries to link in. A simple workaround is to replace the linker script input command file with a copy of the file instead (or a symlink), e.g.
cp libtbb.so.2 libtbb.so
Or you could replace the -l argument with the full path of the .so, e.g. instead of -ltbb do /home/foo/tbb-4.3/linux/lib/intel64/gcc4.4/libtbb.so.2
Befriending templates...
Given the code snippet of a template type with a friend operator (or function);
template <typename T>
class Foo {
friend std::ostream& operator<< (std::ostream& os, const Foo<T>& a);
};
The operator<< is being declared as a non-template function. For every type T used with Foo, there needs to be a non-templated operator<<. For example, if there is a type Foo<int> declared, then there must be an operator implementation as follows;
std::ostream& operator<< (std::ostream& os, const Foo<int>& a) {/*...*/}
Since it is not implemented, the linker fails to find it and results in the error.
To correct this, you can declare a template operator before the Foo type and then declare the appropriate instantiation as a friend. The syntax is a little awkward, but it looks as follows:
// forward declare the Foo
template <typename>
class Foo;
// forward declare the operator <<
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&);
template <typename T>
class Foo {
friend std::ostream& operator<< <>(std::ostream& os, const Foo<T>& a);
// note the required <> ^^^^
// ...
};
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&)
{
// ... implement the operator
}
The above code limits the friendship of the operator to the corresponding instantiation of Foo, i.e. the operator<< <int> instantiation is limited to access the private members of the instantiation of Foo<int>.
Alternatives include;
Allowing the friendship to extend to all instantiations of the templates, as follows;
template <typename T>
class Foo {
template <typename T1>
friend std::ostream& operator<<(std::ostream& os, const Foo<T1>& a);
// ...
};
Or, the implementation for the operator<< can be done inline inside the class definition;
template <typename T>
class Foo {
friend std::ostream& operator<<(std::ostream& os, const Foo& a)
{ /*...*/ }
// ...
};
Note, when the declaration of the operator (or function) only appears in the class, the name is not available for "normal" lookup, only for argument dependent lookup, from cppreference;
A name first declared in a friend declaration within class or class template X becomes a member of the innermost enclosing namespace of X, but is not accessible for lookup (except argument-dependent lookup that considers X) unless a matching declaration at the namespace scope is provided...
There is further reading on template friends at cppreference and the C++ FAQ.
Code listing showing the techniques above.
As a side note to the failing code sample; g++ warns about this as follows
warning: friend declaration 'std::ostream& operator<<(...)' declares a non-template function [-Wnon-template-friend]
note: (if this is not what you intended, make sure the function template has already been declared and add <> after the function name here)
When your include paths are different
Linker errors can happen when a header file and its associated shared library (.lib file) go out of sync. Let me explain.
How do linkers work? The linker matches a function declaration (declared in the header) with its definition (in the shared library) by comparing their signatures. You can get a linker error if the linker doesn't find a function definition that matches perfectly.
Is it possible to still get a linker error even though the declaration and the definition seem to match? Yes! They might look the same in source code, but it really depends on what the compiler sees. Essentially you could end up with a situation like this:
// header1.h
typedef int Number;
void foo(Number);
// header2.h
typedef float Number;
void foo(Number); // this only looks the same lexically
Note how even though both the function declarations look identical in source code, but they are really different according to the compiler.
You might ask how one ends up in a situation like that? Include paths of course! If when compiling the shared library, the include path leads to header1.h and you end up using header2.h in your own program, you'll be left scratching your header wondering what happened (pun intended).
An example of how this can happen in the real world is explained below.
Further elaboration with an example
I have two projects: graphics.lib and main.exe. Both projects depend on common_math.h. Suppose the library exports the following function:
// graphics.lib
#include "common_math.h"
void draw(vec3 p) { ... } // vec3 comes from common_math.h
And then you go ahead and include the library in your own project.
// main.exe
#include "other/common_math.h"
#include "graphics.h"
int main() {
draw(...);
}
Boom! You get a linker error and you have no idea why it's failing. The reason is that the common library uses different versions of the same include common_math.h (I have made it obvious here in the example by including a different path, but it might not always be so obvious. Maybe the include path is different in the compiler settings).
Note in this example, the linker would tell you it couldn't find draw(), when in reality you know it obviously is being exported by the library. You could spend hours scratching your head wondering what went wrong. The thing is, the linker sees a different signature because the parameter types are slightly different. In the example, vec3 is a different type in both projects as far as the compiler is concerned. This could happen because they come from two slightly different include files (maybe the include files come from two different versions of the library).
Debugging the linker
DUMPBIN is your friend, if you are using Visual Studio. I'm sure other compilers have other similar tools.
The process goes like this:
Note the weird mangled name given in the linker error (e.g. draw@graphics@XYZ).
Dump the exported symbols from the library into a text file.
Search for the exported symbol of interest, and notice that the mangled name is different.
Pay attention to why the mangled names ended up different. You would be able to see that the parameter types are different, even though they look the same in the source code.
Reason why they are different. In the example given above, they are different because of different include files.
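For step 2, a sketch of dumping the symbols with DUMPBIN; which switch applies depends on whether you have a static library or a DLL/import library:
dumpbin /SYMBOLS graphics.lib > graphics_symbols.txt
dumpbin /EXPORTS graphics.dll > graphics_exports.txt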
[1] By project I mean a set of source files that are linked together to produce either a library or an executable.
Inconsistent UNICODE definitions
A Windows UNICODE build is built with TCHAR etc. being defined as wchar_t etc. When not building with UNICODE defined, the build defines TCHAR as char etc. These UNICODE and _UNICODE defines affect all the "T" string types: LPTSTR, LPCTSTR and their ilk.
Building one library with UNICODE defined and attempting to link it in a project where UNICODE is not defined will result in linker errors, since there will be a mismatch in the definition of TCHAR: char vs. wchar_t.
The error usually involves a function or a value with a char or wchar_t derived type; these could include std::basic_string<> etc. as well. When browsing through the affected function in the code, there will often be a reference to TCHAR or std::basic_string<TCHAR> etc. This is a tell-tale sign that the code was originally intended for both a UNICODE and a Multi-Byte Character (or "narrow") build.
To correct this, build all the required libraries and projects with a consistent definition of UNICODE (and _UNICODE).
This can be done with either;
#define UNICODE
#define _UNICODE
Or in the project settings;
Project Properties > General > Project Defaults > Character Set
Or on the command line;
/DUNICODE /D_UNICODE
The alternative applies as well: if UNICODE is not intended to be used, make sure the defines are not set, and/or that the multi-byte character setting is used in the projects and consistently applied.
Do not forget to be consistent between the "Release" and "Debug" builds as well.
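As an illustration, a sketch of how the mismatch shows up; the header and function here are hypothetical:
// log.h - shared between the library and the application
#include <tchar.h>
void Log(const TCHAR* message);

// Built with UNICODE/_UNICODE the library defines
//     void Log(const wchar_t*)
// while an application built without UNICODE calls
//     void Log(const char*)
// The mangled names differ, so the linker reports an unresolved
// external symbol for Log(char const *).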
Clean and rebuild
A "clean" of the build can remove the "dead wood" that may be left lying around from previous builds, failed builds, incomplete builds and other build system related build issues.
In general the IDE or build will include some form of "clean" function, but this may not be correctly configured (e.g. in a manual makefile) or may fail (e.g. the intermediate or resultant binaries are read-only).
Once the "clean" has completed, verify that the "clean" has succeeded and all the generated intermediate file (e.g. an automated makefile) have been successfully removed.
This process can be seen as a final resort, but is often a good first step; especially if the code related to the error has recently been added (either locally or from the source repository).
Missing "extern" in const variable declarations/definitions (C++ only)
For people coming from C it might be a surprise that in C++ global const variables have internal (or static) linkage. In C this was not the case, as all global variables are implicitly extern (i.e. when the static keyword is missing).
Example:
// file1.cpp
const int test = 5; // in C++ same as "static const int test = 5"
int test2 = 5;
// file2.cpp
extern const int test;
extern int test2;
void foo()
{
int x = test; // linker error in C++ , no error in C
int y = test2; // no problem
}
Correct would be to use a header file and include it in file2.cpp and file1.cpp:
extern const int test;
extern int test2;
Alternatively one could declare the const variable in file1.cpp with explicit extern
Even though this is a pretty old question with multiple accepted answers, I'd like to share how to resolve an obscure "undefined reference to" error.
Different versions of libraries
I was using an alias to refer to std::filesystem::path: filesystem has been in the standard library since C++17, but my program also needed to compile under C++14, so I decided to use a type alias:
#if (defined _GLIBCXX_EXPERIMENTAL_FILESYSTEM) //is the included filesystem library experimental? (C++14 and newer: <experimental/filesystem>)
using path_t = std::experimental::filesystem::path;
#elif (defined _GLIBCXX_FILESYSTEM) //not experimental (C++17 and newer: <filesystem>)
using path_t = std::filesystem::path;
#endif
Let's say I have three files: main.cpp, file.h, file.cpp:
file.h #include's <experimental/filesystem> and contains the code above
file.cpp, the implementation of file.h, #include's "file.h"
main.cpp #include's <filesystem> and "file.h"
Note the different libraries used in main.cpp and file.h. Since main.cpp #include'd "file.h" after <filesystem>, the version of filesystem used there was the C++17 one. I used to compile the program with the following commands:
$ g++ -g -std=c++17 -c main.cpp -> compiles main.cpp to main.o
$ g++ -g -std=c++17 -c file.cpp -> compiles file.cpp and file.h to file.o
$ g++ -g -std=c++17 -o executable main.o file.o -lstdc++fs -> links main.o and file.o
This way any function contained in file.o and used in main.o that required path_t gave "undefined reference" errors because main.o referred to std::filesystem::path but file.o to std::experimental::filesystem::path.
Resolution
To fix this I just needed to change <experimental/filesystem> in file.h to <filesystem>.
When linking against shared libraries, make sure that the used symbols are not hidden.
The default behavior of gcc is that all symbols are visible. However, when the translation units are built with option -fvisibility=hidden, only functions/symbols marked with __attribute__ ((visibility ("default"))) are external in the resulting shared object.
You can check whether the symbols you are looking for are external by invoking:
# -D shows (global) dynamic symbols that can be used from the outside of XXX.so
nm -D XXX.so | grep MY_SYMBOL
the hidden/local symbols are shown by nm with a lowercase symbol type, for example t instead of T for the code section:
nm XXX.so
00000000000005a7 t HIDDEN_SYMBOL
00000000000005f8 T VISIBLE_SYMBOL
You can also use nm with the option -C to demangle the names (if C++ was used).
Similar to Windows-dlls, one would mark public functions with a define, for example DLL_PUBLIC defined as:
#define DLL_PUBLIC __attribute__ ((visibility ("default")))
DLL_PUBLIC int my_public_function(){
...
}
Which roughly corresponds to Windows'/MSVC-version:
#ifdef BUILDING_DLL
#define DLL_PUBLIC __declspec(dllexport)
#else
#define DLL_PUBLIC __declspec(dllimport)
#endif
More information about visibility can be found on the gcc wiki.
When a translation unit is compiled with -fvisibility=hidden the resulting symbols still have external linkage (shown with an uppercase symbol type by nm) and can be used for external linkage without problem if the object files become part of a static library. The linkage becomes local only when the object files are linked into a shared library.
To find which symbols in an object file are hidden run:
objdump -t XXXX.o | grep hidden
0000000000000000 g F .text 000000000000000b .hidden HIDDEN_SYMBOL1
000000000000000b g F .text 000000000000000b .hidden HIDDEN_SYMBOL2
Functions or class methods are defined in source files with the inline specifier.
An example:
main.cpp
#include "gum.h"
#include "foo.h"
int main()
{
gum();
foo f;
f.bar();
return 0;
}
foo.h (1)
#pragma once
struct foo {
void bar() const;
};
gum.h (1)
#pragma once
extern void gum();
foo.cpp (1)
#include "foo.h"
#include <iostream>
inline /* <- wrong! */ void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (1)
#include "gum.h"
#include <iostream>
inline /* <- wrong! */ void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
If you specify that gum (similarly, foo::bar) is inline at its definition then
the compiler will inline gum (if it chooses to), by:
not emitting any unique definition of gum, and therefore
not emitting any symbol by which the linker can refer to the definition of gum, and instead
replacing all calls to gum with inline copies of the compiled body of gum.
As a result, if you define gum inline in a source file gum.cpp, it is
compiled to an object file gum.o in which all calls to gum are inlined
and no symbol is defined by which the linker can refer to gum. When you
link gum.o into a program together with another object file, e.g. main.o
that makes references to an external symbol gum, the linker cannot resolve
those references. So the linkage fails:
Compile:
g++ -c main.cpp foo.cpp gum.cpp
Link:
$ g++ -o prog main.o foo.o gum.o
main.o: In function `main':
main.cpp:(.text+0x18): undefined reference to `gum()'
main.cpp:(.text+0x24): undefined reference to `foo::bar() const'
collect2: error: ld returned 1 exit status
You can only define gum as inline if the compiler can see its definition in every source file in which gum may be called. That means its inline definition needs to exist in a header file that you include in every source file
you compile in which gum may be called. Do one of two things:
Either don't inline the definitions
Remove the inline specifier from the source file definition:
foo.cpp (2)
#include "foo.h"
#include <iostream>
void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (2)
#include "gum.h"
#include <iostream>
void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Rebuild with that:
$ g++ -c main.cpp foo.cpp gum.cpp
imk#imk-Inspiron-7559:~/develop/so/scrap1$ g++ -o prog main.o foo.o gum.o
imk#imk-Inspiron-7559:~/develop/so/scrap1$ ./prog
void gum()
void foo::bar() const
Success.
Or inline correctly
Inline definitions in header files:
foo.h (2)
#pragma once
#include <iostream>
struct foo {
void bar() const { // In-class definition is implicitly inline
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
};
// Alternatively...
#if 0
struct foo {
void bar() const;
};
inline void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
#endif
gum.h (2)
#pragma once
#include <iostream>
inline void gum() {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Now we don't need foo.cpp or gum.cpp:
$ g++ -c main.cpp
$ g++ -o prog main.o
$ ./prog
void gum()
void foo::bar() const

Can I use cimport inside a Cython pxd file?

I want to use the uint_fast16_t type in a Cython function. Can I gain access to it in my pxd file by adding from libc.stdint cimport uint_fast16_t to the top of the file? I've searched the Cython documentation and I can't find any mention of using cimport inside a pxd file.
Yes. It seems that adding a cimport line to the top of my pxd file does work. Here is a small test case I've used to demonstrate:
test.pxd:
from libc.stdint cimport uint_fast16_t
cdef uint_fast16_t double_it(uint_fast16_t x)
test.pyx:
# cython: language_level=3, boundscheck=False, wraparound=False, cdivision=True
from test cimport double_it, uint_fast16_t
cdef uint_fast16_t double_it(uint_fast16_t x):
    return 2*x

def fast_double(uint_fast16_t x):
    return double_it(x)
Result after cythonize-3.7 -i test.pyx && python
>>> import test
>>> test.fast_double(10)
20
The compiler directives I put at the top of test.pyx were copied from my main project file. I put them there simply to make sure it worked with the same exact settings I was already using. I don't think any of them are necessary to use cimport inside a .pxd file.

Cython undefined symbol with c wrapper

I am trying to expose C code to Cython and am running into "undefined symbol" errors when trying to use functions defined in my C file from another Cython module.
Functions defined in my h files and functions using a manual wrapper work without a problem.
Basically the same case as this question, but the solution (linking against the library) isn't satisfactory for me.
I assume I am missing something in the setup.py script?
Minimized example of my case:
foo.h
int source_func(void);
inline int header_func(void){
return 1;
}
foo.c
#include "foo.h"
int source_func(void){
return 2;
}
foo_wrapper.pxd
cdef extern from "foo.h":
    int source_func()
    int header_func()
cdef source_func_wrapper()
foo_wrapper.pyx
cdef source_func_wrapper():
    return source_func()
The cython module i want to use the functions in:
test_lib.pyx
cimport foo_wrapper
def do_it():
    print "header func"
    print foo_wrapper.header_func() # ok
    print "source func wrapped"
    print foo_wrapper.source_func_wrapper() # ok
    print "source func"
    print foo_wrapper.source_func() # undefined symbol: source_func
setup.py, which builds both foo_wrapper and test_lib:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
# setup wrapper
setup(
ext_modules = cythonize([
Extension("foo_wrapper", ["foo_wrapper.pyx", "foo.c"])
])
)
# setup test module
setup(
ext_modules = cythonize([
Extension("test_lib", ["test_lib.pyx"])
])
)
There are 3 different types of function in foo_wrapper:
source_func_wrapper is a Python function, and the Python runtime handles calling it.
header_func is an inline function that is expanded at compile time, so its definition/machine code is not needed later on.
source_func, on the other hand, must be handled by the static linker (this is the case in foo_wrapper) or the dynamic linker (which, I assume, is what you want for test_lib).
Further down I'll try to explain why the setup doesn't work out of the box, but first I would like to introduce the two (at least in my opinion) best alternatives:
A: Avoid this problem altogether. Your foo_wrapper wraps C functions from foo.h. That means every other module should use these wrapper functions. If everyone can just access the functionality directly, it makes the whole wrapper kind of obsolete. Hide the foo.h interface in your pyx file:
#foo_wrapper.pxd
cdef source_func_wrapper()
cdef header_func_wrapper()

#foo_wrapper.pyx
cdef extern from "foo.h":
    int source_func()
    int header_func()

cdef source_func_wrapper():
    return source_func()

cdef header_func_wrapper():
    return header_func()
B: It might be valid to want to use the foo functionality directly via C functions. In this case we should use the same strategy as Cython does with the stdc++ library: foo.c should become a shared library, and there should be only a foo.pxd file (no pyx!) which can be cimport'ed wherever it is needed. Additionally, libfoo.so should then be added as a dependency to both foo_wrapper and test_lib; a sketch follows below.
However, approach B means more hassle - you need to put libfoo.so somewhere the dynamic loader can find it...
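A minimal sketch of approach B, under the assumption that libfoo.so is built by hand with gcc and kept next to the extension modules (the file layout and names here are illustrative, not taken from the original question):
# foo.pxd - the only Cython file needed for the C library
cdef extern from "foo.h":
    int source_func()
    int header_func()

# Build the shared library first, e.g.:
#   gcc -shared -fPIC foo.c -o libfoo.so

# setup.py - both extension modules link against libfoo.so
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

# Shared linker settings: link against libfoo.so in the current directory
# and embed an rpath so it is found relative to the built modules.
common = dict(libraries=["foo"], library_dirs=["."],
              extra_link_args=["-Wl,-rpath=$ORIGIN/."])

setup(ext_modules=cythonize([
    Extension("foo_wrapper", ["foo_wrapper.pyx"], **common),
    Extension("test_lib", ["test_lib.pyx"], **common),
]))
test_lib.pyx could then do from foo cimport source_func and call the C function directly, without going through foo_wrapper at all.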
Other alternatives:
As we will see, there are a lot of ways to get foo_wrapper + test_lib to work. First, let's see in more detail how the loading of dynamic libraries works in Python.
We start out by taking a look at the test_lib.so at hand:
>>> nm test_lib.so --undefined
....
U PyXXXXX
U source_func
There are a lot of undefined symbols, most of which start with Py and will be provided by the python executable at runtime. But there is also our evildoer - source_func.
Now, we start python via
LD_DEBUG=libs,files,symbols python
and load our extension via import test_lib. In the resulting debug trace we can see the following:
>>>>: file=./test_lib.so [0]; dynamically loaded by python [0]
python loads test_lib.so via dlopen and starts to look up/resolve the undefined symbols from test_lib.so:
>>>>: symbol=PyExc_RuntimeError; lookup in file=python [0]
>>>>: symbol=PyExc_TypeError; lookup in file=python [0]
These python symbols are found pretty quickly - they are all defined in the python executable, which is the first place the dynamic linker looks (if this executable was linked with -Wl,-export-dynamic). But it is different with source_func:
>>>>: symbol=source_func; lookup in file=python [0]
>>>>: symbol=source_func; lookup in file=/lib/x86_64-linux-gnu/libpthread.so.0 [0]
...
>>>>: symbol=source_func; lookup in file=/lib64/ld-linux-x86-64.so.2 [0]
>>>>: ./test_lib.so: error: symbol lookup error: undefined symbol: source_func (fatal)
So after searching all loaded shared libraries the symbol is not found and we have to abort. The fun fact is that foo_wrapper is not yet loaded, so source_func cannot be looked up there (it would be loaded in the next step, as a dependency of test_lib, by python).
What happens if we start python with preloaded foo_wrapper.so?
LD_DEBUG=libs,files,symbols LD_PRELOAD=$(pwd)/foo_wrapper.so python
This time, calling import test_lib succeeds, because the preloaded foo_wrapper is the first place the dynamic loader looks up the symbols (after the python executable):
>>>>: symbol=source_func; lookup in file=python [0]
>>>>: symbol=source_func; lookup in file=/home/ed/python_stuff/cython/two/foo_wrapper.so [0]
But how does it work when foo_wrapper.so is not preloaded? First let's add foo_wrapper.so as a library to our setup of test_lib:
ext_modules = cythonize([
Extension("test_lib", ["test_lib.pyx"],
libraries=[':foo_wrapper.so'],
library_dirs=['.'],
)])
this would lead to the following linker command:
gcc ... test_lib.o -L. -l:foo_wrapper.so -o test_lib.so
If we now look at the undefined symbols, we see no difference:
>>> nm test_lib.so --undefined
....
U PyXXXXX
U source_func
source_func is still undefined! So what is the advantage of linking against the shared library? The difference is that now foo_wrapper.so is listed as needed for test_lib.so:
>>>> readelf -d test_lib.so| grep NEEDED
0x0000000000000001 (NEEDED) Shared library: [foo_wrapper.so]
0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
ld does not resolve the symbols itself - that is the job of the dynamic linker - but it does a dry run and helps the dynamic linker by noting that foo_wrapper.so is needed in order to resolve the symbols, so it must be loaded before the symbol search starts. However, it does not explicitly say that the symbol source_func must be looked up in foo_wrapper.so - it could actually be found and used anywhere.
Let's start python again, this time without preloading:
>>>> LD_DEBUG=libs,files,symbols python
>>>> import test_lib
....
>>>> file=./test_lib.so [0]; dynamically loaded by python [0]....
>>>> file=foo_wrapper.so [0]; needed by ./test_lib.so [0]
>>>> find library=foo_wrapper.so [0]; searching
>>>> search cache=/etc/ld.so.cache
.....
>>>> foo_wrapper.so: cannot open shared object file: No such file or directory.
OK, now the dynamic linker knows it has to find foo_wrapper.so, but it is nowhere in the search path, so we get an error message.
We have to tell the dynamic linker where to look for the shared libraries. There are a lot of ways; one of them is to set LD_LIBRARY_PATH:
LD_DEBUG=libs,symbols,files LD_LIBRARY_PATH=. python
>>>> import test_lib
....
>>>> find library=foo_wrapper.so [0]; searching
>>>> search path=./tls/x86_64:./tls:./x86_64:. (LD_LIBRARY_PATH)
>>>> ...
>>>> trying file=./foo_wrapper.so
>>>> file=foo_wrapper.so [0]; generating link map
This time foo_wrapper.so is found (the dynamic loader looks at the places hinted at by LD_LIBRARY_PATH), loaded, and then used for resolving the undefined symbols in test_lib.so.
But what is the difference if the runtime_library_dirs setup argument is used?
ext_modules = cythonize([
Extension("test_lib", ["test_lib.pyx"],
libraries=[':foo_wrapper.so'],
library_dirs=['.'],
runtime_library_dirs=['.']
)
])
and now calling
LD_DEBUG=libs,symbols,files python
>>>> import test_lib
....
>>>> file=foo_wrapper.so [0]; needed by ./test_lib.so [0]
>>>> find library=foo_wrapper.so [0]; searching
>>>> search path=./tls/x86_64:./tls:./x86_64:. (RPATH from file ./test_lib.so)
>>>> trying file=./foo_wrapper.so
>>>> file=foo_wrapper.so [0]; generating link map
foo_wrapper.so is found via a so-called RPATH, even though LD_LIBRARY_PATH is not set. We can see this RPATH being inserted by the static linker:
>>>> readelf -d test_lib.so | grep RPATH
0x000000000000000f (RPATH) Library rpath: [.]
However, this is a path relative to the current working directory, which most of the time is not what is wanted. One should pass an absolute path or use
ext_modules = cythonize([
Extension("test_lib", ["test_lib.pyx"],
libraries=[':foo_wrapper.so'],
library_dirs=['.'],
extra_link_args=["-Wl,-rpath=$ORIGIN/."] #rather than runtime_library_dirs
)
])
to make the path relative to the current location of the resulting shared library (which can change, for example, through copying/moving). readelf now shows:
>>>> readelf -d test_lib.so | grep RPATH
0x000000000000000f (RPATH) Library rpath: [$ORIGIN/.]
which means the needed shared library will be searched for relative to the path of the loading shared library, i.e. test_lib.so.
That is also how your setup should look if you would like to reuse the symbols from foo_wrapper.so, which I do not advocate.
There are however some possibilities to use the libraries you have already built.
Let's go back to the original setup. What happens if we first import foo_wrapper (as a kind of preload) and only then test_lib? I.e.:
>>>> import foo_wrapper
>>>> import test_lib
This doesn't work out of the box. But why? Obviously, the loaded symbols from foo_wrapper are not visible to other libraries. Python uses dlopen for dynamic loading of shared libraries, and as explained in this good article, there are different strategies possible. We can use
>>>> import sys
>>>> sys.getdlopenflags()
>>>> 2
to see which flags are set. 2 means RTLD_NOW, which means that the symbols are resolved directly upon loading of the shared library. We need to OR the flag with RTLD_GLOBAL=256 to make the symbols visible globally/outside of the dynamically loaded library.
>>> import sys; import ctypes;
>>> sys.setdlopenflags(sys.getdlopenflags()| ctypes.RTLD_GLOBAL)
>>> import foo_wrapper
>>> import test_lib
and it works, our debug trace shows:
>>> symbol=source_func; lookup in file=./foo_wrapper.so [0]
>>> file=./foo_wrapper.so [0]; needed by ./test_lib.so [0] (relocation dependency)
Another interesting detail: foo_wrapper.so is loaded only once, because python does not load a module twice via import foo_wrapper. But even if it were opened twice, it would be in memory only once (the second dlopen only increases the reference count of the shared library).
But with our newly won insight we could even go further:
>>>> import sys;
>>>> sys.setdlopenflags(1|256)#RTLD_LAZY+RTLD_GLOBAL
>>>> import test_lib
>>>> test_lib.do_it()
>>>> ... it works! ....
Why does this work? RTLD_LAZY means that the symbols are resolved not directly upon loading but when they are used for the first time. But before the first usage (test_lib.do_it()), foo_wrapper is loaded (via the import inside the test_lib module), and due to RTLD_GLOBAL its symbols can be used for the resolution later on.
If we don't use RTLD_GLOBAL, the failure comes only when we call test_lib.do_it(), because the needed symbols from foo_wrapper are not seen globally in this case.
As to the question of why it is not such a great idea to just link both modules, foo_wrapper and test_lib, against foo.c: singletons - see this. A sketch of that problematic variant follows below.
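For illustration only - a sketch of the variant the paragraph above warns against, assuming foo.c were simply compiled into both extensions (this is not a recommendation):
# setup.py - problematic: foo.c is compiled into *both* extensions
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

# Every global/static object defined in foo.c now exists once per module,
# so any "singleton" living in foo.c is silently duplicated.
setup(ext_modules=cythonize([
    Extension("foo_wrapper", ["foo_wrapper.pyx", "foo.c"]),
    Extension("test_lib", ["test_lib.pyx", "foo.c"]),
]))
It links and imports fine, which is exactly why this kind of duplication is easy to miss.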

Sympy's autowrap with cython and Matrix generates fatal error: 'numpy/arrayobject.h' file not found

I'm trying to execute the simple example from SymPy's autowrap module that includes a matrix/vector product, using the Cython backend (since I do not have gfortran installed):
import sympy.utilities.autowrap as aw
from sympy.utilities.autowrap import autowrap
from sympy import symbols, IndexedBase, Idx, Eq
A, x, y = map(IndexedBase, ['A', 'x', 'y'])
m, n = symbols('m n', integer=True)
i = Idx('i', m)
j = Idx('j', n)
instruction = Eq(y[i], A[i, j]*x[j])
matvec = autowrap(instruction, language='C',backend='cython')
I'm on OSX 10.9.4, with the anaconda distribution for python 2.7, sympy 0.7.6.1 and cython 0.23.2.
I get the following (known) error: fatal error: 'numpy/arrayobject.h' file not found
It seems to be a systematic error, and one needs to add the appropriate numpy include directory in the setup file used for the Cython compilation step, as suggested here.
How can I get rid of this issue in an autowrap context?
It seems this is a bug fixed here, but it does not work for me... Is this bug fix included in SymPy's release 0.7.6.1?
Any idea?
This was a bug and is now fixed. See this pull request:
https://github.com/sympy/sympy/pull/8848
If you use the development version of SymPy, it should work. Otherwise, you could have autowrap write the files out to a temporary directory, add the correct include statement to the generated files, and compile the code manually; a sketch of that workaround follows.
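A minimal sketch of that manual workaround, assuming autowrap's tempdir keyword argument is available in your version and that the generated directory contains a setup.py (the exact file names autowrap emits can vary between SymPy versions):
import numpy as np
from sympy.utilities.autowrap import autowrap

# Keep the generated C/Cython sources in a known directory instead of a
# throw-away temporary one, so they survive a failed compilation and can
# be fixed and rebuilt by hand.
try:
    matvec = autowrap(instruction, language='C', backend='cython',
                      tempdir='./autowrap_out')
except Exception:
    # The generated sources are now in ./autowrap_out. Edit its setup.py,
    # add include_dirs=[np.get_include()] to the Extension(...) call, then:
    #     cd autowrap_out && python setup.py build_ext --inplace
    # and import the built module from there.
    pass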