SWIG Tcl: undefined symbol error for log4cpp wrapper

I am new to log4cpp and the SWIG wrapper. I am trying to write an interface for simple logging using log4cpp.
I have installed log4cpp and SWIG on my Ubuntu machine.
log4cpp.cpp:
#include "log4cpp/Category.hh"
#include "log4cpp/Appender.hh"
#include "log4cpp/FileAppender.hh"
#include "log4cpp/OstreamAppender.hh"
#include "log4cpp/Layout.hh"
#include "log4cpp/BasicLayout.hh"
#include "log4cpp/Priority.hh"
#include "log4cpp.h"
void writeLog() {
    log4cpp::Appender *appender1 = new log4cpp::OstreamAppender("console", &std::cout);
    appender1->setLayout(new log4cpp::BasicLayout());

    log4cpp::Appender *appender2 = new log4cpp::FileAppender("default", "program.log");
    appender2->setLayout(new log4cpp::BasicLayout());

    log4cpp::Category& root = log4cpp::Category::getRoot();
    root.setPriority(log4cpp::Priority::WARN);
    root.addAppender(appender1);

    log4cpp::Category& sub1 = log4cpp::Category::getInstance(std::string("sub1"));
    sub1.addAppender(appender2);

    // use of functions for logging messages
    root.error("root error");
    root.info("root info");
    sub1.error("sub1 error");
    sub1.warn("sub1 warn");

    // printf-style for logging variables
    root.warn("%d + %d == %s ?", 1, 1, "two");

    // use of streams for logging messages
    root << log4cpp::Priority::ERROR << "Streamed root error";
    root << log4cpp::Priority::INFO << "Streamed root info";
    sub1 << log4cpp::Priority::ERROR << "Streamed sub1 error";
    sub1 << log4cpp::Priority::WARN << "Streamed sub1 warn";

    // or this way:
    root.errorStream() << "Another streamed error";
}
log4cpp.h:
void writeLog(void);
log4cpp.i:
%module log4cpp
%{
#include "log4cpp.h"
%}
%inline %{
extern void writeLog(void);
%}
I have done the following steps to generate the log4cpp.so file:
swig -tcl -c++ log4cpp.i
g++ -c -fPIC log4cpp.cpp log4cpp_wrap.cxx -I/usr/include/tcl8.5
g++ -shared log4cpp.o log4cpp_wrap.o -o log4cpp.so
This generates the log4cpp_wrap.cxx, log4cpp.o, log4cpp_wrap.o and log4cpp.so files without any warnings or errors.
However, when I run the following command in tclsh:
load ./log4cpp.so
It generates an undefined symbol error:
% load ./log4cpp.so
couldn't load file "./log4cpp.so": ./log4cpp.so: undefined symbol: _ZN7log4cpp8Appender29AppenderMapStorageInitializerD1Ev
What should I do to remove this error?

You need to link your SWIG shared library against log4cpp, just like you would for any other C++ application that uses this library. So when you call
g++ -shared log4cpp.o log4cpp_wrap.o -o log4cpp.so
it really needs to be something like this (but adapted to your real library name and search path):
g++ -shared log4cpp.o log4cpp_wrap.o -L/path/to/your/install/of/log4cpp -llog4cpp -o log4cpp.so
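As a minimal sketch (assuming log4cpp was installed into the default system prefix, so no extra -L path is needed), the whole build sequence would then be:
swig -tcl -c++ log4cpp.i
g++ -c -fPIC log4cpp.cpp log4cpp_wrap.cxx -I/usr/include/tcl8.5
g++ -shared log4cpp.o log4cpp_wrap.o -llog4cpp -o log4cpp.so
# check that the dependency on log4cpp was actually recorded in the shared object
ldd ./log4cpp.so | grep log4cpp
After that, load ./log4cpp.so followed by writeLog in tclsh should run the wrapped function and print the example log output.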

Related

A consistent example for using the C++ API of Pyarrow

I am trying to use the C++ API of Pyarrow. There is currently no example for it in the official documentation, and this is the best I have been able to come up with for a simple task:
#include <arrow/python/pyarrow.h>
#include <arrow/python/platform.h>
#include "arrow/python/init.h"
#include "arrow/python/datetime.h"
#include <iostream>
void MyFunction(PyObject * obj)
{
    Py_Initialize();
    std::cout << Py_IsInitialized() << std::endl;
    int ret = arrow_init_numpy();
    std::cout << ret << std::endl;
    if (ret != 0) {
        throw 0;
    }
    ::arrow::py::internal::InitDatetime();
    if (arrow::py::import_pyarrow() != 0)
    {
        std::cout << "problem initializing pyarrow" << std::endl;
        throw 0;
    }
    std::cout << "test" << std::endl;
    Py_Finalize();
    //return arrow::py::is_array(obj);
}
I am trying to compile it with
gcc -pthread -B /home/ziheng/anaconda3/envs/da/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O0 -Wall -Wstrict-prototypes -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 -I/home/ziheng/anaconda3/envs/da/lib/python3.7/site-packages/numpy/core/include -I/home/ziheng/anaconda3/envs/da/lib/python3.7/site-packages/pyarrow/include -I/home/ziheng/anaconda3/envs/da/include/python3.7m -c example.cpp -o example.o -std=c++11
g++ -pthread -shared -fPIC -B /home/ziheng/anaconda3/envs/da/compiler_compat -L/home/ziheng/anaconda3/envs/da/lib -Wl,-rpath=/home/ziheng/anaconda3/envs/da/lib -Wl,--no-as-needed -Wl,--sysroot=/ example.o -L/home/ziheng/anaconda3/envs/da/lib/python3.7/site-packages/pyarrow -l:libarrow.so.600 -l:libarrow_python.so.600 -l:libpython3.7m.so -o example.cpython-37m-x86_64-linux-gnu.so
The compilation works with no problems. However when I try to use ctypes to call the compiled .so file, like this:
from ctypes import *
lib = CDLL('example.cpython-37m-x86_64-linux-gnu.so')
lib._Z10MyFunctionP7_object(1)
I get segmentation fault at arrow_init_numpy, after Py_IsInitialized() prints 1.
When I run it through gdb, I get /tmp/build/80754af9/python_1614362349910/work/Python/ceval.c: No such file or directory.
If I try to compile my C code as a standalone executable, however, it works with no problems.
Can someone please help? Thank you.
First, the call to Py_Initialize() is superfluous. You are calling your code from within Python, so Python has presumably already been initialized; Py_Initialize() would only be needed if you were writing your own main rather than a plugin-type library. Correspondingly, the call to Py_Finalize() is probably a bad idea.
Second, and more significant for the error at hand, you are using ctypes.CDLL (and not, for example, ctypes.PyDLL), whose documentation states (emphasis mine):
The returned function prototype creates functions that use the standard C calling convention. The function will release the GIL during the call. If use_errno is set to true, the ctypes private copy of the system errno variable is exchanged with the real errno value before and after the call; use_last_error does the same for the Windows error code.
And, finally, the Arrow initialization routines assume you are holding the GIL (this should probably be added to the documentation). So the easiest way to fix your program is probably to change CDLL to PyDLL:
from ctypes import *
lib = PyDLL('example.cpython-37m-x86_64-linux-gnu.so')
lib._Z10MyFunctionP7_object(1)
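If you do want to keep calling the library through plain CDLL, another option (my own sketch, not part of the answer above) is to reacquire the GIL inside the function with the CPython PyGILState API, which also makes the Py_Initialize()/Py_Finalize() pair unnecessary:
#include <Python.h>
#include <arrow/python/pyarrow.h>
#include "arrow/python/init.h"
#include <iostream>

void MyFunction(PyObject * obj)
{
    // ctypes.CDLL released the GIL before the call; take it back before
    // touching any CPython or Arrow initialization routines.
    PyGILState_STATE gstate = PyGILState_Ensure();
    if (arrow_init_numpy() != 0 || arrow::py::import_pyarrow() != 0) {
        std::cout << "problem initializing pyarrow" << std::endl;
    } else {
        std::cout << "pyarrow ready" << std::endl;
    }
    PyGILState_Release(gstate);  // hand the GIL back before returning to ctypes
}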

How do you get the stream associated with a thrust execution policy?

I want to be able to get the stream-id which is associated with an execution policy in thrust. I am trying to access this function.
I have tried this :
cudaStream_t stream = 0;
auto policy = thrust::cuda::par.on(stream);
cudaStream_t str = stream(policy);
but I am getting a compilation error :
stream.cu(7): error: expression preceding parentheses of apparent call must have (pointer-to-) function type
Could I get some ideas on how to do this?
"I am trying to access this function." Trying to directly use e.g. things in detail are part of the implementation and may change from one version to the next. To wit: the file you are referring to does not even exist in the the current thrust distributed with CUDA 10.
However, this seems to work for me:
$ cat t354.cu
#include <thrust/execution_policy.h>
#include <iostream>
#include <cstring>
int main(){
    cudaStream_t mystream;
    cudaStreamCreate(&mystream);
    auto policy = thrust::cuda::par.on(mystream);
    cudaStream_t str = stream(policy);
    for (int i = 0; i < sizeof(cudaStream_t); i++)
        if ( *(reinterpret_cast<unsigned char *>(&mystream)+i) != *(reinterpret_cast<unsigned char *>(&str)+i)) {
            std::cout << "mismatch" << std::endl;
            return -1;
        }
    std::cout << "match" << std::endl;
}
$ nvcc -std=c++11 -o t354 t354.cu
$ cuda-memcheck ./t354
========= CUDA-MEMCHECK
match
========= ERROR SUMMARY: 0 errors
$
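As a side note (my own simplification, not part of the answer above): cudaStream_t is an opaque pointer type in current CUDA toolkits, so the byte-by-byte comparison can be replaced by a direct equality test:
cudaStream_t str = stream(policy);
if (str == mystream)
    std::cout << "match" << std::endl;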

cuda & rdc & thrust in multiple shared objects results in SIGSEV in registerEntryFunction

I'm trying to run relocatable device code in two shared libraries, both using CUDA Thrust. Everything runs fine if I stop using thrust in kernel.cu, which is not an option.
edit: The program works too if rdc is disabled. Not an option for me either.
It compiles fine but stops with a segfault when run. gdb tells me this:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000422cc8 in cudart::globalState::registerEntryFunction(void**, char const*, char*, char const*, int, uint3*, uint3*, dim3*, dim3*, int*) ()
(cuda-gdb) bt
#0 0x0000000000422cc8 in cudart::globalState::registerEntryFunction(void**, char const*, char*, char const*, int, uint3*, uint3*, dim3*, dim3*, int*) ()
#1 0x000000000040876c in __cudaRegisterFunction ()
#2 0x0000000000402b58 in __nv_cudaEntityRegisterCallback(void**) ()
#3 0x00007ffff75051a3 in __cudaRegisterLinkedBinary(__fatBinC_Wrapper_t const*, void (*)(void**), void*) ()
from /home/mindoms/rdctestmcsimple/libkernel.so
#4 0x00007ffff75050b1 in __cudaRegisterLinkedBinary_66_tmpxft_00007a5f_00000000_16_cuda_device_runtime_compute_52_cpp1_ii_8b1a5d37 () from /home/user/rdctestmcsimple/libkernel.so
#5 0x000000000045285d in __libc_csu_init ()
#6 0x00007ffff65ea50f in __libc_start_main () from /lib64/libc.so.6
Here is my stripped down example (using cmake) that shows the error.
main.cpp:
#include "kernel.cuh"
#include "kernel2.cuh"
int main(){
    Kernel k;
    k.callKernel();
    Kernel2 k2;
    k2.callKernel2();
}
kernel.cuh:
#ifndef __KERNEL_CUH__
#define __KERNEL_CUH__
class Kernel{
public:
    void callKernel();
};
#endif
kernel.cu:
#include "kernel.cuh"
#include <stdio.h>
#include <iostream>
#include <thrust/device_vector.h>
__global__
void thekernel(int *data){
    if (threadIdx.x == 0)
        printf("the kernel says hello\n");
    data[threadIdx.x] = threadIdx.x * 2;
}

void Kernel::callKernel(){
    thrust::device_vector<int> D2;
    D2.resize(11);
    int * raw_ptr = thrust::raw_pointer_cast(&D2[0]);
    printf("Kernel::callKernel called\n");
    thekernel <<< 1, 10 >>> (raw_ptr);
    cudaThreadSynchronize();
    cudaError_t code = cudaGetLastError();
    if (code != cudaSuccess) {
        std::cout << "Cuda error: " << cudaGetErrorString(code) << " after callKernel!" << std::endl;
    }
    for (int i = 0; i < D2.size(); i++)
        std::cout << "Kernel D[" << i << "]=" << D2[i] << std::endl;
}
kernel2.cuh:
#ifndef __KERNEL2_CUH__
#define __KERNEL2_CUH__
class Kernel2{
public:
    void callKernel2();
};
#endif
kernel2.cu:
#include "kernel2.cuh"
#include <stdio.h>
#include <iostream>
#include <thrust/device_vector.h>
__global__
void thekernel2(int *data2){
    if (threadIdx.x == 0)
        printf("the kernel2 says hello\n");
    data2[threadIdx.x] = threadIdx.x * 2;
}

void Kernel2::callKernel2(){
    thrust::device_vector<int> D;
    D.resize(11);
    int * raw_ptr = thrust::raw_pointer_cast(&D[0]);
    printf("Kernel2::callKernel2 called\n");
    thekernel2 <<< 1, 10 >>> (raw_ptr);
    cudaThreadSynchronize();
    cudaError_t code = cudaGetLastError();
    if (code != cudaSuccess) {
        std::cout << "Cuda error: " << cudaGetErrorString(code) << " after callKernel2!" << std::endl;
    }
    for (int i = 0; i < D.size(); i++)
        std::cout << "Kernel2 D[" << i << "]=" << D[i] << std::endl;
}
The cmake file below was used originally, but I get the same problem when I compile "by hand":
nvcc -arch=sm_35 -Xcompiler -fPIC -dc kernel2.cu
nvcc -arch=sm_35 -shared -Xcompiler -fPIC kernel2.o -o libkernel2.so
nvcc -arch=sm_35 -Xcompiler -fPIC -dc kernel.cu
nvcc -arch=sm_35 -shared -Xcompiler -fPIC kernel.o -o libkernel.so
g++ -o main main.cpp libkernel.so libkernel2.so -L/opt/cuda/current/lib64
Adding -cudart shared to every nvcc call as suggested somewhere results in a different error:
warning: Cuda API error detected: cudaFuncGetAttributes returned (0x8)
terminate called after throwing an instance of 'thrust::system::system_error'
what(): function_attributes(): after cudaFuncGetAttributes: invalid device function
Program received signal SIGABRT, Aborted.
0x000000313c432625 in raise () from /lib64/libc.so.6
(cuda-gdb) bt
#0 0x000000313c432625 in raise () from /lib64/libc.so.6
#1 0x000000313c433e05 in abort () from /lib64/libc.so.6
#2 0x00000031430bea7d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib64/libstdc++.so.6
#3 0x00000031430bcbd6 in std::set_unexpected(void (*)()) () from /usr/lib64/libstdc++.so.6
#4 0x00000031430bcc03 in std::terminate() () from /usr/lib64/libstdc++.so.6
#5 0x00000031430bcc86 in __cxa_rethrow () from /usr/lib64/libstdc++.so.6
#6 0x00007ffff7d600eb in thrust::detail::vector_base<int, thrust::device_malloc_allocator<int> >::append(unsigned long) () from ./libkernel.so
#7 0x00007ffff7d5f740 in thrust::detail::vector_base<int, thrust::device_malloc_allocator<int> >::resize(unsigned long) () from ./libkernel.so
#8 0x00007ffff7d5b19a in Kernel::callKernel() () from ./libkernel.so
#9 0x00000000004006f8 in main ()
CMakeLists.txt: Please adjust to your environment
cmake_minimum_required(VERSION 2.6.2)
project(Cuda-project)
set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/CMake/cuda" ${CMAKE_MODULE_PATH})
SET(CUDA_TOOLKIT_ROOT_DIR "/opt/cuda/current")
SET(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -gencode arch=compute_52,code=sm_52)
find_package(CUDA REQUIRED)
link_directories(${CUDA_TOOLKIT_ROOT_DIR}/lib64)
set(CUDA_SEPARABLE_COMPILATION ON)
set(BUILD_SHARED_LIBS ON)
list(APPEND CUDA_NVCC_FLAGS -Xcompiler -fPIC)
CUDA_ADD_LIBRARY(kernel
kernel.cu
)
CUDA_ADD_LIBRARY(kernel2
kernel2.cu
)
cuda_add_executable(rdctest main.cpp)
TARGET_LINK_LIBRARIES(rdctest kernel kernel2 cudadevrt)
About my system:
Fedora 23
kernel: 4.4.2-301.fc23.x86_64
Nvidia Driver: 361.28
Nvidia Toolkit: 7.5.18
g++: g++ (GCC) 5.3.1 20151207 (Red Hat 5.3.1-2)
Reproduced on:
CentOS release 6.7 (Final)
Kernel: 2.6.32-573.8.1.el6.x86_64
Nvidia Driver: 352.55
Nvidia Toolkit: 7.5.18
g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)
glibc 2.12
cmake 3.5
Apparently, this has something to do with what cuda runtime is used: shared or static.
I slightly modified your example: Instead of building two shared libraries and linking them to the executable individually, I create two static libraries that are linked together to one shared library, and that one is linked to the executable.
Also, here is an updated CMake file that uses the new (>= 3.8) native CUDA language support.
cmake_minimum_required(VERSION 3.8)
project (CudaSharedThrust CXX CUDA)
string(APPEND CMAKE_CUDA_FLAGS " -gencode arch=compute_61,code=compute_61")
if(BUILD_SHARED_LIBS)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif()
add_library(kernel STATIC kernel.cu)
set_target_properties(kernel PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
add_library(kernel2 STATIC kernel2.cu)
set_target_properties(kernel2 PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
add_library(allkernels empty.cu) # empty.cu is an empty file
set_target_properties(allkernels PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
target_link_libraries(allkernels kernel kernel2)
add_executable(rdctest main.cpp)
set_target_properties(rdctest PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
target_link_libraries(rdctest allkernels)
Building this without any CMake flags (static build), the build succeeds and the program works.
Building with -DBUILD_SHARED_LIBS=ON, the program compiles, but it crashes with the same error as yours.
Building with
cmake .. -DBUILD_SHARED_LIBS=ON -DCMAKE_CUDA_FLAGS:STRING="--cudart shared"
compiles, and actually makes it run! So for some reason, the shared CUDA runtime is required for this sort of thing.
Also note that the step from 2 SOs -> 2 static libs in 1 SO was necessary, because otherwise the program would crash with a thrust::system::system_error.
This, however, is expected, because NVCC actually ignores shared object files during device linking: http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#libraries
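For anyone not using CMake, the equivalent manual build would look roughly like the following (my own sketch of what the CMake file above does; the -arch value and the CUDA library path are assumptions you will need to adapt):
nvcc -arch=sm_52 -Xcompiler -fPIC -dc kernel.cu -o kernel.o
nvcc -arch=sm_52 -Xcompiler -fPIC -dc kernel2.cu -o kernel2.o
# device-link BOTH relocatable objects together, then put everything into ONE shared library
nvcc -arch=sm_52 -Xcompiler -fPIC -dlink kernel.o kernel2.o -o allkernels_dlink.o
nvcc -arch=sm_52 -shared -Xcompiler -fPIC --cudart shared kernel.o kernel2.o allkernels_dlink.o -o liballkernels.so
g++ main.cpp -o rdctest -L. -lallkernels -L/opt/cuda/current/lib64 -lcudart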

CUDA with C/C++ Compilation fails

I am trying to integrate CUDA code with my existing C++ application. As instructed on some web site, I need to have a "file.cu" in which I have a wrapper function that does the memory allocation on the GPU and launches the kernel. I followed that advice, but I am not able to compile the code now.
file.cu
#include <cuda.h>
#include <stdio.h>
void preComputeCorrelation_gpu( int * d )
{
    //I shall write the kernel later once I am confirmed that CUDA code works
    cudaDeviceProp prop;
    cudaGetDeviceProperties( &prop, 0 );
    printf( "name = %s\n", prop.name );
}
main.cpp
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cuda.h>
#define __CUDA_SUPPORT__
#ifdef __CUDA_SUPPORT__
// Definition to be found in "cudaWrap.cu"
extern void preComputeCorrelation_gpu( int * d );
#endif

int main()
{
    //code to read d from the file and other initialization
    int * d;
    .
    .
#ifdef __CUDA_SUPPORT__
    fprintf( stderr, "GPU Computation starts" );
    // Definition to be found in "cudaWrap.cu"
    preComputeCorrelation_gpu( d );
#else
    fprintf( stderr, "CPU Computation starts" );
    preComputeCorrelation( d );
#endif
    .
    .
    //more code
    return 0;
}
Now, I use the following commands to compile the code:
$ nvcc -c cudaWrap.cu
$ g++ -I /usr/local/cuda-5.0/include -L /usr/local/cuda-5.0/lib -o GA_omp GA_dev_omp.cpp main_omp.cpp data_stats.cpp cudaWrap.o
The 1st command works, but the 2nd command fails with the following message:
cudaWrap.o: In function `preComputeCorrelation_gpu(DataSet*)':
tmpxft_00001061_00000000-3_cudaWrap.cudafe1.cpp:(.text+0x2f): undefined reference to `cudaGetDeviceProperties'
cudaWrap.o: In function `__cudaUnregisterBinaryUtil()':
tmpxft_00001061_00000000-3_cudaWrap.cudafe1.cpp:(.text+0x6b): undefined reference to `__cudaUnregisterFatBinary'
cudaWrap.o: In function `__sti____cudaRegisterAll_43_tmpxft_00001061_00000000_6_cudaWrap_cpp1_ii_f8a043c5()':
tmpxft_00001061_00000000-3_cudaWrap.cudafe1.cpp:(.text+0x8c): undefined reference to `__cudaRegisterFatBinary'
collect2: ld returned 1 exit status
How do I sort this out?
The solution to this problem is that the linking of the ordinary C++ code and the CUDA code needs to be done against libcudart.so.
Compilation should look something like:
$ nvcc -c cudaWrap.cu
$ g++ -I /usr/local/cuda-5.0/include -L /usr/local/cuda-5.0/lib -o GA_omp GA_dev_omp.cpp main_omp.cpp data_stats.cpp cudaWrap.o -lcudart
Note that -lcudart goes after the object files: with GNU ld, a library must appear after the objects that reference its symbols.
In this case, cudaWrap.cu contains the cuda code. main_omp.cpp contains the main() and there are some other application files that need compiling too.
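Alternatively (just a sketch, not part of the original answer), you can let nvcc drive the final link; it passes the host .cpp files through to the host compiler and adds the CUDA runtime library automatically:
$ nvcc -c cudaWrap.cu
$ nvcc -o GA_omp GA_dev_omp.cpp main_omp.cpp data_stats.cpp cudaWrap.o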

"unable to include mysql.h" in C program [duplicate]

Possible Duplicate:
How to make #include <mysql.h> work?
I need to connect C and MySQL.
This is my program:
#include <stdio.h>
#include <mysql.h>
#define host "localhost"
#define username "root"
#define password "viswa"
#define database "dbase"
MYSQL *conn;

int main()
{
    MYSQL_RES *res_set;
    MYSQL_ROW row;

    conn = mysql_init(NULL);
    if( conn == NULL )
    {
        printf("Failed to initiate MySQL\n");
        return 1;
    }
    if( ! mysql_real_connect(conn,host,username,password,database,0,NULL,0) )
    {
        printf( "Error connecting to database: %s\n", mysql_error(conn));
        return 1;
    }

    unsigned int i;
    mysql_query(conn,"SELECT name, email, password FROM users");
    res_set = mysql_store_result(conn);
    unsigned int numrows = mysql_num_rows(res_set);
    unsigned int num_fields = mysql_num_fields(res_set);

    while ((row = mysql_fetch_row(res_set)) != NULL)
    {
        for(i = 0; i < num_fields; i++)
        {
            printf("%s\t", row[i] ? row[i] : "NULL");
        }
        printf("\n");
    }

    mysql_close(conn);
    return 0;
}
I get the error "unable to include mysql.h".
I am using Windows 7, Turbo C, and MySQL. I downloaded mysql-connector-c-noinstall-6.0.2-win32-vs2005, but I don't know how to include it.
Wrong syntax. #include is a C preprocessor directive, not a statement (so it should not end with a semicolon). You should use
#include <mysql.h>
and you may need instead to have
#include <mysql/mysql.h>
or to pass -I /some/dir options to your compiler (with /some/dir replaced by the directory containing the mysql.h header).
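For example (a sketch only; the include path below is a guess based on the connector archive you mentioned, so adjust it to wherever you unpacked it):
gcc -Wall -g -c myprog.c -I "C:/mysql-connector-c-noinstall-6.0.2-win32/include"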
Likewise, your #define directives should very probably not end with a semicolon; you may need
#define username "root"
#define password "viswa"
#define database "dbase"
I strongly suggest reading a good book on C programming. You may want to examine the preprocessed form of your source code; when using gcc you could invoke it as gcc -C -E yoursource.c to get the preprocessed form.
I also strongly recommend enabling warnings and debugging info (e.g. gcc -Wall -g for GCC). Find out how your specific compiler should be used. Learn also how to use your debugger (e.g. gdb). Study also existing C programs (notably free software).
You should learn how to configure your compiler to use extra include directories, and to link extra libraries.
N.B. With a Linux distribution, you'll just have to install the appropriate packages and perhaps use mysql_config inside your Makefile (of course you'll need appropriate compiler and linker flags), perhaps with lines like
CFLAGS += -g -Wall $(shell mysql_config --cflags)
LIBES += $(shell mysql_config --libs)
added to your Makefile.
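For a one-off build without a Makefile (a sketch; it assumes the MySQL client development package, and therefore mysql_config, is installed):
gcc -Wall -g $(mysql_config --cflags) myprog.c $(mysql_config --libs) -o myprog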