Use of .a and .so files in ctypes

I have a .a file and a .so file from a C program which I want to use in my ctypes Python code. I need some help on which one to use and why.

(.a) files are archive libraries and are statically linked, so if there is any change in the library you need to recompile and relink your program.
(.so) files are shared object files and are linked at runtime, so if there is a change in the library you don't need to recompile and rebuild your program.
For ctypes, you need to use the .so file.
Here is a good reference: Python Standard Library
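As a minimal sketch (the library path and the function name below are made up, not from your project), loading a .so with ctypes looks like this:
# hypothetical example: load a shared library and call a C function from it
import ctypes

lib = ctypes.CDLL("./libexample.so")                      # path to your .so file
lib.add_numbers.argtypes = (ctypes.c_int, ctypes.c_int)   # assumed C signature: int add_numbers(int, int)
lib.add_numbers.restype = ctypes.c_int
print(lib.add_numbers(2, 3))                              # prints 5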

Is there any way to specify a minimum Cython version in a pyx file?

Some pyx files require advanced Cython features and some do not, so different pyx files have different minimum Cython version requirements. Is there any mechanism by which we can tell cythonize to throw an error if the installed version does not meet the requirement when it processes a pyx file?
We have many pyx files we would like to reuse, and a centralized way of managing the version requirement would obviously be clumsy.
The recommendation is to generate the C files with Cython once, using an appropriate Cython version, then commit and reuse those C files for compilation without re-cythonizing.
That way, the version of Cython used is fixed, and so are the resulting C files, which can then be built without Cython installed.
Even with a minimum version specified, there is no guarantee that a later version of Cython will produce the same C code, or that the later version will work as expected.
From the documentation:
It is strongly recommended that you distribute the generated .c files as well as your Cython sources, so that users can install your module without needing to have Cython available.
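If you still want a hard version check, one workaround (a sketch, not a built-in Cython feature as far as I know; the module and file names are made up) is to enforce it in setup.py and fall back to the distributed C file when Cython is missing or too old:
# setup.py -- hypothetical sketch: require a minimum Cython version, else use the shipped C file
import os
from setuptools import setup, Extension

MIN_CYTHON = (0, 29)   # assumed requirement

try:
    import Cython
    parts = Cython.__version__.split(".")[:2]
    have_cython = tuple(int(p) for p in parts) >= MIN_CYTHON
except ImportError:
    have_cython = False

if have_cython:
    from Cython.Build import cythonize
    extensions = cythonize([Extension("mymod", ["mymod.pyx"])])
elif os.path.exists("mymod.c"):
    extensions = [Extension("mymod", ["mymod.c"])]   # pre-generated, committed C file
else:
    raise RuntimeError("Cython >= 0.29 is required to build mymod from source")

setup(name="mymod", ext_modules=extensions)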

packages from tcllib not found

I have a strange problem. I am using Fedora 20 and installed tcllib on my system.
But if I use package require uri, for example, I get a "package not found" error in response.
Does anyone know what the issue is here, or how to determine whether tcllib has been added to the package index?
Tcl looks up packages in two ways: with auto_path and with tcl::tm::path.
1. The auto_path — the traditional mechanism.
When you do package require, the package manager looks to see if the package is already present, or if instructions for obtaining the package from the filesystem are present. If neither of these is true, it asks the package unknown handler to load it (strictly, it's the handler installed using the package unknown command). The default implementation of that handler loads packages by looking for pkgIndex.tcl files in the directories on your auto_path, and their immediate subdirectories.
auto_path is a global variable holding a Tcl list of directories to search. You can probably just lappend the right place to it. pkgIndex.tcl is a Tcl script that describes how to make the package available, which it does by calling an appropriate package ifneeded command.
Once a package is required that isn't present but its instructions for obtaining it are, Tcl will simply eval those instructions: they're just a plain old script (one that usually just calls source and/or load to do the grunt work). The actual loading of the package is thus deferred until it is first required.
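For example (the path and version below are illustrative, not taken from your installation):
# add the directory containing tcllib's pkgIndex.tcl to the search path (path is illustrative)
lappend auto_path /usr/share/tcl8.5/tcllib
package require uri
# inside a pkgIndex.tcl, the entries look roughly like this (version is illustrative):
package ifneeded uri 1.2.1 [list source [file join $dir uri.tcl]]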
2. Tcl modules — the new (in 8.5) mechanism.
The Tcl module system uses a different search system managed with the tcl::tm::path command. The tcl::tm::path list subcommand will tell you where it looks (a huge list, to be honest) and you can use the tcl::tm::path add subcommand to extend the path with extra locations to search. Tcl modules have the entire package placed into a single file (with the extension .tm) and have a structured name so that they can avoid having a separate pkgIndex.tcl file; the TM loader can synthesise the package ifneeded calls from the filename itself (in all cases, this is done with source; there are some clever ways to package binary code inside files so they can be loaded, but they're far outside the scope of this answer).
At that point, you're back to the source of the file when the package is actually required; that part is the same whether you're using a module or a traditional package.
The module system is much faster than the traditional search mechanism since it doesn't need to open any files to figure out what to do: it just uses glob with the right options. It is, however, less flexible in how things can be packaged: multi-file packages (e.g., almost anything you make yourself) can't be made into modules (well, not without extra work).
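A small illustration (the directory and module names are made up):
# register an extra module directory, then require a module stored there
tcl::tm::path add /home/me/tcl/modules
# a module named foo::bar, version 1.0, would live in that directory as foo/bar-1.0.tm
package require foo::bar 1.0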

Tcl version change issue from 8.4 to 8.5.12

I have a problem with changing the Tcl version from 8.4 to 8.5.12 on a RHEL machine. Our product uses TclDevKit components like Tcldom, Tclxml, etc. We are also using Incr Tcl (Itcl). I am trying to create a pkgIndex.tcl file for Itcl so that Itcl can be found when that package is required, as follows:
package ifneeded Itcl 3.4 [list load [file join $dir "libitcl-O.a"] Itcl ]
but when I use
package require Itcl
I get this error: couldn't load file "/somepath/itcl/lib/libitcl-O.a": /somepath/lib/libitcl-O.a: invalid ELF header
It seems I can't load files with a .a extension, but the same thing is done with the previous version of Tcl (8.4) and it works fine. I googled a lot and read a lot of documentation, but it doesn't help me get any further.
Please help.
Thanks in advance
Libraries come in two general sorts, static libraries and shared libraries. On Linux, static libraries have the extension .a by default, and shared libraries have the extension .so (plus optionally some numbers to indicate the version). Only shared libraries will work with Tcl's load command and even then they have to be designed to work that way (with an appropriate Foobar_Init function, as documented).
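So, assuming a shared build of Itcl is available (the file name below is illustrative), the pkgIndex.tcl entry should point at the .so rather than the .a:
package ifneeded Itcl 3.4 [list load [file join $dir libitcl3.4.so] Itcl]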
When dealing with stub-exporting extensions (fairly rare) or Tcl and Tk themselves, the linking is done in two parts. There's a stub library, normally called somethingstub.a, and there's a main shared library. The main shared library contains the implementation of the code; all that is in the stub library is an ABI/API adaptor so that you can avoid binding your code to an explicit version of the implementation library. By building an extension stub-enabled and linking against the stub library, you gain the ability to have your extension loaded into future versions of Tcl/Tk without any recompilation or relinking steps at all. (You also become able to put the extension inside a starkit for deployment, as those use a rather unusual way of managing shared libraries that the stub mechanism conceals from you.)
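For completeness, a stub-enabled extension is typically compiled and linked along these lines on Linux (the include path, source name and library version are assumptions about your system):
gcc -shared -fPIC -DUSE_TCL_STUBS -I/usr/include/tcl8.5 -o libmyext.so myext.c -ltclstub8.5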

How to include only sections of Assembly include files

I have created separate include files for general-purpose use in my assembly programs (such as string operations, formatted input, etc.).
When I include those files I notice that all of the functions get included in the target binary file.
Is there a way I can include only the functions that are actually used (like when using include files with C/C++ library files)?
I'm using MASM and targeting x86.
To extract separate functions from an object file, the linker needs to know where each one starts and where it ends. It can't reliably tell that from the assembly, so you need to help it.
A common way is to put each function into a separate file and assemble them like that; this way the linker can include or exclude each object file independently. This is the simplest way and works with most assemblers, not just MASM, so I'd recommend trying it.
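As a sketch (the file and library names are made up), that means assembling each single-function file separately and collecting the objects into a static library; the linker then pulls in only the objects that are actually referenced:
ml /c /coff str_copy.asm
ml /c /coff str_len.asm
lib /OUT:strutil.lib str_copy.obj str_len.obj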
Another way could be to put each function into a separate segment; the MS linker can exclude unused segments but only if they're marked as so-called "COMDAT" (communal data). Unfortunately, MASM does not support setting this attribute.
There has been some work on adding this info to the OBJ file as a post-processing step, but unfortunately the archive with the tool seems to be gone from the Internet:
Function level linking with MASM
Additional links:
How to achieve "function level linking" with MASM? (includes a tool for semi-automated splitting into several files).
flat assembler - COMDAT support
MSDN forums - Comdat
JWASM:
Support for COFF COMDATs
The last link mentions "Support for COMDAT is added in jwasm v2.10."

Building GPL C program with CUDA module

I am attempting to modify a GPL program written in C. My goal is to replace one method with a CUDA implementation, which means I need to compile with nvcc instead of gcc. I need help building the project - not implementing it (You don't need to know anything about CUDA C to help, I don't think).
This is my first time trying to change a C project of moderate complexity that involves a .configure and Makefile. Honestly, this is my first time doing anything in C in a long time, including anything involving gcc or g++, so I'm pretty lost.
I'm not super interested in learning configure and Makefiles - this is more of an experiment. I would like to see if the project implementation goes well before spending time creating a proper build script. (Not unwilling to learn as necessary, just trying to give an idea of the scope).
With that said, what are my options for building this project? I have a myriad of questions...
I tried adding "CC=nvcc" to the configure.in file after AC_PROG_CC. This appeared to work - output from running configure and make showed nvcc as the compiler. However, make failed to compile the source file containing the CUDA kernel, not recognizing the CUDA-specific syntax. I don't know why; I was hoping this would just work.
Is it possible to compile a source file with nvcc, and then include it at the linking step in the make process for the main program? If so, how? (This question might not make sense - I'm really rusty at this)
What's the correct way to do this?
Is there a quick and dirty way I could use for testing purposes?
Is there some secret tool everyone uses to setup and understand these configure and Makefiles? This is even worse than the Apache Ant scripts I'm used to (Yeah, I'm out of my realm)
You don't need to compile everything with nvcc. Your guess that you can just compile your CUDA code with NVCC and leave everything else as it is (except linking) is correct. Here's the approach I would use to start.
Add one new header (e.g. myCudaImplementation.h) and one new source file (with a .cu extension, e.g. myCudaImplementation.cu). The source file contains your kernel implementation as well as a (host) C wrapper function that invokes the kernel with the appropriate execution configuration (aka <<<>>>) and arguments. The header file contains the prototype for the C wrapper function. Let's call that wrapper function runCudaImplementation()
I would also provide another host C function in the source file (with prototype in the header) that queries and configures the GPU devices present and returns true if it is successful, false if not. Let's call this function configureCudaDevice().
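For illustration only, the two files might look roughly like this (the kernel body, sizes, and error handling are placeholders I made up):
// myCudaImplementation.h -- prototypes for the two C wrapper functions
#ifndef MY_CUDA_IMPLEMENTATION_H
#define MY_CUDA_IMPLEMENTATION_H
#include <stdbool.h>
#ifdef __cplusplus
extern "C" {
#endif
bool configureCudaDevice(void);
void runCudaImplementation(void);
#ifdef __cplusplus
}
#endif
#endif

// myCudaImplementation.cu -- kernel plus host wrappers, compiled with nvcc
#include <cuda_runtime.h>
#include "myCudaImplementation.h"

__global__ void myKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;                       // placeholder work
}

extern "C" bool configureCudaDevice(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return false;                                 // no usable CUDA device found
    return cudaSetDevice(0) == cudaSuccess;
}

extern "C" void runCudaImplementation(void)
{
    const int n = 1024;                               // placeholder problem size
    float *d_data = NULL;
    if (cudaMalloc((void **)&d_data, n * sizeof(float)) != cudaSuccess) return;
    cudaMemset(d_data, 0, n * sizeof(float));
    myKernel<<<(n + 255) / 256, 256>>>(d_data, n);    // execution configuration
    cudaDeviceSynchronize();
    cudaFree(d_data);
}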
Now in your original C code, where you would normally call your CPU implementation you can do this.
// must include your new header
#include "myCudaImplementation.h"
// at app initialization
// store this variable somewhere you can access it later
bool deviceConfigured = configureCudaDevice();
...
// then later, at run time
if (deviceConfigured)
runCudaImplementation();
else
runCpuImplementation(); // run the original code
Now, since you put all your CUDA code in a new .cu file, you only have to compile that file with nvcc. Everything else stays the same, except that you have to link in the object file that nvcc outputs. e.g.
nvcc -c -o myCudaImplementation.o myCudaImplementation.cu <other necessary arguments>
Then add myCudaImplementation.o to your link line (something like:)
g++ -o myApp myCudaImplementation.o
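Depending on how CUDA is installed, the host link step usually also needs the CUDA runtime library; for example (main.o stands in for your other object files, and the install path is an assumption):
g++ -o myApp main.o myCudaImplementation.o -L/usr/local/cuda/lib64 -lcudart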
Now, if you have a complex app to work with that uses configure and has a complex makefile already, it may be more involved than the above, but this is the general approach. Bottom line is you don't want to compile all of your source files with nvcc, just the .cu ones. Use your host compiler for everything else.
I'm not expert with configure so can't really help there. You may be able to run configure to generate a makefile, and then edit that makefile -- it won't be a general solution, but it will get you started.
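As a very rough idea of the kind of edit involved (the variable names depend entirely on the generated makefile, so treat this as a guess):
NVCC = nvcc
myCudaImplementation.o: myCudaImplementation.cu
	$(NVCC) -c -o $@ $<
# then add myCudaImplementation.o to the program's object list and -lcudart to its libraries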
Note that in some cases you may also need to separate compilation of your .cu files from linking them. In this case you need to use NVCC's separate compilation and linking functionality, for which this blog post might be helpful.
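A sketch of that separate-compilation flow (file names are illustrative):
nvcc -dc -o part1.o part1.cu                       # compile with relocatable device code
nvcc -dc -o part2.o part2.cu
nvcc -dlink -o device_link.o part1.o part2.o       # device-side link step
g++ -o myApp main.o part1.o part2.o device_link.o -L/usr/local/cuda/lib64 -lcudadevrt -lcudart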