(venv) C:\Users\DELL\Downloads\My Projects\tf-openpose>pip install swig
Collecting swig
Could not find a version that satisfies the requirement swig (from versions: )
No matching distribution found for swig
(venv) C:\Users\DELL\Downloads\My Projects\tf-openpose>
What's wrong here?
SWIG isn't a Python package, but a code generator that can be used to generate C/C++ extension code for multiple languages, including Python. Download it from http://www.swig.org/.
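Once SWIG itself is installed and on your PATH, you don't have to invoke it by hand: distutils/setuptools can run it for you when an Extension lists a .i interface file among its sources. A minimal sketch, where the names _example, example.i and example.c are illustrative:

# setup.py -- assumes the swig executable is already on PATH
from setuptools import setup, Extension

setup(
    name="example",
    ext_modules=[
        Extension(
            "_example",                         # SWIG convention: leading underscore
            sources=["example.i", "example.c"],
            swig_opts=["-python"],              # build_ext runs swig on the .i file
        )
    ],
)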
Some pyx files require advanced Cython features and some do not, so different pyx files have different minimum Cython version requirements. Is there any mechanism to tell cythonize to throw an error when it processes a pyx file if the installed Cython version does not meet that file's requirement?
We have many pyx files we would like to reuse, and managing the version requirements in one central place is obviously clumsy.
The recommendation is to generate the C files with Cython once, with an appropriate Cython version, then commit and reuse those C files for compilation without re-cythonizing.
That way, the version of Cython used is fixed, and so are the generated C files, which can then be compiled without Cython installed.
Even with a minimum version specified, there is no guarantee that a later version of Cython will produce the same C code, or that the later version will work as expected.
From the documentation:
It is strongly recommended that you distribute the generated .c files as well as your Cython sources, so that users can install your module without needing to have Cython available.
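A common way to implement this in setup.py is to cythonize the pyx sources only when Cython is available and fall back to the committed .c files otherwise. A minimal sketch, with an illustrative module name:

# setup.py -- build from committed C files when Cython is absent
from setuptools import setup, Extension

try:
    from Cython.Build import cythonize
    USE_CYTHON = True
except ImportError:
    USE_CYTHON = False

# Use the pyx sources when Cython is present, the committed C files otherwise.
ext = ".pyx" if USE_CYTHON else ".c"
extensions = [Extension("mymodule", ["mymodule" + ext])]

if USE_CYTHON:
    extensions = cythonize(extensions)

setup(name="mypkg", ext_modules=extensions)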
How do I use the C++ files generated by the Chisel compiler? The documentation is not clear on this; is there any other source you can point me to? I am really clueless here, especially since I don't know C++.
Say, for example, that for a simple adder circuit Adder.scala I get the following files related to the emulator:
Adder.cpp, Adder.h, Adder-emulator.cpp, emul_api.h, emulator.h and sim_api.h.
which I can compile by running
g++ Adder.cpp Adder-emulator.cpp
This generates the output a.out. Running a.out in the terminal generates three more files that I have no clue about:
00003710.cmd, 00003710.in and 00003710.out.
The C++ code is used to build an emulation of your design. You also need to define a tester that will drive the emulation, using poke() to set signal values and peek() or expect() to read them.
You should not be compiling the C++ yourself. If you pass the --genHarness and --test options to Chisel, it will compile the C++ code, build the emulation and run your tester to drive it.
Have a look at the chisel-tutorial code for examples of this process.
I have a Cython extension which I've compiled on Ubuntu 14 and uploaded as an Anaconda package. I'm trying to install the package on another machine running Scientific Linux (6?), which ships with an older version of glibc. When I try to import the module I get an error that looks (something like) this:
./myprogram: /lib/libc.so.6: version `GLIBC_2.14' not found (required by ./myprogram)
When I say "something like", I mean that "myprogram" is actually the .so name of the extension.
From what I understand this error is because I have a newer version of glibc on the build system which has an updated version of the memcpy function.
This page has a good description of the problem, and some rather impractical solutions: http://www.lightofdawn.org/wiki/wiki.cgi/NewAppsOnOldGlibc
There is also a much simpler answer proposed here: How can I link to a specific glibc version?
My question is: how do I apply this solution to my Cython extension? Assuming the __asm__ solution works (as given in the second link), what's the best way to get it into the C generated by Cython?
Also, more generally, how do other modules avoid this issue in the first place? For example, I installed and ran a pre-built copy of numpy without any issues.
This turned out to be quite simple.
Create the following header, glibc_fix.h:
__asm__(".symver memcpy,memcpy@GLIBC_2.2.5");
Then include it by using CFLAGS="-include glibc_fix.h". This can be set as an environment variable, or defined in setup.py.
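For example, one way to define it in setup.py is to set the variable before the build runs, since distutils picks up CFLAGS from the environment. A sketch, with an illustrative module name:

# setup.py
import os
from setuptools import setup
from Cython.Build import cythonize

# Equivalent to CFLAGS="-include glibc_fix.h" on the command line;
# append so any flags already set in the environment are preserved.
os.environ["CFLAGS"] = os.environ.get("CFLAGS", "") + " -include glibc_fix.h"

setup(ext_modules=cythonize(["mymodule.pyx"]))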
Additionally, it turns out numpy doesn't do anything special in this regard. If I compile it myself, it links with the newer version on my system.
I am doing a little programming exercise in Modula-2, using the gm2 compiler on Ubuntu Linux (10.04).
I have gotten some code to compile, but I am unable to import certain modules which, to my understanding, should be included in the compiler distribution.
For example, if I try to import from the TimeDate module
FROM TimeDate IMPORT Time, GetTime;
which is documented here, I get the error:
$ gm2 -flibs=pim -c SortUtil.mod
failed to find definition module TimeDate.def
According to the documentation, the option -flibs=pim should give access to the TimeDate module (which is part of the PIM libraries).
Does anyone have any experience with this compiler? Do I need some extra command-line parameters, or do I need to install some extra packages?
I set up a test system and was able to duplicate your problem. Use "-flibs=pim,logitech" instead; that worked for me and let me compile a basic test app without the error about a missing definition file.
I have a C++ library that I am wrapping with SWIG to make it accessible from Python. It is my understanding (from experience) that when SWIG wraps a C++ library for Python, the C++ library's symbols are placed in a "local" scope upon loading, that is, a scope which does not let future dynamically linked libraries find those symbols.
(I'm getting this definition of "local" from man dlopen(3).)
Is there any way to get SWIG to place these symbols into the "global" scope, such that any future dynamically linked libraries can find them?
You can make Python dlopen shared objects with the RTLD_GLOBAL flag by calling sys.setdlopenflags(), e.g. (on Python 3 the RTLD_* constants live in the os module):
import os, sys
sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
before your module is loaded. (There's a discussion on swig-users about this)
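In practice that means setting the flags just before importing the SWIG-generated module. A minimal sketch, where mywrapper stands in for your module's name:

import os
import sys

old_flags = sys.getdlopenflags()
sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
try:
    import mywrapper  # SWIG-generated extension; name is illustrative
finally:
    sys.setdlopenflags(old_flags)  # restore the default for later imports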