I tried to compile a source file with Ceylon compiler version 1.2 that I had previously compiled successfully with Ceylon compiler version 1.1, and I get the following error messages:
source/com/example/helloworld/module.ceylon:2: error: version '1.1.0' of module 'ceylon.net' was compiled by an incompatible version of the compiler (binary version 7.0 of module is not compatible with binary version 8.0 of this compiler)
import ceylon.net "1.1.0" ;
^
source/com/example/helloworld/module.ceylon:2: error: version '1.1.0' of module 'ceylon.collection' was compiled by an incompatible version of the compiler (binary version 7.0 of module is not compatible with binary version 8.0 of this compiler)
import ceylon.net "1.1.0" ;
^
source/com/example/helloworld/module.ceylon:2: error: version '1.1.0' of module 'ceylon.io' was compiled by an incompatible version of the compiler (binary version 7.0 of module is not compatible with binary version 8.0 of this compiler)
import ceylon.net "1.1.0" ;
^
source/com/example/helloworld/module.ceylon:2: error: version '1.1.0' of module 'ceylon.file' was compiled by an incompatible version of the compiler (binary version 7.0 of module is not compatible with binary version 8.0 of this compiler)
import ceylon.net "1.1.0" ;
I suppose that "... binary version 8.0 ..." in the error message refers to the Java version.
In both attempts to compile (first with Ceylon 1.1, then with 1.2) I used Java version 8, and I don't want to change that back to 7.
Does it help to compile the Ceylon SDK with Java version 8? How can I do that separately from the entire Ceylon distribution?
How can I import the sources of Ceylon SDK into my project and compile it together with my project?
The binary versions in the error message refer to the Ceylon binary versions, which, I guess by unfortunate coincidence, happen to match the current JVM versions.
Ceylon is compatible with both JVM 7 and JVM 8, but Ceylon 1.2.0 programs must use Ceylon 1.2.0 modules; binary compatibility with Ceylon 1.1.0 was not maintained.
The solution here is to simply change the import to import ceylon.net "1.2.0";.
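Concretely, the module descriptor from the error output would end up looking roughly like this (the module's own version string "1.0.0" is a placeholder; any other SDK imports need the same bump to "1.2.0"):
module com.example.helloworld "1.0.0" {
    // SDK imports must match the 1.2.0 compiler (module version above is a placeholder)
    import ceylon.net "1.2.0";
}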
No, that’s actually the binary version of Ceylon, unrelated to Java. Ceylon 1.1 was binary version 7, and Ceylon 1.2 is binary version 8. Unfortunately, we weren’t able to provide binary compatibility between these releases.
You’ll have to use the 1.2.0 SDK modules with Ceylon 1.2.
I've been trying to run some Numba/CUDA code, like this module:
https://github.com/Maghoumi/pytorch-softdtw-cuda/blob/master/soft_dtw_cuda.py
However I run into the following error:
numba.cuda.cudadrv.error.NvvmError: Failed to compile
IR version 1.6 incompatible with current version 2.0
<unnamed>: error: incompatible IR detected. Possible mix of compiler/IR from different releases.
NVVM_ERROR_IR_VERSION_MISMATCH
I guess I installed incompatible versions of some packages, but I have no idea where to start. Which packages are involved?
The underlying reason for this appears to be using CUDA 12.
According to the CUDA 12 release notes:
NVVM IR Update: with CUDA 12.0 we are releasing NVVM IR 2.0 which is
incompatible with NVVM IR 1.x accepted by the libNVVM compiler in
prior CUDA toolkit releases. Users of the libNVVM compiler in CUDA
12.0 toolkit must generate NVVM IR 2.0.
From the error, it would appear that the Numba CUDA backend is generating NVVM IR 1.6, and from the release notes for CUDA 12, NVVM IR 1.6 is no longer supported by the NVVM compiler library supplied in CUDA 12.
In the short term, use CUDA 11.x or earlier. In the longer term, report this as a bug to the Numba developers and get them to update their compiler infrastructure to match the CUDA 12 NVVM requirements.
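For example, in a conda environment one possible way to drop back to an 11.x toolkit is sketched below; the exact version and channel are assumptions, so adjust to whatever 11.x build is available to you:
# install a CUDA 11.x toolkit for Numba to use (11.8 is just an example)
conda install -c conda-forge "cudatoolkit=11.8"
# confirm what Numba now detects
python -c "from numba import cuda; cuda.detect()"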
I am trying to follow the example in
https://llvm.org/docs/CompileCudaWithLLVM.html#invoking-clang
I use Ubuntu 18.04.3 LTS, clang version 9.0.0-2
The device I have is (snippet from the output of deviceQuery):
Detected 1 CUDA Capable device(s)
Device 0: "Quadro P520"
CUDA Driver Version / Runtime Version 10.2 / 10.2
CUDA Capability Major/Minor version number: 6.1
I ran the command:
clang++-9 --verbose --cuda-path=/usr/local/cuda-10.2 axpy.cu -o axpy --cuda-gpu-arch=sm_61 -L/usr/local/cuda-10.2 -lcudart_static -ldl -lrt -pthread
And the output is:
clang version 9.0.0-2~ubuntu18.04.1 (tags/RELEASE_900/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/8
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/7
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/7.4.0
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/8
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/8
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/7
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/7.4.0
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/8
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/7.4.0
Candidate multilib: .;#m64
Selected multilib: .;#m64
Found CUDA installation: /usr/local/cuda-10.2, version unknown
clang: error: cannot find libdevice for sm_61. Provide path to different CUDA installation via --cuda-path, or pass -nocudalib to build without linking with libdevice.
As far as I can tell, libdevice is right where it should be:
~>ls /usr/local/cuda-10.2/nvvm/libdevice/
libdevice.10.bc
What am I doing wrong?
Added Nov 2020:
Following ArtemB's comment, I tried running it with clang++-10, which throws a warning but compiles and runs just fine.
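For reference, that is the same invocation as above with only the compiler binary changed (assuming clang 10 is installed as clang++-10):
clang++-10 --verbose --cuda-path=/usr/local/cuda-10.2 axpy.cu -o axpy --cuda-gpu-arch=sm_61 -L/usr/local/cuda-10.2 -lcudart_static -ldl -lrt -pthread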
Short answer: the CUDA version my driver supports (10.2) is too new for my clang (9.0.0).
Here is the top of the output of nvidia-smi on my machine:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
So my driver indeed supports CUDA 10.2. However, it seems this version is not supported by clang 9.0.0. Indeed, when running the above command with the extra flag -nocudalib, one gets the following response (only the last lines are shown):
In file included from <built-in>:1:
/usr/lib/llvm-9/lib/clang/9.0.0/include/__clang_cuda_runtime_wrapper.h:52:2: error: "Unsupported CUDA version!"
#error "Unsupported CUDA version!"
^
axpy.cu:23:7: error: use of undeclared identifier cudaConfigureCall
axpy<<<1, kDataLen>>>(a, device_x, device_y);
^
2 errors generated when compiling for sm_61.
When inspecting the offending file (the clang CUDA runtime wrapper), one sees the following in lines 48-53:
#include "cuda.h"
#if !defined(CUDA_VERSION)
#error "cuda.h did not define CUDA_VERSION"
#elif CUDA_VERSION < 7000 || CUDA_VERSION > 10010
#error "Unsupported CUDA version!"
#endif
Until recently clang was rather particular about CUDA versions. I've relaxed it a bit lately, so clang-10 is more lenient and will attempt to use a newer CUDA version at feature parity with the latest supported CUDA version (currently 10.1). It will also issue a warning. It does work with CUDA 11.0 well enough to compile TensorFlow.
CUDA 11.1 (and, I believe, 11.0 update 1 on Windows) has dropped the version.txt file from the distribution, and that will again break CUDA compilation with the currently released clang versions. This should be fixed in clang 11.0.1 when it's released (the version match with CUDA is purely coincidental).
I am building a project using OpenCV 3.1 and wxWidgets 3.1. The code I use:
[wxOpenCv Demo1]
I am trying to add a write-frame object, using the function cv::imwrite().
(I changed the C calls to C++, e.g. cvQueryFrame( m_pCapture ) to m_pCapture >> m_CurFrame;)
I get this error:
Undefined symbols for architecture x86_64:
"cv::imwrite(cv::String const&, cv::_InputArray const&,
std::vector<int, std::allocator<int> > const&)", referenced from:
CCamera::SaveFrame() in camera.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Without wxWidgets the OpenCV functions work fine, so it seems this has to do with the combination of wxWidgets and OpenCV.
This works fine with wxwidgets and Opencv:
cv::imshow("tmp",m_CurFrame);
cv::waitKey(4);
// cv::imwrite(Tmp , m_CurFrame);
If I uncomment the last line, I get the error.
OS X 10.10 Yosemite, and I use the default compiler (Apple LLVM 7.0).
I have no idea what to do about this!
I solved the problem (and more) by recompiling wxWidgets 3.1.0 and OpenCV 3.1. I used these links to get it going.
A small guide to compiling wxWidgets and OpenCV against C++11:
Compile wxWidgets 3.1.0: I followed the install.txt for OS X and tweaked the ../configure call with help from this.
I added --enable-debug and changed the macosx version:
../configure --disable-shared --enable-debug --enable-unicode --with-cocoa --with-macosx-version-min=10.7 --with-macosx-sdk=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk CXXFLAGS="-std=c++0x -stdlib=libc++" CPPFLAGS="-stdlib=libc++" LIBS=-lc++
Then, with the help of this page, I built an Xcode project, tweaking a few things:
(wxcocoa.xcodeproj and minimal.xcodeproj, and all new projects)
Add to the Header Search Path: $(WXROOT)/build/osx (to find wx.xcconfig)
Base SDK: latest OS X (10.11)
C language dialect: GNU11 (not sure if this is right)
C++ language dialect: GNU++11 [-std=gnu++11]
C++ Standard Library: libc++ (LLVM C++ standard library with C++ 11 support)
Placed WXROOT under Preferences -> Locations -> Source Trees. Not important, but it seems to be a better location (restart Xcode).
In wxcocoa.xcconfig I changed: MACOSX_DEPLOYMENT_TARGET = 10.10
Somehow I had to change the name of the created library from libwx_osx_cocoa_static.a to lwx_osx_cocoa_static.a (why, I do not know).
I use GNU++11 and thus libc++ to be able to use new functionality like "future".
I then added OpenCV to my newly created wxWidgets Xcode project:
Compile OpenCV following this (search the web for: howto-install-build-and-use-opencv-macosx-10-10).
Make sure that the SDK is the right version (this was my biggest problem), matching the build of wxWidgets.
The compiler settings are the same as for wxWidgets (see above).
(Added:) To do this I added some lines to the CMakeLists.txt in the OpenCV master folder, below the line # OpenCV compiler and linker options:
(I found this trick by searching the web for: OpenCV with C++11 on OS X 10.8.)
message("Setting up Xcode for C++11 with libc++.")
set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LANGUAGE_STANDARD "c++0x")
set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LIBRARY "libc++")
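After adding those lines, regenerating the OpenCV Xcode project is the usual out-of-source CMake run; the folder names below are placeholders for wherever the OpenCV sources live:
cd opencv-master            # the folder whose CMakeLists.txt was edited
mkdir build && cd build
cmake -G Xcode ..           # generates an Xcode project that picks up the C++11/libc++ settings above
# then build the generated OpenCV.xcodeproj in Xcode (or with xcodebuild)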
Then follow this page to update the newly created wxWidgets Xcode project (search the web for: howto-setup-xcode-6-1-to-work-with-opencv-libraries).
This should do the trick! I can now combine wxWidgets, OpenCV, and libc++ (multithreading, "future").
I hope this helps.
Please let me know if you found out more!
The CUDA C Programming Guide Version 4.2 states:
The driver API is implemented in the nvcuda dynamic library which is copied on
the system during the installation of the device driver.
I installed the RC 5.0 devdriver on my Linux box along with SDK 4.2 and 5.0. Right now I am having difficulties finding this library. It's not in (or under) /usr, /lib, or /lib64, nor in any of the SDK lib directories:
CUDA 4.2:
ls /usr/local/cuda-4.2/cuda/lib64/
libcublas.so libcudart.so libcufft.so libcuinj.so libcurand.so libcusparse.so libnpp.so
libcublas.so.4 libcudart.so.4 libcufft.so.4 libcuinj.so.4 libcurand.so.4 libcusparse.so.4 libnpp.so.4
libcublas.so.4.2.9 libcudart.so.4.2.9 libcufft.so.4.2.9 libcuinj.so.4.2.9 libcurand.so.4.2.9 libcusparse.so.4.2.9 libnpp.so.4.2.9
CUDA 5.0:
ls /usr/local/cuda-5.0/cuda/lib64/
libcublas.so libcudart.so libcufft.so libcuinj.so libcurand.so libcusparse.so libnpp.so libnvToolsExt.so
libcublas.so.5.0 libcudart.so.5.0 libcufft.so.5.0 libcuinj.so.5.0 libcurand.so.5.0 libcusparse.so.5.0 libnpp.so.5.0 libnvToolsExt.so.5.0
libcublas.so.5.0.7 libcudart.so.5.0.7 libcufft.so.5.0.7 libcuinj.so.5.0.7 libcurand.so.5.0.7 libcusparse.so.5.0.7 libnpp.so.5.0.7 libnvToolsExt.so.5.0.7
Where is this library installed to?
It's not that the driver API is not included in RC 5.0. I just reinstalled devdriver 4.2 and it's still not in the above-mentioned places.
Found it, but under a different name (libcuda instead of libnvcuda):
/usr/lib/libcuda.so.295.41
This must be a typo/error in the manual.
libcuda is always installed by default to /usr/lib/, and on 64-bit Linux to /usr/lib64.
See also Chapter 5, "Listing of Installed Components", for a list and the locations of the other driver components.
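As a quick sanity check and usage sketch (the paths are the typical defaults but vary by distro and toolkit layout, and the source file name is hypothetical), you can locate the library and link driver API code against it with -lcuda:
# the driver API library is installed by the display driver, not by the toolkit
ls -l /usr/lib/libcuda.so* /usr/lib64/libcuda.so*
# driver API programs (cuInit, cuCtxCreate, ...) link against it with -lcuda
gcc my_driver_api_app.c -I/usr/local/cuda-5.0/cuda/include -lcuda -o my_driver_api_app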
When I try to build the JDBC driver from the source downloaded from here, I get many compilation errors. For example:
The type CallableStatement must implement the inherited abstract method CallableStatement.getCharacterStream(int)
in CallableStatement.java line 57
All these errors indicate, I thought, that the driver targets JDK 1.5, because the specified method wasn't part of the JDBC spec in JDK 1.5.
However, when I tried to build the driver with JDK 1.5, I got errors indicating that JDK 1.6 is required. For example,
The import java.sql.RowIdLifetime cannot be resolved
where RowIdLifetime is a class that wasn't part of JDK 1.5.
So, which one is it? JDK 1.5 or 1.6? Am I missing something when I try to build?
Having read the file connector-j.html that is bundled with the source, it looks like I need both:
If you are building Connector/J 5.1 make sure that you have both JDK 1.6.x installed and an older JDK such as JDK 1.5.x. This is because Connector/J supports both JDBC 3.0 (which was prior to JDK 1.6.x) and JDBC 4.0. Set your JAVA_HOME environment variable to the path of the older JDK installation.
Next time, I'll RTM before posting.
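In practice that means something along these lines before running the Ant build; the JDK paths are placeholders, and the property names for pointing the build at the JDK 1.6 compiler and runtime are taken from typical Connector/J 5.1 build instructions, so verify them against the bundled build.xml/connector-j.html:
# JAVA_HOME points at the older JDK, as connector-j.html instructs
export JAVA_HOME=/opt/jdk1.5.0
# tell the Ant build where the JDK 1.6 compiler and runtime live (check build.xml for the exact property names)
ant -Dcom.mysql.jdbc.java6.javac=/opt/jdk1.6.0/bin/javac \
    -Dcom.mysql.jdbc.java6.rtjar=/opt/jdk1.6.0/jre/lib/rt.jar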