I run into problems when trying to use Caffe with multiple GPUs. When executing the following command, I get the error log shown below:
caffe train -solver $SOLVER -gpu 0,1 2>&1 | tee $LOGGING
F0409 14:17:22.355074 12079 caffe.cpp:254] Multi-GPU execution not available - rebuild with USE_NCCL
*** Check failure stack trace: ***
# 0x2aee66002b2d google::LogMessage::Fail()
# 0x2aee66004995 google::LogMessage::SendToLog()
# 0x2aee660026a9 google::LogMessage::Flush()
# 0x2aee6600542e google::LogMessageFatal::~LogMessageFatal()
# 0x40c172 train()
# 0x4084f3 main
# 0x2aee78f67b35 __libc_start_main
# 0x408f0b (unknown)
Can anyone explain what is wrong here? Is there some Caffe bug that I am not aware of?
Install CUDA
Install cuDNN
Install Dependencies
$ sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler libgflags-dev libgoogle-glog-dev liblmdb-dev libatlas-base-dev git
$ sudo apt-get install --no-install-recommends libboost-all-dev
Install NCCL
NVIDIA NCCL is required to run Caffe on more than one GPU. NCCL can be installed with the following commands:
$ git clone https://github.com/NVIDIA/nccl.git
$ cd nccl
$ sudo make install -j
NCCL libraries and headers will be installed in /usr/local/lib and /usr/local/include.
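You can optionally verify that the files landed there:
$ ls /usr/local/lib/libnccl* /usr/local/include/nccl.h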
Install Caffe
In Makefile.config, uncomment the line USE_CUDNN := 1. This enables cuDNN acceleration.
Uncomment the line USE_NCCL := 1. This enables NCCL, which is required to run Caffe on multiple GPUs.
Save and close the file. You're now ready to compile Caffe.
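If you prefer making those edits from the shell, here is a minimal sketch (it assumes you are starting from the stock Makefile.config.example, where both lines are commented out):
$ cp Makefile.config.example Makefile.config
$ sed -i 's/^# USE_CUDNN := 1/USE_CUDNN := 1/' Makefile.config
$ sed -i 's/^# USE_NCCL := 1/USE_NCCL := 1/' Makefile.config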
$ make all -j
When this command completes, the Caffe binary will be available at build/tools/caffe.
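With NCCL compiled in, the original multi-GPU command from the question should now run:
$ ./build/tools/caffe train -solver $SOLVER -gpu 0,1 2>&1 | tee $LOGGING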
Related
I created a Dockerfile to build an image for running a web-based application. When I build it and pip tries to collect mysqlclient==1.3.7, the following error occurs:
"mysql_config raise EnvironmentError("%s not found" %(mysql_config.path,))
EnvironmentError: mysql_config not found
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-ME9Fq7/mysqlclient/
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command. "
This is the Dockerfile:
############################################################
# Dockerfile to build Python WSGI Application Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu:latest
# File Author / Maintainer
MAINTAINER Brian
# Add the application resources URL
#RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
# Update the sources list
RUN apt-get update -y;
# Install basic applications
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
# Install Python and Basic Python Tools
RUN apt-get install -y python python3-dev python-distribute python-pip libssl-dev
# Copy the application folder inside the container
ADD /WebApp /WebApp
#RUN pip install mysql
# Get pip to download and install requirements:
RUN pip install -r /WebApp/requirements.txt
# Expose ports
EXPOSE 80
# Set the default directory where CMD will execute
WORKDIR /WebApp
# Set the default command to execute
# when creating a new container
# i.e. using CherryPy to serve the application
CMD ./start.sh
How can I solve this problem?
I believe you are trying to install the MySQL driver for Python, but you don't have MySQL installed. Try installing MySQL before installing the driver:
RUN apt-get install -y mysql-server
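If the mysql_config error persists, the libmysqlclient-dev package is what actually provides mysql_config on Ubuntu (this addition is mine, not part of the original suggestion):
RUN apt-get install -y mysql-server libmysqlclient-dev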
Hope it helps!
I'm searching for a way to use the GPU from inside a Docker container.
The container will execute arbitrary code, so I don't want to use privileged mode.
Any tips?
From previous research I understood that run -v and/or LXC cgroups were the way to go, but I'm not sure how to pull that off exactly.
Writing an updated answer, since most of the existing answers are obsolete by now.
Versions earlier than Docker 19.03 used to require nvidia-docker2 and the --runtime=nvidia flag.
Since Docker 19.03, you need to install nvidia-container-toolkit package and then use the --gpus all flag.
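If you're not sure which case applies to you, check your Docker version first:
$ docker --version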
So, here are the basics,
Package Installation
Install the nvidia-container-toolkit package as per official documentation at Github.
For Redhat based OSes, execute the following set of commands:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ sudo yum install -y nvidia-container-toolkit
$ sudo systemctl restart docker
For Debian based OSes, execute the following set of commands:
# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
Running Docker with GPU support
docker run --name my_all_gpu_container --gpus all -t nvidia/cuda
Please note, the flag --gpus all is used to assign all available GPUs to the Docker container.
To assign a specific GPU to the Docker container (in case multiple GPUs are available on your machine):
docker run --name my_first_gpu_container --gpus device=0 nvidia/cuda
Or
docker run --name my_first_gpu_container --gpus '"device=0"' nvidia/cuda
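If you need to pass more than one specific device, the same quoted form accepts a comma-separated list (this example is my addition; the device indices are illustrative):
docker run --name my_two_gpu_container --gpus '"device=0,1"' nvidia/cuda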
Regan's answer is great, but it's a bit out of date, since the correct way to do this is to avoid the lxc execution context; Docker has dropped LXC as the default execution context as of Docker 0.9.
Instead it's better to tell Docker about the nvidia devices via the --device flag, and just use the native execution context rather than lxc.
Environment
These instructions were tested on the following environment:
Ubuntu 14.04
CUDA 6.5
AWS GPU instance.
Install nvidia driver and cuda on your host
See CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04 to get your host machine setup.
Install Docker
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update && sudo apt-get install lxc-docker
Find your nvidia devices
ls -la /dev | grep nvidia
crw-rw-rw- 1 root root 195, 0 Oct 25 19:37 nvidia0
crw-rw-rw- 1 root root 195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw- 1 root root 251, 0 Oct 25 19:37 nvidia-uvm
Run Docker container with nvidia driver pre-installed
I've created a docker image that has the cuda drivers pre-installed. The dockerfile is available on dockerhub if you want to know how this image was built.
You'll want to customize this command to match your nvidia devices. Here's what worked for me:
$ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm tleyden5iwx/ubuntu-cuda /bin/bash
Verify CUDA is correctly installed
This should be run from inside the docker container you just launched.
Install CUDA samples:
$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/
Build deviceQuery sample:
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery
If everything worked, you should see the following output:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS
OK, I finally managed to do it without using the --privileged mode.
I'm running on Ubuntu Server 14.04 and I'm using the latest CUDA (6.0.37 for Linux 13.04, 64-bit).
Preparation
Install the NVIDIA driver and CUDA on your host. (It can be a little tricky, so I suggest you follow this guide: https://askubuntu.com/questions/451672/installing-and-testing-cuda-in-ubuntu-14-04)
ATTENTION: It's really important that you keep the files you used for the host CUDA installation.
Get the Docker Daemon to run using lxc
We need to run the Docker daemon with the lxc driver in order to modify the configuration and give the container access to the devices.
One-time use:
sudo service docker stop
sudo docker -d -e lxc
Permanent configuration
Modify your Docker configuration file, located at /etc/default/docker.
Change the DOCKER_OPTS line by adding '-e lxc'.
Here is my line after modification:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -e lxc"
Then restart the daemon using
sudo service docker restart
How to check whether the daemon is actually using the lxc driver?
docker info
The Execution Driver line should look like this:
Execution Driver: lxc-1.0.5
Build your image with the NVIDIA and CUDA drivers.
Here is a basic Dockerfile to build a CUDA-compatible image.
FROM ubuntu:14.04
MAINTAINER Regan <http://stackoverflow.com/questions/25185405/using-gpu-from-a-docker-container>
RUN apt-get update && apt-get install -y build-essential
RUN apt-get --purge remove -y nvidia*
ADD ./Downloads/nvidia_installers /tmp/nvidia > Get the install files you used to install CUDA and the NVIDIA drivers on your host
RUN /tmp/nvidia/NVIDIA-Linux-x86_64-331.62.run -s -N --no-kernel-module > Install the driver.
RUN rm -rf /tmp/selfgz7 > For some reason the driver installer left temp files when used during a docker build (I don't have any explanation why), and the CUDA installer will fail if they're still there, so we delete them.
RUN /tmp/nvidia/cuda-linux64-rel-6.0.37-18176142.run -noprompt > CUDA driver installer.
RUN /tmp/nvidia/cuda-samples-linux-6.0.37-18176142.run -noprompt -cudaprefix=/usr/local/cuda-6.0 > CUDA samples; comment this out if you don't want them.
RUN export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64 > Add the CUDA libraries to your library path.
RUN touch /etc/ld.so.conf.d/cuda.conf > Update the ld.so.conf.d directory.
RUN rm -rf /tmp/* > Delete installer files.
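Build the image from the directory containing this Dockerfile; I tag it cuda so it matches the run command below:
docker build -t cuda .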
Run your image.
First you need to identify the major number associated with your device.
The easiest way is to run the following command:
ls -la /dev | grep nvidia
If the result is blank, launching one of the samples on the host should do the trick.
The result should look like the nvidia device listing shown in the earlier answer.
As you can see, there is a set of 2 numbers between the group and the date.
These 2 numbers are called the major and minor numbers (written in that order) and designate a device.
We will just use the major numbers for convenience.
Why did we activate the lxc driver?
To use the lxc conf option that allows us to permit our container to access those devices.
The option is (I recommend using * for the minor number, as it reduces the length of the run command):
--lxc-conf='lxc.cgroup.devices.allow = c [major number]:[minor number or *] rwm'
So if I want to launch a container (supposing your image name is cuda):
docker run -ti --lxc-conf='lxc.cgroup.devices.allow = c 195:* rwm' --lxc-conf='lxc.cgroup.devices.allow = c 243:* rwm' cuda
We just released an experimental GitHub repository which should ease the process of using NVIDIA GPUs inside Docker containers.
Recent enhancements by NVIDIA have produced a much more robust way to do this.
Essentially they have found a way to avoid the need to install the CUDA/GPU driver inside the containers and have it match the host kernel module.
Instead, drivers are on the host and the containers don't need them.
It requires a modified docker-cli right now.
This is great, because now containers are much more portable.
A quick test on Ubuntu:
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
For more details see:
GPU-Enabled Docker Container
and: https://github.com/NVIDIA/nvidia-docker
Updated for cuda-8.0 on Ubuntu 16.04
Install Docker: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04
Build the following image that includes the NVIDIA drivers and the CUDA toolkit
Dockerfile
FROM ubuntu:16.04
MAINTAINER Jonathan Kosgei <jonathan#saharacluster.com>
# A docker container with the Nvidia kernel module and CUDA drivers installed
ENV CUDA_RUN https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda_8.0.44_linux-run
RUN apt-get update && apt-get install -q -y \
wget \
module-init-tools \
build-essential
RUN cd /opt && \
wget $CUDA_RUN && \
chmod +x cuda_8.0.44_linux-run && \
mkdir nvidia_installers && \
./cuda_8.0.44_linux-run -extract=`pwd`/nvidia_installers && \
cd nvidia_installers && \
./NVIDIA-Linux-x86_64-367.48.run -s -N --no-kernel-module
RUN cd /opt/nvidia_installers && \
./cuda-linux64-rel-8.0.44-21122537.run -noprompt
# Ensure the CUDA libs and binaries are in the correct environment variables
ENV LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64
ENV PATH=$PATH:/usr/local/cuda-8.0/bin
RUN cd /opt/nvidia_installers &&\
./cuda-samples-linux-8.0.44-21122537.run -noprompt -cudaprefix=/usr/local/cuda-8.0 &&\
cd /usr/local/cuda/samples/1_Utilities/deviceQuery &&\
make
WORKDIR /usr/local/cuda/samples/1_Utilities/deviceQuery
Run your container
sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm <built-image> ./deviceQuery
You should see output similar to:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GRID K520
Result = PASS
Goal:
My goal was to make a CUDA-enabled Docker image without using nvidia/cuda as the base image, because I have a custom jupyter image and I want to base off that.
Prerequisite:
The host machine had nvidia driver, CUDA toolkit, and nvidia-container-toolkit already installed. Please refer to the official docs, and to Rohit's answer.
Test that the NVIDIA driver and CUDA toolkit are installed correctly with nvidia-smi on the host machine, which should display the correct "Driver Version" and "CUDA Version" and show GPU info.
Test that nvidia-container-toolkit is installed correctly with: docker run --rm --gpus all nvidia/cuda:latest nvidia-smi
Dockerfile
I found what I assume to be the official Dockerfile for nvidia/cuda here. I "flattened" it, appended the contents to my Dockerfile, and tested it to be working nicely:
FROM sidazhou/scipy-notebook:latest
# FROM ubuntu:18.04
###########################################################################
# See https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/base/Dockerfile
# See https://sarus.readthedocs.io/en/stable/user/custom-cuda-images.html
###########################################################################
USER root
###########################################################################
# base
RUN apt-get update && apt-get install -y --no-install-recommends \
gnupg2 curl ca-certificates && \
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | apt-key add - && \
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list && \
apt-get purge --autoremove -y curl \
&& rm -rf /var/lib/apt/lists/*
ENV CUDA_VERSION 10.1.243
ENV CUDA_PKG_VERSION 10-1=$CUDA_VERSION-1
# For libraries in the cuda-compat-* package: https://docs.nvidia.com/cuda/eula/index.html#attachment-a
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-cudart-$CUDA_PKG_VERSION \
cuda-compat-10-1 \
&& ln -s cuda-10.1 /usr/local/cuda && \
rm -rf /var/lib/apt/lists/*
# Required for nvidia-docker v1
RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
###########################################################################
#runtime next
ENV NCCL_VERSION 2.7.8
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-libraries-$CUDA_PKG_VERSION \
cuda-npp-$CUDA_PKG_VERSION \
cuda-nvtx-$CUDA_PKG_VERSION \
libcublas10=10.2.1.243-1 \
libnccl2=$NCCL_VERSION-1+cuda10.1 \
&& apt-mark hold libnccl2 \
&& rm -rf /var/lib/apt/lists/*
# Keep apt from auto-upgrading the cublas package. See https://gitlab.com/nvidia/container-images/cuda/-/issues/88
RUN apt-mark hold libcublas10
###########################################################################
#cudnn7 (not cudnn8) next
ENV CUDNN_VERSION 7.6.5.32
RUN apt-get update && apt-get install -y --no-install-recommends \
libcudnn7=$CUDNN_VERSION-1+cuda10.1 \
&& apt-mark hold libcudnn7 && \
rm -rf /var/lib/apt/lists/*
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ENV NVIDIA_REQUIRE_CUDA "cuda>=10.1"
###########################################################################
#docker build -t sidazhou/scipy-notebook-gpu:latest .
#docker run -itd --gpus all \
# -p 8888:8888 \
# -p 6006:6006 \
# --user root \
# -e NB_UID=$(id -u) \
# -e NB_GID=$(id -g) \
# -e GRANT_SUDO=yes \
# -v ~/workspace:/home/jovyan/work \
# --name sidazhou-jupyter-gpu \
# sidazhou/scipy-notebook-gpu:latest
#docker exec sidazhou-jupyter-gpu python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"
To use the GPU from a Docker container, instead of using native Docker, use nvidia-docker. To install nvidia-docker, use the following commands:
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker
sudo pkill -SIGHUP dockerd # Restart Docker Engine
sudo nvidia-docker run --rm nvidia/cuda nvidia-smi # finally run nvidia-smi in the same container
Use x11docker by mviereck:
https://github.com/mviereck/x11docker#hardware-acceleration says
Hardware acceleration
Hardware acceleration for OpenGL is possible with option -g, --gpu.
This will work out of the box in most cases with open source drivers on the host. Otherwise have a look at the wiki: feature dependencies.
Closed source NVIDIA drivers need some setup and support fewer x11docker X server options.
This script is really convenient as it handles all the configuration and setup. Running a Docker image on X with GPU support is as simple as:
x11docker --gpu imagename
I would not recommend installing CUDA/cuDNN on the host if you can use docker. Since at least CUDA 8 it has been possible to "stand on the shoulders of giants" and use nvidia/cuda base images maintained by NVIDIA in their Docker Hub repo. Go for the newest and biggest one (with cuDNN if doing deep learning) if unsure which version to choose.
A starter CUDA container:
mkdir ~/cuda11
cd ~/cuda11
echo "FROM nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04" > Dockerfile
echo "CMD [\"/bin/bash\"]" >> Dockerfile
docker build --tag mirekphd/cuda11 .
docker run --rm -it --gpus 1 mirekphd/cuda11 nvidia-smi
Sample output:
(if nvidia-smi is not found in the container, do not try to install it there; it was already installed on the host with the NVIDIA GPU driver and should be made available from the host to the container if Docker has access to the GPU(s)):
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 450.57 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:01:00.0 On | N/A |
| 0% 50C P8 17W / 280W | 409MiB / 11177MiB | 7% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
Prerequisites
An appropriate NVIDIA driver with support for the latest CUDA version needs to be installed first on the host (download it from NVIDIA Driver Downloads and then mv driver-file.run driver-file.sh && chmod +x driver-file.sh && ./driver-file.sh). Drivers have been forward-compatible since CUDA 10.1.
GPU access enabled in Docker by installing the toolkit: sudo apt-get update && sudo apt-get install nvidia-container-toolkit (and then restarting the Docker daemon using sudo systemctl restart docker).
Got the source from GitHub and read the instructions in doc/build-unix.txt, but make will not compile:
[urs1412#noname bitcoin]$ cd src
[urs1412#noname src]$ make -f makefile.unix
g++ -c -O2 -pthread -Wall -Wextra -Wformat -Wformat-security \
-Wno-unused-parameter -g -DBOOST_SPIRIT_THREADSAFE \
-D_FILE_OFFSET_BITS=64 -I/home/urs1412/w/bitcoin/src \
-I/home/urs1412/w/bitcoin/src/obj -DUSE_UPNP=0 -DUSE_IPV6=1 \
-I/home/urs1412/w/bitcoin/src/leveldb/include \
-I/home/urs1412/w/bitcoin/src/leveldb/helpers \
-DHAVE_BUILD_INFO -fno-stack-protector \
-fstack-protector-all -Wstack-protector \
-D_FORTIFY_SOURCE=2 -MMD -MF obj/alert.d \
-o obj/alert.o alert.cpp \
alert.cpp:6:53: fatal error: boost/algorithm/string/classification.hpp:
No such file or directory
compilation terminated.
make: *** [obj/alert.o] Error 1
tl;dr: could not build bitcoin, dumping system info
[urs1412#noname src]$ uname -r
3.6.10-4.fc18.x86_64
[urs1412#noname src]$ git log -n 1
commit 77a1e12eed5fc66dce16584696f54988a8c2bf4e
Merge: fe15aa3 0565b71
Author: Gavin Andresen
Date: Wed Apr 24 08:48:06 2013 -0700
Merge pull request #2554 from fanquake/qt-pro-brew-patch
bitcoin-qt.pro Brew patch
I finally was able to build bitcoin-1.8 (not the git sources, although I believe these same steps will be applicable) on my CentOS VPS.
Here are the packages I had to install. Note that I had to build some of these.
As root:
yum install gcc-c++ make
yum install boost-devel
yum install db4-devel
yum install openssl-devel # but this didn't provide ec.h, hence the next steps
yum install rpm-build
rpm -U ~jcomeau/rpmbuild/RPMS/x86_64/openssl-devel-1.0.0e-1.x86_64.rpm
yum install lynx # for downloading some source packages
yum install python-devel # for building miniupnpc
rpm -i ~jcomeau/rpmbuild/RPMS/x86_64/libminiupnpc9-1.8.20130503-0.1.x86_64.rpm
rpm -i ~jcomeau/rpmbuild/RPMS/x86_64/libminiupnpc-devel-1.8.20130503-0.1.x86_64.rpm
Then as user, make BOOST_LIB_SUFFIX=-mt all test
If you need instructions on building the openssl-devel (the spec file was in the sources and mostly functional) and libminiupnpc-devel (I got the spec file from an OpenSUSE source RPM and adapted it) let me know.
I believe your immediate problem is you didn't install openssl-devel. But you will likely run into these other problems after that, if you don't do some of the steps I did.
Make sure that the boost library for gcc is working correctly. Try a test "hello world" program with boost; you can find it in the directory BOOST_BUILD_PATH/example/hello.
Compile it with BOOST_BUILD_PATH/bin/b2 toolset=gcc
If it doesn't work, then boost is not correctly installed.
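Alternatively, a quicker check that targets the exact header the bitcoin build is missing (a minimal sketch; it only needs the boost headers, since boost/algorithm/string is header-only):
cat > boost_check.cpp <<'EOF'
#include <boost/algorithm/string/classification.hpp>
int main() { return 0; }
EOF
g++ boost_check.cpp -o boost_check && echo "boost headers found"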
I have Ubuntu 10.04 LTS freshly installed on my x64 PC, and I just followed this step-by-step installation guide to install NVIDIA CUDA. But when I cd into ~/NVIDIA_GPU_Computing_SDK/C/src/nbody and try to make the nbody simulation, it just prints out:
/usr/bin/ld: cannot find -lGL
collect2: ld returned 1 exit status
make: *** [../../bin/linux/release/nbody] Error 1
Is this a solvable problem?
I'm a newbie in Linux (and in CUDA programming), so please help me understand.
cd /usr/lib/
ls -la | grep libGL.so
If libGL.so exists:
sudo rm libGL.so
Then run:
sudo ln -s libGL.so.270.41.19 libGL.so
(or whatever version of libGL.so you have)
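After fixing the symlink, go back to the SDK sample from the question and rebuild to confirm the linker now finds -lGL:
cd ~/NVIDIA_GPU_Computing_SDK/C/src/nbody
make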
Use the Synaptic Package Manager and install the packages with libgl, libglu, libglut, etc. For example, libgl1-mesa and all its dev variants, freeglut, etc.
sudo apt-get install build-essential x-window-system-dev
will also get you the vast majority of those.
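If you'd rather pull the specific Mesa/GLUT development packages from the command line instead of Synaptic (package names as found in Ubuntu repositories of that era; adjust if yours differ):
sudo apt-get install libgl1-mesa-dev libglu1-mesa-dev freeglut3-dev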
First I wanted to build the DBD::mysql package. That kept failing because whatever make produced could not be loaded for the tests, with a "Symbol not found: _is_prefix" error. So I assumed that cpan might be a tad old. I know it's a random assumption, but cpan did tell me to install the latest Bundle::CPAN.
Who's successfully installed either DBD::mysql or Bundle::CPAN on Mac OS X 10.5? Could you recommend anything I could be doing differently?
This is perl, v5.8.8 built for darwin-thread-multi-2level
(with 4 registered patches, see perl -V for more detail)
/usr/local/mysql/bin/mysql Ver 14.14 Distrib 5.1.36,
for apple-darwin9.5.0 (i386) using readline 5.1
Here's a log of the CPAN output for DBD::mysql:
Writing Makefile for DBD::mysql
cc -c -I/Library/Perl/5.8.8/darwin-thread-multi-2level/auto/DBI -I/usr/local/mysql/include -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch ppc -arch i386 -g -pipe -fno-common -DPERL_DARWIN -no-cpp-precomp -fno-strict-aliasing -Wdeclaration-after-statement -I/usr/local/include -O3 -DVERSION=\"4.012\" -DXS_VERSION=\"4.012\" "-I/System/Library/Perl/5.8.8/darwin-thread-multi-2level/CORE" dbdimp.c
/usr/bin/perl -p -e "s/~DRIVER~/mysql/g" /Library/Perl/5.8.8/darwin-thread-multi-2level/auto/DBI/Driver.xst > mysql.xsi
Running Mkbootstrap for DBD::mysql ()
chmod 644 mysql.bs
/usr/bin/perl /System/Library/Perl/5.8.8/ExtUtils/xsubpp -typemap /System/Library/Perl/5.8.8/ExtUtils/typemap mysql.xs > mysql.xsc && mv mysql.xsc mysql.c
cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
cp mysql.bs blib/arch/auto/DBD/mysql/mysql.bs
chmod 644 blib/arch/auto/DBD/mysql/mysql.bs
Warning: duplicate function definition 'do' detected in mysql.xs, line 225
Warning: duplicate function definition 'rows' detected in mysql.xs, line 650
cc -c -I/Library/Perl/5.8.8/darwin-thread-multi-2level/auto/DBI -I/usr/local/mysql/include -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch ppc -arch i386 -g -pipe -fno-common -DPERL_DARWIN -no-cpp-precomp -fno-strict-aliasing -Wdeclaration-after-statement -I/usr/local/include -O3 -DVERSION=\"4.012\" -DXS_VERSION=\"4.012\" "-I/System/Library/Perl/5.8.8/darwin-thread-multi-2level/CORE" mysql.c
dbdimp.c: In function 'mysql_describe':
dbdimp.c:3309: warning: assignment from incompatible pointer type
dbdimp.c: In function 'mysql_describe':
dbdimp.c:3309: warning: assignment from incompatible pointer type
rm -f blib/arch/auto/DBD/mysql/mysql.bundle
LD_RUN_PATH="/usr/local/mysql/lib" /usr/bin/perl myld cc -mmacosx-version-min=10.5.7 -arch ppc -arch i386 -bundle -undefined dynamic_lookup -L/usr/local/lib dbdimp.o mysql.o -o blib/arch/auto/DBD/mysql/mysql.bundle \
-L/usr/local/mysql/lib -lmysqlclient -lz -lm \
chmod 755 blib/arch/auto/DBD/mysql/mysql.bundle
Manifying blib/man3/DBD::mysql.3pm
Manifying blib/man3/DBD::mysql::INSTALL.3pm
Manifying blib/man3/Bundle::DBD::mysql.3pm
CAPTTOFU/DBD-mysql-4.012.tar.gz
/usr/bin/make -j3 -j3 -- OK
Running make test
PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00base.t .................. 1/6 Bailout called. Further testing stopped: Unable to load DBD::mysql
# Failed test 'use DBD::mysql;'
# at t/00base.t line 21.
# Tried to use 'DBD::mysql'.
# Error: Can't load '/Users/dlamblin/.cpan/build/DBD-mysql-4.012-4n3pv8/blib/arch/auto/DBD/mysql/mysql.bundle' for module DBD::mysql: dlopen(/Users/dlamblin/.cpan/build/DBD-mysql-4.012-4n3pv8/blib/arch/auto/DBD/mysql/mysql.bundle, 2): Symbol not found: _is_prefix
# Referenced from: /Users/dlamblin/.cpan/build/DBD-mysql-4.012-4n3pv8/blib/arch/auto/DBD/mysql/mysql.bundle
# Expected in: dynamic lookup
# at (eval 7) line 2
# Compilation failed in require at (eval 7) line 2.
# BEGIN failed--compilation aborted at (eval 7) line 2.
FAILED--Further testing stopped: Unable to load DBD::mysql
make: *** [test_dynamic] Error 255
CAPTTOFU/DBD-mysql-4.012.tar.gz
/usr/bin/make test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
reports CAPTTOFU/DBD-mysql-4.012.tar.gz
Running make install
make test had returned bad status, won't install without force
Failed during this command:
CAPTTOFU/DBD-mysql-4.012.tar.gz : make_test NO
Okay, if you get these errors I now know the following:
MySQL 5.1 for Mac OS X x86_64 is not compatible with DBD::mysql (yet). Install the 32-bit x86 version, and try again. You'll succeed. I wish the perl Makefile.pl would just tell you that in a banner.
Bundle::CPAN had issues because I wasn't installing as root. Why that makes it report circular references instead of installation permission issues, I'll never understand.
Please add a comment if and when this became outdated information.
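Before reinstalling, one quick way to confirm the architecture mismatch described above (the path assumes MySQL's default install location): the build log compiles with -arch ppc -arch i386, so a libmysqlclient that file reports as x86_64-only cannot satisfy it.
file /usr/local/mysql/lib/libmysqlclient.dylib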
Installing the (beta) 5.4.1 64-bit version of MySQL, available from their developer website, fixes the issue. Tested on Snow Leopard.
Did you try installing Bundle::DBD::mysql?
I haven't dealt with this problem, but I found that MacPorts cleaned up all my UNIX incompatibility problems. You might want to try that before enduring too much pain and suffering.
Where is it complaining about a circular dependency? It looks like you are trying to link to an incompatible version of the mysql libraries. The symbol it's looking for isn't in the library you loaded. I don't think this is a problem caused by CPAN.pm or the cpan script.
Some questions:
Who compiled perl? Is this Apple's perl?
Who compiled mysql? Is that your own version since it's in /usr/local?
Did you previously compile other versions? I start with a compile to ensure everything points to the right places.
Installing the latest beta 64-bit version of MySQL fixed the problem on my computer.