Trying to install Caffe on Windows

I am trying to install Caffe, following https://github.com/BVLC/caffe/tree/opencl:
C:\Projects> git clone https://github.com/BVLC/caffe.git
C:\Projects> cd caffe
C:\Projects\caffe> git checkout windows
:: Edit any of the options inside build_win.cmd to suit your needs
C:\Projects\caffe> scripts\build_win.cmd
When I try to build, I get this error:
'MySQL' is not recognized as an internal or external command, operable program or batch file.
I think the error comes from these lines of build_win.cmd:
Line 117 :: Setup the environement for VS x64
Line 118 set batch_file=!VS%MSVC_VERSION%0COMNTOOLS!..\..\VC\vcvarsall.bat
Line 119 call "%batch_file%" amd64
Here is the PATH
PATH=C:\Server\Python\python353\Scripts\;
C:\Server\Python\python353\;
C:\Users\Snarcraft\AMD APP SDK\2.9-1\bin\x86_64;
C:\Users\Snarcraft\AMD APP SDK\2.9-1\bin\x86;
C:\Program Files (x86)\AMD APP SDK\2.9-1\bin\x86_64;
C:\Program Files (x86)\AMD APP SDK\2.9-1\bin\x86;
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;
C:\ProgramData\Oracle\Java\javapath;
C:\WINDOWS\system32;
C:\WINDOWS;
C:\WINDOWS\System32\Wbem;
C:\WINDOWS\System32\WindowsPowerShell\v1.0\;
C:\Program Files (x86)\MySQL\MySQL Fabric 1.5 & MySQL Utilities 1.5\;
C:\Program Files (x86)\MySQL\MySQL Fabric 1.5 & MySQL Utilities 1.5\Doctrine extensions for PHP\;
--> C:\Program Files\MySQL\MySQL Server 5.7\bin;
C:\Server\Apache24\bin;
C:\Server\php\php-5.6.30-Win32-VC11-x64;
C:\ProgramData\ComposerSetup\bin;
C:\Server\Git\cmd;C:\Server\MATLAB\R2016a\runtime\win64;
C:\Server\MATLAB\R2016a\bin;
C:\Server\MATLAB\R2016a\polyspace\bin;
--> C:\Program Files\CMake\bin;
C:\Server\Miniconda3;
C:\Server\Miniconda3\Scripts;
C:\Server\Miniconda3\Library\bin;
C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;
C:\Users\Snarcraft\AppData\Local\Microsoft\WindowsApps;
C:\Users\Snarcraft\AppData\Roaming\npm;
C:\Users\Snarcraft\AppData\Roaming\Composer\vendor\bin;
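One thing worth checking: several of the PATH entries above contain a literal & character (the MySQL Fabric / MySQL Utilities directories). In cmd, an unquoted & acts as a command separator, so if a batch script such as vcvarsall.bat expands PATH without quoting, the text after the & can be run as a separate command, which can produce exactly this kind of "'MySQL' is not recognized" message. As a quick diagnostic (a small Python sketch added here for illustration, not part of the original post), you can list the PATH entries containing such characters:
import os

# Print PATH entries containing characters that cmd.exe treats as
# command separators/operators when a variable is expanded unquoted.
SPECIAL = set('&^|<>')

for entry in os.environ.get('PATH', '').split(os.pathsep):
    if any(ch in SPECIAL for ch in entry):
        print('suspicious PATH entry:', entry)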
The content of build_win.cmd:
@echo off
@setlocal EnableDelayedExpansion
:: Default values
if DEFINED APPVEYOR (
echo Setting Appveyor defaults
if NOT DEFINED MSVC_VERSION set MSVC_VERSION=14
if NOT DEFINED WITH_NINJA set WITH_NINJA=1
if NOT DEFINED CPU_ONLY set CPU_ONLY=1
if NOT DEFINED CUDA_ARCH_NAME set CUDA_ARCH_NAME=Auto
if NOT DEFINED CMAKE_CONFIG set CMAKE_CONFIG=Release
if NOT DEFINED USE_NCCL set USE_NCCL=0
if NOT DEFINED CMAKE_BUILD_SHARED_LIBS set CMAKE_BUILD_SHARED_LIBS=0
if NOT DEFINED PYTHON_VERSION set PYTHON_VERSION=2
if NOT DEFINED BUILD_PYTHON set BUILD_PYTHON=1
if NOT DEFINED BUILD_PYTHON_LAYER set BUILD_PYTHON_LAYER=1
if NOT DEFINED BUILD_MATLAB set BUILD_MATLAB=1
if NOT DEFINED PYTHON_EXE set PYTHON_EXE=python
if NOT DEFINED RUN_TESTS set RUN_TESTS=1
if NOT DEFINED RUN_LINT set RUN_LINT=1
if NOT DEFINED RUN_INSTALL set RUN_INSTALL=1
:: Set python 2.7 with conda as the default python
if !PYTHON_VERSION! EQU 2 (
set CONDA_ROOT=C:\Server\Miniconda3
)
:: Set python 3.5 with conda as the default python
if !PYTHON_VERSION! EQU 3 (
set CONDA_ROOT=C:\Server\Miniconda3
)
set PATH=!CONDA_ROOT!;!CONDA_ROOT!\Scripts;!CONDA_ROOT!\Library\bin;!PATH!
:: Check that we have the right python version
!PYTHON_EXE! --version
:: Add the required channels
conda config --add channels conda-forge
conda config --add channels willyd
:: Update conda
conda update conda -y
:: Download other required packages
conda install --yes cmake ninja numpy scipy protobuf==3.1.0 six scikit-image pyyaml pydotplus graphviz
if ERRORLEVEL 1 (
echo ERROR: Conda update or install failed
exit /b 1
)
:: Install cuda and disable tests if needed
if !WITH_CUDA! == 1 (
call %~dp0\appveyor\appveyor_install_cuda.cmd
set RUN_TESTS=0
set USE_NCCL=1
) else (
set CPU_ONLY=1
)
:: Disable the tests in debug config
if "%CMAKE_CONFIG%" == "Debug" (
echo Disabling tests on appveyor with config == %CMAKE_CONFIG%
set RUN_TESTS=0
)
:: Disable linting with python 3 until we find why the script fails
if !PYTHON_VERSION! EQU 3 (
set RUN_LINT=0
)
) else (
:: Change the settings here to match your setup
:: Change MSVC_VERSION to 12 to use VS 2013
if NOT DEFINED MSVC_VERSION set MSVC_VERSION=14
:: Change to 1 to use Ninja generator (builds much faster)
if NOT DEFINED WITH_NINJA set WITH_NINJA=0
:: Change to 1 to build caffe without CUDA support
if NOT DEFINED CPU_ONLY set CPU_ONLY=0
:: Change to generate CUDA code for one of the following GPU architectures
:: [Fermi Kepler Maxwell Pascal All]
if NOT DEFINED CUDA_ARCH_NAME set CUDA_ARCH_NAME=Auto
:: Change to Debug to build Debug. This is only relevant for the Ninja generator the Visual Studio generator will generate both Debug and Release configs
if NOT DEFINED CMAKE_CONFIG set CMAKE_CONFIG=Release
:: Set to 1 to use NCCL
if NOT DEFINED USE_NCCL set USE_NCCL=0
:: Change to 1 to build a caffe.dll
if NOT DEFINED CMAKE_BUILD_SHARED_LIBS set CMAKE_BUILD_SHARED_LIBS=0
:: Change to 3 if using python 3.5 (only 2.7 and 3.5 are supported)
if NOT DEFINED PYTHON_VERSION set PYTHON_VERSION=2
:: Change these options for your needs.
if NOT DEFINED BUILD_PYTHON set BUILD_PYTHON=1
if NOT DEFINED BUILD_PYTHON_LAYER set BUILD_PYTHON_LAYER=1
if NOT DEFINED BUILD_MATLAB set BUILD_MATLAB=1
:: If python is on your path leave this alone
if NOT DEFINED PYTHON_EXE set PYTHON_EXE=python
:: Run the tests
if NOT DEFINED RUN_TESTS set RUN_TESTS=0
:: Run lint
if NOT DEFINED RUN_LINT set RUN_LINT=0
:: Build the install target
if NOT DEFINED RUN_INSTALL set RUN_INSTALL=0
:: Enable CUDA backend
if NOT DEFINED USE_CUDA set USE_CUDA=0
:: Use cuDNN acceleration with CUDA backend
if NOT DEFINED USE_CUDNN set USE_CUDNN=0
:: Use OpenCL backend
if NOT DEFINED USE_GREENTEA set USE_GREENTEA=1
:: Use LibDNN acceleration with OpenCL and/or CUDA backend
if NOT DEFINED USE_LIBDNN set USE_LIBDNN=1
:: Use OpenMP (disable this on systems with #NUMA > 1)
if NOT DEFINED USE_OPENMP set USE_OPENMP=0
:: Use 64 bit indexing for very large memory blob support (above 2G)
if NOT DEFINED USE_INDEX64 set USE_INDEX64=0
:: Use Intel spatial kernels acceleration for forward convolution on Intel iGPUs
if NOT DEFINED USE_INTEL_SPATIAL set USE_INTEL_SPATIAL=0
:: Disable host/device shared memory
if NOT DEFINED DISABLE_DEVICE_HOST_UNIFIED_MEMORY set DISABLE_DEVICE_HOST_UNIFIED_MEMORY=0
)
:: Set the appropriate CMake generator
:: Use the exclamation mark ! below to delay the
:: expansion of CMAKE_GENERATOR
if %WITH_NINJA% EQU 0 (
if "%MSVC_VERSION%"=="14" (
set CMAKE_GENERATOR=Visual Studio 14 2015 Win64
)
if "%MSVC_VERSION%"=="12" (
set CMAKE_GENERATOR=Visual Studio 12 2013 Win64
)
if "!CMAKE_GENERATOR!"=="" (
echo ERROR: Unsupported MSVC version
exit /B 1
)
) else (
set CMAKE_GENERATOR=Ninja
)
echo INFO: ============================================================
echo INFO: Summary:
echo INFO: ============================================================
echo INFO: MSVC_VERSION = !MSVC_VERSION!
echo INFO: WITH_NINJA = !WITH_NINJA!
echo INFO: CMAKE_GENERATOR = "!CMAKE_GENERATOR!"
echo INFO: CPU_ONLY = !CPU_ONLY!
echo INFO: USE_CUDA = !USE_CUDA!
echo INFO: CUDA_ARCH_NAME = !CUDA_ARCH_NAME!
echo INFO: USE_CUDNN = !USE_CUDNN!
echo INFO: USE_GREENTEA = !USE_GREENTEA!
echo INFO: USE_LIBDNN = !USE_LIBDNN!
echo INFO: USE_OPENMP = !USE_OPENMP!
echo INFO: USE_INDEX64 = !USE_INDEX_64!
echo INFO: USE_INTEL_SPATIAL = !USE_INTEL_SPATIAL!
echo INFO: DISABLE_DEVICE_HOST_UNIFIED_MEMORY = !DISABLE_DEVICE_HOST_UNIFIED_MEMORY!
echo INFO: CMAKE_CONFIG = !CMAKE_CONFIG!
echo INFO: USE_NCCL = !USE_NCCL!
echo INFO: CMAKE_BUILD_SHARED_LIBS = !CMAKE_BUILD_SHARED_LIBS!
echo INFO: PYTHON_VERSION = !PYTHON_VERSION!
echo INFO: BUILD_PYTHON = !BUILD_PYTHON!
echo INFO: BUILD_PYTHON_LAYER = !BUILD_PYTHON_LAYER!
echo INFO: BUILD_MATLAB = !BUILD_MATLAB!
echo INFO: PYTHON_EXE = "!PYTHON_EXE!"
echo INFO: RUN_TESTS = !RUN_TESTS!
echo INFO: RUN_LINT = !RUN_LINT!
echo INFO: RUN_INSTALL = !RUN_INSTALL!
echo INFO: ============================================================
:: Build and exectute the tests
:: Do not run the tests with shared library
if !RUN_TESTS! EQU 1 (
if %CMAKE_BUILD_SHARED_LIBS% EQU 1 (
echo WARNING: Disabling tests with shared library build
set RUN_TESTS=0
)
)
if NOT EXIST build mkdir build
pushd build
:: Setup the environement for VS x64
set batch_file=!VS%MSVC_VERSION%0COMNTOOLS!..\..\VC\vcvarsall.bat
call "%batch_file%" amd64
:: Configure using cmake and using the caffe-builder dependencies
:: Add -DCUDNN_ROOT=C:/Projects/caffe/cudnn-8.0-windows10-x64-v5.1/cuda ^
:: below to use cuDNN
cmake -G"!CMAKE_GENERATOR!" ^
-DBLAS=Open ^
-DCMAKE_BUILD_TYPE:STRING=%CMAKE_CONFIG% ^
-DBUILD_SHARED_LIBS:BOOL=%CMAKE_BUILD_SHARED_LIBS% ^
-DBUILD_python:BOOL=%BUILD_PYTHON% ^
-DBUILD_python_layer:BOOL=%BUILD_PYTHON_LAYER% ^
-DBUILD_matlab:BOOL=%BUILD_MATLAB% ^
-DCPU_ONLY:BOOL=%CPU_ONLY% ^
-DUSE_CUDA:BOOL=%USE_CUDA% ^
-DUSE_CUDNN:BOOL=%USE_CUDNN% ^
-DUSE_LIBDNN:BOOL=%USE_LIBDNN% ^
-DUSE_GREENTEA:BOOL=%USE_GREENTEA% ^
-DUSE_OPENMP:BOOL=%USE_OPENMP% ^
-DUSE_INDEX64:BOOL=%USE_INDEX64% ^
-DUSE_INTEL_SPATIAL:BOOL=%USE_INTEL_SPATIAL% ^
-DDISABLE_DEVICE_HOST_UNIFIED_MEMORY=%DISABLE_DEVICE_HOST_UNIFIED_MEMORY% ^
-DCOPY_PREREQUISITES:BOOL=1 ^
-DINSTALL_PREREQUISITES:BOOL=1 ^
-DUSE_NCCL:BOOL=!USE_NCCL! ^
-DCUDA_ARCH_NAME:STRING=%CUDA_ARCH_NAME% ^
"%~dp0\.."
if ERRORLEVEL 1 (
echo ERROR: Configure failed
exit /b 1
)
:: Lint
if %RUN_LINT% EQU 1 (
cmake --build . --target lint --config %CMAKE_CONFIG%
)
if ERRORLEVEL 1 (
echo ERROR: Lint failed
exit /b 1
)
:: Build the library and tools
cmake --build . --config %CMAKE_CONFIG%
if ERRORLEVEL 1 (
echo ERROR: Build failed
exit /b 1
)
:: Build and exectute the tests
if !RUN_TESTS! EQU 1 (
cmake --build . --target runtest --config %CMAKE_CONFIG%
if ERRORLEVEL 1 (
echo ERROR: Tests failed
exit /b 1
)
if %BUILD_PYTHON% EQU 1 (
if %BUILD_PYTHON_LAYER% EQU 1 (
:: Run python tests only in Release build since
:: the _caffe module is _caffe-d is debug
if "%CMAKE_CONFIG%"=="Release" (
:: Run the python tests
cmake --build . --target pytest
if ERRORLEVEL 1 (
echo ERROR: Python tests failed
exit /b 1
)
)
)
)
)
if %RUN_INSTALL% EQU 1 (
cmake --build . --target install --config %CMAKE_CONFIG%
)
popd
@endlocal
That's all.

Related

How to tell OCaml compiler the path of recently installed module

I already have OCaml installed on my Mac by running these commands:
$ brew install opam
$ opam init --bare -a -y
$ opam switch create cs3110-2022fa ocaml-base-compiler.4.14.0
Running any OCaml code that only uses standard library modules works fine. But I want to do a few things that the standard library does not cover, e.g. reading CSV files.
$ opam install csv
Let's try to compile this code
open Printf
open Csv
let embedded_csv = "\
\"Banner clickins\"
\"Clickin\",\"Number\",\"Percentage\",
\"brand.adwords\",\"4,878\",\"14.4\"
\"vacation.advert2.adwords\",\"4,454\",\"13.1\"
\"affiliates.generic.tc1\",\"1,608\",\"4.7\"
\"brand.overture\",\"1,576\",\"4.6\"
\"vacation.cheap.adwords\",\"1,515\",\"4.5\"
\"affiliates.generic.vacation.biggestchoice\",\"1,072\",\"3.2\"
\"breaks.no-destination.adwords\",\"1,015\",\"3.0\"
\"fly.no-destination.flightshome.adwords\",\"833\",\"2.5\"
\"exchange.adwords\",\"728\",\"2.1\"
\"holidays.cyprus.cheap\",\"574\",\"1.7\"
\"travel.adwords\",\"416\",\"1.2\"
\"affiliates.vacation.generic.onlinediscount.200\",\"406\",\"1.2\"
\"promo.home.topX.ACE.189\",\"373\",\"1.1\"
\"homepage.hp_tx1b_20050126\",\"369\",\"1.1\"
\"travel.agents.adwords\",\"358\",\"1.1\"
\"promo.home.topX.SSH.366\",\"310\",\"0.9\""
let csvs =
List.map (fun name -> name, Csv.load name)
[ "examples/example1.csv"; "examples/example2.csv" ]
let () =
let ecsv = Csv.input_all(Csv.of_string embedded_csv) in
printf "---Embedded CSV---------------------------------\n" ;
Csv.print_readable ecsv;
List.iter (
fun (name, csv) ->
printf "---%s----------------------------------------\n" name;
Csv.print_readable csv
) csvs;
printf "Compare (Embedded CSV) example1.csv = %i\n"
(Csv.compare ecsv (snd(List.hd csvs)))
let () =
(* Save it to a file *)
let ecsv = Csv.input_all(Csv.of_string embedded_csv) in
let fname = Filename.concat (Filename.get_temp_dir_name()) "example.csv" in
Csv.save fname ecsv;
printf "Saved CSV to file %S.\n" fname
The result is:
$ ocamlopt csvdemo.ml -o csvdemo
File "csvdemo.ml", line 2, characters 5-8:
2 | open Csv
^^^
Error: Unbound module Csv
How to tell OCaml compiler where to find Csv module path?
This seems like a wonderful place to use Dune, but going a little bit old school you can use ocamlfind to locate the package.
% cat test2.ml
open Csv
let () = print_endline "hello"
% ocamlopt -I `ocamlfind query csv` -o test2 csv.cmxa test2.ml
% ./test2
hello
%
Or alternatively:
ocamlfind ocamlopt -package csv -o test2 csv.cmxa test2.ml

How can I determine the full CUDA version + subversion?

CUDA distributions on Linux used to have a file named version.txt which read, e.g.:
CUDA Version 10.2.89
which is quite useful. However, as of CUDA 11.1, this file no longer exists.
How can I determine, on Linux and from the command line, by inspecting /path/to/cuda/toolkit, which exact version I'm looking at, including the subversion?
(Answer due to @RobertCrovella's comment)
This will do the trick:
/path/to/cuda/toolkit/bin/nvcc --version | egrep -o "V[0-9]+.[0-9]+.[0-9]+" | cut -c2-
And of course, for the CUDA version currently chosen and configured to be used, just take the nvcc that's on the path:
nvcc --version | egrep -o "V[0-9]+.[0-9]+.[0-9]+" | cut -c2-
For example, you would get 11.2.67 for the CUDA 11.2 download that was available this week on the NVIDIA website.
The full nvcc --version output would be:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:08:53_PST_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
The following Python code works on both Windows and Linux; I have tested it with a variety of CUDA versions (8 through 11.2, most of them).
It searches for the CUDA path via a series of guesses (checking environment variables, the nvcc location, or default installation paths) and then grabs the CUDA version from the output of nvcc --version. It doesn't use @einpoklum's style of regexp; it simply assumes there is only one release string in the nvcc --version output, but that is easy to check.
You can also just use the first function on its own if you already have a known path to query.
Adding it as a complement to @einpoklum's answer; it does the same thing, just in Python.
From TIGRE.
import glob
import os
from os.path import join as pjoin
import subprocess
import sys

# The snippet assumes IS_WINDOWS is defined elsewhere in the original project;
# it is defined here so the code is self-contained.
IS_WINDOWS = sys.platform == 'win32'

def get_cuda_version(cuda_home):
    """Locate the CUDA version
    """
    version_file = os.path.join(cuda_home, "version.txt")
    try:
        if os.path.isfile(version_file):
            with open(version_file) as f:
                version_str = f.readline().replace('\n', '').replace('\r', '')
                return version_str.split(" ")[2][:4]
        else:
            version_str = subprocess.check_output([os.path.join(cuda_home, "bin", "nvcc"), "--version"])
            version_str = str(version_str).replace('\n', '').replace('\r', '')
            idx = version_str.find("release")
            return version_str[idx + len("release "):idx + len("release ") + 4]
    except:
        raise RuntimeError("Cannot read cuda version file")

def locate_cuda():
    """Locate the CUDA environment on the system

    Returns a dict with keys 'home', 'include' and 'lib64'
    and values giving the absolute path to each directory.

    Starts by looking for the CUDA_HOME or CUDA_PATH env variable. If not found, everything
    is based on finding 'nvcc' in the PATH.
    """
    # Guess #1
    cuda_home = os.environ.get('CUDA_HOME') or os.environ.get('CUDA_PATH')
    if cuda_home is None:
        # Guess #2
        try:
            which = 'where' if IS_WINDOWS else 'which'
            nvcc = subprocess.check_output(
                [which, 'nvcc']).decode().rstrip('\r\n')
            cuda_home = os.path.dirname(os.path.dirname(nvcc))
        except subprocess.CalledProcessError:
            # Guess #3
            if IS_WINDOWS:
                cuda_homes = glob.glob(
                    'C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v*.*')
                if len(cuda_homes) == 0:
                    cuda_home = ''
                else:
                    cuda_home = cuda_homes[0]
            else:
                cuda_home = '/usr/local/cuda'
            if not os.path.exists(cuda_home):
                cuda_home = None
    version = get_cuda_version(cuda_home)
    cudaconfig = {'home': cuda_home,
                  'include': pjoin(cuda_home, 'include'),
                  'lib64': pjoin(cuda_home, pjoin('lib', 'x64') if IS_WINDOWS else 'lib64')}
    if not all([os.path.exists(v) for v in cudaconfig.values()]):
        raise EnvironmentError(
            'The CUDA path could not be located in $PATH, $CUDA_HOME or $CUDA_PATH. '
            'Either add it to your path, or set $CUDA_HOME or $CUDA_PATH.')
    return cudaconfig, version

CUDA, CUDA_VERSION = locate_cuda()
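With the snippet in scope, reporting the result for the original question is then one line (a minimal usage sketch; CUDA and CUDA_VERSION are the names assigned on the snippet's last line):
# Report the detected toolkit location and full version string.
print('Detected CUDA %s in %s' % (CUDA_VERSION, CUDA['home']))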

nix-shell --command `stack build` leads to libpq-fe.h: No such file or directory

I am trying to compile my small project (a Yesod application with LambdaCms) on NixOS. However, after using cabal2nix (more precisely, cabal2nix project-karma.cabal --sha256=0 --shell > shell.nix), it seems I am still missing a dependency related to PostgreSQL.
My shell.nix file looks like this:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "default" }:
let
inherit (nixpkgs) pkgs;
f = { mkDerivation, aeson, base, bytestring, classy-prelude
, classy-prelude-conduit, classy-prelude-yesod, conduit, containers
, data-default, directory, fast-logger, file-embed, filepath
, hjsmin, hspec, http-conduit, lambdacms-core, monad-control
, monad-logger, persistent, persistent-postgresql
, persistent-template, random, resourcet, safe, shakespeare, stdenv
, template-haskell, text, time, transformers, unordered-containers
, uuid, vector, wai, wai-extra, wai-logger, warp, yaml, yesod
, yesod-auth, yesod-core, yesod-form, yesod-static, yesod-test
}:
mkDerivation {
pname = "karma";
version = "0.0.0";
sha256 = "0";
isLibrary = true;
isExecutable = true;
libraryHaskellDepends = [
aeson base bytestring classy-prelude classy-prelude-conduit
classy-prelude-yesod conduit containers data-default directory
fast-logger file-embed filepath hjsmin http-conduit lambdacms-core
monad-control monad-logger persistent persistent-postgresql
persistent-template random safe shakespeare template-haskell text
time unordered-containers uuid vector wai wai-extra wai-logger warp
yaml yesod yesod-auth yesod-core yesod-form yesod-static
nixpkgs.zlib
nixpkgs.postgresql
nixpkgs.libpqxx
];
libraryPkgconfigDepends = [ persistent-postgresql];
executableHaskellDepends = [ base ];
testHaskellDepends = [
base classy-prelude classy-prelude-yesod hspec monad-logger
persistent persistent-postgresql resourcet shakespeare transformers
yesod yesod-core yesod-test
];
license = stdenv.lib.licenses.bsd3;
};
haskellPackages = if compiler == "default"
then pkgs.haskellPackages
else pkgs.haskell.packages.${compiler};
drv = haskellPackages.callPackage f {};
in
if pkgs.lib.inNixShell then drv.env else drv
The output is as follows:
markus@nixos ~/git/haskell/karma/karma (git)-[master] % nix-shell --command `stack build`
postgresql-libpq-0.9.1.1: configure
ReadArgs-1.2.2: download
postgresql-libpq-0.9.1.1: build
ReadArgs-1.2.2: configure
ReadArgs-1.2.2: build
ReadArgs-1.2.2: install
-- While building package postgresql-libpq-0.9.1.1 using:
/run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/postgresql-libpq-0.9.1.1.log
[1 of 1] Compiling Main ( /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/Setup.hs, /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring postgresql-libpq-0.9.1.1...
Building postgresql-libpq-0.9.1.1...
Preprocessing library postgresql-libpq-0.9.1.1...
LibPQ.hsc:213:22: fatal error: libpq-fe.h: No such file or directory
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/run/current-system/sw/include -Icbits -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
I assume not much is missing, so a pointer would be nice.
What is also weird is that "nix-shell" works, but following that up with "stack exec yesod devel" tells me:
Resolving dependencies...
Configuring karma-0.0.0...
cabal: At least the following dependencies are missing:
classy-prelude >=0.10.2,
classy-prelude-conduit >=0.10.2,
classy-prelude-yesod >=0.10.2,
hjsmin ==0.1.*,
http-conduit ==2.1.*,
lambdacms-core >=0.3.0.2 && <0.4,
monad-logger ==0.3.*,
persistent >=2.0 && <2.3,
persistent-postgresql >=2.1.1 && <2.3,
persistent-template >=2.0 && <2.3,
uuid >=1.3,
wai-extra ==3.0.*,
warp >=3.0 && <3.2,
yesod >=1.4.1 && <1.5,
yesod-auth >=1.4.0 && <1.5,
yesod-core >=1.4.6 && <1.5,
yesod-form >=1.4.0 && <1.5,
yesod-static >=1.4.0.3 && <1.6
When using mysql instead, I am getting
pcre-light-0.4.0.4: configure
mysql-0.1.1.8: configure
mysql-0.1.1.8: build
Progress: 2/59
-- While building package mysql-0.1.1.8 using:
/run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/mysql-0.1.1.8.log
[1 of 1] Compiling Main ( /run/user/1000/stack12820/mysql-0.1.1.8/Setup.lhs, /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring mysql-0.1.1.8...
Building mysql-0.1.1.8...
Preprocessing library mysql-0.1.1.8...
In file included from C.hsc:68:0:
include/mysql_signals.h:9:19: fatal error: mysql.h: No such file or directory
#include "mysql.h"
^
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include/.. -Iinclude -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
-- While building package pcre-light-0.4.0.4 using:
/home/markus/.stack/setup-exe-cache/setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ configure --with-ghc=/run/current-system/sw/bin/ghc --user --package-db=clear --package-db=global --package-db=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/pkgdb/ --libdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib --bindir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/bin --datadir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/share --libexecdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/libexec --sysconfdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/etc --docdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --htmldir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --haddockdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --dependency=base=base-4.8.1.0-4f7206fd964c629946bb89db72c80011 --dependency=bytestring=bytestring-0.10.6.0-18c05887c1aaac7adb3350f6a4c6c8ed
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/pcre-light-0.4.0.4.log
Configuring pcre-light-0.4.0.4...
setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2: The program 'pkg-config'
version >=0.9.0 is required but it could not be found.
After adding pkgconfig to my global configuration, the build seems to get a little further ahead, so it seems that shell.nix is ignored somewhat.
(Sources for what I tried so far:
https://groups.google.com/forum/#!topic/haskell-stack/_ZBh01VP_fo)
Update: It seems like I overlooked this section of the manual
http://nixos.org/nixpkgs/manual/#using-stack-together-with-nix
However, the first idea that came to mind
(stack --extra-lib-dirs=/nix/store/c6qy7n5wdwl164lnzha7vpc3av9yhnga-postgresql-libpq-0.9.1.1/lib build)
did not work yet; most likely I need to use
--extra-include-dirs or try one of the variations. It seems weird that stack is still trying to build postgresql-libpq in the very same version, though.
Update2: Currently trying out "stack --extra-lib-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib --extra-include-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/include build" which looks promising. Does not look like the nix-way, but still.
Update3: Still getting
<command line>: can't load .so/.DLL for: /home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib/x86_64-linux-ghc-7.10.2/postgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6/libHSpostgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6-ghc7.10.2.so
(libpq.so.5: cannot open shared object file: No such file or directory) stack build 186.99s user 2.93s system 109% cpu 2:52.76 total
which is strange since libpq.so.5 is contained in /nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib.
An additional
$LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib
does not help either.
Update4:
By the way, yesod devel does the same as stack exec yesod devel. My libraries are downloaded to /nix/store but they are not recognized.
Maybe I need to make "build-nix" work and yesod devel does not work here?
Just for completeness, here is stack.yaml
resolver: nightly-2015-11-17
#run stack setup otherwise!!
# Local packages, usually specified by relative directory name
packages:
- '.'
# Packages to be pulled from upstream that are not in the resolver (e.g., acme-missiles-0.3)
extra-deps: [lambdacms-core-0.3.0.2 , friendly-time-0.4, lists-0.4.2, list-extras-0.4.1.4 ]
# Override default flag values for local packages and extra-deps
flags:
karma:
library-only: false
dev: false
# Extra package databases containing global packages
extra-package-dbs: []
Next weekend, I will check out
https://pr06lefs.wordpress.com/2014/09/27/compiling-a-yesod-project-on-nixos/
and other search results.
Funny, because I've just had a similar problem myself - solved it by adding these two lines to stack.yaml:
extra-include-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/include/]
extra-lib-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/lib/]
You may want to check first which PostgreSQL path from /nix/store you should use for include/ and lib/:
nix-build --no-out-link "<nixpkgs>" -A postgresql
And BTW, why do you use nix-shell if you are going to use stack and you have project-karma.cabal available? Have you considered migrating your project with stack init?
Looks like stack is trying to build haskellPackages.postgresql-libpq outside of the nix framework.
You probably don't want that to happen. Maybe try to add postgresql-libpq to libraryHaskellDepends?

Missing build file during sbt run

I added "--backend" and "v" to my chiselMainTest list, and although I am getting verilog output, I am also getting a build error:
In file included from ./vpi.cpp:1:
./vpi.h:4:10: fatal error: 'vpi_user.h' file not found
#include "vpi_user.h"
^
1 error generated.
A complete listing of the sbt run follows:
BigKiss:chisel mykland$ sbt run
[info] Set current project to chisel (in build file:/Users/mykland/work/chisel/)
[info] Compiling 1 Scala source to /Users/mykland/work/chisel/target/scala-2.10/classes...
[warn] there were 38 feature warning(s); re-run with -feature for details
[warn] one warning found
[info] Running mainStub
[info] [0.056] // COMPILING < (class lut3to1_1)>(0)
[info] [0.078] giving names
[info] [0.088] executing custom transforms
[info] [0.089] adding clocks and resets
[info] [0.093] inferring widths
[info] [0.108] checking widths
[info] [0.110] lowering complex nodes to primitives
[info] [0.113] removing type nodes
[info] [0.117] compiling 84 nodes
[info] [0.117] computing memory ports
[info] [0.117] resolving nodes to the components
[info] [0.133] creating clock domains
[info] [0.134] pruning unconnected IOs
[info] [0.136] checking for combinational loops
[info] [0.139] NO COMBINATIONAL LOOP FOUND
[info] [0.149] COMPILING <lut3to1_1 (class lut3to1_1)> 0 CHILDREN (0,0)
In file included from ./vpi.cpp:1:
./vpi.h:4:10: fatal error: 'vpi_user.h' file not found
#include "vpi_user.h"
^
1 error generated.
[info] [0.666] g++ -c -o ./vpi.o -I$VCS_HOME/include -I./ -fPIC -std=c++11 ./vpi.cpp RET 1
[error] lut3to1_1.scala:58: failed to compile vpi.cpp in class mainStub$
Re-running Chisel in debug mode to obtain erroneous line numbers...
[info] [0.030] // COMPILING < (class lut3to1_1)>(0)
[info] [0.035] giving names
[info] [0.037] executing custom transforms
[info] [0.037] adding clocks and resets
[info] [0.038] inferring widths
[info] [0.045] checking widths
[info] [0.046] lowering complex nodes to primitives
[info] [0.047] removing type nodes
[info] [0.049] compiling 84 nodes
[info] [0.049] computing memory ports
[info] [0.049] resolving nodes to the components
[info] [0.055] creating clock domains
[info] [0.055] pruning unconnected IOs
[info] [0.056] checking for combinational loops
[info] [0.056] NO COMBINATIONAL LOOP FOUND
[info] [0.060] COMPILING <lut3to1_1 (class lut3to1_1)> 0 CHILDREN (0,0)
In file included from ./vpi.cpp:1:
./vpi.h:4:10: fatal error: 'vpi_user.h' file not found
#include "vpi_user.h"
^
1 error generated.
[info] [0.535] g++ -c -o ./vpi.o -I$VCS_HOME/include -I./ -fPIC -std=c++11 ./vpi.cpp RET 1
[error] lut3to1_1.scala:58: failed to compile vpi.cpp in class mainStub$
[error] (run-main-0) Chisel.ChiselException: failed to compile vpi.cpp
Chisel.ChiselException: failed to compile vpi.cpp
at mainStub$.main(lut3to1_1.scala:58)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 9 s, completed Oct 4, 2015 6:33:30 PM
BigKiss:chisel mykland$
A complete listing of my source code follows:
import Chisel._
class lut3to1_1 extends Module
{
val io = new Bundle
{
val config = UInt(INPUT, 8)
val a = Bool(INPUT)
val b = Bool(INPUT)
val c = Bool(INPUT)
val out = Bool(OUTPUT)
}
io.out := (io.config(0) & !io.a & !io.b & !io.c) |
(io.config(1) & io.a & !io.b & !io.c) |
(io.config(2) & !io.a & io.b & !io.c) |
(io.config(3) & io.a & io.b & !io.c) |
(io.config(4) & !io.a & !io.b & io.c) |
(io.config(5) & io.a & !io.b & io.c) |
(io.config(6) & !io.a & io.b & io.c) |
(io.config(7) & io.a & io.b & io.c)
}
class lut3to1_1_Tests(c: lut3to1_1) extends Tester(c)
{
for ( config <- 0 to 255 )
{
poke( c.io.config, config )
for ( bits <- 0 to 7 )
{
val bitA = bits & 1
val bitB = (bits >> 1) & 1
val bitC = (bits >> 2) & 1
poke( c.io.a, bitA )
poke( c.io.b, bitB )
poke( c.io.c, bitC )
step( 1 )
val result0 = ~bitA & ~bitB & ~bitC & (config & 1)
val result1 = bitA & ~bitB & ~bitC & ((config >> 1) & 1)
val result2 = ~bitA & bitB & ~bitC & ((config >> 2) & 1)
val result3 = bitA & bitB & ~bitC & ((config >> 3) & 1)
val result4 = ~bitA & ~bitB & bitC & ((config >> 4) & 1)
val result5 = bitA & ~bitB & bitC & ((config >> 5) & 1)
val result6 = ~bitA & bitB & bitC & ((config >> 6) & 1)
val result7 = bitA & bitB & bitC & ((config >> 7) & 1)
val result = result0 | result1 | result2 | result3 |
result4 | result5 | result6 | result7
expect( c.io.out, result )
}
}
}
object mainStub
{
def main( args: Array[String] ): Unit =
{
chiselMainTest( Array[String]("--backend", "c", "--backend", "v",
"--compile", "--test", "--genHarness"), () => Module( new lut3to1_1() ) )
{
c => new lut3to1_1_Tests( c )
}
}
}
The missing header file (vpi_user.h) is related to Verilog simulators' VPI support, which is the mechanism that Chisel uses to connect your Tester to the Verilog simulator. The current version of Chisel only supports Synopsys VCS as the Verilog simulation tool. There is experimental support for Icarus Verilog (iverilog) version 10.0+, Verilator, Modelsim and Questasim in my fork of Chisel (available here). Unfortunately I haven't had time to thoroughly test the changes and make a pull request to the main repository, but you can try it and see if it works for you.
That command generates Verilog for the testbench simulator.
If you just want to generate Verilog for synthesis, simply add a chiselMain() call in your main(), like this:
object mainStub
{
def main( args: Array[String] ): Unit =
{
chiselMainTest( Array[String]("--backend", "c",
"--compile", "--test", "--genHarness"), () => Module( new lut3to1_1() ) )
{
c => new lut3to1_1_Tests( c )
}
chiselMain(args, () => Module(new lut3to1_1()))
}
}
You will get a synthesizable Verilog file named lut3to1_1.v.

octave cannot find fltk XGetUtf8FontAndGlyph symbol

octave 3.8.2 produces this error on loading:
error: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/PKG_ADD: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/PKG_ADD at line 6, column 1
GNU Octave, version 3.8.2
I obtain the following information about the configuration of graphics libraries:
octave:1> octave_config_info().GRAPHICS_LIBS
ans = -L/usr/lib64/fltk -Wl,-rpath,/usr/lib64/fltk -Wl,-O1 -Wl,--sort-common -Wl,--as-needed -lfltk_gl -lGLU -lGL -lfltk -lXcursor -lXfixes -lXext -ldl -lm -lX11
and although no graphics toolkits are evidently available initially,
octave:2> available_graphics_toolkits
ans = {}(1x0)
I can register them subsequently,
octave:3> register_graphics_toolkit("gnuplot")
octave:4> available_graphics_toolkits
ans =
{
[1,1] = gnuplot
}
octave:5> register_graphics_toolkit("fltk")
octave:6> available_graphics_toolkits
ans =
{
[1,1] = fltk
[1,2] = gnuplot
}
but attempting to load fltk produces an error consistent with the initial warning
octave:7> graphics_toolkit("fltk")
error: feval: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/share/octave/3.8.2/m/plot/util/graphics_toolkit.m at line 74, column 5
and of course attempting to plot anything also fails,
octave:8> plot(1:10)
error: feval: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/share/octave/3.8.2/m/plot/util/graphics_toolkit.m at line 74, column 5
error: failed to load fltk graphics toolkit
error: base_graphics_toolkit::initialize: invalid graphics toolkit
error: /usr/share/octave/3.8.2/m/plot/util/figure.m at line 94, column 9
error: /usr/share/octave/3.8.2/m/plot/util/gcf.m at line 63, column 9
error: /usr/share/octave/3.8.2/m/plot/util/newplot.m at line 113, column 8
error: /usr/share/octave/3.8.2/m/plot/draw/plot.m at line 219, column 9
Both octave and fltk were compiled from source under gentoo:
x11-libs/fltk-1.3.3-r2:1 USE="opengl -cairo -debug -doc -examples -games -pdf -static-libs -threads -xft -xinerama"
sci-mathematics/octave-3.8.2:0/3.8.2 USE="X doc glpk gnuplot gui imagemagick opengl qhull qrupdate readline sparse zlib -curl -fftw -hdf5 -java -jit -postscript -static-libs"
resulting in configure switches of (for the fltk library):
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --includedir=/usr/include/fltk --libdir=/usr/lib64/fltk --docdir=/usr/share/doc/fltk-1.3.3-r2/html --enable-largefile --enable-shared --enable-xdbe --disable-localjpeg --disable-localpng --disable-localzlib --disable-debug --disable-cairo --enable-gl --disable-threads --disable-xft --disable-xinerama
and (for octave)
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --libdir=/usr/lib64 --disable-silent-rules --disable-dependency-tracking --docdir=/usr/share/doc/octave-3.8.2 --enable-shared --disable-static --localstatedir=/var/state/octave --with-blas=-L/usr/lib64/blas/reference -lblas --with-lapack=-llapack -L/usr/lib64/blas/reference -lblas --enable-docs --disable-java --enable-gui --disable-jit --enable-readline --without-curl --without-fftw3 --without-fftw3f --disable-fftw-threads --with-glpk --without-hdf5 --with-opengl --with-qhull --with-qrupdate --with-arpack --with-umfpack --with-colamd --with-ccolamd --with-cholmod --with-cxsparse --with-x --with-z --with-magick=GraphicsMagick
If I examine libfltk_gl.so.1.3 with nm, I see the following symbols:
$ nm -D /usr/lib64/fltk/libfltk_gl.so.1.3
U XCreateColormap
U XGetUtf8FontAndGlyph
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
w _Jv_RegisterClasses
U _Z10fl_measurePKcRiS1_i
000000000000e170 T _Z10gl_descentv
000000000000e590 T _Z10gl_measurePKcRiS1_
... <snip>
According to the nm manual, the U designates that the symbol is undefined: it is referenced here but expected to be resolved from another library at link or load time. My question is whether this undefined symbol is the origin of the error reported by Octave, suggesting that the problem lies in how FLTK was compiled, or whether the Octave compilation is somehow at fault.
Edit: Solved by enabling Xft support. Please see the comments below, and thanks again to Andy for his help.
XGetUtf8FontAndGlyph should be in libfltk.so.1.3.
nm -D /usr/lib/x86_64-linux-gnu/libfltk.so.1.3 |grep XGetU
00000000000c2fc0 T XGetUtf8FontAndGlyph
It's very likely that this is a problem with your configure flags for fltk and not GNU Octave. Just try it with the default settings first.
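As a quick cross-check outside Octave (an illustrative Python sketch, not from the original answer; the /usr/lib64/fltk paths are taken from the question and may differ on your system), you can try resolving the symbol by hand with ctypes:
import ctypes

# Load libfltk first with RTLD_GLOBAL so that any symbols it exports
# (XGetUtf8FontAndGlyph should be among them) are visible when the
# loader resolves libfltk_gl's undefined references.
fltk = ctypes.CDLL('/usr/lib64/fltk/libfltk.so.1.3', mode=ctypes.RTLD_GLOBAL)
print('libfltk exports XGetUtf8FontAndGlyph:',
      hasattr(fltk, 'XGetUtf8FontAndGlyph'))

# If the symbol is not provided by libfltk (or another already-loaded
# library), this load is expected to fail with the same "undefined symbol"
# error that Octave reports.
fltk_gl = ctypes.CDLL('/usr/lib64/fltk/libfltk_gl.so.1.3')
print('libfltk_gl loaded OK')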
You can test whether the UTF-8 handling with OpenGL is okay using the "cube" test. Just dig into the test directory of the FLTK source tree:
cd fltk-1.3.3/test
make cube && ./cube
Does the text in the lower left of the GL window show up?
I had a similar problem. I was getting the following error while trying to run Octave (undefined symbol: _ZN18Fl_XFont_On_Demand5valueEv):
bash-4.3$ octave
error: /usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/PKG_ADD: /usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib/libfltk_gl.so.1.3: undefined symbol: _ZN18Fl_XFont_On_Demand5valueEv
error: called from
/usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/PKG_ADD at line 3 column 1
The command nm -D /usr/lib/libfltk_gl.so.1.3 showed that the symbol _ZN18Fl_XFont_On_Demand5valueEv is undefined (marked U):
0000a3d4 T _ZN14Fl_Glut_WindowD1Ev
0000a3d4 T _ZN14Fl_Glut_WindowD2Ev
U _ZN18Fl_Font_DescriptorD1Ev
U _ZN18Fl_Graphics_Driver11clip_regionEP8_XRegion
U _ZN18Fl_XFont_On_Demand5valueEv
The solution was to apply the patch file mentioned here to some files inside the FLTK 1.3.3 source directory, and then recompile and reinstall FLTK. Now Octave works with FLTK without any problem.