octave cannot find fltk XGetUtf8FontAndGlyph symbol - octave

Octave 3.8.2 produces this error on loading:
error: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/PKG_ADD: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/PKG_ADD at line 6, column 1
GNU Octave, version 3.8.2
I obtain the following information about the configuration of the graphics libraries:
octave:1> octave_config_info().GRAPHICS_LIBS
ans = -L/usr/lib64/fltk -Wl,-rpath,/usr/lib64/fltk -Wl,-O1 -Wl,--sort-common -Wl,--as-needed -lfltk_gl -lGLU -lGL -lfltk -lXcursor -lXfixes -lXext -ldl -lm -lX11
and although no graphics toolkits are evidently available initially,
octave:2> available_graphics_toolkits
ans = {}(1x0)
I can register them subsequently,
octave:3> register_graphics_toolkit("gnuplot")
octave:4> available_graphics_toolkits
ans =
{
[1,1] = gnuplot
}
octave:5> register_graphics_toolkit("fltk")
octave:6> available_graphics_toolkits
ans =
{
[1,1] = fltk
[1,2] = gnuplot
}
but attempting to load fltk produces an error consistent with the initial warning
octave:7> graphics_toolkit("fltk")
error: feval: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/share/octave/3.8.2/m/plot/util/graphics_toolkit.m at line 74, column 5
and of course attempting to plot anything also fails,
octave:8> plot(1:10)
error: feval: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/share/octave/3.8.2/m/plot/util/graphics_toolkit.m at line 74, column 5
error: failed to load fltk graphics toolkit
error: base_graphics_toolkit::initialize: invalid graphics toolkit
error: /usr/share/octave/3.8.2/m/plot/util/figure.m at line 94, column 9
error: /usr/share/octave/3.8.2/m/plot/util/gcf.m at line 63, column 9
error: /usr/share/octave/3.8.2/m/plot/util/newplot.m at line 113, column 8
error: /usr/share/octave/3.8.2/m/plot/draw/plot.m at line 219, column 9
Both Octave and FLTK were compiled from source under Gentoo:
x11-libs/fltk-1.3.3-r2:1 USE="opengl -cairo -debug -doc -examples -games -pdf -static-libs -threads -xft -xinerama"
sci-mathematics/octave-3.8.2:0/3.8.2 USE="X doc glpk gnuplot gui imagemagick opengl qhull qrupdate readline sparse zlib -curl -fftw -hdf5 -java -jit -postscript -static-libs"
resulting in the following configure switches (for the FLTK library):
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --includedir=/usr/include/fltk --libdir=/usr/lib64/fltk --docdir=/usr/share/doc/fltk-1.3.3-r2/html --enable-largefile --enable-shared --enable-xdbe --disable-localjpeg --disable-localpng --disable-localzlib --disable-debug --disable-cairo --enable-gl --disable-threads --disable-xft --disable-xinerama
and (for Octave):
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --libdir=/usr/lib64 --disable-silent-rules --disable-dependency-tracking --docdir=/usr/share/doc/octave-3.8.2 --enable-shared --disable-static --localstatedir=/var/state/octave --with-blas=-L/usr/lib64/blas/reference -lblas --with-lapack=-llapack -L/usr/lib64/blas/reference -lblas --enable-docs --disable-java --enable-gui --disable-jit --enable-readline --without-curl --without-fftw3 --without-fftw3f --disable-fftw-threads --with-glpk --without-hdf5 --with-opengl --with-qhull --with-qrupdate --with-arpack --with-umfpack --with-colamd --with-ccolamd --with-cholmod --with-cxsparse --with-x --with-z --with-magick=GraphicsMagick
If I examine libfltk_gl.so.1.3 with nm, I see that the following symbols are exported:
$ nm -D /usr/lib64/fltk/libfltk_gl.so.1.3
U XCreateColormap
U XGetUtf8FontAndGlyph
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
w _Jv_RegisterClasses
U _Z10fl_measurePKcRiS1_i
000000000000e170 T _Z10gl_descentv
000000000000e590 T _Z10gl_measurePKcRiS1_
... <snip>
According to the nm manual, U designates a symbol that is global (external) but undefined, i.e. referenced here but expected to be defined in some other library. My question is whether this undefined-symbol status is the origin of the error reported by Octave, suggesting that the problem lies with how FLTK was compiled, or whether the Octave compilation is somehow at fault.
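One way to check where the symbol ought to come from is to scan the installed FLTK libraries for a defining (T) entry; a minimal shell sketch, assuming the library paths shown above:
for lib in /usr/lib64/fltk/libfltk*.so*; do
    echo "== $lib"
    # a "T" line here means this particular library actually defines the symbol
    nm -D --defined-only "$lib" | grep XGetUtf8FontAndGlyph
done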
Edit: Solved by enabling Xft support. Please see the comments below, and thanks again to Andy for his help.
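For anyone on Gentoo wondering what "enabling Xft support" amounts to, a minimal sketch (package atoms taken from the USE listings above; the exact layout under /etc/portage may differ on your system):
echo "x11-libs/fltk xft" >> /etc/portage/package.use/fltk   # enable the xft USE flag for FLTK
emerge --oneshot x11-libs/fltk                              # rebuild FLTK with Xft support
emerge --oneshot sci-mathematics/octave                     # rebuild/relink Octave if needed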

XGetUtf8FontAndGlyph should be in libfltk.so.1.3.
nm -D /usr/lib/x86_64-linux-gnu/libfltk.so.1.3 |grep XGetU
00000000000c2fc0 T XGetUtf8FontAndGlyph
It's very likely that this is a problem with your configure flags for FLTK and not GNU Octave. Just try it with the default settings first.
You can test whether the UTF-8 handling in the OpenGL code is okay with the "cube" test. Just dig into the test directory of the FLTK source:
cd fltk-1.3.3/test
make cube && ./cube
Does the text in the lower left of the GL window show up?

I had a similar problem. I was getting the following error while trying to run Octave (undefined symbol: _ZN18Fl_XFont_On_Demand5valueEv):
bash-4.3$ octave
error: /usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/PKG_ADD: /usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib/libfltk_gl.so.1.3: undefined symbol: _ZN18Fl_XFont_On_Demand5valueEv
error: called from
/usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/PKG_ADD at line 3 column 1
The command nm -D /usr/lib/libfltk_gl.so.1.3 showed that the symbol _ZN18Fl_XFont_On_Demand5valueEv is undefined (marked U):
0000a3d4 T _ZN14Fl_Glut_WindowD1Ev
0000a3d4 T _ZN14Fl_Glut_WindowD2Ev
U _ZN18Fl_Font_DescriptorD1Ev
U _ZN18Fl_Graphics_Driver11clip_regionEP8_XRegion
U _ZN18Fl_XFont_On_Demand5valueEv
The solution was to apply the patch file mentioned here to some files inside the source directory of FLTK 1.3.3 and then recompile and reinstall FLTK. Now Octave works with FLTK without any problem.
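A rough sketch of the rebuild, assuming the FLTK 1.3.3 source tree ("fltk-font-fix.patch" is only a placeholder name for the patch file from the link above, and the -p level may need adjusting):
cd fltk-1.3.3
patch -p1 < ../fltk-font-fix.patch   # placeholder name for the linked patch
./configure && make
sudo make install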

Related

Stanford NER Tagger and NLTK - not working [OSError: Java command failed]

Trying to run the Stanford NER Tagger and NLTK from a Jupyter notebook.
I am continuously getting
OSError: Java command failed
I have already tried the hack at
https://gist.github.com/alvations/e1df0ba227e542955a8a
and the thread
Stanford Parser and NLTK
I am using
NLTK==3.3
Ubuntu==16.04LTS
Here is my Python code:
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.tag.stanford import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"
sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]
I got these files using the following commands:
wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip files.
unzip stanford-ner-2015-04-20.zip
unzip stanford-parser-full-2015-04-20.zip
unzip stanford-postagger-full-2015-04-20.zip
I am getting the following error:
CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
-loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
... 2 more
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
79 def tag(self, tokens):
80 # This function should return list of tuple rather than list of list
---> 81 return sum(self.tag_sents([tokens]), [])
82
83 def tag_sents(self, sentences):
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
102 # Run the tagger and get the output
103 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104 stdout=PIPE, stderr=PIPE)
105 stanpos_output = stanpos_output.decode(encoding)
106
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
134 if p.returncode != 0:
135 print(_decode_stdoutdata(stderr))
--> 136 raise OSError('Java command failed : ' + str(cmd))
137
138 return (stdout, stderr)
OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']
Download Stanford Named Entity Recognizer version 3.9.1: see the ‘Download’ section of the Stanford NLP website.
Unzip it and move the two files "stanford-ner.jar" and "english.all.3class.distsim.crf.ser.gz" to your folder.
Open a Jupyter notebook or an IPython prompt in that folder and run the following Python code:
import nltk
from nltk.tag.stanford import StanfordNERTagger
sentence = u"Twenty miles east of Reno, Nev., " \
"where packs of wild mustangs roam free through " \
"the parched landscape, Tesla Gigafactory 1 " \
"sprawls near Interstate 80."
jar = './stanford-ner.jar'
model = './english.all.3class.distsim.crf.ser.gz'
ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')
words = nltk.word_tokenize(sentence)
# Run NER tagger on words
print(ner_tagger.tag(words))
I tested this on NLTK==3.3 and Ubuntu==16.04 LTS
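For the download/unzip/move steps above, a rough shell sketch (the archive name is an assumption for the 3.9.1 release; check the actual link in the ‘Download’ section of the Stanford NLP website):
wget https://nlp.stanford.edu/software/stanford-ner-2018-02-27.zip   # assumed archive name
unzip stanford-ner-2018-02-27.zip
cp stanford-ner-2018-02-27/stanford-ner.jar .
cp stanford-ner-2018-02-27/classifiers/english.all.3class.distsim.crf.ser.gz .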

nix-shell --command `stack build` leads to libpq-fe.h: No such file or directory

I am trying to compile my small project (a Yesod application with LambdaCms) on NixOS. However, after using cabal2nix (more precisely, cabal2nix project-karma.cabal --sha256=0 --shell > shell.nix), it seems I am still missing a dependency related to PostgreSQL.
My shell.nix file looks like this:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "default" }:
let
  inherit (nixpkgs) pkgs;
  f = { mkDerivation, aeson, base, bytestring, classy-prelude
      , classy-prelude-conduit, classy-prelude-yesod, conduit, containers
      , data-default, directory, fast-logger, file-embed, filepath
      , hjsmin, hspec, http-conduit, lambdacms-core, monad-control
      , monad-logger, persistent, persistent-postgresql
      , persistent-template, random, resourcet, safe, shakespeare, stdenv
      , template-haskell, text, time, transformers, unordered-containers
      , uuid, vector, wai, wai-extra, wai-logger, warp, yaml, yesod
      , yesod-auth, yesod-core, yesod-form, yesod-static, yesod-test
      }:
      mkDerivation {
        pname = "karma";
        version = "0.0.0";
        sha256 = "0";
        isLibrary = true;
        isExecutable = true;
        libraryHaskellDepends = [
          aeson base bytestring classy-prelude classy-prelude-conduit
          classy-prelude-yesod conduit containers data-default directory
          fast-logger file-embed filepath hjsmin http-conduit lambdacms-core
          monad-control monad-logger persistent persistent-postgresql
          persistent-template random safe shakespeare template-haskell text
          time unordered-containers uuid vector wai wai-extra wai-logger warp
          yaml yesod yesod-auth yesod-core yesod-form yesod-static
          nixpkgs.zlib
          nixpkgs.postgresql
          nixpkgs.libpqxx
        ];
        libraryPkgconfigDepends = [ persistent-postgresql ];
        executableHaskellDepends = [ base ];
        testHaskellDepends = [
          base classy-prelude classy-prelude-yesod hspec monad-logger
          persistent persistent-postgresql resourcet shakespeare transformers
          yesod yesod-core yesod-test
        ];
        license = stdenv.lib.licenses.bsd3;
      };
  haskellPackages = if compiler == "default"
                      then pkgs.haskellPackages
                      else pkgs.haskell.packages.${compiler};
  drv = haskellPackages.callPackage f {};
in
  if pkgs.lib.inNixShell then drv.env else drv
The output is as follows:
markus#nixos ~/git/haskell/karma/karma (git)-[master] % nix-shell --command `stack build`
postgresql-libpq-0.9.1.1: configure
ReadArgs-1.2.2: download
postgresql-libpq-0.9.1.1: build
ReadArgs-1.2.2: configure
ReadArgs-1.2.2: build
ReadArgs-1.2.2: install
-- While building package postgresql-libpq-0.9.1.1 using:
/run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/postgresql-libpq-0.9.1.1.log
[1 of 1] Compiling Main ( /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/Setup.hs, /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring postgresql-libpq-0.9.1.1...
Building postgresql-libpq-0.9.1.1...
Preprocessing library postgresql-libpq-0.9.1.1...
LibPQ.hsc:213:22: fatal error: libpq-fe.h: No such file or directory
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/run/current-system/sw/include -Icbits -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
I assume not much is missing, so a pointer would be nice.
What is also weird is that "nix-shell" works, but following that up with "stack exec yesod devel" tells me
Resolving dependencies...
Configuring karma-0.0.0...
cabal: At least the following dependencies are missing:
classy-prelude >=0.10.2,
classy-prelude-conduit >=0.10.2,
classy-prelude-yesod >=0.10.2,
hjsmin ==0.1.*,
http-conduit ==2.1.*,
lambdacms-core >=0.3.0.2 && <0.4,
monad-logger ==0.3.*,
persistent >=2.0 && <2.3,
persistent-postgresql >=2.1.1 && <2.3,
persistent-template >=2.0 && <2.3,
uuid >=1.3,
wai-extra ==3.0.*,
warp >=3.0 && <3.2,
yesod >=1.4.1 && <1.5,
yesod-auth >=1.4.0 && <1.5,
yesod-core >=1.4.6 && <1.5,
yesod-form >=1.4.0 && <1.5,
yesod-static >=1.4.0.3 && <1.6
When using mysql instead, I am getting
pcre-light-0.4.0.4: configure
mysql-0.1.1.8: configure
mysql-0.1.1.8: build
Progress: 2/59
-- While building package mysql-0.1.1.8 using:
/run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64- linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/mysql-0.1.1.8.log
[1 of 1] Compiling Main ( /run/user/1000/stack12820/mysql-0.1.1.8/Setup.lhs, /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring mysql-0.1.1.8...
Building mysql-0.1.1.8...
Preprocessing library mysql-0.1.1.8...
In file included from C.hsc:68:0:
include/mysql_signals.h:9:19: fatal error: mysql.h: No such file or directory
#include "mysql.h"
^
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include/.. -Iinclude -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
-- While building package pcre-light-0.4.0.4 using:
/home/markus/.stack/setup-exe-cache/setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ configure --with-ghc=/run/current-system/sw/bin/ghc --user --package-db=clear --package-db=global --package-db=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/pkgdb/ --libdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib --bindir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/bin --datadir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/share --libexecdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/libexec --sysconfdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/etc --docdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --htmldir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --haddockdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --dependency=base=base-4.8.1.0-4f7206fd964c629946bb89db72c80011 --dependency=bytestring=bytestring-0.10.6.0-18c05887c1aaac7adb3350f6a4c6c8ed
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/pcre-light-0.4.0.4.log
Configuring pcre-light-0.4.0.4...
setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2: The program 'pkg-config'
version >=0.9.0 is required but it could not be found.
After adding pkgconfig to my global configuration, the build gets a little further, so it seems that shell.nix is being ignored to some extent.
(Sources for what I tried so far:
https://groups.google.com/forum/#!topic/haskell-stack/_ZBh01VP_fo)
Update: It seems like I overlooked this section of the manual
http://nixos.org/nixpkgs/manual/#using-stack-together-with-nix
However, the first idea that came to mind
(stack --extra-lib-dirs=/nix/store/c6qy7n5wdwl164lnzha7vpc3av9yhnga-postgresql-libpq-0.9.1.1/lib build)
did not work yet; most likely I need to use --extra-include-dirs or try one of the variations. It seems weird that stack is still trying to build postgresql-libpq in the very same version, though.
Update2: Currently trying out "stack --extra-lib-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib --extra-include-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/include build", which looks promising. It does not look like the Nix way, but still.
Update3: Still getting
<command line>: can't load .so/.DLL for: /home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib/x86_64-linux-ghc-7.10.2/postgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6/libHSpostgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6-ghc7.10.2.so
(libpq.so.5: cannot open shared object file: No such file or directory) stack build 186.99s user 2.93s system 109% cpu 2:52.76 total
which is strange since libpq.so.5 is contained in /nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib.
An additional
$LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib
does not help either.
Update4:
By the way, yesod devel does the same as stack exec yesod devel. My libraries are downloaded to /nix/store but they are not recognized.
Maybe I need to make "build-nix" work and yesod devel does not work here?
Just for completeness, here is stack.yaml
resolver: nightly-2015-11-17
#run stack setup otherwise!!
# Local packages, usually specified by relative directory name
packages:
- '.'
# Packages to be pulled from upstream that are not in the resolver (e.g., acme-missiles-0.3)
extra-deps: [lambdacms-core-0.3.0.2 , friendly-time-0.4, lists-0.4.2, list-extras-0.4.1.4 ]
# Override default flag values for local packages and extra-deps
flags:
  karma:
    library-only: false
    dev: false
# Extra package databases containing global packages
extra-package-dbs: []
Next weekend, I will check out
https://pr06lefs.wordpress.com/2014/09/27/compiling-a-yesod-project-on-nixos/
and other search results.
Funny, because I've just had a similar problem myself - solved it by adding these two lines to stack.yaml:
extra-include-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/include/]
extra-lib-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/lib/]
You may want to check first which PostgreSQL path from the /nix/store you should use for include/ and lib/:
nix-build --no-out-link "<nixpkgs>" -A postgresql
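As a variation on the same idea, the store path printed by nix-build can be captured and passed to stack on the command line instead of being hard-coded in stack.yaml (a sketch, assuming a single-output postgresql derivation as in the paths above):
PG=$(nix-build --no-out-link "<nixpkgs>" -A postgresql)   # prints the postgresql store path
stack --extra-include-dirs="$PG/include" --extra-lib-dirs="$PG/lib" build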
And BTW, why do you use nix-shell if you are going to use stack and you have project-karma.cabal available? Have you considered migrating your project with stack init?
Looks like stack is trying to build haskellPackages.postgresql-libpq outside of the nix framework.
You probably don't want that to happen. Maybe try to add postgresql-libpq to libraryHaskellDepends?

Getting user inputs from the STDIN in Octave using input()?

Can anyone please help me understand why I am getting this error when running the Octave (version 3.8.1) code below?
a = input("");
b = input("");
printf("%d", a+b);
./CandidateCode.m: line 1: syntax error near unexpected token `('
./CandidateCode.m: line 1: `a = input("");'
Please help me in resolving this error.
If you run your script CandidateCode.m from the shell, you have to specify an interpreter with a shebang line:
Your CandidateCode.m:
#!/usr/bin/octave -q
a = input("");
b = input("");
printf("%d", a+b);
If you want to run it from within Octave, just execute "CandidateCode" (without ./ and .m).
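For illustration, both invocations side by side (assuming the script is saved as CandidateCode.m with the shebang above):
chmod +x CandidateCode.m
./CandidateCode.m            # from the shell, via the shebang line
octave -q CandidateCode.m    # or hand the file to Octave directly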

MonoDevelop/MonoTouch - Expression denotes a `value', where a `method group' was expected - Unable to locate error Info

Following the Tasky application's core, I created the business and database layers; however, when trying to compile I get this error:
Error CS0119: Expression denotes a 'value', where a 'method group' was expected (CS0119) (assales.core)
The problem is that there is no line number or file reference to go along with the error, as would normally occur with a compilation error. This makes me assume that perhaps there is an issue with the project options, but that's just a guess and there are many options. What specifically do I need to do to either locate the error or get more information about it?
The full build output:
Building: assales.core (Debug)
Performing main compilation...
/Library/Frameworks/Mono.framework/Versions/2.10.12/bin/dmcs /noconfig "/out:/Users/sb/assales/assales.core/bin/Debug/assales.core.dll" "/r:/Library/Frameworks/Mono.framework/Versions/2.10.12/lib/mono/4.0/System.dll" "/r:/Library/Frameworks/Mono.framework/Versions/2.10.12/lib/mono/4.0/System.Data.dll" "/r:/Library/Frameworks/Mono.framework/Versions/2.10.12/lib/mono/4.0/Mono.Data.Sqlite.dll" "/r:/Library/Frameworks/Mono.framework/Versions/2.10.12/lib/mono/4.0/System.Data.Linq.dll" "/r:/Library/Frameworks/Mono.framework/Versions/2.10.12/lib/mono/4.0/System.Xml.Linq.dll" "/r:/Library/Frameworks/Mono.framework/Versions/2.10.12/lib/mono/4.0/System.Core.dll" /nologo /warn:4 /debug:full /optimize- /codepage:utf8 "/define:DEBUG" /t:library "/Users/sb/assales/assales.core/AssemblyInfo.cs" "/Users/sb/assales/assales.core/DL/SqlLite.cs" "/Users/sb/assales/assales.core/DL/AlcSalesDatabase.cs" "/Users/sb/assales/assales.core/BusinessLayer/Contracts/BusinessEntityBase.cs" "/Users/sb/assales/assales.core/BusinessLayer/Contracts/IBusinessEntity.cs" "/Users/sb/assales/assales.core/BusinessLayer/Location.cs" "/Users/sb/assales/assales.core/BusinessLayer/Managers/LocationManager.cs" "/Users/sb/assales/assales.core/DAL/LocationRepository.cs"
Compilation failed: 1 error(s), 0 warnings
error CS0119: Expression denotes a `value', where a `method group' was expected
Build complete -- 1 error, 0 warnings
---------------------- Done ----------------------
Build: 1 error, 0 warnings
I think it's a problem with the Mono compiler. If I omit the "new" keyword in a statement that uses var:
// "var" version
public class App {
public static void Main() {
//missing keyword "new"
var bitArray = System.Collections.BitArray();
}
}
the compiler indicates neither the row number nor the file name:
$ mcs App.cs
error CS0119: Expression denotes a `type', where a `variable', `value' or `method group' was expected
If instead I declare bitArray explicitly (without using "var"):
public class App {
    public static void Main() {
        // missing keyword "new"
        System.Collections.BitArray bitArray = System.Collections.BitArray();
    }
}
the compiler works well and reports the location:
$ mcs App.cs
App.cs(3,27): error CS0119: Expression denotes a `type', where a `variable', `value' or `method group' was expected
My mcs version is:
$mcs --version
Mono C# compiler version 3.2.3.0
By the way, the Microsoft compiler also handles the "var" version of App.cs well:
/cygdrive/c/WINDOWS/Microsoft.NET/Framework/v4.0.30319/csc.exe App.cs
Microsoft (R) Visual C# 2010 Compiler version 4.0.30319.1
Copyright (C) Microsoft Corporation. All rights reserved.
App.cs(4,23): error CS0119: 'System.Collections.BitArray' is a 'type', which is not valid in the given context

Octave leasqr error

I have the following
x=[0.01:0.01:.1];
y=[1 1 1 1 1 0 0 0 0 0 ];
F=@(x,p) 0.5-(1/Pi)*atan(p(2)*(x-p(1)));
p0=[0.05 10000];
When I run the following
[f p]=leasqr(x,y,p0,F)
I get
error: Invalid call to options. Correct usage is:
-- Function File: OPT = options ("KEY1", VALUE1, "KEY2", VALUE2, ...)
error: called from:
error: /usr/share/octave/3.6.2/m/help/print_usage.m at line 87, column 5
error: /usr/share/octave/packages/control-2.3.52/options.m at line 68, column 5
error: evaluating argument list element number 1
error: /usr/share/octave/packages/optim-1.2.0/leasqr.m at line 574, column 5
Am I missing something?
EDIT: Updated the optim package. New error message:
error: binary operator `.*' not implemented for `matrix' by `symbolic matrix' operations
error: called from:
error: /usr/share/octave/packages/optim-1.2.2/private/__lm_svd__.m at line 145, column 5
error: /usr/share/octave/packages/optim-1.2.2/leasqr.m at line 582, column 26
This is a bug.
According to the mailing list, you may want to update the optim package to fix it. The first step below is system-dependent: it installs the tools needed to compile packages, and on rpm-based systems the package name is different.
$ sudo apt-get install liboctave-dev
$ sudo octave
octave> pkg install -forge optim
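On an rpm-based system the equivalent might look roughly like this (the package name octave-devel is an assumption for Fedora/RHEL-style distributions):
$ sudo yum install octave-devel
$ sudo octave
octave> pkg install -forge optim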
if you change "Pi" to "pi" in the function this code works for me.
except that it says "CONVERGENCE NOT ACHIEVED! "
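For reference, a minimal sketch of the corrected call (pi instead of Pi, and @ for the anonymous function), run non-interactively from the shell; this assumes the optim package is installed as above:
octave -q --eval '
  pkg load optim;                                   # load the optim package (provides leasqr)
  x  = 0.01:0.01:0.1;
  y  = [1 1 1 1 1 0 0 0 0 0];
  F  = @(x,p) 0.5 - (1/pi)*atan(p(2)*(x - p(1)));
  p0 = [0.05 10000];
  [f, p] = leasqr(x, y, p0, F);
  disp(p)                                           # fitted parameters
'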