Stanford NER Tagger and NLTK - not working [OSError: Java command failed]

Trying to run the Stanford NER Tagger and NLTK from a Jupyter notebook.
I am continuously getting
OSError: Java command failed
I have already tried the hack at
https://gist.github.com/alvations/e1df0ba227e542955a8a
and the thread
Stanford Parser and NLTK
I am using
NLTK==3.3
Ubuntu==16.04 LTS
Here is my Python code:
from nltk import sent_tokenize, word_tokenize
from nltk.tag.stanford import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"
sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]
I got these files using the following commands:
wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip
unzip stanford-parser-full-2015-04-20.zip
unzip stanford-postagger-full-2015-04-20.zip
I am getting the following error:
CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
-loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
... 2 more
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
79 def tag(self, tokens):
80 # This function should return list of tuple rather than list of list
---> 81 return sum(self.tag_sents([tokens]), [])
82
83 def tag_sents(self, sentences):
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
102 # Run the tagger and get the output
103 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104 stdout=PIPE, stderr=PIPE)
105 stanpos_output = stanpos_output.decode(encoding)
106
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
134 if p.returncode != 0:
135 print(_decode_stdoutdata(stderr))
--> 136 raise OSError('Java command failed : ' + str(cmd))
137
138 return (stdout, stderr)
OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']

The ClassCastException while deserializing the model usually means the .ser.gz model file and the stanford-ner.jar you point NLTK at come from different Stanford NER releases, so take both files from the same download.
Download Stanford Named Entity Recognizer version 3.9.1: see the 'Download' section of the Stanford NLP website.
Unzip it and move the two files "stanford-ner.jar" and "english.all.3class.distsim.crf.ser.gz" to your folder.
Open a Jupyter notebook or IPython prompt in that folder and run the following Python code:
import nltk
from nltk.tag.stanford import StanfordNERTagger

sentence = u"Twenty miles east of Reno, Nev., " \
           "where packs of wild mustangs roam free through " \
           "the parched landscape, Tesla Gigafactory 1 " \
           "sprawls near Interstate 80."

jar = './stanford-ner.jar'
model = './english.all.3class.distsim.crf.ser.gz'

ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')

words = nltk.word_tokenize(sentence)

# Run NER tagger on words
print(ner_tagger.tag(words))
I tested this on NLTK==3.3 and Ubuntu==16.04 LTS.
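If you need to tag several sentences, as in the question, a minimal sketch (reusing the ner_tagger defined above) would hand all of them to tag_sents in a single call; NLTK then starts one Java process for the whole batch instead of one per tag() call. sent_tokenize/word_tokenize are used only because the question already tokenizes that way.
from nltk import sent_tokenize, word_tokenize

text = "Google, headquartered in Mountain View, unveiled the new Android phone"
tokenized_sentences = [word_tokenize(s) for s in sent_tokenize(text)]

# One Java invocation for the whole batch instead of one per sentence
annotations = ner_tagger.tag_sents(tokenized_sentences)
print(annotations)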

gym package not identifying ten-armed-bandits-v0 env

Environment:
Python: 3.9
OS: Windows 10
When I try to create the ten-armed-bandits environment using the following code, the error below is thrown and I am not sure of the reason.
import gym
import gym_armed_bandits
env = gym.make('ten-armed-bandits-v0')
The error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:158, in EnvRegistry.spec(self, path)
157 try:
--> 158 return self.env_specs[id]
159 except KeyError:
160 # Parse the env name and check to see if it matches the non-version
161 # part of a valid env (could also check the exact number here)
KeyError: 'ten-armed-bandits-v0'
During handling of the above exception, another exception occurred:
UnregisteredEnv Traceback (most recent call last)
Input In [6], in <module>
----> 1 env = gym.make('ten-armed-bandits-v0')
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:235, in make(id, **kwargs)
234 def make(id, **kwargs):
--> 235 return registry.make(id, **kwargs)
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:128, in EnvRegistry.make(self, path, **kwargs)
126 else:
127 logger.info("Making new env: %s", path)
--> 128 spec = self.spec(path)
129 env = spec.make(**kwargs)
130 return env
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:203, in EnvRegistry.spec(self, path)
197 raise error.UnregisteredEnv(
198 "Toytext environment {} has been moved out of Gym. Install it via `pip install gym-legacy-toytext` and add `import gym_toytext` before using it.".format(
199 id
200 )
201 )
202 else:
--> 203 raise error.UnregisteredEnv("No registered env with id: {}".format(id))
UnregisteredEnv: No registered env with id: ten-armed-bandits-v0
When I check the environments available, I am able to see it there.
from gym import envs
print(envs.registry.all())
dict_values([EnvSpec(CartPole-v0), EnvSpec(CartPole-v1), EnvSpec(MountainCar-v0), EnvSpec(MountainCarContinuous-v0), EnvSpec(Pendulum-v1), EnvSpec(Acrobot-v1), EnvSpec(LunarLander-v2), EnvSpec(LunarLanderContinuous-v2), EnvSpec(BipedalWalker-v3), EnvSpec(BipedalWalkerHardcore-v3), EnvSpec(CarRacing-v0), EnvSpec(Blackjack-v1), EnvSpec(FrozenLake-v1), EnvSpec(FrozenLake8x8-v1), EnvSpec(CliffWalking-v0), EnvSpec(Taxi-v3), EnvSpec(Reacher-v2), EnvSpec(Pusher-v2), EnvSpec(Thrower-v2), EnvSpec(Striker-v2), EnvSpec(InvertedPendulum-v2), EnvSpec(InvertedDoublePendulum-v2), EnvSpec(HalfCheetah-v2), EnvSpec(HalfCheetah-v3), EnvSpec(Hopper-v2), EnvSpec(Hopper-v3), EnvSpec(Swimmer-v2), EnvSpec(Swimmer-v3), EnvSpec(Walker2d-v2), EnvSpec(Walker2d-v3), EnvSpec(Ant-v2), EnvSpec(Ant-v3), EnvSpec(Humanoid-v2), EnvSpec(Humanoid-v3), EnvSpec(HumanoidStandup-v2), EnvSpec(FetchSlide-v1), EnvSpec(FetchPickAndPlace-v1), EnvSpec(FetchReach-v1), EnvSpec(FetchPush-v1), EnvSpec(HandReach-v0), EnvSpec(HandManipulateBlockRotateZ-v0), EnvSpec(HandManipulateBlockRotateZTouchSensors-v0), EnvSpec(HandManipulateBlockRotateZTouchSensors-v1), EnvSpec(HandManipulateBlockRotateParallel-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensors-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensors-v1), EnvSpec(HandManipulateBlockRotateXYZ-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensors-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensors-v1), EnvSpec(HandManipulateBlockFull-v0), EnvSpec(HandManipulateBlock-v0), EnvSpec(HandManipulateBlockTouchSensors-v0), EnvSpec(HandManipulateBlockTouchSensors-v1), EnvSpec(HandManipulateEggRotate-v0), EnvSpec(HandManipulateEggRotateTouchSensors-v0), EnvSpec(HandManipulateEggRotateTouchSensors-v1), EnvSpec(HandManipulateEggFull-v0), EnvSpec(HandManipulateEgg-v0), EnvSpec(HandManipulateEggTouchSensors-v0), EnvSpec(HandManipulateEggTouchSensors-v1), EnvSpec(HandManipulatePenRotate-v0), EnvSpec(HandManipulatePenRotateTouchSensors-v0), EnvSpec(HandManipulatePenRotateTouchSensors-v1), EnvSpec(HandManipulatePenFull-v0), EnvSpec(HandManipulatePen-v0), EnvSpec(HandManipulatePenTouchSensors-v0), EnvSpec(HandManipulatePenTouchSensors-v1), EnvSpec(FetchSlideDense-v1), EnvSpec(FetchPickAndPlaceDense-v1), EnvSpec(FetchReachDense-v1), EnvSpec(FetchPushDense-v1), EnvSpec(HandReachDense-v0), EnvSpec(HandManipulateBlockRotateZDense-v0), EnvSpec(HandManipulateBlockRotateZTouchSensorsDense-v0), EnvSpec(HandManipulateBlockRotateZTouchSensorsDense-v1), EnvSpec(HandManipulateBlockRotateParallelDense-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensorsDense-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensorsDense-v1), EnvSpec(HandManipulateBlockRotateXYZDense-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensorsDense-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensorsDense-v1), EnvSpec(HandManipulateBlockFullDense-v0), EnvSpec(HandManipulateBlockDense-v0), EnvSpec(HandManipulateBlockTouchSensorsDense-v0), EnvSpec(HandManipulateBlockTouchSensorsDense-v1), EnvSpec(HandManipulateEggRotateDense-v0), EnvSpec(HandManipulateEggRotateTouchSensorsDense-v0), EnvSpec(HandManipulateEggRotateTouchSensorsDense-v1), EnvSpec(HandManipulateEggFullDense-v0), EnvSpec(HandManipulateEggDense-v0), EnvSpec(HandManipulateEggTouchSensorsDense-v0), EnvSpec(HandManipulateEggTouchSensorsDense-v1), EnvSpec(HandManipulatePenRotateDense-v0), EnvSpec(HandManipulatePenRotateTouchSensorsDense-v0), EnvSpec(HandManipulatePenRotateTouchSensorsDense-v1), EnvSpec(HandManipulatePenFullDense-v0), EnvSpec(HandManipulatePenDense-v0), 
EnvSpec(HandManipulatePenTouchSensorsDense-v0), EnvSpec(HandManipulatePenTouchSensorsDense-v1), EnvSpec(CubeCrash-v0), EnvSpec(CubeCrashSparse-v0), EnvSpec(CubeCrashScreenBecomesBlack-v0), EnvSpec(MemorizeDigits-v0), EnvSpec(three-armed-bandits-v0), EnvSpec(five-armed-bandits-v0), EnvSpec(ten-armed-bandits-v0), EnvSpec(MultiarmedBandits-v0)])
It could be a problem with your Python version: the k-armed-bandits library was made 4 years ago, when Python 3.9 didn't exist. Besides this, the configuration files in the repo indicate that the Python version is 2.7 (not 3.9).
If you create an environment with Python 2.7 and follow the setup instructions, it works correctly on Windows:
git clone <URL of the gym_armed_bandits repository>
cd gym_armed_bandits
pip install -e .
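As a quick sanity check before rebuilding anything, you can confirm which interpreter the notebook is actually running; this is plain Python and assumes nothing beyond the standard library:
import sys

# The configuration files in the gym_armed_bandits repo target Python 2.7,
# so anything printed here starting with 3.9 points to the version mismatch
print(sys.version)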

Error when trying to process .json files to pandas dataframe

I'm trying to merge various .json files so I can later run sentiment analysis on them.
I have already tried other approaches, but they always end up with an error.
I checked whether the .json files are correctly formatted and can't find any issues there. I have also attached an example of a .json file.
The error message is attached below my code.
import glob
import json

# list all files containing News from Guardian API
files = list(glob.iglob('/Users/xxx/tempdata/articles_data/*.json'))

news_data = []
for file in files:
    news_file = open(file, "r", encoding='utf-8')
    # Read in news and store in list: news_data
    for line in news_file:
        news = json.loads(line)
        news_data.append(news)
    news_file.close()
Updated Error Output
AttributeError Traceback (most recent call last)
<ipython-input-86-3019ee85b15b> in <module>
12 # Read in news and store in list: news_data
13 for line in news_file:
---> 14 news = json.load(line)
15 news_data.append(news)
16
~/opt/anaconda3/lib/python3.8/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
AttributeError: 'str' object has no attribute 'read'
2019-11-01.json
{
"id":"business/2019/nov/01/google-snaps-up-fitbit-for-21bn",
"type":"article",
"sectionId":"business",
"sectionName":"Business",
"webPublicationDate":"2019-11-01T14:26:19Z",
"webTitle":"Google snaps up Fitbit for $2.1bn",
"webUrl":"https://www.theguardian.com/business/2019/nov/01/google-snaps-up-fitbit-for-21bn",
"apiUrl":"https://content.guardianapis.com/business/2019/nov/01/google-snaps-up-fitbit-for-21bn",
"fields":{
"headline":"Google snaps up Fitbit for $2.1bn",
"standfirst":"<p>Takeover allows web giant to take on Apple in fast-growing smartwatch and wearables business</p>",
"trailText":"Takeover allows web giant to take on Apple in fast-growing smartwatch and wearables business",
"byline":"Kalyeena Makortoff",
"main":"<figure class=\"element element-image\" data-media-id=\"fc8abb0f70105fcab3aee86dea6c89e211337660\"> <img src=\"https://media.guim.co.uk/fc8abb0f70105fcab3aee86dea6c89e211337660/0_158_3571_2143/1000.jpg\" alt=\"The wireless activity tracker Zip by Fitbit Inc\" width=\"1000\" height=\"600\" class=\"gu-image\" /> <figcaption> <span class=\"element-image__caption\">The wireless activity tracker Zip by Fitbit Inc. Google has confirmed it will buy Fitbit for $2.1bn.</span> <span class=\"element-image__credit\">Photograph: Franck Robichon/EPA</span> </figcaption> </figure>",
"body":"<p>Google has snapped up the Fitbit... ",
"newspaperPageNumber":"38",
"wordcount":"679",
"firstPublicationDate":"2019-11-01T14:25:58Z",
"isInappropriateForSponsorship":"false",
"isPremoderated":"false",
"lastModified":"2019-11-01T18:56:38Z",
"newspaperEditionDate":"2019-11-02T00:00:00Z",
"productionOffice":"UK",
"publication":"The Guardian",
"shortUrl":"https://gu.com/p/cjeze",
"shouldHideAdverts":"false",
"showInRelatedContent":"true",
"thumbnail":"https://media.guim.co.uk/fc8abb0f70105fcab3aee86dea6c89e211337660/0_158_3571_2143/500.jpg",
"legallySensitive":"false",
"lang":"en",
"isLive":"true",
"bodyText":"Google has snapped up the Fitbit activity tracker business in a $2.1bn (\u00a31.6bn) deal that will enable the search giant to go toe-to-toe with Apple in the fast-growing smartwatch and wearables business..." ,
"charCount":"4149",
"shouldHideReaderRevenue":"false",
"showAffiliateLinks":"false",
"bylineHtml":"Kalyeena Makortoff"
},
"isHosted":false,
"pillarId":"pillar/news",
"pillarName":"News"
},
If you are reading JSON from a file, you should use json.load instead of json.loads. json.loads is for reading JSON from a string. See the json.load documentation.
For example:
import json

with open('ts.json', 'r') as f:
    content = json.load(f)

print(content)
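Applied to the loop in the question, a minimal sketch (assuming each file in the folder holds a single JSON document, like the 2019-11-01.json example above) would be:
import glob
import json

news_data = []
for path in glob.iglob('/Users/xxx/tempdata/articles_data/*.json'):
    with open(path, 'r', encoding='utf-8') as f:
        # json.load consumes the whole file object, even when the JSON
        # is pretty-printed across many lines
        news_data.append(json.load(f))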

TensorFlow tf.decode_csv() not recognizing end of line character

I am trying to read in a CSV file in TensorFlow.
import tensorflow as tf

record_defaults = [[0.0], [0.0]]
data = tf.decode_csv(r"C:\Users\USER.NAME\Desktop\tmp.txt", record_defaults=record_defaults)

sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
sess.run(tf.global_variables_initializer())
print(sess.run(data))
sess.close()
Where tmp.txt is a simple CSV:
1.0,4.0
-.3,1.2
Note that I am running Windows, and Notepad++ shows that my lines end with '\r\n' (CRLF).
I get the following error when running the above code, which suggests to me that TensorFlow isn't recognizing the end of line character:
InvalidArgumentError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in
_do_call(self, fn, *args)
1021 try:
-> 1022 return fn(*args)
1023 except errors.OpError as e:
C:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1003 feed_dict, fetch_list, target_list,
-> 1004 status, run_metadata)
1005
C:\Anaconda3\Lib\contextlib.py in __exit__(self, type, value, traceback)
65 try:
---> 66 next(self.gen)
67 except StopIteration:
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
465 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466 pywrap_tensorflow.TF_GetCode(status))
467 finally:
InvalidArgumentError: Expect 2 fields but have 1 in record 0
[[Node: DecodeCSV = DecodeCSV[OUT_TYPE=[DT_FLOAT, DT_FLOAT], field_delim=",", _device="/job:localhost/replica:0/task:0/cpu:0"](DecodeCSV/records, DecodeCSV/record_defaults_0, DecodeCSV/record_defaults_1)]]
The error persists even when I change the delimiter to a space or tab.
I've searched across Google/StackOverflow but haven't been able to find a similar error. Any help is appreciated. Thank you!
Convert your file to Unix format. I am assuming you are working on Windows. Either way, in Notepad++ change the line endings of the file as follows:
From the "Edit" menu, select "EOL Conversion" -> "UNIX/OSX Format".
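If you prefer to do the conversion programmatically, a minimal sketch (assuming the file is small enough to read into memory) is:
path = r"C:\Users\USER.NAME\Desktop\tmp.txt"

# Rewrite the file with Unix (LF) line endings instead of CRLF
with open(path, "rb") as f:
    data = f.read()
with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))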

octave cannot find fltk XGetUtf8FontAndGlyph symbol

Octave 3.8.2 produces this error on loading:
error: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/PKG_ADD: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/PKG_ADD at line 6, column 1
GNU Octave, version 3.8.2
I obtain the following information about the configuration of the graphics libraries:
octave:1> octave_config_info().GRAPHICS_LIBS
ans = -L/usr/lib64/fltk -Wl,-rpath,/usr/lib64/fltk -Wl,-O1 -Wl,--sort-common -Wl,--as-needed -lfltk_gl -lGLU -lGL -lfltk -lXcursor -lXfixes -lXext -ldl -lm -lX11
and although no graphics toolkits are evidently loaded initially,
octave:2> available_graphics_toolkits
ans = {}(1x0)
I can register them subsequently,
octave:3> register_graphics_toolkit("gnuplot")
octave:4> available_graphics_toolkits
ans =
{
[1,1] = gnuplot
}
octave:5> register_graphics_toolkit("fltk")
octave:6> available_graphics_toolkits
ans =
{
[1,1] = fltk
[1,2] = gnuplot
}
but attempting to load fltk produces an error consistent with the initial warning
octave:7> graphics_toolkit("fltk")
error: feval: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/share/octave/3.8.2/m/plot/util/graphics_toolkit.m at line 74, column 5
and of course attempting to plot anything also fails,
octave:8> plot(1:10)
error: feval: /usr/lib64/octave/3.8.2/oct/x86_64-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib64/fltk/libfltk_gl.so.1.3: undefined symbol: XGetUtf8FontAndGlyph
error: called from:
error: /usr/share/octave/3.8.2/m/plot/util/graphics_toolkit.m at line 74, column 5
error: failed to load fltk graphics toolkit
error: base_graphics_toolkit::initialize: invalid graphics toolkit
error: /usr/share/octave/3.8.2/m/plot/util/figure.m at line 94, column 9
error: /usr/share/octave/3.8.2/m/plot/util/gcf.m at line 63, column 9
error: /usr/share/octave/3.8.2/m/plot/util/newplot.m at line 113, column 8
error: /usr/share/octave/3.8.2/m/plot/draw/plot.m at line 219, column 9
Both Octave and FLTK were compiled from source under Gentoo:
x11-libs/fltk-1.3.3-r2:1 USE="opengl -cairo -debug -doc -examples -games -pdf -static-libs -threads -xft -xinerama"
sci-mathematics/octave-3.8.2:0/3.8.2 USE="X doc glpk gnuplot gui imagemagick opengl qhull qrupdate readline sparse zlib -curl -fftw -hdf5 -java -jit -postscript -static-libs"
resulting in the following configure switches (for the FLTK library):
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --includedir=/usr/include/fltk --libdir=/usr/lib64/fltk --docdir=/usr/share/doc/fltk-1.3.3-r2/html --enable-largefile --enable-shared --enable-xdbe --disable-localjpeg --disable-localpng --disable-localzlib --disable-debug --disable-cairo --enable-gl --disable-threads --disable-xft --disable-xinerama
and (for Octave):
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --libdir=/usr/lib64 --disable-silent-rules --disable-dependency-tracking --docdir=/usr/share/doc/octave-3.8.2 --enable-shared --disable-static --localstatedir=/var/state/octave --with-blas=-L/usr/lib64/blas/reference -lblas --with-lapack=-llapack -L/usr/lib64/blas/reference -lblas --enable-docs --disable-java --enable-gui --disable-jit --enable-readline --without-curl --without-fftw3 --without-fftw3f --disable-fftw-threads --with-glpk --without-hdf5 --with-opengl --with-qhull --with-qrupdate --with-arpack --with-umfpack --with-colamd --with-ccolamd --with-cholmod --with-cxsparse --with-x --with-z --with-magick=GraphicsMagick
If I examine libfltk_gl.so.1.3 with nm, I see that the following symbols are exported:
$ nm -D /usr/lib64/fltk/libfltk_gl.so.1.3
U XCreateColormap
U XGetUtf8FontAndGlyph
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
w _Jv_RegisterClasses
U _Z10fl_measurePKcRiS1_i
000000000000e170 T _Z10gl_descentv
000000000000e590 T _Z10gl_measurePKcRiS1_
... <snip>
According to the nm manual, the U designates that the symbol is global (external) but undefined. My question is whether this undefined symbol is the origin of the error reported by Octave, suggesting that the problem lies with how FLTK was compiled, or whether the Octave compilation is somehow at fault.
Edit: solved by enabling Xft support; please see the comments below. I thank Andy again for his help.
XGetUtf8FontAndGlyph should be in libfltk.so.1.3.
nm -D /usr/lib/x86_64-linux-gnu/libfltk.so.1.3 |grep XGetU
00000000000c2fc0 T XGetUtf8FontAndGlyph
It's very likely that this is a problem with your configure flags for FLTK and not with GNU Octave. Just try it with the default settings first.
You can test whether the UTF-8 support with OpenGL is okay with the "cube" test. Just dig into the tests directory of the FLTK source:
cd fltk-1.3.3/test
make cube && ./cube
Does the text in the lower left of the GL window show up?
I had a similar problem. I was getting the following error while trying to run Octave (undefined symbol: _ZN18Fl_XFont_On_Demand5valueEv):
bash-4.3$ octave
error: /usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/PKG_ADD: /usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/__init_fltk__.oct: failed to load: /usr/lib/libfltk_gl.so.1.3: undefined symbol: _ZN18Fl_XFont_On_Demand5valueEv
error: called from
/usr/local/lib/octave/4.0.2/oct/i686-pc-linux-gnu/PKG_ADD at line 3 column 1
The command nm -D /usr/lib/libfltk_gl.so.1.3 showed that the symbol _ZN18Fl_XFont_On_Demand5valueEv is undefined (marked with U):
0000a3d4 T _ZN14Fl_Glut_WindowD1Ev
0000a3d4 T _ZN14Fl_Glut_WindowD2Ev
U _ZN18Fl_Font_DescriptorD1Ev
U _ZN18Fl_Graphics_Driver11clip_regionEP8_XRegion
U _ZN18Fl_XFont_On_Demand5valueEv
The solution was to apply a patch file mentioned here to some files inside the source directory of FLTK-1.3.3 and then recompile and reinstall FLTK. Now Octave works with FLTK without any problem.

How to execute a Scala Swing application?

I am new to Scala and trying to execute a Swing application.
I am using Scala 2.8.
I have compiled the program successfully, but
while executing it I get an error like "no such file".
Can anyone please help me out?
I am providing the code I am trying to execute.
Gui.scala
import swing._
object Gui extends SimpleSwingApplication
{
  def top = new MainFrame {
    title = "swing"
    val b1 = new Button {
      text = "ok"
    }
  }
}
scalac Gui.scala
it compiles successfully and creates the class files,
but when I try
scala Gui
it just replies
No such File
Setup:
D:\src\scala_ex\ex1>dir
Volume in drive D is Data
Volume Serial Number is 5C88-8D6C
Directory of D:\src\scala_ex\ex1
01.12.2010 09:25 <DIR> .
01.12.2010 09:25 <DIR> ..
01.12.2010 09:24 173 gui.scala
1 File(s) 173 bytes
2 Dir(s) 24 575 205 376 bytes free
D:\src\scala_ex\ex1>more gui.scala
import swing._
object Gui extends SimpleSwingApplication {
def top = new MainFrame {
title = "swing"
val b1 = new Button{
text = "ok"
}
}
}
D:\src\scala_ex\ex1>scalac -version
Scala compiler version 2.8.1.final -- Copyright 2002-2010, LAMP/EPFL
Compile:
D:\src\scala_ex\ex1>scalac gui.scala
D:\src\scala_ex\ex1>dir
Volume in drive D is Data
Volume Serial Number is 5C88-8D6C
Directory of D:\src\scala_ex\ex1
01.12.2010 09:26 <DIR> .
01.12.2010 09:26 <DIR> ..
01.12.2010 09:26 485 Gui$$anon$1$$anon$2.class
01.12.2010 09:26 557 Gui$$anon$1.class
01.12.2010 09:26 558 Gui$.class
01.12.2010 09:26 1 467 Gui.class
01.12.2010 09:24 173 gui.scala
5 File(s) 3 240 bytes
2 Dir(s) 24 575 201 280 bytes free
Execute:
D:\src\scala_ex\ex1>scala -cp . Gui
And the applications starts.
This is not a direct cut&paste from the Scala code, as the blank line between object Gui and { causes a compilation error.
Now, if you fix that error and compile this with Scala 2.8, you should get these classes in the local directory:
Gui$$anon$1$$anon$2.class
Gui$$anon$1.class
Gui$.class
Gui.class
If you don't, then either the compilation did not work, or there's something else missing. For example, if you declared a package X at the top (and removed it from the example), then Gui won't be in the local directory, but under a subdirectory X, and you should invoke it by typing scala X.Gui.
Another possibility is that you have some Java environment variable pointing the output directory to someplace else.