Find the source of a Haskell exception

I am trying to figure out where an exception is thrown; these are my first experiences with exception handling in Haskell. I am trying to call an XML-RPC function on a remote host that is accessed via https:
ghci> import Network.XmlRpc.Client
ghci> import Network.XmlRpc.Internals
ghci> remote "https://rpc.ote.gandi.net/xmlrpc/" "domain.count" (ValueString "01234567890ABCDEF")
*** Exception: user error (https not supported)
In order to figure out if I just forgot to enable SSL support in some package or if it's something different, I would like to know which package throws the exception.
I started by following the instructions in the GHC docs, but it is not working out as expected:
ghci> :set -fbreak-on-exception
ghci> :trace remote "https://rpc.ote.gandi.net/xmlrpc/" "domain.count" (ValueString "01234567890ABCDEF")
Stopped at <exception thrown>
_exception :: e = _
ghci> :hist
Empty history. Perhaps you forgot to use :trace?
All relevant packages have been compiled with --enable-library-profiling.
How can I locate the source of the exception?

The reason you couldn't get any good information out of it is that :trace can't step into compiled library code -- any code we want to trace has to be interpreted. Whether it was compiled with profiling is irrelevant. After installing some dependencies, I did this to get more information:
% cabal unpack haxr
% cd haxr-3000.8.5
% ghci Network/XmlRpc/Client.hs -XOverlappingInstances -XTypeSynonymInstances -XFlexibleInstances -XTemplateHaskell
*Network.XmlRpc.Client> :set -fbreak-on-exception
*Network.XmlRpc.Client> :trace remote "https://rpc.ote.gandi.net/xmlrpc/" "domain.count" (ValueString "01234567890ABCDEF")
Stopped at <exception thrown>
_exception :: e = _
[<exception thrown>] *Network.XmlRpc.Client> :hist
-1 : authHdr (Network/XmlRpc/Client.hs:169:27-33)
-2 : request:parseUserInfo (Network/XmlRpc/Client.hs:161:34-40)
-3 : request:parseUserInfo (Network/XmlRpc/Client.hs:161:31-73)
-4 : request:parseUserInfo:(...) (Network/XmlRpc/Client.hs:159:55-70)
-5 : request:parseUserInfo:(...) (Network/XmlRpc/Client.hs:159:39-51)
-6 : request:parseUserInfo:(...) (Network/XmlRpc/Client.hs:159:39-70)
-7 : request:parseUserInfo (Network/XmlRpc/Client.hs:160:34-39)
-8 : request:parseUserInfo (Network/XmlRpc/Client.hs:160:31-64)
-9 : request:parseUserInfo (Network/XmlRpc/Client.hs:(160,29)-(161,74))
-10 : request:parseUserInfo (Network/XmlRpc/Client.hs:(159,5)-(161,74))
-11 : authHdr (Network/XmlRpc/Client.hs:(169,1)-(175,60))
-12 : request:headers (Network/XmlRpc/Client.hs:158:33-47)
-13 : request:headers (Network/XmlRpc/Client.hs:158:33-63)
-14 : request:headers (Network/XmlRpc/Client.hs:158:33-70)
-15 : request:headers (Network/XmlRpc/Client.hs:158:20-71)
-16 : request:headers (Network/XmlRpc/Client.hs:157:16-65)
-17 : request:headers (Network/XmlRpc/Client.hs:156:16-47)
-18 : request:headers (Network/XmlRpc/Client.hs:155:16-44)
-19 : request:headers (Network/XmlRpc/Client.hs:(155,15)-(158,71))
-20 : request (Network/XmlRpc/Client.hs:(149,28)-(152,54))
...
Hopefully that gets you started. You may find that this leads you to another library boundary -- if so, you'll need to unpack and interpret that library to go deeper. Good luck!
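As a quick sanity check before unpacking libraries, you can also confirm at the call site that this is an ordinary IOError raised via fail/userError. A minimal sketch, assuming the call is typed at IO Value as in the ghci session above:
import Control.Exception (try)
import System.IO.Error (ioeGetErrorString)
import Network.XmlRpc.Client (remote)
import Network.XmlRpc.Internals (Value(..))

main :: IO ()
main = do
  -- try narrows the exception type; ioeGetErrorString pins it to IOError
  r <- try (remote "https://rpc.ote.gandi.net/xmlrpc/" "domain.count"
                   (ValueString "01234567890ABCDEF") :: IO Value)
  case r of
    Left e  -> putStrLn ("IOError: " ++ ioeGetErrorString e)  -- "https not supported"
    Right v -> print v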

Related

Load custom package model to get model vocabulary in AllenNLP python interface

I'm trying to get the vocabulary from some publicly available pre-trained models (that aren't mine) using the Python interface of AllenNLP, via self.vocab. However, I'm running into problems trying to load the model. I want the vocabulary from the dygiepp models, and I'm using the following code:
from allennlp.models.model import Model
scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
However, I get the following error:
---------------------------------------------------------------------------
ConfigurationError Traceback (most recent call last)
/tmp/local/63381207/ipykernel_7616/3549263982.py in <module>
----> 1 scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py in from_archive(cls, archive_file, vocab)
480 from allennlp.models.archival import load_archive # here to avoid circular imports
481
--> 482 model = load_archive(archive_file).model
483 if vocab:
484 model.vocab.extend_from_vocab(vocab)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
231 # Instantiate model and dataset readers. Use a duplicate of the config, as it will get consumed.
232 dataset_reader, validation_dataset_reader = _load_dataset_readers(
--> 233 config.duplicate(), serialization_dir
234 )
235 model = _load_model(config.duplicate(), weights_path, serialization_dir, cuda_device)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in _load_dataset_readers(config, serialization_dir)
267
268 dataset_reader = DatasetReader.from_params(
--> 269 dataset_reader_params, serialization_dir=serialization_dir
270 )
271 validation_dataset_reader = DatasetReader.from_params(
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py in from_params(cls, params, constructor_to_call, constructor_to_inspect, **extras)
586 "type",
587 choices=as_registrable.list_available(),
--> 588 default_to_first_choice=default_to_first_choice,
589 )
590 subclass, constructor_name = as_registrable.resolve_class_name(choice)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/params.py in pop_choice(self, key, choices, default_to_first_choice, allow_class_names)
322 """{"model": "my_module.models.MyModel"} to have it imported automatically."""
323 )
--> 324 raise ConfigurationError(message)
325 return value
326
ConfigurationError: dygie not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'multitask_shim', 'sequence_tagging', 'sharded', 'text_classification_json']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
The error message describes how to fix the problem from the command line, but not from the Python interface. I additionally tried adding the line import dygie to my code to import the missing package, but that didn't solve the problem.
Does anyone know how to get around this?
To run this model, you'll need the code from this repo: https://github.com/dwadden/dygiepp.
In particular, you need to import the DyGIE dataset reader from here: https://github.com/dwadden/dygiepp/blob/master/dygie/data/dataset_readers/dygie.py#L29 -- importing it runs the registration decorator that makes the "dygie" dataset reader type available.
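In the Python interface, the equivalent of the --include-package flag is to import the package so its registration decorators run. A minimal sketch, assuming the dygiepp repo root is on your PYTHONPATH and an AllenNLP version that provides import_module_and_submodules (older releases call it import_submodules):
from allennlp.common.util import import_module_and_submodules
from allennlp.models.model import Model

# Recursively import dygie and its submodules so the
# @DatasetReader.register(...) / @Model.register(...) decorators run.
import_module_and_submodules("dygie")

scierc_model = Model.from_archive(
    "https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz"
)
print(scierc_model.vocab)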

Stanford NER Tagger and NLTK not working [OSError: Java command failed]

Trying to run the Stanford NER Tagger and NLTK from a Jupyter notebook.
I am continuously getting
OSError: Java command failed
I have already tried the hack at
https://gist.github.com/alvations/e1df0ba227e542955a8a
and the thread
Stanford Parser and NLTK
I am using
NLTK==3.3
Ubuntu==16.04LTS
Here is my Python code:
from nltk import sent_tokenize, word_tokenize
from nltk.tag import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"
sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]
I got these files using following commands:
wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip
unzip stanford-parser-full-2015-04-20.zip
unzip stanford-postagger-full-2015-04-20.zip
I am getting the following error:
CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
-loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
... 2 more
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
79 def tag(self, tokens):
80 # This function should return list of tuple rather than list of list
---> 81 return sum(self.tag_sents([tokens]), [])
82
83 def tag_sents(self, sentences):
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
102 # Run the tagger and get the output
103 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104 stdout=PIPE, stderr=PIPE)
105 stanpos_output = stanpos_output.decode(encoding)
106
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
134 if p.returncode != 0:
135 print(_decode_stdoutdata(stderr))
--> 136 raise OSError('Java command failed : ' + str(cmd))
137
138 return (stdout, stderr)
OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']
Download Stanford Named Entity Recognizer version 3.9.1 (see the 'Download' section of the Stanford NLP website).
Unzip it and move the two files stanford-ner.jar and english.all.3class.distsim.crf.ser.gz into your working folder. The jar and the serialized model must come from the same release: a version mismatch shows up as exactly the kind of ClassCastException during deserialization seen above.
Open a Jupyter notebook or IPython prompt in that folder and run the following Python code:
import nltk
from nltk.tag.stanford import StanfordNERTagger

sentence = u"Twenty miles east of Reno, Nev., " \
           "where packs of wild mustangs roam free through " \
           "the parched landscape, Tesla Gigafactory 1 " \
           "sprawls near Interstate 80."

jar = './stanford-ner.jar'
model = './english.all.3class.distsim.crf.ser.gz'

ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')

words = nltk.word_tokenize(sentence)

# Run NER tagger on words
print(ner_tagger.tag(words))
I tested this with NLTK==3.3 on Ubuntu==16.04LTS.
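If the jar and the model match but Java itself is the problem (wrong binary on the PATH, or too little heap for the classifier), NLTK also lets you configure the Java command it spawns. A small sketch; the binary path and heap size here are only examples to adapt:
from nltk.internals import config_java

# Hypothetical values: point NLTK at a specific JVM and raise the heap limit.
config_java(bin='/usr/bin/java', options='-mx2g')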

Save data in octave

In Octave, I have
x = -13:0.1:13
Then I save it as:
save file.dat x
and when I open file.dat I get:
-13
-12.9
-12.8
...
-9.9
-9.800000000000001
-9.699999999999999
-9.6
-9.5
-9.4
-9.300000000000001
-9.199999999999999
-9.1
-9
-8.899999999999999
-8.800000000000001
-8.699999999999999
...
But I would like it to save -8.7, not -8.699999999999999, and -8.8, not -8.800000000000001...
The built-in save writes doubles at full precision, which is where the long decimals come from. One way to fix the problem is to write the file yourself and specify a format with single-digit precision.
For example:
fprintf(fid, "%.1f\n", x(i));
writes x(i) to fid with one digit after the decimal point, one value per line.
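For the vector in the question you don't even need a loop, because fprintf recycles the format string over all elements of x. A minimal end-to-end sketch, writing one value per line as save did:
x = -13:0.1:13;
fid = fopen("file.dat", "w");  % open the file for writing
fprintf(fid, "%.1f\n", x);     % the format is reused for every element of x
fclose(fid);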

POST JSON data using drakma:http-request

I am trying to POST some JSON data to a web service using drakma.
(ql:quickload :st-json)
(ql:quickload :cl-json)
(ql:quickload :drakma)

(defvar *rc* (merge-pathnames (user-homedir-pathname) ".apirc"))

(defvar *user*
  (with-open-file (s *rc*)
    (st-json:read-json s)))

(defvar api-url (st-json:getjso "url" *user*))
(defvar api-key (st-json:getjso "key" *user*))
(defvar api-email (st-json:getjso "email" *user*))

(setf drakma:*header-stream* *standard-output*)

(defvar *req* '(("date" . "20071001") ("time" . "00") ("origin" . "all")))

(format t "json:~S~%" (json:encode-json-to-string *req*))
(defun retrieve (api request)
  (let* ((cookie-jar (make-instance 'drakma:cookie-jar))
         (extra-headers (list (cons "From" api-email)
                              (cons "X-API-KEY" api-key)))
         (url (concatenate 'string api-url api "/requests"))
         (stream (drakma:http-request url
                                      :additional-headers extra-headers
                                      :accept "application/json"
                                      :method :post
                                      :content-type "application/json"
                                      :external-format-out :utf-8
                                      :external-format-in :utf-8
                                      :redirect 100
                                      :cookie-jar cookie-jar
                                      :content (json:encode-json-to-string request)
                                      :want-stream t)))
    (st-json:read-json stream)))

(retrieve "/datasets/tigge" *req*)
Unfortunately, I get an error, although the data seems to be encoded to JSON correctly, and the headers generated by Drakma look fine too, I think. Apparently something is wrong with the :content (the list of integers in the error message is just the list of ASCII codes of the JSON-encoded data).
json:"{\"date\":\"20071001\",\"time\":\"00\",\"origin\":\"all\",\"type\":\"pf\",\"param\":\"tp\",\"area\":\"70\\/-130\\/30\\/-60\",\"grid\":\"2\\/2\",\"target\":\"data.grib\"}"
POST /v1/datasets/tigge/requests HTTP/1.1
Host: api.service.int
User-Agent: Drakma/1.3.0 (SBCL 1.1.5; Darwin; 12.2.0; http://weitz.de/drakma/)
Accept: application/json
Connection: close
From: me@gmail.com
X-API-KEY: 19a0edb6d8d8dda1e6a3b21223e4f86a
Content-Type: application/json
Content-Length: 193
debugger invoked on a SIMPLE-TYPE-ERROR:
The value of CL+SSL::THING is #(123 34 100 97 116 97 115 101 116 34 58 34
...), which is not of type (SIMPLE-ARRAY
(UNSIGNED-BYTE 8)
(*)).
Any idea what's wrong with this code? Many thanks in advance.
Thanks to Kevin and Hans from drakma-devel, the general-interest mailing list for Drakma and Chunga, for helping me out -- it turned out that the problem was caused by a bug in a recent version of cl+ssl, already fixed in a development branch. I use Quicklisp, and here is what Hans Hübner advised me to do to update my cl+ssl installation, which worked:
You can check out the latest cl+ssl, which contains a fix for the problem:
cd ~/quicklisp/local-projects/
git clone git://gitorious.org/cl-plus-ssl/cl-plus-ssl.git
Quicklisp will automatically find cl+ssl in that location. Remember to remove the checkout once you have upgraded to a newer Quicklisp release that includes the fix.
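If you cannot upgrade cl+ssl right away, a workaround that matches the type error is to hand Drakma the content as a simple octet vector yourself. This is only a sketch, assuming flexi-streams (which Drakma depends on) is loaded:
;; Encode the JSON string to a SIMPLE-ARRAY of (UNSIGNED-BYTE 8) so the
;; buggy cl+ssl type check is satisfied, then pass it as :content.
(defun request-octets (request)
  (coerce (flexi-streams:string-to-octets
           (json:encode-json-to-string request)
           :external-format :utf-8)
          '(simple-array (unsigned-byte 8) (*))))

;; In RETRIEVE, replace
;;   :content (json:encode-json-to-string request)
;; with
;;   :content (request-octets request)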

How to execute a Scala Swing application?

I am new to Scala and trying to execute a Swing application.
I am using Scala 2.8.
I have compiled the program successfully, but while executing it I get an error like "no such file".
Can anyone please help me out?
I am providing the code I am trying to execute.
Gui.scala
import swing._

object Gui extends SimpleSwingApplication

{
  def top = new MainFrame {
    title = "swing"
    val b1 = new Button {
      text = "ok"
    }
  }
}
scalac Gui.scala
compiles successfully and creates the class files, but when I try
scala Gui
it just replies
No such File
Setup:
D:\src\scala_ex\ex1>dir
Volume in drive D is Data
Volume Serial Number is 5C88-8D6C
Directory of D:\src\scala_ex\ex1
01.12.2010 09:25 <DIR> .
01.12.2010 09:25 <DIR> ..
01.12.2010 09:24 173 gui.scala
1 File(s) 173 bytes
2 Dir(s) 24 575 205 376 bytes free
D:\src\scala_ex\ex1>more gui.scala
import swing._
object Gui extends SimpleSwingApplication {
def top = new MainFrame {
title = "swing"
val b1 = new Button{
text = "ok"
}
}
}
D:\src\scala_ex\ex1>scalac -version
Scala compiler version 2.8.1.final -- Copyright 2002-2010, LAMP/EPFL
Compile:
D:\src\scala_ex\ex1>scalac gui.scala
D:\src\scala_ex\ex1>dir
Volume in drive D is Data
Volume Serial Number is 5C88-8D6C
Directory of D:\src\scala_ex\ex1
01.12.2010 09:26 <DIR> .
01.12.2010 09:26 <DIR> ..
01.12.2010 09:26 485 Gui$$anon$1$$anon$2.class
01.12.2010 09:26 557 Gui$$anon$1.class
01.12.2010 09:26 558 Gui$.class
01.12.2010 09:26 1 467 Gui.class
01.12.2010 09:24 173 gui.scala
5 File(s) 3 240 bytes
2 Dir(s) 24 575 201 280 bytes free
Execute:
D:\src\scala_ex\ex1>scala -cp . Gui
And the application starts.
This is not a direct cut&paste from the code in the question: there, the blank line between object Gui and the opening { causes a compilation error.
Now, if you fix that error and compile this with Scala 2.8, you should get these classes in the local directory:
Gui$$anon$1$$anon$2.class
Gui$$anon$1.class
Gui$.class
Gui.class
If you don't, then either the compilation did not work, or there's something else missing. For example, if you declared a package X at the top (and removed it from the example), then Gui won't be in the local directory, but under a subdirectory X, and you should invoke it by typing scala X.Gui.
Another possibility is that you have some Java environment variable pointing the output directory to someplace else.
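To illustrate the package case: with a package declaration (the name ex1 below is made up), the class files land in a subdirectory and the object must be invoked by its qualified name:
// gui.scala -- same program, but inside a (hypothetical) package
package ex1

import swing._

object Gui extends SimpleSwingApplication {
  def top = new MainFrame {
    title = "swing"
    contents = new Button { text = "ok" }
  }
}

// D:\src\scala_ex\ex1>scalac gui.scala      (classes end up under .\ex1\)
// D:\src\scala_ex\ex1>scala -cp . ex1.Gui   (qualified name, not plain Gui)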