I'm automating with Robot Framework, but I'm getting this error:
No keyword with name 'Fetch From Left' found.
I don't understand the cause; I'm new to this tool.
Here is my code:
${FILE_RUTS_INFORMACION_PERSONAL} archivo2.csv
I'm reading the CSV file (it contains three DNIs) with this keyword:
Carga RUTs
[Documentation] Loads the list of RUTs to validate from a csv file
[Arguments] ${file_name}
${data}= read csv file ${file_name}
[return] ${data}
Here is the keyword that produces the error:
Consultar WS
[Documentation] Reads RUTs from a csv file and queries each one against the original and migrated Experto service.
[Arguments] ${data}
${rutsd}= Set Variable 0
${dv} = Set Variable 0
Log To Console .
:FOR ${element} IN ${data}
\ ${rutsd}= Fetch From Left #{element}[0] ;
\ ${dvymas}= Get Substring #{element}[0] -1
\ ${dv}= Fetch From Left ${dvymas} ;
\ Log To Console Querying RUT ${rutsd}...
\ Run Keyword And Continue On Failure WS Experto Orignal ${rutsd} ${dv}
ERROR:
FOR ${element} IN [ ${data} ]
Start / End / Elapsed: 20170823 10:00:11.149 / 20170823 10:00:11.151 / 00:00:00.002
VAR ${element} = [['169233xxx;'], ['169129xxx;'], ['189925xxx;']]
Start / End / Elapsed: 20170823 10:00:11.149 / 20170823 10:00:11.151 / 00:00:00.002
KEYWORD ${rutsd} = Fetch From Left #{element}[0], ;
Start / End / Elapsed: 20170823 10:00:11.151 / 20170823 10:00:11.151 / 00:00:00.000
10:00:11.151 FAIL No keyword with name 'Fetch From Left' found.
You need to import the String library; only the BuiltIn library is available without an explicit import, and looking at the code you posted, you aren't importing it. See the user guide for more information.
Adding this to your settings section should do the trick:
*** Settings ***
Library String
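With that import in place, a minimal sketch of the whole file might look like the following. This is only illustrative: the CSVLibrary name is an assumption standing in for whichever library actually provides your 'read csv file' keyword.

*** Settings ***
Library    String    # provides Fetch From Left and Get Substring
Library    CSVLibrary    # assumption: whichever library provides 'read csv file'

*** Variables ***
${FILE_RUTS_INFORMACION_PERSONAL}    archivo2.csv

*** Test Cases ***
Valida RUTs
    ${data}=    Carga RUTs    ${FILE_RUTS_INFORMACION_PERSONAL}
    Consultar WS    ${data}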
How can I import a CSV file with a special character in the column names? When I open the CSV in Visual Studio Code, two column names contain the character �. Is there a way to import my file with PROC IMPORT (not a DATA step)?
SAS version 9.04
I tried this:
filename test2 "my_file.csv" encoding="utf-8" ;
proc import datafile= test2
out=my_df
dbms=dlm
replace;
delimiter=";";
getnames=yes;
GUESSINGROWS=MAX;
run;
It raises these errors:
ERROR: Invalid string.
ERROR: Invalid string.
ERROR: Invalid string.
Number of names found is less than the number of variables found.
The name is not a valid SAS name.
Problems were detected in the supplied names. See the log.
....
ERROR: Invalid string.
FATAL: Unrecoverable I/O error detected in the execution of the DATA step program. Aborted during
the EXECUTION phase.
encoding="Windows-1250" does the trick:
filename test2 "my_file.csv" encoding="Windows-1250" ;
proc import datafile= test2
out=my_df
dbms=dlm
replace;
delimiter=";";
getnames=yes;
GUESSINGROWS=MAX;
run;
Tip: to choose the right encoding: File > Import data > Choose your file ... > in the window that opens, click "Codage ..." (encoding). In my case the encoding "Europe de l'Ouest (Windows) (Page de code 1252)" imported "é" correctly; check the preview to confirm.
For the exact spelling of encoding names such as "1250", refer to: https://support.sas.com/documentation/onlinedoc/dfdmstudio/2.6/dmpdmsug/Content/dfU_Encodings.html
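If you are not sure which encoding your SAS session itself runs under (which affects how names are validated), you can print it to the log first. This is plain PROC OPTIONS syntax, nothing specific to this file:

/* Write the current session encoding to the SAS log */
proc options option=encoding;
run;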
I am trying to make an animation from a graph and save the result as a GIF or video.
The graph and the animation work well in MATLAB, but I cannot save the animation as a GIF.
imread gives this error:
Magick: "gswin32c" "-q" "-dBATCH" "-dSAFER" "-dMaxBitmap=50000000" "-
dNOPAUSE" "-sDEVICE=ppmraw" "-dTextAlphaBits=4" "-dG
raphicsAlphaBits=4" "-r72x72" "-dFirstPage=1" "-dLastPage=1" "-
sOutputFile=C:\Users ...
I tried the solution from here, but that error still pops up.
This is my code:
clear all;
clc;

for m = 1:10
  n = m.*100;
  x = linspace(0,1500,1500);
  x2 = linspace(n,1000+n,500);
  y2 = [((20e-6.*n + 0.008).*(sin(pi/1000.*(x2-n))).^2)];
  y1 = zeros(1,n);
  y3 = zeros(1,1000-n);
  y7 = [y1 y2 y3];
  y8 = y7 + 0;
  y1 = (sin(pi/150*x)).^2;
  y3 = exp((x/400)-5);
  y5 = (y1.*y3)+y8;
  plot(x,y5);
  drawnow;
  #pause(0.01);
endfor

im = imread ("animation.pdf", "Index", "all");
imwrite (im, "animation.gif", "DelayTime", .5);
EDIT:
This is the complete error. Directory name redacted for privacy.
Magick: "gswin32c" "-q" "-dBATCH" "-dSAFER" "-dMaxBitmap=50000000" "-dNOPAUSE" "-sDEVICE=ppmraw"
"-dTextAlphaBits=4" "-dGraphicsAlphaBits=4" "-r72x72" "-dFirstPage=1" "-dLastPage=1" "-sOutputFil
e=C:\Users\XXXXXXXXXXXXXXXXXXXXXXXXXX" "--" "C:\Users\XXXXXXXXXXXXXXXXXXXXXXXXXX" "-c
" "quit" [No such file or directory].
error: Magick++ exception: Magick: Postscript delegate failed (F:\XXXXXXXXXXXXXXXXXXXXXXXXXX\animation.pdf) reported by coders/pdf.c:434 (ReadPDFImage)
error: called from
__imread__ at line 78 column 10
imageIO at line 118 column 26
imread at line 106 column 30
test_animation2 at line 6 column 4
I had the same problem.
For me, the solution was to install or reinstall the Ghostscript program.
This was the error message I received:
Magick: "gswin32c" "-q" "-dBATCH" "-dSAFER" "-dMaxBitmap=50000000" "-dNOPAUSE" "-sDEVICE=ppmraw"
"-dTextAlphaBits=4" "-dGraphicsAlphaBits=4" "-r72x72" "-dFirstPage=1" "-dLastPage=1" "-sOutputFil
e=C:\Users\xxxxxx\AppData\Local\Temp\gmncleNd" "--" "C:\Users\xxxxxx\AppData\Local\Temp\gmEh1Xv4" "
-c" "quit" [No such file or directory].
error: Magick++ exception: Magick: Postscript delegate failed (C:\Users\xxxxxx\Desktop\TexML2.0\IN
entrada\in.pdf) reported by coders/pdf.c:434 (ReadPDFImage)
error: called from
_imread_ at line 80 column 10
imageIO at line 118 column 28
imread at line 106 column 33
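If reinstalling does not help, it is also worth checking that the Ghostscript binary named in the error message can actually be found from Octave. A small sketch of such a check (the executable name gswin32c is taken from the message above; 64-bit installs ship gswin64c instead):

% Ask the system shell whether Ghostscript is on the PATH;
% a nonzero status means the executable was not found.
[status, output] = system ("gswin32c --version");
if (status != 0)
  disp ("Ghostscript (gswin32c) was not found on the PATH");
else
  printf ("Found Ghostscript version: %s", output);
endif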
Trying to run the Stanford NER Tagger and NLTK from a Jupyter notebook.
I am continuously getting
OSError: Java command failed
I have already tried the hack at
https://gist.github.com/alvations/e1df0ba227e542955a8a
and the thread
Stanford Parser and NLTK
I am using
NLTK==3.3
Ubuntu==16.04LTS
Here is my Python code:
from nltk import sent_tokenize, word_tokenize
from nltk.tag import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"
sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]
I got these files using the following commands:
wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip
unzip stanford-parser-full-2015-04-20.zip
unzip stanford-postagger-full-2015-04-20.zip
I am getting the following error:
CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
-loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
... 2 more
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
79 def tag(self, tokens):
80 # This function should return list of tuple rather than list of list
---> 81 return sum(self.tag_sents([tokens]), [])
82
83 def tag_sents(self, sentences):
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
102 # Run the tagger and get the output
103 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104 stdout=PIPE, stderr=PIPE)
105 stanpos_output = stanpos_output.decode(encoding)
106
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
134 if p.returncode != 0:
135 print(_decode_stdoutdata(stderr))
--> 136 raise OSError('Java command failed : ' + str(cmd))
137
138 return (stdout, stderr)
OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']
Download Stanford Named Entity Recognizer version 3.9.1: see the 'Download' section of the Stanford NLP website.
Unzip it and move the two files "stanford-ner.jar" and "english.all.3class.distsim.crf.ser.gz" to your working folder.
Open a Jupyter notebook or an IPython prompt in that folder and run the following Python code:
import nltk
from nltk.tag.stanford import StanfordNERTagger
sentence = u"Twenty miles east of Reno, Nev., " \
"where packs of wild mustangs roam free through " \
"the parched landscape, Tesla Gigafactory 1 " \
"sprawls near Interstate 80."
jar = './stanford-ner.jar'
model = './english.all.3class.distsim.crf.ser.gz'
ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')
words = nltk.word_tokenize(sentence)
# Run NER tagger on words
print(ner_tagger.tag(words))
I tested this on NLTK==3.3 and Ubuntu==16.04 LTS.
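The result of tag() is a plain Python list of (token, tag) tuples, so it is easy to post-process. A minimal sketch (the variable name entities is mine; 'O' is the tag the model assigns to tokens outside any entity):

# Keep only the tokens the model recognised as named entities
entities = [(token, tag) for token, tag in ner_tagger.tag(words) if tag != 'O']
print(entities)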
Hi, I'm reading a CSV file (to get information from a web service) with these values:
169233xxx
169129xxx
189925xxx
This is my keyword definition:
Carga RUTs
[Documentation] Loads the list of RUTs to validate from a csv file
[Arguments] ${file_name}
${data}= read csv file ${file_name}
[return] ${data}
I consume these values in a FOR loop:
Consultar WS
[Documentation] Reads RUTs from a csv file and queries each one against the original and migrated Experto service.
[Arguments] ${data}
${rutsd}= Set Variable 0
Log To Console .
:FOR ${element} IN #{data}
\ ${rutsd}= Fetch From Left ${element}[0] ;
\ ${dvymas}= Get Substring ${element}[0] -2
\ Log To Console Querying RUT ${rutsd}...
\ Run Keyword And Continue On Failure WS Experto Orignal ${rutsd}
When the service is queried, a "[" or a "$apos" appears for each value, and I don't understand why:
<ns1:rut>['189925xxx </ns1:rut>
<ns1:rut>[$apos 189925xxx </ns1:rut>
Finally, the response is:
Server raised fault: '1001: RUT no Válido' (translation: invalid DNI)
Can someone explain to me why I am getting this error?
Thanks,
Regards!!
EDIT:
To avoid creating a new post: I modified
${element}[0] to #{element}[0]
and the previous error was solved.
Now I have a new error:
List variable '#{element}' has no item in index 0
Can someone explain why I am getting this error?
Thanks again
I am working on a LuaLaTeX file which takes data from my database using LuaSQL. These are the 003-v1.tex and 003-v1.lua files that I came up with:
003-v1.tex file:
\documentclass{article}
\usepackage{luacode}
% Lua code goes into a separate file for the sake of syntax highlighting
\directlua{dofile('003-v1.lua')}
\newcommand{\stranke}{\luadirect{stranke()}}
\begin{document}
\begin{tabular}{ll}
\hline
id stranke & ime \\
\hline
\stranke
\hline
\end{tabular}
\end{document}
003-v1.lua file:
function stranke ()
  package.cpath = package.cpath .. ";/usr/lib/i386-linux-gnu/lua/5.1/?.so"
  luasql = require "luasql.mysql"
  env = assert (luasql.mysql())
  con = assert (env:connect("linux_krozki","root","mypassword"))
  cur = assert (con:execute("SELECT * FROM stranke"))
  vnos = cur:fetch ({}, "a")
  while vnos do
    print(string.format([[%s & %s \\]], vnos.id_stranke, vnos.ime))
    vnos = cur:fetch (vnos, "a")
  end
end
These files ought to work, but when I try to compile with lualatex 003-v1.tex I get this error:
This is LuaTeX, Version 1.0.4 (TeX Live 2017/Arch Linux)
restricted system commands enabled.
(./003-v1.tex
LaTeX2e <2017-04-15>
(using write cache: /home/ziga/.texlive/texmf-var/luatex-cache/generic)(using read cache: /var/lib/texmf/luatex-cache/generic /home/ziga/.texlive/texmf-var/luatex-cache/generic)
luaotfload | main : initialization completed in 0.144 seconds
Babel <3.12> and hyphenation patterns for 1 language(s) loaded.
(/usr/share/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texmf-dist/tex/latex/base/size10.clo(load luc: /home/ziga/.texlive/texmf-var/luatex-cache/generic/fonts/otl/lmroman10-regular.luc)))
(/usr/share/texmf-dist/tex/lualatex/luacode/luacode.sty
(/usr/share/texmf-dist/tex/generic/oberdiek/ifluatex.sty)
(/usr/share/texmf-dist/tex/luatex/luatexbase/luatexbase.sty
(/usr/share/texmf-dist/tex/luatex/ctablestack/ctablestack.sty))) (./003-v1.aux)
003-v1.lua:8: module 'luasql.mysql' not found:
no field package.preload['luasql.mysql']
[kpse lua searcher] file not found: 'luasql.mysql'
[kpse C searcher] file not found: 'luasql.mysql'
no file '/usr/local/lib/lua/5.2/luasql.so'
no file '/usr/local/lib/lua/5.2/loadall.so'
no file './luasql.so'
no file '/usr/lib/i386-linux-gnu/lua/5.1/luasql.so'
stack traceback:
[C]: in function 'require'
003-v1.lua:8: in function 'stranke'
[\directlua]:1: in main chunk.
\luadirect ... { \luacode#maybe#printdbg {#1} #1 }
l.14 \stranke
According to this topic, this error arises because LuaLaTeX can't load the module luasql.mysql, while Lua on its own can. How do I know this? If I comment out the first line (function stranke ()) and the last line (end) of 003-v1.lua and run lua 003-v1.lua, I get output which is completely fine:
1 & Žiga \\
2 & Ranja \\
3 & Romana \\
So my question is: how do I make sure that the module luasql.mysql loads when LuaLaTeX is called? I am on Arch Linux and am using TeX Live. I have heard that people recompile TeX Live with support for LuaSQL, but I can't find a step-by-step guide... That would be awesome! It would be even better if someone has already compiled it.
Here is the info about my TeX Live version:
[ziga#laptop ~]$ pacman -Qs tex | grep live
local/texlive-bibtexextra 2017.44915-1 (texlive-most)
local/texlive-bin 2017.44590-2
local/texlive-core 2017.44918-1 (texlive-most)
local/texlive-fontsextra 2017.44818-1 (texlive-most)
local/texlive-formatsextra 2017.44177-2 (texlive-most)
local/texlive-games 2017.44131-1 (texlive-most)
local/texlive-humanities 2017.44833-1 (texlive-most)
local/texlive-langchinese 2017.44333-1 (texlive-lang)
local/texlive-langcyrillic 2017.44895-1 (texlive-lang)
local/texlive-langextra 2017.44908-1 (texlive-lang)
local/texlive-langgreek 2017.44917-1 (texlive-lang)
local/texlive-langjapanese 2017.44914-1 (texlive-lang)
local/texlive-langkorean 2017.44467-1 (texlive-lang)
local/texlive-latexextra 2017.44907-1 (texlive-most)
local/texlive-music 2017.44885-1 (texlive-most)
local/texlive-pictures 2017.44899-1 (texlive-most)
local/texlive-pstricks 2017.44742-1 (texlive-most)
local/texlive-publishers 2017.44916-1 (texlive-most)
local/texlive-science 2017.44906-1 (texlive-most)
We found an answer on the Arch Linux forums after this thread was posted. It looks like there are some internal problems on the Lua side: package.cpath wasn't able to recognize the question mark, so ?.so had to be changed to mysql.so. Can anyone explain why this happened?
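For reference, a sketch of the one-line change that made it work (the directory is the one from the question; adjust the path to wherever your LuaSQL mysql.so actually lives):

-- Before: the '?' placeholder was never expanded when running under LuaLaTeX
-- package.cpath = package.cpath .. ";/usr/lib/i386-linux-gnu/lua/5.1/?.so"
-- After: append the driver's path directly so require can load it
package.cpath = package.cpath .. ";/usr/lib/i386-linux-gnu/lua/5.1/mysql.so"
luasql = require "luasql.mysql"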