Itcl: inconsistency of $this variable

While migrating my project from Tcl 8.5.9/Itcl 3.4 to Tcl 8.6.6/Itcl 4.0.5 I ran into an inconsistency in the $this variable depending on how it is accessed. Here is the minimized test case:
puts "Tcl version : $tcl_patchLevel"
puts "Itcl version : [package require Itcl]"
itcl::class Base {
    public {
        method base_process {script} {
            uplevel $m_main main_process [list $script]
        }
        method set_main {main} {
            set m_main $main
        }
    }
    protected {
        variable m_main
    }
}
itcl::class Main {
    inherit Base
    public {
        method main_process {script} {
            uplevel $script
        }
    }
}
itcl::class Worker {
    inherit Base
    public {
        method worker_process_direct {} {
            puts "Direct query: this = $this"
        }
        method worker_process_indirect {} {
            base_process {puts "Indirect query: this = $this"}
        }
        method worker_process_both {} {
            puts "Direct query: this = $this"
            base_process {puts "Indirect query: this = $this"}
        }
    }
}
Main main
Worker worker
worker set_main main
puts "\n==== worker_process_direct ===="
worker worker_process_direct
puts "\n==== worker_process_indirect ===="
worker worker_process_indirect
puts "\n==== worker_process_both ===="
worker worker_process_both
The worker_process_direct and worker_process_both methods always produce correct results. But worker_process_indirect works correctly only with the old Tcl/Itcl version: with Tcl 8.6.6/Itcl 4.0.5 the $this variable unexpectedly resolves to the Main instance instead of the Worker instance.
Here is the output of the script above for two versions of Tcl/Itcl.
Tcl version : 8.5.9
Itcl version : 3.4
==== worker_process_direct ====
Direct query: this = ::worker
==== worker_process_indirect ====
Indirect query: this = ::worker <<<<<<<<<<<< CORRECT
==== worker_process_both ====
Direct query: this = ::worker
Indirect query: this = ::worker
Tcl version : 8.6.6
Itcl version : 4.0.5
==== worker_process_direct ====
Direct query: this = ::worker
==== worker_process_indirect ====
Indirect query: this = ::main <<<<<<<<<< INCORRECT
==== worker_process_both ====
Direct query: this = ::worker
Indirect query: this = ::worker
Did I miss something? Were there significant changes in Tcl/Itcl that I haven't noticed?

Now that is very curious! I augmented your script to define Main like this:
itcl::class Main {
    inherit Base
    public {
        method main_process {script} {
            uplevel $script
            # Print what is actually going on!
            puts >>[tcl::unsupported::disassemble script $script]<<
        }
    }
}
With 8.5/3.4 I get this output:
Tcl version : 8.5.9
Itcl version : 3.4
==== worker_process_direct ====
Direct query: this = ::worker
==== worker_process_indirect ====
Indirect query: this = ::worker
>>ByteCode 0x0x7fedea044c10, refCt 1, epoch 3, interp 0x0x7fedea033010 (epoch 3)
Source "puts \"Indirect query: this = $this\""
Cmds 1, src 35, inst 12, litObjs 3, aux 0, stkDepth 3, code/src 0.00
Commands 1:
1: pc 0-10, src 0-34
Command 1: "puts \"Indirect query: this = $this\""
(0) push1 0 # "puts"
(2) push1 1 # "Indirect query: this = "
(4) push1 2 # "this"
(6) loadScalarStk
(7) concat1 2
(9) invokeStk1 2
(11) done
<<
==== worker_process_both ====
Direct query: this = ::worker
Indirect query: this = ::worker
>>ByteCode 0x0x7fedea044c10, refCt 1, epoch 3, interp 0x0x7fedea033010 (epoch 3)
Source "puts \"Indirect query: this = $this\""
Cmds 1, src 35, inst 12, litObjs 3, aux 0, stkDepth 3, code/src 0.00
Commands 1:
1: pc 0-10, src 0-34
Command 1: "puts \"Indirect query: this = $this\""
(0) push1 0 # "puts"
(2) push1 1 # "Indirect query: this = "
(4) push1 2 # "this"
(6) loadScalarStk
(7) concat1 2
(9) invokeStk1 2
(11) done
<<
With 8.6/4.0 I get this instead:
Tcl version : 8.6.3
Itcl version : 4.0.2
==== worker_process_direct ====
Direct query: this = ::worker
==== worker_process_indirect ====
Indirect query: this = ::main
>>ByteCode 0x0x1009af010, refCt 1, epoch 136, interp 0x0x100829a10 (epoch 136)
Source "puts \"Indirect query: this = $this"...
Cmds 1, src 35, inst 12, litObjs 3, aux 0, stkDepth 3, code/src 0.00
Commands 1:
1: pc 0-10, src 0-34
Command 1: "puts \"Indirect query: this = $this"...
(0) push1 0 # "puts"
(2) push1 1 # "Indirect query: this = "
(4) push1 2 # "this"
(6) loadStk
(7) strcat 2
(9) invokeStk1 2
(11) done
<<
==== worker_process_both ====
Direct query: this = ::worker
Indirect query: this = ::worker
>>ByteCode 0x0x1009b0210, refCt 1, epoch 136, interp 0x0x100829a10 (epoch 136)
Source "puts \"Indirect query: this = $this"...
Cmds 1, src 35, inst 11, litObjs 2, aux 0, stkDepth 3, code/src 0.00
Commands 1:
1: pc 0-9, src 0-34
Command 1: "puts \"Indirect query: this = $this"...
(0) push1 0 # "puts"
(2) push1 1 # "Indirect query: this = "
(4) loadScalar1 %v0
(6) strcat 2
(8) invokeStk1 2
(10) done
<<
So, 8.5 uses the loadScalarStk instruction to read the variable in both (indirect) cases, whereas 8.6 uses loadStk and loadScalar1 to load the variable in the two cases. Which is mighty strange; I wouldn't expect loadScalar1 to appear in a script fragment (it needs a Local Variable Table) but at least it is picking up the expected value, whereas loadStk is just picking up the wrong value entirely. I've also tried using exactly the same value in the two places — with the script kept in a shared variable — but that produces the same output; it looks like in one place it is evaluating but picking up the wrong value (perhaps a variable resolver issue?) and in the other it is picking up the right value but for the wrong reasons (as the LVT shouldn't be used in a script fragment; that's for full procedures/methods only). Either way, it's Bad News.
Please file a bug report at http://core.tcl-lang.org/tcl/tktnew as this smells like several sorts of wrong behaviour compounded.
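In the meantime, one workaround worth trying (my sketch, not part of the answer above): substitute $this in the worker's own scope before handing the script over, so the fragment that travels through uplevel no longer contains a variable for the resolver to mishandle. In Worker that would mean changing the indirect method to something like:
method worker_process_indirect {} {
    # [list ...] expands $this here, in the Worker method's own scope, so the script
    # that reaches main_process carries the literal object name instead of a variable
    base_process [list puts "Indirect query: this = $this"]
}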

Why are the parsed dicts equal while the pickled dicts are not?

I'm working on an aggregated config-file parsing tool, hoping it can support .json, .yaml and .toml files. So I have done the following tests:
The example.json config file is:
{
    "DEFAULT":
    {
        "ServerAliveInterval": 45,
        "Compression": true,
        "CompressionLevel": 9,
        "ForwardX11": true
    },
    "bitbucket.org":
    {
        "User": "hg"
    },
    "topsecret.server.com":
    {
        "Port": 50022,
        "ForwardX11": false
    },
    "special":
    {
        "path": "C:\\Users",
        "escaped1": "\n\t",
        "escaped2": "\\n\\t"
    }
}
The example.yaml config file is:
DEFAULT:
  ServerAliveInterval: 45
  Compression: yes
  CompressionLevel: 9
  ForwardX11: yes
bitbucket.org:
  User: hg
topsecret.server.com:
  Port: 50022
  ForwardX11: no
special:
  path: C:\Users
  escaped1: "\n\t"
  escaped2: \n\t
and the example.toml config file is:
[DEFAULT]
ServerAliveInterval = 45
Compression = true
CompressionLevel = 9
ForwardX11 = true
['bitbucket.org']
User = 'hg'
['topsecret.server.com']
Port = 50022
ForwardX11 = false
[special]
path = 'C:\Users'
escaped1 = "\n\t"
escaped2 = '\n\t'
Then the test code, with its output, is:
import pickle, json, yaml
# TOML, see https://github.com/hukkin/tomli
try:
    import tomllib
except ModuleNotFoundError:
    import tomli as tomllib

path = "example.json"
with open(path) as file:
    config1 = json.load(file)
assert isinstance(config1, dict)
pickled1 = pickle.dumps(config1)

path = "example.yaml"
with open(path, 'r', encoding='utf-8') as file:
    config2 = yaml.safe_load(file)
assert isinstance(config2, dict)
pickled2 = pickle.dumps(config2)

path = "example.toml"
with open(path, 'rb') as file:
    config3 = tomllib.load(file)
assert isinstance(config3, dict)
pickled3 = pickle.dumps(config3)
print(config1==config2) # True
print(config2==config3) # True
print(pickled1==pickled2) # False
print(pickled2==pickled3) # True
So my question is: since the parsed objects are all dicts, and these dicts are equal to each other, why are their pickled forms not the same? That is, why is the pickled code of the dict parsed from JSON different from the other two?
Thanks in advance.
The difference is due to:
1. The json module memoizing object keys with the same value (it's not interning them, but the scanner object contains a memo dict that it uses to dedupe identical key strings within a single parsing run), while yaml does not (it just makes a new str each time it sees the same data), and
2. pickle faithfully reproducing the exact structure of the data it's told to dump, replacing subsequent references to the same object with a back-reference to the first time it was seen (among other reasons, this makes it possible to dump recursive data structures, e.g. lst = []; lst.append(lst), without infinite recursion, and reproduce them faithfully when unpickled).
Issue #1 isn't visible in equality testing (strs compare equal with the same data, not just the same exact object in memory). But when pickle sees "ForwardX11" the first time, it inserts the pickled form of the object and emits a pickle opcode that assigns a number to that object. If that exact object is seen again (same memory address, not merely same value), instead of reserializing it, it just emits a simpler opcode that just says "Go find the object associated with the number from last time and put it here as well". If it's a different object though, even one with the same value, it's new, and gets serialized separately (and assigned another number in case the new object is seen again).
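A minimal standalone check of that memo behaviour (my sketch, not from the original answer): two equal strings pickle to different sizes depending on whether they are the exact same object.
import pickle

a = "ForwardX11"
b = "".join(["Forward", "X11"])   # equal value, but a distinct str object
assert a == b and a is not b

same_obj = pickle.dumps([a, a])   # second occurrence becomes a memo back-reference
diff_obj = pickle.dumps([a, b])   # b is re-serialized in full
print(len(same_obj) < len(diff_obj))  # True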
Simplifying your code to demonstrate the issue, you can inspect the generated pickle output to see how this is happening:
s = r'''{
    "DEFAULT":
    {
        "ForwardX11": true
    },
    "FOO":
    {
        "ForwardX11": false
    }
}'''
s2 = r'''DEFAULT:
  ForwardX11: yes
FOO:
  ForwardX11: no
'''
import io, json, yaml, pickle, pickletools

d1 = json.load(io.StringIO(s))
d2 = yaml.safe_load(io.StringIO(s2))
pickletools.dis(pickle.dumps(d1))
pickletools.dis(pickle.dumps(d2))
The output from that code for the JSON-parsed input (with # comments inline to point out the important parts), at least on Python 3.7 (the default pickle protocol and exact pickling format can change from release to release), is:
0: \x80 PROTO 3
2: } EMPTY_DICT
3: q BINPUT 0
5: ( MARK
6: X BINUNICODE 'DEFAULT'
18: q BINPUT 1
20: } EMPTY_DICT
21: q BINPUT 2
23: X BINUNICODE 'ForwardX11' # Serializes 'ForwardX11'
38: q BINPUT 3 # Assigns the serialized form the ID of 3
40: \x88 NEWTRUE
41: s SETITEM
42: X BINUNICODE 'FOO'
50: q BINPUT 4
52: } EMPTY_DICT
53: q BINPUT 5
55: h BINGET 3 # Looks up whatever object was assigned the ID of 3
57: \x89 NEWFALSE
58: s SETITEM
59: u SETITEMS (MARK at 5)
60: . STOP
highest protocol among opcodes = 2
while the output from the yaml loaded data is:
0: \x80 PROTO 3
2: } EMPTY_DICT
3: q BINPUT 0
5: ( MARK
6: X BINUNICODE 'DEFAULT'
18: q BINPUT 1
20: } EMPTY_DICT
21: q BINPUT 2
23: X BINUNICODE 'ForwardX11' # Serializes as before
38: q BINPUT 3 # and assigns code 3 as before
40: \x88 NEWTRUE
41: s SETITEM
42: X BINUNICODE 'FOO'
50: q BINPUT 4
52: } EMPTY_DICT
53: q BINPUT 5
55: X BINUNICODE 'ForwardX11' # Doesn't see this 'ForwardX11' as being the exact same object, so reserializes
70: q BINPUT 6 # and marks again, in case this copy is seen again
72: \x89 NEWFALSE
73: s SETITEM
74: u SETITEMS (MARK at 5)
75: . STOP
highest protocol among opcodes = 2
Printing the id of each such string gets you similar information; e.g., replacing the pickletools lines with:
for k in d1['DEFAULT']:
    print(id(k))
for k in d1['FOO']:
    print(id(k))
for k in d2['DEFAULT']:
    print(id(k))
for k in d2['FOO']:
    print(id(k))
will show a consistent id for both 'ForwardX11's in d1, but differing ones for d2; a sample run produced (with inline comments added):
140067902240944 # First from d1
140067902240944 # Second from d1 is *same* object
140067900619760 # First from d2
140067900617712 # Second from d2 is unrelated object (same value, but stored separately)
While I didn't bother checking whether toml behaves the same way, given that it pickles the same as the yaml, it's clearly not attempting to dedupe strings; json is uniquely weird there. It's not a terrible idea that it does so, mind you; the keys of a JSON dict are logically equivalent to attributes on an object, and for huge inputs (say, 10M objects in an array with the same handful of keys), deduping might save a meaningful amount of memory on the final parsed output (e.g. on CPython 3.11 x86-64 builds, replacing 10M copies of "ForwardX11" with a single copy would reduce 590 MB of string data to just 59 bytes).
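For what it's worth, that 59-byte figure is easy to check (my note; exact sizes vary by CPython version and build):
import sys

# sys.getsizeof("") is 49 on a 64-bit CPython 3.11 build; each ASCII character adds 1 byte
print(sys.getsizeof("ForwardX11"))     # 59
print(10_000_000 * 59 / 1e6, "MB")     # ~590 MB if every copy were a distinct object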
As a side-note: This "dicts are equal, pickles are not" issue could also occur:
- When the two dicts were constructed with the same keys and values, but the order in which the keys were inserted differed (modern Python uses insertion-ordered dicts; comparisons between them ignore ordering, but pickle would be serializing them in whatever order they iterate in naturally).
- When there are objects which compare equal but have different types (e.g. set vs. frozenset, int vs. float); pickle would treat them separately, but equality tests would not see a difference.
Neither of these is the issue here (both json and yaml appear to be constructing in the same order seen in the input, and they're parsing the ints as ints), but it's entirely possible for your test of equality to return True, while the pickled forms are unequal, even when all the objects involved are unique.
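The insertion-order case in particular is easy to demonstrate (my sketch):
import pickle

d1 = {"a": 1, "b": 2}
d2 = {"b": 2, "a": 1}   # same keys and values, different insertion order

print(d1 == d2)                              # True: dict equality ignores order
print(pickle.dumps(d1) == pickle.dumps(d2))  # False: items are pickled in iteration order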

How to use the Blink_1Hz program in a clock program

I want to use the RPi Pico to build a clock.
In the documentation I found "blink_1hz.py" and I want to use its 1-second interrupt as the counter for my clock.
The original program prints the system time every second, correctly.
I only replaced the lambda handler in the irq call with my own seconds-counter function.
My problem is that my version of the program prints only once instead of ten times, and does not increment t.
I looked all over the internet but there is little specific information about the interrupts used by the StateMachines.
All suggestions are welcome
Here is my code:
# Example using PIO to blink an LED and raise an IRQ at 1Hz.
import time
from machine import Pin
import rp2

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def blink_1hz():
    # Cycles: 1 + 1 + 6 + 32 * (30 + 1) = 1000
    irq(rel(0))
    set(pins, 1)
    set(x, 31)                  [5]
    label("delay_high")
    nop()                       [29]
    jmp(x_dec, "delay_high")

    # Cycles: 1 + 7 + 32 * (30 + 1) = 1000
    set(pins, 0)
    set(x, 31)                  [6]
    label("delay_low")
    nop()                       [29]
    jmp(x_dec, "delay_low")

def secs():
    global t
    t = t + 1
    print("secs", t)

t = 0

# Create the StateMachine with the blink_1hz program, outputting on Pin(25).
sm = rp2.StateMachine(0, blink_1hz, freq=2000, set_base=Pin(25))

# Set the IRQ handler to print the millisecond timestamp.
sm.irq(handler = secs())  # prints secs only once
#sm.irq(lambda p: print(time.ticks_ms()))  # original, prints ticks every second

# Start the StateMachine.
sm.active(1)
time.sleep(10)
# Stop the StateMachine
sm.active(0)
print("main", t)

Is there a way to look up a value from a CSV in Nextflow? Or, alternatively, reuse a CSV?

I have a simple csv created as part of a workflow, like below:
sample,value
A,1
B,0.5
Separately, I have another channel with file names matching the sample names. I'd like to be able to use the values associated with each sample name within a new process.
I've tried splitting the CSV using .splitCsv but (unsurprisingly) sometimes the incorrect value gets used with a sample, although it does run the correct number of times. I've also tried just using awk within the script to pull out the corresponding value and save it to a variable, and this causes the correct value to be used, but it consumes the CSV file and so only one sample gets processed.
Super simplified nextflow (DSL2) script:
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

process foo {
    input:
    path input_file

    output:
    path 'file.csv', emit: csv

    """
    script that creates csv
    """
}

process bar {
    input:
    path input_file2

    output:
    path 'file.bam', emit: bam

    """
    script that creates bam files
    """
}

process help_me {
    input:
    path csv
    path bam

    output:
    path 'result'

    """
    script that uses value from csv on associated bam file
    """
}

workflow {
    foo(params.input)
    bar(params.input2)
    help_me(foo.out.csv, bar.out.bam)
}
Thanks!!
Edit: In essence, is there a way to synchronize two channels such that I can use a csv's individual rows with associated files?
If you have a value channel, you can reuse a file (like a CSV) an unlimited number of times without consuming the channel. For example:
workflow {
    input1 = file( params.input1 )
    input2 = file( params.input2 )

    foo( input1 )
    bar( input2 )

    help_me(foo.out.csv, bar.out.bam)
}
Here, both input1 and input2 are value channels. Also, (emphasis mine):
A value channel is implicitly created by a process when an input specifies a simple value in the from clause. Moreover, a value channel is also implicitly created as output for a process whose inputs are only value channels.
This means that both foo.out.csv and bar.out.bam are also value channels, and so is help_me.out. If input2 were instead a queue channel, you can see that input1 can still be re-used an unlimited number of times:
$ mkdir -p ./path/to/bams
$ touch ./path/to/bams/{A,B,C}.bam
$ touch ./foo.txt
params.input1 = './foo.txt'
params.input2 = './path/to/bams/*.bam'

workflow {
    input1 = file( params.input1 )
    input2 = Channel.fromPath( params.input2 )

    foo( input1 )
    bar( input2 )

    help_me(foo.out.csv, bar.out.bam)
}
Results:
$ nextflow run script.nf
N E X T F L O W ~ version 22.04.0
Launching `script.nf` [trusting_allen] DSL2 - revision: 75209e4c85
executor > local (7)
[24/d459f7] process > foo [100%] 1 of 1 ✔
[04/a903e4] process > bar (2) [100%] 3 of 3 ✔
[24/7a9a1d] process > help_me (3) [100%] 3 of 3 ✔
Note that bar.out.bam and help_me.out are now queue channels.
If instead you have one CSV per sample (or a similar configuration), you will need some way to join these channels beforehand and adjust your new process's input declaration accordingly. What you want to avoid is declaring two (or more) queue channels in your input block. This part of the docs is well worth the time investment: Understand how multiple input channels work; it explains why you saw the incorrect value being associated with a particular sample when consuming the splitCsv output. To join these channels, you can use the join operator. For example, given your simple CSV (as 'foo.csv') and the test bams created previously:
nextflow.enable.dsl=2

params.input1 = './foo.csv'
params.input2 = './path/to/bams/*.bam'

process help_me {
    debug true

    input:
    tuple val(sample), val(myval), path(bam)

    output:
    path 'result'

    """
    echo -n "sample: ${sample}, myval: ${myval}, bam: ${bam}"
    touch result
    """
}

workflow {
    Channel.fromPath( params.input1 ) \
        | splitCsv( header:true ) \
        | map { row -> tuple( row.sample, row.value ) } \
        | set { rows_ch }

    Channel.fromPath( params.input2 ) \
        | map { bam -> tuple( bam.baseName, bam ) } \
        | join( rows_ch ) \
        | map { sample, bam, myval -> tuple( sample, myval, bam ) } \
        | help_me
}
Results:
$ nextflow run script.nf
N E X T F L O W ~ version 22.04.0
Launching `script.nf` [lethal_mayer] DSL2 - revision: 395732babc
executor > local (2)
[c5/e96085] process > help_me (1) [100%] 2 of 2 ✔
sample: B, myval: 0.5, bam: B.bam
sample: A, myval: 1, bam: A.bam
If your CSV has more than one value for a particular sample and these are specified on separate lines, you probably want the combine operator instead. For example, if your 'foo.csv' contains:
sample,value
A,1
B,0.5
B,2
Then replace join( rows_ch ) with combine( rows_ch, by: 0 ) in the above example (a full sketch follows the results below). Results:
$ nextflow run script.nf
N E X T F L O W ~ version 22.04.0
Launching `script.nf` [festering_miescher] DSL2 - revision: f8de1e0d20
executor > local (3)
[ee/8af543] process > help_me (3) [100%] 3 of 3 ✔
sample: A, myval: 1, bam: A.bam
sample: B, myval: 0.5, bam: B.bam
sample: B, myval: 2, bam: B.bam
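For completeness, here is what that change looks like in context (my reconstruction, assuming the same 'foo.csv' and test bams as above):
workflow {
    Channel.fromPath( params.input1 ) \
        | splitCsv( header:true ) \
        | map { row -> tuple( row.sample, row.value ) } \
        | set { rows_ch }

    Channel.fromPath( params.input2 ) \
        | map { bam -> tuple( bam.baseName, bam ) } \
        | combine( rows_ch, by: 0 ) \
        | map { sample, bam, myval -> tuple( sample, myval, bam ) } \
        | help_me
}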

Julia CSV.read not recognizing "select" keyword

I am reading in a space-delimited file using the CSV library in Julia.
edgeList = CSV.read(
    joinpath(dataDirectory, "out.file"),
    types=[Int, Int],
    header=["node1", "node2"],
    skipto=3,
    select=[1,2]
)
This yields the following error:
MethodError: no method matching CSV.File(::String; types=DataType[Int64, Int64], header=["node1", "node2"], skipto=3, select=[1, 2])
Closest candidates are:
CSV.File(::Any; header, normalizenames, datarow, skipto, footerskip, limit, transpose, comment, use_mmap, ignoreemptylines, missingstrings, missingstring, delim, ignorerepeated, quotechar, openquotechar, closequotechar, escapechar, dateformat, decimal, truestrings, falsestrings, type, types, typemap, categorical, pool, strict, silencewarnings, threaded, debug, parsingdebug, allowmissing) at /Users/n.jordanjameson/.julia/packages/CSV/4GOjG/src/CSV.jl:221 got unsupported keyword argument "select"
I am using Julia v1.6.2. Here is the output of versioninfo():
Julia Version 1.6.2
Commit 1b93d53fc4 (2021-07-14 15:36 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin18.7.0)
CPU: Intel(R) Core(TM) i7-5650U CPU @ 2.20GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, broadwell)
The version of CSV is 0.10.4. The wiki for this version of CSV is here: https://csv.juliadata.org/stable/reading.html#CSV.read, and it has a select / drop entry.
The file I am trying to read is from here: http://konect.cc/networks/moreno_crime/ (the file I'm using is called "out.moreno_crime_crime"). The first few lines are:
% bip unweighted
% 1476 829 551
1 1
1 2
1 3
1 4
2 5
2 6
2 7
2 8
2 9
2 10
I get a different error than you; can you restart Julia and make sure?
julia> CSV.read("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
           header=["node1", "node2"],
           skipto=3,
           select=[1,2]
       )
ERROR: ArgumentError: provide a valid sink argument, like `using DataFrames; CSV.read(source, DataFrame)`
Stacktrace:
 [1] read(source::String, sink::Nothing; copycols::Bool, kwargs::Base.Pairs{Symbol, Any, NTuple{4, Symbol}, NamedTuple{(:types, :header, :skipto, :select), Tuple{Vector{DataType}, Vector{String}, Int64, Vector{Int64}}}})
   @ CSV ~/.julia/packages/CSV/jFiCn/src/CSV.jl:89
 [2] top-level scope
   @ REPL[8]:1
This error is telling you that you can't CSV.read without a target sink; you might want to use CSV.File:
julia> CSV.File("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
           header=["node1", "node2"],
           skipto=3,
           select=[1,2]
       )
┌ Warning: thread = 1 warning: parsed expected 2 columns, but didn't reach end of line around data row: 1. Parsing extra columns and widening final columnset
└ @ CSV ~/.julia/packages/CSV/jFiCn/src/file.jl:579
1476-element CSV.File:
CSV.Row: (node1 = 1, node2 = 1, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 2, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 3, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 4, Column3 = missing)
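As a side note (my sketch, not part of the answer above): if you do want CSV.read, the error message itself shows the fix, namely passing a sink such as a DataFrame as the second argument. Assuming DataFrames.jl is installed and keeping the keyword arguments from the question:
using CSV, DataFrames

edgeList = CSV.read(
    joinpath(dataDirectory, "out.file"), DataFrame;
    types=[Int, Int],
    header=["node1", "node2"],
    skipto=3,
    select=[1, 2]
)
# The extra-column warning seen with CSV.File above may still appear for this space-delimited file.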

rOpenGov/mpg: looping through VIN numbers returns an error vs. single use?

I'm trying to loop through the feVehicle() function in the mpg package, courtesy of:
https://github.com/rOpenGov/mpg
I've been trying to feed the function multiple VIN IDs, even giving it 5 seconds of rest between iterations just in case, but I keep getting an HTTP error, even though the function works fine on its own. Any ideas what it might be? Below is the code:
# using a loop
vin = c("19UUA86209A000532", "19UUA86239A021598", "19UUA8F20CA037748", "19UUA8F21CA008002", "19UUA8F21CA017878")

for (i in vin) {
  library(mpg)
  print(i)
  print(substr(i, 13, 17))
  q = substr(i, 13, 17)
  z = feVehicle(q)
  Sys.sleep(5)
  z = t(unlist(z))
}
or
#using lapply to see a difference
lapply(vin, feVehicle)
both throw the following error:
[1] "19UUA86209A000532"
[1] "00532"
failed to load HTTP resource
Error in t.default(unlist(z)) : argument is not a matrix
> lapply(vin, feVehicle)
failed to load HTTP resource
failed to load HTTP resource
failed to load HTTP resource
failed to load HTTP resource
failed to load HTTP resource
But when I run it on just one at a time it works fine:
mpg::feVehicle(00532)
Vehicle data:
value
atvType Diesel
barrels08 16.616739130434784
barrelsA08 0.0
c240Dscr NULL
c240bDscr NULL
charge120 0.0
charge240 0.0
charge240b 0.0
city08 21
city08U 0.0
cityA08 0
cityA08U 0.0
city
It's because in your single example you gave a number, but in the loop you used a character string:
# using a loop
vin = c("19UUA86209A000532", "19UUA86239A021598", "19UUA8F20CA037748", "19UUA8F21CA008002", "19UUA8F21CA017878")

for (i in vin) {
  library(mpg)
  print(i)
  print(substr(i, 13, 17))
  q = substr(i, 13, 17)
  z = feVehicle(as.numeric(q))
  Sys.sleep(5)
  z = t(unlist(z))
}
[1] "19UUA86209A000532"
[1] "00532"
[1] "19UUA86239A021598"
[1] "21598"
[1] "19UUA8F20CA037748"
[1] "37748"
[1] "19UUA8F21CA008002"
[1] "08002"
[1] "19UUA8F21CA017878"
[1] "17878"