Julia CSV.read not recognizing "select" keyword

I am reading in a space-delimited file using the CSV library in Julia.
edgeList = CSV.read(
    joinpath(dataDirectory, "out.file"),
    types=[Int, Int],
    header=["node1", "node2"],
    skipto=3,
    select=[1,2]
)
This yields the following error:
MethodError: no method matching CSV.File(::String; types=DataType[Int64, Int64], header=["node1", "node2"], skipto=3, select=[1, 2])
Closest candidates are:
CSV.File(::Any; header, normalizenames, datarow, skipto, footerskip, limit, transpose, comment, use_mmap, ignoreemptylines, missingstrings, missingstring, delim, ignorerepeated, quotechar, openquotechar, closequotechar, escapechar, dateformat, decimal, truestrings, falsestrings, type, types, typemap, categorical, pool, strict, silencewarnings, threaded, debug, parsingdebug, allowmissing) at /Users/n.jordanjameson/.julia/packages/CSV/4GOjG/src/CSV.jl:221 got unsupported keyword argument "select"
I am using Julia v1.6.2. Here is the output of versioninfo():
Julia Version 1.6.2
Commit 1b93d53fc4 (2021-07-14 15:36 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin18.7.0)
CPU: Intel(R) Core(TM) i7-5650U CPU @ 2.20GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, broadwell)
The version of CSV is 0.10.4. The documentation for this version of CSV is here: https://csv.juliadata.org/stable/reading.html#CSV.read, and it has a select / drop entry.
The file I am trying to read is from here: http://konect.cc/networks/moreno_crime/ (the file I'm using is called "out.moreno_crime_crime"). The first few lines are:
% bip unweighted
% 1476 829 551
1 1
1 2
1 3
1 4
2 5
2 6
2 7
2 8
2 9
2 10

I get a different error than you; can you restart Julia and make sure?
julia> CSV.read("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
header=["node1", "node2"],
skipto=3,
select=[1,2]
)
ERROR: ArgumentError: provide a valid sink argument, like `using DataFrames; CSV.read(source, DataFrame)`
Stacktrace:
[1] read(source::String, sink::Nothing; copycols::Bool, kwargs::Base.Pairs{Symbol, Any, NTuple{4, Symbol}, NamedTuple{(:types, :header, :skipto, :select), Tuple{Vector{DataType}, Vector{String}, Int64, Vector{Int64}}}})
@ CSV ~/.julia/packages/CSV/jFiCn/src/CSV.jl:89
[2] top-level scope
@ REPL[8]:1
This error is telling you that you can't call CSV.read without a target sink, like CSV.read(source, DataFrame); you might want to use CSV.File instead:
julia> CSV.File("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
header=["node1", "node2"],
skipto=3,
select=[1,2]
)
┌ Warning: thread = 1 warning: parsed expected 2 columns, but didn't reach end of line around data row: 1. Parsing extra columns and widening final columnset
└ @ CSV ~/.julia/packages/CSV/jFiCn/src/file.jl:579
1476-element CSV.File:
CSV.Row: (node1 = 1, node2 = 1, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 2, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 3, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 4, Column3 = missing)

Related

Why the parsed dicts are equal while the pickled dicts are not?

I'm working on an aggregated config file parsing tool, hoping it can support .json, .yaml and .toml files. So I have done the following tests:
The example.json config file is:
{
    "DEFAULT":
    {
        "ServerAliveInterval": 45,
        "Compression": true,
        "CompressionLevel": 9,
        "ForwardX11": true
    },
    "bitbucket.org":
    {
        "User": "hg"
    },
    "topsecret.server.com":
    {
        "Port": 50022,
        "ForwardX11": false
    },
    "special":
    {
        "path": "C:\\Users",
        "escaped1": "\n\t",
        "escaped2": "\\n\\t"
    }
}
The example.yaml config file is:
DEFAULT:
  ServerAliveInterval: 45
  Compression: yes
  CompressionLevel: 9
  ForwardX11: yes
bitbucket.org:
  User: hg
topsecret.server.com:
  Port: 50022
  ForwardX11: no
special:
  path: C:\Users
  escaped1: "\n\t"
  escaped2: \n\t
and the example.toml config file is:
[DEFAULT]
ServerAliveInterval = 45
Compression = true
CompressionLevel = 9
ForwardX11 = true
['bitbucket.org']
User = 'hg'
['topsecret.server.com']
Port = 50022
ForwardX11 = false
[special]
path = 'C:\Users'
escaped1 = "\n\t"
escaped2 = '\n\t'
Then, the test code with output is:
import pickle, json, yaml
# TOML, see https://github.com/hukkin/tomli
try:
    import tomllib
except ModuleNotFoundError:
    import tomli as tomllib
path = "example.json"
with open(path) as file:
    config1 = json.load(file)
assert isinstance(config1, dict)
pickled1 = pickle.dumps(config1)
path = "example.yaml"
with open(path, 'r', encoding='utf-8') as file:
    config2 = yaml.safe_load(file)
assert isinstance(config2, dict)
pickled2 = pickle.dumps(config2)
path = "example.toml"
with open(path, 'rb') as file:
    config3 = tomllib.load(file)
assert isinstance(config3, dict)
pickled3 = pickle.dumps(config3)
print(config1 == config2)   # True
print(config2 == config3)   # True
print(pickled1 == pickled2) # False
print(pickled2 == pickled3) # True
So, my question is: since the parsed objects are all dicts, and these dicts are equal to each other, why are their pickled forms not the same? That is, why is the pickle of the dict parsed from json different from the other two?
Thanks in advance.
The difference is due to:
1. The json module memoizing object keys with the same value (it's not interning them, but the scanner object contains a memo dict that it uses to dedupe identical key strings within a single parsing run), while yaml does not (it just makes a new str each time it sees the same data), and
2. pickle faithfully reproducing the exact structure of the data it's told to dump, replacing subsequent references to the same object with a back-reference to the first time it was seen. Among other reasons, this makes it possible to dump recursive data structures, e.g. lst = []; lst.append(lst), without infinite recursion, and to reproduce them faithfully when unpickled (see the sketch below).
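That back-reference machinery is what makes the recursive case safe; a minimal stdlib-only sketch (not part of the original answer's code):

import pickle

lst = []
lst.append(lst)                       # lst's only element is lst itself
out = pickle.loads(pickle.dumps(lst))
print(out is out[0])                  # True: the cycle survives the round trip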
Issue #1 isn't visible in equality testing (strs compare equal with the same data, not just the same exact object in memory). But when pickle sees "ForwardX11" the first time, it inserts the pickled form of the object and emits a pickle opcode that assigns a number to that object. If that exact object is seen again (same memory address, not merely the same value), instead of reserializing it, pickle emits a simpler opcode that says "go find the object associated with the number from last time and put it here as well". If it's a different object, though, even one with the same value, it's new, and gets serialized separately (and assigned another number in case the new object is seen again).
Simplifying your code to demonstrate the issue, you can inspect the generated pickle output to see how this is happening:
s = r'''{
    "DEFAULT":
    {
        "ForwardX11": true
    },
    "FOO":
    {
        "ForwardX11": false
    }
}'''
s2 = r'''DEFAULT:
  ForwardX11: yes
FOO:
  ForwardX11: no
'''
import io, json, yaml, pickle, pickletools
d1 = json.load(io.StringIO(s))
d2 = yaml.safe_load(io.StringIO(s2))
pickletools.dis(pickle.dumps(d1))
pickletools.dis(pickle.dumps(d2))
The output from that code for the json-parsed input, at least on Python 3.7 (the default pickle protocol and exact pickling format can change from release to release), is (with # comments inline to point out important things):
0: \x80 PROTO 3
2: } EMPTY_DICT
3: q BINPUT 0
5: ( MARK
6: X BINUNICODE 'DEFAULT'
18: q BINPUT 1
20: } EMPTY_DICT
21: q BINPUT 2
23: X BINUNICODE 'ForwardX11' # Serializes 'ForwardX11'
38: q BINPUT 3 # Assigns the serialized form the ID of 3
40: \x88 NEWTRUE
41: s SETITEM
42: X BINUNICODE 'FOO'
50: q BINPUT 4
52: } EMPTY_DICT
53: q BINPUT 5
55: h BINGET 3 # Looks up whatever object was assigned the ID of 3
57: \x89 NEWFALSE
58: s SETITEM
59: u SETITEMS (MARK at 5)
60: . STOP
highest protocol among opcodes = 2
while the output from the yaml loaded data is:
0: \x80 PROTO 3
2: } EMPTY_DICT
3: q BINPUT 0
5: ( MARK
6: X BINUNICODE 'DEFAULT'
18: q BINPUT 1
20: } EMPTY_DICT
21: q BINPUT 2
23: X BINUNICODE 'ForwardX11' # Serializes as before
38: q BINPUT 3 # and assigns code 3 as before
40: \x88 NEWTRUE
41: s SETITEM
42: X BINUNICODE 'FOO'
50: q BINPUT 4
52: } EMPTY_DICT
53: q BINPUT 5
55: X BINUNICODE 'ForwardX11' # Doesn't see this 'ForwardX11' as being the exact same object, so reserializes
70: q BINPUT 6 # and marks again, in case this copy is seen again
72: \x89 NEWFALSE
73: s SETITEM
74: u SETITEMS (MARK at 5)
75: . STOP
highest protocol among opcodes = 2
Printing the id of each such string would get you similar information, e.g., replacing the pickletools lines with:
for k in d1['DEFAULT']:
    print(id(k))
for k in d1['FOO']:
    print(id(k))
for k in d2['DEFAULT']:
    print(id(k))
for k in d2['FOO']:
    print(id(k))
will show a consistent id for both 'ForwardX11's in d1, but differing ones for d2; a sample run produced (with inline comments added):
140067902240944 # First from d1
140067902240944 # Second from d1 is *same* object
140067900619760 # First from d2
140067900617712 # Second from d2 is unrelated object (same value, but stored separately)
While I didn't bother checking if toml behaved the same way, given that it pickles the same as the yaml, it's clearly not attempting to dedupe strings; json is uniquely weird there. It's not a terrible idea that it does so, mind you; the keys of a JSON object are logically equivalent to attributes on an object, and for huge inputs (say, 10M objects in an array with the same handful of keys), deduping might save a meaningful amount of memory on the final parsed output (e.g. on CPython 3.11 x86-64 builds, replacing 10M copies of "ForwardX11" with a single shared copy would reduce 590 MB of string data to just 59 bytes).
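To put a number on that claim, a quick sketch (sizes measured with sys.getsizeof on CPython 3.11 x86-64; other builds and versions will differ):

import sys

key = "ForwardX11"
per_copy = sys.getsizeof(key)    # 59 bytes for this 10-char ASCII str
print(per_copy)                  # 59
print(per_copy * 10_000_000)     # 590000000 bytes (~590 MB) if every copy is distinct
# with json's key memoization, all 10M dict keys share a single 59-byte object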
As a side-note: This "dicts are equal, pickles are not" issue could also occur:
When the two dicts were constructed with the same keys and values, but the order in which the keys were inserted differed (modern Python uses insertion-ordered dicts; comparisons between them ignore ordering, but pickle would be serializing them in whatever order they iterate in naturally).
When there are objects which compare equal but have different types (e.g. set vs. frozenset, int vs. float); pickle would treat them separately, but equality tests would not see a difference.
Neither of these is the issue here (both json and yaml appear to be constructing in the same order seen in the input, and they're parsing the ints as ints), but it's entirely possible for your test of equality to return True, while the pickled forms are unequal, even when all the objects involved are unique.
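Both failure modes are easy to demonstrate with plain literals (a minimal sketch, unrelated to the question's config files):

import pickle

# same keys and values, different insertion order
a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}
print(a == b)                              # True: dict equality ignores order
print(pickle.dumps(a) == pickle.dumps(b))  # False: pickle follows iteration order

# equal values, different types
print({1} == frozenset({1}))                              # True
print(pickle.dumps({1}) == pickle.dumps(frozenset({1})))  # False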

Unexpected and missing keys in state_dict when converting pytorch to onnx

When I convert a '.pth' model from PyTorch to ONNX, I get errors about unexpected keys and missing keys in the state_dict.
This is my model:
1 import torch
2 import torch.onnx
3 from mmcv import runner
4 import torch.nn as nn
5 from mobilenet import MobileNet
6 # A model class instance (class not shown)
7 md=MobileNet(1,2)
8 model = md
9 device_ids = [0,2,6,7,8]
10 model = nn.DataParallel(model,device_ids)
11 #torch.backends.cudnn.benchmark = True
12 # Load the weights from a file (.pth usually)
13 runner.load_checkpoint(model,'../mmdetection-master/work_dmobile/faster_rcnn_r50_fpn_1x/epoch_60.pth')
14 #model = MMDataParallel(model, device_ids=[0])
15 #state_dict=torch.load('../mmdetection-master/r.pkl.json')
16 # Load the weights now into a model net architecture defined by our class
17 #model.load_state_dict(state_dict)
18 #model = runner.load_state_dict(state_dict)
19 model=runner.load_state_dict({k.replace('module.',' '):v for k,v in state_dict['state_dict'].items()})
20 # Create the right input shape (e.g. for an image)
21 dummy_input = torch.randn(1, 64, 512, 256)
22
23 torch.onnx.export(model, dummy_input, "onnx_model_name.onnx")
And this is the error:
unexpected key in source state_dict: backbone.stem.0.conv.weight, backbone.stem.0.bn.weight, backbone.stem.0.bn.bias, backbone.stem.0.bn.running_mean, backbone.stem.0.bn.running_var, backbone.stem.0.bn.num_batches_tracked, backbone.stem.1.depthwise.0.weight, backbone.stem.1.depthwise.1.weight, backbone.stem.1.depthwise.1.bias, backbone.stem.1.depthwise.1.running_mean, backbone.stem.1.depthwise.1.running_var, backbone.stem.1.depthwise.1.num_batches_tracked, backbone.stem.1.pointwise.0.weight, backbone.stem.1.pointwise.0.bias, backbone.stem.1.pointwise.1.weight, backbone.stem.1.pointwise.1.bias, backbone.stem.1.pointwise.1.running_mean, backbone.stem.1.pointwise.1.running_var, backbone.stem.1.pointwise.1.num_batches_tracked, backbone.conv1.0.depthwise.0.weight, backbone.conv1.0.depthwise.1.weight, backbone.conv1.0.depthwise.1.bias, backbone.conv1.0.depthwise.1.running_mean, backbone.conv1.0.depthwise.1.running_var, backbone.conv1.0.depthwise.1.num_batches_tracked, backbone.conv1.0.pointwise.0.weight, backbone.conv1.0.pointwise.0.bias, backbone.conv1.0.pointwise.1.weight, backbone.conv1.0.pointwise.1.bias, backbone.conv1.0.pointwise.1.running_mean, backbone.conv1.0.pointwise.1.running_var, backbone.conv1.0.pointwise.1.num_batches_tracked, backbone.conv1.1.depthwise.0.weight, backbone.conv1.1.depthwise.1.weight, backbone.conv1.1.depthwise.1.bias, backbone.conv1.1.depthwise.1.running_mean, backbone.conv1.1.depthwise.1.running_var, backbone.conv1.1.depthwise.1.num_batches_tracked, backbone.conv1.1.pointwise.0.weight, backbone.conv1.1.pointwise.0.bias, backbone.conv1.1.pointwise.1.weight, backbone.conv1.1.pointwise.1.bias, backbone.conv1.1.pointwise.1.running_mean, backbone.conv1.1.pointwise.1.running_var, backbone.conv1.1.pointwise.1.num_batches_tracked, backbone.conv2.0.depthwise.0.weight, backbone.conv2.0.depthwise.1.weight, backbone.conv2.0.depthwise.1.bias, backbone.conv2.0.depthwise.1.running_mean, backbone.conv2.0.depthwise.1.running_var, backbone.conv2.0.depthwise.1.num_batches_tracked, backbone.conv2.0.pointwise.0.weight, backbone.conv2.0.pointwise.0.bias, backbone.conv2.0.pointwise.1.weight, backbone.conv2.0.pointwise.1.bias, backbone.conv2.0.pointwise.1.running_mean, backbone.conv2.0.pointwise.1.running_var, backbone.conv2.0.pointwise.1.num_batches_tracked, backbone.conv2.1.depthwise.0.weight, backbone.conv2.1.depthwise.1.weight, backbone.conv2.1.depthwise.1.bias, backbone.conv2.1.depthwise.1.running_mean, backbone.conv2.1.depthwise.1.running_var, backbone.conv2.1.depthwise.1.num_batches_tracked, backbone.conv2.1.pointwise.0.weight, backbone.conv2.1.pointwise.0.bias, backbone.conv2.1.pointwise.1.weight, backbone.conv2.1.pointwise.1.bias, backbone.conv2.1.pointwise.1.running_mean, backbone.conv2.1.pointwise.1.running_var, backbone.conv2.1.pointwise.1.num_batches_tracked, backbone.conv3.0.depthwise.0.weight, backbone.conv3.0.depthwise.1.weight, backbone.conv3.0.depthwise.1.bias, backbone.conv3.0.depthwise.1.running_mean, backbone.conv3.0.depthwise.1.running_var, backbone.conv3.0.depthwise.1.num_batches_tracked, backbone.conv3.0.pointwise.0.weight, backbone.conv3.0.pointwise.0.bias, backbone.conv3.0.pointwise.1.weight, backbone.conv3.0.pointwise.1.bias, backbone.conv3.0.pointwise.1.running_mean, backbone.conv3.0.pointwise.1.running_var, backbone.conv3.0.pointwise.1.num_batches_tracked, backbone.conv3.1.depthwise.0.weight, backbone.conv3.1.depthwise.1.weight, backbone.conv3.1.depthwise.1.bias, backbone.conv3.1.depthwise.1.running_mean, backbone.conv3.1.depthwise.1.running_var, 
backbone.conv3.1.depthwise.1.num_batches_tracked, backbone.conv3.1.pointwise.0.weight, backbone.conv3.1.pointwise.0.bias, backbone.conv3.1.pointwise.1.weight, backbone.conv3.1.pointwise.1.bias, backbone.conv3.1.pointwise.1.running_mean, backbone.conv3.1.pointwise.1.running_var, backbone.conv3.1.pointwise.1.num_batches_tracked, backbone.conv3.2.depthwise.0.weight, backbone.conv3.2.depthwise.1.weight, backbone.conv3.2.depthwise.1.bias, backbone.conv3.2.depthwise.1.running_mean, backbone.conv3.2.depthwise.1.running_var, backbone.conv3.2.depthwise.1.num_batches_tracked, backbone.conv3.2.pointwise.0.weight, backbone.conv3.2.pointwise.0.bias, backbone.conv3.2.pointwise.1.weight, backbone.conv3.2.pointwise.1.bias, backbone.conv3.2.pointwise.1.running_mean, backbone.conv3.2.pointwise.1.running_var, backbone.conv3.2.pointwise.1.num_batches_tracked, backbone.conv3.3.depthwise.0.weight, backbone.conv3.3.depthwise.1.weight, backbone.conv3.3.depthwise.1.bias, backbone.conv3.3.depthwise.1.running_mean, backbone.conv3.3.depthwise.1.running_var, backbone.conv3.3.depthwise.1.num_batches_tracked, backbone.conv3.3.pointwise.0.weight, backbone.conv3.3.pointwise.0.bias, backbone.conv3.3.pointwise.1.weight, backbone.conv3.3.pointwise.1.bias, backbone.conv3.3.pointwise.1.running_mean, backbone.conv3.3.pointwise.1.running_var, backbone.conv3.3.pointwise.1.num_batches_tracked, backbone.conv3.4.depthwise.0.weight, backbone.conv3.4.depthwise.1.weight, backbone.conv3.4.depthwise.1.bias, backbone.conv3.4.depthwise.1.running_mean, backbone.conv3.4.depthwise.1.running_var, backbone.conv3.4.depthwise.1.num_batches_tracked, backbone.conv3.4.pointwise.0.weight, backbone.conv3.4.pointwise.0.bias, backbone.conv3.4.pointwise.1.weight, backbone.conv3.4.pointwise.1.bias, backbone.conv3.4.pointwise.1.running_mean, backbone.conv3.4.pointwise.1.running_var, backbone.conv3.4.pointwise.1.num_batches_tracked, backbone.conv3.5.depthwise.0.weight, backbone.conv3.5.depthwise.1.weight, backbone.conv3.5.depthwise.1.bias, backbone.conv3.5.depthwise.1.running_mean, backbone.conv3.5.depthwise.1.running_var, backbone.conv3.5.depthwise.1.num_batches_tracked, backbone.conv3.5.pointwise.0.weight, backbone.conv3.5.pointwise.0.bias, backbone.conv3.5.pointwise.1.weight, backbone.conv3.5.pointwise.1.bias, backbone.conv3.5.pointwise.1.running_mean, backbone.conv3.5.pointwise.1.running_var, backbone.conv3.5.pointwise.1.num_batches_tracked, backbone.conv4.0.depthwise.0.weight, backbone.conv4.0.depthwise.1.weight, backbone.conv4.0.depthwise.1.bias, backbone.conv4.0.depthwise.1.running_mean, backbone.conv4.0.depthwise.1.running_var, backbone.conv4.0.depthwise.1.num_batches_tracked, backbone.conv4.0.pointwise.0.weight, backbone.conv4.0.pointwise.0.bias, backbone.conv4.0.pointwise.1.weight, backbone.conv4.0.pointwise.1.bias, backbone.conv4.0.pointwise.1.running_mean, backbone.conv4.0.pointwise.1.running_var, backbone.conv4.0.pointwise.1.num_batches_tracked, backbone.conv4.1.depthwise.0.weight, backbone.conv4.1.depthwise.1.weight, backbone.conv4.1.depthwise.1.bias, backbone.conv4.1.depthwise.1.running_mean, backbone.conv4.1.depthwise.1.running_var, backbone.conv4.1.depthwise.1.num_batches_tracked, backbone.conv4.1.pointwise.0.weight, backbone.conv4.1.pointwise.0.bias, backbone.conv4.1.pointwise.1.weight, backbone.conv4.1.pointwise.1.bias, backbone.conv4.1.pointwise.1.running_mean, backbone.conv4.1.pointwise.1.running_var, backbone.conv4.1.pointwise.1.num_batches_tracked, neck.lateral_convs.0.conv.weight, neck.lateral_convs.0.conv.bias, 
neck.lateral_convs.1.conv.weight, neck.lateral_convs.1.conv.bias, neck.lateral_convs.2.conv.weight, neck.lateral_convs.2.conv.bias, neck.fpn_convs.0.conv.weight, neck.fpn_convs.0.conv.bias, neck.fpn_convs.1.conv.weight, neck.fpn_convs.1.conv.bias, neck.fpn_convs.2.conv.weight, neck.fpn_convs.2.conv.bias, rpn_head.rpn_conv.weight, rpn_head.rpn_conv.bias, rpn_head.rpn_cls.weight, rpn_head.rpn_cls.bias, rpn_head.rpn_reg.weight, rpn_head.rpn_reg.bias, bbox_head.fc_cls.weight, bbox_head.fc_cls.bias, bbox_head.fc_reg.weight, bbox_head.fc_reg.bias, bbox_head.shared_fcs.0.weight, bbox_head.shared_fcs.0.bias, bbox_head.shared_fcs.1.weight, bbox_head.shared_fcs.1.bias
missing keys in source state_dict: conv2.1.depthwise.1.weight, conv4.0.depthwise.0.weight, conv4.1.pointwise.1.weight, conv3.2.depthwise.0.weight, conv3.1.pointwise.0.weight, conv3.4.pointwise.1.bias, conv3.5.depthwise.1.bias, conv2.1.pointwise.1.weight, stem.1.pointwise.1.running_mean, conv3.3.pointwise.1.weight, conv3.3.depthwise.1.running_mean, conv3.1.depthwise.1.num_batches_tracked, conv3.0.depthwise.1.num_batches_tracked, conv2.1.depthwise.1.running_var, conv1.0.depthwise.1.weight, conv3.5.depthwise.1.running_var, stem.0.bn.bias, conv3.2.depthwise.1.num_batches_tracked, conv2.0.depthwise.0.weight, conv2.1.pointwise.0.bias, conv3.1.pointwise.1.bias, conv3.2.pointwise.1.bias, conv2.0.pointwise.1.num_batches_tracked, stem.1.pointwise.0.weight, conv2.0.depthwise.1.weight, stem.1.depthwise.0.weight, conv1.1.pointwise.1.weight, conv3.5.pointwise.0.weight, conv3.4.depthwise.1.running_var, conv1.0.pointwise.0.bias, conv3.3.depthwise.1.running_var, conv3.0.pointwise.1.weight, conv4.0.pointwise.1.num_batches_tracked, conv4.1.depthwise.1.running_var, stem.1.depthwise.1.running_var, conv3.0.pointwise.1.running_var, conv3.4.depthwise.0.weight, conv3.4.pointwise.1.num_batches_tracked, conv4.0.depthwise.1.num_batches_tracked, conv3.0.depthwise.1.weight, conv3.3.pointwise.0.bias, conv3.0.depthwise.1.running_mean, conv3.2.pointwise.1.running_mean, conv3.1.pointwise.0.bias, conv3.5.depthwise.1.num_batches_tracked, conv3.5.pointwise.1.running_mean, conv3.1.pointwise.1.running_var, conv1.0.depthwise.1.running_mean, stem.1.pointwise.1.bias, conv1.0.depthwise.0.weight, conv3.2.pointwise.0.weight, conv4.0.pointwise.1.running_mean, conv2.1.pointwise.1.running_mean, stem.1.pointwise.1.weight, conv4.1.depthwise.1.weight, conv4.0.pointwise.0.weight, conv1.1.depthwise.1.bias, conv3.2.pointwise.1.num_batches_tracked, conv4.1.depthwise.0.weight, conv3.4.depthwise.1.running_mean, conv1.0.depthwise.1.bias, conv2.0.pointwise.0.bias, conv3.4.depthwise.1.num_batches_tracked, conv4.1.pointwise.1.running_mean, conv2.1.depthwise.1.bias, conv3.2.depthwise.1.weight, conv2.0.pointwise.1.weight, conv1.0.pointwise.0.weight, conv3.1.depthwise.1.running_var, conv2.0.pointwise.1.bias, conv4.0.depthwise.1.bias, conv3.3.pointwise.1.running_var, conv3.4.pointwise.1.weight, conv4.0.pointwise.0.bias, conv3.4.depthwise.1.bias, conv4.1.depthwise.1.num_batches_tracked, conv2.0.pointwise.1.running_mean, conv1.1.depthwise.1.weight, conv2.0.pointwise.1.running_var, stem.1.depthwise.1.running_mean, conv3.4.pointwise.1.running_var, stem.1.depthwise.1.num_batches_tracked, conv3.3.depthwise.1.weight, stem.1.pointwise.1.running_var, conv4.1.depthwise.1.bias, conv3.0.pointwise.1.bias, conv2.0.depthwise.1.running_mean, conv1.1.pointwise.1.bias, conv4.1.pointwise.0.bias, conv3.2.pointwise.0.bias, conv1.1.pointwise.0.weight, conv1.0.pointwise.1.weight, conv1.0.pointwise.1.running_mean, stem.0.conv.weight, stem.1.depthwise.1.bias, conv3.3.depthwise.0.weight, conv1.1.depthwise.1.num_batches_tracked, conv3.3.pointwise.1.num_batches_tracked, conv3.2.pointwise.1.running_var, conv3.2.depthwise.1.running_mean, conv3.3.depthwise.1.bias, conv4.1.pointwise.1.num_batches_tracked, conv2.0.depthwise.1.num_batches_tracked, conv3.0.pointwise.0.bias, conv3.1.depthwise.1.running_mean, conv3.1.depthwise.1.weight, conv3.0.pointwise.1.num_batches_tracked, conv3.1.pointwise.1.weight, conv4.0.pointwise.1.bias, conv3.3.depthwise.1.num_batches_tracked, conv3.4.pointwise.0.weight, stem.1.pointwise.0.bias, conv3.0.depthwise.1.bias, conv1.1.pointwise.0.bias, 
conv4.0.pointwise.1.running_var, stem.0.bn.weight, conv1.0.pointwise.1.num_batches_tracked, conv2.1.depthwise.1.running_mean, conv4.1.depthwise.1.running_mean, conv1.1.pointwise.1.running_var, conv2.1.pointwise.1.num_batches_tracked, conv2.0.depthwise.1.running_var, conv3.5.depthwise.1.weight, conv3.0.depthwise.0.weight, conv4.0.depthwise.1.running_mean, stem.0.bn.num_batches_tracked, conv3.3.pointwise.1.running_mean, conv2.1.pointwise.1.running_var, conv3.0.pointwise.1.running_mean, conv1.1.depthwise.1.running_var, conv3.0.depthwise.1.running_var, conv1.0.depthwise.1.running_var, stem.1.pointwise.1.num_batches_tracked, conv4.0.pointwise.1.weight, conv1.1.pointwise.1.running_mean, conv2.1.depthwise.0.weight, conv1.0.depthwise.1.num_batches_tracked, conv1.0.pointwise.1.running_var, conv3.5.pointwise.1.weight, conv3.5.depthwise.1.running_mean, conv3.1.depthwise.1.bias, conv3.1.depthwise.0.weight, conv1.1.depthwise.1.running_mean, conv2.0.pointwise.0.weight, conv4.1.pointwise.1.bias, conv3.2.depthwise.1.running_var, conv3.5.pointwise.0.bias, conv3.4.depthwise.1.weight, conv3.2.depthwise.1.bias, stem.0.bn.running_mean, conv4.0.depthwise.1.running_var, conv1.1.depthwise.0.weight, stem.0.bn.running_var, conv4.1.pointwise.0.weight, conv2.1.pointwise.1.bias, conv3.4.pointwise.0.bias, conv1.0.pointwise.1.bias, conv3.5.pointwise.1.running_var, conv1.1.pointwise.1.num_batches_tracked, conv3.1.pointwise.1.running_mean, conv2.1.depthwise.1.num_batches_tracked, conv2.1.pointwise.0.weight, stem.1.depthwise.1.weight, conv3.5.pointwise.1.bias, conv3.5.pointwise.1.num_batches_tracked, conv3.1.pointwise.1.num_batches_tracked, conv3.2.pointwise.1.weight, conv3.5.depthwise.0.weight, conv3.3.pointwise.0.weight, conv2.0.depthwise.1.bias, conv3.0.pointwise.0.weight, conv3.3.pointwise.1.bias, conv3.4.pointwise.1.running_mean, conv4.0.depthwise.1.weight, conv4.1.pointwise.1.running_var
In line 19, try using model=runner.load_state_dict(..., strict=False).
Using the parameter strict=False tells the load_state_dict function that there might be missing keys in the checkpoint; as I see in this case, those usually come from the BatchNorm layers.
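A minimal sketch of the idea (the checkpoint path, the 'state_dict' key, and the MobileNet class are taken from the question's code, not verified):

import torch
from mobilenet import MobileNet  # model class from the question (not shown)

model = MobileNet(1, 2)
checkpoint = torch.load('../mmdetection-master/work_dmobile/faster_rcnn_r50_fpn_1x/epoch_60.pth')
state_dict = checkpoint['state_dict']
# strip the 'module.' prefix added by nn.DataParallel; replace with '' (empty string), not ' '
cleaned = {k.replace('module.', ''): v for k, v in state_dict.items()}
# strict=False tolerates the remaining missing/unexpected keys
model.load_state_dict(cleaned, strict=False)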

Assign puppet Hash to hieradata yaml

I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want. In the end I don't want to access a fact.
---
filesystems:
  - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems',Array,'deep',[])
2 $filesystems = $filesystemsarray.map | $fs | {
3 notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
It seems to be a Hash, as expected, but the notice in Line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What am I doing wrong?

Is there a Counter object in Julia?

In Python, it's possible to count items in a list using the high-performance collections.Counter object:
>>> from collections import Counter
>>> l = [1,1,2,4,1,5,12,1,51,2,5]
>>> Counter(l)
Counter({1: 4, 2: 2, 5: 2, 4: 1, 12: 1, 51: 1})
I've searched http://docs.julialang.org/en/latest/search.html?q=counter but I can't seem to find a Counter object.
I've also looked at http://docs.julialang.org/en/latest/stdlib/collections.html but I couldn't find it there either.
I've tried the histogram function in Julia and it returned a wave of deprecation messages:
> l = [1,1,2,4,1,5,12,1,51,2,5]
> hist(l)
[out]:
WARNING: sturges(n) is deprecated, use StatsBase.sturges(n) instead.
in depwarn(::String, ::Symbol) at ./deprecated.jl:64
in sturges(::Int64) at ./deprecated.jl:623
in hist(::Array{Int64,1}) at ./deprecated.jl:646
in include_string(::String, ::String) at ./loading.jl:441
in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/liling.tan/.julia/v0.5/IJulia/src/execute_request.jl:175
in eventloop(::ZMQ.Socket) at /Users/liling.tan/.julia/v0.5/IJulia/src/eventloop.jl:8
in (::IJulia.##13#19)() at ./task.jl:360
while loading In[65], in expression starting on line 1
WARNING: histrange(...) is deprecated, use StatsBase.histrange(...) instead
in depwarn(::String, ::Symbol) at ./deprecated.jl:64
in histrange(::Array{Int64,1}, ::Int64) at ./deprecated.jl:582
in hist(::Array{Int64,1}, ::Int64) at ./deprecated.jl:645
in hist(::Array{Int64,1}) at ./deprecated.jl:646
in include_string(::String, ::String) at ./loading.jl:441
in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/liling.tan/.julia/v0.5/IJulia/src/execute_request.jl:175
in eventloop(::ZMQ.Socket) at /Users/liling.tan/.julia/v0.5/IJulia/src/eventloop.jl:8
in (::IJulia.##13#19)() at ./task.jl:360
while loading In[65], in expression starting on line 1
WARNING: hist(...) and hist!(...) are deprecated. Use fit(Histogram,...) in StatsBase.jl instead.
in depwarn(::String, ::Symbol) at ./deprecated.jl:64
in #hist!#994(::Bool, ::Function, ::Array{Int64,1}, ::Array{Int64,1}, ::FloatRange{Float64}) at ./deprecated.jl:629
in hist(::Array{Int64,1}, ::FloatRange{Float64}) at ./deprecated.jl:644
in hist(::Array{Int64,1}, ::Int64) at ./deprecated.jl:645
in hist(::Array{Int64,1}) at ./deprecated.jl:646
in include_string(::String, ::String) at ./loading.jl:441
in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/liling.tan/.julia/v0.5/IJulia/src/execute_request.jl:175
in eventloop(::ZMQ.Socket) at /Users/liling.tan/.julia/v0.5/IJulia/src/eventloop.jl:8
in (::IJulia.##13#19)() at ./task.jl:360
while loading In[65], in expression starting on line 1
Is there a Counter object in Julia?
If you are using Julia 0.5+, the histogram functions have been deprecated and you are supposed to use the StatsBase.jl package instead. It is also described in the warning:
WARNING: hist(...) and hist!(...) are deprecated. Use fit(Histogram,...) in StatsBase.jl instead.
But if you are using StatsBase.jl, countmap is probably closer to what you need:
julia> import StatsBase: countmap
julia> countmap([1,1,2,4,1,5,12,1,51,2,5])
Dict{Int64,Int64} with 6 entries:
4 => 1
2 => 2
5 => 2
51 => 1
12 => 1
1 => 4
The DataStructures.jl package also has Accumulators / Counters, with a more general set of methods for using and combining counters.
Once you've added the package
using Pkg
Pkg.add("DataStructures")
you can count the elements of a sequence by constructing a counter
# generate some data to count
using Random
seq = [ Random.randstring('a':'c', 2) for _ in 1:100 ]
# count the elements in seq
using DataStructures
counts = counter(seq)

Error in eval(expr, envir, enclos) while using Predict function

When I try to run predict() on the dataset, it keeps giving me this error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Here is part of the dataset:
LoanRange Loan.Type N WAFICO WALTV WAOrigRev WAPTValue
1 0-99999 Conventional 109 722.5216 63.55385 6068.239 0.6031879
2 0-99999 FHA 30 696.6348 80.00100 7129.650 0.5623650
3 0-99999 VA 13 698.6986 74.40525 7838.894 0.4892977
4 100000-149999 Conventional 860 731.2333 68.25817 6438.330 0.5962638
5 100000-149999 FHA 285 673.2256 82.42225 8145.068 0.5211495
6 100000-149999 VA 125 704.1686 87.71306 8911.461 0.5020074
7 150000-199999 Conventional 1291 738.7164 70.08944 8125.979 0.6045117
8 150000-199999 FHA 403 672.0891 84.65318 10112.192 0.5199632
9 150000-199999 VA 195 694.1885 90.77495 10909.393 0.5250807
10 200000-249999 Conventional 1162 740.8614 70.65027 8832.563 0.6111419
11 200000-249999 FHA 348 667.6291 85.13457 11013.856 0.5374226
12 200000-249999 VA 221 702.9796 91.76759 11753.642 0.5078298
13 250000-299999 Conventional 948 742.0405 72.22742 9903.160 0.6106858
Following is the code used for predicting the count data N after determining the overdispersion:
model2=glm(N~Loan.Type+WAFICO+WALTV+WAOrigRev+WAPTValue, family=quasipoisson(link = "log"), data = DF)
summary(model2)
This is what I have done to create a sequence of counts and use the predict function:
countaxis <- seq(0, 1500, 150)
Y <- predict(model2, list(N=countaxis, type = "response"))
At this step, I get the error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Can someone please point out where the problem is?
Think about what exactly you are trying to predict. You are providing the predict function with values of N (via countaxis), but the way you set up your model, N is your response variable and the remaining variables are the predictors. That's why R is asking for LoanRange: it actually needs values for LoanRange, Loan.Type, ..., WAPTValue in order to predict N. So you need to feed predict inputs that let the model try to predict N.
For example, you could do something like this:
# create some fake data to predict N
newdata1 = data.frame(rbind(c("0-99999", "Conventional", 722.5216, 63.55385, 6068.239, 0.6031879),
c("150000-199999", "VA", 12.5216, 3.55385, 60.239, 0.0031879)))
colnames(newdata1) = c("LoanRange" ,"Loan.Type", "WAFICO" ,"WALTV" , "WAOrigRev" ,"WAPTValue")
# ensure that numeric variables are indeed numeric and not factors
newdata1$WAFICO = as.numeric(as.character(newdata1$WAFICO))
newdata1$WALTV = as.numeric(as.character(newdata1$WALTV))
newdata1$WAPTValue = as.numeric(as.character(newdata1$WAPTValue))
newdata1$WAOrigRev = as.numeric(as.character(newdata1$WAOrigRev))
# make predictions - this will output values of N
predict(model2, newdata = newdata1, type = "response")