Assign a Puppet hash to Hiera data (YAML/JSON)

I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want. (In the end I don't want to access a fact; it just makes the example easy.)
---
filesystems:
  - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems', Array, 'deep', [])
2 $filesystems = $filesystemsarray.map |$fs| {
3   notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
This seems to be a Hash, as expected, but the notice on line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What am I doing wrong?
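A likely explanation (hedged; no accepted answer is quoted here): Hiera's %{...} interpolation always produces a string, so the partitions hash arrives in Puppet as a String that merely looks like a hash. Indexing a String with '/dev/sda1' is a substring operation, and substring indices must be Integers, which is exactly the error on line 6. In Hiera 5 the only type-preserving interpolation is the alias function, and it works only when it is the entire value and only for other Hiera keys, not facts:

---
# Sketch, assuming Hiera 5. 'partitions_data' is a hypothetical Hiera key;
# alias() preserves the data type, but it must be the whole value and it
# cannot reference facts -- "%{::partitions}" will always stringify.
filesystems:
  - partitions: "%{alias('partitions_data')}"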

Julia CSV.read not recognizing "select" keyword

I am reading in a space-delimited file using the CSV library in Julia.
edgeList = CSV.read(
joinpath(dataDirectory, "out.file"),
types=[Int, Int],
header=["node1", "node2"],
skipto=3,
select=[1,2]
)
This yields the following error:
MethodError: no method matching CSV.File(::String; types=DataType[Int64, Int64], header=["node1", "node2"], skipto=3, select=[1, 2])
Closest candidates are:
CSV.File(::Any; header, normalizenames, datarow, skipto, footerskip, limit, transpose, comment, use_mmap, ignoreemptylines, missingstrings, missingstring, delim, ignorerepeated, quotechar, openquotechar, closequotechar, escapechar, dateformat, decimal, truestrings, falsestrings, type, types, typemap, categorical, pool, strict, silencewarnings, threaded, debug, parsingdebug, allowmissing) at /Users/n.jordanjameson/.julia/packages/CSV/4GOjG/src/CSV.jl:221 got unsupported keyword argument "select"
I am using Julia v1.6.2. Here is the output of versioninfo():
Julia Version 1.6.2
Commit 1b93d53fc4 (2021-07-14 15:36 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin18.7.0)
CPU: Intel(R) Core(TM) i7-5650U CPU @ 2.20GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, broadwell)
The version of CSV is 0.10.4. The documentation for this version of CSV is here: https://csv.juliadata.org/stable/reading.html#CSV.read, and it has a select / drop entry.
The file I am trying to read is from here: http://konect.cc/networks/moreno_crime/ (the file I'm using is called "out.moreno_crime_crime"). The first few lines are:
% bip unweighted
% 1476 829 551
1 1
1 2
1 3
1 4
2 5
2 6
2 7
2 8
2 9
2 10
I get a different error than you; can you restart Julia and make sure?
julia> CSV.read("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
header=["node1", "node2"],
skipto=3,
select=[1,2]
)
ERROR: ArgumentError: provide a valid sink argument, like `using DataFrames; CSV.read(source, DataFrame)`
Stacktrace:
[1] read(source::String, sink::Nothing; copycols::Bool, kwargs::Base.Pairs{Symbol, Any, NTuple{4, Symbol}, NamedTuple{(:types, :header, :skipto, :select), Tuple{Vector{DataType}, Vector{String}, Int64, Vector{Int64}}}})
@ CSV ~/.julia/packages/CSV/jFiCn/src/CSV.jl:89
[2] top-level scope
@ REPL[8]:1
This error is telling you that you can't CSV.read without a target sink; you might want to use CSV.File:
julia> CSV.File("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
header=["node1", "node2"],
skipto=3,
select=[1,2]
)
┌ Warning: thread = 1 warning: parsed expected 2 columns, but didn't reach end of line around data row: 1. Parsing extra columns and widening final columnset
└ @ CSV ~/.julia/packages/CSV/jFiCn/src/file.jl:579
1476-element CSV.File:
CSV.Row: (node1 = 1, node2 = 1, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 2, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 3, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 4, Column3 = missing)
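For completeness, here is a sketch of the sink-based CSV.read call that the error message suggests (it assumes the DataFrames package is installed; the delim and ignorerepeated keywords are assumptions based on the file being space-delimited, and should also avoid the extra-column warning above):

using CSV, DataFrames

# CSV.read needs a sink (e.g. DataFrame) as its second positional argument
edgeList = CSV.read(
    joinpath(dataDirectory, "out.file"), DataFrame;
    types = [Int, Int],
    header = ["node1", "node2"],
    skipto = 3,
    select = [1, 2],
    delim = ' ',            # space-delimited input
    ignorerepeated = true,  # collapse repeated delimiters
)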

Sphinx 3 Search engine: Having problems reading JSON from CSV source

When I try to read JSON content from a field I get:
WARNING: document 1, attribute assorted: JSON error: syntax error, unexpected TOK_IDENT, expecting $end near 'a:foo'
Here are the details:
This is the (super simplified) CSV file I'm trying to read:
1,hello world, document number one,a:foo
22,hello again, document number two,foo:bar
23,hello now, This is some stuff,foo:{bar:baz}
24,hello cow, more test stuff and things,{foo:bar}
55,hello suess, box and sox and goats and moats,[a]
56,hello raven, nevermore said the thing,foo:bar
When I run the indexer this is the result I get:
../bin/indexer --config /home/ec2-user/sphinx/etc/sphinx.conf --all --rotate
Sphinx 3.3.1 (commit b72d67b)
Copyright (c) 2001-2020, Andrew Aksyonoff
Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/home/ec2-user/sphinx/etc/sphinx.conf'...
indexing index 'csvtest'...
WARNING: document 1, attribute assorted: JSON error: syntax error, unexpected TOK_IDENT, expecting $end near 'a:foo'
WARNING: document 22, attribute assorted: JSON error: syntax error, unexpected TOK_IDENT, expecting $end near 'foo:bar'
WARNING: document 23, attribute assorted: JSON error: syntax error, unexpected TOK_IDENT, expecting $end near 'foo:{bar:baz}'
WARNING: document 24, attribute assorted: JSON error: syntax error, unexpected '}', expecting '[' near '}'
WARNING: document 55, attribute assorted: JSON error: syntax error, unexpected ']', expecting '[' near ']'
WARNING: document 56, attribute assorted: JSON error: syntax error, unexpected TOK_IDENT, expecting $end near 'foo:bar'
collected 6 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 6 docs, 0.1 Kb
total 0.0 sec, 17.7 Kb/sec, 1709 docs/sec
rotating indices: successfully sent SIGHUP to searchd (pid=14393).
This is the entire config file:
source csvsrc
{
type = csvpipe
csvpipe_delimiter = ,
csvpipe_command = cat /home/ec2-user/sphinx/etc/example.csv
csvpipe_field_string = t
csvpipe_attr_string = c
csvpipe_attr_json = assorted
}
index csvtest
{
source = csvsrc
path = /var/data/test7
morphology = stem_en
rt_field = t
rt_field = c
rt_field = assorted
}
indexer
{
mem_limit = 128M
}
searchd
{
listen = 9312
listen = 9306:mysql41
log = /var/log/searchd.log
query_log = /var/log/query.log
pid_file = /var/log/searchd.pid
binlog_path = /var/data
}
And if I do log in and query, it's pretty obvious that the JSON was not, in fact, indexed (as expected, given the warnings):
select * from csvtest;
+------+-------------+----------------------------------+----------+
| id | t | c | assorted |
+------+-------------+----------------------------------+----------+
| 1 | hello world | document number one | NULL |
| 22 | hello again | document number two | NULL |
| 23 | hello now | This is some stuff | NULL |
| 24 | hello cow | more test stuff and things | NULL |
| 55 | hello suess | box and sox and goats and moats | NULL |
| 56 | hello raven | nevermore said the thing | NULL |
+------+-------------+----------------------------------+----------+
6 rows in set (0.00 sec)
I have tried a few things, but I'm just groping in the dark.
Some things I have tried:
1) Alternate formats of JSON. I have tried using {foo:bar} and {[foo:bar]} and [{foo,bar}], based on some experience with other JSON inputs where they want either an array or a dict at the top level. These actually generate slightly different errors:
WARNING: document 24, attribute assorted: JSON error: syntax error, unexpected '}', expecting '[' near '}'
WARNING: document 55, attribute assorted: JSON error: syntax error, unexpected ']', expecting '[' near ']'
2) I have tried adding a trailing comma, thinking that might be the $end token that the parser is looking for. This generates an actual error, ERROR: index 'csvtest': source 'csvsrc': not all columns found (found=5, total=4, line=1), which prevents index generation. That makes sense to me.
2a) I tried adding a whole other column after the JSON so I could have the ending comma but not get an error that would prevent the index from generating. This did generate the index, but did not provide the $end token that the JSON parser was looking for.
I'm totally stumped.
Well, as such, a:foo isn't a valid JSON value AFAIK. Looks like it is meant to be an object? So it would need {...} surrounding it.
But even {foo:bar} is not valid either. At the very least the value should be quoted: {foo:"bar"}. But really the keys need quoting too: {"foo":"bar"}.
JavaScript objects technically allow unquoted key names, but JSON requires the quoting.
... but also remember it's CSV. Quotes are typically used for quoting (e.g. when columns contain commas), so the quotes need double encoding! It ends up a bit messy...
24,hello cow, more test stuff and things,"{""foo"":""bar""}"
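Applying the same double-quote escaping to the whole file, a corrected example.csv might look like this (a sketch; the JSON payloads are guesses at what the original unquoted values were meant to be):

1,hello world, document number one,"{""a"":""foo""}"
22,hello again, document number two,"{""foo"":""bar""}"
23,hello now, This is some stuff,"{""foo"":{""bar"":""baz""}}"
24,hello cow, more test stuff and things,"{""foo"":""bar""}"
55,hello suess, box and sox and goats and moats,"[""a""]"
56,hello raven, nevermore said the thing,"{""foo"":""bar""}"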

How to capture the values from Get Response Body - Robot Framework

Output from Response Body
{"data":[{"id”:122,"name”:”Test 1“,”description”:”TEST 1 Test 2 …..}]},{"id”:123,"name”:”DYNAMO”……}]},{"id”:126,”name”:”T DYNAMO”……
*** Keywords ***
Capture The Data Ids
    @{ids}=    Create List    122    123    126    167    190
    ${header}=    Create Dictionary    Authorization...
    ${resp}=    Get Response    httpsbin    /data
    ${t_ids}=    Get Json Value    ${resp.content}    /data/0/id
Problem
I have created a list of above ids in the test case and I need to compare the created data against the id returned in the response body.
t_ids returns 122, and when 0 is replaced by 1, it returns 123.
Rather than capturing individual id, is it possible to put them in for loop?
:FOR ${i} IN ${ids}
\ ${the_id= Get Json Value ${resp.content} /data/${i}/id ?
I tried this and failed.
What is the possible solution to compare the ids from the response data against the created list?
Thank you.
It is possible to do what you want, but it is always good to know what kind of data structure your variable contains. In the example below, loading a JSON file stands in for the answer received in ${resp.content}. To my knowledge this is a string, which is also what Get File returns.
The example is split into the json file and the robot file.
so_json.json
{
    "data": [
        {
            "id": 122,
            "name": "Test 1",
            "description": "TEST 1 Test 2"
        },
        {
            "id": 123,
            "name": "DYNAMO"
        },
        {
            "id": 126,
            "name": "T DYNAMO"
        }
    ]
}
so_robot.robot
*** Settings ***
Library    HttpLibrary.HTTP
Library    OperatingSystem
Library    Collections

*** Test Cases ***
TC
    ${json_string}    Get File    so_json.json
    ${json_object}    Parse Json    ${json_string}
    :FOR    ${item}    IN    @{json_object['data']}
    \    Log To Console    ${item['id']}
Which in turn gives the following result:
==============================================================================
Robot - Example
==============================================================================
Robot - Example.SO JSON
==============================================================================
TC 122
123
126
| PASS |
------------------------------------------------------------------------------
Robot - Example.SO JSON | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Robot - Example | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
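To then compare the collected ids against a pre-built list (the original goal of the question), here is a sketch using the Collections library; Append To List and Lists Should Be Equal are standard Collections keywords, and the expected ids match the three entries in so_json.json:

*** Test Cases ***
TC Compare Ids
    @{expected}=    Create List    ${122}    ${123}    ${126}
    ${json_string}    Get File    so_json.json
    ${json_object}    Parse Json    ${json_string}
    @{actual}=    Create List
    :FOR    ${item}    IN    @{json_object['data']}
    \    Append To List    ${actual}    ${item['id']}
    Lists Should Be Equal    ${expected}    ${actual}

The ${122} syntax makes Robot Framework treat the values as integers, so they compare equal to the integer ids parsed from the JSON.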

Is there a Counter object in Julia?

In Python, it's possible to count the items in a list using the high-performance collections.Counter object:
>>> from collections import Counter
>>> l = [1,1,2,4,1,5,12,1,51,2,5]
>>> Counter(l)
Counter({1: 4, 2: 2, 5: 2, 4: 1, 12: 1, 51: 1})
I've searched in http://docs.julialang.org/en/latest/search.html?q=counter but I can't seem to find a Counter object.
I've also looked at http://docs.julialang.org/en/latest/stdlib/collections.html but I couldn't find it either.
I've tried the histogram function in Julia and it returned a wave of deprecation messages:
> l = [1,1,2,4,1,5,12,1,51,2,5]
> hist(l)
[out]:
WARNING: sturges(n) is deprecated, use StatsBase.sturges(n) instead.
in depwarn(::String, ::Symbol) at ./deprecated.jl:64
in sturges(::Int64) at ./deprecated.jl:623
in hist(::Array{Int64,1}) at ./deprecated.jl:646
in include_string(::String, ::String) at ./loading.jl:441
in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/liling.tan/.julia/v0.5/IJulia/src/execute_request.jl:175
in eventloop(::ZMQ.Socket) at /Users/liling.tan/.julia/v0.5/IJulia/src/eventloop.jl:8
in (::IJulia.##13#19)() at ./task.jl:360
while loading In[65], in expression starting on line 1
WARNING: histrange(...) is deprecated, use StatsBase.histrange(...) instead
in depwarn(::String, ::Symbol) at ./deprecated.jl:64
in histrange(::Array{Int64,1}, ::Int64) at ./deprecated.jl:582
in hist(::Array{Int64,1}, ::Int64) at ./deprecated.jl:645
in hist(::Array{Int64,1}) at ./deprecated.jl:646
in include_string(::String, ::String) at ./loading.jl:441
in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/liling.tan/.julia/v0.5/IJulia/src/execute_request.jl:175
in eventloop(::ZMQ.Socket) at /Users/liling.tan/.julia/v0.5/IJulia/src/eventloop.jl:8
in (::IJulia.##13#19)() at ./task.jl:360
while loading In[65], in expression starting on line 1
WARNING: hist(...) and hist!(...) are deprecated. Use fit(Histogram,...) in StatsBase.jl instead.
in depwarn(::String, ::Symbol) at ./deprecated.jl:64
in #hist!#994(::Bool, ::Function, ::Array{Int64,1}, ::Array{Int64,1}, ::FloatRange{Float64}) at ./deprecated.jl:629
in hist(::Array{Int64,1}, ::FloatRange{Float64}) at ./deprecated.jl:644
in hist(::Array{Int64,1}, ::Int64) at ./deprecated.jl:645
in hist(::Array{Int64,1}) at ./deprecated.jl:646
in include_string(::String, ::String) at ./loading.jl:441
in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/liling.tan/.julia/v0.5/IJulia/src/execute_request.jl:175
in eventloop(::ZMQ.Socket) at /Users/liling.tan/.julia/v0.5/IJulia/src/eventloop.jl:8
in (::IJulia.##13#19)() at ./task.jl:360
while loading In[65], in expression starting on line 1
Is there a Counter object in Julia?
If you are using Julia 0.5+, the histogram functions have been deprecated and you are supposed to use the StatsBase.jl package instead. It is also described in the warning:
WARNING: hist(...) and hist!(...) are deprecated. Use fit(Histogram,...) in StatsBase.jl instead.
But if you are using StatsBase.jl, countmap is probably the closest to what you need:
julia> import StatsBase: countmap
julia> countmap([1,1,2,4,1,5,12,1,51,2,5])
Dict{Int64,Int64} with 6 entries:
4 => 1
2 => 2
5 => 2
51 => 1
12 => 1
1 => 4
The DataStructures.jl package also has Accumulators / Counters, with a more general set of methods for using and combining counters.
Once you've added the package
using Pkg
Pkg.add("DataStructures")
you can count the elements of a sequence by constructing a counter:
# generate some data to count
using Random
seq = [ Random.randstring('a':'c', 2) for _ in 1:100 ]
# count the elements in seq
using DataStructures
counts = counter(seq)
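Applied to the list from the question, for instance (a small sketch; inc! is part of the DataStructures Accumulator API):

using DataStructures

c = counter([1, 1, 2, 4, 1, 5, 12, 1, 51, 2, 5])
c[1]          # 4; missing keys return 0 instead of throwing
inc!(c, 42)   # counters can also be updated incrementally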

Full path of the current Tcl script

Is there a way to get the full path of the currently executing Tcl script?
In PHP it would be: __FILE__
Depending on what you mean by "currently executing Tcl script", you might actually want info script, or possibly info nameofexecutable, or something more esoteric.
The correct way to retrieve the name of the file that the current statement resides in is this (a true equivalent to PHP/C++'s __FILE__):
set thisFile [ dict get [ info frame 0 ] file ]
Pseudocode (how it works):
set thisFile <value> : sets variable thisFile to value
dict get <dict> file : returns the file value from a dict
info frame <#> : returns a dict with information about the frame at the specified stack level (#), and 0 will return the most recent stack frame
NOTICE: See end of post for more information on info frame.
In this case, the file value returned from info frame is already normalized, so file normalize <path> is not needed.
The difference between info script and info frame matters mainly for Tcl packages. If info script is used in a Tcl file that was sourced during a package require (package require <name>), then info script returns the path of the currently executing Tcl script, not the name of the Tcl file that contains the info script command; the info frame example given here, however, correctly returns the file name of the file containing the command.
If you want the name of the script currently being evaluated, then:
set sourcedScript [ info script ]
If you want the name of the script (or interpreter) that was initially invoked, then:
set scriptAtInvocation $::argv0
If you want the name of the executable that was initially invoked, then:
set exeAtInvocation [ info nameofexecutable ]
UPDATE - Details about: info frame
Here is what a stack trace looks like within Tcl. The frame_index column shows what info frame $frame_index returns for values from 0 through [ info frame ].
Calling info frame [ info frame ] is functionally equivalent to info frame 0, but using 0 is of course faster.
There are really only 1 through [ info frame ] stack frames, and 0 behaves like [ info frame ]. In this example you can see that 0 and 5 (which is [ info frame ]) are the same:
frame_index: 0 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
frame_index: 1 | type = source | line = 6 | level = 4 | file = /tcltest/main.tcl | cmd = a
frame_index: 2 | type = source | proc = ::a | line = 2 | level = 3 | file = /tcltest/a.tcl | cmd = b
frame_index: 3 | type = source | proc = ::b | line = 2 | level = 2 | file = /tcltest/b.tcl | cmd = c
frame_index: 4 | type = source | proc = ::c | line = 5 | level = 1 | file = /tcltest/c.tcl | cmd = stacktrace
frame_index: 5 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
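A minimal sketch of a stacktrace proc that would produce a dump like the one above (a reconstruction; the original /tcltest/stacktrace.tcl is not shown in the post):

proc stacktrace {} {
    # 0 is the innermost frame; absolute indices run from 1 to [info frame]
    for {set frame_counter 0} {$frame_counter <= [info frame]} {incr frame_counter} {
        puts "frame_index: $frame_counter | [info frame $frame_counter]"
    }
}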
See:
https://github.com/Xilinx/XilinxTclStore/blob/master/tclapp/xilinx/profiler/app.tcl#L273
You want $argv0
You can use [file normalize] to get the fully normalized name, too.
file normalize $argv0
file normalize [info nameofexecutable]
Seconds after I've posted my question ... lindex $argv 0 is a good starting point ;-)