awk to translate config file to JSON

I have a config file like this one:
[sectionOne]
key1_1=value1_1
key1_n=value1_n
#this is a comment
[sectionTwo]
key2_1=value2_1
key2_n=value2_n
;this is a comment also
[SectionThree]
key3_1=value3_1
key3_n=value3_n
[SectionFor]
...
I need to translate this into JSON using minimal shell tools (no perl, python, or php; just sed and awk are available).
The desired output is :
[
{"sectionOne": { "key1_1": "value1_1","key1_n": "value1_n"} },
{"sectionTwo": { "key2_1": "value2_1","key2_n": "value2_n"} },
{"sectionThree": { "key3_1": "value3_1","key3_n": "value3_n"}}
....
]
I tried several approaches over several hours with no success.
Thank you in advance

There are some inconsistencies between your sample input and desired output, so it's hard to be sure, but this should be close and easy to tweak if it's not 100% what you want:
$ cat file
[sectionOne]
key1_1=value1_1
key1_n=value1_n
#this is a comment
[sectionTwo]
key2_1=value2_1
key2_n=value2_n
;this is a comment also
[SectionThree]
key3_1=value3_1
key3_n=value3_n
$
$ cat tst.awk
BEGIN {
    FS = "="
    print "["
}
/^([#;]|[[:space:]]*$)/ {    # skip comment and blank lines
    next
}
gsub(/[][]/,"") {            # a section header: brackets found and stripped
    printf "%s{\"%s\": { ", rs, $0
    rs = "} },\n"            # record separator, printed before the next section
    fs = ""                  # reset the field separator for the new section
    next
}
{                            # a key=value line
    printf "%s\"%s\": \"%s\"", fs, $1, $2
    fs = ","
}
END {
    print rs "]"
}
$
$ awk -f tst.awk file
[
{"sectionOne": { "key1_1": "value1_1","key1_n": "value1_n"} },
{"sectionTwo": { "key2_1": "value2_1","key2_n": "value2_n"} },
{"SectionThree": { "key3_1": "value3_1","key3_n": "value3_n"} },
]
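One subtlety in tst.awk is worth spelling out: gsub(/[][]/,"") sits in pattern position, and awk runs a rule whenever its pattern expression evaluates to non-zero. Since gsub returns the number of substitutions it made, the rule fires only on lines containing [ or ], and stripping the brackets is a side effect of the test itself. A minimal illustration (the sample input is my own):

```shell
printf '[section]\nkey=value\n' |
awk 'gsub(/[][]/, "") { print "header line, now:", $0 }'
```

Only the [section] line triggers the rule, and it is printed with the brackets already removed.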

awk 'BEGIN{ print "[" }
/^[#;]/{ next } # Ignore comments
/^\[/{ gsub( "[][]", "" ); printf "%s{\"%s\": { ", s ? "}},\n" : "", $0; n=0; s=1 }
/=/ { gsub( "=", "\":\"" ); printf "%c\"%s\" ", n ? "," : "", $0; n=1 }
END{ print "}}\n]" }
'

Here's a solution in bash using awk:
#!/bin/bash
awk -F"=" 'BEGIN{in_section=0; first_field=0; printf "["}
{
last=length($1);
if ( (substr($1,1,1) == "[") && (substr($1, last, 1) == "]")) {
if (in_section==1) {
printf "} },";
}
section=substr($1, 2, last-2);
printf "\n{\"%s\":", section;
printf " {";
first_field=1;
in_section=1;
} else if ( substr($1, 1, 1) == "#" || substr($1, 1, 1) == ";"){ # comment line: ignore
} else if ( ($1 != "") && ($2 != "") ) {
if (first_field==0) {
printf ", ";
}
printf "\"%s\": \"%s\"", $1, $2;
first_field=0;
}
}
END{printf "} }\n]\n"}'

implode Warning: array to string conversion in PHP-Nuke Titanium function deepPurifier

This works without any warnings in every version of PHP except 8. I think they changed something with implode, and I have tried all the examples to no avail.
Perhaps I could get this done some other way. I need some PHP 8 eyes, as I'm very new to PHP 8 and up.
The warning in my function is on the following line:
Warning: array to string conversion
$test = implode(', ', $data);
It is very hard to make sense of certain things when you use so many different languages with similar syntax. I fear that this is just a minor brain fart on my part.
This code appears to be working although I have the warning; I am wondering if this is just a bug in PHP 8 and 8.1.
function deepPurifier($data)
{
global $html_auth, $admin;
static $config, $purifier;
# Error check
if(empty($data) || !isset($data))
return $data;
if(!is_array($data))
return stripslashes((string) $data);
// THIS IS WHERE MY WARNING IS
// warning: array to string conversion
$test = implode(', ', $data);
if(!preg_match('[<|>]', $test))
{
return $data;
}
if(!isset($config) || empty($config))
{
set_include_path(NUKE_BASE_DIR. get_include_path() );
require_once(NUKE_VENDOR_DIR.'ezyang/htmlpurifier/library/HTMLPurifier/Bootstrap.php');
require_once(NUKE_VENDOR_DIR.'ezyang/htmlpurifier/library/HTMLPurifier.autoload.php');
$config = HTMLPurifier_Config::createDefault();
$config->set('Core.Encoding', 'UTF-8');
$config->set('HTML.Doctype', 'HTML 4.01 Transitional');
if(!is_god($admin) || (is_god($admin) && !$html_auth))
{
$config->set('HTML.Trusted', true);
$config->set('HTML.SafeObject', true);
$config->set('HTML.SafeEmbed', true);
$config->set('HTML.AllowedAttributes','img#height,img#width,img#src,iframe#src,iframe#allowfullscreen');
$config->set('HTML.AllowedAttributes', 'src, height, width, alt');
$config->set('HTML.AllowedElements', ['img', 'iframe', 'div', 'script', 'object', 'p', 'span', 'pre', 'b', 'i', 'u', 'strong', 'em', 'sup', 'a', 'img', 'table', 'tr', 'td', 'tbody', 'thead', 'param']);
$config->set('Output.FlashCompat', true);
$config->set('Attr.EnableID', true);
$config->set('Filter.Custom', [new HTMLPurifier_Filter_YouTube()]);
}
$def = $config->getHTMLDefinition(true);
$def->addAttribute('iframe', 'allowfullscreen', 'Bool');
$purifier = new HTMLPurifier($config);
}
# Loop through the data
foreach ($data as $k => $v) {
# If its an array
if (is_array($data[$k])) {
# Go though this function again
$data[$k] = array_map('deepStrip', $data[$k]);
} elseif (is_numeric($v) || empty($v)) {
$data[$k] = $v;
} else {
if (isset($_GET['op']) && $_GET['op'] == 'Configure' && isset($_GET['sub']) && $_GET['sub'] == '11') {
$data[$k] = $v;
continue;
} elseif ($k == 'xsitename' || $k == 'xslogan') {
$data[$k] = $v;
continue;
} elseif (isset($_GET['name'])) {
# If forum post let it pass to the forum html security
if ($_GET['name'] == 'Forums' && (isset($_GET['file']) && ($_GET['file'] == 'posting')) && ($k == 'message' || $k == 'subject')) {
$data[$k] = $v;
continue;
}
# If PM let it pass to the forum html security
if ($_GET['name'] == 'Private_Messages' && ($k == 'message' || $k == 'subject')) {
$data[$k] = $v;
continue;
}
# If SIG let it pass to the forum html security
if ($_GET['name'] == 'Profile' && (isset($_GET['mode']) && ($_GET['mode'] == 'signature')) && $k == 'signature') {
$data[$k] = $v;
continue;
}
}
# If its a strip lets purify it
if (!is_god($admin) || (is_god($admin) && !$html_auth)) {
$data[$k] = $purifier->purify($v);
}
$data[$k] = str_replace('\n', "\n", (string) $data[$k]);
# Get the registered globals also
global ${$k};
if (isset(${$k}) && !empty(${$k})) {
${$k} = $data[$k];
}
}
}
return $data;
}
var_dump($test);
string(20) "Forums,viewtopic,284"
The warning appears in PHP 8 when the array being imploded is nested. From the docs, the $array argument should be an array of strings, not an array of arrays.
For example, the following produces no warnings in both PHP 7.4 and PHP 8.1:
$data = ["a", "b"];
print(implode(" ", $data));
Whereas the following gives the warning Array to string conversion (note the arrays within the first array):
$data = ["a", "b" => ["c"], "d" => ["e"]];
print(implode(" ", $data));
You can verify the behaviour with docker and different PHP versions:
docker run --rm php:7.4 -r 'print(implode(" ", ["a", "b" => ["c"], "d" => ["e"]]));'
docker run --rm php:8.1 -r 'print(implode(" ", ["a", "b" => ["c"], "d" => ["e"]]));'
Both produce the output:
a Array Array
But PHP 8 will now raise warnings:
Warning: Array to string conversion in Command line code on line 1
To get rid of the warning you must change the following:
From:
$test = implode(',', $data);
To:
$test = json_encode($data);
Also fix the improper regex syntax:
From:
if(!preg_match('[<|>]', $test))
{
return $data;
}
To:
if(!preg_match('/[<|>]/', $test))
{
return $data;
}

How to print JSON objects in AWK

I was looking for built-in functions in awk to easily generate JSON objects. I came across several answers and decided to create my own.
I'd like to generate JSON from multidimensional arrays in which I store table-style data, using a separate, dynamic definition of the JSON schema to be generated from that data.
Desired output:
{
"Name": JanA
"Surname": NowakA
"ID": 1234A
"Role": PrezesA
}
{
"Name": JanD
"Surname": NowakD
"ID": 12341D
"Role": PrezesD
}
{
"Name": JanC
"Surname": NowakC
"ID": 12342C
"Role": PrezesC
}
Input file:
pierwsza linia
druga linia
trzecia linia
dane wspólników
imie JanA
nazwisko NowakA
pesel 11111111111A
funkcja PrezesA
imie Ja"nD
nazwisko NowakD
pesel 11111111111
funkcja PrezesD
imie JanC
nazwisko NowakC
pesel 12342C
funkcja PrezesC
czwarta linia
reprezentanci
imie Tomek
Based on the input file I created a multidimensional array:
JanA NowaA 1234A PrezesA
JanD NowakD 12341D PrezesD
JanC NowakC 12342C PrezesC
I'll take a stab at a gawk solution. The indenting isn't perfect and the results aren't ordered (see "Sorting" note below), but it's at least able to walk a true multidimensional array recursively and should produce valid, parsable JSON from any array. Bonus: the data array is the schema. Array keys become JSON keys. There's no need to create a separate schema array in addition to the data array.
Just be sure to use the true multidimensional array[d1][d2][d3]... convention of constructing your data array, rather than the concatenated index array[d1,d2,d3...] convention.
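For contrast, here is what the concatenated-index convention looks like: the dimensions are flattened into a single key joined by SUBSEP, so a recursive walker like serialize() has nothing to descend into. A minimal sketch (portable to any POSIX awk; the sample keys are my own):

```shell
awk 'BEGIN {
    a["row1", "name"] = "JanA"        # concatenated index: one flat key
    for (k in a) {
        n = split(k, parts, SUBSEP)   # dimensions are recoverable only by splitting on SUBSEP
        print n, parts[1], parts[2]
    }
}'
```

With the true multidimensional convention, a["row1"]["name"], gawk's isarray() can instead detect the nesting directly.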
Update:
I've got an updated JSON gawk script posted as a GitHub Gist. Although the script below is tested as working with OP's data, I might've made improvements since this post was last edited. Please see the Gist for the most thoroughly tested, bug-squashed version.
#!/usr/bin/gawk -f
BEGIN { IGNORECASE = 1 }
$1 ~ "imie" { record[++idx]["name"] = $2 }
$1 ~ "nazwisko" { record[idx]["surname"] = $2 }
$1 ~ "pesel" { record[idx]["ID"] = $2 }
$1 ~ "funkcja" { record[idx]["role"] = $2 }
END { print serialize(record, "\t") }
# ==== FUNCTIONS ====
function join(arr, sep, _p, i) {
# syntax: join(array, string separator)
# returns a string
for (i in arr) {
_p["result"] = _p["result"] ~ "[[:print:]]" ? _p["result"] sep arr[i] : arr[i]
}
return _p["result"]
}
function quote(str) {
gsub(/\\/, "\\\\", str)
gsub(/\r/, "\\r", str)
gsub(/\n/, "\\n", str)
gsub(/\t/, "\\t", str)
return "\"" str "\""
}
function serialize(arr, indent_with, depth, _p, i, idx) {
# syntax: serialize(array of arrays, indent string)
# returns a JSON formatted string
# sort arrays on key, ensures [...] values remain properly ordered
if (!PROCINFO["sorted_in"]) PROCINFO["sorted_in"] = "#ind_num_asc"
# determine whether array is indexed or associative
for (i in arr) {
_p["assoc"] = or(_p["assoc"], !(++_p["idx"] in arr))
}
# if associative, indent
if (_p["assoc"]) {
for (i = ++depth; i--;) {
_p["end"] = _p["indent"]; _p["indent"] = _p["indent"] indent_with
}
}
for (i in arr) {
# If key length is 0, assume it's an empty object
if (!length(i)) return "{}"
# quote key if not already quoted
_p["key"] = i !~ /^".*"$/ ? quote(i) : i
if (isarray(arr[i])) {
if (_p["assoc"]) {
_p["json"][++idx] = _p["indent"] _p["key"] ": " \
serialize(arr[i], indent_with, depth)
} else {
# if indexed array, don't print keys
_p["json"][++idx] = serialize(arr[i], indent_with, depth)
}
} else {
# quote if not numeric, boolean, null, already quoted, or too big for match()
if (!((arr[i] ~ /^[0-9]+([\.e][0-9]+)?$/ && arr[i] !~ /^0[0-9]/) ||
arr[i] ~ /^true|false|null|".*"$/) || length(arr[i]) > 1000)
arr[i] = quote(arr[i])
_p["json"][++idx] = _p["assoc"] ? _p["indent"] _p["key"] ": " arr[i] : arr[i]
}
}
# I trial-and-errored the hell out of this. The problem is, gawk can't distinguish between
# a value of null and no value. I think this hack is as close as I can get, although
# [""] will become [].
if (!_p["assoc"] && join(_p["json"]) == "\"\"") return "[]"
# surround with curly braces if object, square brackets if array
return _p["assoc"] ? "{\n" join(_p["json"], ",\n") "\n" _p["end"] "}" \
: "[" join(_p["json"], ", ") "]"
}
Output resulting from OP's example data:
[{
"ID": "1234A",
"name": "JanA",
"role": "PrezesA",
"surname": "NowakA"
}, {
"ID": "12341D",
"name": "JanD",
"role": "PrezesD",
"surname": "NowakD"
}, {
"ID": "12342C",
"name": "JanC",
"role": "PrezesC",
"surname": "NowakC"
}, {
"name": "Tomek"
}]
Sorting
Although the results by default are ordered in a manner only gawk understands, it is possible to have gawk sort the results on a field. If you'd like to sort on the ID field, for example, add this function:
function cmp_ID(i1, v1, i2, v2) {
if (!isarray(v1) && v1 ~ /"ID"/ ) {
return v1 < v2 ? -1 : (v1 != v2)
}
}
Then insert this line within your END section above print serialize(record):
PROCINFO["sorted_in"] = "cmp_ID"
See Controlling Array Traversal for more information.
My updated awk implementation of a simple array printer, with regex-based validation for each column (run with gawk):
function ltrim(s) { sub(/^[ \t]+/, "", s); return s }
function rtrim(s) { sub(/[ \t]+$/, "", s); return s }
function sTrim(s){
return rtrim(ltrim(s));
}
function jsonEscape(jsValue) {
gsub(/\\/, "\\\\", jsValue)
gsub(/"/, "\\\"", jsValue)
gsub(/\b/, "\\b", jsValue)
gsub(/\f/, "\\f", jsValue)
gsub(/\n/, "\\n", jsValue)
gsub(/\r/, "\\r", jsValue)
gsub(/\t/, "\\t", jsValue)
return jsValue
}
function jsonStringEscapeAndWrap(jsValue) {
return "\42" jsonEscape(jsValue) "\42"
}
function jsonPrint(contentArray, contentRowsCount, schemaArray){
result = ""
schemaLength = length(schemaArray)
for (x = 1; x <= contentRowsCount; x++) {
result = result "{"
for(y = 1; y <= schemaLength; y++){
result = result "\42" sTrim(schemaArray[y]) "\42:" sTrim(contentArray[x, y])
if(y < schemaLength){
result = result ","
}
}
result = result "}"
if(x < contentRowsCount){
result = result ",\n"
}
}
return result
}
function jsonValidateAndPrint(contentArray, contentRowsCount, schemaArray, schemaColumnsCount, errorArray){
result = ""
errorsCount = 1
for (x = 1; x <= contentRowsCount; x++) {
jsonRow = "{"
for(y = 1; y <= schemaColumnsCount; y++){
regexValue = schemaArray[y, 2]
jsonValue = sTrim(contentArray[x, y])
isValid = jsonValue ~ regexValue
if(isValid == 0){
errorArray[errorsCount, 1] = "\42" sTrim(schemaArray[y, 1]) "\42"
errorArray[errorsCount, 2] = "\42Value " jsonValue " not match format: " regexValue " \42"
errorArray[errorsCount, 3] = x
errorsCount++
jsonValue = "null"
}
jsonRow = jsonRow "\42" sTrim(schemaArray[y, 1]) "\42:" jsonValue
if(y < schemaColumnsCount){
jsonRow = jsonRow ","
}
}
jsonRow = jsonRow "}"
result = result jsonRow
if(x < contentRowsCount){
result = result ",\n"
}
}
return result
}
BEGIN{
rowsCount =1
matchCount = 0
errorsCount = 0
shareholdersJsonSchema[1, 1] = "Imie"
shareholdersJsonSchema[2, 1] = "Nazwisko"
shareholdersJsonSchema[3, 1] = "PESEL"
shareholdersJsonSchema[4, 1] = "Funkcja"
shareholdersJsonSchema[1, 2] = "\\.*"
shareholdersJsonSchema[2, 2] = "\\.*"
shareholdersJsonSchema[3, 2] = "^[0-9]{11}$"
shareholdersJsonSchema[4, 2] = "\\.*"
errorsSchema[1] = "PropertyName"
errorsSchema[2] = "Message"
errorsSchema[3] = "PositionIndex"
resultSchema[1]= "ShareHolders"
resultSchema[2]= "Errors"
}
/dane wspólników/,/czwarta linia/{
if(/imie/ || /nazwisko/ || /pesel/ || /funkcja/){
if(/imie/){
shareholdersArray[rowsCount, 1] = jsonStringEscapeAndWrap($2)
matchCount++
}
if(/nazwisko/){
shareholdersArray[rowsCount, 2] = jsonStringEscapeAndWrap($2)
matchCount ++
}
if(/pesel/){
shareholdersArray[rowsCount, 3] = $2
matchCount ++
}
if(/funkcja/){
shareholdersArray[rowsCount, 4] = jsonStringEscapeAndWrap($2)
matchCount ++
}
if(matchCount==4){
rowsCount++
matchCount = 0;
}
}
}
END{
shareHolders = jsonValidateAndPrint(shareholdersArray, rowsCount - 1, shareholdersJsonSchema, 4, errorArray)
shareHoldersErrors = jsonPrint(errorArray, length(errorArray) / length(errorsSchema), errorsSchema)
resultArray[1,1] = "\n[\n" shareHolders "\n]\n"
resultArray[1,2] = "\n[\n" shareHoldersErrors "\n]\n"
resultJson = jsonPrint(resultArray, 1, resultSchema)
print resultJson
}
Produces output:
{"ShareHolders":
[
{"Imie":"JanA","Nazwisko":"NowakA","PESEL":null,"Funkcja":"PrezesA"},
{"Imie":"Ja\"nD","Nazwisko":"NowakD","PESEL":11111111111,"Funkcja":"PrezesD"},
{"Imie":"JanC","Nazwisko":"NowakC","PESEL":null,"Funkcja":"PrezesC"}
]
,"Errors":
[
{"PropertyName":"PESEL","Message":"Value 11111111111A not match format: ^[0-9]{11}$ ","PositionIndex":1},
{"PropertyName":"PESEL","Message":"Value 12342C not match format: ^[0-9]{11}$ ","PositionIndex":3}
]
}

AWK. Swap fields in a csv file

The CSV file contains nine fields. Fields $1, $8 and $9 must be preserved. Fields $2 through $7 should be collected and merged across the lines where field $1 repeats. The rules are hard to describe.
I have to finish this, or do something like it. I need a standalone script.
BEGIN{
FS=";"
OFS="";
x="\"\"";
}
{
for(i=2;i<=7;i++)
if($i!= x)
{
k=match(a[$1], $i);
if (k == 0)
{
a[$1]=a[$1]";"$i;
}
b[$1]=b[$1]"-"$8""FS""$9;
}
END {
for (g in a)
t=split(a[g], A, ";");
if (t == 2)
{
a[g]=a[g]";"x";"x";"x";"x";"x";";
}
if (t == 3)
{
a[g]=a[g]";"x";"x";"x";"x";";
}
if (t == 4)
{
a[g]=a[g]";"x";"x";"x";";
}
if (t == 5)
{
a[g]=a[g]";"x";"x";";
}
for (h in b)
q=split(b[h], B, "-");
for (z=1; z <= q; z++)
b[h]=B[z];
}
}
CSV File:
"1033reto";"V09B";"";"";"";"";"";"QVN";"V09B"
"1033reto";"V010";"";"";"";"";"";"QVN";"V010"
"1033reto";"V015";"";"";"";"";"";"QVN";"V015"
"1033reto";"V08C";"";"";"";"";"";"QVN";"V08C"
"1040reto";"V03D";"";"";"";"";"";"QVN";"V03D"
"1040reto";"V01C";"";"";"";"";"";"QVN";"V01C"
"1050reto";"V03D";"";"";"";"";"";"QVN";"V03D"
"1050reto";"V01F";"V07L";"";"";"";"";"QVN";"V01C"
Desired Output
"1033reto";"V09B";"V010";"V015";"V08C";"";"QVN";"V09B"
"1033reto";"V09B";"V010";"V015";"V08C";"";"QVN";"V010"
"1033reto";"V09B";"V010";"V015";"V08C";"";"QVN";"V015"
"1033reto";"V09B";"V010";"V015";"V08C";"";"QVN";"V08C"
"1040reto";"V03D";"V01C";"";"";"";"";"QVN";"V03D"
"1040reto";"V03D";"V01C";"";"";"";"";"QVN";"V01C"
"1050reto";"V03D";"V01F";"V07L";"";"";"";"QVN";"V03D"
"1050reto";"V03D";"V01F";"V07L";"";"";"";"QVN";"V01C"
By studying karakfa's code I managed to find an alternative without a double pass.
BEGIN{
FS=";"
x="\"\"";
}
{
for(i=2;i<=7;i++)
{
if($i!= x)
{
k=match(a[$1], $i);
if (k == 0)
{
a[$1]=a[$1]";"$i;
}
}
}
b[$1]=b[$1]"-"$8""FS""$9;
}
END {
for (g in a)
{
sub("^;","",a[g]);
t=split(a[g], A, ";");
for (y=t; y<6; y++)
{
a[g]=a[g]";"x;
}
mx=split(b[g], B, "-");
for (i=2; i<=mx; i++)
{
print g""FS""a[g]""FS""B[i];
}
}
}
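As a sanity check, the same logic can be condensed into a single pipeline and run on the two 1040reto rows from the sample (a sketch only; note that for (g in a) visits keys in an unspecified order, so with several distinct keys the groups may print in any order):

```shell
printf '%s\n' \
'"1040reto";"V03D";"";"";"";"";"";"QVN";"V03D"' \
'"1040reto";"V01C";"";"";"";"";"";"QVN";"V01C"' |
awk '
BEGIN { FS = ";"; x = "\"\"" }
{
    for (i = 2; i <= 7; i++)                      # collect distinct non-empty values per key
        if ($i != x && !match(a[$1], $i)) a[$1] = a[$1] ";" $i
    b[$1] = b[$1] "-" $8 FS $9                    # remember the trailing pair of each row
}
END {
    for (g in a) {
        sub("^;", "", a[g])
        t = split(a[g], A, ";")
        for (y = t; y < 6; y++) a[g] = a[g] ";" x # pad out to six columns
        mx = split(b[g], B, "-")
        for (i = 2; i <= mx; i++) print g FS a[g] FS B[i]
    }
}'
```

This prints the two 1040reto lines exactly as they appear in the desired output above.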

How do I pretty print JSON with multiple levels of minimization?

We have standard pretty printed JSON:
{
"results": {
"groups": {
"alpha": {
"items": {
"apple": {
"attributes": {
"class": "fruit"
}
},
"pear": {
"attributes": {
"class": "fruit"
}
},
"dog": {
"attributes": {
"class": null
}
}
}
},
"beta": {
"items": {
"banana": {
"attributes": {
"class": "fruit"
}
}
}
}
}
}
}
And we have JMin:
{"results":{"groups":{"alpha":{"items":{"apple":{"attributes":{"class":"fruit"}},"pear":{"attributes":{"class":"fruit"}},"dog":{"attributes":{"class":null}}}},"beta":{"items":{"banana":{"attributes":{"class":"fruit"}}}}}}}
But I want to be able to print JSON like this on the fly:
{
"results" : {
"groups" : {
"alpha" : {
"items" : {
"apple":{"attributes":{"class":"fruit"}},
"pear":{"attributes":{"class":"fruit"}},
"dog":{"attributes":{"class":null}}
}
},
"beta" : {
"items" : {
"banana":{"attributes":{"class":"fruit"}}}
}
}
}
}
The above I would describe as "pretty-print JSON, minimized at level 5". Are there any tools that do that?
I wrote my own JSON formatter, based on this script:
#! /usr/bin/env python
VERSION = "1.0.1"
import sys
import json
from optparse import OptionParser
def to_json(o, level=0):
if level < FOLD_LEVEL:
newline = "\n"
space = " "
else:
newline = ""
space = ""
ret = ""
if isinstance(o, basestring):
o = o.encode('unicode_escape')
ret += '"' + o + '"'
elif isinstance(o, bool):
ret += "true" if o else "false"
elif isinstance(o, float):
ret += '%.7g' % o
elif isinstance(o, int):
ret += str(o)
elif isinstance(o, list):
#ret += "[" + ",".join([to_json(e, level+1) for e in o]) + "]"
ret += "[" + newline
comma = ""
for e in o:
ret += comma
comma = "," + newline
ret += space * INDENT * (level+1)
ret += to_json(e, level+1)
ret += newline + space * INDENT * level + "]"
elif isinstance(o, dict):
ret += "{" + newline
comma = ""
for k,v in o.iteritems():
ret += comma
comma = "," + newline
ret += space * INDENT * (level+1)
#ret += '"' + str(k) + '"' + space + ':' + space
ret += '"' + str(k) + '":' + space
ret += to_json(v, level+1)
ret += newline + space * INDENT * level + "}"
elif o is None:
ret += "null"
else:
#raise TypeError("Unknown type '%s' for json serialization" % str(type(o)))
ret += str(o)
return ret
#main():
FOLD_LEVEL = 10000
INDENT = 4
parser = OptionParser(usage='%prog json_file [options]', version=VERSION)
parser.add_option("-f", "--fold-level", action="store", type="int",
dest="fold_level", help="int (all json is minimized to this level)")
parser.add_option("-i", "--indent", action="store", type="int",
dest="indent", help="int (spaces of indentation, default is 4)")
parser.add_option("-o", "--outfile", action="store", type="string",
dest="outfile", metavar="filename", help="write output to a file")
(options, args) = parser.parse_args()
if len(args) == 0:
infile = sys.stdin
elif len(args) == 1:
infile = open(args[0], 'rb')
else:
raise SystemExit(sys.argv[0] + " json_file [options]")
if options.outfile == None:
outfile = sys.stdout
else:
outfile = open(options.outfile, 'wb')
if options.fold_level != None:
FOLD_LEVEL = options.fold_level
if options.indent != None:
INDENT = options.indent
with infile:
try:
obj = json.load(infile)
except ValueError, e:
raise SystemExit(e)
with outfile:
outfile.write(to_json(obj))
outfile.write('\n')
The script accepts the fold level, the indent, and an output file from the command line:
$ jsonfold.py -h
Usage: jsonfold.py json_file [options]
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-f FOLD_LEVEL, --fold-level=FOLD_LEVEL
int (all json is minimized to this level)
-i INDENT, --indent=INDENT
int (spaces of indentation, default is 4)
-o filename, --outfile=filename
write output to a file
To get my example from above, fold at the 5th level:
$ jsonfold.py test2 -f 5
{
"results": {
"groups": {
"alpha": {
"items": {
"pear": {"attributes":{"class":"fruit"}},
"apple": {"attributes":{"class":"fruit"}},
"dog": {"attributes":{"class":None}}
}
},
"beta": {
"items": {
"banana": {"attributes":{"class":"fruit"}}
}
}
}
}
}
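The script above is Python 2 only (basestring, iteritems, and the "except ValueError, e" syntax are all gone in Python 3). For Python 3, the same fold-at-level idea can be sketched more compactly on top of the standard json module; the function name fold and its defaults are my own:

```python
import json

def fold(obj, level=0, fold_level=5, indent=4):
    """Pretty-print down to fold_level; emit minified JSON below it."""
    if level >= fold_level or not isinstance(obj, (dict, list)) or not obj:
        return json.dumps(obj, separators=(",", ":"))
    pad = " " * indent * (level + 1)   # indent for the children
    end = " " * indent * level        # indent for the closing bracket
    if isinstance(obj, dict):
        items = [pad + json.dumps(k) + ": " + fold(v, level + 1, fold_level, indent)
                 for k, v in obj.items()]
        return "{\n" + ",\n".join(items) + "\n" + end + "}"
    items = [pad + fold(v, level + 1, fold_level, indent) for v in obj]
    return "[\n" + ",\n".join(items) + "\n" + end + "]"

data = {"results": {"groups": {"alpha": {"items": {
    "apple": {"attributes": {"class": "fruit"}},
    "dog": {"attributes": {"class": None}}}}}}}
print(fold(data, fold_level=5))
```

Everything at depth 5 or deeper comes out minified on one line while the outer levels stay indented, and unlike the Python 2 script's sample output, the result is valid JSON (null rather than None).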

Tcl. How to replace a line of code with the value returned by string map?

Note: below is a part of the code that I think will be more than enough for my question, but I also put a zip archive of the whole script at the end of this post.
What I need to do: replace the line " argc ; number " with the value returned by string map [ list // $function_parameter_comment_sign ] "\" argc // number \"" and the line " argv ; arguments " with the value returned by string map [ list // $function_parameter_comment_sign ] "\" argv // arguments \""
I tried different ways to do that: enclosing them in [] or {}, assigning the value to a variable and placing it in that line, and many others, but did not succeed. How can I do it?
if { [ llength $argv ] == 1 } {
set test [ lindex $argv 0 ]
testone $test
} else {
testmain
}
proc testmain { } {
global ut_all_tests ut_current_test
foreach test $ut_all_tests {
set ut_current_test $test
puts $test
$test
}
}
proc testone { torun } {
global ut_all_tests ut_current_test
foreach test $ut_all_tests {
if { $torun == $test } {
set ut_current_test $test
puts $test
$test
}
}
}
proc tproc { name args body } {
global ut_all_tests
lappend ut_all_tests $name
proc $name $args $body
}
tproc extract_tcl_signature_test { } {
proc load_generators {} {
set script_path [ file dirname [ file normalize [ info script ] ] ]
set drakon_editor_path [string trimright $script_path "unittest" ]
set scripts [ glob -- "$drakon_editor_path/generators/*.tcl" ]
foreach script $scripts {
source $script
}
}
namespace eval gen {
array set generators {}
# This procedure is called by the language generator files. At the beginning of every language generator there is code that calls it.
proc add_generator { language generator } {
variable generators
if { [ info exists generator($language) ] } {
error "Generator for language $language already registered."
}
set gen::generators($language) $generator
}
}
load_generators
puts "=================================="
puts "Started: extract_tcl_signature_test"
foreach { language generator } [ array get gen::generators ] {
puts "----------------------------------"
puts $language
puts $generator
namespace eval current_file_generation_info {}
set current_file_generation_info::language $language
set current_file_generation_info::generator $generator
set find [string first :: $generator]
set generator_namespace [ string range $generator 0 $find-1 ]
# These 3 lines check whether the current generator has a commentator procedure.
# If not, commentator_status_var is set to "" .
set commentator_for_namespace_text "::commentator"
set commentator_call_text "$generator_namespace$commentator_for_namespace_text"
set commentator_status_var [ namespace which $commentator_call_text ]
# If the current language does not have a commentator procedure, or is listed in the if condition below, then the // sign is used for commenting function parameters.
# This is done for compatibility with diagrams made with previous versions of DRAKON Editor.
# If you are adding a new language generator to DRAKON Editor and want to use the line comment sign
# as the commenting sign for function parameters, just write a commentator procedure in your language generator,
# as is done, for example, in the AutoHotkey code generator.
if { $commentator_status_var == "" ||
$language == "C" ||
$language == "C#" ||
$language == "C++" ||
$language == "D" ||
$language == "Erlang" ||
$language == "Java" ||
$language == "Javascript" ||
$language == "Lua" ||
$language == "Processing.org" ||
$language == "Python 2.x" ||
$language == "Python 3.x" ||
$language == "Tcl" ||
$language == "Verilog" } {
good_signature_tcl { " " " #comment " "" "what?" } comment "" {} ""
good_signature_tcl { ""
""
" argc // number "
" argv // arguments "
" "
} procedure public { "argc" "argv" } ""
good_signature_tcl { "one"
"two" } procedure public { "one" "two" } ""
} else {
# Get the current generator's line comment symbol and trim the surrounding spaces.
set function_parameter_comment_sign [ $commentator_call_text "" ]
set function_parameter_comment_sign [string trim $function_parameter_comment_sign " " ]
if { $function_parameter_comment_sign == "#" } {
#good_signature_tcl { " " " #comment " "" "what?" } comment "" {} ""
good_signature_tcl { ""
""
" argc # number "
" argv # arguments "
" "
} procedure public { "argc" "argv" } ""
good_signature_tcl { "one"
"two" } procedure public { "one" "two" } ""
} else {
good_signature_tcl { " " " #comment " "" "what?" } comment "" {} ""
good_signature_tcl { ""
""
" argc ; number "
" argv ; arguments "
" "
} procedure public { "argc" "argv" } ""
good_signature_tcl { "one"
"two" } procedure public { "one" "two" } ""
}
}
#puts $function_parameter_comment_sign
}
puts "----------------------------------"
puts "Successfully ended: extract_tcl_signature_test"
puts "=================================="
}
proc good_signature_tcl { lines type access parameters returns } {
set text [ join $lines "\n" ]
unpack [ gen_tcl::extract_signature $text foo ] message signature
equal $message ""
unpack $signature atype aaccess aparameters areturns
equal $atype $type
equal $aaccess $access
set par_part0 {}
foreach par $aparameters {
lappend par_part0 [ lindex $par 0 ]
}
list_equal $par_part0 $parameters
equal [ lindex $areturns 0 ] $returns
}
Above code parts are from files: unittest.tcl , utest_utils.tcl and gen_test.tcl
Download code link for whole code: https://mega.co.nz/#!QFhlnSIS!8lxgCFbXAweqrj72Gj8KRbc6o9GVlX-V9T1Fw9jwhN0
I'm not entirely sure what you're looking for, but if I understood correctly, you could try something like this:
good_signature_tcl [list "" \
"" \
[string map [list // $function_parameter_comment_sign] " argc // number "] \
[string map [list // $function_parameter_comment_sign] " argv // arguments "] \
" " \
] procedure public { "argc" "argv" } ""
Since your {} are being used to make a list, using [list] should yield the same thing, with the bonus that command and variable substitution work.
Assuming that you always want a constant replacement (i.e., you're not varying $function_parameter_comment_sign at every site!), it's not too hard. Just build the map to apply with string map in several stages. (I'm not 100% sure I've got what you want to replace right, but it should be easy for you to fix it from here.)
# Shared value to keep line length down
set replacer [list "//" $function_parameter_comment_sign]
# Build the map (no need to do it in one monster [list]!)
set map {}
lappend map {" argc ; number "} [string map $replacer "\" argc // number \""]
lappend map {" argv ; arguments "} [string map $replacer "\" argv // arguments \""]
# Apply the map
set theMappedText [string map $map $theInputText]
If you want to do something other than replace each instance of the unwanted string with a constant, things will be quite a bit more complex, as you would need to use subst (and hence an extra cleaning step). Fortunately, I don't think you need that; your case appears to be simple enough.