I have a JSON file like this:
"tool_name": {
"command": "$ENV{TOOL_BIN_DIR}/some_file_name",
"args": "some args"
}
I am using the JSON module with Perl 5.14 and its decode_json function to read the file and get the data into a Perl hash.
But when I refer to the data that was read, with code like this:
my $cmd = "$data->{tool_name}->{command}";
print $cmd;
I get
$ENV{TOOL_BIN_DIR}/some_file_name
How can I make perl resolve the value of this variable?
This example uses an environment variable, but in general, if I want to use variables from JSON, how can I do that?
Using eval opens you up to malicious or accidental damage: the string you are executing could contain any Perl code, which may do anything at all to your system.
It is preferable to use interpolate from the String::Interpolate module, which uses Perl's own interpolation engine, the same one that expands ordinary double-quoted strings at run time.
This program sets up a value for the environment variable TOOL_BIN_DIR and expands all the values in the tool_name hash that contain a dollar $ or at @ sigil.
I've used Data::Dump to display the contents of the data after the interpolation.
If you don't know which values are likely to need expanding, you may want to write a recursive subroutine that processes the values of all nested hashes and arrays; a sketch of one follows the output below.
use strict;
use warnings 'all';
use JSON 'decode_json';
use String::Interpolate 'interpolate';
use Data::Dump 'dd';
my $data = decode_json <<'__END_JSON__';
{
    "tool_name": {
        "command": "$ENV{TOOL_BIN_DIR}/some_file_name",
        "args": "some args"
    }
}
__END_JSON__

$ENV{TOOL_BIN_DIR} = 'tool_dir_test';

for ( values %{ $data->{tool_name} } ) {
    $_ = interpolate($_) if /[\$\@]/;
}

dd $data;
output
{
  tool_name => { args => "some args", command => "tool_dir_test/some_file_name" },
}
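Here is a minimal sketch of the recursive approach suggested above; the subroutine name expand_values is my own choice and is not part of String::Interpolate:

use strict;
use warnings 'all';
use String::Interpolate 'interpolate';

# Recursively expand every plain string value, at any depth, that
# contains a $ or @ sigil. Hash and array elements are updated in place
# through the aliasing provided by "values" and by the array itself.
sub expand_values {
    my ($node) = @_;
    if ( ref $node eq 'HASH' ) {
        $_ = expand_values($_) for values %$node;
    }
    elsif ( ref $node eq 'ARRAY' ) {
        $_ = expand_values($_) for @$node;
    }
    elsif ( !ref $node && defined $node && $node =~ /[\$\@]/ ) {
        $node = interpolate($node);
    }
    return $node;
}

# With the $data structure from the example above:
# expand_values($data);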
You can use eval, as in:
my $cmd = '$ENV{HOME}/toto';
print eval('"' . $cmd . '"'), "\n";
But remember, this is very unsafe from a security point of view. You should probably avoid having to do this.
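If the only thing that ever needs expanding is $ENV{...}, a narrower alternative (my own suggestion, not part of either answer) is a plain substitution, which never executes arbitrary code:

use strict;
use warnings;

$ENV{TOOL_BIN_DIR} = 'tool_dir_test';

my $cmd = '$ENV{TOOL_BIN_DIR}/some_file_name';

# Expand only $ENV{NAME} references; nothing else in the string is
# treated as code. Unset variables become empty strings (the // operator
# needs Perl 5.10 or later).
$cmd =~ s{\$ENV\{(\w+)\}}{$ENV{$1} // ''}ge;

print "$cmd\n";    # tool_dir_test/some_file_name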
I am working with the Text::CSV library of Perl to import data from a CSV file, using the functional interface. The data is stored in an array of hashes, and the problem is that when the script tries to access those elements/keys, they are uninitialized (or undefined).
Using Data::Dumper, it is possible to see that the array and the hashes are not empty; in fact, they are correctly filled with the data from the CSV file.
With this small piece of code, I get the following output:
my $array = csv(
    in      => $csv_file,
    headers => 'auto');

foreach my $mem (@{$array}) {
    print Dumper $mem;
    foreach (keys $mem) {
        print $mem{$_};
    }
}
Last part of the output:
$VAR1 = {
          'Column' => '16',
          'Width' => '13',
          'Type' => 'RAM',
          'Depth' => '4096'
        };
Use of uninitialized value in print at ** line 81.
Use of uninitialized value in print at ** line 81.
Use of uninitialized value in print at ** line 81.
Use of uninitialized value in print at ** line 81.
This happens with all the elements of the array. Is this problem related to the encoding, or am I just accessing the elements in an incorrect way?
$mem is a reference to a hash, but you keep trying to use it directly as a hash. Change your code to:
foreach (keys %$mem) {
    print $mem->{$_};
}
There is a slight complication: in some versions of Perl, 'keys $mem' was allowed directly as an experimental feature, which was later removed. In any case, adding
use warnings;
use strict;
would likely have given you some helpful clues as to what was happening.
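For instance, under strict the hash-access line from the question refuses to compile at all (a minimal reproduction with made-up sample data):

use strict;
use warnings;

my $mem = { Type => 'RAM', Width => 13 };

# Compile-time error under strict:
# Global symbol "%mem" requires explicit package name
print $mem{$_} for keys %$mem;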
When I run your code on my version of Perl (5.24), I get this error:
Experimental keys on scalar is now forbidden at ... line ...
This points to the line:
foreach (keys $mem) {
You should dereference the hash ref:
use warnings;
use strict;
use Data::Dumper;
use Text::CSV qw( csv );

my $csv_file = "data.csv";

my $array = csv(
    in      => $csv_file,
    headers => 'auto');

foreach my $mem (@{$array}) {
    print Dumper($mem);
    foreach (keys %{ $mem }) {
        print $mem->{$_}, "\n";
    }
}
I have pored over this site (and others) trying to glean the answer for this but have been unsuccessful.
use Text::CSV;

my $csv = Text::CSV->new ( { binary => 1, auto_diag => 1 } );

$line = q(data="a=1,b=2",c=3);
my $csvParse = $csv->parse($line);
my @fields = $csv->fields();

for my $field (@fields) {
    print "FIELD ==> $field\n";
}
Here's the output:
# CSV_XS ERROR: 2034 - EIF - Loose unescaped quote @ rec 0 pos 6 field 1
FIELD ==>
I am expecting 2 array elements:
data="a=1,b=2"
c=3
What am I missing?
You may get away with using Text::ParseWords. Since you are not using real CSV, it may be fine. Example:
use strict;
use warnings;
use Data::Dumper;
use Text::ParseWords;

my $line = q(data="a=1,b=2",c=3);
my @fields = quotewords(',', 1, $line);

print Dumper \@fields;
This will print
$VAR1 = [
          'data="a=1,b=2"',
          'c=3'
        ];
That is what you requested, but you may want to test further on your data.
Your input data isn't "standard" CSV, at least not the kind that Text::CSV expects and not the kind that things like Excel produce. An entire field has to be quoted or not at all. The "standard" encoding of that would be "data=""a=1,b=2""",c=3 (which you can see by asking Text::CSV to print your expected data using say).
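For example, here is a minimal check of that claim, using combine and string rather than say (say behaves the same way but prints directly to a filehandle):

use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );

# Ask Text::CSV to encode the two fields you expect; the result shows
# how a conforming producer would have quoted them.
$csv->combine( 'data="a=1,b=2"', 'c=3' );
print $csv->string, "\n";    # "data=""a=1,b=2""",c=3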
If you pass the allow_loose_quotes option to the Text::CSV constructor, it won't error on your input, but it won't consider the quotes to be "protecting" the comma either, so you will get three fields: the first is data="a=1, the second is b=2", and the third is c=3.
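A small sketch of that behaviour (the three fields shown in the comment are those described above):

use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new( { binary => 1, auto_diag => 1, allow_loose_quotes => 1 } );

my $line = q(data="a=1,b=2",c=3);
if ( $csv->parse($line) ) {
    # Prints three fields: data="a=1 / b=2" / c=3
    print "FIELD ==> $_\n" for $csv->fields;
}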
From the command line, we pass multiple comma-separated values, such as sydney,delhi,NY,Russia, as an option. These values are stored in $runTest in the Perl script. Now I want to create a new file from within the script containing the values of $runTest, one per line. For example:
INPUT (passed values from command line):
sydney,delhi,NY,Russia
OUTPUT (under new file: myfile):
sydney
delhi
NY
Russia
In this simple example, it is better to use split on a delimiter than tr. A few minor points: use snake_case for names instead of CamelCase, and use autodie to make open, close, etc. fatal, without the need to clutter the code with or die "...":
use autodie;
my $run_test = 'sydney,delhi,NY,Russia';
open my $out, '>', 'myFile';
print {$out} map { "$_\n" } split /,/, $run_test;
close $out;
For more robust parsing in general, beyond this simple example, prefer specialized modules, such as Text::CSV or Text::CSV_XS for csv parsing. Compared to the overly simplistic split, Text::CSV_XS enables correct input/output of quoted fields, fields containing the delimiter (comma), binary characters, provides error messages and more. Example:
use Text::CSV_XS;
use autodie;

open my $out, q{>}, q{myFile};

# All of these input strings are parsed correctly, unlike when using "split":
# my $run_test = q{sydney,delhi,NY,Russia};
# my $run_test = q{sydney,delhi,NY,Russia,"field,with,commas"};
my $run_test = q{sydney,delhi,NY,Russia,"field,with,commas","field,with,missing,quote};

# binary => 1    : enable parsing binary characters in quoted fields.
# auto_diag => 1 : print the internal error code and the associated error message to STDERR.
my $csv = Text::CSV_XS->new( { binary => 1, auto_diag => 1 } );

if ( $csv->parse( $run_test ) ) {
    print {$out} map { "$_\n" } $csv->fields;
}
else {
    print STDERR q{parse() failed on: }, $csv->error_input, qq{\n};
}
I'm working on parsing JSON data using JSON.sh, and I want to read data from a JSON file (test.json) whose content will be something like:
{
    "/home/ukrishnan/projects/test.yml": {
        "LOG_DRIVER": "syslog",
        "IMAGE": "mysql:5.6"
    },
    "/home/ukrishnan/projects/mysql/app.xml": {
        "ENV_ACCOUNT_BRIDGE_ENDPOINT": "/u01/src/test/sample.txt"
    }
}
I try to parse this JSON using JSON.sh with:
test_parser=`sh ./lib/JSON.sh < test/test.json`
echo $test_parser
It prints,
["/home/ukrishnan/projects/test.yml","LOG_DRIVER"] "syslog" ["/home/ukrishnan/projects/test.yml","IMAGE"] "mysql:5.6" ["/home/ukrishnan/projects/test.yml"] {"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"} ["/home/ukrishnan/projects/mysql/app.xml","ENV_ACCOUNT_BRIDGE_ENDPOINT"] "/u01/src/test/sample.txt" ["/home/ukrishnan/projects/mysql/app.xml"] {"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"} [] {"/home/ukrishnan/projects/test.yml":{"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"},"/home/ukrishnan/projects/mysql/app.xml":{"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"}}
Whereas, if I run the same command (sh ./lib/JSON.sh < test/test.json) directly in a terminal, it prints with line breaks:
["/home/ukrishnan/projects/test.yml","LOG_DRIVER"] "syslog"
["/home/ukrishnan/projects/test.yml","IMAGE"] "mysql:5.6"
["/home/ukrishnan/projects/test.yml"] {"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"}
["/home/ukrishnan/projects/mysql/app.xml","ENV_ACCOUNT_BRIDGE_ENDPOINT"] "/u01/src/test/sample.txt"
["/home/ukrishnan/projects/mysql/app.xml"] {"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"}
[] {"/home/ukrishnan/projects/test.yml":{"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"},"/home/ukrishnan/projects/mysql/app.xml":{"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"}}
I want to read this and assign the pieces to bash variables, like:
file_name='/home/ukrishnan/projects/test.yml'
key='LOG_DRIVER'
value='syslog'
As I'm almost completely new to shell script and grep or awk, I don't have much idea of how to achieve this. Any help on this would be greatly appreciated.
I wrote a JSON serializer / deserializer for gawk, if you're interested. Save that script and modify it, replacing everything above # === FUNCTIONS === with the following:
#!/usr/bin/gawk -f

# capture JSON string from beginning to end into a scalar variable
{ json = json ORS $0 }

END {
    # objectify JSON string to the multilevel array "obj"
    deserialize(json, obj)

    for (filename in obj) {
        print "file_name=" quote(filename)
        for (key in obj[filename]) {
            # print key="value"
            print key "=" quote(obj[filename][key])
        }
    }
}
Do chmod 755 json.awk and execute it. Output will resemble this:
$ ./json.awk test5.json
file_name="/home/ukrishnan/projects/mysql/app.xml"
ENV_ACCOUNT_BRIDGE_ENDPOINT="/u01/src/test/sample.txt"
file_name="/home/ukrishnan/projects/test.yml"
LOG_DRIVER="syslog"
IMAGE="mysql:5.6"
Hopefully the logic is reasonably easy to follow. If you prefer to output file_name=, key=, and value= on every loop iteration, modify the nested for loops accordingly:
for (filename in obj) {
    for (key in obj[filename]) {
        print "file_name=" quote(filename)
        print "key=" quote(key)
        print "value=" quote(obj[filename][key])
    }
}
That change will result in the following output:
$ ./json.awk test5.json
file_name="/home/ukrishnan/projects/mysql/app.xml"
key="ENV_ACCOUNT_BRIDGE_ENDPOINT"
value="/u01/src/test/sample.txt"
file_name="/home/ukrishnan/projects/test.yml"
key="LOG_DRIVER"
value="syslog"
file_name="/home/ukrishnan/projects/test.yml"
key="IMAGE"
value="mysql:5.6"
Anyway, with that output, you can do something silly in BASH like this to populate and act upon the variables:
#!/bin/bash

./json.awk test5.json | while read -r line; do {
    eval $line
    [ "${line/=*/}" = "value" ] && {
        echo "bash: file_name=$file_name"
        echo "bash: key=$key"
        echo "bash: value=$value"
        echo "------"
    }
}; done
It'd probably be more graceful just to do all processing within gawk from start to finish and not mess with the polyglot handoff, though.
Getting back to json.awk, if you prefer to keep json.awk modular for easy reuse in future projects, you could remove everything above # === FUNCTIONS ===, create a separate main.awk containing the code block at the top of this answer, and #include "json.awk" as a helper library pretty much anywhere outside of END {...} (just below the shebang, for example).
JSON.sh (from http://json.org) offers a nice bash-friendly means of flattening out a JSON file; you have already shown in your question what that looks like. The flattened form follows this format:
[node] tab value
You have to think in terms of UNIX text processing to extract the information you want; you'll note that the lines you're interested in actually follow this pattern:
["filename","key"] tab ["value"]
In regex notation, we replace:
filename with (.*)
key with (.*)
tab with \t
value with (.*)
We can retrieve the first, second and third matching groups with \1, \2, \3 respectively.
When used in sed, we also note that the symbols [ ] ( ) need to be escaped with a backslash \, resulting in the following script:
./lib/JSON.sh < test/test.json | sed 's/\["\(.*\)","\(.*\)\"]\t"\(.*\)"/\1,\2,\3/;t;d'
/home/ukrishnan/projects/test.yml,LOG_DRIVER,syslog
/home/ukrishnan/projects/test.yml,IMAGE,mysql:5.6
/home/ukrishnan/projects/mysql/app.xml,ENV_ACCOUNT_BRIDGE_ENDPOINT,/u01/src/test/sample.txt
Now we put the lines in a loop, and for each line we can extract filename, key, and value:
for line in $(./lib/JSON.sh < test/test.json | sed 's/\["\(.*\)","\(.*\)\"]\t"\(.*\)"/\1,\2,\3/;t;d')
do
    IFS="," read -ra arr <<< $line
    filename=${arr[0]}
    key=${arr[1]}
    value=${arr[2]}
    cat <<EOF
filename : $filename
key : $key
value : $value
EOF
done
Which outputs:
filename : /home/ukrishnan/projects/test.yml
key : LOG_DRIVER
value : syslog
filename : /home/ukrishnan/projects/test.yml
key : IMAGE
value : mysql:5.6
filename : /home/ukrishnan/projects/mysql/app.xml
key : ENV_ACCOUNT_BRIDGE_ENDPOINT
value : /u01/src/test/sample.txt
I have to extract data from JSON file depending on a specific key. The data then has to be filtered (based on the key value) and separated into different fixed width flat files. I have to develop a solution using shell scripting.
Since the data is just key:value pairs, I can extract them by processing each line of the JSON file, checking the type, and writing the values to the corresponding fixed-width file.
My problem is that the input JSON file is approximately 5GB in size. My method is very basic, and I would like to know if there is a better way to achieve this using shell scripting.
Sample JSON file would look like as below:
{"Type":"Mail","id":"101","Subject":"How are you ?","Attachment":"true"}
{"Type":"Chat","id":"12ABD","Mode:Online"}
The above is a sample of the kind of data I need to process.
Give this a try:
#!/usr/bin/awk -f
{
    line = ""
    gsub("[{}\x22]", "", $0)
    f = split($0, a, "[:,]")
    for (i = 1; i <= f; i++)
        if (a[i] == "Type")
            file = a[++i]
        else
            line = line sprintf("%-15s", a[i])
    print line > (file ".fixed.out")
}
I made assumptions based on the sample data provided. There is a lot based on those assumptions that may need to be changed if the data varies much from what you've shown. In particular, this script will not work properly if the data values or field names contain colons, commas, quotes or braces. If this is a problem, it's one of the primary reasons that a proper JSON parser should be used. If it were my assignment, I'd push back hard on this point to get permission to use the proper tools.
This outputs lines that have type "Mail" to a file named "Mail.fixed.out" and type "Chat" to "Chat.fixed.out", etc.
The "Type" field name and field value ("Mail", etc.) are not output as part of the contents. This can be changed.
Otherwise, both the field names and values are output. This can be changed.
The field widths are all fixed at 15 characters, padded with spaces, with no delimiters. The field width can be changed, etc.
Let me know how close this comes to what you're looking for and I can make some adjustments.
Perl script
#!/usr/bin/perl -w
use strict;
use warnings;

no strict 'refs';   # for FileCache
use FileCache;      # avoid exceeding system's maximum number of file descriptors
use JSON;

my $type;
my $json = JSON->new->utf8(1);   #NOTE: expect utf-8 strings

while (my $line = <>) {          # for each input line
    # extract type
    eval { $type = $json->decode($line)->{Type} };
    $type = 'json_decode_error' if $@;
    $type ||= 'missing_type';

    # print to the appropriate file
    my $fh = cacheout '>>', "$type.out";
    print $fh $line;   #NOTE: use cache if there are too many hdd seeks
}
corresponding shell script
#!/bin/bash
#NOTE: bash is used to create non-ascii filenames correctly

__extract_type()
{
    perl -MJSON -e 'print from_json(shift)->{Type}' "$1"
}

__process_input()
{
    local IFS=$'\n'
    while read line; do   # for each input line
        # extract type
        local type="$(__extract_type "$line" 2>/dev/null ||
                      echo json_decode_error)"
        [ -z "$type" ] && local type=missing_type
        # print to the appropriate file
        echo "$line" >> "$type.out"
    done
}

__process_input
Example:
$ ./script-name < input_file
$ ls -1 *.out
json_decode_error.out
Mail.out