Encode complex JSON structure using Perl - json

I want to encode the result of a MySQL query into a JSON string using JSON::XS. The JSON string needs to look like this
{
    "database" : "dbname",
    "retentionPolicy" : "mytest",
    "tags" : {
        "type" : "generate",
        "location" : "total",
        "source" : "ehz"
    },
    "points" : [{
        "precision" : "ms",
        "timestamp" : "ts1",
        "name" : "power",
        "values" : {
            "value" : "val1"
        }
    }, {
        "precision" : "ms",
        "timestamp" : "ts2",
        "name" : "power",
        "values" : {
            "value" : "val2"
        }
    }, {
        "precision" : "ms",
        "timestamp" : "ts3",
        "name" : "power",
        "values" : {
            "value" : "val3"
        }
    }]
}
The points array with each point's values element is giving me immense headaches.
Here is the code block that generates the JSON
my %json_body = (
    'database'        => $db_name,
    'retentionPolicy' => $retention,
    'tags'            => {
        'source'   => $metric_source,
        'type'     => $metric_type,
        'location' => $metric_location
    }
);
# loop through mysql result
while ( ($timestamp, $value) = $query->fetchrow_array() ) {
    my %json_point1 = (
        'name'      => $series_name,
        'timestamp' => ($timestamp * 1),
        'precision' => "ms"
    );
    %json_point2 = ('value' => $value);
    %json_values = (%json_point1, 'values' => \%json_point2);
    push(@all_values, \%json_values);
}
$query->finish();
# Encode json
my %json_data = (%json_body, "points" => \@all_values);
$influx_json = encode_json(\%json_data);
I think the line push(@all_values, \%json_values) is my problem. If I push a reference to %json_values, only the last value from the while loop is retained. If I use %json_values directly, the encoded JSON is messed up because it loses its structure.
Any hint would be appreciated. And please bear with me: these array and hash references are already making my head explode.

I'm pretty sure your problem is that you're using globally scoped hashes for %json_point2 and %json_values.
The root of it is this: you don't get a list of hashes, you get a list of hash references.
So when you push a reference to your hash into @all_values, you're pushing the same reference each time, and then overwriting the contents of the one hash that it refers to.
Try this:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my %hash_thing;
my @all_values;
for ( 1..3 ) {
    %hash_thing = ( "test" => $_ );
    push ( @all_values, \%hash_thing );
}
print join ( "\n", @all_values );
print Dumper \@all_values;
And you'll see you have the same 'value' 3 times:
HASH(0x74478c)
HASH(0x74478c)
HASH(0x74478c)
And so if you dump it, of course you don't get the right array, and your encoded JSON doesn't work either:
$VAR1 = [
    {
        'test' => 3
    },
    $VAR1->[0],
    $VAR1->[0]
];
The simplest fix is to use my to scope the hash inside the loop, so that each iteration creates a fresh hash. (And turn on use strict; and use warnings; if you haven't already.)
Alternatively, you can use a hash reference like this:
my @all_values;
my $hash_ref;
for ( 1..3 ) {
    $hash_ref = { "test" => $_ };
    push ( @all_values, $hash_ref );
}
print "@all_values\n";
print Dumper \@all_values;
This works because { "test" => $_ } creates a new anonymous hash on every iteration, so each element of @all_values ends up referring to a different hash, even though $hash_ref itself is reused.
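This aliasing pitfall is not Perl-specific. As a cross-language illustration (a Python sketch rather than Perl, with variable names of my own choosing), the same bug and its fix look like this:

```python
# Bug: append the SAME dict object each pass, then mutate it.
all_values = []
shared = {}
for i in (1, 2, 3):
    shared["test"] = i          # mutates the one shared dict
    all_values.append(shared)   # appends the same reference three times
print(all_values)               # [{'test': 3}, {'test': 3}, {'test': 3}]

# Fix: build a fresh dict inside the loop.
fixed = []
for i in (1, 2, 3):
    fixed.append({"test": i})   # a new dict per iteration
print(fixed)                    # [{'test': 1}, {'test': 2}, {'test': 3}]
```

The fix mirrors the Perl advice exactly: create a new container per iteration instead of reusing one container and pushing references to it.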

Related

Perl CSV to Hash of Arrays natively

I'm trying to build an associative array from a CSV file that stores only unique keys, without using extra modules like Text::CSV.
An example text file:
emp1,dept1,1090
emp2,dept2,8920
emp3,dept1,3213
emp3,dept2,3234
I would like the data to be organized by dept to look like
$hash = {
    dept => [dept1, dept2, dept3]
}
and within each dept to have its respective emp and ids
So far, I have tried
my %hash;
while (<$fh>){
    my @data = split(/,/, $fh);
    push @{$hash{$_}}, shift @data
        for qw(emp dept id);
}
However, this does not seem to fill the arrays properly; it just leaves them empty. I've looked all over for examples of how to do this, but my searches always turn up people recommending Text::CSV.
Your first problem is with this line:
my @data = split(/,/, $fh);
You are splitting the filehandle, not the line returned by the while statement, which is stored in $_.
Below is your code, changed to fix the split line. I'm also using the inline DATA filehandle to make it easier on myself. Finally, I've added a call to Data::Dumper to see what is getting stored in the hash.
use Data::Dumper;
my %hash;
while (<DATA>){
    my @data = split(/,/, $_);
    push @{$hash{$_}}, shift @data
        for qw(emp dept id);
}
print "Hash is " . Dumper(\%hash);
__DATA__
emp1,dept1,1090
emp2,dept2,8920
emp3,dept1,3213
emp3,dept2,3234
Running that gives the following, which shows the second issue: you are including a newline in the id column.
Hash is $VAR1 = {
    'dept' => [
        'dept1',
        'dept2',
        'dept1',
        'dept2'
    ],
    'emp' => [
        'emp1',
        'emp2',
        'emp3',
        'emp3'
    ],
    'id' => [
        '1090
',
        '8920
',
        '3213
',
        '3234
'
    ]
};
Fix that with a call to chomp before the split line:
use Data::Dumper;
my %hash;
while (<DATA>){
    chomp;
    my @data = split(/,/, $_);
    push @{$hash{$_}}, shift @data
        for qw(emp dept id);
}
print "Hash is " . Dumper(\%hash);
__DATA__
emp1,dept1,1090
emp2,dept2,8920
emp3,dept1,3213
emp3,dept2,3234
output is now
Hash is $VAR1 = {
    'id' => [
        '1090',
        '8920',
        '3213',
        '3234'
    ],
    'emp' => [
        'emp1',
        'emp2',
        'emp3',
        'emp3'
    ],
    'dept' => [
        'dept1',
        'dept2',
        'dept1',
        'dept2'
    ]
};
That looks better, but you have duplicates in the hash. To deal with that, I'm going to store the data read from the CSV as a hash-of-hashes, which will get rid of the duplicates:
my %hash;
my @cols = qw( emp dept id );
while (<DATA>)
{
    chomp $_;
    my @data = split /,/, $_;
    for my $i (0 .. @cols-1)
    {
        # Store as a hash of hashes
        $hash{ $cols[$i] }{ $data[$i] }++;
    }
}
print "Hash is " . Dumper(\%hash);
That looks better - the duplicates are removed
Hash is $VAR1 = {
    'dept' => {
        'dept2' => 2,
        'dept1' => 2
    },
    'emp' => {
        'emp3' => 2,
        'emp2' => 1,
        'emp1' => 1
    },
    'id' => {
        '3213' => 1,
        '8920' => 1,
        '1090' => 1,
        '3234' => 1
    }
};
Your requirement was to have a hash of arrays, so add a final step to convert the hash-of-hashes into the format you require:
my %result;
for my $col (keys %hash)
{
    push @{ $result{$col} }, sort keys %{ $hash{$col} };
}
print "Hash is " . Dumper(\%result);
That outputs this
Hash is $VAR1 = {
    'dept' => [
        'dept1',
        'dept2'
    ],
    'emp' => [
        'emp1',
        'emp2',
        'emp3'
    ],
    'id' => [
        '1090',
        '3213',
        '3234',
        '8920'
    ]
};
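For comparison, the same two-stage approach (count values in a hash-of-hashes, then flatten to sorted, deduplicated lists) can be sketched in Python; this is a cross-language illustration, not Perl, using the sample data from the question:

```python
import csv
import io

data = "emp1,dept1,1090\nemp2,dept2,8920\nemp3,dept1,3213\nemp3,dept2,3234\n"
cols = ["emp", "dept", "id"]

# Stage 1: hash-of-hashes, column -> value -> occurrence count
seen = {c: {} for c in cols}
for row in csv.reader(io.StringIO(data)):
    for col, val in zip(cols, row):
        seen[col][val] = seen[col].get(val, 0) + 1

# Stage 2: flatten to the required hash-of-arrays, deduplicated and sorted
result = {c: sorted(seen[c]) for c in cols}
print(result["dept"])   # ['dept1', 'dept2']
```

The counts themselves are discarded at the end; the hash-of-hashes exists only so that duplicate values collapse into a single key, exactly as in the Perl version.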

Laravel: Update a nested json object

I have a column in my db for saving a users' settings. This is what the data structure looks like:
{"email":{"subscriptions":"{\"Foo\":true,\"Bar\":false}"}}
I am using a vue toggle to change the status of each property (true/false). Everything seems to be working; however, when I save, I am wiping out the structure and saving only the updated values, like this:
{"Foo":true,"Bar":false}
$user = auth()->user();
$array = json_decode($user->preferences['email']['subscriptions'], true);
dd($array);
The above gets me:
array:2 [
"Foo" => true
"Bar" => false
]
So far so good...
$preferences = array_merge($array, $request->all());
dd($preferences);
Gets me:
array:2 [
"Foo" => true
"Bar" => true
]
Great: the array now reflects the values passed in from the axios request. Next up, update the user's data:
$user->update(compact('preferences'));
Now my data looks like this:
{"Foo":true,"Bar":true}
The values are no-longer nested; I've wiped out email and subscriptions.
I've tried this:
$user->update([$user->preferences['email']['subscriptions'] => json_encode($preferences)]);
But it doesn't seem to save the data. How can I use the $preferences variable to update the data - and keep the data nested correctly?
You can create an array with the structure you want the resulting json to have. So, for this json:
{
    "email": {
        "subscriptions": {
            "Foo": true,
            "Bar": false
        }
    }
}
you can create an array like this:
[
    'email' => [
        'subscriptions' => [
            'Foo' => true,
            'Bar' => false
        ]
    ]
]
and then encode the entire structure:
json_encode([
    'email' => [
        'subscriptions' => [
            'Foo' => true,
            'Bar' => false
        ]
    ]
]);
So, in your code, as you already have the nested array in the $preferences variable, I think this should work:
$json_preferences = json_encode([
    'email' => [
        'subscriptions' => $preferences
    ]
]);
Then you can update the user 'preferences' attribute (just for example):
User::where('id', auth()->user()->id)->update(['preferences' => $json_preferences]);
or
$user = auth()->user();
$user->preferences = $json_preferences;
$user->save();
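The key idea is independent of Laravel: decode the stored value, merge inside the nested structure, and re-encode the whole thing before saving. A Python sketch of that round trip (field names follow the question; the doubly-encoded inner string matches the stored format shown above):

```python
import json

# The stored column value: outer JSON whose "subscriptions" is itself a JSON string.
stored = '{"email": {"subscriptions": "{\\"Foo\\": true, \\"Bar\\": false}"}}'

prefs = json.loads(stored)                          # decode the outer structure
subs = json.loads(prefs["email"]["subscriptions"])  # decode the inner JSON string
subs.update({"Bar": True})                          # merge the incoming toggle values
prefs["email"]["subscriptions"] = subs              # put them back, keeping the nesting

print(json.dumps(prefs))  # {"email": {"subscriptions": {"Foo": true, "Bar": true}}}
```

Saving the re-encoded whole (rather than just the merged leaf) is what preserves the email/subscriptions nesting.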

Correct and easy way convert JSON::PP::Boolean to 0,1 with perl

Trying convert JSON to YAML. Have this code
#!/usr/bin/env perl
use 5.014;
use warnings;
use JSON;
use YAML;
my $json_string = q(
{
    "some" : [
        { "isFlagged" : true, "name" : "Some name" },
        { "isFlagged" : false, "name" : "Some other name" }
    ]
}
);
my $data = decode_json($json_string);
say Dump($data);
it produces:
---
some:
- isFlagged: !!perl/scalar:JSON::PP::Boolean 1
  name: Some name
- isFlagged: !!perl/scalar:JSON::PP::Boolean 0
  name: Some other name
I need to convert the JSON::PP::Boolean objects to 0 or 1. Of course, I could strip every !!perl/scalar:JSON::PP::Boolean string from the YAML output, but that doesn't seem like a correct solution to me.
So, what is the easy and correct way to convert all JSON::PP::Boolean objects to simple 0 and 1, so the YAML will generate
---
some:
- isFlagged: 1
  name: Some name
- isFlagged: 0
  name: Some other name
Use YAML's Stringify option:
{
    local $YAML::Stringify = 1;
    say Dump($data);
}
This makes YAML use the stringification overloads from JSON::PP::Boolean instead of dumping object internals.
A general solution:
use Carp qw( carp );
sub convert_bools {
    my %unrecognized;
    local *_convert_bools = sub {
        my $ref_type = ref($_[0]);
        if (!$ref_type) {
            # Nothing.
        }
        elsif ($ref_type eq 'HASH') {
            _convert_bools($_) for values(%{ $_[0] });
        }
        elsif ($ref_type eq 'ARRAY') {
            _convert_bools($_) for @{ $_[0] };
        }
        elsif (
            $ref_type eq 'JSON::PP::Boolean'              # JSON::PP
            || $ref_type eq 'Types::Serialiser::Boolean'  # JSON::XS
        ) {
            $_[0] = $_[0] ? 1 : 0;
        }
        else {
            ++$unrecognized{$ref_type};
        }
    };
    &_convert_bools;
    carp("Encountered an object of unrecognized type $_")
        for sort keys(%unrecognized);
}
my $data = decode_json($json);
convert_bools($data);
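The recursion pattern above translates directly to other languages. As a cross-language illustration (Python, where decoded booleans are real bool objects rather than JSON::PP::Boolean), the same walk-and-rewrite looks like this:

```python
def convert_bools(obj):
    """Recursively replace booleans with 0/1 in a decoded JSON structure."""
    if isinstance(obj, bool):   # test bool before other types: bool subclasses int
        return 1 if obj else 0
    if isinstance(obj, dict):
        return {k: convert_bools(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_bools(v) for v in obj]
    return obj                  # strings, numbers, None pass through unchanged

data = {"some": [{"isFlagged": True, "name": "Some name"},
                 {"isFlagged": False, "name": "Some other name"}]}
print(convert_bools(data))
```

As in the Perl version, only the boolean leaves are rewritten; hashes and arrays are recursed into and every other value is left alone.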
Using YAML::XS 0.67 or higher, you can do the following:
#!/usr/bin/env perl
use 5.014;
use warnings;
use JSON;
use YAML::XS;
my $json_string = q(
{
    "some" : [
        { "isFlagged" : true, "name" : "Some name" },
        { "isFlagged" : false, "name" : "Some other name" }
    ]
}
);
my $data = decode_json($json_string);
local $YAML::XS::Boolean = 'JSON::PP';
say Dump($data);

How to pass multidimensional values to a json array?

I am trying to pass the value to an api through json request.
$payload = json_encode( array(
    "phones" => "971xxxxxxx",
    "emails" => "fadfad@xyz.com",
    "id"     => "1"
) );
How will I pass the multidimensional values below in a JSON request, like the code above?
{ "contactList": [ { "phones" : ["+91 9000000034"], "emails" : ["fadfad@xyz.com"], "id" : 1 }, { "phones" : ["+91 903-310-00-001"], "emails" : ["krs@xyz.in"], "id" : 2 } ] }
Store the data as nested / multidimensional arrays before passing it to json_encode.
$array = array();
$array["contactList"] = array(
    array(
        "phones" => "971xxxxxxx",
        "emails" => "fadfad@xyz.com",
        "id"     => "1"
    ),
    array(
        "phones" => "+91 903-310-00-001",
        "emails" => "krs@xyz.in",
        "id"     => "2"
    )
);
$payload = json_encode($array);
echo $payload;
produces
{"contactList":[{"phones":"971xxxxxxx","emails":"fadfad@xyz.com","id":"1"},{"phones":"+91 903-310-00-001","emails":"krs@xyz.in","id":"2"}]}
If you need to hold values like the phone numbers inside an array, simply wrap them in an array.
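The same shape can be sketched in Python as a cross-language check, this time with the phone numbers and e-mails wrapped in lists as the target JSON requires (sample values taken from the question):

```python
import json

payload = json.dumps({"contactList": [
    {"phones": ["+91 9000000034"], "emails": ["fadfad@xyz.com"], "id": 1},
    {"phones": ["+91 903-310-00-001"], "emails": ["krs@xyz.in"], "id": 2},
]})
print(payload)
```

Nested native structures (dicts/arrays here, nested arrays in PHP) become nested JSON objects and arrays automatically; no string assembly is needed.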

using logstash to parse csv file

I have an elasticsearch index which I am using to index a set of documents.
These documents are originally in csv format and I am looking parse these using logstash as this has powerful regular expression tools such as grok.
My problem is that I have something along the following lines
field1,field2,field3,number@number#number@number#number@number
In the last column I have key-value pairs key@value, separated by #, and there can be any number of them.
Is there a way for me to use Logstash to parse this and store the last column as the following JSON in Elasticsearch (or some other searchable format), so I am able to search it?
[
{"key" : number, "value" : number},
{"key" : number, "value" : number},
...
]
First, you can use the csv filter to parse out the last column.
Then, you can use a ruby filter to write your own code to do what you need.
input {
    stdin {
    }
}
filter {
    ruby {
        code => '
            b = event["message"].split("#");
            ary = Array.new;
            for c in b;
                keyvar = c.split("@")[0];
                valuevar = c.split("@")[1];
                d = "{key : " << keyvar << ", value : " << valuevar << "}";
                ary.push(d);
            end;
            event["lastColum"] = ary;
        '
    }
}
output {
    stdout {debug => true}
}
With this filter, when I input
1@10#2@20
The output is
"message" => "1@10#2@20",
"@version" => "1",
"@timestamp" => "2014-03-25T01:53:56.338Z",
"lastColum" => [
    [0] "{key : 1, value : 10}",
    [1] "{key : 2, value : 20}"
]
Hope this helps.
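Outside of Logstash, the same split-twice parsing can be written in a few lines. A self-contained Python sketch (assuming pairs are separated by # and key/value by @, and with variable names of my own choosing):

```python
last_column = "1@10#2@20"

# Split into pairs on "#", then split each pair into key/value on "@".
pairs = [{"key": k, "value": v}
         for k, v in (p.split("@") for p in last_column.split("#"))]
print(pairs)   # [{'key': '1', 'value': '10'}, {'key': '2', 'value': '20'}]
```

Building real dicts (rather than assembling "{key : ...}" strings, as the ruby filter above does) gives you a structure that serializes to valid, searchable JSON directly.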