How can I fix my JSON output? - json

I have an issue decoding the JSON that I receive from cURL.
I used json_last_error() to check what the reason might be, and it looks like my JSON is malformed.
// Make the REST call, returning the result
$response = curl_exec($this->curl); // raw response shown below
$resp_json = json_decode($response, true);
echo "<pre>";
print_r($resp_json); // displays nothing
echo "</pre>";
Run through JSONLint, the response looks like this:
{
"RESPONSE_DATA": [{
"property_address": "6\/192 Kingsgrove Road",
"price": 0.0,
"contact_name": "Nicholas Smith",
"property_facing_direction": "unknown",
"agent_name": "",
"client_id": 46984,
"property_suburb": "Kingsgrove",
"agent_phone": "",
"contact_phone": "0407 787 288",
"ordered_datetime": "2017-12-05 04:15:03",
"agent_email": "",
"property_state": "NSW",
"job_id": 2324,
"im_job_id": "40432-o",
"product_ids": 3000000,
"confirmed_datetime": "",
"photographer_comment": "Photography Premium Daylight 3 photos $145.00\
nAdd Per Premium Photo 2 at $20 .00 each\ n Total $185 .00 ","
contact_company ":"
Raine & Horne Commerical - Marrickville ","
agent_id ":"
","
preferred_datetime ":"
2017 - 12 - 07 11: 00: 00 ","
property_postcode ":0000,"
status_code ":"
N "}],"
RESPONSE_MESSAGE ":"
OK ","
RESPONSE_CODE ":200}

There are four issues breaking the JSON:
an unexpected raw newline
\n inside photographer_comment
\ n inside photographer_comment
the unquoted property_postcode value :0000
You can hard-code replacements for them:
$json = trim(preg_replace('/\s+/', ' ', $json));
$json = str_replace("\\n", "", $json);
$json = str_replace("\\ n", "", $json);
$json = str_replace(":0000", ":\"0000\"", $json);
It's quick and dirty and doesn't cover other cases; you can try a regex if you want a more general approach. But I think it would be more reasonable to fix this on the data provider's side.
Besides the above, some key names are malformed because of trailing spaces, such as "preferred_datetime ".
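For illustration, the same repair sequence can be sketched in Python, applied to a miniature fragment that mimics the malformed response (the fragment itself is a made-up stand-in, not the real payload):

```python
import json
import re

# Miniature fragment mimicking the malformed response:
# a line-wrapped "\n" escape, an unquoted zero-padded postcode,
# and a key with a trailing space.
raw = ('{"photographer_comment": "3 photos $145.00\\\nnAdd photo",'
       ' "property_postcode":0000, "status ": "N"}')

fixed = re.sub(r'\s+', ' ', raw).strip()   # collapse raw newlines/whitespace
fixed = fixed.replace('\\n', '')           # drop literal "\n" sequences
fixed = fixed.replace('\\ n', '')          # drop the split "\ n" sequences
fixed = fixed.replace(':0000', ':"0000"')  # quote the zero-padded postcode

data = json.loads(fixed)                   # parses now
data = {k.strip(): v for k, v in data.items()}  # fix trailing spaces in keys
```

As with the PHP version, this is a patch for this one payload, not a general JSON repairer.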

Related

Jira Api: Why is description being merged with summary, when creating a new issue?

I'm creating a PowerShell script that monitors disk space and creates issues on Jira reporting how much disk space is left.
I can't figure out how to separate my summary and my description. They are "merged" together, and both end up in the summary when the issue is created.
I'm guessing that the formatting of my JSON body is off, but I can't see what I've done wrong.
The body I'm sending looks like this:
$body =
'{
"fields":
{
"project":
{
"key": "' + $projectKey + '"
},
"issuetype":
{
"name": "' + $issueType + '"
},
"summary": "' + $summary + '",
"description": "' + $description + '",
"priority":
{
"id": "' + $priority + '"
}
}
}';
summary and description looks like this:
$description = "{0}% space is available on the {1} drive. {2} of {3} GB of space is available." -f [math]::truncate($diskSpace), $drive, [math]::truncate($currentDrive.FreeSpace / 1gb), [math]::truncate($currentDrive.Size / 1gb);
$summary = "There is plenty of space left on the {0} drive" -f $drive;
The issue wasn't the JSON, but the way I called the function responsible for creating the Jira issue.
I changed it from this:
CreateJira($summary, $description)
To this (PowerShell passes arguments separated by spaces; a comma would bundle them into a single array argument):
CreateJira $summary $description
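Independent of the call-syntax fix, building the body with a JSON serializer instead of string concatenation sidesteps quoting and escaping problems entirely (in PowerShell that would be ConvertTo-Json). A sketch of the idea in Python, with all field values as hypothetical stand-ins for the script's variables:

```python
import json

# Hypothetical stand-ins for $projectKey, $issueType, $summary,
# $description and $priority from the PowerShell script.
payload = {
    "fields": {
        "project": {"key": "OPS"},
        "issuetype": {"name": "Task"},
        "summary": "There is plenty of space left on the C drive",
        "description": "42% space is available on the C drive.",
        "priority": {"id": "3"},
    }
}
body = json.dumps(payload)  # the serializer handles quoting and escaping
```

The serializer guarantees a well-formed body even if a value contains quotes or newlines.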

How to substitute '\n' for newline with jq [duplicate]

I have some logs that output information in JSON. This is for collection to elasticsearch.
Some testers and operations people want to be able to read logs on the servers.
Here is some example JSON:
{
"#timestamp": "2015-09-22T10:54:35.449+02:00",
"#version": 1,
"HOSTNAME": "server1.example",
"level": "WARN",
"level_value": 30000,
"logger_name": "server1.example.adapter",
"message": "message"
"stack_trace": "ERROR LALALLA\nERROR INFO NANANAN\nSOME MORE ERROR INFO\nBABABABABABBA BABABABA ABABBABAA BABABABAB\n"
}
And so on.
Is it possible to make jq print a real newline instead of the \n escape sequence seen in the value of .stack_trace?
Sure! Using the -r option, jq will print string contents directly to the terminal instead of as JSON escaped strings.
jq -r '.stack_trace'
Unless you're constrained to using jq only, you can "fix" (or actually "un-JSON-ify") the jq output with sed:
cat the-input | jq . | sed 's/\\n/\n/g'
If you happen to have tabs in the input as well (\t in JSON), then:
cat the-input | jq . | sed 's/\\n/\n/g; s/\\t/\t/g'
This would be especially handy if your stack_trace was generated by Java (you didn't say what the source of the logs is), since Java stack-trace lines begin with <tab>at<space>.
Warning: naturally, this is not strictly correct: JSON input containing \\n (an escaped backslash followed by the letter n) will come out as a backslash followed by a real newline, whereas it should come out as \n. While not correct, it's certainly sufficient for humans peeking at the data. The sed patterns can be refined to handle this (at the cost of readability).
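The distinction between the \n escape and the \\n escape can be illustrated in a few lines of Python (a sketch, not part of the original answer):

```python
import json

# "stack_trace" holds a JSON \n escape (a real newline once decoded);
# "note" holds \\n, i.e. a literal backslash followed by the letter n.
record = json.loads(r'{"stack_trace": "line1\nline2", "note": "literal \\n here"}')

print(record["stack_trace"])  # printed raw, like jq -r: two lines
print(record["note"])         # the literal backslash-n survives decoding
```

A proper JSON decoder keeps the two cases apart, which is exactly what the blanket sed substitution cannot do.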
The input as originally given isn't quite valid JSON, and it's not clear precisely what the desired output is, but the following might be of interest. It is written for the current version of jq (version 1.5) but could easily be adapted for jq 1.4:
def json2qjson:
def pp: if type == "string" then "\"\(.)\"" else . end;
. as $in
| foreach keys[] as $k (null; null; "\"\($k)\": \($in[$k] | pp)" ) ;
def data: {
"#timestamp": "2015-09-22T10:54:35.449+02:00",
"#version": 1,
"HOSTNAME": "server1.example",
"level": "WARN",
"level_value": 30000,
"logger_name": "server1.example.adapter",
"message": "message",
"stack_trace": "ERROR LALALLA\nERROR INFO NANANAN\nSOME MORE ERROR INFO\nBABABABABABBA BABABABA ABABBABAA BABABABAB\n"
};
data | json2qjson
Output:
$ jq -rnf json2qjson.jq
"#timestamp": "2015-09-22T10:54:35.449+02:00"
"#version": 1
"HOSTNAME": "server1.example"
"level": "WARN"
"level_value": 30000
"logger_name": "server1.example.adapter"
"message": "message"
"stack_trace": "ERROR LALALLA
ERROR INFO NANANAN
SOME MORE ERROR INFO
BABABABABABBA BABABABA ABABBABAA BABABABAB
"

Merge multiple lines based on pattern with sed

This is example output of git's log in JSON format.
The issue is that, from time to time, the body value contains line breaks, which makes parsing this JSON file impossible unless it is corrected.
# start of cross-section
[{
"commit-hash": "11d07df4ce627d98bd30eb1e37c27ac9515c75ff",
"abbreviated-commit-hash": "11d07df",
"author-name": "Robert Lucian CHIRIAC",
"author-email": "robert.lucian.chiriac#gmail.com",
"author-date": "Sat, 27 Jan 2018 22:33:37 +0200",
"subject": "#fix(automation): patch versions aren't released",
"sanitized-subject-line": "fix-automation-patch-versions-aren-t-released",
"body": "Nothing else to add.
Fixes #24.",
"commit-notes": ""
},
# end of cross-section
I've been going through sed's manual page, and the explanation is quite hard to digest. Does anyone have suggestions on how I can put the value of body on one line and get rid of all those line breaks? The idea is to make the file valid so that it can be parsed.
At the end, it should look like this:
...
"body": "Nothing else to add. Fixes #24."
...
This, using GNU awk for multi-char RS and patsplit(), will work whether there are escaped quotes in the input or not:
$ cat tst.awk
BEGIN { RS="^$"; ORS="" }
{
gsub(/#/,"#A")
gsub(/\\"/,"#B")
nf = patsplit($0,flds,/"[^"]*"/,seps)
$0 = ""
for (i=0; i<=nf; i++) {
$0 = $0 gensub(/\s*\n\s*/," ","g",flds[i]) seps[i]
}
gsub(/#B/,"\\\"")
gsub(/#A/,"#")
print
}
$ awk -f tst.awk file
# start of cross-section
[{
"commit-hash": "11d07df4ce627d98bd30eb1e37c27ac9515c75ff",
"abbreviated-commit-hash": "11d07df",
"author-name": "Robert Lucian CHIRIAC",
"author-email": "robert.lucian.chiriac#gmail.com",
"author-date": "Sat, 27 Jan 2018 22:33:37 +0200",
"subject": "#fix(automation): patch versions aren't released",
"sanitized-subject-line": "fix-automation-patch-versions-aren-t-released",
"body": "Nothing else to add. Fixes #24.",
"commit-notes": ""
},
# end of cross-section
It replaces every escaped quote with a string that cannot exist in the input (which the first gsub() ensures) then operates on the "..." strings then puts the escaped quotes back.
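The sentinel-swap idea the script relies on can be sketched in Python as well (illustrative only; the #A/#B markers mirror the awk version):

```python
import re

# Same trick as the awk script: hide "#" and escaped quotes behind
# sentinels, join newlines inside quoted strings, then restore.
text = '"body": "Nothing else to add.\nFixes #24.",'

s = text.replace("#", "#A").replace('\\"', "#B")
s = re.sub(r'"[^"]*"',
           lambda m: re.sub(r'\s*\n\s*', ' ', m.group(0)),
           s)
s = s.replace("#B", '\\"').replace("#A", "#")
```

Replacing "#" with "#A" first guarantees "#B" cannot already exist in the input, so the restore step is unambiguous.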
You could try this but escaped double quotes in the string values will probably break it:
Using double quote as the field separator, we count how many fields are in each line.
We expect there to be 5 fields.
If there are 4, then we have an "open" string.
If we're in an open string, when we see 2 fields, that line contains the closing double quote
awk -F'"' '
NF == 4 {in_string = 1}
in_string && NF == 2 {in_string = 0}
{printf "%s%s", $0, in_string ? " " : ORS}
' file.json
To handle the inner quotes problem, let's try replacing all escaped quotes with other text, handle the newlines, then restore the escaped quotes:
awk -F'"' -v escaped_quote_marker='!#_Q_#!' '
{gsub(/\\\"/, escaped_quote_marker)}
NF == 4 {in_string = 1}
in_string && NF == 2 {in_string = 0}
{
gsub(escaped_quote_marker, "\\\"")
printf "%s%s", $0, in_string ? " " : ORS
}
' <<END
[{
"foo":"bar",
"baz":"a string with \"escaped
quotes\" and \"newlines\"
."
}]
END
[{
"foo":"bar",
"baz":"a string with \"escaped quotes\" and \"newlines\" ."
}]
I assume git log is at least kind enough to escape quotes for you.
sed doesn't handle multi-line input easily. You may use perl in slurp mode:
perl -0777 -pe 's~("body":\h*"|\G(?<!^))([^\n"]*)\n+~$1$2 ~' file
# start of cross-section
[{
"commit-hash": "11d07df4ce627d98bd30eb1e37c27ac9515c75ff",
"abbreviated-commit-hash": "11d07df",
"author-name": "Robert Lucian CHIRIAC",
"author-email": "robert.lucian.chiriac#gmail.com",
"author-date": "Sat, 27 Jan 2018 22:33:37 +0200",
"subject": "#fix(automation): patch versions aren't released",
"sanitized-subject-line": "fix-automation-patch-versions-aren-t-released",
"body": "Nothing else to add. Fixes #24.",
"commit-notes": ""
},
# end of cross-section
\G asserts position at the end of the previous match (or at the start of the string on the first match).
(?<!^) is a negative lookbehind ensuring we don't match at the start of the string.
The ("body":\h*"|\G(?<!^)) alternation therefore matches either the "body": " prefix or the end of the previous match.
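Python's re module has no \G anchor, but the same body-joining effect can be approximated with a single substitution on the quoted value (a rough sketch, not the perl answer itself):

```python
import re

# Join the newlines inside the quoted value of "body" only.
text = '"body": "Nothing else to add.\nFixes #24.",\n"commit-notes": ""'

fixed = re.sub(r'("body":\s*")([^"]*)"',
               lambda m: m.group(1) + m.group(2).replace("\n", " ") + '"',
               text)
```

Like the second awk variant, this simple form would be confused by escaped quotes inside the body value.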

Bash sqlite3 -line | How to convert to JSON format

I want to convert my sqlite data from my database to JSON format.
I would like to use this syntax:
sqlite3 -line members.db "SELECT * FROM members LIMIT 3" > members.txt
OUTPUT:
id = 1
fname = Leif
gname = Håkansson
genderid = 1
id = 2
fname = Yvonne
gname = Bergman
genderid = 2
id = 3
fname = Roger
gname = Sjöberg
genderid = 1
How can I do this with nice, structured code in a loop?
(Bash only.)
I have tried some awk and grep, but without great success yet.
Some tips would be nice.
I want a result similar to this:
[
{
"id":1,
"fname":"Leif",
"gname":"Hakansson",
"genderid":1
},
{
"id":2,
"fname":"Yvonne",
"gname":"Bergman",
"genderid":2
},
{
"id":3,
"fname":"Roger",
"gname":"Sjberg",
"genderid":1
}
]
If your sqlite3 is compiled with the json1 extension (or if you can obtain a version of sqlite3 with the json1 extension), then you can use it to generate JSON objects (one JSON object per row). For example:
select json_object('id', id, 'fname', fname, 'gname', gname, 'genderid', genderid) ...
You can then use a tool such as jq to convert the stream of objects into an array of objects, e.g. pipe the output of sqlite3 to jq -s '.'.
(A less tiresome alternative might be to use the sqlite3 function json_array(), which produces an array, which you can reassemble into an object using jq.)
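If a shell pipeline isn't required at all, Python's built-in sqlite3 and json modules can do the whole round trip in one process. A sketch using an in-memory table whose schema is assumed from the question:

```python
import json
import sqlite3

# In-memory table mirroring the schema implied by the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (id INTEGER, fname TEXT, gname TEXT, genderid INTEGER)")
conn.executemany("INSERT INTO members VALUES (?, ?, ?, ?)",
                 [(1, "Leif", "Håkansson", 1),
                  (2, "Yvonne", "Bergman", 2),
                  (3, "Roger", "Sjöberg", 1)])

conn.row_factory = sqlite3.Row  # rows become name-addressable
rows = [dict(r) for r in conn.execute("SELECT * FROM members LIMIT 3")]
print(json.dumps(rows, indent=2, ensure_ascii=False))
```

ensure_ascii=False keeps the non-ASCII names readable instead of emitting \uXXXX escapes.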
If the json1 extension is unavailable, then you could use the following as a starting point:
awk 'BEGIN { print "["; }
function out() {if (n++) {print ","}; if (line) {print "{" line "}"}; line="";}
function trim(x) { sub(/^ */, "", x); sub(/ *$/, "", x); return x; }
NF==0 { out(); next};
{if (line) {line = line ", " }
i=index($0,"=");
line = line "\"" trim(substr($0,1,i-1)) "\": \"" substr($0, i+2) "\""}
END {out(); print "]"} '
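The same blank-line-delimited key = value parsing can be sketched in Python (illustrative; the numeric-detection rule is a guess based on the sample data):

```python
import json

# Sample of the sqlite3 -line output from the question.
text = """id = 1
 fname = Leif
 gname = Hakansson
genderid = 1

id = 2
 fname = Yvonne
 gname = Bergman
genderid = 2
"""

records = []
for block in text.strip().split("\n\n"):   # records separated by blank lines
    rec = {}
    for line in block.splitlines():
        key, _, value = line.partition(" = ")
        value = value.strip()
        rec[key.strip()] = int(value) if value.isdigit() else value
    records.append(rec)

print(json.dumps(records, indent=2))
```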
Alternatively, you could use the following jq script, which converts numeric strings that occur on the RHS of "=" to numbers:
def trim: sub("^ *"; "") | sub(" *$"; "");
def keyvalue: index("=") as $i
| {(.[0:$i] | trim): (.[$i+2:] | (tonumber? // .))};
[foreach (inputs, "") as $line ({object: false, seed: {} };
if ($line|trim) == "" then { object: .seed, seed : {} }
else {object: false,
seed: (.seed + ($line | keyvalue)) }
end;
.object | if . and (. != {}) then . else empty end ) ]
Just pass the -json argument with SQLite 3.33.0 or higher to get JSON output:
$ sqlite3 -json database.db "select * from TABLE_NAME"
from SQLite Release 3.33.0 note:
...
CLI enhancements:
Added four new output modes: "box", "json", "markdown", and "table".
The "column" output mode automatically expands columns to contain the longest output row and automatically turns ".header" on if it has
not been previously set.
The "quote" output mode honors ".separator"
The decimal extension and the ieee754 extension are built-in to the CLI
...
I think I would prefer to parse sqlite output with a single line per record, rather than the very wordy format produced by sqlite3 -line. So I would go with this:
sqlite3 members.db "SELECT * FROM members LIMIT 3"
which gives me this to parse:
1|Leif|Hakansson|1
2|Yvonne|Bergman|2
3|Roger|Sjoberg|1
I can now parse that with awk if I set the input separator to | with
awk -F '|'
and pick up the 4 fields on each line with the following and save them in an array like this:
{ id[++i]=$1; fname[i]=$2; gname[i]=$3; genderid[i]=$4 }
Then all I need to do is print the required output format at the end. However, double quotes are a pain to quote in awk, so I temporarily use another symbol (the pipe, |) in place of each double quote and, at the very end, have tr replace all the pipe symbols with double quotes, just to make the code easier on the eye. So the total solution looks like this:
sqlite3 members.db "SELECT * FROM members LIMIT 3" | awk -F'|' '
# sqlite output line - pick up fields and store in arrays
{ id[++i]=$1; fname[i]=$2; gname[i]=$3; genderid[i]=$4 }
END {
printf "[\n";
for(j=1;j<=i;j++){
printf " {\n"
printf " |id|:%d,\n",id[j]
printf " |fname|:|%s|,\n",fname[j]
printf " |gname|:|%s|,\n",gname[j]
printf " |genderid|:%d\n",genderid[j]
closing=" },\n"
if(j==i){closing=" }\n"}
printf closing;
}
printf "]\n";
}' | tr '|' '"'
sqlite-utils does exactly what you're looking for; by default, its output is JSON.
Better late than never to plug jo.
Save the sqlite3 output to a text file, get jo (it's also available in distro repos), and use this bash script:
while read -r line
do
    id=$(echo "$line" | cut -d"|" -f1)
    fname=$(echo "$line" | cut -d"|" -f2)
    gname=$(echo "$line" | cut -d"|" -f3)
    genderid=$(echo "$line" | cut -d"|" -f4)
    jsonline=$(jo id="$id" fname="$fname" gname="$gname" genderid="$genderid")
    json="$json $jsonline"
done < "$1"
jo -a $json   # $json deliberately unquoted so each object becomes its own argument
Please don't create (or parse) JSON with awk; there are dedicated tools for this, such as xidel.
While first and foremost an HTML, XML and JSON parser, xidel can also parse plain text.
I'd like to offer a very elegant solution using this tool (with much less code than jq).
I'll assume your members.txt as input.
First to create a sequence of each json object to-be:
xidel -s members.txt --xquery 'tokenize($raw,"\n\n")'
Or...
xidel -s members.txt --xquery 'tokenize($raw,"\n\n") ! (position(),.)'
1
id = 1
fname = Leif
gname = Håkansson
genderid = 1
2
id = 2
fname = Yvonne
gname = Bergman
genderid = 2
3
id = 3
fname = Roger
gname = Sjöberg
genderid = 1
...to better show you the individual items in the sequence.
Now you have 3 multi-line strings. To turn each item/string into another sequence where each item is a new line:
xidel -s members.txt --xquery 'tokenize($raw,"\n\n") ! x:lines(.)'
(x:lines(.) is a shorthand for tokenize(.,'\r\n?|\n'))
Now, for each line, tokenize on " = " (which creates yet another sequence) and save the result in a variable. For the first line, for example, this sequence is ("id","1"); for the second line, ("fname","Leif"); and so on:
xidel -s members.txt --xquery 'tokenize($raw,"\n\n") ! (for $x in x:lines(.) let $a:=tokenize($x," = ") return ($a[1],$a[2]))'
Finally remove leading whitespace (normalize-space()), create a json object ({| {key-value-pair} |}) and put all json objects in an array ([ ... ]):
xidel -s members.txt --xquery '[tokenize($raw,"\n\n") ! {|for $x in x:lines(.) let $a:=tokenize($x," = ") return {normalize-space($a[1]):$a[2]}|}]'
Prettified + output:
xidel -s members.txt --xquery '
[
tokenize($raw,"\n\n") ! {|
for $x in x:lines(.)
let $a:=tokenize($x," = ")
return {
normalize-space($a[1]):$a[2]
}
|}
]
'
[
{
"id": "1",
"fname": "Leif",
"gname": "Håkansson",
"genderid": "1"
},
{
"id": "2",
"fname": "Yvonne",
"gname": "Bergman",
"genderid": "2"
},
{
"id": "3",
"fname": "Roger",
"gname": "Sjöberg",
"genderid": "1"
}
]
Note: For xidel-0.9.9.7173 and newer --json-mode=deprecated is needed to create a json array with [ ]. The new (XQuery 3.1) way to create a json array is to use array{ }.
