I have the following requirement.
The input is:
{ "packageConfiguration": [
{
"packageId": [
"AIM_PACKAGE"
],
"component": [
"Handbook"
],
"fieldName": [
"Upload Handbook Document"
],
"assetUrl": [
"sflydamlocation.handbookfilename.pdf"
]
}
]}
I need to convert the above JSON array into this output format:
{
"pakage": ""packageId":"AIM_PACKAGE", "component":"Handbook", "fieldName":"Upload Handbook Document","assetUrl":"sflydamlocation.handbookfilename.pdf""
}
You can do that treating all fields as strings; however, note that:
The inner quotes must be escaped, otherwise the output is not valid JSON.
Take into account that the value of "package" is not really valid JSON either, in case you want to parse it. It should be an object (e.g. "{ \"package\": ... }").
This script expects all the arrays to have exactly one element. Extra elements are ignored and missing ones could cause an error. This is not a very robust design.
Script (not recommended):
%dw 2.0
output application/json
---
package: using (pc = payload.packageConfiguration[0]) (
    " \"packageId\": \"$(pc.packageId[0])\", " ++
    "\"component\": \"$(pc.component[0])\", " ++
    "\"fieldName\": \"$(pc.fieldName[0])\", " ++
    "\"assetUrl\": \"$(pc.assetUrl[0])\" "
)
Output:
{
"package": " \"packageId\": \"AIM_PACKAGE\", \"component\": \"Handbook\" \"fieldName\": \"Upload Handbook Document\" \"assetUrl\": \"sflydamlocation.handbookfilename.pdf\" "
}
This is ugly string concatenation. Instead, I would suggest just writing the desired output as a JSON object.
Script (recommended):
%dw 2.0
output application/dw
var pc = payload.packageConfiguration[0]
---
package:
write({
packageId: pc.packageId[0],
component: pc.component[0],
fieldName: pc.fieldName[0],
assetUrl: pc.assetUrl[0]
}, "application/json") replace /\n/ with ""
Output:
{
"package": "{ \"packageId\": \"AIM_PACKAGE\", \"component\": \"Handbook\", \"fieldName\": \"Upload Handbook Document\", \"assetUrl\": \"sflydamlocation.handbookfilename.pdf\"}"
}
The second script is much cleaner and less error-prone, and it returns an escaped JSON string that you can unescape to use as JSON.
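As an aside, the same "embed an object as an escaped JSON string" idea is easy to reproduce outside DataWeave if you ever need it. Here is a minimal Go sketch using the standard encoding/json package (field values are taken from the example above; note that Go sorts map keys when marshalling, so the field order differs):
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Inner object that should end up as an escaped JSON string.
	inner := map[string]string{
		"packageId": "AIM_PACKAGE",
		"component": "Handbook",
		"fieldName": "Upload Handbook Document",
		"assetUrl":  "sflydamlocation.handbookfilename.pdf",
	}
	innerBytes, err := json.Marshal(inner)
	if err != nil {
		panic(err)
	}
	// The inner JSON is just a string value here, so the encoder
	// escapes its quotes automatically when marshalling the outer object.
	outer := map[string]string{"package": string(innerBytes)}
	outerBytes, err := json.Marshal(outer)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(outerBytes))
}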
Something like this should work, unless you require something more flexible. I'm assuming you're working with Mule 3 / DataWeave 1.0:
%dw 1.0
%output application/json
%var packageConfig = payload.packageConfiguration[0]
---
{
  package: packageConfig mapObject ((value, key) -> {
    (key): value[0]
  })
}
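For comparison, the same "take the first element of every array field" transformation sketched in Go (a rough equivalent of the mapObject above; it assumes the single-element-array input shape from the question):
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	input := []byte(`{"packageConfiguration":[{"packageId":["AIM_PACKAGE"],"component":["Handbook"],"fieldName":["Upload Handbook Document"],"assetUrl":["sflydamlocation.handbookfilename.pdf"]}]}`)

	var doc struct {
		PackageConfiguration []map[string][]string `json:"packageConfiguration"`
	}
	if err := json.Unmarshal(input, &doc); err != nil {
		panic(err)
	}

	// Take the first element of each array field, like mapObject does above.
	pkg := map[string]string{}
	for key, values := range doc.PackageConfiguration[0] {
		if len(values) > 0 {
			pkg[key] = values[0]
		}
	}
	out, _ := json.Marshal(map[string]any{"package": pkg})
	fmt.Println(string(out))
}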
This is an example of the raw JSON input:
[
{
\"type\":\"Non Custom\",
\"so_no\":\"3250109150\",
\"material_code\":\"F100101180028\",
\"po_no\":\"JDC/00067/02/22/2/DL\",
\"pr_no\":\"\",
\"gr_no\":\"\",
\"gr_date\":\"\"
},
{
\"type\":\"Non Custom\",
\"so_no\":\"3250109150\",
\"material_code\":\"F100101180030\",
\"po_no\":\"JDC/00067/02/22/2/DL\",
\"pr_no\":\"\",
\"gr_no\":\"\",
\"gr_date\":\"\"
}
]
I need to remove the \ signs from the raw JSON input. Please help me fix this.
To solve this problem, you can put the JSON into a string and replace every backslash, as shown below:
package main

import (
	"fmt"
	"strings"
)

func main() {
	// In a raw (backtick) string literal, \" is a literal backslash
	// followed by a quote, so ReplaceAll can strip the backslashes.
	raw := `[
    {
        \"type\":\"Non Custom\",
        \"so_no\":\"3250109150\",
        \"material_code\":\"F100101180028\",
        \"po_no\":\"JDC/00067/02/22/2/DL\",
        \"pr_no\":\"\",
        \"gr_no\":\"\",
        \"gr_date\":\"\"
    },
    {
        \"type\":\"Non Custom\",
        \"so_no\":\"3250109150\",
        \"material_code\":\"F100101180030\",
        \"po_no\":\"JDC/00067/02/22/2/DL\",
        \"pr_no\":\"\",
        \"gr_no\":\"\",
        \"gr_date\":\"\"
    }
]`
	fmt.Println(strings.ReplaceAll(raw, "\\", ""))
}
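One caveat: removing every backslash also destroys legitimate escapes (for example \n or \" inside a value), so it is worth verifying that the cleaned text actually parses. A minimal follow-up sketch, continuing the program above (add "encoding/json" and "log" to the imports):
	cleaned := strings.ReplaceAll(raw, "\\", "")
	// Confirm the cleaned text is valid JSON before using it downstream.
	var records []map[string]any
	if err := json.Unmarshal([]byte(cleaned), &records); err != nil {
		log.Fatalf("cleaned text is still not valid JSON: %v", err)
	}
	fmt.Printf("parsed %d records\n", len(records))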
I need to alter some values in JSON data, and would like to include it in an already existing shell script. I'm trying to do so using jq, and will need the "sub()" function to cut off a piece of a string value.
Using this command line:
jq '._meta[][].ansible_ssh_pass | sub(" .*" ; "")'
with the data below correctly replaces the value (cutting off everything from the first space onward), but it only prints out the value, not the complete JSON structure.
Here's sample JSON data:
{_meta": {
"hostvars": {
"10.1.1.3": {
"hostname": "core-gw1",
"ansible_user": "",
"ansible_ssh_pass": "test123 / ena: test2",
"configsicherung": "true",
"os": "ios",
"managementpaket": ""
}
}
}}
Output should be something like this:
{"_meta": {
"hostvars": {
"10.1.1.3": {
"hostname": "core-gw1",
"ansible_user": "",
"ansible_ssh_pass": "test123",
"configsicherung": "true",
"os": "ios",
"managementpaket": ""
}
}
}}
I assume I have to add some sort of "if... then" arguments, but I haven't been able to get jq to understand me ;) The manual is a bit sketchy, and I haven't been able to find any example that matches what I need to do...
OK, as usual... once you post a public question, you manage to find a solution yourself... ;)
This jq call does what I need:
jq '._meta.hostvars[].ansible_ssh_pass |= sub(" .*"; "")'
The update-assignment operator |= rewrites the selected value in place, so jq prints the whole JSON document with only ansible_ssh_pass changed, instead of printing just the substituted value.
Need your expertise here!
I am trying to load a JSON file (generated by JSON dumps) into Redshift using the COPY command. The file is in the following format:
[
{
"cookieId": "cb2278",
"environment": "STAGE",
"errorMessages": [
"70460"
]
}
,
{
"cookieId": "cb2271",
"environment": "STG",
"errorMessages": [
"70460"
]
}
]
We ran into the error "Invalid JSONPath format: Member is not an object."
When I get rid of the square brackets [] and remove the comma separators between the JSON dicts, it loads perfectly fine:
{
"cookieId": "cb2278",
"environment": "STAGE",
"errorMessages": [
"70460"
]
}
{
"cookieId": "cb2271",
"environment": "STG",
"errorMessages": [
"70460"
]
}
But in reality, most JSON files from APIs have this formatting.
I could do a string replace or regex to get rid of the commas and brackets, but I am wondering if there is a better way to load this into Redshift seamlessly without modifying the file.
One way to convert a JSON array into a stream of the array's elements is to pipe the former into jq '.[]'. The output is sent to stdout.
If the JSON array is in a file named input.json, then the following command will produce a stream of the array's elements on stdout:
$ jq ".[]" input.json
If you want the output in jsonlines format, then use the -c switch (i.e. jq -c '.[]' input.json).
For more on jq, see https://stedolan.github.io/jq
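If jq isn't available, the same array-to-stream conversion can be done with a few lines of Go; a sketch using the standard encoding/json package (reads the array from stdin and writes one compact element per line, which is also the format Redshift's COPY expects):
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Decode the top-level JSON array, keeping each element as raw bytes.
	var items []json.RawMessage
	if err := json.NewDecoder(os.Stdin).Decode(&items); err != nil {
		log.Fatal(err)
	}
	// Encoder.Encode writes one compact JSON value per line (jsonlines).
	enc := json.NewEncoder(os.Stdout)
	for _, item := range items {
		if err := enc.Encode(item); err != nil {
			log.Fatal(err)
		}
	}
}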
This is the JSON:
"{
'places': [
{
'name': 'New\x20Orleans,
\x20US\x20\x28New\x20Lakefront\x20\x2D\x20NEW\x29',
'code': 'NEW'
}
]
}"
I am getting a JSON parse error. I am checking it on http://jsonlint.com/ and it shows the following error:
Parse error on line 1:
"{ 'places': [
^
Expecting '{', '['
Please explain what the problems with the JSON are and how I can correct them.
If you literally mean that the string, as a whole, is your JSON text (containing something that isn't JSON), there are three issues:
It's just a JSON fragment, not a full JSON document.
Literal line breaks within strings are not valid in JSON, use \n.
\x is an invalid escape sequence in JSON strings. If you want your contained non-JSON text to have a \x escape (e.g., when you read the value of the overall string and parse it), you have to escape that backslash: \\x.
In a full JSON document, the top level must be an object or array:
{"prop": "value"}
[1, 2, 3]
Most JSON parsers support parsing fragments, such as standalone strings. (For instance, JavaScript's JSON.parse supports this.) http://jsonlint.com is doing full document parsing, however.
Here's your fragment wrapped in an object with the line breaks and \x issue handled:
{
"stuff": "{\n 'places': [\n {\n 'name': 'New\\x20Orleans,\n \\x20US\\x20\\x28New\\x20Lakefront\\x20\\x2D\\x20NEW\\x29',\n 'code': 'NEW'\n }\n \n ]\n }"
}
The text within the string is also not valid JSON, but perhaps it's not meant to be. For completeness: JSON requires that all keys and strings be in double quotes ("), not single quotes ('). It also doesn't allow literal line breaks within string literals (use \n instead), and doesn't support \x escapes. See http://json.org for details.
Here's a version as valid JSON with the \x converted to the correct JSON \u escape:
{
"places": [
{
"name": "New\u0020Orleans,\n\u0020US\u0020\u0028New\u0020Lakefront\u0020\u002D\u0020NEW\u0029",
"code": "NEW"
}
]
}
...also those escapes are all actually defining perfectly normal characters, so:
{
"places": [
{
"name": "New Orleans,\n US (New Lakefront - NEW)",
"code": "NEW"
}
]
}
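If you want to convince yourself that the \u-escaped form and the plain form are the same string, decoding both and comparing them is a quick check; a Go sketch (any JSON parser would do):
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	escaped := `"New\u0020Orleans,\n\u0020US\u0020\u0028New\u0020Lakefront\u0020\u002D\u0020NEW\u0029"`
	plain := `"New Orleans,\n US (New Lakefront - NEW)"`

	var a, b string
	// The decoder resolves \u escapes to ordinary characters.
	json.Unmarshal([]byte(escaped), &a)
	json.Unmarshal([]byte(plain), &b)
	fmt.Println(a == b) // prints: true
}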
Read http://json.org/
{
"places": [
{
"name": "New\\x20Orleans,\\x20US\\x20\\x28New\\x20Lakefront\\x20\\x2D\\x20NEW\\x29",
"code": "NEW"
}
]
}
I have an Elasticsearch index which I am using to index a set of documents.
These documents are originally in CSV format, and I am looking to parse them using Logstash, as it has powerful regular expression tools such as grok.
My problem is that I have something along the following lines:
field1,field2,field3,number#number#number#number#number#number
In the last column I have key-value pairs of the form key#value, also separated by #, and there can be any number of these.
Is there a way for me to use Logstash to parse this and store the last column as the following JSON in Elasticsearch (or some other searchable format), so that I am able to search it?
[
{"key" : number, "value" : number},
{"key" : number, "value" : number},
...
]
First, you can use the CSV filter to parse out the last column.
Then, you can use the Ruby filter to write your own code to do what you need.
input {
stdin {
}
}
filter {
ruby {
code => '
# Split the column on "#" and pair up consecutive elements.
parts = event["message"].split("#");
ary = Array.new;
parts.each_slice(2) do |keyvar, valuevar|
next if valuevar.nil?;
ary.push("{key : " + keyvar + ", value : " + valuevar + "}");
end;
event["lastColumn"] = ary;
'
}
}
output {
stdout {debug => true}
}
With this filter, when I input
1#10#2#20
The output is
"message" => "1#10#2#20",
"#version" => "1",
"#timestamp" => "2014-03-25T01:53:56.338Z",
"lastColum" => [
[0] "{key : 1, value : 10}",
[1] "{key : 2, value : 20}"
]
FYI. Hope this can help you.
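For comparison, the same pairing logic expressed outside Logstash, emitting real JSON objects instead of preformatted strings (a Go sketch; the key/value field names come from the question):
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Pair up consecutive elements of the "#"-separated column.
	parts := strings.Split("1#10#2#20", "#")
	type pair struct {
		Key   string `json:"key"`
		Value string `json:"value"`
	}
	var pairs []pair
	for i := 0; i+1 < len(parts); i += 2 {
		pairs = append(pairs, pair{Key: parts[i], Value: parts[i+1]})
	}
	out, _ := json.Marshal(pairs)
	fmt.Println(string(out))
	// Output: [{"key":"1","value":"10"},{"key":"2","value":"20"}]
}
Indexing real objects like these (rather than strings that merely look like JSON) is what makes the values searchable in Elasticsearch.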