How to remove multiple root elements in JSON?

"{
"LEINumber": "335800QRNLKAHGA1BL68",
"LegalName": "METROPOLITAN CLEARING CORPORATION OF INDIA LIMITED",
"NextRenewalDate": "04-03-2022 00:00:00 +05:30",
"LegalForm": "Public Limited Companies",
"RegistrationStatus": "ISSUED"
}"
In the JSON packet above I want to remove the surrounding double quotes, so that the data looks like this:
{
"LEINumber": "335800QRNLKAHGA1BL68",
"LegalName": "METROPOLITAN CLEARING CORPORATION OF INDIA LIMITED",
"NextRenewalDate": "04-03-2022 00:00:00 +05:30",
"LegalForm": "Public Limited Companies",
"RegistrationStatus": "ISSUED"
}

A simple solution is to emit the payload in Apigee without wrapping it in double quotes. If you assign the value inside double quotes it is treated as a single string and printed with them; if you emit it without the surrounding quotes it is printed as a plain JSON object, as shown below.
{
"LEINumber": "335800QRNLKAHGA1BL68",
"LegalName": "METROPOLITAN CLEARING CORPORATION OF INDIA LIMITED",
"NextRenewalDate": "04-03-2022 00:00:00 +05:30",
"LegalForm": "Public Limited Companies",
"RegistrationStatus": "ISSUED"
}
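More generally, a payload that arrives wrapped in quotes is usually JSON that was serialized twice, and the robust fix is to decode the outer layer rather than strip quotes as text. The sketch below illustrates this in Python (not Apigee-specific; the sample values are taken from the question, and the double-escaped form is an assumption about how such a payload would actually arrive):

```python
import json

# A double-serialized payload: the outer layer is a JSON *string*
# whose contents are themselves JSON.
raw = '"{\\"LEINumber\\": \\"335800QRNLKAHGA1BL68\\", \\"RegistrationStatus\\": \\"ISSUED\\"}"'

inner = json.loads(raw)     # first pass yields the inner JSON text as a str
record = json.loads(inner)  # second pass yields the actual object

print(record["LEINumber"])  # → 335800QRNLKAHGA1BL68
```

Decoding a layer this way also handles quotes that appear inside field values, which naive quote-stripping would corrupt.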

Related

Concatenate values from non-adjacent objects based on multiple matching criteria

I received help on a related question on this forum previously and am wondering whether there is a similarly straightforward way to resolve a more complex issue.
Given the following snippet, is there a way to merge the partial sentence (the one that does not end with a "[punctuation mark][whitespace]" pattern) with its remainder, based on the matching TextSize? When I tried to adjust the answer from the related question I quickly ran into issues. I am essentially looking to translate a rule such as: if .Text does not end with "[punctuation mark][whitespace]", then append the next .Text whose TextSize matches.
{
"Text": "Was it political will that established social democratic policies in the 1930s and ",
"Path": "P",
"TextSize": 9
},
{
"Text": "31 Lawrence Mishel and Jessica Schieder, Economic Policy Institute website, May 24, 2016 at (https://www.epi.org/publication/as-union-membership-has-fallen-the-top-10-percent-have-been-getting-a-larger-share-of-income/). ",
"Path": "Footnote",
"TextSize": 8
},
{
"Text": "Fig. 9.2 Higher union membership has been associated with a higher share of income to lower income brackets (the lower 90%) and a lower share of income to the top 10% of earners. ",
"Path": "P",
"TextSize": 8
},
{
"Text": "1940s, or that undermined them after the 1970s? Or was it abundant and cheap energy resources that enabled social democratic policies to work until the 1970s, and energy constraints that forced a restructuring of policy after the 1970s? ",
"Path": "P",
"TextSize": 9
},
{
"Text": "Recall that my economic modeling discussed in Chap. 6 shows that, even with no change in the assumption related to labor \u201cbargaining power,\u201d you can explain a shift from increasing to declining income equality (higher equality expressed as a higher wage share) by a corresponding shift from a period of rapidly increasing per capita resource consumption to one of constant per capita resource consumption. ",
"Path": "P",
"TextSize": 9
}
The result I'm looking for would be as follows:
{
"Text": "Was it political will that established social democratic policies in the 1930s and 1940s, or that undermined them after the 1970s? Or was it abundant and cheap energy resources that enabled social democratic policies to work until the 1970s, and energy constraints that forced a restructuring of policy after the 1970s? ",
"Path": "P",
"TextSize": 9
},
{
"Text": "31 Lawrence Mishel and Jessica Schieder, Economic Policy Institute website, May 24, 2016 at (https://www.epi.org/publication/as-union-membership-has-fallen-the-top-10-percent-have-been-getting-a-larger-share-of-income/). ",
"Path": "Footnote",
"TextSize": 8
},
{
"Text": "Fig. 9.2 Higher union membership has been associated with a higher share of income to lower income brackets (the lower 90%) and a lower share of income to the top 10% of earners. ",
"Path": "P",
"TextSize": 8
},
{
"Text": "Recall that my economic modeling discussed in Chap. 6 shows that, even with no change in the assumption related to labor \u201cbargaining power,\u201d you can explain a shift from increasing to declining income equality (higher equality expressed as a higher wage share) by a corresponding shift from a period of rapidly increasing per capita resource consumption to one of constant per capita resource consumption. ",
"Path": "P",
"TextSize": 9
}
The following, which assumes the input is a valid JSON array, will merge every .Text with at most one successor, but can easily be modified to merge multiple .Text values together as shown in Part 2 below.
Part 1
# Input and output: an array of {Text, Path, TextSize} objects.
# Attempt to merge the .Text of the $i-th object with the .Text of a
# subsequent compatible object; if the merge succeeds, the subsequent
# object is removed.
def attempt_to_merge_next($i):
  .[$i].TextSize as $class
  | first( (range($i+1; length) as $j | select(.[$j].TextSize == $class) | $j) // null ) as $j
  | if $j then .[$i].Text += .[$j].Text | del(.[$j])
    else .
    end;

reduce range(0; length) as $i (.;
  if .[$i] == null then .
  elif .[$i].Text | test("[,.?:;]\\s*$") | not
  then attempt_to_merge_next($i)
  else .
  end)
Part 2
Using the above def:
def merge:
  def m($i):
    if $i >= length then .
    elif .[$i].Text | test("[,.?:;]\\s*$") | not
    then attempt_to_merge_next($i) as $x
      | if ($x|length) == length then m($i+1)
        else $x | m($i)
        end
    else m($i+1)
    end;
  m(0);

merge
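For readers more comfortable outside jq, the same merge rule can be sketched in Python. This is a minimal re-implementation, not the answer's own code: the field names come from the question, and the punctuation test mirrors the jq regex above.

```python
import re

def merge_fragments(items):
    """Merge each fragment whose Text does not end in punctuation
    into the next later item that has the same TextSize."""
    items = [dict(it) for it in items]  # work on copies
    i = 0
    while i < len(items):
        if not re.search(r"[,.?:;]\s*$", items[i]["Text"]):
            # find the next item with a matching TextSize
            j = next((k for k in range(i + 1, len(items))
                      if items[k]["TextSize"] == items[i]["TextSize"]), None)
            if j is not None:
                items[i]["Text"] += items[j]["Text"]
                del items[j]
                continue  # re-check item i, in case it is still unfinished
        i += 1
    return items
```

The `continue` makes this behave like Part 2 of the jq answer: a fragment keeps absorbing same-TextSize successors until it ends in punctuation.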

AWS DynamoDB Issues adding values to existing table

I have already created a table called Sensors and identified Sensor as the hash key. I am trying to add to the table with my .json file. The items in my file look like this:
{
"Sensor": "Room Sensor",
"SensorDescription": "Turns on lights when person walks into room",
"ImageFile": "rmSensor.jpeg",
"SampleRate": "1000",
"Locations": "Baltimore, MD"
}
{
"Sensor": "Front Porch Sensor",
"SensorDescription": " ",
"ImageFile": "fpSensor.jpeg",
"SampleRate": "2000",
"Locations": "Los Angeles, CA"
}
There are 20 different sensors in the file. I was using the following command:
aws dynamodb batch-write-item \
--table-name Sensors \
--request-items file://sensorList.json \
--returned-consumed-capacity TOTAL
I get the following error:
Error parsing parameter '--request-items': Invalid JSON: Extra data: line 9 column 1 (char 189)
I've tried adding --table-name Sensors to the command line and it says Unknown options: --table-name, Sensors. I've tried put-item and a few others. I'm trying to understand what my errors are, what I need to change in my .json file (if anything), and what I need to change in my command. Thanks!
Your input file is not valid JSON: you are missing a comma between the objects, and you need to enclose everything in brackets [ ... ].
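Beyond the syntax, per the AWS CLI documentation batch-write-item has no --table-name option because the table name is part of the request-items document itself, which maps each table to a list of PutRequest entries using DynamoDB's typed attribute values (and allows at most 25 items per call). A sketch of converting plain objects like the ones in the question into that shape, assuming all values are strings (type "S"):

```python
import json

def to_batch_write_request(table_name, items):
    """Wrap plain string-valued items in the request-items shape that
    `aws dynamodb batch-write-item` expects (every value typed as "S")."""
    return {
        table_name: [
            {"PutRequest": {"Item": {k: {"S": v} for k, v in item.items()}}}
            for item in items
        ]
    }

sensors = [
    {"Sensor": "Room Sensor", "SampleRate": "1000"},
    {"Sensor": "Front Porch Sensor", "SampleRate": "2000"},
]
print(json.dumps(to_batch_write_request("Sensors", sensors), indent=2))
```

Writing the result to sensorList.json and passing it via --request-items file://sensorList.json should then parse; 20 items fits within the 25-item batch limit.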

Why are there no 'To' nor 'From' headers in the output from the internetMessageHeaders selector?

When I make the following call:
/beta/me/messages/{id}?$select=internetMessageHeaders
I get the following output:
{
"@odata.context": "https://graph.microsoft.com/beta/$metadata#users('...')/messages(internetMessageHeaders)/$entity",
"@odata.etag": "...",
"id": "AAMkAGY1Mz...",
"internetMessageHeaders": [
{
"name": "Received",
"value": "from CY1PR16MB0549.namprd16.prod.outlook.com (2603:10b6:903:13d::13) by DM3PR16MB0553.namprd16.prod.outlook.com with HTTPS via CY4PR06CA0051.NAMPRD06.PROD.OUTLOOK.COM; Fri, 16 Feb 2018 22:14:45 +0000"
},
...
]
}
And nowhere do I find 'To' or 'From' fields in the response. Why? Is there a way to retrieve this information?
From the documentation, this property holds:
A key-value pair that represents an Internet message header, as defined by RFC5322, that provides details of the network path taken by a message from the sender to the recipient.
Based on that description, your result looks correct to me:
from CY1PR16MB0549.namprd16.prod.outlook.com (2603:10b6:903:13d::13)
by DM3PR16MB0553.namprd16.prod.outlook.com
with HTTPS
via CY4PR06CA0051.NAMPRD06.PROD.OUTLOOK.COM;
Fri, 16 Feb 2018 22:14:45 +0000
For the To and From addresses, you need to add toRecipients and from to your $select clause.
/beta/me/messages/{id}?$select=toRecipients,from,internetMessageHeaders

Replace single quotes in double quotes in brackets

I need to modify a JSON file: I must replace single quotes with double quotes, but I can't simply use the command sed -i -r "s/'/\"/g" file because the file contains other single quotes that I don't want to change.
The following code is an example of string:
"categories": [['Clothing, Shoes & Jewelry', 'Girls'], ['Clothing, Shoes & Jewelry', 'Novelty, Costumes & More', 'Costumes & Accessories', 'More Accessories', 'Kids & Baby']]
The desired result would be:
"categories": [["Clothing, Shoes & Jewelry", "Girls"], ["Clothing, Shoes & Jewelry", "Novelty, Costumes & More", "Costumes & Accessories", "More Accessories", "Kids & Baby"]]
sample file:
{"categories": [['Movies & TV', 'Movies']], "title": "Understanding Seizures and Epilepsy DVD"},
{"title": "Who on Earth is Tom Baker?", "salesRank": {"Books": 3843450}, "categories": [['Books']]},
{"categories": [['Clothing, Shoes & Jewelry', 'Girls'], ['Clothing, Shoes & Jewelry', 'Novelty, Costumes & More', 'Costumes & Accessories', 'More Accessories', 'Kids & Baby']], "description": "description, "title": "Mog's Kittens", "salesRank": {"Books": 1760368}}},
{"description": "Three Dr. Suess' Puzzles", "brand": "Dr. Seuss", "categories": [['Toys & Games', 'Puzzles', 'Jigsaw Puzzles']]},
I tried a regular expression, but the problem is that I don't know how many elements are inside the brackets. So I'm looking for a way to replace all the single quotes inside the brackets; I can't find the solution.
#!/usr/bin/perl -w
use strict;
# read each line from stdin
while (my $l=<>) {
  chomp($l); # remove newline char
  # split: get contents of innermost square brackets
  my @a=split(/(\[[^][]*\])/,$l);
  foreach my $i (@a) {
    # replace quotes iff innermost square brackets
    if ($i=~/^\[/) { $i=~s/'/"/g; }
  }
  # join and print
  print join('',@a)."\n";
}
I found a way to do it using Python.
Note that the JSON stream you provided is not accepted by Python's json module because of the single quotes (and also some copy/paste problems and missing quotes, which I fixed).
My solution uses only the Python standard library; I doubt you can do the same with sed, which is why I provide it even though you didn't mention that technology.
I read the data using ast.literal_eval, since it's a list of dictionaries in exact Python syntax; single quotes are not a problem for ast.
I write the data using json.dump, which emits double quotes.
Note that I write it to a "fake" file (a string with an I/O write method) to feed the json serializer.
Here's a standalone snippet that works:
import ast, json, io
foo = """[{"categories": [['Movies & TV', 'Movies']], "title": "Understanding Seizures and Epilepsy DVD"},
{"title": "Who on Earth is Tom Baker?", "salesRank": {"Books": 3843450}, "categories": [['Books']]},
{"categories": [['Clothing, Shoes & Jewelry', 'Girls'], ['Clothing, Shoes & Jewelry', 'Novelty, Costumes & More', 'Costumes & Accessories', 'More Accessories', 'Kids & Baby']], "description": "description", "title": "Mog's Kittens", "salesRank": {"Books": 1760368}},
{"description": "Three Dr. Suess' Puzzles",
"brand": "Dr. Seuss", "categories": [['Toys & Games', 'Puzzles', 'Jigsaw Puzzles']]}
]"""
fp = io.StringIO()
json_data=ast.literal_eval(foo)
json.dump(json_data,fp)
print(fp.getvalue())
result:
[{"categories": [["Movies & TV", "Movies"]], "title": "Understanding Seizures and Epilepsy DVD"}, {"salesRank": {"Books": 3843450}, "categories": [["Books"]], "title": "Who on Earth is Tom Baker?"}, {"description": "description", "salesRank": {"Books": 1760368}, "categories": [["Clothing, Shoes & Jewelry", "Girls"], ["Clothing, Shoes & Jewelry", "Novelty, Costumes & More", "Costumes & Accessories", "More Accessories", "Kids & Baby"]], "title": "Mog's Kittens"}, {"brand": "Dr. Seuss", "description": "Three Dr. Suess' Puzzles", "categories": [["Toys & Games", "Puzzles", "Jigsaw Puzzles"]]}]
Here's a full script taking two parameters (input file and output file) and performing the conversion. You can call it from your existing bash scripts if you're not comfortable with Python (save it as fix_quotes.py, for instance):
import ast,json,sys
input_file = sys.argv[1]
output_file = sys.argv[2]
with open(input_file,"r") as fr:
json_data=ast.literal_eval(fr.read())
with open(output_file,"w") as fw:
json.dump(json_data,fw)

APEX JSON Generator writeString escapes quotes

Is there a way NOT to escape the quotes in a string when using JSON Generator's method writeString? I'm getting the following result:
"{\"Name\":\"asdsads\",\"Query\":\"adasdasd\"},{\"Name\":\"12312312\",\"Query\":\"3123123\"},{\"Name\":\"d23d2\",\"Query\":\"3d23d2\"}"
instead of:
{"Name":"asdsads","Query":"adasdasd"},{"Name":"12312312","Query":"3123123"},{"Name":"d23d2","Query":"3d23d2"}
I have tried replace('\\', '') as well as replace('\\"', '"'), but neither worked.
Any help is appreciated.
Solved it. Had to do the following:
String genString = gen.getAsString();
genString = genString.replace('\\"', '"');
genString = genString.replace('"{', '{');
genString = genString.replace('}"', '}');
Replacing on the fly didn't work for some reason.
It's better not to add the above snippet in the RestResource class; rather, I recommend adding it just before you parse the JSON.
It worked fine for me, as I had the JSON generated by a RestResource class.
"{\n "Status" : "Success",\n "Count" : 6,\n "Accounts" : [ "AccontFromMyVF", "United Oil & Gas, UK", "United Oil & Gas, Singapore", "United Oil & Gas Corp.", "AccontFromMyVF", "AccontFromMyVF12" ]\n}"
After adding the following:
Accountresult = Accountresult.replace('\\n', '');
Accountresult = Accountresult.replace('\\"', '"');
Accountresult = Accountresult.replace('"{', '{');
Accountresult = Accountresult.replace('}"', '}');
My response turned into
{ "Status" : "Success", "Count" : 6, "Accounts" : [ "AccontFromMyVF", "United Oil & Gas, UK", "United Oil & Gas, Singapore", "United Oil & Gas Corp.", "AccontFromMyVF", "AccontFromMyVF12" ]}
and this JSON can then be parsed without issues.
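The backslash-escaped output in both answers is the fingerprint of serializing an already serialized JSON string, so decoding one layer is more reliable than chained replaces (which break if a field value itself contains a quote or brace). A Python illustration of the effect (not Apex; purely to show the mechanism):

```python
import json

payload = {"Status": "Success", "Count": 6}

# Serializing once gives normal JSON; serializing the resulting *string*
# again produces the escaped-quote form seen in the question.
once = json.dumps(payload)
twice = json.dumps(once)   # e.g. "{\"Status\": \"Success\", \"Count\": 6}"

# Instead of stripping backslashes by hand, decode one layer:
recovered = json.loads(twice)
assert recovered == once
assert json.loads(recovered) == payload
```

In Apex terms, the cleaner fix is to avoid the double serialization in the first place, for example by writing the object with the generator's object/field methods rather than passing pre-serialized JSON to writeString.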