How can I remove null values and keys from JSON using COBOL statements - V6.1 Enterprise COBOL - json

I can't figure out how to remove null values (and corresponding keys) from the output JSON.
Using JSON GENERATE, I am creating JSON output, and in the output I am getting \u0000 as null.
I want to identify and remove the null values and their keys from the JSON.
COBOL version - 6.1.0
I am generating the output into a field defined as PIC X(5000). I tried INSPECT and other statements, but no luck :(
For example:
{
"Item": "A",
"Price": 12.23,
"Qty": 123
},
{
"Item": "B",
"Price": \u000,
"Qty": 234
},
{
"Item": "C",
"Price": 23.2,
"Qty": \u0000
}
In output I want:
{
"Item": "A",
"Price": 12.23,
"Qty": 123
},
{
"Item": "B",
"Qty": 234
},
{
"Item": "C",
"Price": 23.2,
}
Approach 1:
Created the JSON using JSON GENERATE and defined the output field as
PIC X(50000).
After converting to UTF-8, I tried using INSPECT to find the '\u0000' by its hex value, but it has no effect on the UTF-8 data and I am not able to locate the '\u0000' values.
perform varying utf-8-pos from 1 by 1
until utf-8-pos = utf-8-end
EVALUATE TRUE
WHEN JSONOUT(utf-8-pos:1) = X'5C' *> first find a "\" character in the output
perform varying utf-8-pos from 1 by 1
until JSONOUT(utf-8-pos:1) = X'22' *> then find the closing quote X'22'
move JSONOUT(1:utf-8-end - utf-8-pos) to JSONOUT *> skip the position of the null
end-perform
WHEN JSONOUT(utf-8-pos:1) NOT= X'5C'
continue
WHEN OTHER
continue
END-EVALUATE
end-perform
Approach 2:
Convert the item to UTF-16 in a national data item by using the NATIONAL-OF function.
Then use INSPECT, EVALUATE, or PERFORM to find '\u0000' via the hex value N'005C'.
But I am not able to find the correct position of '\u0000'; I also tried NX'005C', but no luck.
IDENTIFICATION DIVISION.
PROGRAM-ID. JSONTEST.
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
SOURCE-COMPUTER. IBM-370-158.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 root.
05 header.
10 systemId PIC X(10).
10 timestamp PIC X(30).
05 payload.
10 customerid PIC X(10).
77 Msglength PIC 9(05).
77 utf-8-pos PIC 9(05).
77 utf-8-end PIC 9(05).
01 jsonout PIC X(30000).
PROCEDURE DIVISION.
MAIN SECTION.
MOVE "2012-12-18T12:43:37.464Z" to timestamp
MOVE LOW-VALUES to customerid
JSON GENERATE jsonout
FROM root
COUNT IN Msglength
NAME OF root is OMITTED
systemId IS 'id'
ON EXCEPTION
DISPLAY 'JSON EXCEPTION'
STOP RUN
END-JSON
DISPLAY jsonout (1:Msglength)
PERFORM skipnull
STOP RUN.
MAIN-EXIT.
EXIT.
Skipnull SECTION.
move Msglength to utf-8-end *> scan only the generated length
perform varying utf-8-pos from 1 by 1
until utf-8-pos = utf-8-end
EVALUATE TRUE
WHEN JSONOUT(utf-8-pos:1) = X'5C' *> first find a "\" character in the output
perform varying utf-8-pos from 1 by 1
until JSONOUT(utf-8-pos:1) = X'22' *> then find the closing quote X'22'
move JSONOUT(1:utf-8-end - utf-8-pos) to JSONOUT *> skip the position of the null
end-perform
WHEN JSONOUT(utf-8-pos:1) NOT= X'5C'
continue
WHEN OTHER
continue
END-EVALUATE
end-perform.
Skipnull-exit.
EXIT.
Sample output: since we don't have any value to fill in for customerid, in the output we get:
{"header" : {
"timestamp" : "2012-12-18T12:43:37.464Z",
"customerid" : "\u0000\u0000\u00000" }
}
In the result I want customerid skipped from the output; I want both the name and the value omitted from the output file.

Since Enterprise COBOL's JSON GENERATE is an all-in-one-go command I don't think there's an easy way to do this in V6.1.
Just to give you something to look forward to: Enterprise COBOL 6.3 offers an extended SUPPRESS clause that does just what you need:
JSON GENERATE JSONOUT FROM MYDATA COUNT JSONLEN
SUPPRESS WHEN ZERO
ON EXCEPTION DISPLAY 'ERROR JSON-CODE: ' JSON-CODE
NOT ON EXCEPTION DISPLAY 'JSON GENERATED - LEN=' JSONLEN
You can also suppress WHEN SPACES, WHEN LOW-VALUE or WHEN HIGH-VALUE.
You can also limit suppression to certain fields:
SUPPRESS Price WHEN ZERO
Qty WHEN ZERO
Unfortunately this feature hasn't been backported to 6.1 (it was added to 6.2 with the December 2020 PTF), and I don't know whether it will be...

I don't know anything about COBOL, but I needed the same thing in JavaScript, so I will share my JavaScript function. If you can translate it to COBOL, maybe it will help you.
function clearMyJson(obj) {
    for (var i in obj) {
        if ($.isArray(obj[i])) {
            if (obj[i].length == 0)
                delete obj[i];           // remove empty arrays
            else
                clearMyJson(obj[i]);     // recurse to clear the array
        } else if ($.isPlainObject(obj[i])) {
            clearMyJson(obj[i]);         // recurse to clear the nested object
        } else if (obj[i] == null || obj[i] === "") {
            delete obj[i];               // delete property if it is null or empty
        }
    }
}
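If Python is easier to port from, here is a rough sketch of the same cleanup (the helper names are illustrative, and treating an all-\u0000 string as "no value" is an assumption about how JSON GENERATE renders LOW-VALUES):
import json

def is_nullish(value):
    # None, an empty string/list, or a string made up only of NUL characters
    if value is None or value == "" or value == []:
        return True
    return isinstance(value, str) and set(value) == {"\u0000"}

def clear_nulls(obj):
    # Recursively drop dict entries and list items whose value is "nullish"
    if isinstance(obj, dict):
        return {k: clear_nulls(v) for k, v in obj.items() if not is_nullish(v)}
    if isinstance(obj, list):
        return [clear_nulls(v) for v in obj if not is_nullish(v)]
    return obj

raw = '{"header": {"timestamp": "2012-12-18T12:43:37.464Z", "customerid": "\\u0000\\u0000"}}'
print(json.dumps(clear_nulls(json.loads(raw))))
# {"header": {"timestamp": "2012-12-18T12:43:37.464Z"}}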

Related

How can Postgres extract parts of json, including arrays, into another JSON field?

I'm trying to convince PostgreSQL 13 to pull out parts of a JSON field into another field, including a subset of properties within an array based on a discriminator (type) property. For example, given a data field containing:
{
"id": 1,
"type": "a",
"items": [
{ "size": "small", "color": "green" },
{ "size": "large", "color": "white" }
]
}
I'm trying to generate new_data like this:
{
"items": [
{ "size": "small" },
{ "size": "large"}
]
}
items can contain any number of entries. I've tried variations of SQL something like:
UPDATE my_table
SET new_data = (
CASE data->>'type'
WHEN 'a' THEN
json_build_object(
'items', json_agg(json_array_elements(data->'items') - 'color')
)
ELSE
null
END
);
but I can't seem to get it working. In this case, I get:
ERROR: set-returning functions are not allowed in UPDATE
LINE 6: 'items', json_agg(json_array_elements(data->'items')...
I can get a set of items using json_array_elements(data->'items') and thought I could roll this up into a JSON array using json_agg and remove unwanted keys using the - operator. But now I'm not sure if what I'm trying to do is possible. I'm guessing it's a case of PEBCAK. I've got about a dozen different types each with slightly different rules for how new_data should look, which is why I'm trying to fit the value for new_data into a type-based CASE statement.
Any tips, hints, or suggestions would be greatly appreciated.
One way is to handle the set json_array_elements() returns in a subquery.
UPDATE my_table
SET new_data = CASE
WHEN data->>'type' = 'a' THEN
(SELECT json_build_object('items',
json_agg(jae.item::jsonb - 'color'))
FROM json_array_elements(data->'items') jae(item))
END;
db<>fiddle
Also note that - isn't defined for json, only for jsonb. So unless your columns are actually jsonb, you need a cast. And you don't need an explicit ... ELSE NULL ... in a CASE expression; NULL is already the default value when no ELSE branch is specified.

How can I load the following JSON (deeply nested) to a DataFrame?

A sample of the JSON is as shown below:
{
"AN": {
"dates": {
"2020-03-26": {
"delta": {
"confirmed": 1
},
"total": {
"confirmed": 1
}
}
}
},
"KA": {
"dates": {
"2020-03-09": {
"delta": {
"confirmed": 1
},
"total": {
"confirmed": 1
}
},
"2020-03-10": {
"delta": {
"confirmed": 3
},
"total": {
"confirmed": 4
}
}
}
}
}
I would like to load it into a DataFrame, such that the state names (AN, KA) are represented as Row names, and the dates and nested entries are present as Columns.
Any tips to achieve this would be very much appreciated. [I am aware of json_normalize, however I haven't figured out how to work it out yet.]
The output I am expecting is roughly as shown below:
Can you update your post with the DataFrame you have in mind? It'll be easier to understand what you want.
Also, sometimes it's better to reshape your data if you can't make it work the way it is now.
Update:
Following your update here's what you can do.
You need to reshape your data; as I said, when you can't achieve what you want it is best to look at the problem from another point of view. For instance (and from the sample you shared), the 'dates' key is meaningless, as the other keys are already dates and there are no other keys at the same level.
A way to achieve what you want would be to use a MultiIndex; it will help you group your data the way you want. To use it you can, for instance, create all the indices you need and store the associated values in a dictionary.
Example :
If the only index you have is ('2020-03-26', 'delta', 'confirmed') you should have values = {'AN' : [1], 'KA':None}
Then you only need to create your DataFrame and transpose it.
I gave it a quick try and came up with a piece of code that should work. If you're looking for performance I don't think this will do the trick.
import pandas as pd
from copy import deepcopy

# d is the sample you shared
index = [[], [], []]
values = {}
# Get all the dates
dates = [date for c in d.keys() for date in d[c]['dates'].keys()]
for country in d.keys():
    # For each country we create an array containing all 6 values for each date
    # (missing values as None)
    values[country] = []
    for date in dates:
        if date in d[country]['dates']:
            for method in ['delta', 'total']:
                for step in ['confirmed', 'recovered', 'tested']:
                    # Incrementing indices
                    index[0].append(date)
                    index[1].append(method)
                    index[2].append(step)
                    if step in d[country]['dates'][date][method]:
                        values[country].append(deepcopy(d[country]['dates'][date][method][step]))
                    else:
                        values[country].append(None)
        # When country does not have a date fill with None
        else:
            for method in ['delta', 'total']:
                for step in ['confirmed', 'recovered', 'tested']:
                    index[0].append(date)
                    index[1].append(method)
                    index[2].append(step)
                    values[country].append(None)
# Removing duplicates introduced because we added n_countries times
# the indices
# 3 is the number of steps
# 2 is the number of methods
number_of_rows = 3 * 2 * len(dates)
index[0] = index[0][:number_of_rows]
index[1] = index[1][:number_of_rows]
index[2] = index[2][:number_of_rows]
df = pd.DataFrame(values, index=index).T
Here is what I have for the transposed data frame of my output:
Hope this can help you
You clearly need to reshape your JSON data before loading it into a DataFrame.
Have you tried loading your JSON as a dict?
dataframe = pd.DataFrame.from_dict(JsonDict, orient="index")
The “orient” of the data. If the keys of the passed dict should be the columns of the resulting DataFrame, pass ‘columns’ (default). Otherwise if the keys should be rows, pass ‘index’.
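Building on that, here is a rough, untested sketch (the dict literal just reproduces the sample from the question, and the intermediate names are my own) that flattens the nesting into MultiIndex columns, with the state codes as rows:
import pandas as pd

# the nested dict from the question (reproduced so the sketch is self-contained)
d = {
    "AN": {"dates": {"2020-03-26": {"delta": {"confirmed": 1},
                                    "total": {"confirmed": 1}}}},
    "KA": {"dates": {"2020-03-09": {"delta": {"confirmed": 1},
                                    "total": {"confirmed": 1}},
                     "2020-03-10": {"delta": {"confirmed": 3},
                                    "total": {"confirmed": 4}}}},
}

rows = {}
for state, payload in d.items():
    flat = {}
    for date, methods in payload["dates"].items():
        for method, steps in methods.items():      # delta / total
            for step, value in steps.items():      # confirmed, ...
                flat[(date, method, step)] = value
    rows[state] = flat

df = pd.DataFrame.from_dict(rows, orient="index")
df.columns = pd.MultiIndex.from_tuples(df.columns)
df = df.sort_index(axis=1)
print(df)
Dates a state does not have simply come out as NaN for that row.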

netlogo: no " " in csv spreadsheet since NetLogo 6.0.3

I want to substitute "#N/A" for a calculated value of 0, but the quotes ("") are not written to the csv file in NetLogo 6.0.3 (what is written is just #N/A). I want to calculate an average in Excel over numerical data mixed with "#N/A", but #N/A shows up as the calculation result. If "#N/A" were written to the csv file, Excel could do the calculation. In NetLogo 6.0.1 this was possible. What should I do in NetLogo 6.0.3?
The "correct" way to do this is to handle it in excel by ignoring N/As in your average. That way, you preserve those values as N/As and so have to be conscious about how you deal with them. You can do this by calculating the average with something like =AVERAGE(IF(ISNUMBER(A2:A5), A2:A5)) and then entering with ctrl+shift+enter instead of just enter. That, of course, is kind of annoying.
To solve it on the NetLogo side, report the value "\"#N/A\"" instead of "#N/A". That will preserve the quotes when you import into Excel. Alternatively, you could output pretty much any string other than "#N/A". For instance, reporting "not-a-number" would make it a string, or you could even just use an empty string. The quotes you see in Excel are actually part of the string, not just indicators that the field is a string. In general, fields in CSV don't have a type; Excel just interprets what it can as a number. It treats the exact field #N/A as special, so modifying it in any way (not just adding quotes around it) will prevent it from being interpreted in that special way.
It's also worth noting that this was a bug in previous versions of NetLogo (I'm assuming you're using BehaviorSpace here; the CSV extension has always worked this way). There was no way to output a string without having a quote at the beginning and end of the string. That is, the string value itself would have quotes in it. This behavior is a consequence of fixing it. Now, you can output true #N/A values if you want to, which there was no way of doing before.
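To see that the quotes really are part of the field value, here is a quick illustration using Python's csv module (not NetLogo/BehaviorSpace itself) of what lands in the raw file in each case:
import csv, io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([3, "#N/A"])     # bare field: Excel treats #N/A as its special error value
writer.writerow([4, '"#N/A"'])   # embedded quotes become part of the value, so Excel sees text
print(buf.getvalue())
# 3,#N/A
# 4,"""#N/A"""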
Maybe this will work for you. Assuming you have the csv extension enabled:
extensions [ csv ]
You can use a reporter that replaces 0 values in a list (or list of lists) with the string value "#NA" (or "N/A" if you want, but for me #NA is what works with Excel).
to-report replace-zeroes [ list_ ]
if list_ = [] [ report [] ]
let out map [ i ->
ifelse-value is-list? i
[ replace-zeroes i ]
[ ifelse-value ( i != 0 ) [ i ] [ "#NA" ] ]
] list_
report out
end
As a quick check:
to test
ca
; make fake list of lists for csv output
let fake n-values 3 [ i -> n-values 5 [ random 4 ] ]
; replace the 0 values with the NA values
let replaced replace-zeroes fake
; print both the base and 0-replaced lists
print fake
print replaced
; export to csv
csv:to-file "replaced_out.csv" replaced
reset-ticks
end
Observer output (random):
[[0 0 2 2 0] [3 0 0 3 0] [2 3 2 3 1]]
[[#NA #NA 2 2 #NA] [3 #NA #NA 3 #NA] [2 3 2 3 1]]
Excel output:

Parsing JSON object in RUBY with a wildcard?

Problem:
I'm relatively new to programming and learning Ruby, I've worked with JSON before but have been stumped by this problem.
I'm taking a hash, running hash.to_json, and returning a json object that looks like this:
'quantity' =
{
"line_1": {
"row": "1",
"productNumber": "111",
"availableQuantity": "4"
},
"line_2": {
"row": "2",
"productNumber": "112",
"availableQuantity": "6"
},
"line_3": {
"row": "3",
"productNumber": "113",
"availableQuantity": "10"
}
}
I want to find the 'availableQuantity' value that's greater than 5 and return the line number.
Further, I'd like to return the line number and the product number.
What I've tried
I've been searching on using a wildcard in a JSON query to get over the "line_" value for each entry, but with no luck.
to simply identify a value for 'availableQuantity' within the JSON object greater than 5:
q = JSON.parse(quantity)
q.find {|key| key["availableQuantity"] > 5}
However this returns the error: "{TypeError}no implicit conversion of String into Integer."
I've googled this error but I can not understand what it means in the context of this problem.
or even
q.find {|key, value| value > 2}
which returns the error: "undefined method `>' for {"row"=>"1", "productNumber"=>111, "availableQuantity"=>4}:Hash"
This attempt looks so simplistic I'm ashamed, but it reveals a fundamental gap in my understanding of how to work with looping over stuff using enumerable.
Can anyone help explain a solution, and ideally what the steps in the solution mean? For example, does the solution require use of an enumerable with find? Or does Ruby handle a direct query to the json?
This would help my learning considerably.
I want to find the 'availableQuantity' value that's greater than 5 and [...] return the line number and the product number.
First problem: your value is not a number, so you can't compare it to 5. You need to_i to convert.
Second problem: getting the line number is easiest with regular expressions. /\d+/ is "any consecutive digits". Combining that...
q.select { |key, value|
value['availableQuantity'].to_i > 5
}.map { |key, value|
[key[/\d+/].to_i, value['productNumber'].to_i]
}
# => [[2, 112], [3, 113]]

Referencing JSON elements in AppleScript

I have a JSON result I am trying to work with in AppleScript, but because the top-level items are "unnamed" I can only access them by pipe-quoting the item reference, which in this case is a number. As a result, I can't iterate through it; it has to be hard coded (scroll down to the last code sample to see what I mean).
For example, this is the JSON I'm looking at:
{
"1": {
"name": "Tri 1"
},
"2": {
"name": "Tri 2"
},
"3": {
"name": "Tri 3"
},
"4": {
"name": "Orb Dave"
},
"5": {
"name": "Orb Fah"
}
}
With the help of JSON Helper I get the JSON to a more usable format (for AppleScript).
{|3|:{|name|:"Tri 3"}, |1|:{|name|:"Tri 1"}, |4|:{|name|:"Orb Dave"}, |2|:{|name|:"Tri 2"}, |5|:{|name|:"Orb Fah"}}
I can then use this code to get a list of "lights" the objects in question:
set lights to (every item in theReturn) as list
repeat with n from 1 to count of lights
set light to item n of lights
log n & light
end repeat
From that, I get:
(*1, Tri 3*)
(*2, Tri 1*)
(*3, Orb Dave*)
(*4, Tri 2*)
(*5, Orb Fah*)
You may notice the result is not in the desired order. The index is the index within the list of lights; it's not the number that appears at the top of the object. If you look at the top two pre-formatted areas, you'll see that items 1, 2 and 3 are Tri 1, Tri 2, and Tri 3. It is correct that Tri 3 comes first, Tri 1 second, and an Orb is third.
What I need to do is find a way to iterate through the JSON in any order (sorted or not) and be able to line up "1" with "Tri 1", "3" with "Tri 3" and "5" with "Orb Fah". But I can't find ANY way to interact with the returned JSON that lets me reference the third light and return its name. The ONLY way I can seem to be able to do it is to hard code the light indexes, such that:
log |name| of |1| of theReturn
log |name| of |2| of theReturn
log |name| of |3| of theReturn
log |name| of |4| of theReturn
log |name| of |5| of theReturn
which gives me the correct light with the correct name:
(*Tri 1*)
(*Tri 2*)
(*Tri 3*)
(*Orb Dave*)
(*Orb Fah*)
I'm thinking the problem is arising because the light ID doesn't have a descriptor of sorts. That I can't change, but I need to iterate through them programmatically. Hard coding them as above is not acceptable.
Any help would be appreciated
You are dealing with a list of records here, not a list of lists. Records are key/value pairs. They do not have indexes like a list. That makes it easy if you know the keys, because you just ask for the one you want. And your records have records inside them, so you have 2 layers of records. Therefore if you want the value of the |name| record corresponding to the |3| record, then ask for it as you've discovered...
set jsonRecord to {|3|:{|name|:"Tri 3"}, |1|:{|name|:"Tri 1"}, |4|:{|name|:"Orb Dave"}, |2|:{|name|:"Tri 2"}, |5|:{|name|:"Orb Fah"}}
set record3name to |name| of |3| of jsonRecord
The downside of records in applescript is that there is no command to find the record keys. Other programming languages give you the tools to find the keys (like objective-c) but applescript does not. You have to know them ahead of time and use them as I showed.
If you don't know the keys ahead of time then you can either use JSON Helper to give you the results in a different form or use a different programming language (python, ruby, etc) to extract the information from the records.
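For instance, a minimal Python sketch (assuming the JSON shown above has been saved to a file; the path here is just illustrative) that walks the keys in numeric order and pairs each index with its name:
import json

with open("/Users/mxn/Desktop/tri.json") as f:   # path is illustrative
    lights = json.load(f)

for key in sorted(lights, key=int):              # "1".."5" sorted numerically
    print(key, lights[key]["name"])
# 1 Tri 1
# 2 Tri 2
# 3 Tri 3
# 4 Orb Dave
# 5 Orb Fah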
One other option you have is to just use the json text itself without using JSON Helper. For example, if you have the json as text then you can extract the information using standard applescript commands for text objects. Your json text has the information you want on the 3rd line, the 6th, 9th etc. You could use that to your advantage and do something like this...
set jsonText to "{
\"1\": {
\"name\": \"Tri 1\"
},
\"2\": {
\"name\": \"Tri 2\"
},
\"3\": {
\"name\": \"Tri 3\"
},
\"4\": {
\"name\": \"Orb Dave\"
},
\"5\": {
\"name\": \"Orb Fah\"
}
}"
set jsonList to paragraphs of jsonText
set namesList to {}
set AppleScript's text item delimiters to ": \""
repeat with i from 3 to count of jsonList by 3
set theseItems to text items of (item i of jsonList)
set end of namesList to text 1 through -2 of (item 2 of theseItems)
end repeat
set AppleScript's text item delimiters to ""
return namesList
For each index, loop through all the items in the list looking for the one whose name matches the index:
tell application "System Events"
-- Convert the JSON file to a property list using plutil.
do shell script "plutil -convert xml1 /Users/mxn/Desktop/tri.json -o /Users/mxn/Desktop/tri.plist"
-- Read in the plist
set theItems to every property list item of property list file "/Users/mxn/Desktop/tri.plist"
set theLights to {}
-- Iterate once per item in the plist.
repeat with i from 1 to count of theItems
set theName to i as text
-- Find the item whose name is the current index.
repeat with theItem in theItems
if theItem's name is theName then
-- We found it, so add it to the results.
set theValue to theItem's value
copy {i, theValue's |name|} to the end of theLights
-- Move on to the next index.
exit repeat
end if
end repeat
end repeat
return theLights
end tell
Result:
{{1, "Tri 1"}, {2, "Tri 2"}, {3, "Tri 3"}, {4, "Orb Dave"}, {5, "Orb Fah"}}
Ideally, instead of the nested loop, we’d be able to say something like this:
set theName to i as text
set theItem to (the first item in theItems whose name is theName)
But unfortunately that produces an error.
This solution also demonstrates an alternative to JSON Helper: you can convert the JSON file to a property list using the handy plutil command line tool and use System Events' built-in support for property lists.