JSON data in an object in Opa

I would like to put the JSON data into an object in Opa.
I saw a comment (parsing a webclient.Result content in OPA) that describes how to do that, but I tried that approach and can't make it work with my application.
Here's my code:
type service = {
  string name,
  string kind,
  string vendor
}
contentResult = WebClient.Get.try_get(url);
match (contentResult) {
  case { failure : _ }: "Error, could not connect";
  case ~{success}:
    match (Json.deserialize(success.content)) {
      case {none}: "Error the string doesn't respect json specification";
      case {some: jsast}:
        match (OpaSerialize.Json.unserialize_unsorted(jsast)) {
          case {none}: "The type 'result' doesn't match jsast";
          case {some = (value : service)}: value.name;
        }
    }
}
I want to create an object of type service from the data in the JSON file.
The last match seems to be the problem, because the output is:
The type 'result' doesn't match jsast
In fact, my JSON file has more keys than the three above (including a "type" key where my Opa type declares "kind"); here is my log:
X Data: [
  {
    "name": "redis-d80d7",
    "type": "key-value",
    "vendor": "redis",
    "version": "2.2",
    "tier": "free",
    "properties": {},
    "meta": {
      "created": 1310772000,
      "updated": 1310772000,
      "tags": [
        "redis",
        "redis-2.2",
        "key-value",
        "nosql"
      ],
      "version": 1
    }
  },
JsAST: {List = [{Record = [{f1 = name; f2 = {String = redis-d80d7}},
{f1 = type; f2 = {String = key-value}}, {f1 = vendor; f2 = {String = redis}},
{f1 = version; f2 = {String = 2.2}}, {f1 = tier; f2 = {String = free}},
{f1 = properties; f2 = {Record = []}}, {f1 = meta; f2 = {Record =
[{f1 = created; f2 = {Int = 1310772000}}, {f1 = updated;
f2 = {Int = 1310772000}}, {f1 = tags; f2 = {List = [{String = redis},
{String = redis-2.2}, {String = key-value}, {String = nosql}]}},
{f1 = version; f2 = {Int = 1}}]}}]}


JSON to Excel PowerQuery import - how to get a row per nested field

I'm looking to use Excel Power Query to import some JSON that looks like the following (but much bigger, with more fields etc.):
example-records.json
{
  "records": {
    "record_id_1": {
      "file_no": "5792C",
      "loads": {
        "load_id_1": {
          "docket_no": "3116115"
        },
        "load_id_2": {
          "docket_no": "3116118"
        },
        "load_id_3": {
          "docket_no": "3208776"
        }
      }
    },
    "record_id_2": {
      "file_no": "5645C",
      "loads": {
        "load_id_4": {
          "docket_no": "2000527155"
        },
        "load_id_5": {
          "docket_no": "2000527156"
        },
        "load_id_6": {
          "docket_no": "2000527146"
        }
      }
    }
  }
}
I want to get a table like the following at the load_id / docket level, with a row per load_id:
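record_id     file_no   load_id    docket_no
record_id_1   5792C     load_id_1  3116115
record_id_1   5792C     load_id_2  3116118
record_id_1   5792C     load_id_3  3208776
record_id_2   5645C     load_id_4  2000527155
record_id_2   5645C     load_id_5  2000527156
record_id_2   5645C     load_id_6  2000527146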
What I've tried
Clicking buttons in the Power Query UI, I get the following. The problem is that I can't include a file_no column, and this approach doesn't work when there are lots of load ids.
let
    Source = Json.Document(File.Contents("H:\Software\Site Apps\example-records.json")),
    records = Source[records],
    #"Converted to Table" = Record.ToTable(records),
    #"Expanded Value" = Table.ExpandRecordColumn(#"Converted to Table", "Value", {"file_no", "loads"}, {"Value.file_no", "Value.loads"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded Value",{"Value.file_no"}),
    #"Expanded Value.loads" = Table.ExpandRecordColumn(#"Removed Columns", "Value.loads", {"load_id_1", "load_id_2", "load_id_3", "load_id_4", "load_id_5", "load_id_6"}, {"Value.loads.load_id_1", "Value.loads.load_id_2", "Value.loads.load_id_3", "Value.loads.load_id_4", "Value.loads.load_id_5", "Value.loads.load_id_6"}),
    #"Unpivoted Columns" = Table.UnpivotOtherColumns(#"Expanded Value.loads", {"Name"}, "Attribute", "Value"),
    #"Expanded Value1" = Table.ExpandRecordColumn(#"Unpivoted Columns", "Value", {"docket_no"}, {"Value.docket_no"})
in
    #"Expanded Value1"
You can use:
let
    Source = JSON(Json.Document(File.Contents("c:\temp\example.json"))),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"Name.1", "Name.3", "Value"}),
    #"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each if [Name.3]=null then [Value] else null),
    #"Filled Down" = Table.FillDown(#"Added Custom",{"Custom"}),
    #"Filtered Rows" = Table.SelectRows(#"Filled Down", each ([Name.3] <> null))
in
    #"Filtered Rows"
This is based on a function I named JSON, which comes from Imke Feldmann (https://www.thebiccountant.com/2018/06/17/automatically-expand-all-fields-from-a-json-document-in-power-bi-and-power-query/) and is reproduced below:
let
func = (JSON) =>
let
Source = JSON,
ParseJSON = try Json.Document(Source) otherwise Source,
TransformForTable =
if Value.Is(ParseJSON, type record) then
Record.ToTable(ParseJSON)
else
#table(
{"Name", "Value"},
List.Zip({List.Repeat({0}, List.Count(ParseJSON)), ParseJSON})
),
AddSort = Table.Buffer(Table.AddColumn(TransformForTable, "Sort", each 0)),
LG = List.Skip(
List.Generate(
() => [Next = AddSort, Counter = 1, AddIndex = #table({"Sort"}, {{""}})],
each [AddIndex]{0}[Sort] <> "End",
each [
AddIndex = Table.AddIndexColumn([Next], "Index", 0, 1),
MergeSort = Table.CombineColumns(
Table.TransformColumnTypes(
AddIndex,
{{"Sort", type text}, {"Index", type text}},
"en-GB"
),
{"Sort", "Index"},
Combiner.CombineTextByDelimiter(".", QuoteStyle.None),
"Sort"
),
PJson = Table.TransformColumns(
MergeSort,
{{"Value", each try Json.Document(_) otherwise _}}
),
AddType = Table.AddColumn(
PJson,
"Type",
each
if Value.Is([Value], type record) then
"Record"
else if Value.Is([Value], type list) then
"List"
else if Value.Is([Value], type table) then
"Table"
else
"other"
),
AddStatus = Table.AddColumn(
AddType,
"Status",
each if [Type] = "other" then "Finished" else "Unfinished"
),
Finished = Table.SelectRows(AddStatus, each ([Status] = "Finished")),
Unfinished = Table.SelectRows(AddStatus, each ([Status] = "Unfinished")),
AddNext = Table.AddColumn(
Unfinished,
"Next",
each if [Type] = "Record" then {[Value]} else [Value]
),
RemoveCols = Table.RemoveColumns(AddNext, {"Value", "Type", "Status"}),
ExpandNext = Table.ExpandListColumn(RemoveCols, "Next"),
AddIndex2 = Table.AddIndexColumn(ExpandNext, "Index", 0, 1),
MergeSort2 = Table.CombineColumns(
Table.TransformColumnTypes(
AddIndex2,
{{"Sort", type text}, {"Index", type text}},
"en-GB"
),
{"Sort", "Index"},
Combiner.CombineTextByDelimiter(".", QuoteStyle.None),
"Sort"
),
TransformRecord = Table.TransformColumns(
MergeSort2,
{
{
"Next",
each try
Record.ToTable(_)
otherwise
try
if Value.Is(Text.From(_), type text) then
#table({"Value"}, {{_}})
else
_
otherwise
_
}
}
),
FilterOutNulls = Table.SelectRows(TransformRecord, each [Next] <> null),
Next =
if Table.IsEmpty(FilterOutNulls) then
#table({"Sort"}, {{"End"}})
else if Value.Is(FilterOutNulls[Next]{0}, type table) = true then
Table.ExpandTableColumn(
FilterOutNulls,
"Next",
{"Name", "Value"},
{"Name." & Text.From([Counter]), "Value"}
)
else
Table.RenameColumns(FilterOutNulls, {{"Next", "Value"}}),
Counter = [Counter] + 1
],
each Table.AddColumn([Finished], "Level", (x) => _[Counter] - 2)
)
),
Check = LG{2},
Combine = Table.Combine(LG),
Clean = Table.RemoveColumns(Combine, {"Status", "Type"}),
Trim = Table.TransformColumns(Clean, {{"Sort", each Text.Trim(_, "."), type text}}),
// Dynamic Padding for the sort-column so that it sorts by number in text strings
SelectSort = Table.SelectColumns(Trim, {"Sort"}),
SplitSort = Table.AddColumn(
SelectSort,
"Custom",
each List.Transform(try Text.Split([Sort], ".") otherwise {}, Number.From)
),
ToTable = Table.AddColumn(
SplitSort,
"Splitted",
each Table.AddIndexColumn(Table.FromColumns({[Custom]}), "Pos", 1, 1)
),
ExpandTable = Table.ExpandTableColumn(ToTable, "Splitted", {"Column1", "Pos"}),
GroupPos = Table.Group(
ExpandTable,
{"Pos"},
{{"All", each _, type table}, {"Max", each List.Max([Column1]), type text}}
),
Digits = Table.AddColumn(GroupPos, "Digits", each Text.Length(Text.From([Max]))),
FilteredDigits = List.Buffer(Table.SelectRows(Digits, each ([Digits] <> null))[Digits]),
SortNew = Table.AddColumn(
Trim,
"SortBy",
each Text.Combine(
List.Transform(
List.Zip({Text.Split([Sort], "."), List.Positions(Text.Split([Sort], "."))}),
each Text.PadStart(_{0}, FilteredDigits{_{1}}, "0")
),
"."
)
),
FilterNotNull = Table.SelectRows(SortNew, each ([Value] <> null)),
Reorder = Table.ReorderColumns(
FilterNotNull,
{"Value", "Level", "Sort", "SortBy"}
& List.Difference(
Table.ColumnNames(FilterNotNull),
{"Value", "Level", "Sort", "SortBy"}
)
),
Dots = Table.AddColumn(
#"Reorder",
"Dots",
each List.Select(Table.ColumnNames(#"Reorder"), (l) => Text.StartsWith(l, "Name"))
),
// This sort is just to view in the query editor. When loaded to the data model it will not be kept. Use "Sort by column" in the data model instead.
Sort = Table.Sort(Dots, {{"SortBy", Order.Ascending}})
in
Sort,
documentation = [
Documentation.Name = " Table.JsonExpandAll ",
Documentation.Description
= " Dynamically expands the <Json> Record and returns values in one column and additional columns to navigate. ",
Documentation.LongDescription
= " Dynamically expands the <Json> Record and returns values in one column and additional columns to navigate. Input can be JSON in binary format or the already parsed JSON. ",
Documentation.Category = " Table ",
Documentation.Version = " 1.2: Added column [Dots] (22/02/2019)",
Documentation.Author = " Imke Feldmann: www.TheBIccountant.com . ",
Documentation.Examples = {[Description = " ", Code = " ", Result = " "]}
]
in
Value.ReplaceType(func, Value.ReplaceMetadata(Value.Type(func), documentation))
Managed to get there using an added custom column, which enables the expansion to one load_id per row:
#"Added Custom" = Table.AddColumn(#"Expanded Value", "Custom", each Record.ToTable([Value.loads]))
The full query:
let
    Source = Json.Document(File.Contents("H:\Software\Site Apps\example-records.json")),
    records = Source[records],
    #"Converted to Table" = Record.ToTable(records),
    #"Expanded Value" = Table.ExpandRecordColumn(#"Converted to Table", "Value", {"file_no", "loads"}, {"Value.file_no", "Value.loads"}),
    #"Added Custom" = Table.AddColumn(#"Expanded Value", "Custom", each Record.ToTable([Value.loads])),
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Value.loads"}),
    #"Expanded Custom" = Table.ExpandTableColumn(#"Removed Columns", "Custom", {"Name", "Value"}, {"Custom.Name", "Custom.Value"}),
    #"Expanded Custom.Value" = Table.ExpandRecordColumn(#"Expanded Custom", "Custom.Value", {"docket_no"}, {"Custom.Value.docket_no"}),
    #"Renamed Columns" = Table.RenameColumns(#"Expanded Custom.Value",{{"Name", "record_id"}, {"Value.file_no", "file_no"}, {"Custom.Name", "load_id"}, {"Custom.Value.docket_no", "docket_no"}})
in
    #"Renamed Columns"

How to decode a JSON array into Elm list of custom types

I want to decode a JSON array of payload objects into List Payload where each Payload is a custom type:
type Payload
    = PayloadP1 P1
    | PayloadP2 P2
Given the decoders for P1 and P2 below, how do I decode a Payload?
type alias P1 =
    { id : Int
    , st : String
    }

type alias P2 =
    { id : Int
    , s1 : String
    , s2 : String
    }

type Payload
    = PayloadP1 P1
    | PayloadP2 P2

type alias PayloadQueue = List Payload

decodeP1 : Jd.Decoder P1
decodeP1 =
    Jd.map2 P1
        (Jd.field "id" Jd.int)
        (Jd.field "st" Jd.string)

decodeP2 : Jd.Decoder P2
decodeP2 =
    Jd.map3 P2
        (Jd.field "id" Jd.int)
        (Jd.field "p1" Jd.string)
        (Jd.field "p2" Jd.string)

decodePayload =
    Jd.field ".type" Jd.string
        |> Jd.andThen decodePayload_

{-
decodePayload_ : String -> Jd.Decoder Payload
decodePayload_ ptype =
    case ptype of
        "P1" -> decodeP1
        "P2" -> decodeP2
-}
json_str = """[
{".type" : "P1", "id" : 1, "st" : "st"},
{".type" : "P2", "id" : 2, "p1" : "p1", "p2" : "p2"},
]"""
You need to wrap P1 and P2 in PayloadP1 and PayloadP2 respectively in order to return a common type from each branch, which you can do using map. You also need to account for the possibility that the type field is neither "P1" nor "P2". In that case you can either provide a default or return an error using fail. I've done the latter below.
decodePayload_ : String -> Jd.Decoder Payload
decodePayload_ ptype =
    case ptype of
        "P1" -> decodeP1 |> Jd.map PayloadP1
        "P2" -> decodeP2 |> Jd.map PayloadP2
        _ -> Jd.fail "invalid type"
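The whole array can then be decoded with a list decoder. A minimal sketch, assuming elm/json 1.x and import Json.Decode as Jd:
decodeQueue : String -> Result Jd.Error PayloadQueue
decodeQueue str =
    -- runs decodePayload on every element of the JSON array
    Jd.decodeString (Jd.list decodePayload) str
Note that the trailing comma in the original json_str would make decoding fail before any of the decoders run, so it has been removed above.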

How to match a top level array in json with specs2

In specs2 you can match an array for elements like this:
val json = """{"products":[{"name":"shirt","price":10, "ids":["1", "2", "3"]},{"name":"shoe","price":5}]}"""

def aProductWith(name: Matcher[JsonType], price: Matcher[JsonType]): Matcher[String] =
  /("name").andHave(name) and /("price").andHave(price)

def haveProducts(products: Matcher[String]*): Matcher[String] =
  /("products").andHave(allOf(products:_*))

json must haveProducts(
  aProductWith(name = "shirt", price = 10) and /("ids").andHave(exactly("1", "2", "3")),
  aProductWith(name = "shoe", price = 5)
)
(Example taken from here: http://etorreborre.github.io/specs2/guide/SPECS2-3.0/org.specs2.guide.Matchers.html)
How do I do the same thing, i.e. match the contents of products, if products is the root element of the JSON? What should haveProducts look like?
val json = """[{"name":"shirt","price":10, "ids":["1", "2", "3"]},{"name":"shoe","price":5}]"""
You can replace /("products").andHave(allOf(products:_*)) with have(allOf(products:_*)), like this:
val json = """[{"name":"shirt","price":10, "ids":["1", "2", "3"]},{"name":"shoe","price":5}]"""

def aProductWith(name: Matcher[JsonType], price: Matcher[JsonType]): Matcher[String] =
  /("name").andHave(name) and /("price").andHave(price)

def haveProducts(products: Matcher[String]*): Matcher[String] = have(allOf(products:_*))

json must haveProducts(
  aProductWith(name = "shirt", price = 10) and /("ids").andHave(exactly("1", "2", "3")),
  aProductWith(name = "shoe", price = 5)
)
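The difference is that /("products") first descends into the "products" key, while a bare have(...) applies its matcher directly to the document root, which in this case is the top-level array.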

Adding information from one JsonBuilder object to another

As the title suggests, I'm trying to add information held in one JsonBuilder object to a second JsonBuilder object.
Currently I have this:
public String buildOneUser(DyveUserDTO user)
{
    def userBuilder = new JsonBuilder()
    userBuilder user.collect { usr ->
        [
            'Name': usr.userName,
            'Allowance': usr.allowance,
            'Total Holidays in Calendar': usr.totalHolidaysInCal,
            'Holidays Booked': usr.numHolidaysBooked,
            'Holidays Taken': usr.numHolidaysTaken,
            'Holidays Remaining': usr.totalHolidaysLeft
        ]
    }
    def userHolidayBuilder = new JsonBuilder()
    userHolidayBuilder user.holidayEvents.collect { usr ->
        [
            'Start Date': usr.startDate,
            'End Date': usr.endDate,
            'Days': usr.days
        ]
    }
    def userAndHolidays = userBuilder + userHolidayBuilder
    return userAndHolidays.toPrettyString()
}
user.holidayEvents is a list of objects representing holidays, and it could be empty or contain any number of objects. This made me hesitant to do something like:
def userBuilder = new JsonBuilder()
userBuilder user.collect { usr ->
    [
        'Name': usr.userName,
        'Allowance': usr.allowance,
        'Total Holidays in Calendar': usr.totalHolidaysInCal,
        'Holidays Booked': usr.numHolidaysBooked,
        'Holidays Taken': usr.numHolidaysTaken,
        'Holidays Remaining': usr.totalHolidaysLeft,
        'Holiday': usr.holidayEvents[0].startDate,
        'Holiday': usr.holidayEvents[0].endDate,
        'Holiday': usr.holidayEvents[0].days
    ]
}
As I would only get the number of holidays I wrote code for. It would also throw an exception if a user had no holidays and I told it to look at usr.holidayEvents[1], since that index is outside the list range.
I've also tried nesting a .collect like this:
def userBuilder = new JsonBuilder()
userBuilder {
    'Name' user.userName,
    'Allowance' user.allowance,
    'Total Holidays in Calendar' user.totalHolidaysInCal,
    'Holidays Booked' user.numHolidaysBooked,
    'Holidays Taken' user.numHolidaysTaken,
    'Holidays Remaining' user.totalHolidaysLeft,
    'Holidays' user.holidayEvents.collect { evt ->
        [
            'Start Date': evt.startDate,
            'End Date': evt.endDate,
            'Days': evt.days
        ]
    }
}
But this returned all the keys except the Holidays key.
Any help would be greatly appreciated!
EDIT - My code now looks like this:
public String buildOneUser(DyveUserDTO user)
{
    def userBuilder = new JsonBuilder()
    userBuilder user.collect { usr ->
        [
            'Name': usr.userName,
            'Allowance': usr.allowance,
            'Total Holidays in Calendar': usr.totalHolidaysInCal,
            'Holidays Booked': usr.numHolidaysBooked,
            'Holidays Taken': usr.numHolidaysTaken,
            'Holidays Remaining': usr.totalHolidaysLeft,
            'Holidays': usr.holidayEvents.collect { evt ->
                [
                    'Start Date': evt.startDate,
                    'End Date': evt.endDate,
                    'Days': evt.days
                ]
            }
        ]
    }
}
EDIT 2 - Sample Code
Method to call:
public String buildOneUser(DyveUserDTO user)
{
    def userBuilder = new JsonBuilder()
    userBuilder {
        Name:
            user.userName
        Allowance:
            user.allowance
        TotalHolidaysInCalendar:
            user.totalHolidaysInCal
        HolidaysBooked:
            user.numHolidaysBooked
        HolidaysTaken:
            user.numHolidaysTaken
        HolidaysRemaining:
            user.totalHolidaysLeft
        Holidays:
            user.holidayEvents.collect { evt ->
                [
                    'Start Date': evt.startDate,
                    'End Date' : evt.endDate,
                    'Days' : evt.days
                ]
            }
    }
    return userBuilder.toPrettyString()
}
User to pass in:
class DyveUserDTO
{
    String firstName = "Foo"
    String userName = "FooBar"
    Integer userID = 42
    BigDecimal numHolidaysBooked = 3
    BigDecimal numHolidaysTaken = 0
    BigDecimal totalHolidaysInCal = 3
    BigDecimal totalHolidaysLeft = 12
    BigDecimal allowance = 12
    List<HolidayObject> holidayEvents = []
}
Holiday objects to go in holidayEvents:
class HolidayObject
{
    public Integer userID = 42
    public String title = "Foo Holiday"
    public String event = "Holiday"
    public String amPm = "Full Day"
    public String name = "Foo"
    public LocalDateTime startDate = LocalDateTime.parse("2015-02-20T00:00:00")
    public LocalDateTime endDate = LocalDateTime.parse("2015-02-20T00:00:00")
    public BigDecimal days = 1
}
class HolidayObject
{
    public Integer userID = 42
    public String title = "Foo Holiday Pm"
    public String event = "Holiday"
    public String amPm = "Pm"
    public String name = "Foo"
    public LocalDateTime startDate = LocalDateTime.parse("2015-02-23T00:00:00")
    public LocalDateTime endDate = LocalDateTime.parse("2015-02-24T00:00:00")
    public BigDecimal days = 2
}
each just returns the list it's called on; collect should be used for the events. See the working code below:
import groovy.json.JsonBuilder

class UserEvent {
    def start
    def end
    def days
}

class User {
    def name
    def events
}

def u1 = new User(name: 'u1', events: [new UserEvent(start: 0, end: 1, days: 1), new UserEvent(start: 0, end: 2, days: 2)])
def u2 = new User(name: 'u2', events: [new UserEvent(start: 0, end: 3, days: 3)])
def users = [u1, u2]

def userBuilder = new JsonBuilder()
userBuilder users.collect { usr ->
    [
        'name': usr.name,
        'events': usr.events.collect { e ->
            [
                start: e.start,
                end: e.end,
                days: e.days,
            ]
        }
    ]
}
print userBuilder.toPrettyString()
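For reference, this prints JSON along these lines (exact whitespace may differ):
[
    {
        "name": "u1",
        "events": [
            {
                "start": 0,
                "end": 1,
                "days": 1
            },
            {
                "start": 0,
                "end": 2,
                "days": 2
            }
        ]
    },
    {
        "name": "u2",
        "events": [
            {
                "start": 0,
                "end": 3,
                "days": 3
            }
        ]
    }
]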
EDIT
Below is a working example:
import groovy.json.JsonBuilder

user = new DyveUserDTO()
def userBuilder = new JsonBuilder()
userBuilder {
    Name user.userName
    Allowance user.allowance
    TotalHolidaysInCalendar user.totalHolidaysInCal
    HolidaysBooked user.numHolidaysBooked
    HolidaysTaken user.numHolidaysTaken
    HolidaysRemaining user.totalHolidaysLeft
    Holidays user.holidayEvents.collect { evt ->
        [
            'Start Date': evt.startDate,
            'End Date' : evt.endDate,
            'Days' : evt.days
        ]
    }
}
println userBuilder.toPrettyString()

class DyveUserDTO {
    String firstName = "Foo"
    String userName = "FooBar"
    Integer userID = 42
    BigDecimal numHolidaysBooked = 3
    BigDecimal numHolidaysTaken = 0
    BigDecimal totalHolidaysInCal = 3
    BigDecimal totalHolidaysLeft = 12
    BigDecimal allowance = 12
    List<HolidayObject> holidayEvents = [new HolidayObject(), new HolidayObject()]
}

class HolidayObject {
    public Integer userID = 42
    public String title = "Foo Holiday"
    public String event = "Holiday"
    public String amPm = "Full Day"
    public String name = "Foo"
    public String startDate = '2015-02-20T00:00:00'
    public String endDate = '2015-02-20T00:00:00'
    public BigDecimal days = 1
}
No colons (:) are needed; see the sample above. Also, I have no Joda dependency, so I replaced the LocalDateTime fields with Strings.

List of string in a record to CSV?

How does one write a list of strings in a record to CSV without the lists being truncated?
CSV Writer:
open Microsoft.FSharp.Reflection

let toSepFile sep header (fileName: string) (s: 'record seq) =
    let schemaType = typeof<'record>
    let fields = FSharpType.GetRecordFields(schemaType)
    let toStr fields =
        fields
        |> Seq.fold (fun res field -> res + field + sep) ""
    use w = new System.IO.StreamWriter(fileName)
    if header then
        let header_str =
            fields
            |> Seq.map (fun field -> field.Name)
            |> toStr
        w.WriteLine(header_str)
    let elemToStr (elem: 'record) =
        // for each field, get its value as a string
        fields
        |> Seq.map (fun field -> string (FSharpValue.GetRecordField(elem, field)))
        |> toStr
    s
    |> Seq.map elemToStr
    |> Seq.iter (fun elem -> w.WriteLine(elem))
Test Data (Deedle test set):
type Person = { Name: string; Age: int; Countries: string list }

let peopleRecds =
    [ { Name = "Joe"; Age = 51; Countries = [ "UK"; "US"; "UK" ] }
      { Name = "Tomas"; Age = 28; Countries = [ "CZ"; "UK"; "US"; "CZ" ] }
      { Name = "Suzanne"; Age = 15; Countries = [ "US" ] } ]
Current CSV Output:
Name Age Countries
"Joe 51 [CZ; UK; US; ... ] "
"Tomas 28 [CZ; UK; US; ... ] "
"Suzanne 15 [US] "
So is it possible see the full list of strings from the CSV output, instead of the "..."?
Edit: Desired output:
Name Age Countries
"Joe 51 [UK; US; UK] "
"Tomas 28 [CZ; UK; US; CZ] "
"Suzanne 15 [US] "
The trouble you're having is that for lists, the ToString() method truncates the output. The workaround is to not use ToString(), but instead use sprintf "%A" yourList.
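A minimal sketch of that change, applied to the elemToStr helper from the writer above:
let elemToStr (elem: 'record) =
    // "%A" pretty-prints the whole value, so list fields are no longer cut off
    fields
    |> Seq.map (fun field -> sprintf "%A" (FSharpValue.GetRecordField(elem, field)))
    |> toStr
Note that "%A" has its own (much higher) truncation limit for very long collections, but it prints small lists like these in full.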