Nested JSON structure modelling to a NoSQL attribute

I want to create a table in DynamoDB with the below structure.
{
  "id" : 123,
  "name" : "xyz",
  "info" : {
    "key1" : "val1",
    "key2" : {
      "key3" : [
        "val3"
      ]
    }
  }
}
The simplest solution is
| id  | name | info          |
|-----|------|---------------|
| 123 | xyz  | {json string} |
But what are some better solutions, other than making "key1"/"key2" top-level attributes? Please help with a more optimal solution!
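For illustration only (this is not from the question): a minimal sketch of what storing "info" as a native DynamoDB Map attribute could look like, written in Scala against the AWS SDK for Java v2. The table name "MyTable" and the default client setup are assumptions. Unlike an opaque JSON string, a Map attribute keeps the nested keys addressable with document paths such as info.key2.key3 in projection, condition and update expressions.

// Sketch only: "info" stored as a nested Map attribute instead of a JSON string.
import software.amazon.awssdk.services.dynamodb.DynamoDbClient
import software.amazon.awssdk.services.dynamodb.model.{AttributeValue, PutItemRequest}
import scala.jdk.CollectionConverters._

object PutNestedItem extends App {
  val client = DynamoDbClient.create()

  // Nested document built from native DynamoDB types (S, N, M, L).
  val info = AttributeValue.builder().m(Map(
    "key1" -> AttributeValue.builder().s("val1").build(),
    "key2" -> AttributeValue.builder().m(Map(
      "key3" -> AttributeValue.builder().l(
        AttributeValue.builder().s("val3").build()
      ).build()
    ).asJava).build()
  ).asJava).build()

  val item = Map(
    "id"   -> AttributeValue.builder().n("123").build(),
    "name" -> AttributeValue.builder().s("xyz").build(),
    "info" -> info
  ).asJava

  // Hypothetical table name; "id" would be the partition key.
  client.putItem(PutItemRequest.builder().tableName("MyTable").item(item).build())
}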


circe doesn't see field when it contains an array

I've got 2 "tests", of which the one where I'm trying to decode a user works, but the one where I'm trying to decode a list of users doesn't:
import User._
import io.circe._
import io.circe.syntax._
import io.circe.parser.decode
class UserSuite extends munit.FunSuite:
  test("List of users can be decoded") {
    val json = """|{
                  | "data" : [
                  | {
                  | "id" : "someId",
                  | "name" : "someName",
                  | "username" : "someusername"
                  | },
                  | {
                  | "id" : "someId",
                  | "name" : "someName",
                  | "username" : "someusername"
                  | }
                  | ]
                  |}""".stripMargin
    println(decode[List[User]](json))
  }

  test("user can be decoded") {
    val json = """|{
                  | "data" : {
                  | "id" : "someId",
                  | "name" : "someName",
                  | "username" : "someusername"
                  | }
                  |}""".stripMargin
    println(decode[User](json))
  }
The failing one produces
Left(DecodingFailure(List, List(DownField(data))))
despite the fact that both the json's relevant structure and the decoders (below) are the same.
final case class User(
  id: String,
  name: String,
  username: String
)

object User:
  import io.circe.generic.semiauto.deriveDecoder

  given Decoder[List[User]] =
    deriveDecoder[List[User]].prepare(_.downField("data"))
  given Decoder[User] =
    deriveDecoder[User].prepare(_.downField("data"))
As far as I understand this should work, even according to one of Travis' older replies, but it doesn't.
Is this a bug? Am I doing something wrong?
For reference, this is Scala 3.2.0 and circe 0.14.1.
The thing is that you need two different decoders for User: one expecting a "data" field, to decode the second JSON, and one not expecting a "data" field, to be used while deriving the decoder for a list. Otherwise the first JSON would have to be
"""|{
| "data" : [
| {
| "data" :
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| },
| {
| "data" :
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| }
| ]
|}""
It's better to be explicit now:
import io.circe.Decoder
import io.circe.generic.semiauto

final case class User(
  id: String,
  name: String,
  username: String
)

object User {
  val userDec: Decoder[User] = semiauto.deriveDecoder[User]
  val preparedUserDec: Decoder[User] = userDec.prepare(_.downField("data"))
  val userListDec: Decoder[List[User]] = {
    implicit val dec: Decoder[User] = userDec
    Decoder[List[User]].prepare(_.downField("data"))
  }
}
val json =
"""|{
| "data" : [
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| },
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| ]
|}""".stripMargin
decode[List[User]](json)(User.userListDec)
// Right(List(User(someId,someName,someusername), User(someId,someName,someusername)))
val json1 =
"""|{
| "data" : {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
|}""".stripMargin
decode[User](json1)(User.preparedUserDec)
// Right(User(someId,someName,someusername))

How to transform nested JSON to csv using jq

I have tried to transform JSON in the following format to CSV using jq on the Linux command line, but with no success. Any help or guidance would be appreciated.
{
"dir/file1.txt": [
{
"Setting": {
"SettingA": "",
"SettingB": null
},
"Rule": "Rulechecker.Rule15",
"Description": "",
"Line": 11,
"Link": "www.sample.com",
"Message": "Some message",
"Severity": "error",
"Span": [
1,
3
],
"Match": "[id"
},
{
"Setting": {
"SettingA": "",
"SettingB": null
},
"Check": "Rulechecker.Rule16",
"Description": "",
"Line": 27,
"Link": "www.sample.com",
"Message": "Fix the rule",
"Severity": "error",
"Span": [
1,
3
],
"Match": "[id"
}
],
"dir/file2.txt": [
{
"Setting": {
"SettingA": "",
"SettingB": null
},
"Rule": "Rulechecker.Rule17",
"Description": "",
"Line": 51,
"Link": "www.example.com",
"Message": "Fix anoher 'rule'?",
"Severity": "error",
"Span": [
1,
18
],
"Match": "[source,terminal]\n----\n"
}
]
}
Ultimately, I want to present a matrix with dir/file1.txt, dir/file2.txt as rows on the left of the matrix, and all the keys to be presented as column headings, with the corresponding values.
| Filename | SettingA | SettingB | Rule | More columns... |
| -------- | -------------- | -------------- | -------------- | -------------- |
| dir/file1.txt | | null | Rulechecker.Rule15 | |
| dir/file1.txt | | null | Rulechecker.Rule16 | |
| dir/file2.txt | | null | Rulechecker.Rule17 | |
Iterate over the top-level key-value pairs obtained by to_entries to get access to the key names, then once again over its content array in .value to get the array items. Also note that newlines as present in the sample's last .Match value cannot be used as is in a line-oriented format such as CSV. Here, I chose to replace them with the literal string \n using gsub.
jq -r '
to_entries[] | . as {$key} | .value[] | [$key,
(.Setting | .SettingA, .SettingB),
.Rule // .Check, .Description, .Line, .Link,
.Message, .Severity, .Span[], .Match
| strings |= gsub("\n"; "\\n")
] | @csv
'
"dir/file1.txt","",,"Rulechecker.Rule15","",11,"www.sample.com","Some message","error",1,3,"[id"
"dir/file1.txt","",,"Rulechecker.Rule16","",27,"www.sample.com","Fix the rule","error",1,3,"[id"
"dir/file2.txt","",,"Rulechecker.Rule17","",51,"www.example.com","Fix anoher 'rule'?","error",1,18,"[source,terminal]\n----\n"
If you just want to dump all the values in the order they appear, you can simplify this by using .. | scalars to traverse the levels of the document:
jq -r '
to_entries[] | . as {$key} | .value[] | [$key,
(.. | scalars) | strings |= gsub("\n"; "\\n")
] | @csv
'
"dir/file1.txt","",,"Rulechecker.Rule15","",11,"www.sample.com","Some message","error",1,3,"[id"
"dir/file1.txt","",,"Rulechecker.Rule16","",27,"www.sample.com","Fix the rule","error",1,3,"[id"
"dir/file2.txt","",,"Rulechecker.Rule17","",51,"www.example.com","Fix anoher 'rule'?","error",1,18,"[source,terminal]\n----\n"
As for the column headings, for the first case I'd add them manually, as you spell out each value path anyway. For the latter case it will be a little more complicated, as not all columns have immediate names (what should the items of the Span array be called?), and some seem to change (in the second record, the Rule column is called Check). You could, however, stick to the names of the first record, taking the deepest field name either as is or with the array indices appended. Something along these lines would do:
jq -r '
to_entries[0].value[0] | ["Filename", (
path(..|scalars) | .[.[[map(strings)|last]]|last:] | join(".")
)] | @csv
'
"Filename","SettingA","SettingB","Rule","Description","Line","Link","Message","Severity","Span.0","Span.1","Match"

How to make a nested JSON response in Go?

I am new to Go and need some help.
In my PostgreSQL database I have 4 tables. They are called: surveys, questions, options and surveys_questions_options.
They look like this:
surveys table:
| survey_id (uuid4) | survey_name (varchar) |
|--------------------------------------|-----------------------|
| 0cf1cf18-d5fd-474e-a8be-754fbdc89720 | April |
| b9fg55d9-n5fy-s7fe-s5bh-856fbdc89720 | May |
questions table:
| question_id (int) | question_text (text) |
|-------------------|------------------------------|
| 1 | What is your favorite color? |
options table:
| option_id (int) | option_text (text) |
|-------------------|--------------------|
| 1 | red |
| 2 | blue |
| 3 | grey |
| 4 | green |
| 5 | brown |
surveys_questions_options table combines data from all three previous tables:
| survey_id | question_id | option_id |
|--------------------------------------|-------------|-----------|
| 0cf1cf18-d5fd-474e-a8be-754fbdc89720 | 1 | 1 |
| 0cf1cf18-d5fd-474e-a8be-754fbdc89720 | 1 | 2 |
| 0cf1cf18-d5fd-474e-a8be-754fbdc89720 | 1 | 3 |
| b9fg55d9-n5fy-s7fe-s5bh-856fbdc89720 | 1 | 3 |
| b9fg55d9-n5fy-s7fe-s5bh-856fbdc89720 | 1 | 4 |
| b9fg55d9-n5fy-s7fe-s5bh-856fbdc89720 | 1 | 5 |
How can I make a nested JSON response in Go? I use the GORM library. I want a JSON response like this:
[
{
"survey_id": "0cf1cf18-d5fd-474e-a8be-754fbdc89720",
"survey_name": "April",
"questions": [
{
"question_id": 1,
"question_text": "What is your favorite color?",
"options": [
{
"option_id": 1,
"option_text": "red"
},
{
"option_id": 2,
"option_text": "blue"
},
{
"option_id": 3,
"option_text": "grey"
}
]
}
]
},
{
"survey_id": "b9fg55d9-n5fy-s7fe-s5bh-856fbdc89720",
"survey_name": "May",
"questions": [
{
"question_id": 1,
"question_text": "What is your favorite color?",
"options": [
{
"option_id": 3,
"option_text": "grey"
},
{
"option_id": 4,
"option_text": "green"
},
{
"option_id": 5,
"option_text": "brown"
}
]
}
]
}
]
My models look like this:
type Survey struct {
    SurveyID   string `gorm:"primary_key" json:"survey_id"`
    SurveyName string `gorm:"not null" json:"survey_name"`
    Questions  []Question
}

type Question struct {
    QuestionID   int    `gorm:"primary_key" json:"question_id"`
    QuestionText string `gorm:"not null;unique" json:"question_text"`
    Options      []Option
}

type Option struct {
    OptionID   int    `gorm:"primary_key" json:"option_id"`
    OptionText string `gorm:"not null;unique" json:"option_text"`
}
I'm not sure about the GORM part, but with JSON you need to add struct tags on the nested objects as well:
type Survey struct {
    ...
    Questions []Question `json:"questions"`
}

type Question struct {
    ...
    Options []Option `json:"options"`
}
We're missing some scope from your code, and so it's quite hard to point you in the right direction. Are you asking about querying GORM so you get []Survey, or are you asking about marshalling []Survey? Anyway, you should add the tag to Questions too, as slomek replied.
However, try this:
To fetch nested data in a many-to-many relation:
type Survey struct {
    gorm.Model
    SurveyID   string      `gorm:"primary_key" json:"survey_id"`
    SurveyName string      `gorm:"not null" json:"survey_name"`
    Questions  []*Question `gorm:"many2many:survey_questions;"`
}

surveys := []*model.Survey{}
db := dbSession.Where(&model.Survey{SurveyID: id}).Preload("Questions").Find(&surveys)

How to use MuleSoft DataWeave to transform to JSON with grouping and string to array

I have a database call that provides a payload as described below. How do I use DataWeave to transform that payload to JSON in the format provided below the example table?
| company | status | license_id | acct status | last_inv_date | acctnum | owner | entlmt | roles | subscribed | attr_type | attr_key | attr_value |
|---------|--------|------------|-------------|---------------|---------|-------|--------|-------|------------|-----------|----------|------------|
| company name 1 | Active | 02iq0000000xlBBAAY | Active | 2016-02-25 22:50:04 | A100001135 | myemail@email.com | Standard | Admin;wcl_admin;wcl_support | 1 | cloud | cloud_num_247_t_streams | 1 |
| company name 1 | Active | 02iq0000000xlBBAAY | Active | 2016-02-25 22:50:04 | A100001135 | myemail@email.com | Standard | Admin;wcl_admin;wcl_support | 1 | cloud | api_access | 1 |
| company name 1 | Active | 02iq0000000xlBBAAY | Active | 2016-02-25 22:50:04 | A100001135 | myemail@email.com | Standard | Admin;wcl_admin;wcl_support | 1 | cloud | cloud_num_247_p_streams | 1 |
| company name 2 | Active | 02iq0000000xlBBBBZ | Active | 2016-02-25 22:50:04 | A100001166 | myblah1@email.com | Standard | Admin | 1 | cloud | cloud_num_247_p_streams | 0 |
| company name 2 | Active | 02iq0000000xlBBBBZ | Active | 2016-02-25 22:50:04 | A100001166 | myblah1@email.com | Standard | Admin | 1 | cloud | api_access | 1 |
Final output desired in json:
{
"records": [
{
"company": "company name 1",
"has_active_subscriptions": true,
"license_status": "Active",
"license_id": "02iq0000000xlBBAAY",
"account_status": "Prospect",
"last_invoice_date": "2016-02-25 22:50:04",
"cloud_owner_email": "myemail#email.com",
"role": [
"Admin",
"wcl_admin",
"wcl_support"
],
"account_number": "A100001135",
"attributes": {
"cloud": {
"api_access": 1,
"cloud_num_247_t_streams": 1,
"cloud_num_247_p_streams": 1
}
},
"entitlement_plan": "Standard"
},
{
"company": "company name 2",
"has_active_subscriptions": true,
"license_status": "Active",
"license_id": "02iq0000000xlBBBBZ",
"account_status": "Active",
"last_invoice_date": "2016-02-25 22:50:04",
"cloud_owner_email": "myblah#email.com",
"role": [
"Admin"
],
"account_number": "A100001166",
"attributes": {
"cloud": {
"cloud_num_247_p_streams": 0,
"api_access": 1
}
},
"entitlement_plan": "Standard"
}
]
}
Supposing that the DataWeave component is placed just after the database component, and the result of the query is still the payload: the payload is then an ArrayList of CaseInsensitiveHashMap, similar to the records object in your JSON.
So I would try something like:
%dw 1.0
%output application/json
---
records: payload
You don't need DataWeave if you just want to transform the result set into JSON.
You can use the ObjectToJson transformer to do that.

Writing nested JSON in Spark Scala

My Spark SQL query generates an output by joining two tables which have one-to-many cardinality. I have to convert the data into JSON.
This is what the output of the query looks like:
| Address_id_parent | Address_id_child | Country_child | city_child |
|-------------------|------------------|---------------|------------|
| 1                 | 1                | India         | Delhi      |
| 1                 | 1                | US            | NewYork    |
| 1                 | 1                | US            | NewJersey  |
The above data has to be converted to JSON in this way.
{
  "Address": {
    "Address_id_parent": "1"
  },
  "Address-details": [{
    "Address_id_child": "1",
    "location": [{
        "country": "India",
        "city": "Delhi"
      },
      {
        "country": "US",
        "city": "NewYork"
      },
      {
        "country": "US",
        "city": "NewJersey"
      }
    ]
  }]
}
How can I accomplish this?
Check the DataFrame write interface with JSON:
df.write.format("json").save(path)