Trying to loop through JsonNode, but the root JsonNode is duplicating the data - json

I am trying to loop through a JsonNode, but the root JsonNode is duplicating the data.
I have been trying to figure it out but am not sure what I am missing. I will explain the issue below.
I am using the Jackson API.
The JSON block is:
{
  "queries": [
    {
      "id": "keyword",
      "values": [
        "test"
      ]
    },
    {
      "id": "include",
      "values": [
        false
      ]
    }
  ]
}
My block of Java code is:
Iterator<String> fieldNames = root.fieldNames();
while (fieldNames.hasNext()) {
    String fieldName = fieldNames.next();
    if (fieldName.equalsIgnoreCase("queries")) {
        nameNode = root.get(fieldName);
    }
    JsonNode nameNode = root.get("queries");
    for (JsonNode node : nameNode) {
        String elementId = node.path("id").asText();
        if (!elementId.isEmpty() && elementId.equalsIgnoreCase("include")) {
            check = true;
            include = node;
        }
    }
}
When the debugger reaches the line for (JsonNode node : nameNode) {, the node value is "id": "keyword", "values": [ "test" ] and nameNode is the JSON shown above, but when it steps to the next line, node.path("id").asText(),
the nameNode variable has "id": "keyword","values": [ "test" ] appended 2 times.
Now the JSON is the original JSON with "id": "keyword","values": [ "test" ] appended 2 times, and it gives a ConcurrentModificationException.

Change your variable node to objNode, because node may be a predefined value in Jackson, and you can also try making the for-each variable final.
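For comparison, here is a minimal, self-contained version of the loop that compiles and runs cleanly (the class name QueryScanner and the embedded JSON string are illustrative; the original code reads root from elsewhere). Note that asText() returns a String, so the id comparison must use String methods, and reading with path() does not modify the tree:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class QueryScanner {
    static final String JSON =
        "{\"queries\":[{\"id\":\"keyword\",\"values\":[\"test\"]},"
        + "{\"id\":\"include\",\"values\":[false]}]}";

    // Returns the first element of "queries" whose id is "include", or null.
    static JsonNode findInclude() throws Exception {
        JsonNode root = new ObjectMapper().readTree(JSON);
        // path() returns a "missing" node (never null) when the field
        // is absent, so iterating it is always safe.
        for (JsonNode objNode : root.path("queries")) {
            // asText() yields a String, so compare with String methods
            String elementId = objNode.path("id").asText();
            if ("include".equalsIgnoreCase(elementId)) {
                return objNode;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findInclude()); // {"id":"include","values":[false]}
    }
}
```

Since nothing here mutates the JsonNode tree, this form of the loop cannot by itself raise a ConcurrentModificationException.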

Terraform aws_dynamodb_table_item - insert multiline JSON into attribute

I have the following terraform config:
resource "aws_dynamodb_table_item" "my_table" {
  table_name = aws_dynamodb_table.my_table.name
  hash_key   = aws_dynamodb_table.my_table.hash_key
  item       = <<ITEM
{
  "id": {"S": "nameAndCodes"},
  "data": {"S": "[
    {
      "code": "03",
      "displayName": "name1"
    },
    {
      "code": "04",
      "displayName": "name2"
    }
  ]"}
}
ITEM
}
When the plan stage executes I receive the error:
Error: Invalid format of "item": Decoding failed: invalid character '\r' in string literal
The only way I can get this to work is to make the whole JSON a single line, as follows:
"data": {"S": "[{\"code\": \"03\", \"displayName\": \"name1\"},{\"code\": \"04\", \"displayName\": \"name2\"}]"
This looks very ugly and is difficult to manage.
Does anyone know how I can enter multiline JSON inside a <<ITEM block?
To resolve the issue, you can use the jsonencode function to set the item value and put the entire JSON object in there. Here is an example from one of my Terraform projects, which creates a DynamoDB table and puts an initial item.
resource "aws_dynamodb_table" "customer_table" {
  name           = "customer"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "customerId"
  stream_enabled = false
  attribute {
    name = "customerId"
    type = "S"
  }
}
resource "aws_dynamodb_table_item" "customer_table_item" {
  table_name = aws_dynamodb_table.customer_table.name
  hash_key   = aws_dynamodb_table.customer_table.hash_key
  depends_on = [aws_dynamodb_table.customer_table]
  item = jsonencode({
    "customerId" : {
      "S" : "1"
    },
    "firstName" : {
      "S" : "John"
    },
    "lastName" : {
      "S" : "Doe"
    },
  })
}
commands:
terraform init
terraform fmt
terraform plan
terraform apply
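Applied to the data from the original question (resource and attribute names taken from the question), the nested JSON string can itself be produced with an inner jsonencode, which escapes the quotes automatically. This is an untested sketch, not a verified configuration:

```terraform
resource "aws_dynamodb_table_item" "my_table" {
  table_name = aws_dynamodb_table.my_table.name
  hash_key   = aws_dynamodb_table.my_table.hash_key

  item = jsonencode({
    "id" : { "S" : "nameAndCodes" },
    # The inner jsonencode turns the list into a JSON string value,
    # so no manual \" escaping is needed.
    "data" : { "S" : jsonencode([
      { "code" : "03", "displayName" : "name1" },
      { "code" : "04", "displayName" : "name2" }
    ]) }
  })
}
```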

Force root XML element to be an array on JSON conversion

I am using the attribute below (http://james.newtonking.com/projects/json) to force XML nodes to be an array when converted to JSON:
<person xmlns:json='http://james.newtonking.com/projects/json' id='1'>
  <name>Alan</name>
  <url>http://www.google.com</url>
  <role json:Array='true'>Admin</role>
</person>
and what I get is
{
  "person": {
    "#id": "1",
    "name": "Alan",
    "url": "http://www.google.com",
    "role": [
      "Admin"
    ]
  }
}
What I wanted is
{
  "person": [
    {
      "#id": "1",
      "name": "Alan",
      "url": "http://www.google.com",
      "role": [
        "Admin"
      ]
    }
  ]
}
Is it possible to force array on root node ?
I am able to get the result you desire by:
1. Adding json:Array='true' to the root element <person>. Since you are already adding this attribute to <role>, adding it to the root element as well should not be a burden.
2. Loading the XML into an XDocument (or XmlDocument) and converting the document itself rather than just the root element XDocument.Root.
Thus:
var xml = @"<person xmlns:json='http://james.newtonking.com/projects/json' id='1' json:Array='true'>
  <name>Alan</name>
  <url>http://www.google.com</url>
  <role json:Array='true'>Admin</role>
</person>";
var xDocument = XDocument.Parse(xml);
var json1 = JsonConvert.SerializeXNode(xDocument, Newtonsoft.Json.Formatting.Indented);
Generates the JSON you want:
{
  "person": [
    {
      "#id": "1",
      "name": "Alan",
      "url": "http://www.google.com",
      "role": [
        "Admin"
      ]
    }
  ]
}
But the following does not:
var json2 = JsonConvert.SerializeXNode(xDocument.Root, Newtonsoft.Json.Formatting.Indented);
A similar result is obtained when using XmlDocument; only the following works as desired:
var xmlDocument = new XmlDocument();
xmlDocument.LoadXml(xml);
var json1 = JsonConvert.SerializeXmlNode(xmlDocument, Newtonsoft.Json.Formatting.Indented);
I confirmed this on both Json.NET 10.0.1 and Json.NET 12.0.1. It's a bit mysterious why serializing the document vs. its root element should make a difference; you might open an issue with Newtonsoft asking why it matters.
Demo fiddle here.

Ruby: How to parse JSON to specific types

I have a JSON file that I want to parse in Ruby. Ruby is completely new to me, but I have to work with it :-)
Here is my little snippet that should do the parsing:
response = File.read("app/helpers/example_announcement.json")
JSON.parse(response)
This works pretty well. The only downside is that I do not know the properties at the point where I use the result; it is not typesafe. So I created the objects for it:
class Announcements
  @@announcements = Hash # a map key => value where key is a string and value is of type Announcement
end
class Announcement
  @@name = ""
  @@status = ""
  @@rewards = Array
end
And this is what the JSON looks like:
{
  "announcements": {
    "id1": {
      "name": "The Diamond Announcement",
      "status": "published",
      "reward": [
        {
          "id": "hardCurrency",
          "amount": 100
        }
      ]
    },
    "id2": {
      "name": "The Normal Announcement",
      "players": [],
      "status": "published",
      "reward": []
    }
  }
}
So I tried JSON parsing like this
response = File.read("app/helpers/example_announcement.json")
JSON.parse(response, Announcements)
But this is not how it works. Can anybody help me with this?

How to set the Json Document attribute in the GetDynamoDB processor in NiFi

I am trying to get data from DynamoDB using the GetDynamoDB processor in NiFi. I have provided all the mandatory fields except the Json Document attribute; I don't know what to set in that field.
Input Data :
{
  "ProductCatalog": [
    {
      "PutRequest": {
        "Item": {
          "Id": { "N": "101" },
          "Title": { "S": "Book 101 Title" },
          "ISBN": { "S": "111-1111111111" },
          "Authors": {
            "L": [
              { "S": "Author1" }
            ]
          },
          "Price": { "N": "2" },
          "Dimensions": { "S": "8.5 x 11.0 x 0.5" },
          "PageCount": { "N": "500" },
          "InPublication": { "BOOL": true },
          "ProductCategory": { "S": "Book" }
        }
      }
    }
  ]
}
Any help would be appreciated.
When you store a JSON document in DynamoDB, the JSON document you store is placed as a nested object inside a wrapper object known as a DynamoDB Item, which stores metadata about the document. The Json Document attribute lets you specify the key in the DynamoDB Item for the nested JSON document so that it can be extracted from the top-level DynamoDB Item.
Looking at the source code for the GetDynamoDB NiFi processor shows how the value (jsonDocument) is used to call the AWS DynamoDB SDK, extract the JSON for the specified DynamoDB Item attribute, and create a NiFi FlowFile for the contents:
BatchGetItemOutcome result = dynamoDB.batchGetItem(tableKeysAndAttributes);
// Handle processed items and get the json document
List<Item> items = result.getTableItems().get(table);
for (Item item : items) {
    ItemKeys itemKeys = new ItemKeys(item.get(hashKeyName), item.get(rangeKeyName));
    FlowFile flowFile = keysToFlowFileMap.get(itemKeys);
    if (item.get(jsonDocument) != null) {
        ByteArrayInputStream bais = new ByteArrayInputStream(item.getJSON(jsonDocument).getBytes());
        flowFile = session.importFrom(bais, flowFile);
    }
    session.transfer(flowFile, REL_SUCCESS);
    keysToFlowFileMap.remove(itemKeys);
}
Hope this helps!
The Json Document attribute is the key inside your DynamoDB item whose value you want to become the content of the downstream FlowFile.
Currently, there is no way to return the entire 'root' DynamoDB item; only attributes inside it can be returned.
Below is an example flow (screenshots of the DynamoDB table and the GetDynamoDB processor configuration omitted).
Inside the GenerateFlowFile processor I'm creating a FlowFile with the attributes hash_key and range_key.
If I configure the Json Document attribute as json, the output FlowFile content would be {"key1":"name","key2":2}.
If I configure it as attribute, the output would be "attribute".
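In plain Jackson terms (not the actual NiFi or AWS SDK classes), the behaviour described above, where one chosen key of the item becomes the FlowFile content, can be sketched like this; the wrapper shape and the attribute name json are illustrative, not the DynamoDB wire format:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonDocumentExtract {
    // Returns the JSON text of the sub-document stored under `key`
    // in the wrapper item, or null when the item has no such attribute.
    static String extract(String itemJson, String key) throws Exception {
        JsonNode item = new ObjectMapper().readTree(itemJson);
        JsonNode doc = item.get(key);
        return doc == null ? null : doc.toString();
    }

    public static void main(String[] args) throws Exception {
        String item = "{\"hash_key\":\"h1\",\"json\":{\"key1\":\"name\",\"key2\":2}}";
        // The value under "json" would become the FlowFile content.
        System.out.println(extract(item, "json")); // {"key1":"name","key2":2}
    }
}
```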

Extract case classes from a JSON file (Scala Play)

I'm trying to extract my data from JSON into a case class, without success.
The JSON file:
[
  {
    "name": "bb",
    "loc": "sss",
    "elements": [
      {
        "name": "name1",
        "loc": "firstHere",
        "elements": []
      }
    ]
  },
  {
    "name": "ca",
    "loc": "sss",
    "elements": []
  }
]
My code:
case class ElementContainer(name: String, location: String, elements: Seq[ElementContainer])

object elementsFormatter {
  implicit val elementFormatter = Json.format[ElementContainer]
}

object Applicationss extends App {
  val el = new ElementContainer("name1", "firstHere", Seq.empty)
  val el1Cont = new ElementContainer("bb", "sss", Seq(el))
  val source: String = Source.fromFile("src/bin/elementsTree.json").getLines.mkString
  val jsonFormat = Json.parse(source)
  val r1 = Json.fromJson[ElementContainer](jsonFormat)
}
After running this, r1 contains:
JsError(List((/elements,List(ValidationError(List(error.path.missing),WrappedArray()))), (/name,List(ValidationError(List(error.path.missing),WrappedArray()))), (/location,List(ValidationError(List(error.path.missing),WrappedArray())))))
I have been trying to extract this data forever; please advise.
You have location instead of loc, and you'll need to parse the file into a Seq[ElementContainer], since it's an array, not a single ElementContainer:
Json.fromJson[Seq[ElementContainer]](jsonFormat)
Also, there is the validate method, which will return either errors or the parsed JSON object.