Convert JSON - Integers without quotes, Strings with quotes

I'm reading a CSV file with some string and some integer columns and converting it to JSON. During this conversion, all fields and values end up with double quotes around them. However, I want the integer values NOT to have double quotes.
I'm using Jackson (FasterXML) and here's my code snippet:
File input = new File("/Users/name/1.csv");
File output = new File("/Users/name/output.json");
CsvSchema csvSchema = CsvSchema.builder().setUseHeader(true).build();
CsvMapper csvMapper = new CsvMapper();
// Read data from CSV file
List<Object> readAll = csvMapper.readerFor(Map.class).with(csvSchema).readValues(input).readAll();
ObjectMapper mapper = new ObjectMapper();
mapper.configure(JsonGenerator.Feature.QUOTE_NON_NUMERIC_NUMBERS, true);
// Write JSON formatted data to output.json file
mapper.writerWithDefaultPrettyPrinter().writeValue(output, readAll);
Here's my expected output; please note id and budget do not have double quotes around them:
[ {
"id" : 120,
"name" : "Name 1",
"type" : "type1",
"budget" : 100
},
{
"id" : 130,
"name" : "Name 2",
"type" : "type2",
"budget" : 200
},
{
"id" : 140,
"name" : "Name 3",
"type" : "type2",
"budget" : 130
},
{
"id" : 150,
"name" : "Name 4",
"type" : "type4",
"budget" : 400
}
]
However, after conversion all fields and values have quotes:
[ {
"id" : "120",
"name" : "Name 1",
"type" : "type1",
"budget" : "100"
},
{
"id" : "130",
"name" : "Name 2",
"type" : "type2",
"budget" : "200"
},
{
"id" : "140",
"name" : "Name 3",
"type" : "type2",
"budget" : "130"
},
{
"id" : "150",
"name" : "Name 4",
"type" : "type4",
"budget" : "400"
}
]

Unfortunately, it is currently not possible to read CSV and specify the column types using the schema alone. You can create a POJO with private int budget; and the conversion will be done automatically (see the sketch below). For other solutions, take a look at this question: jackson-dataformat-csv: Mapping number value without POJO, where you can see:
Usage of univocity-parsers
A trick with a custom Map implementation.
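For instance, here is a minimal sketch of the POJO approach (the Entry class and its field names are assumptions inferred from the expected output above):
public class Entry {
    public int id;        // declared as int, so Jackson writes 120 instead of "120"
    public String name;
    public String type;
    public int budget;    // numeric as well, so no quotes in the output
}

CsvSchema csvSchema = CsvSchema.builder().setUseHeader(true).build();
CsvMapper csvMapper = new CsvMapper();
// Bind each CSV row to Entry so the int fields are parsed as numbers
MappingIterator<Entry> rows = csvMapper.readerFor(Entry.class).with(csvSchema).readValues(input);
List<Entry> readAll = rows.readAll();
new ObjectMapper().writerWithDefaultPrettyPrinter().writeValue(output, readAll);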

Related

How to add number as firebase realtime database child without converting to array?

I want to make a DB structure like the following (a screenshot with numeric children "1", "2"):
But Firebase just converts the whole object to an array.
Here's what I get when I export the JSON:
[ null, {
"oPNfOlBcS4gl3rvZ4CIme9MDk0p1" : {
"added_at" : 1647966583316,
"city" : "Test",
"country" : "Test",
"name" : "Test",
"state" : "Test",
"uid" : "oPNfOlBcS4gl3rvZ4CIme9MDk0p1"
}
}, {
"oPNfOlBcS4gl3rvZ4CIme9MDk0p1" : {
"added_at" : 1647966583316,
"city" : "Test",
"country" : "Test",
"name" : "Test",
"state" : "Test",
"uid" : "oPNfOlBcS4gl3rvZ4CIme9MDk0p1"
}
} ]
I tried keeping the first child as "1" and the second as "first" (or some random string), exported the JSON, and here's what I got:
{
"1" : {
"oPNfOlBcS4gl3rvZ4CIme9MDk0p1" : {
"added_at" : 1647966583316,
"city" : "Test",
"country" : "Test",
"name" : "Test",
"state" : "Test",
"uid" : "oPNfOlBcS4gl3rvZ4CIme9MDk0p1"
}
},
"first" : {
"oPNfOlBcS4gl3rvZ4CIme9MDk0p1" : {
"added_at" : 1647966583316,
"city" : "Test",
"country" : "Test",
"name" : "Test",
"state" : "Test",
"uid" : "oPNfOlBcS4gl3rvZ4CIme9MDk0p1"
}
}
}
I want something similar, but with children like "1", "2" instead.
Is it possible? Or is using children like "first", "second" the only solution right now?
When you read a node with sequential numeric keys (as in your screenshot and first JSON example), the Firebase SDK (and REST API) automatically coerces that data to an array. There is no way to configure this behavior.
If you don't want the array coercion, you should use non-numeric keys. A common approach is to prefix each key with a short string, like "key1", "key2", etc.
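As a rough sketch with the Firebase Android SDK (the users path and the written values here are made up for illustration):
DatabaseReference ref = FirebaseDatabase.getInstance().getReference("users");
Map<String, Object> entry = new HashMap<>();
entry.put("name", "Test");
entry.put("city", "Test");
// the "key" prefix keeps the child names non-numeric, so no array coercion happens
ref.child("key1").setValue(entry);
ref.child("key2").setValue(entry);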
Also see: Best Practices: Arrays in Firebase.

Read FlowFile with Groovy in NiFi

I am trying to transform an Avro schema file into an SQL request. My file is like this:
{
"type" : "record",
"name" : "warranty",
"doc" : "Schema generated by Kite",
"fields" : [ {
"name" : "id",
"type" : "long",
"doc" : "Type inferred from '1'"
}, {
"name" : "train_id",
"type" : "long",
"doc" : "Type inferred from '21691'"
}, {
"name" : "siemens_nr",
"type" : "string",
"doc" : "Type inferred from 'Loco-001'"
}, {
"name" : "uic_nr",
"type" : "long",
"doc" : "Type inferred from '193901'"
}, {
"name" : "Configuration",
"type" : "string",
"doc" : "Type inferred from 'ZP28'"
}, {
"name" : "Warranty_Status",
"type" : "string",
"doc" : "Type inferred from 'Out_of_Warranty'"
}, {
"name" : "Warranty_Data_Type",
"type" : "string",
"doc" : "Type inferred from 'Real_based_on_preliminary_acceptance_date'"
} ]
}
And my code is:
import groovy.json.JsonSlurper
def ff = session.get()
if (!ff) return
// parse Avro schema from flow file content
def schema = ff.read().withReader("UTF-8") { new JsonSlurper().parse(it) }
//define type mapping
def typeMap = [
"string" : "varchar(255)",
"long" : "numeric(10)",
[ "null", "string" ]: "varchar(255)",
[ "null", "long" ] : "numeric(10)",
]
//build create table statement
def createTable = "create table ${schema.name} (" +
schema.fields.collect{ "\n ${it.name.padRight(39)} ${typeMap[it.type]}" }.join(',') +
"\n)"
//execute statement through the custom defined property
//SQL.mydb references http://docs.groovy-lang.org/2.4.10/html/api/groovy/sql/Sql.html object
SQL.mydb.execute(createTable)
//transfer flow file to success
REL_SUCCESS << ff
And I got this error:
ERROR nifi.processors.script.ExecuteScript ExecuteScript[id=e65b733e-0161-1000-45f0-3264d6fb51dd] ExecuteSc$ Possible solutions: getId(), find(), grep(), each(groovy.lang.Closure), find(groovy.lang.Closure), grep(java.lang.Object); rolling back session: {} org.apache.nifi.processor.exception.ProcessException: javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of m$ Possible solutions: getId(), find(), grep(), each(groovy.lang.Closure), find(groovy.lang.Closure), grep(java.lang.Object)
Can someone help me, please?
This references a script from another SO post; I commented there and provided an answer on a different forum, which I will copy here for completeness:
The variable createTable is a GString, not a Java String. This causes invocation of Sql.execute(GString), which converts the embedded expressions into parameters, and you can't use a parameter for a table name. Use the following instead:
SQL.mydb.execute(createTable.toString())
This will cause the invocation of Sql.execute(String), which does not try to parameterize the statement.
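To see why that matters, here is roughly what the GString overload ends up doing at the JDBC level (a hypothetical sketch; conn stands for an existing java.sql.Connection):
// ${schema.name} in the GString becomes a ? placeholder:
String sql = "create table ? (\n id numeric(10)\n)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, "warranty"); // a table name cannot be supplied as a bind parameter
    ps.execute();                // so the database rejects the statement
}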

How to push new key and value in JSON array in mongodb?

How can I push a new key and value into a JSON array?
I tried using the $push keyword in an update query, but I got a different output. I used:
db.users.updateOne({"name":"viki"},{$push:{"address.district":"thambaram"}})
I have this document:
{ "_id" : ObjectId("58934f10c7592b1494fd9a4d"), "name" : "viki", "age" : 100, "subject" : [ "c", "node.js", "java" ], "address" : { "city" : "chennai", "state" : "tamilnadu", "pincode" : "123" } }
I want to add "district":"thambaram" in the address JSON array.
I need it like:
{ "_id" : ObjectId("58934f10c7592b1494fd9a4d"), "name" : "viki", "age" : 100, "subject" : [ "c", "node.js", "java" ], "address" : { "city" : "chennai", "state" : "tamilnadu", "pincode" : "123","district":"thambaram"} }
Use $set
db.users.updateOne({"name":"viki"},{$set:{"address.district":"thambaram"}})
This should work.
The $push operator appends a specified value to an array. In your case you should use $set.
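For completeness, the equivalent update with the MongoDB Java driver might look like this (a sketch; the database handle db and the collection name users are assumed):
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

MongoCollection<Document> users = db.getCollection("users");
// $set creates the field on the embedded address document
users.updateOne(eq("name", "viki"), set("address.district", "thambaram"));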

How to update a nested array value in mongodb?

I want to update an array value that is nested within another array: i.e. set
status = enabled
where alerts.id = 2
{
"_id" : ObjectId("5496a8ed49847b6cd7c7b350"),
"name" : "joe",
"locations" : [
{
"name": "my location",
"alerts" : [
{
"id" : 1,
"status" : null
},
{
"id" : 2,
"status" : null
}
]
}
]
}
I would have used the positional $ operator, but it cannot be used twice in one statement; multiple positional operators are not supported yet: https://jira.mongodb.org/browse/SERVER-831
How do I issue a statement to only update the status field of an alert matching an id of 2?
UPDATE
If I change the schema as follows:
{
"_id" : ObjectId("5496ab2149847b6cd7c7b352"),
"name" : "joe",
"locations" : {
"my location" : {
"alerts" : [
{
"id" : 1,
"status" : "enabled"
},
{
"id" : 2,
"status" : "enabled"
}
]
},
"my other location" : {
"alerts" : [
{
"id" : 3,
"status" : null
},
{
"id" : 4,
"status" : null
}
]
}
}
}
I can then use:
update({"locations.my location.alerts.id":1},{$set: {"locations.my location.alerts.$.status": "enabled"}});
The problem is I cannot create indexes on the alert id :-(
It may be better modelled as follows, especially if an index on location and/or alerts.id is needed (an update sketch follows the two example documents).
{
"_id" : ObjectId("5496a8ed49847b6cd7c7b350"),
"name" : "joe",
"location" : "myLocation",
"alerts" : [{
"id" : 1,
"status" : null
},
{
"id" : 2,
"status" : null
}
]
}
{
"_id" : ObjectId("5496a8ed49847b6cd7c7b350"),
"name" : "joe",
"location" : "otherLocation",
"alerts" : [{
"id" : 1,
"status" : null
},
{
"id" : 2,
"status" : null
}
]
}
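With that flatter model, a single positional $ works; for example, with the MongoDB Java driver (a sketch; the collection handle coll is assumed):
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

// match the document by location and alert id, then update via the single positional $
coll.updateOne(
    and(eq("location", "myLocation"), eq("alerts.id", 2)),
    set("alerts.$.status", "enabled"));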
I think you have the wrong tool for the job. What you have in your example is relational data, and it's much easier to handle with a relational database. So I would suggest using an SQL database instead of Mongo.
But if you really want to do it with Mongo, then I guess the only option is to fetch the document, modify it, and put it back.
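A sketch of that fetch-modify-save round trip with the MongoDB Java driver (the collection handle coll and a reasonably recent driver are assumed):
Document doc = coll.find(eq("name", "joe")).first();
// walk the nested arrays in memory and flip the matching alert
for (Document location : doc.getList("locations", Document.class)) {
    for (Document alert : location.getList("alerts", Document.class)) {
        if (alert.getInteger("id") == 2) {
            alert.put("status", "enabled");
        }
    }
}
// write the whole document back
coll.replaceOne(eq("_id", doc.getObjectId("_id")), doc);
Note that this read-modify-write is not atomic: a concurrent writer could change the document between the find and the replaceOne.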

How to get JSON into a Freemarker Template (FTL)

I've got a MongoDB database which I query; I serialize the result and send that string to my FTL template. Below is the serialized result:
[
{
"id" : "10",
"title" : "Test Title 1",
"partner" : {
"id" : "1",
"name" : "partner 1 ",
"location" : [{
"locationname" : "locationname 1a",
"city" : ""
},{
"locationname" : "locationname 1b",
"city" : ""
}]
}
},
{
"id" : "6",
"title" : "Test Title 2",
"partner" : {
"id" : "1",
"name" : "partner 2 ",
"location" : [{
"locationname" : "locationname 2b",
"city" : ""
}]
}
}
]
How would I use this in my FTL template?
Thanks for any help.
If you really have to serialize before giving the result to FreeMarker: the JSON syntax for maps and lists happens to be a subset of FTL, so assuming the serialized result is in res, res?eval will give you a list of maps (see the sketch below).
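A minimal sketch of that approach (the template text, the "page" name, and the model setup are assumptions; serializedJson holds the string shown in the question):
Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
StringTemplateLoader templates = new StringTemplateLoader();
// res?eval parses the JSON string as an FTL list-of-hashes literal
templates.putTemplate("page",
        "<#list res?eval as item>${item.title} - ${item.partner.name}\n</#list>");
cfg.setTemplateLoader(templates);

Map<String, Object> model = new HashMap<>();
model.put("res", serializedJson);
cfg.getTemplate("page").process(model, new PrintWriter(System.out));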