I have a JSON file like this:
{
  "2": {
    "_id": 2,
    "_date": "Mon Apr 05 2021",
    "_timestamp": 1617654662313,
    "description": "Some text",
    "isStarred": true,
    "boards": [
      "#3.0",
      "#Some-day"
    ],
    "_isTask": false,
    "isComplete": false,
    "inProgress": false,
    "priority": 1
  },
  "7": {
    "_id": 7,
    "_date": "Mon Apr 05 2021",
    "_timestamp": 1617658197721,
    "description": "Some text too",
    "isStarred": false,
    "boards": [
      "#Some-day"
    ],
    "_isTask": false
  }
}
and I want to parse it into my class Entry:

require "json"

enum Priority
  Low    = 1
  Medium = 2
  High   = 3
end

class Entry
  include JSON::Serializable

  property _id : UInt32
  property _date : Time
  property _timestamp : UInt64
  property description : String
  property isStarred : Bool
  property boards : Array(String)
  property _isTask : Bool
  property isComplete : Bool
  property inProgress : Bool
  property priority : Priority
end
When I try to parse it using Hash(String, Entry).from_json, it does not work: Expected BeginObject but was String
I cannot reproduce your error. There's no from_string method in the standard library, so the fault might lie in whatever that is.
However, using from_json still requires some adjustments to your example:
The date format in _date is non-standard and requires an explicit Time::Format passed as a field converter via the @[JSON::Field] annotation's converter attribute.
Similarly, enums serialize to their name as a string by default and require Enum::ValueConverter to serialize to their numerical value instead.
Going by your example JSON, some of the properties are optional and need to be marked as nilable.
https://carc.in/#/r/e2e7
The @[JSON::Field] annotation is also handy for setting different external names while using more conventional names on the Crystal side of things.
Suppose I have this JSON object
[
  { "label": "The entire place", "value": "entire_place" },
  { "label": "A shared room", "value": "shared_room" },
  { "label": "A private room", "value": "private_room" }
]
Those represent the possible values of a dropdown menu. The label is what the user sees, and the value is what's stored in the database.
However, I also need to create a type as follows:
type Property = "entire_place" | "private_room" | "shared_room";
How can I do so without defining the data twice?
I don't want to change both the type and the JSON object every time I need to add a new possible value.
Any ideas how to do so? If it's not possible, what's a better alternative to store data (along their label) and use it for validation.
First, you need to declare your array as const, so that TypeScript infers the specific string literal values rather than just string. Then you can use typeof and index the array type with number and lastly 'value':
const data = [
  { label: 'The entire place', value: 'entire_place' },
  { label: 'A shared room', value: 'shared_room' },
  { label: 'A private room', value: 'private_room' },
] as const;

type Property = typeof data[number]['value'];
Declaring it as const does mean you can't modify the array, which is a good thing. If you need a mutable array that stays entirely in sync with the type, I don't believe that's possible.
I have a JSON-LD file that I am parsing using Jena. The file has @type, @id, "rdfs:label" and "rdfs:comment" entries, and also ranges and domains. I have code like this:
Model m = ModelFactory.createDefaultModel();
Reader fileReader = new FileReader(fileName);
Model model = m.read(fileReader, null, "JSON-LD");
StmtIterator it = model.listStatements();
Set<String> set = new HashSet<>();
System.out.println("Labels");
while (it.hasNext()) {
    Statement statement = it.next();
    ....
It seems to pick up all the content but does not see the @type statements with rdfs:container. How do I pick up these statements using this parser?
A fragment of the JSON-LD is:
{
  "@id": "aaa:bbb",
  "@type": [
    "rdfs:container"
  ],
  "rdfs:label": {
    "@language": "en",
    "@value": "cccc"
  },
  "rdfs:comment": {
    "@language": "en",
    "@value": "dddd."
  },
  "rdfs:member": [
    {
      "@id": "aaaa:eeee"
    },
    {
      "@id": "aaaa:fffff"
    }
  ],
When the type is rdfs:class, I get a statement with the predicate "type" and the object as the RDFS class, but when the type is rdfs:container, as in the example above, I do not. I was expecting a statement with the predicate "type", a subject with the local name bbb, and an object specifying the container class. I do not see such a statement. How do I detect the presence of rdfs:container in the parser?
I notice Jena has the concept of Container: https://jena.apache.org/documentation/javadoc/jena/org/apache/jena/rdf/model/Container.html.
It looks like the object was coming through as a string, http://www.w3.org/2000/01/rdf-schema#container, so I can find it.
I am using a JSON object with data such as:
{
  "name": "Black Cat",
  "description": "Cat Family",
  "PublicDirect1": 18446744073709551615
}
While parsing it with:
JSONObject jsonobject = new JSONObject(jsonFile);
using org.json.JSONObject,
I get this JSONObject as output:
{
  "name": "Black Cat",
  "description": "Cat Family",
  "PublicDirect1": 1.8446744073709552E19
}
I do not want the PublicDirect1 value to change; I want to use the raw value 18446744073709551615 as it is. How can I do that?
Is there another class I can use?
Please check the data type of PublicDirect1. If it is not a String, convert it to a String and then read the value. That way you preserve every digit.
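The loss happens before you ever read the field: a parser that stores the number as a double cannot represent all 20 digits, while java.math.BigInteger can. A minimal sketch of the difference (class and variable names are my own; recent org.json releases also expose a getBigInteger accessor on JSONObject, if your version has it):

```java
import java.math.BigInteger;

public class BigNumberDemo {
    public static void main(String[] args) {
        String raw = "18446744073709551615";

        // What a parser that stores numbers as double produces: the nearest
        // representable double, with the final digits rounded away.
        double asDouble = Double.parseDouble(raw);
        System.out.println(asDouble);   // 1.8446744073709552E19

        // BigInteger keeps every digit of the original value.
        BigInteger exact = new BigInteger(raw);
        System.out.println(exact);      // 18446744073709551615
    }
}
```

So if the producer quotes the value as a JSON string, as suggested above, new BigInteger(string) restores the exact number on the consuming side.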
Hello, I have JSON in the following format. I need to parse it in the map function to get the gender information of all the records.
[
  {
    "SeasonTicket": false,
    "name": "Vinson Foreman",
    "gender": "male",
    "age": 50,
    "email": "vinsonforeman@cyclonica.com",
    "annualSalary": "$98,501.00",
    "id": 0
  },
  {
    "SeasonTicket": true,
    "name": "Genevieve Compton",
    "gender": "female",
    "age": 28,
    "email": "genevievecompton@cyclonica.com",
    "annualSalary": "$46,881.00",
    "id": 1
  },
  {
    "SeasonTicket": false,
    "name": "Christian Crawford",
    "gender": "male",
    "age": 53,
    "email": "christiancrawford@cyclonica.com",
    "annualSalary": "$53,488.00",
    "id": 2
  }
]
I have tried using JSONParser but am not able to get through the JSON structure. I have been advised to use JAQL and Pig but cannot do so.
Any help would be appreciated.
What I understand is that you have a huge file containing an array of JSON objects, and you need to read it in a mapper and emit, say, <id : gender>. The challenge is that each JSON object spans multiple lines.
If that is the case, I would suggest changing the default record delimiter from "\n" to "}".
You will then get parts of the JSON into the map method as the value. You can discard the key (the byte offset) and do a slight refactor on the value, removing unwanted characters like [ ] or , and appending "}", and then parse the remaining string.
This works because there is no nesting within the JSON, so } is a valid record end delimiter for the given example.
To change the default delimiter, just set the property textinputformat.record.delimiter to "}".
Please check out this example.
Also check this jira.
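The cleanup step described above can be sketched in plain Java, outside Hadoop (toJsonObject and field are hypothetical helper names, and the regex is a stand-in for a real JSON parser; the value string is what the mapper would receive once textinputformat.record.delimiter is set to "}"):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RecordCleanup {
    // The mapper value is the text up to (but not including) the next "}",
    // possibly with leading "[" or "," noise from the surrounding array.
    static String toJsonObject(String value) {
        String s = value.trim();
        if (s.startsWith("[")) s = s.substring(1).trim();
        if (s.startsWith(",")) s = s.substring(1).trim();
        return s + "}";   // restore the delimiter consumed by the record reader
    }

    // Pull a string-valued field out of the reconstructed object with a regex,
    // so this sketch needs no JSON library.
    static String field(String json, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String value = "[\n{\n\"name\" : \"Vinson Foreman\",\n\"gender\" : \"male\",\n\"id\" : 0\n";
        String json = toJsonObject(value);
        System.out.println(field(json, "gender"));   // prints: male
    }
}
```

In the real job you would run this logic inside map() and emit (id, gender) pairs instead of printing.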
I have a basic JSON question. I have a JSON file in which every object repeats the column names.
[
  {
    id: 1,
    name: "ABCD"
  },
  {
    id: 2,
    name: "ABCDE"
  },
  {
    id: 3,
    name: "ABCDEF"
  }
]
As an optimization, I was thinking of removing the repeated column names:
{
  "cols": [
    "id",
    "name"
  ],
  "rows": [
    [
      "1",
      "ABCD"
    ],
    [
      "2",
      "ABCDE"
    ]
  ]
}
What I am trying to understand is: is this a better approach? Are there any disadvantages to this format, say, for writing unit tests?
The second case (after your editing) is valid JSON. You can derive a class for it using json2csharp:
public class RootObject
{
    public List<string> cols { get; set; }
    public List<List<string>> rows { get; set; }
}
The very important point to note is that valid JSON has no way to represent values other than repeating the column names (or keys in general). You can test the validity of your JSON by pasting it at jsonlint.com.
But if you want to optimize your JSON by compressing it with a compression library such as gzip, then I would recommend JSON.hpack.
This format has compression levels ranging from 0 to 4 (4 is the best).
At compression level 0, you remove the keys (property names) from the structure, creating a header at index 0 with each property name. Your compressed JSON would then look like:
[
  [
    "id",
    "name"
  ],
  [
    1,
    "ABCD"
  ],
  [
    2,
    "ABCDE"
  ],
  [
    3,
    "ABCDEF"
  ]
]
In this way you can compress your JSON at whatever level you want. But in order to work with any JSON library, you must first decompress it back to valid JSON, like the one you provided earlier with repeated property names.
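That decompression step is mechanical: pair each row value with the column name at the same index. A minimal Java sketch using the example data (class and method names are my own):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HpackExpand {
    // Rebuild the repeated-key form from a header row plus value rows,
    // i.e. undo the level-0 compression described above.
    static List<Map<String, String>> expand(List<String> cols, List<List<String>> rows) {
        List<Map<String, String>> out = new ArrayList<>();
        for (List<String> row : rows) {
            Map<String, String> obj = new LinkedHashMap<>();
            for (int i = 0; i < cols.size(); i++) {
                obj.put(cols.get(i), row.get(i));
            }
            out.add(obj);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> cols = List.of("id", "name");
        List<List<String>> rows = List.of(List.of("1", "ABCD"), List.of("2", "ABCDE"));
        System.out.println(expand(cols, rows));
        // [{id=1, name=ABCD}, {id=2, name=ABCDE}]
    }
}
```

LinkedHashMap is used so the rebuilt objects keep the original column order.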
For your kind information, you can have a look at a comparison between the different compression techniques.
{
  "cols": [
    "id",
    "name"
  ],
  "rows": [
    [
      "1",
      "ABCD"
    ],
    [
      "2",
      "ABCDE"
    ],
    [
      "3",
      "ABCDEF"
    ]
  ]
}
With this approach it will be hard to determine which value stands for which item (id, name). Your first approach is better if you use this JSON for communication.
A solution is to use an Object-Relational Mapper (ORM) of your preference.
That way you can compress your JSON data and still work with a legible structure in code.
Please see this article: What is "compressed JSON"?