I'm using the TJsonDataObjects Delphi component (https://github.com/ahausladen/JsonDataObjects). I am using it as the data store for what is displayed in an editable TreeView. In the TreeView I store the "path" of each node as a JsonPath string. When the user modifies values in the TreeView, the path lets me locate the record and modify it via the component's Path property.
My issue is that when a user wants to delete a record, I need to remove it from the JSON file. There does not seem to be a simple way to do this via its path. I expect I could trim the last item off the path to get its parent and then delete it by "name" (or by "index" if it is an array). I was hoping there might be an easier way before I start to code this up.
On a similar note, I didn't find any way to extract the text path of a given item. While the component can modify or locate a node by path, there does not seem to be a way to get the actual path of a node, so I'm building it manually as I parse the JSON file (yuck). Does anyone have a better solution?
For example, the path of the property holding "value" in the JSON below is Level1.Level2.Level3:
{
  "Level1": {
    "Level2": {
      "Level3": "value"
    }
  }
}
In TJsonDataObjects you can set the path with:
Json.Path['Level1.Level2.Level3'] := 'value';
//or
Json['Level1']['Level2']['Level3'] := 'value';
Or retrieve it with:
prop := Json.Path['Level1.Level2.Level3'];
// or
prop := Json['Level1']['Level2']['Level3'];
So if you want to remove Level3, it would be nice if there were a simple function like Json.DeletePath('Level1.Level2.Level3');. As far as I can tell, there is nothing that does this. Since this is a very complex unit, I thought someone might have an easy answer that I overlooked. I have coded a way around this (as described above).
As to the second question, while you can access a value by its path, there is no function to return the path of a given node. And yes, I can and do build it as I go along, but a built-in function would be handy because the path would then stay consistent with the component's JsonPath format.
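For what it's worth, the workaround I mention amounts to something like the minimal sketch below. It only handles dot-separated object names (array indices in the path would need extra parsing), and it uses TJsonObject.O[] and Remove as I understand them, so treat it as an illustration rather than a definitive implementation:

// Minimal sketch of a DeletePath workaround. Assumes System.SysUtils (for the
// string Split helper) and JsonDataObjects are in the uses clause. Handles
// plain dot-separated object names only; array indices would need extra
// parsing. Note that, as far as I recall, O[] auto-creates missing objects,
// so validate the path first if that matters.
procedure DeletePath(Json: TJsonObject; const Path: string);
var
  Parts: TArray<string>;
  Parent: TJsonObject;
  I: Integer;
begin
  Parts := Path.Split(['.']);
  if Length(Parts) = 0 then
    Exit;
  // Walk down to the object that owns the last path segment
  Parent := Json;
  for I := 0 to High(Parts) - 1 do
    Parent := Parent.O[Parts[I]];
  // Delete the final segment by name
  Parent.Remove(Parts[High(Parts)]);
end;

// Usage: DeletePath(Json, 'Level1.Level2.Level3');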
I have files in S3 containing inline JSON, one document per line, with the structure:
{ "resources": [{"resourceType":"A","id":"A",...},{...}] }
If I run Glue over it, I get "resources: array" as the top-level element. However, I want the elements of the array to be inspected and used as the top-level table elements. All the elements in each resources array have the same schema. So I expect:
resourceType: string
id: string
....
Theoretically, a custom JSON classifier should handle this:
$.resources[*]
However, the path is not picked up, so I still get resources: array as the top-level element.
I could now run some pre-processing to extract the array elements myself and write them out line by line. However, I want to understand why my path is not working.
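For reference, the classifier I am talking about is a Glue custom JSON classifier with exactly that JsonPath. A hedged boto3 sketch of how such a classifier is registered and attached to the crawler (classifier, crawler, role, database and bucket names are all made up):

import boto3

glue = boto3.client('glue')

# Custom JSON classifier with the JsonPath from above
glue.create_classifier(
    JsonClassifier={
        'Name': 'resources-classifier',
        'JsonPath': '$.resources[*]'
    }
)

# The crawler must list the classifier so it is tried before the built-in ones
glue.create_crawler(
    Name='resources-crawler',
    Role='my-glue-role',
    DatabaseName='my_database',
    Targets={'S3Targets': [{'Path': 's3://my-bucket/data/'}]},
    Classifiers=['resources-classifier']
)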
UPDATE 1:
It might be something about the JSON that I do not understand (it's valid JSON produced via Java Jackson). If I remove the outer object with the resources attribute and change the structure to
[{"resourceType":"A","id":"A",...},{...}]
the classifier $[*] should pick up the sub-objects. But I still get array: array as the top-level element.
UPDATE 2:
It's indeed a formatting issue. If I change the JSON files to
[
{"resourceType":"A","id":"A",...},{...}
]
$[*] starts to work.
UPDATE 3:
Reformatting to the following, however, does not fix the issue with $.resources[*]:
{
"resources": [
{"resourceType":"A","id":"A",...},{...}
]
}
UPDATE 4:
If I take my file and run it through an IntelliJ reformat, producing a JSON object where all nested elements have line breaks, it also starts working with $.resources[*]. Basically, like UPDATE 3, just applied all the way down the structure.
{
"resources": [
{
"resourceType":"A",
"id":"A"
},
{
...
}
]
}
What bothers me is that the structural requirements are still not clear to me, since UPDATE 2 worked but UPDATE 3 did not. I also cannot find a formal requirement regarding the JSON structure anywhere in the documentation.
In this sense, I think I have reached the conclusion of my own question, but the systematics remain a bit unclear.
To conclude here:
The issue is related to Glue's unclearly documented JSON formatting requirements.
A normalisation via json.dumps(my_json, separators=(',',':')) produces compact JSON that works for my use case.
I now normalise the content via a Lambda.
Lambda code as reference for whomever it may help:
import json

import boto3

# my_bucket is the bucket name, defined elsewhere in the Lambda
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket=my_bucket)
for page in pages:
    try:
        contents = page["Contents"]
    except KeyError:
        break
    for obj in contents:
        key = obj["Key"]
        obj = s3.get_object(Bucket=my_bucket, Key=key)
        j = json.loads(obj['Body'].read().decode('utf-8'))
        new_json = json.dumps(j, separators=(',', ':'))
        target = 'nrmlzd/' + key
        s3.put_object(
            Body=new_json,
            Bucket=my_bucket,
            Key=target
        )
After writing a few files for saving in my JSON file in Godot, I saved the information in a variable called LData and it is working. LData looks like this:
{
  "ingredients": [
    "[KinematicBody2D:1370]"
  ],
  "collected": [
    {
      "iname": "Pineapple",
      "collected": true
    },
    {
      "iname": "Banana",
      "collected": false
    }
  ]
}
What does it mean when the file says KinematicBody2D:1370? I understand that it is saving the node in the file - or is it just saving a string? Is it saving the node's properties as well?
Here is what happens when I try to retrieve the data - a variable that is assigned to the saved KinematicBody2D.
Code:
for ingredient in LData.ingredients:
    print(ingredient.iname)
Error:
Invalid get index name 'iname' (on base: 'String')
I am assuming that the data is stored as a String and I need to add some code to get the exact node it saved. Using get_node is also throwing an error.
Code:
for ingredient in LData.ingredients:
    print(get_node(ingredient).iname)
Error:
Invalid get index 'iname' (on base: 'null instance')
What information is it exactly storing when it says [KinematicBody2D:1370]? How do I access the variable iname and any other variables - variables that are assigned to the node when the game is loaded and are not changed through the entire game?
[KinematicBody2D:1370] is just the string representation of a Node, which comes from Object.to_string:
Returns a String representing the object. If not overridden, defaults to "[ClassName:RID]".
If you truly want to serialize an entire Object, you could use Marshalls.variant_to_base64 and put that string in your JSON file. However, this will likely bloat your JSON file and contain much more information than you actually need to save a game. Do you really need to save an entire KinematicBody, or can you figure out the few properties that need to be saved (position, type of object, etc.) and reconstruct the rest at runtime?
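To make that second option concrete, here is a rough sketch in Godot 3.x GDScript. The iname property and the scene lookup via filename are assumptions for illustration, not something taken from your project:

# Sketch: store only the data you need, not the node itself
func ingredient_to_dict(node):
    return {
        "iname": node.iname,
        "position": [node.position.x, node.position.y],  # JSON has no Vector2
        "scene": node.filename  # scene file this node was instanced from
    }

func dict_to_ingredient(data):
    var node = load(data["scene"]).instance()
    node.iname = data["iname"]
    node.position = Vector2(data["position"][0], data["position"][1])
    return node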
You can also save objects as Resources, which is more powerful and flexible than a JSON file, but tends to be better suited to game assets than save games. However, you could read the Resource docs and see if saving Resources seems like a more appropriate solution to you.
Let me explain the problem that I'm facing:
I have two JSON objects, let's call them js1 and js2. I need to update js1 using "parts" of js2, and to do that I need to identify where the parts that need to be updated are located in js1.
To do that, I'm using a function that, for a certain input, returns the full JsPath from the root to the input value, and I get back a JsPath like this:
/priceLists(1)/sections(0)/items(0)(0)/itemIdentifier
What I need to do is navigate backward one step, to obtain a JsPath like
/priceLists(1)/sections(0)/items(0)(0)
I'm probably very dumb (and I don't have much experience with Scala in general), but I can't find any way to do that.
The only way I found to get rid of that last part of the path is to transform the JsPath into a list of PathNode, but then I don't know how to transform that list of PathNodes back into a JsPath.
I'm using Play 2.6 and Scala 2.11.8.
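To make that concrete, this is roughly where I am stuck (parentOf is just an illustrative name):

import play.api.libs.json._

// Rough sketch of what I am trying to do
def parentOf(p: JsPath): JsPath = {
  // p.path is the List[PathNode] behind the JsPath,
  // e.g. priceLists(1) / sections(0) / items(0)(0) / itemIdentifier
  val withoutLast: List[PathNode] = p.path.dropRight(1) // drop the final step
  ??? // how do I turn this List[PathNode] back into a JsPath?
}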
I am returning some XML structure as JSON using the built-in MarkLogic json module. For the most part it does what I expect. However, when an element marked as an array is empty, it returns an empty string instead of an empty array. Here is an example:
xquery version "1.0-ml";
import module namespace json = "http://marklogic.com/xdmp/json"
at "/MarkLogic/json/json.xqy";
let $config := json:config("custom")
return (
  map:put($config, "array-element-names", ("item")),
  json:transform-to-json(
    <result>
      <item>21</item>
      <item>22</item>
      <item>23</item>
    </result>, $config),
  json:transform-to-json(<result></result>, $config)
)
Result:
{"result":{"item":["21", "22", "23"]}}
{"result":""}
I would expect an empty array if there are no elements matching the array-element-name "item", i.e.
{"result":{"item":[]}}
Is there some way to configure it so it knows the element is required?
No - it will not create anything that is not there. In your case, what if the XML were more complex? There is no context of 'where' such an element might live, so it could not create it even if it wanted to.
The solution is to repair the content if needed by adding one element; or to transform it into the json/basic namespace, where those elements live inside an element annotated as an array (which can be empty); or, third, to use an XSD to hint to the processor what to do. That would still need a containing element for the 'array', and the items would then be minOccurs=0. If that is the case, then repairing and transforming into the json/basic namespace is probably nice and simple for your example.
I want to edit only one value in an existing JSON file.
Is there any way to do that without parsing and re-writing the whole file? (I use the Jackson Streaming API to generate and parse the file, but I'm not sure the Streaming API can do that.)
My Example.json file contains the following:
{
  "id" : "20120421141411",
  "name" : "Example",
  "time_start" : "2012-04-21T14:14:14"
}
For example: I want to change the value of "name" from "Example" to "other name".
Not that I know of; either at the JSON level or at the file level -- unless the length of the values happens to be exactly the same, the underlying file system typically requires the rest of the file to be rewritten from the point of change.
You can read and write the file using the Streaming API, replacing the value on the go; see JsonGenerator.copyCurrentEvent(jp), which simplifies the task -- it just copies the input event exactly as is. You can call that for everything except the value to replace; for that value, call JsonGenerator.writeString().
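A rough sketch of that approach with the Jackson 2.x streaming API (the "name" field and Example.json come from the example above; the output file name is made up):

import com.fasterxml.jackson.core.*;

import java.io.File;
import java.io.IOException;

// Sketch: stream Example.json to a new file, replacing the value of "name"
public class ReplaceNameValue {
    public static void main(String[] args) throws IOException {
        JsonFactory f = new JsonFactory();
        try (JsonParser jp = f.createParser(new File("Example.json"));
             JsonGenerator jg = f.createGenerator(new File("Example.out.json"), JsonEncoding.UTF8)) {
            jg.useDefaultPrettyPrinter();
            while (jp.nextToken() != null) {
                if (jp.getCurrentToken() == JsonToken.VALUE_STRING
                        && "name".equals(jp.getCurrentName())) {
                    jg.writeString("other name"); // the one value we change
                } else {
                    jg.copyCurrentEvent(jp);      // everything else copied as-is
                }
            }
        }
    }
}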
If the file is small and the value you're looking to replace is unique "enough", and you're open to quick-and-dirty, use Apache commons-exec or something to shell out:
bash$> echo '{
"id" : "20120421141411",
"name" : "Example",
"time_start" : "2012-04-21T14:14:14"
}' | sed -e 's/Example/othername/'
outputs:
{
"id" : "20120421141411",
"name" : "othername",
"time_start" : "2012-04-21T14:14:14"
}
Use cat file | sed ... if you know the path to the file.
If you really wanted to edit the file in place, writing only the bytes you want to change, that is only possible if the data you are writing does not overwrite subsequent data in the file. You are much better off going with one of the solutions above.
Suppose the JSON file were massive (>1 GB?) -- would this technique make sense then? No; what the heck are you doing with a JSON file that big? Split it up! But for the sake of argument: you really want to do it, so you hook into a JSON parser to keep track of the byte offset within the file and tie that back to the object representing the JsonNode you will be manipulating. You might end up writing your own parser at this point; the JSON grammar is intentionally simple. Then you'd just open the file, skip to that offset, and write the JsonNode data... unless it would overwrite something after it (do you pre-populate the file with a buffer of space after every value, just in case? Hmm, this is starting to sound like a database problem). In that case, you'll end up rewriting the entire rest of the file as the larger value "pushes" everything else down. Not a big deal if the edits are always near the end of the file, but if they are random, your performance is doomed: you'll bottleneck on serializing writes.
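If you really did want to try it anyway, here is a hedged sketch of what tracking the byte offset could look like, using the streaming parser's token location plus RandomAccessFile. It assumes the value contains no escape sequences and that the replacement is exactly the same number of bytes:

import com.fasterxml.jackson.core.*;

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Sketch: locate the byte offset of the "name" value, then patch it in place.
// Only safe when the replacement has exactly the same byte length.
public class InPlacePatch {
    public static void main(String[] args) throws IOException {
        File file = new File("Example.json");
        long offset = -1;
        int length = -1;
        JsonFactory f = new JsonFactory();
        try (JsonParser jp = f.createParser(file)) {
            while (jp.nextToken() != null) {
                if (jp.getCurrentToken() == JsonToken.VALUE_STRING
                        && "name".equals(jp.getCurrentName())) {
                    // token location points at the opening quote of the value
                    offset = jp.getTokenLocation().getByteOffset() + 1;
                    length = jp.getText().getBytes(StandardCharsets.UTF_8).length;
                    break;
                }
            }
        }
        byte[] replacement = "Sample!".getBytes(StandardCharsets.UTF_8); // same length as "Example"
        if (offset >= 0 && replacement.length == length) {
            try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
                raf.seek(offset);
                raf.write(replacement); // overwrite only the old value's bytes
            }
        }
    }
}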
You really want to do it, so you hook into a JSON parser to keep track of the byte offset within the file and be able to tie that back to the object representing the JsonNode you will be manipulating. You might end up writing your own parser at this point; JSON grammar is intentionally simple. Then you'd just open the file, skip to that offset, and write the JsonNode data... unless it will overwrite something after it (do you pre-populate the file with buffer of space after every value, just in case? hmmm... this is starting to sound like a database problem). In that case, you'll end up rewriting the entire rest of the file as the larger value "pushes" everything else downward. Not a big deal if the edits are always near the end of file. But if they are random, your performance is doomed. You'll bottleneck serializing writes.