How to add entries to a JSON array/list - json

I'm trying to set up a Discord bot that only lets people on a list in a JSON file use it. I am wondering how to add data to the JSON array/list, but I'm not sure how to move forward and have had no real luck looking for answers elsewhere.
This is an example of how the JSON file looks:
{
    IDs: [
        "2359835092385",
        "4634637576835",
        "3454574836835"
    ]
}
Now, what I am looking to do is add a new ID to "IDs" without completely breaking the file, and I also want to be able to have other entries in the JSON file so I can add something like "AdminIDs" for people that can do more with the bot.
Yes, I know I can do this role-based in guilds/servers, but I would like to be able to use the bot in DMs as well as in guilds/servers.
What I want/need is a short, easy-to-manipulate script that I can put into a new command, so I can add new people to the bot without having to open and edit the JSON file manually.

If you haven't already parsed your data via the json module, you can parse it as follows:
import json

json_string = '{"IDs": ["2359835092385"]}'  # the raw JSON text, e.g. read from a file
parsed_json = json.loads(json_string)  # loads() parses a JSON string into a dict
print(parsed_json['IDs'])
Then you can simply use this data like a normal list and append data to it.
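For example, a minimal sketch of a helper you could call from a bot command (the file name whitelist.json and the helper name add_id are assumptions, not from the question):

import json

WHITELIST_FILE = 'whitelist.json'  # hypothetical file name

def add_id(new_id):
    # load the current whitelist, append the new ID, and write the file back
    with open(WHITELIST_FILE, encoding='utf-8') as f:
        data = json.load(f)
    if new_id not in data['IDs']:  # skip duplicates
        data['IDs'].append(new_id)
    with open(WHITELIST_FILE, 'w', encoding='utf-8') as f:
        json.dump(data, f, indent=4)

add_id('1234567890123')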

All keys must be strings (surrounded by quotes).
In this case the key is "IDs", its value is the list, and the list's values are the items inside it:
import json

data = {
    "IDs": [
        "2359835092385",
        "4634637576835",
        "3454574836835"
    ]
}
Let's say that your JSON data comes from a file. To load it so that you can manipulate it, do the following:
with open('filename.json', encoding='utf-8') as raw_json_data:
    j_data = json.load(raw_json_data)  # j_data is now the same dict as data above
print(j_data)
# >> {'IDs': ['2359835092385', '4634637576835', '3454574836835']}
To add items inside the list IDs, use the append method:
data['IDs'].append('adding something')  # or j_data['IDs'].append('SOMEthing')
print(data)
# >> {'IDs': ['2359835092385', '4634637576835', '3454574836835', 'adding something']}
To add a new key:
data['Names'] = ['Jack', 'Nick', 'Alice', 'Nancy']
print(data)
# >> {'IDs': ['2359835092385', '4634637576835', '3454574836835', 'adding something'], 'Names': ['Jack', 'Nick', 'Alice', 'Nancy']}
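Note that neither data nor j_data is written back to the file automatically. To persist the changes, a minimal sketch (assuming you want to overwrite filename.json):

with open('filename.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, indent=4)  # write the updated dict back to disk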

Related

JSON variable indent for different entries

Background: I want to store a dict object in JSON format that has, say, 2 entries:
(1) Some object that describes the data in (2). This is small data: mostly definitions, controlling parameters, and other things (call it metadata) that one would like to read before using the actual data in (2). In short, I want good human readability for this portion of the file.
(2) The data itself, a large chunk that should be more machine-readable (no need for a human to gaze over it when opening the file).
Problem: How do I specify a custom indent, say 4, for (1) and None for (2)? If I use something like json.dump(data, trig_file, indent=4) where data = {'meta_data': small_description, 'actual_data': big_chunk}, the large data also gets indented, adding a lot of whitespace and making the file large.
Assuming you can append json to a file:
Write {"meta_data":\n to the file.
Append the json for small_description formatted appropriately to the file.
Append ,\n"actual_data":\n to the file.
Append the json for big_chunk formatted appropriately to the file.
Append \n} to the file.
The idea is to do the JSON formatting of the "container" object by hand, using your JSON formatter as appropriate for each of the contained objects.
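A minimal sketch of this hand-rolled container formatting (small_description and big_chunk are placeholders for your real data):

import json

small_description = {'version': 1, 'units': 'seconds'}  # hypothetical metadata
big_chunk = list(range(100))                            # hypothetical bulk data

with open('out.json', 'w') as f:
    f.write('{"meta_data":\n')
    f.write(json.dumps(small_description, indent=4))       # human-readable part
    f.write(',\n"actual_data":\n')
    f.write(json.dumps(big_chunk, separators=(',', ':')))  # compact part
    f.write('\n}')

The result is still a single valid JSON document, just with mixed formatting.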
Consider a different file format, interleaving keys and values as distinct documents concatenated together within a single file:
{"next_item": "meta_data"}
{
"description": "human-readable content goes here",
"split over": "several lines"
}
{"next_item": "actual_data"}
["big","machine-readable","unformatted","content","here","....."]
That way you can pass any indent parameters you want to each write, and you aren't doing any serialization by hand.
See How do I use the 'json' module to read in one JSON object at a time? for how one would read a file in this format. One of its answers wisely suggests the ijson library, which accepts a multiple_values=True argument.
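And a hedged reading sketch with ijson (assuming the concatenated-documents layout above):

import ijson  # third-party: pip install ijson

with open('out.json', 'rb') as f:
    for doc in ijson.items(f, '', multiple_values=True):
        print(doc)  # alternates between the marker dicts and the payloads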

AWS Glue classifier for extracting JSON array values

I have files in S3 containing inline JSON, one document per line, of the structure:
{ "resources": [{"resourceType":"A","id":"A",...},{...}] }
If I run Glue over it, I get "resources: array" as the top-level element. However, I want the elements of the array to be inspected and used as the top-level table elements. All the elements per resources array will have the same schema. So I expect:
resourceType: string
id: string
....
Theoretically, a custom JSON classifier should handle this:
$.resources[*]
However, the path is not picked up, so I still get resources:array as the top-level element.
I could now run some pre-processing to extract the array elements myself and write them out line by line. However, I want to understand why my path is not working.
UPDATE 1:
It might be something about the JSON that I do not understand (it's valid JSON produced via Java Jackson). If I remove the outer object with the resources attribute and change the structure to
[{"resourceType":"A","id":"A",...},{...}]
then the classifier $[*] should pick the sub-objects up. But I still get array:array as the top-level element.
UPDATE 2:
It's indeed a formatting issue. If I change the JSON files to
[
    {"resourceType":"A","id":"A",...},{...}
]
$[*] starts to work.
UPDATE 3:
However, reformatting to the following does not fix the issue with $.resources[*]:
{
    "resources": [
        {"resourceType":"A","id":"A",...},{...}
    ]
}
UPDATE 4:
If I take my file and run it through an IntelliJ re-format, producing a JSON object where all nested elements have line breaks, it also starts working with $.resources[*]. Basically like UPDATE 3, just applied all the way down the structure.
{
    "resources": [
        {
            "resourceType": "A",
            "id": "A"
        },
        {
            ...
        }
    ]
}
What bothers me is that the requirements regarding the structure are still not clear to me, since UPDATE 2 worked but UPDATE 3 did not. I also find nowhere in the documentation a formal requirement regarding the JSON structure.
In this sense, I think I got to the conclusion of my own question, but the systematics remain a bit unclear.
To conclude here:
The issue is related to Glue's unclearly documented JSON formatting requirements.
Normalisation via json.dumps(my_json, separators=(',',':')) produces compact JSON that works for my use case.
I now normalise the content via a Lambda.
Lambda code as reference for whomever it may help:
import json
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket=my_bucket)  # my_bucket holds the source bucket name

for page in pages:
    try:
        contents = page["Contents"]
    except KeyError:  # a page without "Contents" means no (more) objects
        break
    for obj in contents:
        key = obj["Key"]
        obj = s3.get_object(Bucket=my_bucket, Key=key)
        j = json.loads(obj['Body'].read().decode('utf-8'))
        new_json = json.dumps(j, separators=(',', ':'))  # compact, single-line JSON
        target = 'nrmlzd/' + key
        s3.put_object(
            Body=new_json,
            Bucket=my_bucket,
            Key=target
        )

passing variable to json file while matching response in karate

I'm validating my response from a GET call against a .json file:
match response == read('match_response.json')
Now I want to reuse this file for various other features, as only one field in the .json varies. Let's say this param in the JSON file is "varyingField".
I'm trying to pass this field every time I match the response, but I am not able to:
def varyingField = 'VARIATION1'
match response == read('match_response.json') {'varyingField': '#(varyingField)'}
In the JSON file I have:
"varyingField": "#(varyingField)"
You are trying to pass an argument to read() for a JSON file? Sorry, such a thing is not supported in Karate; please read the docs.
Use this pattern:
create a JSON file that has all your "happy path" values set
use the read() syntax to load the file (which means this is re-usable across multiple tests)
use the set keyword to update only the field for your scenario or negative test (see the sketch below)
For more details, refer to this answer: https://stackoverflow.com/a/51896522/143475
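For instance, a minimal sketch of the pattern (reusing the names from the question; the read file holds the happy-path value for varyingField):
def expected = read('match_response.json')
set expected.varyingField = 'VARIATION1'
match response == expected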

append data to an existing json file

I'd appreciate it if someone could point me in the right direction here; I'm a bit new to Python :)
I have a JSON file that looks like this:
[
    {
        "user": "user5",
        "games": "game1"
    },
    {
        "user": "user6",
        "games": "game2"
    },
    {
        "user": "user5",
        "games": "game3"
    },
    {
        "user": "user6",
        "games": "game4"
    }
]
And I have a small CSV file that looks like this:
module_a,module_b
10,20
15,16
1,11
2,6
I am trying to append the CSV data into the above-mentioned JSON so it looks like this, keeping the order as it is:
[
    {
        "user": "user5",
        "module_a": "10",
        "games": "game1",
        "module_b": "20"
    },
    {
        "user": "user6",
        "module_a": "15",
        "games": "game2",
        "module_b": "16"
    },
    {
        "user": "user5",
        "module_a": "1",
        "games": "game3",
        "module_b": "11"
    },
    {
        "user": "user6",
        "module_a": "2",
        "games": "game4",
        "module_b": "6"
    }
]
What would be the best approach to achieve this while keeping the output order as it is?
I appreciate any guidance.
The JSON specification doesn't prescribe ordering, and ordering won't be enforced by any JSON parser (unless it's the default mode of operation of the underlying platform), so going a long way just to keep the order when processing JSON files is usually pointless. To quote:
An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array.
...
JSON parsing libraries have been observed to differ as to whether or not they make the ordering of object members visible to calling software. Implementations whose behavior does not depend on member ordering will be interoperable in the sense that they will not be affected by these differences.
That being said, if you really insist on order, you can parse your JSON into a collections.OrderedDict (and write it back from it), which will allow you to inject data at specific places while keeping the overall order. (On Python 3.7+, plain dicts preserve insertion order, so this matters mostly on older versions.) So, first load your JSON as:
import json
from collections import OrderedDict

with open("input_file.json", "r") as f:  # open the JSON file for reading
    json_data = json.load(f, object_pairs_hook=OrderedDict)  # read & parse it
Now that you have your JSON, you can go ahead and load up your CSV, and since there's not much else to do with the data, you can immediately apply it to json_data. One caveat, though: since there is no direct map between the CSV and the JSON, one has to assume the index is the map (i.e. the first CSV row applies to the first JSON element, etc.), so we'll use enumerate() to track the current index. There is also no info on where to insert individual values, so we'll assume that the first column goes after the first JSON object entry, the second goes after the second entry, and so on; and since they can have different lengths we'll use itertools.izip_longest() to interleave them. So:
import csv
from itertools import izip_longest  # use zip_longest on Python 3.x

with open("input_file.csv", "rb") as f:  # open the CSV file for reading
    reader = csv.reader(f)  # build a CSV reader
    header = next(reader)  # store the header so we can get the key names later
    for index, row in enumerate(reader):  # enumerate and iterate over the rest
        if index >= len(json_data):  # there are more CSV rows than elements in the JSON
            break
        row = [(header[i], v) for i, v in enumerate(row)]  # turn the row into (key, value) tuples
        # since collections.OrderedDict doesn't support random access by index we'll have to
        # rebuild it by mixing in the CSV elements with the existing JSON elements
        # use json_data[index].items() on Python 3.x
        data = (v for p in izip_longest(json_data[index].iteritems(), row)
                for v in p if v is not None)  # drop izip_longest's None padding
        # then finally overwrite the current element in json_data with a new OrderedDict
        json_data[index] = OrderedDict(data)
And with our CSV data nicely inserted into the json_data, all that's left is to write back the JSON (you may overwrite the original file if you wish):
with open("output_file.json", "w") as f: # open the output JSON file for writing
json.dump(json_data, f, indent=2) # finally, write back the modified JSON
This will produce the result you're after. It even respects the names in the CSV header, so you can replace them with bob and fred and it will insert those keys into your JSON. You can even add more of them if you need more elements added to your JSON.
Still, just because it's possible, you really shouldn't rely on JSON ordering. If it's user readability you're after, there are far more suitable formats with optional ordering, like YAML.
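For completeness, a minimal sketch of the same interleaving on Python 3.7+, where plain dicts already preserve insertion order (same assumed file names as above):

import csv
import json
from itertools import zip_longest

with open("input_file.json") as f:
    json_data = json.load(f)

with open("input_file.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # each row is a dict keyed by the CSV header

for index, (entry, row) in enumerate(zip(json_data, rows)):
    pairs = (p for pair in zip_longest(entry.items(), row.items())
             for p in pair if p is not None)  # interleave JSON and CSV pairs
    json_data[index] = dict(pairs)

with open("output_file.json", "w") as f:
    json.dump(json_data, f, indent=2)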

Multiple entries in one MySQL cell and JSON data

I have lots of folders, each named after a series, and every series folder has its own chapter folders. Each chapter folder contains some images. My website is a manga (comic) site, so I am going to record the folder and image paths in MySQL and return them as JSON data for use with AngularJS. How should I save these folder paths and names in MySQL to get proper JSON data for use with AngularJS?
My table is like this (it can change):
id    series_name    folder    path
1     Dragon Ball    788       01.jpg02.jpg03.jpg04.jpg05.jpg06.jpg..........
2     One Piece      332       01.jpg...................
3     One Piece      333       01.jpg02.jpg...........
I'm assuming you're using PHP on a LAMP stack. So first you would need to grab all your SQL fields and change them into JSON keys.
Create JSON-object the correct way
Then you can create your JSON object like this and pass it to Angular when it does an AJAX request. Make sure you create the array before placing it into your JSON object (for path).
{
    id: Number,
    series_name: String,
    folder: Number,
    path: [
        String, String, String, ...
    ]
}
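A hedged PHP sketch of that assembly, assuming the path column stored comma-separated filenames (see the EDIT below for why the current format makes this harder):

<?php
// $row stands in for a row fetched from MySQL
$row = ["id" => 1, "series_name" => "Dragon Ball", "folder" => 788,
        "path" => "01.jpg,02.jpg,03.jpg"];

$response = [
    "id"          => (int) $row["id"],
    "series_name" => $row["series_name"],
    "folder"      => (int) $row["folder"],
    "path"        => explode(",", $row["path"]),  // build the array first
];

echo json_encode($response);  // {"id":1,"series_name":"Dragon Ball",...}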
Here is the Angular documentation for an Angular GET request.
https://docs.angularjs.org/api/ng/service/$http
EDIT:
It's difficult because of how your filenames are formatted. If they were formatted like "01.jpg,02.jpg,03.jpg" it would be easier.
You can use preg_split with a regex:
$string = "01.jpg02.jpg03.jpg04.jpg05.jpg06.jpg";
$keywords = preg_split("/(\.jpg|\.png|\.bmp)/", $string);
but you would need them all to be the same extension, and then you would need to re-append the extension to each element after the split.
There may be a better way.
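One possibility, sketched here: preg_split's PREG_SPLIT_DELIM_CAPTURE flag keeps the matched extensions, so mixed extensions work and there is nothing to re-append by hand:

<?php
$string = "01.jpg02.png03.jpg";
$parts = preg_split("/(\.jpg|\.png|\.bmp)/", $string, -1,
                    PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);

$files = [];
for ($i = 0; $i + 1 < count($parts); $i += 2) {
    $files[] = $parts[$i] . $parts[$i + 1];  // re-join each name with its extension
}
print_r($files);  // Array ( [0] => 01.jpg [1] => 02.png [2] => 03.jpg )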