In my Django project, my View is converting a ValuesQuerySet to a JSON string:
import json
# ...
device_list = list(Device.objects.values())
device_json = json.dumps(device_list)
The resulting JSON string:
[{"field1": "value", "location_id": 1, "id": 1, "field2": "value"},
{...}]
How can I include the data within the location object represented by "location_id": 1, instead of the ID number? Something like this:
[{"field1": "value", "location_name": "name", "location_region": "region", "another_location_field": "value", "id": 1, "field2": "value"},
{...}]
I found that you can use Field Lookups to follow relationships and access fields in another related model:
import json
# ...
device_list = list(Device.objects.values('field1', 'field2', 'location__name', 'location__region'))
json.dumps(device_list)
The resulting JSON string:
[{"field1": "value", "field2": "value", "location__name": "name", "location__region": "region"},
{...}]
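If you want keys like `location_name` (as in the desired output) rather than the ORM's `location__name`, one option is to rename the keys in Python before serializing. A minimal sketch, using a hard-coded list in place of the queryset:

```python
import json

# Stand-in for list(Device.objects.values('field1', 'field2',
#                                         'location__name', 'location__region'))
device_list = [
    {"field1": "value", "field2": "value",
     "location__name": "name", "location__region": "region"},
]

# Swap the ORM's double-underscore separator for a single underscore.
renamed = [
    {key.replace("__", "_"): value for key, value in row.items()}
    for row in device_list
]

device_json = json.dumps(renamed)
print(device_json)
```

This keeps the database query unchanged and only touches the serialization step.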
I have a JSONB column "deps" like this
[
  [{"name": "A"}, "823"],
  [{"name": "B"}, "332"],
  [{"name": "B"}, "311"]
]
I want to set a column "stats" to NULL for all rows where the JSON array in the column "deps" contains a tuple with "name" "B". In the example above the column "deps" has two such tuples.
Is it possible?
The dictionary {"name": "B"} always comes first in the tuple.
Would the same search in this JSON be faster:
[
  {"id": {"name": "A"}, "value": "823"},
  {"id": {"name": "B"}, "value": "332"},
  {"id": {"name": "B"}, "value": "311"}
]
You could use PostgreSQL's @? operator together with a jsonpath expression to check whether your JSONB value contains {"name": "B"} as the first element of any of the tuples.
Here is an example with the JSONB blob from the stated question:
-- returns 'true' if any first tuple value contains an object with key "name" and value "B"
SELECT '[[{"name": "A"}, "823"], [{"name": "B"}, "332"], [{"name": "B"}, "311"]]'::JSONB @? '$[*][0].name ? (@ == "B")';
Now you can combine this with your UPDATE logic:
UPDATE my_table
SET stats = NULL
WHERE deps @? '$[*][0].name ? (@ == "B")';
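To sanity-check the path expression outside the database, the same predicate ("does the first element of any tuple have "name" equal to "B"?") can be written in a few lines of Python against the sample data:

```python
import json

deps = json.loads(
    '[[{"name": "A"}, "823"], [{"name": "B"}, "332"], [{"name": "B"}, "311"]]'
)

# Same check as the jsonpath filter: does the first element
# of any tuple carry "name" == "B"?
matches = any(
    isinstance(pair[0], dict) and pair[0].get("name") == "B"
    for pair in deps
)
print(matches)  # True
```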
You can use the containment operator @> if the content has the structure of your first example:
update the_table
set stats = null
where deps @> '[[{"name": "B"}]]'
For the structure in the second example, you would need to use:
where deps @> '[{"id": {"name": "B"}}]'
Data sample:
import pandas as pd
patients_df = pd.read_json('C:/MyWorks/Python/Anal/data_sample.json', orient="records", lines=True)
patients_df.head()
My JSON data sample:
"data1": {
  "id": "myid",
  "seatbid": [
    {
      "bid": [
        {
          "id": "myid",
          "impid": "1",
          "price": 0.46328014,
          "adm": "adminfo",
          "adomain": [
            "domain.com"
          ],
          "iurl": "url.com",
          "cid": "111",
          "crid": "1111",
          "cat": [
            "CAT-0101"
          ],
          "w": 0,
          "h": 0
        }
      ],
      "seat": "27"
    }
  ],
  "cur": "USD"
},
What I want to do is to check if there is a "cat" value in my very large JSON data.
The "cat" value may or may not exist, and I'm trying to use Python Pandas to check for it.
for seatbid in patients_df["win_res"]:
    for bid in seatbid["seatbid"]:
I tried to access the JSON data with a loop like that, but it's not being accessed properly.
I simply want to check whether "cat" exists or not.
You can use Python's json library as follows (assuming the raw JSON string is in patient_json):
import json

patient_data = json.loads(patient_json)
if "cat" in patient_data:
    print("Key exists in JSON data")
else:
    print("Key doesn't exist in JSON data")
Note that the in test only checks top-level keys.
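Since "cat" sits several levels deep (inside seatbid[].bid[]), a top-level membership test won't find it. A small recursive helper, sketched here against a trimmed-down version of the sample above, searches every nested dict and list for the key:

```python
def key_exists(obj, key):
    """Recursively search nested dicts/lists for `key`."""
    if isinstance(obj, dict):
        if key in obj:
            return True
        return any(key_exists(v, key) for v in obj.values())
    if isinstance(obj, list):
        return any(key_exists(item, key) for item in obj)
    return False

# Trimmed-down version of the sample structure
data = {
    "id": "myid",
    "seatbid": [{"bid": [{"id": "myid", "cat": ["CAT-0101"]}], "seat": "27"}],
    "cur": "USD",
}
print(key_exists(data, "cat"))      # True
print(key_exists(data, "missing"))  # False
```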
I have about 100 JSON files, all titled with different dates and I need to merge them into one CSV file that has headers "date", "real_name", "text".
There are no dates listed in the JSON itself, and the real_name is nested. I haven't worked with JSON in a while and am a little lost.
The basic structure of the JSON looks more or less like this:
Filename: 2021-01-18.json
[
  {
    "client_msg_id": "xxxx",
    "type": "message",
    "text": "THIS IS THE TEXT I WANT TO PULL",
    "user": "XXX",
    "user_profile": {
      "first_name": "XXX",
      "real_name": "THIS IS THE NAME I WANT TO PULL",
      "display_name": "XXX",
      "is_restricted": false,
      "is_ultra_restricted": false
    },
    "blocks": [
      {
        "type": "rich_text",
        "block_id": "yf=A9"
      }
    ]
  }
]
So far I have
import glob

read_files = glob.glob("*.json")
output_list = []
all_items = []

for f in read_files:
    with open(f, "rb") as infile:
        output_list.append(json.load(infile))

    data = {}
    for obj in output_list[]
        data['date'] = f
        data['text'] = 'text'
        data['real_name'] = 'real_name'
        all_items.append(data)
Once you've read the JSON object, just index into the dictionaries for the data. You might need obj[0]['text'], etc., if each file really contains a list, but that seems odd, so I'm assuming the list wrapper came from output_list after you'd collected the data. Assuming each file's content looks exactly like this:
{
  "client_msg_id": "xxxx",
  "type": "message",
  "text": "THIS IS THE TEXT I WANT TO PULL",
  "user": "XXX",
  "user_profile": {
    "first_name": "XXX",
    "real_name": "THIS IS THE NAME I WANT TO PULL",
    "display_name": "XXX",
    "is_restricted": false,
    "is_ultra_restricted": false
  },
  "blocks": [
    {
      "type": "rich_text",
      "block_id": "yf=A9"
    }
  ]
}
test.py:
import json
import glob
from pathlib import Path

read_files = glob.glob("*.json")
all_items = []

for f in read_files:
    with open(f, "rb") as infile:
        obj = json.load(infile)

    data = {}
    data['date'] = Path(f).stem  # filename without the .json extension
    data['text'] = obj['text']
    data['real_name'] = obj['user_profile']['real_name']
    all_items.append(data)

print(all_items)
Output:
[{'date': '2021-01-18', 'text': 'THIS IS THE TEXT I WANT TO PULL', 'real_name': 'THIS IS THE NAME I WANT TO PULL'}]
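Since the goal is a single CSV with headers "date", "real_name", "text", the collected all_items list can then be written out with the standard library's csv.DictWriter, for example (the output filename merged.csv is arbitrary):

```python
import csv

# Rows as collected above; shown hard-coded here for illustration.
all_items = [
    {"date": "2021-01-18",
     "text": "THIS IS THE TEXT I WANT TO PULL",
     "real_name": "THIS IS THE NAME I WANT TO PULL"},
]

with open("merged.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.DictWriter(out, fieldnames=["date", "real_name", "text"])
    writer.writeheader()   # header row: date,real_name,text
    writer.writerows(all_items)
```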
Tools: Spring Boot v2.1.3.RELEASE, MySQL 5.7
I have a table with a column of type JSON named "properties".
I use the jdbcTemplate.queryForList(sql) method to read from this table.
The REST service returns something like this:
[
  {
    "id": 1,
    "name": "users",
    "properties": "{\"prop1\": \"value1\"}",
    "description": "smpl descr1",
    "log_enabled": false
  },
  {
    "id": 2,
    "name": "members",
    "properties": null,
    "description": "sample description 2",
    "log_enabled": true
  }
]
As you can see, the "properties" value is returned as a String.
How can I force JdbcTemplate to convert the data from the JSON column into JSON instead of a String?
Expected result:
[
  {
    "id": 1,
    "name": "users",
    "properties": {
      "prop1": "value1"
    },
    "description": "smpl descr1",
    "log_enabled": false
  },
  {
    "id": 2,
    "name": "members",
    "properties": null,
    "description": "sample description 2",
    "log_enabled": true
  }
]
Unfortunately, JdbcTemplate has no such feature. You have to convert the JSON string to a Java object yourself using your favourite JSON library.
For example, in the case of Jackson, you can convert any JSON string to a Map using:
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

ObjectMapper mapper = new ObjectMapper();
String json = "{\"prop1\": \"value1\", \"prop2\": 123}";
Map<String, Object> result = mapper.readValue(json, new TypeReference<Map<String, Object>>() {});
result.get("prop1"); // "value1"
result.get("prop2"); // 123
I have the contacts.json file:
{
  "emergencyContacts": [
    {
      "name": "Jane Doe",
      "phone": "888-555-1212",
      "relationship": "spouse"
    },
    {
      "name": "Justin Doe",
      "phone": "877-123-1212",
      "relationship": "parent"
    }
  ]
}
So I want to access the "name" key inside the "emergencyContacts" array in Julia. I'm trying this:
import JSON

dict = Dict()
open("contacts.json", "r") do f
    global dict
    dicttxt = read(f, String)   # file contents as a String (readstring was removed in Julia 1.0)
    dict = JSON.parse(dicttxt)  # parse and transform data
end

for values in dict["emergencyContacts"]
    println(values)
end
This is a poorly specified question:
There is no "firstname" key.
There is no "Employees" array.
Presumably, you are looking for:
julia> first_names = String[]
0-element Array{String,1}
julia> for contact in dict["emergencyContacts"]
           push!(first_names, split(contact["name"], " ")[1])
       end
julia> first_names
2-element Array{String,1}:
"Jane"
"Justin"
The nested key "name" can be extracted for a given array element using dict["emergencyContacts"][n]["name"], where n is the array index.