Python3: JSON to CSV

I have a JSON dict in Python that I would like to write out as a CSV. My data and code look like this:
import csv
import json

x = {
    "success": 1,
    "return": {
        "variable_id": {
            "var1": "val1",
            "var2": "val2"
        }...

f = csv.writer(open("foo.csv", "w", newline=''))
for x in x:
    f.writerow([x["success"],
                '--variable value--',
                x["return"]["variable_id"]["var1"],
                x["return"]["variable_id"]["var2"]])
However, since variable_id's value is going to change, I don't know how to refer to it in the code. Apologies if this is trivial, but I guess I lack the terminology to find the solution.

You can use the * (unpack) operator to do this, assuming only the values inside your variable_id matter:
f.writerow([x["success"],
            '--variable value--',
            *[val for variable_id in x['return'].values() for val in variable_id.values()]])
The unpack operator essentially takes everything in x["return"]["variable_id"].values() and unpacks it into the list you're building as the input for writerow.
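As a quick illustration of what the * does inside a list literal (toy values, not the question's data):

values = {"var1": "val1", "var2": "val2"}
row = [1, '--variable value--', *values.values()]
# row is now [1, '--variable value--', 'val1', 'val2']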
EDIT: this should now work even if you don't know how to reference variable_id. It will work best if you have several variable_ids in x['return'].
If you only have one variable_id, then you can also try this:
f.writerow([x["success"],
            '--variable value--',
            *list(x['return'].values())[0].values()])
Or
f.writerow([x["success"],
            '--variable value--',
            *next(iter(x['return'].values())).values()])
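Putting it together, a minimal sketch of the whole flow when the key under "return" is not known in advance; the sample dict and the '--variable value--' placeholder are taken from the question, the rest is illustrative:

import csv

x = {
    "success": 1,
    "return": {
        "variable_id": {
            "var1": "val1",
            "var2": "val2"
        }
    }
}

with open("foo.csv", "w", newline='') as out:
    writer = csv.writer(out)
    variable_id = next(iter(x["return"]))   # the single, unknown key name
    values = x["return"][variable_id]
    writer.writerow([x["success"], '--variable value--', *values.values()])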

You can get variable_id's key name with list(x['return'].keys())[0] (or next(iter(x['return']))).

Related

How to select right values in JSON file in pyspark

I have a JSON file similar to this:
"code": 298484,
"details": {
"date": "0001-01-01",
"code" : 0
}
code appears twice: the top-level one is filled and the one inside details is empty. I need the first one, along with the data in details. What is the approach in PySpark?
I tried to filter
df = rdd.map(lambda r: (r['code'], r['details'])).toDF()
But it shows _1, _2 (no schema).
Please try the following:
spark.read.json("path to json").select("code", "details.date")
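A slightly fuller sketch of the same idea, assuming an active SparkSession and an illustrative file path:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# spark.read.json infers the schema, so the nested date can be reached with
# dot notation; alias() just gives the result a flat column name.
# If the file is one pretty-printed object rather than JSON Lines,
# add .option("multiLine", True) before .json(...).
df = spark.read.json("path/to/file.json")
df.select("code", col("details.date").alias("date")).show()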

Check if a value exists in a json file with python

I have the following JSON file (banneds.json):
{
    "players": [
        {
            "avatar": "https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/07/07aa315f664efa92456569429230bc2c254c3ff8_full.jpg",
            "created": 1595050663,
            "created_by": "<#128152620136267776>",
            "nick": "teste",
            "steam64": 76561198046619692
        },
        {
            "avatar": "https://steamcdn-a.akamaihd.net/steamcommunity/public/images/avatars/21/21fa5c468597e9c890212b2e3bdb0fac781c040c_full.jpg",
            "created": 1595056420,
            "created_by": "<#128152620136267776>",
            "nick": "ingridão",
            "steam64": 76561199058918551
        }
    ]
}
I want to insert new values if the value entered by the user is not already in the JSON. However, when I try to check whether the value is already there, I get a false result. Here is an example of what I'm doing (not the original code, only an example):
import json

check = 76561198046619692
with open('banneds.json', 'r') as file:
    data = json.load(file)
    if check in data:
        print(True)
    else:
        print(False)
I always get the False result even though the value is there. Can someone shed some light on what I'm doing wrong? I spent the entire night trying to find a solution, but nothing works :(
Thanks for the help!
You are checking data as a dictionary object. When you write if check in data, it checks whether data has a key equal to the value of the check variable (data.keys() lists all keys).
One quick (if crude) way would be if str(check) in str(data["players"]), which converts the players list to a string and searches for a substring match.
If you want to make sure that the check value is only compared against the steam64 values, you can write a simple function that iterates over all "players" and checks their "steam64" values. Another solution would be to build a list of "steam64" values for faster and easier checking, as sketched below.
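A minimal sketch of that last idea, assuming the banneds.json structure shown in the question:

import json

with open('banneds.json', 'r') as f:
    data = json.load(f)

# collect every steam64 once; membership tests then become trivial
banned_ids = [player['steam64'] for player in data['players']]

check = 76561198046619692
print(check in banned_ids)   # True for the sample file above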
You can use any() to check whether the value of the steam64 key is there.
For example:
import json

def check_value(data, val):
    return any(player['steam64'] == val for player in data['players'])

with open('banneds.json', 'r') as f_in:
    data = json.load(f_in)

print(check_value(data, 76561198046619692))
Prints:
True
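Since the original goal was to insert a new value only when it is missing, here is a sketch of how that check could be combined with an append and a write-back; the new_player dict is purely illustrative:

import json

def check_value(data, val):
    return any(player['steam64'] == val for player in data['players'])

with open('banneds.json', 'r') as f_in:
    data = json.load(f_in)

new_player = {"nick": "example", "steam64": 76561190000000000}   # hypothetical entry
if not check_value(data, new_player["steam64"]):
    data['players'].append(new_player)
    with open('banneds.json', 'w') as f_out:
        json.dump(data, f_out, indent=4)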

Python: import JSON file into SQLAlchemy JSON field

I'm relatively new to Python, so I'm hoping that I've just missed something really obvious... but all the similar questions/answers here on StackOverflow seem overly complex for the simple task that I am trying to achieve.
I have a few hundred text files containing JSON data. The actual data structure isn't important; the block below just shows the kind of thing I have. The real structure could be wildly different, but it will always be valid JSON.
{
    "config": {
        "item1": "value1",
        "item2": "value2"
    },
    "data": [
        {
            "dataA1": "valueA1",
            "itemA2": "valueA2"
        },
        {
            "dataB1": "valueB1",
            "itemB2": "valueB2",
            "itemB3": "valueB3"
        }
    ]
}
My Model is something like this:
class ModelName(db.Model):
    __tablename__ = 'table_name'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))
    data1 = db.Column(db.JSON)
    data2 = db.Column(db.JSON)
I have multiple data columns here, data1 and data2, simply so I can do a visual comparison of the inserted data. The final model will only have a single data field.
Here is the data insert where everything seems to be going wrong:
import json

new_record = ModelName(
    name='Foo',
    data1=open('./filename.json').read(),
    data2=json.dumps(open('./filename.json').read(), indent=2)
)
try:
    db.session.add(new_record)
    db.session.commit()
    print('Insert successful')
except:
    print('Insert failed')
The data that ends up in data1 and data2 gets littered with varying numbers of \ escaping the double quotes and line breaks, and the whole inserted value is wrapped in a set of double quotes. As a result, the data is simply unusable, so I'm currently copying and pasting it into the DB manually. Although that tedious approach works, it is far from the right thing to do.
I don't need to edit, manipulate, or do anything to the data in any way. I simply want to read the JSON string from a given file and then insert its content into a record in the database, that is it, end of story, nothing else.
Is there really no SIMPLE way to achieve this?
When you read in a file, you need json.loads(), and there is no indent kwarg for it.
So instead do:
data2=json.loads(open('filename.json').read())
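Put differently, a minimal sketch of the corrected insert: parse the file once and hand the resulting dict straight to the JSON column, which SQLAlchemy serializes on commit (using the question's model with a single data field):

import json

with open('./filename.json') as f:
    parsed = json.load(f)              # a dict, not a string

new_record = ModelName(
    name='Foo',
    data1=parsed                       # a db.JSON column accepts the parsed object directly
)
db.session.add(new_record)
db.session.commit()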

Assigning Variables from JSON in Python

I've searched across dozens of answers over the last week, but I haven't been able to find an example of what I'm trying to do. I'm happy to be pointed to something I've missed, and I'm new to Python, so I apologise if this is trivial.
I'm trying to read in a configuration from a JSON file so that I can abstract the configuration from the script itself.
I want to be able to assign the configuration value to a variable and perform an action on it, before moving on to the next category in the nested list; the categories could change or expand over time (music, pictures, etc.).
The JSON file (library.json) currently looks like this:
{"media":{
"tv": [{
"source": "/tmp/tv",
"dest": "/tmp/dest"
}],
"movies": [{
"source": "/tmp/movies",
"dest": "/tmp/dest"
}]
}}
The relevant script looks like this:
import json

with open(libfile) as data_file:
    data = json.load(data_file)

for k, v in (data['media']['tv']):
    print(k, v)
What I was hoping to see as output was:
dest /tmp/dest
source /tmp/tv
What I am seeing is:
dest source
It feels like I'm missing something simple.
This works:
import json

with open('data.json') as json_file:
    data = json.load(json_file)

for p in data['media']['tv']:
    dst = p['dest']
    src = p['source']
    print(src, dst)
Something like this? It uses f-strings and zip(), which aggregates the keys and values.
import json

with open("dummy.json") as data_file:
    data = json.load(data_file)

for i, j in data["media"].items():
    print(i)
    print("\n".join(f'{k} {l}' for k, l in zip(j[0].keys(), j[0].values())))
    print("\n")
Output:
tv
source /tmp/tv
dest /tmp/dest
movies
source /tmp/movies
dest /tmp/dest
The problem here is that data['media']['tv'] is actually a list of dictionaries.
You can tell because it looks like this: "movies": [{.. (Note the bracket [)
That means that instead of this:
for k, v in (data['media']['tv']):
    print(k, v)
You should be doing this:
for dct in data['media']['tv']:
    for k, v in dct.items():
        print(k, v)
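Since the question mentions that the categories under media may grow over time (music, pictures, etc.), here is a small sketch that iterates whatever categories are present in the same library.json structure:

import json

with open('library.json') as data_file:
    data = json.load(data_file)

for category, entries in data['media'].items():   # tv, movies, and any future category
    for entry in entries:                          # each entry is a dict with source/dest
        print(category, entry['source'], entry['dest'])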

Python: create json query

I'm trying to get Python to create JSON formatted like this:
[
    {
        "machine_working": true
    },
    {
        "MachineName": "TBL165-169",
        "MachineType": "Rig Test"
    }
]
However, I can't seem to do it. This is the code I currently have, but it's giving me an error:
this_is_a_dict_too=[]
this_is_a_dict_too = dict(State="on",dict(MachineType="machinetype1",MachineName="MachineType2"))
File "c:\printjson.py", line 40
this_is_a_dict_too = dict(Statedsf="test",dict(MachineType="Rig Test",MachineName="TBL165-169")) SyntaxError: non-keyword arg after
keyword arg
this_is_a_dict_too = [dict(machine_working=True),dict(MachineType="machinetype1",MachineName="MachineType2")]
print(this_is_a_dict_too)
You are trying to build a dictionary inside a dictionary; the error message says that you are trying to add an element without a name (a corresponding key).
dict(a='b', b=dict(state='on'))
will work, but
dict(a='b', dict(state='on'))
won't.
The thing that you presented is a list, so you can use
list((dict(a='b'), dict(b='a')))
Note that the example above uses two dictionaries packed into a tuple.
or
[ dict(a='b'), dict(b='a') ]
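To actually produce the JSON text shown in the question, pass the list of dicts through json.dumps; a minimal sketch using the values from the question:

import json

machines = [
    dict(machine_working=True),
    dict(MachineName="TBL165-169", MachineType="Rig Test")
]

# json.dumps turns the Python structure into the JSON string from the question
print(json.dumps(machines, indent=4))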