Value Error while trying to read json file

In Django, I am trying to read the countries to cities json file that's available here: https://raw.githubusercontent.com/David-Haim/CountriesToCitiesJSON/master/countriesToCities.json
I have downloaded the file locally into my static assets folder and I am doing the following to open, read and push all cities into another array
obj = []
filename = 'static/json/countriesToCities.json'
with open(filename, "r") as f:
    data = json.loads(f.read())
    for key, values in data:
        obj.append(key[0])
However, this gives me the following error:
ValueError at /citiesUrl/
No JSON object could be decoded
How do I push all the values of each key into a new array?

Use json.load instead of json.loads (the first is for file objects, the second for strings).
I've tested your JSON and it works:
import json

json_data = open('/Users/madzohan/Downloads/data.json', 'r')
data = json.load(json_data)
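To answer the original question (collecting all the cities), a minimal sketch, assuming the linked countriesToCities.json maps each country name to a list of its city names:

import json

obj = []
with open('static/json/countriesToCities.json', encoding='utf-8') as f:
    data = json.load(f)

# data is assumed to map each country name to a list of its cities
for country, cities in data.items():
    obj.extend(cities)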

Related

How can I get data from external API JSON file in ODOO

[{"Id":"605a321e-7c10-49e4-9d34-ba03c4b34f69","Url":"","Type":"INBOUND_OUTBOUND",
"ClearCurrentData":true,"FillOutFormFields":true,"RequestProtocol":"HTTP_REQUEST",
"FileToSend":"NONE","SendDefaultData":false,"SendDetectionData":false,"ShowDialogMessage":false,"IsActive":false,"SendingTrigger":"MANUALLY","TCPSocketMethod":"","TriggerButtonName":"Get data"}]
This is the JSON returned by an external API call. How can I get the data into Odoo? Any solutions, please?
The API JSON payload is shown above.
I only have the external JSON file; I don't have a table name or a database name. Is it still possible to get the data into Odoo?
Create a model for this data, for example:
from odoo import models, fields, api

class apiCall(models.Model):
    _name = 'apiCall.api'

    id = fields.Char(string='Id')
    url = fields.Char(string='Url')

    @api.model
    def call_api(self):
        # the result of calling the external API is "result"
        for line in self:
            line.id = result[0]['Id']
            line.url = result[0]['Url']
You can call a method on a button click and access the JSON file. Below is the code for accessing the JSON file:
import json

def api_call(self):
    # Opening JSON file
    f = open('file_name.json')
    # json.load returns the JSON content as a Python object
    data = json.load(f)
    # Iterating through the JSON list
    for i in data:
        print(i)
    # Closing file
    f.close()
Below is the button code, which goes in the XML file:
<button name="api_call" type="object" string="Call API" class="oe_highlight" />
This way you can load the JSON file from the external API in Odoo.
In your custom module (my_custom_module), move your JSON file to its subdirectory static/src, then:
import json
from odoo.modules.module import get_module_resource

def call_api(self):
    json_filepath = get_module_resource('my_custom_module', 'static/src/', 'my_api_cred.json')
    with open(json_filepath) as f:
        data = json.load(f)
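To fetch the JSON from the external API itself rather than from a local file, a minimal sketch using the requests library (the endpoint URL is a placeholder, and the field names follow the sample payload above):

import requests

def call_api(self):
    # Hypothetical endpoint; replace with the real API URL
    response = requests.get('https://example.com/api/endpoint')
    response.raise_for_status()
    result = response.json()  # the sample payload is a list with one dict
    return {
        'id': result[0]['Id'],
        'url': result[0]['Url'],
        'trigger': result[0]['SendingTrigger'],
    }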

load a json file containing list of strings

I have a json file containing a list of strings like this:
['Hello\nHow are you?', 'What is your name?\nMy name is john']
I have to read this file and store it as a list of strings, but I am confused about how to read a JSON file like this. Also, I should use UTF-8 encoding.
Let's assume you have one or more such lines in the file. Here is my suggestion (remember to replace the file name test.json with yours):
import ast

with open("test.json", "r") as input_file:
    line_list = input_file.readlines()

all_texts = [item for sublist in line_list for item in ast.literal_eval(sublist)]
print(all_texts)
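Since the question also asks for UTF-8, the same approach with an explicit encoding (a sketch, assuming one Python-style list literal per line as above):

import ast

with open("test.json", "r", encoding="utf-8") as input_file:
    # Each line holds a list literal; evaluate it and flatten the results
    all_texts = [item for line in input_file for item in ast.literal_eval(line)]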
The file you have shown is not in JSON format. Anyway, to read an actual JSON file you have to do the following:
import json

with open('path/to/file.json') as f:
    jsonObj = json.load(f)
This will parse the file and store the resulting object in jsonObj.

Combining and loading Json content as python dictionary

I have 100 JSON files (file1 to file100) in my directory. All 100 have the same fields, and my aim is to load all the contents into one dictionary or dataframe. Basically, the content of each file (i.e. file1 to file100) will be a row of my dictionary or dataframe.
To test the code first, I wrote a script to load the contents of one JSON file:
import json
import traceback

file2 = open(r"\Users\sbz\file1.txt", "w+")

def read_json_file(file2):
    with open(file2, "r") as f:
        try:
            return json.load(f)
        except Exception:
            traceback.print_exc()
For combining, I wrote this:
def combine_dictionaries(dictionary_list):
    my_dictionary = {}
    for key in dictionary_list:
        my_dictionary.update(key)
    return my_dictionary
I am unable to load the file or display the contents of the dictionary using print(file2).
Is there something I am missing? Or is there a better way to loop over all 100 files and load them as a single dictionary?
If json.load isn't working, my guess is that your JSON file is probably formatted incorrectly. Try getting it to work with a simple file like:
{
    "test": 0
}
After that works, then try loading one of your 100 files. I copy-pasted your read_json_file function and I'm able to see the data in my file: print(read_json_file("data.json"))
For looping through the files and combining them:
It doesn't look like your combine_dictionaries function is 100% there yet for what you want to do. update doesn't merge the dictionaries into rows as you want; it replaces the keys of one dictionary with the keys of another, and since each file has the same fields the resulting dictionary will just be the last one in the list.
Technically, a list of dictionaries is already a list of rows, which is what you want, and you can index the list by row number: for example, list_of_dictionaries[0] will get the dictionary created from file1 if you fill the list in order of file1 to file100. If you want to go further than file numbers, you can put all of these dictionaries into another dictionary, provided you can generate a unique key for each one:
def combine_dictionaries(dictionary_list):
    my_dictionary = {}
    for dictionary in dictionary_list:
        my_dictionary[generate_key(dictionary)] = dictionary
    return my_dictionary
Where generate_key is a function that will return a key unique to that dictionary. Now combined_dictionary.get(0) will get file1's dictionary, and combined_dictionary.get(0).get("somefield") will get the "somefield" data from file1.
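To then loop over all 100 files, a minimal sketch using glob (the directory and pattern are assumptions based on the path in the question; adjust them to your real file names):

import glob
import json

dictionary_list = []
# Note: lexicographic order puts file10 before file2; sort numerically if order matters
for path in sorted(glob.glob(r"\Users\sbz\file*.txt")):
    with open(path, "r") as f:
        dictionary_list.append(json.load(f))  # one dict per file = one "row"

# dictionary_list can now be passed to combine_dictionaries, or straight
# into pandas: pandas.DataFrame(dictionary_list) gives one row per file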

Convert csv to json using Python

This is my code and it converts successfully. However, when I import this JSON into Firebase, it states that the JSON file is invalid.
import csv
import json

csvfile = open('C:/Users/Senior/seaborn-data/Denver DatasetCleaning Finalize.csv', 'r')
jsonfile = open('C:/Users/Senior/seaborn-data/Denver DatasetCleaning Finalize.json', 'w')
fieldnames = ("OFFENSE_CODE ", "OFFENSE_CATEGORY_ID", "FIRST_OCCURRENCE_DATE", "DATE",
              "YEAR", "MONTH", "DAY", "TIME", "HOUR", "MINUTE", "INCIDENT_ADDRESS",
              "GEO_LON", "GEO_LAT", "NEIGHBORHOOD_ID")
reader = csv.DictReader(csvfile, fieldnames)
for row in reader:
    json.dump(row, jsonfile)
    jsonfile.write('\n')
Each time json.dump is called it outputs valid JSON, but several JSON strings concatenated together are no longer valid JSON.
What you probably want to do is read the entire CSV into a variable and then json.dump that, as sketched below.
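A minimal sketch of that approach: collect every row into a list, then dump the list once, so the output is a single top-level JSON array (this assumes the CSV's first row is a header; otherwise pass the fieldnames tuple as in the question):

import csv
import json

with open('C:/Users/Senior/seaborn-data/Denver DatasetCleaning Finalize.csv', 'r') as csvfile:
    # Without an explicit fieldnames argument, DictReader reads the
    # column names from the CSV's header row
    rows = list(csv.DictReader(csvfile))

with open('C:/Users/Senior/seaborn-data/Denver DatasetCleaning Finalize.json', 'w') as jsonfile:
    json.dump(rows, jsonfile)  # one JSON array: a single valid JSON document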

How to read whole file in one string

I want to read a JSON or XML file in PySpark. My file is split across multiple lines in
rdd= sc.textFile(json or xml)
Input
{
    "employees":
    [
        {
            "firstName": "John",
            "lastName": "Doe"
        },
        {
            "firstName": "Anna"
        }
    ]
}
Input is spread across multiple lines.
Expected Output: {"employees":[{"firstName":"John",......]}
How to get the complete file in a single line using pyspark?
There are 3 ways (I invented the 3rd one; the first two are standard built-in Spark functions); the solutions here are in PySpark:
textFile, wholeTextFiles, and a "labeled" textFile (key = file, value = 1 line from file; this is kind of a mix between the two standard ways to parse files).
1.) textFile
input:
rdd = sc.textFile('/home/folder_with_text_files/input_file')
output: array containing 1 line of the file in each entry, i.e. [line1, line2, ...]
2.) wholeTextFiles
input:
rdd = sc.wholeTextFiles('/home/folder_with_text_files/*')
output: array of tuples; the first item is the "key" with the filepath, the second item contains one file's entire contents, i.e.
[(u'file:/home/folder_with_text_files/', u'file1_contents'), (u'file:/home/folder_with_text_files/', u'file2_contents'), ...]
3.) "Labeled" textFile
input:
import glob
from pyspark import SparkContext
from pyspark.sql import SQLContext

SparkContext.stop(sc)
sc = SparkContext("local", "example")  # if running locally
sqlContext = SQLContext(sc)

Spark_Full = sc.emptyRDD()  # start empty and union in one RDD per file
for filename in glob.glob(Data_File + "/*"):  # Data_File = path to the folder of files
    Spark_Full += sc.textFile(filename).keyBy(lambda x: filename)
output: array with each entry containing a tuple using the filename as key, with value = each line of the file. (Technically, using this method you can also use a different key besides the actual filepath name, perhaps a hashing representation to save on memory.) i.e.
[('/home/folder_with_text_files/file1.txt', 'file1_contents_line1'),
('/home/folder_with_text_files/file1.txt', 'file1_contents_line2'),
('/home/folder_with_text_files/file1.txt', 'file1_contents_line3'),
('/home/folder_with_text_files/file2.txt', 'file2_contents_line1'),
...]
You can also recombine either as a list of lines:
Spark_Full.groupByKey().map(lambda x: (x[0], list(x[1]))).collect()
[('/home/folder_with_text_files/file1.txt', ['file1_contents_line1', 'file1_contents_line2','file1_contents_line3']),
('/home/folder_with_text_files/file2.txt', ['file2_contents_line1'])]
Or recombine entire files back into single strings (in this example the result is the same as what you get from wholeTextFiles, but with the string "file:" stripped from the file paths):
Spark_Full.groupByKey().map(lambda x: (x[0], ' '.join(list(x[1])))).collect()
If your data is not formed on one line as textFile expects, then use wholeTextFiles.
This will give you the whole file so that you can parse it down into whatever format you would like.
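For the JSON case specifically, a minimal sketch in PySpark: read each file as one string with wholeTextFiles, then parse it (this assumes each file holds one complete, valid JSON document):

import json

rdd = sc.wholeTextFiles('/home/folder_with_text_files/*')
# Each element is (filepath, whole_file_contents); parse the contents
parsed = rdd.map(lambda pair: json.loads(pair[1]))
parsed.collect()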
This is how you would do it in Scala:
val rdd = sc.wholeTextFiles("hdfs://nameservice1/user/me/test.txt")
rdd.collect.foreach(t => println(t._2))
"How to read whole [HDFS] file in one string [in Spark, to use as sql]":
e.g.
// Put file to hdfs from edge-node's shell...
hdfs dfs -put <filename>
// Within spark-shell...
// 1. Load file as one string
val f = sc.wholeTextFiles("hdfs:///user/<username>/<filename>")
val hql = f.take(1)(0)._2
// 2. Use string as sql/hql
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val results = hiveContext.sql(hql)
Python way
rdd = spark.sparkContext.wholeTextFiles("hdfs://nameservice1/user/me/test.txt")
json = rdd.collect()[0][1]
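On newer Spark versions there is also a DataFrame shortcut for multi-line JSON; a sketch, assuming Spark 2.2+ where the JSON reader's multiLine option is available:

df = spark.read.option("multiLine", "true").json("hdfs://nameservice1/user/me/test.txt")
df.show()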