Flask read uploaded json file - json

I'm uploading a json file via flask, but I'm having trouble actually reading what is in the file.
# named fJson b/c of other json imports
from flask import json as fJson

@app.route('/upload', methods=['GET', 'POST'])
def upload():
    if request.method == 'POST':
        file = request.files['file']
        # data = fJson.load(file)
        # myfile = file.read()
I'm trying to deal with this by using the 'file' variable. I looked at http://flask.pocoo.org/docs/0.10/api/#flask.json.load, but I get the error "No JSON object could be decoded". I also looked at Read file data without saving it in Flask, which recommended using file.read(), but that didn't work either; it returns "None" or an empty string.

Request.files

A MultiDict with files uploaded as part of a POST or PUT request. Each file is stored as a FileStorage object. It basically behaves like a standard file object you know from Python, with the difference that it also has a save() function that can store the file on the filesystem.
http://flask.pocoo.org/docs/0.10/api/#flask.Request.files

You don't need to use json, just use read(), like this:

if request.method == 'POST':
    file = request.files['file']
    myfile = file.read()

For some reason the position in the file was at the end. Doing
file.seek(0)
before doing a read or load fixes the problem.
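The seek behaviour can be demonstrated without Flask: a FileStorage upload behaves like an ordinary file object, so in this sketch an io.BytesIO stream with made-up content stands in for it.

```python
import io
import json

# io.BytesIO stands in for the uploaded FileStorage stream (hypothetical data).
stream = io.BytesIO(b'{"name": "example", "count": 3}')

stream.read()         # something consumed the stream; position is now at EOF
stream.seek(0)        # rewind before decoding

data = json.load(stream)
print(data["count"])  # -> 3
```

Without the seek(0), json.load would see an empty stream and raise the same "No JSON object could be decoded" error.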

Related

Django FileField saving empty file to database

I have a view that should generate a temporary JSON file and save this temp file to the database. The content of this file, a dictionary named assets, is built using DRF serializers. The file should be written to the database in a model called CollectionSnapshot.
class CollectionSnapshotCreate(generics.CreateAPIView):
    permission_classes = [MemberPermission, ]

    def create(self, request, *args, **kwargs):
        collection = get_collection(request.data['collection_id'])
        items = Item.objects.filter(collection=collection)
        assets = {
            "collection": CollectionSerializer(collection, many=False).data,
            "items": ItemSerializer(items, many=True).data,
        }
        fp = tempfile.TemporaryFile(mode="w+")
        json.dump(assets, fp)
        fp.flush()
        CollectionSnapshot.objects.create(
            final=False,
            created_by=request.user,
            collection_id=collection.id,
            file=ContentFile(fp.read(), name="assets.json")
        )
        fp.close()
        return JsonResponse({}, status=200)
Printing assets returns the dictionary correctly, so I am getting the dictionary normally.
Following the solution below, I do get the file saved to the db, but without any content:
copy file from one model to another
It seems that json.dump(assets, fp) is failing silently, or I am missing a step to actually write the content to the temp file before sending it to the database.
The question is: why is the file in the db empty?
I found out that fp.read() returns content starting from the current position in the file. At least, that is my understanding. So after I dump the assets dict as json to the temp file, I have to bring the cursor back to the beginning using fp.seek(0). This way, when I call fp.read() inside file=ContentFile(fp.read(), ...), it actually reads all the content. It was giving me an empty file because there was nothing left to read, since the cursor was at the end of the file.
fp = tempfile.TemporaryFile(mode="w+")
json.dump(assets, fp)
fp.flush()
fp.seek(0)  # important
CollectionSnapshot.objects.create(...)  # stays the same
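A minimal, framework-free sketch of the same pitfall, using only the standard library (the assets dict here is a made-up stand-in for the serializer output):

```python
import json
import tempfile

# Made-up stand-in for the assets dict built from the serializers.
assets = {"collection": {"id": 1}, "items": [{"name": "a"}]}

fp = tempfile.TemporaryFile(mode="w+")
json.dump(assets, fp)
fp.flush()

print(repr(fp.read()))  # -> '' (the position is at EOF after the dump)

fp.seek(0)              # rewind, and the content is there
print(fp.read())        # -> {"collection": {"id": 1}, "items": [{"name": "a"}]}
fp.close()
```

This is exactly why ContentFile(fp.read(), ...) received an empty string without the seek(0).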

Django rest api csv to json

I would like to create a django rest api that receives a csv file and then responds with the file in JSON format. How do I achieve this without using models or anything related to databases? I don't want to use a database because it's assumed that the csv files will have a different structure every time.
This is my first django project. I've seen lots of tutorials, and I even did the first tutorial from the django website, but I can't understand how to do this task without the database.
Thanks!
Since you have not tried anything on your own, here is how you can do it.
views.py
import csv
import io

from rest_framework.generics import CreateAPIView
from rest_framework.response import Response
from rest_framework import status


class ReadCSVView(CreateAPIView):
    # permission_classes = [IsAuthenticated]
    serializer_class = ReadCSVSerializer
    queryset = ''

    def perform_create(self, serializer):
        file = serializer.validated_data['file']
        decoded_file = file.read().decode()
        io_string = io.StringIO(decoded_file)
        reader = csv.reader(io_string)
        next(reader)  # in case you want to skip the first row; remove this otherwise
        return reader

    def create(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        final_data = []
        for row in self.perform_create(serializer):
            if row[0] == "jack":
                # do your logic here
                final_data.append(row)
        return Response(final_data, status=status.HTTP_201_CREATED)
Just create one serializer to read the csv.
serializers.py
from rest_framework import serializers


class ReadCSVSerializer(serializers.Serializer):
    file = serializers.FileField()
Now go to your urls.py and call the view class this way:

urlpatterns = [
    path("read-csv", views.ReadCSVView.as_view(), name="csv"),
]

Hope this clarifies your doubt.
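The core of perform_create can be sketched without DRF; the io.BytesIO object below is a hypothetical stand-in for the uploaded file, whose read() likewise returns bytes:

```python
import csv
import io

# Hypothetical stand-in for serializer.validated_data['file']: DRF hands the
# view a file-like object whose read() returns bytes.
uploaded = io.BytesIO(b"name,age\njack,30\njill,28\n")

decoded_file = uploaded.read().decode()
reader = csv.reader(io.StringIO(decoded_file))
next(reader)  # skip the header row

rows = list(reader)
print(rows)  # -> [['jack', '30'], ['jill', '28']]
```

Because the rows are plain lists, no model or database is involved at any point; the view just returns them in the JSON response.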

How can I get data from external API JSON file in ODOO

[
  {
    "Id": "605a321e-7c10-49e4-9d34-ba03c4b34f69",
    "Url": "",
    "Type": "INBOUND_OUTBOUND",
    "ClearCurrentData": true,
    "FillOutFormFields": true,
    "RequestProtocol": "HTTP_REQUEST",
    "FileToSend": "NONE",
    "SendDefaultData": false,
    "SendDetectionData": false,
    "ShowDialogMessage": false,
    "IsActive": false,
    "SendingTrigger": "MANUALLY",
    "TCPSocketMethod": "",
    "TriggerButtonName": "Get data"
  }
]
This is the JSON returned by an external API call. How can I get the data into Odoo? Any solutions, please?
The code above is the API's JSON payload.
I only have the external JSON file; I don't have a table name or a database name. Is it possible to get the data into Odoo?
Create a model for this data, for example:
class apiCall(models.Model):
    _name = 'apiCall.api'

    id = fields.Char(string='Id')
    url = fields.Char(string='url')

    @api.model
    def call_api(self):
        # result of calling the external api is "result"
        for line in self:
            id = result[0]['Id']
            url = result[0]['Url']
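The sketch above assumes the parsed response is already in result; producing it takes only the standard json module. Here is a runnable version using a trimmed copy of the payload from the question:

```python
import json

# Trimmed version of the payload from the question: a JSON array with one object.
payload = '[{"Id": "605a321e-7c10-49e4-9d34-ba03c4b34f69", "Url": "", "IsActive": false, "TriggerButtonName": "Get data"}]'

result = json.loads(payload)
print(result[0]["Id"])                 # -> 605a321e-7c10-49e4-9d34-ba03c4b34f69
print(result[0]["TriggerButtonName"])  # -> Get data
```

In a real module the payload string would come from the HTTP call itself (e.g. via a client library), not from a literal.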
You can call a method on a button click and access the json file. Below is the code for accessing the json file:

import json

def api_call(self):
    # Opening JSON file
    f = open('file_name.json')
    # returns JSON object as a dictionary
    data = json.load(f)
    # Iterating through the json list
    for i in data:
        print(i)
    # Closing file
    f.close()
Below is the button code, which goes in the xml file:

<button name="api_call" type="object" string="Call API" class="oe_highlight" />

This way you can read the json file from the external api in Odoo.
In your custom module (my_custom_module), move your json file to its subdirectory static/src:

import json
from odoo.modules.module import get_module_resource

def call_api(self):
    json_filepath = get_module_resource('my_custom_module', 'static/src/', 'my_api_cred.json')
    with open(json_filepath) as f:
        data = json.load(f)

Get information out of large JSON file

I am new to JSON files and I'm struggling to get any information out of them.
The structure of the JSON file is as following:
[screenshot: JSON file structure]
Now what I need is to access the "batches" key, to get the data from each variable.
I did try the codes (shown below) that I found to reach deeper keys, but somehow I still didn't get any results.
1.
def safeget(dct, *keys):
    for key in keys:
        try:
            dct = dct[key]
        except KeyError:
            return None
    return dct

safeget(mydata, "batches")
2.
def dict_depth(mydata):
    if isinstance(mydata, dict):
        return 1 + (max(map(dict_depth, mydata.values()))
                    if mydata else 0)
    return 0

print(dict_depth(mydata))
The final goal then would be to create a loop to extract all the information, but that's something for the future.
Any help is highly appreciated, and also any recommendations on how I should ask things here in the future to get the best answers!
As far as I understood, you simply want to extract all the data without any particular ordering?
Then this should work:
# Python program to read a json file
import json

# Opening JSON file
f = open('data.json')

# returns JSON object as a dictionary
data = json.load(f)

# Iterating through the json list
for i in data['emp_details']:
    print(i)

# Closing file
f.close()
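Adapted to the question's goal, here is a sketch that assumes a top-level "batches" key (as in the screenshot) and reuses the safeget helper from the question; the sample data is made up:

```python
import json

def safeget(dct, *keys):
    """Walk nested keys, returning None instead of raising KeyError."""
    for key in keys:
        try:
            dct = dct[key]
        except KeyError:
            return None
    return dct

# Made-up sample standing in for the real file's content; in practice this
# string would come from open('data.json').read().
raw = '{"batches": [{"id": 1, "temp": 21.5}, {"id": 2, "temp": 22.0}]}'
mydata = json.loads(raw)

for batch in safeget(mydata, "batches"):
    print(batch)

print(safeget(mydata, "missing", "key"))  # -> None
```

This is also the loop "for the future" mentioned in the question: once the top-level key is reached, each batch is an ordinary dict.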

how to write r.headers from different urls into one json?

I would like to crawl several urls using the requests library in python, scrutinizing the GET requests as well as the response headers. However, when crawling and getting the data from different urls, I face the problem that I don't know all the 'key:values' coming in. Thus writing those data to a valid csv file is not really possible, in my view. Therefore I want to write the data to a json file.
The problem is similar to the following thread from 2014, but not the same:
Get a header with Python and convert in JSON (requests - urllib2 - json)
import requests, json

urls = ['http://www.example.com/', 'http://github.com']

with open('test.json', 'w') as f:
    for url in urls:
        r = requests.get(url)
        rh = r.headers
        f.write(json.dumps(dict(rh), sort_keys=True, separators=(',', ':'), indent=4))
I expect a json file with the headers for each URL. I get a json file with those data, but my IDE (PyCharm) shows an error stating that the JSON standard allows only one top-level value. I have read the documentation: https://docs.python.org/3/library/json.html#repeated-names-within-an-object, but did not get it. Any hint would be appreciated.
EDIT: The only thing missing in the outcome is another comma. But where do I enter it, and what command do I need for this?
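The PyCharm warning can be reproduced directly: two top-level JSON values written back to back are not one valid document, while a single array is. A small sketch with made-up objects:

```python
import json

# Two top-level values back to back -- what writing one dump per URL produces.
concatenated = '{"a": 1}{"b": 2}'
try:
    json.loads(concatenated)
except json.JSONDecodeError as e:
    print("invalid:", e.msg)  # -> invalid: Extra data

# Collecting the objects in a list gives a single valid top-level value.
text = json.dumps([{"a": 1}, {"b": 2}])
print(json.loads(text))  # -> [{'a': 1}, {'b': 2}]
```

So a missing comma is not really the fix; the file needs one enclosing array (or object) around all the per-URL dicts.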
You need to add it to an array and then finally do the json dump to a file. This will work.
import json
import requests

urls = ['http://www.example.com/', 'http://github.com']

headers = []
for url in urls:
    r = requests.get(url)
    header_dict = dict(r.headers)
    header_dict['source_url'] = url
    headers.append(header_dict)

with open('test.json', 'w', encoding='utf-8') as f:
    json.dump(headers, f, sort_keys=True, separators=(',', ':'), indent=4)
You can still write it to a csv:

import pandas as pd

df = pd.DataFrame(headers)
df.to_csv('test.csv')