I'm trying to create a view to import a CSV using DRF and django-import-export.
My example (I'm taking baby steps and debugging as I learn):
class ImportMyExampleView(APIView):
    parser_classes = (FileUploadParser, )

    def post(self, request, filename, format=None):
        person_resource = PersonResource()
        dataset = Dataset()
        new_persons = request.data['file']
        imported_data = dataset.load(new_persons.read())
        return Response("Ok - Babysteps")
But I get this error (using Postman):
Tablib has no format 'None' or it is not registered.
Changing the line to imported_data = Dataset().load(new_persons.read().decode(), format='csv', headers=False), I get this new error:
InvalidDimensions at /v1/myupload/test_import.csv
No exception message supplied
Does anyone have any tips, or can you point me to a reference? I'm following this site, but I'm having to "translate" it to DRF.
Starting with baby steps is a great idea. I would suggest getting a standalone script working first, so that you can check that the file can be read and imported.
If you can set breakpoints and step into the django-import-export source, this will save you a lot of time in understanding what's going on.
A sample test function (based on the example app):
from tablib import Dataset

# BookResource comes from the example app.
def test_import():
    with open('./books-sample.csv', 'r') as fh:
        dataset = Dataset().load(fh)
        book_resource = BookResource()
        result = book_resource.import_data(dataset, raise_errors=True)
        print(result.totals)
You can adapt this so that you import your own data. Once this works OK then you can integrate it with your post() function.
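Once the standalone version works, a minimal sketch of the integrated view might look like this (my assumptions: a multipart upload with a form field named "file", and the PersonResource from your question; I've also swapped FileUploadParser for DRF's MultiPartParser):

from rest_framework.parsers import MultiPartParser
from rest_framework.response import Response
from rest_framework.views import APIView
from tablib import Dataset

class ImportMyExampleView(APIView):
    parser_classes = (MultiPartParser,)

    def post(self, request, format=None):
        # Load the uploaded CSV into a tablib Dataset, then hand it
        # to the resource for validation and import.
        dataset = Dataset().load(request.data['file'].read().decode('utf-8'), format='csv')
        result = PersonResource().import_data(dataset, raise_errors=True)
        return Response({'totals': dict(result.totals)})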
I recommend getting the example app running because it will demonstrate how imports work.
InvalidDimensions means that the data you're trying to load doesn't match the shape the Dataset expects. Try removing the headers=False arg, or explicitly declare the headers (headers=['h1', 'h2', 'h3'] - swap in the correct names for your headers).
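One way to spell that out (a sketch; the column names are placeholders for your real ones, and this variant assigns the header names after loading):

from tablib import Dataset

# Load a headerless CSV, then name the columns so the dataset's
# width matches the fields the resource expects.
dataset = Dataset()
dataset.load(new_persons.read().decode(), format='csv', headers=False)
dataset.headers = ['id', 'name', 'email']  # placeholder names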
I might describe this very poorly, but here we go. What I am doing is creating a file named with the user ID from Discord. In a different function, I want to access the file, but I don't know its name.
import json
import discord

@client.command()
async def function1(ctx):
    test1 = 3
    author = ctx.message.author.mention
    await ctx.send(author)
    with open(author + '.txt', 'w+') as outfile:
        json.dump(test1, outfile)

@client.command()
async def function2(ctx):
    with open(author + '.txt') as infile:  # "author + '.txt'" does not work because function2 does not know what author is
        test2 = json.load(infile)
    await ctx.send(test2)
How would I get around this and access the file? I have an idea: I could save the author and then use it in function two. But that would not work, because there are many people using the bot. Am I on the right track? Thanks for the help in advance.
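A sketch of that idea, for what it's worth: each command invocation gets its own ctx, so function2 can rebuild the filename from whoever invoked it, with no shared variable (untested sketch building on the code above, so it assumes the same imports and client):

@client.command()
async def function2(ctx):
    # Each invocation carries its own ctx, so the filename can be
    # rebuilt from the user who invoked this command - no shared
    # "author" variable is needed, and concurrent users don't clash.
    author = ctx.message.author.mention
    with open(author + '.txt') as infile:
        test2 = json.load(infile)
    await ctx.send(test2)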
I have found various methods of invoking a JSON/REST API call from Maximo to an external system on the web, but none of them match exactly what I'm looking for, and they all seem to use different methods, which is causing me a lot of confusion because I am EXTREMELY rusty at Jython coding. So, hopefully, you all can help me out. Please be as detailed as possible: script language, script type (for integration, object launch point, publish channel process/user exit class, etc.).
My Maximo Environment is 7.6.1.1. I have created the following...
Maximo Object Structure (MXSRLEAK): with 3 objects (SR, TKSERVICEADDRESS, and WORKLOG)
Maximo Publish Channel (MXSRLEAK-PC): uses the MXSRLEAK OS and contains my processing rules to skip records if they don't meet the criteria (SITEID = 'TWOO', TEMPLATEID IN ('LEAK','LEAKH','LEAKW'), HISTORYFLAG = 0)
Maximo End Point (LEAKTIX): HTTP Handler, HEADERS ("Content-Type:application/json"), HTTPMETHOD ("POST"), URL (https:///api/ticket/?ticketid=), USERNAME ("Maximo"), and PASSWORD (). The Allow Override is checked for HEADERS, HTTPMETHOD, and URL.
At this point, I need an automation script to:
- Limit the Maximo attributes that I'm sending. This will vary depending on what happens on the Maximo side:
  - If an externally created (SOURCE = LEAKREP, EXTERNALRECID IS NOT NULL) service request ticket gets cancelled, send the last worklog with logtype = "CANCOMM" (both summary/description and details/description_longdescription) as well as the USERID that changed the status.
  - If an externally created SR ticket gets closed, send the last worklog with logtype <> "CANCOMM".
  - If the externally created SR ticket was a duplicate, also include a custom field called "DUPLICATE" (which uses a table domain to show all open SRs with similar TEMPLATEIDs in the UI).
  - If a "LEAK" SR ticket originated in Maximo (it has no SOURCE or EXTERNALRECID), send data from the SR (e.g. DESCRIPTION, REPORTDATE, REPORTEDBY), TKSERVICEADDRESS (FORMATTEDADDRESS, etc.), and WORKLOG (DESCRIPTION, and LONGDESCRIPTION if they exist) objects to the external system, and parse the response to update SOURCE and EXTERNALRECID.
- Update the Maximo End Point values for the API call: HTTPMETHOD to "POST" or "PATCH", add HEADERS (Authorization: Basic Base64Userid/Password), etc.
Below is my latest attempt at an automation script, which doesn't work because "mbo is not defined" (I'm sure there are more problems with it, but it fails early in the script). The script was created for integration, with a publish channel (MXSRLEAK-PC), using the External Exit option in Jython. I was trying to start with just one scenario, where the Maximo SR ticket was originally created via an API call from the external system into Maximo and was actually a duplicate of another Maximo SR ticket. My thought was that if I got this part correct, I could extend the script to the other scenarios, such as an SR ticket originating in Maximo that needs to POST a new record to the external system.
My final question: is it better (easier for future eyes to understand) to have one Object Structure, Publish Channel, End Point, and Automation Script handle all scenarios, or to create separate ones for each scenario?
from com.ibm.json.java import JSONObject
from java.io import BufferedReader, IOException, InputStreamReader
from java.lang import System, Class, String, StringBuffer
from java.nio.charset import Charset
from java.util import Date, Properties, List, ArrayList, HashMap
from org.apache.commons.codec.binary import Base64
from org.apache.http import HttpEntity, HttpHeaders, HttpResponse, HttpVersion
from org.apache.http.client import ClientProtocolException, HttpClient
from org.apache.http.client.entity import UrlEncodedFormEntity
from org.apache.http.client.methods import HttpPost
from org.apache.http.entity import StringEntity
from org.apache.http.impl.client import DefaultHttpClient
from org.apache.http.message import BasicNameValuePair
from org.apache.http.params import BasicHttpParams, HttpParams, HttpProtocolParamBean
from psdi.mbo import Mbo, MboRemote, MboSet, MboSetRemote
from psdi.security import UserInfo
from psdi.server import MXServer
from psdi.iface.router import Router
from sys import *
leakid = mbo.getString("EXTERNALRECID")
#Attempting to pull current SR worklog using object relationship and attribute
maxlog = mbo.getString("DUPWORKLOG.description")
maxloglong = mbo.getString("DUPWORKLOG.description_longdescription")
clientEndpoint = Router.getHandler("LEAKTIX")
cEmap = HashMap()
host = cEmap.get("URL")+leakid
method = cEmap.get("HTTPMETHOD")
currhead = cEmap.get("HEADERS")
tixuser = cEmap.get("USERNAME")
tixpass = cEmap.get("PASSWORD")
auth = tixuser + ":" + tixpass
authHeader = String(Base64.encodeBase64(String.getBytes(auth, 'ISO-8859-1')),"UTF-8")
def createJSONstring():
    jsonStr = ""
    obj = JSONObject()
    obj.put("status_code", "1")
    obj.put("solution", "DUPLICATE TICKET")
    obj.put("solution_notes", maxlog + " " + maxloglong)
    jsonStr = obj.serialize(True)
    return jsonStr

def httpPost(path, jsonstring):
    params = BasicHttpParams()
    paramsBean = HttpProtocolParamBean(params)
    paramsBean.setVersion(HttpVersion.HTTP_1_1)
    paramsBean.setContentCharset("UTF-8")
    paramsBean.setUseExpectContinue(True)
    entity = StringEntity(jsonstring, "UTF-8")
    client = DefaultHttpClient()
    request = HttpPost(host)
    request.setParams(params)
    #request.addHeader(currhead)
    request.addHeader(HttpHeaders.CONTENT_TYPE, "application/json")
    request.addHeader(HttpHeaders.AUTHORIZATION, "Basic " + authHeader)
    request.setEntity(entity)
    response = client.execute(request)
    status = response.getStatusLine().getStatusCode()
    obj = JSONObject.parse(response.getEntity().getContent())
    System.out.println(str(status) + ": " + str(obj))
Sorry for the late response. Ideally, an external exit script does not get mbo as an implicit variable. Instead, it works with irData, the structure data handed to the script, which you break apart before doing any further manipulation.
What I understand is that you need to post the payload dynamically based on some conditions in Maximo. For that, I think you can write a custom handler that will be called during the post.
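A rough skeleton of that approach (a sketch only: the implicit irData variable and its getCurrentData accessor are assumptions based on IBM's integration scripting examples; verify them against your Maximo version's documentation):

# Jython publish channel external exit script (untested sketch).
# irData is assumed to be the implicit structure-data variable for
# the outbound record; getCurrentData is assumed from IBM examples.
source = irData.getCurrentData("SOURCE")
externalrecid = irData.getCurrentData("EXTERNALRECID")

if source == "LEAKREP" and externalrecid:
    # Externally created ticket: choose between the cancelled,
    # closed, and duplicate payloads described in the question.
    pass
else:
    # Ticket originated in Maximo: build the full POST payload and
    # write SOURCE/EXTERNALRECID back from the response afterwards.
    pass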
I know this question has been answered before, but I seem to have a different problem. Up until a few days ago, my querying of YouTube never had a problem. Now, however, every time I query data on any video the rows of actual video data come back as a single empty array.
Here is my code in full:
# -*- coding: utf-8 -*-
import os
import google.oauth2.credentials
import google_auth_oauthlib.flow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from google_auth_oauthlib.flow import InstalledAppFlow
import pandas as pd
import csv

SCOPES = ['https://www.googleapis.com/auth/yt-analytics.readonly']
API_SERVICE_NAME = 'youtubeAnalytics'
API_VERSION = 'v2'
CLIENT_SECRETS_FILE = 'CLIENT_SECRET_FILE.json'

def get_service():
    flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES)
    credentials = flow.run_console()
    # builds the API-specific service
    return build(API_SERVICE_NAME, API_VERSION, credentials=credentials)

def execute_api_request(client_library_function, **kwargs):
    response = client_library_function(
        **kwargs
    ).execute()
    print(response)
    columnHeaders = []
    # create a CSV output for the video list
    csvFile = open('video_result.csv', 'w')
    csvWriter = csv.writer(csvFile)
    csvWriter.writerow(["views", "comments", "likes", "estimatedMinutesWatched", "averageViewDuration"])
if __name__ == '__main__':
    # Disable OAuthlib's HTTPS verification when running locally.
    # *DO NOT* leave this option enabled when running in production.
    os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
    youtubeAnalytics = get_service()
    execute_api_request(
        youtubeAnalytics.reports().query,
        ids='channel==UCU_N4jDOub9J8splDAPiMWA',
        # needs to be of the form YYYY-MM-DD
        startDate='2018-01-01',
        endDate='2018-05-01',
        metrics='views,comments,likes,dislikes,estimatedMinutesWatched,averageViewDuration',
        dimensions='day',
        filters='video==ZeY6BKqIZGk,YKFWUX9w4eY,bDPdrWS-YUc'
    )
You can see on the Reports: Query documentation page that you need to use the new scope:
https://www.googleapis.com/auth/youtube.readonly
instead of the old one:
https://www.googleapis.com/auth/yt-analytics.readonly
After changing the scope, perform a re-authentication (delete the old credentials) for the new scope to take effect.
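In the code above, that is a one-line change; re-run the console flow afterwards so consent is granted for the new scope:

# Old scope, which started returning empty rows:
# SCOPES = ['https://www.googleapis.com/auth/yt-analytics.readonly']

# New scope from the Reports: Query page:
SCOPES = ['https://www.googleapis.com/auth/youtube.readonly']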
This is also confirmed in this forum.
Another possible mishap is choosing the wrong account(s) during OAuth2 authorisation. For instance, you may have to pick your account on the first screen, but then on the second screen (during authorisation) select the "brand account" rather than the main account from the first step, which also appears in the list on the second screen.
I got the same problem, and replacing the scope with https://www.googleapis.com/auth/youtube.readonly doesn't work.
(Even making requests on the API webpage, it returns empty rows.)
Instead, using the https://www.googleapis.com/auth/youtube scope works fine in my case.
I am creating a micro-service to be used locally. From some input I am generating one large matrix each time. Right now I am using JSON to transfer the data, but it is really slow and has become the bottleneck of my application.
Here is my client side:
headers = {'Content-Type': 'application/json'}
data = {'model': 'model_4',
        'input': "this is my input."}
r = requests.post("http://10.0.1.6:3000/api/getFeatureMatrix", headers=headers, data=json.dumps(data))
answer = json.loads(r.text)
My server is something like:
app = Flask(__name__, static_url_path='', static_folder='public')

@app.route('/api/getFeatureMatrix', methods=['POST'])
def get_feature_matrix():
    arguments = request.get_json()
    # processing ... generating the matrix
    return jsonify(matrix=matrix.tolist())
How can I send large matrices?
In the end, I used:
np.save(matrix_path, mat)
return send_file(matrix_path+'.npy')
On the client side I save the matrix before loading it.
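For example, the client side might look something like this (a sketch reusing the URL and payload from the question; the local filename is arbitrary):

import json

import numpy as np
import requests

data = {'model': 'model_4', 'input': "this is my input."}
r = requests.post("http://10.0.1.6:3000/api/getFeatureMatrix",
                  headers={'Content-Type': 'application/json'},
                  data=json.dumps(data))

# Save the raw .npy bytes to disk, then load the matrix back.
with open('matrix.npy', 'wb') as f:
    f.write(r.content)
matrix = np.load('matrix.npy')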
I suppose the problem is that the matrix takes time to generate; it's a CPU-bound application.
One solution would be to handle the request asynchronously. Meaning that:
- The server receives the request and returns 202 ACCEPTED along with a link where the client can check the progress of the matrix creation.
- The client checks the returned URL and gets either:
  - a 200 OK response if the matrix is not yet created, or
  - a 201 CREATED response, with a link to the resource, once the matrix is finally created.
However, Flask handles one request at a time, so you'll need multithreading, multiprocessing, or green threads.
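Here is a minimal sketch of that pattern using a background thread (the endpoint names, task_id, and the in-memory results dict are illustrative, not from the question):

import threading
import uuid

import numpy as np
from flask import Flask, jsonify, url_for

app = Flask(__name__)
results = {}  # task_id -> matrix, or None while still computing

def compute_matrix(task_id):
    # Stand-in for the CPU-bound generation step in the question.
    results[task_id] = np.random.rand(1000, 1000)

@app.route('/api/getFeatureMatrix', methods=['POST'])
def start_job():
    task_id = str(uuid.uuid4())
    results[task_id] = None
    threading.Thread(target=compute_matrix, args=(task_id,)).start()
    # 202 ACCEPTED plus a URL the client can poll.
    return jsonify(status_url=url_for('job_status', task_id=task_id)), 202

@app.route('/api/status/<task_id>')
def job_status(task_id):
    if results.get(task_id) is None:
        return jsonify(status='in progress'), 200
    return jsonify(status='done', rows=len(results[task_id])), 201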
On the client side you could do something like:
with open('binary.file', 'rb') as f:
    file = f.read()
response = requests.post('/endpoint', data=file)
and on the server side:
import numpy as np
...

@app.route('/endpoint', methods=['POST'])
def endpoint():
    filestr = request.data
    # np.frombuffer is the non-deprecated equivalent of np.fromstring;
    # either way, the dtype must match what the client sent.
    file = np.fromstring(filestr)
I would like calls to /contacts/1.json to return JSON, 1.api to return the browsable API, and calls with format=None (i.e. /contacts/1/) to return a template where we call render_form. This way end users get pretty forms, developers can use the .api format, and AJAX/apps etc. use .json. It seems like a common use case, but something isn't clicking for me here in DRF...
I'm struggling with how DRF determines the renderer used when no format is given. I found, and then lost, some info here on Stack Exchange that basically said to split the responses based on format. Adding the TemplateHTMLRenderer caused all sorts of pain. I had tried to split based on format, but that gives me the JSON error below.
I don't understand the de facto way to define which renderer should be used, especially when no format is provided. I mean, it "just works" when using Response(data). And I can get the TemplateHTMLRenderer to work, but at the cost of having no default renderer.
GET /contacts/1/ gives the error:
<Contact: Contact object> is not JSON serializable
Using this code:
class ContactDetail(APIView):
    permission_classes = (permissions.IsAuthenticatedOrReadOnly,
                          IsOwnerOrReadOnly,)
    queryset = Contact.objects.all()
    renderer_classes = (BrowsableAPIRenderer, JSONRenderer, TemplateHTMLRenderer,)
    """
    Retrieve, update or delete a contact instance.
    """

    def get_object(self, pk):
        try:
            return Contact.objects.get(pk=pk)
        except Contact.DoesNotExist:
            raise Http404

    def get(self, request, pk, format=None):
        contact = self.get_object(pk)
        serializer = ContactSerializer(contact)
        if format == 'json' or format == "api":
            return Response(serializer.data)
        else:
            return Response({'contact': contact, 'serializer': serializer}, template_name="contact/contact_detail.html")
But GET /contacts/1.json, 1.api, and 1.html ALL give me the correct output. So it seems that I have created an issue with the content negotiation for the default, i.e. format=None.
I must be missing something fundamental. I have gone through the two tutorials and read the renderers docs, but I am unclear on what I messed up here as far as the default goes. I am NOT using DEFAULT_RENDERERS in settings.py; it didn't seem to make a difference whether the renderers were declared in settings or inside the actual class as shown above.
Also, if anyone knows a way to use TemplateHTMLRenderer without needing to switch on the format value, I'm all ears.
EDIT: If I use

if format == 'json' or format == "api" or format == None:
    return Response(serializer.data)
else:
    return Response({'contact': contact, 'serializer': serializer}, template_name="contact/contact_detail.html")

then I am shown the browsable API by default. Unfortunately, what I want is the template HTML view by default, which is set up to show forms to end users. I would like to keep the .api format for developers.
TL;DR: Check the order of your renderers - they are tried in declaration order until a content negotiation match is found or an error occurs.
Changing the line
renderer_classes = (BrowsableAPIRenderer, JSONRenderer, TemplateHTMLRenderer, )
to
renderer_classes = (TemplateHTMLRenderer, BrowsableAPIRenderer, JSONRenderer, )
worked for me. I believe the reason is that the content negotiator starts at the first element of the renderer_classes tuple when trying to find a renderer. When format==None, there is nothing else for DRF to go on, so it assumes I mean the "default", which seems to be the first entry in the tuple.
EDIT: So, as pointed out by @Ross in his answer, there is also a global setting in the project's settings.py. If I remove my class-level renderer_classes declaration and instead use this in settings.py:
# ERROR
REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.BrowsableAPIRenderer',
        'rest_framework.renderers.JSONRenderer',
        'rest_framework.renderers.TemplateHTMLRenderer',
    )
}
Then I get a (different) JSON error. However, as long as
'rest_framework.renderers.BrowsableAPIRenderer',
is not listed first, for example:
# SUCCESS, even though the JSON renderer is checked first
'DEFAULT_RENDERER_CLASSES': (
    'rest_framework.renderers.JSONRenderer',
    'rest_framework.renderers.TemplateHTMLRenderer',
    'rest_framework.renderers.BrowsableAPIRenderer',
)
So if we hit BrowsableAPIRenderer before we try TemplateHTMLRenderer, we get an error, whether we rely on renderer_classes or DEFAULT_RENDERER_CLASSES. I imagine requests pass through JSONRenderer gracefully, but for whatever reason BrowsableAPIRenderer raises an exception.
So I have simplified my view code after analyzing this...
def get(self, request, pk, format=None):
    contact = self.get_object(pk)
    serializer = ContactSerializer(contact)
    if format is None:
        return Response({'contact': contact, 'serializer': serializer}, template_name="contact/contact_detail.html")
    else:
        return Response(serializer.data)
...which better reflects what I was originally trying to do anyway.
When I look at the source code, the priority seems to be the order of the renderers specified in the DEFAULT_RENDERER_CLASSES parameter in settings.py:
REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
        'rest_framework.renderers.TemplateHTMLRenderer',
    ),
    'DEFAULT_PARSER_CLASSES': (
        'rest_framework.parsers.JSONParser',
        'rest_framework.parsers.FormParser',
    )
}
So, if you specify a bunch of renderer classes, the first valid renderer will be selected, based on whether it is valid for the request given the .json/.api/.html extension and the Accept: header (not Content-Type, as I said in the comment on your question).
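You can watch the negotiation happen by requesting the same resource both ways (a sketch; the host and port are placeholders):

import requests

BASE = "http://localhost:8000"  # placeholder host/port

# A format suffix selects the renderer explicitly.
print(requests.get(BASE + "/contacts/1.json").headers["Content-Type"])

# With no suffix, the Accept header is matched against the renderer
# list in declaration order.
print(requests.get(BASE + "/contacts/1/", headers={"Accept": "application/json"}).headers["Content-Type"])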