reading json into django error - json

I'm passing a context variable, x, into a template from a Django view. It is a list of strings:
x = ['Braselton', 'Buford']
Then I am using an ajax function to pass that variable back to a Django view. The problem is that when I retrieve that variable in the Python view with the following code:
new_x = request.GET['x']
print(new_x)
I see the following:
[&#39;Braselton&#39;, &#39;Buford&#39;]
I've tried json.loads(request.GET['x']) and I keep getting the following error:
json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)
Any help is much appreciated

You need to unescape those characters; there are lots of ways to do it.
See the Python documentation for the html module for more info.
import html
import json
variable = "[&#39;Braselton&#39;, &#39;Buford&#39;]"
new_variable = html.unescape(variable)  # "['Braselton', 'Buford']"
# Replace the single quotes so the string is valid JSON, then parse it:
new_variable = json.loads(new_variable.replace("'", '"'))
print(new_variable)  # ['Braselton', 'Buford'], type list

The problem is that the value has been HTML-escaped; note that it's not valid JSON.
To unescape it, use the html module.
import html
y = html.unescape(new_x)
print(y)  # output is ['Braselton', 'Buford'] (still a string)
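Note that after unescaping you still have a string, and since it uses single quotes it is a Python literal rather than JSON. If you'd rather not do the quote replacement shown in the previous answer, ast.literal_eval can finish the job:
import ast
y_list = ast.literal_eval(y)  # safely evaluates the list literal
print(y_list[0])  # Braselton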

Mark the variable as safe, so Django's template autoescaping doesn't turn the quotes into HTML entities:
'{{ x | safe }}'
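For completeness, here is a minimal sketch of the whole round trip (the view and template names are illustrative); the key idea is to serialize with json.dumps so the template emits real, double-quoted JSON:
# views.py: serialize before the value reaches the template
import json
from django.http import HttpResponse
from django.shortcuts import render

def my_view(request):
    x = ['Braselton', 'Buford']
    return render(request, 'page.html', {'x': json.dumps(x)})

# page.html: |safe stops autoescaping from turning the quotes into &#39;
#   <script>var x = {{ x|safe }};</script>

# receiving view: the value the ajax call sends back now parses cleanly
def receive(request):
    new_x = json.loads(request.GET['x'])
    return HttpResponse(len(new_x))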


Count the number of people having a property bounded by two numbers

The following code goes over the 10 pages of JSON returned by GET requests to the URL and checks how many records satisfy the condition that bloodPressureDiastole is between the specified limits. It does the job, but I was wondering if there is a better or cleaner way to achieve this in Python.
import urllib.request
import urllib.parse
import json

baseUrl = 'https://jsonmock.hackerrank.com/api/medical_records?page='
count = 0
for i in range(1, 11):
    url = baseUrl + str(i)
    f = urllib.request.urlopen(url)
    response = f.read().decode('utf-8')
    response = json.loads(response)
    lowerlimit = 110
    upperlimit = 120
    for elem in response['data']:
        bd = elem['vitals']['bloodPressureDiastole']
        if bd >= lowerlimit and bd <= upperlimit:
            count = count + 1
print(count)
You don't get field access to the JSON content because json.loads gives you a dict object (see the translation scheme here). A dict provides access via the __getitem__ method (dict[key]) rather than __getattr__ (object.field), since keys may be any hashable objects, not only strings. Moreover, even strings could not serve as fields if they start with digits or collide with built-in dictionary methods.
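A quick illustration of the point (the JSON here is shaped like the question's records):
import json

record = json.loads('{"vitals": {"bloodPressureDiastole": 115}}')
print(record["vitals"]["bloodPressureDiastole"])  # 115, accessed via __getitem__
# record.vitals  # AttributeError: a plain dict has no attribute access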
Despite this, you can define your own custom class implementing the desired behaviour for acceptable key names. json.loads has an object_hook argument where you may put any callable (function or class) that takes a dict as its sole argument (not only the final result but every dict in the JSON, recursively) and returns something as the result. If your JSON documents follow a common template, you can define a class with predefined fields for the JSON content, and even with methods, in order to get a robust Python object as part of your domain logic.
For instance, let's implement access through fields. I get the JSON content via the response.json method from requests, but it takes the same arguments as the json package. Comments in the code contain remarks on how to make your code more Pythonic.
from collections import Counter
from requests import get

class CustomJSON(dict):
    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value

LOWER_LIMIT = 110  # Constants should be in uppercase.
UPPER_LIMIT = 120
base_url = 'https://jsonmock.hackerrank.com/api/medical_records'
# It is better to use dedicated tools for building URLs
# in order to avoid possible exceptions in the future.
# By the way, your version would look clearer with an f-string,
# which interpolates values from variables (and more) in-place:
# url = f'https://jsonmock.hackerrank.com/api/medical_records?page={i}'
counter = Counter(normal_pressure=0)
# A plain integer would also do here. Counter is useful
# in case you need to count any other information as well.
for page_number in range(1, 11):
    records = get(
        base_url, params={"page": page_number}  # params, not data: GET arguments belong in the query string
    ).json(object_hook=CustomJSON)
    # Python has a pile of libraries for handling URL requests & responses.
    # urllib is a standard library rewritten from scratch for Python 3.
    # However, there is a more featured (connection pooling, redirects, proxies,
    # SSL verification, etc.) & more convenient third-party
    # (that is its only disadvantage) library: urllib3.
    # Based on it, requests provides an easier, more convenient & friendlier way
    # to work with URL requests, so I highly recommend using it
    # unless you are aiming for complex connections & URL processing.
    for elem in records.data:
        if LOWER_LIMIT <= elem.vitals.bloodPressureDiastole <= UPPER_LIMIT:
            counter["normal_pressure"] += 1
print(counter)

Format jinja template as INT in operator parameter in Airflow

I'm trying to format a jinja template parameter as an integer so I can pass it to an operator which expects INT (could be custom or PythonOperator) and I'm not able to.
See sample DAG below. I'm using the built-in Jinja filter | int but that's not working - the type remains <class 'str'>
I'm still new to Airflow, but I don't think this is possible based on what I've read about how Jinja and Airflow work. I see two main workarounds:
Change the operator parameter to expect a string and handle the conversion underneath.
Handle this conversion in a separate PythonOperator which converts the string to an int and exports it using xcom/task context. (I think this will work, but I'm not sure.)
Please let me know of any other workarounds
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.utils.dates import days_ago

def greet(mystr):
    print(mystr)
    print(type(mystr))

default_args = {
    'owner': 'airflow',
    'start_date': days_ago(2)
}

dag = DAG(
    'template_dag',
    default_args=default_args,
    description='template',
    schedule_interval='0 13 * * *'
)

with dag:
    # foo = "{{ var.value.my_custom_var | int }}"  # from an Airflow Variable
    foo = "{{ execution_date.int_timestamp | int }}"  # built-in macro
    # could be MyCustomOperator
    opr_greet = PythonOperator(task_id='greet',
                               python_callable=greet,
                               op_kwargs={'mystr': foo})
    opr_greet
Airflow 1.10.11
Updated answer:
As of Airflow 2.1, you can pass render_template_as_native_obj=True to the DAG and Airflow will return the Python type (dict, int, etc.) instead of a string. See this pull request.
dag = DAG(
    dag_id="example_template_as_python_object",
    schedule_interval=None,
    start_date=days_ago(2),
    render_template_as_native_obj=True,
)
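With that flag set, a templated value like the one in the question should reach the callable as a real int rather than a str. A sketch reusing the question's greet task:
with dag:
    opr_greet = PythonOperator(task_id='greet',
                               python_callable=greet,
                               # native rendering returns this as an int
                               op_kwargs={'mystr': "{{ execution_date.int_timestamp }}"})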
Old answer for prior versions:
I found a related question that provides the best workaround, IMO.
Airflow xcom pull only returns string
The trick is to use a PythonOperator, do the datatype conversion there, and then call the main operator with the converted parameter. Below is an example converting a JSON string to a dict object; the same applies to converting a string to an int, etc.
def my_func(ds, **kwargs):
    ti = kwargs['ti']
    body = ti.xcom_pull(task_ids='previous_task_id')
    import_body = json.loads(body)
    op = CloudSqlInstanceImportOperator(
        project_id=GCP_PROJECT_ID,
        body=import_body,
        instance=INSTANCE_NAME,
        gcp_conn_id='postgres_default',
        task_id='sql_import_task',
        validate_body=True,
    )
    op.execute(context=kwargs)  # execute() needs the task context

p = PythonOperator(task_id='python_task', python_callable=my_func)
I believe Jinja is always going to return you a string: the template is a string, and replacing values inside the template still yields a string.
If you are sure that foo is always an integer, you can do
opr_greet = PythonOperator(task_id='greet',
                           python_callable=greet,
                           op_kwargs={'mystr': int(foo)})
Update: it looks like Airflow uses the render method from Jinja2, which returns a Unicode string. At this point, if you can modify greet, it is easier to manage the input parameter inside that function.
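A minimal sketch of that idea, doing the conversion inside the callable:
def greet(mystr):
    # The rendered template always arrives as a string, so convert it here.
    myint = int(mystr)
    print(myint)
    print(type(myint))  # <class 'int'>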

Is it possible to access solver properties through Pycaffe?

Is it possible to read and access the solver properties in Pycaffe?
I need to use some of the information stored in the solver file, but apparently the solver object which is created using
import caffe
solver = caffe.get_solver(solver_path)
is of no use in this case. Is there any other way to get around this problem?
I couldn't find what I was after using the solver object, so I ended up writing a quick function to get around the issue:
def retrieve_field(solver_path, field=None):
    '''
    Returns a specific solver parameter value using the specified field,
    or the whole content of the solver file when no field is provided.

    returns:
        a string (the field value), or the whole content as a list
    '''
    lines = []
    with open(solver_path, 'r') as file:
        for line in file:
            line = line.strip()
            lines.append(line)
            field_segments = line.split(':')
            if field_segments[0] == field:
                # if that line contains a # mark (a comment), cut it off
                if '#' in field_segments[-1]:
                    idx = field_segments[-1].index('#')
                    return field_segments[-1][:idx]
                else:
                    return field_segments[-1]
    return lines
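Hypothetical usage, assuming a standard solver.prototxt sits next to the script (the file name and field are placeholders):
base_lr = retrieve_field('solver.prototxt', 'base_lr')
print(base_lr)  # e.g. ' 0.01' (the value comes back as a raw string)
all_lines = retrieve_field('solver.prototxt')  # no field given: the whole file as a list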

JMeter - Specify CSV row failure

Within JMeter, I am running a script which uses a .CSV file to enter data as well as verify results. It is working correctly, but I cannot figure out how to tell which row/line of the .CSV caused the individual failures. Is there a way to do this?
Somewhat of an example scenario (not specific to what I'm doing, but similar):
Each row of the .CSV file contains a mathematical equation as well as the expected result.
On page 1, enter the equation (2+2)
Then on Page 2, you get the response: 3.
That test would obviously be a failure.
Say there are 1,000 tests being run, some that pass and some that don't. How can I tell which .CSV row/line didn't pass?
Do you have any columns in your CSV file which help you to uniquely identify a row?
Let me assume you have a column called 'TestCaseNo' with values TC001, TC002, TC003, etc.
Add a Beanshell Post Processor to store the result for each iteration.
Add the code below. I assume you have the PASS or FAIL result stored in the 'Result' variable.
import java.io.FileOutputStream;
import java.io.PrintStream;

f = new FileOutputStream("somepath/tcstatus.csv", true); // true = append mode
p = new PrintStream(f);
p.println(vars.get("TestCaseNo") + "," + vars.get("Result"));
p.close();
f.close();
The above code creates a CSV file with the results for each testcase.
EDIT:
Do the assertion yourself in the Beanshell post processor.
import java.io.FileOutputStream;
import java.io.PrintStream;

Result = "FAIL";
Response = prev.getResponseDataAsString();
if (Response.contains("value")) { // replace "value" with the expected text
    Result = "PASS";
}
f = new FileOutputStream("somepath/tcstatus.csv", true);
p = new PrintStream(f);
p.println(vars.get("TestCaseNo") + "," + Result);
p.close();
f.close();
I would use the following approach:
__CSVRead() function - to get data from the .csv file.
__counter() function - to track the CSV file position. You can include the counter variable in the Sampler's label so the current .csv line is reported with each result; see the sketch below.
For more information on the aforementioned and other useful JMeter functions, see the How to Use JMeter Functions post series.
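For example, a rough sketch of wiring the two functions together (the file name and variable name are placeholders):
Request parameter:  ${__CSVRead(equations.csv,0)}${__CSVRead(equations.csv,next)}
Anywhere in the sampler:  ${__counter(FALSE,rowNum)}
Sampler label:  Equation check (row ${rowNum})
Each failure in the results listener then carries the row number in its label.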

save list to CSV - python

I'm trying to save my output as CSV using "import csv" and I only get errors. Any reason why?
Since I can't make it run: will it also notify me if the file already exists?
Thanks a lot
from tkinter import *
from tkinter.filedialog import asksaveasfilename
from tkinter import ttk
import csv

def data():
    ...
    output = <class 'list'>  # just an example
    ...

def savefile():
    name = asksaveasfilename()
    create = csv.writer(open(name, "wb"))
    create.writerow(output)
    for x in output:
        create.writerow(x)

root = Tk()
Mframe = ttk.Frame(root)
Mframe.grid(column=0, row=0, sticky=(N, W, E, S))
bSave = ttk.Button(Mframe, text='Save File', command=savefile)
bSave.grid(column=1, row=0)
root.mainloop()
You are opening the file but never closing it. A good practice is to use a with statement to make sure it gets closed. Note also that in Python 3 a csv file should be opened in text mode ('w' with newline=''), not "wb"; that mode mismatch is a likely source of your errors. By the way, I don't know what the output list looks like, but if it isn't a list of lists, it makes more sense to call writerow just once.
Besides, make sure this list is a global variable, otherwise it won't be available within the scope of savefile. However, global variables are not a very good solution, so consider passing it as an argument to savefile or using a class to hold all this data. (As for your other question: asksaveasfilename already asks for confirmation before overwriting an existing file.)
def savefile():
    name = asksaveasfilename()
    with open(name, 'w', newline='') as csvfile:
        create = csv.writer(csvfile)
        create.writerow(output)
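As a follow-up on avoiding globals, a sketch of passing the list in explicitly (assuming output is a list of rows, i.e. a list of lists):
def savefile(output):
    name = asksaveasfilename()
    if not name:  # the user cancelled the dialog
        return
    with open(name, 'w', newline='') as csvfile:
        csv.writer(csvfile).writerows(output)

bSave = ttk.Button(Mframe, text='Save File',
                   command=lambda: savefile(output))  # assumes data() has filled `output` by then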