How to convert bytes to JSON in Python

I'm trying to convert bytes data into JSON. I get errors when converting the data:
a = b'orderId=570d3e38-d6486056e&orderAmount=10.00&referenceId=34344&txStatus=SUCCESS&txMsg=Transaction+Successful&txTime=2021-06-26+12%3A03%3A12&signature=njtH5Dzmg6RJ1KB'
I used this to convert
json.loads(a.decode('utf-8'))
and I want to get a response like
orderAmount = 10.00
orderId = 570d3e38-d6486056e
txStatus = SUCCESS

What you see here is a query string [wiki]; you can parse it with:
from django.http import QueryDict
a = b'orderId=570d3e38-d6486056e&orderAmount=10.00&referenceId=34344&txStatus=SUCCESS&txMsg=Transaction+Successful&txTime=2021-06-26+12%3A03%3A12&signature=njtH5Dzmg6RJ1KB'
b = QueryDict(a)
With the given sample data, we have a QueryDict [Django-doc] that looks like:
>>> b
<QueryDict: {'orderId': ['570d3e38-d6486056e'], 'orderAmount': ['10.00'], 'referenceId': ['34344'], 'txStatus': ['SUCCESS'], 'txMsg': ['Transaction Successful'], 'txTime': ['2021-06-26 12:03:12'], 'signature': ['njtH5Dzmg6RJ1KB']}>
If you subscript this, then you always get the last item, so:
>>> b['orderId']
'570d3e38-d6486056e'
>>> b['orderAmount']
'10.00'
>>> b['txStatus']
'SUCCESS'
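For completeness: if a key can occur more than once in the query string, QueryDict also has a .getlist(…) method that returns every value for that key; with the single-valued sample data:
>>> b.getlist('txStatus')
['SUCCESS']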
If you work with a view, you can also obtain this QueryDict for the request's query string with:
def my_view(request):
    b = request.GET
    # …
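Outside of Django, the standard library can parse the same query string. A minimal sketch using urllib.parse (the variable names are just illustrative):
from urllib.parse import parse_qs

a = b'orderId=570d3e38-d6486056e&orderAmount=10.00&referenceId=34344&txStatus=SUCCESS&txMsg=Transaction+Successful&txTime=2021-06-26+12%3A03%3A12&signature=njtH5Dzmg6RJ1KB'

# parse_qs maps each key to a list of its URL-decoded values
parsed = parse_qs(a.decode('utf-8'))
order_amount = parsed['orderAmount'][0]   # '10.00'
tx_status = parsed['txStatus'][0]         # 'SUCCESS'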

Related

How can I save some json files generated in a for loop as csv?

Sorry, I am new to coding in Python. I need to save a JSON object generated in a for loop as a CSV file on each iteration of the loop.
I wrote code that generates the first CSV file fine, but the file is then overwritten on every iteration and I have not found a solution yet. Can anyone help me? Many thanks.
from twarc.client2 import Twarc2
import itertools
import pandas as pd
import csv
import json
import numpy as np

# Your bearer token here
t = Twarc2(bearer_token="AAAAAAAAAAAAAAAAAAAAA....WTW")

# Get a bunch of user handles you want to check:
list_of_names = np.loadtxt("usernames.txt", dtype="str")

# Get the `data` part of every request only, as one list
def get_data(results):
    return list(itertools.chain(*[result['data'] for result in results]))

user_objects = get_data(t.user_lookup(users=list_of_names, usernames=True))

for user in user_objects:
    following = get_data(t.following(user['id']))
    # Do something with the lists
    print(f"User: {user['username']} Follows {len(following)} -2")
    json_string = json.dumps(following)
    df = pd.read_json(json_string)
    df.to_csv('output_file.csv')
You need to add a sequence number or some other unique identifier to the filename. The simplest approach is to keep track of a counter, or use a GUID. Below I've used a counter that is initialized before your loop and incremented in each iteration. This will produce a list of files like output_file_0.csv, output_file_1.csv, output_file_2.csv, and so on.
counter = 0
for user in user_objects:
    following = get_data(t.following(user['id']))
    # Do something with the lists
    print(f"User: {user['username']} Follows {len(following)} -2")
    json_string = json.dumps(following)
    df = pd.read_json(json_string)
    df.to_csv('output_file_' + str(counter) + '.csv')
    counter += 1
We convert the integer to a string and place it between the name of your file and its extension. Alternatively, enumerate gives you the running index without a separate counter:
from twarc.client2 import Twarc2
import itertools
import pandas as pd
import csv
import json
import numpy as np

# Your bearer token here
t = Twarc2(bearer_token="AAAAAAAAAAAAAAAAAAAAA....WTW")

# Get a bunch of user handles you want to check:
list_of_names = np.loadtxt("usernames.txt", dtype="str")

# Get the `data` part of every request only, as one list
def get_data(results):
    return list(itertools.chain(*[result['data'] for result in results]))

user_objects = get_data(t.user_lookup(users=list_of_names, usernames=True))

for idx, user in enumerate(user_objects):
    following = get_data(t.following(user['id']))
    # Do something with the lists
    print(f"User: {user['username']} Follows {len(following)} -2")
    json_string = json.dumps(following)
    df = pd.read_json(json_string)
    df.to_csv(f'output_file_{idx}.csv')
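As a side note, the json.dumps / pd.read_json round trip is not strictly needed: pandas can build the frame directly from the list of dicts. A minimal sketch, assuming following is a list of user objects as above:
for idx, user in enumerate(user_objects):
    following = get_data(t.following(user['id']))
    # build the DataFrame straight from the list of dicts
    df = pd.DataFrame(following)
    df.to_csv(f'output_file_{idx}.csv', index=False)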

sympy: function to convert string into actual sympy object

Assume I have a CSV file in the following format:
csc($0);\csc##{$0};1;;;;;https://docs.sympy.org/latest/modules/functions/elementary.html#csc
cot($0);\cot##{$0};1;;;;;https://docs.sympy.org/latest/modules/functions/elementary.html#cot
;;;;;;;
sinh($0);\sinh##{$0};1;;;;;https://docs.sympy.org/latest/modules/functions/elementary.html#sinh
I want to read it into a Python script using:
import csv

default_semantic_latex_table = {}
with open("CAS_SymPy.csv") as file:
    reader = csv.reader(file, delimiter=";")
    for line in reader:
        if line[0] != '':
            tup = (line[0], line[2])
            default_semantic_latex_table[tup] = str(line[1])
I'd like to get a dict of the following form:
default_semantic_latex_table = {
    (sympy.functions.elementary.trigonometric.sinh, 1): FormatTemplate(r"\sinh#{$0}"),
}
The first element of the tuple should not be a string but an actual SymPy object.
Does anyone know of a function that can convert a string into a SymPy object such as sympy.functions.elementary.trigonometric.sin, sympy.functions.elementary.exponential.exp, or sympy.concrete.products.Product? I'd greatly appreciate any help!
sympify converts strings to SymPy objects:
>>> from sympy import sympify, pi
>>> sympify('cos')
cos
>>> _(pi)   # _ is the previous result, i.e. cos
-1
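A sketch of how this could plug into the CSV loop from the question: strip the ($0) placeholder from the first field and sympify the bare function name (the helper below is made up for illustration, not part of sympy):
from sympy import sympify

def to_sympy_function(field):
    # "csc($0)" -> "csc" -> sympy.functions.elementary.trigonometric.csc
    name = field.split('(')[0]
    return sympify(name)

# usage inside the loop shown earlier (assuming line[2] holds the argument count):
# tup = (to_sympy_function(line[0]), int(line[2]))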

Python: request json gets error - If using all scalar values, you must pass an index

When attempting to request JSON from the API, I get an error after the first result.
Does anyone have any idea why an index is being requested from a list?
Best regards :)
import requests
import json
import pandas as pd
import time
import datetime

### OCs List ids
OCs = ['1003473-1116-SE21','1003473-1128-AG21','1031866-12-CC21','1057440-3184-AG21','1070620-1832-CM21', '1070620-2219-SE21', '1070620-2499-CM21']

for i in OCs:
    link = "http://api.mercadopublico.cl/servicios/v1/publico/ordenesdecompra.json?codigo=" + str(i) + "&ticket=F8537A18-6766-4DEF-9E59-426B4FEE2844"
    response = requests.get(link, [])
    data = response.json()
    df = pd.DataFrame.from_dict(data)
    ### remove unnecessary columns
    df.drop(df.columns[[0,1,2]], axis=1, inplace=True)
    ### flat json to pandas dataframe
    df_detail = pd.json_normalize(df['Listado'])

ValueError: If using all scalar values, you must pass an index
The server detects too many requests and sends an error response (which then makes your script throw an error). The solution is to wait until you get a correct response, for example:
import requests
import json
import pandas as pd
import time
import datetime

### OCs List ids
OCs = [
    "1003473-1116-SE21",
    "1003473-1128-AG21",
    "1031866-12-CC21",
    "1057440-3184-AG21",
    "1070620-1832-CM21",
    "1070620-2219-SE21",
    "1070620-2499-CM21",
]

for i in OCs:
    link = (
        "http://api.mercadopublico.cl/servicios/v1/publico/ordenesdecompra.json?codigo="
        + str(i)
        + "&ticket=F8537A18-6766-4DEF-9E59-426B4FEE2844"
    )

    while True:  # <--- repeat until we get correct response
        print(link)
        response = requests.get(link, [])
        data = response.json()
        if "Listado" in data:
            break
        time.sleep(3)  # <--- wait 3 seconds and try again

    df = pd.DataFrame.from_dict(data)
    ### remove unnecessary columns
    df.drop(df.columns[[0, 1, 2]], axis=1, inplace=True)
    ### flat json to pandas dataframe
    df_detail = pd.json_normalize(df["Listado"])
    # ...
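A slightly more defensive variant (a sketch, not verified against this API) caps the number of retries so the loop cannot spin forever if the server keeps refusing. It reuses the requests and time imports from above:
MAX_RETRIES = 5  # assumption: five attempts before giving up

def fetch_order(link):
    # return the parsed JSON once it contains 'Listado', or None after MAX_RETRIES
    for attempt in range(MAX_RETRIES):
        data = requests.get(link).json()
        if "Listado" in data:
            return data
        time.sleep(3)  # back off before retrying
    return None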

How to Convert a data set from a database query to a specific JSON format for input to a REST api

I am fairly new to Python and would like some assistance with a problem.
I have a SQL SELECT query that returns a column of values. I would like to pass these records in a request to a REST API. The issue is that the API expects the data in a specific JSON format.
How can I convert the data returned by the query to the specific JSON format shown below?
Query from SQL returns:
InfoId
------
1
2
3
4
5
6
I need to pass these values to a REST API as JSON in the following format:
{
    "InfoId": [
        1, 2, 3, 4, 5, 6
    ]
}
I have tried a couple of options to solve this problem.
First, I tried converting the data to JSON using the pandas DataFrame.to_json method with the various orient parameters, but none of them return the desired format shown above.
import requests
import json
import pyodbc
import pandas as pd

conn = pyodbc.connect('Driver={SQL SERVER};'
                      'Server=myServer;'
                      'Database=TestDb;'
                      'Trusted_Connection=yes;'
                      )
cursor = conn.cursor()

sql_query = pd.read_sql_query('SELECT InfoId FROM tbl_info', conn)
#print(sql_query)
print(sql_query.to_json(orient='values', index=False))

url = "http://ldapiserver:5000/patterns/v1/swirl?idType=Info"

#sample payload
#payload = "{\r\n \"InfoId\": [\r\n 1,2,3,4,5,6\r\n ]\r\n}"
payload = sql_query.to_json(orient='records')

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=json.dumps(payload, indent=4))
resp_body = response.json()
print(resp_body)
print(response.elapsed.total_seconds())
The second method I tried is to convert the rows from the SQL query into a list object and then build the JSON string. It works that way, but I would like to automate it so that, irrespective of the query, it can form the JSON string.
import requests
import json
import pyodbc

conn = pyodbc.connect('Driver={SQL SERVER};'
                      'Server=myServer;'
                      'Database=TestDb;'
                      'Trusted_Connection=yes;'
                      )
cursor = conn.cursor()

cursor.execute("""
    SELECT InfoId FROM tbl_info
""")
rows = cursor.fetchall()

# Convert query to row arrays
rowarray_list = []
for row in rows:
    t = (row.InfoId)
    rowarray_list.append(t)

j = json.dumps(rowarray_list)
conn.close()

txt = '{"InfoId": ', j, '}'
# print(txt)
payload = txt[0]+txt[1]+txt[2]

url = "http://ldapiserver:5000/patterns/v1/swirl?idType=Info"
# payload = "{\r\n \"InfoId\": [\r\n 72,74\r\n ]\r\n}"
#print(json.dumps(payload, indent=4))

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
resp_body = response.json()
print(resp_body)
print(response.elapsed.total_seconds())
Appreciate any help with this.
Thank you.
To convert your SQL query result to JSON:
...
rows = cursor.fetchall()

# convert each row to a dict keyed by column name
json_rows = [dict(zip([key[0] for key in cursor.description], row)) for row in rows]

Then you can return your response however you like.
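If the API needs the exact {"InfoId": [...]} shape from the question, you can group the values by column instead of by row. A minimal sketch reusing the cursor, url, and headers variables from the question:
rows = cursor.fetchall()
columns = [col[0] for col in cursor.description]

# build {"InfoId": [1, 2, 3, ...]} regardless of which single column the query returns
payload = {col: [row[i] for row in rows] for i, col in enumerate(columns)}

response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.json())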

Failing to test POST in Django

Using Python 3, Django, and Django REST Framework.
Previously I had a test case that was making a POST to an endpoint.
The payload was a dictionary:
mock_data = {some data here}
In order to send the data over the POST I was doing:
mock_data = base64.b64encode(json.dumps(mock_data).encode('utf-8'))
When doing the POST:
response = self.client.post(
    '/some/path/here/', {
        ...,
        'params': mock_data.decode('utf-8'),
    },
)
And on the receiving end, in the validate() method, I was doing:
params = data['params']
try:
    params_str = base64.b64decode(params).decode('utf-8')
    app_data = json.loads(params_str)
except (UnicodeDecodeError, ValueError):
    app_data = None
That was all fine, but now I need to use some HMAC validation, and the JSON I am passing can no longer be a dict: its ordering would change each time, so hmac.new(secret, payload, algorithm) would be different.
I was trying to use a string:
payload = """{data in json format}"""
But when I am doing:
str_payload = payload.encode('utf-8')
b64_payload = base64.b64encode(str_payload)
I cannot POST it, and I get an error:
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'ewog...Cn0=' is not JSON serializable
Even if I do b64_payload.decode('utf-8') as before, I still get a similar error:
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'\xa2\x19...\xff' is not JSON serializable
Some Python in the terminal:
>>> d = {'email': 'user@mail.com'} # regular dictionary
>>> d
{'email': 'user@mail.com'}
>>> dn = base64.b64encode(json.dumps(d).encode('utf-8'))
>>> dn
b'eyJlbWFpbCI6ICJ1c2VyQG1haWwuY29tIn0='
>>> s = """{"email": "user@mail.com"}""" # string
>>> s
'{"email": "user@mail.com"}'
>>> sn = base64.b64encode(json.dumps(s).encode('utf-8'))
>>> sn
b'IntcImVtYWlsXCI6IFwidXNlckBtYWlsLmNvbVwifSI=' # different!
>>> sn = base64.b64encode(s.encode('utf-8'))
>>> sn
b'eyJlbWFpbCI6ICJ1c2VyQG1haWwuY29tIn0=' # same
The issue is resolved.
The problem was NOT with the payload, but with a parameter I was passing:
'signature': b'<signature>'
Solved by passing the signature field like this:
'signature': signature_b64.decode('utf-8')
Thank you!
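For reference, here is a minimal sketch of how the two fields can be built so that everything posted is a str and the "is not JSON serializable" error does not appear (the secret, hash algorithm, and payload below are assumptions, not taken from the original test):
import base64
import hashlib
import hmac

secret = b'shared-secret'               # assumption: the shared HMAC key
payload = '{"email": "user@mail.com"}'  # the exact JSON string, not a dict

payload_b64 = base64.b64encode(payload.encode('utf-8'))
signature = hmac.new(secret, payload_b64, hashlib.sha256).digest()
signature_b64 = base64.b64encode(signature)

# everything placed in the POST body must be str, not bytes
data = {
    'params': payload_b64.decode('utf-8'),
    'signature': signature_b64.decode('utf-8'),
}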