I have an API on my localhost. When I call it in the browser, I see this information:
<string xmlns="http://tempuri.org/">
{"STATUS":"202","STATUS_DT":"98/12/07","CitizenMobile":"091234567","PROFILEKEY":"1233"}
</string>
I want to use this information in my code and parse it as JSON. My code is:
import json
import requests
url = ""
response = requests.get(url)
print("type of response= ",type(response))
print(response.status_code)
data = response.text
parsed = json.loads(data)
print(parsed)
My output is:
type of response= <class 'requests.models.Response'>
200
Traceback (most recent call last):
File "C:/Users/MN/dev/New Project/form/WebService/TEST_MAZAHERI/Test_Stack.py", line 11, in
<module>
parsed = json.loads(data)
File "C:\Users\MN\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "C:\Users\MN\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\MN\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 355, in
raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I encountered this error: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0).
Can you help me?
Just use response.json() to get the data in JSON format:
import json
import requests
url = ""
response = requests.get(url)
print("type of response= ",type(response))
print(response.status_code)
data = response.json() # changed here
print(data)
Try this. Works fine.
The JSON is wrapped in XML: get the response data, feed it to an XML library as a string, and read the JSON out of the tag's text.
>>> data = r'<string xmlns="http://tempuri.org/"> {"STATUS":"202","STATUS_DT":"98/12/07","CitizenMobile":"091234567","PROFILEKEY":"1233"}</string>'
>>> from io import StringIO
>>> from lxml import etree
>>> root = etree.parse(StringIO(data))
>>> r = root.getroot()
>>> r.tag # you can also iterate over a particular tag if the XML contains multiple tags
'{http://tempuri.org/}string'
>>> r.text # get the JSON text
' {"STATUS":"202","STATUS_DT":"98/12/07","CitizenMobile":"091234567","PROFILEKEY":"1233"}'
>>>
>>> import json
>>> json.loads(r.text)
{'STATUS': '202', 'STATUS_DT': '98/12/07', 'CitizenMobile': '091234567', 'PROFILEKEY': '1233'}
>>> # further operations go here
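For comparison, here is a minimal sketch of the same idea using only the standard library's xml.etree.ElementTree, assuming your endpoint returns the XML-wrapped JSON shown in the question (the URL below is a placeholder):
import json
import xml.etree.ElementTree as ET
import requests

url = "http://localhost/your-endpoint"  # placeholder for your real API URL
response = requests.get(url)

root = ET.fromstring(response.text)  # <string xmlns="http://tempuri.org/">...</string>
parsed = json.loads(root.text)       # the element's text is the JSON payload
print(parsed["STATUS"], parsed["CitizenMobile"])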
Related
I have a problem with an API POST request; can you help me?
import requests
import json
url = 'http://xxx/api/getTotalPrice'
param = dict(itineraryType=1,
             departureAirportCode='HAN',
             destinationAirportCode='DLI',
             departureDate='2020-12-30T14:00',
             returnDate='2020-12-30T09:00',
             adult=1,
             children=1,
             infant=1
             )
resp = requests.post(url=url, params=param)
u_data = resp.json()
print(u_data)
I want to receive data from this POST API. These are the API's body parameters, and they produce the error. Here is the error:
C:\Users\Admin\PycharmProjects\untitled\venv\Scripts\python.exe C:/Users/Admin/PycharmProjects/untitled/venv/test.py
Traceback (most recent call last):
File "C:\Users\Admin\PycharmProjects\untitled\venv\test.py", line 17, in <module>
u_data = resp.json()
File "C:\Users\Admin\PycharmProjects\untitled\venv\lib\site-packages\requests\models.py", line 900, in json
return complexjson.loads(self.text, **kwargs)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Process finished with exit code 1
I can get data from the POST API without a body, but with a body I cannot. Can you help fix the code?
Many thanks!
P.S.: Here is the value I want to get:
{
"departureFlight": {
"totalPrice": 1896000.0,
"airlineCode": "VN"
},
"returnFlight": {
"totalPrice": 3263800.0,
"airlineCode": "VJ"
}
}
OK, I have an answer.
data = {'itineraryType': 1,
        'departureAirportCode': 'HAN',
        'destinationAirportCode': 'DLI',
        'departureDate': '2020-12-30T15:00',
        'returnDate': '2020-12-30T09:00',
        'adult': 1,
        'children': 1,
        'infant': 1}
headers = {'Content-type': 'application/json', 'Accept': 'application/json'}
resp = requests.post(url=url, json=data, headers=headers)
u_data = resp.json()
print(u_data)
The key change is passing the payload as a JSON request body with json=data instead of as query-string params=, plus setting the JSON headers.
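If it helps to see why params= alone fails, here is a quick sketch against the public httpbin.org echo service (not the original API) showing where each option puts the data:
import requests

payload = {'adult': 1, 'children': 1}

# params= puts the values into the URL query string only...
r1 = requests.post('https://httpbin.org/post', params=payload)
print(r1.json()['args'])   # {'adult': '1', 'children': '1'}
print(r1.json()['json'])   # None -- no JSON body was sent

# ...while json= serializes them into the request body as JSON.
r2 = requests.post('https://httpbin.org/post', json=payload)
print(r2.json()['json'])   # {'adult': 1, 'children': 1}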
I am getting a JSONDecodeError when I try to print the requests response.json() after running the POST method.
import requests
import requests_ntlm,json
import sharepy
import warnings
warnings.filterwarnings("ignore")
base_url = 'https://company.sharepoint.com'
folderUrl = 'Shared Documents'
headers = {
    "Accept": "application/jason; odata=verbose",
    "Content-Type": "application/jason; odata=verbose",
    "odata": "verbose",
    "X-RequestDigest": "true"
}
r1 = sharepy.connect(base_url,username='user#company.onmicrosoft.com',password='password')
filename = 'testupload.mp4'
request_url = base_url + '/_api/web/getFolderByServerRelativeUrl(\'''Shared Documents''\')/Files/add(url=\'' + filename + '\',overwrite=true)'
k = r1.post(url= base_url +"/_api/contextinfo",headers=headers)
print(k.json())
I am getting the error below:
Traceback (most recent call last):
File "sharepointextended.py", line 28, in <module>
print(json.loads(k.content.decode('utf-8')))
File "C:\Users\saipr\AppData\Local\Programs\Python\Python37\lib\json\__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "C:\Users\saipr\AppData\Local\Programs\Python\Python37\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\saipr\AppData\Local\Programs\Python\Python37\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Looks like a typo in your request headers.
I am assuming the value should be 'application/json' and not 'application/jason'.
Please ignore this answer if it doesn't help, as I am new to Python. But I don't think the POST call would have succeeded, in line with general XMLHttpRequest implementations.
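For reference, a minimal sketch of the contextinfo call with the header typo fixed (same base_url and sharepy session idea as in the question; treat it as illustrative only):
import sharepy

base_url = "https://company.sharepoint.com"
s = sharepy.connect(base_url)  # sharepy prompts for credentials if none are passed

headers = {
    "Accept": "application/json;odata=verbose",       # 'json', not 'jason'
    "Content-Type": "application/json;odata=verbose",
}

k = s.post(base_url + "/_api/contextinfo", headers=headers)
print(k.status_code, k.headers.get("Content-Type"))   # check what actually came back
print(k.json())                                        # should decode if SharePoint returned JSON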
I pulled details of a server from an internal URL's API using Python with SOAP+XML.
Code:
import requests
import xml.etree.ElementTree as ET
headers = {'content-type': 'application/soap+xml'}
body = """<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Header>
<m:Username xmlns:m="http://www.ab.com">test</m:Username>
<m:Password xmlns:m="http://www.ab.com">testpass</m:Password>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<m:GetCI xmlns:m="http://www.ab.com">
<CIType xsi:type="xsd:string">System</CIType>
<CIName xsi:type="xsd:string">server.example.nl</CIName>
<CIID></CIID>
<AttrFilter xsi:type="xsd:string">PrimaryName+ArpaDomain+SystemStatus+Environment+ServiceLevel+IsVirtual+Coverage</AttrFilter>
<SubObjFilter xsi:type="xsd:string"></SubObjFilter>
</m:GetCI>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""
r = requests.post("URL of API", data=body, headers=headers, verify=False)
print(r.text)
The SOAP XML response I got:
'<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Header><Username xmlns="http://www.ab.com">test</Username><Password xmlns="http://www.hp.com">testpass</Password></soap:Header><soap:Body><ns1:GetCIResponse xmlns:ns1="http://www.ab.com"><returnWord><![CDATA[{"Attributes":{"PrimaryName":"server","ArpaDomain":"example.nl","SystemStatus":"obsolete","Environment":"Development","ServiceLevel":"standard","IsVirtual":"no","Coverage":"24x7 (00:00-24:00 Mon-Sun)"},"DataCenter":{"DCID":"1041166","SourceID":null,"SourceTool":null},{"InstanceID":"159364108","SolutionName":"oracle engine","SolutionCategory":"engine","InstanceName":null,"InstanceStatus":"deinstalled","InstanceEnvironment":"Test","InstanceImpact":"3 - Multiple Users","InstanceServiceLevel":"standard","InstanceAvailability":"99.98","InstanceTimezone":"(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna","InstanceCoverage":"11x5 (07:00-18:00 Mon-Fri)","InstanceVersion":null,"InstanceDescription":"Acceptance - Peoplesoft Financials Shadow","InstanceAssignmentGroup":"E-INCSSP-APAC-MDR-DBA-EHM","SourceID":null,"SourceTool":null}],"BusinessID":"10292","Owner":"1","User":"1","ACTION":null}],"MaintContracts":[{"ContractName":"123456","ContractStart":null,"ContractEnd":null,"CRCName":null,"SystemHandle":null,"ContractInfo":null,"ACTION":null},{"ContractName":"RedHat Support","ContractStart":null,"ContractEnd":null,"CRCName":null,"SystemHandle":null,"ContractInfo":null,"ACTION":null}],"Params":[{"ParCd":"AVBM","SrsValue":"Option 3","ACTION":null},{"ParCd":"BILLABLE","SrsValue":"Yes","ACTION":null},{"ParCd":"BT_SERVICE_LEVEL","SrsValue":"Bronze","ACTION":null},{"ParCd":"CPU_TYPE","SrsValue":"Intel(R) Xeon(R) CPU E5530 # 2.40GHz","ACTION":null},{"ParCd":"DRP_PRIORITY","SrsValue":"DR4","ACTION":null},{"ParCd":"RIM_LOAD","SrsValue":"06 JUL 2017 09:41:00","ACTION":null}],"ABSA":[{"MID":"67260252","MeshID":"4","MeshShortName":"APAC_sdn","MeshName":"AB-SA APAC SDN"}]}]]></returnWord></ns1:GetCIResponse></soap:Body></soap:Envelope>'
Converting the string output to a dictionary:
>>> type(r.text)
<class 'str'>
>>> data=json.loads(r.text)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\dupakunt\AppData\Local\Programs\Python\Python36-32\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "C:\Users\dupakunt\AppData\Local\Programs\Python\Python36-32\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\dupakunt\AppData\Local\Programs\Python\Python36-32\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
How do I convert this output to a dictionary and extract only the Attributes?
import re

root = ET.fromstring(r.text)
for child in root.iter('returnWord'):   # the element that wraps the CDATA payload
    text = child.text
    l = re.split("[{}]+", text)         # split the text on runs of braces
    details = l[2]                      # the contents of the "Attributes" object
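If the real CDATA payload is valid JSON (the response pasted above looks truncated), here is a sketch of a more robust extraction than splitting on braces, reusing r from the request above:
import json
import xml.etree.ElementTree as ET

root = ET.fromstring(r.text)
return_word = root.find('.//returnWord')  # element that wraps the CDATA block
payload = json.loads(return_word.text)    # the CDATA text is plain JSON to the parser
attributes = payload["Attributes"]        # keep only the "Attributes" object
print(attributes)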
I am trying to send an HTTP request to a URL and get the response using the requests library. Following is the code that I have used:
>>> import requests
>>> r = requests.get("http://www.youtube.com/results?bad+blood")
>>> r.status_code
200
When I try to do this, I get the following error:
>>> r.json()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/requests/models.py", line 808, in json
return complexjson.loads(self.text, **kwargs)
File "/Library/Python/2.7/site-packages/simplejson/__init__.py", line 516, in loads
return _default_decoder.decode(s)
File "/Library/Python/2.7/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/Library/Python/2.7/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.scanner.JSONDecodeError: Expecting value: line 1 column 3 (char 2)
Can someone tell me what's wrong with the code?
P.S.: I am using Python 2.7.10.
The response isn't JSON, it is 'text/html; charset=utf-8'. If you want to parse it, use something like BeautifulSoup.
>>> import requests, bs4
>>> rsp = requests.get('http://www.youtube.com/results?bad+blood')
>>> rsp.headers['Content-Type']
'text/html; charset=utf-8'
>>> soup = bs4.BeautifulSoup(rsp.content, 'html.parser')
I'd recommend using the YouTube Search API instead. Log in to the Google Developers Console, set up an API key following the API Key Setup instructions, then you can make the request using the YouTube Search API:
>>> from urllib import parse
>>> import requests
>>> query = parse.urlencode({'q': 'bad blood',
... 'part': 'snippet',
... 'key': 'OKdE7HRNPP_CzHiuuv8FqkaJhPI2MlO8Nns9vuM'})
>>> url = parse.urlunsplit(('https', 'www.googleapis.com',
... '/youtube/v3/search', query, None))
>>> rsp = requests.get(url, headers={'Accept': 'application/json'})
>>> rsp.raise_for_status()
>>> response = rsp.json()
>>> response.keys()
dict_keys(['pageInfo', 'nextPageToken', 'regionCode', 'etag', 'items', 'kind'])
Note that the example is using Python 3. If you want to use Python 2, then you will have to import urlencode from urllib and urlunsplit from urlparse.
That URL returns HTML, not JSON, so there's no point calling .json() on the response.
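As a general guard, a small sketch (Python 3) that checks the Content-Type before calling .json(), so an HTML response fails with a readable message instead of a JSONDecodeError:
import requests

r = requests.get("http://www.youtube.com/results?bad+blood")
content_type = r.headers.get("Content-Type", "")
if "application/json" in content_type:
    data = r.json()
else:
    raise ValueError("Expected JSON, got %r: %s..." % (content_type, r.text[:80]))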
I am trying to write a spider which crawls through the following JSON response:
http://gdata.youtube.com/feeds/api/standardfeeds/UK/most_popular?v=2&alt=json
How should the spider look if I want to crawl all the titles of the videos? None of my spiders work.
from scrapy.spider import BaseSpider
import json
from youtube.items import YoutubeItem

class MySpider(BaseSpider):
    name = "youtubecrawler"
    allowed_domains = ["gdata.youtube.com"]
    start_urls = ['http://www.gdata.youtube.com/feeds/api/standardfeeds/DE/most_popular?v=2&alt=json']

    def parse(self, response):
        items = []
        jsonresponse = json.loads(response)
        for video in jsonresponse["feed"]["entry"]:
            item = YoutubeItem()
            print jsonresponse
            print video["media$group"]["yt$videoid"]["$t"]
            print video["media$group"]["media$description"]["$t"]
            item["title"] = video["title"]["$t"]
            print video["author"][0]["name"]["$t"]
            print video["category"][1]["term"]
            items.append(item)
        return items
I always get the following error:
2014-01-05 16:55:21+0100 [youtubecrawler] ERROR: Spider error processing <GET http://gdata.youtube.com/feeds/api/standardfeeds/DE/most_popular?v=2&alt=json>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 1201, in mainLoop
self.runUntilCurrent()
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 824, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 382, in callback
self._startRunCallbacks(result)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 490, in _startRunCallbacks
self._runCallbacks()
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/home/bxxxx/svn/ba_txxxxx/scrapy/youtube/spiders/test.py", line 15, in parse
jsonresponse = json.loads(response)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
exceptions.TypeError: expected string or buffer
I found two issues in your code:
the start URL is not accessible, so I took out the www from it
changed json.loads(response) to json.loads(response.body_as_unicode())
This works well for me:
class MySpider(BaseSpider):
    name = "youtubecrawler"
    allowed_domains = ["gdata.youtube.com"]
    start_urls = ['http://gdata.youtube.com/feeds/api/standardfeeds/DE/most_popular?v=2&alt=json']

    def parse(self, response):
        items = []
        jsonresponse = json.loads(response.body_as_unicode())
        for video in jsonresponse["feed"]["entry"]:
            item = YoutubeItem()
            print video["media$group"]["yt$videoid"]["$t"]
            print video["media$group"]["media$description"]["$t"]
            item["title"] = video["title"]["$t"]
            print video["author"][0]["name"]["$t"]
            print video["category"][1]["term"]
            items.append(item)
        return items