CSV error opening file

I am getting an error opening a file that I can't resolve. I am able to open
this exact file with no issues using another small program I wrote.
First Program (doesn't work):
import csv

passwd = "f:\mark\python\etc_password.txt"
output = "f:\mark\python\output.txt"

with open(passwd, 'r') as passwd1, open(output, 'w') as output1:
    ro = csv.reader(passwd1, delimiter=':')
    wo = csv.writer(output1, delimiter='\t')

for record in ro:
    # if not record[0].startswith('#'):
    if len(record) > 1:
        wo.writerow((record[0], record[2]))
Error:
Traceback (most recent call last):
  File "C:/Users/Mark/PycharmProjects/main/main.py", line 11, in <module>
    for record in ro:
ValueError: I/O operation on closed file.
Second Program (works):
etcfile = "f:\mark\python\etc_password.txt"
users = {}

with open(etcfile, "r") as datafile:
    for line in datafile:
        if not line.startswith("#"):
            info = line.split(':')
            users[info[0]] = info[2]

for username in sorted(users):
    print("{}:{}".format(username, users[username]))
The first program has an issue that I can't figure out. The second program works just fine opening the same file.

The error ValueError: I/O operation on closed file. is telling you that you cannot read from a closed file. If you look at the indentation of your first program, the for loop sits outside the with block, so you are iterating a csv reader over a file that was already closed when the with block ended. A simpler example of this behavior would be:
In [1]: import csv

In [2]: file = open('test.csv')

In [3]: ro = csv.reader(file)

In [4]: file.close()

In [5]: for record in ro:
   ...:     print(record)
   ...:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-1f7adaf76d31> in <module>()
----> 1 for record in ro:
      2     print(record)
      3
ValueError: I/O operation on closed file.
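The fix is to keep the loop inside the with block, so both files are still open when the reader and writer are used. A minimal sketch of the corrected first program (the raw-string prefix on the paths is an added precaution, since a backslash sequence like \t in a Windows path would otherwise be expanded):

import csv

passwd = r"f:\mark\python\etc_password.txt"  # raw strings keep the backslashes intact
output = r"f:\mark\python\output.txt"

with open(passwd, 'r') as passwd1, open(output, 'w') as output1:
    ro = csv.reader(passwd1, delimiter=':')
    wo = csv.writer(output1, delimiter='\t')
    # Iterate while both files are still open:
    for record in ro:
        if len(record) > 1:
            wo.writerow((record[0], record[2]))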

Related

'NoneType' object has no attribute 'read' when reading from JSON file

I am making a script for a school project that requires that I receive a JSON file that tells me if a license plate is visible in a picture. Right now the code sends a POST with an image to an API that then gives me a JSON in return; that JSON data is sent to the file "lastResponse.json".
The code that is giving the error:
with open('lastResponse.json', 'r+') as fp:
    f = json.dump(r.json(), fp, sort_keys=True, indent=4)  # Where the response data is sent to the JSON
    data = json.load(f)  # Line that triggers the error
    print(data["results"])  # Debug code
    print("------------------")  # Debug code
    print(data)  # Debug code

    # This statement just checks if a license plate is visible
    if data["results"]["plate"] is None:
        print("No car detected!")
    else:
        print("Car with plate number '" + data["results"]["plate"] + "' has been detected")
The Error:
Traceback (most recent call last):
  File "DetectionFinished.py", line 19, in <module>
    data = json.load(f)
  File "/usr/lib/python3.7/json/__init__.py", line 293, in load
    return loads(fp.read(),
AttributeError: 'NoneType' object has no attribute 'read'
I am not very experienced in Python so I would appreciate explanations!
It turns out that, after rereading the API's documentation and using their examples, I was able to fix my issue:
import requests
from pprint import pprint

regions = ['gb', 'it']

with open('/path/to/car.jpg', 'rb') as fp:
    response = requests.post(
        'https://api.platerecognizer.com/v1/plate-reader/',
        data=dict(regions=regions),  # Optional
        files=dict(upload=fp),
        headers={'Authorization': 'Token API_TOKEN'})
pprint(response.json())
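For reference, the original AttributeError happens because json.dump() writes to the file and returns None, so the next line effectively called json.load(None). If you do want to write the response to a file and read it back in the same block, a minimal sketch (assuming r is the requests response from the question):

import json

with open('lastResponse.json', 'r+') as fp:
    json.dump(r.json(), fp, sort_keys=True, indent=4)
    fp.seek(0)            # rewind to the start of what was just written
    data = json.load(fp)  # load from the file object, not from dump()'s return value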

JSON Parsing with Nao robot - AttributeError

I'm using a NAO robot with naoqi version 2.1 and Choregraphe on Windows. I want to parse JSON from a file attached to the behavior. I attached the file as in that link.
Code:
def onLoad(self):
    self.filepath = os.path.join(os.path.dirname(ALFrameManager.getBehaviorPath(self.behaviorId)), "fileName.json")

def onInput_onStart(self):
    with open(self.filepath, "r") as f:
        self.data = self.json.load(f.get_Response())
        self.dataFromFile = self.data['value']
        self.log("Data from file: " + str(self.dataFromFile))
But when I run this code on the robot (connected via a router) I get an error:
[ERROR] behavior.box :_safeCallOfUserMethod:281 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1136151280__root__AbfrageKontostand_3__AuslesenJSONDatei_1: Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/naoqi.py", line 271, in _safeCallOfUserMethod
    func()
  File "<string>", line 20, in onInput_onStart
  File "/usr/lib/python2.7/site-packages/inaoqi.py", line 265, in <lambda>
    __getattr__ = lambda self, name: _swig_getattr(self, behavior, name)
  File "/usr/lib/python2.7/site-packages/inaoqi.py", line 55, in _swig_getattr
    raise AttributeError(name)
AttributeError: json
I have already tried to understand the code at the corresponding lines, but I couldn't fix the error. I do know that the type of my object f is 'file'. How can I open the JSON file as a JSON file?
Your problem comes from this:
self.json.load(f.get_Response())
... there is no such thing as "self.json" on a Choregraphe box; import json and then call json.load. And what is get_Response? That method doesn't exist on anything in Python that I know of.
You might want to first try making a standalone Python script (one that doesn't use the robot) that can read your JSON file before you try it with Choregraphe. It will be easier.
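A minimal sketch of the corrected box method, assuming the attached file holds a top-level object with a 'value' key as in the question:

import json  # at module level in the box script, not as an attribute of self

def onInput_onStart(self):
    with open(self.filepath, "r") as f:
        self.data = json.load(f)  # read straight from the file object
    self.dataFromFile = self.data['value']
    self.log("Data from file: " + str(self.dataFromFile))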

Pandas: read_csv() with engine=C issue (bug or feature?)

I am using pandas 0.18 on Python 2.7.9 on SUSE Linux Enterprise 11.
I have a file that contains multiple tables:
TABLE_A
col1,col2,...,col8
...
TABLE_B
col1,col2,...,col7
...
Table A is about 7300 lines, and Table B is about 100 lines. I make an initial pass through the file to determine the start/end positions of each table. Then I use read_csv() in pandas with the skiprows and nrows options to read the appropriate table into memory, using engine='c'.
I'm seeing weird behavior with engine='c': I'm able to read the first 4552 or so lines of TABLE_A without any issues, but if I try to read 4553 lines, I get an error:
>>> df = pd.read_csv(f,engine='c',skiprows=1,nrows=4552)
>>> df.shape
(4552, 7)
>>> df = pd.read_csv(f,engine='c',skiprows=1,nrows=4553)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/python_pkgs/lib/python2.7/site-packages/pandas-0.18.0-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 529, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/python_pkgs/lib/python2.7/site-packages/pandas-0.18.0-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 301, in _read
    return parser.read(nrows)
  File "/python_pkgs/lib/python2.7/site-packages/pandas-0.18.0-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 763, in read
    ret = self._engine.read(nrows)
  File "/python_pkgs/lib/python2.7/site-packages/pandas-0.18.0-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 1213, in read
    data = self._reader.read(nrows)
  File "pandas/parser.pyx", line 766, in pandas.parser.TextReader.read (pandas/parser.c:7988)
  File "pandas/parser.pyx", line 800, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:8444)
  File "pandas/parser.pyx", line 842, in pandas.parser.TextReader._read_rows (pandas/parser.c:8970)
  File "pandas/parser.pyx", line 829, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:8838)
  File "pandas/parser.pyx", line 1833, in pandas.parser.raise_parser_error (pandas/parser.c:22649)
pandas.parser.CParserError: Error tokenizing data. C error: Expected 7 fields in line 7421, saw 8
From the error message it seems like the C parser has continued reading way past the specified lines and has encountered TABLE_B which has 7 columns only (TABLE_A has 8 columns).
However, reading with engine='python' works OK.
>>> df = pd.read_csv(f,engine='python',skiprows=1,nrows=6000)
>>> df.shape
(6000, 7)
>>>
So is this a bug or a feature/limitation? Perhaps it is related to the way the C parser reads in chunks? Thanks.
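The traceback itself hints at what happened: _read_low_memory shows the C parser tokenizing the file in chunks, so it can tokenize rows beyond the requested nrows and trip over the next table's different field count. One workaround sketch (the filename and line boundaries are assumptions standing in for the values found during the initial pass) is to slice the table's lines out first, so the C tokenizer never sees TABLE_B:

import io
import itertools
import pandas as pd

# Hypothetical layout from the initial pass: line 0 is the "TABLE_A" title,
# line 1 is the header row, and the data rows follow.
nrows = 4553

with open('multi_table.csv') as fh:
    table_a = ''.join(itertools.islice(fh, 1, 2 + nrows))  # header + data rows

# On Python 2.7 (as in the question) wrap the bytes in BytesIO;
# on Python 3 this would be io.StringIO(table_a).
df = pd.read_csv(io.BytesIO(table_a), engine='c')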

Trying to merge multiple CSV files into one Excel workbook

I am able to execute the code below in Python 2.7 and merge all CSV files into a single Excel workbook, but when I try to execute it in Python 3.4 I get an error. Let me know if anyone has faced this issue and sorted it out.
Code:
import glob, csv, xlwt, os

wb = xlwt.Workbook()
for filename in glob.glob(r'E:\BMCSoftware\Datastore\utility\BPM_Datastore_Utility\*.csv'):
    #print (filename)
    (f_path, f_name) = os.path.split(filename)
    #print (f_name)
    (f_short_name, f_extension) = os.path.splitext(f_name)
    #print (f_short_name)
    ws = wb.add_sheet(f_short_name)
    #print (ws)
    with open(filename, 'rU') as f:
        spamReader = csv.reader(f)
        for rowx, row in enumerate(spamReader):
            for colx, value in enumerate(row):
                ws.write(rowx, colx, value)
wb.save("f:\find_acs_errors_ALL_EMEA.xls")
ERROR:-
>>>
Traceback (most recent call last):
File "E:\BMCSoftware\Python34\Copy of DataStore.py", line 16, in <module>
wb.save("f:\find_acs_errors_ALL_EMEA.xls")
File "E:\BMCSoftware\Python34\lib\site-packages\xlwt-1.0.0-py3.4.egg\xlwt\Workbook.py", line 696, in save
doc.save(filename_or_stream, self.get_biff_data())
File "E:\BMCSoftware\Python34\lib\site-packages\xlwt-1.0.0-py3.4.egg\xlwt\CompoundDoc.py", line 262, in save
f = open(file_name_or_filelike_obj, 'w+b')
FileNotFoundError: [Errno 2] No such file or directory: 'f:\x0cind_acs_errors_ALL_EMEA.xls'
>>>
You should either use double backslashes or single forward slashes in
wb.save("f:\find_acs_errors_ALL_EMEA.xls")
i.e. one of these:
wb.save("f:\\find_acs_errors_ALL_EMEA.xls")
wb.save("f:/find_acs_errors_ALL_EMEA.xls")
In "f:\find...", the \f is interpreted as the form-feed escape \x0c, which is exactly what the traceback shows in place of the filename. Hope that helps!
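A quick REPL check shows why the original path broke (a raw string would work as well):

>>> "f:\find"      # \f is the ASCII form-feed escape...
'f:\x0cind'
>>> "f:\\find"     # ...while an escaped backslash survives
'f:\\find'
>>> r"f:\find"     # a raw string also leaves the backslash alone
'f:\\find'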

Error loading JSON using Topsy

When I load a single record the JSON is created just fine, but when I try to load multiple records I get this error. Sorry, I am new to Python: http://tny.cz/ce1baaba
Traceback (most recent call last):
  File "TweetGraber.py", line 26, in <module>
    get_tweets_by_query(topic)
  File "TweetGraber.py", line 15, in get_tweets_by_query
    json_tree = json.loads(source)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 368, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 11 column 1 (char 2380 - 46974)
Here is my code:
def get_tweets_by_query(query, count=10):
    """A function that gets the tweets by a query."""
    Tweets = []
    queryEncoded = urllib.quote(query)
    api_key = "xxxxx"
    source = urllib.urlopen("http://api.topsy.com/v2/content/bulktweets.json?q=%s&type=tweet&offset=0&perpage=%s&window=realtime&apikey=%s" % (queryEncoded, count, api_key)).read()
    json_tree = json.loads(source)
    pprint(json_tree)

topic = raw_input("Please enter a topic: ")
get_tweets_by_query(topic)
Thanks Timusan, I was able to correct my JSON. The problem with the original was that it was missing the root element "[" (which signals that an array is expected) and the "," after the end of each object. So here is the fixed code:
def get_tweets_by_query(query, count=10):
    """A function that gets the tweets by a query."""
    Tweets = []
    queryEncoded = urllib.quote(query)
    api_key = "xx"
    source = urllib.urlopen("http://api.topsy.com/v2/content/bulktweets.json?q=%s&type=tweet&offset=0&perpage=%s&window=realtime&apikey=%s" % (queryEncoded, count, api_key)).read()
    source = "[" + source + "]"
    source = source.replace("}\n{", "},{")
    json_tree = json.loads(source)
    pprint(json_tree)
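The "Extra data" error means the response held several JSON documents back to back, one per line (newline-delimited JSON). An alternative sketch that parses each line on its own, without the string surgery above:

import json

# Parse one JSON document per non-empty line instead of patching the string.
json_tree = [json.loads(line) for line in source.splitlines() if line.strip()]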