Python CSV Has No Attribute 'Writer' - csv

There's a bit of code giving me trouble. It was working great in another script I had but I must have messed it up somehow.
The if csv: check is there primarily because I was relying on a -csv option from argparse. But even if I run this with proper indents outside the if statement, it still returns the same error.
import csv
if csv:
    with open('output.csv', 'wb') as csvfile:
        csvout = csv.writer(csvfile, delimiter=',',
                            quotechar=',', quoting=csv.QUOTE_MINIMAL)
        csvout.writerow(['A', 'B', 'C'])
        csvfile.close()
Gives me:
Traceback (most recent call last):
  File "import csv.py", line 34, in <module>
    csvout = csv.writer(csvfile, delimiter=',',
AttributeError: 'str' object has no attribute 'writer'
If I remove the if statement, I get:
Traceback (most recent call last):
  File "C:\import csv.py", line 34, in <module>
    csvout = csv.writer(csvfile, delimiter=',',
AttributeError: 'NoneType' object has no attribute 'writer'
What silly thing am I doing wrong? I did try changing the file name to things like test.py as I saw that in another SO post, didn't work.

In my case I had named my file csv.py, so when I imported csv from that file I was essentially trying to import the file itself.

If you've assigned something to the name csv (it looks like a string), then you're shadowing the module import. So the simplest thing is to rename whatever is being assigned to csv that isn't the module and call it something else...
In effect what's happening is:
import csv
csv = 'bob'
csvout = csv.writer(somefile)
Remove the further assignment to csv and go from there...
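In effect the fix is just to give the non-module value its own name. A minimal sketch, assuming the value being bound to csv was the output-format choice from argparse (the name output_format here is hypothetical):
import csv

output_format = 'csv'   # hypothetical stand-in for whatever used to be assigned to the name csv

if output_format == 'csv':
    with open('output.csv', 'wb') as csvfile:   # 'wb' as in the question (Python 2); on Python 3 use 'w', newline=''
        csvout = csv.writer(csvfile, delimiter=',', quoting=csv.QUOTE_MINIMAL)
        csvout.writerow(['A', 'B', 'C'])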

In my case, my function name happened to be csv(). Once I renamed the function, the error disappeared.

Related

Reading JSON from a file and extracting the keys returns AttributeError: 'str' object has no attribute 'keys'

I am new to Python (and JSON), so apologies if this is obvious to you.
I pull some data from an API using the following code
import requests
import json
headers = {'Content-Type': 'application/json', 'accept-encoding':'identity'}
api_url = api_url_base+api_token+api_request #variables removed for security
response = requests.get(api_url, headers=headers)
data=response.json()
keys=data.keys
if response.status_code == 200:
    print(data["message"], "saving to file...")
    print("Found the following keys:")
    print(keys)
    with open('vulns.json', 'w') as outfile:
        json.dump(response.content.decode('utf-8'),outfile)
    print("File Saved.")
else:
    print('The site returned a', response.status_code, 'error')
This works: I get some data returned and I am able to write the file.
I am trying to change what's returned from a short format to a long format, and to check it's working I need to see the keys. I was trying to do this offline using the written file (as practice for reading JSON from files).
I wrote these few lines (taken from this site https://www.kite.com/python/answers/how-to-print-the-keys-of-a-dictionary-in-python)
import json
with open('vulns.json') as json_file:
    data=json.load(json_file)
    print(data)
    keys=list(data.keys())
    print(keys)
Unfortunately, whenever I run this it returns this error
Python 3.9.1 (tags/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print(keys)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'keys' is not defined
>>> & C:/Users/xxxx/AppData/Local/Microsoft/WindowsApps/python.exe c:/Temp/read-vulnfile.py
  File "<stdin>", line 1
    & C:/Users/xxxx/AppData/Local/Microsoft/WindowsApps/python.exe c:/Temp/read-vulnfile.py
    ^
SyntaxError: invalid syntax
>>> exit()
PS C:\Users\xxxx\Documents\scripts\Python> & C:/Users/xxx/AppData/Local/Microsoft/WindowsApps/python.exe c:/Temp/read-vulnfile.py
Traceback (most recent call last):
  File "c:\Temp\read-vulnfile.py", line 6, in <module>
    keys=list(data.keys)
AttributeError: 'str' object has no attribute 'keys'
The print(data) call returns what looks like JSON; this is the opening line:
{"count": 1000, "message": "Vulnerabilities found: 1000", "data":
[{"...
I can't show the content; it's sensitive.
Why is this looking at a str object rather than a dictionary?
How do I read the JSON back into a dictionary, please?
You just have that content stored in the file as a string. Open vulns.json in an editor and you will most likely see the whole document wrapped in quotes, something like "{\"count\": 1000, ... instead of {"count": 1000, ....
It is read by json.load just fine, but decoded to a string (see the conversion table in the json module docs).
So take one step back and look at what happens while saving to the file: you take content from your response, but dump the decoded string value into the file. Try instead
json.dump(response.json(), outfile)
(or just use the data variable you already have).
This should allow you to successfully dump and load the data as a dict.
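A minimal sketch of the round trip (the small dict here is a stand-in for response.json(), using the keys shown in the question):
import json

data = {"count": 1000, "message": "Vulnerabilities found: 1000", "data": []}   # stand-in for response.json()

with open('vulns.json', 'w') as outfile:
    json.dump(data, outfile)                # writes a JSON object
    # json.dump(json.dumps(data), outfile)  # this is what dumping the decoded text did: one big JSON string

with open('vulns.json') as json_file:
    loaded = json.load(json_file)
print(list(loaded.keys()))                  # ['count', 'message', 'data']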

JSON Parsing with Nao robot - AttributeError

I'm using a NAO robot with naoqi version 2.1 and Choregraphe on Windows. I want to parse JSON from a file attached to the behavior. I attached the file as shown in that link.
Code:
def onLoad(self):
    self.filepath = os.path.join(os.path.dirname(ALFrameManager.getBehaviorPath(self.behaviorId)), "fileName.json")

def onInput_onStart(self):
    with open(self.filepath, "r") as f:
        self.data = self.json.load(f.get_Response())
        self.dataFromFile = self.data['value']
        self.log("Data from file: " + str(self.dataFromFile))
But when I run this code on the robot (connected via a router), I get this error:
[ERROR] behavior.box :_safeCallOfUserMethod:281 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1136151280__root__AbfrageKontostand_3__AuslesenJSONDatei_1: Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/naoqi.py", line 271, in _safeCallOfUserMethod
    func()
  File "<string>", line 20, in onInput_onStart
  File "/usr/lib/python2.7/site-packages/inaoqi.py", line 265, in <lambda>
    __getattr__ = lambda self, name: _swig_getattr(self, behavior, name)
  File "/usr/lib/python2.7/site-packages/inaoqi.py", line 55, in _swig_getattr
    raise AttributeError(name)
AttributeError: json
I already tried to understand the code at the corresponding lines but I couldn't fix the error. I do know that the type of my object f is 'file'. How can I open the JSON file as a JSON file?
Your problem comes from this:
self.json.load(f.get_Response())
... there is no such thing as "self.json" on a Choregraphe box; import json and then call json.load. And what is get_Response? That method doesn't exist on anything in Python that I know of.
You might want to first make a standalone Python script (one that doesn't use the robot) that can read your JSON file before you try it in Choregraphe. It will be easier.
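A rough sketch of the corrected box code, untested on a robot; it assumes the standard json module is importable in the box's Python 2.7 environment and keeps the onLoad/self.filepath setup from the question:
import json

def onInput_onStart(self):
    # Parse the attached file with the json module (not self.json) and pass
    # the file object straight to json.load; no get_Response call is needed.
    with open(self.filepath, "r") as f:
        self.data = json.load(f)
    self.dataFromFile = self.data['value']
    self.log("Data from file: " + str(self.dataFromFile))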

How to get the actual value of a cell with openpyxl?

I'm a beginner with Python and I need help. I'm using Python 2.7 and I'm trying to retrieve the cell values of an Excel file and store them in a csv file. My code is the following:
import os, openpyxl, csv
aggname = "deu"
wb_source = openpyxl.load_workbook(filename, data_only = True)
app_file = open(filename,'a')
dest_file = csv.writer(app_file, delimiter=',', lineterminator='\n')
calib_sheet = wb_source.get_sheet_by_name('Calibration')
data = calib_sheet['B78:C88']
data = list(data)
print(data)
for i in range(len(data)):
    dest_file.writerow(data[i])
app_file.close()
In my csv file, I get this, instead of the actual value (for example in my case: SFCG, 99103).
<Cell Calibration.B78>,<Cell Calibration.C78>
<Cell Calibration.B79>,<Cell Calibration.C79>
<Cell Calibration.B80>,<Cell Calibration.C80>
<Cell Calibration.B81>,<Cell Calibration.C81>
<Cell Calibration.B82>,<Cell Calibration.C82>
<Cell Calibration.B83>,<Cell Calibration.C83>
<Cell Calibration.B84>,<Cell Calibration.C84>
<Cell Calibration.B85>,<Cell Calibration.C85>
<Cell Calibration.B86>,<Cell Calibration.C86>
<Cell Calibration.B87>,<Cell Calibration.C87>
<Cell Calibration.B88>,<Cell Calibration.C88>
I tried setting data_only=True when opening the Excel file, as suggested in answers to similar questions, but it doesn't solve my problem.
---------------EDIT-------------
Taking into account the first two answers I got (thank you!), I tried several things:
for i in range(len(data)):
    dest_file.writerows(data[i].value)
I get this error message:
for i in range(len(data)):
    dest_file.writerows(data[i].values)
Traceback (most recent call last):
  File "<ipython-input-78-27828c989b39>", line 2, in <module>
    dest_file.writerows(data[i].values)
AttributeError: 'tuple' object has no attribute 'values'
Then I tried this instead:
for i in range(len(data)):
    for j in range(2):
        dest_file.writerow(data[i][j].value)
and then I have the following error message:
for i in range(len(data)):
    for j in range(2):
        dest_file.writerow(data[i][j].value)
Traceback (most recent call last):
  File "<ipython-input-80-c571abd7c3ec>", line 3, in <module>
    dest_file.writerow(data[i][j].value)
Error: sequence expected
So then, I tried this:
import os, openpyxl, csv
wb_source = openpyxl.load_workbook(filename, data_only=True)
app_file = open(filename,'a')
dest_file = csv.writer(app_file, delimiter=',', lineterminator='\n')
calib_sheet = wb_source.get_sheet_by_name('Calibration')
list(calib_sheet.iter_rows('B78:C88'))
for row in calib_sheet.iter_rows('B78:C88'):
    for cell in row:
        dest_file.writerow(cell.value)
Only to get this error message:
Traceback (most recent call last):
  File "<ipython-input-81-5bed62b45985>", line 12, in <module>
    dest_file.writerow(cell.value)
Error: sequence expected
For the "sequence expected" error I suppose python expects a list rather than a single cell, so I did this:
import os, openpyxl, csv
wb_source = openpyxl.load_workbook(filename, data_only=True)
app_file = open(filename,'a')
dest_file = csv.writer(app_file, delimiter=',', lineterminator='\n')
calib_sheet = wb_source.get_sheet_by_name('Calibration')
list(calib_sheet.iter_rows('B78:C88'))
for row in calib_sheet.iter_rows('B78:C88'):
    dest_file.writerow(row)
There is no error message, but I only get the cell references in the csv file, and changing it to dest_file.writerow(row.value) brings me back to the tuple error.
I obviously still need your help!
You've forgotten to get the cells' values! See the documentation.
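As a minimal sketch of that advice (same workbook, sheet and range as in the question; the output file name calibration.csv is made up): writerow expects a sequence, so build a list of each cell's value per row.
import openpyxl, csv

wb_source = openpyxl.load_workbook(filename, data_only=True)    # filename as in the question
calib_sheet = wb_source.get_sheet_by_name('Calibration')        # newer openpyxl: wb_source['Calibration']

with open('calibration.csv', 'wb') as app_file:                 # 'wb' for the csv module on Python 2.7
    dest_file = csv.writer(app_file, delimiter=',', lineterminator='\n')
    for row in calib_sheet['B78:C88']:
        # each row is a tuple of Cell objects; pass their values as one sequence
        dest_file.writerow([cell.value for cell in row])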
I found a way around it using numpy, which allows me to store my values as a list of lists rather than a list of tuples.
import os, openpyxl, csv
import numpy as np
wb_source = openpyxl.load_workbook(filename, data_only=True)
app_file = open(filename,'a')
dest_file = csv.writer(app_file, delimiter=',', lineterminator='\n')
calib_sheet = wb_source.get_sheet_by_name('Calibration')
store = list(calib_sheet.iter_rows('B78:C88'))
print store
truc = np.array(store)
print truc
for i in range(11):
    for j in range(1):
        dest_file.writerow([truc[i][j].value, truc[i][j+1].value])
app_file.close()
This way I actually pass a sequence as the argument to writerow(), and with the array object I can use double indexing and the value attribute to retrieve the value of each cell.
Try using data.values instead of just data when you are printing it.
Hope it helps !!
An example:
import openpyxl
import re
import os
wc = openpyxl.load_workbook('<path of the file>')
wcsheet = wc.get_sheet_by_name('test')
store = []
for data in wcsheet.columns[0]:
    store = data
    print(store.value)

Converting JSON files to .csv

I've found some data that someone is downloading into a JSON file (I think! - I'm a newb!). The file contains data on nearly 600 football players.
Here you can find the file
In the past, I have downloaded the json file and then used this code:
import csv
import json
json_data = open("file.json")
data = json.load(json_data)
f = csv.writer(open("fix_hists.csv","wb+"))
arr = []
for i in data:
    fh = data[i]["fixture_history"]
    array = fh["all"]
    for j in array:
        try:
            j.insert(0,str(data[i]["first_name"]))
        except:
            j.insert(0,'error')
        try:
            j.insert(1,data[i]["web_name"])
        except:
            j.insert(1,'error')
        try:
            f.writerow(j)
        except:
            f.writerow(['error','error'])
json_data.close()
Sadly, when I run this now from the command prompt, I get the following error:
Traceback (most recent call last):
  File "fix_hist.py", line 12, in <module>
    fh = data[i]["fixture_history"]
TypeError: list indices must be integers, not str
Can this be fixed, or is there another way I can grab some of the data and convert it to .csv? Specifically the fixture history, and then 'first_name', 'type_name', etc.
Thanks in advance for any help :)
Try this tool: http://www.convertcsv.com/json-to-csv.htm
You will need to configure a few things, but it should be easy enough.
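If you would rather keep the Python script, the TypeError suggests the JSON's top level is now a list of player objects instead of a dict keyed by player id (an assumption based on "list indices must be integers, not str"). Under that assumption, a small change to the loop keeps the rest of the logic intact (the .get defaults stand in for the original try/except blocks):
import csv
import json

with open("file.json") as json_data:
    data = json.load(json_data)

f = csv.writer(open("fix_hists.csv", "w", newline=""))   # Python 3; the original used "wb+" on Python 2

# If the top level is a list, iterate the player dicts directly;
# if it is still a dict keyed by player id, fall back to its values.
players = data if isinstance(data, list) else data.values()
for player in players:
    for j in player["fixture_history"]["all"]:
        j.insert(0, str(player.get("first_name", "error")))
        j.insert(1, player.get("web_name", "error"))
        f.writerow(j)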

Python 3 Pandas Error: pandas.parser.CParserError: Error tokenizing data. C error: Expected 11 fields in line 5, saw 13

I checked out this answer as I am having a similar problem.
Python Pandas Error tokenizing data
However, for some reason ALL of my rows are being skipped.
My code is simple:
import pandas as pd
fname = "data.csv"
input_data = pd.read_csv(fname)
and the error I get is:
File "preprocessing.py", line 8, in <module>
input_data = pd.read_csv(fname) #raw data file ---> pandas.core.frame.DataFrame type
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/io/parsers.py", line 465, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/io/parsers.py", line 251, in _read
return parser.read()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/io/parsers.py", line 710, in read
ret = self._engine.read(nrows)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/io/parsers.py", line 1154, in read
data = self._reader.read(nrows)
File "pandas/parser.pyx", line 754, in pandas.parser.TextReader.read (pandas/parser.c:7391)
File "pandas/parser.pyx", line 776, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7631)
File "pandas/parser.pyx", line 829, in pandas.parser.TextReader._read_rows (pandas/parser.c:8253)
File "pandas/parser.pyx", line 816, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:8127)
File "pandas/parser.pyx", line 1728, in pandas.parser.raise_parser_error (pandas/parser.c:20357)
pandas.parser.CParserError: Error tokenizing data. C error: Expected 11 fields in line 5, saw 13
One solution is to use pandas' built-in delimiter "sniffing" by passing sep=None:
input_data = pd.read_csv(fname, sep=None)
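A note on that call: sep=None is only handled by the Python parsing engine, so pandas falls back to it automatically (newer versions emit a ParserWarning when doing so); passing the engine explicitly makes that choice visible:
import pandas as pd

fname = "data.csv"
# Let csv.Sniffer detect the delimiter; the Python engine is the one that supports sep=None.
input_data = pd.read_csv(fname, sep=None, engine='python')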
For those landing here, I got this error when the file was actually an .xls file not a true .csv. Try resaving as a csv in a spreadsheet app.
I had the same error. I read my csv data using this:
d1 = pd.read_csv('my.csv')
then I tried this:
d1 = pd.read_csv('my.csv', sep='\t')
and this time it was right.
So you could try this method if your delimiter is not ',': the default is ',', so if you don't specify the separator explicitly, parsing goes wrong.
pandas.read_csv
This error means you have an unequal number of columns across rows: in your case the parser expected 11 fields (based on the earlier lines), but line 5 has 13.
To investigate this, you can try the following approach to open and read your file:
import csv
with open('filename.csv', 'r') as file:
    reader = csv.reader(file, delimiter=',') #if you have a csv file use comma delimiter
    for row in reader:
        print(row)
This parsing error could occur for multiple reasons and solutions to the different reasons have been posted here as well as in Python Pandas Error tokenizing data.
I posted a solution to one possible reason for this error here: https://stackoverflow.com/a/43145539/6466550
I have had similar problems. With my csv files it occurred because they were created in R, so they had some extra commas and different spacing than a "regular" csv file.
I found that if I did a read.table in R, I could then save it using write.csv with the option row.names = F.
I could not get any of the read options in pandas to help me.
The problem could be that one or more rows of the csv file contain more delimiters (commas) than expected. It is solved when each row matches the number of delimiters in the first line of the csv file, where the column names are defined.
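A quick way to check for that, as a minimal sketch (data.csv is the file name from the question): count the fields in each row and flag the ones that differ from the header line.
import csv

with open("data.csv", newline="") as f:
    reader = csv.reader(f)
    expected = len(next(reader))            # number of fields in the header line
    for lineno, row in enumerate(reader, start=2):
        if len(row) != expected:
            print("line", lineno, "has", len(row), "fields, expected", expected)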
Use \t+ in the separator pattern instead of \t:
import pandas as pd
fname = "data.csv"
input_data = pd.read_csv(fname, sep='\t+', header=None)