How to get CSV to work in RevitPythonShell?

Has anyone figured out how to get csv (or any other package) to work in RevitPythonShell? I've only been able to get Excel from Interop to work.
When I try running csv in RPS, the script executes with no error or feedback of any kind, and the file is not created either.
This is the basic code I'm trying to run, which I believe comes from a tutorial on CSV:
import csv

with open('mycsv2.csv', 'w') as f:
    fieldnames = ['column1', 'column2', 'column3']
    thewriter = csv.DictWriter(f, fieldnames=fieldnames)
    thewriter.writeheader()
    for i in range(1, 10):
        thewriter.writerow({'column1': 'one', 'column2': 'two', 'column3': 'three'})
I find csv much more user-friendly and easier to understand than Interop Excel. I believe I've read somewhere that it's doable, but of course I can't find the source now.
All help, tips, or tricks are appreciated.

I can get it to work by supplying the full path to the open function, so it looks like this (showing the full path to my Documents folder):
import csv
with open(r'C:\Users\callum\Documents\mycsv2.csv', 'w') as f:
    fieldnames = ['column1', 'column2', 'column3']
    thewriter = csv.DictWriter(f, fieldnames=fieldnames)
    thewriter.writeheader()
    for i in range(1, 10):
        thewriter.writerow({'column1': 'one', 'column2': 'two', 'column3': 'three'})
Let me know if that does the trick!
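For completeness, a minimal sketch that avoids hardcoding the user name, assuming the Documents folder sits directly under the user profile:
import csv
import os

# RPS's working directory isn't predictable, so build an absolute path
# under the current user's home directory instead.
out_path = os.path.join(os.path.expanduser('~'), 'Documents', 'mycsv2.csv')

with open(out_path, 'w') as f:
    fieldnames = ['column1', 'column2', 'column3']
    thewriter = csv.DictWriter(f, fieldnames=fieldnames)
    thewriter.writeheader()
    for i in range(1, 10):
        thewriter.writerow({'column1': 'one', 'column2': 'two', 'column3': 'three'})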

Related

Expecting value: line 1 column 1 (char 0) problem

I want to open a JSON file for object detection (YOLOv5 or YOLOv7). I have 510,000 images and the JSON labeling data for them, but I haven't been able to solve this problem (I tried googling, with no luck so far). Jupyter Notebook doesn't print anything useful about it. I think the major cause is either a problem with the JSON file itself or a Colab memory issue. So how do I solve this? Please help.
You are possibly using the open function incorrectly. Don't quote me on this, but if you are trying to read data from a JSON file, you need to open it with the mode 'r', like so:
with open(filename, 'r') as f:
    data = json.load(f)
'r' stands for read.
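For what it's worth, "Expecting value: line 1 column 1 (char 0)" usually means the parser hit an empty file or non-JSON content. A minimal sketch of a defensive check (the file name here is hypothetical):
import json

path = 'labels.json'  # hypothetical file name

with open(path, 'r', encoding='utf-8') as f:
    text = f.read()

if not text.strip():
    # An empty file is exactly what raises "Expecting value: line 1 column 1 (char 0)".
    print('File is empty')
else:
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        # Report where parsing failed instead of crashing the notebook.
        print('Not valid JSON:', e)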

Writing a utils.py function to zip up a csv in Code Repo

I wrote a tool (maybe two years ago) in a Code Repo to zip up several CSV files that were written out to disk from a dataframe, for someone working on a platform that works best with a zipped-up CSV file, so they can download it and work with it elsewhere (more user-friendly for some).
I can't remember if I got it to work at the time, but here's my recent stab at this (and yes, before you ask, I'm aware there's an easy way to gzip a file using the df.write_dataframe() option... I'm doing this to have more control over the name of the zip, and this is a Windows user with a locked-down system and a select set of tools)...
Here's what I've got so far in utils.py:
import tempfile
import zipfile

def zipit(source_df, out, zipfile_name, internal_prefix, fileSuffix):
    file_list = list(source_df.filesystem().ls())
    fs = source_df.filesystem()
    zipf = zipfile.ZipFile("zipfile.zip", 'w', zipfile.ZIP_DEFLATED)
    for files in file_list:
        # Copy each dataset file into a named temp file, then add it to the zip.
        temp = tempfile.NamedTemporaryFile(prefix=internal_prefix, suffix=fileSuffix)
        with fs.open(files.path, 'rb') as f:
            with open(temp.name, 'wb') as w:
                w.write(f.read())
        zipf.write(temp.name)
    zipf.close()
    # Copy the finished zip from local disk into the output dataset's filesystem.
    with open("zipfile.zip", 'rb') as f:
        with out.filesystem().open(zipfile_name, 'wb') as w:
            w.write(f.read())
My issue is that this zips up the .csv file(s), and I can name the zip, but inside the archive the file is buried under a long, crazy series of temp folder names, and the .csv crashes when you try to open it.
I'm sure I can figure this out, but I feel like I'm blowing this way out of the water here and would appreciate the community's wisdom.
The other (much less important) problem is that the file itself has a prefix and a suffix, which is nice, but it'd be nicer if I could just name the whole file instead of getting the temp file's random characters in the middle of the name.
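One observation on the zipfile API itself (not from the original thread, and the entry-name scheme below is just a hypothetical example): zipf.write(temp.name) records the file under its full temporary path, which is why the archive shows that long series of temp folders. Passing arcname lets you choose the name stored inside the archive, which would also sidestep the random temp-file characters. A sketch of the loop with that change, reusing fs, file_list, internal_prefix, and fileSuffix from the function above:
import tempfile
import zipfile

zipf = zipfile.ZipFile("zipfile.zip", 'w', zipfile.ZIP_DEFLATED)
for i, files in enumerate(file_list):
    temp = tempfile.NamedTemporaryFile(prefix=internal_prefix, suffix=fileSuffix)
    with fs.open(files.path, 'rb') as f, open(temp.name, 'wb') as w:
        w.write(f.read())
    # arcname controls the path recorded inside the archive: a flat,
    # predictable entry name instead of the temp file's full path.
    arcname = "{}_{}{}".format(internal_prefix, i, fileSuffix)
    zipf.write(temp.name, arcname=arcname)
zipf.close()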

In Python 3.6 JSON module why do I have to use both loads and load?

I am trying to persist some data to disk using Python's JSON module, but I can't access the data with a simple json.load and I can't figure out why. Here's my code:
import json

jsondata = json.dumps({'a': 1,
                       'b': 'string',
                       'c': {'k1': (1, 3), 'k2': (12, 3)}})
f = open('jsonfile.json', 'w')
json.dump(jsondata, f)
f.close()
g = open('jsonfile.json', 'r')
result = json.load(g)
g.close()
print(result['b'])
This gives me the error "TypeError: string indices must be integers".
However, if I replace the access block with
g = open('jsonfile.json', 'r')
result = json.loads(json.load(g))
g.close()
print(result['b'])
It gives me the result I expect. I have read through the documentation a number of times and it seems like the simple json.load by itself should be sufficient. I can't figure out why I would have to use json.loads as well. I feel like I'm missing something. Any insight would be welcome.
Thanks, петр костюкевич.
The problem was that I was converting the data to a string before the dump, so I needed to convert it back. This code worked:
import json

jsondata = {'a': 1,
            'b': 'string',
            'c': {'k1': (1, 3), 'k2': (12, 3)}}
f = open('jsonfile.json', 'w')
json.dump(jsondata, f)
f.close()
g = open('jsonfile.json', 'r')
result = json.load(g)
g.close()
print(result['b'])
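To spell out why the double decode was needed: json.dumps returns a JSON string, so json.dump then serialized that string a second time. The file held a JSON-encoded string, json.load faithfully gave back a str, and indexing a str with 'b' raised the TypeError. A minimal sketch of the difference (note, too, that JSON has no tuple type, so tuples come back as lists):
import json

obj = {'b': 'string', 'c': {'k1': (1, 3)}}

# Double encoding: the outer value is a string containing escaped JSON.
double = json.loads(json.dumps(json.dumps(obj)))
print(type(double))  # <class 'str'> -- indexing with 'b' raises TypeError

# Single round trip: the value comes back as a dict.
single = json.loads(json.dumps(obj))
print(type(single))       # <class 'dict'>
print(single['c']['k1'])  # [1, 3] -- the tuple is now a list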

stream_in part of .json file

I have a large .json file and I only want to read in part of it.
I tried the following solutions, but they didn't work:
yelp <- stream_in(file("yelp_academic_dataset_review.json"), paigesize = 500)
yelp <- stream_in(file("yelp_academic_dataset_review.json"), nrows = 500)
Does anyone know how to make it work?
First off, it's always helpful to state the packages you are using; in your case, jsonlite. (Note also that stream_in's paging argument is spelled pagesize, not paigesize, and it controls the batch size rather than limiting how many rows are read, so neither call above could work as hoped.)
One solution is parsing the data file (as a .txt file) prior to streaming it in:
yelp <- readLines("yelp_academic_dataset_review.json")[1:500]
yelp <- stream_in(textConnection(gsub("\\n", "", yelp)))
I'm assuming your file is local?
I have had success with actual piping/streaming of JSON in the past, i.e., from the command line:
cat x.json | parse_json.py
Then you write your Python script:
import json
import sys

for line in sys.stdin:
    try:
        js_line = json.loads(line.rstrip())
        # do something with js_line['x']['y']
    except ValueError:
        pass  # skip lines that are not valid JSON
I'm not sure why you want to use stream_in, but this somewhat manual approach can be effective.
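Along the same lines, if all you need is the first 500 records of a line-delimited JSON file, a small sketch that skips the shell pipe entirely (file name taken from the question):
import itertools
import json

records = []
with open('yelp_academic_dataset_review.json', 'r') as f:
    # islice stops after 500 lines, so the rest of the large file is never read.
    for line in itertools.islice(f, 500):
        records.append(json.loads(line))

print(len(records))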
I use this code to extract lines 1400001 to 1450000 of the yelp data:
setwd("d:/yelp_dataset")
rm(list = ls())
library(jsonlite)
rev <- 'd:/yelp_dataset/review.JSON'
revu <- jsonlite::stream_in(textConnection(readLines(rev)[1400001:1450000]), verbose = FALSE)

Output a *.csv file created from a list of lists

Firstly, I'm just learning Python (it's my first language), so while I recognise there are numerous websites that address this, I've spent a weekend trying to get my head around implementing a solution and got nowhere with it. So I'm hoping someone here can help me :)
The problem is simple: I've created a list of lists in a Python program, and I need to output them to a *.csv file so I can import it into Excel etc.
The list looks like this:
[['title1','title2','title3'],['date1','info1','category1'],['date2','info2','category3'],...]
I've found solutions where the elements in each list are integers, but I can't get them to work with strings.
Any help on this would be much appreciated!
Thanks,
Adam
There's a csv module in the standard library that can do this:
import csv

data = [['title1', 'title2', 'title3'],
        ['date1', 'info1', 'category1'],
        ['date2', 'info2', 'category3']]

# On Python 3, open in text mode with newline='' (on Python 2, use mode 'wb').
with open('stuff.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for line in data:
        writer.writerow(line)
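As a small follow-up, csv.writer also has a writerows method that writes the whole list of lists in one call, replacing the explicit loop:
import csv

data = [['title1', 'title2', 'title3'],
        ['date1', 'info1', 'category1'],
        ['date2', 'info2', 'category3']]

with open('stuff.csv', 'w', newline='') as csvfile:
    # writerows writes every row in a single call.
    csv.writer(csvfile).writerows(data)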