How to convert these text segments to CSV?

I'm trying to make a CSV file out of a text file containing questions and answers marked up with HTML tags.
Unfortunately, my attempts so far won't open in LibreOffice or any other CSV-compatible software.
What I'm trying to do is to convert something like this:
<b>Question 1:</b> What is the capital of United States?
<b>Answer:</b> Washington, D.C.
<b>Question 2:</b> What is the capital of United States?
<b>Answer:</b> Washington, D.C.
<b>Question 3:</b> What is the capital of United States?
<b>Answer:</b> Washington, D.C.
And so on.
The result should be:
Question SEPARATOR Answer
*SEPARATOR cannot be a colon or semicolon because the question might contain those characters (colons, semicolons, dots)
I want to import into Anki and it supports CSV files.
I've tried separating Question and Answer with a special symbol like #, but then only the questions are parsed in LibreOffice/OpenOffice, and the question text can never contain a line break: if a text contains a line break, the CSV gets messed up.

Here's a little Python script to convert your cards to CSV format:
import re
import csv

# Collect blank-line-separated question/answer pairs into a 2D list.
questions = [[]]
with open("original.file", "rt") as f:
    for line in f:
        line = line.strip()
        if line:
            questions[-1].append(line)
        else:
            questions.append([])

# Drop empty groups produced by consecutive or trailing blank lines.
questions = [q for q in questions if q]

# Now we've got the questions in a 2D list.
# Let's just make sure it loaded properly.
assert all(len(question) == 2 for question in questions)

# Let's write a CSV file now.
with open("output.csv", "wt", newline="") as f:
    writer = csv.writer(f)
    for q, a in questions:
        writer.writerow([
            re.fullmatch(r"<b>Question \d+:</b> (.*)", q).group(1),
            re.fullmatch(r"<b>Answer:</b> (.*)", a).group(1),
        ])
Now you can import these with the "Basic" card type. This code discards the question number; I hope that wasn't too important.
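One more note on the separator worry: csv.writer quotes any field that contains the delimiter, a quote character, or a line break, so commas, colons, semicolons, and even embedded newlines inside a question are safe. A minimal sketch to illustrate, with made-up data:

import csv
import io

# Write one row whose first field contains "dangerous" characters.
buf = io.StringIO()
csv.writer(buf).writerow(["Q: colons; commas,\nand a line break?", "A: all fine."])
print(repr(buf.getvalue()))
# '"Q: colons; commas,\nand a line break?",A: all fine.\r\n'

Anki's importer understands quoted fields, so such questions should survive the import.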

Related

Formatting a CSV file with multiple paraphrased sentences for a given input sentence

I am trying to create a dataset for a T5 model, and I want to include multiple paraphrased sentences for a given input sentence in my CSV file.
For example, the input sentence is "The cat sat on the mat." and I have two different paraphrased versions of this sentence:
The cat was resting on the rug
The cat was seated on the rug
I want to include both of these versions as the target sentences for the input sentence in my CSV file.
My current CSV file looks something like this:
Input                      Target
The cat sat on the mat     The cat was resting on the rug
I'm going to store         I'm heading to the store
Now where can I place the other versions of paraphrased sentences in the above dataset?
Here's my code:
import pandas as pd
from sklearn.model_selection import train_test_split
from simplet5 import SimpleT5

# instantiate
model = SimpleT5()
# load (supports t5, mt5, byT5 models)
model.from_pretrained("t5", "google/flan-t5-base")

path = "dataset.csv"
df = pd.read_csv(path)
train_df, test_df = train_test_split(df, test_size=0.2)
model.train(train_df=train_df,
            eval_df=test_df,
            source_max_token_len=128,
            target_max_token_len=50,
            batch_size=8, max_epochs=3, use_gpu=False)
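For what it's worth, a common way to represent multiple paraphrases (not specific to SimpleT5) is to repeat the input once per target, since each CSV row is an independent training pair for a seq2seq model. A minimal sketch, assuming pandas:

import pandas as pd

# One row per (input, paraphrase) pair; inputs repeat deliberately.
rows = [
    ("The cat sat on the mat", "The cat was resting on the rug"),
    ("The cat sat on the mat", "The cat was seated on the rug"),
    ("I'm going to store", "I'm heading to the store"),
]
df = pd.DataFrame(rows, columns=["Input", "Target"])
df.to_csv("dataset.csv", index=False)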

Write a CSV based on another CSV file creating an additional empty row? [duplicate]

import csv
with open('thefile.csv', 'rb') as f:
    data = list(csv.reader(f))

import collections
counter = collections.defaultdict(int)
for row in data:
    counter[row[10]] += 1

with open('/pythonwork/thefile_subset11.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    for row in data:
        if counter[row[10]] >= 504:
            writer.writerow(row)
This code reads thefile.csv, makes changes, and writes results to thefile_subset11.csv.
However, when I open the resulting csv in Microsoft Excel, there is an extra blank line after each record!
Is there a way to make it not put an extra blank line?
The csv module's writer controls line endings itself and writes \r\n into the file directly. In Python 3, the file must be opened in untranslated text mode with the parameters 'w', newline='' (an empty string), or it will write \r\r\n on Windows, where the default text mode translates each \n into \r\n.
#!python3
with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
In Python 2, open outfile in binary mode with 'wb' instead of 'w' to prevent Windows newline translation. Python 2 also has problems with Unicode and requires other workarounds to write non-ASCII text; see the Python 2 link below and the UnicodeReader and UnicodeWriter examples at the end of that page if you have to deal with writing Unicode strings to CSVs on Python 2, or look into the third-party unicodecsv module:
#!python2
with open('/pythonwork/thefile_subset11.csv', 'wb') as outfile:
    writer = csv.writer(outfile)
Documentation Links
https://docs.python.org/3/library/csv.html#csv.writer
https://docs.python.org/2/library/csv.html#csv.writer
Opening the file in binary mode "wb" will not work in Python 3+. Or rather, you'd have to convert your data to binary before writing it. That's just a hassle.
Instead, you should keep it in text mode, but override the newline as empty. Like so:
with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile:
Note: It seems this is not the preferred solution because of how the extra line was being added on a Windows system. As stated in the Python documentation:
If csvfile is a file object, it must be opened with the ‘b’ flag on platforms where that makes a difference.
Windows is one such platform where that makes a difference. While changing the line terminator, as I described below, may have fixed the problem, the problem could be avoided altogether by opening the file in binary mode. One might say this solution is more "elegant": fiddling with the line terminator would likely have produced code that was not portable between systems, whereas opening a file in binary mode has no effect on a Unix system, i.e. it results in cross-system-compatible code.
From Python Docs:
On Windows, 'b' appended to the mode opens the file in binary mode, so there are also modes like 'rb', 'wb', and 'r+b'. Python on Windows makes a distinction between text and binary files; the end-of-line characters in text files are automatically altered slightly when data is read or written. This behind-the-scenes modification to file data is fine for ASCII text files, but it'll corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files. On Unix, it doesn't hurt to append a 'b' to the mode, so you can use it platform-independently for all binary files.
Original:
As part of the optional parameters for csv.writer, if you are getting extra blank lines you may have to change the lineterminator (info here). The example below is adapted from the csv docs on the Python site. Change it from '\n' to whatever it should be. As this is just a stab in the dark at the problem, it may or may not work, but it's my best guess.
>>> import csv
>>> spamWriter = csv.writer(open('eggs.csv', 'w'), lineterminator='\n')
>>> spamWriter.writerow(['Spam'] * 5 + ['Baked Beans'])
>>> spamWriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
The simple answer is that, on Python 2, csv files should always be opened in binary mode, whether for input or output, as otherwise on Windows there are problems with the line ending. Specifically, on output the csv module will write \r\n (the standard CSV row terminator), and then (in text mode) the runtime will replace the \n with \r\n (the Windows standard line terminator), giving a result of \r\r\n.
Fiddling with the lineterminator is NOT the solution.
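If you want to see where the \r\r\n comes from, here is a minimal sketch that reproduces the doubling on any platform, by forcing the same translation Windows text mode performs:

import csv

# newline='\r\n' tells the runtime to translate every '\n' the csv
# writer emits into '\r\n', mimicking Windows text-mode behaviour.
with open('demo.csv', 'w', newline='\r\n') as f:
    csv.writer(f).writerow(['a', 'b'])

print(open('demo.csv', 'rb').read())  # b'a,b\r\r\n' -- the doubled \r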
A lot of the other answers have become out of date in the ten years since the original question. For Python3, the answer is right in the documentation:
If csvfile is a file object, it should be opened with newline=''
The footnote explains in more detail:
If newline='' is not specified, newlines embedded inside quoted fields will not be interpreted correctly, and on platforms that use \r\n line endings on write an extra \r will be added. It should always be safe to specify newline='', since the csv module does its own (universal) newline handling.
Use the function defined below to write data to the CSV file. Just add an additional newline='' parameter inside the open() call:
open('outputFile.csv', 'a', newline='')
For example:
import csv

def writePhoneSpecsToCSV():
    rowData = ["field1", "field2"]
    with open('outputFile.csv', 'a', newline='') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(rowData)
This will write CSV rows without creating additional rows!
I'm writing this answer with respect to Python 3, as I initially had the same problem.
I was supposed to get data from an Arduino using PySerial and write it to a .csv file. Each reading in my case ended with '\r\n', so a newline was always separating each line.
In my case, the newline='' option didn't work, because it showed an error like:
with open('op.csv', 'a', newline=' ') as csv_file:
ValueError: illegal newline value: ' '
(note that the value passed here was a space, not an empty string). So it seemed that this form of the newline option wasn't accepted here.
Following one of the answers here, I specified the line terminator in the writer object, like:
writer = csv.writer(csv_file, delimiter=' ',lineterminator='\r')
and that worked for me for skipping the extra newlines.
with open(destPath + '\\' + csvXML, 'a+') as csvFile:
    writer = csv.writer(csvFile, delimiter=';', lineterminator='\r')
    writer.writerows(xmlList)
The "lineterminator='\r'" permit to pass to next row, without empty row between two.
Borrowing from this answer, it seems like the cleanest solution is to use io.TextIOWrapper. I managed to solve this problem for myself as follows:
from io import TextIOWrapper
...
with open(filename, 'wb') as csvfile, TextIOWrapper(csvfile, encoding='utf-8', newline='') as wrapper:
    csvwriter = csv.writer(wrapper)
    for data_row in data:
        csvwriter.writerow(data_row)
The above answer is not compatible with Python 2. To have compatibility, I suppose one would simply need to wrap all the writing logic in an if block:
if sys.version_info < (3,):
    # Python 2 way of handling CSVs
else:
    # The above logic
I used writerow:
import csv
from itertools import permutations

def write_csv(writer, var1, var2, var3, var4):
    """
    write four variables into a csv file
    """
    writer.writerow([var1, var2, var3, var4])

numbers = set([1, 2, 3, 4, 5, 6, 7, 2, 4, 6, 8, 10, 12, 14, 16])
rules = list(permutations(numbers, 4))
#print(rules)
selection = []
with open("count.csv", 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for rule in rules:
        number1, number2, number3, number4 = rule
        if (number1 + number2 + number3 + number4) % 5 == 0:
            #print(rule)
            selection.append(rule)
            write_csv(writer, number1, number2, number3, number4)
When using Python 3, the empty lines can be avoided by using the codecs module. As stated in its documentation, files are opened in binary mode, so no change of the newline kwarg is necessary. I ran into the same issue recently, and this worked for me:
import codecs
import csv

# fieldnames must list the CSV column names; DictWriter requires it.
with codecs.open(csv_file, mode='w', encoding='utf-8') as out_csv:
    csv_out_file = csv.DictWriter(out_csv, fieldnames=fieldnames)

Expecting value: line 1 column 1 (char 0) problem

I want to open a JSON file for object detection (YOLOv5 or YOLOv7).
I have 510,000 images and JSON labeling data.
[screenshot of the error]
I haven't been able to solve this problem (I tried googling, but no luck yet), and the Jupyter notebook doesn't print this problem.
I think the major cause is either a problem with the JSON file, or:
[screenshot]
a Colab memory issue.
So how do I solve this problem?
Please help me.
You are possibly using the open function incorrectly. Now don't quote me on this, but if you are trying to read data from a JSON file, you need to use the 'r' mode, like so:
with open(filename, 'r') as f:
    data = json.load(f)
'r' stands for read.
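For what it's worth, the most common trigger for "Expecting value: line 1 column 1 (char 0)" is that json.load received empty or non-JSON content (an empty file, an HTML error page saved as .json, and so on). A minimal defensive sketch, with a hypothetical file name:

import json

filename = 'labels.json'  # hypothetical path to the labeling data

with open(filename, 'r', encoding='utf-8') as f:
    content = f.read()

# An empty or whitespace-only file is exactly what raises
# 'Expecting value: line 1 column 1 (char 0)'.
if not content.strip():
    raise ValueError(filename + " is empty")

data = json.loads(content)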

How can I make nltk.NaiveBayesClassifier.train() work with my dictionary

I'm currently making a simple spam/ham email filter using Naive Bayes.
For you to understand my algorithm's logic: I have a folder with lots of files, which are examples of spam/ham emails. I also have two other files in this folder, one containing the titles of all my ham examples and the other containing the titles of all my spam examples. I organized it like this so I can open and read these emails properly.
I'm putting all the words I judge to be important in a dictionary structure, with a label "spam" or "ham" depending on which kind of file I extracted them from.
Then I'm using nltk.NaiveBayesClassifier.train() to train my classifier, but I'm getting the error:
for featureset, label in labeled_featuresets:
ValueError: too many values to unpack
I don't know why this is happening. When I looked for a solution, I found that lists are not hashable, and I was using a list to do it, so I turned it into a dictionary, which is hashable as far as I know, but it keeps raising this error.
Does someone know how to solve it? Thanks!
All my code is listed below:
import nltk
import re
import random

stopwords = nltk.corpus.stopwords.words('english') #Words I should avoid since they have weak value for classification
my_file = open("spam_files.txt", "r") #my_file now has the name of each file that contains a spam email example
word = {} #a dictionary where I will store all the words and which value they have (spam or ham)

for lines in my_file: #for each name of file (which will be represented by LINES) of my_file
    with open(lines.rsplit('\n')[0]) as email: #I will open the file pointed to by LINES, and then read the email example that is inside this file
        for phrase in email: #After that, I will take every phrase of this email example I just opened
            try: #and I'll try to tokenize it
                tokens = nltk.word_tokenize(phrase)
            except:
                continue #I will ignore non-ascii elements
            for c in tokens: #for each token
                regex = re.compile('[^a-zA-Z]') #I will also exclude numbers
                c = regex.sub('', c)
                if (c): #If there is any element left
                    if (c not in stopwords): #And if this element is not a stopword
                        c.lower()
                        word.update({c: 'spam'}) #I put this element in my dictionary. Since I'm analysing spam examples, variable C is labeled "spam".

my_file.close()
email.close()

#The same logic is used for the ham emails. Since my ham emails contain only ascii elements, I don't test it with TRY
my_file = open("ham_files.txt", "r")
for lines in my_file:
    with open(lines.rsplit('\n')[0]) as email:
        for phrase in email:
            tokens = nltk.word_tokenize(phrase)
            for c in tokens:
                regex = re.compile('[^a-zA-Z]')
                c = regex.sub('', c)
                if (c):
                    if (c not in stopwords):
                        c.lower()
                        word.update({c: 'ham'})
my_file.close()
email.close()

#And here I train my classifier
classifier = nltk.NaiveBayesClassifier.train(word)
classifier.show_most_informative_features(5)
nltk.NaiveBayesClassifier.train() expects “a list of tuples (featureset, label)” (see the documentation of the train() method)
What is not mentioned there is that featureset should be a dict of feature names mapped to feature values.
So, in a typical spam/ham classification with a bag-of-words model, the labels are 'spam'/'ham' or 1/0 or True/False;
the feature names are the occurring words and the values are the number of times each word occurs.
For example, the argument to the train() method might look like this:
[({'greetings': 1, 'loan': 2, 'offer': 1}, 'spam'),
({'money': 3}, 'spam'),
...
({'dear': 1, 'meeting': 2}, 'ham'),
...
]
If your dataset is rather small, you might want to replace the actual word counts with 1, to reduce data sparsity.
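To make the expected structure concrete, here is a minimal sketch of how the word-collection step could be restructured; the toy spam_texts and ham_texts lists are stand-ins for the emails your code reads from disk, and the punkt tokenizer data is assumed to be installed:

import nltk
from collections import Counter

# Toy stand-ins for the emails read from disk in the original code.
spam_texts = ["Greetings, take this loan offer, free money"]
ham_texts = ["Dear team, the meeting is moved to Monday"]

def features(text):
    # Bag-of-words featureset: word -> number of occurrences.
    tokens = nltk.word_tokenize(text)
    return dict(Counter(t.lower() for t in tokens if t.isalpha()))

# One (featureset, label) tuple per email, collected in a list --
# not one flat {word: label} dict for the whole corpus.
labeled_featuresets = [(features(t), 'spam') for t in spam_texts]
labeled_featuresets += [(features(t), 'ham') for t in ham_texts]

classifier = nltk.NaiveBayesClassifier.train(labeled_featuresets)
classifier.show_most_informative_features(5)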

Parsing csv file with vim

I have a large CSV file structured as follows:
CHINESE TRANSLATION
我去上学。 Wǒ qù shàngxué. I am going to school. 上 ♦ on, on top of ♦ go to
我去过北京。 Wǒ qùguò Běijīng. I've been to Beijing. 京 -- ♦ national capital ♦ Beijing
....
The TRANSLATION column blends together three different pieces of information: the pinyin, the English translation and additional information. These three types of information are always present, always presented in the same way, and separated by a dot.
What I want to achieve is to create three different columns from the TRANSLATION column, i.e. to get:
CHINESE PINYIN TRANSLATION ADDITIONAL
我去上学。 Wǒ qù shàngxué. I am going to school. 上 ♦ on, on top of ♦ go to
....
Using a vim macro, how can I do this?
I think vim macros can handle this job, but executing a vim macro several thousand times on a big file is very slow. So if you just want your job done, I have written a short Python script, and I think it will give you what you want.
import csv

# change 'in.csv' and 'out.csv'
# to your exact file names.
with open('in.csv', 'r', newline='') as infile, \
     open('out.csv', 'w', newline='') as outfile:
    csvreader = csv.reader(infile)
    csvwriter = csv.writer(outfile)
    next(csvreader)  # skip the CHINESE/TRANSLATION header row
    csvwriter.writerow(['CHINESE', 'PINYIN', 'TRANSLATION', 'ADDITIONAL'])
    for chinese, translation in csvreader:
        # Split TRANSLATION at its first two dots into pinyin,
        # English translation, and additional information, then
        # restore the trailing dot on the first two parts.
        pinyin, english, additional = translation.split('.', 2)
        csvwriter.writerow([chinese, pinyin.strip() + '.', english.strip() + '.', additional.strip()])