CSV read into MySQLdb failing - mysql

I am having a problem with reading my csv file into the MySQL database. I have tried a number of solutions, but the errors just keep changing and the code isn't working. This same code had worked with another csv file, so I'm thinking I might be doing something wrong with this one?
Here is my code
from database_access import *
from builtins import bytes, int, str
import codecs
import csv
import requests
from urllib.parse import urlparse, urljoin
from bs4 import BeautifulSoup
import re
import cgi
import MySQLdb
import chardet
# from database_access import *
import MySQLdb
import simplejson

if __name__ == '__main__':
    with open("SIMRA.csv", 'r') as file:
        reader = csv.reader(file)
        #reader = csv.reader(text)
        next(reader, None)
        print("project running")
        #print (row[7])
        #rowlist = []
        all_links = []
        all_project_ids = []
        for row in reader:
            if row[7] != "" and row[16] != "":
                country = row[2]
                city = row[8]
                description = row[11] + '' + row[12]
                title = row[7].replace("'", "''")
                link = row[16]
                #date_start = row[9]
                #print a check here
                print(title, description, country, city, link)
                db = MySQLdb.connect(host, username, password, database, charset='utf8')
                cursor = db.cursor()
                new_project = True
                proj_check = "SELECT * from Projects where ProjectName like '%" + title + "%'"
                #proj_check = "SELECT * from Projects where ProjectName like %s",(title,)
                #cur.execute("SELECT * FROM records WHERE email LIKE %s", (search,))
                cursor.execute(proj_check)
                num_rows = cursor.rowcount
                if num_rows != 0:
                    new_project = False
                url_compare = "SELECT * from Projects where ProjectWebpage like '" + link + "'"
                #url_compare = "SELECT * from Projects where ProjectWebpage like %s",(link,)
                cursor.execute(url_compare)
                num_rows = cursor.rowcount
                if num_rows != 0:
                    new_project = False
                if new_project:
                    project_insert = "Insert into Projects (ProjectName,ProjectWebpage,FirstDataSource,DataSources_idDataSources) VALUES (%s,%s,%s,%s)"
                    cursor.execute(project_insert, (title, link, 'SIMRA', 5))
                    projectid = cursor.lastrowid
                    print(projectid)
                    #ashoka_projectids.append(projectid)
                    db.commit()
                    ins_desc = "Insert into AdditionalProjectData (FieldName,Value,Projects_idProjects,DateObtained) VALUES (%s,%s,%s,NOW())"
                    cursor.executemany(ins_desc, ("Description", description, str(projectid)))
                    db.commit()
                    ins_location = "Insert into ProjectLocation (Type,Country,City,Projects_idProjects) VALUES (%s,%s,%s,%s)"
                    cursor.execute(ins_location, ("Main", country, city, str(projectid)))
                    db.commit()
                else:
                    print('Project already exists!')
                    print(title)
                all_links.append(link)
    #print out SIMRA's links to a file for crawling later
    with open('simra_links', 'w', newline='') as f:
        write = csv.writer(f)
        for row in all_links:
            columns = [c.strip() for c in row.strip(', ').split(',')]
            write.writerow(columns)
When I ran this, I got the following error:
File "/usr/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 898: invalid start byte
I did some research and tried handling the encoding error by specifying different encodings, as seen here - UnicodeDecodeError: ‘utf8’ codec can’t decode byte 0xa5 in position 0: invalid start byte, and Python MySQLdb TypeError: not all arguments converted during string formatting. I added this to the csv open parameters -
with open("SIMRA.csv", 'r', encoding="cp437", errors='ignore') as file:
Running the code with these different encoding options came up with a different error:
MySQLdb._exceptions.ProgrammingError: not all arguments converted during bytes formatting
Further research suggested using tuples or lists to address this problem, so I added them to the 'select' queries in the code, as suggested here - Python MySQLdb TypeError: not all arguments converted during string formatting and in the Python SQL documentation here - PythonMySqldb
So the select query became:
proj_check = "SELECT * from Projects where ProjectName like %s",(title,)
cursor.execute(proj_check)
num_rows = cursor.rowcount
if num_rows != 0:
new_project = False
url_compare = "SELECT * from Projects where ProjectWebpage like %s",(link,)
cursor.execute(url_compare)
num_rows = cursor.rowcount
if num_rows != 0:
new_project = False
When I ran the code, I got this AssertionError and I have no idea what to do anymore.
File "/home/ros/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 205, in execute
assert isinstance(query, (bytes, bytearray))
AssertionError
I have run out of ideas. It might be that I'm missing something small, but I can't figure it out as I've been battling with this for two days now.
Can anyone help point out what I'm missing? It will be greatly appreciated. This code ran perfectly with another csv file. I am running this with Python 3.8 btw.

I have solved this now. I had to use a different encoding with the original code and this solved the problem. So, I changed the csv open parameter to:
with open("SIMRA.csv",'r', encoding="ISO-8859-1") as file:
reader = csv.reader(file)
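As a side note on the AssertionError from the second attempt: the SQL string and its parameters were assigned together as one tuple to proj_check, so cursor.execute() received a tuple where it expected the query itself, and MySQLdb's assert isinstance(query, (bytes, bytearray)) failed. The query and the parameter tuple go in as two separate arguments. A minimal sketch of that form, reusing the table and column names from the question:
# Sketch only: the driver quotes the value; the LIKE wildcards go into the parameter.
proj_check = "SELECT * FROM Projects WHERE ProjectName LIKE %s"
cursor.execute(proj_check, ('%' + title + '%',))
if cursor.rowcount != 0:
    new_project = False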

Were you expecting a £? Byte 0xA3 is the pound sign in latin1, so you need to specify what the encoding of the file is; it may be "latin1". See the syntax of LOAD DATA for how to specify CHARACTER SET latin1.
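If you go the LOAD DATA route instead of parsing the CSV in Python, a rough sketch would look like the following (the table name, the missing column mapping, and the local_infile setting are assumptions - adjust them to your schema and server configuration):
import MySQLdb

# Sketch only: let MySQL read and decode the file itself.
# local_infile must be enabled on both client and server.
db = MySQLdb.connect(host, username, password, database, local_infile=1)
cursor = db.cursor()
cursor.execute("""
    LOAD DATA LOCAL INFILE 'SIMRA.csv'
    INTO TABLE Projects
    CHARACTER SET latin1
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")  # a column list would normally follow to map CSV fields to table columns
db.commit()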

Related

Instaloader JSON files: Convert 200 JSON files into a Single CSV (Python 3.7)

I want to automatically download pictures (or videos) along with their captions and other data from a specific Instagram hashtag (e.g. #moodoftheday) using Instaloader. Instaloader returns JSON files containing post metadata.
The following code worked with just a single #user_profile's metadata.
I want to do the same, but for a #hashtag, not a specific #user.
The ultimate goal is to have all of the JSON files (e.g. 200) combined into a single CSV file.
How can I process my downloaded data into a clean Excel/CSV file?
Here is my code:
# Install Instaloader
import instaloader

def get_instagram_posts(username, startdate, enddate):
    # Create an instaloader object with parameters
    L = instaloader.Instaloader(download_pictures=False, download_videos=False, download_comments=False, compress_json=False)
    # Log in with the instaloader object
    L.login("username", "password")
    # Search the instagram profile
    profile = instaloader.Profile.from_username(L.context, username)
    # Scrape the posts
    posts = profile.get_posts()
    for post in takewhile(lambda p: p.date > startdate, dropwhile(lambda p: p.date > enddate, posts)):
        print(post.date)
        L.download_post(post, target=profile.username)

'''
This function will now save all instagram posts and related data to a folder in your current working directory.
Let's call this function on the instagram account of "moodoftheday" and let the script do its magic.
This might take a while so be patient.
'''
import os
import datetime

# instagram username
username = "realdonaldtrump"
# daterange of scraping
startdate = datetime(2020, 9, 1)
enddate = datetime(2020, 10, 1)
# get your current working directory
current_wkdir = os.get_cwd()
# Call the function. This will automatically store all the scrape data in a folder in your current working directory
get_instagram_posts(username, startdate, enddate)
def parse_instafiles(username, path):
    """
    This function loads in all the json files generated by the instaloader package and parses it into a csv file.
    """
    #print('Entering provided directory...')
    os.chdir(os.path.join(path, username))
    columns = ['filename', 'datetime', 'type', 'locations_id', 'locations_name', 'mentions', 'hashtags', 'video_duration']
    dataframe = pd.DataFrame(columns=[])
    #print('Traversing file tree...')
    glob('*UTC.json')
    for file in glob('*UTC.json'):
        with open(file, 'r') as filecontent:
            filename = filecontent.name
            #print('Found JSON file: ' + filename + '. Loading...')
            try:
                metadata = orjson.loads(filecontent.read())
            except IOError as e:
                #print("I/O Error. Couldn't load file. Trying the next one...")
                continue
            else:
                pass
            #print('Collecting relevant metadata...')
            time = datetime.fromtimestamp(int(metadata['node']['taken_at_timestamp']))
            type_ = metadata['node']['__typename']
            likes = metadata['node']['edge_media_preview_like']['count']
            comments = metadata['node']['edge_media_to_comment']['count']
            username = metadata['node']['owner']['username']
            followers = metadata['node']['owner']['edge_followed_by']['count']
            try:
                text = metadata['node']['edge_media_to_caption']['edges'][0]['node']['text']
            except:
                text = ""
            try:
                post_id = metadata['node']['id']
            except:
                post_id = ""
            minedata = {'filename': filename, 'time': time, 'text': text,
                        'likes': likes, 'comments': comments, 'username': username, 'followers': followers, 'post_id': post_id}
            #print('Writing to dataframe...')
            dataframe = dataframe.append(minedata, ignore_index=True)
            #print('Closing file...')
            del metadata
            filecontent.close()
    #print('Storing dataframe to CSV file...')
    #print('Done.')
    dataframe['source'] = 'Instagram'
    return dataframe

'''
You can then use this function to process the "moodoftheday" Instagram data.
'''
df_instagram = parse_instafiles(username, os.getcwd())
df_instagram.to_excel("moodoftheday.csv")
I am very new to Python and programming overall, therefore any help is very much appreciated!!
Thank you in advance! Sofia
I made some changes; it no longer shows errors, but it still needs some professional work:
import instaloader
from datetime import datetime
import datetime
from itertools import takewhile
from itertools import dropwhile
import os
import glob as glob
import json
import pandas as pd
import csv

lusername = ''
lpassword = ''

def get_instagram_posts(username, startdate, enddate):
    # Create an instaloader object with parameters
    L = instaloader.Instaloader(download_pictures=False, download_videos=False, download_comments=False, compress_json=False)
    # Log in with the instaloader object
    L.login("lusername", "lpassword")
    # Search the instagram profile
    profile = instaloader.Profile.from_username(L.context, username)
    # Scrape the posts
    posts = profile.get_posts()
    for post in takewhile(lambda p: p.date > startdate, dropwhile(lambda p: p.date > enddate, posts)):
        print(post.date)
        L.download_post(post, target=profile.username)

# instagram username
username = "realdonaldtrump"
# daterange of scraping
startdate = datetime.datetime(2020, 9, 1, 0, 0)
enddate = datetime.datetime(2022, 2, 1, 0, 0)
# get your current working directory
current_wkdir = os.getcwd()
# Call the function. This will automatically store all the scrape data in a folder in your current working directory
get_instagram_posts(username, startdate, enddate)

def parse_instafiles(username, path):
    #print('Entering provided directory...')
    os.chdir(os.path.join(path, username))
    columns = ['filename', 'datetime', 'type', 'locations_id', 'locations_name', 'mentions', 'hashtags', 'video_duration']
    dataframe = pd.DataFrame(columns=[])
    #print('Traversing file tree...')
    # glob('*UTC.json')
    for file in glob.glob('*UTC.json'):
        with open(file, 'r') as filecontent:
            filename = filecontent.name
            #print('Found JSON file: ' + filename + '. Loading...')
            try:
                metadata = json.load(filecontent)
            except IOError as e:
                #print("I/O Error. Couldn't load file. Trying the next one...")
                continue
            else:
                pass
            #print('Collecting relevant metadata...')
            time = datetime.datetime.fromtimestamp(int(metadata['node']['taken_at_timestamp']))
            type_ = metadata['node']['__typename']
            likes = metadata['node']['edge_media_preview_like']['count']
            comments = metadata['node']['edge_media_to_comment']['count']
            username = metadata['node']['owner']['username']
            followers = metadata['node']['owner']['edge_followed_by']['count']
            try:
                text = metadata['node']['edge_media_to_caption']['edges'][0]['node']['text']
            except:
                text = ""
            try:
                post_id = metadata['node']['id']
            except:
                post_id = ""
            minedata = {'filename': filename, 'time': time, 'text': text,
                        'likes': likes, 'comments': comments, 'username': username, 'followers': followers, 'post_id': post_id}
            #print('Writing to dataframe...')
            dataframe = dataframe.append(minedata, ignore_index=True)
            #print('Closing file...')
            del metadata
            filecontent.close()
    #print('Storing dataframe to CSV file...')
    #print('Done.')
    dataframe['source'] = 'Instagram'
    return dataframe

'''
You can then use this function to process the "moodoftheday" Instagram data.
'''
df_instagram = parse_instafiles(username, os.getcwd())
df_instagram.to_csv("moodoftheday.csv")
Instaloader has an example of hashtag search in its documentation; here's the code:
from datetime import datetime
import instaloader

L = instaloader.Instaloader()
posts = instaloader.Hashtag.from_name(L.context, "urbanphotography").get_posts()

SINCE = datetime(2020, 5, 10)  # further from today, inclusive
UNTIL = datetime(2020, 5, 11)  # closer to today, not inclusive

k = 0  # initiate k
#k_list = []  # uncomment this to tune k

for post in posts:
    postdate = post.date
    if postdate > UNTIL:
        continue
    elif postdate <= SINCE:
        k += 1
        if k == 50:
            break
        else:
            continue
    else:
        L.download_post(post, "#urbanphotography")
        # if you want to tune k, uncomment below to get your k max
        #k_list.append(k)
        k = 0  # set k to 0
#max(k_list)
Here's the link for more info:
https://instaloader.github.io/codesnippets.html
I'm trying to do something similar, but I'm still very new to programming, so I'm sorry if I can't offer much help.

Incorrect number of parameters in prepared statement

I'm having a heck of a time getting the mysql.connector module to work. I'd really like to find some accurate documentation on it. By hit and miss, I have arrived here.
Traceback (most recent call last):
File "update_civicrm_address.py", line 80, in <module>
cursor.execute(mysql_select_query, address_id)
File "/home/ubuntu/.local/lib/python3.6/site-packages/mysql/connector/cursor.py", line 1210, in execute
msg="Incorrect number of arguments " \
mysql.connector.errors.ProgrammingError: 1210: Incorrect number of arguments executing prepared statement
Here is the program (it's a bit messy because I have tried so many things to get it to work). Aside from the fact that the update is not working at all, what is causing the error? There is only one parameter and it is accounted for.
import sys
import mysql.connector
import csv
import os
from mysql.connector import Error
from mysql.connector import errorcode

#Specify the import file
try:
    inputCSV = 'geocoded_rhode_island_export.csv'
    #Open the file and give it a handle
    csvFile = open(inputCSV, 'r')
    #Create a reader object for the input file
    reader = csv.reader(csvFile, delimiter=',')
except IOError as e:
    print("The input file ", inputCSV, " was not found", e)
    exit()

try:
    conn = mysql.connector.connect(host='localhost',
                                   database='wordpress',
                                   user='wp_user',
                                   password='secret!',
                                   use_pure=True)
    cursor = conn.cursor(prepared=True)
except mysql.connector.Error as error:
    print("Failed to connect to database: {}".format(error))
    exit()

try:
    record_count = 0
    for row in reader:
        contact_id, address_id, last_name, first_name, middle_name, longitude, latitude = row
        print(row)
        #Update single record now
        print(address_id)
        cursor.execute(
            """
            update civicrm_address
            set
                geo_code_1 = %s,
                geo_code_2 = %s
            where
                id = %s
            and
                location_type_id = %s
            """,
            (longitude, latitude, address_id, 6)
        )
        conn.commit
        print(cursor.rowcount)
        print("Record updated successfully")
        mysql_select_query = """
            select
                id,
                geo_code_1,
                geo_code_2
            from
                civicrm_address
            where
                id = %s
            """
        input = (address_id)
        cursor.execute(mysql_select_query, address_id)
        record = cursor.fetchone()
        print(record)
        record_count = record_count + 1
finally:
    print(record_count, " records updated")
    #closing database connection.
    if(conn.is_connected()):
        conn.close()
        print("connection is closed")
There is an error in the code:
conn.commit
should be
conn.commit()
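That fixes the commit, but the traceback itself comes from the select: mysql.connector expects the query parameters as a sequence (tuple or list), and (address_id) without a trailing comma is just the string itself, so the prepared statement most likely sees the wrong number of arguments. A minimal sketch of the corrected call, dropping in for the two lines near the end of the loop:
# Sketch only: the trailing comma makes this a one-element tuple.
cursor.execute(mysql_select_query, (address_id,))
record = cursor.fetchone()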

How to retrieve image from database using python

I want to retrieve an image from the database using Python, but I have a problem when I execute this code:
import mysql.connector
import io
from PIL import Image

connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="pfe_altran",
    database="testes",
)
cursor = connection.cursor()
sql1 = "SELECT * FROM pfe WHERE id = 1 "
cursor.execute(sql1)
data2 = cursor.fetchall()
file_like2 = io.BytesIO(data2[0][0])
img1 = Image.open(file_like2)
img1.show()
cursor.close()
connection.close()
and I have this error:
file_like2 = io.BytesIO(data2[0][0])
TypeError: a bytes-like object is required, not 'int'
cursor.fetchall() returns a list of rows to your variable, so you need to handle that with a loop:
for row in data2:
    file_like2 = io.BytesIO(row[0])  # assuming row[0] contains the byte form of your image
You can use cursor.fetchmany(size=1) or cursor.fetchone() if you know that your query will return only a single row, or that you only need one row; that way you can manipulate it directly and skip the loop.
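Since the TypeError complains about an int, the [0][0] index is most likely landing on the id column of SELECT * rather than on the BLOB itself. A minimal sketch that selects the image column by name (the column name image is an assumption - use whatever your pfe table actually calls it) and fetches a single row:
import io

import mysql.connector
from PIL import Image

connection = mysql.connector.connect(
    host="localhost", user="root", passwd="pfe_altran", database="testes"
)
cursor = connection.cursor()

# Select only the BLOB column; "image" is an assumed column name.
cursor.execute("SELECT image FROM pfe WHERE id = 1")
row = cursor.fetchone()

if row is not None and row[0] is not None:
    img = Image.open(io.BytesIO(row[0]))  # row[0] is now the raw image bytes
    img.show()

cursor.close()
connection.close()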

Python Replace throwing errors when replacing "</html>"

I am very new to Python and I'm trying to understand and use the script from this link in Anaconda running on Python 3.5.2. I have had to change some things so that the script can run in this version of Python since it is from 2013. The script (as amended by inexperienced me) is as follows and my problem is in the try block in the line html = f.read().replace("</html>", "") + "</html>".
I simply cannot understand the reason for the + "</html>" that comes after the closing parenthesis. From what I have found out about the replace() method, it takes at least two parameters: the old character/s and the new ones. As it is, this script jumps to the except Exception as e: and prints out a bytes-like object is required, not 'str'.
Now this is, as far as I can tell, because the reading is being done as bytes whereas the replace method takes strings. I tried to divide the line into:
html = f.read
html = str.replace("</html>", "") + "</html>"
but this throws replace() takes at least 2 arguments (1 given). I also tried changing the contents of html from bytes to str as follows
html = str(f.read(), 'utf-8')
html = str.replace("</html>", "")
but this also returns the error that replace() takes two arguments (1 given). When I removed the html = str.replace("</html>", "") + "</html>" altogether and so skipped to the soup = BeautifulSoup(html), I ended up with a warning that no parser was explicitly specified and later on an AttributeError that NoneType object has no attribute get_dictionary.
Any help about the need for the mentioned line and why it is used and how to use it would be greatly appreciated. Thank you.
#!/usr/bin/python
import sys
import urllib.request
import re
import json
from bs4 import BeautifulSoup
import socket

socket.setdefaulttimeout(10)
cache = {}
for line in open(sys.argv[1]):
    fields = line.rstrip('\n').split('\t')
    sid = fields[0]
    uid = fields[1]
    # url = 'http://twitter.com/%s/status/%s' % (uid, sid)
    # print url
    tweet = None
    text = "Not Available"
    if sid in cache:
        text = cache[sid]
    else:
        try:
            f = urllib.request.urlopen("http://twitter.com/%s/status/%s" % (uid, sid))
            print('URL: ', f.geturl())
            # Thanks to Arturo!
            # html = f.read()
            html = f.read().replace("</html>", "") + "</html>"
            soup = BeautifulSoup(html)
            jstt = soup.find_all("p", "js-tweet-text")
            tweets = list(set([x.get_text() for x in jstt]))
            # print len(tweets)
            # print tweets
            if (len(tweets)) > 1:
                continue
            text = tweets[0]
            cache[sid] = tweets[0]
            for j in soup.find_all("input", "json-data", id="init-data"):
                js = json.loads(j['value'])
                if js.has_key("embedData"):
                    tweet = js["embedData"]["status"]
                    text = js["embedData"]["status"]["text"]
                    cache[sid] = text
                    break
        except Exception as e:
            print(e)
            # except Exception as e:
            continue
    if tweet is not None and tweet["id_str"] != sid:
        text = "Not Available"
        cache[sid] = "Not Available"
    text = text.replace('\n', ' ', )
    text = re.sub(r'\s+', ' ', text)
    # print json.dumps(tweet, indent=2)
    print("\t".join(fields + [text]).encode('utf-8'))
str.replace here is being used in its unbound form (calling the method from the type str instead of from an str object).
Used this way, str.replace actually needs 3 arguments: the string to act on, the char or string to replace, and the new char or string.
'abcd'.replace('d', 'z') is equivalent to str.replace('abcd', 'd', 'z'):
print('abcd'.replace('d', 'z'))
# abcz
print(str.replace('abcd', 'd', 'z'))
# abcz
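Put differently, the usual pattern for the original line is to decode the response bytes first and then call replace on the resulting str instance. A minimal sketch, assuming the page is UTF-8 (the URL is just a placeholder):
import urllib.request

# Sketch only: decode the bytes, then call the bound instance method on the str.
with urllib.request.urlopen("https://example.com") as f:
    html = f.read().decode("utf-8", errors="ignore")

html = html.replace("</html>", "") + "</html>"  # strip any </html> tags, then append exactly one
print(html[-20:])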
I have accepted the solution kindly given by #DeepSpace as an answer, as it helped me realise how to overcome the problem I was facing. The code below can now execute under Python 3 if run from the command prompt as follows (please note that I executed this from the Windows command prompt):
python download_tweets.py input_file.tsv > output_file.tsv
The code follows:
#!/usr/bin/python
import sys
import urllib.request
import re
import json
from bs4 import BeautifulSoup
import socket

socket.setdefaulttimeout(10)
cache = {}
for line in open(sys.argv[1]):
    fields = line.rstrip('\n').split('\t')
    sid = fields[0]
    uid = fields[1]
    tweet = None
    text = "Not Available"
    if sid in cache:
        text = cache[sid]
    else:
        try:
            f = urllib.request.urlopen("http://twitter.com/%s/status/%s" % (uid, sid))
            # print('URL: ', f.geturl())
            # Thanks to Arturo!
            html = str.replace(str(f.read(), 'utf-8'), "</html>", "")
            # html = f.read().replace("</html>", "") + "</html>"  # original line
            soup = BeautifulSoup(html, "lxml")  # added "lxml" as it was giving warnings
            jstt = soup.find_all("p", "js-tweet-text")
            tweets = list(set([x.get_text() for x in jstt]))
            # print(len(tweets))
            if (len(tweets)) > 1:
                continue
            text = tweets[0]
            cache[sid] = tweets[0]
            for j in soup.find_all("input", "json-data", id="init-data"):
                js = json.loads(j['value'])
                if "embedData" in js:
                    # if js.has_key("embedData"):  # original line
                    tweet = js["embedData"]["status"]
                    text = js["embedData"]["status"]["text"]
                    cache[sid] = text
                    break
        except Exception as e:
            print(e)
            continue
    if tweet is not None and tweet["id_str"] != sid:
        text = "Not Available"
        cache[sid] = "Not Available"
    text = text.replace('\n', ' ', )
    text = re.sub(r'\s+', ' ', text)
    # print(json.dumps("dump: ", tweet, indent=2))
    print(" \t ".join(fields + [text]).encode('utf-8'))

UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 7240: character maps to <undefined>

I am a student doing my master's thesis. As part of my thesis, I am working with Python. I am reading a log file in .csv format and writing the extracted data to another .csv file in a well-formatted way. However, when the file is read, I am getting this error:
Traceback (most recent call last):
  File "C:\Users\SGADI\workspace\DAB_Trace\my_code\trace_parcer.py", line 19, in <module>
    for row in reader:
  File "C:\Users\SGADI\Desktop\Python-32bit-3.4.3.2\python-3.4.3\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 7240: character maps to <undefined>
import csv
import re
#import matplotlib
#import matplotlib.pyplot as plt
import datetime
#import pandas
#from dateutil.parser import parse

#def parse_csv_file():
timestamp = datetime.datetime.strptime('00:00:00.000', '%H:%M:%S.%f')
timestamp_list = []
snr_list = []
freq_list = []
rssi_list = []
dab_present_list = []
counter = 0
f = open("output.txt", "w")
with open('test_log_20150325_gps.csv') as csvfile:
    reader = csv.reader(csvfile, delimiter=';')
    for row in reader:
        #timestamp = datetime.datetime.strptime(row[0], '%M:%S.%f')
        #timestamp.split(" ",1)
        timestamp = row[0]
        timestamp_list.append(timestamp)
        #timestamp = row[0]
        details = row[-1]
        counter += 1
        print(counter)
        #if(counter > 25000):
        #    break
        #timestamp = datetime.datetime.strptime(row[0], '%M:%S.%f')
        #timestamp_list.append(float(timestamp))
        #search for SNRLevel=\d+
        snr = re.findall('SNRLevel=(\d+)', details)
        if snr == []:
            snr = 0
        else:
            snr = snr[0]
        snr_list.append(int(snr))
        #search for Frequency=09ABC
        freq = re.findall('Frequency=([0-9a-fA-F]+)', details)
        if freq == []:
            freq = 0
        else:
            freq = int(freq[0], 16)
        freq_list.append(int(freq))
        #search for RSSI=\d+
        rssi = re.findall('RSSI=(\d+)', details)
        if rssi == []:
            rssi = 0
        else:
            rssi = rssi[0]
        rssi_list.append(int(rssi))
        #search for DABSignalPresent=\d+
        dab_present = re.findall('DABSignalPresent=(\d+)', details)
        if dab_present == []:
            dab_present = 0
        else:
            dab_present = dab_present[0]
        dab_present_list.append(int(dab_present))
        f.write(str(timestamp) + "\t")
        f.write(str(freq) + "\t")
        f.write(str(snr) + "\t")
        f.write(str(rssi) + "\t")
        f.write(str(dab_present) + "\n")
        print(timestamp, freq, snr, rssi, dab_present)
        #print (index+1)
        #print(timestamp,freq,snr)
        #print (counter)
        #print(timestamp_list,freq_list,snr_list,rssi_list)
        '''if snr != []:
            if freq != []:
                timestamp_list.append(timestamp)
                snr_list.append(snr)
                freq_list.append(freq)
                f.write(str(timestamp_list) + "\t")
                f.write(str(freq_list) + "\t")
                f.write(str(snr_list) + "\n")
                print(timestamp_list,freq_list,snr_list)'''
f.close()
I searched for the special character and did not find any. I searched the Internet, which suggested changing the encoding: I tried utf8, latin1 and a few others, but I am still getting this error. Can you please also help me solve this with pandas? I tried with pandas as well, but I am still getting the error.
I even removed a line from the log file, but the error then occurs on the next line.
Please help me find a solution, thank you.
I have solved this issue. We can use this code:
import codecs

types_of_encoding = ["utf8", "cp1252"]
for encoding_type in types_of_encoding:
    with codecs.open(filename, encoding=encoding_type, errors='replace') as csvfile:
        your code
        ....
        ....
I have solved this issue by simply adding a parameter in open()
with open(filename, encoding='cp850') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
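For the pandas part of the question: read_csv can do the decoding as well, as long as you pass the right encoding name. A minimal sketch (the encoding is a guess - try 'cp1252' or 'latin1' depending on where the log was produced):
import pandas as pd

# Sketch only: let pandas decode the file; adjust sep/encoding to your log format.
df = pd.read_csv('test_log_20150325_gps.csv', sep=';', encoding='cp1252')
print(df.head())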
with open('input.tsv', 'rb') as f:
    for ln in f:
        decoded = False
        line = ''
        for cp in ('cp1252', 'cp850', 'utf-8', 'utf8'):
            try:
                line = ln.decode(cp)
                decoded = True
                break
            except UnicodeDecodeError:
                pass
        if decoded:
            pass  # use 'line' here
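If the right codec isn't known in advance, another option is to detect it once up front instead of trying codecs line by line. A minimal sketch, assuming the third-party chardet package is installed:
import csv

import chardet  # third-party: pip install chardet

# Guess the encoding from a sample of raw bytes, then reopen the file as text.
with open('test_log_20150325_gps.csv', 'rb') as raw:
    guess = chardet.detect(raw.read(100000))  # e.g. {'encoding': 'cp1252', 'confidence': 0.87, ...}

with open('test_log_20150325_gps.csv', encoding=guess['encoding'], errors='replace') as csvfile:
    reader = csv.reader(csvfile, delimiter=';')
    for row in reader:
        pass  # process each row as in the question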