Issue in scraping data from an HTML page using Beautiful Soup - csv

I am scraping some data from a website and I am able to do so using the code below:
import csv
import urllib2
import sys
import time
from bs4 import BeautifulSoup
from itertools import islice
page = urllib2.urlopen('http://shop.o2.co.uk/mobile_phones/Pay_Monthly/smartphone/all_brands').read()
soup = BeautifulSoup(page)
soup.prettify()
with open('O2_2012-12-21.csv', 'wb') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',')
    spamwriter.writerow(["Date", "Month", "Day of Week", "OEM", "Device Name", "Price"])
    oems = soup.findAll('span', {"class": "wwFix_h2"}, text=True)
    items = soup.findAll('div', {"class": "title"})
    prices = soup.findAll('span', {"class": "handset"})
    for oem, item, price in zip(oems, items, prices):
        textcontent = u' '.join(islice(item.stripped_strings, 1, 2, 1))
        if textcontent:
            spamwriter.writerow([time.strftime("%Y-%m-%d"), time.strftime("%B"), time.strftime("%A"), unicode(oem.string).encode('utf8').strip(), textcontent, unicode(price.string).encode('utf8').strip()])
Now, the issue is that 2 of the price values I am scraping have a different HTML structure than the rest, and my output CSV shows "None" for those because of this. The normal HTML structure for a price on the webpage is
<span class="handset">
FREE to £79.99</span>
For those 2 values the structure is
<span class="handset">
<span class="delivery_amber">Up to 7 days delivery</span>
<br>"FREE on all tariffs"</span>
The output I am getting right now displays None for the second HTML structure instead of FREE on all tariffs. Also, the price value FREE on all tariffs sits inside double quotes in the second structure, while it is outside any quotes in the first structure.
Please help me solve this issue, and pardon my ignorance as I am new to programming.

Just detect those 2 items with an additional if statement:
if price.string is None:
    price_text = u' '.join(price.stripped_strings).replace('"', '').encode('utf8')
else:
    price_text = unicode(price.string).strip().encode('utf8')
then use price_text for your CSV file. Note that I removed the " quotes with a simple replace call.
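For reference, here is a sketch of how that check might slot into the original loop, assuming the same soup, spamwriter, and Python 2 setup as in the question:
for oem, item, price in zip(oems, items, prices):
    textcontent = u' '.join(islice(item.stripped_strings, 1, 2, 1))
    if textcontent:
        # Fall back to joining all stripped strings when .string is None,
        # which happens for the entries with the nested <span>/<br> markup.
        if price.string is None:
            price_text = u' '.join(price.stripped_strings).replace('"', '').encode('utf8')
        else:
            price_text = unicode(price.string).strip().encode('utf8')
        spamwriter.writerow([time.strftime("%Y-%m-%d"), time.strftime("%B"), time.strftime("%A"),
                             unicode(oem.string).encode('utf8').strip(), textcontent, price_text])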

Related

Scrape info from html with BeautifulSoup

Good morning, I can't find the data correctly with BeautifulSoup, can you help me? I want to get the last numbers of this html:
I want to have only the employee id (22219)
<td class="day list-item">
<div class="allocation-day click-area clickable" data-date="2022-02-07" data-url="anonymous-duty-details?beginDate=2022-02-07&allocatedEmployeeId=22219">
<div class="day-info">
<div class="date-with-type">7</div>
<div class="weekday">Mo.</div>
<div class="row-status-icons">
from bs4 import BeautifulSoup as bs
url = ('20105.html')
with open(url, 'r') as f:
    contents = f.read()
soup = bs(contents, features="html.parser")
userfinder = soup.find_all('data-url', class_='')
tutte = soup.find_all('div', attrs={"data-url": "data-url"})
print(tutte)
print(userfinder)
You could select your targets more specifically. In this case, CSS selectors are used to select all div elements that have a data-url attribute:
soup.select('div[data-url]')
To get the values from the resultset you have to iterate it:
for url in soup.select('div[data-url]'):
    print(url['data-url'])
##output
anonymous-duty-details?beginDate=2022-02-07&allocatedEmployeeId=22219
To get only the ids, a simple approach could be to split() the url string, but be aware this only works while the structure stays the same; otherwise you have to use regex or other approaches:
for url in soup.select('div[data-url]'):
    print(url['data-url'].split('=')[-1])
##output
22219
Simple regex approach:
import re
for url in soup.select('div[data-url]'):
    print(re.search(r"allocatedEmployeeId=(\d*)", url['data-url']).group(1))
##output
22219
EDIT
To get only the first result, use select_one():
soup.select_one('div[data-url]')['data-url'].split('=')[-1]
or
import re
re.search(r"allocatedEmployeeId=(\d*)",soup.select_one('div[data-url]')['data-url']).group(1)

Converting variables into lists and removing duplicates

I scraped a website using the below code.
The website is structured in a certain way that requires using 4 different classes to scrape all the data, which causes some data to be duplicated.
For converting my variables into lists, I tried using the split(' ') method, but it only created a list for each scraped string with \n at the beginning.
I also tried to create the variables as empty lists, api_name = [] for instance, but it did not work.
For removing duplicates, I thought of using the set method, but I think it only works on lists.
I want to remove all the duplicated data from my variables before I write them into the CSV file. Do I have to convert them into lists first, or is there a way to remove the duplicates directly from the variables?
Any assistance or even feedback for the code would be appreciated.
Thanks.
import requests
from bs4 import BeautifulSoup
import csv
url = "https://www.programmableweb.com/apis/directory"
api_no = 0
urlnumber = 0
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data, "html.parser")
csv_file = open('api_scraper.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['api_no', 'API Name', 'Description','api_url', 'Category', 'Submitted'])
#This is the place where I parse and combine all the classes, which causes the duplicate data
directories1 = soup.find_all('tr', {'class': 'odd'})
directories2 = soup.find_all('tr', {'class': 'even'})
directories3 = soup.find_all('tr', {'class': 'odd views-row-first'})
directories4 = soup.find_all('tr', {'class': 'odd views-row-last'})
directories = directories1 + directories2 + directories3 + directories4
while urlnumber <= 765:
    for directory in directories:
        api_NameTag = directory.find('td', {'class': 'views-field views-field-title col-md-3'})
        api_name = api_NameTag.text if api_NameTag else "N/A"
        description_nametag = directory.find('td', {'class': 'col-md-8'})
        description = description_nametag.text if description_nametag else 'N/A'
        api_url = 'https://www.programmableweb.com' + api_NameTag.a.get('href')
        category_nametage = directory.find('td', {'class': 'views-field views-field-field-article-primary-category'})
        category = category_nametage.text if category_nametage else 'N/A'
        submitted_nametag = directory.find('td', {'class': 'views-field views-field-created'})
        submitted = submitted_nametag.text if submitted_nametag else 'N/A'
        #These are the variables I want to remove the duplicates from
        csv_writer.writerow([api_no, api_name, description, api_url, category, submitted])
        api_no += 1
    urlnumber += 1
    url = "https://www.programmableweb.com/apis/directory?page=" + str(urlnumber)
csv_file.close()
If it wasn't for the api links, I would have said to just use pandas read_html and take index 2. As you want the urls as well, I suggest that you change your selectors. You want to limit matching to the table, to avoid duplicates, and choose the class names that denote each column.
import pandas as pd
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://www.programmableweb.com/apis/directory')
soup = bs(r.content, 'lxml')
api_names, api_links = zip(*[(item.text, 'https://www.programmableweb.com' + item['href']) for item in soup.select('.table .views-field-title a')])
descriptions = [item.text for item in soup.select('td.views-field-search-api-excerpt')]
categories = [item.text for item in soup.select('td.views-field-field-article-primary-category a')]
submitted = [item.text for item in soup.select('td.views-field-created')]
df = pd.DataFrame(list(zip(api_names, api_links, descriptions, categories, submitted)), columns = ['API name','API Link', 'Description', 'Category', 'Submitted'])
print(df)
Though you could just do
pd.read_html(url)[2]
and then add in the extra column for api_links from bs4 using selectors shown above.
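A sketch of that shorter route, assuming the API table is still at index 2 of read_html's result and that the selector above matches exactly one link per row:
import pandas as pd
import requests
from bs4 import BeautifulSoup as bs

url = 'https://www.programmableweb.com/apis/directory'
df = pd.read_html(url)[2]  # the directory table
soup = bs(requests.get(url).content, 'lxml')
# add the absolute api links as an extra column, row for row
df['API Link'] = ['https://www.programmableweb.com' + a['href']
                  for a in soup.select('.table .views-field-title a')]
print(df.head())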

Getting html table data other than text content (get "title" attribute data)

One table entry within a table row on an html table I am trying to scrape looks like so:
<td class="top100nation" title="PAK">
<img src="/images/flag/flags_pak.jpg" alt="PAK"></td>
The web page to which this belongs is the following: http://www.relianceiccrankings.com/datespecific/odi/?stattype=bowling&day=01&month=01&year=2014. The entire column to which this belongs in the table has similar table data (i.e. it's a column of images).
I am using lxml in a python script. (Open to using BeautifulSoup instead, if I have to for some reason.) For every other column in the table, I can extract the data I want on the given row by using 'data = entry.text_content()'. Obviously, this doesn't work for this column of images. But I don't want the image data in any case. What I want to get from this table data is the 'PAK' bit - that is, I want the name of the nation. I think this is extremely simple but unfortunately I am a simpleton who doesn't understand the library he is using.
Thanks in advance
Edit: Full script, as per request
import requests
import lxml.html as lh
import csv
url = 'http://www.relianceiccrankings.com/datespecific/odi/?stattype=bowling&day=01&month=01&year=2014'

with open('firstPageCricinfo', 'w') as file:
    writer = csv.writer(file)

page = requests.get(url)
doc = lh.fromstring(page.content)
# rows of the table
tr_elements = doc.xpath('//tr')
data_array = [[] for _ in range(len(tr_elements))]
del tr_elements[0]
for t in tr_elements[0]:
    name = t.text_content()
    if name == "":
        continue
    print(name)
    data_array[0].append(name)
# printing out first row of table, to check correctness
print(data_array[0])
for j in range(1, len(tr_elements)):
    T = tr_elements[j]
    i = 0
    for t in T.iterchildren():
        # column is not at issue
        if i != 3:
            data = t.text_content()
        # image-based column
        else:
            # what do I do here???
            data = t.
        data_array[j].append(data)
        i += 1
# printing last row to check correctness
print(data_array[len(tr_elements)-1])
with open('list1', 'w') as file:
    writer = csv.writer(file)
    for i in range(0, len(tr_elements)):
        writer.writerow(data_array[i])
Along with the lxml library, you'll need either requests or some other library to get the website content.
Without seeing the code you have so far, I can offer a BeautifulSoup solution:
url = 'http://www.relianceiccrankings.com/datespecific/odi/?stattype=bowling&day=01&month=01&year=2014'
from bs4 import BeautifulSoup
import requests
soup = BeautifulSoup(requests.get(url).text, 'lxml')
r = soup.find_all('td', {'class': 'top100cbr'})
for td in r:
    print(td.text.split('v')[1].split(',')[0].strip())
outputs about 522 items:
South Africa
India
Sri Lanka
...
Canada
New Zealand
Australia
England
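As an aside on the literal question (reading the title attribute itself): lxml elements expose attributes through .get(), so inside the question's loop the image column could be handled with data = t.get('title'). A minimal, self-contained sketch:
import lxml.html as lh

html = ('<table><tr><td class="top100nation" title="PAK">'
        '<img src="/images/flag/flags_pak.jpg" alt="PAK"></td></tr></table>')
td = lh.fromstring(html).xpath('//td')[0]
# .get() reads an attribute value from the element (None if it is missing)
print(td.get('title'))  # PAK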

Scrape table from webpage when in <div> format - using Beautiful Soup

So I'm aiming to scrape 2 tables (in different formats) from a website - https://info.fsc.org/details.php?id=a0240000005sQjGAAU&type=certificate after using the search bar to iterate this over a list of license codes. I haven't included the loop fully yet but I added it at the top for completeness.
My issue is that the two tables I want, Product Data and Certificate Data, are in 2 different formats, so I have to scrape them separately. As the Product Data is in the normal "tr" format on the webpage, this bit is easy and I've managed to extract a CSV file of it. The harder bit is extracting the Certificate Data, as it is in "div" form.
I've managed to print the Certificate Data as a list of text using the class function, but I need to have it in tabular form saved in a CSV file. As you can see, I've tried several unsuccessful ways of converting it to a CSV, but if you have any suggestions, it would be much appreciated, thank you!! Also, any general tips to improve my code would be great too, as I am new to web scraping.
#namelist = open('example.csv', newline='', delimiter = 'example')
#for name in namelist:
#include all of the below

driver = webdriver.Chrome(executable_path="/Users/jamesozden/Downloads/chromedriver")
url = "https://info.fsc.org/certificate.php"
driver.get(url)
search_bar = driver.find_element_by_xpath('//*[@id="code"]')
search_bar.send_keys("FSC-C001777")
search_bar.send_keys(Keys.RETURN)
new_url = driver.current_url
r = requests.get(new_url)
soup = BeautifulSoup(r.content, 'lxml')

table = soup.find_all('table')[0]
df, = pd.read_html(str(table))
certificate = soup.find(class_='certificatecl').text
##certificate1 = pd.read_html(str(certificate))
driver.quit()

df.to_csv("Product_Data.csv", index=False)
##certificate1.to_csv("Certificate_Data.csv", index=False)
#print(df[0].to_json(orient='records'))
print(certificate)
Output:
Status
Valid
First Issue Date
2009-04-01
Last Issue Date
2018-02-16
Expiry Date
2019-04-01
Standard
FSC-STD-40-004 V3-0
What I want but over hundreds/thousands of license codes (I just manually created this one sample in Excel):
Desired output
EDIT
So whilst this is now working for the Certificate Data, I also want to scrape the Product Data and output that into another .csv file. However, currently it is only printing 5 copies of the product data for the final license code, which is not what I want.
New Code:
df = pd.read_csv("MS_License_Codes.csv")
codes = df["License Code"]

def get_data_by_code(code):
    data = [
        ('code', code),
        ('submit', 'Search'),
    ]
    response = requests.post('https://info.fsc.org/certificate.php', data=data)
    soup = BeautifulSoup(response.content, 'lxml')
    status = soup.find_all("label", string="Status")[0].find_next_sibling('div').text
    first_issue_date = soup.find_all("label", string="First Issue Date")[0].find_next_sibling('div').text
    last_issue_date = soup.find_all("label", string="Last Issue Date")[0].find_next_sibling('div').text
    expiry_date = soup.find_all("label", string="Expiry Date")[0].find_next_sibling('div').text
    standard = soup.find_all("label", string="Standard")[0].find_next_sibling('div').text
    return [code, status, first_issue_date, last_issue_date, expiry_date, standard]

# Just insert here output filename and codes to parse...
OUTPUT_FILE_NAME = 'Certificate_Data.csv'
#codes = ['C001777', 'C001777', 'C001777', 'C001777']

df3 = pd.DataFrame()
with open(OUTPUT_FILE_NAME, 'w') as f:
    writer = csv.writer(f)
    for code in codes:
        print('Getting code# {}'.format(code))
        writer.writerow((get_data_by_code(code)))
        table = soup.find_all('table')[0]
        df1, = pd.read_html(str(table))
        df3 = df3.append(df1)
df3.to_csv('Product_Data.csv', index=False, encoding='utf-8')
Here's all you need.
No chromedriver. No pandas. Forget about them in the context of scraping.
import requests
import csv
from bs4 import BeautifulSoup

# This is all you need for your task. Really.
# No chromedriver. Don't use it for scraping. EVER.
# No pandas. Don't use it for writing csv. It's not what pandas was made for.

# Function to parse a single data page based on a single input code.
def get_data_by_code(code):
    # Parameters to build the POST request.
    # "type" and "submit" params are static. "code" is your desired code to scrape.
    data = [
        ('type', 'certificate'),
        ('code', code),
        ('submit', 'Search'),
    ]
    # POST request to gain page data.
    response = requests.post('https://info.fsc.org/certificate.php', data=data)
    # "soup" object to parse the html data.
    soup = BeautifulSoup(response.content, 'lxml')
    # "status" variable. Contains the text of the DIV following the first
    # LABEL tag found with text="Status". Which is the status.
    status = soup.find_all("label", string="Status")[0].find_next_sibling('div').text
    # Same for issue dates... etc.
    first_issue_date = soup.find_all("label", string="First Issue Date")[0].find_next_sibling('div').text
    last_issue_date = soup.find_all("label", string="Last Issue Date")[0].find_next_sibling('div').text
    expiry_date = soup.find_all("label", string="Expiry Date")[0].find_next_sibling('div').text
    standard = soup.find_all("label", string="Standard")[0].find_next_sibling('div').text
    # Return the found data as a list of values.
    return [response.url, status, first_issue_date, last_issue_date, expiry_date, standard]

# Just insert here the output filename and codes to parse...
OUTPUT_FILE_NAME = 'output.csv'
codes = ['C001777', 'C001777', 'C001777', 'C001777']

with open(OUTPUT_FILE_NAME, 'w') as f:
    writer = csv.writer(f)
    for code in codes:
        print('Getting code# {}'.format(code))
        # Write the list of values to the file as a single row.
        writer.writerow((get_data_by_code(code)))
Everything is really straightforward here. I'd suggest you spend some time in the Chrome dev tools "Network" tab to get a better understanding of request forging, which is a must for scraping tasks.
In general, you don't need to run Chrome to click the "Search" button; you need to forge the request generated by that click. The same goes for any form and ajax.
well... you should sharpen your skills (:
df3 = pd.DataFrame()
with open(OUTPUT_FILE_NAME, 'w') as f:
    writer = csv.writer(f)
    for code in codes:
        print('Getting code# {}'.format(code))
        writer.writerow((get_data_by_code(code)))
        ### HERE'S THE PROBLEM:
        # The "soup" variable is declared inside the "get_data_by_code" function,
        # so you can't use it in this outer context.
        table = soup.find_all('table')[0]  # <--- you should move this line into the
        # definition of "get_data_by_code" and return its value somehow...
        df1, = pd.read_html(str(table))
        df3 = df3.append(df1)
df3.to_csv('Product_Data.csv', index=False, encoding='utf-8')
As per the example, you can return a dictionary of values from the "get_data_by_code" function:
def get_data_by_code(code):
    ...
    # "row" here stands for the list of values built earlier in the function
    table = soup.find_all('table')[0]
    return dict(row=row, table=table)
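A sketch of how the outer loop might then consume that dictionary, assuming row is the list of values built earlier in the function and that pandas is kept for the product table:
df3 = pd.DataFrame()
with open(OUTPUT_FILE_NAME, 'w') as f:
    writer = csv.writer(f)
    for code in codes:
        result = get_data_by_code(code)
        # certificate data goes straight to the csv writer
        writer.writerow(result['row'])
        # the product table is parsed from the returned markup per code
        df1, = pd.read_html(str(result['table']))
        df3 = df3.append(df1)
df3.to_csv('Product_Data.csv', index=False, encoding='utf-8')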

python - extract html table without losing axis titles

Q1. Is there any way to extract data from the table while still being able to track back to the axis titles?
Q2. Which approach is better for extracting data from an html table: HTMLParser, BeautifulSoup, or something else?
I was trying to extract this income table:
http://investing.businessweek.com/research/stocks/financials/financials.asp?ticker=TSCO:LN
I would like it to be
"Currency in Millions of British Pounds","2009","2010","2011","2012"
"Revenues", "53,898.0", "56,910.0", "60,455.0", "64,539.0"
"TOTAL REVENUES", "53,898.0", "56,910.0", "60,455.0", "64,539.0"
In the meantime, I want to know that "56,910.0" is the revenue in 2009.
But I experienced two issues:
HTMLParser.HTMLParseError: malformed start tag, at line 1148, column 47 or
HTMLParser.HTMLParseError: bad end tag: "", at line 225, column 104
and I can't keep track of the axis titles.
Many thanks
I've done quite a bit of scraping and BeautifulSoup rarely disappoints.
from BeautifulSoup import BeautifulSoup
from urllib import urlopen

URL = "http://investing.businessweek.com/research/stocks/financials/financials.asp?ticker=TSCO:LN"
HTML = urlopen(URL)
soup = BeautifulSoup(HTML)
statement = soup.find('table', {'class': "financialStatement"})
rows = statement.findAll('tr')
At this point I think you will find that rows has a length of 25 and that its first item is the header and last is the final row of the desired table.
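From there, a sketch of writing the table out while keeping the axis titles attached, assuming the first row holds the column headers (the year labels) and sticking with the BeautifulSoup 3 API used above:
import csv

def cell_text(cell):
    # BeautifulSoup 3 has no get_text(); joining the text nodes works instead
    return ''.join(cell.findAll(text=True)).strip().encode('utf8')

with open('income.csv', 'wb') as f:
    writer = csv.writer(f)
    for row in rows:
        # each value stays in its column, so "56,910.0" remains traceable
        # to both its row title and its year header
        writer.writerow([cell_text(cell) for cell in row.findAll(['td', 'th'])])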