Web scrape using BeautifulSoup to DataFrame - HTML

This is the HTML code:
<div class="wp-block-atomic-blocks-ab-accordion ab-block-accordion ab-font-size-18"><details><summary class="ab-accordion-title"><strong>American Samoa</strong></summary><div class="ab-accordion-text">
<ul><li><strong>American Samoa Department of Health Travel Advisory</strong></li><li>March 2, 2020—Governor Moliga <a rel="noreferrer noopener" href="https://www.rnz.co.nz/international/pacific-news/410783/american-samoa-establishes-govt-taskforce-to-plan-for-coronavirus" target="_blank">appointed</a> a government taskforce to provide a plan for preparation and response to the covid-19 coronavirus. </li></ul>
<ul><li>March 25, 2020 – The Governor issued an Executive Order 001 recognizing the Declared Public Health Emergency and State of Emergency, and imminent threat to public health. The order requires the immediate and comprehensive enforcement by the Commissioner of Public Safety, Director of Health, Attorney General, and other agency leaders.
<ul>
<li>Business are also required to provide necessary supplies to the public and are prohibited from price gouging.</li>
</ul>
</li></ul>
</div></details></div>
I want to extract the state, date, and text, and add them to a dataframe with these three columns:
State: American Samoa
Date: 2020-03-25
Text: The Governor Executive Order 001 recognizing the Declared Public Health Emergency and State of Emergency, and imminent threat to public health
My code so far:
soup = bs4.BeautifulSoup(data)
for tag in soup.find_all("summary"):
    print("{0}: {1}".format(tag.name, tag.text))
for tag1 in soup.find_all("li"):
    #print(type(tag1))
    ln = tag1.text
    dt = (ln.split(' – ')[0])
    dt = (dt.split('—')[0])
    #txt = ln.split(' – ')[1]
    print(dt)
Need Help:
How do I get the text up to the first "." only? I don't need the entire text.
How do I add a new row to the dataframe on each loop iteration? (I have only attached part of the web page's source code.)
Appreciate your help!

As a start I have added the code below. Unfortunately the web page is not uniform in its use of HTML lists: some ul elements contain nested uls, others don't. This code is not perfect, but it is a starting point; for example, American Samoa has an absolute mess of nested ul elements, so it only appears once in the df.
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd

# You need to specify User-Agent headers or else you get a 403
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0',
}
data = requests.get("https://www.nga.org/coronavirus-state-actions-all/", headers=HEADERS).text
soup = BeautifulSoup(data, 'lxml')
rows_list = []
for detail in soup.find_all("details"):
    state = detail.find('summary')
    ul = detail.find('ul')
    for li in ul.find_all('li', recursive=False):
        # Three types of hyphen are used on this web page
        split = re.split('(?:-|–|—)', li.text, maxsplit=1)
        if len(split) == 2:
            rows_list.append([state.text, split[0], split[1]])
        else:
            print("Error", li.text)
df = pd.DataFrame(rows_list)
# max_colwidth=None shows full cell contents (pandas deprecated -1)
with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.max_colwidth', None):
    print(df)
It creates and prints a data frame with 547 rows and prints some error messages for text it cannot split. You will have to work out exactly which data you need and how to tweak the code to suit your purpose.
You can use 'html.parser' if you don't have 'lxml' installed.
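To answer the "text up to a ." part of the question: once the df exists, the columns can be named and each entry trimmed at the first full stop. A small sketch; the column order follows rows_list above:
# Name the columns produced from rows_list
df.columns = ['State', 'Date', 'Text']
# Keep only the text up to the first "."
df['Text'] = df['Text'].str.split('.', n=1).str[0].str.strip()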
UPDATED
Another approach is to use regex to match any string beginning with a date:
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd

# You need to specify User-Agent headers or else you get a 403
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0',
}
data = requests.get("https://www.nga.org/coronavirus-state-actions-all/", headers=HEADERS).text
soup = BeautifulSoup(data, 'html.parser')
rows_list = []
# Match a leading date such as "March 25, 2020" (full or abbreviated month names)
p = re.compile(r'(\s*(Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s*(\d{1,2}),*\s*(\d{4}))', re.IGNORECASE)
for detail in soup.find_all("details"):
    state = detail.find('summary')
    for li in detail.find_all('li'):
        m = p.match(li.text)
        if m:
            rows_list.append([state.text, m.group(0), m.string.replace(m.group(0), '')])
        else:
            print("Error", li.text)
df = pd.DataFrame(rows_list)
df.to_csv('out.csv')
This gives far more records: 4,785. Again, it is a starting point; some data still gets missed, but far less. It writes the data to a CSV file, out.csv.
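If the date should come out in ISO format (2020-03-25), as in the desired output, pandas can normalise the matched date strings. A sketch; column 1 is where the code above puts the matched date:
# Column 1 holds the matched date string, e.g. "March 25, 2020"
# errors='coerce' turns unparseable strings into NaT instead of raising
df[1] = pd.to_datetime(df[1].str.strip(), errors='coerce').dt.strftime('%Y-%m-%d')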

Related

How to efficiently parse JSON data with multiple keys in Python 2.7?

I'm writing a script that will check the CVS COVID vaccine availability for cities in my state of VA. I have been successful in getting the data I'm looking for, but my code is hard-coded in some areas. I'm specifically asking for help improving my code in areas 1 and 2 below:
The JSON file can be found here:
https://www.cvs.com//immunizations/covid-19-vaccine.vaccine-status.VA.json?vaccineinfo
I'm trying to access the data in the responsePayloadData key. The only way I could figure out how to do this is to make it the only key. For that reason, I deleted the other key responseMetaData:
#remove the key that we don't need
del obj['responseMetaData']
I'm also not sure how to dynamically loop through the VA items without hard coding the number of cities I know are there in the data:
for x, y in obj.items():
    for a in range(34):
Here's the full code:
import requests
import json
import time
from datetime import datetime
import urllib2

try:
    import indigo
except:
    pass

strAvail = "False"
strAvailCity = "None"

try:
    # download raw json object from CVS Virginia Website
    url = "https://www.cvs.com//immunizations/covid-19-vaccine.vaccine-status.VA.json?vaccineinfo"
    data = urllib2.urlopen(url).read().decode()
except urllib2.HTTPError, err:
    return {"error": err.reason, "error_code": err.code}

# parse json object
obj = json.loads(data)

# remove the key that we don't need
del obj['responseMetaData']

# loop through the JSON dictionary and check availability
# status options: {"Fully Booked", "Available"}
for x, y in obj.items():
    for a in range(34):
        # print('City: ' + y['data']['VA'][a]['city'])
        # print('Total Available: ' + y['data']['VA'][a]['totalAvailable'])
        # print('Percent Available: ' + y['data']['VA'][a]['pctAvailable'])
        # print('Status: ' + y['data']['VA'][a]['status'])
        # print("------------------------------")
        # If there is availability anywhere in the state, take some action.
        if y['data']['VA'][a]['status'] == "Available":
            strAvail = True
            strAvailCity = y['data']['VA'][a]['city']

# Log timestamp for this check to the JSON
now = datetime.now()
strDateTime = now.strftime("%m/%d/%Y %I:%M %p")
EDIT: Since the JSON is not available outside the US, I've pasted it below:
{"responsePayloadData":{"currentTime":"2021-02-11T14:55:00.470","data":{"VA":[{"totalAvailable":"1","city":"ABINGDON","state":"VA","pctAvailable":"0.19%","status":"Fully Booked"},{"totalAvailable":"0","city":"ALEXANDRIA","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"ARLINGTON","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"BEDFORD","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"BLACKSBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"CHARLOTTESVILLE","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"CHATHAM","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"CHESAPEAKE","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"1","city":"DANVILLE","state":"VA","pctAvailable":"0.19%","status":"Fully Booked"},{"totalAvailable":"2","city":"DUBLIN","state":"VA","pctAvailable":"0.39%","status":"Fully Booked"},{"totalAvailable":"0","city":"FAIRFAX","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"FREDERICKSBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"GAINESVILLE","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"HAMPTON","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"HARRISONBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"LEESBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"LYNCHBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"MARTINSVILLE","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"MECHANICSVILLE","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"MIDLOTHIAN","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},
{"totalAvailable":"0","city":"NEWPORT NEWS","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"NORFOLK","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"PETERSBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"PORTSMOUTH","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"RICHMOND","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"ROANOKE","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},
{"totalAvailable":"0","city":"ROCKY MOUNT","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"STAFFORD","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"SUFFOLK","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},
{"totalAvailable":"0","city":"VIRGINIA BEACH","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"WARRENTON","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"WILLIAMSBURG","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"WINCHESTER","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"},{"totalAvailable":"0","city":"WOODSTOCK","state":"VA","pctAvailable":"0.00%","status":"Fully Booked"}]}},"responseMetaData":{"statusDesc":"Success","conversationId":"Id-beb5f68730b34e6aa3bbc1fd927ea12b","refId":"Id-b4a7256078789eb59b8912b4","operation":"getInventorybyCity","statusCode":"0000"}}
Regarding problem 1, you can just access the data by key. You don't need to delete the other key:
payload = obj['responsePayloadData']
For the second problem, you can just iterate over the items in the list associated with payload['data']['VA']:
for city in payload['data']['VA']:
    print(city)
{'city': 'ABINGDON',
'pctAvailable': '0.19%',
'state': 'VA',
'status': 'Fully Booked',
'totalAvailable': '1'}
{'city': 'ALEXANDRIA',
'pctAvailable': '0.00%',
'state': 'VA',
'status': 'Fully Booked',
'totalAvailable': '0'}
...
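Putting both fixes together, a minimal sketch of the whole check (using requests rather than the question's urllib2, an assumption on my part; strAvail and strAvailCity are carried over from the question):
import requests

url = "https://www.cvs.com//immunizations/covid-19-vaccine.vaccine-status.VA.json?vaccineinfo"
# Fetch and parse the JSON feed
obj = requests.get(url).json()

# Problem 1: access the payload by key instead of deleting the other key
payload = obj['responsePayloadData']

strAvail = False
strAvailCity = None
# Problem 2: iterate the city list directly, no hard-coded range(34)
for city in payload['data']['VA']:
    if city['status'] == "Available":
        strAvail = True
        strAvailCity = city['city']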

How to extract value from html via BeautifulSoup

I have parsed my string via BeautifulSoup.
from bs4 import BeautifulSoup
import requests
import re
def otoMoto(link):
    URL = link
    page = requests.get(URL).content
    bs = BeautifulSoup(page, 'html.parser')
    for offer in bs.find_all('div', class_="offer-item__content ds-details-container"):
        # print(offer)
        # print("znacznik")
        linkOtoMoto = offer.find('a', class_="offer-title__link").get('href')
        # title = offer.find("a")
        titleOtoMoto = offer.find('a', class_="offer-title__link").get('title')
        rokProdukcji = offer.find('li', class_="ds-param").get_text().strip()
        rokPrzebPojemPali = offer.find_all('li', class_="ds-param")
        print(linkOtoMoto + " " + titleOtoMoto + " " + rokProdukcji)
        print(rokPrzebPojemPali)
        break
URL = "https://www.otomoto.pl/osobowe/bmw/seria-3/od-2016/?search%5Bfilter_float_price%3Afrom%5D=50000&search%5Bfilter_float_price%3Ato%5D=65000&search%5Bfilter_float_year%3Ato%5D=2016&search%5Bfilter_float_mileage%3Ato%5D=100000&search%5Bfilter_enum_financial_option%5D=1&search%5Border%5D=filter_float_price%3Adesc&search%5Bbrand_program_id%5D%5B0%5D=&search%5Bcountry%5D="
otoMoto(URL)
Result:
https://www.otomoto.pl/oferta/bmw-seria-3-x-drive-nowe-opony-ID6Dr4JE.html#d51bf88c70 BMW Seria 3 2016
[<li class="ds-param" data-code="year">
<span>2016 </span>
</li>, <li class="ds-param" data-code="mileage">
<span>50 000 km</span>
</li>, <li class="ds-param" data-code="engine_capacity">
<span>1 998 cm3</span>
</li>, <li class="ds-param" data-code="fuel_type">
<span>Benzyna</span>
</li>]
So I can extract single strings, but because every field uses the same class
class="ds-param"
I can't assign, for example, the production date to a variable. Please let me know if you have any ideas :).
Have a nice day!
from the docs:
Some attributes, like the data-* attributes in HTML 5, have names that can’t be used as the names of keyword arguments:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression
You can use these attributes in searches by putting them into a dictionary and passing the dictionary into find_all() as the attrs argument:
data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]
so you could do something like
data_soup.find_all(attrs={"data-code": "year"})[0].get_text()
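Building on that, here is a sketch that collects every field of an offer into a dict keyed by its data-code attribute, so the production year can be picked out by name (offer is the loop variable from the question; the attribute values come from the output above):
# Map each li's data-code attribute to its text,
# e.g. {'year': '2016', 'mileage': '50 000 km', ...}
params = {li['data-code']: li.get_text(strip=True)
          for li in offer.find_all('li', attrs={'data-code': True})}
rokProdukcji = params.get('year')  # production year
przebieg = params.get('mileage')   # mileage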

Python Scraper - Find Data in Column

I am working on my first website scraper and am trying to get the number 41,110 that is saved in a column on the webpage https://mcassessor.maricopa.gov/mcs.php?q=14014003N. Below is my code.
How can I get to this number and print it?
from bs4 import BeautifulSoup
import requests
web_page = 'https://mcassessor.maricopa.gov/mcs.php?q=14014003N'
web_header = {'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
response = requests.get(web_page,headers=web_header)
soup = BeautifulSoup(response.content,'html.parser')
for row in soup.findAll('table')[0].thread.tr.findAll('tr'):
    first_column = row.findAll('th')[0].contents
    print(first_column)
A straightforward approach would involve getting the "improvements" table, getting the first non-header row and then the last cell in this row:
table = soup.find("table", id="improvements-table")
first_row = table.find_all("tr")[1] # skipping a header
last_cell = first_row.find_all("td")[-1]
print(last_cell.get_text()) # prints 41,110
A more generic approach would involve making a list of dictionaries out of this table where keys are header names:
table = soup.find("table", id="improvements-table")
headers = [th.get_text() for th in table('th')]
data = [dict(zip(headers, [td.get_text() for td in row('td')])) for row in table("tr")[1:]]
print(data)
print(data[0]['Sq Ft.'])
Prints:
[
{u'Imp #': u'000101', u'Description': u'Mini-Warehouse', u'Age': u'1', u'Rank': u'2', u'Sq Ft.': u'41,110', u'CCI': u'C', u'Model': u'386'},
{u'Imp #': u'000201', u'Description': u'Site Improvements', u'Age': u'1', u'Rank': u'2', u'Sq Ft.': u'1', u'CCI': u'D', u'Model': u'163'}
]
41,110
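As a side note, since the output is tabular anyway, pandas.read_html can build the same table in one step. A sketch, assuming the response object from the question's requests call; read_html needs lxml or html5lib installed:
import pandas as pd

# read_html returns one DataFrame per matching <table>; select by id
tables = pd.read_html(response.text, attrs={"id": "improvements-table"})
improvements = tables[0]
print(improvements.loc[0, 'Sq Ft.'])  # 41,110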

R: getting google finance JSON data into a dataframe

I am trying to get google finance JSON data into a dataframe.
I tried:
library(jsonlite)
dat1 <- fromJSON("http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM")
dat1
However I get an error:
Error in feed_push_parser(readBin(con, raw(), n), reset = TRUE) :
parse error: trailing garbage
Thank you for any help.
I could not replicate your error using fromJSON due to proxy issues on my side, but the following works using httr:
require(jsonlite)
require(httr)
#Set your proxy setting if needed
#set_config(use_proxy(url='hostname',port= port,username="",password=""))
url.name = "http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"
url.get = GET(url.name)
#parsing the content as json results in similar error as you encountered
#url.content = content(url.get,type="application/json")
#Error in parseJSON(txt) : parse error: trailing garbage
# " : "0.57" ,"yld" : "2.46" } ,{ "id": "358464" ,"t" : "MSFT"
# (right here) ------^
#read content as html text
url.content = content(url.get, as="text")
#remove html tags
clean.text = gsub("<.*?>", "", url.content)
#remove residual text
clean.text = gsub("\\n|\\//","",clean.text)
DF = fromJSON(clean.text)
head(DF[,1:10],5)
# id t e l l_fix l_cur s ltt lt lt_dts
#1 22144 AAPL NASDAQ 92.51 92.51 92.51 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#2 358464 MSFT NASDAQ 51.05 51.05 51.05 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#3 12607212 TSLA NASDAQ 208.96 208.96 208.96 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#4 660463 AMZN NASDAQ 713.23 713.23 713.23 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#5 18241 IBM NYSE 148.95 148.95 148.95 2 6:59PM EDT May 11, 6:59PM EDT 2016-05-11T18:59:12Z
I got the below code from here. Let me know if this helps. On a side note, I would also recommend netfonds. Netfonds is the only source I've found that provides intra-day tick level data for both historical prices and the open book. I posted some additional links below for pulling the Netfonds data if you're interested.
http://www.blackarbs.com/blog/3/22/2015/how-to-get-free-intraday-stock-data-from-netfonds
http://www.onestepremoved.com/free-stock-data/
import urllib
from datetime import date, datetime

""" googlefinance
This module provides a Python API for retrieving stock data from Google Finance.
"""

_month_dict = {
    'Jan': 1,
    'Feb': 2,
    'Mar': 3,
    'Apr': 4,
    'May': 5,
    'Jun': 6,
    'Jul': 7,
    'Aug': 8,
    'Sep': 9,
    'Oct': 10,
    'Nov': 11,
    'Dec': 12}

# Google doesn't like Python's user agent...
class FirefoxOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11'

def __request(symbol):
    url = 'http://google.com/finance/historical?q=%s&output=csv' % symbol
    opener = FirefoxOpener()
    return opener.open(url).read().strip().strip('"')

def get_historical_prices(symbol, start_date=None, end_date=None):
    """
    Get historical prices for the given ticker symbol.
    Returns a nested list. Fields are Date, Open, High, Low, Close, Volume.
    """
    price_data = [data.split(',') for data in __request(symbol).split('\n')[1:]]
    for quote in price_data:
        quote[0] = _format_date(quote[0])
    return price_data

def _format_date(datestr):
    """ Change datestr from google format ('20-Jul-12') to the format yahoo uses ('2012-07-20')
    """
    parts = datestr.split('-')
    day = int(parts[0])
    month = _month_dict[parts[1]]
    year = int('20' + parts[2])
    return date(year, month, day).strftime('%Y-%m-%d')
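For example, the date conversion on its own behaves like this:
>>> _format_date('20-Jul-12')
'2012-07-20'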
If the Google finance endpoint returns newline-delimited JSON, the solution in R should be:
library(jsonlite)
dat1 <- stream_in(url("http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"))
But it seems the endpoint is not accepting such requests (any more?):
HTTP status was '403 Forbidden'

How to fix non-compliant HTML so Expat will parse it (htmltidy not working)

I'm trying to scrape information from http://www.nfl.com/scores (in particular, find out when a game is over so my computer can stop recording it). I can download the HTML easily enough, and it makes this claim about compliance with standards:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
But
An attempt to parse it with Expat produces the error not well-formed (invalid token).
The W3C's online validation service reports 399 Errors and 121 warnings.
I tried to run HTML tidy (just called tidy) on my Linux system with the -xml option, but tidy reports 56 warnings and 117 errors and is unable to recover a good XML file. The errors look like this:
line 409 column 122 - Warning: unescaped & or unknown entity "&role"
...
line 409 column 172 - Warning: unescaped & or unknown entity "&tabSeq"
...
line 1208 column 65 - Error: unexpected </td> in <br>
line 1209 column 57 - Error: unexpected </tr> in <br>
line 1210 column 49 - Error: unexpected </table> in <br>
But when I check the input, the "unknown entities" appear to be part of a properly quoted URL, so I don't know if a double quote is missing somewhere or what.
I know that there is something out there that can parse this stuff because both Firefox and w3m display something reasonable. What tool will fix the non-compliant HTML so that I can parse it with Expat?
They're using some kind of Javascript on the score boxes, so you're going to have to play more clever tricks (line breaks mine):
/* box of awesome */
// iscurrentweek ? true;
(new nfl.scores.Game('2009112905','54635',{state:'pre',container:'scorebox-2009112905',
wrapper:'sb-wrapper-2009112905',template:($('scorebox-2009112905').innerHTML),homeabbr:'NYJ',
awayabbr:'CAR'}));
However, to answer your question, BeautifulSoup parses it (seemingly) fine:
fp = urlopen("http://www.nfl.com/scores")
data = ""
while 1:
    r = fp.read()
    if not r:
        break
    data += r
fp.close()
soup = BeautifulSoup(data)
print soup.contents[2].contents[1].contents[1]
Outputs:
<title>NFL Scores: 2009 - Week 12</title>
Might be easier to scrape Yahoo's NFL scoreboard, in my opinion...in fact, off to try it.
EDIT: Used your question as an excuse to get around to learning BeautifulSoup. Alex Martelli has been singing its praise, so I figured it worth a try -- man, am I impressed.
Anyway, I was able to cook up a rudimentary score scraper from the Yahoo! scoreboard, like so:
def main():
    soup = BeautifulSoup(YAHOO_SCOREBOARD)
    on_first_team = True
    scores = []
    hold = None
    # Iterate the tr that contains a team's box score
    for item in soup(name="tr", attrs={"align": "center", "class": "ysptblclbg5"}):
        # Easy
        team = item.b.a.string
        # Get the box scores since we're industrious
        boxscore = []
        for quarter in item(name="td", attrs={"class": "yspscores"}):
            boxscore.append(int(quarter.string))
        # Final score
        sub = item(name="span", attrs={"class": "yspscores"})[0]
        if sub.b:
            # Winning score
            final = int(sub.b.string)
        else:
            data = sub.string.replace(" ", "")
            if ":" in data:
                # Catch TV: XXX and 0:00pm ET
                final = None
            else:
                try: final = int(data)
                except: final = None
        if on_first_team:
            hold = { team : (boxscore, final) }
            on_first_team = False
        else:
            hold[team] = (boxscore, final)
            scores.append(hold)
            on_first_team = True
    for game in scores:
        print "--- Game ---"
        for team in game:
            print team, game[team]
I would tweak this on Sunday to see how it operates, as it's really rough. Here's what it outputs as of right now:
--- Game ---
Green Bay ([0, 13, 14, 7], 34)
Detroit ([7, 0, 0, 5], 12)
--- Game ---
Oakland ([0, 0, 7, 0], 7)
Dallas ([3, 14, 0, 7], 24)
Look at that, I snagged box scores too... for a game that hasn't happened yet, we get:
--- Game ---
Washington ([], None)
Philadelphia ([], None)
Anyway, a peg for you to jump from. Good luck.
There's a Flash-based auto-updating scoreboard thing at the top of nfl.com. Some monitoring of its network traffic finds:
http://www.nfl.com/liveupdate/scorestrip/ss.xml
That will probably be a bit easier to parse than the HTML scoreboard.
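As a rough sketch of that route (Python 2, to match the code above; the feed's element and attribute names aren't documented, so this just dumps the structure for inspection before you rely on it):
import urllib2
import xml.etree.ElementTree as ET

# Fetch the live score strip feed and parse it as XML
xml_data = urllib2.urlopen("http://www.nfl.com/liveupdate/scorestrip/ss.xml").read()
root = ET.fromstring(xml_data)

# Print every element's tag and attributes to discover the schema
for elem in root.iter():
    print elem.tag, elem.attrib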
Look into tagsoup. If you want to end up with a DOM tree or a SAX stream in Java, it's the ticket. If you just want to extract specific information, Beautiful Soup is a Beautiful Thing.