I am trying to web-scrape a website for information. I have saved the page I want to scrape as an .html file and opened it with Sublime Text, but there are some parts that cannot be displayed in a prettified way; I have the same problem when trying to use BeautifulSoup (I cannot really share the full code since it discloses private info).
Just feed the HTML as a multiline string to a BeautifulSoup object and use soup.prettify(). That should work. However, prettify() is hard-wired to indent one space per nesting level, so if you want a custom indent you can write up a little wrapper like this:
def indentPrettify(soup, indent=4):
    # where indent is the desired number of spaces per level, as an int
    pretty_soup = str()
    previous_indent = 0
    # iterate over each line of a prettified soup
    for line in soup.prettify().split("\n"):
        # find() returns the index of the opening '<' of the HTML tag,
        # which also represents the number of spaces in the line's indentation
        current_indent = str(line).find("<")
        # str.find() returns -1 when no '<' is found. This means the line is some kind
        # of text or script instead of an HTML element and should be treated as a child
        # of the previous line. Also, current_indent should never be more than previous + 1.
        if current_indent == -1 or current_indent > previous_indent + 2:
            current_indent = previous_indent + 1
        previous_indent = current_indent
        pretty_soup += writeOut(line, current_indent, indent)
    return pretty_soup

def writeOut(line, current_indent, desired_indent):
    new_line = ""
    spaces_to_add = (current_indent * desired_indent) - current_indent
    if spaces_to_add > 0:
        for i in range(spaces_to_add):
            new_line += " "
    new_line += str(line) + "\n"
    return new_line
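For example, a quick sanity check of the wrapper (the snippet and the four-space indent below are just an illustration, not from the original question):

from bs4 import BeautifulSoup

html = "<html><body><div><p>Hello, world</p></div></body></html>"
soup = BeautifulSoup(html, "html.parser")
# prettify() alone indents one space per level; the wrapper widens that to four
print(indentPrettify(soup, indent=4))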
My objective is to extract strings from numbered/bulleted lists in multiple Microsoft Word documents, then to organize those strings into a single, one-line string where each string is ordered in the following manner: 1.string1 2.string2 3.string3 etc. I refer to these one-line strings as procedures, consisting of 'steps' 1., 2., 3., etc.
The reason it has to be in this format is because the procedure strings are being put into a database, the database is used to create Excel spreadsheet outputs, a formatting macro is used on the spreadsheets, and the procedure strings in question have to be in this format in order for that macro to work properly.
The numbered/bulleted lists in MS Word are all similar in format, but some use numbers, some use bullets, and some have extra line spaces before the first point or after the last point.
The following text shows three different examples of how the Word documents are formatted:
Paragraph Keyword 1: arbitrary text
1. Step 1
2. Step 2
3. Step 3
Paragraph Keyword 2: arbitrary text
Paragraph Keyword 3: arbitrary text
• Step 1
• Step 2
• Step 3
Paragraph Keyword 4: arbitrary text
Paragraph Keyword 5: arbitrary text
Step 1
Step 2
Step 3
Paragraph Keyword 6: arbitrary text
(For some reason the first two lists didn't get indented in the formatting of the post, but in my Word document all the indentation is the same.)
When the numbered/bulleted list is formatted without extra line spaces, my code works fine, e.g. between "Paragraph Keyword 1:" and "Paragraph Keyword 2:".
I was trying to use isspace() to isolate the extra blank lines that aren't part of the list, so that they don't get included in my procedure strings.
Here is my code:
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
def extractStrings(file):
    doc = file
    for i in range(len(doc.paragraphs)):
        str1 = doc.paragraphs[i].text
        if "Paragraph Keyword 1:" in str1:
            start1 = i
        if "Paragraph Keyword 2:" in str1:
            finish1 = i
        if "Paragraph Keyword 3:" in str1:
            start2 = i
        if "Paragraph Keyword 4:" in str1:
            finish2 = i
        if "Paragraph Keyword 5:" in str1:
            start3 = i
        if "Paragraph Keyword 6:" in str1:
            finish3 = i

    print("----------------------------")
    procedure1 = ""
    y = 1
    for x in range(start1 + 1, finish1):
        temp = str((doc.paragraphs[x].text))
        print(temp)
        if not temp.isspace():
            if y > 1:
                procedure1 = (procedure1 + " " + str(y) + "." + temp)
            else:
                procedure1 = (procedure1 + str(y) + "." + temp)
            y = y + 1
    print(procedure1)

    print("----------------------------")
    procedure2 = ""
    y = 1
    for x in range(start2 + 1, finish2):
        temp = str((doc.paragraphs[x].text))
        print(temp)
        if not temp.isspace():
            if y > 1:
                procedure2 = (procedure2 + " " + str(y) + "." + temp)
            else:
                procedure2 = (procedure2 + str(y) + "." + temp)
            y = y + 1
    print(procedure2)

    print("----------------------------")
    procedure3 = ""
    y = 1
    for x in range(start3 + 1, finish3):
        temp = str((doc.paragraphs[x].text))
        print(temp)
        if not temp.isspace():
            if y > 1:
                procedure3 = (procedure3 + " " + str(y) + "." + temp)
            else:
                procedure3 = (procedure3 + str(y) + "." + temp)
            y = y + 1
    print(procedure3)
    print("----------------------------")
    del doc
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
import docx
doc1 = docx.Document("docx_isspace_experiment_042420.docx")
extractStrings(doc1)
del doc1
Unfortunately I have no way of putting the full output into this post, but the problem is that whenever there is a blank line in the Word doc, isspace() returns False, and a number "x." is assigned to the empty space, so I end up with something like: 1. 2.Step 1 3.Step 2 4.Step 3 5. 6. (that's the last iteration of print(procedure3) from the code)
In other words, isspace() returns False even when my Python console output shows that the string is just a blank line.
Am I using isspace() incorrectly? Is there something in the string I am not detecting that is causing isspace() to return False? Is there a better way to accomplish this?
Use the test:
# --- for s a str value, like paragraph.text ---
if s.strip() == "":
    print("s is a blank line")
str.isspace() returns True only if the string is non-empty and contains nothing but whitespace. An empty str contains nothing, and therefore does not contain whitespace, so "".isspace() is False.
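A quick illustration in the interpreter (these example strings are mine, not from the question):

>>> "".isspace()        # empty string: no characters at all
False
>>> "   ".isspace()     # whitespace-only string
True
>>> "".strip() == ""    # strip() catches both cases
True
>>> "   ".strip() == ""
True

So in the loops above, replacing if not temp.isspace(): with if temp.strip() != "": skips both empty and whitespace-only paragraphs, and the spurious 1., 5. and 6. entries in the output disappear.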
I would like to parse a multiline text file with content such as:
section1:
key1 val1
key2 val2
section2:
val1
val2
val3
section3:
section4:
somevalue
The headers of the sections (section1, section2, ...) are predefined. The goal is to read the values under the different sections. I'm running into trouble using the pyparsing module over several lines (the real problem is much more complex than this simple example).
When I use the following code, the parser expects the full list of defined keywords on every line:
# -*- coding: utf-8 -*-
from pyparsing import Literal, ZeroOrMore, LineEnd, ParseException

FileSyntax = None

def Grammar():
    #section1:
    section1 = Literal("section1:").suppress() + ZeroOrMore(LineEnd())

    #section2:
    section2 = Literal("section2:").suppress() + ZeroOrMore(LineEnd())

    #section3:
    section3 = Literal("section3:").suppress() + ZeroOrMore(LineEnd())

    #section4:
    section4 = Literal("section4:").suppress() + ZeroOrMore(LineEnd())

    return section1 + section2 + section3 + section4

def parseFile(filename: str):
    global FileSyntax
    print("\nparse results:\n")
    try:
        TestFile = open(filename)
        testdata = "".join(TestFile.readlines())
        FileSyntax = Grammar()
        FileSyntax.parseString(testdata)
    except ParseException as err:
        print(err.line)
        print(" " * (err.column - 1) + "^")
        print("* " + str(err))
    except Exception:
        import traceback
        traceback.print_exc()

parseFile("testdata.txt")
How can I make the parsing stateful (dependent on the different sections)? Thank you.
If you print out the grammar expression itself, you'll get something like:
{{{{Suppress:("section1:") [LineEnd]...} {Suppress:("section2:") [LineEnd]...}} {Suppress:("section3:") [LineEnd]...}} {Suppress:("section4:") [LineEnd]...}}
That is, you are parsing all the section headers, but not the body of the sections. So you are probably failing on the first line after 'section1:'.
Also, there is no need to call readlines() and then join everything back together. Just call TestFile.read(), or even better, pathlib.Path(test_file_name).read_text().
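Here is a minimal sketch of one way to capture the section bodies too, assuming the four fixed headers from the sample above (the names NL, body_line and the overall shape are mine, not from your code): make newlines significant, then define a section as its header followed by every line up to the next header.

from pyparsing import (ParserElement, Literal, LineEnd, Group,
                       MatchFirst, OneOrMore, ZeroOrMore, restOfLine)

# keep newlines out of the default-skipped whitespace so the grammar is line-oriented
ParserElement.setDefaultWhitespaceChars(" \t")
NL = LineEnd().suppress()

header = MatchFirst([Literal(s) for s in
                     ["section1:", "section2:", "section3:", "section4:"]])
# a body line is any line that does not start with a header (~header is a lookahead)
body_line = ~header + restOfLine + NL
section = Group(header + NL + ZeroOrMore(body_line))
grammar = OneOrMore(section)

for sec in grammar.parseString(testdata):
    print(sec)

Each Group comes back as a list whose first element is the header and whose remaining elements are that section's lines, which gives you the per-section "state" without any global bookkeeping.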
I want to extract an email message's content. It is HTML content; I used BeautifulSoup after fetching the From, To and Subject. On fetching the body content, it fetches the first line alone and leaves out the remaining lines and paragraphs.
I am missing something here; how do I read all the lines/paragraphs?
CODE:
email_message = mail.getEmail(unreadId)
print(email_message['From'])
print(email_message['Subject'])

if email_message.is_multipart():
    for payload in email_message.get_payload():
        bodytext = email_message.get_payload()[0].get_payload()
        if type(bodytext) is list:
            bodytext = ','.join(str(v) for v in bodytext)
else:
    bodytext = email_message.get_payload()[0].get_payload()
    if type(bodytext) is list:
        bodytext = ','.join(str(v) for v in bodytext)

print(bodytext)
parsedContent = BeautifulSoup(bodytext)
body = parsedContent.findAll('p').getText()
print(body)
Console:
body = parsedContent.findAll('p').getText()
AttributeError: 'list' object has no attribute 'getText'
When I use
body = parsedContent.find('p').getText()
It fetches the first line of the content and it is not printing the remaining lines.
Added
After getting all the lines from the HTML tags, I get an = symbol at the end of each line, and entities such as &nbsp; and &lt; are displayed. How do I overcome those?
Extracted text:
Dear first,All of us at GenWatt are glad to have xyz as a
customer. I would like to introduce myself as your Account
Manager. Should you = have any questions, please feel free to
call me at or email me at ash= wis#xyz.com. You
can also contact GenWatt on the following numbers: Main:
810-543-1100Sales: 810-545-1222Customer Service & Support:
810-542-1233Fax: 810-545-1001I am confident GenWatt will serve you
well and hope to see our relationship=
Let's inspect the result of parsedContent.findAll('p'):
python -i test.py
----------
import requests
from bs4 import BeautifulSoup
bodytext = requests.get("https://en.wikipedia.org/wiki/Earth").text
parsedContent = BeautifulSoup(bodytext, 'html.parser')
paragraphs = parsedContent.findAll('p')
----------
>> type(paragraphs)
<class 'bs4.element.ResultSet'>
>> issubclass(type(paragraphs), list)
True # It's a list
Can you see? It's a list of all the paragraphs. If you want to access their content you will need to iterate over the list or access an element by index, like a normal list.
>> # You can print all content with a for-loop
>> for p in paragraphs:
>>     print(p.getText())
Earth (otherwise known as the world (...)
According to radiometric dating and other sources of evidence (...)
...
>> # Or you can join all content
>> content = []
>> for p in paragraphs:
>>     content.append(p.getText())
>>
>> all_content = "\n".join(content)
>>
>> print(all_content)
Earth (otherwise known as the world (...) According to radiometric dating and other sources of evidence (...)
Using a list comprehension, your code will look like:
parsedContent = BeautifulSoup(bodytext)
body = '\n'.join([p.getText() for p in parsedContent.findAll('p')])
When I use
body = parsedContent.find('p').getText()
It fetches the first line of the content and it is not printing the
remaining lines.
Doing parsedContent.find('p') is exactly the same as doing parsedContent.findAll('p')[0]:
>> parsedContent.findAll('p')[0].getText() == parsedContent.find('p').getText()
True
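As for the = signs at the end of each line in your extracted text: that is quoted-printable transfer encoding, which BeautifulSoup will not undo for you. A minimal sketch, assuming email_message is a standard email.message.Message as in your code, is to let the email module decode the payload before parsing (BeautifulSoup then converts HTML entities like &nbsp; to plain characters on its own):

part = email_message.get_payload()[0]
# decode=True undoes the quoted-printable (or base64) transfer encoding
raw = part.get_payload(decode=True)
bodytext = raw.decode(part.get_content_charset() or 'utf-8')
parsedContent = BeautifulSoup(bodytext, 'html.parser')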
I have a question concerning some basic transformations in Haskell.
Basically, I have an input file named Input.md. It contains some Markdown text that is read in my project file, and I want to write a few functions to do transformations on the text. After applying these transformations in a function called convertToHTML, I want to output the result as an .html file in the correct format.
module Main
  ( convertToHTML,
    main
  ) where

import System.Environment (getArgs)
import System.IO
import Data.Char (toLower, toUpper)

process :: String -> String
process s = head $ lines s

convertToHTML :: String -> String
convertToHTML str = do
  x <- str
  if (x == '#')
    then "<h1>"
    else return x
--convertToHTML x = map toUpper x

main = do
  args <- getArgs -- command line args
  let (infile, outfile) = (\(x:y:ys) -> (x, y)) args
  putStrLn $ "Input file: " ++ infile
  putStrLn $ "Output file: " ++ outfile
  contents <- readFile infile
  writeFile outfile $ convertToHTML contents
So:
How would I read through my input file and transform any line that starts with a # to an HTML tag?
How would I read through my input file once more and transform any word that is surrounded by underscores (_word_) to another HTML tag?
How would I replace any character with an HTML string?
I tried using functions such as map, filter and zipWith, but could not figure out how to iterate through the text and transform each piece of it. If anybody has any suggestions, please share them; I've been working on this for two days straight and have a bunch of failed code to show for it.
I tried using functions such as map, filter and zipWith, but could not figure out how to iterate through the text and transform each piece of it.
That's because they work on an appropriate collection of elements. And they don't really "iterate"; you simply have to feed them the appropriate data. Let's tackle the # problem as an example.
Our file is one giant String, and what we'd like is to have it nicely split in lines, so [String]. What could do it for us? I have no idea, so let's just search Hoogle for String -> [String].
Ah, there we go, lines function! Its counterpart, unlines, is also going to be useful. Now we can write our line wrapper:
convertHeader :: String -> String
convertHeader [] = [] -- that prevents us from calling head on an empty line
convertHeader x = if head x == '#' then "<h1>" ++ x ++ "</h1>"
                  else x
and so:
convertHeaders :: String -> String
convertHeaders = unlines . map convertHeader . lines
-- ^String ^[String] ^[String] ^String
As you can see, the function first splits the file into lines, maps convertHeader over each line, and then puts the file back together.
Now try doing the same with words to replace your formatting patterns. As a bonus exercise, change convertHeader to count the number of # in front of the line and output <h1>, <h2>, <h3> and so on accordingly.
Just wondering if these two functions are to be done using Nokogiri or via more basic Ruby commands.
require 'open-uri'
require 'nokogiri'
require "net/http"
require "uri"
doc = Nokogiri.parse(open("example.html"))

doc.xpath("//meta[@name='author' or @name='Author']/@content").each do |metaauth|
  puts "Author: #{metaauth}"
end

doc.xpath("//meta[@name='keywords' or @name='Keywords']/@content").each do |metakey|
  puts "Keywords: #{metakey}"
end
etc...
Question 1: I'm just trying to parse a directory of .html documents, get the information from the meta HTML tags, and output the results to a text file if possible. I tried a simple *.html wildcard replacement, but that didn't seem to work (at least not with Nokogiri.parse(open()); maybe it works with ::HTML or ::XML).
Question 2: But more importantly, is it possible to output all of those meta content results into a text file, replacing the puts command?
Also forgive me if the code is overly complicated for the simple task being performed, but I'm a little new to Nokogiri / xpath / Ruby.
Thanks.
I have some similar code. Please refer to:
module MyParser
  HTML_FILE_DIR = `your html file dir`

  def self.run(options = {})
    file_list = Dir.entries(HTML_FILE_DIR).reject { |f| f =~ /^\./ }
    result = file_list.map do |file|
      html = File.read("#{HTML_FILE_DIR}/#{file}")
      doc = Nokogiri::HTML(html)
      parse_to_hash(doc)
    end
    write_csv(result)
  end

  def self.parse_to_hash(doc)
    array = []
    array << doc.css(`your select conditions`).first.content
    ... # add your selector code, css or xpath
    array
  end

  def self.write_csv(result)
    ::CSV.open("`your output file name`", 'w') do |csv|
      result.each { |row| csv << row }
    end
  end
end

MyParser.run
You can output to a file like so:
File.open('results.txt', 'w') do |file|
  file.puts "output" # See http://ruby-doc.org/core-2.1.2/IO.html#method-i-puts
end
Alternatively, you could do something like:
authors = doc.xpath("//meta[@name='author' or @name='Author']/@content")
keywrds = doc.xpath("//meta[@name='keywords' or @name='Keywords']/@content")

results = authors.map { |x| "Author: #{x}" }.join("\n") + "\n" +
          keywrds.map { |x| "Keywords: #{x}" }.join("\n")

File.open('results.txt', 'w') { |f| f << results }