requests doesn't get the full body content - html

I know this question has been asked many times. I tried several of the usual solutions, and they worked for other sites I've scraped.
But this site seems to be different.
I tried this first:
html = requests.get(url = "http://loawa.com")
soup = BeautifulSoup(html.content.decode('utf-8','replace'), 'html.parser')
print(soup)
It fetches me the head and only a sliver of the body:
<body class="p-0 bg-theme-6" style="overflow-x:hidden"><script>window.location.reload(true);</script></body>
So I used prerender, like this:
html = requests.get(url = "http://service.prerender.io/http://loawa.com")
soup = BeautifulSoup(html.content.decode('utf-8','replace'), 'html.parser')
print(soup)
It gives me the same result.
So I tried it with headers.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36','Content-Type': 'text/html',}
response = requests.get("http://loawa.com",headers=headers)
html = response.text
soup = BeautifulSoup(html, 'html.parser')
print(soup)
The HTML comes out empty. I'm not sure I set up the headers correctly.
What else can I try? I don't want to use Selenium for this.
Hope someone can enlighten me. Thanks!
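A minimal requests-only sketch of one more thing to try, assuming the page fills its body from a separate XHR/JSON endpoint that you can spot in the browser's network tab; the '/api/...' path below is purely hypothetical, and a Session is used so any cookies from the first response carry over:

import requests

session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36',
    'Accept': 'text/html,application/json;q=0.9,*/*;q=0.8',
})

# First load the page so any cookies it sets are stored on the session.
first = session.get('http://loawa.com')
print(first.status_code)

# Then request the data endpoint seen in the network tab.
# '/api/ranking' is a placeholder; replace it with the real XHR URL.
data = session.get('http://loawa.com/api/ranking')
print(data.status_code)
print(data.text[:500])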

Related

Timeout Error - DHL API to Google Sheets - UrlFetchApp

In Python, I use as headers the "Request Headers" captured from the browser's developer tools, and it works fine.
I tried the same with Apps Script, but UrlFetchApp throws a Timeout exception:
function WS() {
  var myHeaders = {
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,es;q=0.8,pt;q=0.7',
    'cookie': '', // the cookies that appear here in my browser
    'referer': 'https://www.dhl.com/global-en/home/tracking/tracking-express.html?submit=1&tracking-id=4045339815',
    'sec-ch-ua': '"Microsoft Edge";v="105", "Not)A;Brand";v="8", "Chromium";v="105"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.53',
    'x-sec-clge-req-type': 'ajax',
  };
  var options = {
    'method': 'GET',
    'headers': myHeaders,
  };
  var response = UrlFetchApp.fetch("https://www.dhl.com/utapi?trackingNumber=4045339815&language=en&source=tt", options);
  Logger.log(response.getContentText());
}
I would appreciate any ideas / hints.
EDIT:
Website used to capture the cookies:
https://www.dhl.com/global-en/home/tracking/tracking-express.html?submit=1&tracking-id=4045339815
I think the problem is most likely the user-agent header. Apps Script's URL Fetch Service uses Google's servers to send the request instead of your browser. As a result, Apps Script forces its own user agent that looks like this:
"User-Agent": "Mozilla/5.0 (compatible; Google-Apps-Script; beanserver; +https://script.google.com; id: ...)"
On the other hand, Python sends the headers exactly as you specified them. You can test this yourself by sending your requests to a test server like https://httpbin.org/headers. The only difference between the Python and Apps Script requests is the user-agent header.
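As a quick illustration (not part of the original answer), this is the kind of check that makes the difference visible from the Python side; httpbin.org/headers simply echoes back the headers it received:

import requests

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.53'}

# The response shows exactly which headers arrived at the server,
# so any header the client forces or rewrites is visible here.
echoed = requests.get('https://httpbin.org/headers', headers=headers).json()
print(echoed['headers'])

Running the equivalent UrlFetchApp.fetch from Apps Script against the same URL shows the forced Google-Apps-Script user agent in the logged response.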
It doesn't look like there's a way to bypass this. There's a request in Google's issue tracker to allow customization of the user agent, but it has been open since 2013, so it doesn't seem like something they want to do, maybe for transparency reasons or something similar.
The reason this header is a problem is that DHL doesn't want you to use their user-facing endpoints to request information with scripts, though you probably already know this since you're trying to replicate the browser's headers and cookies. Trying to access the endpoint without the right headers just results in an error message.
My guess is that DHL has blacklisted the Apps Script user agent, hence the timeout. If you want to use Apps Script you probably will have to go to https://developer.dhl and set up a developer account to get your own API key. If you want to keep using your current method then you'll have to stick to Python or anything else that won't change your headers.
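If you do go the developer-portal route, here is a rough sketch of what the official call might look like, assuming the Shipment Tracking (Unified) endpoint and the DHL-API-Key header; check https://developer.dhl for the exact host, path and header names before relying on this:

import requests

API_KEY = 'your-api-key'  # issued after registering at developer.dhl

# Assumed endpoint for the Shipment Tracking - Unified API; verify in the portal.
url = 'https://api-eu.dhl.com/track/shipments'
params = {'trackingNumber': '4045339815'}
headers = {'DHL-API-Key': API_KEY, 'Accept': 'application/json'}

resp = requests.get(url, params=params, headers=headers)
print(resp.status_code)
print(resp.json())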
Edit:
Here's a quick Python sample that seems to support the theory:
import requests
#Chrome user agent, this works
useragent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.53'
#No user agent, this also works
#useragent = ''
#Fake user agent, this still works
#useragent = 'Mozilla/5.0 (compatible; Googlu-Opps-Script)'
#Apps Script user agent, this just hangs
#useragent = 'Mozilla/5.0 (compatible; Google-Apps-Script)'
headers = {
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,es;q=0.8,pt;q=0.7',
    'cookie': 'your-cookie',
    'referer': 'https://www.dhl.com/global-en/home/tracking/tracking-express.html?submit=1&tracking-id=4045339815',
    'sec-ch-ua': '"Microsoft Edge";v="105", "Not)A;Brand";v="8", "Chromium";v="105"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'user-agent': useragent,
    'x-sec-clge-req-type': 'ajax'}
url="https://www.dhl.com/utapi?trackingNumber=4045339815&language=en&source=tt"
result = requests.get(url, headers=headers)
print(result.content.decode())
Based on my testing in Python, even a blank or fake user agent will work, but one that has Google-Apps-Script will just keep hanging. Even changing a single letter to Google-Opps-Script or something similar will make it work.

Extracting Text from Span Tag using BeautifulSoup

I am trying to extract the estimated monthly cost of "$1,773" from this url:
https://www.zillow.com/homedetails/4651-Genoa-St-Denver-CO-80249/13274183_zpid/
Upon inspecting that part of the page, I see this data:
<div class="sc-qWfCM cdZDcW">
<span class="Text-c11n-8-48-0__sc-aiai24-0 dQezUG">Estimated monthly cost</span>
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$1,773</span></div>
To extract $1,773, I have tried this:
from bs4 import BeautifulSoup
import requests
url = 'https://www.zillow.com/homedetails/4651-Genoa-St-Denver-CO-80249/13274183_zpid/'
headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"}
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html")
print(soup.findAll('span', {'class': 'Text-c11n-8-48-0__sc-aiai24-0 jLucLe'}))
This returns a list of three elements, with no mention of $1,773.
[<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$463,300</span>,
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$1,438</span>,
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$2,300<!-- -->/mo</span>]
Can someone please explain how to return $1,773?
I think you have to find the parent element first, for example:
parent_div = soup.find('div', {'class': 'sc-qWfCM cdZDcW'})
result = parent_div.findAll('span', {'class': 'Text-c11n-8-48-0__sc-aiai24-0 jLucLe'})
When parsing a web page, we need to separate its components by how they are rendered: some are rendered statically and some dynamically. Dynamic content also takes some time to appear, because the page calls a backend API of some sort.
I tried parsing your page using Selenium's ChromeDriver:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.zillow.com/homedetails/4651-Genoa-St-Denver-CO-80249/13274183_zpid/")
time.sleep(3)  # give the dynamically rendered content time to load

el = driver.find_elements(By.XPATH, "//span[@class='Text-c11n-8-48-0__sc-aiai24-0 jLucLe']")
for e in el:
    print(e.text)

driver.quit()
#OUTPUT
$463,300
$1,773
$2,300/mo

BeautifulSoup IndexError: list index out of range

My code below:
import requests
from bs4 import BeautifulSoup
def investopedia():
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0'}
    ticker = 'TSLA'
    url = f'https://www.investopedia.com/markets/quote?tvwidgetsymbol={ticker.lower()}'
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')
    ip_price = soup.find_all('div', {'class': 'tv-symbol-price-quote__value js-symbol-last'})[0].find('span').text
    print(ip_price)

investopedia()
The element I see when inspecting the page (in the HTML):
<div class="tv-symbol-price-quote__value js-symbol-last"><span>736.27</span></div>
736.27 inside the span is the number I need.
Please help out a web scraping beginner here. Thanks in advance!
You get the index-out-of-range error because your code can't find any of the HTML elements you are looking for.
The information you want is kept inside an iframe. To retrieve it, you have to switch to that iframe. One way to do it is with Selenium.
import time

from selenium import webdriver

def investopedia():
    ticker = 'TSLA'
    url = f'https://www.investopedia.com/markets/quote?tvwidgetsymbol={ticker.lower()}'
    driver = webdriver.Chrome()
    driver.get(url)
    time.sleep(5)  # it takes time to download the webpage
    iframe = driver.find_elements_by_css_selector('.tradingview-widget-container > iframe')[0]
    driver.switch_to.frame(iframe)
    time.sleep(1)
    ip_price = driver.find_elements_by_xpath('.//div[@class="tv-symbol-price-quote__value js-symbol-last"]')[0].get_attribute('innerText').strip()
    print(ip_price)

investopedia()
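A side note not in the original answer: Selenium 4 removed the find_elements_by_* helpers, so on a current install the two lookups above need the By-based API instead, roughly like this:

from selenium.webdriver.common.by import By

# Drop-in replacements for the two find_elements_by_* calls above.
iframe = driver.find_elements(By.CSS_SELECTOR, '.tradingview-widget-container > iframe')[0]
ip_price = driver.find_elements(By.XPATH, './/div[@class="tv-symbol-price-quote__value js-symbol-last"]')[0].get_attribute('innerText').strip()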

How to get div with multiple classes BS4

What is the most efficient way to get divs with BeautifulSoup4 if they have multiple classes?
I have an html structure like this:
<div class='class1 class2 class3 class4'>
  <div class='class5 class6 class7'>
    <div class='comment class14 class15'>
      <div class='date class20 showdate'> 1/10/2017</div>
      <p>comment2</p>
    </div>
    <div class='comment class25 class9'>
      <div class='date class20 showdate'> 7/10/2017</div>
      <p>comment1</p>
    </div>
  </div>
</div>
I want to get the divs with the comment class. Usually nested classes are no problem, but I don't know why the command:
html = BeautifulSoup(content, "html.parser")
comments = html.find_all("div", {"class":"comment"})
doesn't work. It gives an empty list.
I guess this happens because there are a lot of classes, so it looks for a div whose only class is comment, which doesn't exist. How can I find all the comments?
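For what it's worth, a quick self-contained check (not from the original thread) shows that find_all("div", {"class": "comment"}) does match divs carrying several classes, as long as the markup is actually present in the response, so the number of classes is not the problem:

from bs4 import BeautifulSoup

snippet = """
<div class='comment class14 class15'><p>comment2</p></div>
<div class='comment class25 class9'><p>comment1</p></div>
"""
soup = BeautifulSoup(snippet, 'html.parser')
print(len(soup.find_all('div', {'class': 'comment'})))  # prints 2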
Apparently, the URL that fetches the comments section is different from the original URL that retrieves the main contents.
This is the original URL you gave:
http://community.sparknotes.com/2017/10/06/find-out-your-colleges-secret-mantra-we-hack-college-life-at-the-100-of-the-best
Behind the scenes, if you record the network log in the network tab of Chrome's developer tools, you'll see a list of all the requests the browser sends. Most of them fetch images and scripts. A few relate to other sites such as Facebook or Google (for analytics, etc.). The browser sends one more request to this particular site (sparknotes), and that one returns the comments section. This is the URL:
http://community.sparknotes.com/commentlist?post_id=1375724&page=1&comment_type=&_=1507467541548
The value for post_id can be found in the web page returned when we request the first URL. It is contained in an input tag which has a hidden attribute.
<input type="hidden" id="postid" name="postid" value="1375724">
You can extract this info from the first web page using a simple soup.find('input', {'id': 'postid'})['value']. Of course, since this identifies the post uniquely, you need not worry about its changing dynamically on each request.
I couldn't find the '1507467541548' value passed to the '_' parameter (the last parameter of the URL) anywhere in the main page, or in the cookies set by the response headers of any of the pages.
However, I went out on a limb and fetched the URL without the '_' parameter, and it worked.
So, here's the entire script that worked for me:
from bs4 import BeautifulSoup
import requests
req_headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive',
    'Host': 'community.sparknotes.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'
}

with requests.Session() as s:
    url = 'http://community.sparknotes.com/2017/10/06/find-out-your-colleges-secret-mantra-we-hack-college-life-at-the-100-of-the-best'
    r = s.get(url, headers=req_headers)
    soup = BeautifulSoup(r.content, 'lxml')
    post_id = soup.find('input', {'id': 'postid'})['value']

    # url = 'http://community.sparknotes.com/commentlist?post_id=1375724&page=1&comment_type=&_=1507467541548'  # the original URL found in network tab
    url = 'http://community.sparknotes.com/commentlist?post_id={}&page=1&comment_type='.format(post_id)  # modified by removing the '_' parameter
    r = s.get(url)
    soup = BeautifulSoup(r.content, 'lxml')

    comments = soup.findAll('div', {'class': 'commentCite'})
    for comment in comments:
        c_name = comment.div.a.text.strip()
        c_date_text = comment.find('div', {'class': 'commentBodyInner'}).text.strip()
        print(c_name, c_date_text)
As you can see, I haven't used headers for the second request, so I'm not sure they're required at all. You can experiment with omitting them in the first request as well. But make sure you use requests, as I haven't tried urllib; cookies might play a vital role here.

How to generate static HTML pages from ASP.NET Web Forms app

I have a Web Forms app solution made with Visual Studio 2013, and I want to generate static HTML pages from it. Does anyone know a good tool, or perhaps a script, and have experience with this?
I tried with Pretzel, but it does not support ASP.
You can generate HTML pages using HtmlTextWriter as:
using (StreamWriter sw = new StreamWriter(Server.MapPath("fileName.html")))
using (HtmlTextWriter writer = new HtmlTextWriter(sw))
{
    writer.RenderBeginTag(HtmlTextWriterTag.Html);
    writer.RenderBeginTag(HtmlTextWriterTag.Head);
    writer.Write("Head Contents");
    writer.RenderEndTag();
    writer.RenderBeginTag(HtmlTextWriterTag.Body);
    writer.Write("Body Contents");
    writer.RenderEndTag();
    writer.RenderEndTag();
}
Here is the code I used and it is working fine:
Uri url = new Uri(serverPath + pageName);
WebClient wc = new WebClient();
wc.Headers.Add(HttpRequestHeader.Accept,"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
wc.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0");
string finalHTML = wc.DownloadString(url);
// change the .aspx extensions in any links
finalHTML = finalHTML.Replace(".aspx", ".html");
//create HTML file
System.IO.File.WriteAllText(string.Format("{0}{1}", filePathSave, pageName), finalHTML);