The idea is to collect the ids (not the display names) of all SoundCloud users who posted tracks whose titles start with a given letter, e.g. "f", within a given period, in our case the "past year".
I used SoundCloud's filters and got results at the following URL: https://soundcloud.com/search/sounds?q=f&filter.created_at=last_year&filter.genre_or_tag=hip-hop%20%26%20rap
I found the first user's id ("wavey-hefner") in the following line of HTML code:
<a class="sound__coverArt" href="/wavey-hefner/foreign" draggable="true">
I want to get every user's id from the whole html.
My code is:
import requests
import re
from bs4 import BeautifulSoup
html = requests.get("https://soundcloud.com/search/sounds?q=f&filter.created_at=last_year&filter.genre_or_tag=hip-hop%20%26%20rap")
soup = BeautifulSoup(html.text, 'html.parser')
for a in soup.findAll("a", {"class": "sound__coverArt"}):
    print(a.get('href'))
It returns nothing :(
The page is rendered with JavaScript. You can use Selenium to render it. First, install Selenium:
pip3 install selenium
Then get a driver, e.g. ChromeDriver from https://sites.google.com/a/chromium.org/chromedriver/downloads, and put it on your PATH (on Windows or Mac you can also use a headless-capable build of Chrome such as Chrome Canary if you like).
from bs4 import BeautifulSoup
from selenium import webdriver
import time
browser = webdriver.Chrome()
url = 'https://soundcloud.com/search/sounds?q=f&filter.created_at=last_year&filter.genre_or_tag=hip-hop%20%26%20rap'
browser.get(url)
time.sleep(5)
# To make it load more scroll to the bottom of the page (repeat if you want to)
browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(5)
html_source = browser.page_source
browser.quit()
soup = BeautifulSoup(html_source, 'html.parser')
for a in soup.findAll("a", {"class": "sound__coverArt"}):
    print(a.get('href'))
Outputs:
/tee-grizzley/from-the-d-to-the-a-feat-lil-yachty
/empire/fat-joe-remy-ma-all-the-way-up-ft-french-montana
/tee-grizzley/first-day-out
/21savage/feel-it
/pluggedsoundz/famous-dex-geek-1
/rodshootinbirds/fairytale-x-rod-da-god
/chancetherapper/finish-line-drown-feat-t-pain-kirk-franklin-eryn-allen-kane-noname
/alkermith/future-low-life-ft-the-weeknd-evol
/javon-woodbridge/fabolous-slim-thick
/hamburgerhelper/feed-the-streets-prod-dequexatron-1000
/rob-neal-139819089/french-montana-lockjaw-remix-ft-gucci-mane-kodak-black
/pluggedsoundz/famous-dex-energy
/ovosoundradiohits/future-ft-drake-used-to-this
/pluggedsoundz/famous
/a-boogie-wit-da-hoodie/fucking-kissing-feat-chris-brown
/wavey-hefner/foreign
/jalensantoy/foreplay
/yvng_swag/fall-in-luv
/rich-the-kid/intro-prod-by-lab-cook
/empire/fat-joe-remy-ma-money-showers-feat-ty-dolla-ign
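Since the goal is the user ids rather than full track URLs, each href can be reduced to its first path segment. A minimal sketch on a few of the hrefs above (in practice, collect the hrefs in the loop instead of hard-coding them):

```python
# A few hrefs from the output above.
hrefs = [
    "/tee-grizzley/from-the-d-to-the-a-feat-lil-yachty",
    "/tee-grizzley/first-day-out",
    "/wavey-hefner/foreign",
]

# Each href has the form "/<user-id>/<track-slug>", so the user id is
# path segment 1; a set removes users who posted multiple tracks.
user_ids = sorted({href.split("/")[1] for href in hrefs})
print(user_ids)  # ['tee-grizzley', 'wavey-hefner']
```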
1.) I am trying to download article PDF files from multiple web pages to a local folder on my computer, but there is no "Download PDF" button on the pages. What would be the quickest and best way to do this with Selenium?
2.) One way I have thought of is to use the keyboard shortcut for Print, Ctrl+P, but inside Selenium none of the keyboard keys work when I run the program. The code is below:
from selenium import webdriver
import chromedriver_binary # Adds chromedriver binary to path
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
driver = webdriver.Chrome()
driver.maximize_window() # Makes Full Screen of the Window Browser
time.sleep(4)
url = 'https://finance.yahoo.com/news/why-warren-buffett-doesnt-buy-152303112.html'
driver.get(url)
time.sleep(10)
a = ActionChains(driver)
a.key_down(Keys.CONTROL).send_keys('P').key_up(Keys.CONTROL).perform()
You can do that by using ChromeOptions() with a print destination setting (id, origin, etc.).
You can also set savefile.default_directory to choose where the PDF file is saved.
Code:
import time
from selenium import webdriver
import json
Options = webdriver.ChromeOptions()
settings = {
    "recentDestinations": [{
        "id": "Save as PDF",
        "origin": "local",
        "account": "",
    }],
    "selectedDestinationId": "Save as PDF",
    "version": 2,
}
prefs = {
    'printing.print_preview_sticky_settings.appState': json.dumps(settings),
    'savefile.default_directory': 'C:\\Users\\****\\path\\',
}
Options.add_experimental_option('prefs', prefs)
Options.add_argument('--kiosk-printing')
driver_path = r'C:\Users\***\***\chromedriver.exe'
driver = webdriver.Chrome(options=Options, executable_path=driver_path)
driver.maximize_window() # Makes Full Screen of the Window Browser
time.sleep(4)
url = 'https://finance.yahoo.com/'
driver.get(url)
time.sleep(2)
driver.execute_script('window.print();')
Output:
A PDF file is saved in the directory set by savefile.default_directory.
Update: use a raw string (or doubled backslashes, but not both) for the driver path so the backslashes are not treated as escapes:
driver_path = r'C:\Users\***\***\chromedriver.exe'
driver = webdriver.Chrome(options=Options, executable_path=driver_path)
I am trying to extract the estimated monthly cost of "$1,773" from this url:
https://www.zillow.com/homedetails/4651-Genoa-St-Denver-CO-80249/13274183_zpid/
Upon inspecting that part of the page, I see this data:
<div class="sc-qWfCM cdZDcW">
<span class="Text-c11n-8-48-0__sc-aiai24-0 dQezUG">Estimated monthly cost</span>
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$1,773</span></div>
To extract $1,773, I have tried this:
from bs4 import BeautifulSoup
import requests
url = 'https://www.zillow.com/homedetails/4651-Genoa-St-Denver-CO-80249/13274183_zpid/'
headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"}
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")
print(soup.findAll('span', {'class': 'Text-c11n-8-48-0__sc-aiai24-0 jLucLe'}))
This returns a list of three elements, with no mention of $1,773.
[<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$463,300</span>,
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$1,438</span>,
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$2,300<!-- -->/mo</span>]
Can someone please explain how to return $1,773?
I think you have to find the parent element first, then search within it.
For example, using the parent div from the snippet in the question:
parent_div = soup.find('div', {'class': 'sc-qWfCM cdZDcW'})
result = parent_div.findAll('span', {'class': 'Text-c11n-8-48-0__sc-aiai24-0 jLucLe'})
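As a quick check, here is that idea run against the HTML fragment from the question (the class names are copied from that snippet; Zillow's generated class names change often, so treat them as placeholders):

```python
from bs4 import BeautifulSoup

# Fragment copied from the question.
html = """<div class="sc-qWfCM cdZDcW">
<span class="Text-c11n-8-48-0__sc-aiai24-0 dQezUG">Estimated monthly cost</span>
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$1,773</span></div>"""

soup = BeautifulSoup(html, "html.parser")
# Restrict the search to the parent div, then take the value span inside it.
parent_div = soup.find("div", {"class": "sc-qWfCM cdZDcW"})
value = parent_div.find("span", {"class": "Text-c11n-8-48-0__sc-aiai24-0 jLucLe"}).get_text()
print(value)  # $1,773
```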
When parsing a web page we need to distinguish its components by the way they are rendered: some are rendered statically, others dynamically. Dynamic content also takes some time to load, because the page calls a backend API of some sort.
I tried parsing your page using Selenium with ChromeDriver:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.zillow.com/homedetails/4651-Genoa-St-Denver-CO-80249/13274183_zpid/")
time.sleep(6)  # give the dynamic content time to load
els = driver.find_elements(By.XPATH, "//span[@class='Text-c11n-8-48-0__sc-aiai24-0 jLucLe']")
for e in els:
    print(e.text)
driver.quit()
Output:
$463,300
$1,773
$2,300/mo
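Since those generated class names change whenever Zillow redeploys, an alternative (a sketch, using the same HTML structure as in the question) is to anchor on the stable label text and take the following span:

```python
from bs4 import BeautifulSoup

html = """<div class="sc-qWfCM cdZDcW">
<span class="Text-c11n-8-48-0__sc-aiai24-0 dQezUG">Estimated monthly cost</span>
<span class="Text-c11n-8-48-0__sc-aiai24-0 jLucLe">$1,773</span></div>"""

soup = BeautifulSoup(html, "html.parser")
# Find the label by its visible text; its next sibling span holds the value.
label = soup.find("span", string="Estimated monthly cost")
value = label.find_next_sibling("span").get_text()
print(value)  # $1,773
```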
I have the following HTML content:
from bs4 import BeautifulSoup
import re
html = """<a href="http://app_url1" >install app xyz</a>
install app xyz
<a href="http://app_url3" >install app aaa</a>
install app aaa"""
soup = BeautifulSoup(html, "html.parser")
print(soup.findAll("a", text=re.compile("xyz$")))
I want to filter the anchor tags whose text ends with a given regex pattern (like xyz here), passing a regex to findAll instead of iterating over all anchor tags. But I am getting only one anchor tag as output:
[<a href="http://app_url1" >install app xyz</a>]
The other anchor tag, which has an <img> in front of its text, is ignored.
Expected output:
[<a href="http://app_url1" >install app xyz</a>,
 <a href="http://app_url2" ><img src="/path.jpg"/>install app xyz</a>]
You can use the CSS selector method select instead of iterating over all anchor tags.
Example:
from bs4 import BeautifulSoup
import re
html = """<a href="http://app_url1" >install app xyz</a>
install app xyz
<a href="http://app_url3" >install app aaa</a>
install app aaa"""
soup = BeautifulSoup(html, "html.parser")
print(soup.select('a:contains("xyz")'))
Output will be:
[<a href="http://app_url1" >install app xyz</a>, <a href="http://app_url2" ><img src="/path.jpg"/>install app xyz</a>]
For getting href content from the list of the above output:
anchors = soup.select('a:contains("xyz")')
href = [i['href'] for i in anchors]
print(href)
Output will be:
['http://app_url1', 'http://app_url2']
Alternatively, filter by text=re.compile("xyz$") only, then use .parent to get back to the anchor tags.
Example:
from bs4 import BeautifulSoup
import re
html = """<a href="http://app_url1" >install app xyz</a>
install app xyz
<a href="http://app_url3" >install app aaa</a>
install app aaa"""
soup = BeautifulSoup(html, "html.parser")
result = [el.parent for el in soup.findAll(text=re.compile("xyz$"))]
print(result)
Output:
[<a href="http://app_url1" >install app xyz</a>, <a href="http://app_url2" ><img src="/path.jpg"/>install app xyz</a>]
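Another option, in case you'd rather not rely on text= matching at all: pass a function to findAll and test each anchor's full rendered text (a sketch using the same sample HTML):

```python
import re
from bs4 import BeautifulSoup

html = """<a href="http://app_url1" >install app xyz</a>
<a href="http://app_url2" ><img src="/path.jpg"/>install app xyz</a>
<a href="http://app_url3" >install app aaa</a>"""

soup = BeautifulSoup(html, "html.parser")
# text= only matches tags whose sole child is the text node, which is why
# the <a> wrapping an <img> was skipped; get_text() sees the full text.
matches = soup.findAll(lambda t: t.name == "a" and re.search("xyz$", t.get_text()))
hrefs = [a["href"] for a in matches]
print(hrefs)  # ['http://app_url1', 'http://app_url2']
```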
I am trying to extract the links of this webpage: https://search.cisco.com/search?query=iot
Using this code I am not getting anything returned:
import ssl
import urllib.request
from bs4 import BeautifulSoup

ctx = ssl.create_default_context()
url = 'https://search.cisco.com/search?query=iot'
# Get HTML data from the webpage
html = urllib.request.urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, 'html5lib')
# Retrieve all of the anchor tags
tags = soup('a')
for tag in tags:
    print(tag.get('href'))
I have tried the find_all() method but had the same problem.
It seems the page is rendered with JavaScript. You can use Selenium together with Beautiful Soup to fetch the links:
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get("https://search.cisco.com/search?query=iot&locale=enUS")
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()
for a in soup.find_all('a', href=True):
    print(a['href'])
Output:
https://onesearch.cloudapps.cisco.com/searchpage?queryFilter=iot
/login?query=iot&locale=enUS
/login?query=iot&locale=enUS
https://secure.opinionlab.com/ccc01/o.asp?id=pGuoWfLm&static=1&custom_var=undefined%7CS%7CenUS%7Ciot%7Cundefined%7CNA
https://www.cisco.com/c/en/us/support/index.html
//www.cisco.com/en/US/support/tsd_most_requested_tools.html
https://apps.cisco.com/WOC/WOConfigUI/pages/configset/configset.jsp?flow=nextgen&createNewConfigSet=Y
http://www.cisco-servicefinder.com/ServiceFinder.aspx
http://www.cisco-servicefinder.com/WarrantyFinder.aspx
//www.cisco.com/web/siteassets/sitemap/index.html
https://www.cisco.com/c/dam/en/us/products/collateral/se/internet-of-things/at-a-glance-c45-731471.pdf?dtid=osscdc000283
https://www.cisco.com/c/en/us/solutions/internet-of-things/overview.html?dtid=osscdc000283
https://www.cisco.com/c/en/us/solutions/internet-of-things/iot-kinetic.html?dtid=osscdc000283
https://www.cisco.com/c/m/en_us/solutions/internet-of-things/iot-system.html?dtid=osscdc000283
https://learningnetworkstore.cisco.com/internet-of-things?dtid=osscdc000283
https://connectedfutures.cisco.com/tag/internet-of-things/?dtid=osscdc000283
https://blogs.cisco.com/internet-of-things?dtid=osscdc000283
https://learningnetwork.cisco.com/community/internet_of_things?dtid=osscdc000283
https://learningnetwork.cisco.com/community/learning_center/training-catalog/internet-of-things?dtid=osscdc000283
https://blogs.cisco.com/digital/internet-of-things-at-mwc?dtid=osscdc000283
https://cwr.cisco.com/
https://engage2demand.cisco.com/LP=4213?dtid=osscdc000283
https://engage2demand.cisco.com/LP=15823?dtid=osscdc000283
https://video.cisco.com/detail/video/4121788948001/internet-of-things:-empowering-the-enterprise?dtid=osscdc000283
https://video.cisco.com/detail/video/4121788948001/internet-of-things:-empowering-the-enterprise?dtid=osscdc000283
https://video.cisco.com/detail/video/3740968721001/protecting-the-internet-of-things?dtid=osscdc000283
https://video.cisco.com/detail/video/3740968721001/protecting-the-internet-of-things?dtid=osscdc000283
https://video.cisco.com/detail/video/4657296333001/the-internet-of-things:-the-vision-and-new-directions-ahead?dtid=osscdc000283
https://video.cisco.com/detail/video/4657296333001/the-internet-of-things:-the-vision-and-new-directions-ahead?dtid=osscdc000283
/search/videos?locale=enUS&query=iot
/search/videos?locale=enUS&query=iot
https://secure.opinionlab.com/ccc01/o.asp?id=pGuoWfLm&static=1&custom_var=undefined%7CS%7CenUS%7Ciot%7Cundefined%7CNA
You don't need Selenium; it is better to use requests. The page is backed by an API, so post the search request to it directly:
import requests

body = {
    "query": "iot", "startIndex": 0, "count": 10, "searchType": "CISCO",
    "tabName": "Cisco", "debugScoreExplain": "false", "facets": [],
    "localeStr": "enUS",
    "advSearchFields": {"allwords": "", "phrase": "", "words": "",
                        "noOfWords": "", "occurAt": ""},
    "sortType": "RELEVANCY", "isAdvanced": "false", "dynamicRelevancyId": "",
    "accessLevel": "", "breakpoint": "XS", "searchProfile": "", "ui": "one",
    "searchCat": "", "searchMode": "text", "callId": "j5JwndwQZZ",
    "requestId": 1558540148392, "bizCtxt": "", "qnaTopic": [],
    "appName": "CDCSearhFE", "social": "false",
}

r = requests.post('https://search.cisco.com/api/search', json=body).json()
for item in r['items']:
    print(item['url'])
Alter the parameters (e.g. startIndex and count) to get more results.
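To page through results, step startIndex by count between requests. A sketch that only builds the request bodies (it assumes the API honours startIndex/count the way the single request above suggests; trimmed to the essential keys):

```python
def page_bodies(query, pages=3, count=10):
    """Yield one request body per result page, stepping startIndex."""
    base = {"query": query, "count": count, "searchType": "CISCO",
            "localeStr": "enUS", "tabName": "Cisco"}
    for page in range(pages):
        yield dict(base, startIndex=page * count)

start_indexes = [b["startIndex"] for b in page_bodies("iot")]
print(start_indexes)  # [0, 10, 20]
```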
Try following the template given in the documentation:
for link in soup.find_all('a'):
    print(link.get('href'))
In the question How can I get a url from Chrome by Python?, it was mentioned that you can grab the URL from Python with pywinauto 0.6. How is that done?
Using inspect.exe (which is mentioned in Getting Started) you can find Chrome's address bar element and see that its "Value" property contains the current URL.
I found two ways to get this url:
from __future__ import print_function
from pywinauto import Desktop
chrome_window = Desktop(backend="uia").window(class_name_re='Chrome')
address_bar_wrapper = chrome_window['Google Chrome'].main.Edit.wrapper_object()
Here's the first way:
url_1 = address_bar_wrapper.legacy_properties()['Value']
Here's the second:
url_2 = address_bar_wrapper.iface_value.CurrentValue
print(url_1)
print(url_2)
Also, if the protocol is "http", Chrome removes the "http://" prefix. You can add something like:
def format_url(url):
    if url and not url.startswith("https://"):
        return "http://" + url
    return url
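For example (restating the helper so this snippet runs on its own):

```python
def format_url(url):
    if url and not url.startswith("https://"):
        return "http://" + url
    return url

# Chrome hides "http://" but keeps "https://" in the address bar:
print(format_url("example.com/page"))     # http://example.com/page
print(format_url("https://example.com"))  # https://example.com
```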