My goal is to write a few lines of R code that let me scrape
www.skyscanner.it/trasporti/voli/mila/fran/180201?adults=1&children=0&adultsv2=1&childrenv2=&infants=0&cabinclass=economy&rtn=0&preferdirects=false&outboundaltsenabled=false&inboundaltsenabled=false&ref=home#results
extracting the airline, departure and arrival airports, departure and arrival times, and the price.
I decided to use the Rcrawler package (here is how it works) but, having no experience with HTML, I have no idea how to set the ExtractXpathPat option to get the data.
Rcrawler(Website = "https://www.skyscanner.it/trasporti/voli/mila/fran/180201?adults=1&children=0&adultsv2=1&childrenv2=&infants=0&cabinclass=economy&rtn=0&preferdirects=false&outboundaltsenabled=false&inboundaltsenabled=false&ref=day-view#results",
no_cores = 4, no_conn = 4, ExtractXpathPat = c("?????"))
What should I do? How can I learn how to set the XPath pattern?
Thanks!
Be careful: according to the domain's policy, extracting this information through web scraping is not allowed. However, to get the CSS selector or the XPath you can use SelectorGadget or the inspect tool in your browser.
To make sure web scraping is allowed, you must check the domain's robots.txt. In your case: http://www.skyscanner.com/robots.txt. You can also use the robotstxt package.
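For a quick programmatic check, here is a minimal sketch using Python's standard urllib.robotparser (Python rather than R here, purely for illustration; the flight URL is the one from the question):
from urllib.robotparser import RobotFileParser

# Load the site's robots.txt once, then test any URL you plan to crawl
rp = RobotFileParser("https://www.skyscanner.it/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://www.skyscanner.it/trasporti/voli/mila/fran/180201"))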
Related
I'd like to have a list of just the current titles for all questions on one of the smaller (fewer than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions. It both reports the result as JSON at the bottom and produces the requesting URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is this quota? It was 10,000 on one search on one site, but suddenly it's only 300 here.
I won't be doing this very often. What I'd like is the quickest way to edit that URL (or a similar one, of course) so I can get a list of all the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't care if I get them, but I'm thinking that if I exclude them I can fetch more at once.
If I need to script it, Python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
To get all titles on a site, you can use a Python library to help. Several exist:
StackAPI (the answer below uses this library; disclaimer: I wrote it)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) for the site hardwarerecs. Each page costs one API hit, so the more items per page, the fewer API calls.
from stackapi import StackAPI
SITE = StackAPI('hardwarerecs')
SITE.page_size = 100
SITE.max_pages = 100
# Filter to only get question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
The questions variable holds a dictionary that looks very similar to the API output, except that the library has done all the paging for you. Your data is in questions['items'] and, in this case, contains a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{ u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting which fields you want in the web UI and copying the resulting filter value.
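Since the original goal was a list of titles, extracting them from that structure is then a one-liner (a sketch, assuming the items list shown above):
titles = [q['title'] for q in questions['items']]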
The hardwarerecs value passed when creating the SITE object is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site at the /sites endpoint.
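For example, here is a minimal sketch that lists api_site_parameter values by calling the public /sites endpoint directly with requests (no key is needed for a one-off call):
import requests

# Print each site's short API name next to its human-readable name
resp = requests.get("https://api.stackexchange.com/2.2/sites", params={"pagesize": 100})
for site in resp.json()["items"]:
    print(site["api_site_parameter"], "-", site["name"])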
I'm trying to fetch "current / max Players" from this site:
http://rust-servers.net/server/64099/
I would like to display just the player numbers, e.g. "70 / 175", on another website and have it update every time someone visits my .html page (I can change that one to .php if needed).
How would I go about doing that in the most simple and efficient way?
I've googled the issue for some hours without any luck; I'm no closer to understanding what I should use, as everyone seems to suggest widely different methods, and most seem far too verbose for the simple thing I'm trying to do. Many examples fetch the data as JSON with some JS/jQuery, use a bit of code to find specific items in that data, define them as a variable or array, and then display them later.
I've figured out that the information I want can be referenced by the XPath "/html/body/div[4]/div/div/div[9]/div[1]/table/tbody/tr[4]/td[2]/strong", but that's about it. How do I proceed from there?
Thank you.
It depends on which platform you want to use.
If you use C#, you can use the HtmlAgilityPack library.
It basically works like this:
var webClient = new System.Net.WebClient();
var htmlCode = webClient.DownloadString("http://rust-servers.net/server/64099/");
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(htmlCode);
var node = doc.DocumentNode.SelectSingleNode("/html[1]/body[1]/div[4]/div[1]/div[1]/div[9]/div[1]/table[1]/tbody[1]/tr[4]/td[2]");
var nodeValue = node.InnerText;
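For comparison, the same fetch-and-XPath approach as a minimal Python sketch with requests and lxml (the XPath is copied from the question; note that browsers insert tbody elements that may not exist in the raw HTML, so the path may need adjusting):
import requests
from lxml import html

# Download the page and evaluate the XPath from the question against it
page = requests.get("http://rust-servers.net/server/64099/")
tree = html.fromstring(page.content)
nodes = tree.xpath("/html/body/div[4]/div/div/div[9]/div[1]/table/tbody/tr[4]/td[2]/strong")
print(nodes[0].text_content() if nodes else "node not found")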
I'm doing a little project for my class and I'm just a beginner, so please forgive me if I mix up some of my terminology.
Basically, I'm creating an interactive journey planner for my city's public transit system. Unfortunately, they haven't made all the data I need publicly available. So instead of putting all my time into gathering the data for personal use, I've opted to do some screen scraping - letting their servers calculate the journey info from a START and STOP variable and then displaying the selected info on my page.
So is it possible to fill out a form's fields remotely, and then scrape the data on the page that subsequently loads? And if so, what would be the quickest, most convenient way? This happens to be a case where the data can't be manipulated via the URL, so it has to access the data by filling out the form first.
The website in question:
http://jp.translink.com.au/travel-information/journey-planner
Here is what you can do:
1.) Send a POST request to the journey planner with data like the following (be aware that CORS might get in the way, in which case you could use cURL via PHP or similar; a rough sketch of this request follows at the end of this answer):
Start:Wickham Tce, Spring Hill
End:Upper Edward St, Spring Hill
SearchDate:10/05/2013 12:00:00 AM
TimeSearchMode:LeaveAfter
SearchHour:7
SearchMinute:40
TimeMeridiem:AM
TransportModes:Bus
TransportModes:Train
TransportModes:Ferry
MaximumWalkingDistance:1500
WalkingSpeed:Normal
ServiceTypes:Regular
ServiceTypes:Express
ServiceTypes:NightLink
FareTypes:Standard
FareTypes:Prepaid
FareTypes:Free
2.) You will get a new response location. This seems to be a REST link; the important part for you is the id at the end. You will have to request that page and parse the HTML, looking for a div with the HTML id option-summaries, where you will find more information within the divs travel-option-1 to travel-option-n. You have to look at it carefully in order to find out which information is stored where and how you can use it.
In order to find such things you should learn how to use Firebug or Chrome's developer tools.
This is one way to solve your problem. It is probably not the best, but still better than screen-scraping everything. It will, however, demand a lot of skill and effort, and if the data provider changes even a little, your solution will stop working. Additionally, they might block your access via CORS or other means (blocking your IP, etc.).
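As an illustration of step 1, here is a rough Python sketch using requests, assuming the form posts back to the journey-planner URL itself (the endpoint and the required fields may well have changed since this answer was written):
import requests

# Field names and values copied from the captured form data above;
# repeated fields (TransportModes etc.) are passed as lists so that
# requests encodes them as repeated keys.
payload = {
    "Start": "Wickham Tce, Spring Hill",
    "End": "Upper Edward St, Spring Hill",
    "SearchDate": "10/05/2013 12:00:00 AM",
    "TimeSearchMode": "LeaveAfter",
    "SearchHour": "7",
    "SearchMinute": "40",
    "TimeMeridiem": "AM",
    "TransportModes": ["Bus", "Train", "Ferry"],
    "MaximumWalkingDistance": "1500",
    "WalkingSpeed": "Normal",
    "ServiceTypes": ["Regular", "Express", "NightLink"],
    "FareTypes": ["Standard", "Prepaid", "Free"],
}
resp = requests.post("http://jp.translink.com.au/travel-information/journey-planner", data=payload)
# After redirects, the final URL should contain the journey id from step 2
print(resp.url)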
I have a personal blog I built using rails. I want to add a section to my site that displays my current streak of github contributions. What would be the best way about doing this?
edit: for clarification, here is what I want:
Just the number of days is all that is necessary for me.
Considering the GitHub API for Users doesn't yet expose that particular information (the number of days in the current streak of contributions), you might have to:
scrape it (extract it by reading the user's GitHub page)
As klamping mentions in his answer (upvoted), the URL to scrape would be:
https://github.com/users/<username>/contributions_calendar_data
https://github.com/users/<username>/contributions
(for public repos only, though)
SherlockStd has updated (May 2017) parsing code below:
https://github-stats.com/api/user/streak/current/:username
try projects which are using https://github.com/users/<username>/contributions_calendar_data (as listed in Marques Johansson's answer, upvoted)
IonicaBizau/git-stats:
akerl/githubchart (Github contribution SVG generator)
akerl/githubstats (Github contribution statistics)
build that graph yourself: see the GitHub project git-cal
git-cal is a simple script to view a commits calendar (similar to the GitHub contributions calendar) on the command line.
Each block in the graph corresponds to a day and is shaded with one of the 5 possible colors, each representing relative number of commits on that day.
or establish a service that will report, each day, any new commit for that given day to a Google Calendar (using the Google Calendar API through a project like nf/streak).
You can then read that information and report it in your blog.
You can find various example of scraping that information:
github_team_calendar.py
weekend-commits.js
As in:
$.getJSON('https://github.com/users/' + location.pathname.replace(/\//g, '') + '/contributions_calendar_data', weekendWork);
leaderboard.rb:
Like:
leaderboard = members.map do |u|
user_stats = get("https://github.com/users/#{u}/contributions_calendar_data")
total = user_stats.map { |s| s[1] }.reduce(&:+)
[u, total]
end
... (you get the idea)
The URL for the plain JSON data was:
https://github.com/users/[username]/contributions_calendar_data
[Edit: it looks like this URL no longer works.]
There is a URL which generates the SVG, which other answers have indicated. That is here:
https://github.com/users/[username]/contributions
Simply replace [username] with your GitHub username in the URL and you should be able to see the chart. See other answers for more in-depth explanations.
If you want something that matches the visual appearance of GitHub's chart, check out these projects, which use https://github.com/users/<username>/contributions_calendar_data but also apply other factors based on GitHub's logic.
https://github.com/akerl/githubchart
https://github.com/akerl/githubstats
[Project deprecated and unavailable for now; it will be back online soon.]
Since the URL https://github.com/users/<username>/contributions_calendar_data doesn't work anymore, you have to parse the SVG from https://github.com/users/<username>/contributions (a rough sketch of this follows below).
Unfortunately, GitHub loves security, and CORS is disabled on their server.
To solve this issue, I've set up an API for me and everyone who needs it. Just GET https://github-stats.com/api/user/streak/current/{username} (CORS allowed), and you'll get an answer like:
{
"success":true,
"currentStreak": 3
}
https://github-stats.com will soon implement more stats endpoints :)
Please ask for new endpoints at https://github.com/SherloxFR/github-stats.com/issues; it will be a pleasure to find a way to implement them!
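If you would rather parse the SVG yourself server-side (where CORS does not apply), here is a rough Python sketch. It assumes each day in the SVG is a rect carrying data-count and data-date attributes, which was the markup at the time this answer was written; GitHub has changed it since, so treat this as illustrative only:
import re
import requests

username = "octocat"  # example account; substitute your own
svg = requests.get("https://github.com/users/%s/contributions" % username).text

# Daily contribution counts in calendar order, oldest first
counts = [int(c) for c in re.findall(r'data-count="(\d+)"', svg)]

# Walk backwards from the most recent day until a zero-contribution day
streak = 0
for count in reversed(counts):
    if count == 0:
        break
    streak += 1
print(streak)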
I noticed that iTunes Preview allows you to crawl and scrape pages via the http:// protocol. However, many of the links try to open in iTunes rather than the browser. For example, when you go to the iBooks page, it immediately tries to open a URL with the itms:// protocol.
Are there any other methods of crawling the App Store or is this the only way?
Can the itms:// protocol links themselves be crawled somehow?
I would have a decent look at the iTunes Search API and the iTunes Enterprise Partner API
Search API -
http://www.apple.com/itunes/affiliates/resources/blog/introduction---search-api.html
Enterprise Partner API -
http://www.apple.com/itunes/affiliates/resources/documentation/itunes-enterprise-partner-feed.html
You might get most/all of the information you need in a nice JSON file format.
If you can't get the information you need with the API, I would be interested to know what it is :)
As phillipp mentioned, the iTunes search API is an easy way to retrieve data about your App Store listings in JSON format.
Simply query it with your app id (you can find the app id by viewing the web listing for your app at itunes.apple.com), e.g.:
http://itunes.apple.com/lookup?id=INSERT_YOUR_APP_ID_HERE
Then parse the resulting JSON to your heart's content.
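For instance, a minimal Python sketch of that lookup (the id below is just an example; the field names follow the lookup response for software entries):
import requests

app_id = "284882215"  # example app id; replace with your own
resp = requests.get("https://itunes.apple.com/lookup", params={"id": app_id})
data = resp.json()
if data["resultCount"]:
    app = data["results"][0]
    print(app["trackName"], "-", app["version"])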
The only difference between http:// links and itms:// links is that you need to set your User-Agent to an iTunes user-agent, and depending on the version you may also have to include a verification code based on some not-so-secret algorithm.
For example, this is the code for iTunes 9:
# Some magic. Generates a seed we use for X-Apple-Validation. Adapted from LWP::UserAgent::iTMS_Client.
function comp_seed($url, $user_agent) {
    $random = sprintf( "%04X%04X", rand(0,0x10000), rand(0,0x10000) );
    $static = base64_decode("ROkjAaKid4EUF5kGtTNn3Q==");
    $url_end = ( preg_match("|.*/.*/.*(/.+)$|",$url,$matches)) ? $matches[1] : '?';
    $digest = md5(join("",array($url_end, $user_agent, $static, $random)) );
    return $random . '-' . strtoupper($digest);
}
However, if you are only scraping, iTunes Preview should work for your purposes; the link you gave us to the iBooks page had more than enough information to scrape.
We tried scraping ourselves about a year ago and it just became too much of a headache. Philipp's comment is a good one, as the enterprise feed from Apple (you need to apply for it with a legitimate use) does have a good amount of the useful info you might be after when scraping.
There are a few companies that offer the data as a service too - abto and AppMonsta are two I heard of when I was looking. I can't seem to find abto anymore, but http://appmonsta.com seems to be still around. The search API looks OK (I never experimented with it) but limited.
Good luck!