When writing a web crawler/scraper, what algorithms and techniques are available to throttle requests and avoid DoS'ing the server/getting banned? This comes up often when reading about web scraping (for example, here), but always as something like "I should have implemented throttling, but didn't" :)
My Google-fu may be weak; I found mostly discussions of how to throttle requests server-side, while others (like this question) are specific to a particular library.
The most generic, cross-language way is to sleep between requests. A sleep of around 10 seconds should mimic how quickly a real human moves through web pages. To evade robot-detection heuristics, some people sleep a random amount of time: sleep(ten_seconds + rand()).
You can make it fancier by keeping a separate sleep timeout per domain, so that you can fetch something from another server while waiting out the timeout for the first.
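A minimal sketch of that idea in Python (the class, delay values, and jitter range are illustrative, not from the original post):

```python
import random
import time
from urllib.parse import urlparse

class DomainThrottle:
    """Per-domain politeness delay with random jitter."""

    def __init__(self, base_delay=10.0, jitter=5.0):
        self.base_delay = base_delay
        self.jitter = jitter
        self.next_allowed = {}  # domain -> earliest time we may hit it again

    def wait(self, url):
        domain = urlparse(url).netloc
        now = time.monotonic()
        ready_at = self.next_allowed.get(domain, now)
        if ready_at > now:
            time.sleep(ready_at - now)
        # Schedule the next allowed request: base delay plus random jitter.
        self.next_allowed[domain] = (time.monotonic()
                                     + self.base_delay
                                     + random.uniform(0, self.jitter))

throttle = DomainThrottle()
for url in ["https://example.com/a", "https://example.org/b"]:
    throttle.wait(url)  # blocks only if this domain was hit too recently
    # fetch(url) ...
```

With an async crawler you would sleep per-domain tasks concurrently instead of blocking, but the bookkeeping is the same.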
The second method is to reduce the bandwidth your requests consume. You may need to write your own HTTP client with this feature. Or, on Linux, you can ask the network stack to do it for you - search for qdisc (Linux traffic control).
You can of course combine both methods.
Note that reducing bandwidth is not very friendly to sites with lots of small resources, because you increase the time you stay connected for each resource, occupying a network socket and probably a web server thread the whole time.
On the other hand, not reducing bandwidth is unfriendly to sites with lots of large resources, like MP3 files or videos, because you saturate their network (switches, routers, ISP connection) by downloading as fast as they can serve.
A smart implementation will download small files at full speed, sleeping between downloads, but reduce bandwidth for large files.
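A hedged sketch of that "own HTTP client" approach, pacing chunked reads so that small responses finish at full speed while large ones get throttled (the rate and chunk-size numbers are arbitrary):

```python
import time
import urllib.request

def fetch_limited(url, max_bytes_per_sec=100_000, chunk_size=16_384):
    """Download url, capping the effective bandwidth by pacing reads."""
    start = time.monotonic()
    received = 0
    body = bytearray()
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            body += chunk
            received += len(chunk)
            # If we're ahead of the target rate, sleep just enough
            # to fall back to it. Small files finish before this matters.
            expected = received / max_bytes_per_sec
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
    return bytes(body)
```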
With Chrome disabling Flash by default very soon, I need to start looking into HTML5 replacements for Flash/RTMP.
Currently with Flash + RTMP I have a live video stream with < 1-2 second delay.
I've experimented with MPEG-DASH, which seems to be the new industry standard for streaming, but it came up short: a 5-second delay was the best I could squeeze out of it.
For context, I am trying to allow users to control physical objects they can see on the stream, so anything above a couple of seconds of delay leads to a frustrating experience.
Are there any other techniques, or are there really no low-latency HTML5 solutions for live streaming yet?
Technologies and Requirements
The only web-based technology set really geared toward low latency is WebRTC. It's built for video conferencing. Codecs are tuned for low latency over quality. Bitrates are usually variable, opting for a stable connection over quality.
However, you don't necessarily need this low latency optimization for all of your users. In fact, from what I can gather on your requirements, low latency for everyone will hurt the user experience. While your users in control of the robot definitely need low latency video so they can reasonably control it, the users not in control don't have this requirement and can instead opt for reliable higher quality video.
How to Set it Up
In-Control Users to Robot Connection
Users controlling the robot will load a page that utilizes some WebRTC components for connecting to the camera and control server. To facilitate WebRTC connections, you need some sort of STUN server. To get around NAT and other firewall restrictions, you may need a TURN server. Both of these are usually built into Node.js-based WebRTC frameworks.
The cam/control server will also need to connect via WebRTC. Honestly, the easiest way to do this is to make your controlling application somewhat web-based. Since you're using Node.js already, check out NW.js or Electron. Both can take advantage of the WebRTC capabilities already built into Chromium, while still giving you the flexibility to do whatever you'd like with Node.js.
The in-control users and the cam/control server will make a peer-to-peer connection via WebRTC (or TURN server if required). From there, you'll want to open up a media channel as well as a data channel. The data side can be used to send your robot commands. The media channel will of course be used for the low latency video stream being sent back to the in-control users.
Again, it's important to note that the video that will be sent back will be optimized for latency, not quality. This sort of connection also ensures a fast response to your commands.
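For illustration only: the answer above assumes a Node.js stack, but the same peer shape can be sketched in Python with the aiortc library. The signaling transport, channel name, STUN URL, and command format below are all assumptions, not part of the original design:

```python
import asyncio
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

async def start_control_peer(send_to_signaling):
    """Open a data channel for robot commands; media is negotiated separately."""
    pc = RTCPeerConnection(RTCConfiguration(
        iceServers=[RTCIceServer(urls="stun:stun.l.google.com:19302")]))

    channel = pc.createDataChannel("robot-control")

    @channel.on("open")
    def on_open():
        channel.send("forward")  # hypothetical command format

    @channel.on("message")
    def on_message(message):
        print("ack from robot:", message)

    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # Hand the SDP to whatever signaling transport you use (e.g. a WebSocket).
    await send_to_signaling(pc.localDescription.sdp)
    return pc
```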
Video for Viewing Users
Users that are simply viewing the stream and not controlling the robot can use normal video distribution methods. It is actually very important for you to use an existing CDN and transcoding services, since you will have 10k-15k people watching the stream. With that many users, you're probably going to want your video in a couple different codecs, and certainly a whole array of bitrates. Distribution with DASH or HLS is easiest to work with at the moment, and frees you of Flash requirements.
You will probably also want to send your stream to social media services. This is another reason why it's important to start with a high quality HD stream. Those services will transcode your video again, reducing quality. If you start with good quality first, you'll end up with better quality in the end.
Metadata (chat, control signals, etc.)
It isn't clear from your requirements what sort of metadata you need, but for small message-based data you can use a WebSocket library such as Socket.IO. As you scale up to a few instances, you can use pub/sub, such as Redis, to distribute messaging across your servers.
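As a rough sketch, assuming the Python port of Socket.IO (python-socketio) and its Redis manager; the event name and Redis URL are made up:

```python
import socketio

# The Redis message queue lets several server instances broadcast
# to all connected clients, whichever instance they landed on.
mgr = socketio.AsyncRedisManager("redis://localhost:6379")
sio = socketio.AsyncServer(async_mode="asgi", client_manager=mgr)
app = socketio.ASGIApp(sio)  # run with e.g. `uvicorn app:app`

@sio.event
async def chat(sid, data):
    # Re-broadcast to every client on every instance, except the sender.
    await sio.emit("chat", data, skip_sid=sid)
```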
How you synchronize the metadata with the video depends a bit on what's in that metadata and what, specifically, the synchronization requirement is. Generally speaking, you can assume there will be a reasonable but unpredictable delay between the source video and the clients. After all, you cannot control how long they will buffer. Each device is different, each connection variable. What you can assume is that playback will begin with the first segment the client downloads. In other words, if a client starts buffering a video and begins playing it 2 seconds later, playback is 2 seconds behind the time the first request was made.
Detecting when playback actually begins client-side is possible. Since the server knows the timestamp of the video it sent to the client, it can inform the client of its offset relative to the beginning of video playback. Since you'll probably be using DASH or HLS and need to use MSE with AJAX to fetch the data anyway, you can use the response headers of each segment response to indicate the timestamp of the beginning of the segment. The client can then synchronize itself. Let me break this down step by step (a sketch of the offset calculation follows the list):
Client starts receiving metadata messages from application server.
Client requests the first video segment from the CDN.
CDN server replies with video segment. In the response headers, the Date: header can indicate the exact date/time for the start of the segment.
Client reads the response Date: header (let's say 2016-06-01 20:31:00). Client continues buffering the segments.
Client starts buffering/playback as normal.
Playback starts. The client can detect this state change on the player and knows that 00:00:00 on the video player is actually 2016-06-01 20:31:00.
Client displays metadata synchronized with the video, dropping any messages from previous times and buffering any for future times.
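A small sketch of that offset arithmetic, assuming the metadata messages carry wall-clock timestamps (the message field name is hypothetical):

```python
from datetime import timedelta
from email.utils import parsedate_to_datetime

def wall_clock_at(player_position_sec, first_segment_date_header):
    """Map a player position to wall-clock time.

    00:00:00 on the player corresponds to the Date: header of the
    first segment, so position t maps to segment_start + t.
    """
    segment_start = parsedate_to_datetime(first_segment_date_header)
    return segment_start + timedelta(seconds=player_position_sec)

def should_display(message, player_position_sec, first_segment_date_header):
    # Show a metadata message only once playback has caught up with it.
    # `message.timestamp` is an assumed field on your metadata messages.
    return message.timestamp <= wall_clock_at(player_position_sec,
                                              first_segment_date_header)
```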
This should meet your needs and give you the flexibility to do whatever you need to with your video going forward.
Why not [magic-technology-here]?
When you choose low latency, you lose quality. Quality comes from available bandwidth. Bandwidth efficiency comes from being able to buffer and optimize entire sequences of images when encoding. If you wanted perfect quality (lossless for each image), you would need a ton of bandwidth (gigabits per viewer). That's why we have these lossy codecs to begin with.
Since you don't actually need low latency for most of your viewers, it's better to optimize for quality for them.
For the 2 users out of 15,000 that do need low latency, we can optimize for low latency for them. They will get substandard video quality, but will be able to actively control a robot, which is awesome!
Always remember that the internet is a hostile place where nothing works quite as well as it should. System resources and bandwidth are constantly variable. That's actually why WebRTC auto-adjusts (as best as reasonable) to changing conditions.
Not all connections can keep up with low latency requirements. That's why every single low latency connection will experience drop-outs. The internet is packet-switched, not circuit-switched. There is no real dedicated bandwidth available.
Having a large buffer (a couple seconds) allows clients to survive momentary losses of connections. It's why CD players with anti-skip buffers were created, and sold very well. It's a far better user experience for those 15,000 users if the video works correctly. They don't have to know that they are 5-10 seconds behind the main stream, but they will definitely know if the video drops out every other second.
There are tradeoffs in every approach. I think what I have outlined here separates the concerns and gives you the best tradeoffs in each area. Please feel free to ask for clarification or ask follow-up questions in the comments.
So I want to be able to let one client distribute a live video feed to a lot of other clients who are only watching. Is it possible to use WebRTC for this? Or will I basically have to go through a service like Ustream?
It is possible, with a caveat. One client can establish many outgoing P2P connections, one to each viewer, and stream video to them directly. However, for more than a very small handful of viewers this will quickly saturate the origin's bandwidth and possibly its CPU. You would not be able to serve many viewers this way; however, this would work entirely without a middleman.*
* Save for the WebRTC connection negotiation (signaling) server.
To be able to serve a larger number of viewers you'll have to use a centralised distribution server. The source would send exactly one video stream to that server, and the server would stream it to anyone who's interested. This still requires that server to have a beefy CPU and a lot of bandwidth; but this is more realistic to scale than the client side.
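To make the topology concrete, here is a toy fan-out relay in Python using the websockets package (current single-argument handler API). It only illustrates the one-stream-in, N-streams-out shape; a real deployment would use a proper media server or SFU, not raw WebSockets, and the ports here are arbitrary:

```python
import asyncio
import websockets

viewers = set()

async def viewer_handler(ws):
    """Register a viewer and keep the connection until it closes."""
    viewers.add(ws)
    try:
        await ws.wait_closed()
    finally:
        viewers.discard(ws)

async def source_handler(ws):
    """One incoming stream, fanned out to every connected viewer."""
    async for frame in ws:
        websockets.broadcast(viewers, frame)

async def main():
    async with websockets.serve(source_handler, "0.0.0.0", 9001), \
               websockets.serve(viewer_handler, "0.0.0.0", 9000):
        await asyncio.Future()  # run forever

asyncio.run(main())
```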
You may need to spend a lot of money on such a server; have a look at c3.xlarge and better instances at AWS to get an idea. Using an established, cheap infrastructure like ustream may indeed be the more realistic option.
Many Comet implementations, like Caplin, provide scalable server solutions.
Following is one of the statistics from Caplin site:
A single instance of Caplin liberator can support up to 100,000 clients each receiving 1 message per second with an average latency of less than 7ms.
How does this compare to HTML5 WebSockets on any web server? Can anyone point me to any HTML5 WebSocket statistics?
Disclosure - I work for Caplin.
There is a bit of misinformation on this page, so I'd like to try to make it clearer.
I think we can split the methods we are talking about into three camps:
Comet HTTP polling - including long polling
Comet HTTP streaming - server to client messages use a single persistent socket with no HTTP header overhead after initial setup
Comet WebSocket - single bidirectional socket
I see them all as Comet, since Comet is just a paradigm. Since WebSocket came along, some people want to treat it as though it were different from, or a replacement for, Comet, but it is just another technique. Unless you are happy supporting only the latest browsers, you can't rely on WebSocket alone.
As far as performance is concerned, most benchmarks concentrate on server-to-client messages: number of users, number of messages per second, and the latency of those messages. For this scenario there is no fundamental difference between HTTP streaming and WebSocket - both write messages down an open socket with little or no header overhead.
Long polling can give good latency if the frequency of messages is low. However, if you have two messages (server to client) in quick succession then the second one will not arrive at the client until a new request is made after the first message is received.
I think someone touched on HTTP KeepAlive. This can obviously improve Long polling - you still have the overhead of the roundtrip and headers, but not always the socket creation.
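For concreteness, a minimal long-poll client loop might look like this (the /poll endpoint, its parameters, and the message format are hypothetical; the Session gives you HTTP keep-alive, so you pay headers per request but not socket setup):

```python
import requests

def handle(msg):
    print(msg)  # placeholder for real message handling

def long_poll(base_url):
    session = requests.Session()  # keep-alive: one TCP connection reused
    last_id = 0
    while True:
        # The server holds this request open until a message arrives
        # (or a server-side timeout expires and it returns empty).
        resp = session.get(f"{base_url}/poll",
                           params={"since": last_id}, timeout=35)
        for msg in resp.json():
            last_id = msg["id"]
            handle(msg)
```

Note how the loop exhibits exactly the limitation described above: a second message arriving while no request is outstanding waits until the client issues the next poll.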
WebSocket should improve upon HTTP streaming in scenarios where there are more client-to-server messages. Relating these scenarios to the real world creates slightly more arbitrary setups, compared to the simple-to-understand 'send lots of messages to lots of clients' scenario everyone can picture. For example, in a trading application it is easy to create a scenario that includes users executing trades (i.e., client-to-server messages), but the results are a bit less meaningful than the basic server-to-client scenarios. Traders are not trying to do 100 trades/sec, so you end up with results like '10,000 users receiving 100 messages/sec while also sending a client message once every 5 minutes'. For the client-to-server messages, the more interesting part is the latency, since the number of messages required is usually insignificant compared to the server-to-client traffic.
Another point someone made above, about 64k clients: you do not need to do anything clever to support more than 64k sockets on a server, other than configuring the number of file descriptors, etc. Trying to make 64k connections from a single client machine is totally different, since each one needs its own source port; on the server end, which is the listening end, it is fine, and you can happily go above 64k sockets.
In theory, WebSockets can scale much better than HTTP but there are some caveats and some ways to address those caveats too.
The complexity of handshake header processing for HTTP vs. WebSockets is about the same. The HTTP (and initial WebSocket) handshake can easily be over 1K of data (due to cookies, etc.). The important difference is that with HTTP, that handshake happens again for every message. Once a WebSocket connection is established, the overhead per message is only 2-14 bytes.
The excellent Jetty benchmark links posted in #David Titarenco's answer (1, 2) show that WebSockets can easily achieve more than an order of magnitude better latency when compared to Comet.
See this answer for more information on scaling of WebSockets vs HTTP.
Caveats:
WebSocket connections are long-lived unlike HTTP connections which are short-lived. This significantly reduces the overhead (no socket creation and management for every request/response), but it does mean that to scale a server above 64k separate simultaneous client hosts you will need to use tricks like multiple IP addresses on the same server.
Due to security concerns with web intermediaries, browser to server WebSocket messages have all payload data XOR masked. This adds some CPU utilization to the server to decode the messages. However, XOR is one of the most efficient operations in most CPU architectures and there is often hardware assist available. Server to browser messages are not masked and since many uses of WebSockets don't require large amounts of data sent from browser to server, this isn't a big issue.
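The unmasking step itself is tiny. Per RFC 6455, section 5.3, it is just a rolling 4-byte XOR:

```python
def unmask(payload: bytes, mask: bytes) -> bytes:
    """Unmask a client-to-server WebSocket payload (RFC 6455, section 5.3)."""
    return bytes(b ^ mask[i % 4] for i, b in enumerate(payload))

# XOR masking is its own inverse, so the same function masks and unmasks.
mask = b"\x12\x34\x56\x78"
masked = unmask(b"hello", mask)
assert unmask(masked, mask) == b"hello"
```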
It's hard to know how that compares to anything, because we don't know how big the (average) payload is. Under the hood (i.e., in how the server is implemented), HTTP streaming and WebSockets are virtually identical, apart from the initial handshake, which is obviously more complicated with HTTP.
If you wrote your own WebSocket server in C (à la Caplin), you could probably reach those numbers without too much difficulty. Most WebSocket implementations are done through existing server packages (like Jetty), so the comparison wouldn't really be fair.
Some benchmarks:
http://webtide.intalio.com/2011/09/cometd-2-4-0-websocket-benchmarks/
http://webtide.intalio.com/2011/08/prelim-cometd-websocket-benchmarks/
However, if you look at C event lib benchmarks, like libev and libevent, the numbers look significantly sexier:
http://libev.schmorp.de/bench.html
Ignoring any form of polling (which, as explained elsewhere, can introduce latency when the update rate is high), the three most common techniques for JavaScript streaming are:
WebSocket
Comet XHR/XDR streaming
Comet Forever IFrame
WebSocket is by far the cleanest solution, but there are still issues in terms of browser and network infrastructure not supporting it. The sooner it can be relied upon the better.
XHR/XDR & Forever IFrame are both fine for pushing data to clients from the server, but require various hacks to be made to work consistently across all browsers. In my experience these Comet approaches are always slightly slower than WebSockets not least because there is a lot more client side JavaScript code required to make it work - from the server's perspective, however, sending data over the wire happens at the same speed.
Here are some more WebSocket benchmark graphs, this time for our product my-Channels Nirvana.
Skip past the multicast and binary data graphs down to the last graph on the page (JavaScript High Update Rate)
In summary: the results show Nirvana WebSocket delivering 50 events/sec to 2,500 users with 800 microseconds of latency. At 5,000 users (a total of 250k events/sec streamed) the latency is 2 milliseconds.
I am curious if anyone has any information about the scalability of HTML WebSockets. For everything I've read it appears that every client will maintain an open line of communication with the server. I'm just wondering how that scales and how many open WebSocket connections a server can handle. Maybe leaving those connections open isn't a problem in reality, but it feels like it is.
In most ways WebSockets will probably scale better than AJAX/HTML requests. However, that doesn't mean WebSockets is a replacement for all uses of AJAX/HTML.
Each TCP connection by itself consumes very little in terms of server resources. Setting up the connection can be expensive, but maintaining an idle connection is almost free. The first limitation usually encountered is the maximum number of file descriptors (sockets consume file descriptors) that can be open simultaneously. This often defaults to 1024 but can easily be configured higher.
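On Unix-like systems you can inspect (and, within the hard limit, raise) that limit from inside the process itself, equivalent to `ulimit -n`:

```python
import resource

# The file descriptor limit caps how many sockets this process can hold open.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit up to the hard limit (no root needed for this much;
# raising the hard limit itself requires privileges or system configuration).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```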
Ever tried configuring a web server to support tens of thousands of simultaneous AJAX clients? Change those clients into WebSockets clients and it just might be feasible.
HTTP connections, while they don't create open files or consume port numbers for a long period, are more expensive in just about every other way:
Each HTTP connection carries a lot of baggage that isn't used most of the time: cookies, content-type, content-length, user-agent, server id, date, last-modified, etc. Once a WebSocket connection is established, only the data required by the application needs to be sent back and forth.
Typically, HTTP servers are configured to log the start and completion of every HTTP request, taking up disk and CPU time. It will become standard to log the start and completion of WebSocket sessions, but while a WebSocket connection is doing duplex transfer there won't be any additional logging overhead (except by the application/service if it is designed to do so).
Typically, interactive applications that use AJAX either poll continuously or use some sort of long-poll mechanism. WebSockets are a much cleaner (and lower-resource) way to implement an evented model, where the server and client notify each other when they have something to report, over the existing connection.
Most of the popular web servers in production have a pool of processes (or threads) for handling HTTP requests. As load increases, the size of the pool is increased, because each process/thread handles one HTTP request at a time. Each additional process/thread uses more memory, and creating new processes/threads is quite a bit more expensive than creating new socket connections (which those processes/threads still have to do). Most of the popular WebSocket server frameworks are going the evented route, which tends to scale and perform better (a minimal example follows this list).
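As a minimal illustration of the evented style, assuming the Python websockets package (current single-argument handler API): one process, one event loop, and each idle connection costs roughly a socket plus some buffers, with no thread per client.

```python
import asyncio
import websockets

async def echo(ws):
    # Each connection is a coroutine, not a thread; thousands can sit
    # idle on one event loop at negligible cost.
    async for message in ws:
        await ws.send(message)

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```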
The primary benefit of WebSockets will be lower latency connections for interactive web applications. It will scale better and consume less server resources than HTTP AJAX/long-poll (assuming the application/server is designed properly), but IMO lower latency is the primary benefit of WebSockets because it will enable new classes of web applications that are not possible with the current overhead and latency of AJAX/long-poll.
Once the WebSockets standard becomes more finalized and has broader support, it will make sense to use it for most new interactive web applications that need to communicate frequently with the server. For existing interactive web applications it will really depend on how well the current AJAX/long-poll model is working. The effort to convert will be non-trivial so in many cases the cost just won't be worth the benefit.
Update:
Useful link: 600k concurrent websocket connections on AWS using Node.js
Just a clarification: the number of client connections that a server can support has nothing to do with ports in this scenario, since the server is [typically] only listening for WS/WSS connections on one single port. I think what the other commenters meant to refer to were file descriptors. You can set the maximum number of file descriptors quite high, but then you have to watch out for socket buffer sizes adding up for each open TCP/IP socket. Here's some additional info: https://serverfault.com/questions/48717/practical-maximum-open-file-descriptors-ulimit-n-for-a-high-volume-system
As for decreased latency via WS vs. HTTP, it's true, since there's no more parsing of HTTP headers beyond the initial WS handshake. Plus, as more and more packets are successfully sent, the TCP congestion window widens, effectively increasing throughput and reducing the effective latency of larger messages.
Any single modern server is able to serve thousands of clients at once. Its HTTP server software just has to be event-driven (IOCP-oriented); we are no longer in the old Apache 'one connection = one thread/process' model. Even the HTTP server built into Windows (http.sys) is IOCP-oriented and very efficient (it runs in kernel mode). From this point of view, there won't be a lot of difference in scaling between WebSockets and regular HTTP connections. One TCP/IP connection uses few resources (much less than a thread), and modern OSes are optimized for handling many concurrent connections: WebSockets and HTTP are both just OSI layer 7 application protocols on top of TCP/IP.
But, from experiment, I've seen two main problems with WebSockets:
They are not supported by CDNs;
They have potential security issues.
So I would recommend the following, for any project:
Use WebSockets for client notifications only (with a fallback mechanism to long-polling - there are plenty of libraries around);
Use RESTful/JSON for all other data, with a CDN or proxies for caching.
In practice, full WebSockets applications do not scale well. Just use WebSockets for what they were designed to: push notifications from the server to the client.
About the potential problems of using WebSockets:
1. Consider using a CDN
Today (almost 4 years later), web scaling involves using Content Delivery Network (CDN) front ends, not only for static content (HTML, CSS, JS) but also for your (JSON) application data.
Of course, you won't put all your data in your CDN cache, but in practice a lot of common content doesn't change often. I suspect that 80% of your REST resources may be cacheable... Even a one-minute (or 30-second) CDN expiration timeout may be enough to give your central server a new life and enhance the application's responsiveness a lot, since CDNs can be geographically tuned...
To my knowledge, there is no WebSocket support in CDNs yet, and I suspect there never will be. WebSockets are stateful, whereas HTTP is stateless and therefore much more easily cached. In fact, to make WebSockets CDN-friendly, you may need to switch to a stateless RESTful approach... which would no longer be WebSockets.
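To sketch what CDN-cacheable application data means in practice, here is a hypothetical Flask endpoint; the route, data loader, and 30-second TTL are illustrative, mirroring the suggestion above:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def load_products():
    # Hypothetical data loader standing in for a real DB query.
    return [{"id": 1, "name": "widget"}]

@app.get("/api/products")
def products():
    resp = jsonify(load_products())
    # Shared caches (the CDN) may serve this response for 30 seconds
    # without revalidating, absorbing most reads of slow-changing data.
    resp.headers["Cache-Control"] = "public, max-age=30"
    return resp
```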
2. Security issues
WebSockets have potential security issues, especially regarding DoS attacks. For an illustration of new security vulnerabilities, see this set of slides and this WebKit ticket.
WebSockets evade packet inspection at the OSI layer 7 (application) level, which is pretty standard nowadays in business security. In effect, WebSockets obfuscate the transmission, so they may be a major security hole.
Think of it this way: which is cheaper, keeping an open connection, or opening a new connection for every request (with the negotiation overhead of doing so - remember, it's TCP)?
Of course it depends on the application, but for long-term realtime connections (e.g. an AJAX chat) it's far better to keep the connection open.
The max number of connections will be capped by the max number of free ports for the sockets.
No, it does not scale. It gives tremendous work to intermediate routers and switches. Then, on the server side, page faults reach high values (you have to keep all those descriptors in memory), and the time to bring a resource into the working area increases. These are mostly Java-written servers, and it might be faster to hold on to those gazillions of sockets than to destroy and recreate them.
When you run such a server on a machine, no other process can get anything done any more.
I'm the primary developer at a web firm, but often end up doing some sysadmin stuff, and was wondering what resources are available for learning how to troubleshoot slow page load times.
My experience with sysadmin tools is almost none. I'm relatively proficient at the Linux/Unix command line, but I have never used any kind of packet-tracking software and only know the basics of using dig for IP lookups. My experience with Apache and MySQL is mostly limited to the initial setup and configuration, and then just using them.
Are there any good books or websites that cover the topics needed to accurately diagnose website performance and bottlenecks? Or is the gamut of technologies too large, and is experience with those technologies typically how people get good at this?
Overall, there's no substitute for experience. A concept as broad as "slow web page load time" could be hitting a bottleneck in any number of different places:
Client is slow to resolve IP from domain.
Network between client and domain is slow or congested.
Server is slow to respond to request.
Requested page is large.
Requested resources embedded in page are large.
Page contains server-side code that requires significant processing.
Database is slow to respond.
Page manipulates a lot of data before responding.
Rendered page on client contains a lot of code and runs slowly.
etc.
For any given page, it's a matter of knowing where the bottlenecks could be and determining which one it is in order to address it. Having a full, end-to-end understanding of everything that goes into "loading a page" is essential. Identifying patterns of slow load times across multiple disparate requests will help narrow down the potential bottlenecks.
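As a starting point, a crude probe can at least split "slow" into DNS, connect, and fetch time. A rough sketch (the final fetch re-resolves and reconnects internally, so treat the numbers as approximate):

```python
import socket
import time
import urllib.request

def profile(host, url):
    """Time DNS lookup, TCP connect, and full HTTP fetch separately."""
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, 80)[0][4]   # DNS resolution
    t1 = time.perf_counter()
    s = socket.create_connection(addr[:2])       # bare TCP connect
    t2 = time.perf_counter()
    s.close()
    body = urllib.request.urlopen(url).read()    # server + transfer time
    t3 = time.perf_counter()
    print(f"dns={t1-t0:.3f}s connect={t2-t1:.3f}s "
          f"fetch={t3-t2:.3f}s size={len(body)} bytes")

profile("example.com", "http://example.com/")
```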
It's very much a case-by-case scenario.
Have you diagnosed exactly where the problems are? Slow load times could be caused by anything, such as:
too much going on in the DOM/JS:
a) not caching JS or other resources on the client
b) not minifying/compressing resources
c) making too many AJAX requests / doing silly things in the browser, like redrawing parts of the DOM that don't need it
too much going on on the server:
a) no caching of DB queries, no indexes
b) not handling long-running tasks asynchronously
c) improperly configured proxies/Apache servers
network issues - wish I knew more about this.
Step 1 is always to figure out where the worst slowdown is. Gather some metrics on the server to make sure it is doing easy things fast and hard things reasonably fast. Look in the browser to see how long loading resources takes. Look in the Chrome/Firebug profilers to see how much time the JavaScript takes to run.
You will probably find a bunch of things that could be improved. Prioritize and address the issues...