There are a lot of cool tools for building powerful "single-page" JavaScript websites nowadays. In my opinion, the right way to do this is to let the server act as an API (and nothing more) and let the client handle all of the HTML generation. The problem with this "pattern" is the lack of search engine support. I can think of two solutions:
When the user enters the website, let the server render the page exactly as the client would upon navigation. So if I go to http://example.com/my_path directly the server would render the same thing as the client would if I go to /my_path through pushState.
Let the server provide a special website only for the search engine bots. If a normal user visits http://example.com/my_path the server should give him a JavaScript heavy version of the website. But if the Google bot visits, the server should give it some minimal HTML with the content I want Google to index.
The first solution is discussed further here. I have been working on a website that does this, and it's not a very nice experience. It's not DRY, and in my case I had to use two different template engines for the client and the server.
I think I have seen the second solution for some good ol' Flash websites. I like this approach much more than the first one and with the right tool on the server it could be done quite painlessly.
So what I'm really wondering is the following:
Can you think of any better solution?
What are the disadvantages with the second solution? If Google in some way finds out that I'm not serving the exact same content for the Google bot as a regular user, would I then be punished in the search results?
While #2 might be "easier" for you as a developer, it only provides search engine crawling. And yes, if Google finds out you're serving different content, you might be penalized (I'm not an expert on that, but I have heard of it happening).
SEO and accessibility (not just for disabled persons, but accessibility via mobile devices, touch screen devices, and other non-standard computing / internet enabled platforms) have a similar underlying philosophy: semantically rich markup that is "accessible" (i.e. can be accessed, viewed, read, processed, or otherwise used) by all these different browsers. A screen reader, a search engine crawler, or a user with JavaScript enabled should all be able to use/index/understand your site's core functionality without issue.
pushState does not add to this burden, in my experience. It only brings what used to be an afterthought and "if we have time" to the forefront of web development.
What you describe in option #1 is usually the best way to go - but, like other accessibility and SEO issues, doing this with pushState in a JavaScript-heavy app requires up-front planning or it will become a significant burden. It should be baked into the page and application architecture from the start - retrofitting is painful and will cause more duplication than is necessary.
I've been working with pushState and SEO recently for a couple of different applications, and I found what I think is a good approach. It basically follows your item #1, but accounts for not duplicating html / templates.
Most of the info can be found in these two blog posts:
http://lostechies.com/derickbailey/2011/09/06/test-driving-backbone-views-with-jquery-templates-the-jasmine-gem-and-jasmine-jquery/
and
http://lostechies.com/derickbailey/2011/06/22/rendering-a-rails-partial-as-a-jquery-template/
The gist of it is that I use ERB or HAML templates (running Ruby on Rails, Sinatra, etc) for my server side render and to create the client side templates that Backbone can use, as well as for my Jasmine JavaScript specs. This cuts out the duplication of markup between the server side and the client side.
From there, you need to take a few additional steps to have your JavaScript work with the HTML that is rendered by the server - true progressive enhancement; taking the semantic markup that got delivered and enhancing it with JavaScript.
For example, I'm building an image gallery application with pushState. If you request /images/1 from the server, it will render the entire image gallery on the server and send all of the HTML, CSS and JavaScript down to your browser. If you have JavaScript disabled, it will work perfectly fine. Every action you take will request a different URL from the server, and the server will render all of the markup for your browser. If you have JavaScript enabled, though, the JavaScript will pick up the already rendered HTML along with a few variables generated by the server and take over from there.
Here's an example:
<form id="foo">
Name: <input id="name"><button id="say">Say My Name!</button>
</form>
After the server renders this, the JavaScript would pick it up (using a Backbone.js view in this example)
var FooView = Backbone.View.extend({
  events: {
    "change #name": "setName",
    "click #say": "sayName"
  },
  setName: function(e){
    var name = $(e.currentTarget).val();
    this.model.set({name: name});
  },
  sayName: function(e){
    e.preventDefault();
    var name = this.model.get("name");
    alert("Hello " + name);
  },
  render: function(){
    // do some rendering here, for when this is just running JavaScript
  }
});

$(function(){
  var model = new MyModel(); // MyModel: any Backbone model with a "name" attribute
  var view = new FooView({
    model: model,
    el: $("#foo")
  });
});
This is a very simple example, but I think it gets the point across.
When I instantiate the view after the page loads, I'm providing the existing content of the form that was rendered by the server to the view instance, as the el for the view. I am not calling render or having the view generate an el for me when the first view is loaded. I have a render method available for after the view is up and running and the page is all JavaScript. This lets me re-render the view later if I need to.
Clicking the "Say My Name" button with JavaScript enabled will cause an alert box. Without JavaScript, it would post back to the server and the server could render the name to an html element somewhere.
Edit
Consider a more complex example, where you have a list that needs to be attached (from the comments below this)
Say you have a list of users in a <ul> tag. This list was rendered by the server when the browser made a request, and the result looks something like:
<ul id="user-list">
<li data-id="1">Bob
<li data-id="2">Mary
<li data-id="3">Frank
<li data-id="4">Jane
</ul>
Now you need to loop through this list and attach a Backbone view and model to each of the <li> items. With the use of the data-id attribute, you can find the model that each tag comes from easily. You'll then need a collection view and item view that is smart enough to attach itself to this html.
var UserListView = Backbone.View.extend({
  attach: function(){
    this.setElement($("#user-list"));
    var collection = this.collection; // `this` refers to the <li> inside the each callback
    this.$("li").each(function(index){
      var userEl = $(this);
      var id = userEl.attr("data-id");
      var user = collection.get(id);
      new UserView({
        model: user,
        el: userEl
      });
    });
  }
});
var UserView = Backbone.View.extend({
  initialize: function(){
    this.model.bind("change:name", this.updateName, this);
  },
  updateName: function(model, val){
    this.$el.text(val);
  }
});
var userData = {...};
var userList = new UserCollection(userData);
var userListView = new UserListView({collection: userList});
userListView.attach();
In this example, the UserListView will loop through all of the <li> tags and attach a view object with the correct model for each one. It sets up an event handler for the model's name change event and updates the displayed text of the element when a change occurs.
This kind of process - taking the HTML that the server rendered and having my JavaScript take over and run it - is a great way to get things rolling for SEO, accessibility, and pushState support.
Hope that helps.
I think you need this: http://code.google.com/web/ajaxcrawling/
You can also install a special backend that "renders" your page by running JavaScript on the server, and then serves that to Google.
Combine both things and you have a solution without programming things twice. (As long as your app is fully controllable via anchor fragments.)
So, it seems that the main concern is being DRY
If you're using pushState, have your server send the same exact code for all URLs (that don't contain a file extension to serve images, etc.): "/mydir/myfile", "/myotherdir/myotherfile" or root "/" -- all requests receive the same exact code. You need to have some kind of URL rewrite engine. You can also serve a tiny bit of HTML and have the rest come from your CDN (using require.js to manage dependencies -- see https://stackoverflow.com/a/13813102/1595913).
(Test a link's validity by converting the link to your URL scheme and checking for the existence of content by querying a static or dynamic source. If it's not valid, send a 404 response.)
When the request is not from a Google bot, you just process it normally.
If the request is from a Google bot, you use phantom.js - a headless WebKit browser ("A headless browser is simply a full-featured web browser with no visual interface.") - to render the HTML and JavaScript on the server and send the Google bot the resulting HTML. As the bot parses the HTML it can hit your other "pushState" links, e.g. /somepage; the server rewrites the URL to your application file, loads it in phantom.js, and the resulting HTML is sent to the bot, and so on...
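A hedged node.js/Express sketch of that flow (the bot pattern, port, and render.js PhantomJS script are all assumptions, not a fixed recipe):

// Crawlers get a PhantomJS-rendered snapshot; everyone else gets the app shell.
var express = require('express');
var execFile = require('child_process').execFile;
var app = express();

var BOT_PATTERN = /googlebot|bingbot|yandex/i; // assumed User-Agent check

app.get('*', function (req, res, next) {
  if (!BOT_PATTERN.test(req.headers['user-agent'] || '')) {
    return res.sendFile(__dirname + '/public/index.html'); // the normal SPA shell
  }
  // render.js is a hypothetical PhantomJS script that opens the URL, waits
  // for the client-side rendering to finish, and prints the final HTML.
  execFile('phantomjs', ['render.js', 'http://localhost:3000' + req.url],
    function (err, stdout) {
      if (err) { return next(err); }
      res.send(stdout);
    });
});

app.listen(8080);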
For your HTML I'm assuming you're using normal links with some kind of hijacking (e.g. with backbone.js, see https://stackoverflow.com/a/9331734/1595913)
To avoid confusion with any links, separate your API code that serves JSON into a separate subdomain, e.g. api.mysite.com
To improve performance you can pre-process your site's pages for search engines ahead of time, during off hours, by creating static versions of the pages using the same phantom.js mechanism, and then serve the static pages to the Google bots. Preprocessing can be done with a simple app that parses <a> tags. In this case handling 404s is easier, since you can simply check for the existence of a static file whose name contains the URL path.
If you use #! hash bang syntax for your site links, a similar scenario applies, except that the server's rewrite engine would look for _escaped_fragment_ in the URL and format the URL back to your URL scheme.
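For the hash-bang case, the mapping is mechanical; here is a small node.js sketch of it (the URL shapes are assumptions):

// Map Google's _escaped_fragment_ form back to the app's own URL scheme,
// e.g. "/?_escaped_fragment_=%2Fsomepage" -> "/somepage".
var url = require('url');

function crawlerPathFor(requestUrl) {
  var parsed = url.parse(requestUrl, true); // true: parse the query string too
  var fragment = parsed.query._escaped_fragment_;
  if (fragment === undefined) { return null; } // not a crawler request
  return fragment || '/'; // the query parser has already percent-decoded it
}

crawlerPathFor('/?_escaped_fragment_=%2Fsomepage'); // -> "/somepage"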
There are a couple of integrations of node.js with phantom.js on GitHub, and you can use node.js as the web server to produce the HTML output.
Here are a couple of examples using phantom.js for seo:
http://backbonetutorials.com/seo-for-single-page-apps/
http://thedigitalself.com/blog/seo-and-javascript-with-phantomjs-server-side-rendering
If you're using Rails, try poirot. It's a gem that makes it dead simple to reuse mustache or handlebars templates client and server side.
Create a file in your views like _some_thingy.html.mustache.
Render server side:
<%= render :partial => 'some_thingy', object: my_model %>
Put the template in your head for client side use:
<%= template_include_tag 'some_thingy' %>
Render client side:
html = poirot.someThingy(my_model)
To take a slightly different angle, your second solution would be the correct one in terms of accessibility: you would be providing alternative content to users who cannot use JavaScript (those with screen readers, etc.).
This would automatically add the benefits of SEO and, in my opinion, would not be seen as a 'naughty' technique by Google.
Interesting. I have been searching around for viable solutions but it seems to be quite problematic.
I was actually leaning more towards your 2nd approach:
Let the server provide a special website only for the search engine
bots. If a normal user visits http://example.com/my_path the server
should give him a JavaScript heavy version of the website. But if the
Google bot visits, the server should give it some minimal HTML with
the content I want Google to index.
Here's my take on solving the problem. Although it is not confirmed to work, it might provide some insight or ideas for other developers.
Assume you're using a JS framework that supports "push state" functionality, and your backend framework is Ruby on Rails. You have a simple blog site and you would like search engines to index all your article index and show pages.
Let's say you have your routes set up like this:
resources :articles
match "*path", "main#index"
Ensure that every server-side controller renders the same template that your client-side framework requires to run (html/css/javascript/etc). If no controller matches the request (in this example we only have a RESTful set of actions for the ArticlesController), then just match anything else, render the template, and let the client-side framework handle the routing. The only difference between hitting a controller and hitting the wildcard matcher is the ability to render content based on the requested URL for JavaScript-disabled devices.
From what I understand, it is a bad idea to render content that isn't visible to browsers: if Google indexes it, people come through Google to visit a given page, find there isn't any content, and then you're probably going to be penalised. What comes to mind is rendering content in a div node that you hide with display: none in CSS.
However, I'm pretty sure it doesn't matter if you simply do this:
<div id="no-js">
<h1><%= #article.title %></h1>
<p><%= #article.description %></p>
<p><%= #article.content %></p>
</div>
And then using JavaScript, which doesn't get run when a JavaScript-disabled device opens the page:
$("#no-js").remove() # jQuery
This way, for Google, and for anyone with JavaScript-disabled devices, they would see the raw/static content. So the content is physically there and is visible to anyone with JavaScript-disabled devices.
But, when a user visits the same page and actually has JavaScript enabled, the #no-js node will be removed so it doesn't clutter up your application. Then your client-side framework will handle the request through its router and display what a user should see when JavaScript is enabled.
I think this might be a valid and fairly easy technique to use. Although that might depend on the complexity of your website/application.
Though, please correct me if it isn't. Just thought I'd share my thoughts.
Use NodeJS on the server side, browserify your client-side code, and route each HTTP request's URI (except for static resources) through a server-side client to provide the first 'bootsnap' (a snapshot of the page and its state). Use something like jsdom to handle jQuery DOM operations on the server. After the bootsnap is returned, set up the websocket connection. It's probably best to differentiate between a websocket client and a server-side client by making some kind of wrapper connection on the client side (the server-side client can communicate with the server directly). I've been working on something like this: https://github.com/jvanveen/rnet/
Use Google Closure Templates to render pages. They compile to JavaScript or Java, so it is easy to render the page on either the client or the server side. On the first encounter with every client, render the HTML and add the JavaScript as a link in the header. The crawler will read only the HTML, but the browser will execute your script. All subsequent requests from the browser can be made against the API to minimize the traffic.
This might help you : https://github.com/sharjeel619/SPA-SEO
Logic
A browser requests your single page application from the server, which is going to be loaded from a single index.html file.
You program some intermediary server code which intercepts the client request and differentiates whether the request came from a browser or some social crawler bot.
If the request came from some crawler bot, make an API call to your back-end server, gather the data you need, fill that data into HTML meta tags and return those tags in string format back to the client.
If the request didn't come from some crawler bot, then simply return the index.html file from the build or dist folder of your single page application.
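For illustration, a hedged Express sketch of that intermediary (the bot pattern and the fetchPageData helper are assumptions, not part of the linked project):

// Social crawler bots get a tiny HTML document with filled-in meta tags;
// real browsers get the SPA's index.html from the dist folder.
var express = require('express');
var path = require('path');
var app = express();

var BOTS = /facebookexternalhit|twitterbot|slackbot|linkedinbot/i;

app.get('*', function (req, res, next) {
  if (!BOTS.test(req.headers['user-agent'] || '')) {
    return res.sendFile(path.join(__dirname, 'dist', 'index.html'));
  }
  fetchPageData(req.path, function (err, data) { // hypothetical API helper
    if (err) { return next(err); }
    res.send('<!DOCTYPE html><html><head>' +
      '<title>' + data.title + '</title>' +
      '<meta name="description" content="' + data.description + '">' +
      '<meta property="og:title" content="' + data.title + '">' +
      '</head><body></body></html>');
  });
});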
How can I disable "Save Video As..." from a browser's right-click menu to prevent clients from downloading a video?
Are there more complete solutions that prevent the client from accessing a file path directly?
You can't.
That's because that's what browsers were designed to do: Serve content. But you can make it harder to download.
Convenient "Solution"
I'd just upload my video to a third-party video site, like YouTube or Vimeo. They have good video management tools, optimize playback for the device, and they make efforts to prevent their videos from being ripped, all with zero effort on your end.
Workaround 1, Disabling "The Right Click"
You could disable the contextmenu event, aka "the right click". That would prevent your regular skiddie from blatantly ripping your video by right clicking and choosing Save As. But then they could just disable JS to get around this, or find the video source via the browser's debugger. Plus, this is bad UX: there are lots of legitimate things in a context menu besides Save As.
Workaround 2, Video Player Libraries
Use custom video player libraries. Most of them implement video players that customize the context menu to your liking, so you don't get the default browser context menu. And if they do serve a menu item similar to Save As, you can disable it. But again, this is a JS workaround; its weaknesses are similar to Workaround 1.
Workaround 3, HTTP Live Streaming
Another way to do it is to serve the video using HTTP Live Streaming. What it essentially does is chop the video into chunks and serve them one after the other. This is how most streaming sites serve video. So even if you manage to Save As, you only save a chunk, not the whole video. It would take a bit more effort to gather all the chunks and stitch them together with dedicated software.
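For instance, with the hls.js library the player loads a playlist and then short media segments, never one whole file; a minimal sketch (URLs and ids are placeholders):

// The .m3u8 playlist lists short segments, so "Save As" on the <video>
// element never yields the complete video file.
var video = document.getElementById('player'); // assumed <video id="player">
if (Hls.isSupported()) {
  var hls = new Hls();
  hls.loadSource('https://example.com/video/master.m3u8'); // placeholder URL
  hls.attachMedia(video);
}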
Workaround 4, Painting on Canvas
Another technique is to paint a <video> onto a <canvas>. In this technique, with a bit of JavaScript, what you see on the page is a <canvas> element rendering frames from a hidden <video>. And because it's a <canvas>, the context menu will use an <img>'s menu, not a <video>'s. You'll get Save Image As instead of Save Video As.
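A minimal sketch of that technique (the element ids are assumptions):

// Mirror a hidden <video> onto a visible <canvas>, frame by frame.
var video = document.getElementById('source-video');  // hidden via CSS
var canvas = document.getElementById('player-canvas');
var ctx = canvas.getContext('2d');

video.addEventListener('play', function () {
  (function drawFrame() {
    if (video.paused || video.ended) { return; }
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(drawFrame); // repaint on each animation frame
  })();
});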
Workaround 5, CSRF Tokens
You could also use CSRF tokens to your advantage. You'd have your server send down a token on the page. You then use that token to fetch your video. Your server checks that the token is valid before it serves the video, and returns HTTP 401 otherwise. The idea is that you can only ever get a video by having a token, which you can only get if you came from the page, not by visiting the video URL directly.
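One possible shape of that check, sketched with Express (the store, route names, and expiry are assumptions):

// The page hands out a short-lived token; the video route refuses requests
// that don't carry a currently valid one.
var express = require('express');
var crypto = require('crypto');
var app = express();

var validTokens = {}; // in-memory for the sketch; use a real store in production

app.get('/watch', function (req, res) {
  var token = crypto.randomBytes(16).toString('hex');
  validTokens[token] = Date.now() + 60 * 1000; // assumed: valid for one minute
  res.send('<video src="/video.mp4?token=' + token + '" controls></video>');
});

app.get('/video.mp4', function (req, res) {
  var expiry = validTokens[req.query.token];
  if (!expiry || expiry < Date.now()) { return res.sendStatus(401); }
  res.sendFile(__dirname + '/media/video.mp4'); // assumed file location
});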
This is a simple solution for those wishing to simply remove the right-click "save" option from HTML5 videos:
$(document).ready(function(){
  $('#videoElementID').bind('contextmenu', function() { return false; });
});
Yes, you can do this in three steps:
Place the files you want to protect in a subdirectory of the directory where your code is running.
www.foo.com/player.html www.foo.com/videos/video.mp4
Save a file in that subdirectory named ".htaccess" and add the lines below.
www.foo.com/videos/.htaccess
#Contents of .htaccess
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http://foo.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.foo.com/.*$ [NC]
RewriteRule \.(mp4|mp3|avi)$ - [F]
Now the source link is useless, but we still need to make sure any user attempting to download the file cannot be served it directly.
For a more complete solution, now serve the video with a flash player (or html canvas) and never link to the video directly. To just remove the right click menu, add to your HTML:
<body oncontextmenu="return false;">
The Result:
www.foo.com/player.html will correctly play video, but if you visit www.foo.com/videos/video.mp4:
Error Code 403: FORBIDDEN
This will work for direct download, cURL, hotlinking, you name it.
This is a complete answer to the two questions asked and not an answer to the question: "can I stop a user from downloading a video they have already downloaded."
Simple answer,
YOU CAN'T
If they are watching your video, they already have it.
You can slow them down, but you can't stop them.
The best way that I usually use is very simple: I fully disable the context menu on the whole page, with pure HTML + JavaScript:
<body oncontextmenu="return false;">
That's it! I do that because you can always see the source by right clicking.
OK, you say: "I can just use the browser's view source" - and that's true, but we start from the fact that you CAN'T stop people downloading HTML5 videos.
As a client-side developer, I recommend using a blob URL.
A blob URL is a client-side URL which refers to a binary object:
<video id="id" width="320" height="240" type='video/mp4' controls > </video>
In the HTML, leave your video src blank, and in JS fetch the video file using AJAX, making sure the response type is blob:
window.onload = function() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'mov_bbb.mp4', true);
  xhr.responseType = 'blob'; // important
  xhr.onload = function(e) {
    if (this.status == 200) {
      console.log("loaded");
      var blob = this.response;
      var video = document.getElementById('id');
      video.oncanplaythrough = function() {
        console.log("Can play through video without stopping");
        URL.revokeObjectURL(this.src);
      };
      video.src = URL.createObjectURL(blob);
      video.load();
    }
  };
  xhr.send();
};
Note: This method is not recommended for large files.
EDIT
Use cross-origin blocking and header token checking to prevent direct downloading.
If the video is delivered via an API, use a different HTTP method (PUT/POST) instead of GET.
PHP sends the HTML5 video tag together with a session, where the key is a random string and the value is the filename.
ini_set('session.use_cookies',1);
session_start();
$ogv=uniqid();
$_SESSION[$ogv]='myVideo.ogv';
$webm=uniqid();
$_SESSION[$webm]='myVideo.webm';
echo '<video autoplay="autoplay">'
  .'<source src="video.php?video='.$ogv.'" type="video/ogg">'
  .'<source src="video.php?video='.$webm.'" type="video/webm">'
  .'</video>';
Now PHP is asked to send the video. PHP recovers the filename, deletes the session, and sends the video instantly. Additionally, all the 'no cache' and mime-type headers must be present.
ini_set('session.use_cookies',1);
session_start();
$file='myhiddenvideos/'.$_SESSION[$_GET['video']];
$_SESSION=array();
$params = session_get_cookie_params();
setcookie(session_name(),'', time()-42000,$params["path"],$params["domain"],
  $params["secure"], $params["httponly"]);
if(!file_exists($file) or $file==='' or !is_readable($file)){
  header('HTTP/1.1 404 File not found',true);
  exit;
}
readfile($file);
exit;
Now if the user copies the URL into a new tab or uses the context menu, he will have no luck.
We could make that not so easy by hiding the context menu, like this:
<video oncontextmenu="return false;" controls>
<source src="https://yoursite.com/yourvideo.mp4" >
</video>
You can use
<video src="..." ... controlsList="nodownload">
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/controlsList
It doesn't prevent saving the video, but it does remove the download button and the "Save as" option in the context menu.
We ended up using AWS CloudFront with expiring URLs. The video will load, but by the time the user right clicks and chooses Save As, the video URL they initially received has expired. Do a search for CloudFront Origin Access Identity.
Producing the video URL requires a key pair, which can be created in the AWS CLI. FYI, this is not my code, but it works great!
$resource = 'http://cdn.yourwebsite.com/videos/yourvideourl.mp4';
$timeout = 4;
//This comes from key pair you generated for cloudfront
$keyPairId = "AKAJSDHFKASWERASDF";
$expires = time() + $timeout; //Time out in seconds
$json = '{"Statement":[{"Resource":"'.$resource.'","Condition" {"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';
//Read Cloudfront Private Key Pair
$fp=fopen("/absolute/path/to/your/cloudfront_privatekey.pem","r");
$priv_key=fread($fp,8192);
fclose($fp);
//Create the private key
$key = openssl_get_privatekey($priv_key);
if(!$key)
{
    echo "<p>Failed to load private key!</p>";
    return;
}
//Sign the policy with the private key
if(!openssl_sign($json, $signed_policy, $key, OPENSSL_ALGO_SHA1))
{
    echo '<p>Failed to sign policy: '.openssl_error_string().'</p>';
    return;
}
//Create url safe signed policy
$base64_signed_policy = base64_encode($signed_policy);
$signature = str_replace(array('+','=','/'), array('-','_','~'), $base64_signed_policy);
//Construct the URL
$url = $resource.'?Expires='.$expires.'&Signature='.$signature.'&Key-Pair-Id='.$keyPairId;
return '<div class="videowrapper" ><video autoplay controls style="width:100%!important;height:auto!important;"><source src="'.$url.'" type="video/mp4">Your browser does not support the video tag.</video></div>';
You can at least stop the non-tech-savvy people from using the right-click context menu to download your video. You can disable the context menu for any element using the oncontextmenu attribute.
oncontextmenu="return false;"
This works for the body element (whole page) or just a single video using it inside the video tag.
<video oncontextmenu="return false;" controls>...</video>
First of all, realise it is impossible to completely prevent a video from being downloaded; all you can do is make it more difficult. I.e. you hide the source of the video.
A web browser temporarily downloads the video into a buffer, so if you could prevent the download you would also be preventing the video from being viewed.
You should also know that less than 1% of the world's population will be able to understand the source code, which makes it rather safe anyway. That does not mean you should not hide it in the source as well - you should.
You should not disable right click, and even less should you display a message saying "You cannot save this video for copyright reasons. Sorry about that.", as suggested in this answer.
This can be very annoying and confusing for the user. Apart from that, if they disable JavaScript in their browser they will be able to right click and save anyway.
Here is a CSS trick you could use:
video {
pointer-events: none;
}
CSS cannot be turned off in the browser, so this protects your video without actually disabling right click. One problem, however, is that the built-in controls cannot be enabled either; in other words, they must be set to false. If you are going to implement your own play/pause functions or use an API that has buttons separate from the video tag, then this is a feasible option.
controls also has a download button, so using it is not such a good idea anyway.
Here is a JSFiddle example.
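If you do take the pointer-events route, a custom play/pause control can be minimal; a sketch (the element ids are assumptions):

// External play/pause toggle for a video whose own controls are unusable
// because pointer-events: none blocks clicks on it.
var video = document.getElementById('my-video');
document.getElementById('toggle').addEventListener('click', function () {
  if (video.paused) { video.play(); } else { video.pause(); }
});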
If you are going to disable right click using JavaScript, then also store the source of the video in JavaScript. That way, if the user disables JavaScript (allowing right click), the video will not load (it also hides the video source a little better).
From TxRegex's answer:
<video oncontextmenu="return false;" controls>
<source type="video/mp4" id="video">
</video>
Now add the video via JavaScript:
document.getElementById("video").src = "https://www.w3schools.com/html/mov_bbb.mp4";
Functional JSFiddle
Another way to prevent right click involves using the embed tag. This does not, however, provide the controls to run the video, so they would need to be implemented in JavaScript:
<embed src="https://www.w3schools.com/html/mov_bbb.mp4"></embed>
Well, you can't protect it 100%, but you can make it harder. I came across these methods while studying the protection schemes of PluralSight and BestDotNetTraining. Nevertheless, none of these methods stopped me from downloading what I wanted, but I had a hard time tuning my downloader to get past their protection.
In addition to the other methods mentioned for disabling the context menu, the user is still able to use third-party tools like Internet Download Manager or similar software to download the videos. The protection methods I'm explaining here are there to mitigate those third-party tools.
The requirement for all of these methods is to block a user as soon as you identify that someone is downloading your videos. That way they are able to download only one or two videos before you ban them from accessing your website.
Disclaimer
I will not accept any responsibility if someone abuses these methods or uses them to harm others or the websites that I mentioned as examples. This is just for sharing knowledge to help you protect your intellectual product.
Generate links with an expiry
The requirement for this is to create a download link per user, which can easily be handled by Azure Blob Storage or Amazon S3. You can create a download link whose expiry timestamp is twice the video's length. Then you need to capture that video link and the time it was requested; this is necessary for the next method. The catch with this method is that you generate the download link only when the user clicks the play button.
On the play button's event, you send a request to the server, get the link, and update the source.
Throttle the video request rate
Then you monitor how fast the user requests the next video. If the user requests a download link too fast, block them right away. You can't make this threshold too big, because you could mistakenly block users who are just browsing or skimming through the videos.
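A small sketch of that check (the threshold and the in-memory store are assumptions):

// Remember when each user last asked for a video link; flag anyone who
// asks again faster than a human plausibly could.
var lastRequestAt = {};      // userId -> timestamp; use a real store in production
var MIN_GAP_MS = 30 * 1000;  // assumed threshold: 30 seconds between links

function isRequestAllowed(userId) {
  var now = Date.now();
  var previous = lastRequestAt[userId];
  lastRequestAt[userId] = now;
  return previous === undefined || (now - previous) >= MIN_GAP_MS;
}

// The caller bans the user when this returns false often enough.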
Enable HTTP Range
Use a JS library like video.js to play your video; you also need to return an Accept-Ranges header (Azure Blob Storage supports this out of the box). This way the browser starts to download the video chunk by chunk, usually 32 bytes at a time. Then you need to listen to video.js's timeupdate event and update your server with the percentage of the video that has been watched. The percentage watched can't be more than the percentage delivered, and if you are delivering video content without receiving any percentage updates, you can block the user, because they are surely downloading.
Implementing this is tricky because the user can skip the video forwards or backwards, so be conscious of this when you implement it.
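On the serving side, a hedged Express sketch of a range-capable endpoint (the file layout is an assumption; res.sendFile answers Range: requests with 206 Partial Content out of the box):

var express = require('express');
var path = require('path');
var app = express();

app.get('/video/:name', function (req, res) {
  res.set('Accept-Ranges', 'bytes'); // advertise range support to the player
  // basename() strips directory parts, guarding against path traversal
  res.sendFile(path.join(__dirname, 'media', path.basename(req.params.name)));
});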
This is how BestDotnetTraining handles the timeupdate event:
myPlayer.ready(function () {
  //var player = this;
  this.src({
    type: "video/mp4",
    src: videoURL
  });
  if (videoId) {
    myPlayer.play();
    this.on('timeupdate', function () {
      var currentPercent = parseInt(100 * myPlayer.currentTime() / myPlayer.duration()); // calculate as a percentage
      if (currentPercent % 5 == 0) {
        // send percentage to server
        SaveVideoDurationWatched(currentPercent, videoId);
      }
    });
  }
});
Anyway, the user can work around this by using a download method that downloads the file through streaming. C# can do this almost out of the box, and for node.js you can use the request module. Then you start a stopwatch, listen for each packet received, and compare the total bytes received to the total size. This way you can calculate a percentage, and the time spent getting to that percentage. Then you use Thread.Sleep() or something like it to delay the thread by however long you would have to wait if you were watching the video normally. Also, before the sleep, the code can call the server and update the percentage received, so the server thinks the user is actually watching the video.
The calculation is something like this: if, for example, you calculate that you have received 1 percent so far, then you can calculate how long the download thread should sleep. This way you can't download a video faster than its actual length; if a video is 24 minutes long, it will take 24 minutes to download it (plus the threshold we put in the first method).
original video length: 24 minutes
24 min * 60,000 = 1,440,000 milliseconds
1,440,000 / 100 = 14,400 milliseconds needed to download one percent
Check the browser agent
When you are serving the webpage and serving the video link, or accepting a progress update request, you can look at the browser's User-Agent. If it differs between requests, ban the user.
Just be aware that some old browsers don't pass this information. You should ignore this check when the User-Agent is absent from both the video request and the webpage request, but if one request has it and the other doesn't, you should ban the user.
To work around this, the user can manually set the User-Agent header to the same one as the headless browser they are using to capture the download link.
Check the referer header
When the referer is something other than your host URL or the page URL on which you serve the video, you can ban the user, because they put the download link in another tab or another application. You can even do that for the progress update request.
The requirement for this is to have a mapping of each video to the page that shows it. You can create a convention or pattern to determine what the URL should be; it's up to your design.
To work around it, the user can manually set the referer header equal to the download page's URL when downloading the videos.
Calculate the time between requests
If you receive many requests where the time between them is the same, you should block the user. Put this in place to capture the time between the video-link generation requests. If the intervals are the same (plus or minus some threshold) and it happens more than some number of times, you can ban the user: when a bot crawls your website or videos, it usually has the same sleep time between its requests. So if you receive a request, for example, every 1.3 minutes (plus or minus some deviation), you raise an alarm. For this you can use some statistical calculation to know the deviation between requests.
To work around this, the user can put a random sleep time between the requests.
Sample code
I have a repo, PluralSight-Downloader, that does this about halfway. I created it almost 5 years ago, and because I wrote it for study purposes and personal use only, it hasn't received any updates so far; I'm not going to update it or make it easier to work with. It's just an example of how this can be done.
The
<body oncontextmenu="return false;">
no longer works. As of June 2018, Chrome and Opera have a submenu on the timeline that allows a straight download, so the user doesn't need to right click to download the video. Interestingly, Firefox and Edge don't have this ...
Using a service such as Vimeo: sign in to Vimeo > go to Video > Settings > Privacy > mark as Secured, and also select the embed domains. Once the embed domains are set, it will not allow anyone to embed the video or display it from the browser unless connecting from the domains specified. So, if you have a page on your server that is secured and loads the Vimeo player in an iframe, this makes it pretty difficult to get around.
+1 simple and cross-browser way:
You can also put a transparent picture over the video with CSS z-index and opacity.
Users will then see "Save picture as" instead of "Save video" in the context menu.
Here's what I did:
function noRightClick() {
  alert("You cannot save this video for copyright reasons. Sorry about that.");
}
<body oncontextmenu="noRightClick();">
<video>
<source src="http://calumchilds.com/videos/big_buck_bunny.mp4" type="video/mp4">
</video>
</body>
This also works for images, text and pretty much anything. However, you can still access the "Inspect" and "View source" tools through keyboard shortcuts. (As the answer at the top says, you can't stop it entirely.) But you can try to put up barriers to stop them.
Here's a complete solution for disabling download including right click > Save as... in the context menu:
<video oncontextmenu="return false;" controlsList="nodownload">
</video>
Short answer: encrypt the link like YouTube does. If you don't know how, ask YouTube/Google how they do it. (Just in case you want to get straight to the point.)
I would like to point out that this is possible, because YouTube does it, and if they can, so can any other website. It isn't the browser either, because I tested it on a couple of browsers, such as Microsoft Edge and Internet Explorer, so there is a way to disable it. I tried looking for an answer, because if YouTube can, there has to be a way, and the only way to see how they do it is to look into YouTube's scripts, which I am doing now. I also checked whether it was a custom context menu, and it isn't: the context menu overflows the inspect-element panel, it never creates a new class, and it is impossible to actually access inspect element with JavaScript. You can tell when you double right-click a YouTube video that it pops up Chrome's own context menu. Besides, YouTube wouldn't add that function in. I am doing research and looking through the source of YouTube, so I will be back if I find the answer. If anyone says you can't, well, they didn't do the research I have. The only way to download YouTube videos is through a video downloader.
Okay, I did my research, and my conclusion is that you can disable it, except there is no JavaScript involved: you have to be able to encrypt the links to the video to disable it, because I think any browser won't show the option if it can't find the source. When I opened a YouTube video link it showed as "blob:https://www.youtube.com/e5c4808e-297e-451f-80da-3e838caa1275", so it is encrypting it so it cannot be saved. You need to know PHP for that, but like the answer you picked about making it harder, YouTube makes it the hardest by heavily encrypting the video link. You need to be an advanced PHP programmer; if that's not you, then take the answer you picked as best on making it hard to download. But if you do know PHP, then heavily encrypt the video link so it can only be read on your site. I don't know how to explain exactly how they do it, but they did, and there is a way. The way YouTube encrypts its videos is quite smart, so if you want to know how, just ask YouTube/Google how they do it. Hope this helps, although you already picked a best answer. So encrypting the link is best, in short terms.
controlsList prevents actions such as download and fullscreen, without adding any other JavaScript:
<video width="400" controlsList="nofullscreen nodownload" controls>
Try this to disable the download option for videos:
<video src="" controls controlsList="nodownload"></video>
It seems like streaming the video through a websocket is a viable option - stream the frames and draw them on a canvas.
Video streaming over websockets using JavaScript
I think that would provide another level of protection, making it more difficult for the client to acquire the video, and of course it solves your problem with the "Save video as..." right-click context menu option (overkill?!).
If you are looking for a complete solution/plugin, I've found this very useful
https://github.com/mediaelement/mediaelement
Prevent HTML5 video from being downloaded (right-click saved)
<video type="video/mp4" width="330" height="300" controlsList="nodownload" oncontextmenu="return false;" controls></video>
You can't.
For example, people can use APIs such as desktopCapture or getUserMedia that allow users to record the screen, a window, or a tab.
People can use them to write the stream to a canvas and then concatenate all the chunks together to get the video.
So there is no way to stop them from downloading the video if they really want it.
I found a good answer to a similar problem, using PHP instead of JavaScript for better security.
I want to play test.mp4 in the user's browser using the browser's default player (just as though URL/test.mp4 had been clicked on a Web page), but requiring a password, which is either supplied by the user or internally by software.
Here is a brief sketch of the idea. It starts with the user going to (running) a program I wrote called secure.php to play test.mp4.
The file test.mp4 is in a subdirectory ("secureSubdirectory") that contains a .htaccess containing "Require all denied". This immediately prevents any direct access through a URL.
When secure.php is run, it supplies a password (or queries the user for a password), then does a POST to itself that includes the password, verifies it using a salt, using the PHP commands:
$Hash=base64_encode(hash_hmac("sha256",$Pwd,$Salt,true));
$HashesAreSame=hash_equals($Hash,$GoalHash);
then tests for test.mp4 existing, and executes the following PHP code to return the test.mp4 file as a byte stream to the user's browser:
header("Content-Type: video/mp4");
echo file_get_contents("secureSubdirectory/$path");
exit;
The video shows as expected. If I then right-click on the page showing the video and try saving the video, the resulting file will just contain an error string, like "Error: password not found", since test.mp4 is being queried using the plain secure.php URL, not through POST with the correct password.
Of course, you can obtain the response payload (the video bytes) using the Network option of the browser debugging tools, but this could be prevented by the PHP program or the .htaccess file if the browser provided an option to prevent access to the debugging tools.
I can't imagine a failure case, but I'd be very interested if one exists, as simple but perfect authorization is a very rare thing. (Note that, since this method relies on a password, associating it with the user is not a secure way to authenticate, since the user can accidentally or deliberately publish or share the password.)
@Clayton-Graul had what I was looking for, except I needed the CoffeeScript version for a site using AngularJS. Just in case you need that too, here's what you put in the AngularJS controller in question:
# This is how to we do JQuery ready() dom stuff
$ ->
# let's hide those annoying download video options.
# of course anyone who knows how can still download
# the video, but hey... more power to 'em.
$('#my-video').bind 'contextmenu', ->
false
"strange things are afoot at the circle k" (it's true)
Everything you see in the browser is downloaded content. The question being alluded to is how to save that content in the browser. To view content, client browsers download from content servers and make it available locally.
One solution becoming popular is to keep (ephemeral) content in the browser only, and for a limited time, in a way that it cannot be saved directly. Blobs are one implementation of this, with the added benefit of reducing bandwidth and storage overheads, since the content is stored as binary objects.
The short expiry of the content makes persistent storage almost impossible for ordinary users, since new content is displayed before the user can attempt to save the expired content.
I have a page where there are several search / filter button which, when clicked, refresh the contents of a list below through AJAX.
In the process, I'm modifying history (through pushstate) so that the new filtered page is bookmarkable, and so the back button works. I'm also listening for the popstate event, to react to Back.
My code looks more or less like this:
window.addEventListener("popstate", function(ev) {
if (!window.history_ready) { return; } // Avoid the one time it runs on load
refreshFilter(window.location.href, true);
});
refreshFilter: function(newURL, backButtonPressed){
$.ajax({ url: newURL}).done( blah );
if (!backButtonPressed) {
window.history_ready = true;
history.pushState(null, null, newURL);
}
}
This works wonderfully, except for one weird case...
User is in page "A"
They click a link to go to this page that plays with history (let's call it "B")
They run a couple of filters, then press Back a few times, so they're back at the initial state of "B"
They click Back once again, which sends them back to "A"
At this time, if they press Forward, instead of making a request to the server again for page "B", the browser simply displays a bunch of JSON code as the page contents (this JSON is the response to one of my AJAX requests to filter stuff).
At least in latest Chrome
Why is this happening and how can I avoid it?
Chrome caches the pages you visit, and when you go back or forward it uses the cache to display the page quickly. If the URL you are using to retrieve JSON from the server via AJAX is the same one Chrome would hit for the page, then it's possible Chrome is picking that page from the cache, and instead of being the nice HTML it's just a JSON dump.
There is a cache option for $.ajax:
$.ajax({ cache: false, url: newURL})
See http://api.jquery.com/jquery.ajax/
@pupeno is right, but to give a more solution-oriented answer: you need to differentiate the JSON from the HTML in the routes your server has.
I know two ways of doing this:
1) If you call /users you get HTML, if you call /users.json you get JSON.
2) If you call /users you get HTML, if you call /api/users you get JSON.
I like 1 a lot better, but it depends on the web framework whether either is used by default or whether you configure it yourself.
1 is used in Ruby on Rails; 2 is used in other frameworks too.
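For illustration, a sketch of option 2 with a node.js/Express server (framework and paths are assumptions); the point is that the HTML page and the JSON data never share a URL, so the back/forward cache can't confuse them:

var express = require('express');
var path = require('path');
var app = express();

app.get('/users', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html')); // the HTML page
});

app.get('/api/users', function (req, res) {
  res.json([{ id: 1, name: 'Bob' }]); // JSON lives on its own URL
});

app.listen(3000);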
I have done a lot of research into the issue; I'm not blindly asking, but I can't grasp this concept. My website contains a single index.php file that loads data into divs via AJAX, so the page never refreshes and the URL never changes. I now know I need links to certain content, using URL rewriting. The site contains posts, so for instance all posts are pulled from the db and 'site.com' is the URL. But I want to be able to do 'site.com/post-one' and have that link go to that post. I am thinking first I need to append a variable to the end of the URL when the dynamic content for that post is loaded, such as site.com?post=1, so from there I can use URL rewriting. The problem I'm having is this: since the content for post 1 would be loaded into a div, if I went this route and implemented the URL rewrite, would site.com/post-1 now just pull the data dynamically as well, or does the page have to be static?
Your problem is that this would necessarily require the use of a hash, rather than a GET variable (because GET requires a page refresh; a hash doesn't). This is done via the window.location.hash variable in JavaScript, which is updated whenever the URL's content after a # changes (ex: if I were to change http://site.com/#lol to http://site.com/#lmao, window.location.hash would change from #lol to #lmao). Hashes are commonly used in Ajax-based navigation sites, such as Twitter (I think Google is implementing it as well).
If you're using jQuery, you should try the jQuery BBQ plugin, which will allow you to do things such as hash change detection (otherwise, you will have to implement some kind of similar engine yourself, because it will be needed for any kind of hash-based navigation).
You should remember, though, that this doesn't have anything to do with mod_rewrite, so you shouldn't need to add any kind of rewrite rules. All your work (fetching data, etcetera) would be done through Ajax XMLHttpRequests, rather than common HTTP requests.
Using this, you could make your URL look like http://site.com/#!/post/1 (in whichever format you'd like, such as http://site.com/#!/p/this-is-the-posts-title) instead of http://site.com/?post=1, although you would be missing out on http://site.com/post/1.
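For illustration, the core of such hash-based routing without a plugin might look like this (loadContentFor is a hypothetical function that performs your Ajax request):

// React to hash changes: "#!/post/1" becomes the route "/post/1".
window.addEventListener('hashchange', function () {
  var route = window.location.hash.replace(/^#!?/, '');
  loadContentFor(route); // hypothetical: fetches the post and fills the div
});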
Your idea of using mod_rewrite seems sound. If you used a rewrite directive that passes part of the URI as a GET variable into your index.php, couldn't you throw some code into the top of your index file which checks for that data and then dynamically generates the ajax to dump into the divs?
Ex:
RewriteRule ^(.*)$ index.php?post=$1 [L]
Index.php:
<?php if (isset($_GET['post'])) { ?>
<script>
$.ajax({
  // grab post content and dump it into the div
});
</script>
<?php } ?>
Also, don't forget to sanitize your $_GET data before processing it.