I'm hosting my own server for my website, and I would like to store all the resources (images, scripts and the like) on my ISP's FTP instead, to reduce unnecessary strain on my server, and because my ISP's network speed ought to be quicker than the connection they provide me. The fonts and the JavaScript files work fine, but when I try the following in my CSS:
background-image:url("-url-");
The image does not display on my website, and in Chrome I get this:
XMLHttpRequest cannot load -url-. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://my-domain-name/' is therefore not allowed access.
What does this mean?
EDIT:
Actually it does seem to display, but I don't think it is supposed to, and the images load in a weird manner, so I think I might just host the files on my own server after all.
I don't know the format of your URL (I assume you are linking the image over HTTP, not FTP) or the nature of your ISP's server, but the error suggests it does not allow hotlinking of images (i.e. cross-domain access to the files is not allowed by the server).
What is the service you are using to host the images? If it isn't intended for this purpose and you want to reduce strain on your own server, it would make more sense to use a proper CDN service like AWS (they have a free tier, I think).
I am creating a small internet site for my personal stuff. I want to put a few links there to, e.g., FTP resources or an SVN server.
The important thing is that the FTP server has the same IP address as the page. I don't want to hard-code my site's address in the link, because I consider that an anti-pattern. Instead, I would like to tell the browser that the resource is on the current server, whichever server that is.
Let's say that the current page is https://example.com/stuff/index.html. If I create a link <a href="/things.index.html">things</a>, it will lead to https://example.com/things.index.html.
However, if I add a protocol identifier to a URL, it won't work. For example, <a href="ftp:///files/thingies.tar.gz">download</a> will lead to ftp:///files/thingies.tar.gz, not to ftp://example.com/files/thingies.tar.gz.
What magic code should I put in place of the question marks:
<a href="ftp://???/files/thingies.tar.gz">download thingies</a>
UPDATE:
I would prefer a client-side solution. My server machine has very little processing power and RAM.
In PHP (a server-side language), if you'd like to turn
ftp:///files/thingies.tar.gz
to
ftp://example.com/files/thingies.tar.gz
considering example.com is the domain where your server is hosted, just do
echo 'ftp://' . $_SERVER['HTTP_HOST'] . '/files/thingies.tar.gz';
or, in your specific case
<a href="ftp://<?php echo $_SERVER['HTTP_HOST']; ?>/files/thingies.tar.gz">download thingies</a>
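Since the update says a client-side solution is preferred, here is a minimal sketch that does the same rewriting in the browser with plain JavaScript; the ftp-link class and data-ftp-path attribute are made-up names for this example:

<a class="ftp-link" data-ftp-path="/files/thingies.tar.gz">download thingies</a>

<script>
// Build ftp:// URLs against whatever host served the current page,
// so nothing is hard-coded and the server does no extra work.
document.querySelectorAll("a.ftp-link").forEach(function (link) {
    link.href = "ftp://" + window.location.hostname + link.dataset.ftpPath;
});
</script>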
I am creating a virtual-reality, 360-degree video website using the krpano HTML5 player.
This was going great until I tested on Safari and realised it didn't work. The reason is that Safari does not support CORS (cross-origin resource sharing) for videos used through WebGL.
To clarify, if my videos were on the same server as my application files it would work, but because my files are hosted on Amazon S3, the requests are cross-origin. Now I'm unsure what to do, because I have built my application on DigitalOcean, which connects to my Amazon S3 bucket, but I cannot afford to upgrade my droplet just to get the storage I need (around 100 GB to start, growing to terabytes as my video collection gets bigger).
So does anyone know a way to make it seem like the video is not coming from a different origin, or alternatively anything I can do to get past this obstacle?
Is there any way that I could set up Amazon S3 and Amazon EC2 so that they don't see each other as cross-origin?
EDIT:
I load my videos like this:
<script>
    function showVideo() {
        embedpano({
            swf: "/krpano/krpano.swf",
            xml: "/krpano/videopano.xml",
            target: "pano",
            html5: "only",
        });
    }
</script>
This then calls my XML file, which calls the video file:
<krpano>
    <!-- add the video sources and play the video -->
    <action name="add_video_sources">
        videointerface_addsource('medium', 'https://s3-eu-west-1.amazonaws.com/myamazonbucket/Shoots/2016/06/the-first-video/videos/high.mp4|https://s3-eu-west-1.amazonaws.com/myama…ideos/high.webm');
        videointerface_play('medium');
    </action>
</krpano>
I don't know exactly how the krpano core works; I assume the JavaScript gets the URLs from the XML file and then makes a request to pull them in.
#datasage mentions in comments that CloudFront is a common solution. I don't know if this is what he was thinking of, but it certainly will work.
I described using this solution to solve a different problem, in detail, on Server Fault. In that case, the question was about integrating the main site and "/blog/*" from a different server under a single domain name, making a unified web site.
This is exactly the same thing you need, for a different reason.
Create a CloudFront distribution, setting the alternate domain name to your site's name.
Create two (or more) origins, pointing to your dynamic content server and your static content server.
Use one of them as the default, initially handling all possible path patterns (*, the default cache behavior), and then carve out appropriate paths to point to the other origin (e.g. /asset/* might point to the bucket, while the default behavior points to the application itself), as sketched below.
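As a rough sketch, this is approximately what the resulting distribution config amounts to. It is heavily abbreviated (the real CloudFront API wraps these lists in additional structure and requires many more fields), and the origin IDs and domain names are made-up examples:

{
    "Origins": [
        { "Id": "app-origin",    "DomainName": "app.example.com" },
        { "Id": "bucket-origin", "DomainName": "example-bucket.s3.amazonaws.com" }
    ],
    "DefaultCacheBehavior": { "TargetOriginId": "app-origin" },
    "CacheBehaviors": [
        { "PathPattern": "/asset/*", "TargetOriginId": "bucket-origin" }
    ]
}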
In this case, CloudFront is being used for something other than its primary purpose as a CDN: we're leveraging it as a reverse proxy that can selectively route requests to multiple back-ends based on the path of the request, without the browser being aware that there are in fact multiple origins, because everything sits behind the single hostname that points to CloudFront (which, of course, you'll need to point to CloudFront in DNS).
The caching features can be disabled if you don't yet want, need, or fully understand them. On requests to the application itself, disabling caching is easily done by selecting the option to forward all request headers to the origin in any cache behavior that sends requests to the application. For your objects in S3, be sure you set appropriate Cache-Control headers on the objects when uploading them, or add them after uploading, using the S3 console.
Side bonus: using CloudFront lets you easily enable SSL for the entire site, with a free SSL certificate from AWS Certificate Manager (ACM). The certificate needs to be created in the us-east-1 region of ACM regardless of where your bucket is, because that is the region CloudFront uses when fetching the cert from ACM. This is a provisioning rule only and has no performance implications if your bucket is in another region.
You need to allow your host in the CORS configuration of your AWS S3 bucket.
Refer to Add CORS Configuration in Editing Bucket Permissions.
After that, every request you make to the S3 bucket files will have the CORS headers set.
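For reference, a minimal configuration of the kind the S3 console accepts might look like this (older console versions take XML as below, newer ones take the equivalent JSON; substitute your own origin, and treat this as a permissive starting point rather than a final policy):

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>https://www.example.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>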
In case you need to serve the content via the AWS CloudFront CDN, follow these steps; ignore them if you serve content directly via S3:
Go to AWS CloudFront Console.
Select your CloudFront Distribution.
Go to Behaviors Tab.
Create a behavior (for the files which need to be served with CORS headers).
Enter the Path Pattern, and select the Protocol and Methods.
Select All in the Forward Headers option.
Save the behavior.
If needed, invalidate the CloudFront edge caches by running an invalidation request for the files you just allowed for CORS.
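If you use the AWS CLI, such an invalidation can be issued like this (the distribution ID and path are placeholders):

aws cloudfront create-invalidation \
    --distribution-id E1234EXAMPLE \
    --paths "/videos/*"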
I have the same problem as in this post, context.getImageData() on localhost?, but instead of localhost I am working on an https site, so this provokes the problem with the canvas. Is there a solution for this case?
These problems arise when you are trying to get data from images loaded across different domains.
One way to solve this (if you are the one in control of serving the images) is by enabling CORS (Cross-Origin Resource Sharing). What this does is basically add an Access-Control-Allow-Origin header to the served image.
You can read all about it in http://www.w3.org/TR/cors/. Your use case is described specifically in http://www.w3.org/TR/cors/#use-cases, section "Not tainting the canvas element".
There is a great resource for understanding how to enable CORS at http://enable-cors.org/. If you're running an Apache instance, the easiest way is to use a .htaccess file to enable the headers.
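As a minimal sketch, assuming an Apache server with mod_headers available, the .htaccess could look like this (the * allows any origin; you can restrict it to your own domain):

<IfModule mod_headers.c>
    # Send a permissive CORS header with every response from this directory
    Header set Access-Control-Allow-Origin "*"
</IfModule>

Note that the page must also request the image with CORS in mind, e.g. <img src="..." crossorigin="anonymous">, or by setting img.crossOrigin = "anonymous" in JavaScript before assigning src; otherwise the canvas is tainted even when the header is present.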
However, if you are not in control of the served images, then you may need to ask permission to use them and probably copy them to your own server.
Not 100% sure if this is the right SE site to ask this, so feel free to move/warn me.
If I have a site www.mysite.com with a form on it and define its action as "http://www.mysite.com/handlepost" instead of "/handlepost", does it still get parsed as a local address by Apache? That is, will Apache figure out that I'm trying to send my form data to the same server the form resides on and do an automatic local post, or will the data be forced to make a round trip, going online, looking up the domain and actually being sent as an outside request?
Apache does not look at this information; it's your browser which does this job.
On the Apache side, the job is only outputting content (HTML in this case); Apache does not care how you write your URLs in that content.
On the browser side, the page is analysed and GET requests (images, etc.) are sent automatically to every collected URL. The browser SHOULD know that a relative URL /foo is in fact http://currentsite/foo (or it's a really dumb browser); that is its job. And then it's its job to push the request to the right server (and to know whether it should make a new DNS request, build a new HTTP connection, reuse an existing open connection, open several connections (usually a maximum of 3 per DNS name), etc.). Apache does nothing in this part of the job.
So why are absolute URLs bad? Not because of the work the browser has to do to handle them (which is in fact nothing; its job is transforming relative URLs into absolute ones). It's because if your web application uses only relative URLs, the admin of the web server has far more options for proxying your application. For example:
he will be able to serve your web application on several different DNS domains
(and then make the browser think it's talking to several servers, parallelizing static file downloads)
he could also use this multi-domain setup to serve the application to different customers
he could provide HTTPS access for the external network and plain HTTP access on a local name for the local network
And if your application builds absolute URLs itself, all these tasks become much harder.
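To make this concrete, a minimal sketch using the form from the question (the /handlepost path is taken from there):

<!-- The browser resolves the relative action against the page's own
     origin, so this posts to http://www.mysite.com/handlepost when served
     from www.mysite.com, and keeps working unchanged if an admin later
     proxies the site under a different hostname. -->
<form action="/handlepost" method="post">
    <input type="text" name="message">
    <button type="submit">Send</button>
</form>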
Don't use absolute URLs. I feel it will do a round trip in your case, since you used an absolute URL for the action part, so it is better to use relative URLs.
This is for an in-house system that is required to be set up this way.
I need specific web files, for example all images, to be manually pre-installed on a proxy server and never downloaded from the web server.
When the browser requests the page, the only thing sent from the web server to the proxy should be the plain HTML page.
I would then like the images the HTML page uses to be grabbed from the proxy every time, for the complete render in the browser.
Is there a name for this set-up? I have read about caching, but I do not want the images downloaded even once. Also, how would the HTML page know to use the images from the proxy?
How would I set up such a thing? I do not have a proxy server set up yet, so I do not know what platform I will be using, but suggestions are appreciated.
Thank you.
Well, I think that you could, for example, use Privoxy and tell it to redirect some requests for images to a local server, or even have it rewrite the img src=" attribute inline using regexes.
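A rough sketch of that second approach, assuming Privoxy's filter and action file syntax (the hostnames and paths are made-up placeholders; check the Privoxy manual before relying on this):

# user.filter: rewrite image URLs so they point at the proxy host
FILTER: local-images Point img tags at the locally pre-installed copies
s@src="http://webserver.example/images/@src="http://proxyhost.example/images/@g

# user.action: apply the filter to pages coming from the web server
{+filter{local-images}}
webserver.example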