In AS3 / Flash, Can a URLRequest / Loader Use "//" Without Specifying an http or https Protocol? - actionscript-3

I have a Flash app running that loads remote data, and we're transitioning to SSL (https://).
I am wondering: is it possible to just use "//" as you would in JavaScript, so requests automatically assume the parent page's protocol (http or https)?
Thanks
Update: it seems you can use a URL of the form "//www.something.com", but instead of assuming the page's protocol it just defaults to "http://www.something.com".
For now I'm working around this by checking whether the SWF itself was loaded from an SSL URL, something like this:
if (loaderInfo.url.indexOf("https:") == 0) {
    // replace http: with https:
}
Unfortunately it's inconvenient to do that everywhere you handle a remote asset URL. Just loading everything with a matching protocol would be a lot nicer... like "//www.someurl.com/wouldbenicer.xml", especially since JS and HTML both work that way.
Blah.
Any ideas?

"//" relative proto doesn't work in flash the way the browser works with urls in HTML, instead it defaults to http://
Workaround:
Check the URL of the SWF to see whether the URL about to be loaded should be modified to use the https:// protocol:
if (loaderInfo.url.indexOf("https:") == 0) {
    // replace http: with https:
} else {
    // replace https: with http:
}

Building upon OG Sean's answer, here's a wrapper function that'll manage protocol-relative URLs and default to HTTP.
function relativeURL(url:String):String {
    // Use the SWF's own protocol; defaults to http: when the SWF was not loaded over https.
    var scheme:String = (loaderInfo.url.indexOf("https:") == 0) ? "https:" : "http:";
    return scheme + url.replace(/^https?:/, "");
}
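For example (a hypothetical call; the asset URL is the one wished for in the question):
var assetUrl = relativeURL("//www.someurl.com/wouldbenicer.xml");
// "https://www.someurl.com/wouldbenicer.xml" when the SWF was loaded over https, "http://..." otherwise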

Using a string that contains a slash and adding it twice:
var singlesplash:String = "/";
var doublesplash:String = singlesplash + singlesplash;
myurl = "http:" + doublesplash + "www.google.com";
or
myurl = "http:/" + "/www.google.com";

Related

HTTP: redirect "/foo" to "/foo/"

I have a web page currently hosted at https://somesite.zz/foo/. When loaded, that returns https://somesite.zz/foo/index.html, which loads various CSS and JavaScript using relative paths:
<link rel="stylesheet" href="default.css"/>
<script type="text/javascript" src="scripts.js"></script>
Since the paths are relative, the browser loads https://somesite.zz/foo/default.css and https://somesite.zz/foo/scripts.js and everyone is happy. The problem is when someone omits the trailing slash and loads https://somesite.zz/foo. The server still returns the contents of https://somesite.zz/foo/index.html, but now the browser doesn't realize that it is in a subdirectory so the relative paths are wrong: it tries to load https://somesite.zz/default.css and https://somesite.zz/scripts.js. Those 404, of course, so nothing works.
How can I redirect /foo to /foo/ (or otherwise make them equivalent)? I can't use JS in index.html because of my CSP: any JS needs to be in an external file, which I can't load because the paths are wrong. So this probably can't be solved using JS. This site is hosted on AWS CloudFront + S3; is there a way to configure such a redirect there?
With a little more research, I discovered that request-triggered CloudFront Functions can return responses, so I created the following function with the "viewer request" trigger:
function handler(event) {
    var request = event.request;
    var uri = request.uri;
    if (uri.endsWith('/')) {
        request.uri += "index.html";
    } else {
        var leafIdx = uri.lastIndexOf('/');
        if ((-1 != leafIdx) && (!uri.substring(leafIdx + 1).includes('.'))) {
            return {
                statusCode: 301,
                statusDescription: "Redirect",
                headers: {
                    "location": { "value": request.uri + "/index.html" }
                }
            };
        }
    }
    return request;
}
The first branch of that function does the typical silent "redirect" to index.html. In the second, I check whether the last path component appears to be a file or a directory based on the presence or absence of a '.' in the name; anything without a '.' is interpreted as a directory and a 301 redirect is issued. Since this is a real HTTP redirect, the browser knows to change its path, avoiding my relative-path problem.
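To illustrate the behaviour of the function above (the paths are hypothetical):
// GET /foo/            -> request.uri rewritten to /foo/index.html, no redirect
// GET /foo             -> 301 response with Location: /foo/index.html
// GET /foo/default.css -> passed through unchanged (the last path component contains a '.')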

Removing '#' from URL using htaccess in HTML web page

I am trying to remove the # in the following URL: (www.example.com/#section1). How could I do this using the .htaccess file? I am sure this could be done with a regular expression, but I am not sure how.
This is what I have written in the .htaccess file: RewriteRule ^[#].
Thanks for your help!
Hashes (#) are not sent to the server, so you can't manipulate them on the server.
If you really need to remove them, you would have to use JavaScript on each page.
// Wait for the page to load, then call 'removeHash'.
document.addEventListener("DOMContentLoaded", removeHash);
window.addEventListener("load", removeHash);
function removeHash() {
    // If there is no hash, don't do anything.
    if (!location.hash) return;
    // <protocol>//<domain><pathname><search><hash>
    // Build a URL for the page, sans the domain and hash.
    var url = location.pathname;
    if (location.search) {
        // Include the query string, if any (location.search already starts with '?').
        url += location.search;
    }
    // Replace the loaded URL with the built URL, without reloading the page.
    history.replaceState('', document.title, url);
}

HREF with https link in a frame doesn't work [duplicate]

I am trying to put google.com into an iframe on my website. This works with many other websites, including Yahoo, but it does not work with Google: it just shows a blank iframe. Why does it not render? Are there any tricks to do that?
I have tried the usual way of showing a website in an iframe, like this:
<iframe name="I1" id="if1" width="100%"
height="254" style="visibility:visible"
src="http://www.google.com"></iframe>
The google.com page does not render in the iframe, it's just blank. What is going on?
The reason for this is that Google sends an "X-Frame-Options: SAMEORIGIN" response header. This option prevents the browser from displaying iframes that are not hosted on the same domain as the parent page.
See: Mozilla Developer Network - The X-Frame-Options response header
IT IS NOT IMPOSSIBLE.
Use a reverse proxy server to handle the different-origin problem. I use Nginx with proxy_pass to change the URL of the page; you can give it a try.
Another way is to write a simple proxy page that runs on your own server: request the page from Google and output the result to the client.
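A minimal sketch of that second approach, assuming Node.js 18+ with its built-in fetch (the port and the proxied URL are illustrative, not from the original answer):
const http = require("http");

http.createServer(async (req, res) => {
    // Fetch the page server-side, where X-Frame-Options does not apply,
    // then serve the markup from our own origin so an iframe can display it.
    const upstream = await fetch("https://www.google.com/search?igu=1");
    res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
    res.end(await upstream.text());
}).listen(8080);
Relative links and scripts inside the proxied page will still resolve against your origin, so a real proxy also has to rewrite or forward those.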
As has been outlined here, because Google sends an "X-Frame-Options: SAMEORIGIN" response header, you cannot simply set the src to "http://www.google.com" in an iframe.
If you want to embed Google in an iframe, you can do what sudopeople suggested in a comment above and use a Google custom search link like the following. This worked great for me (leave 'q=' blank to start with a blank search).
<iframe id="if1" width="100%" height="254" style="visibility:visible" src="http://www.google.com/custom?q=&btnG=Search"></iframe>
EDIT:
This answer no longer works. For information and instructions on how to replace an iframe search with a Google custom search element, check out:
https://support.google.com/customsearch/answer/2641279
You can use https://www.google.com/search?igu=1 instead of https://google.com/; it works. The issue is that google.com sets an X-Frame-Options header policy, and browsers follow those policies.
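For example, pointing the iframe from the question at that URL (a sketch reusing the same attributes):
<iframe name="I1" id="if1" width="100%" height="254" src="https://www.google.com/search?igu=1"></iframe>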
You can solve this using Google CSE (Custom Search Engine), which can easily be inserted into an iframe. You can create your own search engine that searches selected sites or the entire Google index.
The results can be styled as you prefer, including something close to Google's own style. Google CSE works for both web and image search.
google.php
<script>
    (function() {
        var cx = 'xxxxxxxxxxxxxxxxxxxxxx';
        var gcse = document.createElement('script');
        gcse.type = 'text/javascript';
        gcse.async = true;
        gcse.src = 'https://cse.google.com/cse.js?cx=' + cx;
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(gcse, s);
    })();
</script>
<gcse:searchresults-only></gcse:searchresults-only>
yourpage.php
<iframe src="google.php?q=<?php echo urlencode('your query'); ?>"></iframe>
You can bypass X-Frame-Options in an iframe using YQL.
var iframe = document.getElementsByTagName('iframe')[0];
var url = iframe.src;

var getData = function (data) {
    if (data && data.query && data.query.results && data.query.results.resources && data.query.results.resources.content && data.query.results.resources.status == 200) {
        loadHTML(data.query.results.resources.content);
    } else if (data && data.error && data.error.description) {
        loadHTML(data.error.description);
    } else {
        loadHTML('Error: Cannot load ' + url);
    }
};

var loadURL = function (src) {
    url = src;
    var script = document.createElement('script');
    script.src = 'https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20data.headers%20where%20url%3D%22' + encodeURIComponent(url) + '%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=getData';
    document.body.appendChild(script);
};

var loadHTML = function (html) {
    iframe.src = 'about:blank';
    iframe.contentWindow.document.open();
    iframe.contentWindow.document.write(html.replace(/<head>/i, '<head><base href="' + url + '"><scr' + 'ipt>document.addEventListener("click", function(e) { if(e.target && e.target.nodeName == "A") { e.preventDefault(); parent.loadURL(e.target.href); } });</scr' + 'ipt>'));
    iframe.contentWindow.document.close();
};

loadURL(iframe.src);
<iframe src="http://www.google.co.in" width="500" height="300"></iframe>
Run it here: http://jsfiddle.net/2gou4yen/
Code from here: How Can I Bypass the X-Frame-Options: SAMEORIGIN HTTP Header?
If you are using PHP you can use file_get_contents() to print the content:
<?php
$page = file_get_contents('https://www.google.com');
echo $page;
?>
This will print whatever content the file_get_contents() function gets from that URL.
Please note that since you are displaying the content as a string instead of as an actual web page, things like relative-path images are not shown correctly, because /img/myimg.jpg now loads from your server and not from google.com.
However, you can play with tricks like the str_replace() function to rewrite relative image URLs into absolute ones:
<?php
$page = file_get_contents('https://www.google.com');
echo str_replace('src="img/','src="https://google.com/img/',$page);
?>
This used to work; I used it to create custom Google searches with my own options. Google made changes on their end and broke my private customized search page :( The no-longer-working sample is below. It was very useful for complex search patterns.
<form method="get" action="http://www.google.com/search" target="main"><input name="q" value="" type="hidden"> <input name="q" size="40" maxlength="2000" value="" type="text">
web
I guess the better option is to just use cURL or similar.
It's not ideal, but you can use a proxy server and it works fine. For example, go to hidemyass.com, put in www.google.com, and put the link it goes to in an iframe, and it works!

"Tainted canvases may not be loaded" Cross domain issue with WebGL textures

I've learnt a lot in the last 48 hours about cross domain policies, but apparently not enough.
Following on from this question. My HTML5 game supports Facebook login. I'm trying to download profile pictures of people's friends. In the HTML5 version of my game I get the following error in Chrome.
detailMessage: "com.google.gwt.core.client.JavaScriptException:
(SecurityError) ↵ stack: Error: Failed to execute 'texImage2D' on
'WebGLRenderingContext': Tainted canvases may not be loaded.
As I understand it, this error occurs because I'm trying to load an image from a different domain, but this can be worked around with an Access-Control-Allow-Origin header, as detailed in this question.
The URL I'm trying to download from is
https://graph.facebook.com/1387819034852828/picture?width=150&height=150
Looking at the network tab in Chrome I can see this has the required access-control-allow-origin header and responds with a 302 redirect to a new URL. That URL varies, I guess depending on load balancing, but here's an example URL.
https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xap1/v/t1.0-1/c0.0.160.160/p160x160/11046398_1413754142259317_606640341449680402_n.jpg?oh=6738b578bc134ff207679c832ecd5fe5&oe=562F72A4&gda=1445979187_2b0bf0ad3272047d64c7bfc2dbc09a29
This URL also has the access-control-allow-origin header. So I don't understand why this is failing.
Being Facebook, and given that thousands of apps, games and websites display users' profile pictures, I'm assuming this is possible. I'm aware that I can bounce through my own server, but I'm not sure why I should have to.
Answer
I eventually got cross domain image loading working in libgdx with the following code (which is pretty hacky and I'm sure can be improved). I've not managed to get it working with the AssetDownloader yet. I'll hopefully work that out eventually.
public void downloadPixmap(final String url, final DownloadPixmapResponse response) {
    final RootPanel root = RootPanel.get("embed-html");
    final Image img = new Image(url);
    img.getElement().setAttribute("crossOrigin", "anonymous");
    img.addLoadHandler(new LoadHandler() {
        @Override
        public void onLoad(LoadEvent event) {
            HtmlLauncher.application.getPreloader().images.put(url, ImageElement.as(img.getElement()));
            response.downloadComplete(new Pixmap(Gdx.files.internal(url)));
            root.remove(img);
        }
    });
    root.add(img);
}

interface DownloadPixmapResponse {
    void downloadComplete(Pixmap pixmap);
    void downloadFailed(Throwable e);
}
Are you setting the crossOrigin attribute on your img before requesting it?
var img = new Image();
img.crossOrigin = "anonymous";
img.src = "https://graph.facebook.com/1387819034852828/picture?width=150&height=150";
It was working for me when this question was asked. Unfortunately the URL above no longer points to anything, so I've changed it in the example below.
var img = new Image();
img.crossOrigin = "anonymous"; // COMMENT OUT TO SEE IT FAIL
img.onload = uploadTex;
img.src = "https://i.imgur.com/ZKMnXce.png";

function uploadTex() {
    var gl = document.createElement("canvas").getContext("webgl");
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    try {
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
        log("DONE: ", gl.getError());
    } catch (e) {
        log("FAILED to use image because of security:", e);
    }
}

function log() {
    var div = document.createElement("div");
    div.innerHTML = Array.prototype.join.call(arguments, " ");
    document.body.appendChild(div);
}
<body></body>
How to check you're receiving the headers
Open your devtools, pick the network tab, reload the page, select the image in question, look at both the REQUEST headers and the RESPONSE headers.
The request should show your browser sent an Origin: header
The response should show you received
Access-Control-Allow-Methods: GET, OPTIONS, ...
Access-Control-Allow-Origin: *
Note, both the response AND THE REQUEST must show the entries above. If the request is missing Origin: then you didn't set img.crossOrigin and the browser will not let you use the image even if the response said it was ok.
If your request has the Origin: header and the response does not have the other headers, then that server did not give permission to use the image. In other words, it will work in an image tag and you can draw it to a canvas, but you can't use it in WebGL, any 2D canvas you draw it into will become tainted, and toDataURL and getImageData will stop working.
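A quick way to see the 2D-canvas tainting described above (a sketch; img is assumed to be a cross-origin image loaded without crossOrigin set):
var c = document.createElement("canvas");
c.getContext("2d").drawImage(img, 0, 0);
try {
    c.toDataURL(); // throws a SecurityError once the canvas is tainted
} catch (e) {
    console.log("canvas is tainted:", e);
}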
This is a classic cross-domain issue that happens when you're developing locally.
I use python's simple server as a quick fix for this.
navigate to your directory in the terminal, then type:
$ python -m SimpleHTTPServer
and you'll get
Serving HTTP on 0.0.0.0 port 8000 ...
so go to 0.0.0.0:8000/ and you should see the problem resolved.
You can base64 encode your texture.
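A minimal sketch of that idea: an image loaded from a data: URI is same-origin, so it never taints the canvas. Here the data URI is generated from a scratch canvas purely for illustration; in practice you would embed your own base64-encoded texture.
var scratch = document.createElement("canvas");
scratch.width = scratch.height = 1;
scratch.getContext("2d").fillRect(0, 0, 1, 1);

var img = new Image();
img.onload = function () {
    var gl = document.createElement("canvas").getContext("webgl");
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // Succeeds: the source is same-origin, so the upload is not blocked.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = scratch.toDataURL("image/png"); // base64-encoded PNG data URI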

The right way of setting <a href=""> when it's a local file

I'm trying to link to a local file. I've set href as follows:
Link Anchor
In Firefox, when I right click and "open link in new tab", nothing happens.
When I right click and "copy link location", then manually open a new tab and paste the copied link, it works fine. So it seems my file:// syntax is fine. I've also tried it with 3 slashes like file:/// but it's the same result.
What am I doing wrong?
By definition, file: URLs are system-dependent, and they have little use. A URL as in your example works when used locally, i.e. when the linking page itself is on the user's computer. But browsers generally refuse to follow file: links on a page that they have fetched with the HTTP protocol, i.e. when the page's own URL is an http: URL. When you click on such a link, nothing happens. The purpose is presumably security: to prevent a remote page from accessing files on the visitor's computer. (I think this feature was first implemented in Mozilla, then copied to other browsers.)
So if you work with HTML documents in your computer, the file: URLs should work, though there are system-dependent issues in their syntax (how you write path names and file names in such a URL).
If you really need to work with an HTML document on your computer and another HTML document on a web server, the way to make links work is to use the local file as the primary document and, if needed, use client-side scripting to fetch the document from the server.
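A minimal sketch of that last suggestion (the server URL and the target element id are hypothetical):
// The local page pulls a fragment from the server instead of linking to a file: URL.
fetch("https://example.com/remote-page.html")
    .then(function (r) { return r.text(); })
    .then(function (html) { document.getElementById("remote-content").innerHTML = html; });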
Organize your files in hierarchical directories and then just use relative paths.
Demo:
HTML (index.html)
<a href='inner/file.html'>link</a>
Directory structure:
base/
base/index.html
base/inner/file.html
....
The href value inside the base tag becomes the reference point for all your relative paths, overriding the current directory path that would otherwise be used; the '~' is the root of your site.
<head>
<base href="~/" />
</head>
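For example, with an absolute base (the URL and paths here are illustrative):
<head>
    <base href="https://example.com/app/" />
</head>
<!-- Resolves to https://example.com/app/docs/readme.html regardless of where this page lives. -->
<a href="docs/readme.html">readme</a>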
This can happen when you are running IIS and you serve the HTML page through it; the local file system will then not be accessible.
To make your link work locally, open the calling HTML page directly from the file browser, not via Visual Studio F5 or IIS; simply click it to open it from the file system, and make sure the link uses a relative path to the target HTML file, e.g. ../htmlfilename.html.
This will solve your problem of redirecting to any page for local files.
Try swapping your colon (:) for a bar (|); that should do it:
Link Anchor
That is the right way of setting a href="" when it's a local file, and it will not cause any issue when the code or file is online. Hope it helps.
Here is an alternative way to download a local file, with combined client-side and server-side effort:
<a onclick='fileClick(this)' href="file://C:/path/to/file/file.html"/>
JS:
function fileClick(a) {
    var linkTag = a.href;
    var substring = "file:///";
    if (linkTag.includes(substring)) {
        var url = '/cnm/document/v/downloadLocalfile?path=' + encodeURIComponent(linkTag);
        fileOpen(url);
    } else {
        window.open(linkTag, '_blank');
    }
}

function fileOpen(url) {
    $.ajax({
        url: url,
        complete: function (jqxhr, txt_status) {
            console.log("Complete: [ " + txt_status + " ] " + jqxhr);
            if (txt_status == 'success') {
                window.open(url, '_self');
            } else {
                alert("File not found [404]!");
            }
        }
    });
}
Server side [Java]:
@GetMapping("/v/downloadLocalfile")
public void downloadLocalfile(@RequestParam String path, HttpServletResponse response) throws IOException, JRException {
    try {
        String nPath = path.replace("file:///", "").trim();
        File file = new File(nPath);
        String fileName = file.getName();
        response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
        if (file.exists()) {
            FileInputStream in = new FileInputStream(file);
            response.setStatus(200);
            OutputStream out = response.getOutputStream();
            byte[] buffer = new byte[1024];
            int numBytesRead;
            while ((numBytesRead = in.read(buffer)) > 0) {
                out.write(buffer, 0, numBytesRead);
            }
            // out.flush();
            in.close();
            out.close();
        } else {
            response.setStatus(404);
        }
    } catch (Exception ex) {
        logger.error(ex.getLocalizedMessage());
    }
}