iFrame on Safari 11 behaving weirdly

In a very similar fashion to a related question, my web application creates an iFrame pointing at a login form, with a certain URL and a bunch of GET parameters. But until that URL has been opened in a separate window at least once, the page loaded in the iFrame doesn't seem to receive the GET parameters at all. This is not a PHP application, however, so there's no session_start issue as suggested in the answer to the related question.
I tried tracing the network traffic with Charles, and the only outgoing request I see is a CONNECT request to the domain of the URL, without any GET payload.
Not sure if related or important: the main page is served over HTTP, while the login form is on HTTPS.
Is there some preflight voodoo that needs to be done for this to work?
The whole solution works as is on Safari 10 and other browsers, IE included.

The problem was the Preferences -> Privacy -> Prevent cross-site tracking setting being turned on. When it is switched off, everything works like a charm.

Related

How can I make a Chrome extension send an HTTPS request to the current website and attach all the cookies and other data along with it?

I'm working on an extension which will act as a kind of helper. Namely, a user gets authenticated on a website as he normally would -- via the website itself.
Then he'd open my extension and click a button, "Action". The click would make an HTTPS request to /api/some-method of the current domain and send along with it all the data the browser itself would: in particular, all the cookies of the current domain and, preferably, the correct user-agent too.
That is, it's as if domain123.com/api/some-method was called by the browser itself.
How can I make my extension attach all that info and make a request in such a manner?
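One way to do this (a sketch under assumptions, not a definitive implementation): have the popup message a content script and let the content script make the request. A content script's fetch runs with the page's origin, so the browser attaches the domain's cookies and its normal User-Agent automatically. The message name runAction and the button id are made up for illustration; /api/some-method comes from the question.

    // content.js (Manifest V2) - injected into the site's pages, so a
    // fetch here carries the page's cookies and User-Agent by itself.
    chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
      if (msg.type === 'runAction') {
        // Relative URL resolves against the current domain;
        // credentials: 'include' ensures cookies are sent.
        fetch('/api/some-method', { method: 'POST', credentials: 'include' })
          .then(res => res.text())
          .then(body => sendResponse({ ok: true, body }))
          .catch(err => sendResponse({ ok: false, error: String(err) }));
        return true; // keep the channel open for the async reply
      }
    });

    // popup.js - the "Action" button forwards a message to the active tab.
    document.getElementById('action').addEventListener('click', () => {
      chrome.tabs.query({ active: true, currentWindow: true }, tabs => {
        chrome.tabs.sendMessage(tabs[0].id, { type: 'runAction' }, console.log);
      });
    });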

Use a Chrome extension to trick a page into thinking it's not in an iFrame

Is there a way to create a Chrome extension to trick a site loaded in an iFrame into thinking it's not in a frame?
We load clients' sites into an iFrame for demos, but some resources get blocked because the sites disallow being loaded in a frame. We'd like to load these sites into a frame as though the user were browsing directly to the site in a standalone tab.
You should use Chrome's webRequest API to intercept the server response. Use the onHeadersReceived event, where you are in control of the response headers: you need to remove the X-Frame-Options header from the response.
That's pretty much it, if this is the only problem in loading those sites.
However, for the sake of completeness: to fully trick the browser (which you most likely do not need), you would also need to inject a script into every page that cleans up things like window.parent by simply removing them from the window object, along with some other things like origin. But removing the header alone will cover 99.9999% of your use cases.
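A minimal sketch of that answer, assuming Manifest V2 (where blocking webRequest is still available; Manifest V3 would use declarativeNetRequest instead). It also strips a Content-Security-Policy carrying frame-ancestors, which blocks framing the same way:

    // background.js - requires "webRequest", "webRequestBlocking" and
    // host permissions for the framed sites in manifest.json.
    chrome.webRequest.onHeadersReceived.addListener(
      details => ({
        // Drop the headers that forbid framing before the browser sees them.
        responseHeaders: details.responseHeaders.filter(h => {
          const name = h.name.toLowerCase();
          if (name === 'x-frame-options') return false;
          // Crude: drops the whole CSP if it restricts frame-ancestors.
          if (name === 'content-security-policy' &&
              /frame-ancestors/i.test(h.value || '')) return false;
          return true;
        })
      }),
      { urls: ['<all_urls>'], types: ['sub_frame'] },
      ['blocking', 'responseHeaders']
    );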

HTTP basic authentication in URL

I host my own website at home (using WAMP) and it uses basic authentication. The authentication worked well, and username:password in the URL worked too. But since last week, when I load a URL as username:password@url.com it doesn't work, and it seems that the CSS/JS files won't load. I tried this with a Bootstrap example site and it worked, so I guess the problem is my website.
Here is my website template; as you can see, the CSS doesn't work except when reloaded.
I don't have this problem at the bootstrap example site (Link)
I use Google Chrome Version 59.0.3071.115 on my computer, and it doesn't work on my Android phone either.
Edit: it seems that the problem comes from the URL. Chrome tries to load the CSS with the basic authentication credentials in the URL and it fails. I get "Provisional headers are shown" for the CSS file.
Do you know why I have this problem now and how to avoid it?
I found somebody else with this problem:
Bypass blocking of subresource requests whose URLs contain embedded credentials
Apparently it's a new limitation of chrome :(
When going on your page, I get this warning :
[Deprecation] Subresource requests whose URLs contain embedded credentials (e.g. https://user:pass@host/) are blocked. See https://www.chromestatus.com/feature/5669008342777856 for more details.
What is happening is that you are making requests to URLs like:
http://stack:123456@88.182.191.233:8090/[...]
Since you are sending credentials in the URL, Chrome blocks the request (returning a (blocked:origin) status).
Edit: by the way, you should never send credentials in a URL, especially when you are using plain HTTP.
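If you control the page, one possible workaround (a sketch, with the stylesheet path being a hypothetical example) is to drop the credentials from the subresource URLs and send the Basic auth header explicitly:

    // Fetch the protected stylesheet with an explicit Authorization
    // header instead of a user:pass@host URL, which Chrome now blocks.
    fetch('/css/style.css', {
      headers: { 'Authorization': 'Basic ' + btoa('stack:123456') }
    })
      .then(res => res.text())
      .then(css => {
        // Inject the CSS manually, since a <link> tag cannot carry
        // custom request headers.
        const style = document.createElement('style');
        style.textContent = css;
        document.head.appendChild(style);
      });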

Why does View Source issue a new HTTP request?

I've noticed that both Firefox and Chrome issue a new HTTP request when you view the source for a web page that you've already loaded. It's particularly annoying when the page itself is slow to load or if it won't load at all.
Why is that? Wouldn't they have the existing source for the originally received page cached already? Is it based on Cache-Control headers?
This has been on my mind for a while (usually, comes up when looking at what's behind slow web apps).
In the context of Chrome, according to this link it is indeed based on Cache-Control headers.
...view-source grabs the html source from the http cache and pretty-prints it, but for a page NOT in the http cache, it's 'forced to' make a new request.
To me, this makes sense. You wouldn't want to use what is currently rendered as the source of truth, since the HTML can obviously be manipulated dynamically. If you can't use that, the HTTP cache is the next likely candidate for the source. If the source is unavailable from the cache, a subsequent GET of the source seems to be the only alternative.
This does, however, introduce another interesting dilemma, raised here.
Requesting the URL again doesn't make sense as there is no guarantee that source received during the second request will match what was received during the first request.
I would imagine this was a conscious trade-off that was made to ensure that a view-source request is always satisfied in some form or another.
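For what it's worth, whether view-source can be served from the cache follows the ordinary caching rules, so a cacheable page avoids the second request. A minimal Node/Express sketch (Express and the route are assumptions for illustration):

    // Hypothetical Express handler: a response cached with max-age can be
    // re-read by view-source without a new request; no-store could not.
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => {
      res.set('Cache-Control', 'public, max-age=300');
      res.send('<!doctype html><html><body>Hello</body></html>');
    });

    app.listen(3000);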
You need to do "Inspect Element" for the live web page. Show-code reloads the page to show the source code without modification.

HTML5 History API back button with partial page loads

To improve the performance/responsiveness of my website I have implemented a partial page load using AJAX, replaceState, pushState, and a popstate listener.
I essentially store the central part of my page (HTML) as my state object in the history. When a link is clicked I request just the central bit of the page from the server (identifying these requests with a different Accept header) and replace it with JavaScript. On popstate I grab the previous central part and push it back into the DOM.
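In outline, that mechanism looks something like the following (a simplified sketch; the #content container, the text/x-partial Accept value, and loadPartial are stand-ins for the real implementation):

    const content = document.querySelector('#content');

    // Seed the current entry so popstate has state to restore.
    history.replaceState({ html: content.innerHTML }, '', location.href);

    // Fetch just the central fragment, marked by a custom Accept header.
    function loadPartial(url) {
      return fetch(url, { headers: { 'Accept': 'text/x-partial' } })
        .then(res => res.text());
    }

    // Intercept link clicks: swap the fragment and push a history entry.
    document.addEventListener('click', e => {
      const link = e.target.closest('a');
      if (!link) return;
      e.preventDefault();
      loadPartial(link.href).then(html => {
        content.innerHTML = html;
        history.pushState({ html }, '', link.href);
      });
    });

    // On back/forward, restore the stored fragment without a request.
    window.addEventListener('popstate', e => {
      if (e.state && e.state.html) content.innerHTML = e.state.html;
    });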
This mostly works fine, however I have found a particular issue which I am stuck on. It is a little complicated to explain, so my apologies if this isn't very clear.
There is a search form on most of our pages. The partial page loading via ajax is only on GET requests, and the form performs a POST which results in a full page load.
If I navigate the following sequence of pages, I end up on a malformed partial page consisting of ONLY the central content, without any of the surrounding DOM:
1. Start on the Home Page (via full page load) and perform a Search (post-redirect-get).
2. This takes you to Search Results (via full page load); then click Home.
3. This returns you to the Home Page (via dynamic GET); click browser back.
4. Search Results appear (from the popstate listener); click browser back again.
5. Malformed home page.
When the malformed home page appears, my popstate listener is not present at all.
What I think is happening, is that the second load (dynamic, partial) of the home page is being cached by the browser, and then when the full page back occurs, the browser is merely showing the cached partial response rather than the full page.
To try to remedy this I have added a Vary: Accept header to the response, to let browsers know that the content may change based on the Accept header. I have also added Cache-Control: max-age=0, Pragma: no-cache, and a past Expires date to the partially loaded content to try to force the browser not to cache it, but none of this solves the problem.
Unfortunately my company does not allow external traffic to our dev servers, so I cannot show you the problem. I have looked at various similar questions on here, but none of them seem quite the same, nor do the solutions suggested seem to work.
If I add a pointless parameter (blah=blah) to my dynamic GET requests, this solves the problem. However this is an ugly hack that I'd rather not do. This seems like it should be solvable with headers, since I think it is a caching problem. Can anyone explain what's going on?
That's a caching issue. With the response header Cache-Control set to no-cache or max-age=0, the problem doesn't happen in FF (as you said), but it persists in Chrome.
The header that worked for me is Cache-Control: no-store. That's not consistently implemented by all the browsers (you can find questions asking what is the difference between no-cache and no-store), but has the result you expect in Chrome as well.
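An Express-style sketch of that fix (hypothetical names; the partial detection via the Accept header mirrors the question's setup):

    const express = require('express');
    const app = express();

    // Stand-ins for real templating.
    const renderFragment = () => '<div>central content</div>';
    const renderFullPage = () =>
      `<!doctype html><html><body><div id="content">${renderFragment()}</div></body></html>`;

    app.get('/page', (req, res) => {
      if (req.get('Accept') === 'text/x-partial') {
        // no-store keeps the bare fragment out of the browser cache, so a
        // full-page back navigation can never be satisfied by the partial.
        res.set('Cache-Control', 'no-store');
        res.send(renderFragment());
      } else {
        res.send(renderFullPage());
      }
    });

    app.listen(3000);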
I had a similar issue. I'm building a web-based wizard and using jquery ajax to load each step (ASP.NET MVC on the back end).
The initial route is something like /Questions/Show, which loads the whole page and displays the first question (question 0). When the user clicks the Next image, it does a jQuery .load() with the URL /Questions/Save/0. This saves the answer and returns a partial view with the next question. The next save does a jQuery .load() with /Questions/Save/1, and so on.
I implemented History.js so that the user can go back and forward (forth?). It stores the question number in the state data. When it detects a statechange (and the state's question number differs from what's on the page), it does a jQuery .load() to load the correct question.
At first I was using the same route as when the initial page is loaded (/Questions/Show/X, where X is the question number). On the back end I detected whether it was an ajax request, and if so returned a partial view instead of the full view. This is where an issue similar to yours came in: say I was on question 3, went back, then forward, then went to www.google.com, and then hit the back button. It showed question 3, but as the partial view, because the browser had cached the partial for that route.
My solution was to create a separate route for the ajax call: /Questions/Load/X (it used common code on the back end). Now with two different routes (/Questions/Show for non-ajax and /Questions/Load for ajax), the browser shows the full page correctly in the above situation.
So maybe a similar solution would work for you... i.e. two different routes for your home page - one for the full page and one for a partial. Hope that helps.
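Translated out of ASP.NET MVC into a generic Express-style sketch, the idea is just two URLs sharing one renderer (names hypothetical):

    const express = require('express');
    const app = express();

    const renderQuestion = n => `<div>question ${n}</div>`;
    const renderFullPage = n =>
      `<!doctype html><html><body>${renderQuestion(n)}</body></html>`;

    // Distinct routes mean the browser caches the full page and the
    // ajax partial under different URLs, so they can never collide.
    app.get('/Questions/Show/:n', (req, res) =>
      res.send(renderFullPage(req.params.n)));
    app.get('/Questions/Load/:n', (req, res) =>
      res.send(renderQuestion(req.params.n)));

    app.listen(3000);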
When a link is clicked I request just the central bit of the page from the server (identifying these requests with a different Accept header) and replace it with JavaScript.
Awesome. That's the RESTful way to do it. But there's one thing left to do to make it work: add a Vary header to the response.
Vary: Accept
That tells the browser that a request with a different Accept header might get a different response. Because the two requests use different Accept headers, the browser (and any caching proxies) will cache the responses separately.
Unlike setting Cache-Control: no-store, this will still allow you to use caching.
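Server-side, that is one extra header on both the full and the partial responses. A sketch (Express-style, hypothetical names; contrast with the no-store approach above, which disables caching entirely):

    const express = require('express');
    const app = express();

    app.get('/page', (req, res) => {
      // Vary: Accept makes caches key the entry on the Accept header,
      // so the fragment and the full page are stored separately and
      // both remain cacheable.
      res.set('Vary', 'Accept');
      res.set('Cache-Control', 'max-age=300');
      const partial = req.get('Accept') === 'text/x-partial';
      res.send(partial ? '<div>central content</div>'
                       : '<!doctype html><html><body>full page</body></html>');
    });

    app.listen(3000);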