ASP.NET, Pagespeed and the content-encoding header

I have a very small static site (http://www.codeinside.eu/) and tested it via Google Pagespeed. It told me I should use compression for the page and all JS/CSS files - but here is my problem: I thought my website was already being served with compression.
The website is running on Windows Azure Websites and is based on ASP.NET. For CSS/JavaScript I use the built-in bundling feature, and the website runs in release mode - so bundling and minification work fine, and as far as I know IIS8 should compress dynamic content.
Then I tried another testing tool, http://www.whatsmyip.org/http-compression-test/, which said that my site is compressed.
My website and several other websites running ASP.NET and IIS (including stackoverflow.com) don't include the "content-encoding: gzip" header in the response - is this a problem with the Pagespeed analyser, or a problem with IIS? Or is it no problem at all because the header is not that important?
Edit: Of course the browser sends the "accept-encoding:gzip" header in the request.
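To rule the browser out entirely, here is a minimal TypeScript sketch for checking the header from code (assuming Node 18+ with its built-in fetch, which sends Accept-Encoding by default; note that some runtimes decompress transparently and may hide the header, so cross-check with the browser dev tools):

    // Print the Content-Encoding the server responds with.
    async function checkCompression(url: string): Promise<void> {
      const res = await fetch(url);
      console.log("content-encoding:", res.headers.get("content-encoding"));
    }

    checkCompression("http://www.codeinside.eu/").catch(console.error);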

Found the solution for this issue:
We use Microsoft TMG as a proxy in our company, and it seems to remove the "Content-Encoding" header (and other magic stuff). I was confused because some pages like twitter.com are served to my PC with the "content-encoding: gzip" header and others without any "Content-Encoding" header.
My wild guess: the TMG is case sensitive and only looks for "Content-Encoding", which is why I receive some responses with the header and some without it.
So - compression with the correct header works as expected.

Related

German texts not displayed properly in Chrome

I have a statically generated website built with GatsbyJS. It uses react-intl for localization, but I don't think that is causing the problem.
When I open the website (it is hosted on AWS S3) for the first time, it usually displays the German translations correctly, but when I open the developer tools and refresh the page, the translations are often broken (the German-specific characters are not displayed properly).
The translations are stored in a JS file.
Steps that I use to reproduce the error:
I reopen the browser and go to the website. If I see correct texts, I open the dev tools and go to Application => Frames section => Scripts dropdown, where the script contents look fine. Then I refresh the page (sometimes more than once) and somehow the texts are now broken.
Note that this is the same static file!
It seems this only happens in Chrome.
When I build and run the website locally, it always seems to work (which would suggest it's not Chrome's fault).
I'm not sure where the problem might be.
It turned out that the JavaScript files didn't have a charset set in the Content-Type header of the response. Locally they had:
Content-Type: javascript; charset=utf-8;
and it worked fine, but when they were uploaded to AWS S3 the charset field was stripped out, leaving only:
Content-Type: javascript;
Chrome seems to be more restrictive about it.
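One way to avoid this is to set the Content-Type (including the charset) explicitly when uploading. A minimal sketch using the AWS SDK for JavaScript v3 - the region, bucket, key and file path below are placeholders, not from the original post:

    // Upload a script to S3 with an explicit charset in its Content-Type.
    import { readFileSync } from "fs";
    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({ region: "eu-west-1" }); // example region

    async function uploadScript(bucket: string, key: string, path: string) {
      await s3.send(
        new PutObjectCommand({
          Bucket: bucket,
          Key: key,
          Body: readFileSync(path),
          // S3 serves back exactly the Content-Type stored with the object,
          // so include the charset here:
          ContentType: "application/javascript; charset=utf-8",
        })
      );
    }

    uploadScript("my-gatsby-site", "static/app.js", "./public/static/app.js")
      .catch(console.error);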

CSS styling differences between http and https

I have a problem: my website behaves differently when I open it via the https protocol. To be more precise, it looks like the CSS is handled in a different way.
What I want (and how it actually works via http) is a kind of navigation with different tabs. Here is an image of the navigation part:
[image: http navigation]
And here is an image of how it looks when it's called via https:
[image: https navigation]
I have also created a fiddle with this part of my website, although it does not work properly inside the fiddle (maybe because the jsfiddle site is also served via https?).
Fiddle
However, please have a look at the current website to see the difference:
This is the website via http:
[image: http website]
and here is how it looks when it's called via https:
[image: https website]
Have you guys ever had a similar problem, or any idea how to solve it?
I was of the opinion that the protocol should not make a difference.
Take a look at your console.
You should see a lot of Mixed Content errors or warnings if you are using Chrome.
When a website is served over HTTPS, all of its resources must be served over HTTPS too. When a resource is not loaded over HTTPS, the browser will block it, because otherwise it defeats the whole purpose of using HTTPS.
When a resource is blocked, its content won't get loaded or executed. That is probably why your layout breaks: something is not being applied properly.
So change your resources to the HTTPS protocol. If you are using APIs that do not provide an HTTPS endpoint, you should look for another API.
In your case
This is the culprit.
http://fonts.googleapis.com/css?family=Source+Sans+Pro:300,400,700,300italic,400italic,700italic
You can find it in the head section.
Your font should be Source Sans Pro, but because it was blocked, Helvetica or Arial was used instead, breaking the layout.
Change it to HTTPS and it should be fine.
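If you want to find every offending reference at once, here is a rough TypeScript sketch (assuming Node 18+ with built-in fetch; a real check would parse the DOM rather than use a regex, and the URL is a placeholder):

    // Scan a page for http:// resource URLs that would be blocked as
    // mixed content when the page itself is served over https.
    async function findInsecureResources(url: string): Promise<void> {
      const html = await (await fetch(url)).text();
      // Rough pattern over src/href attributes; good enough for a quick audit.
      const matches = html.match(/(?:src|href)=["']http:\/\/[^"']+["']/gi) ?? [];
      for (const m of matches) console.log("insecure reference:", m);
    }

    findInsecureResources("https://example.com/").catch(console.error);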
I don't know much about HTTP requests, but I can tell you two things:
- The browser cascades and parses CSS in different ways. HTTPS requests are processed in different ways. Maybe check your cascade.
- That website looks nice

Tumblr Embed not working in GitHub Pages

I'm trying to push a website I created off of my personal page and onto a GitHub page. It is successfully up at https://kalysren.github.io/, but my Tumblr embed isn't showing up. I've tried it in both Firefox and Chrome with the same results. If I check its existing location, it shows up perfectly in both Firefox and Chrome. Any thoughts?
Your site is https, but the Tumblr content is http. You therefore have mixed content, which is why it's not loading. You need to either change your gh-pages site to http, or change the Tumblr embed to https, assuming that's supported.
As explained by @dundonian, your current problem is mixed content.
Since GitHub Pages requires https for sites created after June 15, 2016, you cannot downgrade to http to remove the mixed content problem. In any case, it's better to serve over https.
Trying to get joshwarmuth.tumblr.com served over https just by changing the URL protocol to https is not enough, because Tumblr redirects back to the http version.
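You can verify the redirect yourself; a minimal sketch, assuming Node 18+, where fetch with redirect: "manual" exposes the Location header:

    // Check whether the blog answers an https request with a redirect to http.
    async function checkRedirect(url: string): Promise<void> {
      const res = await fetch(url, { redirect: "manual" });
      console.log(res.status, "->", res.headers.get("location"));
    }

    checkRedirect("https://joshwarmuth.tumblr.com/").catch(console.error);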
It seems that you can now change this behavior in your Tumblr blog settings and force Tumblr to serve over SSL.

iOS 6 mobile web app caching HTML

We have a mobile web app saved to the home screen. The application is coded as a single-page HTML file with jQuery Mobile.
In iOS 5 and below, the index.html file is not cached by the device, so every time the application is launched the device requests the HTML page. This is really important because we have another application that handles authentication sitting in front of our server, and we therefore rely on the 302 HTTP code, which causes a redirect to authenticate. If this is successful, another redirect occurs back to our index.html page.
In iOS 6, though, it appears the index.html file is cached even though we set a no-cache control header! This is a problem because we don't get to authenticate, and therefore when the user starts using the application all requests fail (they are unauthenticated).
I can't seem to find any detail on whether this behavior was introduced in iOS 6. Can anyone shed any light on this? I know they went a little crazy with caching (caching POST responses)...
NOTE: I understand the authentication solution is not ideal; however, we can't change that at the moment. I'm just looking for references on what Apple did to cause this bug!
Update:
Just discovered something interesting after using the Charles Web Debugging Proxy: the server is responding with Cache-Control: private, which means proxies won't cache but browsers will. This raises the question of whether iOS 6 home screen mobile web apps now actually treat this cache-control value correctly. I need to investigate further which hardware in our infrastructure is adding this header.
I am having the same issue with an HTML5/JQM/Jersey based application. I set the Cache-Control header to no-cache, which now seems to work on most of the devices but still fails intermittently on some.
I was struggling with the same issue in my application, and found that you have to set the Cache-Control: no-cache header on the request in order to keep iOS 6 from caching the response.
Please take a look to the following link:
Is Safari on iOS 6 caching $.ajax results?
If you use:
Cache-Control: no-cache, no-store
there shouldn't be any way for iOS 6 to cache the AJAX calls. I suspect iOS 6 actually started obeying the rules and implemented Cache-Control: private as it was meant to work initially, while almost every other browser just treats it as a 'no-cache' directive.
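In modern terms, the same idea sketched with fetch rather than the era's $.ajax (the endpoint is a placeholder):

    // A request that opts out of caching on both ends.
    async function loadData(): Promise<unknown> {
      const res = await fetch("/api/data", {
        method: "POST",
        cache: "no-store",                        // bypass the HTTP cache
        headers: { "Cache-Control": "no-cache" }, // hint for intermediaries
      });
      return res.json();
    }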
I had the same problem while using PHP's SAJAX framework (which was set to 'private').

Hosting Facebook iframes on pages on Cloudfront

I've switched my Facebook page to pull in an iframe, as a result of Facebook's recent announcement that they support iframes in pages. Since you need to host the iframe page outside of Facebook, I figured it would be nice to use Cloudfront to host the files (an HTML page, a CSS stylesheet and a JPG image). Unfortunately, despite setting the permissions on the Cloudfront files to 744, the iframe page loads correctly in a browser, but when it is called from Facebook I get an error message.
When I host the same files on my Media Temple server, the iframe on the actual Facebook page also loads correctly.
Is there a reason why Facebook and Cloudfront don't play together? I haven't been able to find one so far.
Unfortunately, Facebook loads the iframe page using an HTTP POST, not an HTTP GET. This is not compatible with S3, since Amazon exposes a REST interface and properly reserves POST for uploading/modifying content.
Best,
David Bullock
I ran into this problem today and it caused some head-scratching. As David Bullock points out, the problem is that Facebook loads the HTML page via a POST request, but S3 (and thus, by extension, CloudFront) won't serve resources in response to this (it returns 405 Method Not Allowed).
You can host your CSS, scripts and images on S3 / CloudFront, but the initial HTML page has to be on some other server. If you're concerned about load or latency from across the globe then you could try implementing a tiny redirector that forwards the Facebook POST request to the CloudFront-cached location (you'll have to return 303 See Other to do this redirection so the browser makes a GET request instead: see RFC 2616).
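A minimal sketch of such a redirector in TypeScript on Node's built-in http module (the CloudFront URL and port are placeholders):

    // Turn Facebook's POST into a browser GET against the CloudFront page.
    import { createServer } from "http";

    const TARGET = "https://dxxxxxxxxxxxx.cloudfront.net/tab.html"; // placeholder

    createServer((_req, res) => {
      // 303 See Other makes the browser re-request the target with GET.
      res.writeHead(303, { Location: TARGET });
      res.end();
    }).listen(8080);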
There is a shockingly easy workaround to the fact that AWS rejects POST requests and the fact that Facebook requires page tabs to be hosted via HTTPS: just redirect the request through https://bit.ly/….
Yes, it's really that easy.
- Host the HTML page wherever you like. HTTP is fine.
- Create a bit.ly-shortened URL to the page.
- Use it (substituting https:// for http://) as the "Secure Page Tab URL" as you create your Facebook Page Tab.
- Activate the tab using this highly-undocumented dialog URL: https://www.facebook.com/dialog/pagetab?app_id=app_id&redirect_uri=bitly_url
Boom: done.
OK, it can be done, but you need to host the images on Cloudfront and the rest of the content on S3. Amazon provides a set of clear instructions on how to do this. Issue solved.
Use CloudFront to trap the 405 error and serve your HTML as the "Custom Error Response" page for the desired index page.
Updated:
This was downvoted; however, the following is going to help lots of Facebook developers. The final issue that we had with hosting a Facebook application on S3 was with CORS. We fixed the 405s by using a "Custom Error Response". See this link for details:
http://blog.celingest.com/en/2014/10/02/tutorial-using-cors-with-cloudfront-and-s3/
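For the CORS part, a minimal sketch of setting a bucket CORS rule with the AWS SDK for JavaScript v3; the region, bucket name and allowed origin are example values, not from the original post:

    // Allow Facebook's origin to load assets from the S3 bucket.
    // (Top-level await requires an ES module context.)
    import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({ region: "us-east-1" }); // example region

    await s3.send(
      new PutBucketCorsCommand({
        Bucket: "my-facebook-tab-assets", // placeholder bucket
        CORSConfiguration: {
          CORSRules: [
            {
              AllowedOrigins: ["https://www.facebook.com"],
              AllowedMethods: ["GET", "HEAD"],
              AllowedHeaders: ["*"],
              MaxAgeSeconds: 3600,
            },
          ],
        },
      })
    );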