External stylesheets either not downloading or not being applied in very rare cases - html

I recall some very rare instances of seeing major websites (Amazon, Facebook, etc.) either not downloading a CSS file or not applying the rules, causing the page to render as bare, unstyled HTML.
I've been tasked to provide an internal explanation after we received a complaint email with an attached screenshot from a user of one of our websites showing the same effect. The screenshot contains sensitive user information, so I'm unable to post it. It does show, however, that inline styles are applied while any style referenced from an external CSS file is not.
Unfortunately, I am unable to reproduce this issue, and other than just saying "styles aren't being applied", I am coming up dry with a detailed explanation and I would love to understand it myself.
I would appreciate any input on why this might happen, or reference to any articles. Even if someone knows what this event is called, I would be happy to go research it, but as of now I'm coming up blank.

There is more than one scenario under which this can occur:
1) Bandwidth issues: as italo.nascimento mentioned, a slow connection where your HTML downloads but the CSS request times out, leaving you with a bare HTML page (this also happens often when a website is under DoS or has so many visitors that the server can't keep up with the traffic).
2) Caching problems: something changed in your HTML, but the CSS is served from the browser's local cache, so the selectors no longer match.
3) FOUC (Flash of Unstyled Content): not really what you're describing, unless the screenshots were taken during page load.
In general, roughly 90% of these problems are caused by connection issues: dropped packets, timeouts, CDNs not working properly. And since they're random, I don't think you can reproduce them on demand; it's not something that can simply be fixed.
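If you ever need evidence that this is what a user hit, one option (a sketch of my own, not part of the answer above; the "/log/css-error" endpoint is a made-up placeholder) is to check after load whether each linked stylesheet actually arrived and report the ones that didn't:

// Minimal sketch: after the page has loaded, flag any <link rel="stylesheet">
// whose stylesheet never arrived and report it.
window.addEventListener('load', function () {
  document.querySelectorAll('link[rel="stylesheet"]').forEach(function (link) {
    // When the download fails (timeout, dropped packets, CDN error),
    // the browser leaves link.sheet as null.
    if (!link.sheet) {
      navigator.sendBeacon('/log/css-error', JSON.stringify({
        href: link.href,
        page: location.href
      }));
    }
  });
});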

This has happened to me lots of times on major sites.
Mostly it happens when the Internet connection is very slow or unstable, so the files don't load correctly from the server (packets get lost) and the site is shown as plain HTML. You may be able to reproduce it by throttling your bandwidth and reloading the page.

Related

Is there a best practice for cache busting when html is cached?

In my situation, a WordPress install, we use the core enqueue functionality for styles and scripts, with the version-number parameter (which adds a GET param after the filename) for cache busting. We bump the version whenever the linked file changes, as per normal. This is all well and good and technically working.
My issue is that our host sets a 10-day Expires header for HTML files, so the HTML ends up in the browser cache. That cached HTML includes the link tag with the old version number, which means users keep getting the old CSS/JS.
When we encounter this in testing, we just Ctrl-Shift-R and all is well, but I would prefer not to ask our users to clear their cache every time we make a change.
My 'nuke it from orbit' solution would be to ask the host not to cache HTML at all, but that seems like a Bad Idea(tm). Is there a good method for busting the browser's HTML cache from our end? I feel like this should be a common and solved problem, but maybe I'm just googling the wrong terms, because everything I have seen so far boils down to "change the URLs", which seems even more extreme (take that, accumulated SEO ranking!).
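For what it's worth, the usual fix is not to stop caching HTML entirely but to make the browser revalidate it: serve the HTML with Cache-Control: no-cache (the copy is still stored, but a quick conditional request / 304 confirms it before reuse) while keeping long-lived caching on the versioned CSS/JS, whose URLs change when the version bumps. Below is a minimal sketch of that split, using Express purely for illustration; on a WordPress host the same headers would normally come from .htaccess or the hosting panel.

// Illustrative sketch only: versioned assets get aggressive caching,
// HTML gets "no-cache" so a new ?ver= in the link tag is picked up immediately.
const express = require('express');
const app = express();

app.use(function (req, res, next) {
  if (/\.(css|js|png|jpe?g|gif|svg|woff2?)$/.test(req.path)) {
    // The ?ver= query string changes whenever the file changes,
    // so these URLs are safe to cache hard.
    res.set('Cache-Control', 'public, max-age=31536000, immutable');
  } else {
    // HTML: keep a copy, but ask the server before reusing it.
    res.set('Cache-Control', 'no-cache');
  }
  next();
});

app.use(express.static('public'));
app.listen(3000);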

Website showing wrong images

I have a one-page static website. My website is displaying different images than those referenced in the HTML. For example:
<img src="img/About_Us_Graphic.png" alt="About us photo" id="aboutUsPic" style="margin: auto;">
will sometimes be displayed as the image that's actually
<img src="img/Facebook_icon.png">
This happens pretty much randomly. Sometimes the pictures are correct, sometimes they're totally different pictures. And when it's a wrong picture, it isn't consistently the same wrong picture. What causes this? How can I fix it?
My site uses Foundation 5 (not sure if that's relevant). Thanks!
I've found situations similar to the one you described to be the symptom of one of a few causes:
Someone is tinkering with the content on the site without you being aware. Ask your team members if they know of anyone who might do this.
Your client-side cache is taking over. To remedy this specific problem, go to your browser and clear out the temporary files. Sometimes you have to also clear out cookies and other historical items.
Client-side proxies. Proxy servers sometimes cache what they serve to reduce the load from their requests. When they work in a round-robin fashion, different servers within the proxy pool might hold mismatched content.
Load-balanced web servers (https://en.wikipedia.org/wiki/Load_balancing_(computing)). I've seen some situations where servers that are load balancing content will hold onto data. In my specific scenario, a memcache was used and would seemingly hold onto content until its index was refreshed.
Without more information about your set-up, there's not much anyone can do. As oxguy3 suggested - there could even be something in your code causing this.
Please try typing the URL of the image directly into your browser and see if it consistently comes up the same, then try the same URL with "?someArbitraryText" after it, where "someArbitraryText" is just some random characters.
E.g., instead of "http://my.server.com/img/About_Us_Graphic.png", use "http://my.server.com/img/About_Us_Graphic.png?arbitrary". Most servers that I've encountered will still serve the image, but if a load balancer, proxy, or memcache is involved, it will consider this a different URL and load it from the source rather than from some cached file.
I've seen some cases (such as on salesforce clouds) where doing so will bring up different results.
Let us know what you discover. Any little clue could help someone identify and determine the root cause.
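If retyping URLs by hand gets tedious, the same query-string trick can be scripted; this is only a sketch of that idea, and the parameter name is arbitrary:

// Minimal sketch: give every image a throwaway query parameter so any proxy,
// load balancer, or memcache along the way treats it as a fresh URL.
document.querySelectorAll('img').forEach(function (img) {
  const url = new URL(img.src, location.href);
  url.searchParams.set('nocache', String(Date.now()));
  img.src = url.toString();
});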

Advantages of the html5Mode?

I've read quite a few posts about the AngularJS html5Mode, including this one, and I'm still puzzled:
What is it good for? I can see that I can use http://example.com/home instead of http://example.com/#/home, but the two chars saved are not worth mentioning, are they?
How is this related to HTML5?
The answer links to a page showing how to configure a server. It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't that lead to needlessly increased traffic?
Update after Peter Lyons's answer
I started to react in a comment, but it grew too long. His long and valuable answer raises some more questions of mine.
option of rendering the actual "/home"
Yes, but that means a lot of work.
crazy escaped fragment hacks
Yes, but this hack is easy to implement (I did it just a few hours ago). I actually don't know what I should do in the case of html5Mode (I haven't finished reading this SEO article yet).
Here's a demo
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks
But the same holds for the hashbang, too. Moreover, a user following an external link to http://example.com/#!/home and then another link to http://example.com/#!/foreign will always be served the same page via the same URL, while in html5Mode they'll be served the same page (unless the burdensome optimization you mentioned gets done) via a different URL, which means it has to be loaded again.
but the two chars saved are not worth mentioning, are they?
Many people consider the URL without the hash considerably more "pretty" or "user friendly". Also, a very big difference is that when you browse to a URL with a hash (a "fragment"), the browser does NOT include the fragment in its request to the server, which means the server has a lot less information available to deliver the right content immediately. Compare that to a regular URL without any fragment, where the full path "/home" IS included in the HTTP GET request, so the server has the option of rendering the actual "/home" content directly instead of sending the generic "index.html" content and waiting for JavaScript in the browser to update it once it loads and sees the fragment is "#home".
HTML5 mode is also better suited for search engine optimization without any crazy escaped fragment hacks. My guess is this is probably the largest contributing factor to the push for HTML5 mode.
How is this related to HTML5?
HTML5 introduced the necessary JavaScript APIs to change the browser's location bar URL without reloading the page and without just using the fragment portion of the URL. Here's a demo
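For reference, the APIs in question are history.pushState and the popstate event. A minimal sketch of the difference between fragment navigation and html5Mode-style navigation (the paths are just examples):

// Fragment ("hashbang") navigation: only the part after "#" changes,
// and that part is never sent to the server.
location.hash = '#/home';            // URL becomes http://example.com/#/home

// HTML5 History API: the real path changes without a page reload,
// so the server must be able to answer for "/home" on a hard refresh.
history.pushState({}, '', '/home');  // URL becomes http://example.com/home

// With pushState you handle back/forward navigation yourself.
window.addEventListener('popstate', function () {
  // re-render the view that corresponds to location.pathname
});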
It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't that lead to needlessly increased traffic?
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks a link onto the site OR does a manual browser reload. Otherwise, the user can navigate around the app clicking like mad and in some cases the server will see ZERO traffic from that. More commonly, each click will make at least one AJAX request to an API to get JSON data, but overall the approach serves to reduce browser<->server traffic. If you see an app responding instantly to clicks and the URL is changing, you have HTML5 to thank for that, as compared to a traditional app, where every click includes a certain minimum latency, a flicker as the full page reloads, input forms losing focus, etc.
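To make the "server returns the same page for every URL" part concrete, here is a rough sketch of the usual fallback rule; Express is used only for illustration, and the /api/items route is invented:

// Illustrative sketch: real API routes answer normally, everything else falls
// back to index.html so a deep link like /home still works on a hard reload,
// and the client-side router takes over from there.
const express = require('express');
const path = require('path');
const app = express();

app.get('/api/items', function (req, res) {
  res.json([{ id: 1, name: 'example' }]); // placeholder endpoint
});

app.use(express.static('public')); // js, css, images

app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);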
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
A good implementation will use HTML5 when available and fall back to fragments otherwise, but work fine in any browser. But in any case, the web is a moving target. At one point, everything was full page loads. Then there was AJAX and single page apps with fragments. Now HTML5 can do single page apps with fragmentless URLs. These are not overwhelmingly different approaches.
My feeling from this back and forth is like you want someone to declare for you one of these is canonically more appropriate than the other, and it's just not like that. It depends on the app, the users, their devices, etc. Twitter was all about fragments for a good long while and then they realized their mobile users were seeing too much latency and "time to first tweet" was too long, so they went back to server-side rendering of HTML with real data in it.
To your other point about rendering on the server being "a lot of work", it's true, but some consider it the "holy grail" of web app development. Look at what Airbnb has done with their rendr framework. See also Derby JS. My point being, if you decide you want rendering in both the browser and the server, you pick a framework that offers that. Not that you have a lot of options to choose from at the moment, granted, but I wouldn't advise hacking together your own.

How does the browser handle images in the stylesheet?

Context: I ran a link sleuth / error checker on my WordPress website and found some broken links originating within style.css. These were caused by the theme developer, who set up a long list of CSS definitions for certain classes that load images for them via the 'background: url()' property (when the classes are present, of course). This raised two general questions for me, and I've searched everywhere but couldn't find the answers (the main question is in the title; can you please think of these as 'sub-questions'?):
Doesn't this way of building style.css load a lot of extra, unnecessary cruft into the session? Maybe not. I'm fairly sure these image files are not actually requested by the browser unless the page needs them, but I'm not sure how that process works. Does the browser get the stylesheet, parse it in conjunction with the HTML to determine which classes are present, and then request the stylesheet-linked images from the server accordingly? In that case, the "background: url(image.png)" declarations that go unused (because no element on the page has that class) would just be ignored: the rule sits in the stylesheet, but it isn't acted upon and the file is never pre-fetched. Do I have this correct, or close enough, and how could I learn more about this topic?
What happens if (as the link sleuth showed is happening in this case) an image linked from within the stylesheet actually returns a 404? I know this is happening because the theme developer forgot to add those assets to the website's folder structure. But does the browser itself actually hit all of those 404 errors each time it parses the stylesheet (which would presumably impact performance somewhat, and trigger alerts in my server software and elsewhere due to all of these invalid requests)? Or, again, are those links just ignored when the classes don't exist in this particular instance of the markup, so they would only be requested case by case when the resource is actually needed by something on the page?
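Not an answer from the thread, but one way to check the first sub-question empirically is the Resource Timing API: it lists everything the browser actually requested, and in the browsers I'm aware of a background image whose selector never matches anything on the page is never requested at all, so it won't appear in the list (and can't produce a 404):

// Minimal sketch: log every resource the page fetched because of CSS.
const cssRequests = performance.getEntriesByType('resource')
  .filter(function (entry) { return entry.initiatorType === 'css'; })
  .map(function (entry) { return entry.name; });

console.log(cssRequests);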

Major bug in today's Chrome update - thousands of web pages display improperly

Starting this afternoon, with the introduction of Chrome 31.0.1650.48, many web pages are displaying with random formatting errors. I've confirmed this on both Mac and Windows machines running the most recent auto-updated Chrome release (31.0.1650.48).
This problem is affecting thousands of pages, and to immediately rule out our server generating different information, you can try this to reproduce the problem:
Visit this Google cache page with Chrome version specified above: http://webcache.googleusercontent.com/search?q=cache:nt70v_rn5BwJ:alaskanmalamute.rescueme.org/Idaho+&cd=61&hl=en&ct=clnk&gl=us
Notice what dogs are displayed and where they are.
Reload the page several times and observe closely.
You will randomly see one dog listing in the middle of the page, then two dog listings; the dogs move around, and the menus around the dogs move around. Each time the page is reloaded the source is corrupted in different ways, resulting in major formatting issues. (NONE of this code is generated outside of Google's cache.) All the pages on www.RescueMe.Org have this problem; I'm using a page cached on Google's server as the example here since that proves it is not a server issue.
This sample page should remain the same every time, and be formatted correctly. It isn't.
Google Chrome (when viewing source) seems to be making random changes to the page (Chrome is dropping < or > at random places in source code) causing major display formatting issues.
Can someone reproduce this? Hopefully the folks at Google know about this issue, or someone here can escalate it with them?
Best wishes,
Jeff
Can confirm - it seems to mostly be an issue with iframes.
VisualForce iFrames in Salesforce break the entire layout.
Version 31.0.1650.48 on Mac, all addons removed.
In case someone else runs into this issue, I've narrowed it down somewhat. Chrome/31.0.1650.48 will randomly scramble the placement of tables on a page if the following two things happen:
1) You open the page with a <font> tag and a <center> tag nested in a particular order and close them in the reverse order at the end (it doesn't have to be face=arial; any font setting, or even a bare <font> tag, does the same thing).
2) Include some tags within the page containing various tables.
3) Magic! (not good magic, though) Each time, your tables will randomly move about the page. Here's an example to try: http://server1c.rescueme.org/testb (reload this page several times in Chrome/31.0.1650.48 on Windows or Mac to see the tables jump around).
THE SOLUTION: Reverse the placement of the <center> and <font> tags at the start of the page (and, correspondingly, of the closing tags at the end). Here's the "fixed" version of the page above with just those tags reversed: http://server1c.rescueme.org/testbfixed
While this is a Chrome bug, I feel it is worth keeping on Stack Overflow because it is breaking a lot of major sites, and programmers may want to know how to rework their HTML so that visitors with affected versions of Chrome won't have a confusing experience.
FYI... There are other ways to cause and solve this problem, but I'm trying to present here just the simplest method I found.