Background
I'm looking at a WebSocket call in the Network panel of Chrome DevTools.
The call is broken into two main segments, a gray bar ("Stalled") and a blue bar ("Content Download").
I've determined that the "Stalled" bar shows the time it takes for the WebSocket to connect to the server and retrieve the data.
The question is, what does the "Content Download" bar refer to, in a detailed technical sense?
This has come up because someone is reporting that a page is slow to load, and when I look at their HAR file I can see that their "Content Download" is close to 7 seconds, whereas my "Content Download" is always under 1 second for a similar data-set.
What I've tried
The official Chrome documentation (from the "Explanation" link seen in the screenshot above) isn't very descriptive; it says:
The browser is receiving the response
which isn't very helpful.
A random Stack Overflow comment says that it refers to:
the time it took to load the contents into memory
which is more descriptive, but I haven't found any official information to back this up.
Another (deprecated) Chrome doc says:
"If you see lots of time spent in the Content Download phases, then
improving server response or concatenating won't help"
which makes me think "Content Download" has to do with the hardware resources available to the machine rendering the page.
I also saw an unanswered question from 2+ years ago that indicates that DevTools timing might not even be reliable, adding another layer of confusion.
Ultimately, I'm trying both to understand what the "Content Download" bar represents and to find a way to reduce its time.
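Since the comparison is already HAR-based, a small Node.js sketch can pull that phase out of a capture directly. As far as I can tell, "Content Download" corresponds to the timings.receive field in the HAR 1.2 spec (milliseconds spent receiving the response body); the file name below is hypothetical.

```js
// Hypothetical HAR file exported from DevTools (Network panel > Export HAR).
const har = require('./slow-page.har.json');

// Flag every request whose "Content Download" (timings.receive) phase
// took longer than one second.
for (const entry of har.log.entries) {
  const { receive } = entry.timings;
  if (receive > 1000) {
    console.log(`${entry.request.url}: content download took ${receive} ms`);
  }
}
```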
Related
This may sound like a newbie question but I looked for an answer and couldn't find one.
So, what happens exactly when one hits the refresh button? The result is that all elements of the page, including the page itself, are refreshed; that much is clear. But the question is, if an element, say an image, hasn't changed, is it loaded again from the server?
I assume it's not the case. It would be plain stupid. But I couldn't find a place that clearly and authoritatively states this, so I'd be grateful if somebody could provide that. Also, is this the standard behavior across all browsers, or are there exceptions?
Update: I have found some useful info here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3
I'm not going to give this as an answer because it isn't about refreshing but about validating a stale cache entry. But I really see no reason that refreshing wouldn't work on the same principles. I guess a hard refresh will fetch the resource no matter what, and a soft refresh will only get entries that have changed. It would appear to be the reasonable thing to do. Hopefully somebody more knowledgeable will confirm this.
When a webpage is reloaded normally (Cmd+R on a Mac), the browser pulls the page from its cache, which is its short-term memory. If you force a hard reload, or empty the cache (Cmd+Alt/Option+R on a Mac), then the page is re-downloaded from the server. This is so the computer can save storage space and bandwidth. So, in a nutshell, pressing the refresh button pulls the page out of the browser's short-term memory. Here's a very good explanation if you want a much more in-depth analysis.
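To make the soft/hard distinction concrete, here's a minimal JavaScript sketch using the Fetch API's cache option, whose modes roughly mirror the two kinds of reload described above (the URL is just a placeholder):

```js
// Run inside an async function (or a module with top-level await).

// Soft reload: revalidate with the server via a conditional GET.
// If the cached copy is still current, the server answers
// "304 Not Modified" and the body is not re-downloaded.
await fetch('https://example.com/logo.png', { cache: 'no-cache' });

// Hard reload: ignore the local cache and re-download from the server.
await fetch('https://example.com/logo.png', { cache: 'reload' });
```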
I made a Chrome extension named Newsprompt: Personal Breaking News (gcajgpbafhkbkdpbaaipjoiajnangjhp), and it was recently taken down for breaking the "single purpose Chrome extensions policy". It shows breaking news in a new tab and keeps learning about user preferences so it can push the articles (not news) the user is interested in. No ads, no hidden functionality. That is all the extension does. Since it was taken down, I have constantly been following up with
chromewebstore-dev-support#google.com and have removed a lot of features (personalization being one of them), but all they come back with is exactly the same copy-pasted lines from the docs https://developer.chrome.com/extensions/single_purpose saying that it doesn't comply with their standards. It feels like talking to a bot whose only response is "it doesn't comply with standards". So my questions are:
1. Have any developers faced the same issue? (I have seen a couple of developers complaining on Google Groups, but no one getting a solution.) If yes, how did they solve or escalate the problem? (It feels like six months of my team's effort wasted for nothing.)
2. I have tried https://support.google.com/chrome_webstore/contact/developer_support but got no response.
3. Any further leads will be appreciated.
Hello StackOverflow community,
I've just started having an issue with hyperlinks stored within an MS-Access table not behaving as expected.
I have a small database which, among other things, records links to documents hosted on a company Sharepoint site. Until a few days ago, all was working fine with both the database and the hyperlinks.
For some reason, within the last few days, whenever I (or any of my users) click on these hyperlinks through an Access form (or me clicking directly from the tables), I am getting strange behavior:
Clicking the link does open a new instance of the default browser, as desired. And that browser does navigate to the company Sharepoint site. But none of the links actually open the specific document that they are intended to point to.
Instead, all links are bringing up a general file/folder menu within the Sharepoint site. It is almost as if these links point to a non-existent file within an existing folder.
The very strange part is, if I "edit" any of the hyperlinks in my database and simply select and copy the "address" text from within the edit-hyperlink window, I will always immediately pull up the correct desired document if I just paste the address directly into a new browser window.
I would have thought that this type of cutting/pasting would necessarily be equivalent to simply clicking the link. But that is obviously not the case.
I feel like I can safely rule out the possibility that any changes to the Sharepoint site itself would be causing my issue with simply clicking the links (otherwise cutting/pasting the addresses would not bring up the correct documents), but I have to admit I am simply stumped as to why just clicking the hyperlinks directly used to work, but no longer does.
I don't believe there is any code or other relevant information that might be helpful that I am neglecting to include, but would be eager to provide any clarifications/etc if anyone has any idea as to what might be happening here.
Thanks in advance for any thoughts or suggestions!
~JQN
EDIT: I had deleted this question because the issue described above had simply stopped happening. I was unable to explain why, but was also unable to reproduce the issue again after a certain point within a day or two of making the original post.
Since then, the issue has returned. I've been able to determine the following:
As described in my note below, when I am getting this odd link behavior, I do NOT get the standard warning from MS-Access indicating that hyperlinks may be harmful, etc.
Strangely, simply opening up a file dialog/file picker and then navigating through that dialog to any location on the (sync'ed) Sharepoint site seems to make the problem go away. I do not need to actually select or open any location on Sharepoint; simply navigating within the sync'ed folder structure seems to do the trick.
Once this happens, all links behave as intended again (i.e. they open the correct linked file directly instead of landing on the root folder page). The MS-Access hyperlink warning returns as well. The file/link behavior will remain in that state for several days. Only after, I'd estimate, a week or more of inactivity since the file dialog was last run will the issue return.
FURTHER EDIT: New update...Enough time has passed so that the issue is recurring again. As suspected, links to pages outside of Sharepoint are not affected, and open as expected without issue. Once again, the standard Microsoft hyperlink warning dialog is not coming up for any links.
Obviously, now that I've found the work-around with the file dialog, it's easy enough for me to fix the issue when it arises. I'm hoping that this rings a bell with somebody, though, and perhaps one of you could point me in the right direction for a more complete fix for my users.
Thanks again for any help with this!
YET ANOTHER EDIT: OK... Based upon all the things I've learned in the last couple of weeks (as captured in this post and the comments below), I was about to delete this question and re-post it as "Why is Sharepoint redirecting my URL requests from MS-Access?" As I tried to search the forum to make sure that that question hasn't already been asked, I stumbled across some info that I think gets at the underlying issue:
It looks like this is related to the (very opaque) way that Office processes URL requests. It apparently doesn't simply open the document at the specified link; it first "pre-tests" (I suppose that's the right word) the URL by sending a "Microsoft Office Protocol Discovery" request.
Apparently, it is possible for Sharepoint to somehow not like the particulars of that MOPD request, and if that happens, then Sharepoint redirects to the file directory page -- and that directory page ends up being opened in the browser instead of the intended link/document.
Again, this only happens sometimes and not others. When it does happen, I've found a clumsy workaround that will correct the issue for about a week or so. I can't reproduce the issue during that week; I just have to wait for the workaround to expire (I obviously don't fully understand why my clumsy workaround works).
It doesn't seem possible to manipulate the particulars of the MOPD request. If possible, I'd love to be able to dispense with MOPD entirely, since I want all the files I'm linking to via Access to be opened as read-only anyway. Unfortunately, I don't think that that is possible either.
I've found some info on this in another SO thread HERE. I still am not quite at the point where I feel I'm ready to submit an answer to this question, but I have some ideas as to what sorts of things may function as an acceptable workaround.
It would be helpful if anyone had any ideas as to how I might be able to reproduce the issue on demand, rather than simply waiting another week for whatever keys/cookies/settings/etc to expire again. I'd need to implement any possible solutions entirely on the Access side of things if possible, rather than on the Sharepoint/server side. Thanks again for any suggestions!
I'm posting this as an answer now, but will avoid accepting it until I've had a chance to verify that it actually works.
I am inserting some code that will run on DB startup. It will open an invisible form that has an Access WebBrowser control included. I'll have that control navigate to a specific file on the Sharepoint site. I believe that it is actually this action that somehow causes the link problems to resolve for a week or so.
This form will run silently in the background, navigate to the Sharepoint file location, and then close. This should hopefully refresh whatever characteristics of the MOPD request are present when the links work properly (and are absent while they aren't working properly).
In essence, I'm hoping this approach will have the effect of resetting my (approximately) one week window of desired link functionality to start anew each time the database is opened. In other words, I'm thinking that this will work, although I still don't fully understand why.
Wish me luck!
;)
I've read quite a few posts about the angularjs html5Mode, including this one, and I'm still puzzled:
What is it good for? I can see that I can use http://example.com/home instead of http://example.com/#/home, but the two chars saved are not worth mentioning, are they?
How is this related to HTML5?
The answer links to a page showing how to configure a server. It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't it lead to needlessly increased traffic?
Update after Peter Lyons's answer
I started to react in a comment, but it grew too long. His long and valuable answer raises some more questions of mine.
option of rendering the actual "/home"
Yes, but that means a lot of work.
crazy escaped fragment hacks
Yes, but this hack is easy to implement (I did it just a few hours ago). I actually don't know what I should do in the case of html5Mode (as I haven't finished reading this SEO article yet).
Here's a demo
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks
But the same holds for the hashbang, too. Moreover, a user following an external link to http://example.com/#!/home and then another link to http://example.com/#!/foreign will always be served the same page via the same URL, while in html5Mode they'll be served the same page (unless the burdensome optimization you mentioned gets done) via a different URL (which means it has to be loaded again).
but the two chars saved are not worth mentioning, are they?
Many people consider the URL without the hash considerably more "pretty" or "user friendly". Also, a very big difference is that when you browse to a URL with a hash (a "fragment"), the browser does NOT include the fragment in its request to the server, which means the server has a lot less information available to deliver the right content immediately. Compare that to a regular URL without any fragment, where the full path "/home" IS included in the HTTP GET request to the server; thus the server has the option of rendering the actual "/home" content directly instead of sending the generic "index.html" content and waiting for javascript on the browser to update it once it loads and sees the fragment is "#home".
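A quick illustration of that point (example.com is a placeholder): the fragment never leaves the browser.

```js
// What the server receives for each URL style:
//   http://example.com/home    ->  GET /home HTTP/1.1
//   http://example.com/#/home  ->  GET / HTTP/1.1   (fragment dropped)
const url = new URL('http://example.com/#/home');
console.log(url.pathname); // "/"      -- the only path the server sees
console.log(url.hash);     // "#/home" -- visible to client-side JS only
```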
HTML5 mode is also better suited for search engine optimization without any crazy escaped fragment hacks. My guess is this is probably the largest contributing factor to the push for HTML5 mode.
How is this related to HTML5?
HTML5 introduced the necessary javascript APIs to change the browser's location bar URL without reloading the page and without just using the fragment portion of the URL. Here's a demo
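Concretely, that means history.pushState and the popstate event. A minimal sketch (the state object and path are made up for illustration):

```js
// Change the location bar to /home without reloading the page.
history.pushState({ view: 'home' }, '', '/home');

// Fires when the user navigates back/forward; re-render client-side.
window.addEventListener('popstate', (event) => {
  console.log('render view for', location.pathname, event.state);
});
```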
It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't it lead to needlessly increased traffic?
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks a link onto the site OR does a manual browser reload. Otherwise, the user can navigate around the app clicking like mad and in some cases the server will see ZERO traffic from that. More commonly, each click will make at least one AJAX request to an API to get JSON data, but overall the approach serves to reduce browser<->server traffic. If you see an app responding instantly to clicks and the URL is changing, you have HTML5 to thank for that, as compared to a traditional app, where every click includes a certain minimum latency, a flicker as the full page reloads, input forms losing focus, etc.
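For the server side of that, here's a hedged sketch assuming Express 4 (the routes and file names are illustrative, not a prescribed setup): real assets and API routes are handled normally, and every other URL falls through to the same single-page-app shell.

```js
const express = require('express');
const path = require('path');

const app = express();

// JSON endpoint hit by the app's AJAX calls (placeholder route).
app.get('/api/items', (req, res) => res.json([]));

// Static assets (JS, CSS, images) are served as real files.
app.use(express.static(path.join(__dirname, 'public')));

// Every other path returns the app shell; the client-side router takes over.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);
```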
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
A good implementation will use HTML5 when available and fall back to fragments otherwise, but work fine in any browser. But in any case, the web is a moving target. At one point, everything was full page loads. Then there was AJAX and single page apps with fragments. Now HTML5 can do single page apps with fragmentless URLs. These are not overwhelmingly different approaches.
My feeling from this back and forth is like you want someone to declare for you one of these is canonically more appropriate than the other, and it's just not like that. It depends on the app, the users, their devices, etc. Twitter was all about fragments for a good long while and then they realized their mobile users were seeing too much latency and "time to first tweet" was too long, so they went back to server-side rendering of HTML with real data in it.
To your other point about rendering on the server being "a lot of work", it's true, but some consider it the "holy grail" of web app development. Look at what airbnb has done with their rendr framework. See also Derby JS. My point being, if you decide you want rendering in both the browser and the server, you pick a framework that offers that. Not that you have a lot of options to choose from at the moment, granted, but I wouldn't advise hacking together your own.
Why, on sites like Stack Overflow, TechCrunch, Smashing Magazine, etc., are the page titles (i.e. the text at the top of the page) clickable URLs that redirect to the same page that the user is on?
I believe that this does not affect SEO, as search engines ignore internal links.
Is it for usability purposes?
It allows you to right-click on it and choose Copy link location (or equivalent) so that you can easily paste it in an email, for example. This requires less time than copying it from the location bar, and some people run their browser without a visible location bar to save precious screen space.
More than anything, it provides a link to the default state of the page.
For example, for this very Stack Overflow page, a user can get here through any of the following non-default links:
Why are Page Titles on some websites (including Stack Overflow) Clickable URLs?
https://stackoverflow.com/questions/904381#foobar
https://stackoverflow.com/questions/904381?sort=date
While the default link is actually:
Why are Page Titles on some websites (including Stack Overflow) Clickable URLs?
If users are unable to get to the default state, they end up bookmarking or emailing the non-default link, which propagates to new users, and the problem just multiplies.
Clicking on the title link of the post will restore the default state: it strips off any query parameters (?sort=date) and named anchors (#foobar), and fixes the story slug (/why-are-page-titles/...).
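In code terms, the title link effectively does this to the current address (reusing the example URLs above):

```js
// Strip the query string and fragment, keeping only the default state.
const current = new URL('https://stackoverflow.com/questions/904381?sort=date#foobar');
const canonical = current.origin + current.pathname;
console.log(canonical); // "https://stackoverflow.com/questions/904381"
```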
I use it to refresh the page (yes, I could press F5 too).
Yes, Jakob Nielsen has stated that linking to yourself is a web design mistake (no. 10), and I agree.
More reading here (no. 10).
The URL redirects to the beginning of the page, in case you arrived on the page via a specific answer (all answers are also clickable URLs). This way, you get the URL of the question, not of an answer.
Not sure if this is why they did it, but I find it useful to siphon off tabs:
If I look at something briefly and think "I'd like to read this thoroughly in a minute but continue with what I was doing before", I can do this:
I can right-click the link, click "Open in a new tab", and then click "Back" and continue nicely.
It's called a permalink... The name implies what it is: a permanent link.
It's the same reason that each answer on SO has a link you can copy.
I think it inherits the behavior from a CMS, where each question is a node that has zero or more answers. Now suppose you search for Apache questions.
The results are displayed one after another.
In CMS terms this is called a teaser: you get a full page with lots of questions, where each question's title links to the full article (question + answers).
It's not a must, but you'll find it on most sites that use a CMS.
As long as it does not harm anyone, why would people be against it?
I prefer to have those links available, as hitting refresh reloads all elements of the page, whereas following the direct link (to the same page) uses cached elements.
Makes sense to me; I find it useful! I have a lot of tabs open, so I just right-click the link and go back.
To me this makes perfect sense; from an SEO view this is also good! It forces the search engine to read the page because it's linked.
UX-wise, clickable titles which don't bring the user anywhere may seem unusable, though that leads us into the realm of Affordance Theory and whether or not the affordance is perceptible to users.
For example, clickable page titles may provide:
A simple method for bookmarking a page to the desktop from a browser window.
A context menu with additional choices allowing users to share a blog post or article.
A method for updating the location bar so it's pointing at the canonical URL of the page.
For the sites you mentioned, however, it seems more likely the page titles were turned into hyperlinks using absolute URLs so analytics tooling could pick up inbound link clicks (those sending the referer info), resulting in DMCA takedown notices when people copied work and didn't update the URLs.
You'd be surprised what people do when they're being incentivized to produce work contractually.