I'm having some issues with Chrome canceling some HTTP requests, and I suspect cached authentication data to be the cause. Let me first write down some important facts about the application I'm working on.
I was using the Basic Authentication scheme for some time to guard several services and resources in my web app.
In the meantime I was using/testing the app heavily using Chrome with my main Google Account fully synced. Most frequently I was using my name - "lukasz" - as the username in Basic Auth.
Recently I have switched my application to use Digest Authentication.
Now, some of the HTTP requests I'm making are failing with status=failed for no apparent reason. It only happens when I'm using the user "lukasz"; if I enter some other unique username, there is no problem.
I looked everywhere in the backend and frontend and couldn't trace the issue to our code. I can easily reproduce this with user "lukasz" every time. So I reverted my code to Basic Auth (without touching the rest of the app) and the problem was gone.
That led me to think that there is something wrong with the cached passwords. So I cleared the cache in Chrome, but that didn't help. After several hours of analyzing the issue I decided to make sure I was running a fresh instance of Chrome, so I reinstalled it (deleting the disk data along the way). TADAAA! The problem was gone and I couldn't reproduce it anymore.
Then I synchronized my Google Account with this newly installed Chrome, and after a short while the requests to my app started failing again!! So I took a deeper look at this (cleaning the profile data from disk and redoing all the steps), and indeed it looks like the problem starts as soon as my account is synced with the cloud!
Yes, I know it sounds dodgy. It sounds ridiculous. It sounds stupid. But I am almost sure that those two problems are somehow related (failing requests and account sync).
My idea is this: Chrome somehow remembered that I was using "lukasz/my-pass" with Basic Auth for certain services. After I switched to Digest Auth the same combination of credentials (lukasz/my-pass) is now acting funny. Perhaps under the hood Chrome still thinks that this is Basic Auth and cancels requests when it learns otherwise?
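To make the hypothesis concrete, here's a minimal Node.js sketch of why the same credentials look completely different on the wire under the two schemes (the realm, nonce, and URI are made-up values, not from my app):

```javascript
// Hypothetical illustration of the two schemes with the same credentials.
const crypto = require('crypto');
const md5 = (s) => crypto.createHash('md5').update(s).digest('hex');

const user = 'lukasz';
const pass = 'my-pass';

// Basic: a static, replayable header; the browser can send it
// preemptively from its credential cache without any challenge.
const basicHeader = 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64');

// Digest (RFC 2617, no qop): the response is derived from a per-challenge
// server nonce, so a cached Basic-style reply is useless here.
const realm = 'myrealm';                             // made up
const nonce = 'dcd98b7102dd2f0e8b11d0f600bfb0c093';  // issued by the server
const uri = '/protected/resource';                   // made up
const ha1 = md5(`${user}:${realm}:${pass}`);
const ha2 = md5(`GET:${uri}`);
const digestHeader =
  `Digest username="${user}", realm="${realm}", nonce="${nonce}", ` +
  `uri="${uri}", response="${md5(`${ha1}:${nonce}:${ha2}`)}"`;

console.log(basicHeader);
console.log(digestHeader);
```

If Chrome preemptively replays the cached Basic header against an endpoint that now expects a Digest response, it at least seems plausible that the mismatch surfaces as a canceled request.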
UPDATE:
I've done some low-level debugging with chrome://net-internals/ and it appears that the problem occurs while reading a cache entry. This seems to support my initial assumption.
I did some investigation and found this article. Apparently, always adding a "Last-Modified" header to my HTTP response solved the issue in Chrome (I'm still having some problems in FF, but that's off topic).
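In case it helps anyone, here's roughly what the fix amounts to (a minimal Node.js sketch, not my actual backend; a real app would use the resource's actual modification time):

```javascript
// Minimal sketch: always attach a Last-Modified header so the browser's
// cache entry has a validator to revalidate against.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  // This is the header whose absence seemed to trip up Chrome's cache;
  // in a real app, use the resource's real modification time.
  res.setHeader('Last-Modified', new Date().toUTCString());
  res.end('<h1>hello</h1>');
}).listen(8080);
```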
However, that still doesn't settle the matter entirely: why were the requests failing in the first place?
You could try using incognito mode and see what happens. It may give you some hints without having to clear the cache or reinstall Chrome.
Also take a look at How to clear basic authentication details in chrome
This may sound like a very basic question but I feel like I've tried everything.
This is a follow-up to this post I made earlier, where I resolved the issue, only for it to come back again.
To summarize, I was making some changes to the contact.css file on the contact page of my website when I noticed the changes were working offline but didn't appear online. In the above post I narrowed this down to a caching issue (others could see the changes but I couldn't).
In the above example I couldn't get my website to show up with background-color:blue - eventually it worked and I thought I'd fixed it... So I went to change the color back to normal and boom, it stopped refreshing the changes again.
So I think it's some sort of caching issue but for the life of me I can't get my cache to clear properly so that I can refresh and see the changes.
Here are the things I have tried already:
Clearing cache (many times) on Chrome, Firefox, and Opera
Hard refresh on Chrome, Firefox, and Opera
Disabling cache through dev tools on Chrome and Firefox (this worked initially then stopped working when I re-updated the website)
Checked multiple times that the CSS file uploaded correctly and the file path was correct. This was confirmed because the correct changes were seen by other people.
Flushed my DNS
Changed from my ISP's DNS to Google's 8.8.8.8 + 8.8.4.4
I'm using HostGator to host my website, I'm wondering at this point whether it's something to do with them? I really just have no idea at this point.
(Screenshots: what I see online, versus what I should be seeing - which is what I do see on the offline version of my website.)
I noticed you said "I'd really like to get to the bottom of the underlying issue", so I figured I'd write an answer to provide a few options (and if anyone wants me to add others, please feel free to add a comment). Overall, though, determining your root cause is likely much harder than just solving your overall problem, but let's start with the possible causes off the top of my head:
Multiple CDN servers taking a while to update so some are returning the old data (your current session) and some are returning new (incognito)
Server-side session caching, so when you reload the page within one HTTP session you get back the same content (I've seen this in product search queries, for example)
The solution to this is relatively simple though: it's called cache busting. Basically, every time you update your source code, add a unique key to the query string, the file name, or something else that makes the URL unique. For example, for your CSS you can link https://path/to.css?v2.0.1 and just keep increasing the version number as you go. If you use webpack for your build outputs, it has a content-hash variable that you can use as a token in the file names (see the sketch below).
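For instance, here's a minimal webpack 5 sketch of the content-hash approach (the entry point, plugin choice, and file names are placeholders for your own setup):

```javascript
// webpack.config.js - cache busting via content hashes. When the source
// changes, [contenthash] changes, so browsers and CDNs see a brand-new
// file name and cannot serve a stale copy.
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.[contenthash].js', // e.g. bundle.3f9a1c8b.js
    path: path.resolve(__dirname, 'dist'),
    clean: true, // drop previously hashed files on each build
  },
  module: {
    rules: [{ test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }],
  },
  plugins: [
    // emits e.g. main.d41d8cd9.css; reference it from your HTML
    new MiniCssExtractPlugin({ filename: '[name].[contenthash].css' }),
  ],
};
```

Any time the source changes, the emitted file name changes, so there is no stale URL left to be cached.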
As for the CDNs possibly caching things out of date... the content-hash solution will solve that problem too, as it's an entirely different file name, so the CDN will fetch it from the origin if it doesn't have it in its cache. I'm unsure whether the URL version query parameter will do the same; maybe someone else could shed some light on that.
Have you tried using Incognito in Chrome?
Whenever I send a GET request to my web app using Chrome, two identical requests get sent to the server according to my Apache access log (not always, but most of the time; I can't reproduce it reliably, and it's not the favicon), although only one is shown in the Chrome dev tools. I deactivated all extensions and it's still happening.
Is this https://news.ycombinator.com/item?id=1872177 true? Is it a Chrome feature, or should I dig deeper within my app to find the bug?
I think it's even worse than that. My experience here (developing with Google App Engine) is that Chrome makes all kinds of extra requests.
This is possibly due to the option that is in the Settings, checked by default:
Predict network actions to improve page load performance
Here is a really weird example: my website's page runs a notification check every 15 seconds (done in JavaScript). Even after closing all tabs related to my website, I see requests coming from my IP: some for random pages, but also the notification-check request. To me that means that Chrome has a page of my website running in the background and is even evaluating its JavaScript.
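For reference, the check is nothing exotic; it's roughly this shape (the endpoint and handler names here are made up):

```javascript
// Roughly what the page runs: poll a notifications endpoint every 15 s.
// If Chrome keeps a hidden, prerendered copy of the page alive, this
// timer keeps firing even after every visible tab is closed.
function updateNotificationBadge(data) {
  console.log('unread notifications:', data.unread); // placeholder UI update
}

setInterval(function () {
  fetch('/notifications/check', { credentials: 'same-origin' })
    .then(function (res) { return res.json(); })
    .then(updateNotificationBadge)
    .catch(function () { /* ignore transient network errors */ });
}, 15000);
```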
When I request a page, I pretty much always get another request for one of the links in that page. And it also requests the resources of those extra pages (.css, .js, .png files). Lots of requests going on.
I have seen the same behavior with the development server that runs locally.
Happens also from another computer / network.
Doesn't happen with Firefox.
Also, see What to do with chrome sending extra requests?
So recently we had a bunch of legacy applications moved to a new server and, not surprisingly, a bunch of stuff blew up.
This particular issue has to do with a CF 8 application. Users hit a page over HTTPS (e.g. https://www.mysite.com/default.cfm). The form action is something like "/action.cfm?variable=true". When the form is submitted, though, the page they land on is http://www.mysite.com/action.cfm?variable=true. Switching from HTTPS to HTTP is causing sessions to be lost.
One possible cause is that none of the CF 8 hot fixes have been applied to the new server yet, and the JVM also needs to be updated. Could either of these be the culprit? We're planning on addressing both in the next few days, but I'd like to know whether it's just wishful thinking that this will fix the problem.
I'd appreciate any help.
I've built a web server using Chrome Packaged Apps. The problem I see repeatedly is that chrome.socket.accept() and chrome.socket.write() don't invoke their callback functions. It usually works more or less reliably if the request rate is less than one request per second. If I go above that, I start seeing errors or missing callbacks.
I did similar tests with the sample "webserver" app built by Google (https://github.com/GoogleChrome/chrome-app-samples/tree/master/webserver). It has the same problem. It usually takes less than 100 requests before the web server stops responding. The easiest way to reproduce the problem is to use the Chrome browser as a client and hold the F5 key for a few seconds.
It would be desirable to have a sample app that demonstrates how to build a reliable web server using chrome.socket. So far I have tried several different workarounds that monitor the situation from the app itself and restart the socket when it stops working (a sketch of one such watchdog is below), but it's not easy, because there is no reliable way to check the status of the connection, or the status of the last operation, when a callback is not fired. I tried the getInfo() method, but it always returns connected=true regardless of the situation.
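For completeness, here's a sketch of the kind of watchdog I mean (the port, timeout, and guard logic are arbitrary; note that it will also recycle a listener that's merely idle, which is exactly why the missing status API makes this so unsatisfying):

```javascript
// Sketch of a watchdog around chrome.socket.accept(): if the callback
// never fires, assume the socket is wedged, destroy it, and rebuild.
var ACCEPT_TIMEOUT_MS = 5000; // arbitrary; also fires on an idle server

function startServer(port) {
  chrome.socket.create('tcp', {}, function (createInfo) {
    var serverId = createInfo.socketId;
    chrome.socket.listen(serverId, '0.0.0.0', port, 50, function (result) {
      if (result < 0) { // listen failed, retry from scratch
        chrome.socket.destroy(serverId);
        setTimeout(function () { startServer(port); }, 1000);
        return;
      }
      acceptLoop(serverId, port);
    });
  });
}

function acceptLoop(serverId, port) {
  var dead = false;
  var watchdog = setTimeout(function () {
    dead = true; // accept() went quiet: tear down and rebuild
    chrome.socket.destroy(serverId);
    startServer(port);
  }, ACCEPT_TIMEOUT_MS);

  chrome.socket.accept(serverId, function (acceptInfo) {
    clearTimeout(watchdog);
    if (dead) return; // the watchdog already recycled this socket
    if (acceptInfo.resultCode === 0) {
      handleConnection(acceptInfo.socketId);
    }
    acceptLoop(serverId, port); // keep accepting
  });
}

function handleConnection(clientId) {
  var head = 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok';
  var buf = new ArrayBuffer(head.length);
  var view = new Uint8Array(buf);
  for (var i = 0; i < head.length; i++) view[i] = head.charCodeAt(i);
  chrome.socket.write(clientId, buf, function (writeInfo) {
    chrome.socket.destroy(clientId); // close whether or not the write succeeded
  });
}

startServer(8080);
```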
I saw this on Windows 7 and Chrome OS (Chromebook).
Just an update on this. According to this the issue is now fixed.
There are still other problems with the sample web server application. I noticed that I could make the sample app lock up by holding down Ctrl-R in the browser. I wrote a more robust one that you can use here: https://github.com/kzahel/web-server-chrome
This is quite complex to explain, but I keep getting injection attacks from another website just by clicking on a link. Oddly, though, it seems Google Chrome is the one generating them.
To elaborate, I have this site: http://byassociationonly.com and I have this site: http://dev.byassociationonly.com/example (I can't name the site as it's a client site).
Whenever I click on any of the links on http://byassociationonly.com, in Google Chrome, on my machine, none of them work and I get an injection attack (I am using a plugin, WordPress Firewall, that sends me email notifications when something like this happens).
The notification I receive is this: http://cl.ly/image/2U111T0m2X35
I just don't understand this error at all; I've never had a problem before.
I've even removed the code within that page it's referencing, which is from single.php, yet the problem still exists. I thought there were conflicts with my MAMP servers running locally, but even with them switched off the problem persists, and localhost:8888 isn't referenced at all within wp_config.
However if I do this within Firefox, I don't get any notifications at all and the links work fine.
Has anybody got any ideas on how to identify where the problem lies, and how to fix it?
As requested, here's the code on the single.php page that the error is referring to: http://pastebin.com/QKqtLXQi
Did you recently install any Chrome extensions? I have run into a similar problem before, and after hours of troubleshooting it turned out to be an extension blocking some stuff. The fact that it works fine in FF makes it feel like a Chrome-specific issue.