So we recently had a bunch of legacy applications moved to a new server, and not surprisingly a bunch of stuff blew up.
This particular issue involves a CF 8 application. Users hit a page over HTTPS (e.g. https://www.mysite.com/default.cfm). The form action is something like "/action.cfm?variable=true". When the form is submitted, though, the page they land on is http://www.mysite.com/action.cfm?variable=true. That switch from HTTPS to HTTP is causing sessions to be lost.
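One check that might be worth running, assuming (and this is just a guess) that something on the new server or a proxy in front of it is redirecting the submission back to plain HTTP: request the form target over HTTPS from the command line and look at the response headers.

    curl -I "https://www.mysite.com/action.cfm?variable=true"

A "Location: http://..." header in the response would point at a server/proxy redirect rather than at the missing hot fixes or the JVM version.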
One possible cause I can think of: none of the CF 8 hot fixes have been applied to the new server yet, and the JVM version also needs to be updated. Could either of these be the culprit? We're planning to address both in the next few days, but I'd like to know whether it's just wishful thinking that doing so will fix the problem.
I'd appreciate any help.
Good day,
Since moving house and having my new internet (and new ISP) installed, some web pages only half-load and I constantly get Cloudflare warnings asking me to confirm that I am not a robot. I did not have this problem with the previous ISP. I have attached some examples of the issues I'm having. This does not appear to be a Chrome-only thing, as I get the same results with IE.
1. The first image shows the web page loading, but not giving all the information.
2. The second image shows the warnings that constantly pop up.
3. The third image is just an example of buttons that won't load on clicking, and some of those buttons don't even let me click them at all.
Issue:
Certain pages won't fully load, certain buttons can't be clicked, and I keep getting this Cloudflare prompt.
Things I have tried:
- Restarting computer
- Reinstalling chrome
- DNS flushing using the command prompt (the exact commands are shown after this list)
- Changing DNS to the google DNS
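For reference, the flush and lookup commands in question are the standard Windows ones (example.com below is just a placeholder domain):

    REM Flush the local DNS resolver cache
    ipconfig /flushdns
    REM Check that a lookup works when sent straight to Google's DNS
    nslookup example.com 8.8.8.8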
Any and all help is greatly appreciated. Thank you.
I'm guessing your computer is the same, but your router is new/different (for the new ISP)? Maybe your computer has existing proxy settings that were specific to your old ISP and won't work with the new one.
Search 'proxy' in Windows search and disable any custom proxies.
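A quick sketch of how to check for a machine-wide proxy from the command line as well (note that Chrome follows the "Internet Options" proxy, while these netsh commands show and clear the separate WinHTTP one, so it's worth looking at both):

    REM Show the machine-wide WinHTTP proxy, if any, then clear it (elevated prompt)
    netsh winhttp show proxy
    netsh winhttp reset proxy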
I had the same issue when I moved into my new house. When I researched some more and talked with my tech-savvy cousin, he said it could be because the internet connection isn't strong (some of the data gets lost along the way) or because the ISP isn't allowing you to do the things you want to do. Sometimes this happens when you are requesting a lot of data; it's a security feature. Call them and ask why this is happening. Maybe it's something different.
Also, check whether you have a VPN on. It could also disrupt the websites.
Hope this was useful!!
Try connecting a different device (like an Xbox, a PlayStation, or your phone) and see what happens. If those devices connect correctly, then the problem is with your PC, laptop, or Apple device.
Apparently the issue had to do with the IP address. They gave me an IP of my own and all was well in the world.
This may sound like a very basic question but I feel like I've tried everything.
This is a follow-up to this post I made earlier, where I resolved the issue, only for it to come back again.
To summarize: I was making some changes to the contact.css file on the contact page of my website when I noticed the changes were working offline but didn't appear online. In that earlier post I narrowed this down to a caching issue (others could see the changes but I couldn't).
In the example above I couldn't get my website to show background-color: blue - eventually it worked and I thought I'd fixed it... So I went to change the color back to normal and, boom, it stopped picking up the changes again.
So I think it's some sort of caching issue but for the life of me I can't get my cache to clear properly so that I can refresh and see the changes.
Here are the things I have tried already:
Clearing cache (many times) on Chrome, Firefox, and Opera
Hard refresh on Chrome, Firefox, and Opera
Disabling cache through dev tools on Chrome and Firefox (this worked initially then stopped working when I re-updated the website)
Checked multiple times that the CSS file uploaded correctly and the file path was correct. This was confirmed because the correct changes were seen by other people.
Flushed my DNS
Changed from my ISP's DNS to Google's 8.8.8.8 + 8.8.4.4
I'm using HostGator to host my website, and I'm wondering at this point whether it's something to do with them. I really just have no idea anymore.
Here's what I see online:
Here's what I should be seeing and what I do see on the offline version of my website:
I noticed you said "I'd really like to get to the bottom of the underlying issue", so I figured I'd write an answer to provide a few options (and if anyone wants me to add others, please feel free to add a comment). Overall, though, determining your root cause is likely much harder than just solving the overall problem, but let's start with the possible causes I can think of off the top of my head:
Multiple CDN servers taking a while to update, so some are returning the old data (your current session) and some are returning the new data (incognito)
Server-side session caching, so that when you reload the page within one HTTP session you get back the same content (I've seen this with product search queries, for example)
The solution is relatively simple though: cache busting. Basically, every time you update your source code, add a unique key to the query string, the file name, or something else that makes the URL unique. For example, for your CSS you can link https://path/to.css?v2.0.1 and just keep increasing the version number as you go. If you use webpack for your build outputs, it has a content-hash variable that you can use as a token in the file names.
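If you do go the webpack route, a minimal sketch of the content-hash naming might look like this (it assumes webpack 5 with mini-css-extract-plugin; adapt it to your actual build):

    // webpack.config.js (sketch): hashed file names so every deploy busts the cache
    const MiniCssExtractPlugin = require('mini-css-extract-plugin');

    module.exports = {
      entry: './src/index.js',
      output: {
        filename: '[name].[contenthash].js',      // e.g. main.3b2f1c9d.js
      },
      module: {
        rules: [
          { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] },
        ],
      },
      plugins: [
        new MiniCssExtractPlugin({
          filename: '[name].[contenthash].css',   // e.g. main.8a17d02e.css
        }),
      ],
    };

The query-string variant (to.css?v2.0.1) needs no build tooling at all; you just bump the number by hand each time you upload a changed file.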
As for the CDNs possibly caching things out of date... the content-hash solution will solve that problem too, because it produces an entirely different file name, so the CDN will fetch it from the origin if it doesn't have it in its cache. I'm unsure whether the URL version query parameter will do the same; maybe someone else can shed some light on that.
Have you tried using Incognito in Chrome?
I'm having some issues with Chrome canceling some HTTP requests and I'm suspecting cached authentication data to be the cause. Let me first write down some important factors about the application I'm writing.
I had been using the Basic Authentication scheme for some time to guard several services and resources in my web app.
In the meantime I was using/testing the app heavily in Chrome with my main Google Account fully synced. Most frequently I was using my name, "lukasz", as the username in Basic Auth.
Recently I have switched my application to use Digest Authentication.
Now, some of the HTTP requests I'm making are failing with status=failed for no apparent reason. It only happens when I'm using the user "lukasz"; if I enter some other unique username, there is no problem.
I looked everywhere in the backend and frontend and couldn't find the issue anywhere in our code. I can easily reproduce this with user "lukasz" every time. So I reverted my code to Basic Auth (without touching the rest of the app) and the problem was gone.
That led me to think there is something wrong with cached passwords. So I cleared the cache in Chrome, but that didn't help. After several hours of analyzing the issue I decided to make sure I was running a fresh instance of Chrome, so I reinstalled it (deleting the disk data along the way). TADAAA! The problem was gone and I couldn't reproduce it anymore.
Then I synchronized my Google Account with this newly installed Chrome, and after a short while the requests to my app started failing again!! So I took a deeper look (cleaning the profile data from disk and redoing all the steps) and indeed it looks like the problem starts as soon as my account is synced with the cloud!
Yes, I know it sounds dodgy. It sounds ridiculous. It sounds stupid. But I am almost sure that those two problems are somehow related (failing requests and account sync).
My idea is this: Chrome somehow remembered that I was using "lukasz/my-pass" with Basic Auth for certain services. After I switched to Digest Auth the same combination of credentials (lukasz/my-pass) is now acting funny. Perhaps under the hood Chrome still thinks that this is Basic Auth and cancels requests when it learns otherwise?
UPDATE:
I did some low-level debugging with chrome://net-internals/ and it appears that the problem occurs while reading a cache entry. This seems to confirm my initial assumption.
I did some investigation and found this article. Apparently, always adding a "Last-Modified" header to my HTTP responses has solved the issue in Chrome (I'm still having some problems in FF, but that's off topic).
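For illustration, the fix boils down to always sending that header, something like this (an Express-style handler is shown only as an example; it isn't necessarily what the real backend looks like):

    // Sketch: always include a Last-Modified header so Chrome can revalidate its
    // cache entry instead of choking on it. Express is assumed purely for illustration.
    const express = require('express');
    const app = express();

    app.get('/api/data', (req, res) => {
      res.set('Last-Modified', new Date().toUTCString()); // e.g. "Tue, 15 Nov 2011 08:12:31 GMT"
      res.json({ ok: true });
    });

    app.listen(3000);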
However, it still doesn't solve my issue entirely. Why were the requests failing in the first place?
You could try using incognito mode and see what happens. It may give you some hints without having to clear the cache or re-installing Chrome.
Also take a look at How to clear basic authentication details in chrome
A web page is not loading, or is hanging. How will you debug it?
I have been asked this question a couple of times during telephone interviews, but I don't know the perfect answer.
I gave answers such as checking whether the web app is deployed properly, whether the internet connection itself might be slow, whether the JSP might have errors, checking the logs for details, and so on. But the interviewer kept asking, "These are all good checks, but what if all of these are fine, what else might be wrong?"
Also, it is not a JavaScript-specific question. I can debug the JS/jQuery code using a debugger, or by following console.log() output. But how would you debug a plain JSP page?
Can any web-application gurus at SO help?
Once you know that you can't simply get to your site in the expected way (what I call the Hail Mary Test), then you need to start from the inside out.
Because of the multiple failure points a website can have, I always create a command line environment that allows me to test the framework & DB operation independently of the web server, firewall settings, etc. This can take some fiddling depending on what you are using, but I've done this successfully with Django, WordPress, Drupal, etc.
Once I know the app itself is working, I connect with a command line client (e.g. links) to see if a client coming from localhost works as expected. This confirms that the server itself is working (at least partially). Then I test from another host on the same LAN. More than once I've seen localhost work and LAN access not work, and the problem is almost always server configuration or firewall configuration.
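A concrete version of those checks, using curl rather than links (either works; the host names and LAN address below are placeholders):

    # Does the web server answer locally?
    curl -I http://localhost/
    # The same check from another machine on the LAN (placeholder address)
    curl -I http://192.168.1.20/
    # Finally, the public host name from outside the network
    curl -I http://www.example.com/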
If all of that works, but you still can't get to your site from the internet, then it is a networking / firewall setting somewhere further up the food chain. Try to find a host that is one step farther up from where you last succeeded and test that. Lather, rinse, repeat.
As far as I know, at the current moment (late 2011), the max-connections-per-server limit remains 6. Please correct me if I am wrong. It's unfortunate that we cannot change this easily the way we can in Firefox; as far as I know, this value is hardcoded.
One of the solutions is to download the Chromium sources and rebuild them. Is there an easier solution?
Is there any tricky way to hack this without creating a dozen mirror domains?
Why I'm asking: my task is to create an HTML/JavaScript slideshow that will run inside a full-screen browser on a huge monitor hanging on the wall. The JavaScript is really complicated: it preloads photos and makes a lot of AJAX calls to my web services. If the Wi-Fi connection is slow and six photos are already loading, the AJAX calls fail and the application runs badly. I want a fast solution based on HTTP, the browser, an Ubuntu tweak, or something else, because rebuilding the JavaScript app would take days.
Off-topic: do you know of any other things that can be tweaked in my particular situation?
IE is even worse, with a limit of 2 connections per domain. But I wouldn't rely on fixing client browsers: even if you have control over them, browsers like Chrome will auto-update, and a future release might behave differently than you expect. I'd focus on solving the problem within your system design.
Your choices are to:
Load the images in sequence so that only one or two XHR calls are active at a time (use the success event from the previous image to check whether there are more images to download and to start the next request); a sketch of this is shown after this list.
Use subdomains like serverA.myphotoserver.com and serverB.myphotoserver.com. Each subdomain gets its own connection-limit pool, which means you could have 2 requests going to each of 5 different subdomains if you wanted to. The drawback is that the photos will be cached per subdomain. BTW, these don't need to be "mirror" domains; you can just add extra DNS records pointing to the exact same website/server, so you don't have the headache of administering many servers, just one server with many DNS records.
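A minimal sketch of the first option in plain browser JavaScript (the array of photo URLs and the callback name are made up for the example):

    // Preload photos one at a time so only a single image request is in flight,
    // leaving the connection pool free for the AJAX calls.
    function preloadInSequence(urls, onDone) {
      var index = 0;
      function loadNext() {
        if (index >= urls.length) { onDone(); return; }
        var img = new Image();
        img.onload = loadNext;   // start the next download only when this one finishes
        img.onerror = loadNext;  // don't stall the slideshow on a single failed photo
        img.src = urls[index++];
      }
      loadNext();
    }

    // Example usage (photoUrls and startSlideshow are placeholders):
    // preloadInSequence(photoUrls, startSlideshow);

Because the next download only starts when the previous one finishes (or fails), you never have more than one image request outstanding, which is what keeps the AJAX calls from being starved.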
I don't know that you can do it in Chrome outside of Windows -- some Googling shows that Chrome (and therefore possibly Chromium) might respond well to a certain registry hack.
However, if you're just looking for a simple solution without modifying your code base, have you considered Firefox? In the about:config you can search for "network.http.max" and there are a few values in there that are definitely worth looking at.
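For reference, the prefs I have in mind include the following (exact names vary a little between Firefox versions, so treat this as a pointer rather than a definitive list):

    network.http.max-connections
    network.http.max-persistent-connections-per-server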
Also, for a device that will not be moving (i.e. it is mounted in a fixed location), you should consider not using Wi-Fi (even a HomePlug powerline adapter would be a step up as far as latency, stability, and dropped connections go).
BTW, the HTTP/1.1 specification (RFC 2616) suggests no more than 2 connections per server:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
There doesn't appear to be an external way to hack the behaviour of the executables.
You could modify the Chrome(ium) executables, as this information is obviously compiled in. That approach brings a lot of problems with support and automatic upgrades, so you probably want to avoid it. You would also need to understand how to make the changes to the binaries, which is not something most people can pick up in a few days.
If you compile your own browser you are creating a support issue for yourself as you are stuck with a specific revision. If you want to get new features and bug fixes you will have to recompile. All of this involves tracking Chrome development for bugs and build breakages - not something that a web developer should have to do.
I'd follow #BenSwayne's advice for now, but it might be worth thinking about doing some of the work outside of the client (the web browser) and putting it in a background process running on the same or different machines. This process can handle many more connections and you are just responsible for getting the data back from it. Since it is local(ish) you'll get results back quickly even with minimal connections.