I set up a subdomain to test something I'm working on, using _test as the subdomain. The site would load, but I kept getting odd errors when trying to log in that I wouldn't normally get if the login credentials were wrong. After looking into this more, I found that the underscore prefix is what's causing the problem, which I found odd since both IIS and GoDaddy allowed me to enter it without error.
IE9 and IE10 will show the page but won't send cookies back to the server on POST. I'm not sure if there's anything else they're not doing, but that's the main thing I'm seeing.
Firefox, Opera, and Safari all work as intended: I can browse the site and log in to it, presumably without any of the quirks IE has.
Chrome doesn't load the site at all; instead it redirects me to a Google search.
Since there are three different outcomes here, does anyone know what the correct behavior should be? According to this question, my subdomain should be valid and should work, meaning Chrome and IE both have bugs.
I know underscore-prefixed subdomains work on some level because at work I'm running the JetBrains License Server and have a TXT record called _jetbrains-license-server that's been working fine for a couple of years now.
Hostnames in the context of A or AAAA records can't contain an underscore; TXT records, however, can (see here for a detailed discussion).
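To make the distinction concrete, here's a hypothetical zone-file fragment (names, addresses, and the TXT payload are invented for illustration):

; TXT (and SRV, DKIM, etc.) record names may contain underscores:
_jetbrains-license-server.example.com.  IN  TXT  "url=https://licenses.example.com"
; ...but a hostname used for an A/AAAA record may not, per the hostname
; rules in RFC 952/1123, which is why _test misbehaves in browsers:
_test.example.com.                      IN  A    203.0.113.10   ; invalid hostname
www.example.com.                        IN  A    203.0.113.10   ; valid hostname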
UPDATE IN ANSWER BELOW
Is anyone else seeing the newest couple of versions of Chrome cause issues with legacy Java applications? Just yesterday I needed to get the company's policy manager to allow downloading files from an internal unsecured server by adding our URLs to a whitelist; you can see the details of the process on the Chromium blog here. That issue was present in v90 as well.
What I'm currently experiencing due to the v91 update is as follows: my boss was trying to use a page in one of our Java 6 legacy applications and noticed that the page wouldn't return the data in any format. We checked, and he was already on v91. I was on v90 and the page worked fine; after updating Chrome to v91, I'm getting the same broken page as my boss.
I was thinking it might be something related to the CSS, but I don't have time to poke at it and redeploy the legacy app every time to test the changes. I have taken a peek at this Chromium blog post for version 91, though I don't see much that would explain why all the non-label fields are removed and the formatting of the label fields is wonky and out of place.
I'm going to investigate the Struts tile that holds the JSP code; if I find something I'll post it here for reference.
The first image below shows what one row should look like, with the header above it. As you can see in the second picture, all that remains is the header with improper formatting, and the grid is gone.
I have determined the problem to be the <table> tag. In the newest version of Chrome (v91), the table rendering engine has been rewritten (TableNG); the notes are here, and if you want the in-depth documentation, here is the link to the Google Doc that the developers wrote. Basically, the old table layout code is obsolete and has been replaced, which changes how our existing <table> markup is rendered.
Workaround: disable the Chrome flag named "Enable TableNG" and restart your browser.
Addition: I found chromestatus.com, a website that tracks new features being added, deprecations, etc.
This may sound like a very basic question but I feel like I've tried everything.
This is a follow-up to this post I made earlier, where I resolved the issue, only for it to come back again.
To summarize, I was making some changes to the contact.css file on the contact page of my website when I noticed the changes were working offline but didn't appear online. In the above post I narrowed this down to a caching issue (others could see the changes but I couldn't).
In the above example I couldn't get my website to show background-color: blue. Eventually it worked and I thought I'd fixed it... so I went to change the color back to normal and, boom, it stopped picking up the changes again.
So I think it's some sort of caching issue but for the life of me I can't get my cache to clear properly so that I can refresh and see the changes.
Here are the things I have tried already:
Clearing cache (many times) on Chrome, Firefox, and Opera
Hard refresh on Chrome, Firefox, and Opera
Disabling cache through dev tools on Chrome and Firefox (this worked initially, then stopped working when I updated the website again)
Checked multiple times that the CSS file uploaded correctly and the file path was correct. This was confirmed because the correct changes were seen by other people.
Flushed my DNS
Changed from my ISP's DNS to Google's 8.8.8.8 and 8.8.4.4
I'm using HostGator to host my website, and at this point I'm wondering whether it's something to do with them? I really just have no idea.
Here's what I see online:
Here's what I should be seeing and what I do see on the offline version of my website:
I noticed you said "I'd really like to get to the bottom of the underlying issue", so I figured I'd write an answer to provide a few options (and if anyone wants me to add others, please feel free to add a comment). Overall, determining your root cause is likely much harder than just solving your overall problem, but let's start with the possible causes I can think of off the top of my head:
Multiple CDN servers taking a while to update so some are returning the old data (your current session) and some are returning new (incognito)
Server-side session caching, so when you reload the page within one HTTP session you get back the same content (I've seen this in product search queries, for example)
The solution to this is relatively simple, though: it's called cache busting. Basically, every time you update your source code, add a unique key to the query string, the file name, or something else that makes the URL unique. For example, for your CSS you can link https://path/to.css?v2.0.1 and just keep increasing the version number as you go. If you use webpack for your build outputs, it has a content hash variable that you can use as a token in the file names (see the sketch after the next paragraph).
As for the CDNs possibly caching things out of date: the content hash solution will solve that problem, as it produces an entirely different file name, so the CDN will fetch it from the origin if it doesn't have it in its cache. I'm unsure whether the URL version query parameter will do the same; maybe someone else could shed some light on that.
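For illustration, here's a minimal webpack sketch of the content-hash approach (the entry point and paths are made up, not from the original post):

// webpack.config.js - a minimal sketch of cache busting via content hashes.
const path = require('path');

module.exports = {
  // Hypothetical entry point for illustration.
  entry: './src/index.js',
  output: {
    // [contenthash] is replaced with a hash of the file's content,
    // e.g. bundle.8f2a1c9d.js, so the URL changes whenever the code does
    // and browsers/CDNs treat it as a brand-new file.
    filename: 'bundle.[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
};

Plugins such as html-webpack-plugin can then inject the hashed file name into your HTML for you, so you never have to bump a version string by hand.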
Have you tried using Incognito in Chrome?
On an intranet site, let's say I want to link to a file on a share using UNC, at:
\\servername\foldername\filename.rtf
It seems the correct way to do this is with markup like this:
<a href="file://///servername/foldername/filename.rtf">filename.rtf</a>
That's five slashes - two for the protocol, one to indicate the root of the file system, then two more to indicate the start of the server name.
This works fine in IE7, but in Firefox 3.6 it only works if the HTML comes from a local file. I can't get it to work when the page comes from a web server: the link is "dead" and clicking on it does nothing.
Is there a workaround for this in Firefox? Those two browsers should be all I need to worry about for now.
Since this is obviously a feature of Firefox, not a bug, can someone explain what the benefit is to preventing this type of link?
This question has been asked at least twice before, but I was unable to find those posts before posting my own (sorry):
Open a direct file on the hard drive from firefox (file:///)
Firefox Links to local or network pages do not work
Here is a summary of answers from all three posts:
Use WebDAV — this is the best solution for me, although much more involved than I had anticipated.
Use http:// instead of file:///// — this will serve up a copy of the document that the user cannot edit and save.
Edit user.js on the client as described here — this worked for me in Firefox 3.6.15, but without access to client machines, it's not a solution.
In Firefox, use about:config and change the security.fileuri.strict_origin_policy setting to false — this doesn't work for me in 3.6.15. Other users on [SO] have also reported that it doesn't work.
Use the locallinks Firefox extension — this sets security.fileuri.strict_origin_policy to false for you, and appears to have no other effect.
Read the file server-side and send it as the response — this presents the same problem as simply configuring your web server to use http://.
Browsers like Firefox refuse to open the file:// link when the parent HTML page itself is served using a different protocol like http://.
Your best bet is to configure your webserver to provide the network-mapped file as a web resource, so that it can be accessed via http:// from the same server instead of via file://.
Since it's unclear which webserver you're using, I can't go into detail as to how to achieve this.
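That said, as a rough sketch of the idea: if the web server were Apache and the share were mounted locally at /mnt/docs (both purely illustrative assumptions), something like the following would expose the files over HTTP:

# Hypothetical Apache 2.4 config: serve a locally mounted network share
# over HTTP so pages can link to http://yourserver/docs/filename.rtf
# instead of file://///servername/foldername/filename.rtf
Alias "/docs" "/mnt/docs"
<Directory "/mnt/docs">
    Options Indexes
    Require all granted
</Directory>

Note that this serves read-only copies, as the question's summary points out: users can view the documents but can't save changes back to the share, which is where WebDAV comes in.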
In Firefox, to open file://///yourFileServer/docs/doc.txt for example, you need to turn on some options in the Firefox configuration:
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "http://yourServer1.companyname.com http://yourServer2.companyname.com");
user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
As it turns out, I was unaware that Firefox had this limitation/feature. I can sympathize with the feature, as it prevents a user from unwittingly accessing the local file system. Fortunately, there are useful alternatives that can provide a similar user experience while sticking to the HTTP protocol.
One alternative to accessing content via UNC paths is to publish your content using the WebDAV protocol. Some content management systems, such as MS SharePoint, use WebDAV to provide access to documents and pages. As far as the end-user experience is concerned, it looks and feels just like accessing network files with a UNC path; however, all file interactions are performed over HTTP.
It might require a modest change in your file access philosophy, so I suggest you read about the WebDAV protocol, configuration, and permission management as it relates to your specific server technology.
Here are a few links that may be helpful if you are interested in learning more about configuring and using WebDAV on a few leading HTTP servers:
Apache Module mod_dav
IIS 7.0 WebDAV Extension
Configuring WebDAV Server in IIS 7, 6, 5
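To give a flavor of what the Apache variant involves, a minimal mod_dav configuration looks roughly like this (the paths and names are illustrative; see the mod_dav documentation linked above for the details):

# Hypothetical minimal WebDAV share for Apache 2.4.
# Requires mod_dav and mod_dav_fs to be loaded.
DavLockDB "/var/lock/apache2/DavLock"
Alias "/webdav" "/srv/webdav"
<Directory "/srv/webdav">
    Dav On
    AuthType Basic
    AuthName "WebDAV share"
    AuthUserFile "/etc/apache2/webdav.passwd"
    Require valid-user
</Directory>

Clients (including Windows Explorer) can then map http://yourserver/webdav much like a network drive, which preserves the edit-and-save-in-place workflow of a UNC path.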
Add your own policy: open the configuration page by entering about:config in the location bar and add three new entries:
capability.policy.policynames MyPolicy
capability.policy.MyPolicy.sites http://localhost
capability.policy.MyPolicy.checkloaduri.enabled allAccess
Replace http://localhost with your website.
Works with Firefox 70.0.
I don't know if this will work, but give it a shot! Old article, but potentially still useful.
http://www.techlifeweb.com/firefox/2006/07/how-to-open-file-links-in-firefox-15.html
I have encountered the "Phishing Detected" warning in the Chrome browser on my dev site. Interestingly, I don't encounter the same warning in Firefox or Safari even though, as far as I can tell, they are using the same phishing database (although in Safari's preferences it says "Google Safe Browsing service is unavailable"). I also don't encounter the warning on the same page of the production sites.
It first popped up on a new account verification page I created which amongst other things asked users to confirm their PayPal account with the GetVerifiedStatus API. This requires only name and email.
I have also encountered the warning on a configuration page which asks for the PayPal email address which the user wishes to receive payments to.
Neither page requests a password or any other data that would be considered a secret.
As you might gather, I have zeroed in on a potential false positive on the PayPal portion of the content, as if perhaps I were phishing for PayPal information beyond the payer's email address. There has been no malicious code injection or any such thing. Even when I've removed all content from the page, the warning is still present.
I reported the first incorrect detection to Google, and intend to do the same for the second incident; however, what I really want to clear up is:
What content can lead to this warning?
How can I avoid it in the future?
How can I get some info from the "authorities" on which URLs are blocked? (Webmaster Tools is not showing warnings for the dev site)
How can I flush my local cache of "bad sites" in case I want to re-test?
Clearly having a massive red alert presented to a user on a production site would be disastrous, and there is a (perhaps deliberate) lack of information about how this safe browsing service actually works.
I have been developing a website for a banking software developer and ran into the phishing warning as well. Unlike you, I had no PayPal associations in any of my code, and no data collection at all beyond a simple contact form. Here are some things I managed to figure out while resolving my false-positive warnings.
1) The warning in Chrome (red gradient background) comes from a detection method built into the Chrome browser itself; it does not need to check any blacklist to show the warning. In fact, Google themselves claim that this is one of the methods by which they discover new potentially harmful sites. When your site is actually on the blacklists, you get a different red warning screen with diagonal lines in the background. This explains why you only see the warning in Chrome.
2) What actually triggers this warning is obviously kept somewhat hidden; I could not find anything to help me debug the content of my site. You have pretty much done this already, but for anybody else in need of help: I had to isolate parts of my site to see what was triggering the warnings. Due to the nature of the site I was working on, it turned out to be the combination of words and phrases in the content itself (e.g. Banking Solutions, Online Banking, Mobile Banking). Alone they did not trigger anything, but when loaded together Chrome would do its thing. So I'm not sure what your triggers are, or even what the list of possible triggers is. Sorry...
3) I found that simply quitting Chrome completely and restarting it resets the "cache" for whether it has previously detected a page. I closed Chrome hundreds of times while getting to the bottom of my warnings.
That's all I have; I hope it helps.
Update: my staging area was accessed via an IP address. Once I moved the site to use a domain instead, all the warnings stopped in Chrome.
I experienced the same today while creating an SSL test report for my web server customers. What I had there was simply something like this:
"Compare the SSL results of our server to the results of a well-known bank and its Internet banking service". I just wanted to show that the banking site had grading B whereas ours had grading A-.
I had two images from SSL Labs (one showing the results for my server and the other the results for the bank). No input fields, no links to any other site, and definitely no wording about the name of the bank.
One h1, two h2 titles and two paragraphs plus two images.
I moved the HTML to the page and opened it in my Chrome browser. The web server log told me that a Google service had loaded the page 20 seconds after my first preview; nobody else had seen it so far. The phishing-site warning came to me (the webmaster) in less than an hour.
So it seems to me that the damn browser is making the decision and reporting to Google, which then automatically checks and blocks the site. So the site is reported to Google by Google tools, the trial is run by Google, and the sentence is handed down by Google. Very, very nice indeed.
This is quite complex to explain, but I keep getting injection attacks from another website just by clicking on a link. Oddly, it seems Google Chrome is the one that generates them.
To elaborate, I have this site: http://byassociationonly.com and I have this site: http://dev.byassociationonly.com/example (I can't name the site as it's a client site).
Whenever I click on any of the links on http://byassociationonly.com, in Google Chrome, on my machine, none of them work and I get an injection attack (I am using a plugin, WordPress Firewall, that sends me email notifications when something like this happens).
The notification I receive is this: http://cl.ly/image/2U111T0m2X35
I just don't understand this error at all; I've never had a problem before.
I've even removed the code within the page it's referencing, which is from single.php, yet the problem still exists. I thought there were conflicts with my MAMP servers running locally, but even when they are switched off the problem persists, and localhost:8888 isn't referenced at all within wp-config.php.
However if I do this within Firefox, I don't get any notifications at all and the links work fine.
Has anybody got any ideas on how to identify where the problem lies, and how to fix it?
As requested, here's the code on the single.php page that the error is referring to: http://pastebin.com/QKqtLXQi
Did you recently install any Chrome extensions? I have run into a similar problem before, and after hours of troubleshooting it turned out to be an extension blocking some stuff. The fact that it works fine in Firefox suggests it's an issue isolated to Chrome.