Domain: https://www.amz2btc.com
Analysis from SSL Labs: https://www.ssllabs.com/ssltest/analyze.html?d=amz2btc.com
All my desktop browsers open this fine. Mobile Firefox opens this fine. Only when I tried with mobile Chrome did I get the error: err_cert_authority_invalid
I know very little about SSL, so I can't really make sense of the SSL report or why this error is coming up. If someone could ELI5, that would be ideal. :)
I just spent the morning dealing with this. The problem wasn't that I had a certificate missing; it was that I had an extra one.
I started out with my ssl.conf containing my server key and three files provided by my SSL certificate authority:
# Server Certificate:
SSLCertificateFile /etc/pki/tls/certs/myserver.cer
# Server Private Key:
SSLCertificateKeyFile /etc/pki/tls/private/myserver.key
# Server Certificate Chain:
SSLCertificateChainFile /etc/pki/tls/certs/AddTrustExternalCARoot.pem
# Certificate Authority (CA):
SSLCACertificateFile /etc/pki/tls/certs/InCommonServerCA.pem
It worked fine on desktops, but Chrome on Android gave me err_cert_authority_invalid.
A lot of headaches, searching, and poor documentation later, I figured out that the culprit was the Server Certificate Chain line:
SSLCertificateChainFile /etc/pki/tls/certs/AddTrustExternalCARoot.pem
That line was creating a second, incomplete certificate chain. I commented it out, leaving me with:
# Server Certificate:
SSLCertificateFile /etc/pki/tls/certs/myserver.cer
# Server Private Key:
SSLCertificateKeyFile /etc/pki/tls/private/myserver.key
# Certificate Authority (CA):
SSLCACertificateFile /etc/pki/tls/certs/InCommonServerCA.pem
and now it's working on Android again. This was on Linux running Apache 2.2.
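If you want to double-check what the server actually sends after a change like this, here is a quick sketch with openssl (the hostname is a placeholder; substitute your own):
# Print every certificate the server presents plus the overall verification result
openssl s_client -connect myserver.example.com:443 -servername myserver.example.com -showcerts </dev/null
# A complete chain lists your leaf plus each intermediate, and ideally ends with "Verify return code: 0 (ok)"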
I had this same problem while hosting a web site via Parse and using a Comodo SSL cert resold by NameCheap.
You will receive two cert files inside of a zip folder:
www_yourdomain_com.ca-bundle
www_yourdomain_com.crt
You can only upload one file to Parse (screenshot: Parse SSL cert input box).
In a terminal, combine the two files using:
cat www_yourdomain_com.crt www_yourdomain_com.ca-bundle > www_yourdomain_com_combine.crt
Then upload to Parse. This should fix the issue with Android Chrome and Firefox browsers. You can verify that it worked by testing it at https://www.sslchecker.com/sslchecker
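If you want to sanity-check the combined file locally before uploading, a rough sketch with openssl (file names as in the zip above; this assumes the .ca-bundle also contains the root certificate):
# Should print "www_yourdomain_com.crt: OK" if the chain in the bundle is complete
openssl verify -CAfile www_yourdomain_com.ca-bundle www_yourdomain_com.crt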
For those having this problem on IIS servers.
Explanation: sometimes certificates carry a URL pointing to an intermediate certificate instead of the actual certificate. Desktop browsers can download the missing intermediate certificate from this URL, but older mobile browsers cannot, so they throw this warning.
You need to
1) make sure all intermediate certificates are served by the server
2) disable unneeded certification paths in IIS: under "Trusted Root Certification Authorities", you need to "disable all purposes" for the certificate that triggers the download.
P.S. My colleague has written a blog post with more detailed steps: https://www.jitbit.com/maxblog/21-errcertauthorityinvalid-on-android-and-iis/
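On the Windows side, one way to see which chain element is only reachable via a download is the built-in certutil tool. A rough sketch, assuming you have exported your site certificate to a file first (the file name is just an example):
REM Build the chain and fetch anything referenced by URL (AIA), reporting where each element came from
certutil -verify -urlfetch mysite.cer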
The report from SSL Labs says:
This server's certificate chain is incomplete. Grade capped to B.
....
Chain Issues Incomplete
Desktop browsers often have chain certificates cached from previous connections or download them from the URL specified in the certificate. Mobile browsers and other applications usually don't.
Fix your chain by including the missing certificates, and everything should be fine.
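"Including the missing certificates" usually just means appending the intermediate CA certificate(s) to the certificate file your server serves. A minimal sketch with hypothetical file names:
# Order matters: your own certificate first, then intermediates from the leaf towards the root
cat your_domain.crt intermediate.crt > fullchain.crt
# Point your server's certificate directive at fullchain.crt and reload the server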
I hope I'm not too late; this solution worked for me. I am using Comodo SSL, and the solutions above seem to become invalid over time. My website is lifetanstic.co.ke.
Instead of contacting Comodo support to obtain a CA bundle file, you can do the following:
When you get your new SSL cert from Comodo (by email), there is a zip file attached. Unzip it and open the following files in a text editor such as Notepad:
AddTrustExternalCARoot.crt
COMODORSAAddTrustCA.crt
COMODORSADomainValidationSecureServerCA.crt
Then copy the text of each ".crt" file and paste them one above the other into the "Certificate Authority Bundle (optional)" field.
After that, add the SSL cert as usual in the "Certificate" field, click the "Autofill by Certificate" button, and hit "Install".
Inspired by this gist: https://gist.github.com/ipedrazas/6d6c31144636d586dcc3
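If you prefer the command line over pasting into the panel, the same three files can be concatenated into a single CA bundle file (a sketch; the usual order is the intermediate closest to your certificate first and the root last):
cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > ca-bundle.crt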
I also had a problem with the chain and managed to solve using this guide https://gist.github.com/bradmontgomery/6487319
If you're like me and using AWS with CloudFront, here's how to solve the issue. It's similar to what others have shared, except you don't use your domain's .crt file, just the files Comodo emailed you.
cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > ssl-bundle.crt
This worked for me, and my site no longer displays the SSL warning in Chrome on Android.
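If the certificate is being attached to CloudFront as an IAM server certificate, the upload might look roughly like this (a sketch; the certificate name and your own cert/key file names are assumptions, and newer setups would typically use ACM instead):
aws iam upload-server-certificate \
  --server-certificate-name mysite-comodo \
  --certificate-body file://www_mysite_com.crt \
  --private-key file://www_mysite_com.key \
  --certificate-chain file://ssl-bundle.crt \
  --path /cloudfront/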
A decent way to check whether there is an issue in your certificate chain is to use this website:
https://www.digicert.com/help/
Plug in your test URL and it will tell you what may be wrong. We had an issue with the same symptom as you, and ours was diagnosed as being due to missing intermediate certificates:
SSL Certificate is not trusted
The certificate is not signed by a trusted authority (checking against
Mozilla's root store). If you bought the certificate from a trusted
authority, you probably just need to install one or more Intermediate
certificates. Contact your certificate provider for assistance doing
this for your server platform.
I solved my problem with these commands:
cat __mydomain_com.crt __mydomain_com.ca-bundle > __mydomain_com_combine.crt
and then:
cat __mydomain_com_combine.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > mydomain.pem
And in my domain's nginx .conf, inside the server block listening on 443, I put:
ssl_certificate ssl/mydomain.pem;
ssl_certificate_key ssl/mydomain.private.key;
And don't forget to restart nginx:
service nginx restart
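For reference, a minimal sketch of the relevant server block (server_name and the ssl/ paths are placeholders matching the lines above):
server {
    listen 443 ssl;
    server_name mydomain.com;
    # mydomain.pem contains the leaf certificate followed by the intermediates and root
    ssl_certificate     ssl/mydomain.pem;
    ssl_certificate_key ssl/mydomain.private.key;
}
Running nginx -t before the restart catches typos in these paths.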
I had the same problem, but the answer by Mike A helped me figure it out:
I had my certificate, an intermediate certificate (Gandi), another intermediate (UserTrustRSA), and finally the root CA certificate (AddTrust).
So first I made a chain file with Gandi+UserTrustRSA+AddTrust and specified it with SSLCertificateChainFile. But it didn't work.
So I tried Mike A's answer by putting just the AddTrust cert in a file, specifying it with SSLCACertificateFile, and removing SSLCertificateChainFile. But that didn't work either.
So finally I made a chain file with only Gandi+UserTrustRSA, specified by SSLCertificateChainFile, and another file with only the root CA, specified by SSLCACertificateFile, and it worked.
# Server Certificate:
SSLCertificateFile /etc/ssl/apache/myserver.cer
# Server Private Key:
SSLCertificateKeyFile /etc/ssl/apache/myserver.key
# Server Certificate Chain:
SSLCertificateChainFile /etc/ssl/apache/Gandi+UserTrustRSA.pem
# Certificate Authority (CA):
SSLCACertificateFile /etc/ssl/apache/AddTrust.pem
Seems logical when you read it back; hope it helps.
I guess you should install the CA certificate bundle from your certificate authority:
ssl_trusted_certificate ssl/SSL_CA_Bundle.pem;
Just do the following for Version 44.0.2403.155 dev-m:
Privacy --> Content settings --> Do not allow any site to run JavaScript
Problem solved.
I am making a MITM attack using a proxy. The MITM works fine, but the problem is that Chrome detects it. Therefore, I have changed the code so it installs the certificate every time you enter a new website. However, Chrome still says that the certificate authority is invalid.
I used the following line to install the certificate:
powershell -c Import-Certificate -FilePath '{path}' -CertStoreLocation Cert:\LocalMachine\Root
I tried to install the certificate into Chrome manually as well, but nothing changed.
I am looking for a way to either install the certificate properly or change the browser's settings so it will ignore the error and let me into the site. I have searched this on Google but I couldn't find a way to do so.
All of a sudden I seem to have an issue with Google Chrome using localhost.
I'm trying to access any of my development sites (using Ampps) and I get the following error:-
Your connection is not private Attackers might be trying to steal your
information from website.dev (for example, passwords, messages
or credit cards). Learn more NET::ERR_CERT_AUTHORITY_INVALID
When I visit any of the dev sites it is redirecting from http://website.dev to https://website.dev automatically. I'm not having any issue in Safari or Firefox so I don't understand what is going on.
I've tried re-installing Google Chrome, resetting it to the factory default settings...
I think it could be an issue with Keychain Access --> Certificates, but wouldn't that mean it also wouldn't work in Firefox and Safari?
I've spent a while trying to find a solution, but so far nothing has worked, so I would appreciate some suggestions on how I can fix this. I can't even proceed past this warning, as I don't get the "proceed (unsafe)" link shown below:-
Navigate to
chrome://flags/#allow-insecure-localhost
and set this to enabled.
After playing around, I came up with one kind of solution.
First, let's talk about the problem: the cause of this error is that both of us used a .dev domain for local development. If you look it up, you will find that the .dev TLD is owned by Google, and through a preloaded HSTS policy Chrome enforces an HTTPS redirect for that domain. Since we use .dev domains, we get redirected to the https version while having no actual certificates installed, so we see this annoying error. If you go to chrome://net-internals/#hsts you can query your .dev domain and you will find:
static_sts_domain: dev
static_upgrade_mode: FORCE_HTTPS
static_sts_include_subdomains: true
which confirms that HSTS is indeed enforced on *.dev. The policy type is static and, as I understand it, the HTTPS redirect for .dev domains is effectively hard-coded.
So there are at least two ways out: obtain and set up an actual certificate, or simply use another (non-.dev) root domain in httpd-vhosts.conf for your local development (don't forget to also update /etc/hosts and restart Apache). I went the other-root-domain route and it solved the issue.
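For example, switching the local site to a .test name would look roughly like this (the site name and document root are made up; adjust to your Ampps/Apache layout):
# /etc/hosts
127.0.0.1   website.test

# httpd-vhosts.conf
<VirtualHost *:80>
    ServerName website.test
    DocumentRoot "/path/to/website"
</VirtualHost>
Then restart Apache and browse to http://website.test instead of the old .dev name.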
This is really annoying to deal with, but mapping the local website to something other than .dev (I personally use .devo) does work and fixes the problem in Chrome. Alternatively, you can add an exception for the page in Mozilla Firefox and not deal with this at all. It's only a problem on Chrome 63+.
The best solution is to not use .dev at all, because it is owned by Google.
Here you can find the list of reserved special-use domain names: https://www.rfc-editor.org/rfc/rfc6761
To be safe, choose an unclaimed TLD like .test or .localhost. You can read a useful blog post here : https://iyware.com/dont-use-dev-for-development/
I got the same bug because the CRL file was out of date; the solution was to update the CRL file.
You need to add the remote site's certificate to your local key store.
To download the certificate from the remote site you will need openssl (and keytool for the import step below). Open Git Bash as admin and run the following command to fetch the certificate:
openssl s_client -showcerts -connect host:port
Save the output of the above command, from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----, into a .crt file.
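If you'd rather not copy and paste by hand, a pipeline like this writes the first certificate from the handshake straight to a file (host, port and the output file name are placeholders):
openssl s_client -showcerts -connect host:port </dev/null | openssl x509 -outform PEM > remote-site.crt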
To add the certificate to your local key store, run the command below:
keytool -import -noprompt -trustcacerts -alias name_of_certificate -file "path_of_dot_crt_file" -keystore "C:\Program Files\Java\jdk1.8.0_192\jre\lib\security\cacerts" -storepass changeit
You can also add the downloaded certificate to your browser.
In my case, in order to solve the issue so that I could click "Proceed to unsafe",
I needed to go to:
chrome://net-internals/#sockets
then click: "Close idle sockets", "Flush socket pools"
Afterwards go to:
chrome://net-internals/#dns
Then click "Clear host cache"
If it still doesn't work and you are a Mac user, try repeating the above after removing the localhost certificates using Apple's Keychain Access tool.
Google Chrome is ignoring the entries in the C:\Windows\System32\drivers\etc\hosts file. Both IE11 and Firefox are installed on the same machine and work as expected.
I've tried all the solutions I could find online including:
Open chrome://net-internals/#dns and click the Clear Hosts Cache button.
Go to Settings, Show Advanced Settings, and uncheck the following three options: (X) Use a web service to help resolve navigation errors (X) Use a prediction service to help complete searches and URLs typed in the address bar (X) Use a prediction service to load pages more quickly
Go to Settings, Show Advanced Settings, click the Clear Browsing Data button, select Cached Images And Files from the beginning of time, and click Clear Browsing Data.
Restart Chrome.exe.
Restart the computer.
Make sure to add http:// to the front of the web address.
Make sure proxy settings are turned off
Run cmd.exe and run ipconfig /flushdns
Uninstall and reinstall Chrome
I'm at a loss... Is there anything I missed that I can try or check?
Seems that Chrome doesn't like the following extensions for that kind of stuff:
.dev
.localhost
.test
.example
.app
Use .local and the problem seems to disappear.
If anyone stumbles on this problem in 2021: for me the fix was to disable the "Use secure DNS" option in Chrome's settings. After disabling that, all the entries in the hosts file started working.
The option is located under Privacy and security > Use secure DNS.
Link to get there faster:
chrome://settings/security
This has been identified as a "bug" in Chrome, but it appears to be absolutely intentional behavior. Google Chrome does not honor /etc/hosts when connected to the Internet. It always does a DNS lookup to determine IP addresses.
While my references below mostly relate to my experiences with this on Linux, it is not confined to Linux.
https://groups.google.com/a/chromium.org/forum/#!topic/net-dev/iKXqyc40tW0
https://superuser.com/a/887199/75128
https://bugs.chromium.org/p/chromium/issues/detail?id=117655
Okay I faced the same problem but then I found the solution.
Try this:
Go to history (Ctrl+H) -> In the left pane click on Clear browsing data
In the new window that opens go to Advanced tab
Set Time Range to All Time -> check Cached Images and Files -> click on Clear data
Restart your computer. It should then start resolving the addresses listed in the hosts file (C:\Windows\System32\drivers\etc\hosts).
Note: This Solution is only for Google Chrome
Try clearing the DNS Cache:
1) run cmd.exe as administrator
2) type: ipconfig /flushdns
I just encountered this tonight and none of these options worked. I discovered that Chrome now hides "www" (https://www.howtogeek.com/435728/chrome-now-hides-www-and-https-in-addresses.-do-you-care/). Chrome was using my hosts file, but I had to add "www." to my hostname in my hosts file since that's what the browser is actually requesting, even if it doesn't show it.
A little late, but after hours I found a solution. It seems that Google Chrome sometimes has problems recognizing the host names defined in /etc/hosts.
I'm using Linux and I'm behind a proxy.
Try appending .localhost to the end of the server name.
Example:
In /etc/hosts:
127.0.0.1 myservername.localhost
In the virtual hosts of your server configuration you'll need to change the server name accordingly. In my case I'm using Apache, so in /etc/apache/sites-enabled/myserver.conf replace the old ServerName line with:
...
ServerName myservername.localhost
If you are behind a proxy, you can exempt these hosts by adding them to the no_proxy variable:
export no_proxy="localhost"
Finally, don't forget to restart the server and try accessing it in the browser with the new server name.
😊 simple answer 😊
There are 3 workarounds for this:
1- deleting the Visited Links binary file (beauty👍)
2- using .local or .app instead of your desired TLD (standard and preferred by the Chrome docs, but I don't like it)
3- restarting your computer (ugly👎)
Deleting the Visited Links binary:
kill all Chrome tasks (close all Chrome windows :))
delete the C:\Users\[USERNAME]\AppData\Local\Google\Chrome\User Data\Default\Visited Links binary
You can define a function in your shell profile to do this quickly with a single command whenever you face this issue, e.g.:
function respectHosts () {
    # Delete Chrome's cached "Visited Links" file so hosts-file entries take effect again
    $path = $HOME + "\AppData\Local\Google\Chrome\User Data\Default\Visited Links";
    Remove-Item $path;
}
Important note:
it is suggested that, the first time you delete the Visited Links binary file, you also delete your history, because if you open a URL from your history you are effectively reusing its cached DNS too.
Running Chrome 105 on Windows 11, nothing seemed to work until I added ::1 (i.e. the IPv6 loopback) in addition to 127.0.0.1. For example:
127.0.0.1 local.foo.com
::1 local.foo.com
While it was stated that no proxy is being used, I have had the same issue on OS X while using a proxy, and the eventual solution was to add a proxy exception for this domain.
What the OP could try is to turn off async DNS via a command-line switch, as mentioned here in 2015:
Async DNS: Remove toggle from about:flags
Async DNS is fairly stable at the moment, so we don't really need the
toggle in about:flags anymore. (Note that the --enable-async-dns and
--disable-async-dns command-line flags will still work for now.)
This, however, seems to have no effect in my case, as chrome://net-internals/#dns still displays the internal DNS client as enabled, with no obvious way to turn it off.
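For completeness, the switch quoted above is passed when launching Chrome, e.g. (a sketch; the flag may have been removed in current Chrome versions):
# Windows
chrome.exe --disable-async-dns
# Linux
google-chrome --disable-async-dns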
Had a similar issue working from a Windows-based server that had proxy settings. In the advanced proxy settings there are two options that can help: an "Ignore proxy settings for local hosts" checkbox, and a semicolon-separated list of addresses where you can exempt certain IP destinations. This fixed my issue.
For me, going to
chrome://net-internals/#sockets
and clicking "Flush socket pools" worked wonders. Credit: https://superuser.com/a/611712
I am using Octave 4.0.0 for Windows and want to download stock prices from a web page that is open to the public. I use the following call:
data = urlread("https://www.netfonds.no/quotes/paperhistory.php?paper=API.A&csv_format=csv")
However, I get the following error message:
urlread: Peer certificate cannot be authenticated with given CA certificates
I have searched the internet, including Stack Overflow, for this error message, but I do not understand the advice given there.
Q1: Is there something lacking on my pc? If so, what do I do?
Q2: Can I change the call somehow to adjust for something lacking on my pc?
Thanks in advance for any help : )
It appears that this is a bug in urlread() for certain versions of Octave. For a course I'm doing, we changed this:
responseBody = urlread(submissionUrl, 'post', params);
to
[code, responseBody] = system(sprintf('echo jsonBody=%s | curl -k -X POST -d @- %s', body, submissionUrl));
Although the page is publicly available, the connection is encrypted. For an encrypted connection to make sense, it must use a key that you trust. The typical user does not think about whether to trust it; they leave the job of deciding to the OS or web browser (which in turn rely on certificate authorities). I am guessing this is your case.
The error you get is because the website you are accessing uses a key that was certified by something that urlread does not "trust". Ideally, you would have a single list of trusted certificates and all applications would use it. If your web browser trusts it but the rest of your system does not, you have a configuration issue: either your web browser is keeping its own list of trusted certificates, or libcurl (the library that urlread uses) is not finding the certificates installed on your system.
This "configuration" will be a directory with several .pem files. The specific certificate required for this website will most likely be named GlobalSign_Root_CA_-_R2.pem.
And it works here:
octave> data = urlread ("https://www.netfonds.no/quotes/paperhistory.php?paper=API.A&csv_format=csv")
data = quote_date,paper,exch,open,high,low,close,volume,value
20150508,API,Amex,0.39,0.40,0.39,0.40,85933,34194
20150507,API,Amex,0.40,0.41,0.38,0.39,163325,64062
...
For Windows, a workaround is to use the curl command in the Windows console, which can be called from Octave via the system command. With curl you can choose the '--insecure' option, which also allows connections to websites without valid certificates. Only use this option if you're sure the website is safe.
sURLLink = 'https://www.netfonds.no/quotes/paperhistory.php?paper=API.A&csv_format=csv'
command=['curl --insecure ','"',sURLLink,'"'];
[status, output] =system(command);
It is specifically the checkout that concerns me:
https://checkout.shopify.com/5045549/checkouts/6321b0e319fdc3cab47b8ba2d1e30b46?_ga=1.255397768.1085821832.1430095488
When I'm on the cart page http://store.rockettags.com/cart and click the checkout button I'm taken to https://checkout.shopify.com and get the issue.
Why am I getting the issue and how do I fix it please?
I did a search and only found this info: http://weblogs.asp.net/owscott/identity-not-verified-in-chrome
Is it true that shopify.com is using outdated security settings?
See photo
Chrome is reporting that Shopify is using a SHA-1-signed certificate, as opposed to the newly recommended SHA-2 certificates.
Newer versions of Chrome now warn against this.
Their certificate authority does have SHA-2 certificates available, so it may be a matter of Shopify needing to re-deploy their certificates from DigiCert.
EDIT:
This appears to be a bug in Debian-based distributions (Debian, Ubuntu) and shouldn't happen on Windows. More information here.
Shopify is indeed using the newer SHA-2 certificates, which can be verified here.