WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! (Chrome Secure Shell App extension)

Loading NaCl plugin... done.
Connecting to user@172.27.0.31...
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:C11asdfasdfxY6asdfasdfIUfadsfasdRB4.
Please contact your system administrator.
Add correct host key in /.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /.ssh/known_hosts:21
ECDSA host key for 172.27.0.31 has changed and you have requested strict checking.
Host key verification failed.
NaCl plugin exited with status code 255.
(R)econnect, (C)hoose another connection, or E(x)it?
This error is related to the Chrome Secure Shell App extension.
It may happen if you are under a man-in-the-middle attack, or simply because the host key changed on the server side.
The previous fix for this was to delete the local entry from known_hosts using the Chrome JavaScript console:
term_.command.removeKnownHostByIndex(21)
But this now produces an error:
VM237:1 Uncaught TypeError: term_.command.removeKnownHostByIndex is not a function
at <anonymous>:1:15

Now (in my Chrome, Version 85.0.4183.83 (Official Build) (64-bit)) the entry can be deleted manually in the extension settings: three dots (upper-right corner of Chrome) > More tools > Extensions > Secure Shell App > Details > Extension options > SSH Files > delete the specific entry (the whole row) in ~/.ssh/known_hosts.
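For comparison, on a machine with a regular OpenSSH client (not the Chrome extension, which keeps its own copy of known_hosts), the same stale entry would be removed with something like:
ssh-keygen -R 172.27.0.31   # deletes the cached host key for that address from ~/.ssh/known_hosts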

Now that the app is discouraged in favour of the extension, you can click the icon in the top-left corner of the terminal and go into the terminal settings, then SSH; ~/.ssh/known_hosts is there.
Good luck!


What are the new requirements for certificates in Chrome?

Chrome now throws NET::ERR_CERT_INVALID for some certificates that are supported by other browsers.
The only clue I can find is in this list of questions about the new Chrome Root Store that is also blocking enterprise CA installations.
https://chromium.googlesource.com/chromium/src/+/main/net/data/ssl/chrome_root_store/faq.md
In particular,
The Chrome Certificate Verifier will apply standard processing to include checking:
the certificate's key usage and extended key usage are consistent with TLS use-cases.
the certificate validity period is not in the past or future.
key sizes and algorithms are of known and acceptable quality.
whether mismatched or unknown signature algorithms are included.
that the certificate does not chain to or through a blocked CA.
conformance with RFC 5280.
I verified my certificates work as expected in Edge.
Further, I verified the certificate is version "3", has a 2048-bit key, and has the extended key usage for server authentication.
I still don't understand which "standard" this certificate is expected to conform to when the browser only says "invalid". Is there a simple template or policy I can use?
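For anyone checking the same attributes, they can all be read locally with openssl (assuming a PEM-encoded copy of the certificate, saved here as server.crt; the file name is just an example):
openssl x509 -in server.crt -noout -text   # prints version, validity, key size, signature algorithm, key usage/EKU, basic constraints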
Chrome now rejects TLS certificates containing a field known as pathLenConstraint, sometimes displayed as Path Length Constraint.
I was using certificates issued by Microsoft Active Directory Certificate Services. The Basic Constraints extension was enabled, and in this configuration AD CS incorrectly injects Path Length Constraint=0 into end-entity, non-CA certificates.
The solution is to issue certificates without Basic Constraints. Chrome is equally happy with Basic Constraints on or off, so long as the path length variable is not present.
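To confirm whether a certificate carries this field, the Basic Constraints block can be dumped with openssl (server.crt is a placeholder name):
openssl x509 -in server.crt -noout -text | grep -A 1 "Basic Constraints"
# an affected certificate prints something like:
#   X509v3 Basic Constraints: critical
#       CA:FALSE, pathlen:0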
One of the better resources for troubleshooting was this Certificate Linter:
https://crt.sh/lintcert
It found several errors in the server certificate, including the path length set to zero.
I also found a thread discussing a variety of Certificate Authorities that would issue certificates the same way, so it is a fairly common issue.
https://github.com/pyca/cryptography/issues/3856
Another good resource was the smallstep open source project that I installed as an alternative CA. After generating a generic certificate, the invalid cert error went away and I realized there was something going on between the Microsoft and Google programs.
The best favour you can do yourself is to run Chrome with debug logging to find the exact cause of the issue:
chrome --enable-logging --v=1
This, I believe, will print:
ERROR: Target certificate looks like a CA but does not set all CA properties
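With --enable-logging, Chrome writes a chrome_debug.log file into its user data directory; on Linux, for example, the certificate errors can usually be pulled out with something like the following (the path is the default profile location and may differ on your setup):
grep -i "cert" ~/.config/google-chrome/chrome_debug.log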
Meanwhile, it seems they have reverted this check, which, if I'm not mistaken, will ship in Chrome 111 at the beginning of March.
See: https://chromium-review.googlesource.com/c/chromium/src/+/4119124
Following @Robert's answer, I used https://crt.sh/lintcert to fix all the issues I had, so that my self-signed certificate keeps working; it had suddenly stopped working and I was getting NET::ERR_CERT_INVALID.
Here's how I did it:
# https://www.openssl.org/docs/manmaster/man5/x509v3_config.html
cat > "$_X509V3_CONFIG_PATH" << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=critical,CA:true
keyUsage=critical,digitalSignature,nonRepudiation,cRLSign,keyCertSign
subjectAltName=@alt_names
issuerAltName=issuer:copy
subjectKeyIdentifier=hash
[alt_names]
DNS.1=somesubdomain.mydomain.com.test
EOF
openssl x509 -req \
-days "$_ROOTCA_CERT_EXPIRE_DAYS" \
-in "$_ROOTCA_PEM_PATH" \
-signkey "$_ROOTCA_KEY_PATH" \
-extfile "$_X509V3_CONFIG_PATH" \
-out "$_DOMAIN_CRT_PATH" # <--- -extfile consumes the extensions file written above
Following the above, these are my remaining lint errors/issues; even though there is a single ERROR, my Chrome browser trusts the root CA and the self-signed certificate:
cablint WARNING CA certificates should not include subject alternative names
cablint INFO CA certificate identified
x509lint ERROR AKID without a key identifier
x509lint INFO Checking as root CA certificate
For those of you who wish to generate a self-signed certificate for local development with HTTPS, the following gist does that trick- https://gist.github.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0
Usage:
curl -L --output generate_self_signed_ca_certificate.sh https://gist.githubusercontent.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0/raw/07aaa1035469f1e705fd74d4cf7f45062a23c523/generate_self_signed_ca_certificate.sh && \
chmod +x generate_self_signed_ca_certificate.sh
./generate_self_signed_ca_certificate.sh somesubdomain.mydomain.com
# Will automatically create a self-signed certificate for `somesubdomain.mydomain.com.test`
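Before importing the root CA into the browser, it doesn't hurt to double-check the SAN on the generated certificate; the file name below is a guess at what the script produces, so adjust it to the actual output path:
openssl x509 -in somesubdomain.mydomain.com.test.crt -noout -text | grep -A 1 "Subject Alternative Name"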

Unable to resolve .local domains with getent even though avahi-resolve-host-name succeeds

Trying to set up a network printer with CUPS.
Followed online documentation that stated:
To discover or share printers using DNS-SD/mDNS, setup .local hostname
resolution with Avahi and restart cups.service.
Followed directions for setting up Avahi to the point where avahi-browse --all --ignore-local --resolve --terminate and avahi-resolve-host-name my-domain.local are both working.
But getent hosts my-domain.local fails to resolve. This results in CUPS failing to print because it can't find my-printer.local.
I read the nss-mdns GitHub page and saw a note that made me think I didn't need a /etc/mdns.allow file.
nss-mdns has a simple configuration file /etc/mdns.allow for enabling
name lookups via mDNS in other domains than .local.
Note: The "minimal" version of nss-mdns does not read /etc/mdns.allow under any circumstances. It behaves as if the file
does not exist.
In the recommended configuration, no /etc/mdns.allow file is present.
But then I saw the last note in that section:
If, during a request, the system-configured unicast DNS (specified in
/etc/resolv.conf) reports an SOA record for the top-level local name,
the request is rejected. Example: host -t SOA local returns something
other than Host local not found: 3(NXDOMAIN). This is the unicast SOA
heuristic.
I tested that out on my machine and sure enough, I was getting something OTHER than Host local not found....
After adding a /etc/mdns.allow file with a line for .local. and one for .local, I can now ping my-printer.local.
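A minimal /etc/mdns.allow based on those two lines looks like this (this is the format described in the nss-mdns README):
# /etc/mdns.allow
.local.
.local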

Why does SimpleHTTP2Server fail to load service worker on localhost

When I try to run the Polymer Shop locally (both the bundled and unbundled builds) using SimpleHTTP2Server on localhost, port 5000, the request for service-worker.js fails:
An SSL certificate error occurred when fetching the script.
https://localhost:5000/service-worker.js Failed to load resource: net::ERR_INSECURE_RESPONSE
(index):1 Uncaught (in promise) DOMException: Failed to register a ServiceWorker: An SSL certificate error occurred when fetching the script.
Is there an easy way to get this to work? I tried a number of start up flags, like:
chrome.exe --ignore-certificate-errors --incognito
--unsafely-treat-insecure-origin-as-secure --allow-insecure-localhost
but that didn't help, I still get:
(index):1 Uncaught (in promise) DOMException: Failed to register a ServiceWorker: An SSL certificate error occurred when fetching the script.
Following alesc's suggestion, I found instructions here:
These are instructions for Chrome 55 on Windows 10. It seems these steps may change frequently.
1. On the page with the untrusted certificate (https:// is crossed out in red), click the lock so a popup opens.
2. Click the Details link under the information section at the top.
3. Click on the View Certificate button.
4. Click on the Details tab.
5. Click on Copy to File.
6. Click Next.
7. Export as PKCS #7.
8. Open up Chrome Settings > Show advanced settings > HTTPS/SSL > Manage Certificates.
9. Import the certificate created in step 7 into both the Intermediate Certificate Authorities tab and the Trusted Root Certification Authorities tab.
10. Restart Chrome and open your localhost site.
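The import into the trusted root store can also be scripted with certutil, assuming the certificate is exported as a plain .cer file rather than PKCS #7 (localhost.cer is a placeholder name; adding to the current-user store avoids needing an elevated prompt):
certutil -user -addstore Root localhost.cer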

Google Compute Engine VM instance error in google.startup.script

Upon rebooting the Google Compute Engine VM instance, I see these errors:
startupscript: Finished running startup script /var/run/google.startup.script
xxxx accounts-from-metadata: WARNING error while trying to update accounts: <urlopen error [Errno 101] Network is unreachable>
xxxx accounts-from-metadata: WARNING error while trying to update accounts: <urlopen error [Errno 101] Network is unreachable>
What could be the problem?
Update: Upon viewing the original question and reformatting it, it looks like there's a network error at bootup (was hidden due to the text in <...> being treated as HTML and not viewable), so my earlier answer (below) may not be applicable. Leaving it here for future reference.
Please check your network settings, firewalls, etc. in the meantime.
Original text:
You may have a syntax error in the sshKeys metadata key. The format is:
<username>:<protocol> <key-blob> <username@example.com>
The right hand side of the : is essentially the contents of your public key, e.g., ~/.ssh/google_compute_engine.pub.
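A concrete entry would look like the following (the user name is hypothetical and the key blob is truncated for brevity):
# hypothetical sshKeys entry; the key blob is truncated
alice:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ... alice@example.com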
To see your current metadata key:
ssh into the instance, e.g., via gcloud compute ssh, or via the SSH button in Developers Console
Load this key via:
curl http://metadata/computeMetadata/v1/project/attributes/sshKeys \
-H "Metadata-Flavor: Google"
and check the formatting.
You can then change the metadata on your instance.
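For example, instance metadata can be re-uploaded from a local file with gcloud (the instance name and file name below are placeholders):
gcloud compute instances add-metadata my-instance \
  --metadata-from-file sshKeys=fixed_ssh_keys.txt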

JMeter https proxy recording not working

I am recording an HTTPS session of a JSF-based web app in JMeter and it's not working.
Target application is hosted on: AWS
JMeter version: 2.9 r1437961
Browser: Chrome version 29.0.1547.65
Java: java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.5) (6b27-1.12.5-0ubuntu0.12.04.1)
OpenJDK Server VM (build 20.0-b12, mixed mode)
OS: Ubuntu 12.04
Proxy server config:
Port: 8084
Target Controller: Test Plan > Thread Group
Capture HTTP headers is checked.
HTTP Sample settings:
Type: not selected. Follow Redirects and Use KeepAlive checked.
URL patterns to exclude:
1. Added Suggested Excludes
2. .*\.jsf
Exceptions that are getting thrown (from JMeter.log):
ERROR - jmeter.protocol.http.proxy.Proxy: java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
ERROR - jmeter.protocol.http.proxy.Proxy: Problem with SSL certificate? Ensure browser is set to accept the JMeter proxy cert: Connection closed by remote host java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
The steps I am following are:
1. Set proxy server pointing to 8084.
2. Change proxy settings from chrome:
Set https proxy to 8084.
3. Disabled all chrome extensions and chrome account.
4. Started jmeter proxy server and hit https://url/login
5. Certificate confirmation page appears on browser. Meanwhile, jmeter.log shows:
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Creating Daemon Socket on port: 8084
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Proxy up and running!
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Proxy will remove the headers: If-Modified-Since,If-None-Match,Host
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (clients4.google.com)
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (translate.googleapis.com)
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHCAbstractImpl: Local host = abhijeet-desktop
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: HTTP request retry count = 1
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: Setting up HTTPS TrustAll scheme
2013/09/11 13:22:40 INFO - jmeter.protocol.http.proxy.FormCharSetFinder: Using htmlparser version: 2.0 (Release Build Sep 17, 2006)<br>
6. Thread group starts showing unknown requests to these domains:
1. translate.googleapis.com
2. clients4.google.com
3. www.google.co.in
4. www.google.com
5. ssl.gstatic.com
6. safebrowsing.google.com
7. alt1-safebrowsing.google.com
8. clients4.google.com
9. www.gstatic.com
...
n. all other requests going to the target application.
(For every request the above exceptions are thrown)
I believe the Google domain requests above are being recorded because Chrome dynamically searches those keywords on Google while I am typing the URL in the address bar. But I don't want these requests recorded in the Thread Group.
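(Presumably these background requests could also be filtered out by adding the Google hosts to the recorder's "URL patterns to exclude", along the lines of the regular expressions below, though that does not address the SSL errors themselves.)
.*\.google\.com.*
.*\.google\.co\.in.*
.*\.googleapis\.com.*
.*\.gstatic\.com.*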
Also, I tried the solutions from a few related questions, but they didn't work for me.
I don't understand why JMeter is not able to use the fake certificate that it already has. I checked the SSL settings in Chrome and could not find any JMeter certificate. Need help!
To do this in Chrome/IE, we have to place the certificate into the Trusted Root Certification Authorities store:
1. Double-click the certificate that was created.
2. The Certificate Import Wizard opens.
3. Click Next.
4. Select the second radio button (Place all certificates in the following store).
5. Click Browse and select Trusted Root Certification Authorities. Click Next.
6. Click Finish.
7. Check that your certificate is installed in Chrome Settings (under HTTPS/SSL) > Manage certificates... (Trusted Root Certification Authorities tab).
This should at least cure the exceptions thrown, as shown in your screenshot.
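If there is no ready-made .crt file to double-click (the log above shows JMeter 2.9 using proxyserver.jks), the certificate can first be exported from that keystore with keytool; the alias and store password vary by version, so list the keystore first and substitute the alias it reports:
keytool -list -keystore proxyserver.jks
keytool -exportcert -keystore proxyserver.jks -alias <alias-from-list> -file jmeter-proxy.cer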
I had the same problem and solved it by trusting the certificate. Just like you, when I looked at
Options > Advanced > Certificates > View Certificates ==> Authorities
I couldn't see an entry named ApacheJMeterRootCertificate.crt or anything similar, but I noticed an entry named something like
_DO NOT INSTALL unless this is your certificate
I clicked this entry and chose 'Edit Trust' for both items under it. I hope this helps you and others.
I use Firefox; in Chrome there should be a similar way to edit the certificate's trust.
JMeter 2.12 has good support for HTTPS. Under the WorkBench, just select Add -> Non-Test Elements -> HTTP(S) Test Script Recorder. This version worked the first time for me.
Recent versions of Google Chrome have made it difficult to bypass security settings, in order to prevent threats such as phishing or man-in-the-middle attacks.
I have successfully configured Google Chrome (v.54.0) to allow the JMeter self-signed certificate for HTTP(S) recording.
Here are the instructions (on Windows):
Open the MMC console (Win+R, type mmc, press Enter)
Select File > Add/Remove Snap-in
Select Certificates Snap-in for Current User
Select Trusted Root Certification Authorities >> Certificates
Right-click over Certificates folder and select All Tasks >> Import...
Import JMeter Self-Signed certificate using the wizard keeping the default options.
Once installed, right-click over JMeter Self-Signed certificate and select Properties
On the General tab, make sure the "Enable for all purposes" option is selected
On the Cross-Certificates tab, include the URL of the application you want to record (make sure you enter the full URL, e.g. https://www.live.com)
Close all windows.
Done. You should now be able to reach the destination bypassing Chrome security alert and start recording.