"The information you’re about to submit is not secure
Because the site is using a connection that’s not completely secure, your information will be visible to others."
We started receiving this error today. I thought this was a certificate problem, but the certificate is valid.
Also, there are no warnings or errors on the console page.
What do you advise me to do?
Same problem since today on Chrome 87.0.4280.88.
The cert is OK and all assets are loaded over https.
My login forms are triggering this warning message. The form action URL is relative, so it's supposed to be sent over https too; I don't know why this message is triggered...
Maybe try with an absolute https action URL.
[edit]
Check this thread -> https://support.google.com/chrome/thread/88331714?hl=en
For me it was the scheme of the Location header, when redirecting after a successful login, that was misconfigured behind the reverse proxy: the app was sending it back over http.
Fixed by adding these headers to the reverse proxy conf in Nginx:
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port 443;
Make sure that your app is aware of these headers too.
For example, in a Symfony app -> https://symfony.com/doc/current/deployment/proxies.html
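For context, those two proxy_set_header lines normally sit inside the server/location block that proxies to the app. A minimal sketch (the server name, certificate paths and upstream port are placeholders, not from my actual config):

server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    ssl_certificate     /etc/ssl/certs/example.crt;   # placeholder
    ssl_certificate_key /etc/ssl/private/example.key; # placeholder

    location / {
        proxy_pass http://127.0.0.1:8080;             # placeholder upstream app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port 443;
    }
}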
The problem disappeared. I think they fixed it on their side, and now we don't see it anymore.
In ejabberd 18.01-2, installed via apt in an LXC container running Ubuntu 18.04 Bionic LTS, I'm trying to set up mod_http_upload.
In the section listen, I have
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out port was 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs
2019-11-11 21:02:35.287 [warning] <0.367.0>@ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
- "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error Access denied by service policy. See the complete log:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
  <request xmlns='urn:xmpp:http:upload'>
    <filename>a.out</filename>
    <size>27232</size>
    <content-type>application/octet-stream</content-type>
  </request>
  <error code='403' type='auth'>
    <forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
    <text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
  </error>
</iq>
I had to enable debug logging in order to see something. It is quite verbose, but I think that the relevant part, not redundant with the client message, is
2019-11-11 20:53:08.329 [debug] <0.501.0>@mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried with ejabberd 18.01, a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means that the account attempting to use the upload feature is not allowed by the "local" Access rule in the vhost it sent the request to. It's probably a remote account, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas, I hope one of them suits your specific setup:
A) Don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or, if you really want to mix hosts, add Access rights so that accounts in that vhost are considered "local" to the upload service.
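For option B, a rough sketch of what your modules section could look like with @HOST@ (the port, docroot and upload sub-domain are carried over from your config; check your version's docs to confirm that @HOST@ is also expanded inside put_url):

modules:
  mod_http_upload:
    # @HOST@ expands to each served vhost, e.g. forumanalogue.fr
    host: "upload.@HOST@"
    put_url: "https://@HOST@:5444/upload"
    max_size: infinity
    thumbnail: true
    docroot: "/ejabberd/upload"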
I am using Postman to test an API I have. All is good when the request does not contain a sub-domain; however, when I add a sub-domain to the URL I get this response.
Could not get any response
There was an error connecting to http://subdomain.localhost:port/api/
Why this might have happened:
The server couldn't send a response: Ensure that the backend is working properly
Self-signed SSL certificates are being blocked: Fix this by turning off 'SSL certificate verification' in Settings > General
Proxy configured incorrectly: Ensure that proxy is configured correctly in Settings > Proxy
Request timeout: Change request timeout in Settings > General
If I copy the same URL from Postman and paste it into the browser, I get a proper response. Is there some kind of configuration I should do to make Postman work with sub-domains?
First, go to Settings in Postman:
Turn off SSL certificate verification in the General tab.
Turn off Global Proxy Configuration and Use System Proxy in the Proxy tab.
Set Request Timeout to 0 (zero).
Configure Apache:
If the above changes resulted in a 404 response, then continue reading ;-)
Users who host their site locally (like with XAMPP and/or WAMP) may be able to visit their virtual sites using an https:// prefixed address, but it's a lie, and to really enable SSL (for each virtual site), configure Apache like this:
Open the httpd-vhosts.conf file (from Apache's conf/extra directory) in your preferred text editor.
Change the virtual site's settings into something like:
<VirtualHost *:80 *:443>
    ServerName my-site.local
    ServerAlias *.my-site.local
    DocumentRoot "C:\xampp\htdocs\my-project\public"
    SSLEngine on
    SSLCertificateFile "path/to/my-generated.cert"
    SSLCertificateKeyFile "path/to/my-generated.key"
    SetEnv APPLICATION_ENV "development"
    <Directory "C:\xampp\htdocs\my-project\public">
        Options Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
But of course, generate a dummy SSL certificate and change all file paths, like "path/to/my-generated.cert", into real file addresses.
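If you still need that dummy certificate, one way to generate a self-signed one is with OpenSSL from a shell; this is just a sketch, so swap the output paths and the CN for your own:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 -subj "/CN=my-site.local" -keyout path/to/my-generated.key -out path/to/my-generated.cert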
Finally, test by visiting the local site in the browser, but using an http:// (without the S) prefixed address; Apache should now give an error like:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
I had the same issue. It was caused by a newline at the end of the "Authorization" header's value, which I had set manually by copy-pasting the bearer token (the token accidentally contained the newline at its end).
If you get a "Could not get any response" message from Postman native apps while sending your request, open Postman Console (View > Show Postman Console), resend the request and check for any error logs in the console.
Thanks to numaanashraf
Hi, this issue is resolved for me:
Settings -> General -> Request timeout in ms = 0
If all the above methods don't work, check your environment variables and make sure the following ones are not set. If they are set and not needed by any other application, remove them; a quick way to check and clear them is sketched below.
HTTP_PROXY
HTTPS_PROXY
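On Linux/macOS you can check and clear them for the current shell like this (just a sketch; on Windows, remove them via the System Environment Variables dialog instead):

# show whether any proxy variables are set
env | grep -i _proxy

# clear them for the current shell session only
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy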
Reference link
For me it was the http://localhost instead of https://localhost.
If you are getting this error, you need to do the following.
Step 1:
In Postman, click the wrench icon, go to settings, then go to the Proxy tab.
Step 2:
Create a custom Proxy. This article explains how to create a custom proxy.
After you create the custom Proxy, make sure you turn the Proxy toggle button to off. I put 61095 in for the proxy server and it worked for me.
Step 3 :
Success
I came up with this solution:
In Postman go to Settings --> Proxy
and turn off Global Proxy Configuration,
and turn on Use System Proxy.
Then go to the Windows hosts file
'C:\Windows\System32\drivers\etc\hosts',
open that file in administrator mode,
and add the sub-domain to the hosts file, for example as shown below.
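Using the sub-domain from the question, the added lines might look like this (the ::1 line is only needed if you also resolve over IPv6):

127.0.0.1       subdomain.localhost
::1             subdomain.localhost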
For me, what worked was to add 127.0.0.1 subdomain.localhost to my hosts file. On OSX that was /etc/hosts. Not sure why that was necessary, as I could reach the sub-domain from Chrome.
In Postman go to Settings --> Proxy
and turn off Global Proxy Configuration.
For me, it was that the route I was calling in my node server wasn't returning anything. Adding
return res.status(200).json({
  message: 'success!',
  response: 'success!'
});
to the route I was calling resolved the issue.
You mentioned you are using a CER certificate.
According to the Postman page on certificates:
"Choose your client certificate file in the CRT file field. Currently, we only support the CRT format. Support for other formats (like PFX) will come soon."
The extension name (CER, CRT) doesn't make the certificate that type of certificate, but these are the expected extension names.
CER is an X.509 certificate in binary form, DER encoded.
CRT is a binary X.509 certificate, encapsulated in text (base-64) encoding.
You can use OpenSSL to change a CER file into a CRT file. I have not had good luck with it but it looks like this.
openssl x509 -inform PEM -in certificate.cer -out certificate.crt
or
openssl x509 -inform DER -in certificate.cer -out certificate.crt
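If you're not sure whether your .cer file is DER or PEM encoded (and therefore which of the two commands above applies), you can probe it like this; if it errors out, try again with -inform PEM:

openssl x509 -inform DER -in certificate.cer -noout -text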
Postman for Linux Version 6.7.1 - Ubuntu 18.04 - linux 4.15.0-43-generic / x64
I had the same problem, and by chance I replaced http://localhost with http://127.0.0.1 and everything worked.
My /etc/hosts had the proper entries for localhost, and https://localhost requests always worked as expected.
I have no clue why replacing localhost with 127.0.0.1 for http solved the issue.
None of these solutions worked for me. Postman was not sending any request to the server because Postman could not resolve the host. So, if you modify your /etc/hosts to
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
It works for me.
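To double-check the hosts entry outside Postman, a plain curl against the URL from the question should now resolve (replace :port with your actual port):

curl -v http://subdomain.localhost:port/api/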
For me the issue was that the Content-Length was too big. I placed the content of the body in Notepad++, counted the characters, put that figure in Postman, and then it worked.
I know it does not directly answer why the OP's sub-domain was not working, but it might help out someone.
In my case it was invisible spaces that Postman didn't recognize; the string of text rendered without spaces in Postman.
I disabled SSL certificate validation and the system proxy, and even tried the Postman Chrome extension (which is about to be deprecated). But when I downloaded and tried Insomnia, it showed red dots in the places where those spaces were; they must have gotten there during copy/paste.
For anyone who experienced this issue with a real domain instead of localhost and couldn't solve it using ANY OF THE ABOVE solutions:
Try changing your network DNS (Wi-Fi or LAN) to some other DNS. For me, I used Google DNS 8.8.8.8, 8.8.4.4 and it worked!
The solution is very simple if you are using an ASP.NET Core 2 application. Inside the ConfigureServices method in the Startup.cs file, add these lines:
services.AddMvc()
.SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
.AddJsonOptions(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);
You just need to turn SSL off to send your request.
The proxy and the other settings lead to various other errors.
My issue was putting the wrong parameter in the header.
The required header was
Authorization: Token <string>
and I was trying
Authorization Token: <string>
After all the above methods, like turning OFF SSL certificate verification, turning ON only Use System Proxy, and removing the HTTP_PROXY and HTTPS_PROXY system environment variables, it worked.
Note: Had to restart the Postman app, since the environment variables were changed.
Unchecking proxy and SSL Certificate Verification didn't work for me.
Unsetting PROXY environment variables did the trick.
export http_proxy=
export ftp_proxy=
export https_proxy=
Change to the directory where Postman is installed and then:
./Postman
In my case, MVC wasn't able to serialize the results (I accidentally used a model instead of DTO). I debugged down to passing a simple string, which worked. Once I fixed the serialization it all came up.
In my case the (corporate) proxy was using a self-signed SSL certificate which Postman disliked. I discovered it by activating
View->Show Postman console
and retrying the request. The console then showed the certificate error. In
Settings->General
I disabled
SSL certificate verification.
As I'm using the deprecated Postman extension for Chrome, to solve this issue I had to:
Call some GET request using the Chrome Browser itself.
Wait for the error page "Your connection is not private" to appear.
Click on ADVANCED and then proceed to [url] (unsafe) link.
After this, requests through the extension itself should work.
In my case it was a misconfigured subnet. Only one of the 2 subnets in the ELB worked.
I figured this out by doing a nslookup and trying to curl the returned IPs directly. Only one worked.
Postman just kept using the misconfigured one.
I had the same issue.
Turned out my timeout was set too low. I changed it to 30ms thinking it was 30sec. I set it back to 0 and it started working again.
I got the same "Could not get any response" issue because of a wrong parameter in the header. I fixed it by removing the HOST parameter from the header.
PS: Unfortunately, I had to install other software to get this information. It would be great to get this error message from Postman instead of general nonsense.
In my case, I forgot to set the value of the variable in the "CURRENT VALUE" field.
I just experienced this error. In my case, the path was TOO LONG. So a URL like this gave me the error in Postman (fake example):
http://127.0.0.1:5000/api/batch/upload_import_deactivate_from_ready_folder
whereas
http://127.0.0.1:5000/api/batch/upld_impt_deac_ready_folder
worked fine.
Hope it helps someone who by accident read that far...
OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3, tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption. It's for plain HTTP requests. The openssl extension and its function support are unrelated; the built-in server does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to visit your webserver: screenshot
There should be a cert error and you'll have to click through a browser warning but that gets you to the point where you can hit your localhost with HTTPS requests, for development.
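If you prefer the command line over a browser for that check, something like this should reach the PHP server through stunnel; -k tells curl to accept the bogus certificate:

curl -vk https://localhost/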
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx with Laravel and also the SSL settings in your operating system at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight Unix/Windows line-ending issues when dealing with SSL certs. Sometimes you can go through the steps correctly but still get ruined by cert validation issues. I find the trick is to make the certs in Ubuntu or on a Mac and email them to yourself, or to use the Linux subsystem.
In my case, I kept running into an issue where I declared HTTPS somewhere but php artisan serve only works on HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out that I was using Axios to make a POST request to https://. Changing it to POST to http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP and HTTPS are being used.
The textbook definition is probably something like: php artisan serve only works over HTTP, but something in your setup requires an underlying SSL layer.
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
Browse with the ngrok's secure public address (the one with https).
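As a concrete sketch with the built-in server from the question (run the two commands in separate terminals):

php -S localhost:8000
ngrok http 8000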
Note: Though it works like a charm, it seems like overkill since it requires internet access, and I would appreciate better recommendations.
I am recording an https session of a JSF-based web app in JMeter and it's not working.
Target application is hosted on: AWS
JMeter version: 2.9 r1437961
Browser: Chrome version 29.0.1547.65
Java: java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.5) (6b27-1.12.5-0ubuntu0.12.04.1)
OpenJDK Server VM (build 20.0-b12, mixed mode)
OS: Ubuntu 12.04
Proxy server config:
Port: 8084
Target Controller: Test Plan > Thread Group
Capture HTTP headers is checked.
HTTP Sample settings:
Type: not selected. Follow Redirects and Use KeepAlive checked.
URL patterns to exclude:
1. Added Suggested Excludes
2. .*\.jsf
Exceptions that are getting thrown (from JMeter.log):
ERROR - jmeter.protocol.http.proxy.Proxy: java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
ERROR - jmeter.protocol.http.proxy.Proxy: Problem with SSL certificate? Ensure browser is set to accept the JMeter proxy cert: Connection closed by remote host java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
The steps I am following are:
1. Set proxy server pointing to 8084.
2. Change proxy settings from chrome:
Set https proxy to 8084.
3. Disabled all chrome extensions and chrome account.
4. Started jmeter proxy server and hit https://url/login
5. Certificate confirmation page appears on browser. Meanwhile, jmeter.log shows:
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Creating Daemon Socket on port: 8084
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Proxy up and running!
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Proxy will remove the headers: If-Modified-Since,If-None-Match,Host
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (clients4.google.com)
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (translate.googleapis.com)
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHCAbstractImpl: Local host = abhijeet-desktop
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: HTTP request retry count = 1
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: Setting up HTTPS TrustAll scheme
2013/09/11 13:22:40 INFO - jmeter.protocol.http.proxy.FormCharSetFinder: Using htmlparser version: 2.0 (Release Build Sep 17, 2006)<br>
6. Thread group starts showing unknown requests to these domains:
1. translate.googleapis.com
2. clients4.google.com
3. www.google.co.in
4. www.google.com
5. ssl.gstatic.com
6. safebrowsing.google.com
7. alt1-safebrowsing.google.com
8. clients4.google.com
9. www.gstatic.com
...
n. all other requests going to the target application.
(For every request the above exceptions are thrown)
I believe the Google domain requests above are getting recorded because Chrome dynamically searches the keywords on Google while I am typing the URL string in the address bar. But I don't want these requests to get recorded in the Thread Group.
Also, I tried the solutions from these pages but they didn't work for me:
Link 1
Link 2
Link 3
I don't understand why JMeter is not able to use the fake certificate that it already has. I checked the SSL settings in Chrome and I could not find any JMeter certificates. Need help!!
To do it in Chrome/IE, we have to place the certificate into the 'Trusted Root Certification Authorities' store:
Double click the certificate created
Certificate Import Wizard opens
Click Next
Select Second radio button (Place All Certificates in the following store)
Click Browse and select 'Trusted Root Certification Authorities'. Click Next
Click Finish
Check that your certificate is installed in Chrome Settings (under HTTPS/SSL) - Manage certificates... (Trusted Root Certification Authorities tab).
This should at least cure the exceptions thrown, as your screenshot shows.
I had the same problem and solved it by trusting the certificate. Just like you, when I looked at
Options > Advanced > Certificates > View Certificates ==> Authorities
I couldn't see a name like ApacheJMeterRootCertificate.crt or a related name, but I realized that there is an entry named something like
_DO NOT INSTALL unless this is your certificate
I clicked this object and chose 'Edit Trust' for both items under it. I share my screenshot; I hope this can help you and others.
I use Firefox. In Chrome there should be a similar way to edit the certificate.
JMeter 2.12 has good support for HTTPS. Under the WorkBench, just select Add -> Non-Test Elements -> HTTP(S) Test Script Recorder. This version worked the first time for me.
Recent versions of Google Chrome have made it difficult to bypass security settings, in order to avoid security threats such as phishing or man-in-the-middle attacks.
I have successfully configured Google Chrome (v54.0) to allow the JMeter self-signed certificate for HTTP(S) recording.
Here are the instructions (on Windows):
Open the MMC console (Win + R, type mmc, press Enter)
Select File > Add/Remove Snap-in
Select the Certificates snap-in for Current User
Select Trusted Root Certification Authorities >> Certificates
Right-click over Certificates folder and select All Tasks >> Import...
Import the JMeter self-signed certificate using the wizard, keeping the default options.
Once installed, right-click over the JMeter self-signed certificate and select Properties.
On the General tab, make sure the "Enable for all purposes" option is selected.
On Cross-Certificates, include the URL of the application you want to record (make sure you enter the full URL, e.g. https://www.live.com).
Close all windows.
Done. You should now be able to reach the destination, bypassing the Chrome security alert, and start recording.