CakePHP 3 - Enable SSL on development server [duplicate] - cakephp-3.0

OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3,
tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?

See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL/TLS; it's for plain HTTP requests only. The openssl extension and its function support are unrelated: the built-in server does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
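Then test the TLS front-end, for instance with curl (-k is needed because stunnel's bundled certificate is self-signed):
curl -k https://localhost/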
It's just for toying anyway.

It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
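If Homebrew didn't drop a default stunnel.pem in place, a self-signed one can be generated along these lines (a hedged sketch; the CN is a throwaway value):
# generate key and cert, then concatenate them into the single PEM file stunnel expects
openssl req -new -x509 -days 365 -nodes -subj "/CN=localhost" -keyout stunnel.key -out stunnel.crt
cat stunnel.key stunnel.crt > /usr/local/etc/stunnel/stunnel.pem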
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can open https://localhost:443 to reach your webserver.
There will be a certificate error and you'll have to click through a browser warning, but that gets you to the point where you can hit your localhost with HTTPS requests during development.

I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx, Laravel, and the SSL settings in your operating system all at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight Unix carriage returns when dealing with SSL certs. Sometimes you can go through the steps correctly but still get tripped up by cert validation issues. I find the trick is to make the certs in Ubuntu or on a Mac and email them to yourself, or to use the Linux subsystem.
In my case, I kept running into an issue where I had declared HTTPS somewhere, but php artisan serve only works over HTTP.
I just triggered this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out I was using Axios to make a POST request to an https:// URL. Changing it to http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook description is probably something like: php artisan serve only works over HTTP, so any request that arrives wrapped in an SSL layer triggers this error.
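For instance, the Axios change above amounts to something like this (a hedged sketch; the base URL and endpoint are placeholders, not taken from the original post):
// php artisan serve only speaks plain HTTP, so the base URL must use http://, not https://
import axios from 'axios';

const api = axios.create({ baseURL: 'http://localhost:8000' });

// placeholder endpoint purely for illustration
api.post('/api/login', { email: 'user@example.com', password: 'secret' })
  .then(res => console.log(res.status))
  .catch(err => console.error(err.message));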

Use Ngrok
Expose your server's port like so:
ngrok http <server port>
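For the PHP dev server from the original question that would be, for instance (port 8000 assumed from the question):
ngrok http 8000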
Browse with the ngrok's secure public address (the one with https).
Note: Though it works like a charm, it feels like overkill since it requires an internet connection; I'd appreciate better recommendations.

Related

How do setup mod_http_upload in ejabberd

In ejabberd 18.01-2, installed in lxc container Ubuntu 18.04 Bionic LTS using apt, I'm trying to setup mod_http_upload.
In the section listen, I have
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out port was 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs
2019-11-11 21:02:35.287 [warning] <0.367.0>@ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
- "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim) but when I try to send a file to another local account, I receive an error Access denied by service policy, see the complete log:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
<request xmlns='urn:xmpp:http:upload'>
<filename>a.out</filename>
<size>27232</size>
<content-type>application/octet-stream</content-type>
</request>
<error code='403' type='auth'>
<forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
</error>
</iq>
I had to enable debug logging in order to see anything. It is quite verbose, but I think the relevant part, which is non-redundant with the client message, is
2019-11-11 20:53:08.329 [debug] <0.501.0>@mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried with ejabberd 18.01, a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means that the account attempting to use the upload feature is not allowed by the "local" Access rule in the vhost it sent the request to. It's probably a remote account, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas, I hope one of them suits your specific setup:
A) don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or if you really want to mess with hosts, then add Access rights so accounts in that vhost are considered "local" to the upload service.
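For example, option B could look roughly like this (a hedged sketch reusing the values from the question; @HOST@ is ejabberd's placeholder for the served virtual host):
modules:
  mod_http_upload:
    # @HOST@ expands to each virtual host, e.g. forumanalogue.fr
    host: "upload.@HOST@"
    max_size: infinity
    thumbnail: true
    put_url: "https://@HOST@:5444/upload"
    docroot: "/ejabberd/upload"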

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstat network component, I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly, or just on puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For Docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an ssh server is running on your localhost and that keys have been generated.
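On macOS that amounts to roughly the following (a hedged sketch; systemsetup and ssh-copy-id are standard tools, but your setup may differ):
# enable the built-in SSH server (equivalent to System Preferences > Sharing > Remote Login)
sudo systemsetup -setremotelogin on
# generate a key pair if one doesn't exist yet, and authorize it for localhost
ssh-keygen -t rsa -b 4096
ssh-copy-id localhost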
I didn't find any documentation for puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default path. If you ssh to a machine with a specific command (rather than opening a shell), you get that default path. This does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
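Put together, the fix looks roughly like this (a hedged sketch; the PATH value comes from the steps above and the restart command depends on your OS):
# /etc/ssh/sshd_config must contain: PermitUserEnvironment yes  (edit it with sudo)
echo 'PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin' > ~/.ssh/environment
# restart the SSH daemon so the change takes effect
sudo launchctl kickstart -k system/com.openssh.sshd   # macOS; on Linux: sudo systemctl restart sshd
# verify that a non-interactive command now sees the right PATH and can find docker
ssh localhost 'echo $PATH; which docker'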

Can't access WildFly over HTTPS with browsers but can with OpenSSL client

I've deployed Keycloak on WildFly 10 via Docker. SSL support was enabled via cli. Final standalone.xml has:
<security-realm name="UndertowRealm">
<server-identities>
<ssl>
<keystore path="keycloak.jks" relative-to="jboss.server.config.dir" keystore-password="changeit"
alias="mydomain" key-password="changeit"/>
</ssl>
</server-identities>
</security-realm>
Undertow subsystem:
<https-listener name="default-https" security-realm="UndertowRealm"
socket-binding="https"/>
Key was generated and placed in $JBOSS_HOME/standalone/configuration
keytool -genkey -noprompt -alias mydomain -dname "CN=mydomain,
OU=mydomain, O=mydomain, L=none, S=none, C=SI" -keystore
keycloak.jks -storepass changeit -keypass changeit
Port 8443 is exposed via Docker.
Accessing https://mydomain:8443/ in chrome results in ERR_CONNECTION_CLOSED. Firefox returns "Secure Connection Failed, the connection was interrupted..."
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
Input:
GET / HTTP/1.1
Host: https://mydomain:8443
This returns the Keycloak welcome page.
So clearly WildFly is working, but I am being blocked by the browsers for whatever reason. What could this reason be? I was under the impression that I should be able to add an exception for a self-signed certificate in either browser. Maybe the generated key length is too short, or maybe I am hitting some other security constraint imposed by Firefox/Chrome?
Using these parameters in keytool solved the problem: -keyalg RSA -keysize 2048
... -dname "CN=mydomain
The certificate is probably malformed. Browsers and other user agents, like cURL and OpenSSL, use different policies to validate an end-entity certificate. A browser will reject a certificate if the hostname is only in the Common Name (CN), while other user agents will accept it.
The short answer to this problem is: place DNS names in the Subject Alternative Name (SAN), not the Common Name (CN).
You may still encounter other problems, but getting the names right will help immensely with browsers.
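A hedged sketch of such a keytool invocation, combining the RSA/2048 fix from the previous answer with a DNS name in the SAN (the hostname and passwords are the question's own placeholders):
keytool -genkeypair -noprompt -alias mydomain -keyalg RSA -keysize 2048 \
    -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
    -ext "SAN=dns:mydomain" \
    -keystore keycloak.jks -storepass changeit -keypass changeit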
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
OpenSSL prior to 1.1.0 did not perform hostname validation; prior versions will accept any name.
cURL or Wget would be a better tool to test with in this case.
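For example (a hedged sketch that assumes the self-signed certificate is first exported from the keystore used in the question):
# export the server certificate, then let curl validate both the chain and the hostname against it
keytool -exportcert -rfc -alias mydomain -keystore keycloak.jks -storepass changeit -file mydomain.crt
curl -v --cacert mydomain.crt https://mydomain:8443/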
For reading on the verification you should perform when using OpenSSL, see:
SSL/TLS Client
For reading on the rules for hostnames and where they should appear in a X509 certificate, see:
How do you sign Certificate Signing Request with your Certification Authority?
How to create a self-signed certificate with openssl?

ECC Certificates not working in Chrome?

I'm attempting to configure HAProxy to serve an RSA or ECC certificate depending on the client's browser. I initially am trying to get ECC certificates configured, and I noticed that the latest version of Chrome does not support them. Wondering if anyone else is having this problem? I am using OS X 10.11.4 with the following versions:
Chrome (50.0.2661.94) (64-bit) [doesn't work]
Firefox (46.0) (64-bit) [works]
Safari (9.1 11601.5.17.1) (64-bit) [works]
cURL (7.43.0 (x86_64-apple-darwin15.0) libcurl/7.43.0 SecureTransport zlib/1.2.5) [works]
The cURL command I call is curl --ciphers ecdhe_ecdsa_aes_128_sha --ssl --head --tlsv1.2 https://<url>, and it returns 200 OK.
And I am using Ubuntu Xenial 16.04 LTS on the server side with the following versions:
[root@haproxy-server]: /etc/haproxy # haproxy -vv
HA-Proxy version 1.6.4 2016/03/13
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2
OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2g 1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g-fips 1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.38 2015-11-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Here's the screenshot of the exact problem: http://imgur.com/wlmQbIi
Here's the screenshot of the same website with Safari: http://imgur.com/FEwmmj9
And finally, my haproxy.cfg file:
global
log /dev/log local0
log /dev/log local1 notice
user haproxy
group haproxy
chroot /var/lib/haproxy
daemon
stats socket /run/haproxy/admin.sock level admin
maxconn 15000
spread-checks 5
tune.ssl.default-dh-param 2048
tune.ssl.maxrecord 1400
tune.idletimer 1000
ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
defaults
log global
mode http
retries 3
balance roundrobin
hash-type map-based
option httplog
option dontlognull
option forwardfor
option http-server-close
option redispatch
option abortonclose
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 30s
timeout http-keep-alive 10s
timeout check 10s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend http-frontend
bind *:80 accept-proxy
reqadd X-Forwarded-Proto:\ http
use_backend %[req.hdr(host),lower,map_sub(/etc/haproxy/backend.map,test-backend)]
frontend https-frontend
bind *:443 accept-proxy ssl crt /etc/ssl/pem/ecc alpn http/1.1
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r\ ssl_version:%sslv\ ssl_cipher:%sslc\ %[ssl_fc_sni]\ %[ssl_fc_npn]
rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubdomains;\ preload
rspadd X-Frame-Options:\ DENY
reqadd X-Forwarded-Proto:\ https
use_backend %[req.hdr(host),lower,map_sub(/etc/haproxy/backend.map,test-backend)]
backend test-backend
balance leastconn
redirect scheme https code 301 if !{ ssl_fc }
server test-server 10.10.10.40:80 check
I know this post is not in the right section of StackExchange (sorry!) but I wanted to post a potential solution. I think the problem is the elliptic curve support in Chrome vs. Firefox vs. Safari. From the SSL Labs website:
Safari 9 / OS X 10.11: secp256r1, secp384r1, secp521r1
Firefox 44 / OS X: secp256r1, secp384r1, secp521r1
Chrome 48 / OS X: secp256r1, secp384r1
The problem is that the private key for the ECC certificate I was testing was generated with secp521r1 (http://imgur.com/dbrJQuW), which the latest version of Chrome on OS X 10.11 doesn't support.
See this issue: https://security.stackexchange.com/questions/100991/why-is-secp521r1-no-longer-supported-in-chrome-others
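A hedged sketch of regenerating the key on a curve Chrome does support (prime256v1, i.e. secp256r1); the file names and the concatenation target are assumptions, not taken from the original setup:
# generate an EC key on prime256v1 and a throwaway self-signed cert for testing
openssl ecparam -name prime256v1 -genkey -noout -out ecc.key
openssl req -new -x509 -key ecc.key -out ecc.crt -days 365 -subj "/CN=example.com"
# HAProxy expects certificate and key concatenated into a single PEM under the crt path
cat ecc.crt ecc.key > /etc/ssl/pem/ecc/example.com.pem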
It seems that only the following two cipher suites are supported by your web server:
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
I suppose that the missing cipher suites (at least TLS_RSA_WITH_AES_128_CBC_SHA) are the reason for your problem.
The cipher suite TLS_RSA_WITH_AES_128_CBC_SHA must be supported in TLS 1.2 (see section 9, Mandatory Cipher Suites, of RFC 5246). In the same way, I would recommend looking forward and including the suites
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
and the suites
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
are strongly recommended too; see the TLS 1.3 specification. You use the Nginx web server, which should support TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 and TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256; these are very good because they combine security and performance. I'd recommend including all of these cipher suites.
I'd additionally recommend using, or at least carefully examining, the recommended Nginx settings for modern or intermediate web browsers from the Mozilla SSL Configuration Generator. You can read more about the suites here.

JMeter https proxy recording not working

I am recording an https session of a JSF-based web app in JMeter and it's not working.
Target application is hosted on: AWS
JMeter version: 2.9 r1437961
Browser: Chrome version 29.0.1547.65
Java: java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.5) (6b27-1.12.5-0ubuntu0.12.04.1)
OpenJDK Server VM (build 20.0-b12, mixed mode)
OS: Ubuntu 12.04
Proxy server config:
Port: 8084
Target Controller: Test Plan > Thread Group
Capture HTTP headers is checked.
HTTP Sample settings:
Type: not selected. Follow Redirects and Use KeepAlive checked.
URL patterns to exclude:
1. Added Suggested Excludes
2. .*\.jsf
Exceptions that are getting thrown (from JMeter.log):
ERROR - jmeter.protocol.http.proxy.Proxy: java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
ERROR - jmeter.protocol.http.proxy.Proxy: Problem with SSL certificate? Ensure browser is set to accept the JMeter proxy cert: Connection closed by remote host java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
The steps I am following are:
1. Set proxy server pointing to 8084.
2. Change proxy settings from chrome:
Set https proxy to 8084.
3. Disabled all chrome extensions and chrome account.
4. Started jmeter proxy server and hit https://url/login
5. Certificate confirmation page appears on browser. Meanwhile, jmeter.log shows:
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Creating Daemon Socket on port: 8084
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Proxy up and running!
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Proxy will remove the headers: If-Modified-Since,If-None-Match,Host
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (clients4.google.com)
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (translate.googleapis.com)
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHCAbstractImpl: Local host = abhijeet-desktop
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: HTTP request retry count = 1
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: Setting up HTTPS TrustAll scheme
2013/09/11 13:22:40 INFO - jmeter.protocol.http.proxy.FormCharSetFinder: Using htmlparser version: 2.0 (Release Build Sep 17, 2006)<br>
6. Thread group starts showing unknown requests to these domains:
1. translate.googleapis.com
2. clients4.google.com
3. www.google.co.in
4. www.google.com
5. ssl.gstatic.com
6. safebrowsing.google.com
7. alt1-safebrowsing.google.com
8. clients4.google.com
9. www.gstatic.com
.
.
n. all other requests going to the target application.
(For every request the above exceptions are thrown)
I believe the Google domain requests above are getting recorded because Chrome is dynamically searching the keywords on Google while I am typing the URL in the address bar. But I don't want these requests to get recorded in the Thread Group.
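A hedged aside: stray browser requests like these can usually be filtered out by adding extra entries under URL Patterns to Exclude in the proxy server element, for example:
.*\.google\.com.*
.*\.google\.co\.in.*
.*\.googleapis\.com.*
.*\.gstatic\.com.*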
Also, I tried the solutions from these pages but they didn't work for me:
Link 1
Link 2
Link 3
I don't understand why JMeter is not able to use the fake certificate that it already has. I checked the SSL settings in Chrome and could not find any JMeter certificates. Need help!
To do this in Chrome/IE, we have to place the certificate into the 'Trusted Root Certification Authorities' store:
Double click the certificate created
Certificate Import Wizard opens
Click Next
Select Second radio button (Place All Certificates in the following store)
Click Browse and select 'Trusted Root Certification Authorities'. Click Next
Click Finish
Check that your certificate is installed in Chrome Settings (under HTTPS/SSL) - Manage certificates... ('Trusted Root Certification Authorities' tab)
This should at least cure the exceptions thrown, as shown in your log.
I had the same problem and solved it by trusting the certificate. Just like you, when I looked at
Options > Advanced > Certificates > View Certificates ==> Authorities
I couldn't see a name like ApacheJMeterRootCertificate.crt or anything related, but I realized there was an entry named something like
_DO NOT INSTALL unless this is your certificate
I clicked that entry and chose 'Edit Trust' for both items under it. I've shared my screenshot; I hope this helps you and others.
I use Firefox. In Chrome there should be a similar way to edit the certificate.
JMeter 2.12 has good support for HTTPS. Under the WorkBench, just select Add -> Non-Test Elements -> HTTP(S) Test Script Recorder. This version worked the first time for me.
The latest versions of Google Chrome have made it difficult to bypass security settings, to protect against threats such as phishing or man-in-the-middle attacks.
I have successfully configured Google Chrome (v54.0) to accept the JMeter self-signed certificate for HTTP(S) recording.
Here the instructions (on Windows):
Open MMC console (SUPER + R, Type mmc, Press Enter)
Select File >> Add/Remove Snap-in
Select Certificates Snap-in for Current User
Select Trusted Root Certification Authorities >> Certificates
Right-click over Certificates folder and select All Tasks >> Import...
Import JMeter Self-Signed certificate using the wizard keeping the default options.
Once installed, right-click over JMeter Self-Signed certificate and select Properties
On General tab, make sure Enable for all purposes option is selected
On Cross-Certificates, include the URL of the application you want to record (make sure you enter the full url, e.g. https://www.live.com)
Close all windows.
Done. You should now be able to reach the destination bypassing Chrome security alert and start recording.