How to configure Gzip for JBoss?

I am trying to speed up my web app by reducing the size of the transferred data. In Nginx, for example, there is a dedicated module for this. How do I enable compression on a JBoss server?

JBoss AS version 6 or lower
To enable gzip compression, add the following settings to your existing HTTP connector, located in /server/default/deploy/jbossweb.sar/server.xml:
<!-- A HTTP/1.1 Connector on port 8080 -->
<Connector protocol="HTTP/1.1" port="${jboss.web.http.port}"
address="${jboss.bind.address}" redirectPort="${jboss.web.https.port}"
compression="force"
compressionMinSize="512"
noCompressionUserAgents=""
compressableMimeType="text/html,text/xml,text/css,text/javascript"
/>
JBoss AS 7.0.x
JBoss 7.0.x - 7.1.0 have no built-in support for gzip compression.
See also issue report at: https://issues.jboss.org/browse/AS7-2991
One way to add gzip compression in JBoss 7.0 is to add it as a servlet filter.
For details, see: https://code.google.com/p/webutilities/wiki/CompressionFilter
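A minimal web.xml registration might look roughly like this (a sketch; the fully qualified filter class name below is my recollection of the webutilities project, so verify it against the wiki page above):
<!-- register the webutilities compression filter for all requests -->
<filter>
  <filter-name>compressionFilter</filter-name>
  <filter-class>com.googlecode.webutilities.filters.CompressionFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>compressionFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>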
JBoss AS 7.1.1
JBoss has since added gzip compression back: as of version 7.1.1.Final, it is supported out of the box again. To enable it, add the following to the server launch parameters:
-Dorg.apache.coyote.http11.Http11Protocol.COMPRESSION=on
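For example, on Linux the property can be appended to JAVA_OPTS in bin/standalone.conf (a sketch; adjust it to however you pass JVM options to your installation):
# bin/standalone.conf -- enable gzip compression for the HTTP connector
JAVA_OPTS="$JAVA_OPTS -Dorg.apache.coyote.http11.Http11Protocol.COMPRESSION=on"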


Specify JFROG_ACCESS home instead of ~/.jfrog_access (Artifactory 5.5.2)

I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory, and that part works well. There is, however, also the JFrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's not the way to go for sure. Any help appreciated.
In case someone stumbles over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) into the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME environment variable will be recognized on Access startup.
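Roughly (a sketch; the <zip extract> placeholder stands for wherever you unpacked the distribution zip):
# copy the bundled context descriptors into Tomcat's per-host configuration directory
cp <zip extract>/misc/tomcat/access.xml <zip extract>/misc/tomcat/artifactory.xml $CATALINA_HOME/conf/Catalina/localhost/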
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/host/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
<Parameter name="jfrog.access.bundled" value="true" override="true"/>
<!-- enable annotations scanning of access jar files -->
<JarScanner scanClassPath="false">
<JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
</JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost to prevent the Server HTTP header from being rewritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header, which seems like a really fragile part of the Access Client. They should have used X-JFrog-Access-Server or something similar instead of relying on a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the url:
https://repo.mydomain.org/artifactory/webapp/

ECC Certificates not working in Chrome?

I'm attempting to configure HAProxy to serve an RSA or ECC certificate depending on the client's browser. I am initially trying to get ECC certificates configured, and I noticed that the latest version of Chrome does not seem to support them. Is anyone else having this problem? I am using OS X 10.11.4 with the following versions:
Chrome (50.0.2661.94) (64-bit) [doesn't work]
Firefox (46.0) (64-bit) [works]
Safari (9.1 11601.5.17.1) (64-bit) [works]
cURL (7.43.0 (x86_64-apple-darwin15.0) libcurl/7.43.0 SecureTransport zlib/1.2.5) [works]
The cURL command I call is curl --ciphers ecdhe_ecdsa_aes_128_sha --ssl --head --tlsv1.2 https://<url>, and it returns 200 OK.
And I am using Ubuntu Xenial 16.04 LTS on the server side with the following versions:
[root@haproxy-server]: /etc/haproxy # haproxy -vv
HA-Proxy version 1.6.4 2016/03/13
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2
OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2g 1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g-fips 1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.38 2015-11-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Here's the screenshot of the exact problem: http://imgur.com/wlmQbIi
Here's the screenshot of the same website with Safari: http://imgur.com/FEwmmj9
And finally, my haproxy.cfg file:
global
log /dev/log local0
log /dev/log local1 notice
user haproxy
group haproxy
chroot /var/lib/haproxy
daemon
stats socket /run/haproxy/admin.sock level admin
maxconn 15000
spread-checks 5
tune.ssl.default-dh-param 2048
tune.ssl.maxrecord 1400
tune.idletimer 1000
ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
defaults
log global
mode http
retries 3
balance roundrobin
hash-type map-based
option httplog
option dontlognull
option forwardfor
option http-server-close
option redispatch
option abortonclose
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 30s
timeout http-keep-alive 10s
timeout check 10s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend http-frontend
bind *:80 accept-proxy
reqadd X-Forwarded-Proto:\ http
use_backend %[req.hdr(host),lower,map_sub(/etc/haproxy/backend.map,test-backend)]
frontend https-frontend
bind *:443 accept-proxy ssl crt /etc/ssl/pem/ecc alpn http/1.1
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r\ ssl_version:%sslv\ ssl_cipher:%sslc\ %[ssl_fc_sni]\ %[ssl_fc_npn]
rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubdomains;\ preload
rspadd X-Frame-Options:\ DENY
reqadd X-Forwarded-Proto:\ https
use_backend %[req.hdr(host),lower,map_sub(/etc/haproxy/backend.map,test-backend)]
backend test-backend
balance leastconn
redirect scheme https code 301 if !{ ssl_fc }
server test-server 10.10.10.40:80 check
I know this post is not in the right section of StackExchange (sorry!), but I wanted to post a potential solution. I think the problem is the elliptic curve support in Chrome vs. Firefox vs. Safari. From the SSL Labs website:
Safari 9 / OS X 10.11: secp256r1, secp384r1, secp521r1
Firefox 44 / OS X: secp256r1, secp384r1, secp521r1
Chrome 48 / OS X: secp256r1, secp384r1
The problem is that the private key for the ECC certificate I was testing was generated with secp521r1 (http://imgur.com/dbrJQuW), which the latest version of Chrome on OS X 10.11 doesn't support.
See this issue: https://security.stackexchange.com/questions/100991/why-is-secp521r1-no-longer-supported-in-chrome-others
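If you want to stay with ECC, one fix is to regenerate the key on a curve Chrome does support, such as secp384r1, and rebuild the combined PEM file that HAProxy's crt option expects (a sketch using a self-signed certificate and hypothetical file names/subject; adapt it to your CA workflow):
# generate an ECC key on secp384r1 (supported by Chrome) and a self-signed test certificate
openssl ecparam -name secp384r1 -genkey -noout -out ecc.key
openssl req -new -x509 -key ecc.key -out ecc.crt -days 365 -subj "/CN=example.org"
# HAProxy expects certificate and key concatenated into a single PEM file
cat ecc.crt ecc.key > /etc/ssl/pem/ecc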
It seems that only the following two cipher suites are supported by your web server:
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
I suppose that the missing cipher suites (at least TLS_RSA_WITH_AES_128_CBC_SHA) are the reason for your problem.
The cipher suite TLS_RSA_WITH_AES_128_CBC_SHA must be supported in TLS 1.2 (see section 9, Mandatory Cipher Suites, of RFC 5246). Looking forward, I would also recommend including the suites
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
and the suites
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
are strongly recommended too; see the TLS 1.3 specification. You use the Nginx web server, which should support TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 and TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256; these are very good because they combine security and performance. I'd recommend including all of these cipher suites.
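If you want to follow that advice in the HAProxy configuration above, the mandatory RSA suite can be appended to the existing list (a sketch; in OpenSSL naming, TLS_RSA_WITH_AES_128_CBC_SHA is AES128-SHA, and the ECDHE GCM/CHACHA20 suites are already present):
# haproxy.cfg, global section -- existing cipher list plus the mandatory TLS 1.2 RSA suite
ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES128-SHA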
Additionally, I'd recommend using, or at least carefully examining, the Nginx settings for modern or intermediate web browsers produced by the Mozilla SSL Configuration Generator. You can read more about the suites here.

CakePHP 3 - Enable SSL on development server [duplicate]

OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3,
tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption; it's for plain HTTP requests only. The openssl extension and its function support are unrelated: the built-in server does not accept requests or send responses over the SSL stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to reach your webserver.
There will be a certificate error and you'll have to click through a browser warning, but that gets you to the point where you can hit your localhost with HTTPS requests during development.
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx, Laravel, and the SSL settings in your operating system at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight Unix vs. Windows line endings when dealing with SSL certs. Sometimes you can go through the steps correctly but still get ruined by cert validation issues. I find the trick is to make the certs on Ubuntu or a Mac and email them to yourself, or use the Linux subsystem.
In my case, I kept running into an issue where I declared HTTPS somewhere, but php artisan serve only works over HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out that I was using Axios to make a POST request to https://; changing it to POST to http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook definition is probably something like: php artisan serve only speaks plain HTTP, so any request that expects an underlying SSL layer will fail with this error.
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
Browse with the ngrok's secure public address (the one with https).
Note: Though it works like a charm, it seems like overkill since it requires an internet connection; better recommendations would be appreciated.
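Concretely, for the built-in PHP server from the earlier answers (a sketch):
# expose the local PHP dev server on port 8000 through an HTTPS tunnel
ngrok http 8000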

HTTP requests to CEP

I installed the CEP (Proton) following the official documentation: https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/CEP_GE_-_IBM_Proactive_Technology_Online_Installation_and_Administration_Guide
After that, I watched this recommended video to learn more about CEP: https://edu.fiware.org/pluginfile.php/653/mod_resource/content/1/CEP-Tutorial.mp4
But I can't check the engine instance state, because this error appears in the response: Could not read instance state, message: Error activating jmx proxy
It seems that JMX is not properly configured.
As described in the installation guide, you need to add the manager-jmx role in the Apache Tomcat users configuration file and assign it to the manager user:
<tomcat-users>
...
<role rolename="manager-jmx" />
<user username="manager" password="manager" roles="manager-gui,manager-status,manager-script,manager-jmx" />
...
</tomcat-users>
You need to enable JMX access on Apache Tomcat by adding the JMX options to CATALINA_OPTS, as described in the installation guide.
You also need to specify the JMX service port in the ProtonAdmin.properties file, as described in the same installation guide.
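As a rough sketch, the JMX options typically look like this (the port 9999 here is an assumption and must match the port you configure in ProtonAdmin.properties; outside a test environment you would normally enable authentication and SSL):
# setenv.sh, or wherever CATALINA_OPTS is set for your Tomcat instance
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"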

Server has a weak ephemeral Diffie-Hellman public key (ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY) in the Chrome and Firefox browsers

I have a secure Tomcat servlet running on an Amazon AMI. I've set up a secure connector on port 8443 with the TLS protocol, using a .jks keystore:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
compression="off"
keystoreFile="\cert\localhost.jks" keystorePass="password"
/>
When I try to access the URL from the Internet, I get an "ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY" error.
I began to experience this error after updating to Chrome 45; I'm now on Chrome version 45.0.2454.85 m.
Is there anyone who can help me fix this error?
Locate your Tomcat install path (TOMCAT_PATH). Find the XML definition of the SSL HTTP/1.1 Connector in server.xml and add the following ciphers attribute to the connector:
ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA"/>
For even better security, remove all references to RC4 ciphers.
Find all ciphers containing "RC4" in the list above and remove them from your list. RC4 ciphers are very old, but some older smartphones and/or proxy servers still depend on them. A summary of who still uses RC4 is here: https://blog.cloudflare.com/the-web-is-world-wide-or-who-still-needs-rc4/ For best compatibility, leave RC4 in; for best security, remove it.
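For instance, the same ciphers attribute with the RC4 suites stripped out would be (a sketch, simply the list above minus the two RC4 entries):
ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA"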
After a while I got the answer:
I had to set up the ciphers for my connector in Tomcat 7 like this:
<Connector port="8443" protocol="HTTP/1.1"
maxThreads="200" SSLEnabled="true"
scheme="https" secure="true" disableUploadTimeout="true"
acceptCount="100"
clientAuth="false" sslProtocol="TLS"
keystoreFile="localhost.jks" keystorePass="password"
ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,
TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,
TLS_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA"
/>
However, the Google Chrome browser still tells me the cipher is deprecated.
To fix that issue you have to configure ciphers matching your JDK. Refer to the answer at https://stackoverflow.com/a/32473771