Running non-www stuff on an Elastic Beanstalk Docker container - smtp

I want to run an SMTP server on a Docker container in Elastic Beanstalk, so in my Dockerfile I have exposed port 25 (and no other ports):
EXPOSE 25
I also edited the Beanstalk load balancer (using the EC2 web admin) and added port 25 to it:
| LB Protocol | LB Port | Instance Protocol | Instance Port | SSL |
| TCP | 25 | TCP | 25 | N/A |
....
And edited the security group of the instance to allow inbound TCP traffic to port 25 (allowed all locations to be able to connect to the instance directly).
It doesn't seem to work, though. If I use the same Dockerfile in VirtualBox (with the option -p 25:25) I can connect to port 25 through the host machine and the SMTP server is listening. If I run the container in Elastic Beanstalk with the configuration above, I can't connect to port 25 either through the load balancer or directly on the EC2 instance.
Any ideas what I'm doing wrong here?

Instead of editing the Load Balancer configuration directly from the EC2 web admin, it is recommended that you do it using Elastic Beanstalk ebextensions, because those changes persist for your environment even if the EC2 instances in the auto-scaling group are replaced.
Can you try the following?
Create a file "01-elb.config" in a folder called .ebextensions in your app source with the following contents:
option_settings:
  - namespace: aws:cloudformation:template:parameter
    option_name: InstancePort
    value: 25

Resources:
  AWSEBLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      Listeners:
        - InstancePort: 25
          LoadBalancerPort: 80
          Protocol: TCP
        - InstancePort: 25
          LoadBalancerPort: 25
          Protocol: TCP
      AvailabilityZones:
        - us-west-2a
        - us-west-2b
        - us-west-2c
      HealthCheck:
        Timeout: 5
        Target: TCP:25
        Interval: 30
        HealthyThreshold: 3
        UnhealthyThreshold: 5
This file is in YAML format and hence indentation is important.
The option setting ('aws:cloudformation:template:parameter', 'InstancePort') sets the instance port to 25 and also modifies the security group to make sure that port 25 is accessible by the load balancer.
This file overrides the default Load Balancer resource created by Elastic Beanstalk with two listeners, both having the instance port set to 25. Hope that helps.
Read more about customizing your environment with ebextensions here.
Can you try creating a new environment with the above file at .ebextensions/01-elb.config in the app source directory? Let me know if you run into any issues.
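Once the new environment is up, a quick way to sanity-check the listener from outside is a plain TCP probe (the hostname below is a placeholder for your environment's CNAME):
nc -vz my-env.us-west-2.elasticbeanstalk.com 25
If the listener, instance port, and security group are all correct, the connection should succeed, and an interactive telnet to port 25 should show the SMTP 220 banner.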

Related

GraphHopper server not accessible on the same network

I have successfully deployed GraphHopper on my local server. The problem is that I can access the server using localhost on the server itself, but I am unable to access it using the server IP, either locally or from other machines on the same network. For the same port it works if I use Docker, but not the other way around. Here is my configuration:
# Dropwizard server configuration
server:
  applicationConnectors:
    - type: http
      port: 8989
  requestLog:
    appenders: []
  adminConnectors:
    - type: http
      port: 8991
You need to bind the host: you may need to replace localhost with the IP address of your server. Avoid using 0.0.0.0.
server:
  application_connectors:
    - type: http
      port: 8989
      bind_host: localhost
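For example, if the server's LAN address were 192.168.1.50 (a placeholder; substitute your own), the connector would look like this, mirroring the keys from the snippet above:
server:
  application_connectors:
    - type: http
      port: 8989
      bind_host: 192.168.1.50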

How would I connect to a MySQL on the host machine from inside a docker kubernetes pod? [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Closed 11 months ago.
I have a docker container running jenkins. As part of the build process, I need to access a web server that is run locally on the host machine. Is there a way the host web server (which can be configured to run on a port) can be exposed to the jenkins container?
I'm running docker natively on a Linux machine.
UPDATE:
In addition to @larsks' answer below, to get the host IP from the host machine itself, I do the following:
ip addr show docker0 | grep -Po 'inet \K[\d.]+'
For all platforms
Docker v 20.10 and above (since December 14th 2020)
On Linux, add --add-host=host.docker.internal:host-gateway to your Docker command to enable this feature. (See below for Docker Compose configuration.)
Use your internal IP address or connect to the special DNS name host.docker.internal which will resolve to the internal IP address used by the host.
To enable this in Docker Compose on Linux, add the following lines to the container definition:
extra_hosts:
  - "host.docker.internal:host-gateway"
For macOS and Windows
Docker v 18.03 and above (since March 21st 2018)
Use your internal IP address or connect to the special DNS name host.docker.internal which will resolve to the internal IP address used by the host.
Linux support pending https://github.com/docker/for-linux/issues/264
MacOS with earlier versions of Docker
Docker for Mac v 17.12 to v 18.02
Same as above but use docker.for.mac.host.internal instead.
Docker for Mac v 17.06 to v 17.11
Same as above but use docker.for.mac.localhost instead.
Docker for Mac 17.05 and below
To access the host machine from the docker container you must attach an IP alias to your network interface. You can bind whichever IP you want; just make sure you're not using it for anything else.
sudo ifconfig lo0 alias 123.123.123.123/24
Then make sure that your server is listening on the IP mentioned above or on 0.0.0.0. If it's listening on localhost 127.0.0.1 it will not accept the connection.
Then just point your docker container to this IP and you can access the host machine!
To test you can run something like curl -X GET 123.123.123.123:3000 inside the container.
The alias will reset on every reboot so create a start-up script if necessary.
Solution and more documentation here: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
When running Docker natively on Linux, you can access host services using the IP address of the docker0 interface. From inside the container, this will be your default route.
For example, on my system:
$ ip addr show docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::f4d2:49ff:fedd:28a0/64 scope link
valid_lft forever preferred_lft forever
And inside a container:
# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.4
It's fairly easy to extract this IP address using a simple shell script:
#!/bin/sh
hostip=$(ip route show | awk '/default/ {print $3}')
echo $hostip
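For example, to reach a service on the host from inside the container (port 8000 is just an illustration; substitute whatever your host service listens on):
hostip=$(ip route show | awk '/default/ {print $3}')
curl "http://${hostip}:8000/"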
You may need to modify the iptables rules on your host to permit connections from Docker containers. Something like this will do the trick:
# iptables -A INPUT -i docker0 -j ACCEPT
This would permit access to any ports on the host from Docker containers. Note that:
- iptables rules are ordered, and this rule may or may not do the right thing depending on what other rules come before it.
- you will only be able to access host services that are either (a) listening on INADDR_ANY (aka 0.0.0.0) or (b) explicitly listening on the docker0 interface.
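If you'd rather not open every port, you can scope the rule to a single destination port, for instance (3306 is just an example):
# iptables -A INPUT -i docker0 -p tcp --dport 3306 -j ACCEPT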
If you are using Docker on MacOS or Windows 18.03+, you can connect to the magic hostname host.docker.internal.
Lastly, under Linux you can run your container in the host network namespace by setting --net=host; in this case localhost on your host is the same as localhost inside the container, so containerized services will behave like non-containerized services and will be accessible without any additional configuration.
Use --net="host" in your docker run command, then localhost in your docker container will point to your docker host.
The answer is...
Replace http://127.0.0.1 or http://localhost with http://host.docker.internal.
Why?
Source in the docs of Docker.
My Google search brought me here, and after digging in the comments I found it's a duplicate of From inside of a Docker container, how do I connect to the localhost of the machine?. I voted to close this one as a duplicate, but since people (including myself!) often scroll down to the answers rather than reading the comments carefully, here is a short answer.
For Linux systems, starting from major version 20.10 of the Docker engine, you can now also communicate with the host via host.docker.internal. This won't work automatically; you need to provide the following run flag:
--add-host=host.docker.internal:host-gateway
See
https://github.com/moby/moby/pull/40007#issuecomment-578729356
https://github.com/docker/for-linux/issues/264#issuecomment-598864064
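A quick way to verify the mapping is a throwaway container (the image and the ping are just for illustration):
docker run --rm --add-host=host.docker.internal:host-gateway alpine ping -c 1 host.docker.internal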
Solution with docker-compose:
To access a host-based service, you can use the network_mode parameter:
https://docs.docker.com/compose/compose-file/#network_mode
version: '3'
services:
  jenkins:
    network_mode: host
EDIT 2020-04-27: recommended for use only in a local development environment.
EDIT 2021-09-21: IHaveHandedInMyResignation wrote that it does not work on Mac and Windows; the option is supported only on Linux.
I created a docker container for doing exactly that: https://github.com/qoomon/docker-host
You can then simply use the container name as a DNS name to access the host system, e.g.:
curl http://dockerhost:9200
Currently the easiest way to do this on Mac and Windows is using the host host.docker.internal, which resolves to the host machine's IP address. Unfortunately it does not work on Linux yet (as of April 2018).
We found that a simpler solution to all this networking junk is to just use the domain socket for the service. If you're trying to connect to the host anyway, just mount the socket as a volume and you're on your way. For PostgreSQL, this was as simple as:
docker run -v /var/run/postgresql:/var/run/postgresql
Then we just set up our database connection to use the socket instead of network. Literally that easy.
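As an illustration (the image tag and user are placeholders), a containerized psql can reach the host's PostgreSQL through the mounted socket directory rather than over TCP:
docker run -it --rm -v /var/run/postgresql:/var/run/postgresql postgres:13 psql -h /var/run/postgresql -U postgres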
I've explored the various solutions, and I find this the least hacky one:
Define a static IP address for the bridge gateway IP.
Add the gateway IP as an extra entry in the extra_hosts directive.
The only downside is that if you have multiple networks or projects doing this, you have to ensure that their IP address ranges do not conflict.
Here is a Docker Compose example:
version: '2.3'
services:
  redis:
    image: "redis"
    extra_hosts:
      - "dockerhost:172.20.0.1"
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
You can then access ports on the host from inside the container using the hostname "dockerhost".
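For example, from any container on this network whose image ships curl (the port is just an illustration):
curl http://dockerhost:8000/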
For docker-compose using bridge networking to create a private network between containers, the accepted solution using docker0 doesn't work, because the egress interface from the containers is not docker0; instead, it's a randomly generated interface id, such as:
$ ifconfig
br-02d7f5ba5a51: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.32.1 netmask 255.255.240.0 broadcast 192.168.47.255
Unfortunately that random id is not predictable and will change each time compose has to recreate the network (e.g. on a host reboot). My solution to this is to create the private network in a known subnet and configure iptables to accept that range:
Compose file snippet:
version: "3.7"
services:
mongodb:
image: mongo:4.2.2
networks:
- mynet
# rest of service config and other services removed for clarity
networks:
mynet:
name: mynet
ipam:
driver: default
config:
- subnet: "192.168.32.0/20"
You can change the subnet if your environment requires it. I arbitrarily selected 192.168.32.0/20 by using docker network inspect to see what was being created by default.
Configure iptables on the host to permit the private subnet as a source:
$ iptables -I INPUT 1 -s 192.168.32.0/20 -j ACCEPT
This is the simplest possible iptables rule. You may wish to add other restrictions, for example by destination port. Don't forget to persist your iptables rules when you're happy they're working.
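How you persist them depends on the distribution; on Debian/Ubuntu, for example, the iptables-persistent package can do it:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save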
This approach has the advantage of being repeatable and therefore automatable. I use ansible's template module to deploy my compose file with variable substitution and then use the iptables and shell modules to configure and persist the firewall rules, respectively.
This is an old question and has many answers, but none of them fit my context well enough. In my case, the containers are very lean and do not contain any of the networking tools necessary to extract the host's IP address from within the container.
Also, using the --net="host" approach is a very rough approach that is not applicable when one wants a well-isolated network configuration with several containers.
So, my approach is to extract the host's address on the host's side and then pass it to the container with the --add-host parameter:
$ docker run --add-host=docker-host:`ip addr show docker0 | grep -Po 'inet \K[\d.]+'` image_name
or, save the host's IP address in an environment variable and use the variable later:
$ DOCKERIP=`ip addr show docker0 | grep -Po 'inet \K[\d.]+'`
$ docker run --add-host=docker-host:$DOCKERIP image_name
Then docker-host is added to the container's hosts file, and you can use it in your database connection strings or API URLs.
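For example, inside the container the name resolves like any other hosts entry, so a quick check could be (assuming the image ships curl; the port is a placeholder):
curl http://docker-host:8080/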
For me (Windows 10, Docker Engine v19.03.8) it was a mix of https://stackoverflow.com/a/43541732/7924573 and https://stackoverflow.com/a/50866007/7924573 .
change the host/ip to host.docker.internal
e.g.: LOGGER_URL = "http://host.docker.internal:8085/log"
Set the network_mode to bridge (if you want to maintain the port forwarding; if not, use host):
version: '3.7'
services:
  server:
    build: .
    ports:
      - "5000:5000"
    network_mode: bridge
or alternatively: Use --net="bridge" if you are not using docker-compose (similar to https://stackoverflow.com/a/48806927/7924573)
As pointed out in previous answers: This should only be used in a local development environment.
For more information read: https://docs.docker.com/compose/compose-file/#network_mode and https://docs.docker.com/docker-for-windows/networking/#use-cases-and-workarounds
You can access the local webserver running on your host machine in two ways.
Approach 1 with public IP
Use the host machine's public IP address to access the webserver from the Jenkins docker container.
Approach 2 with the host network
Use "--net host" to add the Jenkins docker container on the host's network stack. Containers which are deployed on host's stack have entire access to the host interface. You can access local webserver in docker container with a private IP address of the host machine.
NETWORK ID NAME DRIVER SCOPE
b3554ea51ca3 bridge bridge local
2f0d6d6fdd88 host host local
b9c2a4bc23b2 none null local
Start a container with the host network
E.g.: run docker run --net host -it ubuntu, then run ifconfig to list all available network IP addresses reachable from the docker container.
E.g.: I started an nginx server on my local host machine, and I can access the nginx website URLs from the Ubuntu docker container.
docker run --net host -it ubuntu
$ docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS          PORTS   NAMES
a604f7af5e36   ubuntu   "/bin/bash"   22 seconds ago   Up 20 seconds           ubuntu_server
Accessing the Nginx web server (running on the local host machine) from the Ubuntu docker container with the private network IP address:
root@linuxkit-025000000001:/# curl 192.168.x.x -I
HTTP/1.1 200 OK
Server: nginx/1.15.10
Date: Tue, 09 Apr 2019 05:12:12 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 Mar 2019 14:04:38 GMT
Connection: keep-alive
ETag: "5c9a3176-264"
Accept-Ranges: bytes
In the almost 7 years since the question was asked, either Docker has changed or no one has tried this way. So I will include my own answer.
I have found that all the answers use complex methods. Today I needed this, and found two very simple ways:
1) Use ipconfig or ifconfig on your host and make note of all IP addresses. At least two of them can be used by the container.
I have a fixed local network address on the WiFi LAN adapter: 192.168.1.101. This could be 10.0.1.101; the result will change depending on your router.
I use WSL on Windows, and it has its own vEthernet address: 172.19.192.1.
2) Use host.docker.internal. Most answers have this or another form of it depending on the OS. The name suggests it is now globally used by Docker.
A third option is to use the WAN address of the machine, or in other words the IP given by the service provider. However, this may not work if the IP is not static, and it requires routing and firewall settings.
PS: Although this is pretty much identical to this question here, and I posted this answer there, I found this post first, so I am posting it here too, as I may forget my own answer.
The simplest option that worked for me:
I used the IP address of my machine on the local network (assigned by the router).
You can find this using the ifconfig command, e.g.:
ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=400<CHANNEL_IO>
ether f0:18:98:08:74:d4
inet 192.168.178.63 netmask 0xffffff00 broadcast 192.168.178.255
media: autoselect
status: active
and then used the inet address. This worked for me to connect to any port on my machine.
When you have two docker images already created and you want two containers to communicate with one another, you can conveniently run each container with its own --name and use the --link flag to enable communication between them. You do not get this during docker build, though.
When you are in a scenario like mine, where it is your
docker build -t "centos7/someApp" someApp/
that breaks when you try to
curl http://172.17.0.1:localPort/fileIWouldLikeToDownload.tar.gz > dump.tar.gz
and you get stuck on "curl/wget" returning no "route to host", the reason is the security that docker sets in place: by default it bans communication from a container towards the host or other containers running on your host.
This was quite surprising to me, I must say; you would expect the ecosystem of docker machines running on a local machine to be able to access each other flawlessly without too much hurdle.
The explanation for this is described in detail in the following documentation:
http://www.dedoimedo.com/computers/docker-networking.html
Two quick workarounds are given there that help you get moving by lowering the network security.
The simplest alternative is just to turn the firewall off, or allow all. This means running the necessary command, which could be systemctl stop firewalld, iptables -F, or equivalent.
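On firewalld-based systems, a slightly less drastic option than stopping the firewall entirely is to trust the docker bridge interface (a sketch, assuming the default docker0 bridge):
sudo firewall-cmd --zone=trusted --add-interface=docker0 --permanent
sudo firewall-cmd --reload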

Can't expose mysql tcp service running inside kubernetes cluster publicly using nginx-ingress

I ran into a problem exposing a MySQL database running inside a kubernetes cluster publicly. The cluster runs with kops on AWS. I'm using a helm chart for nginx-ingress: https://github.com/helm/charts/tree/master/stable/nginx-ingress
controller:
  config:
    use-proxy-protocol: "true"
  metrics:
    enabled: true
  replicaCount: 2
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  stats:
    enabled: true
rbac:
  create: true
tcp:
  5000: default/cbioportal-prod-db-mysql:3306
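For reference, values like these would typically be applied with something like the following (the release name and values file are placeholders):
helm upgrade --install my-ingress stable/nginx-ingress -f values.yaml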
From within the cluster I can telnet to the db through nginx over port 5000 :
# telnet eating-dingo-nginx-ingress-controller 5000
J
5.7.14
ke_|c&tc"ui%]}mysql_native_passwordConnection closed by foreign host
But I can't seem to connect from outside using the hostname of the AWS load balancer:
telnet xxx.us-east-1.elb.amazonaws.com 5000
Trying x.x.x.x...
When I look in the AWS EC2 dashboard I see that the load balancer's security group allows connections from everywhere on port 5000.
UPDATE
I can connect when I use port 3306 instead of 5000:
tcp:
  3306: default/cbioportal-prod-db-mysql:3306
However now that the port is open:
$ nmap --verbose -Pn x.x.x.x
PORT     STATE SERVICE
21/tcp   open  ftp
80/tcp   open  http
443/tcp  open  https
3306/tcp open  mysql
I am getting an authorization issue:
$ mysql -h x.x.x.x -uroot -pabcdef
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 2
I can connect directly to the nginx controller without issues from within the cluster:
kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -h eating-dingo-nginx-ingress-controller -uroot -pabcdef
I'm using this mysql helm chart:
https://github.com/helm/charts/tree/master/stable/mysql

Wildfly on OpenShift 3 with path-based routing and an accessible console

I have Wildfly 10 running on Openshift Origin 3 in AWS with an elastic ip.
I set up a Route in OpenShift to map / to the wildfly service. This is working fine: if I go to http://my.ip.address I get the WildFly welcome page.
But if I map a different path, say /wf01, it doesn't work. I get a 404 Not Found error.
My guess is that the router is passing along the /wf01 to the service? If that's the case, can I stop it from doing that? Otherwise, how can I map http://my.ip.address/wf01 to my wildfly service?
I also want the wildfly console to be accessible from outside (this is a demo server for my own use). I added "-bmanagement","0.0.0.0" to the deploymentconfig, but looking at the wildfly logs it is still binding to 127.0.0.1:
02:55:41,483 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
A router today cannot remap/rewrite the incoming HTTP path to another path value before passing it along. A workaround is to mount another route+service at the root that handles the root path and redirects/forwards.
You can also use port-forward:
oc port-forward -h
Forward 1 or more local ports to a pod
Usage:
oc port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [options]
Examples:
# Listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
$ oc port-forward -p mypod 5000 6000
# Listens on port 8888 locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 8888:5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod :5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 0:5000

Zabbix JMX Tomcat monitoring

I have been trying to set up Zabbix to monitor my 2 tomcat servers on 2 different Amazon EC2 machines, but in vain.
The Z on the host is green; however, the JMX is red with these errors:
- ZBX_TCP_READ() failed: [4] Interrupted system call
- Some other error: [111] connection refused
and many more such errors, one after another, in the sense that I resolve one error only to see a new one pop up.
These are some assumptions (the IP addresses below are illustrative):
All the machines run Ubuntu 12.10 and later
Server's IP address: 66.55.12.120 (Runs Zabbix server v2.2.4 (revision 46772) (23 June 2014) )
Agent's IP address: 87.52.45.198 ( Runs Zabbix agent v2.2.2 (revision 42525) (12 February 2014) )
My local machine's IP address: 76.89.54.111
Here is what I've done so far.
On Server Side:
1) Installed Zabbix_server using sudo apt-get install zabbix-server-mysql.
2) The GUI, mysql database all have been installed and configured.
3) The following are the only 3 changes that I've made in the file /etc/zabbix/zabbix_server.conf
...
JavaGateway=localhost
JavaGatewayPort=10052
StartJavaPollers=5
...
4) The Zabbix Java gateway was installed using sudo apt-get install zabbix-java-gateway.
5) The following are the only 3 changes that I've made in the file
/etc/zabbix/zabbix_java_gateway.conf
...
LISTEN_IP="127.0.0.1"
LISTEN_PORT=10052
START_POLLERS=5
...
On Client Side:
1) Installed Zabbix Client using
sudo apt-get install zabbix-agent
2) The following are the only changes that I've made in the file
/etc/zabbix/zabbix_agentd.conf
...
Server=66.55.12.120
StartAgents=5
ServerActive=66.55.12.120:10051
Hostname=Security-test-JMX-EC2
...
3) The Hostname is the same as the one that is mentioned while creating the Host on the GUI.
I believe that there are some issues with the IPs and ports. So, here are the outbound rules for both machines, as obtained from the Amazon EC2 Security Groups:
OUTBOUND RULES for SERVER SECURITY GROUP:
| Type | Protocol | Port | Source | Reasoning |
| Custom TCP Rule | TCP | 8080 | 0.0.0.0/0 | |
| All ICMP | All | N/A | 0.0.0.0/0 | |
| Custom TCP Rule | TCP | 10052 | 27.52.52.128/32 | For access from Agent |
| Custom TCP Rule | TCP | 8081 | 76.84.120.130/32 | To access Zabbix GUI from my local machine's web browser |
| Custom TCP Rule | TCP | 10051 | 27.52.52.128/32 | The agent responds to the server on port 10051; must allow inbound communications from the agent |
| Custom TCP Rule | TCP | 11000 | 27.52.52.128/32 | The agent's JMX reporting happens on port 11000 (not on 12345) |
OUTBOUND RULES for CLIENT SECURITY GROUP:
| Type | Protocol | Port | Source |
| HTTPS | TCP | 443 | 0.0.0.0/0 |
| Custom TCP Rule | TCP | 10050 | 66.55.12.120/32 |
| Custom TCP Rule | TCP | 10052 | 66.55.12.120/32 |
| Custom TCP Rule | TCP | 11000 | 66.55.12.120/32 |
| HTTP | TCP | 80 | 76.89.54.111/32 |
| Custom TCP Rule | TCP | 8080 | 76.89.54.111/32 |
| Custom TCP Rule | TCP | 8443 | 76.89.54.111/32 |
What am I missing? Please guide me.
Any help is appreciated.
Thanks
Goutham
If you can, run VisualVM (probably over a tunneled X session) on the zabbix host and see if you can connect to the target JVM with that. If you can't connect from VisualVM, you won't be able to connect from Zabbix.
Try with the following CATALINA_OPTS, replacing <LOCAL_IP> with the IP on the target that you want JMX to listen on:
export CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<LOCAL_IP>"
This will disable all JMX security so be aware!
Once you hopefully get it to connect, note that the "Tomcat JMX" items in Zabbix are also all incorrect! E.g.:
Incorrect Zabbix default:
jmx["Catalina:type=GlobalRequestProcessor,name=http-8080",bytesReceived]
Correct entry:
jmx["Catalina:type=ThreadPool,name=\"http-bio-8080\"", bytesReceived]
Note the escaped quotes and the corrected thread name. Add the MBeans plugin to VisualVM, use it to browse the MBeans on the target VM, and check the Zabbix names against them.
It does work eventually, but it is a real pain to set up. Zabbix is, however, one of the few open source monitoring tools that supports JMX at all!
By default, JMX does not work very well with firewalls. You might find related bug reports on Zabbix tracker useful: ZBX-5326 and ZBX-6815. The first one contains a workaround for Tomcat which might work for you.
@gvatreya wrote:
Server: (Runs Zabbix server)
Agent: (Runs Zabbix agent)
It looks like you have to start the Zabbix Java gateway as well on the host where it is installed (it is a daemon/service).
I configured as follows:
Server: (Runs Zabbix server, Zabbix Java gateway)
Agent: (Runs Zabbix agent)
I think it is possible to install it on a dedicated host.
Have you tried adding -Djava.net.preferIPv4Stack=true to the VM options?
To make it work, add the following JAVA_OPTS to your Tomcat startup script:
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=2345
-Dcom.sun.management.jmxremote.rmi.port=12345
-Djava.rmi.server.hostname=<tomcat_hostname>
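Before wiring up the Zabbix items, it is worth confirming that both ports are reachable from the host running the Zabbix Java gateway (a quick probe; substitute your Tomcat host):
nc -vz <tomcat_hostname> 2345
nc -vz <tomcat_hostname> 12345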