I have set up Amazon ECS using Fargate, and the task definition contains two containers, one listening on port 9090 and the other on port 8080. After creating a service and running the task, the logs show that both services are up and running. Port mappings are also defined in the container configuration of the task definition.
The security group attached to the task's network interface also allows both ports (I also tested by opening all ports).
But I can only access the service running on port 8080, not the one on 9090!
Is there anything I am missing in the configuration, or any thoughts on what to check?
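A sketch of how the port mappings are defined in the container definitions (container names here are placeholders):
"containerDefinitions": [
  { "name": "app-8080", "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }] },
  { "name": "app-9090", "portMappings": [{ "containerPort": 9090, "protocol": "tcp" }] }
]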
I have deployed a Spring Boot app on an OCI compute instance, and it comes up nicely. The instance is created with a public IP and has its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet. For that reason, I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24) with its own routing table and security list. I configured the LB's security list to send all protocol packets to the compute instance's CIDR (10.0.0.0/24) and configured the compute instance's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend, but it does not.
I am able to hit the LB from the internet:
The LB's routing table routes all IPs through the internet gateway. There is no route defined for the compute instance's CIDR, as it is within the VCN.
The LB has its own security list, which allows outgoing packets to the compute instance and incoming packets from the internet, as below:
The compute instance's security list accepts packets from the LB:
Let me know if I am missing something here.
My internet gateway:
My backend set connection configuration on the LB:
The LB fails to make a connection with the backend, and there seems to be no logging info available:
The app works fine if I access it from the compute node:
The LB has a health check that tests the connection to your service. If it fails, the LB keeps your backend out of rotation and reports the critical health status you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out: you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
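A quick way to verify the rule took effect after the reload:
sudo firewall-cmd --list-ports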
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not too familiar with SELinux, but you may need to do something similar for your application.
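For an app listening on a non-standard port, the SELinux step may instead be labeling the port itself; a hedged example (the port and type are assumptions for a web app on 8080):
sudo semanage port -a -t http_port_t -p tcp 8080   # use -m instead of -a if the port is already defined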
Additionally, setting up a second host in the same subnet to log in to and test connecting from will help with troubleshooting, since it verifies whether your app is accessible at all from outside the host it runs on. Once it is, the LB should come up fine.
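For example, from that second host (the private IP and port here are placeholders):
curl -v http://10.0.0.5:8080/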
TL;DR In my case it helped to switch the security list rules from stateful to stateless on the two relevant subnets (the one hosting the load balancer and the one with the backends).
In our deployment I had a load balancer with a public IP on one subnet, while its backend was on another subnet. Both subnets had one ingress and one egress rule allowing everything (i.e. 0.0.0.0/0 and all ports). The backends were still not reachable from the load balancer, and the health checks were failing.
Even though, per the documentation, switching between stateful and stateless should not have had an effect in my case, it solved my issue.
I have an Elastic Beanstalk application that uses Docker to run a small Laravel PHP API.
The app cannot connect to MySQL when running in Elastic Beanstalk.
The MySQL DB is a publicly available AWS RDS instance.
I've run my Docker container locally and the app can connect just fine.
When I deploy to Elastic Beanstalk the app cannot connect...
Can anyone point me in a direction to help debug this?
SOLUTION
For anyone else who stumbles on this:
The solution was to create a new security group for each of the EC2 instances and the RDS database. The two security groups opened up access on port 3306 between the instances and the database.
I also ensured the EC2 instances were available across every subnet and in the same VPC as my database.
Taken from the answers below and a bit of help from a SysOps friend of mine.
You may want to check that the EC2 security group rules attached to Elastic Beanstalk allow TCP on port 3306 (the MYSQL rule type).
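As a sketch with the AWS CLI (both security group IDs are placeholders), such a rule would look roughly like:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123-rds-placeholder \
    --protocol tcp \
    --port 3306 \
    --source-group sg-4567-instances-placeholder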
There is a high chance that your EC2 instance does not have a public IP assigned to it. If you're trying to connect to the public IP of RDS without a public IP on the EC2 instance, you won't be able to.
The EC2 instance will either need a public IP or external internet connectivity through a NAT.
I am new to Google Cloud. An instance has been created with an Ubuntu 16.04 image on Compute Engine, and three applications have been installed on it: one running on nginx on port 80 [say A], a second on port 8001 [say B], and another on port 8080 [say C].
I am able to access application A directly by clicking on the external IP [or by giving port 80 along with the IP]. This application internally accesses application B on port 8001. The configuration of both applications has been updated accordingly, and there is an inbound firewall rule for 8001, but application B cannot be accessed with the IP and port.
Same case with application C: that application is running on port 8080 in Tomcat. An inbound firewall rule has been created for this port too, but the application is not accessible with the IP and port. The server.xml for this application is updated to 0.0.0.0 instead of localhost [as mentioned in "not able to access port (11444 & 5072) externally (using Ubuntu on Google Compute Engine)"].
I am not sure about the issue. Can anyone help me out?
I searched around but did not find anything for multiple applications, and most of the examples given are for port 80 only.
This application internally accesses application B on port 8001
Same case with application C.
It sounds like you don't actually want 8001 or 8080 to be accessible; in this case, leave the firewall rules alone (don't permit traffic to them from the outside) and configure those applications to listen only on localhost (which is not firewalled anyway).
In case you do want these to be accessible, post a screenshot of your firewall configuration and we'll take a look.
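For the accessible case, an ingress rule would look roughly like this (the rule name is a placeholder), and you can confirm on the instance which address each app is bound to:
gcloud compute firewall-rules create allow-app-8080 --allow=tcp:8080 --source-ranges=0.0.0.0/0
sudo netstat -tlnp    # the app should listen on 0.0.0.0:8080, not 127.0.0.1:8080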
I'm trying to connect to a Postgres database from a Spring Boot application deployed in Minishift.
The Postgres server is running on the same host that Minishift is running on.
I've tried setting the Postgres server to listen on a specific IP address and using this same address in the Spring Boot JDBC connection URL, but I still get org.postgresql.util.PSQLException: Connection to 172.99.0.1:5432 refused
I've also tried using 10.0.2.2.
I've also tried setting the following in /etc/postgresql/9.5/main/postgresql.conf:
listen_addresses = '*'
How can I connect to a database external to Minishift that is running on the same host?
Besides the answer referenced in my comment, which suggests making your database listen on the IP address of the Docker bridge, you could make your pod use the network stack of your host. This way you could reach Postgres on the loopback interface. This works only if you can guarantee that the pod will always run on the same host as the database.
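A sketch of that first approach (172.17.0.1 is the default Docker bridge address and the subnet is an assumption; the pg_hba.conf entry is needed in addition to listen_addresses):
# /etc/postgresql/9.5/main/postgresql.conf
listen_addresses = 'localhost,172.17.0.1'
# /etc/postgresql/9.5/main/pg_hba.conf
host    all    all    172.17.0.0/16    md5
Then restart Postgres, e.g. sudo systemctl restart postgresql.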
The Kubernetes documentation discourages using hostNetwork. If you understand the consequences, you can enable it as in this example.
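A minimal pod spec with hostNetwork enabled might look like this (the name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  hostNetwork: true        # the pod shares the host's network namespace
  containers:
  - name: myapp
    image: myapp:latest
With this in place, the app could reach Postgres at localhost:5432.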
If a pod inside Kubernetes can't see the host's IP address, then I guess it's an underlying firewall or networking issue. Try opening a shell inside the pod:
kubectl exec -it mypodname -- bash
Then try ping, telnet, curl, wget, or whatever to see if you can reach the IP address.
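For example, using the address from the question (these tools may need to be present in the container image):
ping -c 3 172.99.0.1
nc -vz 172.99.0.1 5432    # or: telnet 172.99.0.1 5432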
It sounds like something's wrong with the networking setup of your minishift. It might be worth raising an issue with minishift: https://github.com/minishift/minishift/issues/new
If you can find an IP address on the host that is accessible from a pod, you can create a Kubernetes Service and then an Endpoints resource for that service with the IP address of the database on your host. Then you can use the usual DNS discovery of Kubernetes services (i.e. using the service name as the DNS name), which will resolve to that IP address. Over time you could add multiple IP addresses for failover, etc.
See: https://kubernetes.io/docs/user-guide/services/#without-selectors
Then you can use Services to talk to all your actual network endpoints, with your application code completely decoupled from whether the endpoints are implemented inside or outside Kubernetes, and with load balancing baked in!
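A minimal sketch of such a selector-less Service and its Endpoints (the name and IP are assumptions; the Endpoints name must match the Service name):
kind: Service
apiVersion: v1
metadata:
  name: external-postgres
spec:
  ports:
  - port: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-postgres    # must match the Service name
subsets:
- addresses:
  - ip: 172.99.0.1           # the database's IP on the host
  ports:
  - port: 5432
The pod could then use jdbc:postgresql://external-postgres:5432/mydb (database name assumed) as the connection URL.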
I am trying to check if the following is possible.
I have a single apache config file that listens on port 80 for external traffic and port 8080 for internal traffic.
Can I configure in such a way that there are (say) 10 httpd processes that are handling my external traffic on port 80 and another set of (say) 10 httpd processes that are handling my internal traffic on port 8080?
(Or do I need to run two instances of apache to achieve this?)
Thanks,
Vivek
You can listen on both ports with one instance, but you don't get to determine which worker processes are assigned to each port. You would have to run two instances of Apache, each with its own configuration, to get that level of control.
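For the single-instance case, the relevant configuration is just two Listen directives plus per-port virtual hosts (the paths here are placeholders); note that any worker process may serve connections on either port:
Listen 80
Listen 8080
<VirtualHost *:80>
    DocumentRoot "/var/www/external"
</VirtualHost>
<VirtualHost *:8080>
    DocumentRoot "/var/www/internal"
</VirtualHost>
For separate worker pools, you would instead start two httpd instances with different configuration files, e.g. httpd -f /etc/httpd/conf/external.conf and httpd -f /etc/httpd/conf/internal.conf, one listening on each port.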