I have a k8s cluster set up where I'm using kustomize to build my deployment.yaml and then actually deploying with kubectl. I have a FE, a BE, and a local DB with a persistent volume. It works just fine; however, when I add Skaffold for local dev, the container logs indicate the DB connection fails. I've been at it for a few days and I can't figure out what's wrong.
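For reference, the wiring described above is roughly this minimal skaffold.yaml (the API version, paths, and image names here are placeholders rather than my exact files):

    apiVersion: skaffold/v2beta29
    kind: Config
    build:
      artifacts:
        - image: frontend            # placeholder image name for the FE
          context: frontend          # placeholder build context
        - image: backend             # placeholder image name for the BE
          context: backend
    deploy:
      kustomize:
        paths:
          - k8s/overlays/dev         # placeholder path to the kustomization directory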
I was trying to deploy a new application version for my Python Elastic Beanstalk environment. While deploying the version, the associated RDS instance shut down and restarted, which took almost 2 hours. The DB instance class and the allocated storage also changed automatically, and the application had downtime for almost 2 hours. How can I troubleshoot the reason for the RDS restart/update at application deployment time?
Please advise.
I am getting a connection timeout when running the command on the bootstrap node.
Are there any configuration suggestions on the networking side, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on the machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers (as sketched below), focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
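As a rough sketch of that workflow (the node IP and container ID are placeholders; OKD nodes normally use the core user):

    # SSH to the bootstrap or a master node with the key from the install config
    ssh -i ~/.ssh/id_rsa core@<node-ip>

    # List all containers, including ones that have already exited
    sudo crictl ps -a

    # Narrow the list down to one of the components above, e.g. the API server
    sudo crictl ps -a --name kube-apiserver

    # Review the logs of a suspicious container
    sudo crictl logs <container-id>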
I am facing an issue with my app: it can connect to MySQL in Kubernetes when I run it locally, but once I deploy it in a pod it only manages to connect and execute the first query; it never gets a response back and fails on a timeout. I have tried running them in the same pod, in various pods, etc.
I am trying to deploy a MySQL Docker image to Kubernetes. I have mostly managed all the tasks, and the Docker image is up and running in Docker, but one final thing is missing for the Kubernetes deployment.
MySQL has a configuration option, 'MYSQL_ROOT_HOST', stating which host a user can log on from. Configuring that for Docker is no problem, since Docker networking uses '172.17.0.1' for bridging.
The problem with Kubernetes is that this must be the IP of the Pod trying to connect to the MySQL Pod, and every time a Pod starts, this IP changes.
I tried to use the label of the Pod connecting to the MySQL Pod, but MySQL still looks at the IP of the Pod instead of a DNS name.
Do you have an idea how I can overcome this problem? I can't even figure out how this is supposed to work if I set autoscaling for the Pod that is trying to connect to MySQL, since the replicas will all have different IPs.
Thx for answers....
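For context, the Docker-side setup that works for me is roughly the following (assuming the mysql/mysql-server image, which reads MYSQL_ROOT_HOST; the password is a placeholder):

    # Bridge networking, so connections arrive at MySQL from 172.17.0.1
    docker run -d --name mysql \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e MYSQL_ROOT_HOST=172.17.0.1 \
      -p 3306:3306 \
      mysql/mysql-server:5.7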
As @RyanDowson and @siloko mentioned, you should use a Service, Ingress, or Helm charts for these purposes.
You can find additional information on the Service, Ingress, and Helm Charts documentation pages.
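To illustrate the Service approach, a minimal sketch (the name, labels, and ports are assumptions and must match your MySQL Deployment):

    # mysql-service.yaml -- gives the MySQL Pod a stable DNS name inside the cluster
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      selector:
        app: mysql          # must match the labels on your MySQL Pod/Deployment
      ports:
        - port: 3306        # clients connect to mysql:3306
          targetPort: 3306  # containerPort of the MySQL container

Client Pods then connect to the DNS name mysql (or mysql.<namespace>.svc.cluster.local) instead of a Pod IP. Since the connecting Pods' own IPs will still change, MySQL itself usually has to accept the whole Pod range, e.g. MYSQL_ROOT_HOST='%' or a dedicated application user created for '%'.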
I have my Sails application on an AWS instance with all dependencies installed and no apparent issues. However, each time I try to launch the app I get the following error:
error: AdapterError: Connection is already registered
I have not managed to successfully lift Sails on the instance yet, and sails-mysql was freshly installed, so no connections should be registered.
I have taken the following steps to deploy my app:
Set up a MySql RDS instance (EU-West)
Created and set up an Ubuntu AMD-64 t2.micro EC2 instance (EU-West)
Installed all prerequisites (Git, NVM, NodeJs, Sails, etc.)
Cloned my Sails project
Installed dependencies for Sails
Correctly configured my connection settings for Sails to use my RDS instance.
I know that my connection settings are correct as I have been able to run Sails on my local machine with a connection to my RDS instance and it would consistently lift without any issues.
I am also able to connect to my RDS instance using SequelPro with no problems.
I have had issues with dependencies in the past but have managed to fix those issues and have not had any of them on my local machine or with my EC2 instance.
After searching for a while I have come across a few users who have had similar issues and managed to fix them with Waterline's teardown methods; however, I am unsure how to achieve this.
I have done my best to provide as much information as possible and any help would be massively appreciated.
Sails Version: 0.12.11
Thank you in advance.
I managed to fix the issue by carrying out the following:
Switched my environment to production in config/bootstrap.js, e.g. process.env.NODE_ENV = 'production'
In connections.js, added connectTimeout: 20000 to make sure the request does not time out before the connection is made (see the sketch below).
Ensured that the security group inbound rules for the RDS instance allow connections from the security group associated with my EC2 instance:
Type: MySQL/Aurora
Protocol: TCP
Port Range: 3306
Source: < Your security group ID >
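The same rule can also be added with the AWS CLI (both security group IDs below are placeholders):

    aws ec2 authorize-security-group-ingress \
      --group-id <rds-security-group-id> \
      --protocol tcp \
      --port 3306 \
      --source-group <ec2-security-group-id>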
Following the above points also meant I overcame the issue with handshake timeouts when communicating with the RDS.
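For reference, the first two points combined look roughly like this (the connection name, host, and credentials are placeholders; the sails-mysql adapter is assumed, as in the question):

    // config/connections.js (Sails 0.12)
    module.exports.connections = {
      mysqlRds: {
        adapter: 'sails-mysql',
        host: '<your-rds-endpoint>.eu-west-1.rds.amazonaws.com',
        port: 3306,
        user: 'dbuser',
        password: 'dbpassword',
        database: 'mydb',
        // give the handshake more time before the request is abandoned
        connectTimeout: 20000
      }
    };

    // config/bootstrap.js -- per the first point, force the production environment
    process.env.NODE_ENV = 'production';

    module.exports.bootstrap = function(cb) {
      // ...existing bootstrap logic...
      cb();
    };

Whatever connection name you use also has to match the connection that config/models.js points at.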