App deployed on Kubernetes cannot be accessed from the Internet

I am new to Kubernetes and Docker. I created a simple Node.js application and deployed it on Bluemix Kubernetes, but I am unable to access the application from the internet. The IP and port reported by Kubernetes are not reachable. Can somebody help me?
I tried http://10.76.193.146:31972, but it did not go through. I am not sure this is a public IP, as it is in the 10.x (private) range.
I also tried the public IP (http://184.173.1.79:31972) shown for the Bluemix Kubernetes cluster (screenshot below), but that failed too.
These are the steps I followed:
Created a Node.js app locally. It ran as desired locally.
// Load the http module to create an HTTP server.
var http = require('http');

// Configure our HTTP server to respond with "Hello World" to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});

// Listen on port 8000 on all interfaces (no host is specified).
server.listen(8000);

// Put a friendly message on the terminal.
console.log("Server running at http://127.0.0.1:8000/");
---------- package.json
{
  "name": "helloworld-nodejs",
  "version": "0.0.1",
  "description": "First Docker",
  "main": "app.js",
  "scripts": {
    "start": "PORT=8000 node ./app.js"
  },
  "author": "",
  "license": "ISC"
}
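With the start script above, the local run from the first step looks roughly like this (output shown as comments; exact npm output varies by version):

npm start
# > helloworld-nodejs@0.0.1 start
# > PORT=8000 node ./app.js
# Server running at http://127.0.0.1:8000/

curl http://localhost:8000/
# Hello World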
Built a Docker image locally and ran the container. It worked properly.
Pushed the image to the Bluemix registry as
registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
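For reference, the build-run-push cycle described in these steps looks roughly like this (a sketch; it assumes you are already logged in to the Bluemix registry, e.g. via bx cr login):

# Build the image with the registry tag used above
docker build -t registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1 .

# Run it locally, mapping container port 8000 onto the host
docker run --rm -p 8000:8000 registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1

# Push it to the Bluemix registry
docker push registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1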
Created the Pod and Service in Kubernetes using the following YAML files.
---------- Pod YAML file
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  containers:
    - name: helloworld-nodejs
      image: registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
      ports:
        - containerPort: 8000
---------- Service YAML
apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
    - port: 8080
The application gets deployed properly and is running, which I can confirm from the logs.
[Screenshot: output of kubectl get services and kubectl get nodes]

Since your service's port is different from your pod's containerPort, you will have to specify targetPort in your service.
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
    - port: 8080
      targetPort: 8000
According to the Kubernetes documentation on targetPort, it is the:
Number or name of the port to access on the pods targeted by the service. ... If this is not specified, the value of the 'port' field is used (an identity map).
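Once targetPort is in place, the NodePort that Kubernetes assigned can be read off the Service, and the app is reached at that port on a worker node's public IP. A quick sketch, with the cluster IP made up and the public IP and NodePort reused from the question as placeholders:

kubectl get service helloworld-nodejs
# NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
# helloworld-nodejs   NodePort   10.10.10.143   <none>        8080:31972/TCP   5m

# Use a worker node's public IP (from the Bluemix dashboard or
# `kubectl get nodes -o wide`), not the 10.x address, which is only
# routable inside the cluster:
curl http://184.173.1.79:31972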

Related

AKS Ingress Nginx external IP unreachable

I'm using Helm charts to deploy an app in Azure Kubernetes Service with ingress-nginx.
My Helm charts used to work fine last month, but when I created something new with them, the public IP of my ingress-nginx stopped working.
PS C:\Zetaris\HelmDeployment> kubectl get ingress -n zetaris
NAME            CLASS    HOSTS                   ADDRESS       PORTS     AGE
xx-gui-nginx    <none>   uisaas.xxxxxxxxxxxxxx   20.167.72.8   80, 443   53m
xx-rest-nginx   <none>   restsaas.xxxxx          20.167.72.8   80, 443   55m
Here is my yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lightning-gui-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-enterprise
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "server: hide";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "X-Frame-Options: DENY";
      more_set_headers "X-Xss-Protection: 1";
      more_set_headers "referrer-policy: no-referrer";
      more_set_headers "Content-Security-Policy: frame-ancestors 'none'";
spec:
  tls:
    - hosts:
        - {{ .Values.ingress.guiurl }}
      secretName: tls-secret-gui
  rules:
    - host: {{ .Values.ingress.guiurl }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: xx-gui-svc
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: xx-gui-svc
  namespace: {{ .Values.namespace }}
spec:
  ports:
    - port: 80
      targetPort: 9001
  type: ClusterIP
  selector:
    app: {{ .Values.deployment.name }}
I can reach my app on port 9001 if I port-forward directly to the pod.
The error comes from the IP itself:
telnet 20.167.72.8 80
Connecting To 20.167.72.8...Could not open connection to the host, on port 80: Connect failed
As far as I know, there are no IP restrictions or firewalls in AKS.
Do you have any idea why my ingress IP would not be reachable from the internet?
I tried creating another app with the same Helm chart, and uninstalling and reinstalling the ingress service; the public IP is never reachable.
Never mind, I found the issue. I needed to add:
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
when installing ingress-nginx.
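For context, that flag lands on the controller's Service as an annotation at install time; a sketch of the full command, where the release name and namespace are assumptions:

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz

Without it, Azure's load balancer health probe can hit a path that nginx answers with a non-200 status, mark every backend unhealthy, and leave the public IP unreachable, which fits the telnet failure above.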

redirect http to https using a kubernetes ingress controller on Amazon EKS

I need to configure a new listener from the Ingress manifest; the AWS load balancer controller version is currently 2.4.4. When I perform the process from the AWS console, it lets me add the new listener and redirect the traffic to HTTPS without problems, but after a few minutes it disappears. I put the configuration directly in the Ingress manifest via annotations, but the listener does not come out correctly in the AWS console.
Manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx-xxxxx:xxxxx:certificate/xxxx-xxxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internal
    kubernetes.io/ingress.class: alb
  name: xxxxxx
  namespace: xxxxx
spec:
  rules:
    - host: xxxxxxx
      http:
        paths:
          - backend:
              service:
                name: service
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
[Screenshot: listener configured from the Ingress manifest]
[Screenshot: listener configured manually from the AWS Console]
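For what it's worth, the AWS Load Balancer Controller (v2.2 and later, so including 2.4.4) also documents a dedicated annotation that makes the controller manage the HTTP-to-HTTPS redirect itself, instead of wiring the actions.ssl-redirect action into a rule by hand; a sketch of just the relevant annotations:

alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'

Because the controller continuously reconciles the ALB against the Ingress, a listener added manually in the console is expected to disappear again, which matches the behaviour described above.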

define name for ALB when creating kubernetes ingress in AKS

I'm creating a Kubernetes nginx ingress controller using Helm (https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx). Since I'm provisioning a private AKS cluster, I instruct via annotations that the Azure Load Balancer that gets created should have a private rather than a public IP address (service.beta.kubernetes.io/azure-load-balancer-internal and service.beta.kubernetes.io/azure-load-balancer-internal-subnet). Here's the values.yaml file that I provide when running helm install:
controller:
  replicaCount:
  image:
    registry: foo.azurecr.io
    digest: ""
    pullPolicy: Always
  ingressClassResource:
    # -- Name of the ingressClass
    name: "internal-nginx"
    # -- Is this ingressClass enabled or not
    enabled: true
    # -- Is this the default ingressClass for the cluster
    default: false
    # -- Controller-value of the controller that is processing this ingressClass
    controllerValue: "k8s.io/internal-ingress-nginx"
  admissionWebhooks:
    patch:
      image:
        registry: foo.azurecr.io
        digest: ""
  service:
    annotations:
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
      "service.beta.kubernetes.io/azure-load-balancer-internal-subnet": subnet01
    loadBalancerIP: "x.x.x.x"
  watchIngressWithoutClass: true
  ingressClassResource:
    default: true
defaultBackend:
  enabled: true
  image:
    registry: foo.azurecr.io
    digest: ""
Each ingress controller creates an Azure Load Balancer named kubernetes-internal:
[Screenshot: load balancer named kubernetes-internal]
I've searched the LoadBalancer annotations but can't find a way to control the actual name of the ALB. Is it always kubernetes-internal?
Does anyone have any ideas?

Unable to connect: Communications link failure

I am trying to follow the tutorial Deploying Debezium using the new KafkaConnector resource.
Based on the tutorial, I am also using minikube, but with the docker driver. I basically followed it exactly, step by step.
However, for the "Create the connector" step, after creating the connector with
cat <<EOF | kubectl -n kafka apply -f -
apiVersion: "kafka.strimzi.io/v1alpha1"
kind: "KafkaConnector"
metadata:
  name: "inventory-connector"
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "my-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
EOF
and checking with
kubectl -n kafka get kctr inventory-connector -o yaml
I got this error:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I tried to change
database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
to
database.user: "debezium"
database.password: "dbz"
directly and re-applied, based on the user and password in the "Secure the database credentials" step.
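As an aside, the ${file:...} placeholders from the tutorial only resolve inside Kafka Connect if the KafkaConnect resource enables the file config provider and mounts the credentials secret, roughly like this sketch (the secret name here is an assumption):

spec:
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
      - name: connector-config
        secret:
          secretName: mysql-credentials

Note also that an unquoted heredoc (cat <<EOF) lets the shell attempt to expand ${file:...} itself before kubectl ever sees the manifest, which would explain why the applied spec above shows database.user: "" and database.password: ""; quoting the delimiter (cat <<'EOF') avoids that.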
Also, based on this description in the tutorial:
I'm using database.hostname: 192.168.99.1 as IP address for connecting to MySQL because I'm using minikube with the virtualbox VM driver. If you're using a different VM driver with minikube you might need a different IP address.
I am actually a little confused by that description. MySQL in the demo is deployed in Docker, while the rest (like Kafka) is deployed in minikube. Why does the description about database.hostname talk about minikube instead of Docker?
Anyway, when I run minikube ip, I get 192.168.49.2. However, after I changed database.hostname to 192.168.49.2 and ran kubectl get kctr inventory-connector -o yaml -n kafka, I got
the same output as above: status NotReady with the identical ConnectRestException ("A value is required").
I can access MySQL via localhost, as it is hosted in Docker. However, I still got the same error when I changed database.hostname to localhost.
Any idea? Thanks!
The issue is that the connector running in minikube failed to communicate with the MySQL instance running in Docker on the host.
Regarding how to reach the host's localhost from inside a Kubernetes cluster, I found How to access host's localhost from inside kubernetes cluster.
However, I ended up deploying MySQL in Kubernetes directly with
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
(Copied from https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
with
database.hostname: "mysql.default" # service `mysql` in namespace `default`
database.port: "3306"
database.user: "root"
database.password: "password"
Now when I run
kubectl -n kafka get kctr inventory-connector -o yaml
I get a new error saying MySQL does not have row-level binlog enabled; however, this means the connector can reach MySQL now.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"mysql.default","database.password":"password","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"root","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T19:36:52Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "2918"
  uid: 48bb46e1-42bb-4574-a3dc-221ae7d6a803
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: mysql.default
    database.password: password
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: root
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T19:36:53.605Z"
    status: "True"
    type: Ready
  connectorStatus:
    connector:
      state: UNASSIGNED
      worker_id: 172.17.0.8:8083
    name: inventory-connector
    tasks:
    - id: 0
      state: FAILED
      trace: "org.apache.kafka.connect.errors.ConnectException: The MySQL server is not configured to use a row-level binlog, which is required for this connector to work properly. Change the MySQL configuration to use a row-level binlog and restart the connector.\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:207)\n\tat io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"
      worker_id: 172.17.0.8:8083
    type: source
  observedGeneration: 1
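For completeness, the new trace points at the MySQL server settings rather than connectivity. Debezium's MySQL connector needs a row-level binlog, which the stock kubernetes.io MySQL manifest does not enable; a sketch of the usual my.cnf prerequisites from the Debezium documentation:

[mysqld]
server-id        = 223344     # any unique, non-zero server ID
log_bin          = mysql-bin  # turn the binlog on
binlog_format    = ROW        # row-level binlog, as the error demands
binlog_row_image = FULL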

Kubernetes - NodeJs MySQL pod does not connect with MySQL pod

I have a MySQL pod up and running. I opened a terminal in this pod and created a database and a user.
create database demodb;
create user demo identified by 'Passw0rd';
grant all on demodb.* to 'demo';
I have this Deployment to launch a NodeJs client for the MySQL pod. This is on my local minikube installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demos
spec:
  selector:
    matchLabels:
      app: demos
  template:
    metadata:
      labels:
        app: demos
    spec:
      containers:
        - name: demo-db
          image: 172.30.1.1:5000/demo/db-demos:0.1.0
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 4000
              name: probe-port
---
apiVersion: v1
kind: Service
metadata:
  name: demos
spec:
  selector:
    app: demos
  ports:
    - name: probe-port
      port: 4001
      targetPort: probe-port
The Dockerfile for the image passes the environment variables for the Node.js client to use.
FROM node:alpine
ADD . .
RUN npm i
WORKDIR /app
ENV PROBE_PORT 4001
ENV MYSQL_HOST "mysql.demo.svc"
ENV MYSQL_PORT "3306"
ENV MYSQL_USER "demo"
ENV MYSQL_PASSWORD "Passw0rd"
ENV MYSQL_DATABASE "demodb"
CMD ["node", "index.js"]
And the Node.js client connects as follows.
const mysql = require('mysql')

const connection = mysql.createConnection({
  host: process.env.MYSQL_HOST,
  port: process.env.MYSQL_PORT,
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});

connection.connect((err) => {
  if (err) {
    console.log('Database connection failed. ' + err.message)
  } else {
    console.log('Database connected.')
  }
});
The database connection keeps failing with the message Database connection failed. connect ENOENT tcp://172.30.88.64:3306. The IP address in this message is correct, i.e., it matches the service mysql.demo.svc that fronts the running MySQL pod.
In the MySQL configuration files I don't see bind-address, which should mean MySQL accepts connections from everywhere. I created the user without a host qualifier, i.e., the user is 'demo'@'%'. The connection is obviously not going through a socket, as I am passing host and port values for the connection.
What am I missing?
I got it working as follows.
const mysql = require('mysql')

const connection = mysql.createConnection({
  host: process.env.MYSQL_HOST,
  // port: process.env.MYSQL_PORT,
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});

connection.connect((err) => {
  if (err) {
    console.log('Database connection failed. ' + err.message)
  } else {
    console.log('Database connected.')
  }
});
That's right: I removed the port number from the options. This example from Red Hat is the closest I have seen.
Also, I created the user with mysql_native_password, as that is the only authentication plugin the mysql Node.js client supports. See here.
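For reference, creating such a user with that plugin explicitly looks like this (database name, user, and password reused from the question):

CREATE USER 'demo'@'%' IDENTIFIED WITH mysql_native_password BY 'Passw0rd';
GRANT ALL PRIVILEGES ON demodb.* TO 'demo'@'%';
FLUSH PRIVILEGES;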