Configure port range mapping in containers.yaml for Google Container Engine - google-compute-engine

I followed all the Google documentation to deploy a Docker image onto Google Compute Engine (this one), but I can't find more information about the google-container-manifest options.
For example, I can't add a port range.
I tried this without success:
ports:
- containerPort: 80
  hostPort: 80
- containerPort: 443
  hostPort: 443
- containerPort: "10000-20000"
  hostPort: "10000-20000"
Where can we find all the parameters we can use in the google-container-manifest?
And is it possible to add a port range mapping?
Thanks
[Edit with @Alex's solution]
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  hostNetwork: true
  containers:
  - name: test1
    image: eu.gcr.io/app-1234/image
    imagePullPolicy: Always
Now all ports on the Docker container are exposed on the Google VM.
Do not forget to configure a network to expose all the ports you need, like this:
gcloud compute networks create test-network
gcloud compute firewall-rules create test-allow-http --allow tcp:80 --network test-network
gcloud compute firewall-rules create test-allow-ssh --allow tcp:22 --network test-network
gcloud compute firewall-rules create test-allow-https --allow tcp:443 --network test-network
gcloud compute firewall-rules create test-allow-video --allow udp:10000-20000,icmp --network test-network
And run the instance like this:
gcloud compute instances create test-example \
--image container-vm \
--metadata-from-file google-container-manifest=containers.yaml \
--zone europe-west1-b \
--machine-type n1-standard-2 \
--network test-network

As mentioned a little lower down on that docs page:
Documentation for the container manifest can be found in the
Kubernetes API Pod Specification. The container VM is running a
simple Kubelet and not the entire Kubernetes control plane, so the
v1.PodSpec honored by the container VM is limited to containers,
volumes, and restartPolicy.
Regarding adding such a large range of ports, though, would you mind explaining your use case? Currently the API does not support arbitrary port ranges, only lists of explicit ports. If what you really want is for all the ports on the machine to be usable by your container, you might want to consider the hostNetwork option in the v1.PodSpec, which will run your container directly on the host's network with no need for port mapping.
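For reference, a minimal sketch of a manifest ports section with explicit entries only, which is what the API accepts (container name and image taken from the question; this is not a complete manifest):

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test1
    image: eu.gcr.io/app-1234/image
    ports:
    # each forwarded port must be listed individually; string ranges
    # such as "10000-20000" are not accepted
    - containerPort: 80
      hostPort: 80
    - containerPort: 443
      hostPort: 443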

Related

How to specify/modify the target-port of a newly created app through the OpenShift CLI?

I am trying to expose a new app created via the OpenShift command line (oc). This is a Node.js server listening on port 3000. However, OpenShift defaults the target-port to 8080, as shown in the following service.yaml:
kind: Service
apiVersion: v1
.............
.............
spec:
  ports:
  - name: 8080-tcp
    protocol: TCP
    port: 8080
    targetPort: 8080
.........
I want to be able to update targetPort via the command line. I already followed these steps, but no luck so far:
step1: oc new-project my-new-project
step2: oc new-app https://github.org.com/my-new-app.git
step3: oc expose service my-new-app --target-port=3000
Error: **cannot use --target-port with --generator=route/v1**
Note: I am able to access the app (i.e. port 3000) only when I manually update targetPort to 3000 in Services.yaml.
You didn't specify the port. Try this:
oc expose service my-new-app --target-port=3000 --port=8080
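For comparison, the manual fix mentioned in the note above corresponds to a Service whose port mapping looks roughly like this (a sketch; the Service name is assumed and only the relevant fields are shown):

kind: Service
apiVersion: v1
metadata:
  name: my-new-app
spec:
  ports:
  - name: 8080-tcp
    protocol: TCP
    port: 8080        # port the Service exposes
    targetPort: 3000  # port the Node.js container actually listens on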

Is it possible to have hostname-based routing for MySQL in Kubernetes?

I have a scenario where multiple MySQL servers are running in different namespaces in a single Kubernetes cluster. All the MySQL servers belong to different departments.
My requirement is that I should be able to connect to the different MySQL servers using a hostname, i.e.:
mysqlServerA running in ServerA namespace should be reachable from outside the cluster using:
mysql -h mysqlServerA.mydomain.com -A
mysqlServerB running in ServerB namespace should be reachable from outside the cluster using:
mysql -h mysqlServerB.mydomain.com -A
and so on.
I have tried TCP-based routing using the ConfigMaps of the NGINX Ingress controller, where I route traffic from clients to the different MySQL servers by assigning different port numbers:
for mysqlServerA:
mysql -h mysqlServer.mydomain.com -A -P 3301
for mysqlServerB:
mysql -h mysqlServer.mydomain.com -A -P 3302
This works perfectly, but I want to know whether hostname-based routing is possible or not, because I don't want a separate load balancer for each MySQL service.
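For reference, the port-based setup described above is typically configured through the ingress-nginx tcp-services ConfigMap, roughly like this (a sketch; the namespace and service names are assumptions based on the question, lowercased because Kubernetes names must be DNS labels):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port: "namespace/service:port"
  "3301": "servera/mysqlservera:3306"
  "3302": "serverb/mysqlserverb:3306"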
Thanks
General info
I am routing traffic by different port numbers
You are right; the reason for that is that the connection to MySQL is done via TCP. That is why it is definitely not possible to have two simultaneous connections to two servers on the same IP:port.
Unlike HTTP, TCP has no headers that would allow distinguishing the host the traffic should be routed to. However, there are still at least two ways to achieve the functionality you'd like :) I'll describe them below.
I want to know if hostname based routing is possible or not
I don't want separate load balancer for each mysql service.
K8s allows a few methods for a service to be reachable outside the cluster (namely hostNetwork, hostPort, NodePort, LoadBalancer, and Ingress).
LoadBalancer is the simplest way to serve traffic on LoadBalancerIP:port; however, due to the TCP nature of the connection, you'll have to use one LoadBalancer per MySQL instance.
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: LoadBalancer
  ports:
  - port: 3306
  selector:
    name: my-mysql
NodePort looks good, but it allows you to connect only when you know the port (which can be tedious for clients).
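For illustration, a NodePort variant of the Service above would look roughly like this (the nodePort value is an assumption):

kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30306   # must fall in the cluster's node port range (30000-32767 by default)
  selector:
    name: my-mysql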
Proposed solutions
External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
In the Service spec, externalIPs can be specified along with any of the ServiceTypes. In the example below, mysql-1234-inst-1 can be accessed by clients on 1.2.3.4:3306 (externalIP:port) and mysql-4321-inst-1 can be accessed by clients on 4.3.2.1:3306.
$ cat stereo-mysql-3306.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql-1234-inst-1
spec:
  selector:
    app: mysql-prod
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
  externalIPs:
  - 1.2.3.4
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-4321-inst-1
spec:
  selector:
    app: mysql-repl
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
  externalIPs:
  - 4.3.2.1
Note: you need to have 1.2.3.4 and 4.3.2.1 assigned to your nodes (and resolve mysqlServerA / mysqlServerB at mydomain.com to these IPs as well). I've tested that solution on my GKE cluster and it works :).
With that config, all requests for mysqlServerA.mydomain.com:3306 that resolve to 1.2.3.4 are routed to the endpoints of the mysql-1234-inst-1 service with the app: mysql-prod selector, and requests for mysqlServerB.mydomain.com:3306 that resolve to 4.3.2.1 will be served by the service with the app: mysql-repl selector.
Of course it is possible to split that config across two namespaces (one namespace, one MySQL, one service per namespace).
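A minimal sketch of that split for the first instance (the namespace name is an assumption; it must be a lowercase DNS label):

apiVersion: v1
kind: Service
metadata:
  name: mysql-1234-inst-1
  namespace: server-a        # each department's MySQL gets its own namespace
spec:
  selector:
    app: mysql-prod
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
  externalIPs:
  - 1.2.3.4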
ClusterIP+OpenVPN
Taking into consideration that your MySQL pods have ClusterIPs, it is possible to spawn an additional VPN pod in the cluster and connect to the MySQL servers through it.
As a result, you can establish a VPN connection and have access to all the cluster resources. That is a rather limited solution, which requires establishing a VPN connection for anyone who needs access to MySQL.
A good practice is to add a bastion server on top of that solution. That server will be responsible for providing access to cluster services via the VPN.
Hope that helps.

How to specify exec on a volumeMount

We have OpenShift 3.9 running in our cluster.
I am currently trying out the Pipeline capabilities of OpenShift. It turns out the default recipes for Jenkins are not working.
The problem is that the volumeMount specified in the dc leads to a noexec mount in the container. When the Jenkins Git plugin then tries to execute its SSH wrapper in /var/lib/jenkins, it of course fails.
The config they use in the dc is:
volumes:
- emptyDir: {}
  name: jenkins-data
and then mount it via:
volumeMounts:
- mountPath: /var/lib/jenkins
  name: jenkins-data
I could not find any option to configure which mount options are to be used in the container.
Is there any way to work around that?

How to put a container network into a Kubernetes YAML file

For example, I created a network in Docker:
docker network create hello-rails
Then, I have MySQL, which is connected to this network:
docker run -p 3306 -d --network=hello-rails --network-alias=db -e MYSQL_ROOT_PASSWORD=password --name hello-rails-db mysql
And I also have a Rails server, which relies on this network:
docker run -it -p 3000:3000 --network=hello-rails -e MYSQL_USER=root -e MYSQL_PASSWORD=password -e MYSQL_HOST=db --name hello-rails benjamincaldwell/hello-docker-rails:latest
I want to write a Kubernetes deployment for these two containers in a YAML file, but I don't know how to put the network inside the containers in the file. Do you have any recommendations?
In Kubernetes you would solve this by creating two services.
The MySQL service will look something like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
In your Rails server, you can access the MySQL service either by using the mysql DNS name or by using the MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT environment variables. There is no need to link the containers or to specify a network, as would be done in Docker.
Your Rails service will look like this:
kind: Service
apiVersion: v1
metadata:
  name: rails
spec:
  type: LoadBalancer
  selector:
    app: rails
  ports:
  - port: 3000
Notice the type: LoadBalancer, which specifies that this service will be published to the outside world. Depending on where you run Kubernetes, a public IP address will be automatically assigned to this service.
For more information, have a look at the Services documentation.
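Since the question also asks how to write the deployment for the two containers, here is a minimal sketch of a Deployment that the mysql Service's selector above could match (not part of the original answer; the image and environment values are taken from the docker run commands in the question, and a Secret should be used for the password in practice):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql          # must match the Service's selector
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password   # value from the docker run example
        ports:
        - containerPort: 3306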

Why does MySQL Docker container complain that MYSQL_ROOT_PASSWORD env must be defined when using Docker 17.03 secrets?

I'm trying to adapt Docker's WordPress secret example (link below) to work in my Docker Compose setup (for Drupal).
https://docs.docker.com/engine/swarm/secrets/#/advanced-example-use-secrets-with-a-wordpress-service
However, when the 'mysql' container is spun up, the following error is output:
"error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD"
I created the secrets using the 'docker secret create' command:
docker secret create mysql_root_pw tmp-file-holding-root-pw.txt
docker secret create mysql_pw tmp-file-holding-pw.txt
After running the above, the secrets 'mysql_root_pw' and 'mysql_pw' now exist in the swarm environment. Verified by doing:
docker secret ls
Here are the relevant parts from my docker-compose.yml file:
version: '3.1'
services:
  mysql:
    image: mysql/mysql-server:5.7.17
    environment:
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_pw"
      - MYSQL_PASSWORD_FILE="/run/secrets/mysql_pw"
    secrets:
      - mysql_pw
      - mysql_root_pw
secrets:
  mysql_pw:
    external: true
  mysql_root_pw:
    external: true
When I do "docker stack deploy MYSTACK", I get the error mentioned above when the 'mysql' container attempts to start.
It seems like "MYSQL_PASSWORD_FILE" and "MYSQL_ROOT_PASSWORD_FILE" are not standard environment variables recognized by MySQL, and it's still expecting the "MYSQL_ROOT_PASSWORD" environment variable.
I'm using Docker 17.03.
Any suggestions?
Thanks.
You get this error if your secret is an empty string as well. That is what happened to me: the secret was mounted and the service properly configured, but it still failed because there was no password.
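One way to check for that (a sketch; the container name filter depends on the stack name used in docker stack deploy) is to confirm that the file the secret was created from has content, and that the mounted secret inside the running container is not empty:

# the file the secret was created from should not be empty
wc -c tmp-file-holding-root-pw.txt

# read the mounted secret inside a running task of the mysql service
docker exec $(docker ps -q -f name=MYSTACK_mysql) cat /run/secrets/mysql_root_pw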