Change Deployment config from shell - openshift

I need to modify the deployment config of an application by adding an extra YAML section to it (in the example, the section named ping and its two attributes):
containers:
  - name: openshift-wf-cluster
    image: 172.30.1.1:5000/demo/openshift-wf#sha256:5d7e13e981f25b8933d54c8716d169fadf1c4b9c03468a5b6a7170492d5b9d93
    ports:
      - containerPort: 8080
        protocol: TCP
      - name: ping
        containerPort: 8888
        protocol: TCP
Is it possible to do this from the oc command line (without manually editing the file)? Something like adding an extra node to one section of the YAML?

You can use the oc patch command to achieve this. See oc patch --help for more info. Try the following with your own deployment config name:
oc patch dc/YOURDC -p '[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value":{"name":"ping","containerPort":8888,"protocol":"TCP"}}]' --type=json
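As a quick sanity check after patching, you can print the container's ports back (dc/YOURDC is still a placeholder for your own deployment config name):
oc get dc/YOURDC -o jsonpath='{.spec.template.spec.containers[0].ports}'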

Yes. You can edit your deployment config in place with the OpenShift tools:
oc edit dc/deployment-1-name will open an editor for you to change your config.

Connect openshift pod to external mysql database

I am trying to set up a generic pod on OpenShift 4 that can connect to a MySQL server running on a separate VM outside the OpenShift cluster (testing using a local OpenShift CRC). However, when I create the deployment, I'm unable to connect to the MySQL server from inside the pod (for testing purposes).
The following is the deployment that I use:
kind: "Service"
apiVersion: "v1"
metadata:
name: "mysql"
spec:
ports:
- name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306
nodePort: 0
selector: {}
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "mysql"
subsets:
- addresses:
- ip: "***ip of host with mysql database on it***"
ports:
- port: 3306
name: "mysql"
---
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: "deployment"
spec:
  template:
    metadata:
      labels:
        name: "mysql"
    spec:
      containers:
        - name: "test-mysql"
          image: "***image repo with docker image that has mysql package installed***"
          ports:
            - containerPort: 3306
              protocol: "TCP"
          env:
            - name: "MYSQL_USER"
              value: "user"
            - name: "MYSQL_PASSWORD"
              value: "******"
            - name: "MYSQL_DATABASE"
              value: "mysql_db"
            - name: "MYSQL_HOST"
              value: "***ip of host with mysql database on it***"
            - name: "MYSQL_PORT"
              value: "3306"
I'm just using a generic image for testing purposes that has standard packages installed (net-tools, openjdk, etc.)
I'm testing by going into the deployed pod via the command:
$ oc rsh {{ deployed pod name }}
However, when I run the following command, I cannot connect to the server running mysql-server:
$ mysql --host **hostname** --port 3306 -u user -p
I get this error:
ERROR 2003 (HY000): Can't connect to MySQL server on '**hostname**:3306' (111)
I've also tried creating a route from the service and pointing to that as an FQDN alternative, but still no luck.
If I try to ping the host from inside the pod, I cannot reach it either. However, I can reach the host from outside the pod, and from inside the pod I can ping sites like google.com and github.com.
For reference, the image being used is built from essentially the following Dockerfile:
FROM ubi:8.0
RUN dnf install -y python3 \
    wget \
    java-1.8.0-openjdk \
    https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm \
    postgresql-devel
WORKDIR /tmp
RUN wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm && \
    rpm -ivh mysql-community-release-el7-5.noarch.rpm && \
    dnf update -y && \
    dnf install mysql -y && \
    wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz && \
    tar zxvf mysql-connector-java-5.1.48.tar.gz && \
    mkdir -p /usr/share/java/ && \
    cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar /usr/share/java/mysql-connector-java.jar
RUN dnf install -y tcping \
    iputils \
    net-tools
I imagine there is something I am fundamentally misunderstanding about connecting to an external database from inside OpenShift, and/or my deployment configs need some adjustment somewhere. Any help would be greatly appreciated.
As mentioned in the comments on the post, it looks to be a firewall issue. I tested again with the same config, but instead of an external MySQL DB I deployed MySQL in OpenShift as well, and the pods can connect. Since I don't have control of the firewall in the organisation, and the config didn't change between the two deployments, I'll mark this as solved, as there isn't much more I can do to test it.
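For anyone hitting the same symptom, one way to distinguish a firewall/network block from a MySQL configuration problem is to test raw TCP reachability from inside the pod. A rough sketch, assuming bash is available in the image and using placeholders for the pod name and the database host IP:
$ oc rsh <deployed-pod-name>
# inside the pod: check whether the MySQL port is reachable at all
$ timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<mysql-host-ip>/3306' && echo "port reachable" || echo "blocked or unreachable"
If the port is blocked here but MySQL is reachable from outside the cluster, the problem sits in the network path (firewall rules, routing), not in the Service/Endpoints or DeploymentConfig definitions.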

Is it possible to define host mappings in GitHub Actions?

Now I am using GitHub Actions to build my project. On my local machine, I define the local host address mapping in /etc/hosts like this:
11.19.178.213 postgres.dolphin.com
and in my database config like this:
spring.datasource.druid.post.master.jdbc-url = jdbc:postgresql://postgres.dolphin.com:5432/dolphin
On my production server I can edit the mapping to point to a different IP address for my database. But now I am running unit tests in GitHub Actions. How do I edit the host mapping so that the database hostname points to the Postgres container in GitHub Actions? I defined the container in GitHub Actions like this:
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13.2
        env:
          POSTGRES_PASSWORD: postgrespassword
          POSTGRES_DB: dolphin
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
What should I do to handle the host mapping? I have already searched the docs, but I have found nothing about this problem.
You can do it like this:
- name: Add hosts to /etc/hosts
  run: |
    echo "127.0.0.1 postgres.dolphin.com" | sudo tee -a /etc/hosts
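In the context of the workflow above, that step needs to run before the tests, so the mapping is in place when the JDBC URL is resolved. A minimal sketch of where it could sit (the checkout and test steps are assumptions, not part of the original workflow):
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13.2
        # ...env, ports and options as above...
    steps:
      - uses: actions/checkout@v2
      - name: Add hosts to /etc/hosts
        run: |
          echo "127.0.0.1 postgres.dolphin.com" | sudo tee -a /etc/hosts
      - name: Run tests
        run: ./mvnw test   # placeholder; use your actual build/test command
Because the job here runs directly on the runner (not inside a container), the published 5432:5432 port is reachable on localhost, which is why mapping postgres.dolphin.com to 127.0.0.1 works.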

I want to print the current podname in which my application is running in application logs in openshift

My Java application is running in several pods in OpenShift, and I want to print the pod name in the application logs for business purposes. Is there any way to do so? Thanks.
You should be able to expose the Pod name to the application using the Kubernetes "Downward API". This can either be done by exposing an environment variable with the Pod name, or mounting a file that contains the name.
Here's the docs for doing so with an environment variable: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
Here's a trimmed down version of the example on that page, to highlight just the Pod name:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_NAME;
            sleep 10;
          done;
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
As you can see from the docs, there's a bunch of other context that you can expose also.
The equivalent docs for mounting a volume file can be found here: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api
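For completeness, a rough sketch of the volume-based alternative might look like this (the pod and container names are placeholders); the application would then read its pod name from /etc/podinfo/podname:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-example
spec:
  containers:
    - name: app-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c", "cat /etc/podinfo/podname; sleep 3600" ]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name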

How to specify/modify target-port on a newly created app through the OpenShift CLI?

I am trying to expose a new app created via the OpenShift command line (oc). This is a Node.js server listening on port 3000. However, OpenShift defaults the target port to 8080, as shown in the following service.yaml:
kind: Service
apiVersion: v1
.............
.............
spec:
  ports:
    - name: 8080-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080
.........
I want to be able to update targetPort via the command line. I already followed these steps, but no luck so far:
step1: oc new-project my-new-project
step2: oc new-app https://github.org.com/my-new-app.git
step3: oc expose service my-new-app --target-port=3000
Error: **cannot use --target-port with --generator=route/v1**
Note: I am able to access the app (i.e. port 3000) only when I manually update targetPort to 3000 in the service YAML.
You didn't specify the port. Try this:
oc expose service my-new-app --target-port=3000 --port=8080
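If the service has already been created with the wrong targetPort, another option is to patch it in place rather than re-exposing it. A sketch, assuming the service is named my-new-app and keeps the existing 8080-tcp port entry:
oc patch svc/my-new-app -p '{"spec":{"ports":[{"name":"8080-tcp","port":8080,"targetPort":3000}]}}'
This uses the default strategic merge patch, which matches the existing entry on port 8080 and only changes its targetPort.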

How to put the containers' network into the Kubernetes YAML file

For example, I created a network in Docker:
docker network create hello-rails
Then I have MySQL, which is connected to this network:
docker run -p 3306 -d --network=hello-rails --network-alias=db -e MYSQL_ROOT_PASSWORD=password --name hello-rails-db mysql
I also have a Rails server, which also relies on this network:
docker run -it -p 3000:3000 --network=hello-rails -e MYSQL_USER=root -e MYSQL_PASSWORD=password -e MYSQL_HOST=db --name hello-rails benjamincaldwell/hello-docker-rails:latest
I want to write a Kubernetes deployment for these two containers as a YAML file, but I don't know how to express the network between the containers in the file. Do you have any recommendations?
In Kubernetes you would solve this by creating two services.
The MySQL service will look something like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
From your Rails server, you can access the MySQL service either by using the mysql DNS name or by using the MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT environment variables. There is no need to link the containers or specify a network, as you would in Docker.
Your Rails service will look like this:
kind: Service
apiVersion: v1
metadata:
  name: rails
spec:
  type: LoadBalancer
  selector:
    app: rails
  ports:
    - port: 3000
Notice the type: LoadBalancer, which specifies that this service will be published to the outside world. Depending on where you run Kubernetes, a public IP address will be automatically assigned to this service.
For more information, have a look at the Services documentation.
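The Services above only handle routing; the containers themselves still need workloads whose pod labels match the selectors. A minimal sketch for the MySQL side, translating the docker run flags into a Deployment (the app: mysql label matches the Service selector above, and the password value is a placeholder from the example):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "password"   # placeholder, as in the docker run example
          ports:
            - containerPort: 3306
The Rails container would get a similar Deployment with the app: rails label, with its MYSQL_HOST set to mysql (the service name) instead of the Docker network alias db.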