openshiftv3 service url connection failed from within a project - openshift

I'm just starting to use OpenShift v3. I've been looking for examples of setting up a CI/CD pipeline with Jenkins, Nexus, and SonarQube on OpenShift. I found this nice example project, but unfortunately I can't get it to work. The project can be found here: https://github.com/OpenShiftDemos/openshift-cd-demo
The problem I'm running into is that once a Jenkins job starts, it tries to connect to the Nexus service at the URL nexus:8081. This URL comes from the following section of the OpenShift template:
# Sonatype Nexus
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Sonatype Nexus repository manager's http port
    labels:
      app: nexus
    name: nexus
  spec:
    ports:
    - name: web
      port: 8081
      protocol: TCP
      targetPort: 8081
    selector:
      app: nexus
      deploymentconfig: nexus
    sessionAffinity: None
    type: ClusterIP
However, Jenkins (running as a pod on OpenShift in the same project as Nexus) can't connect to http://nexus:8081 and shows the following:
Connect to nexus:8081 [nexus/172.30.190.210] failed: Connection refused # line 81, column 25
Any idea what is going on?
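The error itself narrows things down a little: the address in brackets shows that DNS resolution of the Service name works, so "Connection refused" usually points at the Nexus pod behind the Service not being ready or not listening on 8081 yet. A minimal sketch of how one might check that, using the names from the template above and assuming the oc client is logged in to the project:

# does the nexus Service have any ready endpoints?
oc get endpoints nexus

# is the Nexus pod Running and Ready, or still starting up?
oc get pods -l app=nexus
oc logs dc/nexus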

Related

Getting error while applying ingress resource: zone is too small

I am new to Kubernetes. I have created a simple cluster with 1 master and 1 worker node (running in two different VMs). Additionally, there is an HAProxy setup in a separate VM.
Client Version: v1.24.0
Kustomize Version: v4.5.4
Server Version: v1.26.1
I have set up the NGINX Ingress Controller using manifests (https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/).
When I try to apply an Ingress resource with rules, I get the error:
Configuration for default/i1 was added or updated ; but was not applied: error reloading NGINX for default/i1: nginx reload failed: command /usr/sbin/nginx -s reload -e stderr stdout: "" stderr: "2023/02/06 12:49:28 [emerg] 30#30: zone \"default-i1-sim.daniyar.uk-first-web-app-service-80\" is too small in /etc/nginx/conf.d/default-i1.conf:4\n" finished with error: exit status 1
My ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: i1
spec:
  rules:
  - host: sim.daniyar.uk
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: first-web-app-service
            port:
              number: 80
IngressClass YAML:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
Let me know if you need more info
Thanks
Found a solution.
In my case I had to disable the upstream zones in the NGINX config by adding the annotation
nginx.org/upstream-zone-size: "0"
to my Ingress resource.
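For reference, a minimal sketch of where that annotation sits, assuming the same Ingress as in the question (only the metadata changes; the rest of the spec stays as shown above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: i1
  annotations:
    nginx.org/upstream-zone-size: "0"   # size 0 disables the shared-memory upstream zone, per the fix above
spec:
  rules:
  - host: sim.daniyar.uk
    # ...rest of the rules unchanged from the Ingress above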

How to deploy docker image in minikube?

I have a container image that I want to run in minikube. My container image has MySQL, Redis, and some other components needed to run my application. I have an external application that needs to connect to this MySQL server. The container image is written such that it starts the MySQL and Redis servers on startup.
I thought I could access my MySQL server from outside if I ran the container image inside the Docker daemon in minikube, after setting the environment as shown below:
eval $(minikube docker-env)
But this didn't help, since the MySQL server is not accessible on port 3306.
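For illustration, this is roughly what that first attempt amounts to (the port-publishing flag is an assumption about how the image would be run): with minikube's docker-env, the container runs inside the minikube VM, so a published port ends up on the VM's address rather than on localhost, and whether that address is reachable from the host depends on the minikube driver.

# point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)

# run the image and publish MySQL's port on the minikube VM (illustrative flags)
docker run -d --name c-app -p 3306:3306 c-app-latest:latest

# the port is now bound inside the minikube VM, reachable (driver permitting) via:
minikube ip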
I then tried a second method: creating a pod.
I created the deployment using the YAML file below:
apiVersion: v1
kind: Service
metadata:
  name: c-service
spec:
  selector:
    app: c-app
  ports:
  - protocol: "TCP"
    port: 3306
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: c-app
spec:
  selector:
    matchLabels:
      app: c-app
  replicas: 3
  template:
    metadata:
      labels:
        app: c-app
    spec:
      containers:
      - name: c-app
        image: c-app-latest:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
After creating the deployment, the containers get created. As I said before, my image is designed to start the MySQL and Redis servers on startup.
In Docker Desktop, the same image starts the servers, opens the ports, keeps running, and I can also perform operations in the container terminal.
In minikube, it starts the servers, and once they have started, minikube marks the pod's status as Completed and tries to restart it. It doesn't open any ports. It just starts and restarts over and over until it eventually ends up in a "CrashLoopBackOff" error.
How I did it before:
I have a container image that I previously ran in Docker Desktop, where it worked fine. It starts all the servers, establishes a connection with my external app, and also lets me interact with the container terminal.
My current requirements:
I want to run the same image in minikube that I ran in Docker Desktop. When the image runs, it should open ports for external connections (say, port 3306 for the MySQL connection) and I should be able to interact with the container through
kubectl exec -it <pod> -- /bin/bash
More importantly, I don't want the pod restarting again and again. It should start once, start all the servers, open the ports, and stay up.
Sorry for the long post. Can anyone please help me with this?
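A sketch of how one might inspect why the pod keeps exiting (the pod name is a placeholder): a pod that reaches Completed usually means the container's main process exits after launching the servers in the background, so the container's exit code and its previous logs are the first things to check.

# restart counts and current status
kubectl get pods -l app=c-app

# last state and exit code of the container
kubectl describe pod <c-app-pod-name>

# what the entrypoint printed before it exited
kubectl logs <c-app-pod-name> --previous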

SRVE0255E: A WebGroup/Virtual Host to handle /ibm/console/ has not been defined

I have deployed WebSphere Traditional on Red Hat OpenShift but I'm unable to get to the admin console. I can see that the server is running inside the pod. Attaching the YAML files I have used and the pod logs that are generated. Please help. Thanks!
YAML Files for Pod and Service -
apiVersion: v1
kind: Pod
metadata:
  name: was-traditional
  labels:
    app: websphere
spec:
  containers:
  - name: was-container
    image: ibmcom/websphere-traditional:8.5.5.17
------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: was-service
spec:
  selector:
    app: websphere
  type: NodePort
  ports:
  - protocol: TCP
    port: 9043
    targetPort: 9043
    nodePort: 31085
WAS Pod Logs -
{"type":"was_message","host":"was-traditional","ibm_cellName":"DefaultCell01","ibm_nodeName":"DefaultNode01","ibm_serverName":"server1","ibm_sequence":"1611228360189_0000000000113","message":"SRVE0255E: A WebGroup\/Virtual Host to handle \/ibm\/console\/ has not been defined.","ibm_datetime":"2021-01-21T11:26:00.189+0000","ibm_messageId":"SRVE0255E","ibm_methodName":"handleRequest","ibm_className":"com.ibm.ws.webcontainer.internal.WebContainer","ibm_threadId":"0000006c","module":"com.ibm.ws.webcontainer.internal.WebContainer","loglevel":"SEVERE"}
You can run oc port-forward <pod> 9043:9043 on the workstation that needs to view the admin console, then access it via localhost:9043. It will not work with an alternate port because of how virtual hosting works in traditional WebSphere.
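A minimal sketch of that, assuming the pod name from the YAML above and the default console context root:

# forward the secure admin port from the pod to this workstation
oc port-forward was-traditional 9043:9043

# then browse to the console locally:
#   https://localhost:9043/ibm/console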

Windows Jenkins Slave unable to connect to master hosted on Openshift instance

Unable to connect to the Jenkins master hosted on an OpenShift cluster. It terminates with the error below after handshaking:
may 23, 2020 2:05:55 PM hudson.remoting.jnlp.Main$CuiListener error
GRAVE: Failed to connect to jenkins-jnlp-poc:50000
java.io.IOException: Failed to connect to jenkins-jnlp-poc:50000
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:246)
at hudson.remoting.Engine.connectTcp(Engine.java:678)
at hudson.remoting.Engine.innerRun(Engine.java:556)
at hudson.remoting.Engine.run(Engine.java:488)
Caused by: java.net.ConnectException: Connection timed out: connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:204)
... 3 more
I added a Route to the jenkins-jnlp service but I'm not able to expose the port. I've been trying to configure a nodePort but couldn't achieve it yet. Any help will be welcome!
Thanks.
A Route only works with HTTP/HTTPS traffic, so it will not work in this case; as you correctly noted, a NodePort is most likely what you want. Here is an example of a Service of type NodePort using port 32000:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp-poc-service
spec:
  selector:
    app: jenkins-jnlp-poc
  type: NodePort
  ports:
  - name: jnlp
    port: 50000
    targetPort: 50000
    nodePort: 32000
    protocol: TCP
Note that you may need to change multiple parts of the Service:
- port and targetPort, which specify the port the Service "listens" on and where traffic is forwarded to (typically the port your container exposes)
- selector, which determines which Pods are targeted (you'll need to check which labels your Pods use and adjust accordingly; see the sketch after this list)
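A quick way to check both points might look like this (names taken from the example Service above):

# which labels do the Jenkins JNLP pods actually carry?
oc get pods --show-labels

# did the Service pick up endpoints, and which nodePort is exposed?
oc get endpoints jenkins-jnlp-poc-service
oc get svc jenkins-jnlp-poc-service

The Windows agent would then connect to <any-node-IP>:32000 instead of going through the Route.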

http2 ingress ssl-passthrough, curl works, chrome goes bananas

This works perfectly for curl and Chrome:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: example
    http:
      paths:
      - backend:
          serviceName: example
          servicePort: 443
until you create a web page that connects to that service and makes img calls (tiles) to another service on the same cluster using the same SSL certificate. Then Chrome wants to reuse the HTTP/2 connection, resulting in requests being sent to the wrong pod. Note that curl keeps working for both services because it doesn't try to reuse the previous curl command's HTTP/2 connection. Is there a workaround for this, other than running two different Kubernetes clusters so Chrome doesn't reuse the HTTP/2 connection?
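A rough way to check whether the two services actually fall into Chrome's connection-coalescing case (the hostnames below are placeholders for the two hosts involved): Chrome reuses an existing HTTP/2 connection when both names resolve to the same address and the certificate it was handed covers both of them, so comparing the DNS answers and the certificate's SANs shows whether that precondition holds here.

# do both hostnames resolve to the same ingress IP?
dig +short example tiles.example

# which names does the served certificate actually cover?
openssl s_client -connect example:443 -servername example </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'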