Push Docker image to a private repo in OpenShift

I am trying to push the Docker image to a private JFrog repo, but the S2I build keeps pushing it to the internal OpenShift registry instead.
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
      openshift.io/image.insecureRepository: "true"
    creationTimestamp: null
    labels:
      app: bulkhold-s2i
    name: bulkhold-s2i
  spec:
    output:
      to:
        kind: DockerImage
        name: 'bulkhold-s2i:latest'
    dockerImageRepository: *****/it-mfg/bulkhold
    pushSecret:
      name: itmfg-docker-dev-local
  status:
    dockerImageRepository: ******/it-mfg/bulkhold
    tag: latest

Try putting spec.output in your BuildConfig*1 instead of the ImageStream.
*1: https://docs.openshift.com/container-platform/4.11/cicd/builds/managing-build-output.html
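For reference, a minimal sketch of a BuildConfig with that output section, assuming an S2I build and the push secret from the question; the git URL, builder image, and registry host below are placeholders, not values from the original setup:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: bulkhold-s2i
spec:
  source:
    type: Git
    git:
      uri: https://git.example.com/it-mfg/bulkhold.git   # placeholder source repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: java:latest                                # placeholder builder image
  output:
    to:
      kind: DockerImage                                  # push to the external registry, not the internal ImageStream
      name: registry.example.com/it-mfg/bulkhold:latest  # placeholder JFrog/Artifactory host
    pushSecret:
      name: itmfg-docker-dev-local                       # docker-registry secret holding the repo credentials
The pushSecret must be a docker-registry type secret in the same namespace as the build, e.g. created with oc create secret docker-registry.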

Related

Setting up Kerberos, timezone and ansible.cfg in a customized setup of AWX-Operator in an air-gapped environment

I'm really new to this and I'm setting up a functional proof of concept with k3s.
I'm able to set up a default AWX environment with the following files after running make deploy from a downloaded version of awx-operator.
What I now need is to move everything away from UTC to Europe/Oslo as the timezone, and to
make WinRM remoting work (unsure how, since with the default deployment Kerberos login will fail; it might be due to time skew, but the krb5 config is also missing). How do I configure the AWX operator to set up krb5 properly?
I'd also like to mount a local version of /etc/ansible/ansible.cfg persistently, so that even if I restart the server this file will still be read from the host and used by awx-operator.
I saw k8tz, but it doesn't seem possible to install it without internet access, and I haven't fully grasped its setup yet.
I kindly ask for full YAML examples, as I'm not that great yet at understanding the full buildup.
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: awx-postgres-13
  port: "5432"
  database: awx
  username: awx
  password: MyPassword
  type: managed
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-admin-password
  namespace: awx
stringData:
  password: MyPassword
type: Opaque
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi
  storageClassName: awx-postgres-volume
  hostPath:
    path: /data/postgres-13
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  storageClassName: awx-projects-volume
  hostPath:
    path: /data/projects
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: awx-projects-volume
Then the last part, which creates the pods etc.:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  # Set the replicas count to scale AWX pods
  #replicas: 3
  admin_user: admin
  admin_password_secret: awx-admin-password
  ingress_type: ingress
  ingress_tls_secret: awx-secret-tls
  hostname: my.domain.com # Replace with the host FQDN and DO NOT use an IP.
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: awx-postgres-volume
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
  web_resource_requirements: {}
  ee_resource_requirements: {}
  task_resource_requirements: {}
  no_log: "false"
  #ldap_cacert_secret: awx-ldap-cert
UPDATE:
I got krb5/Kerberos to work by creating a ConfigMap:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-extra-config
  namespace: awx
data:
  krb5.conf: |-
    # To opt out of the system crypto-policies configuration of krb5, remove the
    # symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
    includedir /etc/krb5.conf.d/

    [logging]
     default = FILE:/var/log/krb5libs.log
     kdc = FILE:/var/log/krb5kdc.log
     admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
     default_realm = MYDOMAIN.COM
     ticket_lifetime = 24h
     renew_lifetime = 7d

    [realms]
     MYDOMAIN.COM = {
      kdc = pdc.mydomain.com
      admin_server = pdc.mydomain.com
     }

    [domain_realm]
     .MYDOMAIN.COM = MYDOMAIN.COM
     mydomain.com = MYDOMAIN.COM
Then referring to this in the last deployment file:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  web_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  task_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  ee_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  extra_volumes: |
    - name: krb5-conf
      configMap:
        defaultMode: 420
        items:
          - key: krb5.conf
            path: krb5.conf
        name: awx-extra-config
Got the timezone to work as well. Only the mount of the host file /etc/ansible/ansible.cfg is still missing (a sketch for that is included after the timezone solution below).
Here was the solution I used:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  ee_extra_env: |
    - name: TZ
      value: Europe/Paris
  task_extra_env: |
    - name: TZ
      value: Europe/Paris
  web_extra_env: |
    - name: TZ
      value: Europe/Paris
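For the still-missing ansible.cfg piece, a sketch that follows the same ConfigMap pattern as the krb5.conf example above; the ConfigMap name awx-ansible-cfg and the sample [defaults] content are assumptions, not taken from the original setup:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-ansible-cfg        # hypothetical name
  namespace: awx
data:
  ansible.cfg: |-
    # illustrative content only; put your real ansible.cfg here
    [defaults]
    host_key_checking = False
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  task_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  ee_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  extra_volumes: |
    - name: ansible-cfg
      configMap:
        defaultMode: 420
        items:
          - key: ansible.cfg
            path: ansible.cfg
        name: awx-ansible-cfg
This keeps the file in the cluster rather than reading it from the host path; if it really must come from the host, a hostPath volume in extra_volumes would be the alternative.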

Application Gateway Ingress Controller: unable to access the application through the Ingress controller, but able to access it through the LB external IP

I have created an Ingress for my deployment and was able to access the application for some time, but when I tried again a while later I could no longer reach it. At the same time, I was still able to access the application via the LoadBalancer external IP. Can someone please help here?
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lable
  name: label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: label
    spec:
      containers:
      - image: <Image Name>
        name: label
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: "UAT"
        - name: EnvironmentName
          value: "UAT"
        volumeMounts:
        - name: mst-storage
          mountPath: /home/appuser/.aspnet/DataProtection-Keys
      volumes:
      - name: mst-storage
        emptyDir: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: label
spec:
  selector:
    app: label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  type: LoadBalancer
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: label
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
      - path: /api/*
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
      - path: /
        backend:
          service:
            name: label
            port:
              number: 80
        pathType: Prefix
Are you still facing this issue? You can always check the Application Gateway Ingress Controller logs; the controller constantly monitors changes in the AKS pods and passes that information to the Application Gateway so that both components stay in sync. If there are any issues, those logs will clearly show the error information.
Example Command:
kubectl logs ingress-appgw-deployment-* -n kube-system
Also, in the above Deployment YAML file, can you kindly check the below section under metadata -> labels? It is written as app: lable (is that a spelling mistake?):
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lable
Compare that with the Service YAML file (spec -> selector -> app: label):
spec:
  selector:
    app: label
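If that lable spelling is unintended, a corrected metadata block would look like the sketch below; note that only the pod template labels and the Service selector actually have to match, so this is mostly about keeping the labels consistent:
kind: Deployment
metadata:
  labels:
    app: label   # consistent with the pod template labels and the Service selector
  name: label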

How to create a MySQL cluster within an Istio injected namespace?

I'm currently trying to create a 2-node MySQL cluster on Kubernetes 1.13.10 using the Oracle MySQL Operator. It works fine within a standard namespace. However, once it's created within an Istio 1.4 injected namespace, the MySQL agent, which is in charge of setting up the replication, returns the following error:
Error bootstrapping cluster: failed to create new cluster: SystemError: RuntimeError: Dba.create_cluster: ERROR: Error starting cluster: The port '33061' for localAddress option is already in use. Specify an available port to be used with localAddress option or free port '33061'.
I was not able to find any support on this so far.
How can I configure Istio to enable the agent to manage the replication?
Below are my YAML manifests:
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: test
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
- kind: ServiceAccount
  name: mysql-agent
  namespace: test
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data0
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /data/test/mysql/data0
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data1
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /share/test/mysql/data1
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user-secret
  namespace: test
stringData:
  password: password
---
apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: mysql
  namespace: test
  labels:
    app: mysql
    namespace: test
spec:
  multiMaster: true
  members: 2
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "v1alpha1.mysql.oracle.com/cluster"
            operator: In
            values:
            - mysql
        topologyKey: "kubernetes.io/hostname"
  rootPasswordSecret:
    name: mysql-root-user-secret
  volumeClaimTemplate:
    metadata:
      name: data
    spec:
      storageClassName: hostpath
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
      selector:
        matchLabels:
          namespace: test
          type: data
          app: mysql
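Not a verified fix, but one avenue worth checking: the error suggests the group-replication local address port 33061 appears to be in use only when the Istio sidecar intercepts traffic, and Istio supports pod annotations that exclude specific ports from sidecar redirection. A sketch of what could be added to the MySQL pods' metadata (how to propagate these annotations through the operator's Cluster resource is not shown here and may require editing the generated StatefulSet):
# Illustrative pod-template annotations only; whether this resolves the
# create_cluster error in this particular setup is untested.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "33061"
    traffic.sidecar.istio.io/excludeOutboundPorts: "33061"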

How to fix environment variables not being picked up in a .json file

I am setting environment variables in k8s through a ConfigMap. I am also creating a generate.json file and mounting it via another ConfigMap. All of this works fine. The problem is that the env vars are not being picked up by the generate.json file.
I am trying to get the environment variables passed by the ConfigMap into the generate.json that is mounted via a ConfigMap. My assumption was that generate.json would read the container env vars passed via the ConfigMap and interpolate them where necessary.
Here are the two ConfigMaps.
ConfigMap to create the env variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-cm
  namespace: default
data:
  ADMIN_PW: "P#ssw0rd"
  EMAIL: "support#gluu.com"
  ORG_NAME: "Gluu"
  COUNTRY_CODE: US
  STATE: TE
  CITY: SLC
  LDAP_TYPE: opendj
  GLUU_CONFIG_ADAPTER: kubernetes
  GLUU_SECRET_ADAPTER: kubernetes
---
ConfigMap to interpolate the created environment vars:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gen-json-file
  namespace: default
data:
  generate.json: |-
    {
      "hostname": "$DOMAIN",
      "country_code": "$COUNTRY_CODE",
      "state": "$STATE",
      "city": "$CITY",
      "admin_pw": "$ADMIN_PW",
      "email": "$EMAIL",
      "org_name": "$ORG_NAME"
    }
---
apiVersion: batch/v1
kind: Job
metadata:
  name: config-init
spec:
  parallelism: 1
  template:
    metadata:
      name: job
      labels:
        app: load
    spec:
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: config-volume-claim
      - name: mount-gen-file
        configMap:
          name: gen-json-file
      containers:
      - name: load
        image: gluufederation/config-init:4.0.0_dev
        lifecycle:
          preStop:
            exec:
              command: [ "/bin/sh", "-c", "printenv" ]
        volumeMounts:
        - mountPath: /opt/config-init/db/
          name: config
        - mountPath: /opt/config-init/db/generate.json
          name: mount-gen-file
          subPath: generate.json
        envFrom:
        - configMapRef:
            name: config-cm
        args: [ "load" ]
      restartPolicy: Never
The expected result is that the generate.json file gets populated with the values of those env vars in place of the variables.
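For context: Kubernetes mounts ConfigMap files verbatim and never substitutes container environment variables into them, so the interpolation has to happen inside the pod. A rough sketch of one common workaround, rendering the template with envsubst (from GNU gettext) in an initContainer before the main container runs; the alpine image, the /templates and /rendered paths, and the rendered emptyDir volume are all illustrative assumptions, and the PVC mount from the original Job is omitted for brevity:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: config-init
spec:
  template:
    spec:
      volumes:
      - name: mount-gen-file
        configMap:
          name: gen-json-file
      - name: rendered          # hypothetical scratch volume for the rendered file
        emptyDir: {}
      initContainers:
      - name: render-template
        image: alpine:3.18      # any small image with envsubst available would do
        command: ["/bin/sh", "-c"]
        args:
        - apk add --no-cache gettext &&
          envsubst < /templates/generate.json > /rendered/generate.json
        envFrom:
        - configMapRef:
            name: config-cm     # the same env vars the main container receives
        volumeMounts:
        - mountPath: /templates
          name: mount-gen-file
        - mountPath: /rendered
          name: rendered
      containers:
      - name: load
        image: gluufederation/config-init:4.0.0_dev
        args: [ "load" ]
        envFrom:
        - configMapRef:
            name: config-cm
        volumeMounts:
        - mountPath: /opt/config-init/db/generate.json
          name: rendered
          subPath: generate.json
      restartPolicy: Never
Variables that are not defined in config-cm (such as $DOMAIN in the template) would simply render as empty strings.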

Error "already exists" when uploading a YAML with cluster policy bindings in OpenShift

When I try to execute this:
oc create -f custom_clusterPolicyBinding.yml
Error from server: error when creating "custom_clusterPolicyBinding.yml": clusterpolicybindings ":default" already exists
oc version
oc v1.4.1
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO
This is the custom_clusterPolicyBinding.yml
apiVersion: v1
kind: ClusterPolicyBinding
metadata:
  name: custom
policyRef:
  name: custom
roleBindings:
- name: custom:label-nodos
  roleBinding:
    groupNames:
    - pachi
    metadata:
      name: custom:label-nodos
    roleRef:
      name: custom:label-nodos
    subjects:
    - kind: Group
      name: pachi
    userNames: null
The cluster role binding custom:label-nodos already exists:
oc get clusterroleBinding | grep custom:label-nodos
custom:label-nodos /custom:label-nodos
And the content of the cluster role binding YAML is:
apiVersion: v1
groupNames: null
kind: ClusterRoleBinding
metadata:
  name: custom:label-nodos
roleRef:
  name: custom:label-nodos
subjects: []
userNames: null
Any idea?
Do not edit policy directly. There is only one cluster policy and one cluster policy binding.
Instead you would want to create a clusterrole with content similar to this (edit it to grant the permissions you want to give out):
apiVersion: v1
kind: ClusterRole
metadata:
  name: some-user
rules:
- apiGroups:
  - project.openshift.io
  - ""
  resources:
  - projects
  verbs:
  - list
And a clusterrolebinding with content like this (edit it to bind to the correct subjects):
apiVersion: v1
kind: ClusterRoleBinding
metadata:
  name: some-users
roleRef:
  name: some-user
subjects:
- kind: User
  name: foo
You can also use the oadm policy add-*role-to-* commands to help with binding roles:
add-cluster-role-to-group
add-cluster-role-to-user
add-role-to-group
add-role-to-user
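For example, granting the some-user cluster role from above to the pachi group and to user foo could look like this (names reused from the snippets above; adjust to your own):
oadm policy add-cluster-role-to-group some-user pachi
oadm policy add-cluster-role-to-user some-user foo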