Failing to install IdP configuration with helm in OpenShift 4

I'm trying to apply an htpasswd IdP configuration. Applying the manifests with oc apply works, but installing the same configuration with Helm fails with the following error:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: OAuth "cluster" in namespace "" exists and cannot be imported into the current release:
Can someone help?
Regards
Mallikharjuna Rao Polisetty

This is a Helm issue. The Helm chart you're using is trying to create a resource of type OAuth named cluster on your OpenShift cluster, and that resource already exists (presumably because you created it by hand with oc apply).
Clean up the existing OAuth resource and try again.
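If removing the existing OAuth object isn't practical, another option is to let Helm adopt the resource it complains about. This is a minimal sketch assuming Helm 3.2 or later; the release name idp-config and namespace openshift-config are placeholders, not values from your setup:
# Mark the existing OAuth/cluster object as owned by the Helm release
# "idp-config" in namespace "openshift-config" (both hypothetical names).
oc label oauth cluster app.kubernetes.io/managed-by=Helm
oc annotate oauth cluster meta.helm.sh/release-name=idp-config
oc annotate oauth cluster meta.helm.sh/release-namespace=openshift-config
# With those markers in place, Helm can take ownership instead of refusing to install.
helm install idp-config ./my-idp-chart -n openshift-config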

Related

Can I store a helm chart in OpenShift and make use of it?

I have uploaded a helm chart as an OCI-compliant artifact to the OpenShift internal registry. However, I can't add the OpenShift registry as a helm repo, or find another way to persuade helm to use the resulting imagestream as a source to install from.
export HELM_EXPERIMENTAL_OCI=1
oc whoami --show-token | helm registry login my-cluster.com -u $(oc whoami) --password-stdin
helm create mychart
cd mychart/
helm chart save . my-cluster.com/$(oc project -q)/mychart:latest
helm chart push my-cluster.com/$(oc project -q)/mychart:latest
And that creates a "mychart" imagestream with a dockerImageManifestMediaType: application/vnd.oci.image.manifest.v1+json
But whenever I try to add my-cluster.com as a repo or install any other way, it just gives me a 404 error :
helm install --username $(oc whoami) --password $(oc whoami --show-token) --repo https://my-cluster.com/$(oc project -q) mychart chart
Error: looks like "https://my-cluster.com/project" is not a valid chart repository or cannot be reached: failed to fetch https://my-cluster.com/project/index.yaml : 404 Not Found
Would the registry have to do something "clever" to generate the index.yaml that is missing from the OpenShift registry?
A helm chart is not the same thing as an OCI-compliant container image!
A helm chart is basically a compressed directory with a specified layout.
However, it is possible to host a ChartMuseum on OpenShift, push your charts to that ChartMuseum, and then add it as a helm repository.
See the ChartMuseum GitHub repository for more information on how to host your own ChartMuseum, and this tutorial on how the chart should look exactly and how to push it to your hosted ChartMuseum.
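For illustration, a minimal sketch of that workflow, assuming a ChartMuseum instance reachable at http://chartmuseum.example.com (a placeholder URL):
# Package the chart into a .tgz archive
helm package ./mychart
# Upload it through ChartMuseum's HTTP API
curl --data-binary "@mychart-0.1.0.tgz" http://chartmuseum.example.com/api/charts
# Add ChartMuseum as a regular chart repository and install from it
helm repo add my-charts http://chartmuseum.example.com
helm repo update
helm install myrelease my-charts/mychart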
I figured out that you need to do:
helm chart pull registry/mychart:tag
helm chart export registry/mychart:tag .
So that your chart is in the current directory. And then you can:
helm install release mychart
There is a PR for future versions of helm to install directly from something like oci://registry/mychart:tag, which would save a few steps. I suppose this difference between a registry and a repository is what was causing my problems.
There's absolutely no need for ChartMuseum or other third-party apps.
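For what it's worth, that oci:// workflow has since landed: in Helm 3.8+ OCI support is no longer experimental, so the pull/export steps can be skipped. A sketch using the registry naming from the question, assuming the registry accepts Helm's OCI media types and a chart version of 0.1.0:
# Registry login works as before; HELM_EXPERIMENTAL_OCI is no longer required
oc whoami --show-token | helm registry login my-cluster.com -u $(oc whoami) --password-stdin
# Push a packaged chart and install it straight from its OCI reference
helm package ./mychart
helm push mychart-0.1.0.tgz oci://my-cluster.com/$(oc project -q)
helm install release oci://my-cluster.com/$(oc project -q)/mychart --version 0.1.0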

Run docker container on OpenShift from Nexus unsecure private registry

I'm trying to run a containerized app that is stored in a Nexus docker-hosted repository at 12.23.34.55:8086.
I'm trying to run it on my OpenShift cluster, but I'm getting an error. These are the commands I'm using:
oc create secret docker-registry mysecret --docker-server=http://12.23.34.55/ --docker-username=aditya --docker-password=aditya --docker-email=aditya@example.org
oc secrets link default mysecret --for=pull
My Nexus is running on http://12.23.34.55:8081.
Now I'm using the command below to launch the app in OpenShift:
oc new-app 12.23.34.55:8085/mytestapp:11 --insecure-registry=true
as per the example $ oc new-app myregistry:5000/example/myimage from the documentation:
https://docs.openshift.com/container-platform/4.1/applications/application_life_cycle_management/creating-new-applications.html
But it does not work: it asks for a password, and I'm not able to deploy from the console either. Can anyone help me with the exact command?
Creating the secret is not enough for OpenShift to be able to pull from the registry. You still need to link that secret as well.
Take a look at the official documentation here:
https://docs.openshift.com/container-platform/4.1/openshift_images/managing_images/using-image-pull-secrets.html#images-allow-pods-to-reference-images-from-secure-registries_using-image-pull-secrets
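As a concrete sketch based on the commands in the question (not a verified fix): the --docker-server value has to match the exact host:port used in the image reference, and the secret is usually linked to both the default and builder service accounts:
# Assuming the Nexus docker-hosted repository on port 8086 from the question;
# the server here must match the registry prefix used in oc new-app.
oc create secret docker-registry mysecret \
  --docker-server=12.23.34.55:8086 \
  --docker-username=aditya \
  --docker-password=aditya \
  --docker-email=aditya@example.org
# Link it for image pulls and for builds
oc secrets link default mysecret --for=pull
oc secrets link builder mysecret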
Okay! I found an answer. With a private registry, you first need to import the image into an image stream:
oc import-image <name> --from=<registry>/<imagename>:<tag> --confirm --insecure
Then you can create a new app from that image stream:
oc new-app <name>
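With the registry from the question, that would look something like this (a sketch; --insecure is needed because the Nexus registry is served over plain HTTP, and the port must be the docker-hosted repository's port):
oc import-image mytestapp --from=12.23.34.55:8086/mytestapp:11 --confirm --insecure
oc new-app mytestapp:11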

Using a connector with Helm-installed Kafka/Confluent

I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this
connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this should be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect image: download the connector manually, extract the contents into a folder, and copy them to a path in the container. Something like below.
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide it as the plugin path (plugin.path) during your helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the helm installation.
Here is the path to the values.yaml file. You can find the image and plugin.path values here.
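For reference, a sketch of overriding the Connect image and plugin path at install time. The value keys used here (cp-kafka-connect.image, imageTag, configurationOverrides."plugin.path") are my assumption about the cp-helm-charts layout, so verify them against the chart's values.yaml:
# Hypothetical override file for the release from the question
cat > connect-overrides.yaml <<'EOF'
cp-kafka-connect:
  image: my-registry.example.com/custom-cp-kafka-connect   # image built from the Dockerfile above (placeholder)
  imageTag: 5.2.1-debezium
  configurationOverrides:
    "plugin.path": "/usr/share/java,/usr/share/confluent-hub-components"
EOF
helm upgrade kafka-home-delivery confluentinc/cp-helm-charts \
  -f kafka_config.yaml -f connect-overrides.yaml --namespace cust360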
Just an add-on to Jegan's answer above: https://stackoverflow.com/a/56049585/6002912
You can use the Dockerfile below (recommended):
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use a Docker multi-stage build instead:
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you save time getting the right jar files for your plugins, such as debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
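Then build and push the resulting image somewhere your cluster can pull from, and point the deployment at it (the registry name is a placeholder):
# Build the image from the Dockerfile above and publish it
docker build -t my-registry.example.com/custom-connect-operator:5.4.0.0 .
docker push my-registry.example.com/custom-connect-operator:5.4.0.0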
The Kafka Connect pod should already have confluent-hub installed; it is that pod you should run the commands on.
The cp-kafka-connect pod has two containers; one of them is the cp-kafka-connect-server container, which has confluent-hub installed. You can log in to that container and run your connector commands there. To get a shell in that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
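For example (a sketch using the connector from the question; <connect-pod-name> is whatever kubectl get pods shows for the Connect pod):
# Install the connector inside the running Connect server container
kubectl exec -it <connect-pod-name> -c cp-kafka-connect-server -- \
  confluent-hub install --no-prompt debezium/debezium-connector-mysql:0.9.2
Keep in mind the container filesystem is ephemeral, so anything installed this way is lost when the pod is recreated; baking the connector into the image, as in the other answers, is the durable option.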
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.
See the README.md.
The script can be passed as a secret and mounted as a volume.

SciChart and SwiftyJSON

I am getting an [!] Invalid Podfile file: syntax error with the following Podfile:
source 'https://github.com/ABTSoftware/PodSpecs.git'
platform :ios, '9.0'
target 'HavingFunWithScichart' do
use_frameworks!
# Pods for HavingFunWithScichart
pod 'SciChart'
pod 'SwiftyJSON'
pod 'Alamofire'
end
I know that I am getting it because of the pre-install script, since that script instructs CocoaPods to install only the SciChart pod.
My question is: how can I add my SwiftyJSON and Alamofire pods so that I can run
pod install
successfully?
Can anyone help me with how to do this? I have read the CocoaPods documentation over and over, but it does not cover how to work around the pre-install script so that I can add the additional pods I need for my project.
How do I delete this question, please? I just heard back that it was an omission in their script and they will be fixing it.

openshift start build forbidden

I am trying to create a build and deployment pipeline in OpenShift via Jenkins. I have followed their official tutorial: https://github.com/OpenShiftDemos/openshift-cd-demo
and properly set all policies (I am using different project names and a different application, but the same strategy), yet the Jenkins instance deployed in the cicd project can't start a build in the dev project.
Error:
Error from server (Forbidden): buildconfigs.build.openshift.io buildconfig not found, even though the build config is created and can be seen via the web console.
I am using --from-file instead of --from-dir for binary input.
Please help if any other policies need to be set for the Jenkins service account in the cicd project so it can start-build in the dev project.
Yes, Jenkins needs to have access to the dev project. You can use the following command to grant that access:
oc policy add-role-to-user edit system:serviceaccount:cicd-tools:jenkins -n example-openshift-dockerfile
cicd-tools: the project Jenkins is installed in
example-openshift-dockerfile: the project that will be changed by Jenkins
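To verify that the grant worked, you can impersonate the service account and ask whether it is allowed to trigger builds (a sketch using the project names above; requires a user with impersonation rights, such as cluster-admin):
# Check that the Jenkins service account may instantiate builds in the target project
oc auth can-i create buildconfigs/instantiate \
  -n example-openshift-dockerfile \
  --as=system:serviceaccount:cicd-tools:jenkins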