Allowing my Python code to read data from DB2 on OpenShift

I have Python code that needs to read data from a restricted DB2 database. I know about secrets in OpenShift build configs, but I haven't found anything that helps in this case. Also, what is the appropriate syntax in my code to read from that Secret? Kindly help.

There are a couple of options for using Secrets or ConfigMaps in a DeploymentConfig for your scenario:
- Propagate the data through environment variables
- Mount the secret as a volume
Examples of both approaches exist; I suggest propagating the password through a secretKeyRef environment variable:
env:
  - name: DB2_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db2-password
        key: password
To create the secret:
oc create secret generic db2-password --from-literal=password=Qwerty123
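On the Python side the secret then arrives as an ordinary environment variable, so reading it is plain os.environ. A minimal sketch, assuming the ibm_db driver; the hostname, port, database, and user below are placeholders, not values from the question:

import os

import ibm_db  # IBM's DB2 driver (pip install ibm_db); other DB2 clients work the same way

# OpenShift injects the secret value as an ordinary environment variable.
password = os.environ["DB2_PASSWORD"]

# Connection details below are placeholders -- substitute your own.
conn_str = (
    "DATABASE=BLUDB;"
    "HOSTNAME=db2.example.com;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=db2user;"
    f"PWD={password};"
)
conn = ibm_db.connect(conn_str, "", "")

stmt = ibm_db.exec_immediate(conn, "SELECT 1 FROM SYSIBM.SYSDUMMY1")
print(ibm_db.fetch_tuple(stmt))

If you mount the secret as a volume instead, read the mounted file (its path depends on the mountPath you choose) rather than the environment variable.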

Related

How to correctly pass credentials via an API token to a Helm chart?

I'm trying to pass credentials to a Helm chart: https://artifacthub.io/packages/helm/bitnami/mysql
I use the Secrets Store CSI Driver with the AWS provider. Credentials are passed to my deployment without any problems via a volume and environment variables, but when I try to import these credentials using the auth.existingSecret parameter, I am not authenticated to the database and my database instance is not provisioned.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: default
spec:
  provider: aws
  secretObjects:
    - secretName: api-token
      type: Opaque
      data:
        - objectName: secret-token
          key: mysql-root-password
        - objectName: secret-token
          key: mysql-replication-password
        - objectName: secret-token
          key: mysql-password
  parameters:
    objects: |
      - objectName: prod/zajac/mysql
        objectType: secretsmanager
        objectAlias: secret-token
I want to emphasize that I provide three different values, so the replication and root passwords are not duplicates; replication itself is not the problem. In the auth.existingSecret parameter I pass the name of the secret, "api-token", which is generated before the database chart is installed.
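One way to narrow this down (a debugging sketch, not a fix): dump the secret the CSI driver actually synced and confirm it contains exactly the keys the bitnami/mysql chart looks up. Assuming kubectl access from Python:

import base64
import json
import subprocess

# Print the keys and values synced into the api-token secret and compare them
# with what the chart expects (mysql-root-password, mysql-replication-password,
# mysql-password).
raw = subprocess.run(
    ["kubectl", "get", "secret", "api-token", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
for key, value in json.loads(raw)["data"].items():
    print(key, "=", base64.b64decode(value).decode())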

How to deploy a Keycloak container in production on OpenShift?

I found an official guide on deploying a Keycloak container on OpenShift in development mode, but is there any guide on deploying a Keycloak container in production mode on OpenShift with an external DB?
I assume you are using the Keycloak Operator to deploy Keycloak.
The Keycloak Custom Resource has an option to use an external database, as follows:
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  ...
  serverConfiguration:
    - name: db
      value: postgres # plain-text value
    - name: db-url-host
      value: postgres-db # plain-text value
    - name: db-username
      secret: # Secret reference
        name: keycloak-db-secret # name of the Secret
        key: username # name of the key in the Secret
    - name: db-password
      secret: # Secret reference
        name: keycloak-db-secret # name of the Secret
        key: password # name of the key in the Secret
Reference: https://www.keycloak.org/operator/advanced-configuration
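The CR above references keycloak-db-secret, which has to exist before the Keycloak resource is applied. A minimal sketch of creating it, assuming the kubernetes Python client and placeholder credentials:

from kubernetes import client, config

# Create the keycloak-db-secret referenced by the Keycloak CR above.
# Username and password are placeholders -- substitute real credentials.
config.load_kube_config()  # or config.load_incluster_config() inside a pod
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="keycloak-db-secret"),
    string_data={"username": "keycloak", "password": "ChangeMe123"},
)
client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)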

Is it possible to define host mappings in GitHub Actions?

Now I am using GitHub Actions to build my project. On my local machine, I define the local host address mapping in /etc/hosts like this:
11.19.178.213 postgres.dolphin.com
and in my database config like this:
spring.datasource.druid.post.master.jdbc-url = jdbc:postgresql://postgres.dolphin.com:5432/dolphin
On my production server I can edit the mapping to point to a different IP address for my database. But now I am running unit tests in GitHub Actions. How do I edit the host mapping so that the database hostname maps to the service container in GitHub Actions? I defined the container in GitHub Actions like this:
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13.2
        env:
          POSTGRES_PASSWORD: postgrespassword
          POSTGRES_DB: dolphin
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
What should I do to handle the host mapping? I have already searched the docs, but I found nothing about this problem.
Do it like this:
- name: Add hosts to /etc/hosts
  run: |
    echo "127.0.0.1 postgres.dolphin.com" | sudo tee -a /etc/hosts
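This works because the job runs directly on the runner, where the service container's port 5432 is published, so pointing the alias at 127.0.0.1 is enough. A quick sanity check in a later step, assuming Python is available on the runner:

import socket

# The alias added to /etc/hosts should now resolve to the runner itself...
print(socket.gethostbyname("postgres.dolphin.com"))  # expected: 127.0.0.1

# ...and the published service port should accept connections.
with socket.create_connection(("postgres.dolphin.com", 5432), timeout=5):
    print("postgres reachable")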

How to use Docker Hub private repos at GKE?

I am migrating a huge cloud cluster from AWS to GKE.
But I am having trouble authenticating with Docker Hub; I keep getting:
Failed to pull image "myorg/mycontainer": rpc error: code = Unknown desc = Error response from daemon: repository myorg/mycontainer not found: does not exist or no pull access
It seems that the way to authenticate gcloud with Docker has recently changed, so what's the proper way of doing this?
You have to pass your Docker Hub login credentials as a secret:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
where --docker-server=https://index.docker.io/v1/
Now you can create pods that reference that secret by adding an imagePullSecrets section to the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
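For reference, the kubectl command above just builds a kubernetes.io/dockerconfigjson Secret. A sketch of the equivalent in Python, assuming the kubernetes client; the credentials are placeholders:

import base64
import json

from kubernetes import client, config

# Equivalent of `kubectl create secret docker-registry myregistrykey ...`:
# a Secret holding a .dockerconfigjson payload with the registry credentials.
auth = base64.b64encode(b"DOCKER_USER:DOCKER_PASSWORD").decode()
dockerconfig = {
    "auths": {
        "https://index.docker.io/v1/": {
            "username": "DOCKER_USER",
            "password": "DOCKER_PASSWORD",
            "email": "DOCKER_EMAIL",
            "auth": auth,
        }
    }
}
config.load_kube_config()
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="myregistrykey", namespace="awesomeapps"),
    type="kubernetes.io/dockerconfigjson",
    # string_data values are plain text; the API server base64-encodes them.
    string_data={".dockerconfigjson": json.dumps(dockerconfig)},
)
client.CoreV1Api().create_namespaced_secret(namespace="awesomeapps", body=secret)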

Why does MySQL Docker container complain that MYSQL_ROOT_PASSWORD env must be defined when using Docker 17.03 secrets?

I'm trying to adapt Docker's WordPress secrets example (link below) to work in my Docker Compose setup (for Drupal).
https://docs.docker.com/engine/swarm/secrets/#/advanced-example-use-secrets-with-a-wordpress-service
However, when the 'mysql' container is spun up, the following error is output:
"error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD"
I created the secrets using the 'docker secret create' command:
docker secret create mysql_root_pw tmp-file-holding-root-pw.txt
docker secret create mysql_pw tmp-file-holding-pw.txt
After running the above, the secrets 'mysql_root_pw' and 'mysql_pw' now exist in the swarm environment. Verified by doing:
docker secret ls
Here are the relevant parts from my docker-compose.yml file:
version: '3.1'
services:
  mysql:
    image: mysql/mysql-server:5.7.17
    environment:
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_pw"
      - MYSQL_PASSWORD_FILE="/run/secrets/mysql_pw"
    secrets:
      - mysql_pw
      - mysql_root_pw
secrets:
  mysql_pw:
    external: true
  mysql_root_pw:
    external: true
When I do "docker stack deploy MYSTACK", I get the error mentioned above when the 'mysql' container attempts to start.
It seems like "MYSQL_PASSWORD_FILE" and "MYSQL_ROOT_PASSWORD_FILE" are not standard environment variables recognized by MySQL, and it's still expecting the "MYSQL_ROOT_PASSWORD" environment variable.
I'm using Docker 17.03.
Any suggestions?
Thanks.
You get this error if your secret is an empty string as well. That is what happened to me: the secret was mounted and the service properly configured, but it still failed because there was no password.
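A quick way to rule that out is to inspect the mounted secret files from inside the container (for example via docker exec). A minimal sketch; the paths match the compose file above:

from pathlib import Path

# Confirm the secrets were mounted and are non-empty before blaming the image.
for name in ("mysql_root_pw", "mysql_pw"):
    f = Path("/run/secrets") / name
    value = f.read_text().strip() if f.exists() else ""
    print(f"{name}: exists={f.exists()} non_empty={bool(value)}")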