Setting up Microcks in OpenShift

I am trying to set up Microcks in OpenShift.
I am just using the free OpenShift Online Starter at https://console.starter-us-west-2.openshift.com/console/catalog
At http://microcks.github.io/installing/openshift/, the command is given as below:
oc new-app --template=microcks-persistent \
--param=APP_ROUTE_HOSTNAME=microcks-microcks.192.168.99.100.nip.io \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-microcks.192.168.99.100.nip.io \
--param=OPENSHIFT_MASTER=https://192.168.99.100:8443 \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client
In that command, how can I find the route for my project? My project is called testcoolers.
So what should I use instead of microcks-microcks.192.168.99.100.nip.io? I guess something will replace 192.168.99.100.nip.io.
The same goes for the Keycloak hostname. Also, what will be the public OpenShift master address? It is currently https://192.168.99.100:8443.

Installing Microcks appears to assume some level of OpenShift familiarity. Also, there are several restrictions that make this not an ideal install for OpenShift Online Starter, but it can definitely still be made to work.
# Create the template within your namespace
oc create -f https://raw.githubusercontent.com/microcks/microcks/master/install/openshift/openshift-persistent-full-template-https.yml
# Deploy the application from the template, be sure to replace <NAMESPACE> with your proper namespace
oc new-app --template=microcks-persistent-https \
--param=APP_ROUTE_HOSTNAME=microcks-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=OPENSHIFT_MASTER=https://api.starter-us-west-2.openshift.com \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client \
--param=MONGODB_VOL_SIZE=1Gi \
--param=MEMORY_LIMIT=384Mi \
--param=MONGODB_MEMORY_LIMIT=384Mi
# The ROUTE params above are still necessary for the variables, but in Starter, you can't specify a hostname in a route, so you'll have to manually create the routes
oc create route edge microcks --service=microcks --insecure-policy=Redirect
oc create route edge keycloak --service=microcks-keycloak --insecure-policy=Redirect
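To answer the original question about finding the route for your project (testcoolers in this case): once the routes exist, just list them and read the generated hostnames, for example:
# List the routes in your project and read the HOST/PORT column
oc get routes -n testcoolers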
You should also see an error about not being able to create the OAuthClient. This is expected because you don't have permission to create one for the whole cluster. You will instead need to manually create a user in Keycloak.
I was able to get this to deploy successfully and log in on OpenShift Online Starter, so use the comments if you struggle at all.

How to allow IP dynamically using ingress controller

My structure
Kubernetes cluster on GKE
Ingress controller deployed using helm
An application which returns a list of IP ranges (note: it gets updated periodically)
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
The secured application, which is not working
What am I trying to do?
Have my clients' IPs available from my API endpoint, which is done:
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Deploy my example app with an ingress so it can pull from https://allowed.domain.com and allow those IPs to access the app
What did I try that didn't work?
Deploying the application with the include feature of nginx:
nginx.ingress.kubernetes.io/configuration-snippet: |
  include /tmp/allowed-ips.conf;
  deny all;
Yes, it works, but the problem is that when /tmp/allowed-ips.conf gets updated, the ingress config doesn't.
I also tried to use an if condition to pull the IPs from the endpoint and deny access if the user is not in the list:
nginx.ingress.kubernetes.io/configuration-snippet: |
  set $deny_access off;
  if ($remote_addr !~ (https://2ce8-73-56-131-204.ngrok.io)) {
    set $deny_access on;
  }
I am using the nginx.ingress.kubernetes.io/whitelist-source-range annotation, but that is not what I am looking for.
None of the options are working for me.
From the official docs of the ingress-nginx controller:
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app)
After the nginx ingress resource is initially created, the ingress controller assembles the nginx.conf file and uses it for routing traffic. The Nginx web server does not automatically reload its configuration when nginx.conf or other included config files change.
So, you can work around this problem in several ways:
update the k8s ingress resource with new IP addresses and then apply the changes to the Kubernetes cluster (kubectl apply / kubectl patch / something else), for your options 2 and 3 (see the sketch after this list).
run nginx -s reload inside an ingress Pod to reload the nginx configuration, for your option 1 with the included allow-list file:
$ kubectl exec ingress-nginx-controller-xxx-xxx -n ingress-nginx -- nginx -s reload
try to write a Lua script (there are good examples of Nginx+Lua+Redis setups available). You should have a good understanding of nginx and Lua to estimate whether it is worth trying.
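For the resource-update options, a minimal sketch (assuming hypothetical namespace and ingress names; the next answer expands this into a scheduled job) could regenerate the snippet annotation from the endpoint like this:
# Fetch the current ranges and rewrite the configuration-snippet annotation
RANGES=$(curl -s https://allowed.domain.com)
SNIPPET=$(echo "$RANGES" | tr ',' '\n' | awk '{print "allow "$1";"}'; echo "deny all;")
kubectl -n my-namespace annotate ingress my-ingress --overwrite \
  nginx.ingress.kubernetes.io/configuration-snippet="$SNIPPET"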
Sharing what I implemented at my workplace. We had a managed monitoring tool called Site24x7. The tool pings our server from their VMs, which have dynamic IPs, and we had to automate whitelisting those IPs on GKE.
nginx.ingress.kubernetes.io/configuration-snippet allows you to set arbitrary Nginx configurations.
Set up a K8s CronJob resource on the specific namespace.
The CronJob runs a shell script, which
fetches the list of IPs to be allowed (curl, getent, etc.)
generates a set of NGINX configurations (= the value for nginx.ingress.kubernetes.io/configuration-snippet)
runs a kubectl command which overwrites the annotation of the target ingresses.
Example shell/bash script:
#!/bin/bash
site24x7_ip_lookup_url="site24x7.enduserexp.com"
site247_ips=$(getent ahosts $site24x7_ip_lookup_url | awk '{print "allow "$1";"}' | sort -u)
ip_whitelist=$(cat <<-EOT
# ---------- Default whitelist (Static IPs) ----------
# Office
allow vv.xx.yyy.zzz;
# VPN
allow aa.bbb.ccc.ddd;
# ---------- Custom whitelist (Dynamic IPs) ----------
$site247_ips # Here!
deny all;
EOT
)
for target_ingress in $TARGET_INGRESS_NAMES; do
  kubectl -n $NAMESPACE annotate ingress/$target_ingress \
    --overwrite \
    nginx.ingress.kubernetes.io/satisfy="any" \
    nginx.ingress.kubernetes.io/configuration-snippet="$ip_whitelist" \
    description="*** $(date '+%Y/%m/%d %H:%M:%S') NGINX annotation 'configuration-snippet' updated by cronjob $CRONJOB_NAME ***"
done
The shell/bash script can be stored as a ConfigMap and mounted into the CronJob's pod.
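A minimal sketch of that wiring, assuming the script above is saved locally as update-ip-whitelist.sh (the ConfigMap and namespace names are illustrative); the CronJob's pod spec then mounts this ConfigMap as a volume and runs the script on its schedule:
# Store the script in a ConfigMap so the CronJob can mount and execute it
kubectl -n my-namespace create configmap ip-whitelist-updater \
  --from-file=update-ip-whitelist.sh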

oc-command to forward local-ports to remote debug ports based on service-name instead of pod-name

To minimize the setup time for attaching a debug session to the remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use 2 steps before attaching the debugger to the JVM socket with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8 -> this needs to be changed after every deploy
8000
3000
3001
background info:
this is the info in the service's YAML under spec > containers > env:
- name: JAVA_TOOL_OPTIONS
value: >-
-agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=3000
-Dcom.sun.management.jmxremote.rmi.port=3001
-Djava.rmi.server.hostname=127.0.0.1
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re-)deploy, I am trying to find an oc command which can be used to port-forward without having to provide the pod name (e.g. based on the service name).
Or a completely different solution that allows me to hit one button to set up a debug session (preferably in IntelliJ).
> Screenshot IntelliJ settings
----------------------------- edit after tips -------------------------------
For now I made a small batch script which does the trick.
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other IntelliJent solutions:
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath={.items[?(@.status.phase=='Running')].metadata.name} > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch that programmatically and consistently so you don't need to update it in place every time.
There are a number of ways you can do this: via jsonpath, go templates, bash, etc. An example would be the following, replacing your app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
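Building on that, a hedged sketch of a one-shot port-forward that avoids editing the pod name each time (the app label and ports come from the question; forwarding directly to a service depends on your oc/kubectl version supporting svc/ targets):
# Look up the first matching pod and forward the debug/JMX ports
POD=$(oc get pod -l app=replace-me -o jsonpath='{.items[0].metadata.name}')
oc port-forward "$POD" 8000 3000 3001
# On recent clients you may be able to skip the lookup entirely
oc port-forward svc/replace-with-service-name 8000 3000 3001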

GCE Service Account with Compute Instance Admin permissions

I have set up a compute instance to run cronjobs on Google Compute Engine, using a service account with the following roles:
Custom Compute Image User + Deletion rights
Compute Admin
Compute Instance Admin (beta)
Kubernetes Engine Developer
Logs Writer
Logs Viewer
Pub/Sub Editor
Source Repository Reader
Storage Admin
Unfortunately, when I ssh into this cronjob runner instance and then run:
sudo gcloud compute --project {REDACTED} instances create e-latest \
--zone {REDACTED} --machine-type n1-highmem-8 --subnet default \
--maintenance-policy TERMINATE \
--scopes https://www.googleapis.com/auth/cloud-platform \
--boot-disk-size 200 \
--boot-disk-type pd-standard --boot-disk-device-name e-latest \
--image {REDACTED} --image-project {REDACTED} \
--service-account NAME_OF_SERVICE_ACCOUNT \
--accelerator type=nvidia-tesla-p100,count=1 --min-cpu-platform Automatic
I get the following error:
The user does not have access to service account {NAME_OF_SERVICE_ACCOUNT}. User: {NAME_OF_SERVICE_ACCOUNT} . Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
Is there some other privilege besides Compute Instance Admin that I need in order to create instances from my instance?
Further notes: (1) when I don't specify --service-account, the error is the same except that the service account my user doesn't have access to is the default '51958873628-compute@developer.gserviceaccount.com'.
(2) adding/removing sudo doesn't change anything
Creating an instance that uses a service account requires you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the iam.serviceAccountUser role to your service account (either on the entire project or on the specific service account you want to be able to create instances with).
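A hedged command-line sketch of granting that role on the specific service account (here the member is the instance's own service account, as in the error message; substitute whichever identity is actually making the call):
# Allow the member to act as (create instances with) the target service account
gcloud iam service-accounts add-iam-policy-binding NAME_OF_SERVICE_ACCOUNT \
  --member="serviceAccount:NAME_OF_SERVICE_ACCOUNT" \
  --role="roles/iam.serviceAccountUser"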
Find out who you are first
if you are using the Web UI: what email address did you use to log in?
if you are using local gcloud or terraform: find the json file that contains your credentials for gcloud (often named similarly to myproject*.json) and see if it contains the email: grep client_email myproject*.json
GCP IAM change
Go to https://console.cloud.google.com
Go to IAM
Find your email address
Member -> Edit -> Add Another Role -> type in the role name Service Account User -> Add
(You can narrow it down with a Condition, but let's keep it simple for a while.)
Make sure that NAME_OF_SERVICE_ACCOUNT is a service account from the current project.
If you change the project ID and don't change NAME_OF_SERVICE_ACCOUNT, you will encounter this error.
This can be checked on Google Console -> IAM & Admin -> IAM.
Then look for the service account name ....-compute@developer.gserviceaccount.com and check whether the numbers at the beginning are correct. Each project has a different number in this name.
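If in doubt, a quick hedged way to check the number that should appear in that default account (the project ID is a placeholder):
# Prints the numeric project number used in PROJECT_NUMBER-compute@developer.gserviceaccount.com
gcloud projects describe my-project-id --format='value(projectNumber)'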

Creating Openshift build from Dockerfile

I need to create an OpenShift build from a Docker image (Tomcat 9) which is extended through a Dockerfile. The Dockerfile adds an application to the webapps folder.
Here's my attempt:
oc new-build --name=runtime --docker-image="docker.io/tomcat:9.0-jre8" \
--source-image=tomcat:9.0-jre8 \
--source-image-path=/home/carla/build/example.war:. \
--dockerfile=$'FROM tomcat:9.0-jre8\nCOPY example.war /usr/local/tomcat/webapps/example.war'
I have placed the web application (example.war) in the path /home/carla/build and I need to copy it into the /usr/local/tomcat/webapps folder of the Docker image.
The error I get is:
error: BuildConfig "runtime" is invalid: spec.triggers: Invalid value:
. . .
multiple ImageChange triggers refer to the same image stream tag
--> Failed
I think the problem is related to the --source-image parameter; however, I cannot omit it as I have specified --source-image-path.
Any idea how to fix it?
Thanks
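One hedged alternative (not from the thread above), assuming the goal is simply to layer the local example.war onto the Tomcat image, is a binary Docker build that drops --source-image entirely:
# Place a Dockerfile next to example.war in /home/carla/build, e.g.:
#   FROM tomcat:9.0-jre8
#   COPY example.war /usr/local/tomcat/webapps/example.war
# Then create a binary docker-strategy build and upload that directory as the build context
oc new-build --name=runtime --binary --strategy=docker
oc start-build runtime --from-dir=/home/carla/build --follow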

How to create re-encrypting route for hawkular-metrics in OpenShift Enterprise 3.2

As per the documentation to enable cluster metrics, I should create a re-encrypting route as shown below:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
What exactly should I use for these keys and certificates?
Do these already exist somewhere, or do I need to create them?
Openshift Metrics developer here.
Sorry if the docs were not clear enough.
The route is used to expose Hawkular Metrics, particularly to the browser running the OpenShift console.
If you don't specify any certificates, the system will use a self-signed certificate instead. The browser will complain that this self-signed certificate is not trusted, but you can usually just click through to accept it anyway. If you are ok with this, then you don't need to do any extra steps.
If you want the browser to trust this connection by default, then you will need to provide your own certificates signed by a trusted certificate authority. This is just like generating your own certificate for any normal site running under HTTPS.
From the following command:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
'cert' corresponds to your certificate signed by the certificate authority
'key' corresponds to the key for your certificate
'ca-cert' corresponds to the certificate authority's certificate
'dest-ca-cert' corresponds to the certificate authority which signed the self-signed certificate generated by the metrics deployer
The docs https://docs.openshift.com/enterprise/3.2/install_config/cluster_metrics.html#metrics-reencrypting-route should explain how to get the dest-ca-cert from the system
First of all, as far as I know, using a re-encrypting route is optional. The documentation mentions deploying without importing any certificates:
oc secrets new metrics-deployer nothing=/dev/null
You should be able to start with that and get Hawkular working (for instance, you'll be able to curl with the '-k' option). But a re-encrypting route is sometimes necessary: some clients refuse to communicate with untrusted certificates.
This page explains what are the certificates needed here: https://docs.openshift.com/enterprise/3.1/install_config/cluster_metrics.html#metrics-reencrypting-route
Note that you can also configure it from the web console if you find it more convenient: from https://(your_openshift_host)/console/project/openshift-infra/browse/routes, you can create a new route and upload the certificate files from that page. Under "TLS termination" select "Re-Encrypt", then provide the 4 certificate files.
If you don't know how to generate self-signed certificates, you can follow the steps described here: https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/ . You'll end up with a rootCA.pem file (use it for "CA Certificate"), a device.key file (or name it hawkular.key, and upload it as the private key) and a device.crt file (you can name it hawkular.pem; it's the PEM-format certificate). When asked for the Common Name, make sure to enter the hostname for your Hawkular server, like "hawkular-metrics.example.com".
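For reference, a hedged sketch of that guide's flow using openssl (file names match those mentioned above; adjust key sizes and validity periods to your policy):
# Create your own CA (rootCA.pem is the "CA Certificate")
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem
# Create a key and CSR for Hawkular; set the Common Name to hawkular-metrics.example.com
openssl genrsa -out device.key 2048
openssl req -new -key device.key -out device.csr
# Sign the CSR with your CA to get the certificate (device.crt / hawkular.pem)
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device.crt -days 500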
The final one to provide is the current self-signed certificate used by Hawkular, under the so-called "Destination CA Certificate". The OpenShift documentation explains how to get it: run
base64 -d <<< \
`oc get -o yaml secrets hawkular-metrics-certificate \
| grep -i hawkular-metrics-ca.certificate | awk '{print $2}'`
and, if you're using the web console, save it to a file then upload it under Destination CA Certificate.
Now you should be done with re-encrypting.