How to set up OKD to use an external Nexus Docker registry?

I'd like to replace the default Docker registry in OKD with an existing Nexus Docker registry. How do I replace the existing Docker registry in OKD with the Nexus Docker registry?
The idea is to host all my images in that Nexus repo, for example when I use S2I.

This topic (as #GhostCat indicates) leaves your options pretty open until you make some additional decisions. There are a couple of resources/topics you may want to investigate to help home in on your desired outcome:
1) Pushing/Pulling from an external registry
In this topic, I'd suggest that you don't "replace" the internal registry, but rather run a nexus registry alongside the internal registry. The internal registry can still be used to store system components, s2i image builders, etc etc., while Nexus can be the primary registry for your applications.
For image pulling: each Docker host can be configured with additional external registries to search through, and you can point workloads at Nexus simply by editing your image streams or including the full image path in your deployment / statefulset / etc.
You can also modify your build configs to store the built images in an external registry: https://blog.openshift.com/pushing-application-images-to-an-external-registry/
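As a rough sketch (the registry host, project, app and secret names below are placeholders, not anything from your setup), the usual steps are to create a push secret, link it to the builder service account, and point the build output at the external registry:

oc create secret docker-registry nexus-push --docker-server=nexus.example.com:5000 --docker-username=deployer --docker-password=changeme
oc secrets link builder nexus-push
# redirect the build output from the internal registry to Nexus
oc patch bc/myapp -p '{"spec":{"output":{"to":{"kind":"DockerImage","name":"nexus.example.com:5000/myproject/myapp:latest"},"pushSecret":{"name":"nexus-push"}}}}'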
2) Moving all registry features/components to Nexus
In this topic, you will need to read up on the disconnected installation notes around OpenShift and adapt them to your own Nexus config. Essentially, you need to load up all of the images in Nexus and modify all of the templates/imagestreams in openshift to point at the new repository.
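For the imagestream side, repointing an existing tag at your Nexus host can be done with oc tag (the hostname and image names below are only illustrative):

oc tag --source=docker nexus.example.com:5000/openshift/nodejs:10 openshift/nodejs:10 --scheduled
# --scheduled makes OpenShift periodically re-import the tag so it keeps tracking the image in Nexus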
Follow the guidance here and modify accordingly https://blog.openshift.com/pushing-application-images-to-an-external-registry/
This is a less common pattern and really only followed by teams in fully disconnected environments. Most teams we meet follow the guidance in #1 above.

Thanks for that article, it's helpful, but is there not a way to make this the default automatically, so that all my build images always go to my external Nexus Docker repo? Because if they go to the internal registry, then we lose the images after a reboot or a crash of the internal registry container.
Especially the S2I images.

I see a couple of ideas to discuss;
1) If you just want your build outputs to target another external repo instead of the default internal one, this can be codified in any of your external CI/CD systems as part of the build config. The most common way in OpenShift would be to have Jenkins create the build configs through a Jenkinsfile in your code repo.
I'm not aware of a simple documented way to override a default configuration or setting for all build outputs across the system.
2) If you are losing your images on the internal registry, then you are not leveraging persistent storage, which should be considered. See https://docs.okd.io/3.10/install_config/registry/extended_registry_configuration.html#docker-registry-configuration-reference-storage
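As a minimal sketch (claim name and size are up to you), backing the default registry with a PVC looks roughly like this:

oc set volume dc/docker-registry --add --name=registry-storage -t pvc --claim-mode=ReadWriteOnce --claim-size=20Gi --claim-name=registry-storage --overwrite
# once the claim is bound, the registry keeps its images across pod restarts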

How to manage settings in Openshift?

The profile.properties file is not found in the source code in the repository. Is it possible to use environment variables in OpenShift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in an OpenShift environment?
Environment variables are a first-class concept in OpenShift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, as they then won't change when you move the image through environments, but it may be necessary to configure your build or to set things that won't change (e.g. set the port number Node.js uses to match the official Node.js image with PORT=8080).
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod launched by that deployment. This is a fairly common way of setting application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place.
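A rough sketch of that last approach (the resource, app and key names are just placeholders):

oc create configmap shared-config --from-literal=LOG_LEVEL=info
oc create secret generic db-creds --from-literal=DB_PASSWORD=changeme
# both deployments now pick up the same variables; change them once in the ConfigMap/Secret
oc set env dc/app-a --from=configmap/shared-config
oc set env dc/app-a --from=secret/db-creds
oc set env dc/app-b --from=configmap/shared-config
oc set env dc/app-b --from=secret/db-creds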
It's common to see devs use a .env file that is listed in .gitignore so it is not in git. In the past I have written scripts to load that into a Secret within OpenShift and then used envFrom to set that Secret on the deployment. Then have a .env.staging and a .env.live that we git-secret encrypt into git.
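Something along these lines (the secret and deployment names are made up for illustration):

oc create secret generic myapp-env --from-env-file=.env
oc set env dc/myapp --from=secret/myapp-env
# every KEY=value line in .env becomes an environment variable on the pods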
The problem with .env files is that they tend to get messy and accumulate unused junk after a while. So we broke the file up into one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings, and a ConfigMap for shared settings.
These days we use Helmfile to load all our config from git based on git webhooks. All the config is YAML in a git repo (with secret YAML encrypted). If you merge a change to the config git repo, a webhook handler decrypts the config and runs Helmfile to update the settings in OpenShift. I am in the process of open-sourcing everything, including using a chatbot to manage releases (optional), over on GitHub.
I should also say that OpenShift automatically creates many environment variables to help you configure your apps. In each project, a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
OpenShift also sets up internal DNS entries for your services. This means that if App A uses App B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that OpenShift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by OpenShift). So our apps can find a Redis service running in the same project using that technique.
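As a concrete illustration (the service name "redis" is just an example), a pod in the same project sees something like:

echo $REDIS_SERVICE_HOST   # cluster IP of the "redis" service, injected automatically
echo $REDIS_SERVICE_PORT   # its port
# or skip the env vars and use the internal DNS name directly:
#   redis.<project-name>.svc.cluster.local (or just "redis" from within the same project)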

How to set where to download the VM in minishift?

It downloads OpenShift into the C:\Users\[user]\.minishift\machines folder. How do I change this location to, say, D:\My VMs\? The config set command is not very helpful in explaining which setting controls which location.
Minishift version: v1.15.1
Platform: Windows
Driver: Hyper-V
Any help would be greatly appreciated.
It looks like the machines directory can't be set directly through config. It is set relative to a base directory in instance_dirs.go.
That base directory, by default, is the .minishift directory in the home directory of the user, e.g. C:\Users\[user]\.minishift on Windows, but this can be overridden by setting the environment variable MINISHIFT_HOME.
The base directory could also be a profile directory, if you are not using the default profile (the default being minishift).
$ minishift profile list
- minishift Stopped
$ minishift profile set myprofile
Profile 'myprofile' set as active profile.
The machines directory for myprofile would then be created under $MINISHIFT_HOME/profiles/myprofile/machines, e.g. on Windows C:\Users\[user]\.minishift\profiles\myprofile\machines.
So you can set MINISHIFT_HOME and move the whole contents of the .minishift directory, including machines, somewhere else but it doesn't look like you can move just machines alone.
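On Windows that would look roughly like this (run from an admin command prompt; the target path is only an example):

setx MINISHIFT_HOME "D:\My VMs\.minishift"
robocopy "C:\Users\[user]\.minishift" "D:\My VMs\.minishift" /E /MOVE

Open a new command prompt afterwards so the new variable is picked up, then run minishift start as usual.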
Perhaps, you could solve this at the OS-level by creating a symlink between C:\Users\[user]\.minishift\machines and D:\My VMs\.
In case it helps others, and so they don't need to test the different ways of using symlinks, as well as to expand on #codemonkey's great answer, this is what I did to use a symlink, as my C drive had no available space. I'm also using Hyper-V as the driver.
Note: I do have minishift.exe installed in the apps folder on my D drive
Note 2: I did have to run the command prompt in admin mode
From the C:\Users\[user]\.minishift folder I moved the "machines" folder to D:\Apps\minishift-1.32.0-windows-amd64\
I first tried a soft link, which didn't work. I then tried a hard link, but I was getting errors, so I used a "directory junction" link with the /J switch, as such: C:\WINDOWS\system32>mklink /J C:\Users\[user]\.minishift\machines D:\Apps\minishift-1.32.0-windows-amd64\machines
You should get the following result: Junction created for C:\Users\[user]\.minishift\machines <<===>> D:\Apps\minishift-1.32.0-windows-amd64\machines
Then, if necessary, run minishift delete --clear-cache. WARNING: this will delete any previous images and hosts you might have!
Then start minishift as normal with minishift start
Grab a cup of coffee (or go smoke a cigarette or vape) as it will take a while for the OpenShift server to start.
Hope this answer might help others who face a similar issue.

How to run base centos image in minishift?

I'm trying to learn about OpenShift: how it works, how to run apps, how to build images, etc.
To start with something which I thought would be rather simple, I decided to run a pod with the pure CentOS 7 OS, based on this image. I installed minishift v1.11.0+4459917 locally, created a new project, and ran the command:
oc new-app openshift/base-centos7 in this project. As a result I received the following message:
--> Found Docker image bb81a09 (11 months old) from Docker Hub for "openshift/base-centos7"
* An image stream will be created as "pon3:latest" that will track this image
* This image will be deployed in deployment config "pon3"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/pon3 --port=[port]' later
* WARNING: Image "openshift/base-centos7" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "pon3" created
deploymentconfig "pon3" created
--> Success
Run 'oc status' to view your app.
As I can see in the warning, this image runs as root, which is clearly not good practice, but it can be worked around, as described here and here. I tried both approaches - I created a new service account with the anyuid SCC, and I assigned the anyuid SCC to the default SA. Unfortunately I'm still not able to run a pod based on this image. The result looks like this:
oc get pods
mycentos-1-deploy 1/1 Running 0 32s
mycentos-1-p1vh5 0/1 CrashLoopBackOff 1 30s
I tried to troubleshoot like this:
oc logs -p mycentos-1-p1vh5
This image serves as the base image for all OpenShift v3 S2I builder images.
It provides all essential libraries and development tools needed to
successfully build and run an application.
To use this image as a base image, you need to have 's2i/bin' directory in the
same directory as your S2I image Dockerfile. This directory should contain S2I
scripts.
This base image also provides the default user you should use to run your
application. Your Dockerfile should include this instruction after you finish
installing software:
USER default
The default directory for installing your application sources is
'/opt/app-root/src' and the WORKDIR and HOME for the 'default' user is set
to this directory as well. In your S2I scripts, you don't have to use absolute
path, but rather rely on the relative path.
To learn more about S2I visit: https://github.com/openshift/source-to-image
Additionally I tried to troubleshoot with oc adm diagnostics but to be honest I didn't see anything relevant to this issue.
I'm clearly missing something here. Can someone give me a hint as to how this should be handled, or how I can troubleshoot this? Is there a different way to run a pure CentOS OS?
Thank you for any help.
The image you want to deploy using oc new-app needs to have an actual application in it. The openshift/base-centos7 image is a base image only, on which other images are built, and doesn't have an application in it.
If you just want to spin up a container and be presented with a shell environment in which you can play, use the oc run command instead.
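A minimal sketch (the pod name is arbitrary):

oc run centos-shell -it --image=centos:7 --restart=Never -- /bin/bash
# drops you into an interactive bash shell in a throwaway pod; clean up afterwards with:
oc delete pod centos-shell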
OpenShift isn't like a traditional VPS where you just spin up permanent shell environments which you then access to set up your application manually. The idea is that you build your application into an image and deploy the application.
I would suggest you go read:
https://www.openshift.com/promotions/for-developers.html
https://www.openshift.com/promotions/devops-with-openshift.html
and work through the exercises at:
https://learn.openshift.com
to learn more about what OpenShift is and how to use it.

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json.
I would recommend that all of the packages in composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing in the packages updates when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and to ensure that you don't accidentally deploy untested code if one of the modules happens to get updated between you testing and your deployment.
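For illustration (the second package name and commit hash are made up), pinning looks like:

composer require monolog/monolog:1.24.0                   # exact Packagist version
composer require "acme/custom-theme:dev-master#4f9a1b2c"   # VCS package pinned to a specific commit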
Actual deployments
Possible Method 1
My opinion is slightly controversial in that when it comes to Composer for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary and that locking down the version numbers will be enough to ensure any remote system failures will be handled, it clogs up the VCS tree etc etc - I won't go into these now, there are arguments for and against (a lot of it opinion based), but as you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By using a symlink on your server for your document root, you can build into a new directory and only switch the symlink over to it once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
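Roughly (the paths and repo URL are placeholders):

RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@example.com:me/app.git "$RELEASE"
composer install --no-dev --prefer-dist -d "$RELEASE"
ln -sfn "$RELEASE" /var/www/current    # the web server's document root points at /var/www/current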
Possible Method 3
Composer can use "artifacts" rather than a remote server, this will mean that you will basically be creating a "repository folder" of your vendor files, this is an alternative to adding the entire vendor folder into your VCS - but it also protects you against Github / Packagist outages / files being removed and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip file rather than being retrieved from a server - this folder can be stored remotely - think of it as a poor mans private packagist (another option btw).
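Setting it up is roughly this (the folder name is arbitrary; "artifact" is composer's built-in repository type, and the one-liner should be equivalent to adding an artifact entry to the repositories section of composer.json by hand):

composer config repositories.local artifact ./artifacts/
# drop zipped releases of your packages (each zip containing its own composer.json) into ./artifacts/
composer install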
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail), it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
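Whatever CI tool you pick, the build stage boils down to something like this (a sketch; adjust to your own test runner and packaging):

composer install --prefer-dist                                   # full install, including dev deps, for testing
vendor/bin/phpunit                                               # run the tests; fail the build if they fail
composer install --no-dev --prefer-dist --optimize-autoloader    # strip back to production deps
tar -czf app-$(git rev-parse --short HEAD).tar.gz --exclude='.git' .   # the artifact to push/deploy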
Hope this helps :)

How to customize an openshift repo

I have read up on OpenShift cartridges and I still can't see how to customize a repo for OpenShift like this one for WordPress: https://github.com/openshift/wordpress-example
I cloned that repo to my local machine and I'd like to just add some new plugins. Can someone explain, or point me to an article on how to do so?
They've updated their WordPress cartridge from https://openshift.redhat.com/app/console/applications - I guess after so many people were interested in having it scale, they tweaked it a bit so it could. When they did, they made it a little too easy, it seems.
When you clone an application created from the standard cartridge (scaled or not), all your plugins and themes should be added under /.openshift. It is important to note that you don't use .zip files in these folders; you'll have to extract your plugins and themes and drop them into the appropriate folder.
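For example (this assumes plugins/ and themes/ folders under .openshift/, as in the example repo - check your clone for the exact names):

cd wordpress-example
unzip ~/Downloads/my-plugin.zip -d .openshift/plugins/
unzip ~/Downloads/my-theme.zip -d .openshift/themes/
git add .openshift && git commit -m "Add plugin and theme" && git push   # picked up on the next deploy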
I'm not sure if they've designed this cartridge to upload plugins to this same directory from within WordPress.
I'm also not so certain if they dealt with uploading media so that can scale as well.
What I do know is that after going through this you'll find WordPress up and running, and if you load-test it you'll see (if you check /haproxy-status) that multiple gears will start up.