How can I upgrade to the latest Operator Lifecycle Manager on OpenShift 3.11? - openshift

I've found version 0.6.0 of the Operator Framework's Operator Lifecycle Manager (OLM) to be lacking and see that 0.12.0 is available with lots of new updates. How can I upgrade to that version?
Also, what do I need to consider regarding this upgrade? Will I lose any integrations, etc.?

One thing that needs to be considered is that in OpenShift 3, OLM runs from a namespace called operator-lifecycle-manager. In future versions that becomes simply olm. Some things to consider:
Do you have operators running right now, and if you make this change, will your catalog names change? That will need to be reflected in your subscriptions.
Do you want to change any of the default install configuration?
Look into values.yaml to configure your OLM.
Look into the yaml files in step 2 and adjust them if needed.
1) First, turn off OLM 0.6.0 or whatever version you might have.
You can delete that namespace, or, as I did, stop the deployments within it and scale the replica sets down to 0 pods, which effectively turns OLM 0.6.0 off.
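A rough sketch of the scale-down approach (the namespace is the OpenShift 3 default mentioned above; deployment names vary between OLM releases, hence --all):
oc -n operator-lifecycle-manager get deployments
oc -n operator-lifecycle-manager scale deployment --all --replicas=0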
2) Install OLM 0.12.0
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/crds.yaml
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/olm.yaml
alt 2) If you'd rather just install the latest from the repo's master branch:
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
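Either way, you can sanity-check the install before moving on (olm is the namespace the 0.12.0 manifests create, and csv is the short name for ClusterServiceVersion):
oc get pods -n olm
oc get csv --all-namespaces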
So now you have OLM 0.12.0 installed. You should be able to see in the logs that it picks up where 0.6.0 left off. You'll need to start learning about OperatorGroups, though, as they are new and will start impacting how you operate your operators pretty quickly; a minimal example is sketched below. The Cluster Console's ability to show your catalogs does seem to be lost, but you can still view that information from the command line with oc get packagemanifests.
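For reference, a minimal OperatorGroup looks something like this sketch. The names are placeholders, and note that older OLM releases serve this API as operators.coreos.com/v1alpha2 rather than v1:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operatorgroup      # placeholder name
  namespace: my-operators     # namespace your subscription lives in
spec:
  targetNamespaces:
    - my-operators            # namespaces the installed operator may watch
EOF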

Related

Adding Labels to Images with Openshift s2i Binary build

I would like to add some labels (commit hash, branch name,...) to images I create using Openshift source-to-image binary build. These labels will have naturally different values for every build.
Currently oc start-build does not even support -e flags to add environment variables. (At least it seems that way; it works for Git source, so is it a bug?)
And binary builds do not support --build-arg to pass arguments to the Dockerfile.
The only way I was able to accomplish this was to call oc set env bc [build-name] and then start the build, and to use LABEL in the Dockerfile with values from the environment variables (sketched below).
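Concretely, that workaround looks something like this (the build config name and variable names are placeholders):
oc set env bc/my-app GIT_COMMIT=$(git rev-parse --short HEAD) GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
oc start-build my-app --from-dir=. --follow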
My question is: isn't there a better way to do this (ideally one where the Dockerfile doesn't necessarily have to change)? Doesn't s2i support passing --label to the docker build behind it?
Thank you.
Do you want to add an environment variable when you run oc start-build? I ask because you mentioned oc set env bc [build-name].
If so, you can use the --env=<key>=<value> option; refer to Starting a Build for more details.
$ oc start-build <buildconfig_name> --env=<key>=<value>
I hope it helps you.

Debian install MySQL specific version not available

I have two servers, ClientServ and DevServ. The client server runs Debian 7 with MySQL 5.5.49-0+deb7u1. My goal is to have the same package on my DevServ.
Unfortunately, when I run apt-get install I only get "5.5.55-0+deb7u1". I checked the repo, and there really isn't any 5.5.49 package in wheezy...
I tried everything.
Using MySQL's .deb, I get only mysql itself at the correct version, but not the other components (mysql-server etc.).
I saw that in the Jessie repo there is a "mysql-server-5.5 5.5.49-0+deb8u1".
Is it possible to use it?
Please help me... :)
Thank you very much in advance,
Good day
There should be no issues simply downloading all the *.deb packages and installing them for the relevant version of the mysql server. Note that you'd also need to grab the *.deb files for any mysql modules you need at the same time.
Then you just install them directly with dpkg, after first purging all the installed mysql packages (apt-get remove --purge mysql*). Personally, however, I would not do this: I have never found any significant issue using differing MySQL server versions between live and dev machines, and particularly when both are the same minor version (5.5), I don't see why you'd experience any significant problems. But if it's actually and critically mandatory to run precisely the same version, then directly installing the .deb files should work fine.
Just make sure to download them and store all the mysql files you'd ever potentially need in a directory somewhere so you have them to install in case the versions you needed go away, or in case you realized you'd forgotten a module or something.
If this is only a dev system, I think personally I'd just install the debs directly to avoid versions changing.
But unless you are absolutely certain some key difference exists between those two Debian package versions (which is probably not the case; it's most likely just a security update with no impact on how the MySQL server processes SQL), I'd just use what is in jessie and not worry about it.
Sample:
http://mirrors.kernel.org/debian/pool/main/m/mysql-5.5/
There you see versions 47 and 55, for example, of the server, and you'd also grab the 'core' package to match. Then you'd look for any other modules you might need here:
http://mirrors.kernel.org/debian/pool/main/
keeping in mind that with dpkg, you have to install the dependency first, then the next package, or both together in some cases. However, what I would do first, not last, in your case is make sure there actually is a functional difference between the different 5.5 versions before dealing with the potential headaches involved in maintaining a server through manual dpkg .deb package installs.
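If you do go the manual route, the flow looks roughly like this; the file names are illustrative, so substitute whatever versions and architecture you actually download from the pool:
wget http://mirrors.kernel.org/debian/pool/main/m/mysql-5.5/mysql-server-5.5_5.5.49-0+deb8u1_amd64.deb
# ...repeat for mysql-server-core-5.5, mysql-client-5.5, mysql-common, libmysqlclient18, etc.
sudo apt-get remove --purge 'mysql*'
sudo dpkg -i mysql-common_*.deb libmysqlclient18_*.deb mysql-client-core-5.5_*.deb \
            mysql-client-5.5_*.deb mysql-server-core-5.5_*.deb mysql-server-5.5_*.deb
sudo apt-get install -f   # pull in anything dpkg could not resolve on its own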
Here's, for example, a list of MySQL packages you might need. Note that I just grabbed this off a dev box; it is not intended to be an authoritative list, just an example. It does show that MySQL is used in many different places on a system, and you might run into issues trying to downgrade manually, which is why I'd generally avoid this method. (For illustration purposes I changed 5.6 to 5.5.) The key is to take the absolute minimum package list and download the .deb files for just those.
dpkg -l | grep mysql | awk '{print $2}'
libaprutil1-dbd-mysql:i386
libdbd-mysql-perl
libmysqlclient15off
libmysqlclient16
libmysqlclient18:i386
libqt4-sql-mysql:i386
libqt5sql5-mysql:i386
mysql-client-5.5
mysql-client-core-5.5
mysql-common
mysql-server-5.5
mysql-server-core-5.5
php5-mysql
You'd just take the existing MySQL install that is working and run that command to see the packages you need to download. As you can see, it's a pain, which is why I'd generally avoid doing a development install this way. I've never hit any SQL issues, or issues with returned results, using vastly differing MySQL versions, so unless your SQL queries use features only found in that specific version (which is very unlikely), you are unlikely to gain much. But this is how you do it, in case future searchers land here.
Note that most dev boxes are probably running desktops, and have more mysql dependencies than just the mysql server stuff for web development, and that can lead to issues.

Elastic Beanstalk stops at EbExtensionPostBuild

I am having a problem deploying an EB instance with a custom .ebextensions file. This is the relevant part in that file:
container_commands:
  01_migrate:
    command: 'python db_migrate.py'
  02_npm_build:
    command: 'npm install && npm run prod'
As you can see, these commands are for migrating my PostgreSQL database (via a Flask backend) and building my React .jsx files.
If I leave these commands out, the deployment completes perfectly well. However, once I put them in, looking at the eb-activity.log it stalls at this part forever (as far as I can tell):
[2017-04-10T02:39:24.106Z] INFO [3023] - [Application deployment app-613e-170409_223418#1/StartupStage0/EbExtensionPostBuild] : Starting activity...
I also get this message on the Health overview in the console (this is after 1 day):
Performing application deployment (running for 1 day).
I have also tried to deploy it without those container_commands, and then including it back after the successful initial deployment. Then I get the same error message as before in eb-activity.log, and I also get this message on the Health overview:
Incorrect application version "app-2a3d-170409_214923" (deployment 1). Expected version "app-2a3d-170409_214923" (deployment 1).
Which is very strange because those two versions referenced are the same versions. I don't know what this means!
I found a solution.
Remove all your container_commands from .ebextensions/.
SSH to the instance and kill the stuck process with:
sudo killall python
Then deploy the new version without container_commands.
And start debugging your container_commands one by one over SSH.
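For example, once you're on the instance, something along these lines (eb-activity.log is the same log referenced in the question; exact paths can differ between platform versions):
sudo tail -n 200 /var/log/eb-activity.log   # see which container_command stalled
ps aux | grep -E 'python|npm'               # look for the hung migrate or npm build process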
Have fun.

How do I choose an artifact from Nexus in a Hudson / Jenkins job?

I have a job in Hudson server A which builds an artifact and deploys it to Nexus. I have another job in a completely separate Hudson server B which needs to download the artifact and deploy it. This job is normally run manually, and the person running it needs to indicate which version of the artifact to deploy - they may not always want to deploy the latest version (e.g. to roll back to a previous known good version).
Currently, I achieve this by using a parameterized build and requiring the user to pass in the artifact version number; the job then uses the Execute shell build step to run wget on a URL constructed from that parameter (roughly as sketched below). This is error-prone.
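For illustration, that shell step looks roughly like this; the Nexus host, repository path, and Maven coordinates are placeholders, and the exact URL layout depends on your Nexus version and repository:
VERSION="${ARTIFACT_VERSION}"   # build parameter entered by the user
wget "http://nexus.example.com/content/repositories/releases/com/example/myapp/${VERSION}/myapp-${VERSION}.war"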
Ideally I'd like a plugin that lets the user browse the artifact versions in the Nexus repository and pick and choose the one to deploy, but I'm open to other suggestions. A plugin that also handles the download would be nice, but I can live without it as long as I can still get a string that I can use in shell commands.
I've looked through the available Hudson & Jenkins plugins around Maven style artifact repositories, but they all seem more concerned with pushing artifacts into repos rather than getting them back down.
I'm using Hudson's "Copy Artifact" in other jobs, to get artifacts from other Hudson jobs on the same server, but this doesn't work across different Hudson servers, which is why I've turned to Nexus (which we're already using anyway).
Does anyone have any suggestions?
I recommend using rundeck to execute your deployments.
There is a rundeck plugin for Nexus that enables rundeck to display a pull down menu of available versions in Nexus.
There is a rundeck plugin for Jenkins that can be used to invoke deployments using rundeck and to kick off post-deployment jobs (like integration testing) in Jenkins.

Run a task before svn check-out

I would like to run a task (stop a running vm machine) before Jenkins starts the check-out.
The reason is: VM blocks access to some files I have to update via subversion.
Is this possible?
There are two plugins for controlling virtual machines, depending on whether you are using VirtualBox or VMWare.
I'm quite sure you can configure the pre-build step to be "Suspend", at least for VMware.
VMware Plugin
VirtualBox Plugin
Edit your project and set:
Configure M2 Extra Build Steps --> Execute shell --> Type in whatever you'd like to do. For example:
# Wipe the local repository before each build.
rm -rf $WORKSPACE/.repository
Have a look at How do I trigger another job from Hudson as a pre-build step?. I think this has been asked there before.