Why is etcd.conf deprecated after etcd 2.x? - configuration

When I tried to start etcd version 2.1, I found there are no options for -config and -f any more. I am confused about why starting etcd from etcd.conf was deprecated after 2.x.
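For what it's worth, etcd 2.x expects its configuration via command-line flags or matching ETCD_* environment variables (each flag upper-cased, with dashes replaced by underscores) rather than a config file. A minimal sketch, with illustrative names and values:

```shell
# Each former config key maps to an ETCD_* variable; e.g. the
# -name flag becomes ETCD_NAME, -data-dir becomes ETCD_DATA_DIR, etc.
export ETCD_NAME="node1"
export ETCD_DATA_DIR="/var/lib/etcd"
export ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
echo "would start etcd as ${ETCD_NAME} with data in ${ETCD_DATA_DIR}"
# then start the daemon with no -config flag at all:
# etcd
```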

Related

slurm with JSON output - version update required?

The slurm commands provide a JSON output option, for example:
"--json
Dump job information as JSON. All other formatting and filtering arguments will be ignored."
Source: https://slurm.schedmd.com/squeue.html#OPT_json
On Ubuntu 20.04 with slurm 19.05, this option is not recognized:
"squeue: unrecognized option '--json'"
Is it available in later releases?
If required, can I update the slurm version (installed from the Ubuntu 20.04 repository)?
Beware that the online documentation is only valid for the latest stable version. According to the changelog, this option was introduced in version 21.08.
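A quick shell sketch of the version check; the hardcoded value stands in for what `squeue --version | awk '{print $2}'` would report on the Ubuntu 20.04 box, and 21.08 is the first release with --json per the changelog:

```shell
installed="19.05.5"   # illustrative; read it from squeue --version in practice
required="21.08"      # first release supporting --json
# sort -V orders version strings; if the older of the two is $required,
# the installed version is new enough.
oldest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
  echo "--json supported"
else
  echo "upgrade required"
fi
```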

How can I enable metrics on minishift?

My minishift version is v1.16.1+d9a86c9 and I'm running openshift origin 3.9.
I want to use a horizontal pod autoscaler in minishift, and for that I need the metrics pods to be installed. I have searched the minishift docs, but there's no info about how to install the Hawkular metrics.
Apparently minishift start --metrics used to work, but it's not a valid flag anymore.
It has indeed been removed from Minishift, see https://github.com/minishift/minishift/pull/2241 (and the same for the command-line tool oc cluster up: https://github.com/openshift/origin/pull/19209 )
However, you can still install Hawkular with some extra steps, using the Ansible playbook: see https://docs.openshift.com/container-platform/3.9/install_config/cluster_metrics.html#deploying-the-metrics-components
You can use --extra-clusterup-flags "--metrics" with minishift now, to pass the flag on to oc cluster up.

etcd backup / snapshot in openshift 3.7

I am confused about the etcd backup / restore documentation of OpenShift 3.7: the OpenShift Container Platform 3.7 Admin Guide
tells us to use etcdctl backup. This looks like an etcd version 2 command to me - I'm new to etcd, so please bear with me. The etcd 3.2.9 recovery guide mentions only etcdctl snapshot save, not etcdctl backup.
OpenShift 3.7 comes with etcd version 3.2.9:
Starting in OpenShift Container Platform 3.7, the use of the etcd3 v3
data model is required.
Shouldn't the OpenShift admins be using etcdctl snapshot then?
OpenShift Container Platform 3.7 Release notes
The documentation is correct: in OpenShift 3.7 we go with API version 2 and etcdctl backup.
I was indeed confused about the etcd versions. In OpenShift 3.7, we have:
# etcdctl -v
etcdctl version: 3.2.9
API version: 2
and data model version v3.
I'm not sure this is accurate - it shows API version: 2 only because that is the default API when nothing else is set. You can simply set ETCDCTL_API=3 to use the new API version, which is much more helpful in OCP 3.7 onwards.
[~]# etcdctl --version
etcdctl version: 3.2.9
API version: 2
[~]# ETCDCTL_API=3 etcdctl version
etcdctl version: 3.2.9
API version: 3.2
The documentation appears to be lacking on the RH side: if you attempt an ls with etcdctl API version 2, nothing is displayed, despite the RH documentation indicating it will be. As far as I can tell, this is because the etcdctl v2 API does not interact with the v3 data model.
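For reference, the two backup paths being compared look roughly like this; the directories are illustrative, and on a secured cluster the v3 command may also need --endpoints and certificate flags:

```shell
# v2 API, the command the OpenShift 3.7 docs describe:
etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup

# v3 API, the command the etcd 3.x recovery guide describes:
ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-backup/snapshot.db
```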

When I type "libvirtd --listen" in a Linux shell, there is an error: "GNUTLS support not available in this build"

I was doing some experiments on live migration using virsh. Whenever I ran anything related to TLS, such as "# virsh -c qemu+tls://source/system" or "libvirtd --listen", I got errors like "GNUTLS support not available in this build". So, here is what I tried to fix it:
Reinstall libvirt with --with-gnutls
#yum install gnutls-devel
#sh ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --libdir=/usr/lib64 --with-gnutls
After it installed successfully, reload the daemon:
# systemctl daemon-reload
# systemctl restart libvirtd
Then I tried my experiment again, but the "GNUTLS support not available in this build" errors were still there.
Any helpful hints will be sincerely appreciated.
Create the TLS certificates first, following the instructions on the libvirt official website. Then reinstall libvirt; after that, everything works.
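One way to verify which build you are actually running: check whether the libvirtd binary on your PATH links against GnuTLS at all (the path below is an assumption; use `command -v libvirtd` to find yours). If your rebuilt binary landed in a different prefix than the packaged one, systemctl may still be starting the old build.

```shell
# Prints a libgnutls line if TLS support was compiled in;
# prints nothing if this binary was built without --with-gnutls.
ldd /usr/sbin/libvirtd | grep -i gnutls
```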

How do I configure the starting JDK for Hudson (now Jenkins)?

I googled a lot and read through the wiki, but just could not find where the starting JDK can be set. It needs JDK 1.5 or later. The OS is CentOS; I set the JAVA_HOME environment variable to a 1.6 version and added the JDK bin directory to the PATH environment variable. When I run the command '/etc/init.d/jenkins start', I receive the error below:
Jenkins requires Java5 or later, but you are running 1.4.2 from /usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
java.lang.UnsupportedClassVersionError: 48.0
at Main.main(Main.java:90)
I don't know why Jenkins looks for the JDK at the path above; I don't see any environment variable containing /usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre.
Any ideas?
======== Update
To firelore:
I tried to run the command 'update-alternatives --install java java /home/irteam/app/jdk1.6.0_07'; it doesn't work, and just prints the command usage, like:
alternatives version 1.3.30.1 - Copyright (C) 2001 Red Hat, Inc.
This may be freely redistributed under the terms of the GNU Public License.
usage: alternatives --install <link> <name> <path> <priority>
[--initscript <service>]
[--slave <link> <name> <path>]*
alternatives --remove <name> <path>
alternatives --auto <name>
alternatives --config <name>
alternatives --display <name>
alternatives --set <name> <path>
common options: --verbose --test --help --usage --version
--altdir <directory> --admindir <directory>
The 1.4.2 version was bundled with your CentOS install and made the default. You will need to run the update-alternatives command to change the symlink to your updated JDK location.
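Per the usage output quoted above, --install takes <link> <name> <path> <priority>, and <path> should point at the java binary itself, not the JDK's top-level directory. A sketch with an illustrative link and priority (run as root):

```shell
alternatives --install /usr/bin/java java /home/irteam/app/jdk1.6.0_07/bin/java 100
alternatives --config java   # then interactively select the 1.6 entry
```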
Try setting both the PATH and JAVA_HOME variables to the new JRE you downloaded. If you are using slaves, I would suggest deleting the slave and recreating the same slave so you don't lose the jobs attached to it. Check the console log; you should see it running with the new JRE.
You can configure it in Jenkins directly:
Manage Jenkins -> Configure System -> Global Properties -> Environment Variables
Just add JAVA_HOME.
Then add the JDK path in the JDK section.