I am confused about the etcd backup/restore documentation for OpenShift 3.7: the OpenShift Container Platform 3.7 Admin Guide
tells us to use etcdctl backup. This looks like an etcd version 2 command to me — I'm new to etcd, so please bear with me. The etcd 3.2.9 recovery guide mentions only etcdctl snapshot save, not etcdctl backup.
OpenShift 3.7 ships with etcd version 3.2.9:
Starting in OpenShift Container Platform 3.7, the use of the etcd3 v3
data model is required.
Shouldn't the OpenShift admins be using etcdctl snapshot then?
OpenShift Container Platform 3.7 Release notes
The documentation is correct: in OpenShift 3.7 we use API version 2 and etcdctl backup.
I was indeed confused about the etcd versions. In OpenShift 3.7, we have:
# etcdctl -v
etcdctl version: 3.2.9
API version: 2
and data model version v3.
I'm not sure this is accurate — etcdctl reports API version: 2 simply because that is the default API when nothing else is set. You can specify ETCDCTL_API=3 to use the v3 API, which is much more useful from OCP 3.7 onwards.
[~]# etcdctl --version
etcdctl version: 3.2.9
API version: 2
[~]# ETCDCTL_API=3 etcdctl version
etcdctl version: 3.2.9
API version: 3.2
The Red Hat documentation appears to be lacking here: attempt an ls with etcdctl API version 2 and nothing will display, despite the documentation indicating it will. As far as I can tell, this is because the etcdctl v2 API cannot read the v3 data model.
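To make the two backup styles concrete, here is a hedged sketch of both on an OCP 3.7 master. The data directory, endpoint, and certificate paths are assumptions based on a default install — adjust them to your environment before relying on this:

```shell
# v2 tool (what the OCP 3.7 Admin Guide describes): copies the data
# directory, including the embedded v3 store, to a backup directory
etcdctl backup \
  --data-dir /var/lib/etcd \
  --backup-dir /var/lib/etcd-backup

# v3 API (what the upstream etcd 3.2 recovery guide describes): streams
# a consistent snapshot over the client endpoint
ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-snapshot.db \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/etcd/ca.crt \
  --cert /etc/etcd/peer.crt \
  --key /etc/etcd/peer.key

# sanity-check the snapshot file
ETCDCTL_API=3 etcdctl snapshot status /var/lib/etcd-snapshot.db
```

Note that snapshot restore on the v3 side is a separate step (ETCDCTL_API=3 etcdctl snapshot restore), so whichever style you pick, make sure the matching restore procedure is the one you rehearse.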
I am building an OCP 4.9 deployment in my lab with RHCOS 4.9 for the control plane and RHEL 8.4 for the worker nodes on VCenter 6.7.
I am using RHEL 8.4 as the bastion host with Apache, HAProxy and the OCP install files, Bootstrap etc.
Question #1 - Does anyone have the latest version of what append-bootstrap.ign should look like? I heard "append" has been replaced with "merge" and the Ignition version is 3.2.0.
Question #2 - Can firewalld be disabled, or is it required on RHEL 8.4 for OCP 4.9?
Question #3 - If firewalld is required, what should the configuration look like, in addition to opening TCP ports 8080, 6443 and 22623?
Thanks,
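Regarding Question #3: if firewalld stays enabled, a minimal sketch for the bastion/load-balancer host might look like the following. The port list combines the ports named in the question with 80/443 for ingress traffic through HAProxy; treat it as a starting point, not the complete OCP 4.9 requirements list:

```shell
# Persistently open the ports, then reload firewalld to apply them
firewall-cmd --permanent --add-port=8080/tcp   # Apache serving ignition/install files
firewall-cmd --permanent --add-port=6443/tcp   # Kubernetes API server (via HAProxy)
firewall-cmd --permanent --add-port=22623/tcp  # machine config server (via HAProxy)
firewall-cmd --permanent --add-service=http    # 80/tcp,  ingress routes
firewall-cmd --permanent --add-service=https   # 443/tcp, ingress routes
firewall-cmd --reload

# Verify what is now open
firewall-cmd --list-ports
firewall-cmd --list-services
```

The cluster nodes themselves (RHCOS and the openshift-ansible-managed RHEL workers) normally have their firewall rules handled by the installer, so this sketch is aimed only at the helper host you manage by hand.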
I tried to install podman on SLES 12 the straightforward way, but it looks like the package is missing.
dmitry@sles12:~> sudo zypper in podman
Refreshing service 'Advanced_Systems_Management_Module_x86_64'.
Refreshing service 'Containers_Module_x86_64'.
Refreshing service 'Legacy_Module_x86_64'.
Refreshing service 'Public_Cloud_Module_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_x86_64'.
Refreshing service 'Web_and_Scripting_Module_x86_64'.
Loading repository data...
Reading installed packages...
'podman' not found in package names. Trying capabilities.
No provider of 'podman' found.
Resolving package dependencies...
Nothing to do.
The only piece of information I found about running podman on SLES 12 is
SLES 12 is a bad platform to play with current container technology.
It's too old for that and build/based around docker, not podman.
source: https://lists.opensuse.org/opensuse-kubic/2019-10/msg00009.html
As far as I know SLES 12 is still supported.
I checked the latest release notes of SLES 12 SP4 and SLES 12 SP5. Both were released after the first public release of podman, but neither mentions it.
Podman is not officially provided for SLE-12.
Please have a look at the release notes of SLES 15 SP2 ("5.2.1 Support for podman"):
Starting with SUSE Linux Enterprise Server 15 SP2, podman is a supported container engine. However, certain features of podman are currently not supported: [...]
Source: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP2/#jsc-SLE-9112
You could try an unsupported version of podman by using the package from the community-based Virtualization:containers repository:
https://build.opensuse.org/package/show/Virtualization%3Acontainers/podman
Example procedure:
zypper ar --refresh https://download.opensuse.org/repositories/Virtualization:/containers/SLE_12_SP5/Virtualization:containers.repo
zypper ref
zypper in podman
You'll need to accept/trust the repo's signing key during the procedure, and you may also need to install additional dependencies from other repositories.
Moving to SLES 15 SP2 might be the easier way to go.
Is there any link to find OpenShift 4.2 Rest API documentation?
The latest I could see is for 3.11:
https://docs.openshift.com/container-platform/3.11/rest_api/index.html.
Will is absolutely right, and here is the link to the docs:
https://docs.openshift.com/container-platform/4.3/rest_api/index.html
When in doubt, this flag is your best friend (append it to any oc command):
oc --loglevel=9
OpenShift Rest API documentation is expected to be available once 4.3 is released. In the meantime, OpenShift 4.2 is built on top of Kubernetes 1.14; the 1.14 Rest API documentation is available at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/.
For OpenShift specific resources, if you're looking for the resource definitions, you can use oc explain <RESOURCE> --recursive from the cli to see a full resource definition.
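For example, against a running cluster that might look like this (the resource and field are chosen purely for illustration):

```shell
# Dump the full nested schema of the Route resource
oc explain route --recursive

# Or inspect the documentation for a single field
oc explain route.spec.tls
```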
Environment:
NAME="Red Hat Enterprise Linux Atomic Host"
VERSION="7.3"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Atomic Host"
VARIANT_ID=atomic.host
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Atomic Host 7.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:atomic-host"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION=7.3
OpenShift-Ansible Version:
3.6.11-1
This is openshift-ansible setup with Atomic hosts, so OpenShift itself is containerized.
Question:
Has anyone configured OpenShift authentication against MS Active Directory using OpenShift-Ansible? I found this reference, but it assumes that the OpenShift master service runs under systemd:
http://www.greenreedtech.com/openshift-origin-active-directory-integration/
Any suggestions?
(Unfortunately, I don't have the ability to test this, but) the OpenShift documentation says that
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands.
So I'd expect that systemctl restart origin-master should work (except that in your case the unit will be atomic-openshift-master).
It also says that
configuration files are placed in the same locations during containerized installation as RPM based installations
so I'd expect that this instruction would work.
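Putting those two quotes together, a hedged sketch for a containerized (Atomic) master would be the following. The config path is the documented default for an OCP 3.6 install; on a multi-master HA setup the units are atomic-openshift-master-api and atomic-openshift-master-controllers instead, so check which units exist on your host first:

```shell
# See which master units the containerized install created
systemctl list-units 'atomic-openshift-master*'

# Edit the master config (same location as an RPM-based install)
# and add the LDAPPasswordIdentityProvider stanza for Active Directory
vi /etc/origin/master/master-config.yaml

# Restart the containerized master through its systemd unit
systemctl restart atomic-openshift-master

# Confirm the service (and its container) came back up
systemctl status atomic-openshift-master
```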
Are ActiveMQ and Kaazing JMS installed by default on a local Mac OS machine, running on some JVM (and how can I check that?), or do they come with standalone start scripts? Please suggest.
Bee,
When you download and extract the gateway (I recommend the "Gateway + Demos" package for Linux/Unix/Mac — it contains Apache ActiveMQ preconfigured with the gateway, in addition to the documentation and out-of-the-box demos), you'll find a README.txt file in the root directory. It lists the Java requirements.
For your convenience, here's the Java Requirements snippet from the 4.0.3 version of the gateway:
Java Requirements
* Java Developer Kit (JDK) or Java Runtime Environment (JRE)
Java 7 (version 1.7.0_21) or higher
* The JAVA_HOME environment variable must be set to the directory where
the JDK is installed, for example C:\Program Files\Java\jdk1.7.0_21
* Note:
* For information on installing JDK, see Oracle's Java SE documentation:
http://download.oracle.com/javase/.
Hope this helps!