I am building an OCP 4.9 deployment in my lab, with RHCOS 4.9 for the control plane and RHEL 8.4 for the worker nodes, on vCenter 6.7.
I am using RHEL 8.4 as the bastion host, running Apache and HAProxy and serving the OCP install files, bootstrap Ignition config, etc.
Question #1 - Does anyone have the latest version of what the append-bootstrap.ign should look like? I heard that "append" has been replaced with "merge" and that the config version is now 3.2.0.
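For reference, on the Ignition 3.x spec the old `append` stanza is indeed replaced by `merge`. A minimal sketch of what append-bootstrap.ign might look like for spec version 3.2.0 (the bastion hostname and port are placeholders, not taken from the post):

```json
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        {
          "source": "http://bastion.example.com:8080/bootstrap.ign"
        }
      ]
    }
  }
}
```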
Question #2 - Can firewalld be disabled, or is it required on RHEL 8.4 for OCP 4.9?
Question #3 - If firewalld is required, what should the configuration look like, beyond opening TCP ports 8080, 6443, and 22623?
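For context, if firewalld stays enabled on the bastion, permanently opening the ports mentioned above would look something like this (a sketch; the zone defaults and the purpose comments are assumptions based on a typical UPI bastion setup):

```shell
# open the ports OCP needs through the bastion (default zone assumed)
firewall-cmd --permanent --add-port=8080/tcp    # Apache serving Ignition files
firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API via HAProxy
firewall-cmd --permanent --add-port=22623/tcp   # machine config server via HAProxy
firewall-cmd --reload
```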
Thanks,
I've installed qemu and libvirt on macOS Big Sur with M1 using MacPorts. When I tried to install libvirt with Homebrew, it was broken (it asked for /proc/cpuinfo). libvirt started as a daemon and I got libvirt-sock in the Unix-socket directory, but there is no virtqemud-sock.
Actions (all with sudo):
port install qemu
port install libvirt
virsh -c qemu:///system
Result:
error: unable to connect to socket '/opt/local/var/run/libvirt/virtqemud-sock': No such file or directory
libvirt has two ways of running: the monolithic daemon (libvirtd) is the traditional approach, and the modular daemons (virt${XXXX}d for varying ${XXXX}) are the new way. The libvirt client here is attempting to connect to the modular daemon virtqemud, but it seems you have started the monolithic libvirtd.
More guidance can be found at
https://libvirt.org/daemons.html
https://libvirt.org/uri.html#mode-parameter
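A sketch of the two options this implies (the `mode=legacy` URI parameter is from the linked URI docs; the daemon invocation is an assumption, since the binary location and flags may differ under MacPorts):

```shell
# Option A: tell the client to connect to the monolithic libvirtd
# that is already running ("legacy" mode)
virsh -c 'qemu:///system?mode=legacy'

# Option B: start the modular QEMU daemon so virtqemud-sock exists,
# then connect normally
sudo virtqemud --daemon
virsh -c qemu:///system
```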
I tried to install podman on SLES 12 straightforwardly, but it looks like the package is missing.
dmitry@sles12:~> sudo zypper in podman
Refreshing service 'Advanced_Systems_Management_Module_x86_64'.
Refreshing service 'Containers_Module_x86_64'.
Refreshing service 'Legacy_Module_x86_64'.
Refreshing service 'Public_Cloud_Module_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_x86_64'.
Refreshing service 'Web_and_Scripting_Module_x86_64'.
Loading repository data...
Reading installed packages...
'podman' not found in package names. Trying capabilities.
No provider of 'podman' found.
Resolving package dependencies...
Nothing to do.
The only piece of information I found about running podman on SLES 12 is
SLES 12 is a bad platform to play with current container technology.
It's too old for that and build/based around docker, not podman.
source: https://lists.opensuse.org/opensuse-kubic/2019-10/msg00009.html
As far as I know SLES 12 is still supported.
I checked the latest release notes of SLES 12 SP4 and SLES 12 SP5; both were released after the first public release of podman, but neither mentions it.
Podman is not officially provided for SLE 12.
Please have a look at the release notes of SLES 15 SP2 ("5.2.1 Support for podman"):
Starting with SUSE Linux Enterprise Server 15 SP2, podman is a supported container engine. However, certain features of podman are currently not supported: [...]
Source: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP2/#jsc-SLE-9112
You could try an unsupported version of podman by using the package from the community-based Virtualization:containers repository:
https://build.opensuse.org/package/show/Virtualization%3Acontainers/podman
Example procedure:
zypper ar --refresh https://download.opensuse.org/repositories/Virtualization:/containers/SLE_12_SP5/Virtualization:containers.repo
zypper ref
zypper in podman
You will need to accept/trust the repository's signing key during the procedure, and you may need to install additional dependencies from other repositories as well.
Moving to SLES 15 SP2 might be the easier way to go.
I am confused about the etcd backup / restore documentation of OpenShift 3.7: the OpenShift Container Platform 3.7 Admin Guide
tells us to use etcdctl backup. This looks like an etcd version 2 command to me (I'm new to etcd, so please bear with me). The etcd 3.2.9 recovery guide mentions only etcdctl snapshot save, not etcdctl backup.
OpenShift 3.7 comes with etcd version 3.2.9:
Starting in OpenShift Container Platform 3.7, the use of the etcd3 v3
data model is required.
Shouldn't the OpenShift admins be using etcdctl snapshot then?
OpenShift Container Platform 3.7 Release notes
The documentation is correct: in OpenShift 3.7 we go with API version 2 and etcdctl backup.
I was indeed confused about the etcd versions. In OpenShift 3.7, we have:
# etcdctl -v
etcdctl version: 3.2.9
API version: 2
and data model version v3.
I'm not sure this is accurate: it shows API version: 2 only because that is the default API. You can simply set ETCDCTL_API=3 to use the new API version, which is much more helpful in OCP 3.7 onwards.
[~]# etcdctl --version
etcdctl version: 3.2.9
API version: 2
[~]# ETCDCTL_API=3 etcdctl version
etcdctl version: 3.2.9
API version: 3.2
The documentation appears to be lacking on the RH side: attempting an ls with etcdctl API version 2 displays nothing, despite the RH documentation indicating it will. As far as I can tell, this is because the etcdctl v2 API does not interact with the v3 data model.
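For completeness, a v3-style snapshot on an OpenShift 3.7 master would look roughly like this (a sketch; the certificate paths are assumptions based on a default /etc/etcd layout and may differ in your cluster):

```shell
# take a v3 snapshot of the etcd data
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/peer.crt \
  --key=/etc/etcd/peer.key \
  snapshot save /var/lib/etcd/snapshot.db

# verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status /var/lib/etcd/snapshot.db
```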
Environment:
NAME="Red Hat Enterprise Linux Atomic Host"
VERSION="7.3"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Atomic Host"
VARIANT_ID=atomic.host
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Atomic Host 7.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:atomic-host"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION=7.3
OpenShift-Ansible Version:
3.6.11-1
This is an openshift-ansible setup with Atomic hosts, so OpenShift itself is containerized.
Question:
Has anyone configured OpenShift for MS Active Directory using OpenShift-Ansible? I found this reference, but it implies that the OpenShift master service runs under systemd:
http://www.greenreedtech.com/openshift-origin-active-directory-integration/
Any suggestions?
(Unfortunately, I don't have the ability to test it, but) the OpenShift documentation says that
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands.
So, I'd expect that the command systemctl restart origin-master should work (except that in your case it will be atomic-openshift-master).
It also says that
configuration files are placed in the same locations during containerized installation as RPM based installations
so I'd expect that this instruction would work.
We have deployed Rails 4.1.0 / Ruby 2.1.6 on a Windows Server 2012 machine in development mode with MySQL, using the WEBrick web server. We are now looking to deploy the application to a production environment with Rails 4.1.0, Ruby 2.1.6, Windows Server 2012, MySQL Server 5.6, and Apache with Mongrel or XAMPP.
Could you point us to steps, suggestions, and ideas to help deploy our Rails application to production?
From experience: the best thing to do is to forget about Windows deployment. If that is not an option, then maybe look into JRuby and Warbler. Just don't expect:
much help from the community (because "nobody" deploys on Windows)
a comfortable workflow
stuff that works out of the box
Also, I don't see why you would need XAMPP.
If you are at your first attempts at deploying, I'd recommend Heroku.
Heroku
The nice benefit is that you can install add-ons (e.g. a MySQL database) in a matter of clicks:
- https://elements.heroku.com/
Steps are really easy:
https://devcenter.heroku.com/articles/getting-started-with-rails4#write-your-app
Briefly:
# Install the Heroku Toolbelt
# In the Gemfile: gem 'rails_12factor', group: :production
# git init & commit
$ heroku login
$ heroku apps:create my-app-name # run `heroku create --help` for further help
$ heroku addons:create jawsdb # MySQL add-on for Heroku
$ git push heroku master
$ heroku run rake db:schema:load
AWS
After a while you may realize that, although it's easy to deploy, you'll want more tuning and probably better pricing.
At that point AWS usually comes in, which offers a good balance of both; I'd recommend Elastic Beanstalk.
Install the EB CLI 3
Set up git
$ eb init
$ eb use your-environment-name
$ eb deploy
$ eb ssh # to enter into the machine