OpenShift-Ansible with MS Active Directory

Environment:
NAME="Red Hat Enterprise Linux Atomic Host"
VERSION="7.3"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Atomic Host"
VARIANT_ID=atomic.host
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Atomic Host 7.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:atomic-host"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION=7.3
OpenShift-Ansible Version:
3.6.11-1
This is an openshift-ansible setup with Atomic hosts, so OpenShift itself is containerized.
Question:
Has anyone configured OpenShift for MS Active Directory using OpenShift-Ansible? I found this reference, but it implies that the OpenShift master service runs under systemd:
http://www.greenreedtech.com/openshift-origin-active-directory-integration/
Any suggestions?

(Unfortunately, I don't have the ability to test it, but) the OpenShift documentation says that
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands.
So I'd expect the command systemctl restart origin-master to work (except that in your case the unit will be atomic-openshift-master).
It also says that
configuration files are placed in the same locations during containerized installation as RPM based installations
so I'd expect that instruction to work.
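For reference, an Active Directory identity provider stanza in /etc/origin/master/master-config.yaml would look roughly like this (the hostname, bind DN, and base DN below are placeholders, not values from your environment):
oauthConfig:
  identityProviders:
  - name: "Active Directory"
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id: [dn]
        email: [mail]
        name: [cn]
        preferredUsername: [sAMAccountName]
      bindDN: "CN=svc-openshift,CN=Users,DC=example,DC=com"
      bindPassword: "changeme"
      insecure: false
      ca: ad-ca.crt
      url: "ldaps://ad.example.com:636/CN=Users,DC=example,DC=com?sAMAccountName"
After editing, systemctl restart atomic-openshift-master should pick up the change on a containerized master.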

libvirt doesn't create virtqemud-sock on macOS

I've installed QEMU and libvirt on macOS Big Sur (M1) using MacPorts. (When I tried to install libvirt with Homebrew it was broken; it asked for /proc/cpuinfo.) libvirtd starts as a daemon and I get libvirt-sock in the Unix-socket directory, but there is no virtqemud-sock.
Actions (all with sudo):
port install qemu
port install libvirt
virsh -c qemu:///system
Result:
error: unable to connect to socket «/opt/local/var/run/libvirt/virtqemud-sock»: No such file or directory
libvirt has two ways of running: the monolithic daemon (libvirtd) is the traditional approach, and the modular daemons (virt${XXXX}d for varying ${XXXX}) are the new way. The libvirt client here is attempting to connect to the modular daemon virtqemud, but it seems you've started the monolithic libvirtd.
More guidance can be found at
https://libvirt.org/daemons.html
https://libvirt.org/uri.html#mode-parameter
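For example, to keep using the monolithic libvirtd socket you already have, the mode parameter from the second link can force legacy mode in the connection URI:
virsh -c 'qemu:///system?mode=legacy'
Alternatively, start the modular virtqemud daemon instead of libvirtd; the plain qemu:///system URI should then find virtqemud-sock.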

Is it possible to setup Hyperledger Fabric network using RedHat Openshift in production?

Keeping in mind that a "peer" node creates and starts "chaincode" containers (dev-*) by talking to the Docker daemon over /var/run/docker.sock, I have some doubts that this is doable in a production-ready RH OpenShift cluster.
Please correct me if I'm wrong, but the only solutions for running HLF components in OpenShift clusters are:
a) step into a Docker-in-Docker setup - cons: requires privileged containers in OpenShift, which is unacceptable for production-ready clusters.
b) run "chaincode" in dev mode - cons: dev mode is for development only and is not suitable for production.
Starting "chaincode" containers outside the OpenShift cluster and communicating with them over TCP/IP is not possible because the OpenShift cluster uses a layer-7 reverse proxy for communication with pods.
so the question remains:
Q: Is it possible to setup HLF network using RedHat Openshift in production?
No immediate way around Docker in Docker
○ Security risk remains, so evaluate risks before production use
○ setenforce permissive
■ Allows use of docker.sock
■ Make sure you change it on all the nodes
○ oc adm policy add-scc-to-user anyuid -z default
■ Privileged mode
Short-term brute-force solution
● Look to use Secrets and ConfigMaps to replace host mounts
● Use NFS mounts where needed
○ oc adm policy add-scc-to-user hostmount-anyuid -z default
● Replace docker-compose and docker calls with:
○ kubectl, oc, podman, buildah, kompose
● Convert docker-compose.yaml files with kompose (see the sketch below)
○ kompose convert --provider=openshift -f
■ Then edit and merge files
● Alternate (if fairly simple yaml file)
○ kompose up --provider=openshift -f
Read more from here: https://www.redhat.com/files/summit/session-assets/2019/T905A4.pdf
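As a concrete sketch of the kompose path above (file and directory names are illustrative):
kompose convert --provider=openshift -f docker-compose.yaml -o openshift/
oc apply -f openshift/
kompose emits one set of OpenShift artifacts per service (DeploymentConfig, Service, ImageStream), which you can then edit and merge before applying.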
Yes, it is possible to run HLF in OpenShift, but as of v1.4.4 DinD, and thus privileged pods, is required to do it right. Properly securing the cluster can mitigate the risk, and many, many organizations run in production on OpenShift and Kube with privileged pods.
That being said, Fabric v2.0.0 will ship with a new chaincode model that will allow you to run Fabric without DinD. We are planning to release the official v2.0.0 release before the end of the month. If you want to test it out now, v2.0.0-beta is available here: https://github.com/hyperledger/fabric/releases/tag/v2.0.0-beta
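For the curious, the v2.0.0 model replaces the Docker build with external builders configured in the peer's core.yaml, roughly like this (the builder name and path are placeholders; the builder itself is a set of detect/build/release/run scripts you supply):
chaincode:
  externalBuilders:
    - name: my-builder
      path: /opt/hyperledger/builders/my-builder
This lets the peer hand chaincode off to something you control (e.g. a pre-deployed pod) instead of calling the Docker daemon.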

How does one restart Openshift Origin master on Centos 7?

OpenShift Origin was installed via the Ansible playbooks.
According to this documentation, the correct command to restart is:
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
However, this just results in:
Failed to restart atomic-openshift-master-api.service: Unit not found.
Failed to restart atomic-openshift-master-controllers.service: Unit not found.
What is the correct way to restart OpenShift Origin (OKD) after installing via Ansible on CentOS 7?
If you get the following error:
bash: master-restart: command not found
try:
/usr/local/bin/master-restart
If you installed OKD v3.10 or later, you should restart the master services as follows. [0] From v3.10 the master services run as pods, so you need the specific command for restarting them, namely api and controllers:
# master-restart api
# master-restart controllers
[0] RESTARTING MASTER SERVICES
As far as I know, you have two alternatives:
Using ansible
Use the same inventory.ini as you used when installing OpenShift origin.
Assuming that you have the inventory.ini file and the openshift-ansible repository cloned under /home/user/, execute the master restart playbook:
ansible-playbook -i /home/user/inventory.ini /home/user/openshift-ansible/playbooks/openshift-master/restart.yml
Restart the services
To restart the services manually: the service names are origin-master-api and origin-master-controllers, so the command to restart them should be:
systemctl restart origin-master-api origin-master-controllers
I strongly recommend using the first option.
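If you're unsure which unit names a given install created, listing them first settles it (systemctl accepts glob patterns):
systemctl list-units --type=service 'origin-*' 'atomic-openshift-*'
On releases before 3.10, an Origin/OKD install shows the origin-* units (atomic-openshift-* is the enterprise naming); on 3.10+ nothing shows up because the masters run as pods, and master-restart is the tool to use.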

Installing Bosun components in one VM or in several VMs

I would like to use the Bosun GE on my own, but it is not clear whether I can install the two components of this GE (fiware-facts and fiware-cloto) on different VMs, or whether they must be installed in the same VM.
Yes, you can install FIWARE Bosun both ways.
By default the configuration files are written to work in the same VM, so if you install all the required software in a single VM, everything will work out of the box.
If you want to distribute fiware-facts and fiware-cloto across two different VMs, you must configure the IP addresses in both components:
Fiware-cloto config file:
cloto: {{Fiware-Cloto-Public-IP}} (example: 83.53.21.33)
rabbitMQ: RabbitIP
Fiware-facts config file:
NOTIFICATION_URL: http://{{Fiware-Facts-Public-IP}}:5000/v1.0
RABBITMQ_URL: RabbitIP
In addition, note that MySQL can also be installed on a different VM, so you should also edit the MySQL host in both configuration files (fiware-cloto and fiware-facts) to provide the IP address where the database is installed.
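For example, with fiware-cloto on 192.0.2.10, fiware-facts on 192.0.2.11, and RabbitMQ on 192.0.2.12 (all addresses are illustrative), the two files would end up as:
Fiware-cloto config file:
cloto: 192.0.2.10
rabbitMQ: 192.0.2.12
Fiware-facts config file:
NOTIFICATION_URL: http://192.0.2.11:5000/v1.0
RABBITMQ_URL: 192.0.2.12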

Chrome OS + VirtualBox Guest Additions

I can't install the VirtualBox Guest Additions in the latest build of Google Chrome OS. When I run the installer, I get the following error:
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.1.8 Guest Additions for Linux.........
VirtualBox Guest Additions installer
mkdir: cannot create directory `/opt/VBoxGuestAdditions-4.1.8': Read-only file system
tar: /opt/VBoxGuestAdditions-4.1.8: Cannot chdir: No such file or directory
tar: Error is not recoverable: exiting now
What do I do now? Can I mount the filesystem in read-write mode? Does the Lime build support the guest additions? I'm using the Vanilla build.
Host OS: Mac OS X Lion (10.7)
Guest OS: Google Chrome OS Vanilla from http://chromeos.hexxeh.net/
VirtualBox version: 4.1.8
So someone on the VirtualBox IRC channel irc://chat.freenode.net/#vbox told me that the guest additions won't work on Chrome OS.
After trying the suggestion from #sarnold, running mount -o remount,rw /, I was told "Unable to determine your Linux distribution".
If you want to try remounting the filesystem as read-write, the command is:
mount -o remount,rw /
But there may be a good reason for / to be mounted read-only. I doubt the VirtualBox guest tools care where they are installed, so if you unpack the archive using tar or ar or whatever is necessary, you can probably install them somewhere that is mounted read-write and configure them appropriately.
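A sketch of that unpack-only approach, assuming the installer is the usual makeself self-extracting archive (the target path is illustrative):
sh ./VBoxLinuxAdditions.run --noexec --target /tmp/vbox-additions
The --noexec flag extracts the payload without running the install script, so you can inspect it and copy the pieces to a writable mount point yourself.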
If you don't mind using dev mode: I was able to run a Parrot OS VM with QEMU and KVM on a Pixelbook. I used the change-kernel-flags script from the crouton repo, then installed the QEMU and KVM packages normally in my Debian 9 crouton chroot. virt-manager doesn't work, but I can create a hard drive image and boot a VM entirely from the CLI, and it all works, albeit a bit slowly even with KVM, probably because even a Pixelbook has low resources compared to a normal laptop.
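The CLI workflow was roughly this (disk size, memory, and file names are illustrative):
qemu-img create -f qcow2 parrot.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 2048 -cdrom parrot.iso -drive file=parrot.qcow2,format=qcow2 -boot d
Once the install finishes, drop -cdrom and -boot d to boot from the disk image.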