Openshift unable to connect to the server - openshift

I am having issues with setting up OpenShift and I get the following error after connecting to my server domain:
Command:
User$ rhc setup --server=app-domain.rhcloud.com
Result:
The server has rejected your connection attempt with an older SSL protocol.
Pass --ssl-version=sslv3 on the command line to connect to this server.
I am not sure what this is telling me to do. I tried following the instruction literally, but it does not recognize the option.
Any ideas?

You should not pass rhc setup the --server flag unless you are running your own OpenShift Origin or OpenShift Enterprise broker. For OpenShift Online, just run the rhc setup command with no other options and it will set up fine. If that command messed up your express.conf file (which it should not have), just delete your ~/.openshift/express.conf file and then run rhc setup again without any flags. Basically, you pointed rhc at your gear as if it were an OpenShift Online broker, which will not work.
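If the earlier run did clobber the config, a minimal recovery sketch (assuming the default config location) is:
rm ~/.openshift/express.conf   # remove the config written by the bad setup run
rhc setup                      # re-run interactive setup against OpenShift Online, with no --server flag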

I ended up answering this on another forum post:
The only way this worked for me was to create an SSH key locally with ssh-keygen, without running rhc setup and without giving it a passphrase. I then went back to OpenShift, clicked "Add a key", and pasted in the contents of my RSA public key file.
There is obviously some kind of bug with authentication on OpenShift, or the installation is not right.
It would be good to find out what is going on and why it works when done this way.
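For reference, a rough sketch of that manual workaround (paths and menu names assume OpenSSH and OpenShift web console defaults):
ssh-keygen -t rsa        # generate a key pair; press Enter to leave the passphrase empty
cat ~/.ssh/id_rsa.pub    # copy the public key, then paste it into "Add a key" in the OpenShift web console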

Related

SASL Error connecting to remote libvirt over SSH: No worthy mechs found

I have a server running oVirt Node that I'm trying to manage remotely using libvirt. I have an SSH keypair installed and can ssh user@server -i ssh-privkey successfully. When I try to connect to qemu+ssh://user@host/system?keyfile=ssh-privkey, I get this error:
authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
That led me down the path of getting TLS keys and certificates installed on the client and the server, mostly according to these instructions (the configuration is slightly different because I have only one host and am using Terraform to manage the certificates*). However, I still get the same error. When I look at the output of libvirtd --listen --verbose on the server when a connection fails, the only useful output is this:
error : virNetSocketReadWire:1792 : End of file while reading data: Input/output error
I have checked every firewall between the client and the server and they should all be wide open. What else could be the cause of this error?
* The goal is ultimately to use Terraform to provision libvirt resources; however, I get the same errors trying to connect with virsh and virt-manager.
UPDATE: It's easier to connect just via SSH; this question exists because I couldn't figure out how to turn off SASL. It turns out SASL is enabled for SSH connections due to vdsm setting auth_unix_rw="sasl" in /etc/libvirt/libvirtd.conf. Removing that config means I can just use my SSH private key as I intended. The TLS configuration was a wild goose chase that was further hindered by vdsm changing the configured location of all the PKI files.
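For reference, the change described in the update looks roughly like this (the restart step is an assumption; use whatever mechanism manages libvirtd on your host):
# /etc/libvirt/libvirtd.conf
#auth_unix_rw = "sasl"           # comment out or remove the line vdsm added
sudo systemctl restart libvirtd  # restart the daemon so the change takes effect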
You're likely missing an RPM package on your client host. First, on the virtualization host, check /etc/sasl2/libvirt.conf and see which 'mech_list' setting is uncommented.
Back on your client you'll need to install a 'cyrus-sasl-XXXX' RPM that provides the same mechanism the server is set to use. For a modern libvirt install it will probably be using 'cyrus-sasl-scram' for plain username/password auth, but older installs might still be using 'cyrus-sasl-md5'.
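A quick way to check both sides, assuming a dnf-based client (package names follow the answer above):
grep mech_list /etc/sasl2/libvirt.conf   # on the virtualization host: the uncommented line shows the mechanism in use
sudo dnf install cyrus-sasl-scram        # on the client: install the matching mechanism
sudo dnf install cyrus-sasl-md5          # ...or this one, for older installs still using digest-md5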

Openshift OKD 4.5 on VMware

I am getting a connection timeout when running the command on the bootstrap node.
It says the Kubernetes API call is timing out.
Are there any suggestions on the networking configuration, in case I am missing something?
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on the machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on the components below (a short command sketch follows the list):
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
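A minimal debugging sketch along those lines (the core login user and the grep pattern are assumptions; use the IP reported by vCenter):
ssh core@<bootstrap-ip>                                  # FCOS/RHCOS nodes are normally accessed as the 'core' user
sudo crictl ps | grep -E 'kube-apiserver|etcd|machine'   # find the container IDs of the components above
sudo crictl logs <container-id>                          # review the logs of the failing component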

IBM-containers: Retrieving client certificate for IBM containers failed

I am a newbie to Bluemix and am having problems with containers. I installed Docker Toolbox (my OS is Windows 7). Afterwards I installed the Bluemix IBM-Containers plugin. (I checked both installations; everything seemed fine to me.) Then I logged into Bluemix (cf login -a ...), ran the command cf ic login, and got an error message.
I've tried to deploy a sample Java application to Bluemix before. It worked OK, and I did not encounter any network connection problems. I don't understand what the cause of the error is.
Any ideas what could be the problem or how to fix it? Thanks in advance.
You need to login using:
ice login -k API_KEY -R registry-ice.ng.bluemix.net
Source: Alaa Youssef on DeveloperWorks
If this is the first time you are using the container service, you may need to run the cf ic init and cf ic namespace set commands. Take a look here for more information.
If your registry is already configured, please always remember to do a cf login just before doing a cf ic login. If the problem persists and you are able to access the containers service from your Bluemix dashboard, you could try the new bx CLI (which wraps the ic plugin).
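Putting those suggestions together, the sequence would look roughly like this (the API endpoint and namespace are placeholders for your own values):
cf login -a https://api.ng.bluemix.net   # authenticate against your Bluemix region first
cf ic init                               # initialize the IBM Containers plugin
cf ic namespace set <your-namespace>     # only needed once, to claim a registry namespace
cf ic login                              # retrieves the client certificate for IBM Containers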

How to recover the credentials of a cartridge installation?

Is it possible to recover the credentials generated during installation? In particular, those for the JBoss BPM Suite console and dashboard.
The creation took longer than expected and I had to refresh the page, so I couldn't get the green frame with all the details.
If you have rhc command-line installed, then you can simply do
rhc ssh --app <app_name> --command 'env'
Alternatively, go to your application console page (through the OpenShift web UI), grab the SSH connection link (Remote Access) and use it to open a terminal connection into your application space.
Then type "env" and you should see a bunch of environment variables; your credentials should be stored in one of them.

How to ssh into HA application gears?

As was explained in the answer to this question: https://stackoverflow.com/questions/11730590/what-are-some-of-the-tricks-to-using-openshift it should be possible to ssh into some of the other gears when using a scaled app with openshift.
Unfortunately the link mentioned there (https://openshift.redhat.com/community/faq/can-i-access-my-applications-gear) seems to be gone.
Via [my app url]/haproxy-status/ I can see the names of the other gears. They are long names like gear-[long number]-[app name]. Using that name I can no longer ssh into them when I'm ssh'ed into the main gear; ssh there just returns immediately without any error.
If I do ssh blala the same thing happens, so it looks like ssh has been replaced by a no-op command on the primary gear?
When I examine the haproxy conf file, I see entries like;
server gear-[long number]-[app name] ex-std-node[number].prod.rhcloud.com:[number] check fall 2 ...
I tried ssh'ing into this ex-std-node... address as well, both from the main/primary application gear and from my desktop, but it didn't work in either case.
How can I get shell access to my other gears?
This command shows how to access individual gears:
rhc app show <appname> --gears
The last column of output is the SSH URL. It is of the form $UUID@$UUID-$NAMESPACE.rhcloud.com. You can ssh into them directly, and they are also accessible via ssh from the "head" gear; they have to be, since git pushes are synchronized from the head gear to the others via ssh.
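A sketch of what that looks like in practice (the app name, gear UUID and namespace are placeholders):
rhc app show <appname> --gears                        # the last column of each row is that gear's ssh URL
ssh <gear-uuid>@<gear-uuid>-<namespace>.rhcloud.com   # connect directly to that gear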