How to configure k3s through Rancher Desktop

I have a dedicated server running Ubuntu and Rancher Desktop.
I'd like to be able to access the Kubernetes cluster (K3S) from another computer in the same network.
After setting up my kube configuration, I get this error:
Unable to connect to the server: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.5.15, ::1, fec0::5055:55ff:fe8e:47db, not 192.168.1.8
Passing the following configuration through to k3s should solve my problem:
INSTALL_K3S_EXEC="--tls-san 192.168.1.8"
Reading through Rancher Desktop documentation, I found a potential solution.
Based on the documentation, I should be able to pass configuration through to k3s via a provisioning script for Rancher Desktop. It is still unclear to me how I do this for the k3s configuration.
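Rancher Desktop on Linux runs its VM through Lima and merges a user-supplied override.yaml into the VM definition, and k3s also reads settings from /etc/rancher/k3s/config.yaml, so one plausible route is a provisioning script that writes the tls-san entry into that file. A minimal sketch, assuming the Linux override.yaml location from the Rancher Desktop docs (adjust the path and IP for your install):

# ~/.local/share/rancher-desktop/lima/_config/override.yaml (path assumed)
provision:
- mode: system
  script: |
    #!/bin/sh
    # Make k3s present 192.168.1.8 as an additional certificate SAN
    mkdir -p /etc/rancher/k3s
    cat > /etc/rancher/k3s/config.yaml <<'EOF'
    tls-san:
      - "192.168.1.8"
    EOF

After adding the file, restart Rancher Desktop so the provisioning script runs, then re-check the certificate with the curl test below.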

The k3s repository has this issue with a proposed solution at https://github.com/k3s-io/k3s/issues/3369
"SANs should be added to the dynamiclistener cert on demand, based on the SNI hostname requested by the client. Try running the following on the server:"
curl -vk --resolve 172.31.13.97:6443:127.0.0.1 https://172.31.13.97:6443/ping
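Adapted to the setup in the question (substituting the LAN IP for the one in the issue), the same on-demand check would be:

curl -vk --resolve 192.168.1.8:6443:127.0.0.1 https://192.168.1.8:6443/ping

The --resolve flag makes curl connect to 127.0.0.1 while requesting 192.168.1.8, which is what should prompt dynamiclistener to add that address to the certificate on demand.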

Related

SASL Error connecting to remote libvirt over SSH: No worthy mechs found

I have a server running oVirt Node that I'm trying to manage remotely using libvirt. I have an SSH keypair installed and can ssh user@server -i ssh-privkey successfully. When I try to connect to qemu+ssh://user@host/system?keyfile=ssh-privkey, I get this error:
authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
That led me down the path of getting TLS keys and certificates installed on the client and the server, mostly according to these instructions (the configuration is slightly different because I have only one host and am using Terraform to manage the certificates*). However, I still get the same error. When I look at the output of libvirtd --listen --verbose on the server when a connection fails, the only useful output is this:
error : virNetSocketReadWire:1792 : End of file while reading data: Input/output error
I have checked every firewall between the client and the server and they should all be wide open. What else could be the cause of this error?
* The goal is ultimately to use Terraform to provision libvirt resources; however, I get the same errors trying to connect with virsh and virt-manager.
UPDATE: It's easier to connect just via SSH; this question exists because I couldn't figure out how to turn off SASL. It turns out SASL is enabled for SSH connections due to vdsm setting auth_unix_rw="sasl" in /etc/libvirt/libvirtd.conf. Removing that config means I can just use my SSH private key as I intended. The TLS configuration was a wild goose chase that was further hindered by vdsm changing the configured location of all the PKI files.
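As a sketch, the change described in the update looks like this (the file path is the one named above; back it up before editing):

# In /etc/libvirt/libvirtd.conf, comment out or remove the vdsm-added line:
#auth_unix_rw="sasl"
# Then restart the daemon so the change takes effect:
sudo systemctl restart libvirtd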
You're likely missing an RPM package on your client host. First, on the virtualization host, check /etc/sasl2/libvirt.conf and see which 'mech_list' setting is uncommented.
Back on your client, you'll need to install a 'cyrus-sasl-XXXX' RPM that provides the same mechanism the server is set to use. A modern libvirt install will probably be using 'cyrus-sasl-scram' for plain username/password auth, but older installs might still be using 'cyrus-sasl-md5'.
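A sketch of that check and fix, assuming a Fedora/RHEL-family client to match the RPM wording above:

# On the virtualization host: see which mechanism list is active
grep -v '^#' /etc/sasl2/libvirt.conf | grep mech_list
# On the client: install the matching mechanism, e.g. for scram:
sudo dnf install cyrus-sasl-scram
# ...or, on older installs that still use digest-md5:
sudo dnf install cyrus-sasl-md5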

Error on Docker Hub image pull with OpenShift CodeReady

I have a CodeReady Containers Red Hat OpenShift Container Platform cluster running on Windows 10.
I have a Docker image hosted on Docker Hub (public). When I try to create a Pod using this image, it fails on the image download.
What could be the possible reason?
Internal error occurred: docker.io/ajmaly/public:latest: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.217.4.10:53: server misbehaving
The output for Resolve-DnsName registry-1.docker.io is below,
Looks like the DNS configuration wasn't set up properly. Section 5.1 ("DNS configuration details") of the guide linked below covers this.
registry-1.docker.io resolves here to these IPs:
dig +short registry-1.docker.io
34.238.187.50
54.224.119.26
34.231.251.252
107.23.149.57
3.224.96.239
3.229.227.53
18.213.137.78
3.220.36.210
Maybe this link could help you fix your issue:
Installing CodeReady Containers (OpenShift4) on Hyper-V (Win 10)
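A quick way to tell cluster DNS problems apart from host DNS problems is to resolve the registry from inside the cluster. A hedged sketch using a throwaway pod (busybox is an arbitrary small image):

oc run dns-test --image=busybox --restart=Never --rm -it -- nslookup registry-1.docker.io

If this lookup fails while dig on the host succeeds, the fault is in the CRC VM's DNS forwarding (the 10.217.4.10 resolver in the error) rather than in the image or Docker Hub.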

OpenShift OKD 4.5 on VMware

I am getting a connection timeout when running the bootstrap command.
Any suggestions on the networking configuration, in case I am missing something?
It says the Kubernetes API call timed out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
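As a concrete sketch of that workflow (the node IP comes from vCenter, the key filename is an assumption, and core is the default user on Fedora CoreOS nodes):

# SSH to the bootstrap or a master node with the key from your install-config
ssh -i ~/.ssh/okd_key core@<node-ip>
# List containers and pull the logs of the interesting ones
sudo crictl ps -a | grep -E 'kube-apiserver|etcd|machine'
sudo crictl logs <container-id>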

Can't do cf ic login with http proxy

I am using Bluemix container service and am unable to do cf ic login from behind a firewall, even though I have configured proxies.
When I do
cf ic -v login
I get the error message:
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
FAILED
The attempt to authenticate with the IBM Containers registry host registry.ng.bluemix.net was unsuccessful.
****Warning: '-e' is deprecated, it will be removed soon. See usage.
Error response from daemon: Get https://registry.ng.bluemix.net/v1/users/: dial tcp 198.23.117.106:443: i/o timeout
To test that my proxy is configured, I do this:
wget https://registry.ng.bluemix.net/v1/users/
--2016-10-25 11:25:23--  https://registry.ng.bluemix.net/v1/users/
Resolving proxy-chain.intel.com (proxy-chain.intel.com)... 10.19.8.225
Connecting to proxy-chain.intel.com (proxy-chain.intel.com)|10.19.8.225|:912... connected.
Proxy request sent, awaiting response... 404 Not Found
2016-10-25 11:25:24 ERROR 404: Not Found.
If I disconnect from the VPN, so that I am no longer behind the firewall and do not need a proxy, and unset my proxies, it works.
These are the proxies I have set:
printenv | grep -i proxy
http_proxy=http://proxy-chain.intel.com:911
ftp_proxy=http://proxy-chain.intel.com:911
socks_proxy=http://proxy-chain.intel.com:1080
https_proxy=http://proxy-chain.intel.com:912
no_proxy=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,127.0.0.0/8,134.134.0.0/16
More experiments:
When I set the proxy to something bogus, it fails immediately:
> export https_proxy=http://foobarsfsdf.com
> cf ic login
FAILED
auth request failed: Error performing request: Post https://login.ng.bluemix.net/UAALoginServerWAR/oauth/token: http: error connecting to proxy http://foobarsfsdf.com: dial tcp: lookup foobarsfsdf.com on 10.0.2.3:53: no such host
When I set the proxy correctly, it fails later:
> cf ic login
Deleting old configuration file...
Retrieving client certificates for IBM Containers...
Storing client certificates in /home/rscohn1/.ice/certs/...
Storing client certificates in /home/rscohn1/.ice/certs/containers-api.ng.bluemix.net/80cc2e8c-4df0-4700-bd04-77f2e8777f80...
OK
The client certificates were retrieved.
Checking local Docker configuration...
OK
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
FAILED
The attempt to authenticate with the IBM Containers registry host registry.ng.bluemix.net was unsuccessful.
****Warning: '-e' is deprecated, it will be removed soon. See usage.
Error response from daemon: Get https://registry.ng.bluemix.net/v1/users/: dial tcp 198.23.117.106:443: i/o timeout
When you are not connected to the IBM Containers registry host, you can run only a limited number of IBM Containers commands. Check the spelling of the host URL and try again. If the host URL is correct, open a new command line or terminal window before retrying.
It looks like some parts of the ic plugin use proxies, and some parts do not.
You need to add the proxy to your Docker daemon configuration. Also note that, as Alex says, you should make sure to configure an HTTPS proxy.
See here for some information on how to do that with Systemd on Linux (Ubuntu 16.04+): https://docs.docker.com/engine/admin/systemd/#http-proxy
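A minimal sketch of that systemd drop-in, filled in with the proxy values from the question (the file name is conventional; any .conf file in that directory is read):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-chain.intel.com:911"
Environment="HTTPS_PROXY=http://proxy-chain.intel.com:912"
Environment="NO_PROXY=localhost,127.0.0.0/8,intel.com,.intel.com"

Then reload and restart so the daemon picks up the environment:

sudo systemctl daemon-reload
sudo systemctl restart docker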
For older Linux distributions, such as Ubuntu versions before 16.04, Docker uses Upstart. You'll find the Upstart configuration file at /etc/default/docker, with a sample of how to set the proxy up in comments inside that file.
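For the Upstart case, the equivalent sketch (assuming the stock /etc/default/docker shipped by the package) is to uncomment or add the proxy lines and restart:

# /etc/default/docker
export http_proxy="http://proxy-chain.intel.com:911/"
export https_proxy="http://proxy-chain.intel.com:912/"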
If you're using the Docker for Mac or Docker for Windows apps, you'll find the proxy configuration options in Preferences -> Advanced.
Make sure to restart Docker after changing the configuration, so that your changes take effect. On Linux: sudo service docker restart. On Mac or Windows, right-click the Docker icon and click restart.

OpenShift unable to connect to the server

I am having issues setting up OpenShift and am getting the following error after connecting to my server domain:
Command:
User$ rhc setup --server=app-domain.rhcloud.com
Result:
The server has rejected your connection attempt with an older SSL protocol.
Pass --ssl-version=sslv3 on the command line to connect to this server.
I am not sure what this is telling me to do. I tried following the instruction literally, and the command is not recognized.
Any ideas?
You should not pass rhc setup the --server flag unless you are running your own OpenShift Origin or OpenShift Enterprise broker. For OpenShift Online, just run the rhc setup command with no other options and it will set up fine. If that command messed up your express.conf file (which it should not have), you just need to delete your ~/.openshift/express.conf file and run rhc setup again without any flags. Basically, you tried to point rhc at your gear as if it were an OpenShift Online broker, which will not work.
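In other words (the path is the one named above):

rm ~/.openshift/express.conf
rhc setup   # no --server flag for OpenShift Online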
I ended up answering this on another forum post:
The only way this worked for me was to create an SSH key locally with ssh-keygen -p, without rhc setup, and not give it a password. I then went back to OpenShift, clicked Add a key, and pasted in the contents of my public key file.
There is obviously some kind of bug with authentication on OpenShift, or the installation is not right.
It would be good to find out what is going on and why it works when I do it this way.
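For reference, a sketch of that workaround; note that ssh-keygen -p only changes the passphrase of an existing key, so generating a fresh key with an empty passphrase looks like this (default file path assumed):

# Generate a key pair with an empty passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Paste the *public* half into the OpenShift "Add a key" form
cat ~/.ssh/id_rsa.pub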