I am using the Bluemix container service and am unable to do cf ic login from behind a firewall, even though I have configured proxies.
When I do
cf ic -v login
I get the error message:
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
FAILED
The attempt to authenticate with the IBM Containers registry host registry.ng.bluemix.net was unsuccessful.
****Warning: '-e' is deprecated, it will be removed soon. See usage.
Error response from daemon: Get https://registry.ng.bluemix.net/v1/users/: dial tcp 198.23.117.106:443: i/o timeout
To test that my proxy is configured, I do this:
wget https://registry.ng.bluemix.net/v1/users/
--2016-10-25 11:25:23--  https://registry.ng.bluemix.net/v1/users/
Resolving proxy-chain.intel.com (proxy-chain.intel.com)... 10.19.8.225
Connecting to proxy-chain.intel.com (proxy-chain.intel.com)|10.19.8.225|:912... connected.
Proxy request sent, awaiting response... 404 Not Found
2016-10-25 11:25:24 ERROR 404: Not Found.
If I disconnect from the VPN, so that I am no longer behind the firewall and no longer need a proxy, and unset my proxy variables, it works.
These are the proxies I have set:
printenv | grep -i proxy
http_proxy=http://proxy-chain.intel.com:911
ftp_proxy=http://proxy-chain.intel.com:911
socks_proxy=http://proxy-chain.intel.com:1080
https_proxy=http://proxy-chain.intel.com:912
no_proxy=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,127.0.0.0/8,134.134.0.0/16
More experiments:
When I set the proxy to something bogus, it fails immediately:
> export https_proxy=http://foobarsfsdf.com
> cf ic login
FAILED
auth request failed: Error performing request: Post https://login.ng.bluemix.net/UAALoginServerWAR/oauth/token: http: error connecting to proxy http://foobarsfsdf.com: dial tcp: lookup foobarsfsdf.com on 10.0.2.3:53: no such host
When I set the proxy correctly, it fails later:
> cf ic login
Deleting old configuration file...
Retrieving client certificates for IBM Containers...
Storing client certificates in /home/rscohn1/.ice/certs/...
Storing client certificates in /home/rscohn1/.ice/certs/containers-api.ng.bluemix.net/80cc2e8c-4df0-4700-bd04-77f2e8777f80...
OK
The client certificates were retrieved.
Checking local Docker configuration...
OK
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
FAILED
The attempt to authenticate with the IBM Containers registry host registry.ng.bluemix.net was unsuccessful.
****Warning: '-e' is deprecated, it will be removed soon. See usage.
Error response from daemon: Get https://registry.ng.bluemix.net/v1/users/: dial tcp 198.23.117.106:443: i/o timeout
When you are not connected to the IBM Containers registry host, you can run only a limited number of IBM Containers commands. Check the spelling of the host URL and try again. If the host URL is correct, open a new command line or terminal window before retrying.
It looks like some parts of the ic plugin use proxies, and some parts do not.
You need to add the proxy to your Docker daemon configuration. Also note that, as Alex says, you should make sure to configure an HTTPS proxy.
See here for some information on how to do that with Systemd on Linux (Ubuntu 16.04+): https://docs.docker.com/engine/admin/systemd/#http-proxy
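For example, on a systemd-based distribution, a drop-in file like the following should work. This is a minimal sketch using the proxy values from the question, so substitute your own:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy-chain.intel.com:911"
Environment="HTTPS_PROXY=http://proxy-chain.intel.com:912"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker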
For older Linux distributions, such as Ubuntu versions before 16.04, Docker uses Upstart. You'll find the Upstart configuration file at /etc/default/docker, with a sample of how to set the proxy up in comments inside that file.
If you're using the Docker for Mac or Docker for Windows apps, you'll find the proxy configuration options in Preferences -> Advanced.
Make sure to restart Docker after changing the configuration, so that your changes take effect. On Linux: sudo service docker restart. On Mac or Windows, right-click the Docker icon and click restart.
Related
I have a dedicated server running Ubuntu and Rancher Desktop.
I'd like to be able to access the Kubernetes cluster (K3S) from another computer in the same network.
In doing so, after I've setup my kube configuration, I'm getting an error.
Unable to connect to the server: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.5.15, ::1, fec0::5055:55ff:fe8e:47db, not 192.168.1.8
Passing the following config through to K3S should solve my problem
INSTALL_K3S_EXEC="--tls-san 192.168.1.8"
Reading through Rancher Desktop documentation, I found a potential solution.
Based on the documentation, I should be able to pass config through to k3s via a provisioning script for Rancher Desktop. It is still unclear to me how to do this for the k3s configuration.
The k3s repository has this issue with a proposed solution at https://github.com/k3s-io/k3s/issues/3369
"SANs should be added to the dynamiclistener cert on demand, based on the SNI hostname requested by the client. Try running the following on the server:"
curl -vk --resolve 172.31.13.97:6443:127.0.0.1 https://172.31.13.97:6443/ping
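For the provisioning-script route asked about above, k3s also reads /etc/rancher/k3s/config.yaml, so a script that writes the extra SAN there before k3s starts should have the same effect as INSTALL_K3S_EXEC. Whether Rancher Desktop's provisioning hook runs early enough is an assumption; a minimal sketch of the script body:

# Assumed to run inside the Rancher Desktop VM before k3s starts
mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/config.yaml <<'EOF'
tls-san:
  - "192.168.1.8"
EOF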
I have a server running Ovirt Node that I'm trying to manage remotely using libvirt. I have an SSH keypair installed and can ssh user@server -i ssh-privkey successfully. When I try to connect to qemu+ssh://user@host/system?keyfile=ssh-privkey, I get this error:
authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
That led me down the path of getting TLS keys and certificates installed on the client and the server, mostly according to these instructions (the configuration is slightly different because I have only one host and am using Terraform to manage the certificates*). However, I still get the same error. When I look at the output of libvirtd --listen --verbose on the server when a connection fails, the only useful output is this:
error : virNetSocketReadWire:1792 : End of file while reading data: Input/output error
I have checked every firewall between the client and the server and they should all be wide open. What else could be the cause of this error?
* The goal is ultimately to use Terraform to provision libvirt resources; however, I get the same errors trying to connect with virsh and virt-manager.
UPDATE: It's easier to connect just via SSH; this question exists because I couldn't figure out how to turn off SASL. It turns out SASL is enabled for SSH connections due to vdsm setting auth_unix_rw="sasl" in /etc/libvirt/libvirtd.conf. Removing that config means I can just use my SSH private key as I intended. The TLS configuration was a wild goose chase that was further hindered by vdsm changing the configured location of all the PKI files.
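Concretely, the change that worked, as a sketch (the config path is from the update above; the virsh URI host and key path are placeholders):

# On the Ovirt host: comment out the vdsm-set value and restart libvirtd
sudo sed -i 's/^auth_unix_rw="sasl"/#auth_unix_rw="sasl"/' /etc/libvirt/libvirtd.conf
sudo systemctl restart libvirtd
# From the client: connect over plain SSH with the keyfile
virsh -c 'qemu+ssh://user@host/system?keyfile=/path/to/ssh-privkey' list --all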
You're likely missing an RPM package on your client host. First, on the virtualization host, check /etc/sasl2/libvirt.conf and see which 'mech_list' setting is uncommented.
Back on your client, you'll need to install a 'cyrus-sasl-XXXX' RPM that provides the same mechanism that the server is set to use. For a modern libvirt install it will probably be 'cyrus-sasl-scram' for plain username/password auth, but older installs might still be using 'cyrus-sasl-md5'.
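As a sketch (package names as in the answer; dnf assumes a Fedora/RHEL-family client):

# On the virtualization host: see which mechanism libvirtd expects
grep -v '^#' /etc/sasl2/libvirt.conf | grep mech_list
# On the client: install the matching mechanism plugin
sudo dnf install cyrus-sasl-scram    # or cyrus-sasl-md5 on older installs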
So I followed this tutorial to install and configure a MySQL server on an AWS instance, originally running on EC2.
When I tried to log back in to the server via ssh, I would get a port 22: Connection timed out error.
So I tried to do the same on Lightsail and ended up getting the same error when I tried to log back in.
Is this a known issue? Am I doing anything wrong? Is there a way to fix this?
Thanks.
The mentioned tutorial says to enable the firewall to allow MySQL remote access:
sudo ufw enable
sudo ufw allow mysql
which allows only MySQL and blocks every other incoming request, whether SSH, HTTP, or anything else you have defined in the security group of the EC2 instance.
In my case I had allowed the following inbound rule, but nothing was working; even SSH said connection refused.
To get this working, either disable the firewall or allow the required ports in the firewall. Of course, you still need to log in to the EC2 instance to get this done.
There are three ways to connect to an EC2 instance.
SSH was not working, so I chose Session Manager (browser-based SSH). I followed this video and was able to connect to the instance through Session Manager.
After logging in, I just disabled the firewall and everything worked fine.
sudo ufw disable
All the inbound rules now work properly. Hope it works for you.
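If you would rather keep ufw enabled instead of disabling it, explicitly allowing SSH avoids the lockout. A sketch (the OpenSSH profile name is standard on Ubuntu):

sudo ufw allow OpenSSH    # or: sudo ufw allow 22/tcp
sudo ufw allow mysql      # 3306/tcp
sudo ufw enable
sudo ufw status verbose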
On OCP 4.3, the oc login command generated from the dashboard's "Copy Login Command"
oc login --token=asdfghjk... --server=https://api.xxx.com:6443
fails with:
error: dial tcp: lookup api.xxx.com on 192.168.0.1:53: no such host - verify you have provided the correct host and port and that the server is currently running.
When I substitute the public IP of my cluster for the hostname, it works.
oc login --token=asdfghjk... --server=https://1.2.3.4:6443
I can successfully ping api.xxx.com, the curl command generated by "Copy Login Command" resolves the hostname, and the curl URL also works in Chrome. I've tried adding the host and public IP to my /etc/hosts file, but it still fails.
Is there some oc command configuration option I'm missing? Or perhaps a local proxy that I need to start? (Odd that the error msg says ...on 192.168.0.1:53...)
Versions:
$ oc version
Client Version: openshift-clients-4.3.0-201910250623-88-g6a937dfe
Server Version: 4.3.0
Kubernetes Version: v1.16.2
Update:
I've opened an oc issue for this:
https://github.com/openshift/oc/issues/315
This is not a problem with the oc client. It is working as expected.
The DNS server used by the machine you're running the oc command on does not know about the OpenShift DNS entries.
Judging by the IP 192.168.0.1, it's your router.
If you deployed OpenShift in the cloud you need to make sure you're using a Public DNS zone so the DNS entries are resolvable from anywhere.
Alternatively, you could put those entries in the /etc/hosts file on your local Linux machine (if it's Windows, the path is different), or you could put them in the DNS settings in your router.
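For the /etc/hosts route, a sketch using the IP and domain from the question; the *.apps entries are assumptions, since oc login also contacts the OAuth route:

1.2.3.4   api.xxx.com
1.2.3.4   oauth-openshift.apps.xxx.com
1.2.3.4   console-openshift-console.apps.xxx.com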
I encountered a similar "no such host" problem when running the oc rsh command. After oc logout and oc login again, the problem was resolved.
Had the same problem today on macOS.
Ping could resolve the host, BUT nslookup and dig both could NOT resolve it, and the nameserver that dig and nslookup used was my default gateway address on port 53.
Fix:
Go to System Preferences > Network > Advanced > DNS tab. Add name servers that resolve the hostname, which in my case are intranet nameservers (I'm on VPN). I also added several public nameservers just in case.
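The same check and change can be made from a terminal. A sketch, where the network service name and DNS IPs are placeholders:

# Inspect the resolvers macOS is actually using
scutil --dns | grep nameserver
# Set DNS servers for a network service from the command line
networksetup -setdnsservers "Wi-Fi" 10.0.0.2 8.8.8.8
networksetup -getdnsservers "Wi-Fi"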
Now dig / nslookup resolve the host, and my oc login works.
Conclusion?
I'm not sure this is an oc issue as much as a VPN configuration problem. It seems the VPN did not add the intranet DNS servers properly. However, I cannot explain why, before I added the nameservers, ping worked but dig/nslookup did not.
I have spent a couple days trying to install software on Google Compute Engine (GCE) and then remotely access it from either my windows pc or local linux machine.
I can install software, like Google Chrome, etc. but can't open the applications as I keep getting display issues (understandably because GCE is headless). So I'm trying to VNC into the GCE instance.
I have tried installing the following on the server: (Instance Name is "talend")
vnc4server: I get output saying the server is running and everything looks good. The only error I get is a language error like the following:
steven@talend:~$ vnc4server -geometry 1440x900 :1
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_ZA.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
A VNC server is already running as :1
and
steven@talend:~$ vnc4server -geometry 1440x900 :2
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_ZA.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
New 'talend:2 (steven)' desktop is talend:2
Starting applications specified in /home/steven/.vnc/xstartup
Log file is /home/steven/.vnc/talend:2.log
Remote Access: Using TightVNC client via Windows
I get the following message:
No connection could be made because the target machine actively refused it.
Remote Access: Using Vinagre via Linux
Connection to host 8.34.210.67::5902 was closed.
Via Google Compute Engine Web Console:
Tried changing to static ip > No Difference
Tried adding tcp:80 with Source: 0.0.0.0/0 > No Difference
I'm sure there is a simple solution to this, but I can't seem to find it. Any help will be appreciated, and I will then post a link to the final solution.
Thanks.
You will need to configure three settings to all agree on the same port:
The port vnc4server is listening on.
A Compute Engine firewall rule to allow traffic on that port.
The port TightVNC is attempting to connect to.
From the error message "Connection to host 8.34.210.67::5902 was closed.", it looks like TightVNC is trying to connect to 5902. Assuming that vnc4server is also listening on that port, you should add a Compute Engine firewall rule to allow that port.
Visit the Console at https://cloud.google.com/console, click on your project, then Compute Engine, then Networks. Click the "Create new" next to "Firewalls" and add a new rule with tcp:5902 set in the Ports/Protocols field.
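The same rule can be created from the command line. A sketch with the gcloud CLI (the rule name is a placeholder, and you may want to restrict source-ranges rather than allowing everyone):

gcloud compute firewall-rules create allow-vnc \
    --allow tcp:5902 \
    --source-ranges 0.0.0.0/0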
If you're running on CentOS, there is an additional step to disable the local firewall as well: CentOS Firewall Issues on GCE.
Another option is to use Guacamole and Tomcat to access your desktop via a browser or VNC client.
Use Aptitude or apt-get to install guacamole-tomcat. I have the VNC port in firewall settings (via tags) as well as http and https. I've set up a "guacamole" tag to use with the firewall as well. Your GCE instance will need these tags assigned. There are some configs to do via /etc/guacamole/ for user/login etc, but essentially it goes like this...
Once installed, the default port is 8080. So browse to http://<your-server-ip>:8080/guacamole/ and you will get a Guacamole login screen. When you log in, you will have links to click that start your desktop in a browser window.
You can also VNC directly (no browser) via <your-server-ip>:5901, or whatever port you configured Guacamole with. It's best of course to have set up a static IP first.
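For the /etc/guacamole/ user/login configuration mentioned above, a minimal sketch; the username, password, connection name, and port are placeholders, and the Tomcat service name varies by release:

sudo tee /etc/guacamole/user-mapping.xml <<'EOF'
<user-mapping>
  <authorize username="steven" password="changeme">
    <connection name="talend-desktop">
      <protocol>vnc</protocol>
      <param name="hostname">localhost</param>
      <param name="port">5901</param>
    </connection>
  </authorize>
</user-mapping>
EOF
sudo service tomcat8 restart   # exact Tomcat service name depends on the release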
Try:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
If it is not similar, flush the rules:
sudo iptables -F
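Note that iptables -F only clears the rules currently loaded in memory; if a service such as ufw, firewalld, or iptables-persistent re-applies rules at boot, they will come back after a reboot, so check for those as well.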