I am getting a connection timeout when running the bootstrap command.
Any configuration suggestions on the networking side, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Consider deploying a separate VM into the network to check that everything works as expected.
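For example, from such a VM you can check the DNS records that an OKD / OpenShift UPI install expects (the cluster name and base domain below are placeholders; substitute the values from your install-config.yaml):
$ dig +short api.<cluster_name>.<base_domain>        # must resolve to the API VIP / load balancer
$ dig +short api-int.<cluster_name>.<base_domain>    # internal API record
$ dig +short test.apps.<cluster_name>.<base_domain>  # any name under the *.apps wildcard used by the ingress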
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
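For example (the IP and key path are placeholders; core is the default user on the Fedora CoreOS nodes):
$ ssh -i ~/.ssh/id_rsa core@<bootstrap-ip>            # IP taken from vCenter
$ sudo crictl ps                                      # list running containers
$ sudo crictl ps -a | grep -E 'kube-apiserver|etcd'   # include containers that already exited
$ sudo crictl logs <container-id>                     # logs of a specific container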
I have downloaded CodeReady Containers on Windows to install my OpenShift cluster. I need to deploy 3scale on it using the operator from OperatorHub, but the OperatorHub page is empty.
Digging deeper, I found that a few pods on the cluster are not running and show the state "ImagePullBackOff".
I deleted the pods to get them restarted, but the error won't go away. I checked the event logs and the screenshots are attached.
Pods Terminal logs
This is an error that I keep getting when I start my cluster. Sometimes it shows up, sometimes the cluster starts normally, but maybe it has something to do with the issue.
Quay.io Error
This is my first time deploying on an OpenShift cluster and setting up my cluster environment. So far I have not been able to resolve the issue, even after deleting and restarting the cluster.
I recently deployed an instance of Ubuntu 16.04 on FIWARE Lab and accessed it using PuTTY. I downloaded Docker & docker-compose and successfully installed fiware-orion & mongo-db while following the tutorial. I then tried to follow the IoT sensor tutorial, but whenever I try to start the service it gets stuck in this infinite loop -> Context Broker HTTP state : 000 (waiting for 200).
Any suggestions?
Details
region: crete
image: ubuntu 16.04
putty infinite loop
The problem was that the docker-compose file did not include the Orion (and MongoDB) instances, which are required dependencies for this tutorial. We have updated the corresponding docker-compose file to include both dependencies and now it is working properly. Tips: do not forget to open the corresponding port (3000) in the security group and assign a floating IP to the virtual machine in order to access /device/monitor (do not use localhost for accessing it).
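For example, once the updated docker-compose is up, you can check both services yourself (the floating IP is a placeholder; 1026 is Orion's default port and 3000 is the device monitor used in the tutorial):
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:1026/version              # Orion: expect 200
$ curl -s -o /dev/null -w "%{http_code}\n" http://<floating-ip>:3000/device/monitor   # device monitor: expect 200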
I am having issues with setting up OpenShift and getting the following error after connecting to my server domain:
Command:
User$ rhc setup --server=app-domain.rhcloud.com
Result:
The server has rejected your connection attempt with an older SSL protocol.
Pass --ssl-version=sslv3 on the command line to connect to this server.
I am not sure what this is telling me to do. I tried following the instruction literally, but the command is not recognized.
Any ideas?
You should not pass rhc setup the --server flag unless you are running your own OpenShift Origin or OpenShift Enterprise broker. For OpenShift Online, just run the rhc setup command with no other options and it will set up fine. If that command messed up your express.conf file (which it should not have), you just need to delete your ~/.openshift/express.conf file and then run rhc setup again without any flags. Basically, you tried to point rhc at your gear as if it were an OpenShift Online broker, which will not work.
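In other words, something like this should be enough to reset it (the path is the default rhc configuration location):
$ rm ~/.openshift/express.conf    # discard the broken configuration
$ rhc setup                       # re-run setup without --server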
I ended up answering this on another forum post:
The only way this worked for me was to actually create an SSH key locally with ssh-keygen -p, without rhc setup, and not give it a password. I then went back to OpenShift, clicked "Add a key", and pasted the contents of my RSA key file.
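A rough sketch of that flow, using plain ssh-keygen to create a new key pair (the id_rsa paths are the defaults; press Enter at the passphrase prompt to leave it empty):
$ ssh-keygen -t rsa               # generate a key pair without a passphrase
$ cat ~/.ssh/id_rsa.pub           # paste this output into the "Add a key" form on OpenShift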
There is obviously some kind of bug with authentication on OpenShift, or the installation is not right.
It would be good to find out what is going on and why it works when I do it this way.
We're using OpenNebula to simulate a simple replicated JBoss application.
We've installed all the OpenNebula packages, QEMU, KVM, and libvirt.
We have created a simple ad hoc Ethernet network between my PC (a node) and my friend's PC (which is both a node and the front-end) by plugging an Ethernet cable between the two machines (10.0.0.1 and 10.0.0.2).
We can ping each other correctly, and we've set everything up so that we can SSH to each other without a password as the "oneadmin" user.
We've configured all the relevant files, such as:
/etc/libvirt/libvirtd.conf
/etc/default/libvirtd-bin
And so on...
kvm and kvm-intel are both enabled.
The daemon
libvirtd -d -l
seems to start correctly.
In fact, from the OpenNebula GUI on the front-end, we can see both hosts being monitored.
However, there's a problem when we try to start a virtual machine on the node that is not the front-end, i.e. when we try to deploy a VM on the other node. The error is something like this:
cannot stat `/var/lib/one/datastores/1/f5394317d377beaa09fc07697df9ff68
but if, from the front-end (which has virtual machine no. 1), we run
cd /var/lib/one/datastores/1
then we can see that file; we've also given it full permissions...
Any idea? :(
This may be related to the datastore configuration. If you left the default values, OpenNebula expects a shared filesystem (i.e. NFS) between the front-end and the virtualization nodes.
More context on the error (which I believe can be found in /var/lib/one/oned.log) would help with analysing this problem.
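For example, a minimal sketch of that shared setup, using the default datastore path and the two addresses from the question (adjust as needed):
# on the front-end (10.0.0.2): export the datastores directory over NFS
$ echo '/var/lib/one/datastores 10.0.0.1(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
$ sudo exportfs -ra
# on the node (10.0.0.1): mount it at the same path
$ sudo mount -t nfs 10.0.0.2:/var/lib/one/datastores /var/lib/one/datastores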
I am trying out a "Hello World" exercise for GCE. First, I went with the CentOS image, added the instance, installed Apache, and added the firewall rule. Everything looks good as far as configuration is concerned, but when I try to access the web page from outside, it cannot reach the page.
The local Apache server is running; from the instance itself I can do a curl and all is well.
On the other hand, if I try out the same exact steps with the Debian distribution, everything works smoothly.
I saw another post that mentioned additional firewall settings, but I have not tried that and I am not sure why it should be needed either.
Can anyone explain whether the CentOS setup needs additional firewall settings and what those are?
CentOS defaults to a restrictive operating-system-level firewall (using iptables), while Debian defaults to a permissive one. You can relax the firewall rules on CentOS as well. When running on Compute Engine, the service-level firewall will only allow connections from the internet on the ports you have configured.
To relax the CentOS firewall:
$ sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
Then test that your connections work as expected. To save this configuration across system reboots:
$ /sbin/service iptables save
See the IPTables HowTo on the CentOS wiki for more information about working with iptables on CentOS.
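To double-check what is currently active (not part of the original answer, just a quick verification):
$ sudo iptables -L INPUT -n --line-numbers    # the REJECT ... icmp-host-prohibited rule should be gone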
You need to free the ports in the cloud console.
Watch this video, which explains the process:
Google Compute Engine Test Drive
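If you prefer the command line over the console, the equivalent is roughly this (the rule name is arbitrary; it assumes the default network and HTTP on port 80):
$ gcloud compute firewall-rules create allow-http --allow tcp:80 --source-ranges 0.0.0.0/0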