IBM-containers: Retrieving client certificate for IBM containers failed

I am a newbie to Bluemix and am having problems with containers. I installed Docker Toolbox (my OS is Windows 7) and then installed the Bluemix IBM-Containers plugin. (I checked both installations; everything seemed fine to me.) I then logged into Bluemix (cf login -a ...) and ran the command cf ic login, which failed with an error message.
I had previously deployed a sample Java application to Bluemix and it worked fine, with no network connection problems, so I do not understand the cause of this error.
Any ideas what the problem could be or how to fix it? Thanks in advance.

You need to log in using:
ice login -k API_KEY -R registry-ice.ng.bluemix.net
Source: Alaa Youssef on DeveloperWorks

If this is the first time you are using the container service, you may need to run the cf ic init and cf ic namespace set commands. Take a look here for more information.
If your registry is already configured, always remember to do a cf login just before doing a cf ic login. If the problem persists and you are able to access the containers service from your Bluemix dashboard, you could try the new bx CLI (which wraps the ic plugin).
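For a first-time setup, the full sequence typically looks like the following sketch. The API endpoint and namespace below are placeholders; substitute your own region and namespace:

```shell
# Log in to Bluemix first (the US South endpoint is used as an example).
cf login -a https://api.ng.bluemix.net

# Initialize the IBM Containers plugin; this retrieves the client certificates
# that the "Retrieving client certificate ... failed" error refers to.
cf ic init

# A registry namespace must be set once per account ("mynamespace" is a placeholder).
cf ic namespace set mynamespace

# With the certificates and namespace in place, the container login should succeed.
cf ic login
```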

Related

Spring boot app deployed to Heroku can't connect to a ClearDB after upgrading addon

I have a Spring Boot application deployed to Heroku. I had been using the ClearDB Ignite (free) plan for some time and everything worked perfectly, but today I upgraded the ClearDB add-on to Punch (paid) because I needed more capacity. Now when I open my app it seems unable to connect to the database, and the logs confirm this.
So I imported my local MySQL dump into ClearDB, and I can see all my tables when I connect to it through the command line. Basically I did all the same steps as when I was using Ignite.
Interestingly, when I was on Ignite and ran the heroku run env command, variables such as SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME, and SPRING_DATASOURCE_PASSWORD were already set in the environment variables list, so I did not have to specify them in my application.properties file.
Now when I run that command, only CLEARDB_DATABASE_URL is there. I tried to set them inside the properties file.
(I changed the username and password for the screenshot.)
I also tried setting these using DataSourceBuilder, but nothing seems to work.
Does anyone have any ideas what it might be?
Thanks in advance.
Resolved by creating a new Heroku app with only the ClearDB Punch add-on and importing the local database; everything works now. It looks like adding the new ClearDB add-on while still using the free one mixed up some config settings; I do not have any other explanation. Anyway, problem solved.
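For anyone hitting the same symptom, the missing SPRING_DATASOURCE_* values can also be derived by hand from CLEARDB_DATABASE_URL, which ClearDB always sets. A minimal POSIX-shell sketch of the parsing (the URL below is a made-up example, not real credentials):

```shell
#!/bin/sh
# A ClearDB URL has the form mysql://user:pass@host/dbname?reconnect=true.
# Example value only; in practice read it with: heroku config:get CLEARDB_DATABASE_URL
cleardb_url="mysql://b1234abcd:secretpw@us-cdbr-east-02.cleardb.com/heroku_db123?reconnect=true"

# Strip the scheme, then split on "@", ":", "/" and "?" with parameter expansion.
rest="${cleardb_url#mysql://}"
userpass="${rest%%@*}"
hostpath="${rest#*@}"

db_user="${userpass%%:*}"
db_pass="${userpass#*:}"
db_host="${hostpath%%/*}"
db_name_q="${hostpath#*/}"
db_name="${db_name_q%%\?*}"

# The pieces Spring Boot expects, ready to hand to "heroku config:set".
echo "SPRING_DATASOURCE_URL=jdbc:mysql://${db_host}/${db_name}"
echo "SPRING_DATASOURCE_USERNAME=${db_user}"
echo "SPRING_DATASOURCE_PASSWORD=${db_pass}"
```

The printed lines can then be applied with heroku config:set, which makes the app behave as it did on Ignite without touching application.properties.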

How to use CDK/minishift OpenShift cluster with kubectl

I have installed CDK on my Windows 10 laptop.
I am following documentation on using IBM Blockchain Platform with RedHat OpenShift.
One of the first steps is issuing kubectl commands.
I see CDK comes with the OpenShift CLI (oc) installed, but not with kubectl. Do I need to install kubectl separately? If so, how do I configure kubectl to know about my OpenShift cluster running in CDK/minishift?
To answer your specific question, any time you see a "kubectl" command you can replace it with "oc".
You can also download kubectl directly from upstream, and it will use the same (by default, or use $KUBECONFIG to override) ~/.kube/config file.
However, you should know that CDK is based on OpenShift 3.11.z and is approaching end-of-life. I would suggest you take a look at CRC, which is based on 4.x. Start here for more information -- https://console.redhat.com/openshift/create/local
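A minimal sketch of both approaches, assuming a running minishift profile (the developer/developer credentials are minishift's defaults and may differ in your setup):

```shell
# Option 1: use oc wherever a tutorial says kubectl.
oc get pods -n default

# Option 2: install kubectl separately. "oc login" writes its credentials and
# context into ~/.kube/config, which kubectl reads by default (or set $KUBECONFIG).
oc login "https://$(minishift ip):8443" -u developer -p developer

# kubectl now talks to the same cluster via the shared kubeconfig.
kubectl config current-context
kubectl get pods -n default
```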

Openshift OKD 4.5 on VMware

I am getting a connection timeout when running the bootstrap command.
Are there any configuration suggestions on the networking side, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often the problem lies in a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check that everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
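Putting the steps above together, a debugging session on the bootstrap node might look like the following sketch (the IP address and key path are placeholders for your environment):

```shell
# SSH to the bootstrap node with the key supplied to the installer
# (OKD/FCOS nodes use the "core" user).
ssh -i ~/.ssh/okd_id_rsa core@192.0.2.10

# On the node: list running containers, narrowing to the key control-plane components.
sudo crictl ps | grep -E 'kube-apiserver|etcd|machine-controller'

# Tail the logs of a suspect container (ID taken from the output above).
sudo crictl logs --tail 100 <container-id>

# The bootstrap progress is also logged by the bootkube service.
journalctl -b -f -u bootkube.service
```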

Redhat Openshift 4 - Not able to make mysql connection from php pod to mysql pod

I am a user of OpenShift Online and OKD, and I am facing a similar issue in both places. Please have a look.
I have created a project.
I launched PHP from the Developer Catalog option. Along with other details, I entered my project's Git URL, and the project was cloned successfully. Now it only needs to connect to a MySQL database.
In Pods, I deployed a MySQL image via the 'Deploy Image' option. It launched successfully.
When I make a MySQL connection from the PHP pod to the MySQL pod, it does not connect; the connection times out.
How should I make the connection?
Note:
I do not have a datastore option to launch MySQL from the Developer Catalog in OpenShift Online, which is why I am launching the MySQL image from Deploy Image.
As you mentioned, you are using OpenShift Online and OKD and are facing the issue in both places.
You cannot create MySQL from the Developer Catalog because the OpenShift Online catalog does not currently provide a MySQL template via the web interface, but you can deploy the MySQL template using the oc CLI instead. Database deployment is simplified when using templates.
Once logged in with the oc CLI, running
oc new-app -L
will list all of the templates that we are used to seeing in the web console, including mysql-persistent. You can then specify all the template parameters via the oc CLI, e.g.:
oc new-app mysql-persistent -p MYSQL_USER=<desired_DB_username> -p MYSQL_PASSWORD=<mysql_password> -p MYSQL_DATABASE=<desired_database_name>
If you'd like to see all the supported template parameters, you can use
oc process <template_name> --parameters -n openshift
or, for a more detailed output,
oc describe template <template_name> -n openshift
Once the app launches successfully, you can find its hostname under Services and connect to it from your PHP pod after defining the hostname in your PHP configuration file.
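To rule out a networking problem before touching the PHP code, you can exec into the PHP pod and probe the service hostname directly. The pod and service names below are assumptions (the mysql-persistent template creates a service called "mysql" by default); check yours with oc get pods and oc get svc:

```shell
# Find the MySQL service name and port exposed inside the project.
oc get svc

# Open a shell in the PHP pod (replace with your actual pod name).
oc rsh php-app-1-abcde

# Inside the pod: check that the service DNS name resolves...
getent hosts mysql

# ...and that the MySQL port accepts connections (uses bash's /dev/tcp feature,
# assuming bash is available in the image).
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/mysql/3306' && echo "port open"
```

If the port check succeeds but PHP still times out, the problem is in the PHP configuration (host/credentials) rather than the cluster networking.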

Openshift unable to connect to the server

I am having issues setting up OpenShift and am getting the following error after connecting to my server domain:
Command:
User$ rhc setup --server=app-domain.rhcloud.com
Result:
The server has rejected your connection attempt with an older SSL protocol.
Pass --ssl-version=sslv3 on the command line to connect to this server.
I am not sure what this is telling me to do. I tried following the instruction literally, and the command is not recognized.
Any ideas?
You should not pass the --server flag to rhc setup unless you are running your own OpenShift Origin or OpenShift Enterprise broker. For OpenShift Online, just run the rhc setup command with no other options and it will set up fine. If that command messed up your express.conf file (which it should not have), you just need to delete your ~/.openshift/express.conf file, then run rhc setup again without any flags. Basically, you tried to point rhc at your gear as if it were an OpenShift Online broker, which will not work.
I ended up answering this on another forum post:
The only way this worked for me was to create an SSH key locally with ssh-keygen -p, without rhc setup, and "not" give it a password. I then went back to OpenShift, clicked "add a key", and pasted the contents of my rsa file.
There is obviously some kind of bug with authentication on OpenShift, or the installation is not right.
It would be good to find out what is going on and why it works when I do it this way.
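For reference, the manual key setup described above amounts to the following sketch. (Note that ssh-keygen with no flags generates a new key pair, while ssh-keygen -p changes the passphrase of an existing one; generation is what matters here. The scratch path is for illustration; for real use you would keep the default ~/.ssh/id_rsa location.)

```shell
# Generate an RSA key pair with an empty passphrase into a scratch location.
keyfile="$(mktemp -d)/id_rsa"
ssh-keygen -t rsa -b 2048 -f "$keyfile" -N "" -q

# Show the *public* half; this is what gets pasted into the OpenShift
# "add a key" dialog. The private key file never leaves your machine.
cat "${keyfile}.pub"
```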