Bluemix user-created container - 'This page can’t be displayed' - containers

Problem: 'This page can’t be displayed' when I select a user-defined container from Bluemix catalog.
https://console.ng.bluemix.net/catalog/?category=containers&taxonomyNavigation=containers&env_id=ibm:yp:us-south
Background: The build successfully completed and the image was created
cf ic build -t tomcat-mine .
cf ic images |grep tomcat-mine
registry.ng.bluemix.net/ztechsec/tomcat-mine latest 7cd1e6870ffd 55 minutes ago 142 MB
The container that was created in the previous step shows up under Bluemix Catalog > Containers.
When selecting the container that was created (tomcat-mine) from Bluemix > Catalog > Containers, the following URL returns 'This page can’t be displayed':
https://catalog/images/tomcat-mine?org=28bfa082-2a8e-43cf-963d-7b7b28455603&space=085c044d-55cf-497a-8219-d6b668d63668&org_region=us-south&is_org_manager=false&ims_account=1177915&env_id=ibm:yp:us-south
Questions:
What would cause this issue?
What are possible workarounds?

Are you able to inspect the image with the CLI?
cf ic inspect 7cd1e6870ffd - does that work for you?

Related

Deployment "tiller" exceeded its progress deadline

I'm trying to install the Tiller server into an OpenShift project.
Helm/tiller version: 2.9.0
My project name: paytiller
At step 3, executing this command (as mentioned in this document: https://www.openshift.com/blog/getting-started-helm-openshift):
oc rollout status deployment tiller
I get this error:
error: deployment "tiller" exceeded its progress deadline
I'm not clear on what the error message means, and I could not find any logs.
Any idea why this error occurs?
If this doesn't work, what are the other suggestions for templating in Openshift?
EDIT
oc get events
Events:
Type Reason Age From Message
---- ------ ---- ---- ---
Warning Failed 14m (x5493 over 21h) kubelet, example.com Error: ImagePullBackOff
Normal Pulling 9m (x255 over 21h) kubelet, example.com pulling image "gcr.io/kubernetes-helm/tiller:v2.9.0"
Normal BackOff 4m (x5537 over 21h) kubelet, example.com Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.9.0"
Thanks.
The issue was with the permissions on our OpenShift platform; we didn't have access to download directly from open-source registries.
We added kubernetes-helm as a Docker image to our organization's repository and were then able to pull the image into the OpenShift project. It is working now, but we still didn't get any clue about the issue from the logs.
The status ImagePullBackOff tells you that the image gcr.io/kubernetes-helm/tiller:v2.9.0 could not be pulled from the container registry, so your OpenShift node cannot pull that image for some reason. This is often due to network proxies, a non-existent image (not the issue here), or other restrictions in the (corporate) network.
You can use oc describe pod <pod that shows ImagePullBackOff> to find a more detailed error message that may help you further.
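As a sketch of that check (the namespace and pod name below are placeholders; substitute whatever oc get pods reports in your project):

```shell
# List the tiller pods, then dump the events of the failing one.
# 'tiller' and 'tiller-abc123' are placeholder names.
oc get pods -n tiller
oc describe pod tiller-abc123 -n tiller | grep -A 10 'Events:'
```

The Events section at the end of the describe output usually contains the exact registry error (DNS failure, proxy denial, auth error, etc.).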
Also, note that the blog post you linked is from 2017, which is very old. Here is a more current version: Build Kubernetes Operators from Helm Charts in 5 steps.

How to deploy MySQL docker image on AWS ECS?

I'm having trouble deploying the MySQL image on AWS ECS FARGATE.
The CloudFormation script that I have is this (don't mind the syntax; I am using the Python library Troposphere to manage CloudFormation templates):
TaskDefinition(
    'WordpressDatabaseTaskDefinition',
    RequiresCompatibilities=['FARGATE'],
    Cpu='512',
    Memory='2048',
    NetworkMode='awsvpc',
    ContainerDefinitions=[
        ContainerDefinition(
            Name='WordpressDatabaseContainer',
            Image='mysql:5.7',
            Environment=[
                Environment(Name='MYSQL_ROOT_PASSWORD', Value='root'),
                Environment(Name='MYSQL_DATABASE', Value='wpdb'),
                Environment(Name='MYSQL_USER', Value='root'),
                Environment(Name='MYSQL_PASSWORD', Value='root'),
            ],
            PortMappings=[
                PortMapping(ContainerPort=3306)
            ]
        )
    ]
)
The deployment succeeds, and I can even see that the task is running for a few seconds until its state changes to STOPPED.
The only thing that I can see is:
Stopped reason Essential container in task exited
Exit Code 1
On localhost it works like a charm. What am I doing wrong here? At the very least, are there ways to debug this?
With AWS ECS, if the task is stopping, it may be failing a health check, which causes the container to restart. What port is the database container mapped to, and can you check the container logs to see what happens when it starts and then stops? Also, check the logs in ECS under the service or task. Post them here so I can take a look.
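A sketch of pulling that information from the AWS CLI instead of the console (the cluster name and task ARN below are placeholders):

```shell
# List recently stopped tasks in the cluster, then ask for the stop reason
# and per-container exit codes of one of them.
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster my-cluster \
  --tasks arn:aws:ecs:eu-west-1:123456789012:task/my-cluster/0123456789abcdef \
  --query 'tasks[].{reason:stoppedReason,containers:containers[].{name:name,exitCode:exitCode}}'
```

The stoppedReason field plus each container's exitCode is usually enough to distinguish a crashing container from a failed health check.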
So, I found my mistake.
THE VERY FIRST THING YOU DO is test that Docker container on localhost and see if you can reproduce the issue. In my case the MySQL Docker container crashed on a local machine with the exact same environment too. I was able to inspect the logs and found out that it fails to create the "root" user. Simply changing the user and password made everything work, even on ECS.
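The local reproduction described above can be sketched like this, using the same environment values as the failing task definition:

```shell
# Run mysql:5.7 locally with the exact env from the ECS task. MYSQL_USER=root
# clashes with the root account the image's entrypoint creates anyway, so the
# container logs an error and exits, reproducing the ECS failure.
docker run --rm --name mysql-repro \
  -e MYSQL_ROOT_PASSWORD=root \
  -e MYSQL_DATABASE=wpdb \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=root \
  mysql:5.7
```

If you run it detached (-d) instead, docker logs mysql-repro shows the same error output.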
This is the complete stack to get a MySQL Docker image running on AWS ECS FARGATE:
self.wordpress_database_task = TaskDefinition(
    'WordpressDatabaseTaskDefinition',
    RequiresCompatibilities=['FARGATE'],
    Cpu='512',
    Memory='2048',
    NetworkMode='awsvpc',
    # If your tasks are using the Fargate launch type, the host and sourcePath
    # parameters are not supported.
    Volumes=[
        Volume(
            Name='MySqlVolume',
            DockerVolumeConfiguration=DockerVolumeConfiguration(
                Scope='shared',
                Autoprovision=True
            )
        )
    ],
    ContainerDefinitions=[
        ContainerDefinition(
            Name='WordpressDatabaseContainer',
            Image='mysql:5.7',
            Environment=[
                Environment(Name='MYSQL_ROOT_PASSWORD', Value='root'),
                Environment(Name='MYSQL_DATABASE', Value='wpdb'),
                Environment(Name='MYSQL_USER', Value='wordpressuser'),
                Environment(Name='MYSQL_PASSWORD', Value='wordpressuserpassword'),
            ],
            PortMappings=[
                PortMapping(ContainerPort=3306)
            ]
        )
    ]
)
self.wordpress_database_service = Service(
    'WordpressDatabaseService',
    Cluster=Ref(self.ecs_cluster),
    DesiredCount=1,
    TaskDefinition=Ref(self.wordpress_database_task),
    LaunchType='FARGATE',
    NetworkConfiguration=NetworkConfiguration(
        AwsvpcConfiguration=AwsvpcConfiguration(
            Subnets=[Ref(sub) for sub in VpcFormation().public_subnets],
            AssignPublicIp='ENABLED',
            SecurityGroups=[Ref(self.security_group)]
        )
    ),
)
Note the AssignPublicIp='ENABLED' option, which lets you connect to the database remotely.
After the stack completed, I was able to connect successfully with the command:
mysql -uwordpressuser -pwordpressuserpassword -h18.202.31.123
That's it :)

oc-command to forward local-ports to remote debug ports based on service-name instead of pod-name

To minimize the setup time for attaching a debug session to a remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use 2 steps before attaching the debugger to the JVM socket with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8 -> this needs to be changed after every deploy
8000
3000
3001
background info:
this is the info in the service's YAML under spec > containers > env:
- name: JAVA_TOOL_OPTIONS
value: >-
-agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=3000
-Dcom.sun.management.jmxremote.rmi.port=3001
-Djava.rmi.server.hostname=127.0.0.1
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re-)deploy, I am trying to find an oc command that can be used to port-forward without having to provide the pod name (e.g. based on the service name).
Or a completely different solution that allows me to hit one button to set up a debug session (preferably in IntelliJ).
> Screenshot IntelliJ settings
----------------------------- edit after tips -------------------------------
For now I made a small batch script which does the trick:
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other intelliJent solutions
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath="{.items[?(@.status.phase=='Running')].metadata.name}" > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch it programmatically and consistently so you don't need to update it in place every time.
There are a number of ways you can do this: via jsonpath, Go templates, bash, etc. An example would be the following, replacing the app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
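Building on that, the lookup and the port-forward can be combined into one small sketch (the app label is a placeholder, as above):

```shell
# Resolve the name of the first Running pod for the app label,
# then forward the debug and JMX ports to it.
POD=$(oc get pod -l app=replace-me \
  -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' | awk '{print $1}')
oc port-forward "$POD" 8000 3000 3001
```

This avoids the temp-file round trip of the batch script when run from a bash shell.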

How to get gliderlabs/registrator running on on Bluemix

I'm trying to get gliderlabs/registrator running on Bluemix, but I'm having issues as the container won't start with:
400 The plain HTTP request was sent to HTTPS port
What I think is happening is that my Docker host is running on tcp://containers-api.eu-gb.bluemix.net:8443, so the Docker REST APIs are HTTPS. However, I suspect gliderlabs/registrator uses HTTP by default.
So, anyone got any ideas how to get this to work?
Steve
Looking at that package, it uses the library github.com/fsouza/go-dockerclient to access the docker remote api, specifically the NewClientFromEnv() call. Per the readme for go-dockerclient, it should pick up the env vars for https if they're there - i.e. make sure you're exporting all three env vars: DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH.
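For reference, a minimal sketch of those three variables; the cert path is an assumption — point it at wherever your IBM Containers setup actually stored the TLS certs:

```shell
# Point the docker client / go-dockerclient env at the Bluemix containers API over TLS.
export DOCKER_HOST=tcp://containers-api.eu-gb.bluemix.net:8443
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.ice/certs"   # assumed location; adjust to your setup
```

With all three set, NewClientFromEnv() should negotiate HTTPS instead of sending plain HTTP to the TLS port.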
Another possibility - per reading the comments about registrator - you may wish to check that you're using gliderlabs/registrator:master instead of gliderlabs/registrator:latest. Just pulled both to check, and "latest" is 14 months old, vs 6 days for "master".

No logging with starting bluemix container

I created a Bluemix container with a Dockerfile.
If I look at the IBM dashboard, the status of the container is stuck on Networking.
When I try to get the log file through cf ic in my command shell, I get a 404.
I use the following command to get the container_id: cf ic ps -a.
This is the response I get back:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57f7363b-710 <no image> "" 8 days ago Networking a day ago XXX.YYY.ZZZ.KKK:22->22/tcp, XXX.YYY.ZZZ.KKK:80->80/tcp, XXX.YYY.ZZZ.KKK:587->587/tcp, XXX.YYY.ZZZ.KKK:2812->2812/tcp containername
With the following command I try to get the logs: cf ic logs 57f7363b-710.
But then I see the following response:
FAILED
404 error encountered while processing request!
Is there another way to see why the container is hanging in the 'Networking' status?
This issue reflects a networking problem that was fixed last week. When the container status is frozen you can use cf ic rm -f to force the removal of a running container, or cf ic stop to stop a running container by sending SIGTERM and then SIGKILL after a grace period.
If you are unable to create a new container because the status is always frozen, please open a ticket with Bluemix support.
When a container is in the 'Networking' state it means that the networking phase has not finished yet. During that step of container creation, for example, the selected IP addresses (both public and private) are allocated. When this phase ends you will be able to route requests to those IPs. When a container stays in the 'Networking' state for too long, it usually means there was an infrastructure problem. You can try to create a new container from the same image with cf ic run. Note that if you have reached your maximum quota, you may need to delete the stuck container or release unbound IPs in order to create a new one.
You can delete a container using:
cf ic rm -f [containerId]
You can list all IPs (available or not) using:
cf ic ip list -a
Then you can release an IP using:
cf ic ip release [IPAddr]
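Putting those together for the stuck container from the question (the IP address below is a placeholder; use one actually shown by the list command):

```shell
# Force-remove the stuck container, list all IPs, then release an unbound one.
cf ic rm -f 57f7363b-710
cf ic ip list -a
cf ic ip release 134.0.0.1   # placeholder; pick an unbound address from the list
```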