How do I load a Docker image in Eclipse Che on OpenShift?

I'm trying to load a Docker image on openshift.io, so I attempted to just use hello-world as my Docker image. This is my devfile:
metadata:
  name: test
attributes:
  persistVolumes: 'false'
components:
  - mountSources: true
    endpoints:
      - name: hello
        port: 4200
    memoryLimit: 1Gi
    type: dockerimage
    image: 'hello-world'
    alias: hello-world
apiVersion: 1.0.0
However, I get this error: Failed to run the workspace: "The following containers have terminated: hello-world: reason = 'Completed', exit code = 0, message = 'null'"
This doesn't happen with the custom images provided by Eclipse, so what do I need to change to get a Docker image working on openshift.io? As far as I know, I can't edit the Dockerfile; I can only pull images from a Docker registry.

The command attribute of the dockerimage component, along with other arguments, is used to modify the entrypoint command of the container created from the image. In Eclipse Che the container needs to run indefinitely so that you can connect to it and execute arbitrary commands in it at any time. Because the availability of the sleep command, and of the infinity argument for it, differs between base images, Che cannot insert this behavior automatically on its own. However, you can take advantage of this attribute to, for example, start necessary servers with modified configurations, and so on.
For the dockerimage component to have access to the project sources, you must set the mountSources attribute to true.
metadata:
  name: test
attributes:
  persistVolumes: 'false'
components:
  - mountSources: true
    endpoints:
      - name: hello
        port: 4200
    memoryLimit: 1Gi
    type: dockerimage
    image: 'hello-world'
    alias: hello-world
    command: ['sleep', 'infinity']

This looks like the entry process of the hello-world image exits. Your image should not exit by default, or you should override the default entrypoint command in your devfile with a command that does not exit. You can try adding something like the following to the dockerimage component:
command: ['tail']
args: ['-f', '/dev/null']
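For context, here is a minimal sketch of the whole component with that override in place (the image, alias, port, and memory limit are carried over from the question's devfile):
components:
  - mountSources: true
    endpoints:
      - name: hello
        port: 4200
    memoryLimit: 1Gi
    type: dockerimage
    image: 'hello-world'
    alias: hello-world
    command: ['tail']          # overrides the image entrypoint...
    args: ['-f', '/dev/null']  # ...so the container idles instead of exiting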

Related

How do I use an imagestream from another namespace in OpenShift?

I have been racking my brain over the following:
I have a set of BuildConfigs that build images and create imagestreams for them in the "openshift" namespace. This gives me, for example, the netclient-userspace imagestream.
krist@MacBook-Pro netmaker % oc get is netclient-userspace
NAME                  IMAGE REPOSITORY                                                                  TAGS     UPDATED
netclient-userspace   image-registry.openshift-image-registry.svc:5000/openshift/netclient-userspace   latest   About an hour ago
What I have however not been able to figure out is how to use this imagestream in a deployment in a different namespace.
Take for example this:
kind: Pod
apiVersion: v1
metadata:
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  containers:
    - name: netclient
      image: netclient-userspace:latest
When I deploy this, I get errors:
Failed to pull image "netclient-userspace:latest": rpc error: code = Unknown desc = reading manifest latest in docker.io/library/netclient-userspace: errors: denied: requested access to the resource is denied unauthorized: authentication required
So OpenShift goes and looks for the image on Docker Hub. It shouldn't. How do I tell OpenShift to use the imagestream here?
When using an ImageStreamTag as a Deployment's image source, you need to use the image.openshift.io/triggers annotation. It instructs OpenShift to replace the image: attribute in the Deployment with the value of an ImageStreamTag (and to redeploy when the ImageStreamTag changes in the future).
Importantly, note both the annotation and the image: ' ' with an explicit space character in the YAML string.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"netclient-userspace:latest","namespace":"openshift"},"fieldPath":"spec.template.spec.containers[?(@.name==\"netclient\")].image"}]'
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  ...
  template:
    ...
    spec:
      containers:
        - command:
            - ...
          image: ' '
          name: netclient
          ...
I will also mention that, in order to pull images from a different namespace, you may need to authorize the Deployment's service account to do so: OpenShift Docs.
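As a sketch, that authorization typically looks like the following (assuming the pod runs under the default service account of kvb-netclient-test and pulls from the openshift namespace):
# grant the default service account in kvb-netclient-test pull access to images in the openshift namespace
oc policy add-role-to-user system:image-puller \
  system:serviceaccount:kvb-netclient-test:default \
  --namespace=openshift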

Knative Serving is not showing Ready after installing on OpenShift

I followed https://docs.openshift.com/container-platform/4.1/serverless/installing-openshift-serverless.html to install Knative Serving on top of OpenShift v4.1. After installing all the OpenShift operators, control plane, member roll, etc. as given in the link, I expected to see that the Serving component was running by executing:
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
But the above returns nothing; it just returns to the prompt.
Below is the output of the get commands for the Serving component:
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving
NAME              VERSION   READY   REASON
knative-serving
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving -o yaml
apiVersion: serving.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1alpha1","kind":"KnativeServing","metadata":{"annotations":{},"name":"knative-serving","namespace":"knative-serving"}}
  creationTimestamp: "2020-01-12T10:53:42Z"
  generation: 1
  name: knative-serving
  namespace: knative-serving
  resourceVersion: "63660251"
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/knative-serving/knativeservings/knative-serving
  uid: cc4b330f-3529-11ea-83ef-0272cb600f74
What could be wrong? I believe Knative Serving did not install correctly, but I'm not sure how to debug it. I have uninstalled and reinstalled several times with no luck.
I also tried to proceed and install a service using Knative Serving (ref https://docs.openshift.com/container-platform/4.1/serverless/getting-started-knative-services.html), but applying the very first resource fails.
service.yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
Applying service.yaml returns an error:
C:\start Knative service> oc apply --filename service.yaml
error: unable to recognize "service.yaml": no matches for kind "Service" in version "serving.knative.dev/v1alpha1"
Any help is appreciated. Thanks.
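The "no matches for kind" error suggests the cluster is not serving the serving.knative.dev/v1alpha1 API at all, which is consistent with the Serving install never becoming ready. As a quick check (a sketch; oc api-resources is available in oc 4.x clients), listing the kinds served under that API group should show Service, Route, etc. once the operator has installed the CRDs:
C:\Knative installation>oc api-resources --api-group=serving.knative.dev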

OpenShift runs the same container twice when it should run two different ones inside a pod

I want OpenShift 3.10 to create a pod (console) comprising two containers (api and console). The relevant description in the application template (under dc.spec.template.spec.containers) for the DeploymentConfig console looks like this:
containers:
  - image: console:api
    imagePullPolicy: Always
    name: api
    terminationMessagePolicy: File
  - image: console:console
    imagePullPolicy: Always
    name: console
    ports:
      - containerPort: 80
        protocol: TCP
    terminationMessagePolicy: File
oc describe is/console looks good to me and reports the following (the BuildConfigs for the two containers output to ImageStreamTags console:api and console:console respectively):
api
  no spec tag
  * docker-registry.default.svc:5000/registry/console@sha256:96...66
console
  no spec tag
  * docker-registry.default.svc:5000/registry/console@sha256:8a...02
But oc describe pods --selector deploymentconfig=console reveals that the same image has been pulled twice and hence the same container runs twice inside the pod:
Successfully pulled image "docker-registry.default.svc:5000/registry/console@sha256:8a...02"
Successfully pulled image "docker-registry.default.svc:5000/registry/console@sha256:8a...02"
How can I ensure that the pod indeed comprises the two distinct containers? And why is the image stream tag console:api at times apparently not referring to image 96...66 but to 8a...02, contrary to what oc describe is/console suggests?
UPDATE The mismatch is also apparent in oc describe dc/console, which indicates that both image stream tags console:api and console:console have apparently been resolved to the same container image 8a...02:
Containers:
  api:
    Image: docker-registry.default.svc:5000/registry/console@sha256:8a...02
  console:
    Image: docker-registry.default.svc:5000/registry/console@sha256:8a...02
The following change to dc.spec.triggers seems to have resolved the situation:
- type: ConfigChange
- imageChangeParams:
    automatic: true
    containerNames:
      - api
    from:
      kind: ImageStreamTag
      name: console:api
      namespace: registry
  type: ImageChange
- imageChangeParams:
    automatic: true
    containerNames:
      - console
    from:
      kind: ImageStreamTag
      name: console:console
      namespace: registry
  type: ImageChange
Previously, there was only a single imageChangeParams entry, for console:console. The pod now comprises the two distinct containers.
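To verify the fix, re-running the earlier commands should now show two distinct image digests, one per container (a sketch reusing the resources from the question):
oc describe dc/console
oc describe pods --selector deploymentconfig=console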

Error while loading shared libraries when running a pre-hook pod

I am new to OpenShift and am deploying my Flask app onto it, but I have encountered a problem. My app/container name is flog.
I set up a lifecycle pre-hook to ensure the database is created correctly for the app deployment. Here is the critical part of my config:
spec:
  replicas: 1
  selector:
    deploymentconfig: flog
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      pre:
        execNewPod:
          command:
            - flask
            - init
          containerName: flog
          env:
            - name: FLASK_APP
              value: wsgi.py
        failurePolicy: Abort
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
It works correctly during the build but breaks in the pre-hook:
--> pre: Running hook pod ...
/opt/app-root/bin/python3: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
However, when I debug in a terminal and type the python3 command, it works fine.
Thanks in advance for any help.
You will need to add a shell script to your image which in turn runs your command. The shell script wrapper is needed because initialisation of the shell environment has the side effect of enabling the Python environment, including setting the environment variables needed to find the Python shared library.
So change:
command:
- flask
- init
to:
command:
- somescript
And in somescript have:
#!/bin/bash
flask init
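Remember to make somescript executable and place it somewhere on the image's PATH. As an alternative sketch (this assumes an OpenShift s2i Python image, which sources the SCL enablement script for non-interactive shells via the BASH_ENV mechanism; verify this holds for your image), routing the command through bash can have the same effect without a separate script:
command:
  - /bin/bash  # bash sources the SCL environment, so libpython can be found
  - -c
  - flask init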

AWS Beanstalk Application Health Check

We use Beanstalk to deploy Node applications, and it works very well. I've created a couple of config files in an .ebextensions directory to apply configuration info to our apps when we load them up. Again, this mostly works well.
The one thing that does not work is defining the application health check URL; I can't get it to take effect. One odd thing about it: it seems to be the only parameter I have come across so far that has spaces in its name, and I'm wondering about that. I have tried enclosing the values in quotes, just to see if that is the problem, but it still doesn't work. Has anyone done this before, and can you tell me whether it works, and whether there is something syntactically incorrect here? As I said, the rest of the params get set correctly in Beanstalk; just the last one doesn't. Note that #environment# gets replaced by a grunt script before this gets deployed.
Here's the config file:
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: #environment#
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.10
  - namespace: aws:autoscaling:trigger
    option_name: LowerThreshold
    value: 40
  - namespace: aws:autoscaling:trigger
    option_name: MeasureName
    value: CPUUtilization
  - namespace: aws:autoscaling:trigger
    option_name: UpperThreshold
    value: 60
  - namespace: aws:autoscaling:trigger
    option_name: Unit
    value: Percent
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /load_balance_test
Adding this worked for me:
# .ebextensions/healthcheckurl.config
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /health
  - namespace: aws:elasticbeanstalk:environment:process:default
    option_name: HealthCheckPath
    value: /health
I discovered the second setting by running eb config, which gives a nice overview of the environment settings that can be overridden with option_settings in .ebextensions/yet-another.config files.
The spaces in this option name are weird, but it works when used with the alternative shorthand syntax for options:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /
I use CloudFormation for EB, and in CF the syntax for that parameter is very strange. If that config file works the same way as CF, the following string should work for you:
HTTP:80/load_balance_test
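For reference, here is a sketch of the equivalent setting inside a CloudFormation template (assuming an AWS::ElasticBeanstalk::Environment resource; the path comes from the question):
OptionSettings:
  # protocol:port/path form used by the classic ELB health check
  - Namespace: aws:elasticbeanstalk:application
    OptionName: Application Healthcheck URL
    Value: HTTP:80/load_balance_test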
If you're using Terraform, just make sure you keep the spaces in the name, and it will work fine:
setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/api/health"
}
I just tried it, and it worked for me. Only the format specified in the original question worked for me, i.e.:
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /api/v1/health/
You might also want to set the health check type to ELB instead of the default EC2. This is how I configured mine:
$ cat .ebextensions/0090_healthcheckurl.config
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: "ELB"
      HealthCheckGracePeriod: "600"
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /_status