I am trying to deploy GitLab source code to OpenShift, but I am facing an issue. Although the GitLab pipeline succeeds, the deployment keeps reporting an unauthorized error.
My expected output is to have the deployment running on OpenShift. [Error message](https://i.stack.imgur.com/CBBzO.png)
The error indicates that the Deployment Pod is unable to pull the specified image.
It appears your Deployment is in the namespace roks-test-demo-project while the image you are trying to pull is in the oc-custom-dev namespace. In order for a Deployment in one namespace to pull an image from another, the Deployment's service account must be authorized to do so.
See the OpenShift documentation for how to achieve this.
In your case, assuming your Deployment is running as the default service account:
$ oc policy add-role-to-user \
system:image-puller system:serviceaccount:roks-test-demo-project:default \
--namespace=oc-custom-dev
If your Deployment is running as a non-default service account, replace default with that service account name in the above command.
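For example, with a hypothetical service account named my-deployer (substitute whatever service account your Deployment actually uses), the command would be:
$ oc policy add-role-to-user \
    system:image-puller system:serviceaccount:roks-test-demo-project:my-deployer \
    --namespace=oc-custom-dev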
I tried to deploy the library/cassandra image as a container in a Sandbox OpenShift cluster, but it threw this error in the pod logs:
"Running Cassandra as root user or group is not recommended - please start Cassandra using a different system user.
If you really want to force running Cassandra as root, use -R command line option."
When I checked the container description, I could see that the SCC is set to Restricted. So it looks like in Sandbox OpenShift, the "Restricted" SCC is applied to the "default" service account by default.
But on AWS, when I installed OpenShift with the installer option, I didn't face this error with the same library/cassandra image.
It looks like the default service account there is not associated with the "Restricted" SCC by default.
Could someone clarify what is different about the Sandbox environment that triggers this error, and how I can set the same config in the AWS OpenShift cluster so that the default service account is associated with the restricted SCC?
I can't see your specific environment, but from the error message I suspect it's being triggered by the GROUP=0, not user=0.
To confirm:
$ oc get pods (whatever) -o yaml | grep openshift.io/scc
This will show you which SCC admitted the pod into the cluster. It should be "restricted" based on what you said. If so, then we've got some good evidence that it's just the group.
Next, you can look for something like this:
$ oc rsh (podname) id -a
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
UID (user) is in the expected billion+ range defined in the namespace annotation. GID (group) is zero.
With that in place, you can either ignore the error, knowing it's only the group that is 0, or you can set a securityContext for your pod (or container) to specify a different gid.
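For example, a minimal sketch of the latter, assuming the pod comes from a Deployment named cassandra (adjust the resource type and name to match yours) and that the gid you pick is allowed by your SCC / namespace annotations (1000640000 below is purely illustrative):
# Hypothetical patch: run the pod with a non-root group instead of the default gid 0
$ oc patch deployment/cassandra --type=merge \
    -p '{"spec":{"template":{"spec":{"securityContext":{"runAsGroup":1000640000,"fsGroup":1000640000}}}}}'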
I came to know that the "default" project has a different set of permissions, so even a container with user id 0 can be deployed in the default namespace.
In the Sandbox cluster, the project is dev or stage, so it works with the correct security level.
I'm trying to build a new app using a docker image from the book DevOps with OpenShift.
As per the content on page 19 of the book, the command is
oc new-app devopswithopenshift/welcome:latest --name=myapp
so the devopswithopenshift/welcome:latest image first needs to be built and pushed to Docker Hub.
I pulled the GIT code from https://github.com/devops-with-openshift/welcome
and ran the command C:\Docker\welcome\foo>docker build -t welcome .
Here is the response
failed to solve with frontend dockerfile.v0: failed to create LLB definition: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
When I looked into the Dockerfile, it has
FROM welcome/ops:latest
so it is trying to pull welcome/ops, which is not in the registry. Can the authors help resolve this?
Thanks,
K.ThulsiDoss
Thanks for the response. Here is what I did to get going, so that other users can benefit from the clarifications.
1. My env is Windows (client) and OpenShift is on a RHEL cluster. In my Windows env I have Git, the oc client and Docker (Win10) installed.
2. Downloaded the book code into my git dir.
3. The important thing is that I logged in to Docker Hub with my credentials, on the terminal where I had extracted the code, e.g.
docker login --username <user> --password <password>
4. I then logged in to the OC cluster, e.g.
oc login --token=<token> --server=https://xyzopenshift.os.fyre.ibm.com:6443
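5. After that, the remaining steps follow the book: build and push the image, then create the app, e.g. (the Docker Hub id below is a placeholder for your own):
docker build -t <your-docker-id>/welcome:latest .
docker push <your-docker-id>/welcome:latest
oc new-app <your-docker-id>/welcome:latest --name=myapp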
What I want to do is make a web app that lists, in one single view, the version of every application deployed in our OpenShift cluster (a fast view of versions). At this moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter, but if there's another way to get a pod and the version of the app currently deployed in it, I'm also open to that option, as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the OpenShift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to be modifying it constantly and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod when I ask for its ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
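If you need this specifically over an API rather than the oc CLI, note that the environment variables are part of the pod spec, so listing the pods in a project already returns them. A minimal sketch with curl (server, project and token below are placeholders):
# ARTIFACT_URL will appear under .items[].spec.containers[].env in the response
$ curl -k -H "Authorization: Bearer <token>" \
    https://<openshift-api-server>/api/v1/namespaces/<project>/pods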
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
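As a rough sketch of what those triggers look like with oc (the build config / deployment config names are placeholders):
# Add a GitHub webhook trigger to the build config
$ oc set triggers bc/<myapp> --from-github
# Roll out a new deployment whenever the built image changes
$ oc set triggers dc/<myapp> --from-image=<project>/<myapp>:latest -c <container-name>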
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply using 'oc get pods' or a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces needed to filter out other Names:)
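A variant that pairs each pod name with its ARTIFACT_URL value in a single call, assuming your oc supports jsonpath output:
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'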
We are trying to automate the build and deployment of containers to projects created in OpenShift v3.3. From the documentation I can see that we will need to leverage service accounts to do this, but the documentation is hard to follow and the examples I have found in blogs don't complete the task. My workflow is as follows, with the example oc commands I use:
BUILDER_TOKEN='xxx'
DEPLOYER_TOKEN='xxx'
# build and push the image works as expected
docker build -t registry.xyz.com/want/want:latest .
docker login --username=<someuser> --password=${BUILDER_TOKEN} registry.xyz.com
docker push registry.xyz.com/<repo>/<image>:<tag>
# This fails with error
oc login https://api.xyz.com --token=${DEPLOYER_TOKEN}
oc project <someproject>
oc new-app registry.xyz.com/<repo>/<image>:<tag>
Notice I log in to the REST API interface, select the project and create the app, but this fails with the following errors:
error: User "system:serviceaccount:want:deployer" cannot create deploymentconfigs in project "default"
error: User "system:serviceaccount:want:deployer" cannot create services in project "default"
Any ideas?
Service accounts only have permission in their owning project by default. You would need to grant deployer access to deploy in other projects.
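For example, a minimal sketch granting the edit role to that service account in the target project (pick a narrower role if edit is too broad for your case):
$ oc policy add-role-to-user edit system:serviceaccount:want:deployer -n <someproject>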
OK, so it seems that using a service account to accomplish this is not the best way to go about things. This is not helped by the documentation. The use case above is very common, and the correct approach is to simply invoke new-app with the image name and corresponding tag:
oc new-app ${APP}:${TAG}
There is no need to mess around with service accounts.
So when I bring up a GCE instance using the standard debian 7 image, and issue a "gsutil config" command, it fails with the following message:
jcortez#master:~$ gsutil config
Failure: No handler was ready to authenticate. 4 handlers were checked. ['ComputeAuth', 'OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials.
I've tried it on the debian 6 and centos instances and had the same results. Issuing "gcutil config" works fine, however. I gather I need to set up my ~/.boto file, but I'm not sure what to put in it.
What am I doing wrong?
Using service account scopes, as E. Anderson mentions, is the recommended way to use gsutil on Compute Engine, so the images are configured to get OAuth access tokens from the metadata server via /etc/boto.cfg:
[GoogleCompute]
service_account = default
If you want to manage gsutil config yourself, rename /etc/boto.cfg, and gsutil config should work:
$ sudo mv /etc/boto.cfg /etc/boto.cfg.orig
$ gsutil config
This script will create a boto config file at
/home/<...snipped...>/.boto
containing your credentials, based on your responses to the following questions.
<...snip...>
Are you trying to use a service account to have access to Cloud Storage without needing to enter credentials?
It sounds like gsutil is searching for an OAuth access token with the appropriate scopes and is not finding one. You can ensure that your VM has access to Google Cloud Storage by requesting the storage-rw or storage-full permission when starting your VM via gcutil, or by selecting the appropriate privileges under "Project Access" on the UI console. For gcutil, something like the following should work:
> gcutil addinstance worker-1 \
> --service_account_scopes=https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/compute.readonly
When you configured your GCE instance, did you set it up with a service account configured? Older versions of gsutil got confused when you attempted to run gsutil config when you already had service account credentials configured.
If you already have a service account configured you shouldn't need to run gsutil config - you should be able to simply run gsutil ls, cp, etc. (it will use credentials located elsewhere than your ~/.boto file).
If you really do want to run gsutil config (e.g., to set up credentials associated with your login identity, rather than service account credentials), you could try downloading the current gsutil from http://storage.googleapis.com/pub/gsutil.tar.gz, unpacking it, and running that copy of gsutil. Note that if you do this, the personal credentials you create by running gsutil config will essentially "hide" your service account credentials (i.e., you would need to move your .boto file aside if you ever want to use your service account credentials again).
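For example (the paths below are just illustrative):
wget http://storage.googleapis.com/pub/gsutil.tar.gz
tar xfz gsutil.tar.gz -C $HOME
$HOME/gsutil/gsutil config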
Mike Schwartz, Google Cloud Storage team
FYI I'm working on some changes to gsutil now that will handle the problem you encountered more smoothly. That version should be out within the next week or two.
Mike