I am trying to expose the docker-registry using the below command:
# oc expose service docker-registry --hostname=<hostname> -n default
Source: https://docs.openshift.com/container-platform/3.3/install_config/registry/securing_and_exposing_registry.html#access-insecure-registry-by-exposing-route
However, I get "forbidden" permission errors in the output for the current user I log in with. The user has the "admin" role. I am new to OpenShift and still learning. Can someone point me in the right direction for how to expose the registry service using the above command? It looks like I might need the "cluster-admin" permission to perform this operation, but I'm not sure how to change or add a role for the current user.
Which project is your admin role scoped to? Basically, the admin role only grants permissions within a single project.
As you mentioned, you need the cluster-admin cluster role in order to create a route with oc expose service in the default project, or alternatively the admin role on the default project. The commands for granting each role are as follows.
Note that you also need the cluster-admin role to run both of the following commands.
// For instance, the following command grants the cluster-admin role to the user "admin".
$ oc adm policy add-cluster-role-to-user cluster-admin admin
// The following command grants the admin role on the default project to the user "admin".
$ oc adm policy add-role-to-user admin admin -n default
If you can log in as system:admin after accessing the master host over ssh as root, you get the cluster-admin role.
# oc login -u system:admin --config /etc/origin/master/admin.kubeconfig
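Once you are logged in as system:admin, granting the role and retrying the expose from the question could look like this (the user name "admin" is just the example from above; substitute your own user and hostname):
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc expose service docker-registry --hostname=<hostname> -n default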
I hope this helps.
Related
I installed OpenShift Origin (openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit), and when I try to log in with oc login it asks for a username and password.
If I type any username like bob and then any password like 123, it logs in successfully but without permissions.
[root@ip-10-0-0-12 centos]# oc get pods
No resources found.
Error from server (Forbidden): pods is forbidden: User "bob" cannot list pods in the namespace "default": no RBAC policy matched
So I tried to log in as the admin user system:admin, but it asks for a password and I don't have one. I have certificates for two system:admin users in the /root/.kube/config file:
- name: system:admin/10-0-0-12:8443
...
- name: system:admin/127-0-0-1:8443
How can I log in as the admin of the cluster?
I solved it.
tl;dr I copied the original config file to /root/.kube/config, added it to the environment variables, and brought the cluster up:
cp /home/centos/openshift.local.clusterup/openshift-apiserver/admin.kubeconfig /root/.kube/config
export KUBECONFIG=/root/.kube/config
oc cluster up
Everything works fine now.
Details
system:admin credentials live in a client certificate. If you get prompted for a password, that means your $KUBECONFIG file does not contain those credentials. Try to log in as the "system:admin" user using both the default kube config and the config from /etc/origin/master:
# oc login -u system:admin --config=/etc/origin/master/admin.kubeconfig
# oc login -u system:admin --config=/root/.kube/config
If the login using /etc/origin/master/admin.kubeconfig succeeds, simply copy that file to /home/user/.kube/config (the kube config file inside the Linux user's home directory).
The system admin's ~/.kube/config file that is originally generated after installing OpenShift 3.x+ is a direct copy of admin.kubeconfig.
To restore the ~/.kube/config file so that an administrator can log in as system:admin, just copy the admin.kubeconfig file:
cp /etc/origin/master/admin.kubeconfig ~/.kube/config
After that, try logging in again without providing any config file to the oc login command.
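As a quick sanity check (not from the original answer), you can then confirm the identity:
$ oc login -u system:admin
$ oc whoami
// should report system:admin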
I'm working on a script that copies content from my local directory on a VM to an OpenShift pod.
The script works fine. My one complaint is that I need to use the interactive portion of oc login to authenticate my user each time I run my script, like below:
oc login https://url.to.openshift
Authentication required for https://url.to.openshift:port (openshift)
Username: sampleUser
Password: samplePass
I know that I can run the command like so:
oc login --username=sampleUser --password=samplePass
oc login --token='sampleGeneratedTokenFromOpenShift'
I'd rather not have a hard coded user/pass or token within the script.
Is there a way to store default credentials for my user in a configuration file for use with oc login?
I discovered that the standard method is to create a service account and use the generated token as part of the oc login command.
Service Account Creation Command:
$ oc create sa <user>
The service account then needs to be added to a project with a specific role.
Add Role to User Command:
$ oc policy add-role-to-user <role> system:serviceaccount:<project>:<user>
Once the service account exists and has a role on the project, add the token from the service account's generated secret to your login command:
oc login --token=generatedServiceAccountToken
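For illustration, a rough end-to-end sketch; the service account name "ci-bot", the project "myproject", and the "edit" role are made-up examples, and older 3.x clients read the token with oc sa get-token (newer clients use oc create token instead):
$ oc create sa ci-bot -n myproject
$ oc policy add-role-to-user edit system:serviceaccount:myproject:ci-bot -n myproject
$ TOKEN=$(oc sa get-token ci-bot -n myproject)
$ oc login https://url.to.openshift:port --token="$TOKEN"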
I think I have locked myself out of my VM.
I have access as a low-privilege user I created, but that user can't sudo.
When I do SSH -> Open in browser window, I get a prompt asking for a password which I have never set.
Is there any way to reset the root password from the GCP Console?
Thanks, Pavel
To reset the root password for your GCP VM, you need the appropriate IAM roles granted to your user so that it can use the sudo command. There is a similar post here. You can use the command 'sudo passwd' to change the password, as suggested in that post.
Answering your question, you could simply run sudo commands using a startup script, since the script runs as the root user, and then add your user to sudoers with sudo usermod -aG sudo <user> (reference).
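For illustration only, a minimal sketch of that startup-script approach; the instance name my-vm, the zone us-central1-a, and the user pavel are placeholders, not values from the question:
gcloud compute instances add-metadata my-vm --zone us-central1-a \
  --metadata startup-script='#! /bin/bash
usermod -aG sudo pavel'
gcloud compute instances reset my-vm --zone us-central1-a
The startup script runs as root at the next boot, so the reset (or a stop/start) is what actually applies the group change.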
However, you could also add an IAM role to your user in order to have admin access to a GCE VM, for example roles/compute.instanceAdmin.v1 (reference).
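If you go the IAM route instead, granting that role could look roughly like this (the project ID and email are placeholders):
gcloud projects add-iam-policy-binding my-project --member=user:pavel@example.com --role=roles/compute.instanceAdmin.v1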
Are there users with administrator privileges to handle cluster features (cluster management)?
How can I assign permissions to other users?
I would like to run the following command:
[root@localhost ~]# oc adm policy add-cluster-role-to-user admin vittorio
I get this error:
Error from server: User "system" cannot get clusterpolicybindings at the cluster scope
Use the bootstrap certificate-based cluster admin user:
export KUBECONFIG=/path/to/openshift.local.config/master/admin.kubeconfig
oc whoami // should report system:admin
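Once oc whoami reports system:admin, the command from the question should go through:
$ oc adm policy add-cluster-role-to-user admin vittorio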
Today I created a Debian instance on GCE. When I try to copy a file as root, I get the following message:
Permission denied (publickey).
lost connection
On another instance created a few months ago, I was able to copy files as root.
The command used is the following:
gcloud compute copy-files test/test.txt root@test:/opt/ --project p-id --zone z
For security reasons, newer VM images don't allow direct log-in as root via SSH. You can log in as a non-root user, which will have sudo permissions, and set up root SSH access yourself, though this is not recommended. Instead, copy the file to a non-privileged location, then use gcloud compute ssh as a non-root user and the sudo command to move the file where it needs to be.
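For example, a sketch reusing the instance name, project, and zone from the question:
gcloud compute copy-files test/test.txt test:/tmp/ --project p-id --zone z
gcloud compute ssh test --project p-id --zone z --command 'sudo mv /tmp/test.txt /opt/'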