I tried OpenShift, Red Hat's k8s distro, and now there are 2 projects that I need to delete. I can only log in as user 'erjcan', which is my primary account, and it does not seem to be allowed to perform admin actions.
The 'delete' button is inactive in the GUI console, and I tried to create a role for myself but can't.
I tried to create an admin-like role and assume it as a user, but that is not allowed either.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: all-stuff
  namespace: erjcan-stage
rules:
- apiGroups:
  - ''
  resources:
  - '*'
  verbs:
  - '*'
The code above gives me an RBAC "not allowed" error:
An error occurred
roles.rbac.authorization.k8s.io "all-stuff" is forbidden: user "erjcan" (groups=["system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held: {APIGroups:[""], Resources:["*"], Verbs:["*"]}
I tried to delete via the CLI, but I can only log in as the erjcan user.
Logged into "https://api.sandbox-m2.ll9k.p1.openshiftapps.com:6443" as "erjcan" using the token provided.
You have access to the following projects and can switch between them with 'oc project <projectname>':
erjcan-dev
* erjcan-stage
Using project "erjcan-stage".
bash-4.4 ~ $
bash-4.4 ~ $ oc delete project erjan-dev
Error from server (Forbidden): projects.project.openshift.io "erjan-dev" is forbidden: User "erjcan" cannot delete resource "projects" in API group "project.openshift.io" in the namespace "erjan-dev"
bash-4.4 ~ $ oc delete project erjcan-dev
Error from server (Forbidden): projects.project.openshift.io "erjcan-dev" is forbidden: User "erjcan" cannot delete resource "projects" in API group "project.openshift.io" in the namespace "erjcan-dev"
How do I delete a project in the Red Hat OpenShift GUI console?
You appear to be talking about Red Hat's Developer Sandbox, which indeed does not allow you to delete projects. There's no way around that: RBAC is specifically set up to prevent you from creating or deleting projects.
You don't say why you need to delete the projects. They will go away eventually due to inactivity. But if you just want a clean slate, or just need to remove what you have inside a project, you do have permission to delete everything in the project (just not the project itself).
oc delete all --all will remove everything inside the current project. Obviously, use that command with strict care: there is no confirmation or warning. (BTW, the first "all" says all types of objects: pods/deployments/routes/etc.; the second --all says "yes, I'm deliberately not providing a filter or any other subset, I really mean delete all of the objects I'm specifying".)
Similarly, the following two commands should clean up both of your projects. (Although the projects themselves will still exist.)
oc delete all --all -n erjcan-stage
oc delete all --all -n erjcan-dev
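One caveat worth noting (based on standard oc/kubectl behaviour rather than anything sandbox-specific): the all alias only covers the common workload types (pods, deployments, services, routes, and so on), so objects such as ConfigMaps, Secrets, or PersistentVolumeClaims survive it. If you want those removed as well, you can name the kinds explicitly, for example:
oc delete configmap,secret,pvc --all -n erjcan-stage
oc delete configmap,secret,pvc --all -n erjcan-dev
Check what is actually there first with oc get configmap,secret,pvc, since some objects in a fresh project (service account token secrets, for instance) are auto-generated and will simply be recreated.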
I am trying to install Artifactory OSS in an OpenShift cluster. I am using this Helm chart: https://charts.jfrog.io/artifactory-oss-107.39.4.tgz (Warning: I am very new to OpenShift etc.; I am on a steep learning curve.)
I am running the Helm chart as the OpenShift cluster-admin account.
However, I am getting this error:
pods "artifactory-artifactory-nginx-5c66b8c948-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{107}: 107 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 104: must be in the ranges: [1000970000, 1000979999], spec.containers[0].securityContext.runAsUser: Invalid
I think it is an OpenShift permissions error, in that it requires a more permissive security context constraint. However, given that I am running as cluster-admin, I find that a little surprising.
Can anyone offer a suggestion on how to resolve this issue and get Artifactory OSS running in OpenShift?
Thanks in advance!
--
Tried passing some options to set the uid and gid.
I tried starting with this
helm upgrade --install artifactory --set artifactory.uid=1001010042,artifactory.gid=1001010042,nginx.uid=1001010042,nginx.gid=1001010042,artifactory.masterKey=${MASTER_KEY},artifactory.joinKey=${JOIN_KEY},artifactory.postgresql.postgresqlPassword=$POSTGRES_PASSWORD --namespace artifactory jfrog/artifactory-oss
The options should have set the uids and gids, but I still got the error below. It seems the Helm chart ignores efforts to override the values.
pods "artifactory-artifactory-nginx-5c66b8c948-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{107}: 107 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 104: must be in the ranges: [1000930000, 1000939999], spec.containers[0].securityContext.runAsUser: Invalid
Regarding the JFrog Artifactory OSS Helm chart, its documentation, Installing Artifactory, points out some prerequisites.
When installing Artifactory, you must run the installation as a root user or provide sudo access to a non-root user.
For Helm
Create a unique master key (Artifactory requires a unique master key) and pass it to the template during installation.
Create a secret containing the key. The key in the secret must be named master-key
kubectl create secret generic my-masterkey-secret -n artifactory --from-literal=master-key=${MASTER_KEY}
Make sure to pass the same master key on all future calls to helm install and helm upgrade.
This means always passing --set artifactory.masterKey=${MASTER_KEY} (for the custom master key) or --set artifactory.masterKeySecretName=my-masterkey-secret (for the manual secret) and verifying that the contents of the secret remain unchanged.
Create a unique join key: by default the chart has one set in the values.yaml (artifactory.joinKey).
However, this key is for demonstration purposes only and should not be used in a production environment.
The point is: it depends on the exact command used to install the Helm Chart.
helm upgrade --install artifactory --set artifactory.masterKey=${MASTER_KEY} \
--set artifactory.joinKey=${JOIN_KEY} \
--namespace artifactory jfrog/artifactory
As illustrated here, the values for "runAsUser" and "fsGroup" in values.yaml can have an influence on the error message.
Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.
Follow these steps to apply the configuration changes.
Make the changes to values.yaml.
Run the command.
helm upgrade --install artifactory jfrog/artifactory-oss -n artifactory -f values.yaml
See Managing security context constraints for more.
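As a concrete sketch (not a definitive fix: the exact value keys depend on the chart version, and the uid/gid keys below are the same ones already used in the question), one way to satisfy the restricted SCC is to pick a UID/GID inside the range the error message reports for your namespace (1000930000-1000939999 in the second error) and pass it at install time:
helm upgrade --install artifactory jfrog/artifactory-oss \
  --namespace artifactory \
  --set artifactory.uid=1000930001 --set artifactory.gid=1000930001 \
  --set nginx.uid=1000930001 --set nginx.gid=1000930001 \
  --set artifactory.masterKey=${MASTER_KEY} \
  --set artifactory.joinKey=${JOIN_KEY}
The same values can equally be put in values.yaml and applied with the -f values.yaml form shown above. The alternative is for a cluster admin to grant a more permissive SCC (such as anyuid) to the chart's service accounts, which is what the linked documentation covers.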
I have created a project in GCP. Then I created a service account with the Compute Admin role. Then I enabled the "Compute Engine API" for the project.
But I can't work with instances:
#gcloud compute instances list
ERROR: (gcloud.compute.instances.list) Some requests did not succeed: - Required 'compute.zones.list'
permission for 'projects/someproject'
What am I doing wrong?
Edited
From the service account:
ERROR: (gcloud.projects.get-iam-policy) User
[cloud66@project_id.iam.gserviceaccount.com] does not have permission
to access project [project_id:getIamPolicy] (or it may not exist): The
caller does not have permission
When i switch to main google account:
$ gcloud projects get-iam-policy project_id
bindings:
- members:
  - serviceAccount:cloud66@project_id.iam.gserviceaccount.com
  role: roles/compute.admin
- members:
  - serviceAccount:service-855312803173@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:855312803173-compute@developer.gserviceaccount.com
  - serviceAccount:855312803173@cloudservices.gserviceaccount.com
  role: roles/editor
- members:
  - user:my_google_user
  role: roles/owner
From Logs view:
2020-01-28 16:46:30.932 EET Compute Engine list zones
cloud66@project_id.iam.gserviceaccount.com PERMISSION_DENIED
code: 7
message: "PERMISSION_DENIED"
Have a look at the documentation first. Usually such an error occurs when your service account doesn't have enough permissions. In that case, you should check the available roles, search there for the required permission (such as compute.zones.list), and add it to your service account as described here. For example, it could be the Compute Instance Admin (v1) role.
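If a role turns out to be missing, it can also be granted from the command line; a sketch (the project ID is a placeholder, and the service account email is taken from your output above):
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member=serviceAccount:cloud66@YOUR_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/compute.instanceAdmin.v1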
EDIT It looks like the Compute Admin role should work for you. To be sure, check the granted roles in your project with this command:
gcloud projects get-iam-policy YOUR_PROJECT_ID
If you want to use your service account with the Cloud API from some application, have a look at these instructions.
EDIT2 Try to check your service account and key from a third place (like some Linux desktop or server) as described in the documentation.
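One way to run that check (assuming you have the service account's JSON key downloaded, here called key.json purely as an example name) is to activate the key in a clean environment and re-run the failing command with it:
gcloud auth activate-service-account cloud66@PROJECT_ID.iam.gserviceaccount.com --key-file=key.json
gcloud compute instances list --project PROJECT_ID
If the same PERMISSION_DENIED comes back there too, the problem is on the IAM side rather than in your local gcloud configuration.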
The first time, I created the "cloud66" service account during the Google trial period. Most likely this affected the access rights. Then I switched billing from the trial to a paid account. I deleted and recreated the cloud66 service account in the "APIs & Services -> Credentials" section, but there was still an access policy for "cloud66" with the "Compute Admin" role in the "IAM" section. When I deleted that access policy from the "IAM" section and recreated the service account, the problem was resolved.
To minimize the setup time for attaching a debug session to a remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use 2 steps before attaching the debugger to the JVM socket with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8 -> this needs to be changed after every deploy
8000
3000
3001
background info:
This is the info in the service's YAML under spec > containers > env:
- name: JAVA_TOOL_OPTIONS
  value: >-
    -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
    -Dcom.sun.management.jmxremote=true
    -Dcom.sun.management.jmxremote.port=3000
    -Dcom.sun.management.jmxremote.rmi.port=3001
    -Djava.rmi.server.hostname=127.0.0.1
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re-)deploy, I am trying to find an oc command which can be used to port-forward without having to provide the pod name (e.g. based on the service name).
Or a completely different solution that allows me to hit one button to set up a debug session (preferably in IntelliJ).
[Screenshot of the IntelliJ settings]
----------------------------- edit after tips -------------------------------
For now I made a small batch script which does the trick:
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other intelliJent solutions.
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath="{.items[?(@.status.phase=='Running')].metadata.name}" > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch that programmatically and consistently so you don't need to update it in place every time.
There are a number of ways you can do this: via jsonpath, go-template, bash, etc. An example would be to use the following, replacing your app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
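If you want the lookup and the port-forward in one step, a small shell sketch (assuming the selector matches exactly one running pod, and reusing the ports from the question) would be:
POD=$(oc get pod -l app=replace-me -o jsonpath='{.items[0].metadata.name}')
oc port-forward "$POD" 8000 3000 3001
That snippet can be saved as a script and called from the IntelliJ external-tools step, so the pod name never has to be edited by hand.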
I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology; I mean there are few tutorials and few solutions to a lot of questions, and the tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the Composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I have to run the command composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice@0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave permission to everyone for everything so there would not be any restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again; I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work: the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the Playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched the names, but I'm totally aware of it)
5-. Generated my app with a template, selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed my permission rules to mine
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my bna network, but it failed too. I'm running out of options.
Hope this description is not too long to ignore. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - that mismatch will still matter once you do get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
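If you are unsure of the exact card name, you can list the cards you have imported before pinging (standard Composer CLI command):
composer card list
The card names shown there take the form identity@network-name, which is exactly what --card expects.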
The procedure for an upgrade would normally be the one below:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name), you would:
Make the changes to permissions.acl to include the system ACLs this time, e.g.
/**
 * Sample access control list.
 */
rule SystemACL {
  description: "System ACL to permit all access"
  participant: "org.hyperledger.composer.system.Participant"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
rule NetworkAdminUser {
  description: "Grant business network administrators full access to user resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "**"
  action: ALLOW
}
rule NetworkAdminSystem {
  description: "Grant business network administrators full access to system resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (ie need to change it next increment - eg. update the version property from 0.0.1 to 0.0.2.)
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code first:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network
Our goal is to change the “default permissions” as documented in https://docs.openshift.com/container-platform/3.6/admin_solutions/user_role_mgmt.html#leveraging-default-groups .
The groups system:authenticated, system:authenticated:oauth, and system:unauthenticated
should not be able to access the API. One use case: an LDAP user who is not in the administrator group should not be allowed to log in to the web console. This is also how we test it.
Commands such as
oadm policy remove-cluster-role-from-user basic-user system:authenticated
oadm policy remove-cluster-role-from-user system:basic-user system:authenticated
return without error. However, we couldn't see any effect either. The output of oc get clusterrolebindings and oc get rolebindings remains the same, and our test user can still log on.
Are we trying the wrong commands? Or are further actions needed?
This worked:
oadm policy remove-cluster-role-from-group basic-user system:authenticated
So system:authenticated is a group, not a user, and we had been using the wrong command.
Thanks Red Hat Support.
Though - the cluster didn't work after running the above command, and
oadm policy remove-cluster-role-from-group basic-user system:unauthenticated
We had to revert it. I wonder if it was only the second command that wreaked havoc. After nearly a week of downtime, though, the rest of the team isn't too keen on testing what happens if you only revoke basic-user from system:authenticated.
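For reference, reverting meant adding the role back to both groups, assuming the add-* form mirrors the remove-* form used above:
oadm policy add-cluster-role-to-group basic-user system:authenticated system:unauthenticated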