OpenShift: combining roles for users

I am using OpenShift to deploy my web application. For login, I use the OpenShift OAuth API with a service account.
Right now, I request the scope role:edit:<namespace> for all users when they log in. But now I want to set their access according to the roles defined in their OpenShift accounts.
I tried retrieving user roles, but as I do not have admin access, I cannot view them.
Any ideas on how to approach this problem? Any help would be appreciated.
PROGRESS: I was able to partly solve my problem.
As an admin, I created a cluster role allowing read access to clusterrolebindings and then bound that role to the required user:
oc create clusterrole roleget --verb=<verb_list> --resource=<resource_list>
oc adm policy add-role-to-user roleget developer
Then, if I change the scope in my application to role:roleget:<namespace>, I am able to access the rolebindings via the REST API:
curl -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' https://$ENDPOINT/apis/rbac.authorization.k8s.io/v1beta1/namespaces/$NAMESPACE/rolebindings
So now I am able to fulfill each requirement separately. But I need to merge the two scopes together somehow, i.e.
role:edit:<namespace> and role:roleget:<namespace>.
Does anybody have an idea how to do this?
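Per the OAuth 2.0 spec, the scope parameter is a space-delimited list, so both role scopes can be requested together in a single authorization request. A minimal sketch of building the encoded scope value (the namespace name myproject is a placeholder):

```shell
# OAuth 2.0 scopes are space-delimited; request both in one authorize call.
SCOPES="role:edit:myproject role:roleget:myproject"

# Percent-encode the colons and the separating space for use in the URL's
# scope= query parameter.
ENCODED=$(printf '%s' "$SCOPES" | sed -e 's/:/%3A/g' -e 's/ /%20/g')

echo "scope=$ENCODED"
```

The resulting token should then carry both scopes, so one login grants edit access and the ability to read rolebindings.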


API for validating user credentials (username/password) in Ping

Is there an API in PingFederate / PingOne to validate user credentials (username and password)?
Here is a scenario in which I would like to use it:
user logs in via SAML SSO to my web application
certain application feature requires that the user credentials are validated again (to sign-off some operation)
SAML SSO does not make it easy to re-validate user credentials without logging out of the application. Users' passwords are obviously not stored in the application, so the only way to validate credentials is to send them via some API to Ping for validation - however, I was unable to find such an API in Ping.
For example, Okta (which offers services similar to Ping's) does provide such an API:
curl -v -X POST \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d '{
"username": "dade.murphy#example.com",
"password": "correcthorsebatterystaple"
}' "https://${yourOktaDomain}/api/v1/authn"
I am looking for something similar in Ping.
Yes - there are two options in PingFederate for this:
Authentication API - This enables clients to authenticate users via a REST API instead of having the adapters present login templates directly. More details here: https://docs.pingidentity.com/bundle/pingfederate-102/page/elz1592262150859.html
OAuth Resource Owner Password Credentials grant type - If you're just looking to validate a username + password combination, you could leverage PingFederate's support for the OAuth ROPC grant type. It allows you to POST the credentials and get back an access token on success. More details here: https://docs.pingidentity.com/bundle/pingfederate-102/page/lzn1564003025072.html
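As a sketch of the ROPC option (the hostname, client ID, and client secret are placeholders; PingFederate's default OAuth token endpoint path is /as/token.oauth2):

```shell
# Hypothetical PingFederate host and OAuth client configured for the
# password (ROPC) grant type.
TOKEN_ENDPOINT="https://pingfed.example.com:9031/as/token.oauth2"

# POST the user's credentials; on success the response body is a JSON
# document containing an access_token, which proves the credentials
# were valid.
curl -s -X POST "$TOKEN_ENDPOINT" \
  -u "my_ropc_client:my_client_secret" \
  -d "grant_type=password" \
  -d "username=dade.murphy@example.com" \
  -d "password=correcthorsebatterystaple"
```

For the sign-off use case, the application can simply discard the returned access token - the successful grant itself is the credential check.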
Karolbe, you may also wish to take a look at the Adaptive Authentication feature provided by PingFederate, which directly addresses your second requirement above (certain application features require that the user's credentials are validated again to sign off an operation). From the Ping Identity website: adaptive authentication and authorization allow you to evaluate contextual, behavioral, and correlated data to make a more informed decision and gain a higher level of assurance about a user's identity, which is what your requirement 2) is asking for. A typical use case: when a user tries to access a high-value application, or tries to log in after a configured idle time, adaptive authentication forces the user to present authentication credentials again.

How to restrict access to a Google Cloud Function, allowing the user to authenticate on trigger?

I have created a Google Cloud Function and, under its permissions, added the 'Cloud Functions Invoker' role to the 3 individual users who should be able to trigger the function.
The function is accessible at the trigger endpoint provided, similar to this:
https://us-central1-name-of-my-app.cloudfunctions.net/function-name
I have assigned myself the Invoker role on the function, but when I enter the URL I get a 403:
Your client does not have permission to get URL /function-name from
this server.
Since I am signed into my Google account already, I had assumed I would have permissions to access this function.
If not, how can I present an authentication prompt as part of the function without exposing the entire function via allUsers?
You can't call the function directly even if you are authenticated in your browser (this feature will come later, once functions can sit behind a Global Load Balancer with IAP activated).
So, to call your function you have to present an identity token (not an access token). For this, you can use the gcloud SDK with a command like the following (on Linux, after initializing it with your user credentials via gcloud init):
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://....
You can also create an API Gateway in front of it (I wrote an article on this) and use an API key, for example.
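If the caller is itself running on GCP (a GCE VM, Cloud Run service, etc.), it can mint the identity token from the metadata server instead of the gcloud SDK. A sketch, using the function URL from the question as the token audience (the calling service account still needs the Cloud Functions Invoker role):

```shell
# The audience must be the URL of the function being called.
AUDIENCE="https://us-central1-name-of-my-app.cloudfunctions.net/function-name"

# The metadata server (only reachable from inside GCP) signs an identity
# token for the instance's attached service account.
ID_TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${AUDIENCE}")

# Present the identity token as a Bearer token to invoke the function.
curl -s -H "Authorization: Bearer ${ID_TOKEN}" "$AUDIENCE"
```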

OpenShift secret token expiry

We would like to create a service user to manage the CI/CD workflow for the different teams. Secret tokens can be generated for the service account to perform API operations.
oc create sa sample
oc policy add-role-to-user developer system:serviceaccount:sampleproject:sample
oc describe sa sample
oc describe secret sample-token-5s5kl
The describe command above gives us the secret token, which we hand over to the different teams for their API operations. But the problem we are currently facing is that the secret token expires in 4 hrs or so. Is there a way to create never-expiring secret tokens?
If I am not wrong, they don't expire. Quoting the OpenShift documentation: "The generated API token and registry credentials do not expire, but they can be revoked by deleting the secret. When the secret is deleted, a new one is automatically generated to take its place." Please refer to this page for more info.
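To hand the token itself over to a team, it can be read straight out of the secret. A sketch, using the secret name from the question (the token is stored base64-encoded in the secret's data):

```shell
# Secret name as reported by `oc describe sa sample`.
SECRET="sample-token-5s5kl"

# .data.token is base64-encoded inside the secret; decode it before use.
oc get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d
```

The decoded value is the same long-lived bearer token that the teams can then pass in an Authorization header for API calls.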

Create Google Compute Instance with a service account from another Google Project

I would like to know whether it is possible to attach a service account created in my-project-a to a Google Compute Engine instance in, say, my-project-b.
The following command:
gcloud beta compute instances create my-instance \
--service-account=my-service-account@my-project-a.iam.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--project=my-project-b
gives me the following error:
(gcloud.beta.compute.instances.create) Could not fetch resource:
- The user does not have access to service account 'my-service-account@my-project-a.iam.gserviceaccount.com'. User: 'me@mysite.com'. Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
me@mysite.com is my account and I'm the owner of the org.
Not sure whether this is related, but looking at the UI (in my-project-b) there is no option to add a service account from any other project. I was hoping to be able to add the account my-service-account@my-project-a.iam.gserviceaccount.com.
You could follow these steps to authenticate as a service account from my-project-a on an instance in my-project-b:
Create a service account in my-project-a with the proper role for Compute Engine.
Download the JSON key file.
Copy the new my-project-a service account's email.
On my-project-b, add a team member using the email copied in the previous step.
Connect via SSH to your instance in my-project-b.
Copy the JSON file from step 2 onto your my-project-b instance.
Run the following command to activate the service account:
gcloud auth activate-service-account --key-file=YOUR_JSON_FILE
Verify by using the following command:
gcloud auth list
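Alternatively, the error message itself points at a direct fix: have a my-project-a owner grant you roles/iam.serviceAccountUser on the service account, after which the original create command may succeed. A sketch, using the (placeholder) emails from the question:

```shell
# Service account from my-project-a that the instance should run as.
SA_EMAIL="my-service-account@my-project-a.iam.gserviceaccount.com"

# Run by an owner of my-project-a: allow the user to act as this SA.
gcloud iam service-accounts add-iam-policy-binding "$SA_EMAIL" \
  --member="user:me@mysite.com" \
  --role="roles/iam.serviceAccountUser" \
  --project=my-project-a
```

Note that actually attaching a service account from another project to an instance is additionally subject to the organization policy on cross-project service account usage, so this grant alone may not be sufficient in every org.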

Concurrent-user load testing of a REST API with curl commands using JMeter

The scenario:
A Heroku server makes web service calls to Parse.com using curl commands. The web services use JSON over REST.
I need to test the performance of the Parse.com server for my website in the case of 40 users hitting it at one time.
As the communication between the Heroku server and Parse.com is through REST/JSON web services, I assume I need to generate 40 concurrent calls of each web service to hit Parse.com.
Each curl command has one user session token and some parameters in the header, which I configure in the JMeter HTTP Request when generating the web service call under load.
I need to test the scenario in which 40 concurrent users simultaneously create a project ("create project" is also a web service) on Parse.com. (There is no web service for creating users, but each curl command carries a user session token as the key of a user signed up on the website.)
Problem:
The curl command for creating a project on Parse.com carries one user session token. So even if I enter 40 as the thread count, it will create 40 projects against one user session, whereas I want 40 users creating 40 projects simultaneously.
Here is the curl command with one user session token:
curl -X POST -H "X-Parse-Application-Id: " -H "X-Parse-REST-API-Key:" -H "Content-Type: application/json" -H "X-Parse-Session-Token: l8beiq2zv6kf420nbno8k7or1" -d '{"projectType":"feedback","users":null,"ownerOnlyInvite":false,"topicName":"SERVICE UPDATE TOPIC","name":"SERVICE UPDATE","deadline":"2014/03/08","s3ProjectImageKey":"065D417C-EEAA-4E74-BB43-5BDCED126A58"}'
Question:
Should I use the curl command in JMeter for load testing, or is there a better alternative for testing REST/JSON web services? If I enter 40 user session tokens in the HTTP header while configuring the HTTP Request in JMeter, will it hit Parse.com as 40 concurrent users creating 40 projects?
This can be achieved as follows:
Put all the session tokens you want to use in a CSV file and read it with a CSV Data Set Config element; JMeter will use one token per user (thread).
Refer: http://ivetetecedor.com/how-to-use-a-csv-file-with-jmeter/
Hope this will help.
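As a sketch, the CSV could look like this (the first token is the one from the question's curl command; the others are placeholders for each signed-up user's session token):

```
token
l8beiq2zv6kf420nbno8k7or1
<session-token-for-user-2>
<session-token-for-user-3>
```

With a CSV Data Set Config pointing at this file and the variable name set to token, the HTTP Header Manager can then send X-Parse-Session-Token: ${token}, so each of the 40 threads picks up a different user's session token.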