How to restrict automatic account creation in OpenShift Origin 1.3?

An OpenShift Origin instance can be configured with Google OAuth login with or without a hosted domain restriction. On first login an account is created for the user and then permissions can be assigned.
Is it possible to restrict automatic new account creation, i.e. disable it completely to only allow certain people on the instance?

You can start by choosing which hosted domain you want to use: https://docs.openshift.org/latest/install_config/configuring_authentication.html#Google. In addition, you can choose the lookup mapping method for mapping identities to users: https://docs.openshift.org/latest/install_config/configuring_authentication.html#mapping-identities-to-users. With lookup, no user is created automatically on first login: logins only succeed for users and identity mappings you have provisioned in advance, which lets you tightly control who can and can't have a user on your cluster.
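As a rough illustration (the client ID, secret and hosted domain below are placeholders, not values from the question), an identityProviders entry in master-config.yaml along these lines combines the hosted-domain restriction with the lookup mapping method:

```yaml
oauthConfig:
  identityProviders:
  - name: google
    challenge: false
    login: true
    mappingMethod: lookup        # no automatic user creation; identities must be pre-provisioned
    provider:
      apiVersion: v1
      kind: GoogleIdentityProvider
      clientID: YOUR_CLIENT_ID
      clientSecret: YOUR_CLIENT_SECRET
      hostedDomain: example.com  # optional: only accept accounts from this domain
```

With mappingMethod: lookup in place, each allowed person is typically pre-provisioned before their first login, for example with oc create user, oc create identity and oc create useridentitymapping.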

Related

Google Drive SA accounts can't access objects on shared drives created by other users

I would like to have access, using service accounts, to objects on shared drives created by other users.
I discovered that by making a call to https://www.googleapis.com/drive/v3/files with the following query:
q=mimeType!='application/vnd.google-apps.folder' and 'GOOGLE_DRIVE_FOLDER_ID' in parents and trashed=false&supportsTeamDrives=true&teamDriveId=GOOGLE_TEAM_DRIVE_ID&fields=files(id ,name ,webViewLink ,webContentLink)
I get different results depending on the account: if I use an access token generated for the service account, I get a different result than if I use an access token generated for a user account.
The service account "sees" only files that were created by that particular service account, whereas regular users "see" all the files created by other users as well.
Anyone had similar issue and know any solution or workaround?
What you need to understand is that you can only see the files that you have permission to see. If you are logged in on a normal user account you will only be able to see the files that you own, or have access to. The same goes for a service account, think of a service account as a dummy user. The service account can only see the files it has been granted access to.
Assuming the shared drives you are talking about are part of a G Suite domain, you can have the G Suite admin set up domain-wide delegation for the service account and grant it access to the files on the domain.
If you don't have G Suite, or don't want to give the service account that much access to the domain, you might also try having the owner of the drive call permissions.create to add the service account as a member.
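A hedged sketch of that second workaround (the drive ID, service-account email, token and key-file path are placeholders, not values from the question); note that in the current Drive v3 API the shared-drive parameters are supportsAllDrives / driveId rather than the older supportsTeamDrives / teamDriveId used in the query above:

```python
# Sketch: add the service account as a member of the shared drive, then list files as it.
from google.oauth2 import service_account
from google.oauth2.credentials import Credentials as UserCredentials
from googleapiclient.discovery import build

DRIVE_ID = "GOOGLE_TEAM_DRIVE_ID"                               # placeholder shared drive ID
SA_EMAIL = "my-sa@my-project.iam.gserviceaccount.com"           # placeholder service account

# 1) Run permissions.create as someone who already manages the shared drive
#    (the service account cannot add itself).
owner_creds = UserCredentials(token="OWNER_3LO_ACCESS_TOKEN")   # placeholder user token
owner_drive = build("drive", "v3", credentials=owner_creds)
owner_drive.permissions().create(
    fileId=DRIVE_ID,
    body={"type": "user", "role": "reader", "emailAddress": SA_EMAIL},
    supportsAllDrives=True,
    sendNotificationEmail=False,
).execute()

# 2) The service account now sees the same files as any other member.
sa_creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=["https://www.googleapis.com/auth/drive.readonly"])
sa_drive = build("drive", "v3", credentials=sa_creds)
files = sa_drive.files().list(
    q="mimeType != 'application/vnd.google-apps.folder' and trashed = false",
    corpora="drive",
    driveId=DRIVE_ID,
    includeItemsFromAllDrives=True,
    supportsAllDrives=True,
    fields="files(id, name, webViewLink, webContentLink)",
).execute()
print(files.get("files", []))
```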

Force 2-factor authentication across bim360

Since BIM 360 allows users to disable 2-factor authentication for themselves, one of our clients requires a way to prevent access to files, or to monitor whether users turn this feature off.
For Forge applications managed by me, I have this covered through the following endpoint:
https://forge.autodesk.com/en/docs/oauth/v2/reference/http/users-#me-GET/
Problem is, users without 2FA can then still access the files if they go directly through the BIM 360 Docs web app (docs.b360.autodesk.com). Is there a way to restrict access on the BIM 360 Docs platform based solely on 2FA, or a way to use the users/@me endpoint so that I can monitor a project when those users aren't logging in through my applications?
Sorry, this information is only accessible for the current user (the one who authorized the 3-legged access token).
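So monitoring other users' 2FA status is not possible; for the users who do authorize your own app, a minimal sketch of polling that profile endpoint with their stored 3-legged token looks like the following (check the documented response schema for the exact name of the 2FA field):

```python
# Sketch: fetch the Forge user profile for a user who authorized a 3-legged token.
import requests

def get_forge_profile(three_legged_token: str) -> dict:
    resp = requests.get(
        "https://developer.api.autodesk.com/userprofile/v1/users/@me",
        headers={"Authorization": f"Bearer {three_legged_token}"},
    )
    resp.raise_for_status()
    return resp.json()  # inspect the 2FA-related field for this user
```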

How to assign multiple service account credentials to Google Cloud Functions?

I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My cloud function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
One can assign only one service account while deploying a cloud function.
Is there a way, similar to AWS where one can create multiple inline policies, to attach the equivalent to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project the default CF service account - which is the same as the GAE app's one from that project - already has access to the Datastore (of course, if you're OK with using the default service account).
I didn't use the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.
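A minimal sketch of that pattern, assuming the function's single runtime service account has been granted the Datastore role(s) it needs and, for the Alert Center API, the necessary Workspace domain-wide delegation; names and the entity kind below are illustrative:

```python
# Sketch: one runtime service account identity used for both Datastore and Alert Center.
import google.auth
from google.cloud import datastore
from googleapiclient.discovery import build

def handler(request):
    # Application Default Credentials resolve to the function's runtime service account,
    # so a single identity is used for every downstream call.
    creds, project = google.auth.default(
        scopes=["https://www.googleapis.com/auth/datastore",
                "https://www.googleapis.com/auth/apps.alerts"])

    # Firestore in Datastore mode for bookkeeping.
    ds = datastore.Client(project=project, credentials=creds)
    entity = datastore.Entity(key=ds.key("AlertRun", "last"))
    entity["status"] = "started"
    ds.put(entity)

    # Alert Center API with the same identity (alert data belongs to the Workspace
    # domain, not the GCP project, hence the delegation requirement).
    alertcenter = build("alertcenter", "v1beta1", credentials=creds)
    alerts = alertcenter.alerts().list(pageSize=10).execute()
    return f"{len(alerts.get('alerts', []))} alerts"
```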

S3 Object that only some of my users can access

I want some guidance as to how to go about this:
I want to have some objects in my S3 bucket be accessible only to a few users (users from my web app). I looked through the AWS docs and it seems as though I need to give each of my users AWS access keys(?).
Obviously I don't want to do this, so is there any way in my app to lock out some users and let others in? I'm using Node.JS and MySQL (to store my users) if that makes a difference.
Thanks a lot for the help.
The very simple description of the S3 access/permission scheme is: access to S3, like most other AWS resources, is based on IAM-centric access controls. You can grant access to your S3 buckets either by granting users access to them (setting it on S3) or by granting S3 access to a user (setting it in IAM as a policy). So whatever or whomever is accessing S3 must be authenticated to AWS. Again, that is a very high-level description, meant simply to point out that access is based on user/role authentication.
Now, assuming your web app is running on AWS (EC2?), then your EC2 instance has (hopefully) been assigned an IAM role. Once that IAM role has been granted the permissions to do so, the application running on the EC2 instance can access any AWS resource via that role.
But, you don't want ALL of your webapp users to access S3, so my two thoughts are:
1) Check the user's credentials within your app (assuming the user needs to authenticate somehow with your application) and make the determination of whether or not to call S3 based on those credentials. You would then use the IAM Role assigned to the EC2 instance (an EC2 instance can only have one IAM Role assigned to it) and access S3 or not.
2) This suggestion would not utilize the IAM Role assigned to the EC2 instance. It is a pretty bad idea and smells bad to me; I'm pointing it out merely as a possibility and to highlight how IAM Users / Roles work. (I would still always advocate assigning a Role to the instance; you can always lock that role down and deny access to all AWS resources, but you can't add a role to the instance after it has launched.)
Have two IAM Users (S3Granted and S3Denied), each of which has an appropriate policy for accessing S3. Each user of your webapp (e.g. Hillary Clinton, Donald Trump, Bernie Sanders and Ted Cruz) would then map to one of the two IAM Users based on whether or not they should have access to S3; this would be a field in your MySQL database. You wouldn't bother checking the credentials up front (because then you would just be performing Option #1); you would proceed with the S3 call regardless of the user, and S3 would either grant or deny access based on the IAM User account your webapp user is associated with. You technically wouldn't need the S3Denied user (you could just have no user), but I figured it would be cleaner to specify an IAM User either way.
e.g.:
WebAppUser/Bernie Sanders --> IAMUser/S3Granted
WebAppUser/Hillary Clinton --> IAMUser/S3Denied
WebAppUser/Ted Cruz --> IAMUser/S3Granted
WebAppUser/Donald Trump --> IAMUser/S3Denied
For Option #2, you would then need to store the access keys for both IAM Users (S3Granted and S3Denied) somewhere so that you could properly authenticate.
Also, you would need to do a bit of exception handling so that you could properly notify your users that they have been denied access.
Overall, #2 is just a bad idea. It would be much cleaner if you simply had a field in your MySQL database that specified whether or not they can access S3 and either make the S3 call via the IAM Role or refuse to do so within your webapp. Don't leave it to S3 to grant or deny access.
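The question mentions Node.JS; the same pattern applies with the AWS SDK for JavaScript, but here is a minimal sketch of Option #1 in Python with boto3, assuming a hypothetical can_access_s3 column in the MySQL users table and placeholder connection details:

```python
# Sketch: gate S3 access inside the app, then use the EC2 instance role for the call.
import boto3
import pymysql

def user_may_access_s3(user_id: int) -> bool:
    # Hypothetical schema: users(id, ..., can_access_s3 TINYINT)
    conn = pymysql.connect(host="db-host", user="app", password="...", database="app")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT can_access_s3 FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
    finally:
        conn.close()
    return bool(row and row[0])

def get_download_url(user_id: int, bucket: str, key: str) -> str:
    if not user_may_access_s3(user_id):
        raise PermissionError("This user is not allowed to access S3 objects")
    # boto3 picks up the EC2 instance role's credentials automatically;
    # no per-user AWS access keys are needed.
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=300)
```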

What is the intended use case for app auth and app users?

I am trying to understand the intended use case for app auth and app users. I'm basically thinking about building an app that would use Box to store data of users who subscribe to our service. Our service would allow each user to access and view their data.
If I have an account that basically owns the data of all the subscribed users, can I use the enterprise access token as the base for authentication while using the user account token to restrict each user to viewing only the data in their specific sub-directory? Or do I have to have a unique account with its own API key for every user?
I hope this makes sense. Any assistance would be appreciated.
Thanks.
App Auth and App Users -- which is officially called Box Platform -- is essentially a white-labeled version of Box. I think of it this way: "Box" as we know it is software-as-a-service. It offers a web app, mobile apps, and all the trimmings. Box Platform is the platform layer upon which the SaaS is built, providing API-based management of users/content/comments/collaborations/etc. With Box Platform you have a walled garden in which you can build apps that leverage all the features of the APIs, but are not otherwise "Box apps."
I'm basically thinking about building an app that would use Box to store data of users that would subscribe to our service. Our service would allow each user to access and view their data.
This is an appropriate use case. With Box Platform you will be the owner and administrator of a Box enterprise and all the accounts and data contained within.
If I have an account that basically owns the data of all the subscribed users, can I use the enterprise access token as a base for authentication while using the user account token to restrict the user to only viewing the data from their specific sub directory. Or do I have to have a unique account with its own api key for every user?
I think it's generally cleanest to create unique accounts for each user as opposed to giving users a special subdirectory in the admin account. From there you can use the App Auth workflow to get an access token specific to that user.
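A minimal sketch of that flow with the Box Python SDK, assuming a JWT (app auth) application; the config file path and user name below are placeholders:

```python
# Sketch: the enterprise service client creates one App User per subscriber, then acts
# as that user so each subscriber only sees their own content.
from boxsdk import JWTAuth, Client

auth = JWTAuth.from_settings_file("box_jwt_config.json")   # your JWT app credentials
service_client = Client(auth)                               # enterprise service account

# One-time provisioning: an App User per subscriber (no Box login of their own).
app_user = service_client.create_user("Subscriber 42", login=None)

# Per-request: scope all calls to that user; they can only reach their own folders.
user_client = service_client.as_user(app_user)
for item in user_client.folder("0").get_items():
    print(item.name)
```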