S3 Object that only some of my users can access - mysql

I want some guidance as to how to go about this:
I want to have some objects in my S3 bucket be accessible only by a few users (users from my web app). I looked through the AWS docs and it seems as though I need to give each of my users AWS access keys(?).
Obviously I don't want to do this, so is there any way in my app to lock out some users and let others in? I'm using Node.JS and MySQL (to store my users) if that makes a difference.
Thanks a lot for the help.

The very simple description of the S3 access/permission scheme is: access to S3, like most other AWS resources, is based on IAM-centric access controls. You can grant access to your S3 bucket either by granting users access to the bucket (a bucket policy set on S3) or by granting S3 access to a user (a policy set in IAM). So whatever or whoever is accessing S3 must be authenticated to AWS. Again, that is a very high-level description, meant simply to point out that access is based on user/role authentication.
Now, assuming your web app is running on AWS (EC2?), your EC2 instance has (hopefully) been assigned an IAM Role. Once that IAM Role has been granted the necessary permissions, the application running on the EC2 instance can access the permitted AWS resources via that Role.
But, you don't want ALL of your webapp users to access S3, so my two thoughts are:
1) Check the user's credentials within your app (assuming the user needs to authenticate somehow with your application) and decide whether or not to call S3 based on those credentials. You would then use the IAM Role assigned to the EC2 instance (an EC2 instance can only have one IAM Role assigned to it) and either access S3 or not.
The second idea is a pretty bad one and smells bad to me. I'm pointing it out merely as a possibility and to highlight how IAM Users and Roles work.
2) This suggestion would not use the IAM Role assigned to the EC2 instance. (I would still always advocate assigning a Role to the instance; you can lock that Role down and deny it access to all AWS resources, but you can't add a Role to an instance after it has been launched.)
Have two IAM Users (S3Granted and S3Denied), each of which has an appropriate policy for accessing S3. Each user of your webapp (e.g. Hillary Clinton, Donald Trump, Bernie Sanders and Ted Cruz) would then map to one of the two IAM Users based on whether or not they should have access to S3. This mapping would be a field in your MySQL database. You wouldn't bother checking the credentials up front (because then you would just be performing Option #1); you would proceed with the S3 call regardless of the user, and S3 would either grant or deny access based on the IAM User your webapp user is associated with. You technically wouldn't need the S3Denied user (you could just map to no user), but I figured it would be cleaner to specify the IAM User.
e.g.:
WebAppUser/Bernie Sanders --> IAMUser/S3Granted
WebAppUser/Hillary Clinton --> IAMUser/S3Denied
WebAppUser/Ted Cruz --> IAMUser/S3Granted
WebAppUser/Donald Trump --> IAMUser/S3Denied
For Option #2, you would then need to store the access keys for both IAM Users (S3Granted and S3Denied) somewhere so that you could properly authenticate.
Also, you would need to do a bit of exception handling so that you could properly notify your users that they have been denied access.
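For illustration only, Option #2 would look roughly like this in Node.js with the AWS SDK (the iam_user column, the environment variable names and the bucket are placeholders I've made up, not anything from your schema):

// Option #2 sketch: pick a set of IAM User access keys based on a per-user
// value stored in MySQL. Shown only to illustrate the idea; see below for why it is a bad one.
const AWS = require('aws-sdk');

// Access keys for the two IAM Users, kept outside the code (e.g. environment variables)
const iamUsers = {
  S3Granted: { accessKeyId: process.env.S3GRANTED_KEY, secretAccessKey: process.env.S3GRANTED_SECRET },
  S3Denied:  { accessKeyId: process.env.S3DENIED_KEY,  secretAccessKey: process.env.S3DENIED_SECRET }
};

function getObjectForUser(webAppUser, key, callback) {
  // webAppUser.iam_user is the hypothetical MySQL column: 'S3Granted' or 'S3Denied'
  const s3 = new AWS.S3(iamUsers[webAppUser.iam_user]);
  s3.getObject({ Bucket: 'my-bucket', Key: key }, (err, data) => {
    if (err && err.code === 'AccessDenied') {
      return callback(new Error('You are not allowed to access this file'));
    }
    callback(err, data);
  });
}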
Overall, #2 is just a bad idea. It would be much cleaner to simply have a field in your MySQL database that specifies whether or not a user can access S3, and then either make the S3 call via the IAM Role or refuse to do so within your webapp. Don't leave it to S3 to grant or deny access.
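A minimal sketch of that cleaner approach, assuming a made-up users.can_access_s3 column and bucket name; the SDK picks up the EC2 instance Role's credentials automatically, so no keys live in the app:

const AWS = require('aws-sdk');
const mysql = require('mysql');

// No credentials passed in: on EC2 the SDK falls back to the instance's IAM Role.
const s3 = new AWS.S3();
const db = mysql.createConnection({ /* your existing connection settings */ });

function getFileForUser(userId, key, callback) {
  // Hypothetical column users.can_access_s3 (0 or 1) makes the decision inside the app
  db.query('SELECT can_access_s3 FROM users WHERE id = ?', [userId], (err, rows) => {
    if (err) return callback(err);
    if (!rows.length || !rows[0].can_access_s3) {
      return callback(new Error('Access denied')); // S3 is never called for this user
    }
    // For example, hand back a short-lived pre-signed URL instead of streaming the object
    const url = s3.getSignedUrl('getObject', { Bucket: 'my-bucket', Key: key, Expires: 60 });
    callback(null, url);
  });
}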

Related

Google Drive service accounts can't access objects on shared drives created by other users

I would like to have access to objects on shared drives created by other users, using service accounts.
I discovered that by making a call to https://www.googleapis.com/drive/v3/files with the following query:
q=mimeType!='application/vnd.google-apps.folder' and 'GOOGLE_DRIVE_FOLDER_ID' in parents and trashed=false&supportsTeamDrives=true&teamDriveId=GOOGLE_TEAM_DRIVE_ID&fields=files(id ,name ,webViewLink ,webContentLink)
I get different results depending on the account: if I use an access token generated for a service account, I get a different result than if I use an access token generated for a user account.
The service account "sees" only files that were created by that particular service account, whereas regular users "see" all the files created by other users as well.
Has anyone had a similar issue, and do you know of any solution or workaround?
What you need to understand is that you can only see the files that you have permission to see. If you are logged in with a normal user account, you will only be able to see the files that you own or have access to. The same goes for a service account; think of a service account as a dummy user. The service account can only see the files it has been granted access to.
Assuming the shared drives you are talking about belong to a G Suite domain, you can have the G Suite admin set up domain-wide delegation on the service account and grant it access to the files on the domain.
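A rough sketch of what that looks like from Node.js with the googleapis client once delegation is configured (the key file, scope, impersonated user and drive ID are placeholders):

const { google } = require('googleapis');

// Service account key with domain-wide delegation enabled; 'subject' is the
// domain user the service account impersonates when reading Drive.
const auth = new google.auth.JWT({
  keyFile: '/path/to/service-account.json',
  scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  subject: 'someone@your-gsuite-domain.com'
});

const drive = google.drive({ version: 'v3', auth });

drive.files.list({
  q: "mimeType != 'application/vnd.google-apps.folder' and trashed = false",
  corpora: 'drive',
  driveId: 'GOOGLE_TEAM_DRIVE_ID',
  includeItemsFromAllDrives: true,
  supportsAllDrives: true,   // current names for the teamDriveId/supportsTeamDrives parameters
  fields: 'files(id, name, webViewLink, webContentLink)'
}).then(res => console.log(res.data.files));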
If you don't have G Suite, or don't want to give the service account full access to the domain, you might also try having the owner of the drive call permissions.create and add the service account to the files directly.
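For that second route, the call the drive owner could run looks roughly like this (the client credentials, drive ID and service-account address are placeholders):

const { google } = require('googleapis');

// OAuth2 client authorized as the drive owner (not as the service account)
const auth = new google.auth.OAuth2('CLIENT_ID', 'CLIENT_SECRET');
auth.setCredentials({ refresh_token: 'OWNER_REFRESH_TOKEN' });

const drive = google.drive({ version: 'v3', auth });

drive.permissions.create({
  fileId: 'GOOGLE_TEAM_DRIVE_ID',   // the shared drive, folder or file to share
  supportsAllDrives: true,
  requestBody: {
    type: 'user',
    role: 'reader',                 // or 'writer', depending on what the service account needs
    emailAddress: 'my-sa@my-project.iam.gserviceaccount.com'
  }
}).then(res => console.log('Permission id:', res.data.id));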

Isn't it possible to grant access to an IAM user by just clicking some buttons?

I have created an S3 bucket and an IAM user in AWS.
Now I want to grant this user access to the bucket.
The examples of doing this that I have found on the internet all involve editing some JSON text, and none of them work for me; they fail with various error messages.
Is it true that the only way to grant access to an IAM user is by editing JSON text? Why isn't it possible to just add permissions in the web interface by clicking something on the page?
Where can I read the documentation for the JSON code I need to write a policy?
All policy documents are JSON. That is a good thing, because it gives you granular control over what access you provide. If you do not want that, just go to Add permissions -> Attach existing policies directly -> AmazonS3ReadOnlyAccess,
or full access, whatever you want (this is also JSON underneath, but you don't have to worry about what's there).
In my opinion you should create your own policy with granular control and attach it to your user. You can use the AWS Policy Generator for that:
https://awspolicygen.s3.amazonaws.com/policygen.html
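To make the JSON itself less mysterious, here is a sketch of a minimal read-only policy for a single bucket (bucket and user names are placeholders); the same document can be pasted into the console's inline policy editor, or attached programmatically as shown:

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

// Minimal policy: list one bucket and read its objects, nothing else
const policy = {
  Version: '2012-10-17',
  Statement: [
    { Effect: 'Allow', Action: ['s3:ListBucket'], Resource: 'arn:aws:s3:::my-bucket' },
    { Effect: 'Allow', Action: ['s3:GetObject'],  Resource: 'arn:aws:s3:::my-bucket/*' }
  ]
};

iam.putUserPolicy({
  UserName: 'my-iam-user',
  PolicyName: 'MyBucketReadOnly',
  PolicyDocument: JSON.stringify(policy)
}, (err) => {
  if (err) console.error(err);
  else console.log('Policy attached');
});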

AWS Cognito Users + Relational Database Table. How to query/integrate both?

I'm new to AWS and I really need help with this. I have an existing RDS schema with a Users table, and my own user authentication system using JWT. Everything was fine until I reached the point of uploading files to S3. I discovered that when uploading to S3 we cannot pass extra parameters, only the body, key, content type, and the target bucket. I wanted to pass extra parameters like the currently logged-in user's access token (for user validation/security), user_id, photo title and caption, but it's not possible.
What should I do? Should I use AWS Cognito User Pools instead of my RDS Users table? If I use Cognito User Pools, is it possible to do an SQL query, like joining a Cognito user with another RDS table? I'm so confused. I'm sorry if I sound like an idiot, but I really need some help with this.
I hope somebody can help. I would really appreciate it. Thank you very much in advance.
I am assuming your upload logic is in Lambda. In this case you can just do your authorization for the upload in the Lambda function. Allow the Lambda function to upload data to S3 by attaching an IAM policy to the IAM role that Lambda uses.
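A minimal sketch of that Lambda approach (the token check is a stand-in for your own JWT validation against RDS, and the bucket name is a placeholder; the Lambda execution role needs s3:PutObject on that bucket):

const AWS = require('aws-sdk');
const s3 = new AWS.S3(); // uses the Lambda execution role's credentials

exports.handler = async (event) => {
  const { accessToken, userId, title, caption, imageBase64 } = JSON.parse(event.body);

  // Hypothetical helper: your own JWT/session validation against the RDS users table
  const user = await verifyToken(accessToken, userId);
  if (!user) {
    return { statusCode: 403, body: JSON.stringify({ error: 'Not authorized' }) };
  }

  await s3.putObject({
    Bucket: 'my-photo-bucket',
    Key: `users/${userId}/${Date.now()}.jpg`,
    Body: Buffer.from(imageBase64, 'base64'),
    ContentType: 'image/jpeg',
    Metadata: { title, caption } // the "extra parameters" travel as object metadata
  }).promise();

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};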
If you upload to S3 directly from a client, then you can either do that without authentication/authorization or use Federated Identities. In this case you can either export all your users to a Cognito User Pool (and keep them in sync) or create your own identity provider and register your users with a Cognito Identity Pool.
The cleanest, but probably also hardest, way is to keep your authentication and integrate with the Cognito Identity Pool via OpenID, SAML or your own method (see http://docs.aws.amazon.com/cognito/latest/developerguide/developer-authenticated-identities.html).
You should go that way only if a) your authentication is really good and b) you have verified that having the user in a Cognito Identity Pool actually meets your requirements/business rules.
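For completeness, the server-side piece of that developer-authenticated flow looks roughly like this (the identity pool ID and developer provider name are placeholders you configure on the Identity Pool):

const AWS = require('aws-sdk');
const cognitoIdentity = new AWS.CognitoIdentity();

// After YOUR authentication has validated the user, exchange your own user id
// for a Cognito identity/token the client can use to obtain AWS credentials.
function getCognitoToken(appUserId, callback) {
  cognitoIdentity.getOpenIdTokenForDeveloperIdentity({
    IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
    Logins: { 'login.mycompany.myapp': String(appUserId) } // your developer provider name
  }, callback); // responds with { IdentityId, Token }
}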

How to restrict automatic account creation in OpenShift Origin 1.3?

An OpenShift Origin instance can be configured with Google OAuth login with or without a hosted domain restriction. On first login an account is created for the user and then permissions can be assigned.
Is it possible to restrict automatic new account creation, i.e. disable it completely to only allow certain people on the instance?
You can start by choosing which hosted domain you want to use: https://docs.openshift.org/latest/install_config/configuring_authentication.html#Google . In addition, you can choose the mapping method used to map identities to users: https://docs.openshift.org/latest/install_config/configuring_authentication.html#mapping-identities-to-users and tightly control who can and can't have a user on your cluster.
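For example (a sketch only; the client ID/secret and domain are placeholders), the relevant piece of master-config.yaml would combine the hosted domain with a non-automatic mapping method such as lookup, so logins succeed only for users/identities you have pre-created:

oauthConfig:
  identityProviders:
  - name: google
    challenge: false
    login: true
    mappingMethod: lookup   # no automatic account creation; provision users/identities yourself
    provider:
      apiVersion: v1
      kind: GoogleIdentityProvider
      clientID: YOUR_CLIENT_ID
      clientSecret: YOUR_CLIENT_SECRET
      hostedDomain: example.com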

Google storage write only (no delete)

I would like to use Google Storage for backing up my database. However, for security reasons, I would like to use a "service account" with a write-only role.
But it seems like this role can also delete objects! So my question: can we make a bucket truly "write only, no deletion"? And of course, how?
This is now possible with the Google Cloud Storage Object Creator role (roles/storage.objectCreator):
https://cloud.google.com/iam/docs/understanding-roles#storage.objectCreator
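A sketch of granting that role on a single bucket with the Node.js client (bucket and service-account names are placeholders; the console or gsutil iam ch can do the same thing):

const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function grantCreatorOnly() {
  const bucket = storage.bucket('my-backup-bucket');
  const [policy] = await bucket.iam.getPolicy();

  // objectCreator lets the account create new objects, but not view,
  // overwrite or delete existing ones.
  policy.bindings.push({
    role: 'roles/storage.objectCreator',
    members: ['serviceAccount:backup-writer@my-project.iam.gserviceaccount.com']
  });

  await bucket.iam.setPolicy(policy);
}

grantCreatorOnly().catch(console.error);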
Before that role existed, you could not do this: there was no way to grant permission to insert new objects while denying the permission to delete or overwrite existing objects.
You could perhaps implement this using two systems: the first being the backup service, which writes to a temporary bucket, and the second being an administrative service that exclusively has write permission on the final backup bucket and whose sole job is to copy objects in if and only if there is no existing object at that location. Basically, you would trust this second job as an administrator.
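A rough sketch of that second, trusted job using the Node.js client (bucket names are placeholders; the job itself would run with broader credentials than the backup writer):

const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

// Trusted administrative job: promote objects from the staging bucket into the
// final backup bucket only if nothing already exists at the destination.
async function promoteBackups() {
  const staging = storage.bucket('backup-staging');
  const finalBucket = storage.bucket('backup-final');

  const [files] = await staging.getFiles();
  for (const file of files) {
    const [exists] = await finalBucket.file(file.name).exists();
    if (!exists) {
      await file.copy(finalBucket.file(file.name));
    }
  }
}

promoteBackups().catch(console.error);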