I would like to use Google Cloud Storage for backing up my database. However, for security reasons, I would like to use a "service account" with a write-only role.
But it seems this role can also delete objects! So my question is: can we make a bucket truly "write only, no deletion"? And if so, how?
This is now possible with the Google Cloud Storage Object Creator role (roles/storage.objectCreator).
https://cloud.google.com/iam/docs/understanding-roles#storage.objectCreator
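If it helps, here is a minimal sketch of granting that role to a backup service account on a specific bucket with the google-cloud-storage Python client; the bucket name and service account address are placeholders, so substitute your own.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-backup-bucket")  # placeholder bucket name

    # Fetch the bucket's IAM policy and add the Object Creator role, which allows
    # inserting new objects but not listing, overwriting, or deleting existing ones.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectCreator",
        "members": {"serviceAccount:backup-writer@my-project.iam.gserviceaccount.com"},
    })
    bucket.set_iam_policy(policy)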
You cannot do this, unfortunately. There is currently no way to grant permission to insert new objects while denying the permission to delete or overwrite existing objects.
You could perhaps implement this using two systems: the first being the backup service, which writes to a temporary bucket, and the second being an administrative service that alone has write permission on the final backup bucket and whose sole job is to copy objects in if and only if no object already exists at that location. Basically, you would trust this second job as an administrator.
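A rough sketch of that administrative copy job, assuming hypothetical bucket names "backup-staging" and "backup-final": passing if_generation_match=0 makes the copy fail unless the destination object does not exist yet, so nothing already archived can be overwritten.

    from google.cloud import storage
    from google.api_core.exceptions import PreconditionFailed

    client = storage.Client()
    staging = client.bucket("backup-staging")
    final = client.bucket("backup-final")

    for blob in client.list_blobs("backup-staging"):
        try:
            # Copy succeeds only if no object exists yet at this name in the final bucket.
            staging.copy_blob(blob, final, blob.name, if_generation_match=0)
            blob.delete()  # optional: clean up the staging copy once archived
        except PreconditionFailed:
            # An object already exists at this location in the final bucket; skip it.
            pass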
Related
I have a Google Cloud Function in one project. If I wish to read from a bucket (not set to public) in another project (belonging to another user), how should access be set up for the Cloud Functions project?
I was trying to approach this by adding an IAM binding in the storage project for the functions project. However, I am not clear which account must be granted the access.
Thanks
If you want GCF in project-a to read from GCS bucket "bucket-b" in project-b, then give project-a's default service account (PROJECT-A@appspot.gserviceaccount.com) the roles/storage.objectViewer IAM role in project-b (or on bucket-b specifically).
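Once that role is granted, the function can read the bucket with its default credentials. A minimal sketch, assuming the Python runtime and placeholder bucket/object names:

    from google.cloud import storage

    def read_from_other_project(request):
        client = storage.Client()  # uses the function's default service account
        blob = client.bucket("bucket-b").blob("some/object.txt")
        return blob.download_as_text()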
Is it possible to have compound permissions on files? For example, I'd like User A to have writer access until a set date, and after that date they can only comment on (or view) the file. Right now, as I'm testing on my personal files, the API explorer doesn't show all the permissions I have set.
Directly through the API you can set the role and type of access each user has; your application would have to remove it after said date.
You could try reading the documentation: Permissions: create
Have a look at the expirationTime field in the Permission resource:
https://developers.google.com/drive/v3/reference/permissions
You can create a permission that will expire at the defined moment.
So for your task, you can create two permissions: one for reading and commenting without an expiration time, and another for writing with an expiration time.
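A sketch of that two-permission approach with the Drive v3 API and the Python client (whether Drive keeps two separate permissions for the same user is worth verifying). FILE_ID, the email address, and the expiration timestamp are placeholders, and creds is assumed to be an already-authorized credentials object obtained via the usual OAuth flow.

    from googleapiclient.discovery import build

    drive = build("drive", "v3", credentials=creds)

    # Permission 1: commenter access with no expiration.
    drive.permissions().create(
        fileId=FILE_ID,
        body={"type": "user", "role": "commenter", "emailAddress": "user-a@example.com"},
        sendNotificationEmail=False,
    ).execute()

    # Permission 2: writer access that expires at the set date (RFC 3339 timestamp).
    drive.permissions().create(
        fileId=FILE_ID,
        body={
            "type": "user",
            "role": "writer",
            "emailAddress": "user-a@example.com",
            "expirationTime": "2025-12-31T23:59:59Z",
        },
        sendNotificationEmail=False,
    ).execute()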
I want some guidance as to how to go about this:
I want some objects in my S3 bucket to be accessible only by a few users (users of my web app). I looked through the AWS docs and it seems as though I need to give each of my users AWS access keys(?).
Obviously I don't want to do this, so is there any way in my app to lock some users out and let others in? I'm using Node.js and MySQL (to store my users), if that makes a difference.
Thanks a lot for the help.
The very simple description of the S3 access/permission scheme is: access to S3, like most other AWS resources, is based on IAM-centric access controls. You can grant access to your S3 buckets either by granting users access to them (setting it on S3) or by granting S3 access to a user (setting it in IAM as a policy). So, whatever or whomever is accessing S3 must be authenticated to AWS. Again, that is a very high-level description, meant simply to point out that access is based on user/role authentication.
Now, assuming your web app is running on AWS (EC2?), then your EC2 instance has been (hopefully) assigned an IAM Role. Once that IAM Role has been assigned the permissions to do so, the application running on the EC2 instance can access any AWS resource via that Role.
But, you don't want ALL of your webapp users to access S3, so my two thoughts are:
1) Check the user's credentials within your app (assuming the user needs to authenticate somehow with your application) and make the determination of whether or not to call S3 based on those credentials. You would then use the IAM Role assigned to the EC2 instance (an EC2 instance can only have one IAM Role assigned to it) and access S3 or not.
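A minimal sketch of Option #1, shown in Python with boto3 for brevity (the same idea applies with the AWS SDK for Node.js). It assumes the instance role allows s3:GetObject; the bucket name and the user-lookup stub are placeholders for your own MySQL check.

    import boto3

    s3 = boto3.client("s3")  # picks up the EC2 instance role's credentials automatically

    def user_can_access_s3(user_id):
        # Placeholder for a lookup against your MySQL users table.
        return user_id in {"alice", "bob"}

    def fetch_object_for_user(user_id, key):
        if not user_can_access_s3(user_id):
            raise PermissionError("This user is not allowed to access S3")
        obj = s3.get_object(Bucket="my-app-bucket", Key=key)
        return obj["Body"].read()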
This second idea is a pretty bad one and smells bad to me. I'm pointing it out merely as a possibility and to highlight how the use of IAM Users / Roles works.
2) This suggestion would not utilize the IAM Role assigned to the EC2 instance. (Though I would still advocate for assigning a Role to the instance: you can always lock down that role and deny access to all AWS resources, but you can't add a role to the instance after it has launched.)
Have two IAM Users (S3Granted and S3Denied), each of which obviously has the appropriate policies for accessing S3. Each user of your webapp (e.g. Hillary Clinton, Donald Trump, Bernie Sanders and Ted Cruz) would then map to one of the two IAM Users based on whether or not they should have access to S3. This would be a field in your MySQL database. You wouldn't bother checking the credentials up-front (because then you would just be performing Option #1); you would proceed with the S3 call regardless of the user, and S3 would either grant or deny access based on the IAM User account your webapp user is associated with. You technically wouldn't need the S3Denied user (you could just have no user), but I figured it would be cleaner to specify the IAM User.
e.g.:
WebAppUser/Bernie Sanders --> IAMUser/S3Granted
WebAppUser/Hillary Clinton --> IAMUser/S3Denied
WebAppUser/Ted Cruz --> IAMUser/S3Granted
WebAppUser/Donald Trump --> IAMUser/S3Denied
For Option #2, you would then need to store the access keys for both IAM Users (S3Granted and S3Denied) somewhere so that you could properly authenticate.
Also, you would need to do a bit of exception handling so that you could properly notify your users that they have been denied access.
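A sketch of what Option #2 could look like (again in Python for brevity): the webapp user's row in MySQL decides which IAM User's access keys are used, and S3 itself grants or denies. All key values, the bucket name, and the mapping are placeholders.

    import boto3
    from botocore.exceptions import ClientError

    IAM_USER_KEYS = {
        "S3Granted": {"key_id": "AKIA...GRANTED", "secret": "..."},
        "S3Denied":  {"key_id": "AKIA...DENIED",  "secret": "..."},
    }

    def fetch_object_for_user(iam_user_for_webapp_user, key):
        keys = IAM_USER_KEYS[iam_user_for_webapp_user]  # looked up from the MySQL user row
        s3 = boto3.client(
            "s3",
            aws_access_key_id=keys["key_id"],
            aws_secret_access_key=keys["secret"],
        )
        try:
            return s3.get_object(Bucket="my-app-bucket", Key=key)["Body"].read()
        except ClientError as err:
            if err.response["Error"]["Code"] == "AccessDenied":
                raise PermissionError("S3 denied access for this user")
            raise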
Overall, #2 is just a bad idea. It would be much cleaner if you simply had a field in your MySQL database that specified whether or not they can access S3 and either make the S3 call via the IAM Role or refuse to do so within your webapp. Don't leave it to S3 to grant or deny access.
First I apologize if I'm a dolt and am missing something obvious, but I've spent a few hours scouring documentation and am lost.
I'm trying to write a python script that will upload a bunch of images to a single user's Google Drive. The user already exists and will never change. I am not writing a web app and don't plan to use any user interface whatsoever. Everything will be done through code.
As best I can understand from the Google documentation, I have two choices:
1) Write a web app and register it to use the Drive SDK. This of course requires having URLs and such for the web app.
2) Create a service account, which ties my "app" to a new service account email.
Neither of these options works for me. Is there any way to simply log in to a single user account and access their drive through python scripting?
There is a deprecated API called ClientLogin that would enable you to use a username and password to log in and access that Drive data.
But the basic idea is that you should be using something more secure -- from your users' point of view -- that allows them to authorize you without giving you their password.
For your use case it is possible that the user is you or someone you know and that you are accessing their account through a more personal kind of authorization. In that case, ClientLogin may be your best choice. If this is an application designed to be used by arbitrary users, the deprecation of ClientLogin is for a good reason and I would urge you to bite the bullet and choose one of the supported options.
The correct solution is to separate the authorization phase from the access phase. The authorization process needs to be run one time only, and can be done from a simple web site. The result of this is a refresh token which is analogous to a username/password. You will need to be aware of the security implications. Make sure you only grant drive.file scope to minimise the impact of a security breach.
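A sketch of that refresh-token approach in Python, assuming the one-time authorization flow has already produced a refresh token for the target account and that you have the matching OAuth client ID and secret (all placeholder values here). It uses google-auth and google-api-python-client with only the drive.file scope.

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    creds = Credentials(
        token=None,  # will be refreshed on first use from the stored refresh token
        refresh_token="STORED_REFRESH_TOKEN",
        token_uri="https://oauth2.googleapis.com/token",
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        scopes=["https://www.googleapis.com/auth/drive.file"],
    )

    drive = build("drive", "v3", credentials=creds)

    # Upload a single image into the user's Drive.
    media = MediaFileUpload("photo.jpg", mimetype="image/jpeg")
    drive.files().create(body={"name": "photo.jpg"}, media_body=media, fields="id").execute()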
Since you are uploading images, you might also want to look at the Picasa API.
We have a Google corporate account and need to transfer ALL of a user's Google Drive files to another account in certain instances. We want to do what is described at the following link for "all files", but programmatically via the latest Drive API: http://support.google.com/a/bin/answer.py?hl=en&answer=1247799
We are currently using the following API version(s) below, coupled with domain wide authority delegation as described at https://developers.google.com/drive/delegation and are able to see a user's files, iterate over them etc.
google-api-services-drive 1.14.2-beta
google-api-client 1.14.1-beta
My question is this: it appears that the only way to change permissions is fileId by fileId. Instead of having to traverse and iterate over a user's entire set of files, if we just want to transfer ALL of a user's files to another particular user, is there a way in the API to do this (ownership transfer for ALL files) rather than making individual requests file by file?
Also, when transferring ownership, must the transferee be in the same #domain, or can it be another #domain we manage? I read somewhere that you can only transfer to owners in the same domain. Does this still hold true? For instance, we manage #myCompany.com and have our corporate account registered under that; however, that shell account has several sub-domains within it. We would like to transfer files from users in the sub-domains to a central user in the #myCompany domain.
You need to change permissions file by file; there is no updateAll type of functionality at the moment.
You can't transfer ownership to another domain's user. Ownership can only be transferred to another user in the same domain as the current owner.
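A sketch of the file-by-file approach, shown against the current Drive v3 API with the Python client (the question uses the Java client, but the calls are analogous). It assumes domain-wide delegation to impersonate the current owner; the service account file, the impersonated user, and the new owner email are placeholders.

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",
        scopes=["https://www.googleapis.com/auth/drive"],
    ).with_subject("current-owner@myCompany.com")

    drive = build("drive", "v3", credentials=creds)
    NEW_OWNER = "central-user@myCompany.com"

    page_token = None
    while True:
        # List only files the impersonated user owns.
        resp = drive.files().list(
            q="'me' in owners",
            fields="nextPageToken, files(id, name)",
            pageToken=page_token,
        ).execute()
        for f in resp.get("files", []):
            # Grant the new user the "owner" role and transfer ownership in one call.
            drive.permissions().create(
                fileId=f["id"],
                body={"type": "user", "role": "owner", "emailAddress": NEW_OWNER},
                transferOwnership=True,
            ).execute()
        page_token = resp.get("nextPageToken")
        if not page_token:
            break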
This answer doesn't directly answer your question, but it could be helpful for both you and future visitors.
As of now, you can mass transfer files to new users with Google's new Admin console. It doesn't let you filter for specific folders, but it does allow you to transfer all of one user's Drive files to a second user.
I know you were trying to create something which uses the API to iterate through folders and files, and you probably have a very specific use-case in mind. However, in the case where you have employees leaving, or you need to transfer everything, using the following method is fast and simple.
Open the Google Admin console
Go to Google Apps > Drive
Click on "Transfer ownership"
Fill out both user fields and submit
This process will even email both users once the process is completed.
You can do this with a single call to the Data Transfer API.
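A sketch using the Admin SDK Data Transfer API (admin/datatransfer_v1) with the Python client. It assumes admin credentials via domain-wide delegation; the service account file, admin email, and the two user IDs (the numeric IDs of the old and new owners) are placeholders.

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",
        scopes=["https://www.googleapis.com/auth/admin.datatransfer"],
    ).with_subject("admin@myCompany.com")

    dt = build("admin", "datatransfer_v1", credentials=creds)

    # Find the application ID for Drive ("Drive and Docs").
    apps = dt.applications().list().execute().get("applications", [])
    drive_app = next(a for a in apps if "Drive" in a["name"])

    OLD_OWNER_ID = "111111111111111111111"  # placeholder numeric user ID
    NEW_OWNER_ID = "222222222222222222222"  # placeholder numeric user ID

    dt.transfers().insert(body={
        "oldOwnerUserId": OLD_OWNER_ID,
        "newOwnerUserId": NEW_OWNER_ID,
        "applicationDataTransfers": [{
            "applicationId": drive_app["id"],
            "applicationTransferParams": [
                {"key": "PRIVACY_LEVEL", "value": ["PRIVATE", "SHARED"]},
            ],
        }],
    }).execute()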
Exactly what is needed, but only via the API!
If this is not possible via API calls, then there is no point in deleting a user via the API.