Google Cloud Functions using service account file not found - json

I am trying to list the buckets located in the other GCP Project, essentially using code from here: https://cloud.google.com/docs/authentication/production#auth-cloud-explicit-python
To do this I need to authenticate using a JSON key file.
Unfortunately I can't resolve the error related to linking my JSON file to the function. This is the code I use:
def explicit(argument):
    from google.cloud import storage
    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json(
        'gs://PROJECT/PATH/service_account.json')
    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
The error I am getting:
Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py", line 99, in view_func
    return function(request._get_current_object())
  File "/workspace/main.py", line 6, in explicit
    storage_client = storage.Client.from_service_account_json(
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/cloud/client.py", line 106, in from_service_account_json
    with io.open(json_credentials_path, "r", encoding="utf-8") as json_fi:
FileNotFoundError: [Errno 2] No such file or directory: 'gs://PROJECT/PATH/service_account.json'
How can I properly reference the file to execute the function?

Google's client libraries all support Application Default Credentials (ADC), which very helpfully includes finding credentials automatically. You should use this.
When your code runs on the Cloud Functions service, it runs as a Service Account identity. You're probably running the Cloud Function using the default Cloud Functions Service Account. You can specify a user-defined Service Account when you deploy|update the Cloud Function.
NOTE It's better to run your Cloud Functions using user-defined, non-default Service Accounts.
When the Cloud Function tries to create a Cloud Storage client, it does so using its identity (the Service Account).
A solution is to:
Utilize ADC and have the Cloud Storage client be created using it, i.e. storage_client = storage.Client() (see the sketch after this answer)
Adjust the Cloud Function's Service Account to have the correct IAM roles|permissions for Cloud Storage.
NOTE You should probably adjust the Bucket's IAM Policy. Because you're using a Cloud Storage Bucket in a different project, if you want to adjust the Project's IAM Policy instead, ensure you use the Bucket's (!) Project's IAM Policy and not the Cloud Function's Project's IAM Policy.
See IAM roles for Cloud Storage: predefined roles and perhaps roles/storage.objectViewer as this includes storage.objects.list permission that you need.
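As a rough sketch of the first point (assuming the rest of the function stays the same), the handler reduces to the following; the return value is only there so the HTTP function has a response:

from google.cloud import storage

def explicit(request):
    # Application Default Credentials: on Cloud Functions this resolves to the
    # function's runtime Service Account, so no key file is needed.
    storage_client = storage.Client()

    # Make an authenticated API request.
    buckets = list(storage_client.list_buckets())
    print(buckets)
    return str(buckets)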

You're trying to list buckets located in another GCP project while authorizing your client with a Service Account (SA) key file that you also pull from Google Cloud Storage (GCS).
I'd recommend a different security pattern to resolve this. Essentially, use a single SA that has permissions to invoke your Cloud Function and permissions to list the contents of your GCS bucket. This circumvents the need to pull a key file from GCS while still maintaining security, since a bad actor would require access to your GCP account. The following steps show you how to do so.
Create a Service Account (SA) with the role Cloud Functions Invoker in your first Cloud Functions project
Grant your user account or user group the Service Account User role on this new SA
Change your Cloud Function to use this newly created SA (see: where to change the Cloud Function runtime SA)
Grant the Cloud Function runtime SA a GCS role in the second project or GCS bucket
In this pattern, you will "actAs" the Cloud Function runtime SA, allowing you to invoke the Cloud Function. Since the runtime SA already has adequate permissions on the GCS bucket in your other project, there's no need for an SA key at all.
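With that pattern in place, the function body needs no key handling at all. A minimal sketch, assuming the runtime SA has been granted access and using a placeholder ID for the second project:

from google.cloud import storage

def explicit(request):
    # The client authenticates as the Cloud Function's runtime SA via ADC.
    storage_client = storage.Client()

    # list_buckets() accepts an explicit project, so the runtime SA can list
    # buckets owned by the second project it was granted access to.
    # "second-project-id" is a placeholder.
    buckets = list(storage_client.list_buckets(project="second-project-id"))
    print(buckets)
    return str(buckets)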

Related

Error reading credential file from location in xamarin.forms while connecting to Cloud Firestore

I am developing a mobile application in Xamarin.Forms. I will use Google Cloud Firestore as the cloud database. When I want to connect to the database, I get the following error:
One or more errors occurred. (Error reading credential file from location D:\ yemekbagisiyapbucak.json: Could not find file "/D:\ yemekbagisiyapbucak.json"
Please check the value of the Environment Variable GOOGLE_APPLICATION_CREDENTIALS)
public static FirestoreDb db;
path = "D:\\yemekbagisiyapbucak.json";
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", path);
db = FirestoreDb.Create("yemekbagisiyapbucak");
First, please make sure the .json file is in the path you set. var path = Path.Combine(FileSystem.AppDataDirectory, "yemekbagisiyapbucak.json"); returns a different path on different platforms.
So you can debug it, check the value of the path, and then copy the .json file to that path.
If you need more information, please check the following link: Connecting to Google cloud firestore in a cross platform mobile app (xamarin c#)

Why does gsutil restore a file from a bucket encrypted with KMS (using a service account without DECRYPT permission)?

I am working with GCP KMS, and it seems that when I send a file to a GCP bucket (using gsutil cp) it is encrypted.
However, I have a question related to the permission to restore that file from the same bucket using a different service account. I mean, the service account that I am using to restore the file from the bucket doesn't have the Decrypt privilege, and even so the gsutil cp works.
My question is whether this is normal behavior, or if I'm missing something?
Let me describe my question:
First of all, I confirm that the default encryption for the bucket is the KEY that I set up previously:
$ gsutil kms encryption gs://my-bucket
Default encryption key for gs://my-bucket:
projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY
Next, with gcloud config, I set a service account which has the "Storage Object Creator" and "Cloud KMS CryptoKey Encrypter" roles:
$ gcloud config set account my-service-account-with-Encrypter-and-object-creator-permissions
Updated property [core/account].
I send a local file to the bucket:
$ gsutil cp my-file gs://my-bucket
Copying file://my-file [Content-Type=application/vnd.openxmlformats-officedocument.presentationml.presentation]...
| [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
After sending the file to the bucket, I confirm that the file is encrypted using the KMS key I created before:
$ gsutil ls -L gs://my-bucket
gs://my-bucket/my-file:
Creation time: Mon, 25 Mar 2019 06:41:02 GMT
Update time: Mon, 25 Mar 2019 06:41:02 GMT
Storage class: REGIONAL
KMS key: projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY/cryptoKeyVersions/1
Content-Language: en
Content-Length: 616959
Content-Type: application/vnd.openxmlformats-officedocument.presentationml.presentation
Hash (crc32c): 8VXRTU==
Hash (md5): fhfhfhfhfhfhfhf==
ETag: xvxvxvxvxvxvxvxvx=
Generation: 876868686868686
Metageneration: 1
ACL: []
Next, I set another service account, but this time WITHOUT DECRYPT permission and with the object viewer role (so that it is able to read files from the bucket):
$ gcloud config set account my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions
Updated property [core/account].
After setting up the new service account (WITHOUT Decrypt permission), the gsutil command to restore the file from the bucket works smoothly...
gsutil cp gs://my-bucket/my-file .
Copying gs://my-bucket/my-file...
\ [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
My question is whether this is normal behavior, or whether, since the new service account doesn't have Decrypt permission, the gsutil cp to restore the file shouldn't work? I mean, isn't the idea that with KMS encryption the 2nd gsutil cp command should fail with a "403 permission denied" error message or something similar?
If I revoke the "Storage Object Viewer" role from the 2nd service account (used to restore the file from the bucket), then gsutil does fail, but that is because it doesn't have permission to read the file:
$ gsutil cp gs://my-bucket/my-file .
AccessDeniedException: 403 my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions does not have storage.objects.list access to my-bucket.
I'd appreciate it if someone could give me a hand and clarify the question. Specifically, I'm not sure whether the command gsutil cp gs://my-bucket/my-file . should work or not.
I think it shouldn't work (because the service account doesn't have Decrypt permission), or should it?
This is working correctly. When you use Cloud KMS with Cloud Storage, the data is encrypted and decrypted under the authority of the Cloud Storage service, not under the authority of the entity requesting access to the object. This is why you have to add the Cloud Storage service account to the ACL for your key in order for CMEK to work.
When an encrypted GCS object is accessed, the KMS decrypt permission of the accessor is never used and its presence isn't relevant.
If you don't want the second service account to be able to access the file, remove its read access.
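If you want to see exactly which Cloud Storage service account that is, the Python client can report it. A small sketch, assuming the google-cloud-storage library and a placeholder project ID:

from google.cloud import storage

# The Cloud Storage service agent of the bucket's project is the identity that
# calls Cloud KMS for CMEK encryption/decryption, so it is the one that needs
# access to the key.
client = storage.Client(project="my-bucket-project")  # placeholder project ID
service_agent = client.get_service_account_email()
print("Grant the KMS CryptoKey Encrypter/Decrypter role to:", service_agent)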
By default, Cloud Storage encrypts all object data using Google-managed encryption keys. You can instead provide your own keys. There are two types:
CSEK which you must supply
CMEK which you also supply, but this time is managed by Google KMS service (this is the one you are using).
When you use gsutil cp, you are already using the encryption method behind the scenes. So, as stated in the documentation for Using Encryption Keys:
While decrypting a CSEK-encrypted object requires supplying the CSEK
in one of the decryption_key attributes, this is not necessary for
decrypting CMEK-encrypted objects because the name of the CMEK used to
encrypt the object is stored in the object's metadata.
As you can see, supplying the key is not necessary because its name is already stored in the object's metadata, which is what gsutil uses.
If encryption_key is not supplied, gsutil ensures that all data it
writes or copies instead uses the destination bucket's default
encryption type - if the bucket has a default KMS key set, that CMEK
is used for encryption; if not, Google-managed encryption is used.
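To see the same thing from the client's side: downloading a CMEK-encrypted object looks exactly like downloading any other object, and the caller never supplies or touches the key. A sketch with placeholder names, using the Python client:

from google.cloud import storage

client = storage.Client()            # authenticates as whatever identity is active
bucket = client.bucket("my-bucket")  # placeholder bucket name
blob = bucket.blob("my-file")        # placeholder object name

# No decryption key is supplied here: Cloud Storage looks up the CMEK named in
# the object's metadata and decrypts under its own service agent's authority.
blob.download_to_filename("my-file")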

Compute Engine accessing DataStore get Invalid Credentials (code: 401)

I am following the tutorial on
https://cloud.google.com/datastore/docs/getstarted/start_nodejs/
trying to use datastore from my Compute Engine project.
Step 2 in the tutorial mentioned I do not have to create new service account credentials when running from Compute Engine.
I run the sample with:
node test.js abc-test-123
where abc-test-123 is my Project ID, and that project has enabled all Cloud API access, including the Datastore API.
After uploaded the code and executed the sample, I got the following error:
Adams: { 'rpc error': { [Error: Invalid Credentials] code: 401,
errors: [ [Object] ] } }
Update:
I did a workaround by changing the default sample code to use the JWT credential way (with a generated .json key file) and things are working now.
Update 2:
This is the scope config when I run
gcloud compute instances describe abc-test-123
And the result:
serviceAccounts:
scopes:
- https://www.googleapis.com/auth/cloud-platform
According to the doc:
You can set scopes only when you create a new instance, and cannot
change or expand the list of scopes for existing instances. For
simplicity, you can choose to enable full access to all Google Cloud
Platform APIs with the https://www.googleapis.com/auth/cloud-platform
scope.
I still welcome any answer about why the original code does not work in my case~
Thanks for reading
This most likely means that when you created the instance, you didn't specify the right scopes (datastore and userinfo-email according to the tutorial). You can check that by executing the following command:
gcloud compute instances describe <instance>
Look for serviceAccounts/scopes in the output.
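Alternatively, from inside the instance you can ask the metadata server which scopes the VM's service account actually has. A quick check in Python (the tutorial itself uses Node.js, so this is only for verification):

import urllib.request

# Lists the OAuth scopes granted to the instance's default service account;
# the datastore and userinfo.email scopes must appear for the tutorial to work.
url = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/service-accounts/default/scopes")
req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
print(urllib.request.urlopen(req).read().decode())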
There are two ways to create an instance with the right scopes:
gcloud compute instances create $INSTANCE_NAME --scopes datastore,userinfo-email
Using the web console: under Access & Security, enable User Info & Datastore

Authorization error: migrating db for Google cloud SQL

I've been trying to make a db using google cloud sql, but I got an error when migrating the db.
python manager.py db init works fine: a folder named migrations was created.
However, python manager.py db migrate produces an error:
File "/usr/local/google_appengine/google/storage/speckle/python/api/rdbms.py", line 946, in MakeRequest
raise _ToDbApiException(response.sql_exception)
sqlalchemy.exc.InternalError: (InternalError) (0, u'End user Google Account not authorized.') None None
It looks like a kind of authorization errors. How should I solve it?
If you had previously authorized with a different ID, the old authentication information is saved in a file named '.googlesql_oauth2.dat'.
In that case, you have to remove that file before the authentication process is performed.
Mac:
~/.googlesql_oauth2.dat
Windows:
%USERPROFILE%\.googlesql_oauth2.dat
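If you prefer to script the cleanup, something like this should work on both platforms (a sketch; Path.home() resolves to ~ on Mac/Linux and %USERPROFILE% on Windows):

from pathlib import Path

# Delete the cached googlesql OAuth data so the next db migrate run
# re-triggers authorization with the correct account.
cred_file = Path.home() / ".googlesql_oauth2.dat"
if cred_file.exists():
    cred_file.unlink()
    print("Removed", cred_file)
else:
    print("Nothing to remove at", cred_file)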
Ref: http://jhlim.kaist.ac.kr/?p=182

SSIS: Accessing a network drive using a different username and password

Is there a way to connect to a network drive that requires a different username/password than the username/password of the user running the package?
I need to copy files from a remote server. Right now I map the network drive in Windows Explorer and then do a File System task. However, eventually this package will be run automatically from a different machine, and it will need to map the network drive on its own. Is this possible?
You can use the Execute Process task with the "net use" command to create the mapped drive. Here's how the properties of the task should be set:
Executable: net
Arguments: use \\Server\SomeShare YourPassword /user:Domain\YourUser
Any File System tasks following the Execute Process will be able to access the files.
Alternative Method
This Sql Server Select Article covers the steps in details but the basics are:
1) Create an "Execute Process Task" to map the network drive (this maps it to Z:)
Executable: cmd.exe
Arguments: /c "NET USE Z: "\\servername\shareddrivename" /user:mydomain\myusername mypassword"
2) Then run a "File System Task" to perform the copy. Remember that the destination "Flat File Connection" must have "DelayValidation" set to True as z:\suchandsuch.csv won't exist at design time.
3) Finally, unmap the drive when you're done with another "Execute Process Task"
Executable: cmd.exe
Arguments: /c "NET USE Z: /delete"
Why not use an FTP task to GET the files over to the local machine? Run SSIS on the local machine. When transferring using FTP in binary, it's really fast. Just remember that the row delimiter for SSIS should be LF, not CRLF, as binary FTP does not convert LF (Unix) to CRLF (Windows).
You have to map the network drive, here's an example that I'm using now:
profile = "false"
landingPadDir = Dts.Variables("strLandingPadDir").Value.ToString
resultsDir = Dts.Variables("strResultsDir").Value.ToString
user = Dts.Variables("strUserName").Value.ToString
pass = Dts.Variables("strPassword").Value.ToString
driveLetter = Dts.Variables("strDriveLetter").Value.ToString
objNetwork = CreateObject("WScript.Network")
CheckDrive = objNetwork.EnumNetworkDrives()
If CheckDrive.Count > 0 Then
    For intcount = 0 To CheckDrive.Count - 1 Step 2 'if drive is already mapped, then disconnect it
        If CheckDrive.Item(intcount) = driveLetter Then
            objNetwork.RemoveNetworkDrive(driveLetter)
        End If
    Next
End If
objNetwork.MapNetworkDrive(driveLetter, landingPadDir, profile, user, pass)
From There just use that driveLetter and access the file via the mapped drive.
I'm having one issue (which led me here) with a new script that accesses two share drives and performs some copy/move operations between the drives and I get an error from SSIS that says:
This network connection has files open or requests pending.
at Microsoft.VisualBasic.CompilerServices.LateBinding.InternalLateCall(Object o, Type objType, String name, Object[] args, String[] paramnames, Boolean[] CopyBack, Boolean IgnoreReturn)
at Microsoft.VisualBasic.CompilerServices.NewLateBinding.LateCall(Object Instance, Type Type, String MemberName, Object[] Arguments, String[] ArgumentNames, Type[] TypeArguments, Boolean[] CopyBack, Boolean IgnoreReturn)
at ScriptTask_3c0c366598174ec2b6a217c43470f581.ScriptMain.Main()
This only happens on the "2nd run" of the process; if I run it a 3rd time it all works fine, so I'm guessing the connection isn't being properly closed, or it isn't waiting for the copy/move to complete before moving forward. I'm unable to find a "close" or "flush" command that prevents this error. If you have any solution, please let me know, but the above code should work for getting the drive mapped using your alternate credentials and allowing you to access that share.
Zach
To make the package more robust, you can do the following;
In the first Execute Process Task, set - FailTaskIfReturnCodeNotSuccessValue = False
This will let the package run if the last disconnect has not worked.
This is an older question, but more recent versions of SQL Server with SSIS databases allow you to use a proxy to execute SQL Server jobs.
In SSMS, under Security > Credentials, set up a credential in the database mapped to the AD account you want to use.
Under SQL Server Agent create a new proxy giving it the credential from step 1 and permissions to execute SSIS packages.
Under the SQL Server Agent jobs create a new job that executes your package
Select the step that executes the package and click EDIT. In the Run As dropdown select the Proxy you created in step 2