I have been trying to use the Google Drive API to create files in a folder that was shared with me by another user (I made sure I have edit permissions on it). When I was using the files.create method with supportsAllDrives=True, I got the following error message:
{
"errorMessage": "<HttpError 404 when requesting https://www.googleapis.com/upload/drive/v3/files?supportsTeamDrives=true&alt=json&uploadType=multipart returned "File not found: 1aLcUoiiI36mbCt7ZzWoHr8RN1nIPlPg7.". Details: "[{'domain': 'global', 'reason': 'notFound', 'message': 'File not found: 1aLcUoiiI36mbCt7ZzWoHr8RN1nIPlPg7.', 'locationType': 'parameter', 'location': 'fileId'}]">",
"errorType": "HttpError",
"requestId": "fc549b9e-9590-4ab4-8aaa-f5cea87ba4b6",
"stackTrace": [
" File "/var/task/lambda_function.py", line 154, in lambda_handler\n upload_file(service, download_path, file_name, file_name, folder_id, 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')\n",
" File "/var/task/lambda_function.py", line 78, in upload_file\n file = service.files().create(\n",
" File "/opt/python/googleapiclient/_helpers.py", line 131, in positional_wrapper\n return wrapped(*args, **kwargs)\n",
" File "/opt/python/googleapiclient/http.py", line 937, in execute\n raise HttpError(resp, content, uri=self.uri)\n"
]
}
After a bit of digging, I found that 'Shared Drives' is different from 'Shared with me', and all the APIs I have found so far apply to 'Shared Drives' only. The supportsTeamDrives=True parameter has been deprecated, and I was not able to find a replacement parameter in the docs. There is a sharedWithMe=True parameter for the files.list API, but I'm not sure how I can use it in my code, because files.create doesn't see the folder ID of a 'Shared with me' folder anyway. Any suggestions are appreciated in advance!
My current code:
def upload_file(service, file_name_with_path, file_name, description, folder_id, mime_type):
    media_body = MediaFileUpload(file_name_with_path, mimetype=mime_type)
    body = {
        'name': file_name,
        'title': file_name,
        'description': description,
        'mimeType': mime_type,
        'parents': [folder_id]
    }
    file = service.files().create(
        supportsAllDrives=True,
        supportsTeamDrives=True,
        body=body,
        media_body=media_body).execute()
Modified answer to include more details:
You are correct: 'Shared Drives' is different from 'Shared with me'. First off, you need to get the ID of the folder that was shared with you; for this you can use files.list, as in the sketch below. To upload files to that folder, or any other type of folder, you can use the modified upload script after it.
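A minimal sketch of the folder-ID lookup (this assumes an already-authorized Drive v3 service object like the one built in the script below; the query string is just one way to narrow the results):

# List folders that were shared with me, to find the target folder's ID.
results = service.files().list(
    q="sharedWithMe and mimeType='application/vnd.google-apps.folder'",
    fields="files(id, name)"
).execute()
for folder in results.get('files', []):
    print(folder['name'], folder['id'])

The full upload script: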
from __future__ import print_function
from googleapiclient.http import MediaFileUpload
from googleapiclient.discovery import build
from google.oauth2 import service_account

# Scopes required by this endpoint -> https://developers.google.com/drive/api/v3/reference/files/create
SCOPES = ['https://www.googleapis.com/auth/drive']
"""
To upload/create a file in to a 'Shared with me' folder this script has the following configured:
1. Project:
* Create project
* Enable the Google Workspace API the service account will be using: https://developers.google.com/workspace/guides/create-project
2.Consent screen:
* Configure the consent screen for the application
* Create credentials for your service account depending on the type of application to be used with https://developers.google.com/workspace/guides/create-credentials#create_a_service_account
Once your Service Account is created you are taken back to the credentials list (https://console.cloud.google.com/apis/credential) click on the created Service Account, next click on ‘Advanced settings’ and copy your client ID
3. Scopes
* Collect the scopes needed for your service account/application
https://developers.google.com/identity/protocols/oauth2/scopes
4. Grant access to user data to a service account in Google Workspace https://admin.google.com/ac/owl/domainwidedelegation
* In the "Client ID" field, paste the client ID from your service account
* In the "OAuth Scopes" field, enter a comma-delimited list of the scopes required by your application. This is the same set of scopes you defined when configuring the OAuth consent screen.
* Click Authorize.
5. In your code you need to impersonate the account the folder was shared with, if it was your account, you add your account here:
credentials = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES)
delegated_creds = credentials.with_subject('user#domain.info')
"""
def main():
    SERVICE_ACCOUNT_FILE = 'drive.json'  # Service account credentials from step 2
    credentials = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE, scopes=SCOPES)
    delegated_creds = credentials.with_subject('user@domain.xyz')
    service = build('drive', 'v3', credentials=delegated_creds)
    media = MediaFileUpload(
        'xfiles.jpg',
        mimetype='image/jpeg',
        resumable=True
    )
    request = service.files().create(
        media_body=media,
        # '1Gb0BH1NFz3...' is the ID of the 'Shared with me' folder to upload this file to
        body={'name': 'xfile new pic', 'parents': ['1Gb0BH1NFz30eau8SbwMgXYXDjTTITByE']}
    )
    response = None
    while response is None:
        status, response = request.next_chunk()
        if status:
            print("Uploaded %d%%." % int(status.progress() * 100))
    print("Upload Complete!")

if __name__ == '__main__':
    main()
Where parents is the ID of the folder that was shared with you. See the files.create reference linked above for more documentation details.
After a chat with a Google Workspace API specialist, it turns out there is no API available to perform the above task. For clarity, refer to the picture showing where my target folder lies.
Difference between 'Shared Drive' and 'Shared with me' (image)
Here's the response from the Support Agent:
I reviewed your code and everything was done perfectly, so I spoke to our Drive specialists, and they have explained to me that "Shared with me" is more than anything a label; because you are not the owner of the file (as you would be if it were in "My Drive"), nor the co-owner (if it were located in a "Shared Drive"), it does not allow you to use any type of API to automate file creation or deletion, or anything for that matter.
In this case you can either make a copy on your Drive and automate it there, updating the file that was shared with you every now and then, or just ask the user to move it to a "Shared Drive" and access it from there.
I confess I'm a little disappointed that there is no API way to add/delete/edit in another user's folder in spite of having permissions to do so. My understanding as a developer is that the CLI is the ultimate, most powerful way to interact with any service; the GUI comes second to the CLI, being just a more visually appealing medium. Oftentimes, when we are not able to perform a task using the GUI, we turn to the CLI and get higher granularity and precision.
But this was a completely upside-down scenario! I fail to understand how I'm able to access the shared folder and make additions and deletions through the GUI, but unable to do the same through a script. I understand now that 'Shared with me' is just a label and not a 'location' through which I can access the folder, but surely, I would have assumed, there was another API way to access a folder that belongs to another user (using the person's username/ID for identification, the folder path as the target, verifying whether I have permissions to make said changes for authorization, returning an error if I don't, and lastly executing the call).
If someone is able to explain whether there is a specific reason why this is not made available to end users, I would love to learn about it.
EDIT
I'm a bit late posting the solution here, but the issue turned out to be that the Google Workspace service account being used by my API did not have write permissions on the Shared Drive I was trying to query. Once the service account was given the required edit permissions, my code worked perfectly.
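For anyone in the same situation, the grant itself can also be scripted. A minimal sketch, assuming an authorized Drive v3 service object built as a user who can manage the target drive; the file ID and service-account address are placeholders:

# Give the service account write access to the shared drive / folder.
service.permissions().create(
    fileId='<shared-drive-or-folder-id>',
    supportsAllDrives=True,
    sendNotificationEmail=False,
    body={
        'type': 'user',
        'role': 'writer',
        'emailAddress': 'my-sa@my-project.iam.gserviceaccount.com'  # placeholder
    }
).execute()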
The Azure file share Python SDK has two similar methods, get_directory_client and get_subdirectory_client. It seems both interact with directories, so do we really need two methods to perform the same task?
get_directory_client gets a directory client from the share itself (starting from the root), while get_subdirectory_client gets a subdirectory of the current directory.
As you can see from the documentation, you must get a ShareClient object first. At that point you can only call get_directory_client, which gives you a ShareDirectoryClient object; once you have that, if you want to get a subdirectory, you call the get_subdirectory_client method.
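As a rough illustration of the difference (a sketch assuming the azure-storage-file-share package; the connection string, share name, and directory names are placeholders):

from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string("<your-connection-string>", share_name="<your-file-share-name>")
root = share.get_directory_client("")          # ShareDirectoryClient for the share's root
sub = root.get_subdirectory_client("reports")  # child directory, relative to the current directory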
You can also refer to the description of the file share client to understand the difference.
Update:
from azure.storage.fileshare import ShareServiceClient

connection_string = "<your-connection-string>"
service = ShareServiceClient.from_connection_string(conn_str=connection_string)
share = service.get_share_client("<your-file-share-name>")

my_files = []
for item in share.list_directories_and_files():
    my_files.append(item)
    if item["is_directory"]:
        dir_client = share.get_directory_client(item["name"])
        for item2 in dir_client.list_directories_and_files():
            my_files.append(item2)
            if item2["is_directory"]:
                for item3 in dir_client.get_subdirectory_client(item2["name"]).list_directories_and_files():
                    my_files.append(item3)
print(my_files)
You can refer to this official documentation.
Honestly, I find it a bit confusing having to deal with two different methods to do "the same thing". I prefer to instantiate the directory client via the from_connection_string method.
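For instance, a directory client can be built directly from the connection string (a sketch assuming the azure-storage-file-share package; the connection string, share name, and path are placeholders):

from azure.storage.fileshare import ShareDirectoryClient

dir_client = ShareDirectoryClient.from_connection_string(
    conn_str="<your-connection-string>",
    share_name="<your-file-share-name>",
    directory_path="some/dir"  # placeholder path
)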
For more information on how to list files from a file share, take a look at the following post: "Azure File Share - Recursive Directory Search like os.walk". The approach I describe there lists files recursively, walking through the directories.
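A condensed sketch of that recursive idea (again assuming the azure-storage-file-share package; names are placeholders):

from azure.storage.fileshare import ShareServiceClient

def walk_share(directory_client, prefix=""):
    # Yield the path of every file under the given directory, recursively.
    for item in directory_client.list_directories_and_files():
        if item["is_directory"]:
            sub = directory_client.get_subdirectory_client(item["name"])
            yield from walk_share(sub, prefix + item["name"] + "/")
        else:
            yield prefix + item["name"]

service = ShareServiceClient.from_connection_string("<your-connection-string>")
share = service.get_share_client("<your-file-share-name>")
print(list(walk_share(share.get_directory_client(""))))  # "" = the share's root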
I need to import an existing Aurora cluster into Terraform. I tried the statement terraform import aws_rds_cluster.sample_cluster cluster.
The state file was generated, and I could also run terraform show. However, when I try to destroy the cluster, Terraform tries to delete the cluster without the instances under it, so the destroy command fails:
`Error: error deleting RDS Cluster (test): InvalidDBClusterStateFault: Cluster cannot be deleted, it still contains DB instances in non-deleting state.status code: 400, request id: 15dfbae8-aa13-4838-bc42-8020a2c87fe9`
Is there a way I can import the entire cluster, including the instances under it? I need a single state file that can be used to manage the entire cluster (including the underlying instances).
Here is the main.tf that is being used for the import:
provider "aws" {
  access_key = "***"
  secret_key = "*****"
  region     = "us-east-1"
}

resource "aws_rds_cluster" "test" {
  cluster_identifier              = "test"
  engine                          = "aurora-postgresql"
  engine_version                  = "11.9"
  master_username                 = "user"
  master_password                 = "******"
  db_cluster_parameter_group_name = "test"
  # Note: instance_class is not a cluster-level argument; it belongs on the
  # aws_rds_cluster_instance resources (db.r5.2xlarge in this setup).
}
Based on the comments.
Importing just the aws_rds_cluster into TF is not enough. One must also import all the aws_rds_cluster_instance resources which are part of the cluster, along the lines of the sketch below.
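A minimal sketch (the resource names and identifiers below are illustrative, not taken from the question):

resource "aws_rds_cluster_instance" "test_instance_1" {
  identifier         = "test-instance-1"  # must match the existing DB instance identifier
  cluster_identifier = aws_rds_cluster.test.id
  engine             = "aurora-postgresql"
  instance_class     = "db.r5.2xlarge"
}

# Import the cluster and each of its instances into the same state file:
#   terraform import aws_rds_cluster.test test
#   terraform import aws_rds_cluster_instance.test_instance_1 test-instance-1

Once the instances are in the state as well, terraform destroy can plan the deletion of the instances and the cluster together.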
If the existing infrastructure is complex, instead of fully manual development of the TF config files for the import procedure, an open-source third-party tool called former2 could be considered. The tool can generate TF config files from existing resources:
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account.
TF is one of the supported outputs.
I'm using three instances in my lab account, namely:
1. a web app and a proxy (ubuntu image)
2. Keyrock 5 (keyrock-R5.1.0 image)
3. SpagoBI (spagobi image)
I'm having trouble setting up SpagoBI to authenticate against Keyrock, as in this link: authentication on spagobi using keyrock
"error while trying to get access token from Oauth2 provider"
I followed the guide here:
http://spagobi.readthedocs.org/en/latest/admin/README/index.html
The only difference is REST_BASE_URL, where I changed the port to 5000 (the default is 4730).
I think the problem is that SpagoBI tries to request the access token without an X-Auth-Token value in the headers, but I don't know where I can set this in SpagoBI.
Does anyone know this problem and how to solve it? Or maybe I'm wrong?
my oauth2.config.properties:
# Information about the OAuth2 application
CLIENT_ID = ea74f4f72ee3438a82f1af785af0ecf1
SECRET = 21cf7df4633d4ff2b92251c266c00d09
APPLICATION_ID = ea74f4f72ee3438a82f1af785af0ecf1
# OAuth2 urls
AUTHORIZE_URL = http://example.es:8000/oauth2/authorize
ACCESS_TOKEN_URL = http://example.es:8000/oauth2/token
USER_INFO_URL = http://example.es:8000/user
REDIRECT_URI = http://example.es:8080/SpagoBI/servlet/AdapterHTTP?PAGE=LoginPage&NEW_SESSION=TRUE
# REST API urls
REST_BASE_URL = http://example.es:5000/v3/
TOKEN_PATH = auth/tokens
ROLES_PATH = OS-ROLES/roles
ORGANIZATIONS_LIST_PATH = OS-ROLES/organizations/role_assignments
ORGANIZATION_INFO_PATH = projects/
# Admin credentials
ADMIN_ID = fiware-example-admin
ADMIN_EMAIL = fiware@example.es
ADMIN_PASSWORD = 01189998819991197253
Sorry, my fault: because the real domain was not ready yet, I had added a local resolution to my /etc/hosts, but I had not done the same on the SpagoBI instance.
The real problem is shown in the screenshot:
java.net.UnknownHostException
I solved it by adding the same IP mapping to /etc/hosts on the SpagoBI instance.
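For example, the entry would look something like this (the IP and hostname below are placeholders for the real values):

# /etc/hosts on the SpagoBI instance
203.0.113.10    example.es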
Sorry for the inconvenience.
When using the Object Storage GE node.js connector implementation from https://github.com/arvidkahl/fiware-object-storage, we encounter the problem "no tenants available". We tested with two different community accounts, where we had first set up an object container within the FIWARE cloud.
We are able to receive an auth token and get a "connection established" message, but then, I think, we do not get the tenant ID. Has anyone experienced something like that and can help, or give us a better understanding of what is going wrong here?
We installed fiware-object-storage with npm install fiware-object-storage.
This is our connection code:
var fiwareObjectStorageConfig = {
    auth      : conf.fiware.auth_url,           // IP of the Auth Services, likely "cloud.lab.fi-ware.org"
    url       : conf.fiware.object_storage_url, // IP of the Object Storage GE -> "cloud.lab.fi-ware.org"
    user      : conf.fiware.user,               // Your FIWARE account email
    password  : conf.fiware.password,           // Your FIWARE account password.. i know.. no comment.
    container : conf.fiware.container           // Whatever container you want to connect to
};

var fiwareObjectStorage = require('fiware-object-storage');
var fios = fiwareObjectStorage(fiwareObjectStorageConfig);

fios.connectToObjectStorage(function() {
    console.log(fios.getFileList());
});
This library is a third-party library; it is not an official FIWARE implementation.
As you said, there is a problem with this library. I have tested it and it needs some fixes. I could not reproduce your error with my account, but I got a different one while getting the file list.
The best option is to wait for its developers to improve this simple library, for instance by letting you select the tenant in the config file. For now it takes the first tenant in the list.
This is my config file for accessing the Spain2 object store:
fiwareObjectStorageConfig = {
    url       : '172.32.0.144',
    auth      : 'cloud.lab.fi-ware.org',
    container : 'myContainer',
    user      : "",  // Your FIWARE account email
    password  : ""   // Your FIWARE account password.
};