Is there any API available in OCI to update an existing object in a bucket? Or can you suggest any alternative to this? I'm looking for a way to update an existing file.
As I said in the comments, can you try using object versioning?
To enable object versioning after bucket creation
oci os bucket update --namespace <object_storage_namespace> --name <bucket_name> --compartment-id <target_compartment_id> --versioning Enabled
To list object versions
oci os object list-object-versions --namespace <object_storage_namespace> --bucket-name <bucket_name>
To get the contents of an object version
oci os object get --name <object_name> --file path/to/file/name --version-id <version_identifier> --namespace <object_storage_namespace> --bucket-name <bucket_name>
To delete an object version
oci os object delete --name <object_name> --version-id <version_identifier> --namespace <object_storage_namespace> --bucket-name <bucket_name>
To upload a new version of an object, continue to use oci os object put
More details are in the documentation: https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/usingversioning.htm. There you can also find how to do this using the API or the SDK.
Since you are trying to append additional content to an uploaded object, you would need to download the existing object using the GetObject API, append to the downloaded content locally, and then upload that original+appended content back to object storage using the PutObject API.
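For illustration, here is a minimal sketch of that download-append-upload flow using the OCI Python SDK; the bucket and object names are placeholders and the configuration is assumed to come from a standard ~/.oci/config file:

import oci

# Standard config from ~/.oci/config; adjust profile/region as needed
config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)

namespace = client.get_namespace().data
bucket_name = "my-bucket"        # placeholder
object_name = "my-object.txt"    # placeholder

# GetObject: download the current contents
existing = client.get_object(namespace, bucket_name, object_name).data.content

# Append to the content locally
updated = existing + b"\nadditional content"

# PutObject: upload the combined contents back (this creates a new
# version if versioning is enabled on the bucket)
client.put_object(namespace, bucket_name, object_name, updated)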
I'm able to create an image from sources and create and run a Cloud Run job through the Google CLI.
gcloud builds submit --pack image=gcr.io/<my-project-id>/logger-job
gcloud beta run jobs create helloworld --image gcr.io/<my-project-id>/logger-job --region europe-west9
gcloud beta run jobs execute helloworld --region=europe-west9
From the CLI, I can also upload files to a bucket by running the following Python script:
import sys
from google.cloud import storage


def get_bucket(bucket_name):
    storage_client = storage.Client.from_service_account_json(
        'my_credentials_file.json')
    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    found = False
    for bucket in buckets:
        if bucket_name in bucket.name:
            found = True
            break
    if not found:
        print("Error: bucket not found")
        sys.exit(1)
    print(bucket)
    return bucket


def upload_file(bucket, fname):
    destination_blob_name = fname
    source_file_name = fname
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)
    print(
        f"File {source_file_name} uploaded to {destination_blob_name}."
    )


if __name__ == "__main__":
    bucket = get_bucket('MyBucket')
    upload_file(bucket, 'testfile')
Before I can even figure out how authentication towards the bucket works when running this script through a job, I'm getting errors on the target:
ModuleNotFoundError: No module named 'google'
Which kind of makes sense, but I don't know how I would include the google.cloud module when running the script in the target job. Or should I access the bucket in another way?
There are multiple questions and you would benefit from reading Google's documentation.
When (!) the Python code that interacts with Cloud Storage is combined into the Cloud Run deployment, it will run using the identity of its Cloud Run service. Unless otherwise specified, this will be the default Cloud Run Service Account. You should create and reference a Service Account specific to this Cloud Run service.
You will need to determine which IAM permissions you wish to confer on the Service Account. Generally you'll reference a set of permissions using a Cloud Storage IAM role.
You're (correctly) using a Google-provided client library to interact with Cloud Storage, so you will be able to benefit from Application Default Credentials. This makes it easy to run your Python code on Cloud Run as a Service Account.
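As a rough sketch of what that looks like in the script from the question (bucket and file names taken from it): when running on Cloud Run you can drop the service-account JSON file and let the client pick up Application Default Credentials:

from google.cloud import storage

# No key file: the client picks up Application Default Credentials,
# i.e. the Cloud Run job's service account.
storage_client = storage.Client()

bucket = storage_client.bucket("MyBucket")   # bucket name from the question
blob = bucket.blob("testfile")
blob.upload_from_filename("testfile")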
Lastly, you should review the documentation for Google's Cloud Storage client library for Python to see how to install the client library with e.g. pip.
Place a requirements.txt file, containing the 3rd party modules that are necessary for running the code, in the folder that is being built when executing
gcloud builds submit --pack image=gcr.io/<my-project-id>/logger-job
This will automatically pip install the modules during the build, so they are available when the job executes.
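For the script above, the requirements.txt could contain just the Cloud Storage client library (package name per PyPI; pin a version if you want reproducible builds):

google-cloud-storage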
I need to use aws cli on an OpenShift Cluster that is quite restricted - it looks like the homedirectory is set to /, while the user in the container does not have permissions to write to /.
The only directory that is writeable by that user is /tmp. Now I need to use the aws cli from within a pod of this OpenShift cluster. I came across the environment variables AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE, so I placed a credentials file and a config file in /tmp and pointed each variable at the corresponding file.
When running aws configure list-profiles with this setup, only the profile from AWS_SHARED_CREDENTIALS_FILE is listed, not the one from AWS_CONFIG_FILE.
So it looks to me like AWS_CONFIG_FILE is not respected by aws cli.
Do you have an idea why these files might not be respected by the aws executable? Is there a way to pass the location of these files directly to the cli as a parameter or something similar?
Instead of configuring files for the AWS CLI, I would assume you could set the following 2 environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and issue your CLI commands immediately.
bruno@pop-os ~> export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
bruno@pop-os ~> export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bruno@pop-os ~> aws cloudformation list-stacks --region us-east-2
{
"StackSummaries": []
}
To answer this part:
So it looks to me like AWS_CONFIG_FILE is not respected by aws cli.
The AWS CLI does respect this.
You can specify a non-default location for the config file by setting the AWS_CONFIG_FILE environment variable to another local path.
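For example (the /tmp paths are placeholders), pointing both variables at the files you created and re-running the profile listing should pick up profiles from both files:

export AWS_CONFIG_FILE=/tmp/aws_config
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws_credentials
aws configure list-profiles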
Several API operations were imported from an OpenAPI YAML file using Terraform; however, some of the API operations were under x-ms-paths:, and the import hasn't picked up the tags that were defined within the YAML.
I found that this was raised as an issue for Microsoft: MS issue.
I'm wondering whether there is a way to work around this problem and assign existing tags to the API operations under x-ms-paths: once they are imported.
You can use the Azure CLI to update the tags.
az apim update --name
               --resource-group
               [--tags]

--name -n
    The name of the API Management service instance.

--resource-group -g
    Name of the resource group. You can configure the default group using az configure --defaults group=<name>.

--tags
    Space-separated tags: key[=value] [key[=value] ...]. Use "" to clear existing tags.
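For example, with placeholder names for the instance, resource group, and tags:

az apim update --name my-apim-instance --resource-group my-resource-group --tags department=finance owner=me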
I was looking at the documentation for the aws sdk and the s3 client and saw that there was a mv command with the s3 client.
If I wanted to move an object from one s3 bucket to another, is there a move function, or do I have to use copyObject followed by a deleteObject using the sdk?
The documentation for the aws sdk only shows delete and copy.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
There isn't a move operation in the S3 API, so you're correct that it's standard practice to copy an object, then to delete it.
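For illustration, here is the copy-then-delete pattern sketched in Python with boto3 (the JavaScript SDK from the question has the equivalent copyObject and deleteObject calls); the bucket names and key are placeholders:

import boto3

s3 = boto3.client("s3")

source_bucket = "source-bucket"       # placeholder
dest_bucket = "destination-bucket"    # placeholder
key = "path/to/object"                # placeholder

# Copy the object to the destination bucket...
s3.copy_object(
    Bucket=dest_bucket,
    Key=key,
    CopySource={"Bucket": source_bucket, "Key": key},
)

# ...then delete the original to complete the "move".
s3.delete_object(Bucket=source_bucket, Key=key)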
I am trying to automate dataset and resource uploading in my CKAN instance. I am using Ubuntu Linux 10.04 64-bit and my CKAN instance version is 1.8.
I can create a new dataset using the command line like so:
$ curl http://ckan.installation.com/api/rest/dataset -H "Authorization:<my api key>" -d '{"name": "dataset-name", "title": "The Name of the Dataset"}'
{... JSON text received in response, including the id of the dataset ...}
Now, how do I go about creating and uploading resources (like image files) in my CKAN instance using the command line?
Thanks!
Uploading a file through the FileStore API is somewhat complicated. You'd be better off reusing ckanclient's upload_file method. A simple Python script that uses it could solve your problem of uploading from the command line.
Or, if you're feeling brave, that's the best place to start understanding how to upload a file using cURL.
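A rough sketch of such a script, assuming the legacy ckanclient package is installed; treat the exact return value of upload_file and the base_location URL as assumptions and check ckanclient's documentation:

import ckanclient

# API endpoint and key from the question; adjust to your installation
ckan = ckanclient.CkanClient(
    base_location='http://ckan.installation.com/api',
    api_key='<my api key>')

# Upload the file to the FileStore; upload_file returns the storage
# location of the uploaded file (older versions return a tuple).
result = ckan.upload_file('image.png')
print(result)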