AWS S3 bucket integration with Salesforce

Is there any way to rename a file or folder in an AWS S3 bucket from Salesforce? I'm getting a "header does not match" error.
I'm using the header X-AMZ-CopyFromSource to copy the file within the bucket.
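Two things worth noting: S3 has no rename operation (a rename is a CopyObject to the new key followed by a DeleteObject of the old one), and the copy header is spelled x-amz-copy-source, not X-AMZ-CopyFromSource, which is the likely cause of the header mismatch. A minimal sketch of the two calls, using the aws-sdk-s3 Ruby gem with hypothetical bucket and key names (the same pair of REST calls can be issued from an Apex callout):

# Minimal sketch, assuming the aws-sdk-s3 gem and hypothetical names:
# "rename" = copy to the new key, then delete the old key.
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')

s3.copy_object(
  bucket: 'my-bucket',                    # destination bucket
  copy_source: 'my-bucket/old/name.pdf',  # sent as the x-amz-copy-source header
  key: 'new/name.pdf'                     # new object key
)
s3.delete_object(bucket: 'my-bucket', key: 'old/name.pdf')

A "folder" in S3 is only a key prefix, so renaming a folder means repeating this copy-and-delete for every object under the prefix.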

Related

How do I inject a large configuration file into AWS Elastic Beanstalk?

You can add files into .ebextensions/ alongside the .config file, but I want to keep the file outside the build itself, and saved configurations don't allow extra files. Should I upload the config to S3 separately?

PDF files are not downloading from AWS S3 via the object URL in Chrome: "Failed - Forbidden" error. Works in Firefox and Edge

When using the object URL for a file stored in AWS S3, PDF files do not download in Chrome; I get a "Failed - Forbidden" error. The same URLs work in Firefox and Edge.
This has to do with read/write permissions. Check the S3 bucket permissions and ACLs (access control lists) and modify them so the objects allow "List" and "Read" but not "Write".
This should resolve your issue.
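One way to apply that from code (a sketch, not from the original answer, assuming the aws-sdk-s3 gem and hypothetical bucket/key names) is to set the canned public-read ACL on the object:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')
# 'public-read' is a canned ACL: anonymous users may read the object,
# while only the owner keeps write access.
s3.put_object_acl(
  bucket: 'my-bucket',
  key: 'docs/report.pdf',
  acl: 'public-read'
)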

How to avoid AWS S3 changing file content-type from text/html?

I'm using S3 to host a static website for a portfolio. For each web page, the .html extension is removed, and AWS automatically categorizes the extensionless files as binary/octet-stream. The problem is that when you access a page such as myportfolio.com/contact, instead of rendering, the file just downloads into the downloads folder.
By going to
[Object] → properties → metadata → content-type
I am able to change the type to text/html, which causes the file to render instead of downloading.
Now, after making that change on all the files and re-uploading them with the AWS CLI:
aws s3 sync . s3://[bucketname]
the files went back to the old content type. How do I permanently set the content type on these files?
Have you tried setting the --content-type flag? For example:
aws s3 sync . s3://[bucketname] --content-type "text/html"
More info can be found in the sync documentation.
UPDATE: try this command for deployment:
aws s3 sync --acl public-read --sse --delete . s3://[bucketname]
--acl public-read makes the uploaded files publicly readable, --sse stores them with server-side encryption, and --delete removes remote files that are no longer present locally.
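If the files are uploaded through the SDK rather than the CLI, the Content-Type can also be set explicitly per object at upload time. A minimal sketch with the aws-sdk-s3 Ruby gem and a hypothetical bucket name:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')
# Setting content_type stores text/html in the object's metadata, so S3
# never falls back to binary/octet-stream for the extensionless page.
s3.put_object(
  bucket: 'my-portfolio-bucket',
  key: 'contact',                 # extensionless page served at /contact
  body: File.read('contact'),
  content_type: 'text/html'
)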

AWS Elastic Beanstalk application folder on the EC2 instance after deployment?

My context
I'm getting errors deploying my Flask application with AWS EB.
Now I'm inside the EC2 instance via eb ssh and need to explore the deployed source code of the application.
My problem
Where is the deployed application folder?
The source code is zipped and placed in the following directory:
/opt/elasticbeanstalk/deploy/appsource/source_bundle
There is no file extension but it is in the zip file format:
[ec2-user@ip ~]$ file /opt/elasticbeanstalk/deploy/appsource/source_bundle
/opt/elasticbeanstalk/deploy/appsource/source_bundle: Zip archive data, at least v1.0 to extract
By searching for a specific/unique filename from the source code, we can find the location of the application folder, which on AWS EB turns out to be:
/opt/python/current
/opt/python/bundle/2/app
P.S.
To search for YOUR_FILE.py:
find / -name YOUR_FILE.py -print

Open a CSV file from S3 using Roo on Heroku

I'm putting together a rake task to grab a CSV file stored on S3 and do some processing on it. I use Roo to process the files. The product runs on Heroku, so I have no local storage for physical files.
The CSV processing I am running works perfectly from the website if I upload the file.
The rake task looks like this when passing just the contents of the file:
desc "Checks uploads bucket on S3 and processes for imports"
task s3_import: [:environment] do
# depends on the environemtn task to load dev/prod env first
# get an s3 connection
s3 = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
# get the uploads bucket
bucket = s3.buckets[ENV['UPLOAD_BUCKET']]
# check each object in the bucket
bucket.objects.each do |obj|
#uri = obj.url_for(:read)
#puts uri
puts obj.key
file = obj.read
User.import(file)
end
end
I have also tried passing just the URL of the object, but that does not work either.
Basically, when uploading through the page, this works to open the files:
Roo::Csv.new(file.path)
So I can get the URL of the file, and I can get the contents of the file into a variable, but I can't work out how to get Roo to open it without a physical file on disk.
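One approach (a sketch, not from the original thread, assuming Ruby's stdlib Tempfile): Heroku dynos do allow short-lived files in /tmp, so the object's contents can be buffered in a tempfile and its path handed to Roo:

require 'tempfile'

# Buffer the S3 object in /tmp (ephemeral but permitted on Heroku) and
# give Roo a real path; the tempfile is deleted when the block exits.
Tempfile.create(['s3_import', '.csv']) do |tmp|
  tmp.binmode
  tmp.write(obj.read)            # obj is the S3 object from the rake task
  tmp.flush
  sheet = Roo::Csv.new(tmp.path)
  sheet.each { |row| puts row.inspect }  # process each row (an array of cells)
end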