Upload image into AWS S3 bucket using an AWS Lambda function

I would like some suggestions on uploading an image file to an S3 bucket using a Lambda function. I am able to create a bucket using a Lambda function, but I am unable to upload a file to S3 with one. Is this possible? Can we upload local system files (image, text, etc.) to an S3 bucket using Lambda?
When I try to upload a file from C:\users\images.jpg to S3 using a Lambda function, it shows the error: Error: ENOENT, no such file or directory 'C:\Users\Images'.
Please suggest.
Thanks

You have to think about where your code is running.
If you have a desktop application, you can access local files such as C:\users\images.jpg because the process has access to the file system.
Your Lambda functions are maintained by AWS and run on Amazon's infrastructure, so they cannot see your local disk.
Also, in general, you have to design your functions to be stateless:
Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request, and any persistent state should be stored in Amazon S3, Amazon DynamoDB, or another Internet-available storage service.
Reference: AWS Lambda FAQs
So in your case I'd upload everything to S3 first, or create a background process that does this periodically. That way your Lambda functions can access the files from S3, since they cannot read them directly from your local file system.
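For what it's worth, here is a minimal sketch of that first step in Python with boto3, run from your own machine rather than inside Lambda (the bucket name and object key below are placeholders, and your local AWS credentials are assumed to be configured):

# Minimal sketch: push the local image into S3 from your own machine,
# so that Lambda can later read it from the bucket instead of from C:\.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename=r"C:\users\images.jpg",   # local path on your machine
    Bucket="my-image-bucket",          # placeholder bucket name
    Key="uploads/images.jpg",          # object key inside the bucket
)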

Related

Looking for the CLI call to create a Memorystore cluster that uses the keys JSON file?

I'm looking for the CLI call to create a Memorystore cluster that uses the service account keys JSON file.
It seems like this link/command authenticates the gcloud CLI with a service account credential. If the service account has the IAM permissions for Memorystore, and the main issue is just authentication when running the create command, this might work, but I'd like to confirm.
I've reviewed the docs and found this:
https://cloud.google.com/memorystore/docs/memcached/creating-managing-instances
and this:
https://cloud.google.com/appengine/docs/standard/python/memcache/using
but I am struggling to put it all together.

How to use Google Cloud Functions to import files from WinSCP? I also want to make triggers associated with this function which execute weekly

How can I use Google Cloud Functions to import files from WinSCP? I also want to make triggers associated with this function which execute weekly to import these files. Please assist.
For a weekly trigger, use Cloud Scheduler.
To download a file from the SSH server, use an SSH library to initiate the transfer from the Cloud Function to the SCP server. I don't know which language you are using, so I can't help you with that part.
Keep in mind that the files you download are stored in memory in the /tmp directory (so size your Cloud Functions service accordingly) and, because they sit in memory, they are not persisted. If you need to persist a file, consider putting it in persistent storage, such as Cloud Storage.
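For example, here is a rough sketch in Python; the answer does not prescribe a library, so paramiko is used here as one SSH/SFTP option, and the host, credentials, remote path, and bucket name are placeholders:

import paramiko
from google.cloud import storage

def import_files(event, context):
    # Entry point for a background function triggered weekly via
    # Cloud Scheduler -> Pub/Sub -> Cloud Function.
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="user", password="secret")
    sftp = paramiko.SFTPClient.from_transport(transport)

    # /tmp counts against the function's memory, so size the function accordingly.
    local_path = "/tmp/data.csv"
    sftp.get("/remote/export/data.csv", local_path)
    sftp.close()
    transport.close()

    # Persist the file to Cloud Storage before the function instance is recycled.
    storage.Client().bucket("my-import-bucket").blob("imports/data.csv").upload_from_filename(local_path)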

Setting the download directory to Cloud Storage for chromedriver in a Cloud Function

I'm trying to create a Cloud Function that accesses a website and downloads a CSV file to Cloud Storage.
I managed to access the site using headless Chromium and chromedriver.
In my local environment I can set up the download directory like below:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": download_dir,
    "plugins.always_open_pdf_externally": True
})
where download_dir is like "/usr/USERID/tmp/"
How, in a Cloud Function, could I assign this value so that it points to the right Cloud Storage bucket?
As I understand it, a GCS bucket cannot be mounted as a local drive in the runtime environment used for Cloud Functions.
Thus, you might need to download the source CSV file into the Cloud Function's memory and save it, for example, as a file in the /tmp directory.
Then, you can upload it from that location into a GCS bucket. A more detailed explanation of how to upload is provided here: Move file from /tmp folder to Google Cloud Storage bucket
Note: Cloud Functions have some restrictions, namely memory and timeout. Make sure that you allocate (during deployment) enough memory and time to process your CSV files.
In addition, make sure that the service account used by your Cloud Function has the relevant IAM roles for the GCS bucket under discussion.
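If it helps, here is a rough sketch of that flow in Python: point chromedriver's download directory at /tmp instead of the local path from the question, then copy the downloaded file into a bucket with the google-cloud-storage client. The bucket name, object name, and CSV file name below are placeholders:

from google.cloud import storage
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": "/tmp",        # only writable path in the Cloud Functions runtime
    "plugins.always_open_pdf_externally": True
})
# ... launch headless Chromium with these options and trigger the CSV download, as in the question ...

def upload_to_gcs(local_path, bucket_name, blob_name):
    # Copy the file that chromedriver saved under /tmp into a GCS bucket.
    storage.Client().bucket(bucket_name).blob(blob_name).upload_from_filename(local_path)

# Once the download has finished:
upload_to_gcs("/tmp/report.csv", "my-bucket", "downloads/report.csv")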

Amazon AWS CloudFormation JSON template to assign the LAMP /var/www/html folder permissions to ec2-user

I have created a JSON template to create the Amazon AWS LAMP stack with RDS (free tier) and successfully created the stack. But when I tried to move files to the /var/www/html folder, it seems the ec2-user has no permission to do so. I know how to change permissions with the help of SSH, but my intention is to create a template that sets up the stack (hosting environment) without using any SSH client.
I also know how to add a file or copy a zipped source to /var/www/html with the CloudFormation JSON templating. What I need to do is just create the environment, then later upload the files using an FTP client and the database using Workbench or something similar. Please help me attain my goal, which I will share publicly for AWS beginners who are not familiar with setting things up over SSH.
The JSON template is a bit lengthy, so here is the link to the code: http://pasted.co/803836f5
Use the AWS::CloudFormation::Init metadata instead of UserData.
That way you can run commands on the server, such as pulling down files from S3 and then running gzip to expand them.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
tar files and distribution-dependent packages like .deb or .rpm include the file permissions for directories, so you could set up a tar or custom .rpm file that has ec2-user as the owner.
Alternatively, whatever scripting element installs Apache could also run a set of commands to set the owner of /var/www/html to ec2-user.
Of course, you might run into trouble with the user/group that Apache runs under: you would be able to upload with FTP but Apache might not be able to read the files. It needs some thought, possibly adding ec2-user to the apache group, FTPing as the apache user, or some other combination that gives the httpd server read access and the SSH user write access.
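A rough sketch of what that init metadata could look like in your JSON template (the resource name, the apache group, and the exact permissions are placeholders; note that cfn-init still has to be invoked on the instance, commonly from a short UserData script, for this metadata to take effect):

"WebServerInstance": {
  "Type": "AWS::EC2::Instance",
  "Metadata": {
    "AWS::CloudFormation::Init": {
      "config": {
        "commands": {
          "01_chown_html": {
            "command": "chown -R ec2-user:apache /var/www/html"
          },
          "02_group_write": {
            "command": "chmod -R g+w /var/www/html"
          }
        }
      }
    }
  }
}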

Deploy Java application on AWS

I have a web application running on Tomcat 7 and MySQL, and now I want to deploy it to AWS.
The application needs to write files to disk (such as images uploaded by users).
Can someone help me by pointing out how to configure a good infrastructure on AWS for my needs?
I read this: http://aws.amazon.com/elasticbeanstalk/ . I think what I need is an EC2 instance for running Tomcat and an Amazon RDS instance with MySQL.
Do I need something else for reading/writing files?
Do I need to change my code in some way to make it work on AWS?
Thanks in advance,
Loris
Elastic Beanstalk is a good way to get started with application deployment on AWS. For persistent file storage you can use S3 or an EBS volume.
S3 allows you to read and write using Amazon's SDK/API. I am using this in a Java application running on AWS and it works pretty smoothly.
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html
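For illustration, here is a rough sketch of that read/write pattern, shown in Python with boto3 for brevity; in the Java SDK the analogous calls are AmazonS3Client's putObject and getObject, and the bucket and key names below are placeholders:

import boto3

s3 = boto3.client("s3")

# Write: store an uploaded image under an object key instead of a path on local disk.
with open("avatar.png", "rb") as f:
    s3.put_object(Bucket="my-app-uploads", Key="users/42/avatar.png", Body=f)

# Read: fetch it back when the application needs to serve it.
obj = s3.get_object(Bucket="my-app-uploads", Key="users/42/avatar.png")
image_bytes = obj["Body"].read()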
It is also possible to mount an S3 bucket as a local filesystem (s3fs uses FUSE); you can read some interesting points in this answer:
How stable is s3fs to mount an Amazon S3 bucket as a local directory
With EBS you can create a persistent storage volume attached to your EC2 node. Please note that EBS is a block-level storage device, so you'll need to format it before it is usable as a filesystem. EBS also helps protect you from data loss by letting you configure EBS snapshot backups to S3.
http://aws.amazon.com/ebs/details/
-fred