AWS Elastic Beanstalk - Source code in inner folder

I am using the EB CLI to deploy to Elastic Beanstalk.
My git repo has this structure
|folder1
|folder2
...
|ebs
  |src
    - app.js
So I would need to configure EB (I assume using .ebextensions) so that it looks in the 'ebs' folder, which is where app.js resides.
Is this doable?
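For what it's worth, one documented EB CLI option (a sketch I have not verified against this exact layout) is to skip .ebextensions entirely: zip up just the ebs folder yourself and point the EB CLI at that artifact via .elasticbeanstalk/config.yml. The build/ebs.zip path below is only an illustrative name.
.elasticbeanstalk/config.yml (excerpt)
deploy:
  artifact: build/ebs.zip
Then package and deploy:
mkdir -p build
cd ebs && zip -r ../build/ebs.zip . && cd ..
eb deploy
With that setting, eb deploy uploads the prebuilt zip instead of the whole repository.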


elastic beanstalk document root resolves to /var/www/html/var/www/html/

I want to deploy a laravel site using elastic beanstalk.
I'm using pipelines pulling from a BitBucket repository.
After I created my EB application and environment, I changed the document-root to /web/public because the laravel-root is under the '[repo-root]/web' directory.
The deployment is failing:
2023/02/12 01:40:11 [error] 3857#3857: *109 "/var/www/html/var/www/html/web/public/index.php" is not found (2: No such file or directory), client: ..., server: , request: "GET / HTTP/1.1", host: "..."
A similar project where the laravel-root is the same as the repo-root and the document-root is set to public works, but this is not ideal.
How can I configure the pipeline or EB to use the '[repo-root]/web' as the document-root?
I've unsuccessfully tried various values for the document-root, but nothing seems to work.
In another forum, someone suggested changing the pipeline to return the laravel-root as an artifact, but I'm not sure how to do this. It seems the source is stored as a zip in S3, and if I change to Full Clone I get an invalid-structure error related to CodeBuild. I don't know what that means since I'm not using CodeBuild.
TIA
While I'm sure there are a number of ways to solve this, what worked for me was using CodeBuild to pull the code from the repo and a buildspec.yml file to create a zip of just the directory required for deployment.
buildspec.yml
version: 0.2
phases:
  pre_build:
    commands:
      - cd web
      - zip -r ../web.zip ./*
artifacts:
  files:
    - web.zip
Still under CodeBuild, I configured the Artifacts to output to an S3 bucket. Then I created a CodePipeline with a Source stage that pulls the zip from the build bucket and a Deploy stage that sends the source artifact to Elastic Beanstalk (provider). When setting up the pipeline, it seems to want you to have a 'Build' stage between Source and Deploy, but I deleted this.
It looks like you can also leverage artifact handling and let CodeBuild do the packaging (zipping). I haven't tested this. https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.base-directory
...
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
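For the layout in this question, a hedged adaptation (untested, assuming the Laravel root sits at [repo-root]/web) would simply name that directory:
artifacts:
  files:
    - '**/*'
  base-directory: 'web'
CodeBuild would then package the contents of web/ as the artifact, so no manual zip command is needed in pre_build.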
As far as the weird pathing issue in the original post, I think there was some sort of EB config cache issue/corruption. When I rebuilt the environment, that error was gone.

Using aws cli without a home directory

I need to use aws cli on an OpenShift cluster that is quite restricted: it looks like the home directory is set to /, while the user in the container does not have permission to write to /.
The only directory that is writable by that user is /tmp. Now I need to use aws cli from within a pod of this OpenShift cluster. I came across the environment variables AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE, so I would place a credentials file and a config file in /tmp.
When running aws configure list-profiles with this setup, only the profile from AWS_SHARED_CREDENTIALS_FILE is listed, not the one from AWS_CONFIG_FILE.
So it looks to me like AWS_CONFIG_FILE is not respected by aws cli.
Do you have an idea why these files might not be respected by the aws executable? Is there a way to pass the location of these files directly to the CLI as a parameter or something similar?
Instead of configuring files for the AWS CLI, I would assume you could set the following 2 environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and issue your CLI commands immediately.
bruno@pop-os ~> export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
bruno@pop-os ~> export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bruno@pop-os ~> aws cloudformation list-stacks --region us-east-2
{
"StackSummaries": []
}
To answer this part:
"So it looks to me like AWS_CONFIG_FILE is not respected by aws cli."
The AWS CLI does respect it. From the documentation:
"You can specify a non-default location for the config file by setting the AWS_CONFIG_FILE environment variable to another local path."
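So in the restricted pod, a minimal sketch (assuming /tmp is writable, as described in the question) would be to point both variables at files under /tmp and pass --profile explicitly:
export AWS_CONFIG_FILE=/tmp/aws_config
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws_credentials
aws configure list-profiles
aws s3 ls --profile myprofile
One detail worth checking: in the config file a named profile needs the heading [profile myprofile], while the credentials file uses plain [myprofile]; a mismatch there is a common reason a profile shows up from only one of the two files.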

Docker-compose.yml for NodeJs with MySQL on AWS Elastic Beanstalk single container Docker

I have a Node.js app that is hosted on AWS EB Single Container Docker. I connect to a MySQL database from the app.
For now I am deploying my app from AWS console by uploading zip file. Everything is working as expected.
I would like to be able to push changes to AWS using CLI.
It's my understanding that I need a docker-compose.yml file to accomplish that. I have seen samples of docker-compose files that create two containers, one for Node and another for MySQL.
Is there a way to use docker-compose.yml and still deploy to a single container Docker platform?
Thanks in advance for any guidance.
I don't think you can deploy a docker-compose file to Elastic Beanstalk, but I can think of two ways to deploy your code from the command line:
One is to put your existing zip file in an S3 bucket (which can be scripted) and then use the AWS CLI's elasticbeanstalk commands, something like this:
aws elasticbeanstalk create-application-version --application-name avengers \
    --version-label v1 \
    --source-bundle S3Bucket="avengers-docker-eb",S3Key="deployment.zip" \
    --auto-create-application \
    --region eu-west-3
The full instructions are here: https://read.acloud.guru/docker-on-elastic-beanstalk-tips-e1a4e6b70ff2
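To complete that first approach as a script, a rough sketch (the bucket, version label, and region are carried over from the example above; the environment name is an assumption) could be:
# upload the bundle referenced by create-application-version
aws s3 cp deployment.zip s3://avengers-docker-eb/deployment.zip
# after creating the application version, point the environment at it
aws elasticbeanstalk update-environment --environment-name avengers-env \
    --version-label v1 \
    --region eu-west-3
Replace avengers-env with whatever your environment is actually called.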
The second way, and the one you might prefer, is to create a Dockerrun.aws.json file that points to your Docker image, either in an S3 bucket or in a Docker registry (you can use the AWS one, Amazon ECR). From there you can update your application from the CLI like so:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
The pertinent documentation is here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker.html
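As a rough illustration (the image name and port are placeholders, not taken from the question), a version 1 Dockerrun.aws.json for the single container platform looks like this:
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.eu-west-3.amazonaws.com/my-node-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ]
}
You then zip this file (optionally together with any .ebextensions) as the source bundle, since the application image itself is pulled from the registry at deploy time.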

az webapp deployment source config choose solution file

I am trying to deploy an app using the following:
az webapp deployment source config --branch master --manual-integration --name myapp --repo-url https://$GITUSERNAME:$GITUSERPASSWORD@dev.azure.com/<Company>/Project/_git/<repo> --resource-group my-windows-resources --repository-type git
The git repo contains 2 .sln solution files and this causes an error when attempting to deploy. Is there any way I can specify which solution file to use? I can't seem to find a way in the docs but wondered if there might be a workaround.
I found a solution where you create a .deployment file in the root of the solution with these contents
[config]
project = <PATHTOPROJECT>
command = deploy.cmd
Then a deploy.cmd
nuget.exe restore "<PATHTOSOLUTION>" -MSBuildPath "%MSBUILD_15_DIR%"
The -MSBuildPath may be optional for you
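If you don't need the custom restore step, it may also be enough (untested for this repo) to let Kudu pick the project on its own and omit the command line entirely:
[config]
project = <PATHTOPROJECT>
Kudu then builds and deploys just that project even though the repo contains multiple solutions.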

AWS Elastic Beanstalk application folder on EC2 instance after deployed?

My context
I'm having errors in my deployment using AWS EB with my Flask application.
Now I'm inside the EC2 instance via eb ssh and need to explore the deployed source code of the application.
My problem
Where is the deployed application folder?
The source code is zipped and placed in the following directory:
/opt/elasticbeanstalk/deploy/appsource/source_bundle
There is no file extension but it is in the zip file format:
[ec2-user@ip ~]$ file /opt/elasticbeanstalk/deploy/appsource/source_bundle
/opt/elasticbeanstalk/deploy/appsource/source_bundle: Zip archive data, at least v1.0 to extract
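To inspect the bundle contents directly, a quick sketch (assuming unzip is available on the instance and /tmp is writable) is to extract it to a scratch directory:
mkdir -p /tmp/source_bundle_extracted
unzip /opt/elasticbeanstalk/deploy/appsource/source_bundle -d /tmp/source_bundle_extracted
ls /tmp/source_bundle_extracted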
By searching for a specific/unique filename from the source code, we can find the location of our application folder, which in AWS EB turns out to be:
/opt/python/current
/opt/python/bundle/2/app
P.S.
To search for YOUR_FILE.py:
find / -name YOUR_FILE.py -print