How do I inject a large configuration file into AWS Elastic Beanstalk? - amazon-elastic-beanstalk

You can add files under .ebextensions/ alongside the .config file, but I want to keep the configuration out of the build itself, and saved configurations don't allow extra files. Should I upload the config to S3 separately?
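If the config does live in S3, an .ebextensions file can pull it onto the instance at deploy time via the files key with a source URL. A sketch, where the bucket name, object key, and target path are all assumptions:

```yaml
# .ebextensions/01-fetch-config.config  (hypothetical file name)
files:
  "/etc/myapp/large-config.json":
    mode: "000644"
    owner: root
    group: root
    source: https://my-config-bucket.s3.amazonaws.com/large-config.json
```

Note that the URL must be reachable from the instance; for a private bucket you'd need a pre-signed URL or an S3 authentication resource rather than a plain HTTPS link.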

Related

How can I define an arbitrary file outside my web application to configure log4j2

My web application will be deployed to WebLogic application servers on Windows and Linux/Unix in different environments. The log file location, appenders, and log levels will vary between the different deployments, and we would like to be able to change the logging configuration at runtime (by exchanging the config file), so I cannot embed a log4j2.xml (or any other config file) into my deployment. And since I'm running on application servers I don't control, I have no way to add environment variables pointing to another configuration location.
Currently, my log4j2.xml resides in the classpath of my application and is packaged into my war file. Is there any way to tell Log4j2 to use a configuration file at a path relative to the application root (like Log4j 1's configureAndWatch(fileLocation) method)?
I found lots of examples of how to configure Log4j2, but everything I found about the config file location points to the application's classpath.
I finally found a solution to my problem. I added a file named
log4j2.component.properties
to my project (in src/main/resources). This file contains a property pointing to the location of my log4j2 configuration file:
log4j.configurationFile=./path/on/my/application/server/someLog4j2ConfigFile.xml
This causes Log4j2 to read that file and configure itself from its content.
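The external file is just an ordinary Log4j2 configuration. A minimal sketch (appender name, log path, and pattern are illustrative); setting monitorInterval also gives the configureAndWatch-style runtime reload asked about above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- someLog4j2ConfigFile.xml: recheck this file for changes every 30 s -->
<Configuration status="WARN" monitorInterval="30">
  <Appenders>
    <File name="AppLog" fileName="logs/app.log">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="AppLog"/>
    </Root>
  </Loggers>
</Configuration>
```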

How do I disable the creation of the artifacts-[build-no].json file?

When I build a Maven project from GitHub using Cloud Build (resulting in jar files in a bucket), I get an extra file uploaded to my bucket that lists the files that were built (artifacts-[build-no].json). The file has a unique name for every build, so the bucket fills up with loads of unwanted files. Is there a way to disable the creation of that file?
I think the JSON is only generated when using the artifacts flag, such as:
artifacts:
  objects:
    location: 'gs://$PROJECT_ID/'
    paths: ['hello']
You could manually push to a bucket in a step with the gsutil cloud builder, without the helper syntax. This avoids creating the JSON file.
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gsutil
# Upload it into a GCS bucket.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gopath/bin/hello', 'gs://$PROJECT_ID/hello']

Rename appsettings.json after publish

I have a console app that has different config settings based on the environment it will be deployed into. I also have separate .json config files for each of these environments (appsettings.test.json, appsettings.prod.json, etc.). How can I rename these files after I publish? Right now I just have a .bat file that does the rename. Can I somehow integrate it into the publish step?
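One way to fold the rename into publish is a custom MSBuild target in the .csproj. A sketch, assuming an EnvironmentName property you pass on the command line yourself (the target name and property name are illustrative, not standard):

```xml
<!-- In the .csproj: after publishing, promote the environment-specific
     file to appsettings.json in the publish output. -->
<Target Name="RenameAppSettings" AfterTargets="Publish">
  <Move SourceFiles="$(PublishDir)appsettings.$(EnvironmentName).json"
        DestinationFiles="$(PublishDir)appsettings.json" />
</Target>
```

You would then invoke it as, e.g., dotnet publish -p:EnvironmentName=prod.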

How to avoid AWS S3 changing file content-type from text/html?

I'm using S3 to host a static website for a portfolio. For each webpage the .html extension is removed. The problem is that AWS then automatically categorizes the file as binary/octet-stream, so when you access a page such as myportfolio.com/contact, instead of rendering, the file just downloads into the downloads folder.
By going to
[Object] → properties → metadata → content-type
I am able to change the type to text/html which causes the file to render instead of downloading.
Now, after making a change to all the files and re-uploading them using the AWS CLI:
aws s3 sync . s3://[bucketname]
the files went back to the old content type. How do I permanently set the content type on these files?
Have you tried setting the --content-type flag? For example:
aws s3 sync . s3://[bucketname] --content-type "text/html"
More info on this can be found in the sync documentation.
UPDATE: try this command for deployment,
aws s3 sync --acl public-read --sse --delete . s3://[bucketname]
--acl sets the files to be publicly readable, --sse stores the files encrypted, and --delete removes files that are not in your local directory.
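Be aware that --content-type on sync applies one type to every uploaded file, which is wrong for CSS, images, and other assets. For a mixed site, a per-file upload loop may fit better; a sketch, where the extension-to-type mapping is an assumption about this particular site's layout:

```shell
#!/bin/sh
# Sketch: pick a Content-Type per file so extensionless pages upload as
# text/html while other assets keep sensible types.
content_type_for() {
  case "$1" in
    *.css) echo "text/css" ;;
    *.js)  echo "application/javascript" ;;
    *.*)   echo "" ;;            # other extensions: let the CLI guess
    *)     echo "text/html" ;;   # no extension: treat as an HTML page
  esac
}

# Usage (placeholder bucket name as in the question):
# for f in *; do
#   ct=$(content_type_for "$f")
#   if [ -n "$ct" ]; then
#     aws s3 cp "$f" "s3://[bucketname]/$f" --content-type "$ct"
#   else
#     aws s3 cp "$f" "s3://[bucketname]/$f"
#   fi
# done
```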

AWS Elastic Beanstalk application folder on EC2 instance after deployed?

My context
I'm having errors in my deployment using AWS EB with my Flask application.
Now I'm inside the EC2 instance via eb ssh and need to explore the deployed source code of the application.
My problem
Where is the deployed application folder?
The source code is zipped and placed in the following directory:
/opt/elasticbeanstalk/deploy/appsource/source_bundle
There is no file extension but it is in the zip file format:
[ec2-user@ip ~]$ file /opt/elasticbeanstalk/deploy/appsource/source_bundle
/opt/elasticbeanstalk/deploy/appsource/source_bundle: Zip archive data, at least v1.0 to extract
By searching for a specific/unique filename from the source code, we can locate the deployed application folder, which on AWS EB turns out to be
/opt/python/current
(pointing to a bundle directory such as /opt/python/bundle/2/app).
P.S.
To search for YOUR_FILE.py:
find / -name YOUR_FILE.py -print