Using the AWS CLI without a home directory - OpenShift

I need to use the AWS CLI on an OpenShift cluster that is quite restricted: the home directory is set to /, while the user in the container does not have permission to write to /.
The only directory writable by that user is /tmp. Now I need to use the AWS CLI from within a pod of this OpenShift cluster. I came across the environment variables AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE, so I placed a credentials file and a config file in /tmp.
When running aws configure list-profiles with this setup, only the profile from AWS_SHARED_CREDENTIALS_FILE is listed, not the one from AWS_CONFIG_FILE.
So it looks to me like AWS_CONFIG_FILE is not respected by the AWS CLI.
Do you have an idea why these files might not be respected by the aws executable? Is there a way to pass the location of these files directly to the CLI as a parameter or something similar?

Instead of configuring files for the AWS CLI, I would assume you could set the following two environment variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and issue your CLI commands immediately:
bruno@pop-os ~> export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
bruno@pop-os ~> export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bruno@pop-os ~> aws cloudformation list-stacks --region us-east-2
{
    "StackSummaries": []
}
To answer this part:
So it looks to me like AWS_CONFIG_FILE is not respected by aws cli.
The AWS CLI does respect it; from the documentation:
You can specify a non-default location for the config file by setting
the AWS_CONFIG_FILE environment variable to another local path.
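For illustration, a minimal sketch of the setup described in the question (profile names and region are placeholders); with both variables exported, aws configure list-profiles should list both profiles:
export AWS_SHARED_CREDENTIALS_FILE=/tmp/credentials
export AWS_CONFIG_FILE=/tmp/config
cat > /tmp/credentials <<'EOF'
[from-credentials]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
cat > /tmp/config <<'EOF'
[profile from-config]
region = eu-central-1
EOF
aws configure list-profiles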


OpenShift 4.6 Node and Master Config Files

Where are the OpenShift master and node host files in v4.6?
Previously, in v3, they were hosted at:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
You can check your current kubelet configuration using the following procedure instead of reading a configuration file on the node hosts as in OCP v3, because as of OCP v4 the kubelet configuration is managed dynamically.
Further information is here: Generating a file that contains the current configuration.
You can check it using the referenced procedure (generate the configuration file) or the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
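If needed, ${NODE_NAME} can be filled in via oc as well; for example, this just picks the first node (adjust to the node you care about):
$ NODE_NAME=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')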
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed this way; others cannot be changed at all.
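As a sketch of that kubeletConfig route, something like the following raises maxPods on workers (the resource name and the custom-kubelet label are placeholders, and the label must also be added to the worker MachineConfigPool for the selector to match):
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: 500
EOF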
For the master config, it depends on what you want to do: you may change the setting via a MachineConfigPool, or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.

Docker-compose.yml for Node.js with MySQL on AWS Elastic Beanstalk single-container Docker

I have a Node.js app that is hosted on AWS EB single-container Docker. I connect to a MySQL database from the app.
For now I am deploying my app from the AWS console by uploading a zip file. Everything is working as expected.
I would like to be able to push changes to AWS using the CLI.
It's my understanding that I need a docker-compose.yml file to accomplish that. I have seen samples of docker-compose files that create two containers, one for Node and another for MySQL.
Is there a way to use docker-compose.yml and still deploy to single-container Docker?
Thanks in advance for any guidance.
I don't think you can deploy a docker-compose file to Elastic Beanstalk, but I can think of two ways to deploy your code from the command line.
One is to put your existing zip file in an S3 bucket (which can be scripted) and then use the Elastic Beanstalk command line, something like this:
aws elasticbeanstalk create-application-version --application-name avengers \
--version-label v1 \
--source-bundle S3Bucket="avengers-docker-eb",S3Key="deployment.zip" \
--auto-create-application \
--region eu-west-3
The full instructions are here: https://read.acloud.guru/docker-on-elastic-beanstalk-tips-e1a4e6b70ff2
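The upload step itself can be scripted as well, for example (bucket and key as above):
aws s3 cp deployment.zip s3://avengers-docker-eb/deployment.zip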
The second way, and the one you might prefer, is to create a Dockerrun.aws.json file that points to your docker image, either in an S3 bucket or in a docker registry (you can use the AWS one, ECR). From there you can update your application from the CLI like so:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
The pertinent documentation is here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker.html
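For reference, a minimal single-container Dockerrun.aws.json could look like this sketch (the ECR image name and container port are placeholders for your own values):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.eu-west-3.amazonaws.com/my-node-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}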

How to convert elastic beanstalk classic load balancer to application load balancer on a running application?

I have several EB applications that I would like to convert from a classic to an application load balancer. In the documentation it seems that the default way is to create a new environment from scratch with the proper load balancer. Considering that I have many environment variables and several environments, I would prefer not to have to rebuild applications. Is there a way to switch out the load balancer on an already running application?
It is not possible to set a load balancer type except at creation time. You can use the Elastic Beanstalk CLI and the AWS CLI to clone the application with the same config and version. To get the deployed application version, run:
aws elasticbeanstalk describe-environments --application-name ${APPLICATION_NAME} --environment-names ${SRC_ENV_NAME} | jq -r '.Environments | .[] | .VersionLabel'
The jq pipe filters out the rest of the JSON blob.
After that, you can save the config of the current application using:
eb config save $SRC_ENV_NAME --cfg "${SRC_ENV_NAME}_save"
Then create an application clone using:
eb create $NEW_ENV_NAME --elb-type application --cfg "${SRC_ENV_NAME}_save" --version $APP_VERSION
Where APP_VERSION is the string extracted in step one.
It is not simple, but it can be done.
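Putting the pieces together, the whole sequence might look like this sketch (application and environment names are placeholders):
APPLICATION_NAME="my-app"
SRC_ENV_NAME="my-app-prod"
NEW_ENV_NAME="my-app-prod-alb"
APP_VERSION=$(aws elasticbeanstalk describe-environments --application-name ${APPLICATION_NAME} --environment-names ${SRC_ENV_NAME} | jq -r '.Environments | .[] | .VersionLabel')
eb config save $SRC_ENV_NAME --cfg "${SRC_ENV_NAME}_save"
eb create $NEW_ENV_NAME --elb-type application --cfg "${SRC_ENV_NAME}_save" --version $APP_VERSION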
If the environment name is important to you, it gets a little trickier.
Here is how it should go, step by step (using the web console):
1. Save the configuration of the environment you want to change.
2. From the saved config, generate a new env (select Customize settings).
2.1. Change the LB type to Application and fill out all the necessary info for this.
3. Swap the URLs from the original env to the new one (check that everything is working with the new env; if not, swap back).
[Steps only necessary if the env name is important]
4. Delete the original env (which is now not receiving traffic and has a Classic LB).
5. Wait until the original name disappears from the console (it may take a couple of hours).
6. Clone the production env, and give the new env the original env name.
7. Swap URLs.
Done!
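The URL swaps in the steps above can also be done with the EB CLI instead of the console; a sketch, with placeholder environment names:
$ eb swap my-original-env --destination_name my-new-env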

How to set environment variables for Laravel 5 on AWS EC2 with MySQL

I have successfully deployed my Laravel 5 app to AWS EC2. I have also created a MySQL database with AWS RDS and associated it with my app instance.
Now I want to set my env variables so the app uses Homestead's default values on my local machine in development, and my AWS database when deployed and in production.
From here I've made a major edit to my original question, to reflect what I've learned since asking it.
The classic .env in a Laravel project for local development looks roughly like this:
APP_ENV=local
APP_DEBUG=true
APP_KEY=BF3nmfzXJ2T6XU8EVkyHtULCtwnakK5k (Note, not a real key)
DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
For production, I've finally understood that I simply create a new .env file with my production variables. When using AWS, my .env file would roughly look like this:
APP_ENV=production
APP_DEBUG=false
APP_KEY=BF3nmfzXJ2T6XU8EVkyHtULCtwnakK5k (Note, not a real key)
DB_HOST=aaxxxxxxxxxxxxx.cyxxxxxxxxxx.eu-central-1.rds.amazonaws.com:3306
DB_DATABASE=MyAppsDatabaseName
DB_USERNAME=MyAWSRDSUserName
DB_PASSWORD=NotARealPassword
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
My question/problem now:
I use the AWS EB CLI to deploy my app from git. But how do I deploy my production .env file without having to push it to git first?
Russ Matney above gave the right answer, so he gets the checkmark. I'll write my own answer here to add details on how I made things work. I assume you have your database set up and have all the credentials you need.
1. Go to your elastic beanstalk dashboard
2. Next go to your software config
3. Add your production environment variables. Remember to set the doc root to /public, and also add :3306 at the end of your database endpoint to avoid a PDOException error.
4. Next, SSH into your app's EB instance. See details here, or try the following:
$ ssh -i path/to/your/key/pair/pem/file.pem ec2-user@ec2-11-11-11-111.eu-central-1.compute.amazonaws.com
Note that ec2-11-11-11-111.eu-central-1.compute.amazonaws.com is your app's public DNS; you can look yours up in the EC2 console.
5. cd to your app: $ cd /var/app/current
6. Give read/write access to your storage folder, or the app can't write to the logs folder, which will result in an error when running the migrations. To give access: $ sudo chmod -R ugo+rw storage
7. Finally! Run your migrations, and other artisan commands if you please: $ php artisan migrate
You could create a new .env on your EC2 instance and add all the env vars in there. One option would be SSH-ing into the box and creating the file via vi or cat. Or you could write a script to remotely pull the .env in from an external location.
You could also ssh into the box and export APP_ENV=production along with all your other env vars (assuming that's the right command for your OS).
Adding env vars to your environment will depend on the OS your EC2 instance is running, so the solution will depend on that. EC2 has a concept of 'tags', which might be useful, but the docs show they limit the number of tags to 10, so you may have to do it manually and per EC2 instance :/
See here for one method that uses tags to pull in and set env vars (non-laravel specific).
I just went through this yesterday while getting Laravel running on Elastic Beanstalk, and the solution is clean: you can set the env vars directly via the AWS console (EB app/environment -> Configuration -> Software Configuration -> Environment Properties).
Update:
The key concept to understand is that Laravel just uses phpdotenv to dump vars from the .env file into PHP's global $_ENV, whereas any already-existing env vars are automatically included in $_ENV when PHP starts the server (docs). So the .env file itself is unnecessary, really just a dev convenience (unless I've just been spoiled by Elastic Beanstalk so far).
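As a quick way to convince yourself of that pickup, PHP sees any variable exported before it starts (the variable name here is arbitrary):
$ export DB_HOST=mydb.example.com
$ php -r 'echo getenv("DB_HOST"), PHP_EOL;'
mydb.example.com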

Certificate paths for Elastic Beanstalk instance

I'm trying to run CLI commands on new Beanstalk instances when they start.
The CLI commands require env vars, so I've set these in my bash script:
export EC2_BASE=/opt/aws
export EC2_HOME=$EC2_BASE/apitools/ec2
export EC2_PRIVATE_KEY=$(ls $EC2_BASE/certificates/*-pk.pem)
export EC2_CERT=$(ls $EC2_BASE/certificates/*-cert.pem)
export EC2_URL=https://ec2.amazonaws.com
export PATH=$PATH:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:$EC2_HOME/bin
export JAVA_HOME=/usr
In the logs I see that the certificate paths do not resolve, which causes errors:
ls: cannot access /opt/aws/certificates/*-pk.pem: No such file or directory
What is the correct path for the certificates?
I'm using the default linux ami.
The point of all this is to dynamically assign an elastic ip.
Elastic Beanstalk EC2 instances don't contain a private key file or an X.509 certificate; you must upload them yourself.
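As an aside: if the end goal is only to dynamically assign an Elastic IP, a route that sidesteps the legacy EC2 API tools (and their certificates) entirely is the modern aws CLI, run on the instance with an instance profile that allows ec2:AssociateAddress. A sketch, with a placeholder allocation ID:
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id eipalloc-0123456789abcdef0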