How to convert elastic beanstalk classic load balancer to application load balancer on a running application? - amazon-elastic-beanstalk

I have several EB applications that I would like to convert from a classic to an application load balancer. In the documentation it seems that the default way is to create a new environment from scratch with the proper load balancer. Considering that I have many environment variables and several environments, I would prefer not to have to rebuild applications. Is there a way to switch out the load balancer on an already running application?

It is not possible to change the load balancer type except at creation time. However, you can use the Elastic Beanstalk CLI (eb) and the AWS CLI to clone the environment with the same configuration and application version. To get the deployed application version, run:
aws elasticbeanstalk describe-environments --application-name ${APPLICATION_NAME} --environment-names ${SRC_ENV_NAME} | jq -r '.Environments | .[] | .VersionLabel'
The jq pipe extracts the version label from the rest of the JSON blob.
After that, you can save the configuration of the current environment using:
eb config save $SRC_ENV_NAME --cfg "${SRC_ENV_NAME}_save"
Then create a clone of the environment with an application load balancer using:
eb create $NEW_ENV_NAME --elb-type application --cfg "${SRC_ENV_NAME}_save" --version $APP_VERSION
Where APP_VERSION is the version label extracted in the first step.
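Putting it together, a minimal sketch of the whole clone (the names assigned below are placeholders; it assumes jq, the AWS CLI, and the EB CLI are installed and configured):
#!/usr/bin/env bash
set -euo pipefail

APPLICATION_NAME="my-app"       # hypothetical names; adjust for your setup
SRC_ENV_NAME="my-app-prod"
NEW_ENV_NAME="my-app-prod-alb"

# 1. Find the currently deployed application version.
APP_VERSION=$(aws elasticbeanstalk describe-environments \
  --application-name "${APPLICATION_NAME}" \
  --environment-names "${SRC_ENV_NAME}" \
  | jq -r '.Environments | .[] | .VersionLabel')

# 2. Save the source environment's configuration.
eb config save "${SRC_ENV_NAME}" --cfg "${SRC_ENV_NAME}_save"

# 3. Create the clone with an application load balancer.
eb create "${NEW_ENV_NAME}" --elb-type application \
  --cfg "${SRC_ENV_NAME}_save" --version "${APP_VERSION}"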

It is not simple, but it can be done.
If the environment name is important to you, it gets a little trickier.
Here is how it should go, step by step (using the web console):
1) Save the configuration of the environment you want to change.
2) From the saved config, generate a new env (select Customize settings).
2.1) Change the LB type to Application and fill out all the necessary info for this.
3) Swap the URLs from the original env to the new one (check that everything is working with the new env; if not, swap back). The same swap can also be done from the CLI, as shown in the sketch after these steps.
[STEPS ONLY NECESSARY IF ENV NAME IS IMPORTANT]
4) Delete the original env (which is now not receiving traffic and has a Classic LB).
5) Wait until the original name disappears from the console (it may take a couple of hours).
6) Clone the production env, and give the new env the original env name.
7) Swap URLs.
Done!
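For the swap steps, eb swap exchanges the CNAMEs of two environments; a minimal sketch, assuming both environments belong to the same application and the names are set as shell variables:
# Point the original env's URL at the new env (and vice versa).
eb swap ${SRC_ENV_NAME} --destination_name ${NEW_ENV_NAME}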

Related

elastic beanstalk document root resolves to /var/www/html/var/www/html/

I want to deploy a laravel site using elastic beanstalk.
I'm using pipelines pulling from a BitBucket repository.
After I created my EB application and environment, I changed the document-root to /web/public because the laravel-root is under the '[repo-root]/web' directory.
The deployment is failing:
2023/02/12 01:40:11 [error] 3857#3857: *109 "/var/www/html/var/www/html/web/public/index.php" is not found (2: No such file or directory), client: ..., server: , request: "GET / HTTP/1.1", host: "..."
A similar project, where the laravel-root === 'repo-root' and the document-root is public, works, but this is not ideal.
How can I configure the pipeline or EB to use the '[repo-root]/web' as the document-root?
I've unsuccessfully tried various values for the document-root, but nothing seems to work.
In another forum, someone suggested changing the pipeline to return the laravel-root as an artifact, but I'm not sure how to do this. It seems the source is stored as a zip in S3, and if I change to Full Clone I get an invalid-structure error related to CodeBuild. I don't know what that means since I'm not using CodeBuild.
TIA
While I'm sure there are a number of ways to solve this, what worked for me was using CodeBuild to pull the code from the repo and a buildspec.yml file to create a zip of just the directory required for deployment.
buildspec.yml
version: 0.2
phases:
  pre_build:
    commands:
      - cd web
      - zip -r ../web.zip ./*
artifacts:
  files:
    - web.zip
Still under CodeBuild, I configured the Artifacts to output to an S3 bucket. Then I created a Code Pipeline with a Source stage that pulls the zip from the build bucket and a Deploy stage that sends the source artifact to Elastic Beanstalk (provider). When setting up the pipeline, it seems to want you to have a 'Build' stage between Source and Deploy, but I deleted this.
It looks like you can also leverage artifact handling and let CodeBuild do the packaging (zipping). I haven't tested this. https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.base-directory
...
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
As for the weird pathing issue in the original post, I think there was some sort of EB config cache issue/corruption. When I rebuilt the environment, that error was gone.

Openshift 4.6 Node and Master Config Files

Where are the Openshift master and node host files in v4.6?
Previously, in v3, they were hosted at the locations below:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
As of OCPv4, the kubelet configuration is managed dynamically, so instead of reading a configuration file on the node hosts as in OCPv3, you can check your current kubelet configuration using the following procedures.
Further information is here: Generating a file that contains the current configuration.
You can check it using the referenced procedure (generate the configuration file) or the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed; others cannot be changed at all.
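As a concrete sketch of the kubeletConfig route (the pool label, object name, and maxPods value are illustrative assumptions, not values from the question):
# Label the worker MachineConfigPool so the selector below matches it.
oc label machineconfigpool worker custom-kubelet=max-pods

oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods      # must match the label set above
  kubeletConfig:
    maxPods: 250                    # example setting; only certain fields are tunable
EOF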
For the master config, it depends on what you want to change: you may change a setting via a machineConfigPool or, for example, edit API server settings via oc edit apiserver cluster.

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is make a web app that lists, in one single view, the version of every application deployed in our Openshift (a fast view of versions). At the moment, the only way I have seen to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter, but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option, as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the Openshift API, and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to modify it constantly, and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs or whatever lets me identify the pod when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments, should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
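As a rough sketch of wiring up those triggers with the oc CLI (the app, project, and container names here are hypothetical):
# Rebuild on every push to the GitHub repository backing the BuildConfig.
oc set triggers bc/myapp --from-github

# Redeploy whenever a new image lands in the image stream tag.
oc set triggers dc/myapp --from-image=myproject/myapp:latest -c myapp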
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either by simply using 'oc get pods' or with a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces needed to filter out other Names:)
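If you want the pod name and the variable value in one structured query, oc also accepts JSONPath output; a sketch (untested, and it only sees variables set directly on the containers, not ones injected from ConfigMaps or Secrets):
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'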

How to manage settings in Openshift?

The profile.properties file is not found in the source code in the repository.
Is it possible to use environment variables in Openshift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in an Openshift environment?
Environment variables are a first-class concept in Openshift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, as they then won't change when you move between environments, but it may be necessary to configure your build or to set things that won't change (e.g. set the port number node.js uses to match the official node.js image with "PORT=8080").
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod that is launched by that deployment. This is a fairly common way of setting application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place; a sketch of this approach follows.
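As a concrete sketch using the Keycloak flag from the question (the ConfigMap and DeploymentConfig names are hypothetical, and JAVA_OPTS_APPEND assumes an image that honors that variable):
# Keep the setting in a ConfigMap and expose it to the deployment...
oc create configmap keycloak-settings \
  --from-literal=JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.scripts=enabled"
oc set env dc/keycloak --from=configmap/keycloak-settings

# ...or set it directly on the DeploymentConfig.
oc set env dc/keycloak JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.scripts=enabled"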
It's common to see devs use a .env file that is listed in .gitignore, so it is not in git. In the past I have written scripts to load that into a Secret within Openshift and then use envFrom to set that Secret on the deployment (a sketch follows). Then have a .env.staging and .env.live that we git secret encrypt into git.
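A minimal sketch of that flow (the Secret and DeploymentConfig names are hypothetical):
# Load the local .env file into a Secret, then expose all its keys to the deployment.
oc create secret generic app-env --from-env-file=.env
oc set env dc/myapp --from=secret/app-env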
The problem with .env files is that they tend to get messy and accumulate unused junk after a while. So we broke the file up into one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings, and a ConfigMap for shared settings.
These days we use Helmfile to load all our config from git based on git webhooks. All the config is YAML in a git repo (with secret YAML encrypted). If you merge a change to the config git repo, a webhook handler decrypts the config and runs Helmfile to update the settings in Openshift. I am in the process of open-sourcing everything, including (optionally) using a chatbot to manage releases, over on GitHub.
I should also say that Openshift automatically creates many environment variables to help you configure your apps. In each project, a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
Openshift also sets up internal DNS entries for your services. This means that if App A uses App B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that Openshift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by Openshift). So our apps can find a redis service running in the same project using that technique.
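For example, for a Service named redis (an assumed name) in the same project, every pod gets variables following the <SERVICE>_SERVICE_HOST/_PORT convention, and the service is also reachable by its DNS name:
# Inside any pod in the same project:
echo "${REDIS_SERVICE_HOST}:${REDIS_SERVICE_PORT}"   # e.g. 172.30.12.34:6379
# Or skip the env vars and use the internal DNS name:
#   redis.<project-name>.svc.cluster.local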

how to use google api keys based on heroku application name

I've created a few different "environments" for my app that is hosted on heroku so I have:
appName-staging.heroku.com
appName-production.heroku.com
I want to use different google api keys for these applications, how do I do this?
I've created a google.yml file that looks like:
development:
  api_key: 'ABCXYZ'
production:
  api_key: 'DEFXYZ'
so I use ABCXYZ when developing locally, and DEFXYZ for appName-production.heroku.com.
The question is, how do I get appName-staging.heroku.com to use a different key? Since every application deployed to Heroku is considered to be in "production", both appName-staging.heroku.com and appName-production.heroku.com use the same key.
You could add a heroku config variable to each environment, allowing you to identify each one from within the app.
Something along the lines of:
$ heroku config:add APP_NAME_ENV=production --app appName-production
$ heroku config:add APP_NAME_ENV=staging --app appName-staging
Then you could grab the current environment from within your app using:
ENV['APP_NAME_ENV']
And if you've got your YAML file as a hash called something like GOOGLE_KEYS, the following would return the correct key for a given environment:
GOOGLE_KEYS[ENV['APP_NAME_ENV']]
The previous answer definitely works, but it doesn't account for the potential security threats that come with checking files containing private keys into source control. Having your google.yml file in source control will allow anyone with access to your repo to see your private API keys.
A more secure solution would be to delete the google.yml file and create an environment variable with the same name on your staging and production apps, holding different keys:
$ heroku config:add GOOGLE_API_KEY=<production key> --app appName-production
$ heroku config:add GOOGLE_API_KEY=<development key> --app appName-staging
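You can confirm what each app will see with heroku config:get:
$ heroku config:get GOOGLE_API_KEY --app appName-production
$ heroku config:get GOOGLE_API_KEY --app appName-staging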
Then, whenever the key is needed, you can refer to it in code via
ENV['GOOGLE_API_KEY']
This will allow you to share code without sharing your private API keys.
Some more information on using environment variables on Heroku can be found at https://devcenter.heroku.com/articles/config-vars