How to use Google API keys based on Heroku application name

I've created a few different "environments" for my app hosted on Heroku, so I have:
appName-staging.heroku.com
appName-production.heroku.com
I want to use different google api keys for these applications, how do I do this?
I've created a google.yml file that looks like:
development:
  api_key: 'ABCXYZ'
production:
  api_key: 'DEFXYZ'
so I use ABCXYZ when developing locally, and DEFXYZ for appName-production.heroku.com.
The question is, how do I get appName-staging.heroku.com to use a different key?
Since every application deployed to Heroku is considered to be in "production", both
appName-staging.heroku.com and appName-production.heroku.com currently use the same key.

You could add a heroku config variable to each environment, allowing you to identify each one from within the app.
Something along the lines of:
$ heroku config:add APP_NAME_ENV=production --app appName-production
$ heroku config:add APP_NAME_ENV=staging --app appName-staging
Then you could grab the current environment from within your app using:
ENV['APP_NAME_ENV']
And if you've loaded your YAML file into a hash called something like GOOGLE_KEYS, the following would return the correct key for the current environment:
GOOGLE_KEYS[ENV['APP_NAME_ENV']]
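Putting that together, a minimal sketch in a Rails initializer might look like the following (the file path, the GOOGLE_API_KEY constant, and the staging entry in google.yml are illustrative, not part of the original setup):

require 'yaml'

# Load the per-environment keys from config/google.yml
# (google.yml would also need a staging: entry with its own api_key).
GOOGLE_KEYS = YAML.load_file(Rails.root.join('config', 'google.yml'))

# Fall back to the local Rails env when APP_NAME_ENV isn't set (e.g. in development).
app_env = ENV['APP_NAME_ENV'] || Rails.env
GOOGLE_API_KEY = GOOGLE_KEYS[app_env]['api_key']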

The previous answer definitely works, but it doesn't account for the security risk that comes with checking files containing private keys into source control. Having your google.yml file in source control will allow anyone with access to your repo to see your private API keys.
A more secure solution would be to delete the google.yml file and create different environment variables on your staging and production servers with the same key:
$ heroku config:add GOOGLE_API_KEY=<production key> --app appName-production
$ heroku config:add GOOGLE_API_KEY=<development key> --app appName-staging
Then, wherever the key is needed, you can refer to it in code via
ENV['GOOGLE_API_KEY']
This will allow you to share code without sharing your private API keys.
Some more information on using environment variables on Heroku can be found at https://devcenter.heroku.com/articles/config-vars
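As a rough sketch of that approach (assuming a Rails-style initializer; the path is only illustrative), you could read the key once at boot and fail loudly if the config var is missing:

# config/initializers/google_api.rb (illustrative path)
# ENV.fetch raises a KeyError at boot if GOOGLE_API_KEY was never set,
# which is easier to debug than a silently nil API key.
GOOGLE_API_KEY = ENV.fetch('GOOGLE_API_KEY')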

Related

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is to make a web app that lists in one single view the version of every application deployed in our Openshift (a quick overview of versions). At the moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view, which is why I ask for that parameter; but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the Openshift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to modify it constantly, and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs or whatever lets me identify the pod when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either with a simple 'oc get pods' or with a second call to oc describe:
oc describe pods | grep "Name:        "
(note the 8 spaces in the pattern, needed to filter out other Name: lines)
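If you'd rather not parse oc describe output, a small script over 'oc get pods -o json' can pair each pod name with its ARTIFACT_URL value. A rough Ruby sketch (it assumes the oc client is already logged in to the right project):

require 'json'

# Ask the oc client for all pods in the current project as JSON.
pods = JSON.parse(`oc get pods -o json`)

pods['items'].each do |pod|
  name = pod.dig('metadata', 'name')
  # Collect the env entries of every container and look for ARTIFACT_URL.
  envs = pod.dig('spec', 'containers').flat_map { |c| c['env'] || [] }
  artifact = envs.find { |e| e['name'] == 'ARTIFACT_URL' }
  puts "#{name}: #{artifact ? artifact['value'] : '(not set)'}"
end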

I would like to auto-detect the environment and set the appropriate variables

I have the same web application running on 2 different servers: pre-production and production. Each environment has its own database.
The file config.json contains all the database information.
Development environment: the app.js and database/db.js files call a file getConf, which retrieves the config.json information using the preprod object.
Production: I change the retrieved object from preprod to prod.
Call:
// Pre-production
var config = require('./database/getConf').preprod
// Production
var config = require('./database/getConf').prod
I would like to do this automatically: to move to the production environment, I shouldn't have to update any files. Just copy and paste from dev, and the app detects the environment automatically and uses the appropriate database.
I tried several modules like nconf, dotconf, and config, but they all require selecting the environment on the command line before launching the app.
But my 2 apps run on two different IIS servers in my company, and they're launched from IIS, not from the command line.
I'm hoping to just copy and paste the whole project folder from dev to prod.
According to the documentation, you can find out pretty much everything you need to know about the system with the os module.
If you're looking for the CPU architecture: os.arch().
If you're looking for the OS type: os.type().
If you want the machine's hostname, which would let you tell your two servers apart: os.hostname().
Hope it can help you ^^.

How to manage settings in Openshift?

The profile.properties file is not found in the source code in the repository?
Is it possible to use an environment variable in Openshift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in the Openshift environment?
Environment variables are a first-class concept in Openshift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, since they then won't change as you move through environments, but it may be necessary to configure your build or to set things that won't change (e.g. setting the port number node.js uses to match the official node.js image with "PORT=8080").
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod launched by that deployment. This is a fairly common way of setting application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place.
It's common to see devs use a .env file that is listed in .gitignore, so it's not in git. In the past I have written scripts to load that into a Secret within Openshift, then use envFrom to set that Secret on the deployment, and keep a .env.staging and .env.live that we git-secret encrypt into git.
The problem with .env files is that they tend to get messy and accumulate unused junk after a while. So we broke the file up into one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings, and a ConfigMap for shared settings.
These days we use Helmfile to load all our config from git based on git webhooks. All the config is YAML in a git repo (with the secret YAML encrypted). If you merge a change to the config git repo, a webhook handler decrypts the config and runs Helmfile to update the settings in Openshift. I am in the process of open sourcing everything, including using a chatbot to manage releases (optional), over on GitHub.
I should also say that Openshift automatically creates many environment variables to help you configure your apps. In each project, a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
Openshift also sets up internal DNS entries for your services. This means that if App A uses App B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that Openshift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by Openshift). So our apps can find a redis service running in the same project using that technique.
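As a small illustration of that last point: a pod running in the same project as a service named redis (the name is just an example) gets REDIS_SERVICE_HOST and REDIS_SERVICE_PORT injected automatically, so an app could build its connection details roughly like this:

# Kubernetes/Openshift injects <SERVICE_NAME>_SERVICE_HOST and _SERVICE_PORT
# for every service in the project; "redis" here is only an example name.
redis_host = ENV['REDIS_SERVICE_HOST']
redis_port = ENV['REDIS_SERVICE_PORT']
redis_url  = "redis://#{redis_host}:#{redis_port}"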

How to convert elastic beanstalk classic load balancer to application load balancer on a running application?

I have several EB applications that I would like to convert from a classic to an application load balancer. In the documentation it seems that the default way is to create a new environment from scratch with the proper load balancer. Considering that I have many environment variables and several environments, I would prefer not to have to rebuild applications. Is there a way to switch out the load balancer on an already running application?
It is not possible to set a load balancer type except at creation time. You can use the Elastic Beanstalk CLI and the AWS CLI to clone the application with the same config and version. To get the deployed application version, run:
aws elasticbeanstalk describe-environments --application-name ${APPLICATION_NAME} --environment-names ${SRC_ENV_NAME} | jq -r '.Environments | .[] | .VersionLabel'
The jq pipe filters out the rest of the json blob.
After that, you can save the config of the current application using:
eb config save $SRC_ENV_NAME --cfg "${SRC_ENV_NAME}_save"
Then create an application clone using:
eb create $NEW_ENV_NAME --elb-type application --cfg "${SRC_ENV_NAME}_save" --version $APP_VERSION
Where APP_VERSION is the string extracted in step one.
It is not simple, but it can be done.
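Put together, the three steps might be scripted roughly like this (a sketch only; it assumes the aws and eb CLIs are installed and configured, and the application and environment names are placeholders):

require 'json'

app_name     = 'my-app'          # placeholder
src_env_name = 'my-app-prod'     # placeholder
new_env_name = 'my-app-prod-alb' # placeholder

# 1. Find the application version currently deployed to the source environment.
out = `aws elasticbeanstalk describe-environments --application-name #{app_name} --environment-names #{src_env_name}`
app_version = JSON.parse(out)['Environments'].first['VersionLabel']

# 2. Save the source environment's configuration.
system("eb config save #{src_env_name} --cfg #{src_env_name}_save")

# 3. Create a clone of it that uses an application load balancer.
system("eb create #{new_env_name} --elb-type application --cfg #{src_env_name}_save --version #{app_version}")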
If the environment name is important to you, it gets a little trickier.
Here is how it should go, step by step (using the web console):
1. Save the configuration of the environment you want to change.
2. From the saved config, generate a new env (select Customize settings).
   2.1) Change the LB type to Application and fill out all the necessary info for this.
3. Swap the URLs from the original env to the new one (check that everything works with the new env; if not, swap back).
[Steps only necessary if the env name is important:]
4. Delete the original env (which now is not receiving traffic and has a Classic LB).
5. Wait until the original name disappears from the console (it may take a couple of hours).
6. Clone the production env, and give the new env the original env name.
7. Swap URLs.
Done!

Clone an Openshift application as scalable

I have an application on the Openshift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from Openshift and found that there is a way to do it: I should clone my current application to a new, scalable one, which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have 3 free gears.
The way the blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running gem update rhc, I no longer get the error above, but... a new application with the given name has been created with the same starting package (Python 2.7) as the existing one, and all the files are missing. It actually creates a blank application, not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we've discussed on IRC, your problem was a missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem.
The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it's all working properly.
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'rhc snapshot restore' (among other things), as you can see here from the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments, :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'
  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)
  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there, you'll need to copy them over manually.