Currently I have a tfvars file in JSON, setting key values that contain spaces. For example:
{
  "customer": "Test Customer"
}
I then pass this variable to an Ansible playbook command run locally on the provisioned EC2 host using cloud-config:
sudo ansible-playbook /Playbook.yml --extra-vars 'customer=${var.customer}'
In that playbook I have a license file that I want to populate with Ansible's template module. Currently the license file gets Test, but not Test Customer (the space splits the value). How can I fix this?
Also, on a second note: is there a better/cleaner way of passing Terraform variables to an Ansible playbook command in a Terraform config, other than -e/--extra-vars?
I think this will do the trick for you:
sudo ansible-playbook /Playbook.yml --extra-vars "customer='${var.customer}'"
Note:
For a better design, I would strongly recommend decoupling these two tools from each other. Don't create tight coupling between Ansible and Terraform; in the future you may decide to switch to a different tool, and that coupling would force you to rewrite your whole IaC.
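As an aside, Ansible also accepts --extra-vars as a JSON blob, which sidesteps the nested quoting for values containing spaces. A minimal sketch, using echo as a stand-in for the real ansible-playbook call and a hardcoded value in place of Terraform's ${var.customer} interpolation:

```shell
# Stand-in for the value Terraform would interpolate from ${var.customer}
customer='Test Customer'

# Build the JSON blob; it travels as a single shell argument, space included.
extra_vars="{\"customer\": \"$customer\"}"

# echo stands in for: sudo ansible-playbook /Playbook.yml --extra-vars "$extra_vars"
echo ansible-playbook /Playbook.yml --extra-vars "$extra_vars"
```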
What I want to do is make a web app that lists, in a single view, the version of every application deployed in our OpenShift cluster (a quick overview of versions). At the moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view, which is why I ask for that parameter; but if there is another way to get a pod and the version of the app currently deployed in it, I am open to that option too, as long as I can get it through an API. I may eventually also need an endpoint that retrieves the list of current pods.
I've looked into the OpenShift API, and the only thing I've found that may help me is this GET; but if the :id parameter is what I think it is, it changes with every deploy, so I would need to modify it constantly, which isn't practical. Obviously, I'd also need an endpoint to get the list of IDs (or whatever lets me identify a pod) when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments, should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either with a plain oc get pods or with a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces after Name:, needed to filter out other occurrences of Name:)
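The two grep calls can also be combined into a single pass with awk, pairing each pod name with its ARTIFACT_URL. A sketch using canned sample_output in place of a live oc describe pods call (pod names and URLs are made up):

```shell
# sample_output stands in for `oc describe pods`; real output has many more fields.
sample_output='Name:         app-1-abcde
Namespace:    myproject
    Environment:
      ARTIFACT_URL:    http://repo/app-1.2.3.jar
Name:         web-2-fghij
Namespace:    myproject
    Environment:
      ARTIFACT_URL:    http://repo/web-2.0.1.jar'

# Remember the most recent Name: seen; print it alongside each ARTIFACT_URL value.
printf '%s\n' "$sample_output" \
  | awk '/^Name:/ {name=$2} /ARTIFACT_URL:/ {print name "\t" $2}'
```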
Current situation:
A Dockerfile that is based on an Ubuntu image, installs Wget and declares that a bashscript will run when a container is started.
A Docker container is started based on the image with the necessary environment variables in the command. These variables will be used inside the Wget command in the bashscript.
docker run -i -e 'ENV_VARIABLE=VALUE' [imagename]
The container runs the bashscript, containing a Wget HTTP PUT:
wget --method=PUT --body-data="{\"key\":\"${ENV_VARIABLE}\"}" ...
Desired situation:
The current situation works, but I don't like this solution because of the quote escaping (\") I have to use.
I tried to solve this by constructing the --body-data as below, with surrounding single quotes.
'{"key":"${ENV_VARIABLE}"}'
However, this will not expand ENV_VARIABLE, since the payload is now a literal string.
A preferable solution would be to separate the JSON out into a JSON file that I can refer to in the Wget call. This raises the following questions:
How do I refer to the JSON file? My best guess is to copy the file into the image at build time and then refer to it via a path in the Wget call, but then again, how exactly do I refer to it?
If the above is correct, will I still be able to refer to the Docker environment variables?
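A sketch of the file-based approach, assuming wget's --body-file option (available in the same wget versions that support --method) and a hypothetical endpoint. Environment variables still expand, because the payload is built at container start rather than at image build time:

```shell
# ENV_VARIABLE would normally come from `docker run -e`; default it so the sketch runs standalone.
ENV_VARIABLE="${ENV_VARIABLE:-VALUE}"

# Build the payload at runtime; inside a heredoc, no quote escaping is needed.
cat > /tmp/payload.json <<EOF
{"key":"${ENV_VARIABLE}"}
EOF

# Then reference the file instead of inlining the JSON (endpoint is hypothetical):
# wget --method=PUT --body-file=/tmp/payload.json http://example.com/endpoint
```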
I'm trying to accomplish blue/green deployments with Terraform and Elastic Beanstalk. How would one swap environment URLs with this stack? I don't see anything obvious here.
The best I can come up with is...
running terraform apply to spin up my entire architecture
this spins up the blue aws_elastic_beanstalk_environment
when deploying a new version of the app, running terraform apply -target=module.elasticbeanstalk.aws_elastic_beanstalk_environment.green to spin up only the green aws_elastic_beanstalk_environment resource
Now I have both blue & green up. Time for actually swapping the URLs...
via the eb swap CLI command, swap the two environment URLs
update tfstate manually
terraform push new state
I would love it if there was a solution where I didn't have to manually manipulate the state. Or is this the only way to accomplish this function using these two tools?
I believe you just toggle the CNAME prefix of each environment.
If you are looking to update the state after the swap you could note the ids of the beanstalk environments and do:
# Remove the old state information, using the command line not manually
terraform state rm module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue
terraform state rm module.elasticbeanstalk.aws_elastic_beanstalk_environment.green
# Import these to the opposite environment resources
terraform import module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue <green id>
terraform import module.elasticbeanstalk.aws_elastic_beanstalk_environment.green <blue id>
This doesn't exactly seem ideal, but it does avoid manually manipulating the state file, and it could be scripted fairly easily.
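Scripted, the swap-then-reimport sequence might look like the following; the environment names and ids are hypothetical, and DRY_RUN=echo prints the commands instead of executing them (drop it for real use):

```shell
#!/bin/sh
set -e

DRY_RUN=echo            # remove to actually execute the commands
BLUE_ID="e-blue123"     # hypothetical Elastic Beanstalk environment ids
GREEN_ID="e-green456"

# Swap the environment CNAMEs via the EB CLI
$DRY_RUN eb swap my-app-blue --destination_name my-app-green

# Re-point the Terraform state at the swapped environments
$DRY_RUN terraform state rm module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue
$DRY_RUN terraform state rm module.elasticbeanstalk.aws_elastic_beanstalk_environment.green
$DRY_RUN terraform import module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue "$GREEN_ID"
$DRY_RUN terraform import module.elasticbeanstalk.aws_elastic_beanstalk_environment.green "$BLUE_ID"
```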
What is the standard way of using the "oc apply" command inside Java code with the Fabric8 library?
We have been using client.buildConfigs().inNamespace ..... etc. so far. However, we felt an "oc apply" equivalent could simplify the code.
Thanks
San
Do you mean inside a Jenkins Pipeline or just in general purpose Java code?
We have the equivalent of kubernetes apply with:
new Controller().apply(entity);
in the kubernetes-api module in the fabric8io/fabric8 git repo.
There's a similar API in kubernetes-client too, which lets you apply a file or URI directly.
In pipelines we tend to use the kubernetesApply(file) function to do a similar thing.
But inside Jenkins Pipelines you can also use oc or kubectl directly, via the clients docker image exposed by the clientsNode() function in the fabric8-pipeline-library (see these examples); then you can run oc commands directly from code in a Jenkinsfile.
I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in docker is not visible in docker ps however it's visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b98afde2fab7 mysql:5.7.7 "/entrypoint.sh mysq 6 seconds ago Up 5 seconds 3306/tcp mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers? Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume e.g:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
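A runnable sketch of the source-a-file variant; a mktemp file stands in for the mounted /secret-file, and the password is the throwaway value from the question:

```shell
# Stand-in for the volume mount: docker run -v $(pwd)/my-secret-file:/secret-file ...
secret_file=$(mktemp)
echo 'export MYSQL_ROOT_PASSWORD=12345' > "$secret_file"

# Inside the container this would be: CMD . /secret-file && exec /run-my-app
. "$secret_file"

# The variable is set for the process, but never appears in `docker inspect`.
echo "password set: ${MYSQL_ROOT_PASSWORD:+yes}"
```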
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
Regarding "Alternatively, is it possible to pass sensitive parameters by reference to a file?": from the docs at http://docs.docker.com/reference/commandline/run/ there is --env-file=[], which reads in a file of environment variables.
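A sketch of the --env-file usage (file name and value are from the question above; note the values still show up in docker inspect, this only keeps them off the command line and out of shell history):

```shell
# Write the variables to a file, one KEY=value per line (no quoting, no export)
cat > env.list <<EOF
MYSQL_ROOT_PASSWORD=12345
EOF

# echo stands in for the real invocation:
echo docker run --env-file env.list --name mysql-5.7.7 -d mysql:5.7.7
```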