I am trying to update the /etc/hosts file on my load balancer server using Ansible, so I created a Jinja2 template and added my variables from Ansible facts.

{"changed": false, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'default_ipv4'"}
here is the error I keep running into.
ansible playbook
my jinja2 template

I think you have an issue in your variable reference. It should be:
{{ hostvars[host]['ansible_default_ipv4']['address'] }}
Try ansible_default_ipv4 rather than default_ipv4.
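To see exactly what that fact contains for a host (and to confirm it was gathered at all), you can query it directly; a quick check, assuming your inventory is already set up:
# print only the ansible_default_ipv4 fact for every host in the inventory
ansible all -m setup -a 'filter=ansible_default_ipv4'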

Related

Annotation Validation Error when trying to install Vault on OpenShift

Following this tutorial on installing Vault with Helm on OpenShift, I encountered the following error after executing the command:
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml
For the config, values.yaml:
echo '# Custom values for the Vault chart
global:
  # If deploying to OpenShift
  openshift: true
server:
  dev:
    enabled: true
  serviceAccount:
    create: true
    name: vault-sa
injector:
  enabled: true
authDelegator:
  enabled: true' > my_values.yaml
The error:
$ helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f values.yaml
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
What exactly is happening, or how can I reset this specific namespace to point to the right release namespace?
Have you by chance tried the exact same thing before? That is what the error is hinting at.
If we dissect the error, we get to the root of the problem:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists.
So something on the cluster already exists that you were trying to deploy via the helm chart.
Unable to continue with install:
Helm is aborting due to this failure
ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists
So the ClusterRole vault-agent-injector-clusterrole that the Helm chart is supposed to put onto the cluster already exists. ClusterRoles aren't namespace-specific, hence the namespace is blank.
and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
The default behavior is to try to import existing resources that the chart requires, but that is not possible here, because the owner of that ClusterRole (release namespace my-vault-0) is different from the release you are installing (my-vault-1).
To fix this, remove the existing deployment of your chart and then give it another try; it should work as expected.
Make sure all resources are gone. For this particular one you can check with kubectl get clusterroles.
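For example (a sketch; it assumes the earlier attempt was also a release named vault, installed into the my-vault-0 namespace mentioned in the error):
# remove the release that still owns the ClusterRole (names taken from the error message)
helm uninstall vault -n my-vault-0
# verify nothing is left behind; ClusterRoles and ClusterRoleBindings are cluster-scoped
kubectl get clusterroles | grep vault-agent-injector
kubectl get clusterrolebindings | grep vault-agent-injector
# if the ClusterRole is still there, delete it manually before re-installing
kubectl delete clusterrole vault-agent-injector-clusterrole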

AWS EB undefined RDS_HOSTNAME with Database hosts array empty

I'm currently working on a Laravel project using AWS EB with RDS.
When I run
php artisan migrate --seed
then I get
PHP Notice: Undefined index: RDS_HOSTNAME in /var/app/current/config/database.php on line 5
PHP Notice: Undefined index: RDS_USERNAME in /var/app/current/config/database.php on line 6
PHP Notice: Undefined index: RDS_PASSWORD in /var/app/current/config/database.php on line 7
PHP Notice: Undefined index: RDS_DB_NAME in /var/app/current/config/database.php on line 8
and
Database hosts array is empty. (SQL: select * from
information_schema.tables where table_schema = ? and table_name =
migrations and table_type = 'BASE TABLE')
I'm not using a .env file but defining these variables in EB configuration, like
and my ./config/database.php file
I tested changing the variables' prefix to RDS_ instead of DB_ in EB, but that didn't solve the problem.
Per the Elastic Beanstalk documentation here:
Note: Environment properties aren't automatically exported to the
shell, even though they are present in the instance. Instead,
environment properties are made available to the application through
the stack that it runs in, based on which platform you're using.
So Elastic Beanstalk is going to pass those environment variables to the Apache HTTP server process, but not the Linux shell where you are running that php command.
Per the documentation here, you need to use the get-config script to pull those environment variable values into your shell.
So you'll need to do this for each variable:
/opt/elasticbeanstalk/bin/get-config environment -k RDS_USERNAME
which will print the value of RDS_USERNAME. Then export it so it can be used in other commands:
export RDS_USERNAME="value"
Do that for all of them: RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME. Then if you run
export
you should see RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME listed with their respective values.
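If it helps, all four can be exported in one go (a sketch, using the same get-config path as above):
# pull each EB environment property into the current shell session
for VAR in RDS_HOSTNAME RDS_USERNAME RDS_PASSWORD RDS_DB_NAME; do
  export "$VAR"="$(/opt/elasticbeanstalk/bin/get-config environment -k "$VAR")"
done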
Once that's done and you run
php artisan migrate --seed
it should then run as expected.

Set build number for conda metadata inside Azure pipeline

I am using a bash script to build the conda package in an Azure pipeline: conda build . --output-folder $(Build.ArtifactStagingDirectory). Here is the issue: conda build uses the build number in the meta.yaml file (see here).
A solution I could think of is to first copy all files to Build.ArtifactStagingDirectory, add the Azure pipeline's Build.BuildNumber into meta.yaml, and then build the package into Build.ArtifactStagingDirectory (within a sub-folder).
I am trying to avoid doing this by writing a shell script that manipulates the YAML file in the Azure pipeline, because it might be error prone. Does anyone know a better way? It would be nice to read a more elegant solution in the answers or comments.
I don't know much about Azure pipelines. But in general, if you want to control the build number without changing the contents of meta.yaml, you can use a jinja template variable within meta.yaml.
Choose a variable name, e.g. CUSTOM_BUILD_NUMBER and use it in meta.yaml:
package:
  name: foo
  version: 0.1

build:
  number: {{ CUSTOM_BUILD_NUMBER }}
To define that variable, you have two options:
Use an environment variable:
export CUSTOM_BUILD_NUMBER=123
conda build foo-recipe
OR
Define the variable in conda_build_config.yaml (docs), as follows
echo "CUSTOM_BUILD_NUMBER:" >> foo-recipe/conda_build_config.yaml
echo " - 123" >> foo-recipe/conda_build_config.yaml
conda build foo-recipe
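Either way, you can check which build number the recipe will actually get by rendering it without building (a sketch; conda render ships with conda-build):
# prints the fully resolved meta.yaml, including build/number
conda render foo-recipe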
If you want, you can add an if statement so that the recipe still works even if CUSTOM_BUILD_NUMBER is not defined (using a default build number instead).
package:
  name: foo
  version: 0.1

build:
  {% if CUSTOM_BUILD_NUMBER is defined %}
  number: {{ CUSTOM_BUILD_NUMBER }}
  {% else %}
  number: 0
  {% endif %}
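To wire this up in the Azure pipeline, one option (a sketch, not tested against your setup) is to export the variable in the same bash step that runs conda build. Note that conda build numbers are integers, so the predefined $(Build.BuildId) counter may be a better fit than the default date-based Build.BuildNumber format:
# $(Build.BuildId) is an Azure Pipelines macro, substituted before the script runs
export CUSTOM_BUILD_NUMBER=$(Build.BuildId)
conda build . --output-folder $(Build.ArtifactStagingDirectory)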

Read JSON data from the YAML file

I have a .gitlab-ci.yml file that I use to install a few plugins (craftcms/aws-s3, craftcms/redactor, etc.) in the publish stage. The file is provided below (partly):
# run the staging deploy, commands may be different based on the project
deploy-staging:
  stage: publish
  variables:
    DOCKER_HOST: 127.0.0.1:2375
  # ...............
  # ...............
  script:
    # TODO: temporary fix to the docker/composer issue
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/aws-s3
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/redactor
I have a JSON file, .butler.json, that holds the data for the plugins; it is provided below:
{
  "customer_number": "007",
  "project_number": "999",
  "site_name": "Welance",
  "local_url": "localhost",
  "db_driver": "mysql",
  "composer_require": [
    "craftcms/redactor",
    "craftcms/aws-s3",
    "nystudio107/craft-typogrify:1.1.17"
  ],
  "local_plugins": [
    "welance/zeltinger",
    "ansmann/ansport"
  ]
}
How do I take the plugin names from the "composer_require" and the "local_plugins" inside the .butler.json file and create a for loop in the .gitlab-ci.yml file to install the plugins?
You can't create a loop in .gitlab-ci.yml since YAML is not a programming language. It only describes data. You could use a tool like jq to query for your values (cat .butler.json | jq '.composer_require') inside a script, but you cannot set variables from there (there is a feature request for it).
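For example, a shell snippet inside the job's script could read the list with jq and install each plugin in turn (a sketch, assuming jq is available in the runner image and .butler.json sits in the repository root; .local_plugins[] could be appended to the jq filter the same way):
# install every plugin listed under composer_require in .butler.json
for plugin in $(jq -r '.composer_require[]' .butler.json); do
  docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require "$plugin"
done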
You could use a templating engine like Jinja (which is often used with YAML, e.g. by Ansible and SaltStack) to generate your .gitlab-ci.yml from a template. There is a command line tool, j2cli, which takes variables as JSON input; you could use it like this:
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
You could then use Jinja expressions in gitlab-ci.yml.j2 to loop over your data and generate the corresponding YAML:
{% for item in composer_require %}
# build your YAML
{% endfor %}
The drawback is that you need the processed .gitlab-ci.yml checked in to your repository. This can be done via a pre-commit hook (before each commit, regenerate the .gitlab-ci.yml file and, if it changed, commit it along with the other changes).
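A minimal hook sketch (assuming j2cli is installed and the script is saved as .git/hooks/pre-commit and made executable):
#!/bin/sh
# regenerate .gitlab-ci.yml from the Jinja template and stage it if it changed
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
git add .gitlab-ci.yml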

YAML and JSON error aws static.config file

I'm trying to deploy a node.js application on Elastic Beanstalk (I'm following the directions here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express.html), where the following needs to be done:
Step 6 [To update your application with a database] Point 5. On your local computer, update node-express/.ebextensions/static.config to add a production flag to the environment variables.
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - option_name: NODE_ENV
    value: production
But when I deploy, I'm getting the error:
2014-08-29 10:15:11 ERROR The configuration file .ebextensions/static.config in application version git-5376bdbd807e9f181e6a907f996068b4075dffe0-1409278503377 contains invalid YAML or JSON. YAML exception: while parsing a block mapping
in "<reader>", line 1, column 1:
option_settings:
^
expected <block end>, but found BlockEntry
in "<reader>", line 5, column 1:
- option_name: NODE_ENV
^
, JSON exception: Unexpected character (o) at position 0.. Update the configuration file.
I'm new to this and unable to figure out how to correct it. Please help.
Can you specify which text editor you are using to create this YAML file? By any chance are you on a Windows machine? My first guess is that there could be some invalid character in your config file that is not visible in the text editor. If you have not already done so, can you double-check that there are no control characters etc. in your file? I generally check for such invalid characters in vim.
The second thing to check: YAML is sensitive to whitespace and indentation, so double-check that your indentation is correct. I found an online website for validating YAML format; you can give it a try.
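If it helps, invisible characters are easy to spot from the shell (a sketch, assuming GNU cat; tabs show up as ^I and Windows line endings as ^M):
# YAML forbids tabs for indentation, so any ^I or ^M in the output points at the culprit
cat -A .ebextensions/static.config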
You can alternatively try the following JSON in your config file. Just replace the contents of the YAML file with this JSON and it should work.
{
  "option_settings": [
    {
      "option_name": "/public",
      "namespace": "aws:elasticbeanstalk:container:nodejs:staticfiles",
      "value": "/public"
    },
    {
      "option_name": "NODE_ENV",
      "value": "production"
    }
  ]
}
The YAML as specified in the documentation should work just fine if there are no invalid characters and the indentation is correct. For now, you can try the above JSON.
Try setting the namespace of the application environment to aws:elasticbeanstalk:application:environment. The YAML form is more human friendly:
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: production