Read JSON data from the YAML file

I have a .gitlab-ci.yml file that I use to install a few plugins (craftcms/aws-s3, craftcms/redactor, etc.) in the publish stage. The file is provided below (partially):
# run the staging deploy, commands may be different based on the project
deploy-staging:
  stage: publish
  variables:
    DOCKER_HOST: 127.0.0.1:2375
  # ...............
  # ...............
  # TODO: temporary fix to the docker/composer issue
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/aws-s3
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/redactor
I have a JSON file that has the data for the plugins. The file, .butler.json, is provided below:
{
  "customer_number": "007",
  "project_number": "999",
  "site_name": "Welance",
  "local_url": "localhost",
  "db_driver": "mysql",
  "composer_require": [
    "craftcms/redactor",
    "craftcms/aws-s3",
    "nystudio107/craft-typogrify:1.1.17"
  ],
  "local_plugins": [
    "welance/zeltinger",
    "ansmann/ansport"
  ]
}
How do I take the plugin names from the "composer_require" and the "local_plugins" inside the .butler.json file and create a for loop in the .gitlab-ci.yml file to install the plugins?

You can't create a loop in .gitlab-ci.yml since YAML is not a programming language; it only describes data. You could use a tool like jq to query your values (cat .butler.json | jq '.composer_require') inside a script, but you cannot set job variables from there (there is a feature request for it).
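For illustration, a rough sketch of that jq approach inside the job's script (this assumes jq is available in the runner image and that every entry in both lists can be installed with composer require):
script:
  - |
    for plugin in $(jq -r '.composer_require[], .local_plugins[]' .butler.json); do
      docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require "$plugin"
    done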
You could use a templating engine like Jinja (which is often used with YAML, e.g. by Ansible and SaltStack) to generate your .gitlab-ci.yml from a template. There is a command-line tool, j2cli, which takes variables as JSON input; you could use it like this:
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
You could then use Jinja expressions to loop over your data and create the corresponding YAML in gitlab-ci.yml.j2:
{% for item in composer_require %}
# build your YAML
{% endfor %}
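For instance, a hedged sketch of the relevant part of gitlab-ci.yml.j2, reusing the docker-compose command from the question (the surrounding job definition is omitted; j2cli exposes the top-level JSON keys as template variables):
{% for item in composer_require %}
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require {{ item }}
{% endfor %}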
The drawback is that you need the processed .gitlab-ci.yml checked in to your repository. This can be done via a pre-commit hook (before each commit, regenerate the .gitlab-ci.yml file and, if it changed, commit it along with your other changes).
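A minimal sketch of such a hook (.git/hooks/pre-commit, assuming j2cli is installed locally; remember to make the hook executable):
#!/bin/sh
# regenerate .gitlab-ci.yml from the template and stage it if it changed
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
git add .gitlab-ci.yml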

Related

GitHub Action run - security import is showing "One or more parameters passed to a function were not valid" error

I built the input file (decoded a base64 file into a .p12 file) as CERTIFICATE_PATH, P12_PASSWORD is the password stored as a secret, and KEYCHAIN_PATH is defined. When I run the command on the CLI, I get a "1 item imported" success message, but when I run it from a *.yml file in a GitHub Action, I get a "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." error. Any suggestions?
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
CERTIFICATE_PATH - file that contains the cert.p12 data
KEYCHAIN_PATH - TEMP/app-signing.keychain-db
Another reason in GitHub Actions could be that you are using the wrong environment.
Take a look at this: Difference between GitHub's "Environment" and "Repository" secrets?
Set the right environment:
environment: production
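A rough sketch of where that key lives in the workflow file (the job name and runner image are assumptions):
jobs:
  build:
    runs-on: macos-latest
    environment: production   # makes the environment-scoped secrets available to this job
    steps:
      # ... decode the certificate and run the security import step here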
Found the issue: I was passing the wrong cert file. Once I added the correct file in the security build, I was able to get it working.
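For reference, a hedged sketch of the decode-then-import step this implies; the secret names and $RUNNER_TEMP paths are assumptions, not taken from the question, and the keychain at KEYCHAIN_PATH is assumed to have been created earlier in the workflow:
- name: Import signing certificate
  env:
    BUILD_CERTIFICATE_BASE64: ${{ secrets.BUILD_CERTIFICATE_BASE64 }}
    P12_PASSWORD: ${{ secrets.P12_PASSWORD }}
  run: |
    CERTIFICATE_PATH=$RUNNER_TEMP/build_certificate.p12
    KEYCHAIN_PATH=$RUNNER_TEMP/app-signing.keychain-db
    # the file passed to security import must be the decoded .p12, not the base64 text
    echo -n "$BUILD_CERTIFICATE_BASE64" | base64 --decode -o "$CERTIFICATE_PATH"
    security import "$CERTIFICATE_PATH" -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k "$KEYCHAIN_PATH"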

Packer HCL2 config file support

In https://packer.io/guides/hcl/from-json-v1/, it says
Note: Starting from version 1.5.0 Packer can read HCL2 files.
And my packer is packer_1.5.5_linux_amd64.zip, which is supposed to be able to read HCL2 files. However, when I tried it, I got
$ packer build -only=docker hcl-example
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
==> Builds finished but no artifacts were created.
$ packer build -h
Usage: packer build [options] TEMPLATE
  Will execute multiple builds in parallel as defined in the template.
  The various artifacts created by the template will be outputted.
Options:
  -color=false                  Disable color output. (Default: color)
  -debug                        Debug mode enabled for builds.
  -except=foo,bar,baz           Run all builds and post-processors other than these.
  -only=foo,bar,baz             Build only the specified builds.
  -force                        Force a build to continue if artifacts exist, deletes existing artifacts.
  -machine-readable             Produce machine-readable output.
  -on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask.
  -parallel=false               Disable parallelization. (Default: true)
  -parallel-builds=1            Number of builds to run in parallel. 0 means no limit (Default: 0)
  -timestamp-ui                 Enable prefixing of each ui output with an RFC3339 timestamp.
  -var 'key=value'              Variable for templates, can be used multiple times.
  -var-file=path                JSON file containing user variables. [ Note that even in HCL mode this expects file to contain JSON, a fix is coming soon ]
and I don't see any switch above to enable HCL2 mode.
What am I missing here?
$ packer version
Packer v1.5.5
$ cat hcl-example
# the source block is what was defined in the builders section and represents a
# reusable way to start a machine. You build your images from that source.
source "amazon-ebs" "example" {
  ami_name      = "packer-test"
  region        = "us-east-1"
  instance_type = "t2.micro"
}
[UPDATE:]
To address Matt's comment/concern, I've changed the content of hcl-example to the whole list in https://packer.io/guides/hcl/from-json-v1/, and
$ mv hcl-example hcl-example.hcl
$ packer validate hcl-example.hcl
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
Naming it with the .pkr.hcl extension solved the problem.
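In other words (Packer 1.5 only parses files with the .pkr.hcl extension as HCL2; anything else falls back to the legacy JSON parser), roughly:
$ mv hcl-example hcl-example.pkr.hcl
$ packer validate hcl-example.pkr.hcl
$ packer build hcl-example.pkr.hcl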

How to attach volume to pod's post start life cycle hook?

The use case I'm trying out is a way to initialize the postgres database after it starts up. I saw the post-start hooks in the OpenShift pod lifecycle. I can't pass the SQL statements using a here-document or on the command line (the Docker command fails due to a max-length issue).
So I'm looking for an option to save the SQL statements in a file via a ConfigMap and attach it to the post-hook container before it starts, so that the psql command can execute it. I couldn't see a way to attach the volume from the DeploymentConfig in the official documentation. Is there any way I can do it?
Document I referred to: openshift-doc
I found a workaround to pass the long SQL statements to the post life-cycle pods.
Set the SQL statements in a DeploymentConfig ENV variable. These ENV variables are also accessible inside the life-cycle pods, so we can simply run the command below:
post:
  failurePolicy: Abort
  execNewPod:
    command:
      - /bin/bash
      - '-c'
      - >-
        echo $INIT_SQL_STATEMENTS | psql "sslmode=allow
        host=postgres user=postgres password=postgres"
    containerName: postgres
.....
env:
  - name: POSTGRESQL_ADMIN_PASSWORD
    value: postgres
  - name: INIT_SQL_STATEMENTS
    value: >-
      create user haridas with encrypted password 'haridas';...
Another option which I've employed in the past is to pass the SQL statements in a parameters file. This then allows you to more easily keep the SQL commands under configuration management (e.g. check them into git) and declutter your deployment configuration (DC). Here is what I did:
Move your post hook to the DC portion of the template file. Let me know if you need steps on how to export, modify, and re-import a template file, but I didn't want to over-complicate this procedure unnecessarily.
Add a parameter to the template file called SQL_COMMANDS like this:
parameters:
  - description: The SQL commands to run.
    displayName: SQL commands
    name: SQL_COMMANDS
    required: true
In the post hook code of the template (DC section) run the SQL_COMMANDS like this:
execNewPod:
  command:
    - /bin/sh
    - -c
    - echo "${SQL_COMMANDS}" | psql -h ${DATABASE_SERVICE_NAME} -U ${POSTGRESQL_USER} -d ${POSTGRESQL_DATABASE};
Note the other variables in the command are also passed in as parameters.
Create a parameters file similar to this:
POSTGRESQL_USER=postgres
POSTGRESQL_PASSWORD=somepassword
POSTGRESQL_DATABASE=myDatabase
SQL_COMMANDS="CREATE TABLE Configuration(CONFIGURATION_ID character varying(255)
NOT NULL, description character varying(255)
NOT NULL, key character varying(255) NOT NULL, value text NOT NULL,
PRIMARY KEY (CONFIGURATION_ID) ); INSERT INTO Configuration
(CONFIGURATION_ID, description, key, value) VALUES ('10', ... etc."
Deploy your app using the template and pass in the parameters from the file:
oc new-app <template name> --param-file=ParametersFile.txt

Using environment properties with files in Elastic Beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs" :
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so will require me to manually set owner, group, file permissions, etc. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this? I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, S3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars" :
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp
commands:
  # commands execute after files, per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container_commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least in 2018, and at least since 2016. You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON, or as YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(source /opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance with the code on git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs" :
    mode: "000640"
    owner: root
    group: root
container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat as sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.

How to solve %GTM-E-GDINVALID, Unrecognized Global Directory file format: mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006?

I am getting this error with GT.M:
%GTM-E-GDINVALID, Unrecognized Global Directory file format: /home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006
Here is what I did so far:
get the version http://sourceforge.net/projects/fis-gtm/
tar -xzf gtm_V55000_linux_i686_pro.tar.gz
chmod +x semstat2 mupip mumps lke gtmsecshr gtcm_shmclean gtcm_server gtcm_play gtcm_pkdisp gtcm_gnp_server geteuid ftok dse
Now we start like this in Bash:
mkdir example; cd example
...and invoke mumps from the parent dir:
../mumps -r GDE
The output is this:
%GDE-I-GDUSEDEFS, Using defaults for Global Directory
/home/blah/gt.m/example/mumps.gld
Now we set the working dir to create the gld file.
GDE> change -s DEFAULT -f=/home/blah/gt.m/gt.m/example/
GDE> exit
The output from the command is this :
>%GDE-I-VERIFY, Verification OK
>%GDE-I-GDCREATE, Creating Global Directory file
> /home/blah/gt.m/example/mumps.gld
Now this creates a v6 version of gld, which mupip does not like:
strings mumps.gld | head -1
Which contains this string:
GTCGBDUNX006H
But mupip expects a 7 not a 6!
../mupip create
>%GTM-E-GDINVALID, Unrecognized Global Directory file format: >/home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006
If I just edit the file and replace the 6 with a 7,
../mupip create
This works!
Now I have a .dat file, and go to GTM to save something:
GTM>s ^foo("blah")=1
%GTM-E-GDINVALID, Unrecognized Global Directory file format: >/home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX006, found: GTCGBDUNX007
Oh, so that wants a v6. Good thing I backed up the old one; I replace it.
GTM>s ^foo("blah")=1
That works.
GTM>zwr ^foo(*)
>^foo("blah")=1
So the data is stored.
Can anyone please explain this? In detail? Why does mupip operate with a different version number?
Note, I did not run any other commands; I am just learning and don't want to execute, as root, any huge install routines that I don't understand.
In your steps you don't show whether you installed GT.M or not.
That is only the unzipped version; first:
chmod 777 configure
./configure
The installation will produce new files in the gtm_dist directory.
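After the install, the environment still needs to point at the new distribution. A hedged sketch, assuming GT.M was installed into /opt/fis-gtm/V5.5-000 when ./configure asked for a target directory:
source /opt/fis-gtm/V5.5-000/gtmprofile   # sets $gtm_dist, $gtmgbldir, $gtmroutines, ...
env | grep gtm                            # should now list the GT.M environment variables
$gtm_dist/mumps -r GDE                    # run GDE from the installed distribution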
You either have GT.M already installed (and I would guess it is an older version) somewhere else on your system and have some environment variable defined for it in your bash/tcsh/*sh environment, or you didn't provide all the steps you took to get to that error.
My guess is that you already have GT.M installed somewhere and your commands above use part of that installation. You can easily verify this using this command: env | grep gtm.
If I follow your steps mentioned above, I get this result:
laurent@laurent /tmp/test $ tar -zxf ~/Projects/gtm_V55000_linux_i686_pro.tar.gz
laurent@laurent /tmp/test $ chmod +x semstat2 mupip mumps lke gtmsecshr gtcm_shmclean gtcm_server gtcm_play gtcm_pkdisp gtcm_gnp_server geteuid ftok dse
laurent@laurent /tmp/test $ mkdir example; cd example
laurent@laurent /tmp/test/example $ ../mumps -r GDE
%GTM-E-GTMDISTUNDEF, Environment variable $gtm_dist is not defined
So, as I said, you either did something else or have a different GT.M version already installed, and this is why some commands expect different versions of the GLD.
As Bhaskar has noted in your cross post on Hardhats, make sure you follow the installation instructions for GT.M. Instructions can be found in Chapter 2 of the UNIX Administration and Operations Guide.