I'd like to use conditional statements in the packer template at the "provisioners" stage.
"provisioners": [
{
"execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
"override": {
"virtualbox-iso": {
"scripts": [
"scripts/base.sh",
"scripts/puppet.sh",
]
}
},
"type": "shell",
}
]
For instance, if the user somehow specifies a "puppet" parameter on the "packer build" command line, then "scripts/puppet.sh" will be executed; otherwise it will be skipped.
How can I do that?
I don't think that this is possible with Packer's native template format, because Packer uses JSON for configuration, which as far as I know does not support flow-control mechanisms like conditional statements. But it should be possible to achieve similar behaviour with user variables and the shell provisioner.
The idea
The easiest way should be to set a user variable from the build command and pass this variable from Packer to the shell provisioner script, which detects the value of the user variable and calls the appropriate provisioner script, e.g. puppet, salt, ...
Disclaimer:
I didn't test this approach, but it should give you a hint to what I mean and maybe you can come up with an even better solution. ;-)
The problem solving approach
1. Define the variable which indicates the used provisioner:
There are multiple ways to define a user variable:
by calling the packer build command with the -var flag (see the end of this answer)
define the user variables in the box-template file
The packer box template file: **template.json**
"variables": [
"using_provision_system": "puppet"
]
"provisioners": [
{...}
define a variable definition file and specify the path to it in the build command with -var-file
The variable file: **variables.json**
This is a great alternative if you want to define variables in a separate file.
{
"using_provision_system": "puppet"
}
2. Calling the shell script which calls the provisioner scripts:
Now modify the execute_command in a way that the 'master' script is called with the defined variable as argument.
"provisioners": [
{
"execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}' '{{user `using_provision_system`}}'",
"override": {
"virtualbox-iso": {
"scripts": [
call_provisioner_script.sh
]
}
},
"type": "shell",
}
]
Notice that we only need to specify one script. This one 'master' script takes the passed variable as an argument and compares its value against some predefined provisioner names in a case statement.
(Short: It chooses which provisioner scripts will be executed.)
master provision script: call_provisioner_script.sh
#!/bin/sh
# $1 holds the value of `using_provision_system` passed via execute_command
case "$1" in
  puppet*) sh puppet_provisioner.sh ;;
  *)       sh shell_provisioner.sh ;;
esac
Take care!
Because this script will run inside your box, you might need to upload the scripts to the box before this command gets executed.
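For instance, a file provisioner placed before the shell provisioner could upload the scripts. This is only a sketch; the paths are illustrative:

```json
{
  "type": "file",
  "source": "scripts/",
  "destination": "/tmp/scripts"
}
```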
3. Last step is to build your box :)
Calling Packer's build command:
#define variable in build command (first option from above)
$ packer build -var 'using_provision_system=puppet' template.json
#define variables in variable file and set path in build command
$ packer build -var-file=variables.json template.json
Instead of using user variables, there may also be a way to set environment variables in your box and use that type of variable to specify the provisioner.
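As a sketch of that idea, the shell provisioner has an environment_vars option, so the user variable could be exposed to the script as an environment variable instead of a positional argument (untested; names taken from the example above):

```json
{
  "type": "shell",
  "environment_vars": ["USING_PROVISION_SYSTEM={{user `using_provision_system`}}"],
  "scripts": ["scripts/call_provisioner_script.sh"]
}
```

The master script would then read $USING_PROVISION_SYSTEM rather than $1.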
An alternative way is to define 2 different builders which are of the same type but having different names. You can then exclude a provisioning step from a specific build using the only field:
{
"source": "foo.txt",
"destination": "/opt/foo.txt",
"type": "file",
"only": ["docker-local"]
}
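For example, two builders of the same type distinguished only by name might look like this (a sketch; the docker builder settings are placeholders):

```json
"builders": [
  {
    "type": "docker",
    "name": "docker-local",
    "image": "ubuntu:16.04",
    "commit": true
  },
  {
    "type": "docker",
    "name": "docker-remote",
    "image": "ubuntu:16.04",
    "commit": true
  }
]
```

Running packer build -only=docker-remote template.json would then skip the file provisioner, since its only list names docker-local.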
Related
I'm trying to write a shell script that passes an env variable into a .conf file so that I can manipulate the log_file and log_level keys programmatically.
Actual file as station.conf
{
"SX1301_conf": {
"lorawan_public": true,
"clksrc": 1,
"radio_0": {
"type": "SX1257",
"rssi_offset": -166.0,
"tx_enable": true,
"antenna_gain": 0
},
"radio_1": {
"type": "SX1257",
"rssi_offset": -166.0,
"tx_enable": false
}
},
"station_conf": {
"log_file": "stderr",
"log_level": "DEBUG",
/* XDEBUG,DEBUG,VERBOSE,INFO,NOTICE,WARNING,ERROR,CRITICAL */
"log_size": 10000000,
"log_rotate": 3,
"CUPS_RESYNC_INTV": "1s"
}
}
I wanted to test manually before passing shell variables, so I tried jq '".station_conf.log_level="ERROR"' station.conf, but I keep getting errors, including shell quoting errors and invalid numeric literal errors (which, btw, seems to be an open bug: https://github.com/stedolan/jq/issues/501)
Any tips on how to do this? Ideally I'd be able to replace log_level value with a $LOG_LEVEL from my env. Thanks!
Assuming the input is valid JSON, for robustness, you could start with:
jq '.station_conf.log_level="ERROR"' station.conf
To pass in a shell variable, consider:
jq --arg v "$LOG_LEVEL" '.station_conf.log_level = $v' station.conf
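jq can also read exported environment variables directly through its env object, which avoids --arg entirely. A minimal sketch; the file created here is a trimmed, comment-free stand-in for station.conf:

```shell
# Trimmed stand-in for station.conf (valid JSON, no comments)
printf '{"station_conf":{"log_file":"stderr","log_level":"DEBUG"}}' > station.conf

# jq exposes exported environment variables under `env`
export LOG_LEVEL=ERROR
jq '.station_conf.log_level = env.LOG_LEVEL' station.conf > station.conf.tmp \
  && mv station.conf.tmp station.conf
```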
You are getting the invalid numeric literal error because at least your example input is not valid JSON. As you can see, it contains /* comment */, which jq does not support. You have several options here:
keep using jq and make your input files valid JSON.
use another tool instead of jq that supports comments and/or other non-standard features.
If you choose the second way, i.e. a different tool, you can find some alternatives either on the jq wiki (https://github.com/stedolan/jq/wiki/FAQ#processing-not-quite-valid-json) or there is also scout (https://github.com/ABridoux/scout).
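A third option is to strip the /* ... */ comments before piping to jq. This is a rough sketch that assumes comments sit on a single line, contain no asterisks, and never appear inside strings:

```shell
# Sample input with a C-style comment, as in the question
printf '%s\n' '{"station_conf":{"log_level":"DEBUG", /* comment */ "log_rotate":3}}' > station.conf

# Remove /* ... */ comments, then edit the now-valid JSON with jq
sed 's|/\*[^*]*\*/||g' station.conf | jq '.station_conf.log_level = "ERROR"'
```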
I have a Snakemake workflow where one of the top-level config entries is an array of variable size (in this particular example, a sibling may or may not be included in the analysis). Currently I'm using the following config file.
{
"case": "/scratch/standage/12175/BAMs/12175.proband.bam",
"controls": [
"/scratch/standage/12175/BAMs/12175.mother.bam",
"/scratch/standage/12175/BAMs/12175.father.bam"
]
}
I know snakemake allows one to specify config options on the command line with the --config flag. Since the case value is a single string, this is trivial to do on the command line. But what about the controls value(s)? Is it possible to pass an array/list of values as one of the config options on the command line?
Is it possible to pass an array/list of values as one of the config options on the command line
I doubt that is directly possible, but you could pass a quoted string of space (or comma or whatever) separated values that you split to list inside the Snakefile:
snakemake -C controls='control1 control2 ...'
Then inside the Snakefile:
controls = config['controls'].split(' ')
An alternative solution would be to pass variables on the command line like so...
snakemake --config case=proband.bam control1=mother.bam control2=father.bam
...and then to parse the configuration settings dynamically in the Snakefile. For example, any config key matching the regular expression control\d+ corresponds to a control sample.
So it's possible, but a bit of a stretch, and the config file is probably the better/cleaner option.
First off, I am not an expert with JSON files or with JQ. But here's my problem:
I am simply trying to download card data (for the MtG card game) through an API, so I can use it in my own spreadsheets etc.
The card data from the API comes in pages, since there is so much of it, and I am trying to find a nice command line method in Windows to combine the files into one. That will make it nice and easy for me to use the information as external data in my workbooks.
The data from the API looks like this:
{
"object": "list",
"total_cards": 290,
"has_more": true,
"next_page": "https://api.scryfall.com/cards/search?format=json&include_extras=false&order=set&page=2&q=e%3Alea&unique=cards",
"data": [
{
"object": "card",
"id": "d5c83259-9b90-47c2-b48e-c7d78519e792",
"oracle_id": "c7a6a165-b709-46e0-ae42-6f69a17c0621",
"multiverse_ids": [
232
],
"name": "Animate Wall",
......
},
{
"object": "card",
......
}
]
}
Basically I need to take what's inside the "data" part from each file after the first, and merge it into the first file.
I have tried a few examples I found online using jq, but I can't get it to work. I think it might be because in this case the data is sort of under an extra level, since there is some basic information, then the "data" category is beneath it. I don't know.
Anyway, any help on how to get this going would be appreciated. I don't know much about this, but I can learn quickly so even any pointers would be great.
Thanks!
To merge the .data elements of all the responses into the first response, you could run:
jq 'reduce inputs.data as $s (.; .data += $s)' page1.json page2.json ...
Alternatives
You could use the following filter in conjunction with the -n command-line option:
reduce inputs as $s (input; .data += ($s.data))
Or if you simply want an object of the form {"data": [ ... ]} then (again assuming you invoke jq with the -n command-line option) the following jq filter would suffice:
{data: [inputs.data] | add}
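Put together with sample data, an invocation might look like this (pages trimmed to the relevant field; the card names are just placeholders):

```shell
# Two minimal response pages; everything except "data" is elided
printf '{"object":"list","data":[{"name":"Animate Wall"}]}' > page1.json
printf '{"object":"list","data":[{"name":"Armageddon"}]}'  > page2.json

# -n stops jq from consuming the first file implicitly, so `inputs`
# yields every page; `add` concatenates the collected data arrays
jq -n '{data: [inputs.data] | add}' page1.json page2.json
```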
Just to provide closure, @peak provided the solution. I am using it in conjunction with the method found here for using wildcards in batch files to address multiple files. The code looks like this now:
set expanded_list=
for /f "tokens=*" %%F in ('dir /b /a:-d "All Cards\!setname!_*.json"') do call set expanded_list=!expanded_list! "All Cards\%%F"
jq-win32 "reduce inputs.data as $s (.; .data += $s)" !expanded_list! > "All Cards\!setname!.json"
All the individual pages for each card set are named "setname"_"pagenumber".json
The code finds all the pages for each set and combines them into one variable which I can pass into jq.
Thanks again!
I have what seems to be like a valid use case for an unsupported - afaik - scenario, using packer.io and I'm worried I might be missing something...
So, in packer, I can add:
many builders,
have a different name per builder,
use the builder name in the only section of the provisioners and finally
run packer build -only=<builder_name> to effectively limit my build to only the provisioners combined with the specific builder.
This is all fine.
What I am now trying to do, is use the same base image to create 3 different builds (and resulting AMIs). Obviously, I could just copy-paste the same builder config 3 times and then use 3 different provisioners, linking each to the respective builder, using the only parameter.
This feels totally wasteful and very error prone though... It sounds like I should be able to use the same builder and just limit which provisioners are applied .. ?
Is my only solution to use 3 copy-pasted builders? Is there any better solution?
I had the same issue, where I want to build 2 different AMIs (one for staging, one for production) and the only difference between them is the ansible group to apply during provisioning. Building off the answer by @Rickard ov Essen, I wrote a bash script using jq to duplicate the builder section of the config.
Here's my packer.json file:
{
"builders": [
{
"type": "amazon-ebs",
"name": "staging",
"region": "ap-southeast-2",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*",
"root-device-type": "ebs"
},
"owners": ["099720109477"],
"most_recent": true
},
"instance_type": "t2.nano",
"ssh_username": "ubuntu",
"force_deregister": true,
"force_delete_snapshot": true,
"ami_name": "my-ami-{{ build_name }}"
}
],
"provisioners": [
{
"type": "ansible",
"playbook_file": "provisioning/site.yml",
"groups": ["{{ build_name }}"]
}
]
}
The ansible provisioner uses the variable build_name to choose which ansible group to run.
Then I have a bash script build.sh which runs the packer build:
#!/bin/bash
jq '.builders += [.builders[0] | .name = "production"]' packer.json > packer_temp.json
packer build packer_temp.json
rm packer_temp.json
You can see what the packer_temp.json file looks like on this jqplay.
If you need to add more AMIs you can just keep adding more jq filters:
jq '.builders += [.builders[0] | .name = "production"] | .builders += [.builders[0] | .name = "test"]' packer.json > packer_temp.json
This will add another AMI for test.
only filters on the builder name, so that is not an option.
You could solve this with any of these approaches:
Preprocess a json and create 3 templates from one.
Use a template with a user variable defining which build it is and build 3 times. Use conditions on the variable in you scripts to run the correct scripts.
Build a base AMI with the common parts of the template and then run 3 different builds on that provisioning the differences.
In general, Packer tries to solve one thing well; by not including an advanced DSL for describing different build flavours, the scope decreases. It's easy to preprocess and create JSON for more advanced use cases.
I want to use different flags (sourcemap, out, target) that the TypeScript compiler provides. I am trying to define a build system in Sublime Text 2 but am unable to do so.
Have already read this question.
Basically I want to do something like the following:
tsc src/main/ts/myModule.ts --out src/main/js/myModule.js --sourcemap --target ES5
Just add them to the cmd array
{
"cmd": ["tsc","$file", "--out", "src/main/js/myModule.js"],
"file_regex": "(.*\\.ts?)\\s\\(([0-9]+)\\,([0-9]+)\\)\\:\\s(...*?)$",
"selector": "source.ts",
"osx": {
"path": "/usr/local/bin:/opt/local/bin"
}
}
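With all three flags from the question added to the cmd array, the build system might look like this (untested sketch; the output path is taken from the question):

```json
{
  "cmd": ["tsc", "$file", "--out", "src/main/js/myModule.js", "--sourcemap", "--target", "ES5"],
  "file_regex": "(.*\\.ts?)\\s\\(([0-9]+)\\,([0-9]+)\\)\\:\\s(...*?)$",
  "selector": "source.ts",
  "osx": {
    "path": "/usr/local/bin:/opt/local/bin"
  }
}
```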
First of all, let me say that I'm using Sublime Text 3 on Windows and TypeScript 1.0.
I don't think that Sublime Text 2 is much different, though...
If you're on similar conditions, take a look at my current configuration file:
{
"cmd": ["tsc", "$file"],
"file_regex": "(.*\\.ts?)\\s*\\(([0-9]+)\\,([0-9]+)\\)\\:\\s(.+?)$",
"selector": "source.ts",
"windows": {
"cmd": ["tsc.cmd", "$file", "--target", "ES5"]
}
}
Please notice that I tweaked the regex so that it matches the TSC error format (and brings you to the line containing the error when you double click it from the error log...)
Besides that, I think the command line that actually gets run is the lower one: as a matter of fact, I only got it working by placing the options down there... (in this specific case I'm asking for an ES5 compilation target; your parameters will differ).
This assumes you have tsc.cmd available on your PATH; if not, put the full path of tsc.cmd or tsc.exe instead of "tsc.cmd" and be sure to escape backslashes \ as \\...
This works in my situation; maybe in other contexts the options should also be placed on the first line...
Hope this helps :)