Prevent Packer printing output for certain inline scripts?

I am running a bunch of Packer scripts, but some of them generate too much output for the logs and it's getting really annoying. Is there any way I can change my JSON file so that I can disable output for one of these shell scripts in Packer?
One example of my packer shell script calls that I'd like silenced:
{
"type": "shell",
"scripts": [
"scripts/yum_install_and_update"
"scripts/do_magic"
]
}

Packer doesn't natively support this, but if you can modify the scripts you could have them internally suppress the output.
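For example, assuming scripts/yum_install_and_update is a bash script you control (the commands below are only placeholders), it could redirect its own stdout so that only errors reach Packer's log:
#!/bin/bash
# Discard regular output for the rest of the script; stderr is left
# untouched so real errors still show up in the Packer output.
exec 1>/dev/null

yum -y -q update
yum -y -q install httpd
Redirecting to a log file on the build machine instead of /dev/null also works if you'd rather keep the output around.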

Specifying input.json file from another directory for use in opa test

I am new to opa/rego and have an opa test that I would like to run. Within that test .rego file, I would like to use an input.json file from a different directory. Is there a way to specify that file within a "with input as _____" statement within the test file? i.e.
test_allow {
allow with input as <path-to-file>
}
My thoughts so far have led me to trying the -b option, but the directories are pretty far apart and I do not want a bundle that large and interdependent. Additionally, I have thought about import statements, but the opa test subcommand does not have an "-i" option. I've also tried specifying each file (the .rego policy file, the .rego test file, and an input file) in the opa test subcommand, to no avail.
Any help is greatly appreciated
OPA, and by extension the OPA test runner, doesn't really consider files at the time of policy evaluation. When OPA starts, all the files/directories pointed at by the command are merged under the data attribute, and may then be referenced by their path, e.g. data.my-mocks.mock1, and so on.
If you want to include "test data" in your tests, you could keep those files in a directory included when running the opa test command. Since all data files are merged though, you'll need to ensure there aren't conflicting paths in those files. This is commonly accomplished by using a unique "top level" attribute per item. Something like:
{
"mock1": {
"item": "here"
},
"mock2": {
"item": "here"
}
}
You may then reference this in your tests like you suggested:
test_allow {
allow with input as data.mock1.item
}
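If it helps, the opa test invocation might then look something like this (the directory names are purely illustrative, assuming the policy, its tests, and the mock data each live in their own directory):
opa test policies/ tests/ mocks/ -v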

Hashicorp Packer: ways to output a variable/local string value to a file

I have some Packer templates that generate the content for configuration files, which I then need to write out to actual files. The end goal is to upload these files to the remote machine and then use the shell provisioner, but I can't seem to figure out the correct way of doing this. My current solution relies on a local shell provisioner to write the files, then I upload them to the remote machine, and then run the remote provisioner.
Something like,
locals {
foo = "bar"
foo_generated = templatefile("${path.root}/template-which-uses-foo.pkrtpl", { foo = local.foo })
}
provisioner "shell-local" {
inline = [
"cat >${path.root}/generated/foo.conf <<'STR'\n${local.foo_generated}"
]
}
provisioner "file" {
source = "${path.root}/generated/"
destination = "/tmp/"
}
provisioner "shell" {
inline = [
"/tmp/do-something-with-foo-conf.sh",
]
}
While this works, the file generation looks very awkward, and I want to simplify it and make it more robust.
I initially started by defining a source per configuration file (there are many of them), in addition to the "base" EC2 source. However, from the logs it looked like Packer runs the provisioners for every source inside the build block, so it didn't seem like a good idea.
Are there better options to accomplish this?

Insert header for each document before uploading to elastic search

I have an ndjson file with the format below:
{"field1": "data1" , "field2": "data2"}
{"field1": "data1" , "field2": "data2"}
....
I want to add a header like
{"index": {}}
before each document before using the bulk operation
I found a similar question: Elasticsearch Bulk JSON Data
The solution is this jq command:
jq -cr ".[]" input.json | while read line; do echo '{"index":{}}'; echo $line; done > bulk.json
But I get this error:
'while' is not recognized as an internal or external command
What am I doing wrong? I'm running Windows.
Or is there a better solution?
Thanks
The while in your sample is a construct that is built into developer-friendly shells such as sh, bash, or zsh, but that Windows doesn't provide out of the box. See the bash docs, for example.
So if this is a one-time thing, probably the fastest solution is to just use some text editor and add the required action lines by using some multi-cursor functionality.
On the other hand, if you are restricted to Windows but want a more capable shell for this kind of task, have a look at the cmder project, whose full version (packaged with git-for-windows) brings a bash environment to your Windows desktop. That would let you use such scripting features even outside a Linux or Mac environment.
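Alternatively, once a bash-like shell (or any shell where the quoting below works) is available, you can avoid the while loop entirely by emitting the action line from within jq. Assuming the file really is newline-delimited JSON, one object per line (the file name is just an example):
jq -c '{"index":{}}, .' input.ndjson > bulk.json
If the input is a single JSON array instead, the same idea works as jq -c '.[] | {"index":{}}, .' input.json > bulk.json.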

How can I find out what is contained in a saved Docker image tar file?

Let's say that I have saved one or more Docker images to a tar file, e.g.
docker save foo:1.2.0 bar:1.4.5 bar:latest > images.tar
Looking at the tar file, I can see that in addition to the individual layer directories, there is a manifest.json file that contains some meta information about the archive's contents, including a RepoTags array for each image:
[{
"Config": "...",
"Layers": [...],
"RepoTags": [
"foo:1.2.0"
]
},
{
"Config": "...",
"Layers": [...],
"RepoTags": [
"bar:latest",
"bar:1.4.5"
]
}]
Is there an easy way to extract that info from the tar file, e.g. through a Docker command - or do I have to extract the manifest.json file, run it through a tool like jq and then collect the tag info myself?
The purpose is to find out what is contained in the archive before importing/loading it on a different machine. I imagine that there must be some way to find out what's in the archive...
Since I have not received any answers so far, I'll try to answer it myself with what I have tried, which is working for me.
Using a combination of tar and jq, I came up with this command:
tar -xzOf images.tar.gz manifest.json | jq '[.[] | .RepoTags] | add'
This will extract the manifest.json file to stdout and pipe it into jq, which combines the various RepoTags arrays into a single array:
[
"foo:1.2.0",
"bar:1.4.5",
"bar:latest"
]
The result is easy to read and works for me. The only downside is that it requires jq to be installed. I would love to have something that works without dependencies.
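One note on the command above: the z flag is only there because my archive happens to be gzipped. For a plain tar file, such as the one produced by the docker save command in the question, the same approach should work without it:
tar -xOf images.tar manifest.json | jq '[.[] | .RepoTags] | add'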
Still looking for a better answer, don't hesitate to post an answer if you have something that's easier to use!

Import JSON array in CouchBase

I want to use CouchBase to store lots of data. I have that data in the form:
[
{
"foo": "bar1"
},
{
"foo": "bar2"
},
{
"foo": "bar3"
}
]
I have that in a json file that I zipped into data.zip. I then call:
cbdocloader.exe -u Administrator -p **** -b mybucket C:\data.zip
However, this creates a single item in my bucket, not three as I expected. This actually makes sense, as I should be able to store arrays and I did not "tell" Couchbase to expect multiple items instead of one.
The temporary solution I have is to split every item into its own JSON file, then add all of them to a single zip file and call cbdocloader again. The problem is that I might have lots of these entries, and creating all the files might take too long. Also, I saw in the docs that cbdocloader uses the filename as the key, which might be problematic in my case...
I have obviously missed a step somewhere, but couldn't find what it is in the documentation. How should I format my JSON file?
You haven't missed any steps. The cbdocloader script is very limited at the moment. Couchbase will be adding cbimport and cbexport tools in the near future that will allow you to load JSON files in various formats (including the one you mentioned). In the meantime you will need to stick with the workaround you are already using to get your data loaded.
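Until those tools are available, one way to speed up generating the individual files is to split the array with jq and a small shell loop. A rough sketch, assuming a bash-like shell (e.g. Git Bash on Windows) with jq and zip installed, and with purely illustrative file names:
# Write each element of the array to its own numbered JSON file,
# then zip them up for cbdocloader. The file name becomes the document key.
i=0
jq -c '.[]' data.json | while read -r doc; do
  printf '%s\n' "$doc" > "doc_${i}.json"
  i=$((i+1))
done
zip data.zip doc_*.json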