Specifying input.json file from another directory for use in opa test - json

I am new to opa/rego and have an opa test that I would like to run. Within that test .rego file, I would like to use an input.json file from a different directory. Is there a way to specify that file within a "with input as _____" statement within the test file? i.e.
test_allow {
    allow with input as <path-to-file>
}
My thoughts so far have led me to trying the -b option, but the directories are pretty far apart and I do not want a bundle that large and dependent. Additionally, I have thought about import statements, but there is no "-i" option on the opa test subcommand. I've also tried specifying each file (the .rego policy file, the .rego test file, and an input file) on the opa test command line, to no avail.
Any help is greatly appreciated

OPA, and by extension the OPA test runner, doesn't really consider files at the time of policy evaluation. When OPA starts, all the files/directories pointed out by the command are merged under the data attribute, and may then be referenced by their path, e.g. data.my-mocks.mock1, and so on.
If you want to include "test data" in your tests, you could keep those files in a directory included when running the opa test command. Since all data files are merged though, you'll need to ensure there aren't conflicting paths in those files. This is commonly accomplished by using a unique "top level" attribute per item. Something like:
{
    "mock1": {
        "item": "here"
    },
    "mock2": {
        "item": "here"
    }
}
You may then reference this in your tests like you suggested:
test_allow {
    allow with input as data.mock1.item
}
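For example, if the mock data above is saved in a file under a directory such as testdata/ (the directory and file names here are hypothetical), the policy, the tests, and the mock data can all be handed to the test runner in one invocation:

```shell
opa test ./policy ./policy_test.rego ./testdata
```

Any .json or .yaml file under the listed paths is merged into data and is then reachable from the tests by its path.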

Related

Filter out certain parts of a json file on git commit

I am looking to filter out specific parts of a JSON file so that the given part of the file does not get pulled into a git repository. My use case is that I am setting up a repository to keep some working files, including settings for VS Code. I have a plugin for window colors that sets different colors for different windows that are open. The current color is saved in the .vscode/settings.json file for that window.
I found that it is possible to use the .gitattributes file to apply a filter to a file or set of files, and then use git config to remove certain lines from what is committed, based on a sed command, per this previous question.
I would like to apply this to the "workbench.colorCustomizations" object within the following json file, so that this object does not get committed, while other settings in the file may be committed, such as the "editor.formatOnPaste" object. Does anyone know of a way to do this?
{
    "workbench.colorCustomizations": {
        "activityBar.background": "#102D56",
        "titleBar.activeBackground": "#173F79",
        "titleBar.activeForeground": "#F8FAFE"
    },
    "editor.formatOnPaste": true
}
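One way to implement the clean filter without wrangling sed against JSON is a small script that parses the file and drops the one object. This is a sketch; the filter name (stripcolors) and script path are hypothetical. Wire it up with git config filter.stripcolors.clean "python3 strip_colors.py --filter" and a .gitattributes line such as .vscode/settings.json filter=stripcolors.

```python
import json
import sys


def strip_key(text, key="workbench.colorCustomizations"):
    """Drop one top-level object from a JSON settings document."""
    settings = json.loads(text)
    settings.pop(key, None)  # no error if the key is absent
    return json.dumps(settings, indent=4) + "\n"


if __name__ == "__main__" and "--filter" in sys.argv:
    # git pipes the staged content through stdin and reads stdout
    sys.stdout.write(strip_key(sys.stdin.read()))
```

Note that a clean filter only changes what gets staged; the working-tree copy of settings.json keeps its colors.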

Does this JSON configuration workflow have a name?

I have a system where we collect a lot of JSON configuration from different parties to configure our overall service.
The repository looks like a directory of formatted JSON files. For example foo.json:
{
"id": "3bd0e397-d8cc-46ff-9e0d-26fa078a37f3",
"name": "Example",
"logo": "https://example/foo.png"
}
We have a pipeline whereby the owner of foo.json can overwrite this file by committing a new file at any time, since fast updates are required.
Unfortunately, however, we need to skip whole files or override some values for various $reasons.
Hence we commit something like touch foo.json.skip when we want the file to be skipped before publishing. Similarly, we have a foo.json.d/override.json to override, say, the logo because it's poorly formatted or something.
Is there a name for this sort of JSON pipeline? It's inspired by systemd configuration, but maybe systemd's configuration was inspired by something else?

Gateway rest API resource can't find the file I provide

resource "aws_api_gateway_rest_api" "api" {
  body = "${file("apigateway/json-resolved/swagger.json")}"
  name = "api"
}
---------------------------------------------------------------------------------
Invalid value for "path" parameter: no file exists at apigateway/json-resolved/swagger.json;
this function works only with files that are distributed as
part of the configuration source code,
so if this file will be created by a resource in this configuration you must
instead obtain this result from an attribute of that resource.
When I try to deploy my API by providing the actual path to the API JSON, this is what it throws. Even though the file is there, even though I tried different paths, from relative to absolute, etc. It works when I paste the entire JSON in the body, but not when I provide a file. Why is that?
Since Terraform is not aware of the location of the file, you should specify it explicitly:
1. If the file is in the same directory, use ./apigateway/json-resolved/swagger.json.
2. If the file is one directory up from the directory you are running Terraform from, use ../apigateway/json-resolved/swagger.json.
3. Alternatively, it is a good idea to use Terraform's built-in references for path manipulation: path.cwd, path.module, or path.root. A more detailed explanation of what these three represent can be found in [1].
4. Provide a full path to the file by running pwd in the directory where the file is located (this works on Linux and macOS) and paste the result into the file function's input.
Additionally, any combination of points 2. and 3. could also work, but you should be careful.
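Putting option 3. into the resource from the question, a sketch assuming swagger.json sits next to the module's .tf files (adjust the relative segment to your actual layout):

```hcl
resource "aws_api_gateway_rest_api" "api" {
  # path.module is the directory containing the current .tf files,
  # so the lookup no longer depends on where terraform is run from
  body = file("${path.module}/apigateway/json-resolved/swagger.json")
  name = "api"
}
```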
There is also another great answer to a similar question [2].
NOTE: in some cases the path.* references might not give expected results on Windows. As per this comment [3] from GitHub, if the paths are used consistently (i.e., all / or all \), Windows should also work with path.*, but only for Terraform versions >= 0.12. Based on the code snippet from the question, it seems an older version is used in this case.
[1] https://www.terraform.io/language/expressions/references#filesystem-and-workspace-info
[2] Invalid value for "path" parameter: no file exists at
[3] https://github.com/hashicorp/terraform/issues/14986#issuecomment-448756885

Hashicorp Packer: ways to output a variable/local string value to a file

I have some Packer templates which generate the content for configuration files which I then need to output to a configuration file. The end goal is to upload these files to the remote machine and then use the shell provisioner, but I can't seem to figure out the correct way of doing this. My current solution relies on a local shell provisioner to write the files, then I upload them to the remote, and then run the remote provisioner.
Something like,
locals {
  foo           = "bar"
  # templatefile takes a map of template variables as its second argument
  foo_generated = templatefile("${path.root}/template-which-uses-foo.pkrtpl", { foo = local.foo })
}

provisioner "shell-local" {
  inline = [
    "cat >${path.root}/generated/foo.conf <<'STR'\n${local.foo_generated}\nSTR"
  ]
}

provisioner "file" {
  source      = "${path.root}/generated/"
  destination = "/tmp/"
}

provisioner "shell" {
  inline = [
    "/tmp/do-something-with-foo-conf.sh",
  ]
}
While this works, the file generation looks very awkward, and I want to simplify it and make it more robust.
I initially started with defining sources for the configuration files (there are many of them), in addition to the "base" ec2 source. However, from the logs it looked like Packer runs all provisioners for each source inside the build block, so it didn't seem like a good idea.
Are there better options to accomplish this?
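One possible simplification: recent Packer versions (1.7.3 and later, if I recall correctly) let the file provisioner take an inline content argument instead of source, which would skip the local file generation and the shell-local step entirely. A sketch, assuming the same locals as above:

```hcl
provisioner "file" {
  # 'content' uploads the rendered string directly, with no
  # intermediate file on the local machine (requires a Packer
  # version that supports this argument)
  content     = local.foo_generated
  destination = "/tmp/foo.conf"
}
```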

Import JSON array in CouchBase

I want to use CouchBase to store lots of data. I have that data in the form:
[
    {
        "foo": "bar1"
    },
    {
        "foo": "bar2"
    },
    {
        "foo": "bar3"
    }
]
I have that in a json file that I zipped into data.zip. I then call:
cbdocloader.exe -u Administrator -p **** -b mybucket C:\data.zip
However, this creates a single item in my bucket; not three as I expected. This actually makes sense as I should be able to store arrays and I did not "tell" CouchBase to expect multiple items instead of one.
The temporary solution I have is to split every item into its own json file, then add the lot of them to a single zip file and call cbdocloader again. The problem is that I might have lots of these entries and creating all the files might take too long. Also, I saw in the docs that cbdocloader uses the filename as a key. That might be problematic in my case...
I obviously missed a step somewhere but couldn't find what in the documentation. How should I format my json file?
You haven't missed any steps. The cbdocloader script is very limited at the moment. Couchbase will be adding cbimport and cbexport tools in the near future that will allow you to load json files in various formats (including the one you mentioned). In the meantime you will need to use your current workaround to get the data loaded.