Does this JSON configuration workflow have a name?

I have a system where we collect a lot of JSON configuration from different parties to configure our overall service.
The repository looks like a directory of formatted JSON files. For example, foo.json:
{
  "id": "3bd0e397-d8cc-46ff-9e0d-26fa078a37f3",
  "name": "Example",
  "logo": "https://example/foo.png"
}
We have a pipeline whereby the owner of foo.json can overwrite this file by committing a new file at any time, since fast updates are required.
Unfortunately, however, we sometimes need to skip whole files or override some values for various $reasons.
Hence we commit something like touch foo.json.skip when we want the file to be skipped before publishing. Similarly, we can commit a foo.json.d/override.json to override, say, the logo because it's poorly formatted or something.
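For illustration, a minimal sketch of how a publishing step might interpret these markers (the function and the shallow-merge semantics are assumptions, not our actual pipeline):

import json
from pathlib import Path

def load_effective_config(path):
    """Load one config file, honouring <name>.skip and <name>.d/ drop-ins."""
    path = Path(path)

    # A sibling marker file (e.g. foo.json.skip) excludes the whole config
    # from publishing.
    if (path.parent / (path.name + ".skip")).exists():
        return None

    config = json.loads(path.read_text())

    # Drop-in directory (e.g. foo.json.d/): shallow-merge each override file
    # in sorted order, so later files win, much like systemd's *.d/ dirs.
    dropin_dir = path.parent / (path.name + ".d")
    if dropin_dir.is_dir():
        for override in sorted(dropin_dir.glob("*.json")):
            config.update(json.loads(override.read_text()))
    return config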
Is there a name for this sort of JSON pipeline that we have? It's inspired by systemd configuration, but maybe systemd configuration was inspired by something else?

Related

Specifying input.json file from another directory for use in opa test

I am new to opa/rego and have an opa test that I would like to run. Within that test .rego file, I would like to use an input.json file from a different directory. Is there a way to specify that file within a "with input as _____" statement within the test file? i.e.
test_allow {
    allow with input as <path-to-file>
}
My thoughts so far have led me to trying the -b option, but the directories are pretty far apart and I do not want a bundle that large and dependent. Additionally, I have thought about import statements, but I do not have the "-i" option within the opa test subcommand. I've also tried specifying each file (the .rego policy file, the .rego test file, and an input file) within the opa test subcommand, to no avail.
Any help is greatly appreciated
OPA, and by extension the OPA test runner, doesn't really consider files at the time of policy evaluation. When OPA starts, all the files/directories pointed out by the command are merged under the data attribute, and may then be referenced by their path, e.g. data.my-mocks.mock1, and so on.
If you want to include "test data" in your tests, you could keep those files in a directory included when running the opa test command. Since all data files are merged though, you'll need to ensure there aren't conflicting paths in those files. This is commonly accomplished by using a unique "top level" attribute per item. Something like:
{
  "mock1": {
    "item": "here"
  },
  "mock2": {
    "item": "here"
  }
}
You may then reference this in your tests like you suggested:
test_allow {
    allow with input as data.mock1.item
}
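Assuming the policies, the tests, and the mock data files live in separate directories (the names below are just placeholders), you can then point opa test at all of them, and the data files will be merged under data as described:
opa test ./policy ./tests ./testdata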

Passing a config file to Google Cloud Function using GitHub Actions and GitHub Secrets

I have a config.py file on my development machine that stores multiple dictionaries for my config settings.
config.py:
SERVICE_ONE_CREDENTIALS = {
    "CLIENT_ID": 000,
    "CLIENT_SECRET": "abc123"
}
SERVICE_TWO_CREDENTIALS = {
    "VERIFY_TOKEN": "abc123"
}
...
I recently set up this GitHub Action to automatically deploy changes pushed to the repository to a Google Cloud Function, and ran into a problem when trying to copy this configuration file over, since the file is ignored by git due to it storing sensitive credentials.
I've been trying to find a way to copy this file over to the Cloud Function but haven't been successful. I would prefer to stay away from using environment variables due to the number of keys involved. I did look into using key management services, but I first wanted to see if it would be possible to store the file in GitHub Secrets and pass it along to the function.
As a backup, I did consider encrypting the config file, adding it to the git repo, and storing the decryption key in GitHub Secrets. With that, I could decrypt the file in the Cloud Function before starting the app workflow. This doesn't seem like a great idea, though, so I would be interested to hear if anyone has done this or what your thoughts are on it.
Is something like this possible?
If you encrypt the file and put it in a repo, at least it's not clear text, and someone can't get to the secret without the private key (which of course you don't check in). I do something similar in my dotfiles repo, where I check in .dat files with my secrets and the private key isn't checked in. The key would have to be a secret in Actions and written to disk to be used. It's a bit of machinery, but possible.
Using GitHub Secrets is a secure path because you don't check in anything; it's securely stored, and we pass it just-in-time when it's referenced. Disclosure: I work on Actions.
One consideration with that is that we redact secrets on the fly from the logs, but it's done one line at a time, so multiline secrets are not good.
So a couple of options ...
You can manage the actual secret (abc123) as a secret and write the config file out at deploy time with the secret filled in. As you noted, you have to manage each secret separately. IMHO, that's not a big deal, since abc123 is actually the secret. I would probably lean into that path.
Another option is to base64 encode the config file, store that as a secret in GitHub Actions, and echo the base64-decoded content to a file. Don't worry, base64 isn't a security mechanism here. It's a transport to get the file onto a single line, and if it accidentally leaks into the logs (via the command line you run), the base64 version of it (which could easily be decoded) will be redacted from the logs.
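A rough sketch of that round trip in Python (the CONFIG_PY secret name and the file paths are placeholders, not anything Actions defines):

import base64
import os
from pathlib import Path

def encode_config(path="config.py"):
    # Run locally: turn the file into a single line you can paste into a
    # GitHub secret (named, say, CONFIG_PY).
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

def restore_config(path="config.py", env_var="CONFIG_PY"):
    # Run at deploy time, after the workflow has exposed the secret as an
    # environment variable, e.g.
    #   env:
    #     CONFIG_PY: ${{ secrets.CONFIG_PY }}
    Path(path).write_bytes(base64.b64decode(os.environ[env_var]))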
There's likely other options but hope that helped.

Visual Studio Code JSON completion

I'm making my first videogame in Unity and I was messing around with storing data in JSON files. More specifically, language localization files. Each file stores key-value pairs of strings. Keys are grouped in categories (like "menu.pause.quit").
Now, since each language file is essentially going to have the same keys (but different values), it would be nice if VS code recognized and helped me write these keys with tooltips.
For example, if you open settings.json in VS Code and try to write something, there's some kind of autocompletion going on.
How does that work?
The autocompletion for JSON files is done with JSON schemas. It is described in the documentation:
https://code.visualstudio.com/docs/languages/json#_json-schemas-and-settings
Basically, you need to create a JSON schema that describes your JSON file, and you can map it to JSON files via your user or workspace settings:
// settings.json
{
  // ... other settings
  "json.schemas": [
    {
      "fileMatch": [
        "/language-file.json"
      ],
      "url": "./language-file-schema.json"
    }
  ]
}
This will provide autocompletion for language-file.json (or any other file matched by the pattern) based on language-file-schema.json (in your workspace folder, but you can also use absolute paths).
The elements of fileMatch may be patterns which will match against multiple files.
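For the localization files in the question, the schema itself is plain JSON Schema. As a hypothetical sketch, here is what language-file-schema.json might contain, shown as a Python dict so it can double as an offline check with the jsonschema package (the keys and descriptions are made up):

import json
from jsonschema import validate  # pip install jsonschema

# Saved as language-file-schema.json, this same structure is what VS Code
# reads: keys under "properties" get autocompletion, and each
# "description" becomes the hover tooltip.
LANGUAGE_FILE_SCHEMA = {
    "type": "object",
    "properties": {
        "menu.pause.quit": {
            "type": "string",
            "description": "Label for the Quit entry on the pause menu",
        },
        "menu.pause.resume": {
            "type": "string",
            "description": "Label for the Resume entry on the pause menu",
        },
    },
    "additionalProperties": False,
}

# Sanity-check a language file against the schema outside the editor too.
with open("language-file.json") as f:
    validate(json.load(f), LANGUAGE_FILE_SCHEMA)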

Pushing an empty file instead of another to GitHub

I have a local repository with a config.json file which contains sensitive configuration data. This file follows the schema found in config.json.template. The two files look like this (not actual content):
config.json.template
{
"username": "",
"password": "",
}
config.json
{
"username": "admin",
"password": "123456789"
}
Instead of placing the config.json in my .gitignore, is there a way to replace it with config.json.template at push time, so that my repository will contain a config.json file that just needs to be filled out by the user?
You're better off pushing just the template file and not a file that you want people to modify. People working with your repository won't appreciate having a tracked file that they must modify, since it means that git checkout and other operations will sometimes require them to stash first and then unstash.
In general, files that are intended to be modified on a per-user basis, like configuration files or editor files, shouldn't be saved directly in version control.
You can simply push config.json.template as it is.
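A common companion to that, sketched here under the assumption that config.json is listed in .gitignore, is a small bootstrap step that new users run once:

import shutil
from pathlib import Path

# Create a local config.json from the committed template if it doesn't
# exist yet; the real config.json stays untracked.
if not Path("config.json").exists():
    shutil.copy("config.json.template", "config.json")
    print("Created config.json -- fill in your credentials.")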

Import JSON array in CouchBase

I want to use CouchBase to store lots of data. I have that data in the form:
[
  {
    "foo": "bar1"
  },
  {
    "foo": "bar2"
  },
  {
    "foo": "bar3"
  }
]
I have that in a json file that I zipped into data.zip. I then call:
cbdocloader.exe -u Administrator -p **** -b mybucket C:\data.zip
However, this creates a single item in my bucket, not three as I expected. This actually makes sense, as I should be able to store arrays, and I did not "tell" CouchBase to expect multiple items instead of one.
The temporary solution I have is to split every item into multiple json files, then add the lot of them to a single zip file and call cbdocloader again. The problem is that I might have lots of these entries, and creating all the files might take too long. Also, I saw in the docs that cbdocloader uses the filename as a key. That might be problematic in my case...
I obviously missed a step somewhere, but couldn't find what it was in the documentation. How should I format my json file?
You haven't missed any steps. The cbdocloader script is very limited at the moment. Couchbase will be adding cbimport and cbexport tools in the near future that will allow you to add json files with various formats (including the one you mentioned). In the meantime you will need to use the current workaround you are using to get your data loaded.
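For what it's worth, that workaround is easy to script. A rough sketch (the doc-<n> key scheme is just an example) that writes one document per file into a zip, deliberately using the filename-as-key behaviour:

import json
import zipfile

# Split the single JSON array into one file per document inside a zip,
# since cbdocloader treats each file as one item and derives the key
# from the filename.
with open("data.json") as f:
    items = json.load(f)

with zipfile.ZipFile("data.zip", "w") as zf:
    for i, item in enumerate(items):
        zf.writestr(f"doc-{i}.json", json.dumps(item))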