Pushing an empty template file instead of the real one to GitHub - json

I have a local repository with a config.json file which contains sensitive configuration data. This file follows a schema found in config.json.template. The two files look like this (not the actual content):
config.json.template
{
"username": "",
"password": ""
}
config.json
{
"username": "admin",
"password": "123456789"
}
Instead of placing config.json in my .gitignore, is there a way to replace it with config.json.template at push time, so that my repository contains a config.json file that just needs to be filled in by the user?

You're better off pushing just the template file and not a file that you want people to modify. People working with your repository won't appreciate having a tracked file that they must modify, since it means that git checkout and other operations will sometimes require them to stash first and then unstash.
In general, files that are intended to be modified on a per-user basis, like configuration files or editor files, shouldn't be saved directly in version control.
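A minimal sketch of that workflow; the ignore entry, commit message, and copy step are illustrative, not from the original question:

# Track only the template; ignore the real config
echo "config.json" >> .gitignore
git add config.json.template .gitignore
git commit -m "Track config template, ignore real config"

# Each user then creates their own untracked copy to fill in
cp config.json.template config.json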

You can simply push config.json.template as it is.

Specifying input.json file from another directory for use in opa test

I am new to OPA/Rego and have an OPA test that I would like to run. Within that test .rego file, I would like to use an input.json file from a different directory. Is there a way to specify that file in a "with input as _____" statement in the test file? i.e.
test_allow {
allow with input as <path-to-file>
}
My thoughts so far have led me to try the -b option, but the directories are pretty far apart and I do not want a bundle that large and with that many dependencies. Additionally, I have thought about import statements, but the opa test subcommand does not have an "-i" option. I have also tried specifying each file (the .rego policy file, the .rego test file, and an input file) in the opa test subcommand, to no avail.
Any help is greatly appreciated.
OPA, and by extension the OPA test runner, doesn't really consider files at the time of policy evaluation. When OPA starts, all the files/directories provided to the command are merged under the data attribute and may then be referenced by their path, e.g. data.my-mocks.mock1, and so on.
If you want to include "test data" in your tests, you could keep those files in a directory included when running the opa test command. Since all data files are merged though, you'll need to ensure there aren't conflicting paths in those files. This is commonly accomplished by using a unique "top level" attribute per item. Something like:
{
"mock1": {
"item": "here"
},
"mock2": {
"item": "here"
}
}
You may then reference this in your tests like you suggested:
test_allow {
allow with input as data.mock1.item
}
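For completeness, a sketch of how the files might then be passed to the test runner; the directory names policy/, tests/ and mocks/ are assumptions for illustration only:

# all paths given to opa test are loaded, and data files are merged under data
opa test policy/ tests/ mocks/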

Filter out certain parts of a json file on git commit

I am looking to filter out specific parts of a JSON file so that the given part of the file does not get pulled into a git repository. My use case is that I am setting up a repository to keep some working files, including settings for VS Code. I have a plugin for window colors that sets a different color for each open window. The current color is saved in that window's .vscode/settings.json file.
I found that it is possible to use the .gitattributes file to apply a filter to a file or set of files, and then use git config to remove certain lines from what is committed, based on a sed command, per this previous question.
I would like to apply this to the "workbench.colorCustomizations" object in the following JSON file, so that this object is excluded from commits while other settings in the file, such as "editor.formatOnPaste", are kept. Does anyone know of a way to do this?
{
"workbench.colorCustomizations": {
"activityBar.background": "#102D56",
"titleBar.activeBackground": "#173F79",
"titleBar.activeForeground": "#F8FAFE"
},
"editor.formatOnPaste": true
}
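For what it's worth, here is a sketch of such a filter using jq instead of sed; the filter name stripcolors is arbitrary, and this is untested and assumes jq is installed:

# .gitattributes
.vscode/settings.json filter=stripcolors

# clean runs on staging and drops the object; smudge leaves the working copy alone
git config filter.stripcolors.clean 'jq "del(.\"workbench.colorCustomizations\")"'
git config filter.stripcolors.smudge cat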

Does this JSON configuration workflow have a name?

I have a system where we collect a lot of JSON configuration from different parties to configure our overall service.
The repository looks like a directory of formatted JSON files. For example, foo.json:
{
"id": "3bd0e397-d8cc-46ff-9e0d-26fa078a37f3",
"name": "Example",
"logo": "https://example/foo.png"
}
We have a pipeline whereby the owner of foo.json can overwrite this file by committing a new file at any time, since fast updates are required.
However, for various $reasons, we unfortunately need to skip whole files or override some values.
Hence we commit a marker file (created with something like touch foo.json.skip) when we want the file to be skipped before publishing. Similarly, we have a foo.json.d/override.json to override, say, a logo that is poorly formatted.
Is there a name for this sort of JSON pipeline that we have? It's inspired by systemd configuration, but maybe systemd's configuration was inspired by something else?
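As a point of reference for the mechanics, the skip and override steps described above amount to something like the following sketch at publish time, assuming jq is available (its * operator performs a recursive object merge); the output file name is made up:

# skip the file entirely when a marker exists
[ -e foo.json.skip ] && exit 0

# otherwise layer the drop-in override on top of the base file
jq -s '.[0] * .[1]' foo.json foo.json.d/override.json > foo.merged.json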

How to append a JSON file on FTP in Apache NiFi?

In Apache NiFi, I have a flow in which the flowfiles' contents are arrays of JSON objects. Each flowfile has a unique filename attribute.
// flowfile1:
filename: file1.json
[ {}, {}, {}, ... ]
// flowfile2:
filename: file2.json
[ {}, {}, {}, ... ]
Now, I want to put those files onto an FTP server if a file with the given filename does not exist. If such a file does exist, I want to merge the two files (concatenate the array from the existing FTP file with the one from the incoming flowfile) and put the updated file onto the FTP server. The first case (file does not yet exist) is simple, but how can I go about the second one?
You will probably want to use ListFTP to gather the list of files that already exist, RouteOnAttribute/RouteOnContent to direct flowfiles referencing existing files to a queue, FetchFTP and MergeContent to join the content of the existing file with the new content, and then PutFTP to place the file on the FTP server again. You will need to investigate approaches to match the filename attribute of the local flowfiles against the remote file names; I'd suggest persisting the local filenames into a cache when you generate them and routing the remote file-listing flowfiles through an enrichment processor. The DistributedMapCache and LookupAttribute processor families will probably be useful here. Abdelkrim Hadjidj has written a great article on using them.
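Stripped of the NiFi plumbing, the merge step itself is plain array concatenation. The sketch below shows the end result you are after, with jq used purely as an illustration and with hypothetical file names; inside NiFi the equivalent would be produced by a suitably configured MergeContent or a scripted processor:

# concatenate the array fetched from FTP with the incoming flowfile's array
jq -s '.[0] + .[1]' existing.json incoming.json > merged.json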

Import JSON array in CouchBase

I want to use Couchbase to store lots of data. I have that data in the form:
[
{
"foo": "bar1"
},
{
"foo": "bar2"
},
{
"foo": "bar3"
}
]
I have that in a JSON file that I zipped into data.zip. I then call:
cbdocloader.exe -u Administrator -p **** -b mybucket C:\data.zip
However, this creates a single item in my bucket, not three as I expected. That actually makes sense, as I should be able to store arrays, and I did not "tell" Couchbase to expect multiple items instead of one.
The temporary solution I have is to split every item into its own JSON file, then add the lot of them to a single zip file and call cbdocloader again. The problem is that I might have lots of these entries, and creating all the files might take too long. Also, I saw in the docs that cbdocloader uses the filename as the key. That might be problematic in my case...
I obviously missed a step somewhere but couldn't find what it was in the documentation. How should I format my JSON file?
You haven't missed any steps. The cbdocloader script is very limited at the moment. Couchbase will be adding cbimport and cbexport tools in the near future that will be able to load JSON files in various formats, including the one you mentioned. In the meantime you will need to stick with the workaround you described to get your data loaded.
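If splitting by hand becomes impractical, the workaround itself is easy to script. Here is a sketch assuming a Unix-like shell with jq and zip available; doc1.json, doc2.json, ... are made-up file names, which (given the filename-as-key behaviour you noted) also become the document keys:

# write each array element to its own file, zip them, and load the zip
i=0
jq -c '.[]' data.json | while read -r doc; do
  i=$((i+1))
  printf '%s\n' "$doc" > "doc$i.json"
done
zip data.zip doc*.json
cbdocloader.exe -u Administrator -p **** -b mybucket data.zip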