YAML configuration with validity range

I have a YAML file that contains some configuration for my application. Some of these configs need to change over time. For example, the 2020 values are:
tax:
  federal: 9
  provincial: 7
But for 2021, what we need is:
tax:
  federal: 9.5
  provincial: 7.5
It would be easy if I could simply switch YAML files over time, but the file contains many configs that will change at different points in time. Moreover, I'm only allotted one YAML file to configure my application, so switching files is not really an option.
So my question is: what would be the cleanest way to represent this in YAML?
The only way I can think of is the following, but I can see this getting very messy:
tax:
  federal: [20200101-20201231]9;[20210101-20211231]9.5
  provincial: [20200101-20201231]7;[20210101-20211231]7.5;

Your syntax is invalid YAML; you would need to enclose those values in quotes, parse each one as a single scalar, and post-process it yourself.
I suggest using YAML's tag system instead: mark the values with a tag like !datedep, e.g.:
tax:
  federal: !datedep { [20200101, 20210101, 20220101]: [9, 9.5] }
  provincial: !datedep { [20200101, 20210101, 20220101]: [7, 7.5] }
As the value, I give a YAML mapping with a single key-value pair: the key is a list of dates defining the range boundaries, and the value is the list of values mapped to the ranges between consecutive dates (three dates yield two ranges, hence two values).
When loading the YAML, you have to provide a constructor for the tag !datedep; how to do that depends on your YAML implementation (see the sketch below). The constructor should select the appropriate value based on the current date. You can use YAML anchors and aliases to avoid repeating identical date lists, e.g.:
tax:
  federal: !datedep { &y2021 [20200101, 20210101, 20220101]: [9, 9.5] }
  provincial: !datedep { *y2021 : [7, 7.5] }
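Here is a minimal sketch of such a constructor in Python with PyYAML (the file name config.yml is illustrative, and the error handling is up to you):

import datetime
import yaml

def datedep_constructor(loader, node):
    # The tagged node is a mapping with a single key-value pair.
    # construct_pairs is used because the key is a list, which
    # cannot be hashed into an ordinary dict key.
    (boundaries, values), = loader.construct_pairs(node, deep=True)
    # Boundary dates may be parsed as YAML integers; normalize via str().
    dates = [datetime.datetime.strptime(str(b), "%Y%m%d").date()
             for b in boundaries]
    today = datetime.date.today()
    # N boundary dates define N-1 ranges; pick the one containing today.
    for start, end, value in zip(dates, dates[1:], values):
        if start <= today < end:
            return value
    raise ValueError("no !datedep value configured for %s" % today)

yaml.SafeLoader.add_constructor("!datedep", datedep_constructor)

with open("config.yml") as f:
    config = yaml.safe_load(f)
print(config["tax"]["federal"])  # 9 during 2020, 9.5 during 2021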

Related

DFHJS2LS - generating JSON structure from COBOL copybook

I have my output structure in COBOL, from which I try to generate a JSON structure through DFHJS2LS (an IBM tool). All the fields come out as required, which causes trouble when generating classes in .NET, because not all of the fields are always present.
Question: how and where (in COBOL or DFHJS2LS) do I define fields as optional in order to get them generated properly and avoid null pointer exceptions?
According to the documentation, you can define your COBOL data items with...
data description OCCURS n TIMES
...and use mapping level 4.1 or higher and specify TRUNCATE-NULL-ARRAYS=ENABLED. There is a reference to "structured arrays", which I take to mean you would need to do something like...
05 Something Occurs 1 Times.
10 Something-Real PIC X(8).
...so you get...
"type":"array"
"maxItems":1
"minItems":0
"items":{ ... }
You could also specify mapping level 4.0 or higher and use...
data description OCCURS n TO m TIMES DEPENDING ON t
...to obtain...
"field-name":{
"type":"array",
"maxItems":m
"minItems":n
"items":{ ... }
}`
Mapping level is specified by...
//INPUT.SYSUT1 DD *
[...other control statements...]
MAPPING-LEVEL=4.3
[...other control statements...]
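Presumably TRUNCATE-NULL-ARRAYS is specified the same way, as another control statement in the same input stream:
//INPUT.SYSUT1 DD *
[...other control statements...]
MAPPING-LEVEL=4.3
TRUNCATE-NULL-ARRAYS=ENABLED
[...other control statements...]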

Apache Nifi: Replacing values in a column using Update Record Processor

I have a CSV file which looks like this:
name,code,age
Himsara,9877,12
John,9437721,16
Razor,232,45
I have to replace the values in the code column according to some rules. My logic is shown in the Scala snippet below.
def categorize(str: String): String =
  if (str.trim.length == 9 && str.startsWith("369")) "PROB"
  else if (str.trim.length < 8) "SHORT"
  else if (str.trim.startsWith("94")) "LOCAL"
  else "INT"
I used an UpdateRecord processor to replace the data in the code column. I added a property called /code which contains the value:
${field.value:replaceFirst('^[0-9]{1,8}$','SHORT'):replaceFirst('[94]\w+','OFF_NET')}
This works for replacing codes with:
a length less than 8 with "SHORT"
a leading 94 with "LOCAL"
But I am unable to find a way to replace the data in the code column when it is exactly 9 digits long and starts with 369. Also, how can I replace the data when it doesn't fall into any of the conditions above (the situation where the data should be replaced with INT)?
I hope you can suggest a workflow or a property value for UpdateRecord to make these replacements happen.
There are length() and startsWith() functions in the NiFi expression language:
${field.value:length():lt(8):ifElse(
'SHORT', ${field.value:startsWith('94'):ifElse(
'LOCAL', ${field.value:length():equals(9):and(${field.value:startsWith('369')}):ifElse(
'PROB', 'INT'
)})})}
I have added the line breaks to make the functions easier to recognize, but they should be removed in the actual property value.
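For reference, here is the same expression on a single line, ready to paste into the /code property:
${field.value:length():lt(8):ifElse('SHORT', ${field.value:startsWith('94'):ifElse('LOCAL', ${field.value:length():equals(9):and(${field.value:startsWith('369')}):ifElse('PROB', 'INT')})})}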
By the way, does INT mean some string value to replace with? Sorry for the confusion.
Well, if you want to use regular expressions only, you can try the code below.
${field.value
:replaceFirst('^[0-9]{1,7}$', 'SHORT')
:replaceFirst('^94\w+$', 'OFF_NET')
:replaceFirst('^369[0-9]{6}$', 'PROB')
:replace(${field.value}, 'INT')
}
Note that the patterns are anchored so each one matches only whole values, and the final replace only fires when the value is still unchanged, so anything that fell through the three patterns becomes INT.

Chef multiple level attribute file override with role JSON

I have an attributes file that looks like this:
default['ftp_provision']['vsftpd']['pasv_ip'] = "192.168.0.10"
where the first attribute is the cookbook name, the second is the program, and the third is the option I want to change, implemented in a template .erb file as:
pasv_ip=<%= node['ftp_provision']['vsftpd']['pasv_ip'] %>
This is working correctly as expected.
However, I would like to add a role to change these attributes as required for several nodes. I'm using knife role create ftp_node1 to do that, with something like:
"default_attributes": {
"ftp_provision" => {"ftp_provision" => "vsftpd" => "pasv_ip" => "192.168.0.10"}
},
I keep getting syntax errors. All the examples I've been able to find reference making JSON files from the Ruby DSL with attributes only one level deep (e.g. default['key']['value']), so I'd like to know how to do this correctly per role.
You'll need to use actual JSON for this. I'm not sure what you mean about one level deep; this will create a hash three or four levels deep, depending on how you count it. I haven't seen issues with going deeper with attributes, and I've seen many cookbooks in the wild with default['really']['freakin']['long']['strings']['of'] = attributes.
I took a look at Chef's examples, and they're using Ruby's hash format there rather than JSON; that method of creating hashes makes RuboCop squawk and say it's been deprecated. I can certainly see how that example would mislead you.
Use a linter when building JSON; here's one: https://jsonlint.com/
Also, I think this may work for you:
{
  "ftp_provision": {
    "vsftpd": {
      "pasv_ip": "192.168.0.10"
    }
  }
}
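For completeness, a full role file wraps those attributes in the standard role envelope. A minimal sketch (json_class and chef_type are fixed values; the empty run_list is a placeholder):
{
  "name": "ftp_node1",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "run_list": [],
  "default_attributes": {
    "ftp_provision": {
      "vsftpd": {
        "pasv_ip": "192.168.0.10"
      }
    }
  }
}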

How to query a ScriptDb database for key:values within objects nested in array objects in Google Apps Script

Assume an object schema stored in ScriptDb:
{
  name: 'alice',
  age: 12,
  interests: [
    {interest: 'tea parties', enthusiasm: 'high'},
    {interest: 'croquet', enthusiasm: 'moderate'}
  ]
}
I understand how to query against the first two attributes, but not how to run a query that returns all rows where interests[enthusiasm = moderate].
Taking that example literally and trying: db.query({interests:[{enthusiasm: 'moderate'}]});
returns a ScriptDbResult but any attempt to use that result's methods results in an error:
Queries can only contain letters, numbers, spaces, dashes and underscores as keys.
This is not currently possible; it may be supported in a future update. The best you can do for now is load the records and loop through their interests yourself.

Getting Sphider to output JSON

I've recently added the Sphider crawler to my site in order to add search functionality. But the default search.php that comes with the distribution of Sphider that I downloaded is too plain and doesn't integrate well with the rest of my site. I have a little navigation bar at the top of the site which has a search box in it, and I'd like to be able to access Sphider's search results through that search field using Ajax. To do this, I figure I need to get Sphider to return its results in JSON format.
The way I did that was to use a "theme" that outputs JSON (Sphider supports "theming" its output). I found that theme in this thread on Sphider's site. It seems to work, but stricter JSON parsers will not parse it. Here's some example JSON output:
{"result_report":"Displaying results 1 - 1 of 1 match (0 seconds) ", "results":[ { "idented":"false", "num":"1", "weight":"[100.00%]", "link":"http://www.avtainsys.com/articles/Triple_Contraints", "title":"Triple Contraints", "description":" on 01/06/12 Project triple constraints are time, cost, and quality. These are the three constraints that control the performance of the project. Think about this triple-constraint as a three-leg tripod. If one of the legs is elongated or", "link2":"http://www.avtainsys.com/articles/Triple_Contraints", "size":"3.3kb" }, { "num":"-1" } ], "other_pages":[ { "title":"1", "link":"search.php?query=constraints&start=1&search=1&results=10&type=and&domain=", "active":"true" }, ] }
The issue is that there is a trailing comma near the end. According to this, "trailing commas are not allowed" when using PHP's json_decode() function. This JSON also failed to parse using this online formatter. But when I took the comma out, it worked and I got this better-formatted JSON:
{
"result_report":"Displaying results 1 - 1 of 1 match (0 seconds) ",
"results":[
{
"idented":"false",
"num":"1",
"weight":"[100.00%]",
"link":"http://www.avtainsys.com/articles/Triple_Contraints",
"title":"Triple Contraints",
"description":" on 01/06/12 Project triple constraints are time, cost, and quality. These are the three constraints that control the performance of the project. Think about this triple-constraint as a three-leg tripod. If one of the legs is elongated or",
"link2":"http://www.avtainsys.com/articles/Triple_Contraints",
"size":"3.3kb"
},
{
"num":"-1"
}
],
"other_pages":[
{
"title":"1",
"link":"search.php?query=constraints&start=1&search=1&results=10&type=and&domain=",
"active":"true"
}
]
}
Now, how would I do this programmatically? And (perhaps more importantly), is there a more elegant way of accomplishing this? And you should know that PHP is the only language I can run on my shared hosting account, so a Java solution for example would not work for me.
In search_result.html, you can wrap the , at the end of the foreach loop in a condition so that it is only printed while the index is strictly less than the number of pages minus 1. More elegantly, you could build the results as a PHP array and serialize it with json_encode(), which never emits trailing commas.