SlowCheetah Transforming JSON Configuration File

I've used SlowCheetah to transform config files for some time with no problems. However, my new application is using JSON files for configuration settings and I'm having trouble implementing the following.
My JSON is structured as follows:
{
  "Settings": [
    {
      "ProcessorType": 1,
      "ProcessorName": "Identifying Name 1",
      "BaseArchiveFolder": "\\\\servername\\staginglocation\\Archive",
      "ActivityID": 21,
      "SubActivityID": 155,
      "OutputFolder": "c:\\temp\\Type1",
      "ConnectionString": "Data Source=servername\\instance1;Initial Catalog=catalog;Integrated Security=True;Connection Timeout=240"
    },
    {
      "ProcessorType": 2,
      "ProcessorName": "Identifying Name 2",
      "BaseArchiveFolder": "\\\\servername\\staginglocation\\Archive",
      "ActivityID": 21,
      "SubActivityID": 155,
      "OutputFolder": "c:\\temp\\Type2",
      "ConnectionString": "Data Source=servername\\instance1;Initial Catalog=catalog;Integrated Security=True;Connection Timeout=240"
    }
  ]
}
When I compile the Release version, I need to change the OutputFolder key, but the value will be different depending on the value of ProcessorType.
I've successfully changed all instances of a key to the same value (for ConnectionString), but I'm having trouble making the transform depend on another setting.
Can anyone point me in the right direction?
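For illustration only, here is a rough, untested sketch of what a Release transform file might look like, assuming the JDT engine behind SlowCheetah (@jdt.replace with a JSONPath expression in @jdt.path) supports filtering array elements by another property; the exact path syntax and the prodserver paths are assumptions that should be checked against the JDT documentation:

{
  "@jdt.replace": [
    {
      "@jdt.path": "$.Settings[?(@.ProcessorType == 1)].OutputFolder",
      "@jdt.value": "\\\\prodserver\\output\\Type1"
    },
    {
      "@jdt.path": "$.Settings[?(@.ProcessorType == 2)].OutputFolder",
      "@jdt.value": "\\\\prodserver\\output\\Type2"
    }
  ]
}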

Related

Does config.GetSection not work in Azure Functions? And what is the recommended alternative?

An Azure Function with a complex (list of objects) configuration type works locally (with that complex type in local.settings.json) but fails to read/create the list of objects in Azure (with that complex type in the Azure Function's configuration settings). I'm looking for the recommended/optimal way to support this across both platforms/methods of access.
This works great in my local.settings.json where I use the configuration builder and pull data out like
var myList = config.GetSection("ConfigurationList").Get<List<MyType>>();
however, this doesn't seem to work in Azure Functions. I think that is because local.settings.json is a JSON file and the setting looks like
"ConfigurationList" : [ { "Name": "A", "Value": 2 }, { "Name": "B", "Value": 3 }]
while in Azure Functions it is a setting "ConfigurationList" with the value
[ { "Name": "A", "Value": 2 }, { "Name": "B", "Value": 3 }]
(so there isn't really a "section" in Azure Functions?)
It seems like the "easy" solution to this is to just change the JSON value to a quoted string and deserialize the string (then it would work the same in both places), but that doesn't seem like it would be the "best" (or "recommended") solution,
i.e. something like
"ConfigurationList" : "[ { \"Name\": \"A\", \"Value\": 2 }, { \"Name\": \"B\", \"Value\": 3 }]"
var myList = (List<MyType>)JsonConvert.DeserializeObject(config["ConfigurationList"], typeof(List<MyType>));
Which isn't the worst, but it makes the JSON a bit less nice and doesn't "flow" across the two platforms... If that is what I have to do, fine, but I'm hoping for a more standard approach/recommendation.
As I mentioned in the comment, locally you can process local.settings.json as a JSON file, but on Azure the value in the configuration settings is an environment variable. There is no section; it is just a string.
Please note that only string values are allowed and that anything nested will break. Learn how to use nested settings on an Azure web app (an Azure Function runs in the Azure App Service sandbox, so it is the same):
https://learn.microsoft.com/en-us/archive/blogs/waws/asp-net-core-settings-for-azure-app-service
For example, if this is the json structure:
{
  "Parent": {
    "ChildOne": "C1 from secrets.json",
    "ChildTwo": "C2 from secrets.json"
  }
}
Then in web app, you should save it like this:
(screenshot of the web app's application settings, with keys such as Parent:ChildOne and Parent:ChildTwo; source: windows.net)
Not sure if you are looking for something like this. It seems to be a list, but if it is a simple JObject like
"ConfigurationList": {
  "Name": "A",
  "Value": 2
}
then you can declare ConfigurationList:Name and ConfigurationList:Value in the configuration settings of the function app.
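If you want to keep the original GetSection binding working for a list on both sides, one option is to lean on the standard .NET configuration convention that flattens array elements into index-numbered keys; a sketch of the function app settings (the key names below are assumptions based on the question's ConfigurationList) would be:

ConfigurationList:0:Name  = A
ConfigurationList:0:Value = 2
ConfigurationList:1:Name  = B
ConfigurationList:1:Value = 3

With those settings in place, config.GetSection("ConfigurationList").Get<List<MyType>>() should bind the same way it does locally (on Linux plans, use __ instead of : in the setting names, since colons are not allowed in environment variable names there).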

Write multiple Dictionaries into Firebase Realtime database at once with Swift, using childByAutoId()

I'm building an app (Swift) where the user can select a CSV file, containing bank transactions.
I want to parse this CSV to my Firebase Realtime database.
The input CSV file would be:
amount, label
111, Uber Eats
1678, iTunes
The output on Realtime database would be:
{
  "user ID 1" : {
    "-M5wUNXgmTuBgZpvT0v-" : {
      "amount" : 111,
      "label" : "Uber Eats"
    },
    "-M5wUQk4wihb3OxcQ7SX" : {
      "amount" : 1678,
      "label" : "iTunes"
    }
  },
  "user ID 2" : {
    "-M5wUNXgmTuBgZpvT0k-" : {
      "amount" : 111,
      "label" : "Deliveroo"
    }
  }
}
In this example, I am "user ID 1" and I uploaded two transactions from the CSV file.
I can't figure out how to mass-write these lines into Firebase in one shot.
I've tried parsing the CSV into multiple dictionaries and writing them to Firebase as an array:
let myParsedCSVasArray = [
    ["amount": 111,
     "label": "Uber Eats"],
    ["amount": 1678,
     "label": "iTunes"]
]

self.ref.child(user!.uid).childByAutoId().setValue(myParsedCSVasArray)
But the result doesn't fit my needs, as it creates an array inside the JSON:
(screenshot: result of the previous code in the Firebase Realtime Database)
Any idea how I could upload multiple dictionaries at once, and add a childByAutoId to each of them?
You can solve it by using a for loop to enumerate through the collection, like below:
let myParsedCSVasArray = [
    ["amount": 111,
     "label": "Uber Eats"],
    ["amount": 1678,
     "label": "iTunes"]
]

for value in myParsedCSVasArray {
    ref.child(user!.uid).childByAutoId().setValue(value)
}
For a reference on loops in Swift, refer to this link.
I use Dart not Swift so I cannot easily show you correct code. I can describe the principle though.
In order to guarantee that all the data gets written, you have to do the set or update as a single database action. Therefore you need to build the dictionary (a map, in my terms) that contains all your records, and each record has to have a unique key generated by Firebase.
You need to use a loop to build this dictionary. In the loop, for each record, you get a unique key using code similar to this: let key = ref.child("posts").childByAutoId().key.
Each call to this returns a new unique key.
When the dictionary is complete you add it as one atomic update (it either all works or all fails).
This Swift code seems to do that for one record so you should be able to use it as the basis for the loop:
guard let key = ref.child("posts").childByAutoId().key else { return }
let post = ["uid": userID,
            "author": username,
            "title": title,
            "body": body]
let childUpdates = ["/posts/\(key)": post,
                    "/user-posts/\(userID)/\(key)/": post]
ref.updateChildValues(childUpdates)
Hope that helps.
See this link for more info: https://firebase.google.com/docs/database/ios/read-and-write
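Putting the two answers together, a sketch of what the loop-plus-single-update approach could look like in Swift, assuming ref points at the database root and myParsedCSVasArray is the array of dictionaries from the question:

var fanOut = [String: Any]()

for transaction in myParsedCSVasArray {
    // Ask Firebase for a fresh unique key for each transaction.
    guard let key = ref.child(user!.uid).childByAutoId().key else { continue }
    fanOut["/\(user!.uid)/\(key)"] = transaction
}

// One atomic multi-path update: either every transaction is written or none are.
ref.updateChildValues(fanOut)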

ASP.NET Core 3 - Serilog how to configure Serilog.Sinks.Map in appsettings.json file?

I came across the Serilog.Sinks.Map addon today which will solve my challenge with routing specific log events to a specific sink interface. In my environment, I am writing to a log file as well as using the SQL interface. I only want certain logs to be written to the SQL Server though.
Reading the author's instructions on GitHub, I can only see an example that implements the LoggerConfiguration through C# in Program.cs, but I am using the appsettings.json file and am unsure how to translate the provided example into the required JSON format.
Example given by Serilog on GitHub:
Log.Logger = new LoggerConfiguration()
    .WriteTo.Map("Name", "Other", (name, wt) => wt.File($"./logs/log-{name}.txt"))
    .CreateLogger();
My current configuration (note that I haven't implemented Sinks.Map in my code yet):
Program.cs file:
public static void Main(string[] args)
{
    // Build a configuration from the appsettings.json file.
    // This is because we don't yet have dependency injection available; that comes later.
    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    Log.Logger = new LoggerConfiguration()
        .ReadFrom.Configuration(configuration)
        .CreateLogger();

    var host = CreateHostBuilder(args).Build();
}
And here is my appsettings.json file. I want to be able to configure the 'MSSqlServer' sink as the special route and then use the standard file sink for all other general logging.
"AllowedHosts": "*",
"Serilog": {
"Using": [],
"MinumumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId", "WithThreadId" ],
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
//"path": "C:\\NetCoreLogs\\log.txt", // Example path to Windows Drive.
"path": ".\\Logs\\logs.txt",
//"rollingInterval": "Day", // Not currently in use.
"rollOnFileSizeLimit": true,
//"retainedFileCountLimit": null, // Not currently in use.
"fileSizeLimitBytes": 10000000,
"outputTemplate": "{Timestamp:dd-MM-yyyy HH:mm:ss.fff G} {Message}{NewLine:1}{Exception:1}"
// *Template Notes*
// Timestamp 'G' means UTC Time
}
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "DefaultConnection",
"schemaName": "EventLogging",
"tableName": "Logs",
"autoCreateSqlTable": true,
"restrictedToMinimumLevel": "Information",
"batchPostingLimit": 1000,
"period": "0.00:00:30"
}
}
//{
// "Name": "File",
// "Args": {
// "path": "C:\\NetCoreLogs\\log.json",
// "formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog"
// }
//}
]
}
Lastly, if I could squeeze in another quick question on the topic: when using the SQL sink, how do I manage automatic purging/deletion of the oldest events? That is, the DB should store at most 1,000,000 events and then automatically overwrite the oldest events first. Thanks in advance.
I believe it is currently impossible to configure the standard Map call in json, since it relies on a few types that have no serialization support right now, like Action<T1, T2>. I created an issue to discuss this in the repository itself:
Unable to configure default Map call in json? #22
However, there is a way to still get some functionality out of it in Json, by creating a custom extension method. In your particular case, it would be something like this:
public static class SerilogSinkConfigurationExtensions
{
    public static LoggerConfiguration MapToFile(
        this LoggerSinkConfiguration loggerSinkConfiguration,
        string keyPropertyName,
        string pathFormat,
        string defaultKey)
    {
        return loggerSinkConfiguration.Map(
            keyPropertyName,
            defaultKey,
            (key, config) => config.File(string.Format(pathFormat, key)));
    }
}
Then, on your json file, add a section like this:
"WriteTo": [
...
{
"Name": "MapToFile",
"Args": {
"KeyPropertyName": "Name",
"DefaultKey": "Other",
"PathFormat": "./logs/log-{0}.txt"
}
}
]
For these customizations to work properly, Serilog needs to know that your assembly contains these kinds of extensions so it can load them during the parsing stage. As per the documentation, you either need to put these extensions in a *.Serilog.* assembly, or add a Using clause to the JSON:
// Assuming the extension method is inside the "Company.Domain.MyProject" dll
"Using": [ "Company.Domain.MyProject" ]
More information on these constraints here:
https://github.com/serilog/serilog-settings-configuration#using-section-and-auto-discovery-of-configuration-assemblies

After upload to server, all numbers in the outputted JSON are strings

I am using Laravel 5.5.13.
I successfully tested everything on my localhost and have now uploaded to the server.
I did an export from phpMyAdmin with default settings on my localhost (XAMPP, Windows 10), then did an import on the remote phpMyAdmin with default settings.
When I hit the remote host, it now returns all fields that I set up in my migrations like this:
$table->integer('extension_id')->unsigned();
as a string, which is weird, because on localhost it is returned as a number.
In the data below, please notice that on the remote host the displayname_id and extension_id values are wrapped in quotes, while on localhost they are not. However, id is not quoted in either case, which I don't understand, as it is also unsigned. My goal is to make even the id column a string (or, if that's not possible, to make the *_id fields numbers again).
Here it is from remote:
[
  {
    "id": 2,
    "name": "Stencil",
    "kind": "cws",
    "created_at": "2017-11-11 00:26:52",
    "updated_at": "2017-11-11 00:26:52",
    "thumbs_count": "1",
    "thumbs_yes_count": "0",
    "latest_comment": {
      "id": 1,
      "body": "huh?",
      "displayname_id": "1",
      "extension_id": "2",
      "created_at": "2017-11-11 00:26:56",
      "updated_at": "2017-11-11 00:26:56"
    }
  }
]
Here it is from localhost:
[
  {
    "id": 2,
    "name": "Stencil",
    "kind": "cws",
    "created_at": "2017-11-11 00:26:52",
    "updated_at": "2017-11-11 00:26:52",
    "thumbs_count": "1",
    "thumbs_yes_count": "0",
    "latest_comment": {
      "id": 1,
      "body": "huh?",
      "displayname_id": 1,
      "extension_id": 2,
      "created_at": "2017-11-11 00:26:56",
      "updated_at": "2017-11-11 00:26:56"
    }
  }
]
Here are screenshots of my phpMyAdmin versions:
My remote phpMyAdmin is this -
And my local is this -
Here is a screenshot of the table structure; please notice that the id columns are also unsigned, yet they are not json_encoded into strings.
Here is export screenshot: https://screenshots.firefoxusercontent.com/images/51d1e47f-fe78-4cdc-8de4-b113d4b576a9.png
Here is import screenshot: https://screenshots.firefoxusercontent.com/images/ff112f86-1c1c-4554-b03c-4b15307c042a.png
The difference is your MySQL client driver. Your local machine is using the MySQL Native Driver (mysqlnd), whereas your remote server is using the MySQL Client Library (libmysql).
The native driver (mysqlnd) will treat all integers from the database as integers in PHP. However, the client library (libmysql) will treat all fields as strings in PHP.
The reason that the id field shows up as an integer on both servers is because of some Laravel magic. Laravel uses the model's $casts property to cast specific fields to specific types when accessed. If your $incrementing property on your model is true (which it is by default), Laravel automatically adds the primary key field (default id) to the $casts property with the type defined by the $keyType property (default int). Because of this, whenever you access the id field, it will be a PHP integer.
If you want the integer fields to be treated as integers, you could install the MySQL Native Driver (mysqlnd) on your remote server.
If that is not an option, or not desirable, you can specify that those fields be treated as integers using the $casts property:
protected $casts = [
    'displayname_id' => 'int',
    'extension_id' => 'int',
];
Now those two fields will be treated as integers regardless of the MySQL driver used.
If you wanted the id to be treated as a string, you have a couple options.
First, you could change the $keyType value to string, but that may have unintended consequences. For example, the relationHasIncrementingId on the BelongsTo class checks if the key is incrementing and if the key type is int, so this method will return false if you change the $keyType to string.
Second, you could directly add 'id' => 'string' to your $casts array, as the $casts value takes priority over the $keyType value when accessing the attribute. This would be safer and more semantically correct than changing the $keyType value.
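For example, a minimal sketch of that second option, assuming the default id primary key and the casts shown above:

protected $casts = [
    'id' => 'string',
    'displayname_id' => 'int',
    'extension_id' => 'int',
];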
And third, if you wanted the id to be treated as a string only for JSON conversions, you could override the jsonSerialize() method on your model.
public function jsonSerialize()
{
    $data = parent::jsonSerialize();

    if (isset($data[$this->primaryKey])) {
        // Cast the primary key to a string for JSON output only.
        $data[$this->primaryKey] = (string) $data[$this->primaryKey];
    }

    return $data;
}
For sure, this problem is coming from your database or an old version of PHP (< 5.2.9).
But you can try passing the JSON_NUMERIC_CHECK option to your json_encode and you are done; it will automatically turn strings representing numbers into numbers:
Native PHP solution:
echo json_encode($a, JSON_NUMERIC_CHECK);
Online Example
Laravel solution:
return response()->json($a, 200, [], JSON_NUMERIC_CHECK);

Securely pass credentials to DSC Extension from ARM Template

According to https://learn.microsoft.com/en-gb/azure/virtual-machines/windows/extensions-dsc-template, the latest method for passing credentials from an ARM template to a DSC extension is by placing the whole credential within the configurationArguments of the protectedSettings section, as shown below:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.24",
"autoUpgradeMinorVersion": true,
"settings": {
"wmfVersion": "latest",
"configuration": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('artifactsProjectFolder'), '/', variables('dscArchiveFolder'), '/', variables('dscSitecoreInstallArchiveFileName'))]",
"script": "[variables('dscSitecoreInstallScriptName')]",
"function": "SitecoreInstall"
},
"configurationArguments": {
"nodeName": "[parameters('CMCD VMName')]",
"sitecorePackageUrl": "[concat(parameters('sitecorePackageLocation'), '/', parameters('sitecoreRelease'), '/', parameters('sitecorePackageFilename'))]",
"sitecorePackageUrlSasToken": "[parameters('sitecorePackageLocationSasToken')]",
"sitecoreLicense": "[concat(parameters('sitecorePackageLocation'), '/', parameters('sitecoreLicenseFilename'))]",
"domainName": "[parameters('domainName')]",
"joinOU": "[parameters('domainOrgUnit')]"
},
"configurationData": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('artifactsProjectFolder'), '/', variables('dscArchiveFolder'), '/', variables('dscSitecoreInstallConfigurationName'))]"
}
},
"protectedSettings": {
"configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]",
"configurationDataUrlSasToken": "[parameters('_artifactsLocationSasToken')]",
"configurationArguments": {
"domainJoinCredential": {
"userName": "[parameters('domainJoinUsername')]",
"password": "[parameters('domainJoinPassword')]"
}
}
}
}
Azure DSC is supposed to handle the encrypting/decrypting of the protectedSettings for me. This does appear to work, as I can see that the protectedSettings are encrypted within the settings file on the VM, however the operation ultimately fails with:
VM has reported a failure when processing extension 'dsc-sitecore-dev-install'. Error message: "The DSC Extension received an incorrect input: Compilation errors occurred while processing configuration 'SitecoreInstall'. Please review the errors reported in error stream and modify your configuration code appropriately. System.InvalidOperationException error processing property 'Credential' OF TYPE 'xComputer': Converting and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
At C:\Packages\Plugins\Microsoft.Powershell.DSC\2.24.0.0\DSCWork\dsc-sitecore-dev-install.0\dsc-sitecore-dev-install.ps1:103 char:3
+ xComputer
Converting and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
Another common error is to specify parameters of type PSCredential without an explicit type. Please be sure to use a typed parameter in DSC Configuration, for example:
    configuration Example {
        param([PSCredential] $UserAccount)
        ...
    }.
Please correct the input and retry executing the extension.".
The only way that I can make it work is to add PsDscAllowPlainTextPassword = $true to my configurationData, but I thought I was using the protectedSettings section to avoid using plain text passwords...
Am I doing something wrong, or is it simply that my understanding is wrong?
Proper way of doing this:
"settings": {
    "configuration": {
        "url": "xxx",
        "script": "xxx",
        "function": "xx"
    },
    "configurationArguments": {
        "param1": xxx,
        "param2": xxx
        etc...
    }
},
"protectedSettings": {
    "configurationArguments": {
        "NameOfTheCredentialsParameter": {
            "userName": "USERNAME",
            "password": "PASSWORD!1"
        }
    }
}
This way you don't need PsDscAllowPlainTextPassword = $true.
Then you can receive the parameters in your configuration with:
Configuration MyConf
{
    param (
        [PSCredential] $NameOfTheCredentialsParameter
    )
    # resources go here
}
And use it in your resource:
Registry DoNotOpenServerManagerAtLogon {
    Ensure               = "Present"
    Key                  = "HKEY_CURRENT_USER\SOFTWARE\Microsoft\ServerManager"
    ValueName            = "DoNotOpenServerManagerAtLogon"
    ValueData            = 1
    ValueType            = "Dword"
    PsDscRunAsCredential = $NameOfTheCredentialsParameter
}
The fact that you still need to use PsDscAllowPlainTextPassword = $true is documented.
Here is the quoted section:
However, currently you must tell PowerShell DSC it is okay for credentials to be outputted in plain text during node configuration MOF generation, because PowerShell DSC doesn’t know that Azure Automation will be encrypting the entire MOF file after its generation via a compilation job.
Based on the above, it seems that it is an order of operations issue. The MOF is generated and THEN encrypted.
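For reference, a minimal sketch of what the configuration data file referenced by configurationData.url might then contain (the node name here is just an illustration):

@{
    AllNodes = @(
        @{
            NodeName                    = "CMCD01"
            # Allow plain-text credentials while the MOF is compiled;
            # the DSC extension encrypts the generated MOF after compilation.
            PsDscAllowPlainTextPassword = $true
        }
    )
}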