ASP.NET Core 3 - Serilog: how to configure Serilog.Sinks.Map in the appsettings.json file?

I came across the Serilog.Sinks.Map add-on today, which will solve my challenge of routing specific log events to a specific sink. In my environment I am writing to a log file as well as to SQL Server, but I only want certain logs to be written to SQL Server.
Reading the instructions on GitHub by the author, I can only see an example implementing the LoggerConfiguration through C# in Program.cs, but I am using the appsettings.json file and am unsure how to translate the provided example into the required json format.
Example given by Serilog on GitHub:
Log.Logger = new LoggerConfiguration()
    .WriteTo.Map("Name", "Other", (name, wt) => wt.File($"./logs/log-{name}.txt"))
    .CreateLogger();
My current configuration (note that I haven't implemented Sinks.Map in my code yet).
Program.cs file:
public static void Main(string[] args)
{
    // Build a configuration source from the appsettings.json file.
    // This is because we don't yet have dependency injection available; that comes later.
    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    Log.Logger = new LoggerConfiguration()
        .ReadFrom.Configuration(configuration)
        .CreateLogger();

    var host = CreateHostBuilder(args).Build();
    host.Run();
}
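For reference, the CreateHostBuilder called above is not shown in the question; with the Serilog.AspNetCore package it would typically plug the logger into the generic host like this (a minimal sketch; the Startup class is assumed):
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // Use the static Log.Logger configured in Main for all host logging.
        .UseSerilog()
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });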
And here is my appsettings.json file. I want to be able to configure the 'MSSqlServer' sink as the special route, and use the standard file sink for all other general logging.
"AllowedHosts": "*",
"Serilog": {
"Using": [],
"MinumumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId", "WithThreadId" ],
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
//"path": "C:\\NetCoreLogs\\log.txt", // Example path to Windows Drive.
"path": ".\\Logs\\logs.txt",
//"rollingInterval": "Day", // Not currently in use.
"rollOnFileSizeLimit": true,
//"retainedFileCountLimit": null, // Not currently in use.
"fileSizeLimitBytes": 10000000,
"outputTemplate": "{Timestamp:dd-MM-yyyy HH:mm:ss.fff G} {Message}{NewLine:1}{Exception:1}"
// *Template Notes*
// Timestamp 'G' means UTC Time
}
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "DefaultConnection",
"schemaName": "EventLogging",
"tableName": "Logs",
"autoCreateSqlTable": true,
"restrictedToMinimumLevel": "Information",
"batchPostingLimit": 1000,
"period": "0.00:00:30"
}
}
//{
// "Name": "File",
// "Args": {
// "path": "C:\\NetCoreLogs\\log.json",
// "formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog"
// }
//}
]
}
Lastly, if I could squeeze in another quick question on the topic: when using the SQL sink, how do I manage automatic purging/deletion of the oldest events? I.e. the DB should store at most 1,000,000 events, then automatically overwrite the oldest event first. Thanks in advance.

I believe it is currently impossible to configure the standard Map call in json, since it relies on a few types that have no serialization support right now, like Action<T1, T2>. I created an issue to discuss this in the repository itself:
Unable to configure default Map call in json? #22
However, there is a way to still get some functionality out of it in Json, by creating a custom extension method. In your particular case, it would be something like this:
public static class SerilogSinkConfigurationExtensions
{
    public static LoggerConfiguration MapToFile(
        this LoggerSinkConfiguration loggerSinkConfiguration,
        string keyPropertyName,
        string pathFormat,
        string defaultKey)
    {
        return loggerSinkConfiguration.Map(
            keyPropertyName,
            defaultKey,
            (key, config) => config.File(string.Format(pathFormat, key)));
    }
}
Then, in your json file, add a section like this:
"WriteTo": [
...
{
"Name": "MapToFile",
"Args": {
"KeyPropertyName": "Name",
"DefaultKey": "Other",
"PathFormat": "./logs/log-{0}.txt"
}
}
]
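Note that the map only routes events that actually carry the key property. With Serilog's standard API that is usually done via ForContext — a minimal sketch (the "Payments" value is hypothetical):
// Carries Name = "Payments", so the map writes it to ./logs/log-Payments.txt.
Log.ForContext("Name", "Payments").Information("Payment received");

// Carries no Name property, so it falls back to the default key, ./logs/log-Other.txt.
Log.Information("General application event");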
To have these customizations work properly, Serilog needs to know that your assembly contains these kinds of extensions, so it can load them during the parsing stage. As per the documentation, you either need to place these extensions in a *.Serilog.* assembly, or add the Using clause in the json:
// Assuming the extension method is inside the "Company.Domain.MyProject" dll
"Using": [ "Company.Domain.MyProject" ]
More information on these constraints here:
https://github.com/serilog/serilog-settings-configuration#using-section-and-auto-discovery-of-configuration-assemblies
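As an aside: if the underlying goal is just "only certain events go to SQL Server", serilog-settings-configuration can also express that without Map, using a filtered sub-logger. A sketch, assuming the Serilog.Expressions (or the older Serilog.Filters.Expressions) package supplies the ByIncludingOnly expression filter; the Name = 'SQL' expression is hypothetical:
"WriteTo": [
  {
    "Name": "Logger",
    "Args": {
      "configureLogger": {
        "Filter": [
          {
            "Name": "ByIncludingOnly",
            "Args": { "expression": "Name = 'SQL'" }
          }
        ],
        "WriteTo": [
          { "Name": "MSSqlServer", "Args": { "connectionString": "DefaultConnection" } }
        ]
      }
    }
  }
]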

Related

How does one Configure Serilog to WriteTo AzureBlobStorage via the Storage Account Uri or a Named Connection String from appSettings.json?

Serilog configures properly when the following section, which exposes connectionString, is added to appSettings.json:
"Serilog": {
"WriteTo": [
{
"Name": "AzureBlobStorage",
"Args": {
"connectionString": "DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=...;EndpointSuffix=core.windows.net",
//"connectionStringName": "MyStorageConnectionName",
//"storageAccountUri": "https://mystorage.blob.core.windows.net",
"storageContainerName": "myapplogs",
"storageFileName": "MyApp {yyyy}-{MM}-{dd}.log",
"writeInBatches": "true", // mandatory
"period": "0.00:00:30", // mandatory
"batchPostingLimit": "50" // optional
}
}, ...
],
}
While the above works, I have thus far been unable to replace the connectionString property with either connectionStringName or, preferably, storageAccountUri (simply leveraging Managed Identities).
I have added the Serilog.Settings.Configuration 3.3.0 package as suggested here, and I am configuring Serilog as follows:
static void createLogger(ConfigureHostBuilder host) {
    host.UseSerilog((ctx, lc) => {
        lc.ReadFrom.Configuration(ctx.Configuration);
    });
}

How to use custom_data parameter in ARM template in Terraform?

I have an Azure ARM template that successfully bootstraps a VM from a file directory within an Azure Storage Account. I would like to get this working in Terraform, but I am really struggling to get it to work correctly.
Here is a working Azure ARM template that creates the VM and bootstraps it with files in an Azure storage account. The bootstrapping occurs by using the customData parameter.
"variables": {
"uniqueId": "[uniqueString(resourceGroup().id)]",
"customData": "[concat('storage-account=', parameters('STORAGE_ACCOUNT'), ',access-key=', parameters('ACCESS_KEY'), ',file-share=', parameters('FILE_SHARE'), ',share-directory=', parameters('SHARE_DIRECTORY'))]"
},
"resources": [
{
"apiVersion": "2016-04-30-preview",
"type": "Microsoft.Compute/virtualMachines",
"name": "MY-VM",
"location": "[resourceGroup().location]",
"properties": {
"hardwareProfile": {
"vmSize": "Standard_DS3_v2"
},
"osProfile": {
"computerName": "My-Computer-Name",
"adminUsername": "[parameters('Username')]",
"adminPassword": "[parameters('Password')]",
"customData": "[base64(variables('customData'))]"
}
}
}
Here is my Terraform script, which fails when I try to do the same type of bootstrapping.
resource "azurerm_virtual_machine" "MY-VM" {
name = "${var.vm_name}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
vm_size = "${var.vm_size}"
primary_network_interface_id = "${azurerm_network_interface.nic0.id}"
os_profile {
computer_name = "${var.vm_name}"
admin_username = "${var.adminuser}"
admin_password = "${var.adminuserpassword}"
custom_data = "${base64encode(join("", list("storage-account=", var.STORAGE_ACCOUNT, ",access-key=", var.ACCESS_KEY, ",file-share=", var.FILE_SHARE, ",share-directory=None")))}"
}
}
This is the error that I receive when I run it. If I do not use the custom_data field, the machine launches fine, but is not bootstrapped. I am out of ideas here.
azurerm_virtual_machine.MY-VM: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidRequestFormat" Message="Cannot parse the request." Details=[]
I don't think join works like that for strings. For your case you can just do:
"storage-account=${var.STORAGE_ACCCOUNT},access-key=${var.ACCESS_KEY},file-share=${var.FILE_SHARE},share-directory=None"

Using JSON arrays in SenseNet settings

If JSON arrays are used in a SenseNet settings object, they are not accessible via the OData API.
For example, consider the following SenseNet settings object, which comes installed at Root/System/Settings/Portal.settings by default:
{
  ClientCacheHeaders: [
    { ContentType: "PreviewImage", MaxAge: 1 },
    { Extension: "jpeg", MaxAge: 604800 },
    { Extension: "gif", MaxAge: 604800 },
    { Extension: "jpg", MaxAge: 604800 },
    { Extension: "png", MaxAge: 604800 },
    { Extension: "swf", MaxAge: 604800 },
    { Extension: "css", MaxAge: 600 },
    { Extension: "js", MaxAge: 600 }
  ],
  UploadFileExtensions: {
    "jpg": "Image",
    "jpeg": "Image",
    "gif": "Image",
    "png": "Image",
    "bmp": "Image",
    "svg": "Image",
    "svgz": "Image",
    "tif": "Image",
    "tiff": "Image",
    "xaml": "WorkflowDefinition",
    "DefaultContentType": "File"
  },
  BinaryHandlerClientCacheMaxAge: 600,
  PermittedAppsWithoutOpenPermission: "Details"
}
When viewing this object through the OData API, the ClientCacheHeaders field is not included:
{
  "d": {
    "UploadFileExtensions.jpg": "Image",
    "UploadFileExtensions.jpeg": "Image",
    "UploadFileExtensions.gif": "Image",
    "UploadFileExtensions.png": "Image",
    "UploadFileExtensions.bmp": "Image",
    "UploadFileExtensions.svg": "Image",
    "UploadFileExtensions.svgz": "Image",
    "UploadFileExtensions.tif": "Image",
    "UploadFileExtensions.tiff": "Image",
    "UploadFileExtensions.xaml": "WorkflowDefinition",
    "UploadFileExtensions.DefaultContentType": "File",
    "BinaryHandlerClientCacheMaxAge": 600,
    "PermittedAppsWithoutOpenPermission": "Details"
  }
}
If you search specifically for the ClientCacheHeaders field using the following query:
Odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=ClientCacheHeaders
the API returns null:
{
  "d": {
    "ClientCacheHeaders": null
  }
}
I know that JSON arrays are allowed in settings files because the above example is referenced in the SenseNet wiki page describing settings usage.
Am I performing my OData query incorrectly, or is this some sort of parsing bug in the SenseNet API?
Here's an implementation of the custom OData function suggested by Miklos. Once this is done, you have to register the OData call as described here.
public static class OData
{
    [ODataFunction]
    public static string GetMySettings(Content content)
    {
        var retstr = "";
        try
        {
            // Load the settings content and read its raw binary, i.e. the full json text.
            var settingsFile = Settings.GetSettingsByName<Settings>("MySettings", content.Path);
            var node = Node.LoadNode(settingsFile.Path) as Settings;
            var bindata = node.GetBinary("Binary");
            using (var sr = bindata.GetStream())
            using (var tr = new System.IO.StreamReader(sr))
                retstr = tr.ReadToEnd();
        }
        catch (Exception e)
        {
            SnLog.WriteException(e);
        }
        return retstr;
    }
}
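Once registered, the function can be invoked like any other OData member on a content item; following the URL shape used elsewhere in this question, the call would look something like this (path hypothetical):
/Odata.svc/Root/System/Settings('MySettings.settings')/GetMySettings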
This is a limitation of the current dynamic json field conversion behind the odata api. It actually converts these settings json properties to sensenet fields, so what you see in the odata response is not the actual settings json, only the fragments that can be converted to sensenet fields (for the curious: this happens in the JsonDynamicFieldHelper class, in the BuildDynamicFieldMetadata method).
Unfortunately there is no built-in field type in sensenet for handling json arrays, so it is not possible to convert a json array to a field value; this is why the system skips it.
Workaround 1
Get the raw settings json in javascript in two steps. The following request gives you the binary field's direct url:
/odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=Binary
...something like this:
/binaryhandler.ashx?nodeid=1084&propertyname=Binary&checksum=1344168
...and if you load that, you'll get the full raw json, including the array.
Please note: Settings are not accessible to visitors by default, for a reason: they may contain sensitive information. So if you want to let your users access settings directly (the way you tried, or the way described in the first workaround above), you'll have to give open permission to the necessary user groups on those settings files. This is not the case with the second workaround.
Workaround 2
Create a custom odata action that returns settings in a format of your choice from the server. This is a better solution, because this way you control which parts of a settings file are actually accessible to the client.

Securely pass credentials to DSC Extension from ARM Template

According to https://learn.microsoft.com/en-gb/azure/virtual-machines/windows/extensions-dsc-template, the latest method for passing credentials from an ARM template to a DSC extension is by placing the whole credential within the configurationArguments of the protectedSettings section, as shown below:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.24",
"autoUpgradeMinorVersion": true,
"settings": {
"wmfVersion": "latest",
"configuration": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('artifactsProjectFolder'), '/', variables('dscArchiveFolder'), '/', variables('dscSitecoreInstallArchiveFileName'))]",
"script": "[variables('dscSitecoreInstallScriptName')]",
"function": "SitecoreInstall"
},
"configurationArguments": {
"nodeName": "[parameters('CMCD VMName')]",
"sitecorePackageUrl": "[concat(parameters('sitecorePackageLocation'), '/', parameters('sitecoreRelease'), '/', parameters('sitecorePackageFilename'))]",
"sitecorePackageUrlSasToken": "[parameters('sitecorePackageLocationSasToken')]",
"sitecoreLicense": "[concat(parameters('sitecorePackageLocation'), '/', parameters('sitecoreLicenseFilename'))]",
"domainName": "[parameters('domainName')]",
"joinOU": "[parameters('domainOrgUnit')]"
},
"configurationData": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('artifactsProjectFolder'), '/', variables('dscArchiveFolder'), '/', variables('dscSitecoreInstallConfigurationName'))]"
}
},
"protectedSettings": {
"configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]",
"configurationDataUrlSasToken": "[parameters('_artifactsLocationSasToken')]",
"configurationArguments": {
"domainJoinCredential": {
"userName": "[parameters('domainJoinUsername')]",
"password": "[parameters('domainJoinPassword')]"
}
}
}
}
Azure DSC is supposed to handle the encrypting/decrypting of the protectedSettings for me. This does appear to work, as I can see that the protectedSettings are encrypted within the settings file on the VM; however, the operation ultimately fails with:
VM has reported a failure when processing extension 'dsc-sitecore-dev-install'. Error message: "The DSC Extension received an incorrect input: Compilation errors occurred while processing configuration 'SitecoreInstall'. Please review the errors reported in error stream and modify your configuration code appropriately. System.InvalidOperationException error processing property 'Credential' OF TYPE 'xComputer': Converting and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
At C:\Packages\Plugins\Microsoft.Powershell.DSC\2.24.0.0\DSCWork\dsc-sitecore-dev-install.0\dsc-sitecore-dev-install.ps1:103 char:3
+ xComputer Converting and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729 Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist. Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
Another common error is to specify parameters of type PSCredential without an explicit type. Please be sure to use a typed parameter in DSC Configuration, for example:

configuration Example {
    param([PSCredential] $UserAccount)
    ...
}.

Please correct the input and retry executing the extension.".
The only way that I can make it work is to add PsDscAllowPlainTextPassword = $true to my configurationData, but I thought I was using the protectedSettings section to avoid using plain text passwords...
Am I doing something wrong, or is it simply that my understanding is wrong?
The proper way of doing this:
"settings": {
"configuration": {
"url": "xxx",
"script": "xxx",
"function": "xx"
},
"configurationArguments": {
"param1": xxx,
"param2": xxx
etc...
}
},
"protectedSettings": {
"configurationArguments": {
"NameOfTheCredentialsParameter": {
"userName": "USERNAME",
"password": "PASSWORD!1"
}
}
}
This way you don't need PsDscAllowPlainTextPassword = $true.
Then you can receive the parameters in your Configuration with:
Configuration MyConf
{
    param (
        [PSCredential] $NameOfTheCredentialsParameter
    )
    ...
}
And use it in your resource:
Registry DoNotOpenServerManagerAtLogon {
    Ensure               = "Present"
    Key                  = "HKEY_CURRENT_USER\SOFTWARE\Microsoft\ServerManager"
    ValueName            = "DoNotOpenServerManagerAtLogon"
    ValueData            = 1
    ValueType            = "Dword"
    PsDscRunAsCredential = $NameOfTheCredentialsParameter
}
The fact that you still need to use PsDscAllowPlainTextPassword = $true is documented.
Here is the quoted section:
However, currently you must tell PowerShell DSC it is okay for credentials to be outputted in plain text during node configuration MOF generation, because PowerShell DSC doesn’t know that Azure Automation will be encrypting the entire MOF file after its generation via a compilation job.
Based on the above, it seems that it is an order of operations issue. The MOF is generated and THEN encrypted.

Generate java classes from a JSON schema

I would like to generate Java classes from a given JSON Schema, draft 4 version.
I evaluated a couple of tools, and jsonschema2pojo was found to be useful. But it supports JSON Schema draft-3 only (although draft 4 is on their roadmap).
Can anyone suggest a tool or a way to generate Java classes from a JSON schema (compliant with JSON Schema draft 4)?
Thanks in advance.
You might try cog, a general-purpose code generator written in Ruby. I put a simple project on GitHub called json2java which demonstrates how cog might be used to generate Java classes from json data.
Not sure exactly what you want to do, but here is what I assumed. The json data would look something like this:
{
  "classname": "Sample",
  "methods": [
    {
      "name": "foo",
      "rtype": "void",
      "params": [
        {
          "name": "arg1",
          "type": "int"
        }
      ]
    },
    {
      "name": "bar",
      "rtype": "int",
      "params": []
    }
  ]
}
And the corresponding Java class would look something like this:
public class Sample {
    void foo(int arg1) {
        // keep: foo {
        // While the interface in this example is generated,
        // the method bodies are preserved between multiple invocations
        // of the generator.
        // It doesn't have to be done this way; the method bodies can be
        // generated as well. It all depends on what your json data encodes.
        // keep: }
    }

    int bar() {
        // keep: bar {
        return 1;
        // keep: }
    }
}
If you want to try cog, install it with gem install cog, and run generators with cog gen. Check out the cog homepage for documentation.