AMP Analytics "destinationDomains" not working in Linker config - json

I am trying to enable a linker string for links to my domains from my AMP site.
The current config is working only for links to the "canonical" domain at present, which is the default behavior.
I am also trying to enable it for links that are sent to my app's domain.
I've tried many variations of the code below (including the non-valid JSON array strings set out in the documentation here: https://ampbyexample.com/advanced/joining_analytics_sessions/#destination-domains), but none of them seem to work.
I am hoping this is a syntax or config issue, but I am starting to have doubts. This is my code:
<amp-analytics type="gtag" data-credentials="include">
<script type="application/json">
{
"vars": {
"gtag_id": "AW-XXXXXX",
"config": {
"UA-XXXXX-X": {
"groups": "default"
},
"AW-XXXXXX": {
"groups": "default"
}
}
},
"linkers": {
"enabled": true,
"proxyOnly": false,
"destinationDomains": [ "amp.mydomain.com", "www.mydomain.com", "app.altdomain.ly" ]
},
"triggers": {
"trackPageview": {
"on": "visible",
"request": "pageview"
}
}
}
</script>
</amp-analytics>
I've also tried setting it out with a nested <paramName> object as follows, but I get the same result (it works on the canonical domain only):
...
"linkers": {
  "Linker1": {
    "ids": {
      "_cid": "CLIENT_ID"
    },
    "proxyOnly": false,
    "destinationDomains": [ "amp.mydomain.com", "www.mydomain.com", "app.altdomain.ly" ],
    "enabled": true
  }
}
...

Since you are using gtag, I think you might need to use gtag's own configuration to set the domains. Instructions are available here.
Basically, the config looks like this:
<amp-analytics type="gtag" data-credentials="include">
<script type="application/json">
{
"vars" : {
"gtag_id": "<GA_TRACKING_ID>",
"config" : {
"<GA_TRACKING_ID>": {
"groups": "default",
"linker": { "domains": ["example.com", "example2.com"] }
}
}
}
}
</script>
</amp-analytics>
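For illustration only, adapting that snippet to the domains from the question (keeping the question's placeholder tracking IDs) might look like this:
<amp-analytics type="gtag" data-credentials="include">
  <script type="application/json">
    {
      "vars": {
        "gtag_id": "AW-XXXXXX",
        "config": {
          "AW-XXXXXX": {
            "groups": "default",
            "linker": { "domains": ["amp.mydomain.com", "www.mydomain.com", "app.altdomain.ly"] }
          }
        }
      }
    }
  </script>
</amp-analytics>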

You can first check the proper format of linkers in AMP:
"linkers": {
<paramName>: {
ids: <Object>,
proxyOnly: <boolean>,
destinationDomains: <Array<string>>,
enabled: <boolean>
}
}
paramName - This user-defined name determines the name of the query parameter appended to the links.
ids - An object containing key-value pairs that is partially encoded and passed along in the param.
proxyOnly - (optional) Flag indicating whether the links should only be appended on pages served on a proxy origin. Defaults to true.
destinationDomains - (optional) Links will be decorated if their domains are included in this array. Defaults to canonical and source domains.
enabled - Publishers must explicitly set this to true to opt in to using this feature.
This linker uses this configuration to generate a string in this structure: <paramName>=<version>*<checkSum>*<idName1>*<idValue1>*<idName2>*<idValue2>... For more details see Linker Param Format.
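For example, with the Linker1 config from the question, a link to one of the destination domains would be decorated with a parameter of roughly this shape (the checksum and client ID values here are made-up placeholders):
https://app.altdomain.ly/landing?Linker1=1*abc123*_cid*exampleClientIdValue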

Related

How does one Configure Serilog to WriteTo AzureBlobStorage via the Storage Account Uri or a Named Connection String from appSettings.json?

Serilog configures properly when the following section, which exposes connectionString, is added to appSettings.json:
"Serilog": {
"WriteTo": [
{
"Name": "AzureBlobStorage",
"Args": {
"connectionString": "DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=...;EndpointSuffix=core.windows.net",
//"connectionStringName": "MyStorageConnectionName",
//"storageAccountUri": "https://mystorage.blob.core.windows.net",
"storageContainerName": "myapplogs",
"storageFileName": "MyApp {yyyy}-{MM}-{dd}.log",
"writeInBatches": "true", // mandatory
"period": "0.00:00:30", // mandatory
"batchPostingLimit": "50" // optional
}
}, ...
],
}
While the above works, I have thus far been unable to replace the connectionString property with either connectionStringName or, preferably, storageAccountUri (simply leveraging Managed Identities).
I have added the Serilog.Settings.Configuration 3.3.0 package as suggested here, and I am configuring Serilog as follows:
static void createLogger(ConfigureHostBuilder host)
{
    host.UseSerilog((ctx, lc) =>
    {
        lc.ReadFrom.Configuration(ctx.Configuration);
    });
}

Split OpenApi Paths into multiple path definition files

I want to split my paths (of which there are quite a few) into their own files more easily.
Let's say I've got two major paths, /user and /anotherPath, each with several subpaths. Right now I've got an OpenAPI spec file whose paths object references an index file, which in turn holds a reference for each path. Defining EVERY path with its own reference works, but it is clumsy to write.
I want something like this:
openapi.json
{
  ...
  "paths": {
    "$ref": "paths/index.json"
  }
  ...
}
paths/index.json
{
  "/user": { // and everything that comes after user, e.g. /user/{userId}
    "$ref": "./user-path.json"
  },
  "/anotherPath": { // and everything that comes after anotherPath, e.g. /anotherPath/{id}
    "$ref": "./anotherPath-path.json"
  }
}
paths/user-path.json
{
  "/user": {
    "get": {...}
  },
  "/user/{userId}": {
    "get": {...}
  }
}
paths/anotherPath-path.json
{
  "/anotherPath": {
    "get": {...}
  },
  "/anotherPath/{id}": {
    "get": {...}
  }
}
This way, whenever I add another path to /user or /anotherPath, I can simply edit their respective path file, e.g. paths/user-path.json.
EDIT1: Apparently, this topic is already being discussed. For anyone interested: https://github.com/OAI/OpenAPI-Specification/issues/417 . By the way, I know that a $ref is not valid for the paths object, but once I figure out how to split properly, it may no longer be necessary.
OpenAPI does not have a concept of sub-paths / nested paths; each path is an individual entity. The paths keyword itself does not support $ref; only individual paths can be referenced.
Given your user-path.json and anotherPath-path.json files, the correct way to reference the path definitions is as follows:
{
  ...
  "paths": {
    "/user": {
      "$ref": "paths/user-path.json#/~1user" // ~1user is /user escaped according to JSON Pointer and JSON Reference rules
    },
    "/user/{id}": {
      "$ref": "paths/user-path.json#/~1user~1%7Bid%7D" // ~1user~1%7Bid%7D is /user/{id} escaped
    },
    "/anotherPath": {
      "$ref": "paths/anotherPath-path.json#/~1anotherPath" // ~1anotherPath is /anotherPath escaped
    },
    "/anotherPath/{id}": {
      "$ref": "paths/anotherPath-path.json#/~1anotherPath~1%7Bid%7D" // ~1anotherPath~1%7Bid%7D is /anotherPath/{id} escaped
    }
  }
  ...
}
YAML version:
paths:
  /user:
    $ref: "paths/user-path.json#/~1user"
  /user/{id}:
    $ref: "paths/user-path.json#/~1user~1%7Bid%7D"
  /anotherPath:
    $ref: "paths/anotherPath-path.json#/~1anotherPath"
  /anotherPath/{id}:
    $ref: "paths/anotherPath-path.json#/~1anotherPath~1%7Bid%7D"
If you want to use $ref in arbitrary places (other than where OAS allows $refs), you'll have to pre-process your definition with a parser/tool that can resolve arbitrary $refs; this will give you a valid OpenAPI file that can be used with OpenAPI-compliant tools. One such pre-processing tool is json-refs; you can find an example of pre-processing here.
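As a sketch of that pre-processing step, assuming the file layout from the question, the json-refs CLI can inline everything into one self-contained file (exact flags may vary by version):
npm install json-refs
npx json-refs resolve openapi.json > openapi.resolved.json
The resolved output then has all $refs inlined and should validate as a normal OpenAPI document.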

ASP.NET Core 3 - Serilog how to configure Serilog.Sinks.Map in appsettings.json file?

I came across the Serilog.Sinks.Map add-on today, which will solve my challenge of routing specific log events to a specific sink interface. In my environment, I am writing to a log file as well as using the SQL interface, but I only want certain logs to be written to SQL Server.
Reading the instructions on GitHub by the author, I can only see an example of implementing the LoggerConfiguration in C# in Program.cs, but I am using the appsettings.json file and am unsure how to translate the provided example into the required JSON format.
Example given by Serilog on GitHub:
Log.Logger = new LoggerConfiguration()
    .WriteTo.Map("Name", "Other", (name, wt) => wt.File($"./logs/log-{name}.txt"))
    .CreateLogger();
My current configuration (note I haven't implemented Sinks.Map in my code yet):
Program.cs file:
public static void Main(string[] args)
{
    // Build a configuration system with the path of the appsettings.json file.
    // This is because we don't yet have dependency injection available; that comes later.
    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    Log.Logger = new LoggerConfiguration()
        .ReadFrom.Configuration(configuration)
        .CreateLogger();

    var host = CreateHostBuilder(args).Build();
}
And here is my appsettings.json file. I want to be able to configure the 'MSSqlServer' sink as the special route, and then use the standard file sink for all other general logging.
"AllowedHosts": "*",
"Serilog": {
"Using": [],
"MinumumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId", "WithThreadId" ],
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
//"path": "C:\\NetCoreLogs\\log.txt", // Example path to Windows Drive.
"path": ".\\Logs\\logs.txt",
//"rollingInterval": "Day", // Not currently in use.
"rollOnFileSizeLimit": true,
//"retainedFileCountLimit": null, // Not currently in use.
"fileSizeLimitBytes": 10000000,
"outputTemplate": "{Timestamp:dd-MM-yyyy HH:mm:ss.fff G} {Message}{NewLine:1}{Exception:1}"
// *Template Notes*
// Timestamp 'G' means UTC Time
}
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "DefaultConnection",
"schemaName": "EventLogging",
"tableName": "Logs",
"autoCreateSqlTable": true,
"restrictedToMinimumLevel": "Information",
"batchPostingLimit": 1000,
"period": "0.00:00:30"
}
}
//{
// "Name": "File",
// "Args": {
// "path": "C:\\NetCoreLogs\\log.json",
// "formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog"
// }
//}
]
}
Lastly, if I could squeeze in another quick question on the topic: when using the SQL sink, how do I manage automatic purging/deletion of the oldest events? I.e., the DB should only store a maximum of 1,000,000 events and then automatically overwrite the oldest ones first. Thanks in advance.
I believe it is currently impossible to configure the standard Map call in JSON, since it relies on a few types that have no serialization support right now, like Action<T1, T2>. I created an issue to discuss this in the repository itself:
Unable to configure default Map call in json? #22
However, there is still a way to get some functionality out of it in JSON, by creating a custom extension method. In your particular case, it would be something like this:
public static class SerilogSinkConfigurationExtensions
{
    public static LoggerConfiguration MapToFile(
        this LoggerSinkConfiguration loggerSinkConfiguration,
        string keyPropertyName,
        string pathFormat,
        string defaultKey)
    {
        return loggerSinkConfiguration.Map(
            keyPropertyName,
            defaultKey,
            (key, config) => config.File(string.Format(pathFormat, key)));
    }
}
Then, in your JSON file, add a section like this:
"WriteTo": [
...
{
"Name": "MapToFile",
"Args": {
"KeyPropertyName": "Name",
"DefaultKey": "Other",
"PathFormat": "./logs/log-{0}.txt"
}
}
]
To have these customizations work properly, Serilog needs to understand that your assembly contains these kinds of extensions, so it can load them during the parsing stage. As per the documentation, you either need to have the extensions in a *.Serilog.* assembly, or add the Using clause to the JSON:
// Assuming the extension method is inside the "Company.Domain.MyProject" dll
"Using": [ "Company.Domain.MyProject" ]
More information on these constraints here:
https://github.com/serilog/serilog-settings-configuration#using-section-and-auto-discovery-of-configuration-assemblies
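Putting the answer's pieces together, a minimal sketch of the resulting Serilog section (assuming the hypothetical Company.Domain.MyProject assembly from above holds the MapToFile extension) could look like this:
"Serilog": {
  "Using": [ "Company.Domain.MyProject" ],
  "WriteTo": [
    {
      "Name": "MapToFile",
      "Args": {
        "KeyPropertyName": "Name",
        "DefaultKey": "Other",
        "PathFormat": "./logs/log-{0}.txt"
      }
    }
  ]
}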

Using JSON arrays in SenseNet settings

If JSON arrays are used in a SenseNet settings object, they are not accessible via the OData API.
For example, consider the following SenseNet settings object, which comes installed at Root/System/Settings/Portal.settings by default:
{
  ClientCacheHeaders: [
    { ContentType: "PreviewImage", MaxAge: 1 },
    { Extension: "jpeg", MaxAge: 604800 },
    { Extension: "gif", MaxAge: 604800 },
    { Extension: "jpg", MaxAge: 604800 },
    { Extension: "png", MaxAge: 604800 },
    { Extension: "swf", MaxAge: 604800 },
    { Extension: "css", MaxAge: 600 },
    { Extension: "js", MaxAge: 600 }
  ],
  UploadFileExtensions: {
    "jpg": "Image",
    "jpeg": "Image",
    "gif": "Image",
    "png": "Image",
    "bmp": "Image",
    "svg": "Image",
    "svgz": "Image",
    "tif": "Image",
    "tiff": "Image",
    "xaml": "WorkflowDefinition",
    "DefaultContentType": "File"
  },
  BinaryHandlerClientCacheMaxAge: 600,
  PermittedAppsWithoutOpenPermission: "Details"
}
When viewing this object through the OData API, the ClientCacheHeaders field is not included:
{
  "d": {
    "UploadFileExtensions.jpg": "Image",
    "UploadFileExtensions.jpeg": "Image",
    "UploadFileExtensions.gif": "Image",
    "UploadFileExtensions.png": "Image",
    "UploadFileExtensions.bmp": "Image",
    "UploadFileExtensions.svg": "Image",
    "UploadFileExtensions.svgz": "Image",
    "UploadFileExtensions.tif": "Image",
    "UploadFileExtensions.tiff": "Image",
    "UploadFileExtensions.xaml": "WorkflowDefinition",
    "UploadFileExtensions.DefaultContentType": "File",
    "BinaryHandlerClientCacheMaxAge": 600,
    "PermittedAppsWithoutOpenPermission": "Details"
  }
}
If you search specifically for the ClientCacheHeaders field using the following query:
Odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=ClientCacheHeaders
the API returns null:
{
  "d": {
    "ClientCacheHeaders": null
  }
}
I know that JSON arrays are allowed in settings files because the above example is referenced in the SenseNet wiki page describing settings usage.
Am I performing my OData query incorrectly, or is this some sort of parsing bug in the SenseNet API?
Here's an implementation of the custom OData function suggested by Miklos. Once this is done, you have to register the OData call as described here.
public static class OData
{
    [ODataFunction]
    public static string GetMySettings(Content content)
    {
        var retstr = "";
        try
        {
            var settingsFile = Settings.GetSettingsByName<Settings>("MySettings", content.Path);
            var node = Node.LoadNode(settingsFile.Path) as Settings;
            var bindata = node.GetBinary("Binary");
            using (var sr = bindata.GetStream())
            using (var tr = new System.IO.StreamReader(sr))
                retstr = tr.ReadToEnd();
        }
        catch (Exception e)
        {
            SnLog.WriteException(e);
        }
        return retstr;
    }
}
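Once registered, the function would be invoked like any other SenseNet OData member function. The exact URL depends on your setup; as an illustrative (not verified) example:
/OData.svc/Root/System/Settings('MySettings.settings')/GetMySettings
The response body is then the raw settings JSON returned by GetMySettings, arrays included.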
This is a limitation of the current dynamic JSON field conversion behind the OData API. It actually converts these settings JSON properties to sensenet fields, so what you see in the OData response is not the actual settings JSON, but only the fragments that can be converted to sensenet fields (for the curious: this happens in the JsonDynamicFieldHelper class, in the BuildDynamicFieldMetadata method).
And unfortunately there is no built-in field type in sensenet for handling JSON arrays, so it is not possible to convert a JSON array to a field value, which is why the system skips it.
Workaround 1
Get the raw settings JSON in JavaScript in two steps. The following request gives you the binary field's direct URL:
/odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=Binary
...which will be something like this:
/binaryhandler.ashx?nodeid=1084&propertyname=Binary&checksum=1344168
...and if you load that, you'll get the full raw JSON, including the array.
Please note: Settings are not accessible to visitors by default, for a reason: they may contain sensitive information. So if you want to let your users access settings directly (the way you tried, or the way described in the first workaround above), you'll have to give open permission for the necessary user groups on those setting files. This is not the case with the second workaround.
Workaround 2
Create a custom OData action that returns settings in a format of your choice from the server. This is a better solution, because this way you control which parts of a settings file are actually accessible to the client.

How to solve "error executing aggreation pipeline: variable 'XYZ' not bound" in MongoDB aggregation?

I am using RESTHeart and MongoDB, and I am new to both. I need to write an aggregation in MongoDB, and I have written one with $match.
Here is the sample code:
{
  "aggrs": [{
    "type": "pipeline",
    "uri": "aggregation_by_time",
    "stages": [{
      "_$match": {
        "bus::destination": {
          "_$in": {
            "_$var": "stand"
          }
        },
        "bus::eta": {
          "_$gte": {
            "_$var": "fromDate"
          },
          "_$lte": {
            "_$var": "toDate"
          }
        },
        "tickets": {
          "_$eq": {
            "_$var": "isConfirmedTravel"
          }
        }
      }
    }]
  }]
}
Here is a sample access URL:
http://..xyz../_aggrs/aggregation_by_time?avars={"stand":["A","B","C","D","E","F"]}
When I access this URL, it does not work and displays an error.
Error:
{"http status code":400,"http status description":"Bad Request","message":"error executing aggreation pipeline: variable isConfirmedTravel not bound"}
When I access the URL below, it works:
http://..xyz../_aggrs/aggregation_by_time?avars={"stand":["A","B","C","D","E","F"],"isConfirmedTravel":"true"}
So I want to make the $match optional: if I don't pass "isConfirmedTravel" as a parameter in the URL, the aggregation should still work, and if I do send it as a parameter, it should also work. At the moment, any field I put in $match must also be supplied in the URL. That's why I want to make the "isConfirmedTravel":"true" field optional, so the URL works whether or not I include it.