If JSON arrays are used in a SenseNet settings object, they are not accessible via the OData API.
For example, consider the following SenseNet settings object, which comes installed at Root/System/Settings/Portal.settings by default:
{
    ClientCacheHeaders: [
        { ContentType: "PreviewImage", MaxAge: 1 },
        { Extension: "jpeg", MaxAge: 604800 },
        { Extension: "gif", MaxAge: 604800 },
        { Extension: "jpg", MaxAge: 604800 },
        { Extension: "png", MaxAge: 604800 },
        { Extension: "swf", MaxAge: 604800 },
        { Extension: "css", MaxAge: 600 },
        { Extension: "js", MaxAge: 600 }
    ],
    UploadFileExtensions: {
        "jpg": "Image",
        "jpeg": "Image",
        "gif": "Image",
        "png": "Image",
        "bmp": "Image",
        "svg": "Image",
        "svgz": "Image",
        "tif": "Image",
        "tiff": "Image",
        "xaml": "WorkflowDefinition",
        "DefaultContentType": "File"
    },
    BinaryHandlerClientCacheMaxAge: 600,
    PermittedAppsWithoutOpenPermission: "Details"
}
When viewing this object through the OData API, the ClientCacheHeaders field is not included:
{
    "d": {
        "UploadFileExtensions.jpg": "Image",
        "UploadFileExtensions.jpeg": "Image",
        "UploadFileExtensions.gif": "Image",
        "UploadFileExtensions.png": "Image",
        "UploadFileExtensions.bmp": "Image",
        "UploadFileExtensions.svg": "Image",
        "UploadFileExtensions.svgz": "Image",
        "UploadFileExtensions.tif": "Image",
        "UploadFileExtensions.tiff": "Image",
        "UploadFileExtensions.xaml": "WorkflowDefinition",
        "UploadFileExtensions.DefaultContentType": "File",
        "BinaryHandlerClientCacheMaxAge": 600,
        "PermittedAppsWithoutOpenPermission": "Details"
    }
}
If you search specifically for the ClientCacheHeaders field using the following query:
Odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=ClientCacheHeaders
the API returns null:
{
    "d": {
        "ClientCacheHeaders": null
    }
}
I know that JSON arrays are allowed in settings files because the above example is referenced in the SenseNet wiki page describing settings usage.
Am I performing my OData query incorrectly, or is this some sort of parsing bug in the SenseNet API?
Here's an implementation of the custom OData function suggested by Miklos. Once this is done, you have to register the OData call as described here.
public static class OData
{
    [ODataFunction]
    public static string GetMySettings(Content content)
    {
        var retstr = "";
        try
        {
            // Find the nearest MySettings file relevant for the given content path.
            var settingsFile = Settings.GetSettingsByName<Settings>("MySettings", content.Path);
            // Read the raw binary of the settings file, i.e. the full JSON text,
            // bypassing the dynamic field conversion that drops arrays.
            var bindata = settingsFile.GetBinary("Binary");
            using (var stream = bindata.GetStream())
            using (var reader = new System.IO.StreamReader(stream))
                retstr = reader.ReadToEnd();
        }
        catch (Exception e)
        {
            SnLog.WriteException(e);
        }
        return retstr;
    }
}
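For reference, calling the registered function from the client could look something like this (a sketch; the content path is hypothetical, and depending on the sensenet version the string return value may arrive wrapped in an OData envelope, so inspect the raw response first):

// Call the custom function on a content item; adjust the path to the content
// whose settings context you need.
fetch("/OData.svc/Root/Sites('MySite')/GetMySettings", { credentials: "include" })
    .then(resp => resp.text())
    .then(raw => console.log(raw)); // the raw settings JSON, arrays included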
This is a limitation of the current dynamic JSON field conversion behind the OData API. It converts these settings JSON properties to sensenet fields, so what you see in the OData response is not the actual settings JSON, but only the fragments that can be converted to sensenet fields (for the curious: this happens in the JsonDynamicFieldHelper class, in the BuildDynamicFieldMetadata method).
Unfortunately, there is no built-in field type in sensenet for handling JSON arrays, so a JSON array cannot be converted to a field value; this is why the system skips it.
Workaround 1
Get the raw settings JSON in JavaScript in two steps. The following request gives you the binary field's direct URL:
/odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=Binary
The response contains a direct URL, something like this:
/binaryhandler.ashx?nodeid=1084&propertyname=Binary&checksum=1344168
If you load that URL, you'll get the full raw JSON, including the array. Wired together, the two steps might look like the sketch below.
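A minimal sketch, assuming the caller is signed in and has permission on the settings file (the exact shape of the Binary field in the OData response may vary by sensenet version):

// Step 1: ask OData for the Binary field to obtain the binaryhandler URL.
fetch("/odata.svc/Root/System/Settings('Portal.settings')?&metadata=no&$select=Binary",
      { credentials: "include" })
    .then(resp => resp.json())
    .then(data => {
        // The direct URL lives under the field's media resource descriptor.
        var binaryUrl = data.d.Binary.__mediaresource.media_src;
        // Step 2: load the binary itself, i.e. the full raw settings JSON.
        return fetch(binaryUrl, { credentials: "include" });
    })
    .then(resp => resp.text())
    .then(raw => {
        // Settings files may use relaxed JSON (unquoted keys), so a strict
        // JSON.parse can fail; fall back to a lenient parser if needed.
        console.log(raw); // includes ClientCacheHeaders
    });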
Please note: settings are not accessible to visitors by default, for a reason: they may contain sensitive information. So if you want to let your users access settings directly (the way you tried, or the way described in the first workaround above), you'll have to give open permission to the necessary user groups on those settings files. This is not the case with the second workaround.
Workaround 2
Create a custom OData action that returns settings in a format of your choice from the server. This is the better solution, because this way you control which parts of a settings file are actually accessible to the client (the GetMySettings implementation earlier in this thread is an example of this approach).
Related
I am trying to export a .sat file to .stp format. I've had trouble doing the export directly to a bucket (from .ipt to .stp works fine, but it doesn't work from .sat to .stp).
Finally, I have tried using LogTrace with custom data to send the file contents through a string (the STEP format has string content, and it's created correctly). Unfortunately, I can't get the callback !ACESAPI:acesHttpOperation to work with the custom data (it does work by default).
This is my workitem call:
{
    "activityId": "DNhofWmrTzDm5Cdj3ISk0yvVA0IOBEja.InventorActivity16+3",
    "arguments": {
        "InventorDoc": {
            "url": "https://developer.api.autodesk.com/oss/v2/signedresources/xxxxxxxx-aa45-43fa-8c0e-e594a3f671cc?region=US"
        },
        "InventorParams": {
            "url": "data:application/json,{\"height\":\"16 in\", \"width\":\"10 in\"}"
        },
        "onProgress": {
            "verb": "post",
            "url": "https://xxxxxxx"
        }
    }
}
The log response (not reproduced here) makes me think there is a problem with the API call from !ACESAPI:acesHttpOperation.
I followed the same instructions from the callback documentation.
Thanks in advance
I came across the Serilog.Sinks.Map add-on today, which will solve my challenge of routing specific log events to a specific sink. In my environment, I am writing to a log file as well as using the SQL sink, but I only want certain logs to be written to SQL Server.
Reading the instructions on GitHub by the author, I can only see an example that sets up the LoggerConfiguration in C# in Program.cs, but I am using the appsettings.json file and am unsure how to translate the provided example into the required JSON format.
Example given by Serilog on GitHub:
Log.Logger = new LoggerConfiguration()
    .WriteTo.Map("Name", "Other", (name, wt) => wt.File($"./logs/log-{name}.txt"))
    .CreateLogger();
My current configuration (note that I haven't implemented Sinks.Map in my code yet).
Program.cs file:
public static void Main(string[] args)
{
    // Build the configuration from the appsettings.json file.
    // This is because we don't yet have dependency injection available; that comes later.
    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    Log.Logger = new LoggerConfiguration()
        .ReadFrom.Configuration(configuration)
        .CreateLogger();

    var host = CreateHostBuilder(args).Build();
}
And here is my appsettings.json file. I want to configure the 'MSSqlServer' sink as the special route, and use the standard file sink for all other general logging.
"AllowedHosts": "*",
"Serilog": {
"Using": [],
"MinumumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId", "WithThreadId" ],
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
//"path": "C:\\NetCoreLogs\\log.txt", // Example path to Windows Drive.
"path": ".\\Logs\\logs.txt",
//"rollingInterval": "Day", // Not currently in use.
"rollOnFileSizeLimit": true,
//"retainedFileCountLimit": null, // Not currently in use.
"fileSizeLimitBytes": 10000000,
"outputTemplate": "{Timestamp:dd-MM-yyyy HH:mm:ss.fff G} {Message}{NewLine:1}{Exception:1}"
// *Template Notes*
// Timestamp 'G' means UTC Time
}
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "DefaultConnection",
"schemaName": "EventLogging",
"tableName": "Logs",
"autoCreateSqlTable": true,
"restrictedToMinimumLevel": "Information",
"batchPostingLimit": 1000,
"period": "0.00:00:30"
}
}
//{
// "Name": "File",
// "Args": {
// "path": "C:\\NetCoreLogs\\log.json",
// "formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog"
// }
//}
]
}
Lastly, if I could squeeze in another quick question on the topic: when using the SQL sink, how do I manage automatic purging/deletion of the oldest events? I.e. the DB should store at most 1,000,000 events and then automatically overwrite the oldest event first. Thanks in advance.
I believe it is currently impossible to configure the standard Map call in json, since it relies on a few types that have no serialization support right now, like Action<T1, T2>. I created an issue to discuss this in the repository itself:
Unable to configure default Map call in json? #22
However, there is a way to still get some functionality out of it in Json, by creating a custom extension method. In your particular case, it would be something like this:
public static class SerilogSinkConfigurationExtensions
{
    public static LoggerConfiguration MapToFile(
        this LoggerSinkConfiguration loggerSinkConfiguration,
        string keyPropertyName,
        string pathFormat,
        string defaultKey)
    {
        return loggerSinkConfiguration.Map(
            keyPropertyName,
            defaultKey,
            (key, config) => config.File(string.Format(pathFormat, key)));
    }
}
Then, in your json file, add a section like this:
"WriteTo": [
...
{
"Name": "MapToFile",
"Args": {
"KeyPropertyName": "Name",
"DefaultKey": "Other",
"PathFormat": "./logs/log-{0}.txt"
}
}
]
For these customizations to work properly, Serilog needs to know that your assembly contains these kinds of extensions, so it can load them during the parsing stage. As per the documentation, you either need to put these extensions in a *.Serilog.* assembly, or add a Using clause to the json:
// Assuming the extension method is inside the "Company.Domain.MyProject" dll
"Using": [ "Company.Domain.MyProject" ]
More information on these constraints here:
https://github.com/serilog/serilog-settings-configuration#using-section-and-auto-discovery-of-configuration-assemblies
I am trying to enable a linker string for links to my domains from my AMP site.
The current config is working only for links to the "canonical" domain at present, which is the default behavior.
I am also trying to enable it for links that are sent to my app's domain.
I've tried many variations of the code below (including using non-valid JSON array strings, as set out in the documentation here: https://ampbyexample.com/advanced/joining_analytics_sessions/#destination-domains), but this does not seem to work.
I am hoping this is a syntax or config issue, but I am starting to have doubts. This is my code:
<amp-analytics type="gtag" data-credentials="include">
<script type="application/json">
{
"vars": {
"gtag_id": "AW-XXXXXX",
"config": {
"UA-XXXXX-X": {
"groups": "default"
},
"AW-XXXXXX": {
"groups": "default"
}
}
},
"linkers": {
"enabled": true,
"proxyOnly": false,
"destinationDomains": [ "amp.mydomain.com", "www.mydomain.com", "app.altdomain.ly" ]
},
"triggers": {
"trackPageview": {
"on": "visible",
"request": "pageview"
}
}
}
</script>
</amp-analytics>
I've also tried setting it out with a nested <paramName> object as follows, but I get the same result (works on canonical only):
...
"linkers": {
    "Linker1": {
        "ids": {
            "_cid": "CLIENT_ID"
        },
        "proxyOnly": false,
        "destinationDomains": [ "amp.mydomain.com", "www.mydomain.com", "app.altdomain.ly" ],
        "enabled": true
    }
}
...
Since you are using gtag, I think you might need to use the gtag configuration to configure the domains. Instructions are available here.
Basically, the config looks like this:
<amp-analytics type="gtag" data-credentials="include">
<script type="application/json">
{
"vars" : {
"gtag_id": "<GA_TRACKING_ID>",
"config" : {
"<GA_TRACKING_ID>": {
"groups": "default",
"linker": { "domains": ["example.com", "example2.com"] }
}
}
}
}
</script>
</amp-analytics>
First, check the proper format of linkers in AMP:
"linkers": {
    <paramName>: {
        ids: <Object>,
        proxyOnly: <boolean>,
        destinationDomains: <Array<string>>,
        enabled: <boolean>
    }
}
paramName - This user-defined name determines the name of the query parameter appended to the links.
ids - An object containing key-value pairs that is partially encoded and passed along in the param.
proxyOnly - (optional) Flag indicating whether the links should only be appended on pages served on a proxy origin. Defaults to true.
destinationDomains - (optional) Links will be decorated if their domains are included in this array. Defaults to canonical and source domains.
enabled - Publishers must explicitly set this to true to opt in to using this feature.
This linker uses this configuration to generate a string in this structure: <paramName>=<version>*<checkSum>*<idName1>*<idValue1>*<idName2>*<idValue2>... For more details see Linker Param Format.
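For example, with a linker named Linker1 (as in the question's config), an outgoing link to one of the destination domains would be decorated with a parameter shaped like this (illustrative values only):
https://www.mydomain.com/page?Linker1=1*abc123*_cid*EncodedClientId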
I need to validate some objects in my NodeJS app. I have already used an awesome library, express-validator, and it works perfectly, but now I need to validate different objects, not only requests. And since express-validator leverages the validator library, it doesn't support types other than strings.
I have found different alternatives like Jsonschema and Ajv.
They offer great features, but I need to be able to set an error message and then just catch an exception, or parse it from the returned object.
Like this:
var schema = {
    "id": "/SimplePerson",
    "type": "object",
    "properties": {
        "name": { "type": "string", "error": "A name should be provided" },
        "address": { "$ref": "/SimpleAddress" },
        "votes": { "type": "integer", "minimum": 1 }
    }
};
So I can set an error message for every property.
Is there any existing solution to achieve this functionality?
POSSIBLE SOLUTION
I have found a great library, JSEN. It provides the necessary features.
Three powerful and popular libraries you can use for JSON validation are:
AJV: https://github.com/epoberezkin/ajv
JOI: https://github.com/hapijs/joi
JSON validator: https://github.com/tdegrunt/jsonschema
All of these libraries allow you to validate different data types, do conditional validation, and set custom error messages.
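For example, Ajv supports per-property messages through its ajv-errors plugin. A minimal sketch (the errorMessage keyword comes from the plugin, not from JSON Schema itself; the schema mirrors the one in the question):

const Ajv = require('ajv');
const ajvErrors = require('ajv-errors');

const ajv = new Ajv({ allErrors: true }); // allErrors is required by ajv-errors
ajvErrors(ajv);

const validate = ajv.compile({
    type: 'object',
    properties: {
        name: { type: 'string' },
        votes: { type: 'integer', minimum: 1 }
    },
    required: ['name'],
    errorMessage: {
        required: { name: 'A name should be provided' },
        properties: { votes: 'Votes must be an integer of at least 1' }
    }
});

if (!validate({ votes: 0 })) {
    // Each failed rule surfaces with the custom message you configured.
    console.log(validate.errors.map(e => e.message));
    // e.g. [ 'Votes must be an integer of at least 1', 'A name should be provided' ]
}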
One solution is to use the Joi library:
https://github.com/hapijs/joi
This library is well maintained, widely used, and offers lots of flexibility.
Example:
const Joi = require('joi');

const schema = Joi.object().keys({
    name: Joi.string().error(new Error('A name should be provided')),
    address: Joi.ref('$SimpleAddress'),
    votes: Joi.number().min(1),
});

// Return result.
const result = Joi.validate(yourObject, schema);
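With the pre-v16 Joi.validate API shown above, the custom message then surfaces on the returned error, for example:

// Hypothetical input: name has the wrong type, so the custom error fires.
const { error } = Joi.validate({ name: 42 }, schema);
if (error) {
    console.log(error.message); // => 'A name should be provided'
}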
I use Json Pattern Validator:
npm install jpv --save
Usage:
const jpv = require('jpv');

// your json object
var json = {
    status: "OK",
    id: 123,
    type: {}
};

// validation pattern
var pattern = {
    status: /OK/i,
    id: '(number)',
    type: '(object)'
};

var result = jpv.validate(json, pattern);
You can also try nonvalid, a library that supports callback-based validation with custom checks and errors (disclaimer: it is written by me).
I'm about to embark on validation of JSON submissions to my web service and will be using tcomb-validation. It's a lightweight alternative to JSON schema and is based on type combinators.
Example of 'intersections':
var t = require('tcomb-validation');
var Min = t.refinement(t.String, function (s) { return s.length > 2; }, 'Min');
var Max = t.refinement(t.String, function (s) { return s.length < 5; }, 'Max');
var MinMax = t.intersection([Min, Max], 'MinMax');
MinMax.is('abc'); // => true
MinMax.is('a'); // => false
MinMax.is('abcde'); // => false
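Beyond the boolean is check, t.validate returns a result object you can pull error messages from, which covers the catch-or-parse requirement in the question. A small sketch reusing the types above:

var result = t.validate('a', MinMax);
if (!result.isValid()) {
    // firstError() exposes the failing value/path and a readable message.
    console.log(result.firstError().message); // e.g. 'Invalid value "a" supplied to Min'
}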
I am currently in the process of migrating an Express app to Heroku.
To keep sensitive information out of source control, Heroku uses config vars, which are exposed to the app as environment variables of the same name.
Currently, I am loading my keys from a .json file, such as:
{
    "key": "thisismykey",
    "secret": "thisismysecret"
}
However, if I try to load the variables in via Heroku's format:
{
    "key": process.env.KEY
    "secret": process.env.SECRET
}
Obviously, I get an error here. I would assume that it is possible to load these values into JSON, but I'm not sure. How could I do this?
To generate JSON with these values, you would first create a JavaScript object and then use JSON.stringify to turn it into JSON:
var obj = { "key": process.env.KEY
"secret": process.env.SECRET };
var json = JSON.stringify(obj);
// => '{"key":"ABCDEFGH...","secret":"MNOPQRST..."}'