I'm using Serilog with this configuration:
{
    "Serilog": {
        "Using": [ "Serilog.Sinks.Console", "Serilog.Sinks.File" ],
        "MinimumLevel": "Debug",
        "Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ],
        "WriteTo": [
            { "Name": "Console" },
            {
                "Name": "File",
                "Args": {
                    "path": "./logs/performance-{Date}.log",
                    "rollingInterval": "Day",
                    "fileSizeLimitBytes": 1000,
                    "rollOnFileSizeLimit": true,
                    "retainedFileCountLimit": null,
                    "shared": true
                }
            }
        ]
    }
}
The output file should look like 20210613-performance.log, but instead it comes out as {Date}-performance20210613.log.
What am I doing wrong?
The {Date} placeholder is not a feature of the Serilog.Sinks.File sink that you're using. You're probably confusing it with the (deprecated) Serilog.Sinks.RollingFile sink, which does have this feature.
With Serilog.Sinks.File, at this time, you cannot define where the date will appear. It's always appended to the end of the file name you choose (and before the sequence number if you are also rolling by file size).
There have been attempts to implement this feature, but as of this writing it's not yet there.
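For example, a minimal sketch of the Args block (not tested against your exact setup): dropping the {Date} token and letting the sink append the date yields names like performance-20210613.log, with the date at the end rather than the date-first form you wanted:
"Name": "File",
"Args": {
    "path": "./logs/performance-.log",
    "rollingInterval": "Day",
    "fileSizeLimitBytes": 1000,
    "rollOnFileSizeLimit": true,
    "shared": true
}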
I'm new to Couchbase. I want to update the channels for some of my documents through sync functions. Right now, instead of replacing the channel, it adds an extra channel to the document's metadata without removing the existing one. Can anyone suggest how I can remove the existing channel from the document?
Sync Function:
function(doc, oldDoc) {
    // ...
    if (doc.docType === "differentType") {
        channel("differentChannel");
        expiry(2332800);
        return;
    }
    // ...
}
Document:
{
    "channels": [
        "abcd"
    ],
    "docType": "differentType",
    "_id": "asjnc"
}
Metadata:
{
    "meta": {
        "id": "asjnc",
        "rev": "64-1b500000000",
        "expiration": 1650383285,
        "flags": 0,
        "type": "json"
    },
    "xattrs": {
        "_sync": {
            "rev": "1-db30e607872",
            "sequence": 777,
            "recent_sequences": [ 777 ],
            "history": {
                "revs": [ "1-db30e607872" ],
                "parents": [ -1 ],
                "channels": [ [ "differentChannel" ] ]
            },
            "channels": {
                "differentChannel": null
            }
        }
    }
}
What I expect the document to look like, with the same metadata:
{
    "channels": [ ],  // <--- no channels
    "docType": "differentType",
    "_id": "asjnc"
}
With this sync function, for a document of type differentType, the channel differentChannel is set in the xattrs section of the metadata. But the channel that was added earlier from Couchbase Lite is not removed. Can anyone help?
I answered this in the Couchbase Forums: https://forums.couchbase.com/t/remove-channels-from-a-document/33212
The "channels" property in a document is counter-intuitively not describing what channels the document is currently in - it's just a user-definable field that happens to be the default routing for channels if you don't specify a sync function. It's up to the writer of the document what it should contain.
If you have another means of channel assignments (like "docType" in your case), then you don't need to specify "channels" in the document. The sync metadata shows that the document is in "differentChannel" at revision 1-db30e607872 but the contents of the document can be arbitrary.
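For example (a sketch of what a writer might store): since routing comes from docType in your sync function, the document itself can simply omit the channels field altogether:
{
    "docType": "differentType",
    "_id": "asjnc"
}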
I do not code in JSON and I'm trying to configure some settings for Terminus on Sublime Text 3. Why isn't my code working? I suspect it has something to do with the colons because they appear to be a different color than on the README page. Thanks in advance!
[
    "default_config": {
        "linux": null,
        "osx": "PowerShell",
        "windows": null
    },
    "preserve_keys": [
        "ctrl+k",
        "ctrl+p",
        "ctrl+z",
        "ctrl+c",
        "ctrl+v",
        "ctrl+x"
    ],
    "theme": "default"
]
You should replace [ ] with { }, like this:
{
    "default_config": {
        "linux": null,
        "osx": "PowerShell",
        "windows": null
    },
    "preserve_keys": [
        "ctrl+k",
        "ctrl+p",
        "ctrl+z",
        "ctrl+c",
        "ctrl+v",
        "ctrl+x"
    ],
    "theme": "default"
}
If you want to store data as key/value pairs, you have to use { }. If you want to store data as an array, you have to use [ ]. This difference is what caused your error.
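For example, a quick sketch of the two shapes:
An object (key/value pairs, used for settings):
{ "theme": "default" }
An array (an ordered list of values):
[ "ctrl+k", "ctrl+p" ]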
Here's a good tutorial that you can use: https://www.digitalocean.com/community/tutorials/an-introduction-to-json
It's all pretty simple.
What I'm doing is uploading a .zip file and creating a translation job. The .zip file contains several .CATPART files and one .CATPRODUCT file.
Below is my payload:
{
    "input": {
        "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6emlwX2ZpbGVzX3Rlc3RpbmcvQU1GMV8wNC56aXA",
        "rootFilename": "17J20-0851---B.1.CATProduct",
        "compressedUrn": true
    },
    "output": {
        "destination": {
            "region": "us"
        },
        "formats": [
            {
                "type": "stl",
                "advanced": {
                    "format": "binary",
                    "exportColor": true,
                    "exportFileStructure": "single"
                }
            }
        ]
    }
}
But I keep getting the error "Failed to trigger translation for this file."
I even tried uploading and translating using the provided Postman collection, but the result is the same.
However, I tried uploading the whole folder (not as a zip, of course) to the Autodesk Viewer and it works, so I don't think there is an issue with the set of files.
What could be the reason?
You can find the list of supported translations here:
https://forge.autodesk.com/en/docs/model-derivative/v2/developers_guide/supported-translations/
Unfortunately, you cannot translate a CATPART/CATPRODUCT to STL - you can only get a thumbnail, SVF, or SVF2 from it
Once you've translated it to SVF, you'll also be able to get OBJ from it. This option is available for all file formats.
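For reference, a hypothetical sketch of an OBJ extraction job once the SVF derivative exists (the urn placeholder and the modelGuid, which comes from the metadata endpoint, are values you would fill in; an objectIds of [-1] asks for the whole model):
{
    "input": {
        "urn": "<your-base64-urn>"
    },
    "output": {
        "formats": [
            {
                "type": "obj",
                "advanced": {
                    "modelGuid": "<guid-from-metadata-endpoint>",
                    "objectIds": [-1]
                }
            }
        ]
    }
}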
I managed to solve this with the help of @Adam's reply.
So I changed the type from "type": "stl" to "svf2" and added the views:
"views": [
"2d",
"3d"
]
and it worked.
My final payload:
{
    "input": {
        "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6emlwX2ZpbGVzX3Rlc3RpbmcvQU1GMV8wNS56aXA",
        "rootFilename": "17J20-0851---B.1.CATProduct",
        "compressedUrn": true
    },
    "output": {
        "destination": {
            "region": "us"
        },
        "formats": [
            {
                "type": "svf2",
                "advanced": {
                    "format": "binary",
                    "exportColor": true,
                    "exportFileStructure": "single"
                },
                "views": [
                    "2d",
                    "3d"
                ]
            }
        ]
    }
}
Hope this helps anyone in the future.
So I am working on a research project that involves a very specific piece of software with its own file type: XPPAUT, which uses .ode files. To prevent me and my team of not-neuroscientists from ripping our hair out trying to work with this, I decided to write a syntax highlighter for these .ode files.
To start, I just wanted to recognize and color line comments, which are delineated with a #, similar to Python. However, when I run the development environment, the comments are not highlighted with the color I set my dev workspace to use, or highlighted at all. I'm very new to this, so any help would be appreciated.
Here is my package.json file
{
    "name": "ode",
    "displayName": "XPP ODE",
    "description": "ODE files to be used with XPP/XPPAUT",
    "version": "0.0.1",
    "publisher": "wjmccann",
    "engines": {
        "vscode": "^1.22.0"
    },
    "categories": [
        "Languages"
    ],
    "contributes": {
        "languages": [{
            "id": "xpp",
            "aliases": ["XPP ODE", "XPP", "XPPAUT"],
            "extensions": [".ode"],
            "configuration": "./language-configuration.json"
        }],
        "grammars": [{
            "language": "xpp",
            "scopeName": "source.xpp",
            "path": "./syntaxes/xpp.tmLanguage.json"
        }]
    }
}
and the corresponding language-configuration.json
{
    "comments": {
        // symbol used for single line comment. Remove this entry if your language does not support line comments
        "lineComment": "#"
    },
    // symbols used as brackets
    "brackets": [
        ["{", "}"],
        ["[", "]"],
        ["(", ")"]
    ],
    // symbols that are auto closed when typing
    "autoClosingPairs": [
        ["{", "}"],
        ["[", "]"],
        ["(", ")"],
        ["\"", "\""],
        ["'", "'"]
    ],
    // symbols that can be used to surround a selection
    "surroundingPairs": [
        ["{", "}"],
        ["[", "]"],
        ["(", ")"],
        ["\"", "\""],
        ["'", "'"]
    ]
}
The language-configuration.json file defines text patterns used in a variety of standard features of VS Code such as comment toggling as described here.
Syntax highlighting/colouring is via the grammars contribution point in package.json as described here.
Based on your package.json, you will need to create a new file at ./syntaxes/xpp.tmLanguage.json with the following content for your comments to be coloured appropriately. The actual colour used will depend on your current theme.
{
    "$schema": "https://raw.githubusercontent.com/martinring/tmlanguage/master/tmlanguage.json",
    "name": "xpp",
    "scopeName": "source.xpp",
    "patterns": [
        { "include": "#comments" }
    ],
    "repository": {
        "comments": {
            "patterns": [{
                "name": "comment.line.number-sign.xpp",
                "match": "#.*"
            }]
        }
    }
}
EDIT 2014-05-01: I tried fromJSON first (as suggested below), but that only parsed the first line. I found out that there were commas missing between the brackets of each JSON line, so I changed that in TextEdit and saved the file. I also added [ at the beginning of the file and ] at the end, and then it worked with JSON. Now the next step: from a list (with embedded lists) to a data frame (or csv).
I get a data package from edX every now and then on the courses we are evaluating. Some of these are just plain .csv files, which are quite easy to handle; others are more difficult for me (not having a CS or programming background).
I have 2 files I want to open and parse into csv files for analysis in R. I have tried many, many json2csv tools out there, but to no avail. I also tried the simple methods described here to turn json into csv.
The data is confidential, so I cannot share the entire data set, but I will share the first two lines of the file; maybe that helps. The problem is that I can find nothing anywhere about .mongo files, which seems quite strange to me. Do they even exist? Or is this just a JSON file that may be corrupted (which could explain the errors)?
Any suggestions are welcome.
The first 2 lines in one of the .mongo files:
{
    "_id": { "$oid": "52d1e62c350e7a3156000009" },
    "votes": {
        "up": [],
        "down": [],
        "up_count": 0,
        "down_count": 0,
        "count": 0,
        "point": 0
    },
    "visible": true,
    "abuse_flaggers": [],
    "historical_abuse_flaggers": [],
    "parent_ids": [],
    "at_position_list": [],
    "body": "the delft university accredited course with the scholarship (fundamentals of water treatment) is supposed to start in about a month's time. But have the scholarship list been published? Any tentative date??",
    "course_id": "DelftX/CTB3365x/2013_Fall",
    "_type": "Comment",
    "endorsed": false,
    "anonymous": false,
    "anonymous_to_peers": false,
    "author_id": "269835",
    "comment_thread_id": { "$oid": "52cd40c5ab40cf347e00008d" },
    "author_username": "tachak59",
    "sk": "52d1e62c350e7a3156000009",
    "updated_at": { "$date": 1389487660636 },
    "created_at": { "$date": 1389487660636 }
}{
    "_id": { "$oid": "52d0a66bcb3eee318d000012" },
    "votes": {
        "up": [],
        "down": [],
        "up_count": 0,
        "down_count": 0,
        "count": 0,
        "point": 0
    },
    "visible": true,
    "abuse_flaggers": [],
    "historical_abuse_flaggers": [],
    "parent_ids": [ { "$oid": "52c63278100c07c0d1000028" } ],
    "at_position_list": [],
    "body": "I got it. Thank you!",
    "course_id": "DelftX/CTB3365x/2013_Fall",
    "_type": "Comment",
    "endorsed": false,
    "anonymous": false,
    "anonymous_to_peers": false,
    "parent_id": { "$oid": "52c63278100c07c0d1000028" },
    "author_id": "2655027",
    "comment_thread_id": { "$oid": "52c4f303b03c4aba51000013" },
    "author_username": "dmoronta",
    "sk": "52c63278100c07c0d1000028-52d0a66bcb3eee318d000012",
    "updated_at": { "$date": 1389405803386 },
    "created_at": { "$date": 1389405803386 }
}{
    "_id": { "$oid": "52ceea0cada002b72c000059" },
    "votes": {
        "up": [],
        "down": [],
        "up_count": 0,
        "down_count": 0,
        "count": 0,
        "point": 0
    },
    "visible": true,
    "abuse_flaggers": [],
    "historical_abuse_flaggers": [],
    "parent_ids": [ { "$oid": "5287e8d5906c42f5aa000013" } ],
    "at_position_list": [],
    "body": "if u please send by mail \n",
    "course_id": "DelftX/CTB3365x/2013_Fall",
    "_type": "Comment",
    "endorsed": false,
    "anonymous": false,
    "anonymous_to_peers": false,
    "parent_id": { "$oid": "5287e8d5906c42f5aa000013" },
    "author_id": "2276302",
    "comment_thread_id": { "$oid": "528674d784179607d0000011" },
    "author_username": "totah1993",
    "sk": "5287e8d5906c42f5aa000013-52ceea0cada002b72c000059",
    "updated_at": { "$date": 1389292044203 },
    "created_at": { "$date": 1389292044203 }
}
R doesn't have "native" support for these files, but there is a JSON parser in the rjson package. So you might load your .mongo file with:
library(rjson)

myfile <- "path/to/myfile.mongo"
myJSON <- readLines(myfile)     # character vector, one element per line of the file
myNiceData <- fromJSON(myJSON)  # parses a single JSON document (hence the edit above:
                                # it stopped after the first line)
Since rjson converts into a data structure that fits the object being read, you'll have to do some additional snooping, but once you have an R data type you shouldn't have any trouble working with it from there.
Another package to consider when parsing JSON data is jsonlite. It will make data frames for you so you can write them to a csv format with write.table or some other applicable method for writing objects.
NOTE: if it is easier to connect to the MongoDB and get the data from a request, then RMongo may be a good bet. The R-Bloggers also made a post about using RMongo that has a nice little walkthrough.
I used rjson as suggested by @theWanderer and, with the help of a colleague, wrote the following code to parse the data into columns, choosing the specific columns that are needed, and checking each instance to see whether it returns the right variables.
Entire workflow:
Checked some of the data in JSONLint and corrected the errors: },{ instead of }{ between each line, and [ and ] at the beginning and end of the file (see the small sketch after this list)
Made a smaller file to play with, containing about 11 JSON lines
Used the code below to parse the data file, first checking whether the different listItems are lists themselves (that causes problems). As you will see, I also removed things like \n because they gave errors, and added an empty value for parent_id when there is none in the data (otherwise it would mix up the columns)
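A minimal sketch of the correction from the first step, using hypothetical documents a and b. The raw .mongo file concatenates documents with no separator, which is not valid JSON; adding the commas and the surrounding brackets turns it into a valid JSON array:
Before (raw .mongo):
{ "a": 1 }{ "b": 2 }
After (valid JSON):
[ { "a": 1 }, { "b": 2 } ]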
The code to import the .mongo file into R and then parse it into CSV:
library(rjson)

###### set working directory to write out the data file
setwd("/your/favourite/dir/json to csv/")

# never ever convert strings to factors
options(stringsAsFactors = FALSE)

# import the .mongo file to R
temp.data = fromJSON(file = "temp.mongo", method = "C", unexpected.escape = "error")

file.remove("temp.csv")  ## removes the old datafile if there is one
                         ## (so the data is not appended to the file,
                         ## but a new file is created)

listItem = temp.data[[1]]  ## prepare the listItem the first time
for (listItem in temp.data) {
    parent_id = ""
    if (length(listItem$parent_id) > 0) {
        parent_id = listItem$parent_id
    }
    write.table(t(c(
        listItem$votes$up_count, listItem$visible, parent_id,
        gsub("\n", "", listItem$body), listItem$course_id, unlist(listItem["_type"]),
        listItem$endorsed, listItem$anonymous, listItem$author_id,
        unlist(listItem$comment_thread_id), listItem$author_username,
        as.POSIXct(unlist(listItem$created_at)/1000, origin = "1970-01-01"))),  # end t(), c()
        file = "temp.csv", sep = "\t", append = TRUE, row.names = FALSE, col.names = FALSE)
}