Is it possible to read a CSV file in Vega with a custom delimiter, such as ;? - vega-lite

I have a CSV file with a custom delimiter, such as ;. I would like to load it into Vega along these lines:
"data": {
  "url": "https://url.csv",
  "format": {
    "type": "csv",
    "sep": ";"
  }
}
Of course, the "sep" option does not exist in the current Vega schema.
This is akin to pandas' pd.read_csv(sep=';').

Something like this should work:
"data": { "url": "https://url.csv",
"format":{"type":"dsv", "delimiter":";"}
}
See https://vega.github.io/vega-lite/docs/data.html#dsv for more information.
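For reference, a minimal complete Vega-Lite spec using this format option might look like the following sketch (the URL, field names, and mark are placeholders):
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {
    "url": "https://example.com/data.csv",
    "format": {"type": "dsv", "delimiter": ";"}
  },
  "mark": "bar",
  "encoding": {
    "x": {"field": "category", "type": "nominal"},
    "y": {"field": "value", "type": "quantitative"}
  }
}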

Related

Load multiple growing JSON files with the ELK stack

I crawled a lot of JSON files into a data folder, all named by timestamp (./data/2021-04-05-12-00.json, ./data/2021-04-05-12-30.json, ./data/2021-04-05-13-00.json, ...).
Now I'm trying to use the ELK stack to load this growing set of JSON files.
Each JSON file is pretty-printed like:
{
  "datetime": "2021-04-05 12:00:00",
  "length": 3,
  "data": [
    {
      "id": 97816,
      "num_list": [1,2,3],
      "meta_data": "{'abc', 'cde'}",
      "short_text": "This is data 97816"
    },
    {
      "id": 97817,
      "num_list": [4,5,6],
      "meta_data": "{'abc'}",
      "short_text": "This is data 97817"
    },
    {
      "id": 97818,
      "num_list": [],
      "meta_data": "{'abc', 'efg'}",
      "short_text": "This is data 97818"
    }
  ]
}
I tried using the Logstash multiline plugin to extract the JSON files, but it seems to handle each file as a single event. Is there any way to extract each record in the JSON data field as an event?
Also, what is the best practice for loading multiple, growing, pretty-printed JSON files in ELK?
Using multiline is correct if you want to handle each file as one input event.
Then you need to leverage the split filter in order to create one event for each element in the data array:
filter {
  split {
    field => "data"
  }
}
Logstash reads each file as a whole and passes its content as a single event to the filter layer; the split filter shown above then spawns one new event for each element of the data array.
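For reference, a complete pipeline might look roughly like this (a sketch; the path, the never-matching pattern trick, and the flush interval are assumptions to adapt):
input {
  file {
    path => "/path/to/data/*.json"
    mode => "read"
    sincedb_path => "/dev/null"
    # aggregate all lines of a file into one event: the pattern never
    # matches, so every line is appended to the previous one
    codec => multiline {
      pattern => "^__never_matches__"
      negate => true
      what => "previous"
      auto_flush_interval => 2
    }
  }
}
filter {
  # parse the whole file content as JSON
  json {
    source => "message"
  }
  # one event per element of the "data" array
  split {
    field => "data"
  }
}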

Data Factory - Retrieve value from field with dash "-" from JSON file

In my pipeline, I reach a 3rd-party database through a REST API using GET requests. As output I receive a bunch of JSON files. The number of JSON files I have to download (which is also the number of iterations I will need) is stored in one of the fields of the JSON file. The problem is that the field's name is 'page-count', which contains a "-".
#activity('Lookup1').output.firstRow.meta.page.page-count
Data Factory treats the dash in the field's name as a minus sign, so I get an error instead of the value of that field.
{"code":"BadRequest","message":"ErrorCode=InvalidTemplate, ErrorMessage=Unable to parse expression 'activity('Lookup1').output.firstRow.meta.page.['page-count']'","target":"pipeline/Product_pull/runid/f615-4aa0-8fcb-5c0a144","details":null,"error":null}
This is how the structure of the JSON file looks:
"firstRow": {
"meta": {
"page": {
"number": 1,
"size": 1,
"page-count": 7300,
"record-count": 7300
},
"non-compliant-record-count": 7267
}
},
"effectiveIntegrationRuntime": "intergrationRuntimeTest1",
"billingReference": {
"activityType": "PipelineActivity",
"billableDuration": [
{
"meterType": "SelfhostedIR",
"duration": 0.016666666666666666,
"unit": "Hours"
}
]
},
"durationInQueue": {
"integrationRuntimeQueue": 1
}
}
How to solve this problem?
The syntax below works when retrieving the value of a JSON element whose name contains a hyphen, which is otherwise treated as a minus sign by the parser. It does not seem to be documented by Microsoft; however, I managed to get it to work through trial and error on a project of mine.
#activity('Lookup1').output.firstRow.meta.page['page-count']
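The same bracket notation should work for any property whose name is not a valid identifier; for instance, with the JSON from the question:
#activity('Lookup1').output.firstRow.meta['non-compliant-record-count']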
This worked for us too. We had the same issue, where we couldn't reference an output field that contained a dash (-). We referenced this post, used the square brackets and single quotes, and it worked!
Example below.
#activity('get_token').output.ADFWebActivityResponseHeaders['Set-Cookie']

How to encode CSV data in JMeter and use it as an input parameter ${}

How can I encode the input field "fileData" (from a CSV file) and use it as an input parameter like ${fileData}?
Below is an example.
The fileData input should be converted like this:
{"fileData":"QkVOLCxIb21lQmFuajc2MjkxNzI2MTcxOTU0MjI1OTQ5ODkxNjIzMjI0ODUyODI3NjI5MTcyNjE3MTk1NDIyNTk0OTg5MTYyMzIyNDg1MjgyQVZJMSwsSG9tZUJhbmtqNjI5MTcyNjE3MTk1NDIyNTk0OTg5MTYyMzIyNDg1MjgyNzYyOTE3MjYxNzE5NTQyMjU5NDk4OTE2MjMyMjQ4NVJFQ09SREEsLGFkZHJlc3MsLCwsLCwsLCwsLERTQVMyLCwsLCwsLCwsLCwsLCwsLCwsLFksLCwsLCwsLCwsLCwsLE4sLCwsLCwsLCwsLERTQVMyLCwsQ0=="}
POST data:
{"fileData":""${fileData}","fileName":"JMETER1.txt","fileDescription":"testing file upload with single data","isEncrypted":"N","encryptionDetails":{"algorithm":"","secretKey":"","signatureBytes":""},"valMode":"N"}
You need to encode the data from your CSV file in Base64.
There is a __base64Encode() function which can do the trick for you.
Your request syntax should look like:
{
  "fileData": "${__base64Encode(${fileData},)}",
  "fileName": "JMETER1.txt",
  "fileDescription": "testingfileuploadwithsingledata",
  "isEncrypted": "N",
  "encryptionDetails": {
    "algorithm": "",
    "secretKey": "",
    "signatureBytes": ""
  },
  "valMode": "N"
}
and the variable substitution will happen at runtime.
You can install the __base64Encode() function, along with other Custom JMeter Functions, using the JMeter Plugins Manager.
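If installing plugins is not an option, a similar result can be had with the built-in __groovy function (available since JMeter 3.1); this is a sketch, assuming fileData is populated by a CSV Data Set Config:
"fileData": "${__groovy(vars.get('fileData').bytes.encodeBase64().toString(),)}"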

Is there a way to split a Swaggerfile / OpenAPI definition into multiple files?

Is there a way to split a Swagger file / OpenAPI spec file, either JSON or YAML, extracting every $ref into a separate file? I found a lot of solutions that achieve the opposite (multiple files -> single file), but none for this.
What I'd like to achieve is the following:
I have a huge JSON swaggerfile that contains internal $refs.
I'd like to have a single file for each and every object or path definition, and, in the root file, references (local or absolute) to these files. This way I can edit the root file to easily obtain a minimal subset of the paths and objects that I need.
{
  "in": "inputField",
  "required": true,
  "schema": {
    "$ref": "#/components/schemas/MyObject"
  },
  "components": {
    "schemas": {
      "MyObject": {
        "type": "object",
        "properties": {
          "value": {
            "type": "string"
          }
        }
      }
    }
  },
  "____comment": "I want the MyObject definition in a MyObject.json file and the $ref pointing to that file"
}
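In other words, the desired result would look something like this: a MyObject.json file containing
{
  "type": "object",
  "properties": {
    "value": {
      "type": "string"
    }
  }
}
and, in the root file, the reference rewritten to point at it:
"$ref": "MyObject.json"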
Yes, it's possible; you just need to use regex and detect the dependencies, etc.
Regex examples from my project:
new Regex("\"#/definitions/(.*)\"");
new Regex("#\\/definitions\\/(.*?)\\\"");
new Regex("\"" + key + "\": ");
etc.
You must replace the elements with dependencies and save them to files. I do something like that to derive a JSON schema from a model. Your case is a little different, but similar.
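For reference, the same idea can also be sketched with a JSON parser instead of regex; here is a rough Python sketch (the file names are assumptions, and refs between the extracted schema files are not adjusted):
import json
from pathlib import Path

# load the single-file spec (file name is an assumption)
spec = json.loads(Path("swagger.json").read_text())
schemas = spec.get("components", {}).get("schemas", {})

def rewrite_refs(node):
    # point internal schema refs at the extracted files
    if isinstance(node, dict):
        ref = node.get("$ref", "")
        if ref.startswith("#/components/schemas/"):
            node["$ref"] = "schemas/" + ref.rsplit("/", 1)[-1] + ".json"
        for value in node.values():
            rewrite_refs(value)
    elif isinstance(node, list):
        for item in node:
            rewrite_refs(item)

rewrite_refs(spec)

# write each schema to its own file; note that refs *between* schema
# files would still need their paths adjusted relative to schemas/
out_dir = Path("schemas")
out_dir.mkdir(exist_ok=True)
for name, schema in schemas.items():
    (out_dir / (name + ".json")).write_text(json.dumps(schema, indent=2))

spec.pop("components", None)  # the definitions now live in separate files
Path("root.json").write_text(json.dumps(spec, indent=2))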

How to import a JSON file that is exported from Neo4j into D3

Neo4j is a graph database and it can export data to a JSON file. However, the JSON file from Neo4j is quite complicated for me and I could not import it into D3. My question is how to import a JSON file exported from Neo4j into D3 for graph visualization without changing the format of the JSON file. I asked the Neo4j community once and they said it's not possible, or at least that's what I believe they meant.
Here is the exported JSON file from Neo4j:
{
"table":
{
"_response":
{
"columns":["n"],
"data":[
{"row":[{"num":"A08"}],"graph":{"nodes":[{"id":"0","labels":["Person"],"properties":{"num":"A08"}}],"relationships":[]}},
{"row":[{"num":"A04"}],"graph":{"nodes":[{"id":"1","labels":["Person"],"properties":{"num":"A04"}}],"relationships":[]}},
{"row":[{"num":"A05"}],"graph":{"nodes":[{"id":"2","labels":["Person"],"properties":{"num":"A05"}}],"relationships":[]}}
],
"stats":{
"contains_updates":false,"nodes_created":0,"nodes_deleted":0,"properties_set":0,"relationships_created":0,"relationship_deleted":0,
"labels_added":0,"labels_removed":0,"indexes_added":0,"indexes_removed":0,"constraints_added":0,"constraints_removed":0
}
},
"nodes":[
{"id":"0","labels":["Person"],"properties":{"num":"A08"}},
{"id":"1","labels":["Person"],"properties":{"num":"A04"}},
{"id":"2","labels":["Person"],"properties":{"num":"A05"}}
],
"other":[],
"relationships":[],
"size":3,
"stats":{
"contains_updates":false,"nodes_created":0,"nodes_deleted":0,"properties_set":0,"relationships_created":0,"relationship_deleted":0,
"labels_added":0,"labels_removed":0,"indexes_added":0,"indexes_removed":0,"constraints_added":0,"constraints_removed":0
}
},
"graph":
{
"nodeMap":{
"0":{"num":"A08"},
"1":{"num":"A04"},
"2":{"num":"A05"}
},
"relationshipMap":{
"623":{"date":"5/01/2011","time":"18:11:48","case":4},
"624":{"date":"5/02/2011","time":"21:21:06","case":4},
"625":{"date":"6/03/2011","time":"21:23:35","case":4},
"629":{"date":"6/04/2011","time":"22:14:47","case":5}
}
}
}
The D3 example that I'm using is http://bl.ocks.org/mbostock/1153292
Thank you.
You can customize the output returned by Cypher using literal maps. With these, it should be possible to return exactly the JSON structure to be handed over to D3.
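For example (a sketch, assuming Person nodes connected by some relationship), a literal map can return rows already shaped like the links the force-layout example expects:
MATCH (a:Person)-[r]->(b:Person)
RETURN { source: a.num, target: b.num, type: type(r) } AS link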
Another alternative is to do the JSON transformation on the application side; for an example, see http://maxdemarzi.com/2012/10/11/hubway-data-visualization-challenge-with-neo4j/