I am a real beginner with HTML and JavaScript, so please excuse any obvious mistakes.
I am using the D3 tree diagram, but I need to load a JSON file instead of hard-coding the data inside the JS script, and the name of the file to load will be chosen by the user via a select tag.
First, how can I load/read a JSON file, let's say exampleNodes.json?
And then, how can I pass the value selected in the select tag so that the script reads the appropriate JSON?
Thanks for your patience and help.
Here is the data that is currently hard-coded in the D3 script:
var treeData = [
{
"name": "Top Level",
"parent": "null",
"children": [
{
"name": "Level 2: A",
"parent": "Top Level",
"children": [
{
"name": "Son of A",
"parent": "Level 2: A"
},
{
"name": "Daughter of A",
"parent": "Level 2: A"
}
]
},
{
"name": "Level 2: B",
"parent": "Top Level"
}
]
}
];
You have to save it in a data.json file like this:
{
"treeData" : [ ... your data array ...]
}
After that, inside the d3.json() callback you will receive this object:
d3.json("data.json",function(json){
// do your coding
// or all code put inside one function and call it after data loaded
});
If you are using Google Chrome and opening the page straight from the file system, it will give you an error when reading the JSON data, because for security reasons Chrome does not allow reading files from the file system (you can get the data in Firefox). To make it run, serve your code from a local server, e.g. WampServer or Apache Tomcat.
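For the second part of your question (passing the value of the select tag), one approach is to listen for the select's change event and hand its value to d3.json. This is only a minimal sketch: it assumes a <select id="jsonPicker"> whose option values are the JSON file names (e.g. exampleNodes.json), and that your existing tree-drawing code has been wrapped in a hypothetical drawTree(treeData) function.

// Re-load and re-draw the tree whenever the user picks a different file.
d3.select("#jsonPicker").on("change", function() {
  var file = this.value; // e.g. "exampleNodes.json"
  // Note: depending on your D3 version the callback may receive (error, json)
  // instead of just (json); this matches the style used above.
  d3.json(file, function(json) {
    drawTree(json.treeData); // drawTree is a placeholder for your existing drawing code
  });
});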
Related
I am trying to do some work with log visualization tools (Elastic and/or Splunk), but first I need to produce and format the log files from a simulation I am writing. My question, which I can't seem to find clear guidance on, is:
How to store multiple (what I believe are root-level) JSON entries in a single text file
How to work with nested JSON structures
I am ultimately trying to have every entry follow the same form:
{"entry_id": 1,
"TIME": "12:00:12Z012/01/2022",
"LOG_TYPE":"ERROR_REPORT",
"DATA": {
"FIELD A" : "ABC",
"FIELD B" : "DEF"
}
},
{"entry_id": 2,
"TIME": "12:15:12Z012/01/2022",
"LOG_TYPE":"STATUS_REPORT",
"DATA": {
"FIELD C" : "HIJ",
"FIELD D" : 123
}
}
Some options I saw:
Use an array []
Use NDJSON
Use some log template??
Any insight would be helpful.
A JSON file needs to contain a single top-level value, so several objects placed back to back, as in your example, are not valid JSON on their own.
Option 1: Create a separate file for each object, using a numeric naming scheme for the files, and then iterate over the files in your code.
Option 2: Create a single file, but have every entry contained in an array, e.g.:
{
"entries": [
{
"entry_id": 1,
"TIME": "12:00:12Z012/01/2022",
"LOG_TYPE": "ERROR_REPORT",
"DATA": {
"FIELD A": "ABC",
"FIELD B": "DEF"
}
},
{
"entry_id": 2,
"TIME": "12:15:12Z012/01/2022",
"LOG_TYPE": "STATUS_REPORT",
"DATA": {
"FIELD C": "HIJ",
"FIELD D": 123
}
}
]
}
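If you go with Option 2 and write the file from the simulation itself, a minimal Node.js sketch of appending one entry could look like the following; the file name log.json and the entry shape are just taken from the example above, so treat it as an illustration rather than a fixed API.

const fs = require("fs");

// Append one log entry to a file holding {"entries": [...]},
// creating the file on first use.
function appendEntry(file, entry) {
  let doc = { entries: [] };
  if (fs.existsSync(file)) {
    doc = JSON.parse(fs.readFileSync(file, "utf8"));
  }
  doc.entries.push(entry);
  fs.writeFileSync(file, JSON.stringify(doc, null, 2));
}

appendEntry("log.json", {
  entry_id: 1,
  TIME: "12:00:12Z012/01/2022",
  LOG_TYPE: "ERROR_REPORT",
  DATA: { "FIELD A": "ABC", "FIELD B": "DEF" }
});

The NDJSON option you listed is also common for log pipelines: each entry is a complete JSON object on its own line with no wrapping array, which keeps appends cheap, but the file as a whole is then NDJSON rather than plain JSON.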
I have a local JSON file like this:
{
"scroll": [
{
"id": 0,
"titles": "1",
"course": ["first", "Second"]
},
{
"id": 1,
"titles": "2",
"course": ["Third", "Fourth"]
}
]
}
So, I want every item of "course" to have its own string parameter. For example, "Third" would have a "third" parameter, "Fourth" a "fourth" parameter, and so on, but I don't know how to express that in the JSON file. The goal is that clicking the "first" button in the list navigates to the text stored in the JSON, and clicking the "Second" button navigates to a different text. Please help me solve this problem. Thanks.
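One way to do this (a sketch, not the only structure that would work) is to turn each course entry from a plain string into a small object that carries both the label shown on the button and the text to navigate to; the field names title and text below are only placeholders, not something your app already expects:

{
  "scroll": [
    {
      "id": 0,
      "titles": "1",
      "course": [
        { "title": "first", "text": "text shown after tapping first" },
        { "title": "Second", "text": "text shown after tapping Second" }
      ]
    }
  ]
}

The list of buttons would then be built from each course item's title, and tapping one would open whatever screen displays that item's text.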
I have the following JSON:
[
  {
    "date": "29/11/2021",
    "Name": "jack"
  },
  {
    "date": "30/11/2021",
    "Name": "Adam"
  },
  {
    "date": "27/11/2021",
    "Name": "james"
  }
]
Using Talend, I want to add two fields to each object, to get something like:
[
  {
    "company": "AMA",
    "service": "BI",
    "date": "29/11/2021",
    "Name": "jack"
  },
  {
    "company": "AMA",
    "service": "BI",
    "date": "30/11/2021",
    "Name": "Adam"
  },
  {
    "company": "AMA",
    "service": "BI",
    "date": "27/11/2021",
    "Name": "james"
  }
]
Currently, I use three components (tJSONDocOpen, tFixedFlowInput, tJSONDocOutput), but I can't find the right configuration of the components to get the job done!
If you are not comfortable with JSON, just follow these steps:
In the metadata, create a FileJson like this, then paste it into your job as a tFileInputJson.
Your job design and mapping would be:
In your tFileOutputJson, don't forget to replace the name of the data block "Data" with "".
What you need to do here, according to Talend practices, is read your JSON, then extract each object from it, add your properties, and finally rebuild the JSON in a file.
An efficient way to do this is to use the tMap component, like this.
The first tFileInputJSON has to specify which properties to read from the JSON, by setting your two objects in the mapping field.
Then the tMap simply adds two columns to your main stream; here is an example with hard-coded string values. Depending on your needs, this component also lets you assign dynamic data to your two new columns; it's a powerful tool for manipulating the structure of a data stream.
You will find more information about this component in the official documentation, https://help.talend.com/r/en-US/7.3/tmap/tmap, especially the "tMap scenarios" part.
Note
If you are comfortable with Java, you can use a tJavaRow instead of the tMap. With it, you can set up your two new columns with whatever Java code you want, as long as you have defined the output schema of the component.
// Pass the incoming columns through unchanged...
output_row.Name = input_row.Name;
output_row.date = input_row.date;
// ...and add the two new hard-coded columns.
output_row.company = "AMA";
output_row.service = "BI";
I'm trying to send several PUT requests to a specific endpoint.
I am using a CSV file with the columns and values; the names of the different columns reference the different variables inside the body, if that makes sense.
This is the endpoint:
{{URL_API}}/products/{{sku}} -- sku is the ID of the product; I created that variable because I use it to pass the reference.
This is the body that I use:
{
"price":"{{price}}",
"tax_percentage":"{{tax_percentage}}",
"store_code":"{{store_code}}",
"markup_top":"{{markup_top}}",
"status":"{{status}}",
"group_prices": [
{
"group":"{{class_a}}",
"price":"{{price_a}}",
"website":"{{website_a}}"
}
]
}
I don't want to use this fixed body anymore; sometimes a product has more than one group of prices, I mean:
"group_prices": [
{
"group":"{{class_a}}",
"price":"{{price_a}}",
"website":"{{website_a}}"
},
{
"group":"{{class_b}}",
"price":"{{price_b}}",
"website":"{{website_b}}"
}
]
Is it possible to create something like this in the CSV file?
sku,requestBody
99RE345GT, {JSON Payload}
How should I declare the {JSON Payload}?
Can you help me?
EDIT:
This is the CSV file I used:
sku,price,tax_percentage,store_code,markup_top,status,class_a,price_a,website_a
95LB645R34ER,147000,US-21,B2BUSD,1.62,1,CLASS A,700038.79,B2BUSD
I want to pass the JSON within the CSV file, I mean:
sku,requestBody
95LB645R34ER,{"price":"147000","tax_percentage":"US-21","store_code":"B2BUSD","markup_top":"1.62","status":"1","group_prices":
[{ "group":"CLASS A","price":"700038.79","website":"B2BUSD"}]}
Is that okay? Should I specify anything in the request body or not? I read the documentation on the Postman website, but I did not find an example like this.
Since you're using JSON data as a payload, I would use a JSON file rather than a CSV file in the Collection Runner. Use this as a template and save it as data.json; it's in the format accepted by the Runner.
[
{
"sku": "95LB645R34ER",
"payload": {
"price": "147000",
"tax_percentage": "US-21",
"store_code": "B2BUSD",
"markup_top": "1.62",
"status": "1",
"group_prices": [
{
"group": "CLASS A",
"price": "700038.79",
"website": "B2BUSD"
}
]
}
},
{
"sku": "MADEUPSKU",
"payload": {
"price": "99999",
"tax_percentage": "UK-99",
"store_code": "BLAH",
"markup_top": "9.99",
"status": "5",
"group_prices": [
{
"group": "CLASS B",
"price": "88888.79",
"website": "BLAH"
}
]
}
}
]
In the Pre-request Script of the request, add this code. It creates a new local variable from the data under the payload key in the data file. The data needs to be transformed into a string, so it uses JSON.stringify() to do that.
pm.variables.set("JSONpayload", JSON.stringify(pm.iterationData.get('payload'), null, 2));
In the Request Body, add this:
{{JSONpayload}}
In the Collection Runner, select the Collection you would like to run and then select the data.json file that you previously created. Run the Collection.
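One related point, assuming standard Collection Runner behaviour rather than anything specific to this template: every top-level key in the data file (such as sku) is also exposed as a variable on each iteration, so the {{sku}} placeholder in the request URL {{URL_API}}/products/{{sku}} resolves automatically without any extra scripting.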
I have a large JSON file that looks similar to the code below. Is there any way I can iterate through each object, look for the field "element_type" (it is not present in all objects in the file, if that matters), and extract or write each object with the same element type to a file? For example, each user would end up in a file called user.json and each book in a file called book.json.
I thought about using JavaScript, but to my knowledge JS can't write to files. I also tried to do it using Linux command-line tools by removing all new lines, then inserting a new line after each "}," and then iterating through each line to find the element type and write it to a file. This worked for most of the data; however, where there were objects like the "problem_type" below, it inserted a new line in the middle of the data because of the nested JSON in the "times" element. I've run out of ideas at this point.
{
"data": [
{
"element_type": "user",
"first": "John",
"last": "Doe"
},
{
"element_type": "user",
"first": "Lucy",
"last": "Ball"
},
{
"element_type": "book",
"name": "someBook",
"barcode": "111111"
},
{
"element_type": "book",
"name": "bookTwo",
"barcode": "111111"
},
{
"element_type": "problem_type",
"name": "problem object",
"times": "[{\"start\": \"1230\", \"end\": \"1345\", \"day\": \"T\"}, {\"start\": \"1230\", \"end\": \"1345\", \"day\": \"R\"}]"
}
]
}
I would recommend Java for this purpose. It sounds like you're running on Linux, so it should be a good fit.
You'll have no problems writing to files, and you can use a library like json-lib (http://json-lib.sourceforge.net/) to get access to classes like JSONArray and JSONObject. You can easily use those to iterate through the data in your JSON, check what's in "element_type", and write to a file accordingly.
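For reference, here is a minimal sketch of that same iterate / check element_type / write-one-file-per-type idea. It is written in Node.js rather than the Java/json-lib route recommended above (Node, unlike browser JavaScript, can write files through its built-in fs module), and the input file name input.json is just an assumption.

const fs = require("fs");

// Group every object under "data" by its element_type and write one
// <type>.json file per group; objects without element_type are skipped.
const doc = JSON.parse(fs.readFileSync("input.json", "utf8")); // assumed file name
const groups = {};

for (const obj of doc.data) {
  const type = obj.element_type;
  if (!type) continue; // element_type is not present on every object
  (groups[type] = groups[type] || []).push(obj);
}

for (const [type, objects] of Object.entries(groups)) {
  fs.writeFileSync(type + ".json", JSON.stringify(objects, null, 2));
}

The nested, escaped JSON inside the "times" field is not a problem here, because it is parsed as an ordinary string and written back out unchanged.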