Best approach to transitioning between different datasets with d3.js - json

I am trying to build a bar chart that will allow the user to switch between different datasets (tot1 and tot2), all included in one JSON file:
{
"users":
{
"gender":
{
"tot1":
[
{"label":"female", "value":6038},
{"label":"male", "value":45228},
{"label":"unknown", "value":32932}
],
"tot2":
[
{"label":"female", "value":6022},
{"label":"male", "value":45328},
{"label":"unknown", "value":12932}
]
}
}
}
I don't see how I should proceed. Following the solution to a previous question, I would need to select tot1 or tot2 when loading the data with d3.json. Of course the chart specs don't change when switching from tot1 to tot2, so probably the right approach would be to compose the chart with a function that receives the data as an argument...
But if I try to load my data into a variable I get an error:
data1 = d3.json("../data.json", function(data) {return data.users.gender.tot;});
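A minimal sketch of that function-based approach, in d3 v3 callback style (the #toggle button, the SVG setup, and the bar sizing below are placeholders, not from the question): load the JSON once, then hand each sub-array to an update function.
// d3.json is asynchronous: the data is only available inside the
// callback, never as a return value, which is why the assignment fails.
d3.json("../data.json", function(error, data) {
  if (error) throw error;
  var gender = data.users.gender;

  update(gender.tot1);                   // draw the initial chart

  d3.select("#toggle").on("click", function() {
    update(gender.tot2);                 // redraw with the other dataset
  });
});

// One function owns the chart and simply receives a new array.
function update(dataset) {
  var bars = d3.select("svg").selectAll("rect")
      .data(dataset, function(d) { return d.label; });

  bars.enter().append("rect")            // new bars
      .attr("x", function(d, i) { return i * 30; })
      .attr("width", 25);

  bars.transition()                      // new and updated bars
      .attr("y", function(d) { return 200 - d.value / 300; })
      .attr("height", function(d) { return d.value / 300; });

  bars.exit().remove();
}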

ADF - Data Flow - JSON Expression for Property Name

I have a requirement to convert JSON into CSV (or a SQL table) or any other flattened structure using Data Flow in Azure Data Factory. I need to take the property names at one level of the hierarchy and the values of the child properties at a lower level from the source JSON, and add both as column/row values in the CSV or other flattened structure.
Source Data Rules/Constraints:
Parent-level property names change dynamically (e.g. ABCDataPoints, CementUse, CoalUse, ABCUseIndicators are dynamic).
The hierarchy always remains the same as in the sample JSON below.
I need help defining a JSON path/expression to get the names ABCDataPoints, CementUse, CoalUse, ABCUseIndicators, etc. I have already figured out how to retrieve the values of the properties Value, ValueDate, ValueScore, and AsReported.
Source Data Structure:
{
"ABCDataPoints": {
"CementUse": {
"Value": null,
"ValueDate": null,
"ValueScore": null,
"AsReported": [],
"Sources": []
},
"CoalUse": {
"Value": null,
"ValueDate": null,
"AsReported": [],
"Sources": []
}
},
"ABCUseIndicators": {
"EnvironmentalControversies": {
"Value": false,
"ValueDate": "2021-03-06T23:22:49.870Z"
},
"RenewableEnergyUseRatio": {
"Value": null,
"ValueDate": null,
"ValueScore": null
}
},
"XYZDataPoints": {
"AccountingControversiesCount": {
"Value": null,
"ValueDate": null,
"AsReported": [],
"Sources": []
},
"AdvanceNotices": {
"Value": null,
"ValueDate": null,
"Sources": []
}
},
"XYXIndicators": {
"AccountingControversies": {
"Value": false,
"ValueDate": "2021-03-06T23:22:49.870Z"
},
"AntiTakeoverDevicesAboveTwo": {
"Value": 4,
"ValueDate": "2021-03-06T23:22:49.870Z",
"ValueScore": "0.8351945854483925"
}
}
}
Expected flattened structure:
Background:
After multiple calls with ADF experts at Microsoft (our workplace has a Microsoft/Azure partnership), they concluded that this is not possible as-is with the out-of-the-box activities provided by ADF, neither by Data Flow (though Data Flow need not be used) nor by the Flatten feature. The reason is that Data Flow/Flatten only unrolls array objects, and there are no mapping functions available to pick up the property names. Custom expressions are in internal beta testing and will be in PA in the near future.
Conclusion/Solution:
Based on the calls with the Microsoft employees, we agreed to pursue two approaches, but both need custom code; without custom code this is not possible using out-of-the-box activities.
Solution-1: Use custom code to flatten the JSON as required, in an ADF Custom Activity. The downside is that you need external compute (VM/Batch) and the supported options are not on-demand, so it is a little expensive, but it works best if you have continuous stream workloads. With this approach you also have to continuously monitor whether the input sources vary in size, because the compute needs to be elastic in that case, or else you will get out-of-memory exceptions.
Solution-2: This still needs custom code, but in a function app.
Create a Copy Activity with the JSON files as the source (preferably in a storage account).
Use the REST endpoint of the function as the target (not a Function activity, because that has a 90-second timeout when called from an ADF activity).
The function app takes JSON lines as input, parses them, and flattens them (see the sketch below).
This way you can scale the number of lines sent in each request to the function, and also scale the number of parallel requests.
The function writes the flattened output to one or more files in blob storage.
The pipeline continues from there as needed.
One problem with this approach: if any of the ranges fails, the Copy Activity will retry, but it will run the whole process again.
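A minimal sketch of the flattening step, as it might run inside a Node.js function app (the row shape and the function name are illustrative, not part of the original solution):
// Flatten the two-level hierarchy into rows of
// {Category, DataPoint, Value, ValueDate, ValueScore, AsReported}.
function flatten(doc) {
  var rows = [];
  Object.keys(doc).forEach(function(category) {      // e.g. "ABCDataPoints"
    var group = doc[category];
    Object.keys(group).forEach(function(dataPoint) { // e.g. "CementUse"
      var props = group[dataPoint];
      rows.push({
        Category: category,
        DataPoint: dataPoint,
        Value: props.Value,
        ValueDate: props.ValueDate,
        ValueScore: props.ValueScore !== undefined ? props.ValueScore : null,
        AsReported: JSON.stringify(props.AsReported || [])
      });
    });
  });
  return rows;
}
Each row can then be serialized as a CSV line; the dynamic parent names end up as ordinary column values, which is what the out-of-the-box Flatten cannot produce.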
I am trying something very similar; is there any other / native solution to address this?
As mentioned in the response above, has this gone GA yet? If so, any reference documentation / samples would be of great help!
"Custom expressions are in internal beta testing and will be in PA in the near future."

Best Schema for a Data List in JSON for a RESTful API to use in Angular

I've been wondering for some days what kind of schema would be more appropriate for a data list in JSON in a web application.
I'm developing a REST web application and I'm using Angular for the front end; I need to order, filter, and print these data lists, also in XML...
In your opinion, which schema is better, and why?
1) {
"datas": [
{ "first":"","second":""},
{ "first":"","second":""},
{ "first":"","second":""}
]
}
2) {
"datas": [{
"data": { "first":"","second":""},
"data": { "first":"","second":""},
"data": { "first":"","second":""}
}]
}
3) [
{ "first":"","second":""},
{ "first":"","second":""},
{ "first":"","second":""}
]
Thanks so much.
The first and third notations are quite similar, because the third notation is included in your first.
So the question is: "Should I return my data as an array, or should I return an object with a property that contains the array?"
It depends on whether or not you want to have more information alongside your data.
For example, if your API might return an error, you will want to handle it from the front end.
In case of an error, the JSON would look like this:
{
"datas": null,
"error": "An error occured because of some reasons..."
}
Conversely, if everything goes well and your API actually returns the results, it will look like this:
{
"datas": [
{ "first":"","second":""},
{ "first":"","second":""},
{ "first":"","second":""}
],
"error": null
}
Then your front end can use the error property to manage errors sent from the API.
var result = getDatas(); // Load datas from the API
if (result.error) {
  // Handle the error, display a message to the user, ...
} else {
  doSomething(result.datas); // Use your datas
}
If you don't need extra properties like error, you can stick with the third schema.
The second notation is invalid. The datas array will contain only one object, which will have one property named data. Since data is a property that is defined multiple times, the object in the array will keep only the last occurrence:
var result = {
"datas": [{
"data": { "first":"a","second":"b"},
"data": { "first":"c","second":"d"},
"data": { "first":"e","second":"f"}
}]
}
console.log("Content of result.datas[0].data : ")
console.log(result.datas[0].data)
Obviously the first option is the easiest to use. Accessing datas gives you an array directly, and any operation (filter, sort, print) on that array is easy compared to anything else. Everywhere you just pass datas, not datas.data.
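For instance, with the first schema the ordering and filtering the question asks about stay one-liners (a quick sketch; the first/second field names come from the examples above):
var sorted = result.datas.slice().sort(function(a, b) {
  return a.first.localeCompare(b.first); // order by "first"
});
var filtered = result.datas.filter(function(d) {
  return d.second !== "";                // keep entries with a "second" value
});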

Pass multiple files to the Autodesk Forge Design Automation API

If I use just one file it works perfectly, but with more than one it fails.
This is my request:
{
"Arguments":{
"InputArguments":[
{
"Resource":"https://s3url.com",
"Name":"HostDwg1-050A-014"
},
{
"Resource":"https://s3url.com",
"Name":"HostDwg1-050A-015"
}
],
"OutputArguments":[
{
"Name":"Result1-050A-014",
"HttpVerb":"PUT",
"Resource":"https:://s3url.com",
"StorageProvider":"Generic"
},
{
"Name":"Result1-050A-015",
"HttpVerb":"PUT",
"Resource":"https://s3url.com",
"StorageProvider":"Generic"
}
]
},
"ActivityId":"PlotToPDF",
"Id":""
}
This is the error I get:
The number of Arguments is bigger than the number of Parameters.
Parameter name: Count
How does the request have to be built to convert more than one file, without making a separate request for each file? Thanks.
The PlotToPDF activity declares exactly one input parameter and exactly one output parameter. An activity is like a function in a programming language: you can only provide as many arguments as there are parameters. So...
If you want a workitem with more than one input/output argument, you should define a new custom activity that has more than one input/output parameter.
If you want to plot multiple files, you should simply submit multiple workitems.
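A sketch of the second option, assuming the stock PlotToPDF activity (the submitWorkItem helper is hypothetical, and the parameter names HostDwg/Result are assumptions; check the activity's declared parameter names):
// Submit one workitem per drawing; each workitem supplies exactly one
// input argument and one output argument, matching the activity's parameters.
var files = ["HostDwg1-050A-014", "HostDwg1-050A-015"];

files.forEach(function(name) {
  submitWorkItem({                  // hypothetical helper that POSTs the
    ActivityId: "PlotToPDF",        // workitem to the Design Automation API
    Arguments: {
      InputArguments: [
        { Resource: "https://s3url.com/" + name, Name: "HostDwg" }
      ],
      OutputArguments: [
        { Resource: "https://s3url.com/" + name + "-result",
          Name: "Result", HttpVerb: "PUT", StorageProvider: "Generic" }
      ]
    }
  });
});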

Is it possible to create graphs taking data from json in Zabbix?

Would it be possible, in any way, to produce JSON that Zabbix can understand and turn into a graph?
E.g. I have this JSON:
{
"response:" {
"success": true,
"server": {
"name": "Test Server",
"alive": true,
"users": 25
}
}
}
And I would like to have a simple graph where I can see the value of users.
I might be asking nonsense here, but I was reading about the URL element and it looks like this is possible; however, I couldn't find any template or any info on how to send the data.
Create a Zabbix trapper item and send such values with the zabbix_sender utility. The values will be processed like any normal item values by Zabbix, and graphs will be available as well.
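A sketch of the sending side in Node.js (the server address, host name, and item key are placeholders; the trapper item key must match the one configured in Zabbix, and zabbix_sender must be installed):
// Extract the value from the JSON above and push it to Zabbix
// as a trapper item via the zabbix_sender CLI.
var execFile = require("child_process").execFile;

var payload = {                          // the JSON from the question
  response: { success: true, server: { name: "Test Server", alive: true, users: 25 } }
};

execFile("zabbix_sender", [
  "-z", "zabbix.example.com",            // Zabbix server address
  "-s", "Test Server",                   // monitored host name
  "-k", "server.users",                  // trapper item key
  "-o", String(payload.response.server.users) // the value to graph
]);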

d3.js, using d3.extent to find min/max of nested JSON values

I'm building a bubble chart in d3 and I'd like the color of the bubbles to be based on logFC values I have in a local JSON file.
My JSON file looks like this:
{
"name": "genes",
"children": [
{
"value": "9.57E-06",
"logFC": "-5.51658163",
"symbol": "BCHE",
},
{
"value": "0.0227",
"logFC": "3.17853632",
"symbol": "GRIA2",
},
{
"value": "0.00212",
"logFC": "-2.8868275",
"symbol": "GRIN2A",
}
]
}
The file is not flat, which is why I think I'm having trouble referencing the leaf nodes with d3.extent. I know I could use:
var color1 = d3.scale.linear()
.domain([-5.51658163,3.17853632]) //manually putting in min/max into the domain
.range(["lightgreen", "green"]);
But my data will change, so to make the code dynamic I've tried the following, along with many other variations:
var color1 = d3.scale.linear()
.domain([d3.extent(d3.values(data, function(data) { return +d.logFC;}))])
.range(["lightgreen", "green"]);
Basically, I'm having trouble using d3.extent on leaf nodes. Is there a simple way to use d3.extent to find the min and max values of logFC?
Thank you!
(PS: if there are any problems with the parentheses, it was a mistake I made when copying my data into the question box.)
d3.extent accepts an array of values and returns the bounds as an array. So I guess your code should be more like:
.domain(d3.extent(d3.values(data, function(data) {
return +d.logFC;
})))
(no square brackets)
Not 100% sure, because I don't know what the data variable is in your example.
Update:
If the data variable is the loaded JSON object, do the following:
.domain(d3.extent(data.children.map(function(child) {
return +child.logFC;
})))