I'm building a Dash application; it's my first one.
I have a dropdown (id='demo-dropdown') and a dash_table.DataTable:
tab : dash_table.DataTable = dash_table.DataTable(
id='datatable-interactivity',
columns=[
{"name": 'systemName', "id": 'systemName', "deletable": False, "selectable": False},
...
{"name": 'x', "id": 'x', "deletable": False, "selectable": False, "hideable": True, "type": "numeric"},
{"name": 'y', "id": 'y', "deletable": False, "selectable": False, "hideable": True, "type": "numeric"},
{"name": 'z', "id": 'z', "deletable": False, "selectable": False, "hideable": True, "type": "numeric"},
...
{"name": 'distance', "id": 'distance', "deletable": False, "selectable": False, "type": "numeric"},
],
data=df.to_dict('records'),
editable=False,
filter_action="native",
sort_action="native",
sort_mode="multi",
column_selectable=False, # "single",
row_selectable=False, # "multi",
row_deletable=False,
selected_columns=[ ],
selected_rows=[ ],
page_action="native",
page_current=0,
page_size=100,
)
The dropdown specifies a location (x0,y0,z0), and I want to fill the distance column with regular euclidean distance values from that point. For the moment, I'd be happy to increment the values in the distance column by 5 when the user clicks the dropdown.
Here's my callback:
@app.callback(
    [dash.dependencies.Output('datatable-interactivity', 'data'),
     dash.dependencies.Output('datatable-interactivity', 'columns')],
    [dash.dependencies.Input('demo-dropdown', 'value')])
def update_output(value):
    columns = [{"name": 'distance', "id": 'distance'}]
    nrows = df.shape[0]
    for ind in df.index:
        x1: float = df.at[ind, 'x']
        y1: float = df.at[ind, 'y']
        z1: float = df.at[ind, 'z']
        # x, y, z are meant to be the reference point (x0, y0, z0) from the dropdown
        dis: float = math.sqrt((x - x1) ** 2 + (y - y1) ** 2 + (z - z1) ** 2)
        df.at[ind, 'distance'] = dis
    return [df['distance'].to_dict(), columns]  # tried "records", "rows"
My problem seems similar to "How can we create data columns in Dash Table dynamically using callback with a function providing the dataframe", but I can't seem to make it work.
The error message I get is:
Invalid argument `data` passed into DataTable with ID "datatable-interactivity".
Expected an array.
Was supplied type `object`.
Value provided:
{
"0": 37.0625,
"1": 94.53125,
"2": 62.03125,
"3": 33.65625,
"4": 33.65625,
...
"185": 63.59375
}
at propTypeErrorHandler (http://127.0.0.1:8050/_dash-component-suites/dash_renderer/dash_renderer.v1_4_1m1588701826.dev.js:37662:9)
at CheckedComponent (http://127.0.0.1:8050/_dash-component-suites/dash_renderer/dash_renderer.v1_4_1m1588701826.dev.js:32489:77)
...
I've tried various combinations of lists and dicts for the return with no luck. Also, I think, but am not sure, that updating the dataframe in place is the right thing to do.
(Just to sum up, this is what I think I'm doing: notifying the view that the model's data has changed by returning a list of two dicts, where the first holds the row indices and the new values, and the second identifies the column(s) that changed.)
Thanks!
Update
As far as I can tell, changing the last two lines of the callback to the following works:
_cols = [{"name": i, "id": i} for i in df.columns]
return df.to_dict('records'), _cols
i.e. replacing all the values in the table, instead of just the last column.
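For reference, here is a minimal sketch of how the whole callback could look once the distance is actually computed from the dropdown selection. It assumes app and df exist as above and uses a hypothetical coords_for() helper (not part of the original code) to map the dropdown value to the reference point (x0, y0, z0):

import math

import dash
from dash.exceptions import PreventUpdate

@app.callback(
    [dash.dependencies.Output('datatable-interactivity', 'data'),
     dash.dependencies.Output('datatable-interactivity', 'columns')],
    [dash.dependencies.Input('demo-dropdown', 'value')])
def update_output(value):
    if value is None:
        # the callback fires once on page load, before anything is selected
        raise PreventUpdate
    x0, y0, z0 = coords_for(value)  # hypothetical lookup of the selected location
    df['distance'] = [
        math.sqrt((x0 - x) ** 2 + (y0 - y) ** 2 + (z0 - z) ** 2)
        for x, y, z in zip(df['x'], df['y'], df['z'])
    ]
    columns = [{"name": i, "id": i} for i in df.columns]
    return df.to_dict('records'), columns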
Related
I have two JSON files and I am validating whether the responses are the same or different. I need to show the user exactly what changed, i.e. which particular key was added, removed, or changed in the file.
file1.json
[
{
"Name": "Jack",
"region": "USA",
"tags": [
{
"name": "Name",
"value": "Assistant"
}
]
},
{
"Name": "MATHEW",
"region": "USA",
"tags": [
{
"name": "Name",
"value": "Worker"
}
]
}
]
file2.json
[
{
"Name": "Jack",
"region": "USA",
"tags": [
{
"name": "Name",
"value": "Manager"
}
]
},
{
"Name": "MATHEW",
"region": "US",
"tags": [
{
"name": "Name",
"value": "Assistant"
}
]
}
]
Comparing the two JSON files, you can see the differences: MATHEW's region in file2.json has changed to "US", Jack's tag value has changed from "Assistant" to "Manager", and MATHEW's tag value from "Worker" to "Assistant". Now I want to show the user that file2.json has changes such as region: "US" and "Assistant" changed to "Manager".
I have used deepdiff for the validation.
from deepdiff import DeepDiff

# JsonCompareError is assumed to be a custom exception defined elsewhere in the project.
def difference(oldurl_resp, newurl_resp, file1):
    ddiff = DeepDiff(oldurl_resp, newurl_resp, ignore_order=True)
    if ddiff == {}:
        print("BOTH JSON FILES MATCH !!!")
        return True
    else:
        print("FAILURE")
        output = ddiff
        if 'iterable_item_added' in output:
            test = output['iterable_item_added']
            print('The Resource names are ->')
            i = []
            for k in test:
                print("Name: ", test[k]['Name'])
                print("Region: ", test[k]['region'])
                msg = " Name ->" + test[k]['Name'] + " Region:" + test[k]['region'] + ". "
                i.append(msg)
            raise JsonCompareError("The json file has KEYS changed! Please validate the below "
                                   + str(i) + " in " + file1)
        elif 'iterable_item_removed' in output:
            test2 = output['iterable_item_removed']
            print('The names are ->')
            i = []
            for k in test2:
                print(test2[k]['Name'])
                print(test2[k]['region'])
                msg = " Resource Name ->" + test2[k]['Name'] + " Region:" + test2[k]['region'] + ". "
                i.append(msg)
            raise JsonCompareError("The json file has Keys Removed!! Please validate the below "
                                   + str(i) + " in " + file1)
This code only shows the resource Name; I also want to show the tags that were changed, added, or removed.
Can anybody guide me?
If you just print out the value of the "test" variable, you will find that the "tags" changes are inside it. In this example the value of "test" will be:
test = {'root[0]': {'region': 'USA', 'Name': 'Jack', 'tags': [{'name': 'Name', 'value': 'Manager'}]}, 'root[1]': {'region': 'US', 'Name': 'MATHEW', 'tags': [{'name': 'Name', 'value': 'Assistant'}]}}
and you can print test[k]['tags'] or add it to your "msg" variable.
Suggestion:
Also, if your data has some primary key (for example an "id", or if the order of items is always fixed), you can compare the items one by one instead of comparing the whole lists, and you will get a better comparison. For example, if you compare the two "Jack" entries against each other, you get the following result:
{'iterable_item_removed': {"root['tags'][0]": {'name': 'Name', 'value': 'Assistant'}}, 'iterable_item_added': {"root['tags'][0]": {'name': 'Name', 'value': 'Manager'}}}
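A minimal sketch of that per-item approach, assuming "Name" is unique in both files and that the two parsed lists are called old_items and new_items (these names are not from the original code):

from deepdiff import DeepDiff

old_by_name = {item['Name']: item for item in old_items}
new_by_name = {item['Name']: item for item in new_items}

for name in sorted(old_by_name.keys() & new_by_name.keys()):
    diff = DeepDiff(old_by_name[name], new_by_name[name], ignore_order=True)
    if diff:
        # the diff now describes only this item's changes, including its tags
        print(name, diff)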
You should try the deepdiff library. It gives you the key where the difference occurs, along with the old and new values.
from deepdiff import DeepDiff
ddiff = DeepDiff(json_object1, json_object2)
# if you want to compare by ignoring order
ddiff = DeepDiff(json_object1, json_object2, ignore_order=True)
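For the two files in this question, the changes typically show up under the 'values_changed' key. Here is a small sketch of turning that into user-facing messages, with json_object1 and json_object2 being the two parsed files as above:

from deepdiff import DeepDiff

ddiff = DeepDiff(json_object1, json_object2)

# each entry maps a path such as "root[1]['region']" to its old and new values
for path, change in ddiff.get('values_changed', {}).items():
    print("{}: {} -> {}".format(path, change['old_value'], change['new_value']))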
I have JSON data like this:
{
"profiles": {
"auto_scaler": [
{
"auto_scaler_group_name": "myasg0",
"auto_scaler_group_options": {
":availability_zones": ["1a", "1b", "1c"],
":max_size": 1,
":min_size": 1,
":subnets": ["a", "b", "c"],
":tags": [
{":key": "Name", ":value": "app0" },
{":key": "env", ":value": "dev" },
{":key": "role", ":value": "app" },
{":key": "domain", ":value": "example.com" },
{":key": "fonzi_app", ":value": "true"},
{":key": "vpc", ":value": "nonprod"}
]
},
"dns_name": "fonz1"
},
{
"auto_scaler_group_name": "myasg1",
"auto_scaler_group_options": {
":availability_zones": ["1a", "1b", "1c"],
":max_size": 1,
":min_size": 1,
":subnets": ["a", "b", "c"],
":tags": [
{":key": "Name", ":value": "app1" },
{":key": "env", ":value": "dev" },
{":key": "role", ":value": "app" },
{":key": "domain", ":value": "example.com" },
{":key": "bozo_app", ":value": "true"},
{":key": "vpc", ":value": "nonprod"}
]
},
"dns_name": "bozo1"
}
]
}
}
I want to write a jq query that first selects the Hash element in the Array at .profiles.auto_scaler whose Array of Hashes at .auto_scaler_group_options.tags contains a Hash with a ":key" key whose value contains "fonzi" and a ":value" key whose value is exactly "true", and that then returns the value of the key dns_name.
In the example, the query would simply return "fonz1".
Does anyone know how to do this, if it is possible, using jq?
In brief, yes.
In long:
.profiles.auto_scaler[]
| .dns_name as $name
| .auto_scaler_group_options
| select( any(.[":tags"][];
(.[":key"] | index("fonzi")) and (.[":value"] == "true")) )
| $name
The output of the above is:
"fonz1"
The trick here is to extract the candidate .dns_name before diving more deeply into your "complex nested JSON".
An alternative
If your jq does not have any, you could (in this particular case) get away without it by replacing the select expression above with:
select( .[":tags"][]
| (.[":key"] | index("fonzi")) and (.[":value"] == "true") )
Be warned, though, that the semantics of the two expressions are slightly different. (Homework exercise: what is the difference?)
If your jq doesn't have any and if you want the semantics of any, then you could easily roll your own, or simply upgrade :-)
You can look up JSON in a number of ways:
by checking whether a property exists,
by using the [property] syntax,
or by using the 'property' in object syntax.
For your case, here's a small sample; you can further loop over the array and check for the condition you described (a "key" key whose value contains EEE and a "value" key whose value is exactly FFF):
for (var k = 0; k < p['AAA']['BBB'].length; k++) {
    console.log(p['AAA']['BBB'][k]);
}
where p is the parsed JSON object.
Hope that helps
Making an API GET call, I get the following JSON structure:
{
"metadata": {
"grand_total_entities": 231,
"total_entities": 0,
"count": 231
},
"entities": [
{
"allow_live_migrate": true,
"gpus_assigned": false,
"ha_priority": 0,
"memory_mb": 1024,
"name": "test-ansible2",
"num_cores_per_vcpu": 2,
"num_vcpus": 1,
"power_state": "off",
"timezone": "UTC",
"uuid": "e1aff9d4-c834-4515-8c08-235d1674a47b",
"vm_features": {
"AGENT_VM": false
},
"vm_logical_timestamp": 1
},
{
"allow_live_migrate": true,
"gpus_assigned": false,
"ha_priority": 0,
"memory_mb": 1024,
"name": "test-ansible1",
"num_cores_per_vcpu": 1,
"num_vcpus": 1,
"power_state": "off",
"timezone": "UTC",
"uuid": "4b3b315e-f313-43bb-941b-03c298937b4d",
"vm_features": {
"AGENT_VM": false
},
"vm_logical_timestamp": 1
},
{
"allow_live_migrate": true,
"gpus_assigned": false,
"ha_priority": 0,
"memory_mb": 4096,
"name": "test",
"num_cores_per_vcpu": 1,
"num_vcpus": 2,
"power_state": "off",
"timezone": "UTC",
"uuid": "fbe9a1ac-cf45-4efa-9d65-b3257548a9f4",
"vm_features": {
"AGENT_VM": false
},
"vm_logical_timestamp": 17
},
]
}
In my Ansible playbook I register a variable holding this content.
I need to get a list of the UUIDs of "test-ansible1" and "test-ansible2", but I'm having a hard time finding the best way to do this.
Note that I have another variable holding the list of names for which I need to lookup the UUID.
The need is to use those UUIDs to fire a poweron command for all UUIDs corresponding to specific names.
How would you guys do that?
I've taken a number of approaches but I can't seem to get what I want so I prefer an uninfluenced opinion.
P.S.: This is what Nutanix AHV returns for a GET of all VMs through its API. As far as I can tell, there is no way to get JSON information for specific VMs only, just for all VMs.
Thanks.
Here is some Jinja2 magic for you:
- debug:
msg: "{{ mynames | map('extract', dict(test_json | json_query('entities[].[name,uuid]'))) | list }}"
vars:
mynames:
- test-ansible1
- test-ansible2
Explanation:
test_json | json_query('entities[].[name,uuid]') reduces your original json data to a list of elements which are lists of two items – name value and uuid value:
[
[
"test-ansible2",
"e1aff9d4-c834-4515-8c08-235d1674a47b"
],
[
"test-ansible1",
"4b3b315e-f313-43bb-941b-03c298937b4d"
],
[
"test",
"fbe9a1ac-cf45-4efa-9d65-b3257548a9f4"
]
]
BTW you can use http://jmespath.org/ to test query statements.
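Since Ansible's json_query filter is backed by the jmespath Python library, you can also try the expression in plain Python. A quick sketch, where test_json stands for the registered API response used in the task above:

import jmespath

pairs = jmespath.search('entities[].[name,uuid]', test_json)
# pairs is the list of [name, uuid] two-item lists shown above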
dict(...), when applied to such a structure (a list of "tuples"), generates a dictionary:
{
"test": "fbe9a1ac-cf45-4efa-9d65-b3257548a9f4",
"test-ansible1": "4b3b315e-f313-43bb-941b-03c298937b4d",
"test-ansible2": "e1aff9d4-c834-4515-8c08-235d1674a47b"
}
Then we apply the extract filter, as per the documentation, to fetch only the required elements:
[
"4b3b315e-f313-43bb-941b-03c298937b4d",
"e1aff9d4-c834-4515-8c08-235d1674a47b"
]
I have a simple json string that I read in via a URL.
jsonFile <- jsonlite::fromJSON(RCurl::getURL("http://server.com/jsonData.php"))
[
{
"X": "A",
"Y": 1,
"Z": 2
},
{
"X": "B",
"Y": 3,
"Z": 4
},
{
"X": "C",
"Y": -4,
"Z": -3
},
{
"X": "D",
"Y": -2,
"Z": -1
}
]
I am then attempting to color-code the columns based on their numeric values: green if the value in column Y or Z is positive, and red if negative. I attempted this with the following function:
DT::formatStyle(jsonFile, c('Y', 'Z'), color = 'white', backgroundColor = styleInterval(0, c('green','red')))
But it yields this error: Error in name2int(name, names, rownames) :
You specified the columns: X,Y, but the column names of the data are
When I call the names() function on the data frame, I get:
names(jsonFile)
[1] "X" "Y" "Z"
I think this has to do with how I am accessing the data frame itself, since it came from a JSON data structure, but I haven't yet figured out how to reference the column names appropriately. I had the same issue when doing this with piping as well.
Any help is much appreciated.
Thanks
This is not an issue with the JSON-to-data.frame conversion, but rather with the DT formatting you are attempting.
From the help file for ?formatStyle:
table - a table object created from datatable()
Therefore, the input to formatStyle needs to be created with the datatable() function. You can do this directly in your call. (Note also that with styleInterval the first colour in the vector applies to values at or below the cut, so to get green for positive values the vector should be c('red', 'green').)
formatStyle(datatable(jsonFile), c('Y', 'Z'), color = 'white',
            backgroundColor = styleInterval(0, c('red', 'green')))
I've been trying to implement a simple subgrid within jqgrid to show line items for an invoice. I finally got the subgrids to populate but each subgrid is showing the same list of line items, which is actually all of the entries in the data set.
I'm not quite sure how to debug this, but here are some of my ideas:
Is it a problem with the way the json store is (not) responding to the GET queries?
Is it because I never define which field within the subgrid data is the "foreign key", so to speak?
Do I need the subGridUrl to point to JSON data containing only the appropriate rows (not every line item)?
Example JSON for line items:
order_id points to the id of the order
{
"total": 1,
"records": 6,
"rows": [
{
"description": "PART X",
"order_id": 2,
"qty": 5,
... more fields ...
"id": 1
},
... more ...
],
page: 1
}
JSON for main grid items:
{
"total": 1,
"records": 2,
"rows": [
{
"order_no": 2,
... more fields ...
"id": 2
},
... more ...
],
page:1
}
Applicable parts of my jqGrid script:
jQuery("#mygrid").jqGrid({
... cosmetic stuff for main grid ...
url: "/my_json_url/",
datatype: "json",
colNames:['Order',...],
colModel:[
{name:'order_no', index:'order_no'},
...
],
jsonReader: {
repeatitems:false,
root: "rows",
page: "page",
total: "total",
records: "records",
cell: "",
id: "id",
subgrid: {root: "rows", cell:"", repeatitems: false}
},
prmNames: {subgridid: "order_id"},
subGrid: true,
subGridUrl: "/json_url/to_line_items/",
subGridModel: [{ name : ['qty','description'],
width: [100,100] }]
}).navGrid(some options);
I suppose that the code behind the URL "/json_url/to_line_items/" doesn't use the id parameter sent by jqGrid. When the user expands a subgrid, the rowid of the row will be sent as an additional parameter to subGridUrl. By the way, I don't understand why you use grid id values that differ from the order_id. Currently the id=1 parameter would be appended to the subGridUrl when expanding the row with order_id=10. Is that what you want?
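What the server should do with that parameter depends entirely on your backend, but as an illustration, here is a hypothetical Flask handler for "/json_url/to_line_items/" that filters by the parameter jqGrid appends (named order_id here because of the prmNames: {subgridid: "order_id"} setting); ALL_LINE_ITEMS is a stand-in for however you actually load the line items:

from flask import Flask, jsonify, request

app = Flask(__name__)

ALL_LINE_ITEMS = []  # stand-in for your real data source (database, file, ...)

@app.route("/json_url/to_line_items/")
def line_items():
    # jqGrid sends the expanded row's id as a query parameter when it requests the subgrid data
    order_id = request.args.get("order_id", type=int)
    rows = [item for item in ALL_LINE_ITEMS if item["order_id"] == order_id]
    return jsonify({"total": 1, "page": 1, "records": len(rows), "rows": rows})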