Populating a Dash data_table column with switches - plotly-dash

Is it possible to populate the Toggle column with mantine switches?
I am using the code from the docs that shows how to populate a column with dropdowns, but I cannot figure out how to get any further than this:
from dash import Dash, dash_table, html
import dash_mantine_components as dmc
import pandas as pd

app = Dash(__name__)

df = pd.DataFrame({
    'A': [1, 2],
    'B': [4, 5],
    'C': [7, 8]
})

def toggle_switch():
    return dmc.Switch(
        size="lg",
        radius="sm",
        label="Enable this option",
        checked=True
    )

app.layout = html.Div([
    dash_table.DataTable(
        id='datatable',
        columns=[
            {"name": i, "id": i} for i in df.columns
        ] + [{"name": "Toggle", "id": "toggle"}],
        data=df.to_dict('records'),
        editable=True,
    )
])

if __name__ == '__main__':
    app.run_server(debug=True)
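One possible workaround, not from the original thread: dash_table.DataTable cells only support text, numeric, and dropdown presentations, so they cannot host arbitrary components such as dmc.Switch. A minimal sketch of an alternative, assuming you can trade DataTable for a plain html.Table (the pattern-matching id {'type': 'row-toggle', ...} is an illustrative name, not from the question):

from dash import Dash, html
import dash_mantine_components as dmc
import pandas as pd

app = Dash(__name__)
df = pd.DataFrame({'A': [1, 2], 'B': [4, 5], 'C': [7, 8]})

def make_row(index, row):
    # One <tr> per DataFrame row, with a Switch in the extra "Toggle" cell.
    cells = [html.Td(row[col]) for col in df.columns]
    cells.append(html.Td(dmc.Switch(
        # pattern-matching id (illustrative) so a callback can target each switch
        id={'type': 'row-toggle', 'index': index},
        size="lg", radius="sm", checked=True)))
    return html.Tr(cells)

header = html.Tr([html.Th(col) for col in df.columns] + [html.Th("Toggle")])
table = html.Table([html.Thead(header),
                    html.Tbody([make_row(i, r) for i, r in df.iterrows()])])

# Recent dash-mantine-components versions require a MantineProvider wrapper.
app.layout = dmc.MantineProvider(table)

if __name__ == '__main__':
    app.run_server(debug=True)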

Related

Using 'extendData' in a 'dcc.Interval' event will only update my graph when I'm browsing another Chrome tab

I created a very simple (one-page) application in Dash that appends random data to a Plotly chart using a dcc.Interval component and the extendData method (I'd like to keep x values max).
The program worked like a charm until I tried to port it to a multi-page application.
I used the following example:
https://github.com/facultyai/dash-bootstrap-components/blob/main/examples/python/templates/multi-page-apps/responsive-collapsible-sidebar/sidebar.py
and replaced:

elif pathname == "/page-1":
    return html.P("This is the content of page 1. Yay!")

with:

import page_1
...
elif pathname == "/page-1":
    return page_1.layout
My page_1.py contains the following code:
from dash import dcc, html
import dash_bootstrap_components as dbc
import plotly.graph_objs as go

layout = dbc.Card(dbc.CardBody([
    html.H4('Live Feed'),
    dcc.Graph(id='live-update-graph',
              figure=go.Figure({'data': [
                  {'x': [], 'y': []},
                  {'x': [], 'y': []},
                  {'x': [], 'y': []},
                  {'x': [], 'y': []}
              ]}),
              ),
    dcc.Interval(
        id='interval-component',
        interval=0.1 * 1000,  # in milliseconds
        n_intervals=0
    )
]))
I put my callback in my app.py file:

@app.callback(Output('live-update-graph', 'extendData'),
              Input('interval-component', 'n_intervals'))
def update_graph_live(n):
    # Collect some data
    y1 = np.random.normal(loc=10, scale=10)
    y2 = y1 + random.randint(-5, 5)
    y3 = y2 + random.randint(-10, 60)
    y4 = y3 + random.randint(-40, 2)
    # extendData triple: (new points per trace, trace indices, max points kept)
    return [{'x': [[datetime.datetime.now()]] * 4, 'y': [[y1], [y2], [y3], [y4]]}, [0, 1, 2, 3], 300]
...
if __name__ == '__main__':
    app.run_server(debug=True)
Unfortunately, my chart only updates when I'm browsing another tab in Chrome, not when I'm actually viewing it.
I have another page with some other components and an associated callback declared in my app.py file as:

@app.callback(
    Output("result-code", "children"),
    Input("slider", "value"),
)
def create_python_script(slider):
    markdown = markdown_start
    markdown += '''
msg = {{
"slider_value": {slider}
}}'''.format(slider=slider)
    markdown += markdown_end
    return markdown
And my Markdown component is updated in real-time, no problem with that.
Here is a copy of my callback status:
[screenshot: callback status in Dash]
My developer console shows every incoming message on the front-end side:
{
  "multi": true,
  "response": {
    "live-update-graph": {
      "extendData": [
        {
          "x": [
            ["2023-02-13T16:58:37.533426"],
            ["2023-02-13T16:58:37.533426"],
            ["2023-02-13T16:58:37.533426"],
            ["2023-02-13T16:58:37.533426"]
          ],
          "y": [
            [-4.26648933108117],
            [-3.2664893310811696],
            [-8.26648933108117],
            [-9.26648933108117]
          ]
        },
        [0, 1, 2, 3],
        300
      ]
    }
  }
}
Am I doing something wrong?
Thanks in advance!
It turned out I was using http://localhost:8888 instead of http://127.0.0.1:8888 to connect to my web app without noticing it, and that was what prevented the chart from updating.

How to split a specific column data into a nested json data structure using python?

I began working with a CSV file that has the following data:

Name       Details               Verfied
Alphabet   A-Apple,B-Ball        Yes
Place      I-Iran/Iraq,J-Japan   Yes
I want to create a json data structure which looks like this:
{
"name": "Place",
"Details": [
{
"detail": "I",
"info": [
"Iran",
"Iraq"
]
},
{
"detail": "J",
"info": "Japan"
}
]
}
Below is my code, but I'm unable to split the second column as required:
import pandas as pd

path = "/content/file.csv"
data = pd.read_csv(path)
df = pd.DataFrame(data)
out = df.to_json(orient="records")
print(out)
Use a custom function that splits the values in the Details column:

def f(x):
    out = []
    splitted = x.split(',')
    for x in splitted:
        a, b = x.split('-')
        c = b.split('/')
        if len(c) == 1:
            d = {'detail': a, 'info': c[0]}
        else:
            d = {'detail': a, 'info': c}
        out.append(d)
    return out

df['Details'] = df['Details'].apply(f)
print(df)
Name Details Verfied
0 Alphabet [{'detail': 'A', 'info': 'Apple'}, {'detail': ... Yes
1 Place [{'detail': 'I', 'info': ['Iran', 'Iraq']}, {'... Yes
out = df[['Name', 'Details']].to_json(orient="records")
print(out)
[{"Name":"Alphabet","Details":[{"detail":"A","info":"Apple"},
{"detail":"B","info":"Ball"}]},
{"Name":"Place","Details":[{"detail":"I","info":["Iran","Iraq"]},
{"detail":"J","info":"Japan"}]}]

Need python code to parse the JSON in specific format in Python to expand a json data in a column in a pandas dataframe [duplicate]

I am trying to use json_normalize to format the output of an API, but I keep getting a faulty, empty CSV file. I tried to change it to df2 = pd.json_normalize(response, record_path=['LIST']), but keep getting this error message:
TypeError: byte indices must be integers or slices, not str
Could you please guide me on what I'm doing wrong?
Thanks a lot!
import requests
import json
import pandas as pd

url = "https://*hidden*Results/"
payload = json.dumps({
    "id": 12345
})
headers = {
    'Authorization': 'Basic *hidden*',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

df1 = pd.DataFrame(response).iloc[:, :-2]
df2 = pd.json_normalize(response, record_path=None)
df = pd.concat([df1, df2], axis=1)
df.to_csv("test.csv", index=False)
You are passing the variable response in the call:

df2 = pd.json_normalize(response, record_path=None)

which is a requests.models.Response object, while json_normalize needs a dict. So you need to do something like pd.json_normalize(response.json(), record_path=['LIST']).
I tried it with this example and it works:
>>> import pandas as pd
>>> data = [
... {
... "state": "Florida",
... "shortname": "FL",
... "info": {"governor": "Rick Scott"},
... "counties": [
... {"name": "Dade", "population": 12345},
... {"name": "Broward", "population": 40000},
... {"name": "Palm Beach", "population": 60000},
... ],
... },
... {
... "state": "Ohio",
... "shortname": "OH",
... "info": {"governor": "John Kasich"},
... "counties": [
... {"name": "Summit", "population": 1234},
... {"name": "Cuyahoga", "population": 1337},
... ],
... },
... ]
>>> result = pd.json_normalize(data, ["counties"])
>>> result
name population
0 Dade 12345
1 Broward 40000
2 Palm Beach 60000
3 Summit 1234
4 Cuyahoga 1337
EDIT: I would try this:
import requests
import json
import pandas as pd

url = "https://*hidden*Results/"
payload = json.dumps({
    "id": 12345
})
headers = {
    'Authorization': 'Basic *hidden*',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
json_response = response.json()

df1 = pd.DataFrame(json_response).iloc[:, :-2]
df2 = pd.json_normalize(json_response, record_path=['LIST'])
df = pd.concat([df1, df2], axis=1)
df.to_csv("test.csv", index=False)
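One hedged addition on top of the edit above: requests provides raise_for_status(), and calling it before .json() makes HTTP failures fail fast instead of surfacing later as confusing parsing errors.

response = requests.request("POST", url, headers=headers, data=payload)
response.raise_for_status()        # raise immediately on 4xx/5xx responses
json_response = response.json()    # parse the body only after a successful response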

Dash Tabulator : "movableRowsConnectedTables" is not working

I’m trying to use the "movableRowsConnectedTables" built-in functionality as explained in the tabulator.js examples
It doesn’t seem to work as expected:
import dash
from dash import html
import dash_bootstrap_components as dbc
import dash_tabulator

columns = [
    {"title": "Name",
     "field": "name"}
]
options_from = {
    'movableRows': True,
    'movableRowsConnectedTables': "tabulator_to",
    'movableRowsReceiver': "add",
    'movableRowsSender': "delete",
    'height': 200,
    'placeholder': 'No more Rows'
}
options_to = {
    'movableRows': True,
    'height': 200,
    'placeholder': 'Drag Here'
}
data = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": "b"},
    {"id": 3, "name": "c"},
]

layout = html.Div([
    dbc.Row([
        dbc.Col([
            html.Header('DRAG FROM HERE'),
            dash_tabulator.DashTabulator(
                id='tabulator_from',
                columns=columns,
                options=options_from,
                data=data,
            ),
        ], width=6),
        dbc.Col([
            html.Header('DROP HERE'),
            dash_tabulator.DashTabulator(
                id='tabulator_to',
                columns=columns,
                options=options_to,
                data=[]
            ),
        ], width=6)
    ])
])

app = dash.Dash(external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = dbc.Container(layout, fluid=True)

if __name__ == '__main__':
    app.run_server(debug=True)
Is it also possible to get a callback when elements are dropped?
It would be great to have this functionality inside Dash!
I'm not familiar with dash_tabulator, but the table you're sending to also needs a 'movableRowsConnectedTables': "tabulator_from" option.
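A minimal sketch of that suggestion applied to the options above (mirroring the two-way setup from the tabulator.js examples; untested against dash_tabulator itself):

options_to = {
    'movableRows': True,
    'movableRowsConnectedTables': "tabulator_from",  # point back at the sending table
    'movableRowsReceiver': "add",
    'movableRowsSender': "delete",
    'height': 200,
    'placeholder': 'Drag Here'
}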

How to query JSON data according to JSON array's size with Spark SQL?

I have a JSON like this:
{
    "uin": 10000,
    "role": [
        {"role_id": 1, "role_level": 10},
        {"role_id": 2, "role_level": 1}
    ]
}
{
    "uin": 10001,
    "role": [
        {"role_id": 1, "role_level": 1},
        {"role_id": 2, "role_level": 1},
        {"role_id": 3, "role_level": 1},
        {"role_id": 4, "role_level": 20}
    ]
}
I want to query a uin which has more than two roles. How can I do it using Spark SQL?
You can use DataFrame and UserDefinedFunction to achieve what you want, as shown below. I've tried it in spark-shell.
val jsonRdd = sc.parallelize(Seq("""{"uin":10000,"role":[{"role_id":1, "role_level": 10},{"role_id":2, "role_level": 1}]}"""))
val df = sqlContext.jsonRDD(jsonRdd)
val predict = udf((array: Seq[Any]) => array.length > 2)
val df1 = df.where(predict(df("role")))
df1.show
Here is a simplified Python version:
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

r1 = ssc.jsonFile("role.json").select("uin", "role.role_id")
r1.show()
slen = udf(lambda s: len(s), IntegerType())
r2 = r1.select(r1.uin, r1.role_id, slen(r1.role_id).alias("slen"))
res = r2.filter(r2.slen > 2)  # uins with more than two roles
res.show()
Maybe size is what you need:
size(expr) - Returns the size of an array or a map.
In your case, "role" size must be bigger than 2.
If you have this JSON:
json = \
[
    {
        "uin": 10000,
        "role": [
            {"role_id": 1, "role_level": 10},
            {"role_id": 2, "role_level": 1}
        ]
    },
    {
        "uin": 10001,
        "role": [
            {"role_id": 1, "role_level": 1},
            {"role_id": 2, "role_level": 1},
            {"role_id": 3, "role_level": 1},
            {"role_id": 4, "role_level": 20}
        ]
    }
]
you can use this:
from pyspark.sql import functions as F

rdd = spark.sparkContext.parallelize([json])
df = spark.read.json(rdd)
df = df.filter(F.size('role') > 2)
df.show()
#+--------------------+-----+
#| role| uin|
#+--------------------+-----+
#|[{1, 1}, {2, 1}, ...|10001|
#+--------------------+-----+
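Since the question asks for Spark SQL specifically, here is a minimal sketch of the same filter written as actual SQL, assuming the DataFrame above is registered under a made-up view name, users:

df.createOrReplaceTempView("users")
spark.sql("SELECT uin FROM users WHERE size(role) > 2").show()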