Getting Dash longcallback details - plotly-dash

I am just beginning with Dash 2.0 from Plotly, mainly to take advantage of long callbacks. I am trying to get the callback's id without success (i.e. the id I see in the worker when executing the long call). I am also struggling to get its state, ready(), successful(), etc.
What I've got so far:
@app.long_callback(
    output=Output("paragraph_id", "children"),
    inputs=Input("button_id", "n_clicks"),
    running=[
        (Output("button_id", "disabled"), True, False),
        (Output("cancel_button_id", "disabled"), False, True),
        (
            Output("paragraph_id", "style"),
            {"visibility": "hidden"},
            {"visibility": "visible"},
        ),
        (
            Output("progress_bar", "style"),
            {"visibility": "visible"},
            {"visibility": "hidden"},
        ),
    ],
    cancel=[Input("cancel_button_id", "n_clicks")],
    progress=[Output("progress_bar", "value"), Output("progress_bar", "max")],
    prevent_initial_call=True,
)
def update_progress(set_progress, n_clicks):
    currentProgress = check_progress.delay()
    i = 0
    total = 15
    while not currentProgress.ready():
        time.sleep(1)
        print("currentProgress.STATE")
        print(currentProgress.state)
        set_progress((str(i + 1), str(total)))
        i += 1
    return [f"Clicked {n_clicks} times" + " " + currentProgress.id]


@celery_app.task(bind=True)
def check_progress(self):
    time.sleep(15)
    return
I can get these when executing the Celery task check_progress(). How do I get the id of the update_progress() long callback itself?
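As a side note, a minimal sketch of the Celery side (standard Celery behaviour, not Dash-specific): a task declared with bind=True can read its own id from self.request.id, which is the id you see in the worker when the task executes. Whether Dash exposes the long callback's own job id inside update_progress() is the part that remains open.

# Sketch: a bound Celery task can report its own id and state metadata.
@celery_app.task(bind=True)
def check_progress(self):
    print(self.request.id)    # the task id, as seen in the worker log
    print(self.request.task)  # the registered task name
    time.sleep(15)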

Related

Dash: Get the post request query string parameters inside callback

I have a multi-page Dash application.
app.py
pages/
    page_one.py
    page_two.py
In app.py, I have:
@callback(
    Output(component_id="page-content", component_property="children"),
    Input(component_id="url", component_property="pathname"),
)
def display_page(pathname):
    if pathname == "/one":
        return page_one.layout
    elif pathname == "/two":
        return page_two.layout
In page_one.py, I have a few callbacks. For example:
Callback-1
@callback(
    output=Output("paragraph_id", "children"),
    inputs=Input("button_id", "n_clicks"),
    background=True,
    running=[
        (Output("button_id", "disabled"), True, False),
        (Output("cancel_button_id", "disabled"), False, True),
        (
            Output("paragraph_id", "style"),
            {"visibility": "hidden"},
            {"visibility": "visible"},
        ),
        (
            Output("progress_bar", "style"),
            {"visibility": "visible"},
            {"visibility": "hidden"},
        ),
    ],
    cancel=Input("cancel_button_id", "n_clicks"),
    progress=[Output("progress_bar", "value"), Output("progress_bar", "max")],
    prevent_initial_call=True,
)
def update_progress_A(set_progress, n_clicks):
    # Here I want to access the post request query string parameters
    ...  # some code for A
Callback-2
@callback(
    output=Output("paragraph_id", "children"),
    inputs=Input("button_id", "n_clicks"),
    background=True,
    running=[
        (Output("button_id", "disabled"), True, False),
        (Output("cancel_button_id", "disabled"), False, True),
        (
            Output("paragraph_id", "style"),
            {"visibility": "hidden"},
            {"visibility": "visible"},
        ),
        (
            Output("progress_bar", "style"),
            {"visibility": "visible"},
            {"visibility": "hidden"},
        ),
    ],
    cancel=Input("cancel_button_id", "n_clicks"),
    progress=[Output("progress_bar", "value"), Output("progress_bar", "max")],
    prevent_initial_call=True,
)
def update_progress_B(set_progress, n_clicks):
    # Here I want to access the post request query string parameters
    ...  # some code for B
Is it possible to get the post request query string parameters inside the callback?
Please suggest.
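For what it's worth, a hedged sketch, and only valid for a regular (non-background) callback handled in the web process: Dash serves callbacks through Flask, so the query string of the POST to /_dash-update-component is visible via flask.request. A background callback body runs in the Celery worker, outside any Flask request context, so this does not apply there.

# Hedged sketch: read the triggering request's query string in a regular callback.
from flask import request

@callback(
    Output("paragraph_id", "children"),
    Input("button_id", "n_clicks"),
    prevent_initial_call=True,
)
def show_query_string(n_clicks):
    # request.args is a dict-like view of the query string parameters
    return str(dict(request.args))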
Update 1:
I have Docker services as below:
redis, postgres, web, celery, nginx
I have implemented the demo code from the Dash website:
@callback(
    output=Output("paragraph_id", "children"),
    inputs=Input("button_id", "n_clicks"),
    background=True,
    running=[
        (Output("button_id", "disabled"), True, False),
        (Output("cancel_button_id", "disabled"), False, True),
        (
            Output("paragraph_id", "style"),
            {"visibility": "hidden"},
            {"visibility": "visible"},
        ),
        (
            Output("progress_bar", "style"),
            {"visibility": "visible"},
            {"visibility": "hidden"},
        ),
    ],
    cancel=Input("cancel_button_id", "n_clicks"),
    progress=[Output("progress_bar", "value"), Output("progress_bar", "max")],
    prevent_initial_call=True,
)
def update_progress(set_progress, n_clicks):
    total = 5
    # print("INPUTS: ", callback_context.inputs)
    # print("INPUTS: ", callback_context.inputs_list)
    for i in range(total + 1):
        set_progress((str(i), str(total)))
        time.sleep(1)
    return f"Clicked {n_clicks} times"
When the services are up, I observed the following in the terminal.
For celery:
celery | [tasks]
celery | . long_callback_d0a7a49bfe3d30bfdd7a43d2a22db75123ff2fa9
After button clicking:
celery | [2022-12-27 20:18:43,090: INFO/MainProcess] Task long_callback_d0a7a49bfe3d30bfdd7a43d2a22db75123ff2fa9[b663bb71-6962-44a3-ae2c-00eb7d8869bf] received
proxy | 172.18.0.1 - - [27/Dec/2022:20:18:44 +0000] "POST /_dash-update-component?cacheKey=6067e37a0e1f0c07b6f9ff0d7ef0b208ae2783ba&job=b663bb71-6962-44a3-ae2c-00eb7d8869bf
celery | [2022-12-27 20:18:49,119: INFO/ForkPoolWorker-2] Task long_callback_d0a7a49bfe3d30bfdd7a43d2a22db75123ff2fa9[b663bb71-6962-44a3-ae2c-00eb7d8869bf] succeeded in 6.027637779999964s: None
This is working fine.
I am trying to get/access the job parameter inside update_progress() so that I can pass it to Celery's result.AsyncResult(job) inside update_progress() to get the status/state of the running task, which I want to use to update the progress bar.
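A minimal sketch of that Celery call, assuming the job id is available from somewhere (it is the same value that appears in the worker log and in the job= query parameter above); how to obtain it from inside the background callback itself is the open question:

# Sketch: query a Celery task's state by id via the result backend.
from celery.result import AsyncResult

def job_status(job_id):
    res = AsyncResult(job_id, app=celery_app)  # celery_app is your Celery instance
    return {"state": res.state, "ready": res.ready()}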
Use cases:
Upload dataset: I'm using dcc.Upload() and the file is uploaded successfully, e.g. the user uploads data.csv.
Create train, test, dev dataset files: I'm using dbc.Button(); the user clicks it and all 3 files are created successfully [I used sklearn], e.g. we get train.csv, test.csv and dev.csv.
Model training: I'm using dbc.Button(); the user clicks it and the model is generated and stored successfully, e.g. model.joblib.

How to configure CloudFunctionInvokeFunctionOperator to call cloud function on composer(airflow)

I ran into the following error when invoking a Cloud Function using CloudFunctionInvokeFunctionOperator:
line 915, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: <HttpError 404 when requesting https://cloudfunctions.googleapis.com/v1/projects/pongthorn/locations/asia-southeast1/functions/crypto-trading-to-bq:call?alt=json returned "Function crypto-trading-to-bq in region asia-southeast1 in project pongthorn does not exist". Details: "Function crypto-trading-to-bq in region asia-southeast1 in project pongthorn does not exist">
I assume I made a mistake with the function id. What is the function id? The figure below shows my Cloud Function, and the function name is crypto-trading-to-bq.
Are the function id and the function name the same?
I set three variables in a JSON file and uploaded it to Airflow with the following values:
{
    "project_id": "pongthorn",
    "region_name": "asia-southeast1",
    "function_name": "crypto-trading-to-bq"
}
This is my code:
import datetime

import airflow
from airflow.providers.google.cloud.operators.functions import (
    CloudFunctionDeleteFunctionOperator,
    CloudFunctionDeployFunctionOperator,
    CloudFunctionInvokeFunctionOperator,
)

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

XProjdctID = airflow.models.Variable.get('project_id')
XRegion = airflow.models.Variable.get('region_name')
XFunction = airflow.models.Variable.get('function_name')

default_args = {
    'owner': 'Binance Trading Transaction',
    'depends_on_past': False,
    'email': [''],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': datetime.timedelta(minutes=5),
    'start_date': YESTERDAY,
}

with airflow.DAG(
        'bn_trading_flow',
        catchup=False,
        default_args=default_args,
        schedule_interval=datetime.timedelta(days=1)) as dag:

    call_crypto_trading_to_bq = CloudFunctionInvokeFunctionOperator(
        task_id="load_crypto_trading_to_bq",
        location=XRegion,
        function_id=XFunction,
        input_data={},
    )
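One thing worth checking, offered as a hedged sketch rather than a confirmed fix: function_id should be the short function name (which matches what you have), and the operator also accepts a project_id argument; passing it explicitly rules out the call being built against a different default project taken from the GCP connection. The error URL also shows the v1 Cloud Functions API being used, so it is worth confirming the function really exists as a 1st gen function in that exact project and region.

    # Hedged sketch: pass the project explicitly so the operator resolves
    # projects/<project>/locations/<region>/functions/<name> from your variables.
    call_crypto_trading_to_bq = CloudFunctionInvokeFunctionOperator(
        task_id="load_crypto_trading_to_bq",
        project_id=XProjdctID,   # from the 'project_id' Airflow Variable
        location=XRegion,        # must match the function's deployed region exactly
        function_id=XFunction,   # the short name, e.g. "crypto-trading-to-bq"
        input_data={},
    )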

Categorical data points are not displayed on scatter plot when using multi select drop-down

Suppose we have the following dataframe pulled from SQL called df:
ProdHouse Date_Year Date_Month
Software6 2001 Jan
Software6 2020 Feb
Software1 2004 Mar
Software4 2004 Apr
Software5 2004 May
Software3 2009 Dec
Software5 1995 Dec
Software3 1995 Oct
The objective is to display the total number of products per month. The year is selected using the dropdown. It appears that when the x-axis is categorical (i.e. month), the data points are not displayed. However, if I substitute it with an integer, points are displayed.
def serve_layout():
    session_id = str(uuid.uuid4())
    return html.Div([
        html.Div(session_id, id='session-id', style={'display': 'none'}),
        html.Label('Year'),
        dcc.Dropdown(
            id='year-dropdown',
            options=[{'label': year, 'value': year} for year in df['Date_Year'].unique()],
            value=[2020],  # [df['Date_Year'].unique()],
            multi=True,
        ),
        dcc.Graph(id='graph-with-dropdown')
    ], style={'width': '33%', 'display': 'inline-block'})
app.layout = serve_layout
@app.callback(
    dash.dependencies.Output('graph-with-dropdown', 'figure'),
    [dash.dependencies.Input('year-dropdown', 'value')])  # Add the marks as a State
def update_figure(selected_year):
    print('selected_year: ', selected_year)
    filtered_df = df[df.Date_Year.isin(selected_year)]
    # filtered_df = df[df.Date_Year == selected_year]
    df_grouped = filtered_df.groupby(['ProdHouse', 'Date_Month']).size().rename('Total_Active_Products').reset_index()
    traces = []
    for i in filtered_df.ProdHouse.unique():
        df_by_ProdHouse = df_grouped[df_grouped['ProdHouse'] == i]
        traces.append(go.Scatter(
            x=df_by_ProdHouse['Date_Month'],  # df_by_ProdHouse['Total_Active_Products'],
            y=df_by_ProdHouse['Total_Active_Products'],
            # text=df_by_ProdHouse['brand'],
            mode='markers',
            opacity=0.7,
            marker={
                'size': 15,
                'line': {'width': 0.5, 'color': 'white'}
            },
            name=i
        ))
    return {
        'data': traces,
        'layout': dict(
            xaxis={'type': 'linear', 'title': 'Active Products Per Month'},
            yaxis={'title': 'Total Active Products'},
            margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
            legend={'x': 0, 'y': 1},
            hovermode='closest',
            transition={'duration': 500},
        )
    }
How would one modify the above code so that the data can be displayed on the plot?
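One likely culprit, offered as a guess rather than a verified fix: the layout above forces xaxis={'type': 'linear', ...}, and a linear (numeric) axis has nowhere to place string month names. A minimal sketch of the change, keeping everything else as is:

# Hedged sketch: let Plotly treat the month names as categories instead of
# forcing a linear axis; categoryarray pins the months to calendar order.
xaxis = {
    'type': 'category',
    'title': 'Active Products Per Month',
    'categoryorder': 'array',
    'categoryarray': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                      'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'],
}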
This answers the first part of the question, i.e. the points not being displayed. I managed to get the categorical data to display by changing the scatter plot to a bar chart. Since the graph type was changed, I removed the mode and type parameters.
@app.callback(
    dash.dependencies.Output('graph-with-dropdown', 'figure'),
    [dash.dependencies.Input('year-dropdown', 'value')])  # Add the marks as a State
def update_figure(selected_year):
    print('selected_year: ', selected_year)
    filtered_df = df[df.Date_Year.isin(selected_year)]
    df_grouped = filtered_df.groupby(['ProdHouse', 'Date_Month']).size().rename('Total_Active_Products').reset_index()
    traces = []
    for i in filtered_df.ProdHouse.unique():
        df_by_ProdHouse = df_grouped[df_grouped['ProdHouse'] == i]
        traces.append(go.Bar(
            x=df_by_ProdHouse['Date_Month'],
            y=df_by_ProdHouse['Total_Active_Products'],
            name=i
        ))
    return {
        'data': traces,
        'layout': dict(
            xaxis={'title': 'Active Products Per Month'},
            yaxis={'title': 'Total Active Products'},
            margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
            legend={'x': 0, 'y': 1},
            hovermode='closest',
            transition={'duration': 500},
        )
    }
Alternatively, if you still want to use a scatter plot, convert df['Date_Month'] and df['Date_Year'] from category to proper dates, e.g. May 2020 becomes 2020-05-01.
This can be achieved using the following example:
import pandas as pd

df = pd.DataFrame({
    'ProdHouse': ['software 1', 'software 2', 'software 3', 'software 4', 'software 3'],
    'Date_Year': [2018, 2018, 2018, 2018, 2018],
    'Date_Month': ['January', 'February', 'March', 'April', 'May'],
    'Total_Active_Products': [1, 2, 7, 8, 6],
})
date_1 = '{}-{}'.format(df['Date_Month'].iloc[0], df['Date_Year'].iloc[0])
date_2 = '{}-{}'.format('June', df['Date_Year'].iloc[4])
df['dates'] = pd.date_range(date_1, date_2, freq='M')
print(df)
Since you are now using objects, replace isin with the following:
filtered_df = df[(pd.to_datetime(df.dates).dt.year>=selected_year_min)& (pd.to_datetime(df.dates).dt.year<=selected_year_max)]
Please adjust the above code accordingly. It is designed to get the min and max year from the dropdown.
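selected_year_min and selected_year_max are assumed to be derived from the multi-select dropdown value; a tiny sketch with a guard for an empty selection:

# Hedged sketch: derive the year bounds from the multi-select dropdown value.
def year_bounds(selected_year):
    if not selected_year:  # nothing selected yet
        return None, None
    return min(selected_year), max(selected_year)

selected_year_min, selected_year_max = year_bounds([2004, 2009, 2020])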
Lastly, change the x input value in the scatter plot as shown below:
    traces.append(go.Scatter(
        x=df_by_ProdHouse['dates'],
        y=df_by_ProdHouse['Total_Active_Products'],
        mode='lines+markers',
        line={
            'color': '#CD5C5C',
            'width': 2},
        marker={
            'color': '#CD5C5C',
            'size': 10,
            'symbol': "diamond-open"
        },
        # marker_line_width=1.5, opacity=0.6,
    ))
    return {
        'data': traces,
        'layout': dict(
            xaxis={
                'title': 'Date',
                'showticklabels': True,
                'linecolor': 'rgb(204, 204, 204)',
                'linewidth': 2,
                'ticks': 'outside'
            },
            yaxis={'title': 'Total Active Products'},
            margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
            legend={'x': 0, 'y': 1},
            # marker=dict(color='#CD5C5C', size=1, symbol="diamond-open"),
            hovermode='closest',
            transition={'duration': 500},
            title={
                'text': "Softwares",
                'y': 0.9,
                'x': 0.5,
                'xanchor': 'center',
                'yanchor': 'top'},
            font=dict(
                color="#7f7f7f"
            )
        )
    }

How to increase my container width to accommodate more items

I am building a dashboard using Plotly Dash with bootstrap.min.css. I would like to increase the width of my container so that I can fit two graphs in a single row.
My second graph (a line graph) is wider, hence I am unable to align them in a single row.
I have attached a snapshot below.
DASH UI CODE :
# the style arguments for the sidebar. We use position:fixed and a fixed width
SIDEBAR_STYLE = {
    "top": 0,
    "left": 0,
    "bottom": 0,
    "width": "16rem",
    "padding": "2rem 1rem",
    "background-color": "#f8f9fa",
    "position": "fixed",
    "color": "#000",
}

# the styles for the main content position it to the right of the sidebar and
# add some padding.
CONTENT_STYLE = {
    "margin-left": "18rem",
    "margin-right": "2rem",
    "padding": "2rem 1rem",
}

sidebar = html.Div(
    [
        html.H2("Plate", className="display-4"),
        html.Hr(),
        html.P("A simple dashboard", className="lead"),
        dbc.Nav(
            [
                dbc.NavLink("Dashboard", href="/dashboard", id="page-1-link"),
                dbc.NavLink("Analytics", href="/page-2", id="page-2-link"),
                dbc.NavLink("Page 3", href="/page-3", id="page-3-link"),
                html.Hr(),
                dbc.NavLink("Logout", href="/logout", id="page-4-link"),
            ],
            vertical=True,
            pills=True,
        ),
    ],
    style=SIDEBAR_STYLE,
)

content = html.Div(id='page-content', className='container', style=CONTENT_STYLE)
app.layout = html.Div([dcc.Location(id="url"), sidebar, content])
app.config.suppress_callback_exceptions = True

# this callback uses the current pathname to set the active state of the
# corresponding nav link to true, allowing users to see which page they are on
@app.callback(
    [Output(f"page-{i}-link", "active") for i in range(1, 4)],
    [Input("url", "pathname")],
)
def toggle_active_links(pathname):
    if pathname == "/" or pathname == "/dashboard":
        # Treat page 1 as the homepage / index
        return True, False, False
    return [pathname == f"/page-{i}" for i in range(1, 4)]

@app.callback(Output("page-content", "children"), [Input("url", "pathname")])
def render_page_content(pathname):
    if pathname in ["/", "/page-1", "/dashboard"]:
        dashBoard = html.Div([
            html.Div([
                dcc.DatePickerRange(
                    id='my-date-picker-range',
                    min_date_allowed=dt(minDate[0], minDate[1], minDate[2]),
                    max_date_allowed=dt(maxDate[0], maxDate[1], maxDate[2]),
                    initial_visible_month=dt(maxDate[0], maxDate[1], maxDate[2]),
                    start_date=dt(minDate[0], minDate[1], minDate[2]).date(),
                    end_date=dt(maxDate[0], maxDate[1], maxDate[2]).date()
                ),
                html.Button(id="date-button", children="Analyze", n_clicks=0,
                            className='btn btn-outline-success')
            ], className='row'),
            html.Div([
                html.Br(),
                html.Div([
                    html.H4(['Category Overview'], className='display-4'),
                    html.Br(),
                    html.Br(),
                ], className='row'),
                html.Div([
                    html.Div([
                        dcc.Graph(id='categoryPerformance',
                                  figure=dict(data=ge.returnCategoryOverviewBarGraph(df)[0],
                                              layout=ge.returnCategoryOverviewBarGraph(df)[1]))
                    ], className='col'),
                    html.Div([dcc.Graph(id='categoryPerformanceTrend')], className='col')
                ], className='row'),
                html.Hr(),
                html.Div([
                    html.Div([
                        dcc.Dropdown(id='category-dd', options=category_items, value='Food')
                    ], className='col-6 col-md-4'),
                    html.Div([
                        dcc.Slider(id='headCount', min=5, max=20, step=5, value=5,
                                   marks={i: 'Count {}'.format(i) for i in range(5, 21, 5)})
                    ], className='col-12 col-sm-6 col-md-8')
                ], className='row'),
                html.Div([
                    html.Br(),
                    html.Br(),
                    html.Div([dcc.Graph(id='idvlCategoryPerformanceBest')], className='col'),
                    html.Div([dcc.Graph(id='idvlCategoryPerformanceLeast')], className='col')
                ], className='row')
            ])
        ], className='container')
        return dashBoard
I have zero knowledge of frontend/CSS; any help is much appreciated. Thanks!
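A hedged suggestion rather than a definitive fix: Bootstrap's .container class has a fixed max-width, which is what caps the content area here; swapping it for .container-fluid (or dash-bootstrap-components' dbc.Container(fluid=True)) lets the content use the full width to the right of the sidebar:

# Hedged sketch: use the full-width Bootstrap container for the page content.
content = html.Div(id='page-content', className='container-fluid', style=CONTENT_STYLE)

# or, equivalently, with dash-bootstrap-components:
# content = dbc.Container(id='page-content', fluid=True, style=CONTENT_STYLE)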

Get line number while parsing a JSON file

I am using the Google Gson library to parse JSON files, which are IPython notebook files. Is it possible to collect the line number where a JSON object or array starts or ends?
JsonReader reader = new JsonReader(new FileReader(notebookFile));
Gson gson = new GsonBuilder().create();
// Read file in stream mode
reader.beginObject();
while (reader.hasNext()) {
    String name = reader.nextName();
    if (name.equals("cells")) {
        // can we determine line number of name
        reader.beginArray();
        .....
    }
    ....
}
Part of a notebook:
"metadata": {
"name": "5-Scatterplots"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "code",
"collapsed": false,
"input": [
"import pandas as pd\n",
"store = pd.HDFStore('/Volumes/FreshBooks/data/store.h5')\n",
"may07 = store['may07']\n",
"may08 = store['may08']"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 1
},
You can accomplish this in Python using https://pypi.org/project/json-cfg/.
Here's a recursive strategy for printing the line number for each key and each value.
import jsoncfg
from jsoncfg.config_classes import ConfigJSONObject, ConfigJSONArray, ConfigJSONScalar

def recursivePrint(element):
    if isinstance(element, ConfigJSONObject):
        # Dictionary
        for key, value in element:
            print(f"key \"{key}\" at line {jsoncfg.node_location(element[key]).line}")
            recursivePrint(element[key])
    elif isinstance(element, ConfigJSONArray):
        # Array
        for item in element:
            recursivePrint(item)
    elif isinstance(element, ConfigJSONScalar):
        value = element()
        if isinstance(value, str):
            value = value.strip()
        print(f"value \"{value}\" at line {jsoncfg.node_location(element).line}")

parsed = jsoncfg.load_config("example.json")
recursivePrint(parsed)
Screenshot of results
Full disclosure: I'm the maintainer of the package below.
There is now a new Python package that solves this use case: https://github.com/open-alchemy/json-source-map
Installation: pip install json_source_map
For example, in your case:
from json_source_map import calculate

source = '''
{
  "metadata": {
    "name": "5-Scatterplots"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
  "worksheets": [
    {
      "cells": [
        {
          "cell_type": "code",
          "collapsed": false,
          "input": [
            "import pandas as pd\\n",
            "store = pd.HDFStore('/Volumes/FreshBooks/data/store.h5')\\n",
            "may07 = store['may07']\\n",
            "may08 = store['may08']"
          ],
          "language": "python",
          "metadata": {},
          "outputs": [],
          "prompt_number": 1
        }
      ]
    }
  ]
}
'''

print(calculate(source))
This prints:
{
    "": Entry(
        value_start=Location(line=1, column=0, position=1),
        value_end=Location(line=27, column=1, position=568),
        key_start=None,
        key_end=None,
    ),
    "/metadata": Entry(
        value_start=Location(line=2, column=14, position=17),
        value_end=Location(line=4, column=3, position=51),
        key_start=Location(line=2, column=2, position=5),
        key_end=Location(line=2, column=12, position=15),
    ),
    "/metadata/name": Entry(
        value_start=Location(line=3, column=12, position=31),
        value_end=Location(line=3, column=28, position=47),
        key_start=Location(line=3, column=4, position=23),
        key_end=Location(line=3, column=10, position=29),
    ),
    "/nbformat": Entry(
        value_start=Location(line=5, column=14, position=67),
        value_end=Location(line=5, column=15, position=68),
        key_start=Location(line=5, column=2, position=55),
        key_end=Location(line=5, column=12, position=65),
    ),
    "/nbformat_minor": Entry(
        value_start=Location(line=6, column=20, position=90),
        value_end=Location(line=6, column=21, position=91),
        key_start=Location(line=6, column=2, position=72),
        key_end=Location(line=6, column=18, position=88),
    ),
    "/worksheets": Entry(
        value_start=Location(line=7, column=16, position=109),
        value_end=Location(line=26, column=3, position=566),
        key_start=Location(line=7, column=2, position=95),
        key_end=Location(line=7, column=14, position=107),
    ),
    "/worksheets/0": Entry(
        value_start=Location(line=8, column=4, position=115),
        value_end=Location(line=25, column=5, position=562),
        key_start=None,
        key_end=None,
    ),
    "/worksheets/0/cells": Entry(
        value_start=Location(line=9, column=15, position=132),
        value_end=Location(line=24, column=7, position=556),
        key_start=Location(line=9, column=6, position=123),
        key_end=Location(line=9, column=13, position=130),
    ),
    "/worksheets/0/cells/0": Entry(
        value_start=Location(line=10, column=8, position=142),
        value_end=Location(line=23, column=9, position=548),
        key_start=None,
        key_end=None,
    ),
    "/worksheets/0/cells/0/cell_type": Entry(
        value_start=Location(line=11, column=23, position=167),
        value_end=Location(line=11, column=29, position=173),
        key_start=Location(line=11, column=10, position=154),
        key_end=Location(line=11, column=21, position=165),
    ),
    "/worksheets/0/cells/0/collapsed": Entry(
        value_start=Location(line=12, column=23, position=198),
        value_end=Location(line=12, column=28, position=203),
        key_start=Location(line=12, column=10, position=185),
        key_end=Location(line=12, column=21, position=196),
    ),
    "/worksheets/0/cells/0/input": Entry(
        value_start=Location(line=13, column=19, position=224),
        value_end=Location(line=18, column=11, position=425),
        key_start=Location(line=13, column=10, position=215),
        key_end=Location(line=13, column=17, position=222),
    ),
    "/worksheets/0/cells/0/input/0": Entry(
        value_start=Location(line=14, column=12, position=238),
        value_end=Location(line=14, column=35, position=261),
        key_start=None,
        key_end=None,
    ),
    "/worksheets/0/cells/0/input/1": Entry(
        value_start=Location(line=15, column=12, position=275),
        value_end=Location(line=15, column=72, position=335),
        key_start=None,
        key_end=None,
    ),
    "/worksheets/0/cells/0/input/2": Entry(
        value_start=Location(line=16, column=12, position=349),
        value_end=Location(line=16, column=38, position=375),
        key_start=None,
        key_end=None,
    ),
    "/worksheets/0/cells/0/input/3": Entry(
        value_start=Location(line=17, column=12, position=389),
        value_end=Location(line=17, column=36, position=413),
        key_start=None,
        key_end=None,
    ),
    "/worksheets/0/cells/0/language": Entry(
        value_start=Location(line=19, column=22, position=449),
        value_end=Location(line=19, column=30, position=457),
        key_start=Location(line=19, column=10, position=437),
        key_end=Location(line=19, column=20, position=447),
    ),
    "/worksheets/0/cells/0/metadata": Entry(
        value_start=Location(line=20, column=22, position=481),
        value_end=Location(line=20, column=24, position=483),
        key_start=Location(line=20, column=10, position=469),
        key_end=Location(line=20, column=20, position=479),
    ),
    "/worksheets/0/cells/0/outputs": Entry(
        value_start=Location(line=21, column=21, position=506),
        value_end=Location(line=21, column=23, position=508),
        key_start=Location(line=21, column=10, position=495),
        key_end=Location(line=21, column=19, position=504),
    ),
    "/worksheets/0/cells/0/prompt_number": Entry(
        value_start=Location(line=22, column=27, position=537),
        value_end=Location(line=22, column=28, position=538),
        key_start=Location(line=22, column=10, position=520),
        key_end=Location(line=22, column=25, position=535),
    ),
}
This tells you the line, column and character position for the start and end location for each value in the JSON document.
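Based on the structure printed above, a small usage sketch for pulling out the location of one specific key (the Entry/Location attribute names are taken from that printout):

# Sketch: look up the location of the "name" key under "metadata".
from json_source_map import calculate

entries = calculate(source)
name_entry = entries["/metadata/name"]
print(name_entry.key_start.line, name_entry.key_start.column)      # where the key starts
print(name_entry.value_start.line, name_entry.value_start.column)  # where its value starts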