I'm trying to add an editable table to my Dash app and use the edited data in a later step.
First, I have a callback that is triggered by a button and receives a file; the function processes the file and outputs the editable table plus some JSON data. The table shows on the screen.
Then I want the user to make the necessary changes in the table and click another button.
That click triggers a second callback, which should receive the edited data from the table plus the JSON data from the previous callback, and output another piece of JSON data.
However, when I test the function, I get "error loading dependencies".
I'm using a very old version of Dash (0.43) and at the moment I can't update (many functions were deprecated and I can't change them all right now).
@dash_application.callback(
    [Output('journey-data-store-raw', 'data'),
     Output('new-lable-table', 'children'),  # this goes to a div in the layout file
     Output('source-table-updated', 'data')],
    [Input('parser-process', 'n_clicks')],
    [State('upload-data', 'contents'),
     State('upload-data', 'filename'),
     State('decimal-selection', 'value')]
)
def source_data(_, content, name, decimal_selection):
    """Show the sources to allow the user to edit them"""
    if name:
        clean_data = get_clean_data(content, name, decimal_selection)
        clean_data.to_csv('clean_data_test.csv')
        return df_to_json(clean_data), dash_table.DataTable(**get_sources(clean_data)), json.dumps({'updated': True})
    else:
        print('something went wrong')
        raise dash.exceptions.PreventUpdate
@dash_application.callback(
    [Output('journey-data-store', 'data'),
     Output('color-store', 'data')],
    [Input('run-analysis', 'n_clicks')],
    [State('sources-table', 'data'),
     State('journey-data-store-raw', 'data')]
)
def store_data(_, new_table, raw_data):
    """Stores the data file and colors in a Store object"""
    i = 0
    for row in new_table:
        if row['new source labels'] != '':
            i = 1
            break
    if i > 0:
        # call function to "parse file"
        # colors = get_colors(new_data)
        # return df_to_json(clean_data), json.dumps(colors)
        # the return below is only a test; I'll develop the function later,
        # I just want to make the callback work
        return raw_data, json.dumps(get_colors(df_from_json(raw_data)))
    else:
        return raw_data, json.dumps(get_colors(df_from_json(raw_data)))
I tried excluding the button and the sources-table from the callback, so it would trigger when the first callback finishes (when journey-data-store-raw becomes available), but that is not happening either.
I also tried running in a private window.
I have a custom model for a QTableView displaying data that I want to filter on a column.
The QTableView function updateAccountTable is refreshed periodically from a timer to show real-time data.
I've added a QLineEdit to capture my filter value, and I apply my custom QSortFilterProxyModel with it.
I can see the data filtered in the QTableView, until the next refresh, where the list is unfiltered again.
This obviously comes from the textChanged signal of the QLineEdit, which is not persistent, but I do not see how to make my QSortFilterProxyModel persist after the QTableView refresh.
Any idea how to do that?
Cheers
Stephane
Part of my code is :
# AccountTableView sorting override function
class mysortingproxy(QSortFilterProxyModel):
    def __init__(self, parent=None):
        super(mysortingproxy, self).__init__(parent)

    def lessThan(self, left: QModelIndex, right: QModelIndex) -> bool:
        leftData = self.sourceModel().data(left, Qt.UserRole)
        rightData = self.sourceModel().data(right, Qt.UserRole)
        return leftData < rightData


class MainUi(QMainWindow):
    # snip...

    def updateAccountTable(self):
        # Account table
        self.accountTableModel = AccountTableModel(self.data, self.scalping)
        proxyModel = mysortingproxy()  # if not sorting override: proxyModel = QSortFilterProxyModel()
        proxyModel.setFilterKeyColumn(0)  # first column
        proxyModel.setSourceModel(self.accountTableModel)
        self.tableView_Account.setModel(proxyModel)
        self.tableView_Account.setSortingEnabled(True)
        self.tableView_Account.verticalHeader().setVisible(False)
        # filter proxy model
        self.lineEdit_Find.textChanged.connect(proxyModel.setFilterRegExp)
Found it!
This actually only required reading the filter field each time before displaying the data, then applying the filter again.
The code became:
def updateAccountTable(self):
    # Account table
    self.accountTableModel = AccountTableModel(self.data, self.scalping)
    proxyModel = mysortingproxy()  # if not sorting override: proxyModel = QSortFilterProxyModel()
    proxyModel.setFilterKeyColumn(0)  # first column
    proxyModel.setSourceModel(self.accountTableModel)
    self.tableView_Account.setModel(proxyModel)
    self.tableView_Account.setSortingEnabled(True)
    self.tableView_Account.verticalHeader().setVisible(False)
    # filter proxy model
    self.lineEdit_Find.textChanged.connect(proxyModel.setFilterRegExp)
    self.crypto_find = self.lineEdit_Find.text()
    proxyModel.setFilterRegExp(self.crypto_find.upper())
I'm having problems getting my code to return one of the option strings correctly.
If the user inputs one of the string options correctly the first time through, the return value comes back perfectly fine through the while and if/elif statements. No problem.
However, if the user does NOT input the data correctly the first time, I try to catch that with my final else statement and start the function again. But the second/third/etc. time through, even if the user enters a valid selection, the returned value is None.
So, my user validation is missing something. Any thoughts?
# Global variable initialization
mainSelection = str(None)

# Santa's Christmas Card List Main Menu function
def XmasMainMenu(mainSelection):
    print('')
    print('How may the elves be of service today?')
    print('')
    print('"PRINT" - Print out the Christmas card list from the database.')
    print('"ADD" - Add nice recipients\' information to the Christmas card list.')
    print('"SEARCH" - Search the database for information.')
    print('"DELETE" - Remove naughty recipients from the Christmas card list.')
    print('"EXIT" - Exit the program without any printing or changes to the database.')
    print('')
    mainSelection = input('Please enter your selection: ')
    mainSelection = mainSelection.lower()
    print(type(mainSelection), mainSelection)

    # Selection return value
    while mainSelection != None:
        if mainSelection == 'print':
            print('|| Will print out the Xmas Card List ||')
            return mainSelection
        elif mainSelection == 'add':
            print('|| Will prompt user to add information to the DB ||')
            return mainSelection
        elif mainSelection == 'search':
            print('|| Will prompt user to search the information in the DB ||')
            return mainSelection
        elif mainSelection == 'delete':
            print('|| Will prompt the user to delete recipients from the DB ||')
            return mainSelection
        elif mainSelection == 'exit':
            print('|| Will exit the user from the program with no changes ||')
            return mainSelection
        elif mainSelection == 'drop table':
            print('|| Will call the XmasTableDrop function ||')
            return mainSelection
        else:
            print('')
            print('Input Error: Please enter a valid selection above!')
            print('Try again...')
            print('')
            print(type(mainSelection), mainSelection)
            break
    XmasMainMenu(mainSelection)
Program launch:
User input correct: the return value is correct.
1st user input invalid: the error message is received and the function starts over. 2nd user input valid: however, the returned value is None (this is what I need to fix but cannot figure out).
This is a typical new-programmer mistake: do not call a function from within itself unless you know what recursion is and how it works. When a called function returns, it returns to the function that called it. What happens is:
XmasMainMenu is called from the top-level script.
...user enters an incorrect value...
XmasMainMenu calls XmasMainMenu again.
...user enters a correct value....
XmasMainMenu (2nd call) returns to XmasMainMenu (1st call)
The return value is not assigned to anything, so it is lost.
Execution is now at the end of the function with no return statement, so the function returns the default None.
The top-level script receives None.
Instead, wrap your code in a while True: and break when you get a correct value, then return that (pseudo-code):
def XmasMainMenu():
    while True:
        print menu  # this could be outside the while if you don't want to re-print the menu
        selection = input()
        if selection valid:
            break  # exit while loop
        else:
            prompt user to enter valid input
    return selection
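A concrete, runnable version of the pseudo-code above might look like this; the input function is injected as a parameter so the loop is easy to test, and the valid-choice set is taken from the question's menu:

```python
VALID_CHOICES = {'print', 'add', 'search', 'delete', 'exit', 'drop table'}

def xmas_main_menu(read=input):
    """Loop until a valid selection is entered, then return it."""
    while True:
        selection = read('Please enter your selection: ').lower()
        if selection in VALID_CHOICES:
            return selection
        print('Input Error: Please enter a valid selection above!')
```

Because the loop only ever exits through the return statement, the caller always receives the validated selection, never None.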
I am using an API to pull data from a URL, but the API has a pagination limit. It goes like:
Page (default is 1; it's the page number you want to retrieve)
Per_page (default is 100; it's the maximum number of results returned in the response (max = 500))
I have a script that can get the results of a single page (or a given per_page count), but I want to automate it: loop through all the pages at per_page=500 and load everything into a JSON file.
Here is my code that can get 500 results per_page:
import json, pprint
import requests

url = "https://my_api.com/v1/users?per_page=500"
header = {"Authorization": "Bearer <my_api_token>"}

s = requests.Session()
s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
resp = s.get(url, headers=header, verify=False)
raw = resp.json()

for x in raw:
    print(x)
The output is 500 results, but is there a way to keep going and pull results starting from where it left off? Or even go page by page and get all the data per page until a page comes back empty?
It would be helpful if you presented a sample response from your API.
If the API is equipped properly, there will be a next property in a given response that leads you to the next page.
You can then keep calling the API with the link given in next, recursively. On the last page, there will be no next entry in the Link header.
resp.links["next"]["url"] will give you the URL of the next page.
For example, the GitHub API has next, last, first, and prev properties.
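The link-following pattern itself doesn't depend on requests; here is a minimal sketch with the HTTP call injected as a function, where each fake page mirrors the shape of resp.links (some data plus an optional next URL). The page structure and URLs are illustrative assumptions, not the question's actual API:

```python
def follow_next_links(get_page, start_url):
    """Collect 'data' from every page by chasing 'next' links until none remain."""
    results, url = [], start_url
    while url:
        page = get_page(url)
        results.extend(page["data"])
        # Mirrors resp.links["next"]["url"]: absent on the last page
        url = page["links"].get("next", {}).get("url")
    return results
```

With requests, get_page would perform s.get(url) and expose resp.links alongside resp.json().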
To put it into code, first, you need to turn your code into functions.
Given that there is a maximum of 500 results per page, it implies you are extracting a list of data of some sort from the API. Often, these data are returned in a list somewhere inside raw.
For now, let's assume you want to extract all elements inside a list at raw.get('data').
import requests

header = {"Authorization": "Bearer <my_api_token>"}
results_per_page = 500


def compose_url():
    return (
        "https://my_api.com/v1/users"
        + "?per_page="
        + str(results_per_page)
        + "&page_number="
        + "1"
    )


def get_result(url=None):
    if url is None:
        url_get = compose_url()
    else:
        url_get = url
    s = requests.Session()
    s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
    resp = s.get(url_get, headers=header, verify=False)
    # You may also want to check the status code
    if resp.status_code != 200:
        raise Exception(resp.status_code)
    raw = resp.json()  # of type dict
    data = raw.get("data")  # of type list
    if "next" not in resp.links:
        # We are at the last page, return data
        return data
    # Otherwise, recursively get results from the next url
    return data + get_result(resp.links["next"]["url"])  # concat lists


def main():
    # Driver function
    data = get_result()
    # Then you can print the data or save it to a file


if __name__ == "__main__":
    # Now run the driver function
    main()
However, if there isn't a proper Link header, I see 2 solutions:
(1) recursion and (2) loop.
I'll demonstrate recursion.
As you have mentioned, when there is pagination in API responses, i.e. when there is a limit of maximum number of results per page, there is often a query parameter called page number or start index of some sort to indicate which "page" you are querying, so we'll utilize the page_number parameter in the code.
The logic is:
Given an HTTP response, if it contains fewer than 500 results, there are no more pages. Return the results.
If a response contains exactly 500 results, there is probably another page, so advance page_number by 1, recurse (by calling the function itself), and concatenate with the previous results.
import requests

header = {"Authorization": "Bearer <my_api_token>"}
results_per_page = 500


def compose_url(results_per_page, current_page_number):
    return (
        "https://my_api.com/v1/users"
        + "?per_page="
        + str(results_per_page)
        + "&page_number="
        + str(current_page_number)
    )


def get_result(current_page_number):
    s = requests.Session()
    s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
    url = compose_url(results_per_page, current_page_number)
    resp = s.get(url, headers=header, verify=False)
    # You may also want to check the status code
    if resp.status_code != 200:
        raise Exception(resp.status_code)
    raw = resp.json()  # of type dict
    data = raw.get("data")  # of type list
    # If the length of data is smaller than results_per_page (500),
    # there are no more pages
    if len(data) < results_per_page:
        return data
    # Otherwise, advance the page number and recurse
    return data + get_result(current_page_number + 1)  # concat lists


def main():
    # Driver function
    data = get_result(1)
    # Then you can print the data or save it to a file


if __name__ == "__main__":
    # Now run the driver function
    main()
If you truly want to store the raw responses, you can. However, you'll still need to check the number of results in a given response. The logic is similar. If a given raw contains 500 results, it means there is probably another page. We advance the page number by 1 and do a recursion.
Let's still assume raw.get('data') is the list whose length is the number of results.
Because JSON/dictionary files cannot be simply concatenated, you can store raw (which is a dictionary) of each page into a list of raws. You can then parse and synthesize the data in whatever way you want.
Use the following get_result function:
def get_result(current_page_number):
    s = requests.Session()
    s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
    url = compose_url(results_per_page, current_page_number)
    resp = s.get(url, headers=header, verify=False)
    # You may also want to check the status code
    if resp.status_code != 200:
        raise Exception(resp.status_code)
    raw = resp.json()  # of type dict
    data = raw.get("data")  # of type list
    if len(data) == results_per_page:
        return [raw] + get_result(current_page_number + 1)  # concat lists
    return [raw]  # convert raw into a list object on the fly
As for the loop method, the logic is similar to recursion. Essentially, you will call the get_result() function a number of times, collect the results, and break early when the furthest page contains less than 500 results.
If you know the total number of results in advance, you can simply run the loop a predetermined number of times.
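A sketch of that loop, with the page fetch injected as a function so the stopping logic stands alone (the names here are illustrative, not from the question's API):

```python
def get_all_results(fetch_page, results_per_page=500):
    """Fetch consecutive pages until a page comes back short."""
    data, page_number = [], 1
    while True:
        batch = fetch_page(page_number)
        data.extend(batch)
        if len(batch) < results_per_page:  # short page: no more results
            return data
        page_number += 1
```

In practice, fetch_page would wrap the s.get call from the recursive example and return raw.get("data") for that page number.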
Do you follow? Do you have any further questions?
(I'm a little confused by what you mean by "load it into a JSON file". Do you mean saving the final raw results into a JSON file? Or are you referring to the .json() method in resp.json()? In that case, you don't need import json to call resp.json(); the .json() method on resp is part of the requests module.)
As a bonus, you could make your HTTP requests asynchronous, but that is slightly beyond the scope of your original question.
P.s. I'm happy to learn what other, perhaps more elegant, solutions people use.
I need some help. I'm trying to update the etcprice label value after I push the button, and then every 5 seconds. It works in the terminal, but not in the tk window. I'm stuck here :( please help me.
I tried setting price up as a StringVar(), but in that case I got a lot of errors.
Many thanks.
import urllib.request
from urllib.request import *
import json
import six
from tkinter import *
import tkinter as tk
import threading

price = '0'


def timer():
    threading.Timer(5.0, timer).start()
    currentPrice()


def currentPrice():
    url = 'https://api.cryptowat.ch/markets/bitfinex/ethusd/price'
    json_obj = urllib.request.urlopen(url)
    data = json.load(json_obj)
    for item, v in six.iteritems(data['result']):
        # print("ETC: $", v)
        price = str(v)
        # print(type(etcar))
        print(price)
    return price


def windows():
    root = Tk()
    root.geometry("500x200")
    kryptoname = Label(root, text="ETC price: ")
    kryptoname.grid(column=0, row=0)
    etcprice = Label(root, textvariable=price)
    etcprice.grid(column=1, row=0)
    updatebtn = Button(root, text="update", command=timer)
    updatebtn.grid(column=0, row=1)
    root.mainloop()


windows()
The solution was: I created a new StringVar called "b" and changed the etcprice label's textvariable to it.
After I added b.set(price) in currentPrice(), it is working.
The price variable is a global - if you're trying to change it, you need to do so explicitly:
def currentPrice():
    global price
    url = 'https://api.cryptowat.ch/markets/bitfinex/ethusd/price'
    json_obj = urllib.request.urlopen(url)
    data = json.load(json_obj)
    for item, v in six.iteritems(data['result']):
        # print("ETC: $", v)
        price = str(v)
        # print(type(etcar))
        print(price)
    return price
otherwise, Python will 'mirror' it as a local variable inside the function, and not modify the global.
It's not a good idea to keep on launching more and more threads each time you click the button - so:
updatebtn = Button(root, text="update", command=currentPrice)
probably makes more sense.
You don't need threads here just to call functions 'in the background'. You can use tkinter's own .after method instead to delay calling functions (note that it takes milliseconds, not float seconds).
def timer(delay_ms, root, func):
    func()
    root.after(delay_ms, timer, delay_ms, root, func)
might be a helpful kind of function.
Then, before you launch your mainloop, or whenever you want the updates to start, call it once:
timer(5000, root, currentPrice)
If you want the currentPrice function to run in a separate thread, and so not block your main GUI thread if there is network lag, for instance, then you can use threads more like this:
Thread(target=currentPrice, daemon=True).start()
which will run it in a daemon-thread - which will automatically get killed if you close the program, or ctrl-c it, or whatever. So you could put that line in a getCurrentPriceBG or similar function.
I'm new to Groovy and coding in general, but I've come a long way in a very short amount of time. I'm currently working in Confluence to create a tracking tool, which connects to a MySql Database. We've had some great success with this, but have hit a wall with using Groovy and the Run Macro.
Currently, we can use Groovy to populate fields within the Run Macro, which really works well for drop down options, example:
{groovy:output=wiki}
import com.atlassian.renderer.v2.RenderMode

def renderMode = RenderMode.suppress(RenderMode.F_FIRST_PARA)
def getSql = "select * from table where x = y"
def getMacro = "{sql-query:datasource=testdb|table=false} ${getSql} {sql-query}"
def get = subRenderer.render(getMacro, context, renderMode)
def runMacro = """
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters=true}
{sql:datasource=testdb|table=false|p1=\$name|p2=\$type}
insert into table1 (name, type) values (?, ?)
{sql}
{run}
"""
out.println runMacro
{groovy}
We've also been able to use Groovy within the Run Macro, example:
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters=true}
{groovy}
def checkSql = "{select * from table where name = '\$name' and type = '\$type'}"
def checkMacro = "{sql-query:datasource=testdb|table=false} ${checkSql} {sql-query}"
def check = subRenderer.render(checkMacro, context, renderMode)
if (check == "") {
    println("This information does not exist.")
} else {
    println(checkMacro)
}
{groovy}
{run}
However, we can't seem to get both scenarios to work together, Groovy inside of a Run Macro inside of Groovy.
We need to be able to get the variables out of the Run Macro form so that we can perform other functions, like checking the DB for duplicates before inserting data.
My first thought is to bypass the Run Macro and create a simple form in Groovy, but I haven't had much luck finding good examples. Can anyone steer me in the right direction for creating a simple form in Groovy that would replace the Run Macro? Or do you have suggestions on how to get the rendered variables out of the Run Macro?